What Can We Learn From Quibi’s Failure
What Can We Learn From Quibi’s Failure Prevent these mistakes to succeed in your startup With a new craze of binge-watching, online streaming services are booming. Yet Quibi, the could-be streaming-video giant, pulled the plug on its service on October 21, 2020, within just six months of launch. Its main USP was short entertainment videos under 10 minutes. Given our busy schedules, the theme looked promising. Moreover, an A-list cast featuring Jennifer Lopez, Chrissy Teigen, Sophie Turner, Liam Hemsworth, and others made it look like a fairy-tale venture. Even media giants like Disney, Sony Pictures, MGM, 21st Century Fox, and Warner Bros. helped Quibi raise $1.75 billion in funding to commence the startup. Despite this strong start, what went wrong? Did cut-throat competition kill Quibi, or did it sabotage itself? When Fortune magazine asked Quibi’s management about the failure, they replied, “Our failure was not for lack of trying; we’ve considered and exhausted every option available to us.” To an extent, their answer is agreeable, but what exactly went wrong? Could they have saved the business, or was it predestined to fail? Though the founders blame the COVID pandemic as a significant reason for the debacle, that’s hard to accept, because the facts mentioned in #6 prove otherwise. Let’s find out their mistakes and how they could have been prevented.
https://medium.com/the-partnered-pen/what-can-we-learn-from-quibi-failure-586b42f7450c
['Darshak Rana']
2020-11-12 21:16:12.045000+00:00
['Business Strategy', 'Business', 'Marketing', 'Technology', 'Social Media']
How Africa Might Change The World
How Africa Might Change The World It’s both a question and a statement Photo by Doug Linstedt on Unsplash As of 2018, the U.N. goal of solving energy poverty in Africa by 2030 is hardly on track to be met. The goal is to provide electricity, by the least-cost method possible, to the 600 million people in Africa who don’t currently have access to it. The major barriers to this goal are inertia, aversion to change, corruption, political ineptitude, and a lack of knowledge about the energy sector among political and community leaders. Border, religious, and tribal conflicts don’t help either. Some nations may need to cooperate in the future if a solution is to be found. The debate over how to hurdle the current barriers revolves around who should pay for the electrification of Africa and what type of energy should be used. Then there are the ongoing payments to consider, given the levels of poverty in many African nations. Finally, Africa will ultimately need a solution for transport systems that currently rely on diesel fuel. Consider electrification. Fuel poverty in Africa has massive implications. It means that many families cook with biomass, often in poorly ventilated accommodation, and burn kerosene lamps that also emit indoor pollution. If they can afford it, they use diesel generators to power electrical equipment. All of these have health and financial impacts on their quality of life. Notice that both diesel and kerosene are oil-based products. There are three ways to solve the African fuel-poverty crisis. Each African nation could simply extend its national grid into rural areas. They could build many smart mini-grids consisting of power generation such as solar panels, battery storage, automated computer control, and transmission capacity; these are ideal for households and businesses off grid, or they can be integrated with a national grid. They could also invest in standalone systems, however big or small, perhaps to meet individual needs.
These could be upgraded and integrated with a smart grid at a later date. These three methods all have a part to play; the question, area by area, is which one is the least costly. The International Energy Agency estimated in 2017 that smart mini-grids could serve 219 million people by 2030 as the least-cost option, although all three choices mentioned earlier have a role. Smart mini-grids are self-sufficient electricity grids that produce their own power, usually from micro renewable sources such as solar, with battery storage and the ability to transmit and share that power locally. One household can be generating energy while a neighbour utilizes that power when there is demand. Alternatively, unused energy can be stored to be used at night if the system is large enough. This can do away with the need to use diesel generators in homes or eliminate the need to cook with biomass. In other words, it is cheaper to put in smart mini-grids than it is to extend the national grid into a particular area. Also, it can be achieved at such a low cost that governments could easily persuade utility companies to help fund it. As the ultimate cost saving, mini-grids can later be connected to extensions of the national grid that may occur as the country develops economically.
https://medium.com/the-innovation/how-africa-might-change-the-world-81643f47780e
['Mamun Ju']
2020-12-28 16:02:34.482000+00:00
['Nature', 'Politics', 'Climate Change', 'Energy', 'Environment']
Two Reasons Why You Should Market Your Services When You’re Drowning in Work
Two Reasons Why You Should Market Your Services When You’re Drowning in Work …and never in famine periods Photo by Med Badr Chemmaoui on Unsplash Picture the scene: Business is going well; in fact, it’s going superbly! You’re drowning in work, staying up late to finish all your projects and running on caffeine. The last thing on your mind right now is marketing your services. After all, you have clients to look after and projects to finish! There’s no second to waste; marketing can wait! Guess what? Wrong! According to a 2019 Australian Freelance Market survey, a shocking 40.81% of freelancers reported feeling stressed about marketing themselves, and a quarter (25.09%) said they’re worried about the feast and famine cycle. I would expect that once the marketing issue is addressed, the feast and famine problem will decrease significantly. Let’s take a look at the two main reasons why marketing should remain a priority even if you’re up to your ears in work.
https://medium.com/the-innovation/why-you-should-market-your-services-when-youre-drowning-in-work-7e8b01026ae2
['Kahli Bree Adams']
2020-08-12 17:38:56.188000+00:00
['Freelancing', 'Business Strategy', 'Business', 'Marketing', 'Small Business']
How well did The Jetsons really predict our future?
When I was about 8 to 10 years old, my favorite TV show by far was The Jetsons. Maybe it was an early sign of my passion for technology. And although I only got to see it in Brazil around 1994, this American animated sitcom first aired in the U.S. in 1962 (with second and third seasons airing much later, in 1985 and 1987). It is amazing how they managed to predict so many things that are only starting to take place right now. From left to right: Judy, Astro, George, Elroy and Jane Jetson, and the robot maid Rosie. For those of you who don’t really remember (or weren’t even born when it was a hit), the show was about a family living in the future, in a town called Orbit City, using flying cars and living and working in suspended buildings. They also used teleconferences and smartwatches, and were surrounded by all sorts of robots, like their house helper Rosie (a ‘typical’ maid) and Didi, the voice assistant to the family’s teenage girl Judy, who referred to it as her “virtual diary”. Every time I hear about an upcoming voice assistant, or a company developing a new flying-car model, I am impressed by how far producers Hanna-Barbera managed to stretch their imagination. It also confirms the well-known phrase attributed to Walt Disney that “if we can dream it, we can do it”, right? But how well did The Jetsons really predict our future? Apart from the amazing wonders of tech, the show depicts an all-white family living in a world with absolutely no diversity. The opening of the sitcom shows wife Jane taking money from her husband George’s wallet (yes, paper money, which she has to ask for) because she takes care of the house while he goes to work in person, with other men. The only upside in this future is that George manages to go home from work and disconnect (kudos!) to enjoy time with the whole family while they actually talk to each other (here’s hoping!).
Back in 1962, the producers imagined a future with no sharing economy, no digital money, no gender equality, and no diversity whatsoever. In the Jetsons’ world, although people are rid of most daily house chores and travel differently, the means of consumption, the education system and the job market follow the same models of hierarchy as the decade in which they were written. Maybe it’s just easier to predict technology than society! Anyway, I’m really happy the writers were WAY OFF in their guessing and that we have been redesigning our society’s models and values… a small step at a time.
https://medium.com/future-vision/how-well-did-the-jetsons-really-predict-our-future-6937b208dba5
['Larissa Rosa']
2019-05-08 18:08:19.640000+00:00
['Tech', 'Future', 'Diversity', 'Gender Equality', 'Jetsons']
AWS Serverless Deployment — 101. AWS Lambda is the compute service under…
AWS Lambda is the compute service under the AWS Serverless ecosystem. To develop and deploy a Lambda function along with the infrastructure it needs, one tends to use a development framework. Two of the most popular frameworks used for this purpose are the AWS Serverless Application Model (SAM) and the Serverless Framework. Assuming you are using AWS SAM, initializing a greenfield Hello World Lambda function with the SAM CLI (the tool that operates on an AWS SAM template and application code) is a three-step process: Step 1 > sam init In this step, you get the option to select the runtime for the Lambda function, provide a service name, and set a few other options depending on the runtime selected. Once done, you get a folder structure with basic resources created. sam init console output The Hello World function inside the template file looks as below: Hello World Resource in Template File Step 2 > sam package --s3-bucket <bucket name> In this step, the application code and dependencies are bundled into a deployment package and pushed to an S3 bucket. The command also generates a YAML template file as output. sam package console output Step 3 > sam deploy --template-file package.yaml --stack-name hello-world --capabilities CAPABILITY_IAM In this step, the Lambda function and other infrastructure resources get deployed via the SAM template, which is converted to a CloudFormation stack. sam deploy console output Every time the sam deploy command is issued, the SAM template is translated to a CloudFormation template, and it eventually publishes a new version of the Hello World function. The unqualified reference pointing at the newest code is $LATEST. Hello World Lambda Function in AWS Console With this deployment, any new request coming for the Hello World function is handled by the latest version of the Lambda, and all existing running instances of the Hello World function get terminated. Cool!!! But what if the new function has a bug that was not caught during unit and integration testing?
All your traffic will be impacted, and user experience will go for a toss 😩. We can handle such issues by following the Blue/Green deployment strategy. It helps increase the availability of the system and reduce the risk associated with deployment. In this strategy, we can have two versions of the function running at the same time, and traffic can be diverted to the latest version in an incremental fashion. AWS SAM supports multiple traffic-dialing strategies out of the box: Canary, where traffic is shifted in two increments from the old version to the new version, and Linear, where traffic is shifted from the old version to the new version in equal increments with an equal number of minutes between each increment. To use the Blue/Green option, the SAM template has two attributes which need to be configured on the Lambda function. AutoPublishAlias: AWS SAM manages the traffic distribution by creating an alias with the name specified, and assigns the percentage of traffic to be distributed between the new version and the old version depending on the deployment preference type mentioned below. Deployment Preference Type: dictates how the distribution of traffic between the new and old versions is to be handled. Several pre-configured deployment preference types are available out of the box: Canary10Percent30Minutes, Canary10Percent5Minutes, Canary10Percent10Minutes, Canary10Percent15Minutes, Linear10PercentEvery10Minutes, Linear10PercentEvery1Minute, Linear10PercentEvery2Minutes, Linear10PercentEvery3Minutes. Let’s say we want to route 10% of the production traffic incrementally to the new Lambda version every 1 minute. To configure this, the Hello World Lambda function will look something as below: Hello World Lambda with Blue/Green Deployment Strategy When the sam deploy command is issued with the above changes, AWS SAM internally configures an AWS CodeDeploy application (CodeDeploy is the AWS deployment service that manages traffic shifting for the Blue/Green pattern), which manages the traffic dialing.
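Since the original template screenshot did not survive, here is a minimal sketch of what the Blue/Green configuration looks like in the SAM template. The AutoPublishAlias and DeploymentPreference properties are real SAM attributes; the code URI, handler, runtime, and alias name are illustrative assumptions.

```yaml
HelloWorldFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: hello-world/          # illustrative path
    Handler: app.lambda_handler    # illustrative handler
    Runtime: python3.9             # illustrative runtime
    # Publish a new version on each deploy and point a 'live' alias at it
    AutoPublishAlias: live
    DeploymentPreference:
      # Shift 10% of traffic to the new version every minute via CodeDeploy
      Type: Linear10PercentEvery1Minute
```

With this in place, each sam deploy creates a new version and lets CodeDeploy dial traffic from the old version to the new one linearly.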
This is how it looks in the AWS CodeDeploy console when the deploy command is issued: AWS CodeDeploy Console with Linear Blue/Green Deployment You can also configure CloudWatch alarms which, if triggered, roll back the deployment. In part 2 of this blog, we will implement an actual serverless microservice with Lambda, configured with an end-to-end CI/CD DevOps process using AWS developer tools: AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy and AWS CodePipeline. Till then, Cheers!!!
https://waswani.medium.com/aws-serverless-deployment-101-b8917bae137f
['Naresh Waswani']
2020-08-16 17:12:53.078000+00:00
['Blue Green Deployment', 'AWS Lambda', 'Serverless Framework', 'Deployment Model', 'AWS']
Routing in React without React-Router
Using React’s Context API and hooks to perform routing. I have been using React JS in my projects for quite some time now and am used to managing routing in my apps with the react-router package. I have always been keen on having as few dependencies in my apps as possible, so I always felt perturbed by the use of this particular package in simpler apps which did not have complex routes. Recently, while working on a project, I decided to use a custom solution to handle the routing, using the Context API and hooks. I will go on to describe the solution with some code samples. In another post, I will describe how I integrated this solution with Firebase Authentication. First we need to create a navigation provider; its navigate method will be used to navigate from one route to another from within the routes. We first create a context using React.createContext(). Then, we create a provider, NavigationProvider, with the help of the context we just created. The render function returns a Provider encompassing its children. The navigate method uses the history object’s pushState method to create a new history entry. The Route component returns the component (screen) that is its child if the URL matches the name of the route (the value of the ‘href’ attribute); else, it returns null. So, only the screen whose route name matches the URL is rendered. Below is an example that demonstrates all of the above. Now, you only need to wrap the screen components (or page components, as some like to call them) with the corresponding Route component. For example, <Route href="Home"> <Home></Home> </Route> Assuming you run this locally, localhost:3000/Home would render the Home component. To navigate from within a component, just grab the navigate function from the NavigationContext that we created previously, using the useContext hook provided by React.
Like this: var navObj = useContext(NavigationContext) navObj contains the navigate function, which can be used to navigate to a different route. For example, to navigate to the “Login” screen: navObj.navigate('/Login') As simple as that! I hope this helped you if you wanted to try something different for your routing needs. Part 2 of this guide will show how this can work with Firebase Authentication to secure private routes. Thank you for your time.
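The original post’s code samples did not survive extraction, so here is a framework-free sketch of the matching and navigation logic described above. The createNavigation helper and the plain array standing in for window.history are assumptions made so the logic can run outside React; in the real solution these live in the NavigationProvider and the browser.

```javascript
// Stand-in for the NavigationProvider's state: tracks the current path
// and records navigations the way history.pushState would.
function createNavigation(initialPath) {
  let current = initialPath;
  const entries = [initialPath];
  return {
    navigate(path) {       // the provider's navigate method
      entries.push(path);  // stands in for history.pushState
      current = path;
    },
    get path() { return current; },
    get entries() { return [...entries]; },
  };
}

// Stand-in for the Route component: return the child component only
// when the current path matches the route's href, else null.
function route(href, currentPath, component) {
  return currentPath.replace(/^\//, "") === href ? component : null;
}
```

For example, with the navigation at "/Home", route("Home", nav.path, HomeScreen) returns HomeScreen while route("Login", …) returns null, which is exactly the render-or-null behaviour the Route component relies on.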
https://medium.com/front-end-weekly/routing-in-react-without-react-router-36d16e2baa23
[]
2020-09-14 17:04:24.945000+00:00
['React Hook', 'React', 'Reactjs', 'JavaScript', 'Javascript Development']
I created Pac-Man in C#/.NET Console
Scene Now, with a working base for rendering and controls, it was time to define the game map with the player, objects and ghosts. I decided that the map should be easily edited, and with that in mind I implemented a Scene class responsible for loading a .txt map file and storing the game state: the current 2D grid, positions and entities. Game Constants After defining some game constants, our base map.txt looks like this:
?????????????????????
?...?...........?...?
?.?.?.?.?????.?.?.?.?
?.?...?...?...?...?.?
???.?????.?.?????.???
?.?...?.......?...?.?
?.?.?.?.?? ??.?.?.?.?
?...?...?@@@?...?...?
?.?.?.?.?????.?.?.?.?
?.?...?.......?...?.?
???.?????.?.?????.???
?.?...?...?...?...?.?
?.?.?.?.?????.?.?.?.?
?...?.....C.....?...?
?????????????????????
Of course, some implementations were still missing, like game fruits and other logic, but for this step it was good enough. So far the logic was still basic: load the text file and set each position of our grid (2D array) to the respective character in the text. Then it was time to implement something a little more complex. Let’s think about the game movements, focusing on three objects: the player, a score pellet and a ghost. When the player moves onto a score position and then exits it, what character are we going to set in that position? It might look obvious, an empty space, and for this case it works. But what if a ghost moves onto a score position and then leaves it? A general approach wouldn’t solve this; instead we need to store which object was in that position before. So we create some ‘layers’ in each position: an ordered list / sequence of the objects occupying the same position. We display only the top layer of each position, and when anything moves we just move that top layer to the new position, and automatically the previous position will display the correct character.
For this ‘layers’ approach, the LinkedList is the perfect choice. I’ll not write about this data structure in this post, but I strongly recommend that you read about it if you’re not familiar with it. Our 2D char grid then became a grid of LinkedLists, and I created a base class (EntityBase) for every entity in the map. The map-loading sequence was the following:
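The layer mechanics described above can be sketched in a few lines. This is a language-agnostic illustration in JavaScript, not the article’s C# code: plain arrays used as stacks stand in for the grid of LinkedList<EntityBase>, and the makeCell / moveTopLayer helpers are names invented for this sketch.

```javascript
// Each grid cell is a stack of entities, bottom to top;
// only the top layer gets rendered.
function makeCell(...entities) { return [...entities]; }
const topLayer = (cell) => cell[cell.length - 1];

// Moving an entity = popping the top layer of one cell and pushing it
// onto another. Whatever was underneath (e.g. a score pellet under a
// ghost) automatically becomes visible again.
function moveTopLayer(from, to) {
  to.push(from.pop());
}
```

So when a ghost passes over a pellet, the pellet is never overwritten; it is simply hidden one layer down and reappears as soon as the ghost moves on.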
https://fabio-stein.medium.com/i-created-pac-man-in-c-net-console-88d7648a098a
['Fabio Stein']
2020-12-08 19:29:36.274000+00:00
['Technology', 'Game Development', 'Coding', 'Dotnet', 'Csharp']
How to Handle Errors Gracefully in Our Apps
Photo by Johann Siemens on Unsplash Handling errors gracefully in our apps ensures that failures will be noticed quickly, so users or developers can fix the issues. In this article, we’ll look at how to handle errors in a graceful way to make fixing them easier. Crash Early Our programs should crash early so that we can detect problems as soon as possible. For instance, instead of letting a program continue to write invalid data to a database, we should check the data and throw an error before it gets written. This way, the user can re-enter the data correctly and then save it to the database. Once it’s in the database, we’ll have to go into the database to fix it, which is a lot harder. And if lots of people enter invalid data and we let them save it, then it’ll be pretty much impossible to fix quickly and it’ll be a nightmare for everyone. We can throw exceptions in most languages to raise an error; then the error will be obvious to us right away. Assertive Programming We should use assertions to check that something we don’t allow to happen can’t happen. For instance, if we don’t allow something to be null, then we should check for that before a program or function continues to run. This way, we won’t let invalid values propagate through our system. Leave Assertions Turned On Assertions add little overhead to code. Also, we shouldn’t assume that assertions are only triggered by bugs in the code; they can also be triggered by user error. Therefore, we should leave them on in any environment: turning them off won’t save us many resources, but it can cause us lots of problems later on. We shouldn’t have assertions that have side effects. This way, we know that what we’re checking is exactly what we intend to check. If our assertions cause side effects while running, they may do something we didn’t expect, and we’ll run into problems even when the checks themselves pass.
When to Use Exceptions Exceptions should be used whenever they’re available. For example, a try...catch block is much cleaner than using if and else to check status codes returned by different functions. If those functions throw exceptions instead of returning error values, our code is much cleaner when we catch them and handle them gracefully. We should use them when an error may arise from the running code. For instance, if a function throws an exception when it checks whether a file exists, then we need to catch and handle that error. If the file must be there, we can catch the exception thrown when it’s not. If it’s optional, we may just check whether the file exists and run some code if it does. Programs that throw and catch exceptions everywhere suffer from readability and maintainability problems, as control jumps from one place to another whenever an exception is thrown. Also, encapsulation is broken, since components become more tightly coupled through exception handling. Error Handlers Are an Alternative Instead of throwing and catching exceptions everywhere, we can also write error-handler functions to deal with errors gracefully. We can use them in any language, since they’re just regular functions, and they can handle all the errors thrown in one central place. We can write one error handler and use it everywhere. Photo by Jacob Mejicanos on Unsplash Cleaning Up Resources Our programs should clean up after themselves when needed. For instance, if our program writes temporary files to the file system, then we should clean them up when the program is done. This also includes situations where our program ends because an error occurred; we have to make sure that resources are freed in those situations too. This way, we won’t have lots of unused temporary files lying around after our program runs multiple times. However, if a top-level structure of our program is responsible for cleaning up resources, then we should let it do that.
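The crash-early check and the central error handler described above can be combined in a small sketch. The function names parseAge and withErrorHandler are illustrative, not from the article:

```javascript
// Crash early: reject invalid data before it can propagate
// (e.g. before it ever reaches the database).
function parseAge(input) {
  const age = Number(input);
  if (!Number.isInteger(age) || age < 0) {
    throw new RangeError(`invalid age: ${input}`);
  }
  return age;
}

// One central error handler instead of try...catch at every call site:
// wrap any function so its errors flow to a single onError callback.
function withErrorHandler(fn, onError) {
  return (...args) => {
    try {
      return fn(...args);
    } catch (err) {
      return onError(err);
    }
  };
}
```

For example, withErrorHandler(parseAge, err => null) gives a version of parseAge that returns null for bad input instead of forcing every caller to write its own try...catch.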
If it doesn’t, then our code should do it. Conclusion Our programs should handle errors gracefully and clean up resources after they’re used. Also, errors should surface early so that everyone involved can act on them faster.
https://medium.com/dev-genius/how-to-handle-errors-gracefully-in-our-apps-31fbfd3f68c5
['John Au-Yeung']
2020-08-14 22:38:37.399000+00:00
['Programming', 'Software Engineering', 'Software Development', 'Web Development', 'Technology']
A Nation United
Will it really take a long time to re-unite the USA? You get what you wish for. Last I checked, the “U” in USA stands for United. I’ve noticed MAGA does not contain a “U” in it. Ok, so maybe MAGA means the movement wants “America” back, regardless of the “United States”. I don’t know, but I think it’s more complicated than that. Ev is right, it’s all a matter of priorities. Of all the things I’ve had trouble with in my life (especially my career), differences in priorities on the way to the same goal top the list. It’s pretty clear that the meaning (and value) of priority escapes most people. Very competent people often use urgency as a proxy for priority. As a result they often do things out of order. This expends a lot of energy. When the person is your manager, you get to partake in the fury. I prefer not to, but sometimes it’s unavoidable. I could discuss how to properly deal with things at the right priority and urgency to remain sane; maybe another time. I’d rather consider a nation united instead. In my opinion, making a united nation top priority for everyone is the essential first step. This starts with a seed, hopefully planted by a leader. It appears the seed has been planted. It may take a while, but if Trump can demonize just about anything by saying it enough times (and having it repeated in the echo chamber), then a United States isn’t too high a goal. I’ve read a number of headlines on Medium about the fall of the United States. I get where these people are coming from, but I don’t agree it’s impossible to come back from. If Steve Jobs could make a “dent in the universe” by enabling a connected world in his lifetime, I think we can re-integrate a natural concept like unity. Throughout history humans have cooperated to thrive and survive. It’s a major reason why we didn’t die out. Periodically, fighting breaks out and interrupts this behavior. This is actually abnormal. War is not natural to humans, yet in modern times it always exists.
It’s a rather simple matter: warmongers want the world to be as they believe it should be, instead of relying on principles to make the world what it actually should be. Often, semantics in thought make all the difference. Let’s take an example. It was long held that the Vietnam war wouldn’t end until somebody “won”. Eventually it did end and nobody won. How did it end if winning was the only thing that mattered? It was a shift from “war” to “peace”. The word peace (and love) replaced the word war. After a while, the focus shifted to peace, regardless of a government intent on continuing the war. The opposite of divided is united. When everyone includes “united” in conversations, division will diminish. It seems clear that the populace wants unity, but leaders aren’t willing to embrace it yet. Dogma feeds division. Rejection of dogma is based in open-mindedness, i.e., the willingness to listen to and seriously consider alternate viewpoints. Division is also fed by absolutes. Absolutes are very enticing. They allow us to follow the natural tendency to be a “lazy thinker”. Everyone is a lazy thinker; it’s part of being human. However, when big problems arise, it’s necessary to think about them and resolve the matter equitably. The biggest problems today are based in the inability of leaders to move off an absolute. This is a decision. There are many reasons for it. The solution is courage. The courage to keep the United States of America intact and trust the methods that make it possible to do so. It is amazing how the structure of the US is able to take on a frontal assault on democracy and absorb it. People get crazy and scared. But the system works. It’s fortunate that States have as much power for governance as they do; it helps the country survive a divided populace. Having States allows the populace to have a home and feel safe with like-minded people. This is only possible if the USA remains the USA.
Supporting division, i.e., “my way or the highway” at the expense of unity, is equivalent to sawing off a tree branch while sitting on the wrong side of it. It’s amusing to watch, but often deadly. When I was a kid, horsing around in a dangerous manner resulted in an intervention by an adult. Usually something like, “Stop it! You’ll put someone’s eye out!” (I actually know someone who did lose an eye while horsing around.) We eventually learned how to horse around safely. Horsing around with the United in USA is probably something we should stop doing. Since we’re all adults, it’s hard for an adult to intervene. Maybe not. It seems the child egging on bad behavior has been replaced by an adult. We’ll see if the lesson is (re)learned.
https://medium.com/out-of-your-mind/a-nation-united-1a4c7035980f
['Joe Bologna']
2020-11-08 19:22:22.937000+00:00
['Society']
Kaggle User Survey Dashboard— 2019
A dashboard made using R, Flexdashboard, and the Highcharter library to analyze Kaggle’s user survey conducted in 2019. As we bid adieu to 2019, I wanted to explore and analyze the state of the data science ecosystem, especially the participation of women in data science and the perennial R vs Python debate. That’s when I stumbled upon a survey conducted by Kaggle in October 2019, which they’ve made available here. This is one of the biggest surveys of its kind, with 19,717 responses. In this dashboard, I’ve attempted to analyze two major themes: Women in STEM, and R vs Python for data science. This blog post is broken down into three parts: Home, Women in STEM, and R vs Python. I will cover the key findings here.
https://towardsdatascience.com/kaggle-user-survey-2019-326e187ff207
['Akshaj Verma']
2020-06-30 14:52:44.271000+00:00
['Dashboard', 'Analytics', 'Rstats', 'Data Science', 'Data Visualization']
Preparing for a neuroscience PhD
Preparing for a neuroscience PhD You don’t need a neuroscience background, and it might even help if you don’t. You do, however, need research experience. Wrangle your studies before Catwoman steals your books. (Photo: Ashley Juavinett) You’ve given it some thought, and a neuroscience PhD could be in your future. So, what do you need to get there? Undergraduate degrees before the PhD In theory, you could come from almost any academic background before coming into neuroscience, as long as you also have research experience (see below). According to a recent Society for Neuroscience report, most students who matriculate into Ph.D. programs in the U.S. have degrees in neuroscience, biology, or psychology. However, a huge chunk of applicants also come from chemistry and mathematics, and many students come in with dual degrees (see SfN 2016 for more details). I was curious about the broader population of neuroscientists beyond those currently enrolled in PhD programs, so I conducted a Twitter survey: Many, many people had write-in responses. When we pull together all of the responses (n=950), here’s what the breakdown looks like: There were a host of other undergraduate degrees ranging from biotechnology to animal behavior to economics (okay, isn’t that just very advanced animal behavior?) with one or two folks claiming them. They’re not included in the graph for simplicity, but the full dataset is here. Not surprisingly, the answers for computational neuroscientists were slightly different. Among the list, there were a few fun answers. Neuroscientist and Journal of Neuroscience Editor-in-Chief Marina Piciotto was a Biology & English major. Famed neuroscientist David Eagleman also had a surprising answer: British & American Literature. “No joke,” he added. In summary, neuroscientists come from a wide range of backgrounds, and you shouldn’t feel like there’s only one path into a career of neuroscience. 
Neuroscience is a wonderfully diverse field that touches on almost every other discipline — after all, it is fundamentally the study of how brains (and their owners) interact with the world. Our field is better off with perspectives from every intellectual angle, as well as with people who have thought deeply about very specific subfields. How competitive is it? Regardless of your undergraduate major, you should be at the top of your game academically. Neuroscience programs in the U.S. receive anywhere between 5 and 875 program applicants — 170 on average. For the academic year 2016–2017, the average acceptance rate for U.S. PhD programs was 19%. Although there’s more applicants, most programs report that they’re accepting the same number of students, largely because of limited funding from training grants and space in faculty labs. So, it does seem to be getting more competitive, and it’s not clear that more positions for graduate students are going to be opening soon. What are admissions committees looking for? Most graduate programs will evaluate you on three main categories: your research experience, your GPA (grade point average from college), and your GRE scores (Graduate Record Examinations, a commonly required standardized test in the U.S.). The relative weight of those attributes will vary between school to school, and even depends on the members on the admissions committee. Some schools will set hard cutoffs for the numerical categories there, but they’re typically not disclosive about it. Applicants in 2016 had an average undergraduate GPA of 3.56, and average verbal as well as quantitative GRE scores of 158. Your best bet is to do as best as you can in those three categories. Of course, if you’re out of college, it’s hard to go back and change your GPA. If you have a low GPA and GRE scores, the best way to improve your chances of getting into graduate school is by spending some time working in a lab. 
Research experience before graduate school While there isn’t usually a strict requirement for research experience, a striking 98% of applicants to U.S. PhD programs have at least some previous experience working in a lab. Still, you should get some research experience for more than just getting into PhD programs — you should have research experience so that you have insight into whether or not you like doing research. Your research experience could be in one lab for a long time, or short stints in several labs. If you’re at a college that doesn’t have a ton of research, there are many summer research programs out there. I found it really informative to find research experiences beyond the brick walls of my small liberal arts school. Many summer programs are also specifically for underrepresented minorities, and most of them will pay you a stipend as well as cover room and board.
https://medium.com/the-spike/preparing-for-a-neuroscience-phd-6a0bb4ee5275
['Ashley Juavinett']
2020-10-23 18:39:10.661000+00:00
['Research', 'College', 'Education', 'Graduate School', 'Neuroscience']
oh Lorde, deliver me from Fucking Joan.
(this piece is an excerpt from a longer post over at my patreon. the comments and feedback over there were so phenomenal — go read them — that i thought this bit was good enough to post over here on medium. enjoy.) me & the dumpsters. i’ll never forget the moment, maybe eight or nine years ago, where I CAUGHT my brain red-handed in the vicious downward spiral of uselessly comparing myself directly to someone else. i was mulling about regina spektor, whose music i love, and who i’ve toured with, and i noticed that i was feeling mopey and insecure and jealous. goddamit, i thought. i wish i had what regina had: a clearer, more polished voice, better piano chops, i wish i had the support of a fancier manager and a huge powerful label who treated me nicely, i wish i could write my weird piano songs and have them accepted into mainstream speakers and hear my songs played on TV and in hip little cafes. i thought: her life must be somehow better. nobody bitches at regina, i thought. nobody attacks her on the internet, nobody hates her. i stopped myself in my tracks and thought a few things in quick succession. first was: amanda, you’re on crack. you have absolutely no fucking idea what regina’s life is really like. you don’t know fuck ALL. for all you know, regina hates her manager, can’t stand her label, and is lost in a world of pain. who knows if she’s fucking happy? (and for the record, even being pals and having toured with her, i didn’t feel like i could call her up at 3 am and say: “regina, hi! amanda here. i’m having a minor existential panic attack and only you can help me: so, super-super-quick, are you truly fulfilled?”) but it wasn’t just regina…it was EVERYBODY. i started noticing this more and more as i slowed down my head and examined my thought patterns. it was fiona apple. it was imogen heap. it was PJ Harvey. lady gaga. zoe keating. ani difranco. lorde. it was my peers and semi-peers.
the irony was hard to untangle: all the women whose music i loved and respected also made me shake myself and my past decisions with a disappointed frustration. that i hadn’t done it THAT WAY, that I didn’t have what THEY had. i was good friends with some of these women. that made it worse. important note: i only caught myself doing the comparison game with other women, never with my male colleagues, and I never compared myself to the women who were far away from me in fame or genre (Beyonce or Madonna really didn’t throw me into a panic — i would never make those choices, and i didn’t want to have a career as a dancing pop star). it was more what i’ve come to call “piano string theory”: two piano strings that resonate closely but not perfectly will almost sound more out of tune and grating than two strings that are further apart. we feel insecurity about what’s almost us, but not quite us. we’re not wildly jealous of the strangers across the globe living in totally different buildings with totally different cultures: we’re jealous of the fuckers right next door. like two pieces of translucent design making a moiré pattern, the almost-equal creates more noise than anything else. (why is that? theories welcome.) i was grateful for the revelation back then. not only did i notice that i was having these thoughts, but i (oh fuck) realized i’d been having them my whole life and not really acknowledging them. it also felt like a massive relief, even though it kind of scared the shit out of me that i’d spent my life caught in this hell. so i just paid attention and, instead of letting the insecurity grip me and shake me, i’d look at it head-on, cringe, and name it. “oh! hi. there’s that shitty feeling: you’re comparing yourself with your near-female peers. why you doing that, amanda? ain’t helping anyone. ain’t helping you, ain’t helping them. quit it. you know nothing.” and it worked.
i also started using this embarrassing mental trick in which i tried to flip the fictional script and i’d imagine fiona apple or regina spektor thinking about me and getting annoyed. i’d try to imagine fiona apple sitting on a chaise lounge surrounded by wonderful plants and colorful furniture and pretty clothes staring into the sinking L.A. sun and thinking: “why am i making this goddamn 12-song record? i should be more like amanda palmer and have a patreon, so I could do whatever I felt like whenever I felt like it and know I’d be supported and free from record company interference. i am terrible. fucking amanda palmer.” or i’d imagine PJ Harvey looking over the edge of a wind-swept cliff in dorset, clutching the leashes of a couple forlorn greyhounds, thinking “ughhhh…i wish I could give a TED talk about how irritating the bloody music industry is. fucking amanda palmer.” or tori amos sitting in a woodland mansion somewhere, surrounded by exotic birds, rare concert grand pianos and herbal teas, musing about whether she should have started a band with just a drummer, then taken off her clothes more often while learning to play the ukulele. i knew that these fictional scenarios were bonkers. but even the mild idea that these amazing, perfect-seeming women were potentially sitting there having similar fret-fits of insecurity took the edge off. i know fiona apple is not me. i am not fiona apple. i know i have no idea who she really is or what her life is like. nor she about mine. it’s apples and oranges. it’s apples and palmers. it’s palmers and gagas, difrancos and heaps. it’s heaps of fucking ridiculousness. and yet, it’s so real. this phenomenon isn’t really the fraud police, as I’ve come to name that basic sense of imposter syndrome that we all seem to feel. we need to call this….something else. it’s like “evil comparison syndrome” with a splash of Fear of Missing Out.
it’s the deep, creeping feeling that you made all the wrong decisions when compared to the decisions that someone else made. in the fifties they used to talk about “Keeping Up With The Joneses” (imagine quaint scrambles to Sears & Roebuck to make sure you’ve got the latest model of glamorous, turquoise electric cake-stand mixer), so perhaps we could call this “Keeping Up With Joan”. but not quite. it isn’t just “Keeping Up With Joan”…..it’s darker. Your Personal Joan (reach out, touch faith) could be living in a cave with no belongings and shaming you into grieving guilt about your materialistic greed. it’s more that you’re comparing her very existence with your own. and you know it’s your immature fault for having these shallow, superficial feelings, but YOU WOULDN’T BE HAVING THEM, goddamit, if Joan weren’t standing right there across the lawn. fucking Joan. we should just call this phenomenon Fucking Joan. there we go. Fucking JOAN. and you kind of hate Joan, and yet you can’t really, because you worship her and her perfection. to make things worse, since Joan is everything you wish you could be (compassionate! grounded! successful AND magnanimous!), it makes it absolutely impossible to feel any ill-will towards Joan. she’s amazing, obviously, SHE’S JOAN. you can’t want to kill her, she’s too amazing. so you just want to kill yourself. and yet! you know how bullshit this whole thing is. because obviously: Fucking Joan is fucking You. Fucking Joan is just a painful projection of your own insecurities. and there you go. all you can do is notice it. so next time you find yourself comparing yourself to the One Next To You (no matter the gender), remember that Fucking Joan is, while real, simply You Fucking Yourself Over. no matter how hard you hate yourself for not being the person next to you, you’ll never escape the fact that there is no Industrial Standard for Personhood — you’re the only one who creates your personal measuring-ruler. 
if you’re perpetually coming up short, it’s because you’ve set the standard so impossibly high that you will never, ever measure up, and knowing and understanding that your own ruler is broken is the only exit out of hell. insecurity takes time and energy, and lately, i’ve had no time. if you don’t have time to feel insecure because you’re too busy changing a diaper/cleaning up a little pile of baby puke that you just found on your bed, you’re kind of in the clear. after the ecstasy the laundry, as jack kornfield says. and yet: after the laundry, when you’re standing there thinking “oh my god the laundry is done and i have time to think” your follow-on thought could very well be “oh no the laundry is done what the FUCK have i ACTUALLY been doing all my life. OH. NO.” i’ve been on a wild tear for the past two or three years to really kick the shit out of my bucket list, and i’ve done it with fervor. i’d wanted to make a record with my dad. (i did it and i put it out using the funds from my patreon. it was wonderful.) i’d wanted, for ten years, to make a record with Edward Ka-spel. (i did it. i’m wrapping up that tour this week. it was really beautiful and fulfilling.) i had a baby. and i’ll admit it: none of these three above projects were a huge commercial success. but a nice side effect of doing all these projects, baby included, is that i haven’t had much time and/or energy to feel insecure. i’ve been too goddamn busy, every hour of every day, trying to convince me and everyone around me that i can Do It All — be a Pretty Good Mother, a Pretty Good Friend, a Pretty Good Partner to my husband, a Pretty Good Artist. and it’s the Pretty Good Artist thing that is creeping out of the dark doorway and slithering up my dress as i take my walk around the mental block today. and i am thinking about it for two reasons: one reason is that i’m wrapping up this tour with Edward and i have LITERALLY no plans for a record after this.
i have a ton of songs that i wrote in australia over the winter (they’re amazing, mostly), i have a list of art and music projects and collaborations as long as my arm, and i have a ton of unfinished video projects to tie up. i still need to write my long piece about lesvos and i have one beautiful brand-new song i’ve recorded but haven’t put out yet. but i don’t have any PLAN-plans. no recording plans, no touring plans. this is one of those rare moments where i get to stop and look around and PICK which way to walk. the other reason is that — as i write this — i’m away from my baby for the first time. this is the first time in his life that we’ve been apart for three entire days and nights. i’m touring around with my band, with the gaps in my time-landscape that i used to have on a daily basis: the time to read the newspaper, the time to catch up on other people’s careers and news, and the time to think about the choices i’ve made and the choices i’m about to make. and most importantly, the time to feel insecure about all of those choices. and if i’m super lucky, time to start feeling insecure about feeling insecure. i read this interview with lorde in the guardian yesterday and there they were. surging waves of Fucking Joan. let me do a deep dive on Lorde here… i fucking love Lorde’s music; i cherish her debut album. absolutely and purely and shamelessly (and i won’t go on a long tangent here about how i don’t think anyone should feel any shame about liking ANY music. suffice to say that “guilty pleasure” should be stricken from our lexicon. if it’s music and it brings you pleasure, there should be absolutely no guilt about it, whether it’s ABBA, Mozart, Kenny G christmas carols or air supply). i’ve had a hard time getting into music in the last fifteen years, probably due to aural and spiritual exhaustion thanks to endless touring and thinking of music as Work.
but i heard enough about that record while i was in australia in 2014 (while writing “the art of asking” alone in a little flat in melbourne) that i went to the local CD shop on the corner (RIP) and bought a copy of “pure heroine”. it was cosmic timing — i was free and alone for the hard marathon of book-writing and i had sonic and physical space back to myself for the first time in years. i put the record on expecting to be disappointed (as i am pretty much every time i take a risk on a new hip pop/indie album), and instead, i felt transported. Lorde was enough parts new wave and originality to speak my language and draw me into a new conversation of Song. i reveled. for the next few months i listened to the album non-stop, and i googled Lorde, wondering what the deal was with this 17-year-old wunderkind from new zealand. i felt a kind of protective parental protectiveness towards her (ONE OF US! ONE OF US!) and it warmed me with pride when david bowie would say nice things about her, or she’d be invited to do huge fancy things at the rock and roll hall of fame, etc…it felt like a community win. she wrapped up that tour and album cycle and i wondered what she would do next. and, looking back, it appears she did everything right. she didn’t race into the studio with the swedish-hitmaster du jour and stay on the road and in the glare of the media like most young female pop stars. according to the article, she just stopped. she went back to new zealand, hung out with her friends, and took five years to release her next record. she didn’t chase fame and endless oncoming opportunities. she made some real friends, she kept it relatively real, and she didn’t rush. she wrote a batch of songs she wasn’t happy with, she started over, she did what a good artist should do: she exhaled and exalted the content above all. from what i’ve read, the album is killer (i’m going to listen to it the minute i’m off tour). and as i read that guardian article, there it was again. 
Fucking Joan. i noticed at least six moments where i compared my choices and my life path to Lorde’s. WHAT? why? she and i are so incredibly different, and at such completely different points in our lives. but still, how deliciously tempting… if only i had a nice mother from new zealand! i thought. if only i LIVED IN NEW ZEALAND. fuck. if only someone had recognized my talent when i was fifteen! if only i had released my sophomore album at TWENTY! if only i had a MANE OF HAIR! FUCK, i thought, here we go. here we Joan. it didn’t take long for the rest to tumble. why didn’t the dresden dolls take a break in 2005 when we should have? why didn’t i leave college at 18 instead of staying put like a sheep on zoloft for four miserable years? I WANT THOSE FOUR YEARS BACK. and finally: WHAT AM I DOING READING THE GUARDIAN APP INSTEAD OF WRITING BRILLIANT SONG LYRICS? and ultimately: I AM BAD. LORDE IS GOOD. and then, of course, it all felt so extreme that i could laugh at myself and acknowledge this as a Very Classic Fucking Joan Moment. and let’s get meta, i’ve just spent the last three hours of my life in a tour van writing THIS, instead of some super-reflective and beautiful poetry, because … seriously? FUCK IT. this is what i wanted to do on this van ride. write a blog. not a poem. there’s a long laundry list of potential regrets I could have, and HAVE had. that i’d gone to school for art. that i’d spent more time caring about how my hair and clothes looked so that fashion magazines had given me the time of day. that i hadn’t signed with a major label. that i had hidden from view so that people could think i was sexily mysterious (and, by default, way more amazing). that i’d forced myself to be disciplined enough to write more songs about anthony and birth and my first true love, Jason (who introduced me to the legendary pink dots) when Edward and i were working on this record. that i’d spent more time practicing piano. 
that i’d picked a perfume to wear when i was 20 so that everyone would associate me with one wonderful smell. that i’d spent less time answering my email. that i’d been better at answering my email. that i’d read more books. that i’d taken time off between my huge albums and hadn’t allowed myself all the side-project and ukulele indulgences. that i’d been more articulate in my interviews. that i’d stuck to my guns when the whole kickstarter kerfuffle happened and not let myself get bullied by my management. that i’d learned guitar. that i’d moved to New York in my twenties. that i’d been kinder to my lovers. that Anthony was still alive. that none of us would ever die…. we could go on and on. but the question i have to ask myself is this: if i HAD spent my time and energy on all these fictional pursuits, what WOULDN’T i have done? do i really think life works this way? it’s not like i spent the last fifteen years sitting in a basement on a dirty couch shooting heroin. i’ve always been working. i’ve always been trying. going left, going right, making decisions as they came. and for every single thing on that list, there’s an inverse… if i’d gone to school for “art”, who knows? i would not be here, now, making this music that i’m making that i so love. if i’d spent less time on the internet, i may not have created a kickstarter, a TED talk, a book…. ….a child. if i hadn’t just spent the last two years making a record with my dad and with my childhood songwriting hero edward, i would be regretting…not having done it, basically. bucket dreams take time. if i’d spent time on my fucking hair, i might not have spent the time on my friends. ….. you only have so much time. i made all the choices i made, and now they are made, and here i am, the culmination of Me. i’ve put out like seven albums. i’ve toured the world 14 times. i’ve forged some truly beautiful, deep, real friendships with a handful of people that could only grow with the fertilizer of actual time.
i’ve written a best-selling book. i’ve managed to convince 11,000 people on patreon to give me their credit cards so i can make whatever art i want. i’ve managed not to have any drug overdoses or massive accidents. i’m alive. i put my life on hold for a few years to be with a friend dying of cancer. i’ve played carnegie hall, the sydney opera house, and fronted the boston pops… seriously. i look at that list and i want to cry and laugh. WHAT THE FUCK MORE DO I WANT FROM ME???? Fucking Joan. that’s fucking what. so. if you are like me, if you also have a relationship with Fucking Joan… let’s all agree to just let it go, shall we?? together. please join me, right now, wherever you are sitting or standing (or bathing, or sprawling) in a huge, bellowing BARBARIC YAWP/PRIMAL SCREAM of: FUCKING JOAN!!! FUCKIIIIIIIIINNNNNG JOAAAAAAAANNNNNNNNN!!!!!!!!!!!! FUUUUUUUUUUUUUUUUUUUUUUUCK!!!!!!!!!!!!!!!!!!!! there. and now. whatever we’ve done, haven’t done, chose, didn’t choose, think we fucked up, think we missed….whatever. you’re here and you’re alive and you’re reading this. and we’re totally fine. believe me. ………. start from now. i love you. i even love Joan. fucking. love, amanda fucking palmer, your daily motivational rock star. p.s. the photo is from backstage at the festival i eventually wound up at with my four sweaty dudes, taken by my friend marjolein after arriving in mannheim and just before taking stage. lest you believe the hype, the porta-potty is to the left just off-camera (even thurston moore had to use it), the dumpsters smelled fantastic, and since i had full boobs from not seeing the baby in three days and there was no privacy in the horse-stables-turned-festival dressing rooms and the porta-potties were pretty foul, i crouched behind the porta-potties and expressed my leaking boobs into the bushes. yes, the life of a rock star IS as fucking sexy as you thought. (the tour dress is by Kambriel.) 
……………… thanks to marjolein van elteren, kirbanu, jack nicholls, judith clute and neil gaiman, all of whom proof-read this blog and none of whom judge me for no capitalizing. ee cummings 4eva. ………………. i’m a songwriter, musician, performer, feminist and general weirdo who funds my work through the generosity of 11,000+ folks contributing to patreon.com/amandapalmer. if you want to support me there for as little as $1 per project and get regular emails from me, i’d be grateful. it’s a fantastic, loving, intelligent community of people — a little corner of light on the net. if you want a taste of my music, go here for a guide. lastly, if you liked this blog, you’d probably like my book, “the art of asking”. it’s here.
https://medium.com/we-are-the-media/oh-lorde-deliver-me-from-fucking-joan-17ed0a1d83e8
['Amanda Palmer']
2019-10-24 21:45:52.375000+00:00
['Feminism', 'Amanda Palmer', 'Insecurity', 'Music']
We can transform the energy sector away from fossil fuels to renewables, now
IMAGE: EDans (CC BY) Take five minutes to read this article in Fast Company, “We know how to build an all-renewable electric grid”, which uses links to academic articles to illustrate the feasibility of reforming the power generation infrastructure of a huge country like the United States, resulting in its complete decarbonization and the generation of enough electricity to recharge all land transportation, as well as being able to heat buildings, produce steel using electrical power, and to create enough renewable electricity to produce hydrogen for a host of other requirements. All this without the need to resort to nuclear energy, which nobody wants in their back yard. Progressive improvements in renewable energy mean it is now possible to use sustainable power to feed national grids in advanced industrial economies: science can now refute the lies spread by people like Donald Trump. Transforming our energy generation infrastructure is one of the most urgent tasks in tackling our climate emergency: electricity generation creates around 25% of greenhouse gas emissions, transportation contributes around 14%, while a great many industrial processes that today use energy obtained through fossil fuels can be converted to electricity. Converting to clean electricity means more than illuminating our homes and powering the national grid: we would be creating a source of energy to help boost the sectors of the economy that produce the remaining 75% of greenhouse gas emissions: electric cars and buses, domestic and industrial heating and cooling systems, along with energy-intensive factories. The transition to renewable energies will lead to a drastic reconversion of the balance of payments of many countries and the development of a new geopolitical order, but it will also play the biggest role in mitigating the effects of the climate emergency, meaning we must do it as soon as possible.
It is no longer a matter of infrastructure amortization cycles: it is a question of survival. Our climate emergency is a tragedy of the commons on a global scale: the rational and selfish decisions of each nation have worsened circumstances for all. Why should any nation reduce its greenhouse gas emissions when it is so reliant on the energy resources that emit them? But if we all follow that rationale, which is what we’ve been doing for a long time, then we will all suffer the cumulative impacts of all those emissions. In other words, what is best for each country is worse for the planet as a whole. At the same time, what is worse for each country will, over time, benefit the planet. At this point, that’s the only viable solution.
https://medium.com/enrique-dans/we-can-transform-the-energy-sector-away-from-fossil-fuels-to-renewables-now-a563195b24b7
['Enrique Dans']
2019-09-02 20:21:23.606000+00:00
['Climate Emergency', 'Climate Change', 'Energy', 'Electricity', 'Renewable Energy']
Facebook’s Libra Masterplan
The basis of the exploit lies in combining the pseudonymity of Bitcoin’s public key cryptography with the transparency of the Bitcoin blockchain. The transparency gives the participants in the diagram above the ability to surveil the Bitcoin system and produce reports that tick all the boxes necessary for regulatory compliance. But the pseudonymity of the system still makes it easy enough for anyone with a computer to circumvent those exact surveillance methodologies when it’s necessary (more info here & here). In the world of financial surveillance, those surveilling know they’re not going to catch every bad guy. It’s typically enough that you can demonstrate that you’re able to provide insight into a sufficient portion of transactions coming into your platform, that you possess far-reaching blacklisting capabilities, and that you keep updated lists of blacklisted entities. And because nearly all activity in the Bitcoin system originates from speculators who typically do not bother to circumvent surveillance, the pie-chart diagrams the blockchain analysis firms produce on behalf of their clients will indeed look compelling. What this numbers-based exercise completely fails to capture is the underlying potential embedded in the pseudonymous design to circumvent surveillance whenever and wherever it is needed. To understand this better, by analogy, let’s say that a government wanted to surveil 3D printers, so that no one prints guns in their homes. To make sure that 3D printers are not being used for this purpose, every 3D printer starts coming with a government-installed webcam attached to it. As soon as this happens, websites start popping up with software to patch your printer’s webcam to send a static video stream, hiding your activities. Now, let’s say 98% of the 3D printer owners do not have any interest in printing guns or manipulating the video stream, so they just leave the webcams on.
If the government were a blockchain analysis company, it would produce a diagram with detailed reports showing how it has effectively observed and cataloged 98% of all 3D printing activity, that 3D printing is one of the most transparent systems in the world, and that the country is safeguarded against 3D-printed guns. That’s essentially how the system of Bitcoin surveillance works today. The on- and off-ramps may be regulated, but the bitcoins themselves are fickle and leak through the cracks. This is an amazing deal for Bitcoin because it means it can both trade at regulated venues and serve the institutional market while at the same time trickling down into the hands of every person from every walk of life on the planet. Transparency and pseudonymity — it is the ultimate combination that any aspiring form of digital currency should try to emulate for global reach. And with the Libra, Facebook is intentionally cloning both of these properties. Listen closely to what David Marcus says in the video below. David Marcus is the Director of Libra and VP of Messaging Products at Facebook, but he is also a Bitcoin fan and up until recently sat on the Board of Directors of the largest cryptocurrency exchange business in the United States: Coinbase. Why Facebook is doing this The reason Facebook is doing this is that they believe the plan has a chance of working. And if it is successful, it pushes an enormous amount of the regulatory responsibility (KYC/AML) of operating the on- and off-ramps away from Facebook and to the cryptocurrency exchanges where the Libra is traded. It’s letting the market figure out a way to give people access to the Libra that works, any way that works, just like it has worked for Bitcoin for 10 years. In fact, opening up the opportunity for anyone to run a Libra exchange means that there are probably even going to be some exchanges that will try to avoid KYC/AML regulations altogether, furthering the Libra’s reach into the world.
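The pseudonymity half of that transparency-pseudonymity combination is worth making concrete: nothing stops a user from generating a brand-new, never-before-seen address for every transaction, which is exactly what defeats blacklist-based surveillance. The sketch below is illustrative only — it hashes a random secret with SHA-256 as a simplified stand-in for Bitcoin’s actual address derivation (ECDSA keypair, then SHA-256 and RIPEMD-160 hashing):

```python
import hashlib
import secrets

def fresh_address() -> str:
    """Mint a new pseudonymous 'address' from a random 32-byte secret.
    (Simplified stand-in for Bitcoin's real derivation pipeline.)"""
    secret = secrets.token_bytes(32)  # private-key stand-in
    return hashlib.sha256(secret).hexdigest()

# A surveillance firm can blacklist every address it has ever seen,
# but the next transaction can simply originate from a new one:
seen = {fresh_address() for _ in range(100_000)}
print(fresh_address() in seen)  # virtually always False
```

This is why address-level blacklists describe the 98% who never bother to evade, not the system’s actual capacity for evasion.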
Many cryptocurrency exchanges have been operating without licenses and without any particular regulatory oversight in the past, and some still do today. And whenever one gets shut down or implements KYC/AML restrictions, another one pops up somewhere else that doesn’t, sometimes run by people who are unaware of the fact that they’re breaking any rules. And sometimes, not even the regulators in that region are aware whether any rules are being broken. The LocalBitcoins platform, which helped people meet in person to trade Bitcoin for cash envelopes, successfully operated without ID requirements for seven years before being forced to remove the option earlier this month. But the “gap” isn’t fully gone yet. There still exist platforms such as Bisq and Hodl Hodl where people are able to circumvent these types of regulations. Here’s a quote from a blog post that Hodl Hodl recently posted when LocalBitcoins shut down in Iran: The main difference between Hodl Hodl and other P2P cryptocurrency exchanges is that we do not hold user’s funds and do not have KYC/AML procedures. Hodl Hodl is also cheaper than most of the other P2P exchanges, with a maximum fee of 0.6% per trade. So, by combining the properties of pseudonymity and transparency into their own Libra blockchain, Facebook hopes to achieve this sweet spot of simultaneous regulatory compliance and regulatory arbitrage, allowing the Libra to spread all over the world like wildfire while other businesses shoulder the heat. And why wouldn’t it spread like wildfire? The Facebook app family (Facebook, Messenger, WhatsApp, Instagram) is home to ~2.5 billion users. And the Libra, being backed by a basket of national currencies and government debt securities, is probably going to be a more stable currency alternative than what anyone else can provide in today’s world except for maybe the Federal Reserve.
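Since the final composition of the Libra reserve was never published, here is only a sketch of how such a basket-backed unit would be valued — the currency amounts and exchange rates below are invented for illustration, using the same fixed-amounts construction the IMF uses for the SDR:

```python
# Value one basket unit in USD: the unit is defined as fixed amounts of
# each reserve currency, so its USD value floats with exchange rates.
# Amounts and rates below are ILLUSTRATIVE, not the official basket.

def basket_value_usd(amounts: dict[str, float],
                     usd_rates: dict[str, float]) -> float:
    return sum(amt * usd_rates[ccy] for ccy, amt in amounts.items())

amounts = {"USD": 0.50, "EUR": 0.18, "JPY": 15.0, "GBP": 0.09, "SGD": 0.07}
rates = {"USD": 1.00, "EUR": 1.10, "JPY": 0.0092, "GBP": 1.27, "SGD": 0.74}
print(f"1 unit = ${basket_value_usd(amounts, rates):.4f}")
```

The stabilizing effect comes from diversification: a swing in any single currency moves the unit only in proportion to that currency’s share of the basket.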
It’s an e-commerce play, duh Ted Livingston did a great write-up on what the long-term ambitions are with all of this in what he calls the WeChat Playbook. Basically, the most plausible scenario is that once you’ve sold your current money for the Libra, Facebook is going to do everything they can to make sure you never need to take your money out of its family of apps again. They will do this by offering you the ability to pay for everything there; sending money to friends, shopping online, paying inside physical stores, paying your bills, buying airplane tickets, bus tickets, and even tipping beggars on the street. Critics are going to complain that the Libra will be a tool for Facebook to extract even more data about its users, to which Facebook is going to respond that they have no special insight or control over the Libra blockchain, because they are just one of 100 validator nodes from the Libra Association. This is mostly true, so save yourself some time and don’t fall into this argument trap. That doesn’t mean that Facebook isn’t going to be able to harvest data about the purchases that occur within their own app ecosystem. Facebook has already begun clawing at this today with the roll-out of in-app purchases in Instagram (buying from brands without leaving the app) and Facebook Marketplace. If the Libra is your currency of choice and the Facebook app family is its natural home, the conversion rate between you and the targeted ads Facebook shows you will likely increase considerably. With one-click purchases, the advertising companies will always be just one click away from your money. And who makes money from that, except for the advertising company? The company selling the ad space! 
And then we haven’t even mentioned that the Libra’s backers will be able to extract enormous interest earnings from the fact that the Libra Association is sitting on giant piles of everyone’s cash in the “real world” while everyone else is just sending around funny Libra tokens on a blockchain in the cloud.
The Libra Masterplan
Simply put, the Libra Masterplan borrows pages from the Bitcoin playbook and the WeChat playbook at once. If successful, it makes the Libra accessible to everyone on the planet while offloading the regulatory burden of operating the on- and off-ramps to other businesses. With the massive network effects of its ~2.5 billion user app ecosystem, it has the potential to create the largest digital money platform that has ever existed, where it can record all the purchases you make and market goods and services to you on a daily basis, while leveraging the fact that Facebook already knows more about you than almost anyone else.
What happens next (and what this means for the cryptocurrency industry)
In the grand scheme of things, a successful Libra is probably going to do more for Bitcoin, in terms of warming users up to the idea of cryptocurrency, than anything has ever done in the past. Bitcoin increased in value by more than 10% over the past weekend, and is nearing a 15-month high. Moreover, since the Libra is a “stablecoin” at the mercy of central banking monetary policy, it doesn’t pose a significant threat to Bitcoin as an investment vehicle. Thus, a successful Libra is probably a net good for Bitcoin. That said, the regulatory response to the Libra during the coming year is going to carry significant consequences for the Bitcoin industry in the short term, as I lay out below. I see four potential scenarios moving forward.
Scenario 1: No Libra Launch
Regulators put a stop to Facebook’s plans before they even materialize, citing privacy issues, the fact that they do not like the idea of Facebook sitting on such vast sums of reserves, or fears that the Libra would have a destabilizing effect on the economy. Everything goes back to normal.
Scenario 2: Libra launches, but with KYC
In this scenario, regulators are okay with the reserve structure but see through the Libra transparency-pseudonymity masterplan. The Libra Association can attempt to please regulators by restricting the blockchain to only process transactions coming from wallets that have been verified with government ID, such as Facebook’s own Calibra wallet. While this is a possible outcome (and technically easy for them to implement), it also eliminates the entire purpose of the Libra blockchain. In this case, the Bitcoin industry could be in trouble as well, because it is currently exploiting that exact same transparency-pseudonymity loophole that allows it to fit nicely into the regulated financial market.
Scenario 3: Libra launches, without KYC (good-for-Bitcoin case)
In this scenario, the Libra launches in the exact form its creators envision today. Ideally, this means that there isn’t anything wrong with the Bitcoin playbook either, and we can all stop stressing. The Libra and Bitcoin can then compete with each other, or complement each other, on their own merits.
Scenario 4: Libra launches, without KYC (bad-for-Bitcoin case)
In the worst case, regulators take note of the transparency-pseudonymity loophole, but notice that the Bitcoin project has a wildly different relationship to privacy compared to the Libra. In Bitcoin, the project’s developers and supporters are always seeking new and innovative ways to eliminate the effectiveness of the blockchain analysis firms. And there’s no “Bitcoin Association” you can regulate if things start going south.
It is possible that the Libra brings so much heat to the cryptocurrency industry that, in the turmoil that erupts, the Libra is the only cryptocurrency that survives the regulators’ scrutiny, by virtue of being by far the easiest cryptocurrency to control and surveil. Thanks to Joey Krug for useful feedback.
https://onezero.medium.com/the-libra-masterplan-dc9560e41c87
['Eric Wall']
2019-07-26 21:31:39.831000+00:00
['Bitcoin', 'Privacy', 'Industry', 'Cryptocurrency', 'Facebook']
Build an RSA Asymmetric Cryptography Generator in Go
Background
Secure communication is essential for a business to build strong trust. Furthermore, most governments mandate encrypting personally identifiable information (PII) when sending it to another business party. Asymmetric cryptography is a universal approach to encryption in information-exchange applications. In my experience, some new businesses have a hard time generating their pair of asymmetric keys, and so they ask my team to generate the key pair for them. In principle, that’s a broken security practice: the private key should only be seen and kept by its owner, so if my team creates the private key for them, they run the risk of their business information leaking to my team, since we have seen the key. Another situation I have noticed often is the other party generating a key that does not match the specifications. We requested 2048 bits, but we got a 1024-bit key from the business partner. From those observations, I think building a portable asymmetric cryptography generator that can run everywhere is a good idea. An application that can run on most platforms with little effort is a useful tool; having that capability solves most of the initiation phase of establishing secure communication.
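As an aside, the “key that does not match the specifications” problem described above is easy to catch programmatically. The following is an illustrative sketch in Java rather than the article’s Go (class and method names are my own, using only the standard `java.security` API): it generates a 2048-bit RSA key pair and verifies the public modulus really has the requested size.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.RSAPublicKey;

public class RsaKeyCheck {
    // Generates an RSA key pair of the requested size using the JDK's standard API.
    static KeyPair generate(int bits) {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(bits);
            return gen.generateKeyPair();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("RSA not available in this JVM", e);
        }
    }

    // Reports the actual bit length of the public modulus, which is how you
    // catch the "asked for 2048 bits, received 1024" mismatch.
    static int modulusBits(KeyPair pair) {
        return ((RSAPublicKey) pair.getPublic()).getModulus().bitLength();
    }

    public static void main(String[] args) {
        KeyPair pair = generate(2048);
        int bits = modulusBits(pair);
        if (bits != 2048) {
            throw new IllegalStateException("Key does not meet the spec: " + bits + " bits");
        }
        System.out.println("OK: " + bits + "-bit RSA key pair");
    }
}
```

The same check translates directly to Go’s `crypto/rsa` package: inspect the generated key’s modulus size before accepting a key pair from any party.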
https://medium.com/better-programming/build-an-rsa-asymmetric-cryptography-generator-in-go-d202b18bcfd0
['Purnaresa Yuliartanto']
2020-01-29 00:15:44.969000+00:00
['Startup', 'Encryption', 'Software Development', 'Cybersecurity', 'Programming']
Algorithmic Problem Solving: How to efficiently compute the parity of a stream of numbers
Problem Statement: You are getting a stream of numbers (say, long-type numbers); compute the parity of each number. Hypothetically, you have to serve a huge scale, like 1 million numbers per minute. Design an algorithm with that scale in mind. The parity of a number is 1 if the total number of set bits in the binary representation of the number is odd; otherwise the parity is 0.
Solution:
Approach 1 - Brute Force: The problem statement clearly states what parity is. We can count the total number of set bits in the binary representation of the given number. If the total number of set bits is odd, the parity is 1, else 0. So the naive way is to keep doing a bitwise right shift on the given number and check the current least significant bit (LSB) to keep track of the result. In the code, we go through all the bits in the while loop one by one. With the condition ((no & 1) == 1), we check whether the current LSB is 1 or 0; if it’s 1, we do result ^= 1. The variable result is initialized to 0. So when we xor (^) the current value of result with 1, result is set to 1 if it is currently 0, and to 0 otherwise. If there is an even number of set bits, result eventually becomes 0, because xor between all the 1’s cancels them out. If there is an odd number of 1’s, the final value of result is 1. no >>> 1 right-shifts the bits by 1. >>> is the logical right shift operator in Java, which shifts the sign bit (the most significant bit in a signed number) as well. There is another right shift operator, >>, called the arithmetic right shift operator [see reference 1 at the end of the page]. It does not shift the sign bit out of the binary representation; the sign bit remains intact at its position. Finally, result & 0x1 returns 1 if there is parity and 0 otherwise.
Advantages: The solution is very easy to understand and implement.
Disadvantages: We process all the bits manually, so this approach is hardly efficient at scale.
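The embedded snippet for this brute-force approach is missing from this copy of the article; a Java reconstruction matching the prose above (class name is my own; `no` and `result` follow the article’s wording) might be:

```java
public class ParityBruteForce {
    // Approach 1: inspect every bit of the number, one logical right shift at a time.
    static short parity(long no) {
        short result = 0;
        while (no != 0) {
            if ((no & 1) == 1) { // current least significant bit is set
                result ^= 1;     // toggle result for every set bit seen
            }
            no >>>= 1;           // logical right shift: also consumes the sign bit
        }
        return (short) (result & 0x1);
    }

    public static void main(String[] args) {
        System.out.println(parity(40L)); // 40 = 0b101000, two set bits
    }
}
```

Note the `>>>` shift: with the arithmetic `>>` operator, a negative input would never reach 0 and the loop would not terminate.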
Time Complexity: O(n), where n is the total number of bits in the binary representation of the given number.
Approach 2 - Clear all the set bits one by one: There is a bottleneck in the above solution: the while loop itself. It just goes through all the bits one by one; do we really need to do that? Our concern is only the set bits, so we gain nothing by going over the unset (0) bits. If we can visit only the set bits, our solution becomes a little more optimized. In bitwise computation, given a number n, we can clear the rightmost set bit with the following operation: n = n & (n - 1). Take an example: say n = 40; the binary representation in 8-bit format is 00101000.
n = 0010 1000
n - 1 = 0010 0111
n & (n - 1) = 0010 0000
We have successfully cleared the lowest set bit (the 4th bit from the right side). If we keep doing this, the number n will become 0 at some point. Based on this logic, when we compute parity, we don’t need to scan all the bits. Rather, we scan only k bits, where k is the total number of set bits in the number and k <= the length of the binary representation.
Advantages: Simple to implement. More efficient than the brute-force solution.
Disadvantages: It’s still not the most efficient solution.
Time Complexity: O(k), where k is the total number of set bits in the number.
Approach 3 - Caching: Look at the problem statement once more; there’s definitely a concern about scale. Can our earlier solutions scale to serve millions of requests, or is there still scope to do better? We can probably make the solution faster if we store results in memory, i.e. caching. That way we save the CPU cycles spent recomputing the same result. So if the total number of bits is 64, how much memory do we need to save all possible numbers? 64 bits give us Math.pow(2, 64) possible signed numbers (the most significant bit stores the sign).
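For reference, the clear-the-lowest-set-bit loop of Approach 2 (whose original embedded snippet is not present in this copy) can be reconstructed in Java as follows; the class name is my own:

```java
public class ParityKernighan {
    // Approach 2: n & (n - 1) clears the rightmost set bit, so the loop
    // iterates exactly once per set bit rather than once per bit position.
    static short parity(long no) {
        short result = 0;
        while (no != 0) {
            result ^= 1;     // one more set bit accounted for
            no &= (no - 1);  // drop the lowest set bit
        }
        return result;
    }
}
```

On a number with only a few set bits, this runs in a handful of iterations instead of up to 64.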
The size of a long-type number is 64 bits, or 8 bytes, so the total memory required is 64 * Math.pow(2, 64) bits, or 134217728 terabytes. This is far too much; it is not worth storing such a humongous amount of data. Can we do better? We can break the 64-bit number into groups of 16 bits, fetch the parity of those individual groups of bits from a cache, and combine them. This works because 16 divides 64 into 4 equal parts, and we care only about the total number of set bits. So as long as we get the parity of each individual group of bits, we can xor their results with each other, since xor is associative and commutative. The order in which we fetch those groups of bits and operate on them does not even matter. If we store the parities of those 16-bit numbers as integers, the total memory required is Math.pow(2, 16) * 32 bits = 256 kilobytes. In the code, we shift a group of 16 bits by i * WORD_SIZE, where 0 ≤ i ≤ 3, and do a bitwise AND operation (&) with a mask = 0xFFFF (0xFFFF = 1111111111111111) so that we extract just the rightmost 16 bits into integer variables like masked1, masked2, etc. We pass these variables to a method checkAndSetInCache, which computes the parity of that group in case it’s not available in the cache. In the end, we just xor the results for these groups of bits, which determines the final parity of the given number.
Advantages: At the cost of relatively little memory for the cache, we get better efficiency, since we reuse 16-bit groups across inputs. This solution can scale well when we are serving millions of numbers.
Disadvantages: If this algorithm needs to run on an ultra-low-memory device, the space complexity has to be thought through in advance to decide whether it’s worth accommodating that amount of space.
Time Complexity: O(n / WORD_SIZE), where n is the total number of bits in the binary representation.
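A reconstruction of the caching approach in Java, following the names used in the article (WORD_SIZE, masked1 … masked4, checkAndSetInCache); the original snippet was an embedded gist that is no longer present, so treat this as a sketch rather than the author’s exact code:

```java
public class ParityCached {
    static final int WORD_SIZE = 16;
    static final int MASK = 0xFFFF; // 1111111111111111: keeps the rightmost 16 bits

    // One int per possible 16-bit word (2^16 * 32 bits = 256 KB);
    // -1 means "parity not computed yet".
    static final int[] cache = new int[1 << WORD_SIZE];
    static { java.util.Arrays.fill(cache, -1); }

    // Computes (and memoizes) the parity of a single 16-bit word.
    static int checkAndSetInCache(int word) {
        if (cache[word] == -1) {
            int p = 0;
            for (int w = word; w != 0; w &= (w - 1)) p ^= 1; // Approach 2 on 16 bits
            cache[word] = p;
        }
        return cache[word];
    }

    static short parity(long no) {
        int masked1 = (int) ((no >>> (3 * WORD_SIZE)) & MASK);
        int masked2 = (int) ((no >>> (2 * WORD_SIZE)) & MASK);
        int masked3 = (int) ((no >>> (1 * WORD_SIZE)) & MASK);
        int masked4 = (int) (no & MASK);
        // xor is associative and commutative, so the grouping order is irrelevant
        return (short) (checkAndSetInCache(masked1) ^ checkAndSetInCache(masked2)
                ^ checkAndSetInCache(masked3) ^ checkAndSetInCache(masked4));
    }
}
```

Across millions of inputs, most 16-bit groups repeat, so after a warm-up phase nearly every lookup is a cache hit.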
All right/left shift and bitwise &, |, ~, etc. operations are word-level operations which are done extremely efficiently by the CPU. Hence their time complexity is taken to be O(1).
Approach 4 - Using XOR & shifting operations: Let’s consider this 8-bit binary representation: 1010 0100. The parity of this number is 1. What happens when we right shift this number by 4 and xor that with the number itself?
n = 1010 0100
n >>> 4 = 0000 1010
n ^ (n >>> 4) = 1010 1110
n = n ^ (n >>> 4) = 1010 1110 (n is just assigned to the result)
In the rightmost 4 bits, exactly those bits are set that differ between n and n >>> 4. Now let’s concentrate on these rightmost 4 bits only: 1110, and forget about the other bits. Now n is 1010 1110 and we are concentrating only on the lowest 4 bits, i.e. 1110. Let’s do a bitwise right shift on n by 2.
n = 1010 1110
n >>> 2 = 0010 1011
n ^ (n >>> 2) = 1000 0101
n = n ^ (n >>> 2) = 1000 0101 (n is just assigned to the result)
Concentrate on just the rightmost 2 bits now and forget about the leftmost 6 bits. Let’s right shift the number by 1:
n = 1000 0101
n >>> 1 = 0100 0010
n ^ (n >>> 1) = 1100 0111
n = n ^ (n >>> 1) = 1100 0111 (n is just assigned to the result)
We don’t need to right shift anymore; we now just extract the LSB, which is 1 in the above case, and return the result: result = (short) n & 1. At a glance, the solution might look a little confusing, but it works. How? We know that 0 xor 1 or 1 xor 0 is 1, and otherwise 0. So when we divide the binary representation of a number into two equal halves by length and xor them, every differing pair of bits results in a set bit in the xor-ed number. Since parity captures whether an odd number of set bits is present in the binary representation, we can use the xor operation to check whether an odd number of 1’s exists there.
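Extending the 8-bit walkthrough above to a full 64-bit long gives the following Java method (a reconstruction; the article’s own snippet was an embedded gist that did not survive, and the class name is mine):

```java
public class ParityXorShift {
    // Approach 4: the parity of n equals the parity of the xor of its equal halves,
    // so fold the number onto itself: 32, 16, 8, 4, 2, then 1 bits at a time.
    static short parity(long no) {
        no ^= no >>> 32;
        no ^= no >>> 16;
        no ^= no >>> 8;
        no ^= no >>> 4;
        no ^= no >>> 2;
        no ^= no >>> 1;
        return (short) (no & 1); // only the LSB carries the answer now
    }
}
```

Six shift-and-xor steps handle any 64-bit input, with no loop, no branch, and no cache.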
Hence we right shift the number by half of the total number of digits, xor that shifted number with the original number, assign the xor-ed result to the original number, and concentrate only on the rightmost half of the number from then on. So we are xoring half of the bits at a time and shrinking the scope of the xor. For 64-bit numbers, we start xoring with 32-bit halves, then 16-bit halves, then 8, 4, 2, and 1 bits respectively. Essentially, the parity of a number equals the parity of the xor of the equal halves of the binary representation of that number. The crux of the algorithm is to concentrate on the rightmost 32 bits first, then 16, 8, 4, 2, and 1 bits, ignoring the bits to the left.
Advantages: No extra space; uses word-level operations to compute the result.
Disadvantages: Might be a little difficult for developers to understand.
Time Complexity: O(log n), where n is the total number of bits in the binary representation.
(The full working code is linked in the references below.)
Learning from this exercise: Although it’s basic knowledge, I want to mention that word-level bitwise operations are constant in time. At scale, we can apply caching by breaking down the binary representation into equal groups of a suitable word size, like 16 in our case, so that we can accommodate all possible group values in memory. Since we are supposed to handle millions of numbers, we end up reusing the 16-bit groups from the cache across numbers. The word size does not necessarily need to be 16; it depends on your requirements and experiments. You don’t need to store the binary representation of a number in a separate array to operate on it; rather, clever use of bitwise operations can help you achieve your target.
References:
[1]. https://stackoverflow.com/questions/2811319/difference-between-and
[2]. https://gist.github.com/kousiknath/b0f5cd204369c5cd1669535cc9a58a53
https://medium.com/free-code-camp/algorithmic-problem-solving-efficiently-computing-the-parity-of-a-stream-of-numbers-cd652af14643
['Kousik Nath']
2019-05-05 08:33:23.800000+00:00
['Tech', 'Java', 'Algorithms', 'Interview', 'Programming']
Insulin Causes Fat Storage, Sure, But It Doesn’t Make You Gain Weight
Read through any article that embraces the validity of the Calories In, Calories Out (CICO) model for weight loss, and you’re bound to come across comments like: “Wrong. All you need to care about is insulin production. If you keep your insulin levels low, you’ll lose weight. Truth is, the total number of calories you eat doesn’t matter.” And then, right on cue, they’ll bring up the carbohydrate-insulin model of obesity — which proposes that insulin drives nutrients to be stored as fat, thus leaving the rest of the body with low energy and causing individuals to overfeed. Well. Truth is, while hormones do play a critical part in the regulation of body weight, it is incredibly simplistic and naive to single out insulin as the sole contributor. To understand why, we need to take a deep dive into the science behind insulin and the carbohydrate-insulin model of obesity. So, grab those nerd glasses (and possibly a cup of coffee!) and follow along. It’s going to be a wild, evidence-based ride. What is insulin, anyway? Insulin is a hormone made in your pancreas, a gland located behind your stomach. And one of its main roles is to help regulate blood sugar levels. But — how? Well, when you eat, the carbohydrates in your meal are broken down into glucose, which elevates your blood glucose levels. This, in turn, signals your body (more specifically, your pancreas) to release insulin. The hormone then shuttles glucose from your blood into your muscle and fat cells, where it can be used for energy or stored for later use, thereby lowering and stabilizing your blood glucose levels. So, when your body is functioning optimally, blood glucose and insulin are in lockstep. You eat a meal, blood glucose goes up, insulin goes up, blood glucose goes down, and insulin goes down. So, how does insulin (supposedly) cause weight gain? With everything we’ve covered thus far, insulin sounds like a ‘good guy.’ Without it, glucose would build up in the blood.
Having too much sugar in the blood for prolonged periods (i.e. hyperglycemia) can damage the vessels that supply blood to vital organs, which can increase the risk of heart disease and stroke, kidney disease, vision problems, and nerve problems. Eeks. So, what’s up with all the bad rep insulin gets? It really has to do with two other roles insulin plays in your body. #1: Insulin inhibits lipolysis During lipolysis, your body breaks down the fat in your adipose tissue stores — fatty tissues that cushion and line your body and organs — into free-moving fatty acids that can be repurposed or used as fuel. Interestingly, high insulin levels inhibit this process. This explains why many people equate insulin spikes with ‘turning off’ your body’s ability to burn fat. Misguided thinking, of course, but we’ll get to that in a bit. #2: Insulin stimulates lipogenesis Just so you know, lipogenesis is the process your body uses to move fatty acids from your bloodstream into your adipose tissue stores, where they’re stored for later use. If it makes things a little easier, you can think of lipogenesis as the ‘fat storage mode.’ It’s kind of the opposite of lipolysis. Also worthy of mention is the fact that your body can also convert and store carbs as fat through a process known as de novo lipogenesis (DNL). That said, it’s crucial to note that DNL only happens in meaningful amounts when you consistently eat in a calorie and carb surplus. Wait a minute — when you take into account these two roles of insulin, you’d almost feel compelled to conclude that insulin is indeed the culprit behind weight gain. Not only does it inhibit lipolysis (i.e. prevent the breaking down of fats), but it also stimulates lipogenesis (i.e. triggers the storing of fat)! At first glance, it definitely seems as though insulin is the villain here. But this view is too simplistic and naïve.
Attributing weight gain to a single hormone is too simplistic Instead of thinking that insulin’s primary purpose is to make you fat, you should think of it as a hormone that helps your body use and absorb the food you’ve eaten in the best way possible. Think about it: why would your body want to break down more fat when there’s already plenty of incoming, readily available nutrients from your meal (e.g. carbs and protein)? Insulin here is just doing the sensible and rational thing to keep you alive. Consider this: if you had plenty of takeaway food stored in your fridge, would you still cook dinner? Probably not. And perhaps more importantly, insulin isn’t the only hormone in your body. There’s always a complex interplay of various hormones and enzymes that control your body’s response to a meal. For instance, while insulin inhibits fat-burning, other hormones that are active — such as glucagon, growth hormone, cortisol, and epinephrine — stimulate fat-burning. And while insulin stimulates fat storage, other hormones, such as leptin, inhibit fat storage. So, to say that insulin is the sole hormone responsible for weight gain when there are so many other hormones and enzymes in your body is too myopic. It’s akin to saying that the cause of extreme poverty is individuals not working hard enough, instead of taking into account other crucial factors such as lack of education, poor healthcare systems, lack of government support, etc. Back to the issue at hand — the takeaway from this section should be that when your insulin levels are high, you’ll burn less fat than when your insulin levels are low. But your body’s fat-burning engines won’t shut off completely, as has been suggested by so many ‘health experts’ and ‘fitness gurus’ (who often have a low-carb cookbook to sell). More proof why you shouldn’t blame insulin for your weight gain OK, I hear people saying, “So what if there are other hormones at play? I still believe that insulin is the Bad Guy.
I controlled my insulin levels and dropped 30 pounds. How about that?” Well, the following points might help. #1: Protein is a potent stimulator of insulin too This is a fact carbohydrate-insulin model proponents and insulin-haters like to avoid. Truth is, protein is a potent stimulator of insulin too. In fact, high-protein, low-carbohydrate (HPLC) meals can cause more insulin to be released than high-carbohydrate meals. Your body releases as much insulin when you eat beef as when you eat brown rice. And more importantly, protein also causes a rapid rise in insulin followed by a rapid decline — just like carbs. And yet (here’s the important bit), high-protein diets have consistently been shown to be effective at aiding in and maintaining weight loss. Surprised? Besides, here’s something fun to think about. As per the beliefs of insulin-haters, if the elimination of insulin spikes leads to fat loss, what macronutrient are they left with to eat? Carbs are definitely out of the picture. So is protein. I don’t know about you, but I’d pay to see what they’d eat. #2: Low-carb diets do not lead to greater weight loss results Even if we were to overlook the fact highlighted above (i.e. that protein also causes insulin spikes, yet is effective in weight loss), this upcoming point is probably going to drive the nail into the coffin for the theory that insulin causes weight gain. And that is, low-carb, high-fat (LCHF) diets do not produce greater weight loss compared to high-carb, low-fat (HCLF) diets when protein and calories are equated. To quote a recent meta-analysis: When total calories and protein are equated, the amount of carbohydrates versus fats does not produce differences in fat loss. In fact, high-carb, low-fat diets actually produced a slight advantage in energy expenditure and fat loss. 
#3: Insulin does not make you hungry Also, remember how the carbohydrate-insulin model proposes that because insulin drives all nutrients to be stored as fat, the body essentially ‘starves,’ forcing you to eat more? Here’s the thing. The evidence to support this assertion is extremely weak. If insulin were to really be responsible for ‘emptying’ your bloodstream of fatty acids and glucose (by storing them), then you’d expect that obese people and diabetics — individuals with elevated levels of insulin — would have lower levels of circulating fatty acids, right? Research shows this is not the case. People with obesity actually exhibit normal or even high levels of fatty acids in their bloodstream. Not to mention, insulin has consistently been shown to suppress appetite, instead of stimulating it. Insulin is not the villain Truth is, insulin is not the terrible fat-gaining hormone that self-made fitness and nutrition gurus make it out to be. If you truly wanted to minimize insulin spikes, you’d have to go on a low-carbohydrate, low-protein, and high-fat diet. Imagine what that’s like! Bottom line? Instead of fixating on insulin spikes, focus your attention on staying within a calorie deficit (i.e. eating fewer calories than your body burns). Because that’s the key to losing weight. That said, it’s not to say that all you have to care about is the number of calories you eat. You can’t just eat 1,600 calories worth of chips every day. You’d lose weight, of course (if you’re in a calorie deficit), but it’s not healthy. So, no — nothing that extreme. You still need to focus on the quality of your calories. For example, your macronutrient and micronutrient requirements. While this can be hard to hear, it has to be said: there is no ‘best diet.’ A universal, cookie-cutter nutrition plan that works for everyone doesn’t exist. There is only what works best for you — something that allows you to be in a calorie deficit, yet is sustainable, healthy, and enjoyable for you. 
If it’s intermittent fasting, great. And if it’s the ketogenic diet, sure, why not. Just know that the reasons why these diets work are that they create a calorie deficit, sometimes without you even realizing — it’s not that they’re superior to the traditional way of dieting (i.e. calorie-restriction). And just, please, for the last time, don’t blame your weight gain on insulin.
https://gene-lim.medium.com/insulin-causes-fat-storage-sure-but-it-doesnt-make-you-gain-weight-6ee454c0468e
['Gene Lim']
2020-12-14 06:03:36.418000+00:00
['Lifestyle', 'Weight Loss', 'Health', 'Diet', 'Nutrition']
Who fixes the robots?
Written by Joe Wheatley, Leading tech-enabled change and transformation, Triad
This question emerged during the Digital Leaders Salon event in March, where Adrian Leer, our MD at Triad Group Plc, presented the findings of our 2020 tech trends survey. The context around it was the ever-increasing use of technology in the workplace, in particular white-collar automation and the move to low-touch business processes. The participants, led by Sarah Burnett of the Everest Group, discussed how, as automation in business processes increases and human involvement correspondingly decreases, we may well arrive, if we don’t plan ahead, at a situation where organisations struggle to react to changes in process. Or at a situation where the support mechanisms for automated processes are so structured that they leave little room for adaptation — meaning that these processes rapidly become obsolete. Since I joined Triad to lead the Intelligent Automation practice, we’ve also identified this challenge and developed a dual approach. We know that automation represents a significant opportunity and provides benefits for many businesses. However, the implementation must be done in a sustainable and flexible way that allows businesses to adapt to changing situations and market conditions in both the near and long term. Firstly, it’s about people. As automation increasingly replaces human interaction with the data flowing through these processes, the role of staff will change, and this must be thought through ahead of going into any automation initiative. This consideration applies elsewhere, but here the focus is on one crucial element: upskilling process ‘doers’ into process ‘administrators’. By this, we mean ensuring that, as part of the development process and through deliberate training, some of your Subject Matter Experts (SMEs) gain an understanding of how to work in the RPA technology you choose.
These technologies are, after all, designed to be low-code and suitable for use by non-technical business staff. Here, we mean simply the ability to perform basic break/fix actions and ‘keep the lights on’ by handling some of the exceptions that will be thrown up by an active process. In this way, some staff can learn new skills and move into more rewarding positions within their current teams as part of the process of implementing automation. The company also keeps people who know the process involved in its management. Designing for flexibility Secondly, it’s about how we design support and maintenance organisations for automation and the roles that are performed in that function. Context varies, but Triad would typically recommend having rudimentary support done by people who are familiar with the process and situated within the department that owns the outcome, as demonstrated above in the form of process administrators. But for many companies, particularly the (relatively) smaller ones, for whom automation is rapidly becoming more affordable and accessible, it may not make financial sense to build a large internal capability to manage their automations. As such, outsourcing may appeal as a possible solution. This can, however, also present a problem: many traditional outsourcing contracts have very limited flexibility and therefore are not very adaptable to change, while business processes have to be adaptable to changing customer desires and marketplace shifts. As such, when organising this type of work, we advise companies to ensure their contracts take into account both their current needs and their expectations for change. The solution to both parts of the ‘who fixes the robots?’ question comes back to adoption.
Often the term ‘adoption’ is taken to mean the end-users adapting to the new technology with which they find themselves working, but the solution here is about senior leaders in industry and outsourcing accepting that their businesses will change as a result of the spread of novel technologies, and adapting their business models to take advantage of that. Planning for the future evolution of processes, and therefore the ‘fixing’ of robots to reflect those changes, needs building into an automation programme from the start. Originally published here.
https://medium.com/digital-leaders-uk/who-fixes-the-robots-d07cee9644b0
['Digital Leaders']
2020-03-25 15:06:11.440000+00:00
['AI', 'Robots', 'Technology']
Climate change — American as apple pie?
Autumn, and with it harvest season, is upon us. A time when we can look forward to a fresh apple pie and gourd-themed decorations on our doorsteps. You really couldn’t imagine this time of year without them — they’re as much a part of the fall season as the vibrant red, orange and golden hues that light up our trees and forests. But what if there were no changing leaves, no crisp sunny autumn days and ultimately no fall season at all? In most of the U.S., we experience four seasons of roughly equal length; however, scientists predict that climate change will affect how the seasons are arranged and that the year will be dominated by only two seasons — winter and summer. Spring and autumn would be short transitional periods in April and October, with the rest of the year divided between extreme hot and cold. Agriculture will be one of the obvious casualties, and fruit-bearing trees are no exception. Coming out of the hottest September on record, in what is known as “seasonal creep” — when the seasons begin to bleed into each other — many orchards are bracing themselves for a sub-par crop this year. Higher temperatures have always been problematic for agriculture overall. A warm spell too early in the spring can trigger an early bloom, but the inevitable cold snap that follows can damage the plant and retard its ability to produce nuts, seeds and fruit later in the year. In the same way, a heat wave during the summer can increase the risk of disease and pests, both of which are driven by temperature, rain, humidity and other environmental factors. Sticking with the seasonal theme, let’s talk apples. New England, where I live, has long been considered an ideal location for apple growing thanks to a climate that supports a wide variety of the fruit. Unfortunately, this has begun to change over the last century with an average rise in both winter and summer temperatures.
The former might not seem like such a problem, but some types of apple trees, such as Macintosh and Empire, require a minimal “chill period” every winter in order to flower and produce fruit properly. The unpredictable weather patterns — sudden winter thaws or spring frosts — can damage the buds and stress the trees, affecting the quality of their fruit. In fact, according to a report by Manomet Center for Conservation Sciences, current models show fruit-producing areas worldwide losing the ability to successfully grow tree fruit from loss of adequate winter chill days. While other varieties of apples, such as Fuji or Granny Smith, might benefit from additional growing days brought on by a warmer climate, transitioning from one to the other will place a financial burden on the farmer who has planted entire orchards of Macintosh and Empire apples. Even if an orchard reaches harvest time with a perfectly edible crop, it can still be deemed a failure thanks to temperamental weather. The reason? Not pretty enough. These russet marks mean these apples won’t be seeing the inside of a supermarket. If the temperature drops too much after the apples have begun to grow, their skin can develop rusty brown splotches called russeting, or if temperatures don’t drop enough at night, then the fruit won’t develop the signature red color. The pigment won’t fix in place, leaving it an unappetizing pinkish brown. It bears repeating that these apples are fine to eat, but their appearance means they will go towards making juice or applesauce. This severely cuts into the grower’s revenue, as the apples are sold at a far lower price than if they were sent to grocery stores. As seasons lose their distinguishing characteristics, our markers for their arrival will do the same. Greenish gold new leaves and magnolia trees in the spring, watermelons, peaches and corn in the summer, apples, pumpkins and brilliant colors in the fall, snowmen in the winter. 
All these things may be a thing of the past by the year 2050. How about them apples?
https://medium.com/the-green-space/climate-change-is-as-american-as-apple-pie-90c5dbf3433f
['Veer Mudambi']
2019-10-09 18:25:55.893000+00:00
['Climate Change', 'Autumn', 'Agriculture', 'Apple', 'New England']
Super Easy Ways to Make Yourself a Morning Person
Photo by Andrea Piacquadio As a night owl, I know what it feels like to think morning people are aliens. They wake up early and actually like it? Who are these people? I never understood how they could actually like waking up. I always thought laying in bed all morning was a dream come true. When I wrote my productivity article, I was already consistent with my sleep and morning routine. Before that, I would try to be consistent but I’d go out late during the weekends and play catch up all week. It was a vicious cycle. That changed when I started my master’s program. I finally started being serious about it. I needed all the hours I could get in a day so I finally stuck to a routine. Without even realizing it, I was growing into a morning person. I would get up and look forward to my coffee. I liked that everyone was still sleeping or waking up and wouldn’t bother me at least for that hour. It felt powerful. Like I was finally a part of this club I didn’t understand before. Have a “you-centered” task planned When you have something to do right when you wake up, you have a reason to get out of bed. Whether it’s for your business, a side project or personal improvement, make sure it’s something that means something to you. I used to start my morning doing work for a client and hated waking up. It’s not fun waking up because you have to do it for someone else. I made the switch into doing something I love and it’s a game-changer. I always say “fill your cup before you fill others”, meaning I focus on stuff that fills me with a sense of accomplishment and happiness first. Then, I’ll do work for others. Having that time in the morning isn’t selfish. It sets you up for the day so you can do things for others without feeling resentful. Have your breakfast ready Knowing that your breakfast is already taken care of takes the burden away from figuring out what to make. 
When you have nothing planned, you’ll be thinking about what to make when you should be focusing on your “you-centered” task. Make it easy for yourself to grab something from the fridge. Something you can eat from the container or heat up and enjoy. I’ve been eating overnight oats for almost a year now and haven’t gotten sick of them. If you like eggs, you can try these Trader Joe’s egg frittatas. If you like sweet flavors in the morning, try having a smoothie or pre-made açaí bowl. As long as it’s already done for you, you won’t have to worry about what to make. Photo by Jack Sparrow Let yourself wake up slow Waking up slow is a game-changer. When I was working full-time, I’d wake up within 30 minutes of when I needed to leave. I’d get myself ready in 15 minutes, eat breakfast and drink coffee in the other 15 minutes and run out the door. In hindsight, those mornings sucked. I was setting myself up for failure. Now, I can’t live without having at least an hour of my own time before I even change from my pajamas. If it’s too cold in my room, I turn off the air and let myself wait it out until it gets warm. If I’m sick that day, I give myself some extra time to lay in bed. Having those moments to yourself without force sets yourself up for success. Being kind to yourself is key. Every day is going to be different, so if you have an hour to yourself, you should spend it how you like. Have a set routine I read this book called the Miracle Morning and the author talked about the importance of a set routine. He has lots of suggestions on how to best spend this time. He suggests starting with meditation, exercising, reading, writing and other tasks. Of course, this isn’t for everyone. What I took from it was the way I should plan my mornings. For instance, I start with my skincare routine, I brush my teeth, set up my coffee and make my bed. Then, I sit on my couch and do my “me-centered” task, which is always writing. I rarely stray from this. 
Having a routine lets you go on autopilot so you don’t have to think too much and just do. Photo by Andrea Piacquadio Make sleep your #1 priority When you’re young, you can get away with sleeping at midnight and waking up at 7 a.m. to go to school. All you had to do was sit through class and pay attention. You’re not making any life-altering decisions there. What I noticed was I was always dragging. Whether I wasn’t getting enough sleep or I was sleeping too much on the weekends. There was no structure to it. Now, sleep is the most important part of my routine. I always try to sleep at the same time and get up at the same time. By doing that, you let your body know it’s going to rest a certain amount every night. The sleep is more restful which makes waking up easier. Change your mindset Tell yourself you love the morning. Appreciate the way the sun glows into the room. Think about how happy you are to have this time to yourself. It’s time to start calling it your “you time”. Just like you should have a “you-centered” task, you should call it your “you time”. This gives it a special place in the day. If you’re married and have kids, this time can be before everyone gets up. If that’s too early, then set aside a time that the whole house knows is your time and no one can bother you. Make it work for you. When you make this your “you time”, you cherish it and use it as best you can because it’s yours and only yours. Ease into early rising If you wake up at 9 a.m. every day, you won’t be able to just switch to waking up at 7 a.m. Your body and internal clock need time to get used to changes. Give yourself a new goal every week to wake up between 10 and 15 minutes earlier. A gradual change makes it much easier. You’ll also need to scale back a bit and go to bed earlier. There’s no sense in trying to wake up earlier if you don’t go to bed earlier. You need a full seven or eight hours of sleep every night to wake up feeling refreshed. 
The best part about this is you can listen to your body along the way. There’s no need to rush the process. You’ll start feeling tired earlier in the day and naturally start sleeping earlier. It’s all in the process. To recap, becoming a morning person is a process. It takes time and consistency to get there. A few things you should remember are:
https://medium.com/curious/super-easy-ways-to-make-yourself-a-morning-person-782c5485c6a
['Lauren Liebler']
2020-08-25 04:34:11.705000+00:00
['Health', 'Lifehacks', 'Morning Routines', 'Lifestyle', 'Self Improvement']
Maxwell’s Equations
Before we begin, be aware that talking about electromagnetism in any meaningful way means talking about vector calculus. Please don’t be intimidated, even if you don’t have the first idea what any of the symbols or terms mean. Vector calculus is hard, but its core ideas are intuitive and I will explain everything as we go.

In SI units, Maxwell’s famous equations for the electric and magnetic fields are:

∇⋅E = ρ/ε₀
∇⋅B = 0
∇⨯E = −∂B/∂t
∇⨯B = μ₀j + μ₀ε₀ ∂E/∂t

These are differential equations (equations which describe a rule for the rate of change of a function with respect to one or more of its input variables) for the electric field E and the magnetic field B in the presence of a charge function ρ (“rho”) and an electrical current j. The quantities ε₀ (“epsilon naught”) and μ₀ (“mu naught”) are physical constants called the permittivity and permeability of vacuum. The speed of light c obeys the important relation c² = 1/ε₀μ₀. When boundary conditions for the fields are specified, these equations completely and uniquely determine the fields. Usually one does not attempt to solve these equations directly for a given configuration of boundary conditions, charges, and currents. Instead, numerous mathematical tricks have been invented to simplify many different kinds of problems. However, it is still important to understand the physics behind these equations.

The electric and magnetic fields

A charge q with velocity vector v and speed much less than c in the presence of an electric field E and a magnetic field B is subject to the Lorentz force:

F = qE + qv⨯B

Interestingly, this is still true in the relativistic case when the force F is meant to denote the time rate of change of the relativistic momentum instead of the classical momentum. There are two terms in the Lorentz force. The first is qE, called the electrostatic force. This force is caused by the electric field E, which is produced by stationary charges. The second force is qv⨯B, which is called the magnetic force. 
The symbol ⨯ is called the vector or cross product, and it denotes a vector perpendicular to v and B and with magnitude |v||B|sin(θ) where θ is the angle between v and B. The “right-hand rule” is a useful mnemonic for remembering the directions of the vectors in the cross product.

A magnetic field B is produced by a current and interacts with a moving charge to produce a force. A current is a charge times a velocity, so qv is a current element, and it follows that magnetic forces act between currents. To simplify things, we will pretend that currents and charges are separate entities that exist independently of each other. Now this obviously is not the case because a current is by definition a moving charge, but when we start talking about moving charges we have to bring in special relativity. However, we can get very far without relativity, as did James Maxwell and his contemporaries. It turns out that Maxwell’s equations are already relativistic, though 19th-century physicists couldn’t have been aware of it. A follow-up to this article will address this if there is enough interest.

So for our purposes, stationary charges exert forces on other stationary charges via the electric force and currents produce forces on moving charges via the magnetic force. An example of the Lorentz force can be seen in the case of cyclotron motion. Suppose that a magnetic field points into this page and an electron has a velocity vector entirely in the plane of this page. The x symbols denote a uniform magnetic field pointing into the page. The force vector, in red, is the cross product of v and B, and so by the right-hand rule the force vector points towards the center of a circle. If a charge in a uniform magnetic field is given some initial velocity with a direction perpendicular to B, then the charge will move in a circle at constant speed. This is called cyclotron motion. A Teltron tube (pictured above) is a device that demonstrates cyclotron motion. 
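As a quick numerical sketch of the cyclotron discussion above (the field strength and electron speed below are assumed illustrative values, not from the article), the magnetic part of the Lorentz force can be computed with a cross product, and balancing qvB against the centripetal force mv²/r gives the orbit radius r = mv/(|q|B):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

q = -1.602e-19           # electron charge, C
m = 9.109e-31            # electron mass, kg
v = (1.0e6, 0.0, 0.0)    # velocity in the plane of the page, m/s
B = (0.0, 0.0, -1.0e-3)  # uniform field into the page, T

# Magnetic part of the Lorentz force, F = q v x B
F = tuple(q * c for c in cross(v, B))

# Cyclotron radius from |q|vB = m v^2 / r
r = m * 1.0e6 / (abs(q) * 1.0e-3)
```

The force comes out perpendicular to both v and B, pulling the electron around a circle whose radius is a few millimeters for these assumed numbers.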
Free electrons are produced by heating a small filament and given an initial velocity by producing an electric field in the small region around the device. The field in the tube produced by the two coils is approximately uniform, so it pulls the motion of the electrons into a ring. The electrons produce light as they strike atoms of a very low-pressure gas.

Vector fields, field lines, and flux

The electric and magnetic fields are vector fields. A vector field is a function that assigns vectors to points in space, as in the following picture which shows the electric field vectors of an electric dipole consisting of a positive charge at (+1,0) and a negative charge at (-1,0). Note that for clarity’s sake, the vectors only show the direction and not the magnitude.

You can see that electric field vectors point away from positive charges (sources) and towards negative charges (sinks). Since the force on a positive test charge q is given by F=qE, this corresponds to the fact that charges of opposite sign attract and charges of equal sign repel. We almost always assume that the sources of the fields, be they charges or currents, do not move.

Just as important as the vectors of the vector field are the field lines. For the electric dipole the field lines probably look familiar to many of you. To get a clearer picture of what the field lines tell us, let’s pick out just a few of them and show them along with the field vectors. This diagram shows some important properties of field lines: Field lines are not just curves in space, they also have a direction. A field line originates at a source and terminates at a sink. It may also originate or terminate at infinity. (They appear to break in the pictures because of the limitations of the software I used to draw them.) Field lines are tangent to field vectors at every point. Field lines never intersect each other, or else the vector at the point of intersection would be pointing in two directions at once, which is impossible. 
Field lines change their direction continuously. Let us now briefly change our focus from electromagnetism to fluid dynamics. Suppose that the velocity of water with uniform area density (kg/m², since we are considering a two dimensional problem) ρ around a source is given at every point by a vector-valued function v(r, θ), where r is the radial distance from the source and θ is the angle between the position vector of r and the horizontal. Suppose that the source is located at the blue dot, and that the boundary S is a circle of radius R. How much water flows past this boundary per second? Dimensional analysis is always a great place to start a problem like this. We are asked for something with units of kg/s and we are given an area density with units kg/m² and a velocity with units m/s. We can combine these to get the momentum per unit area, which has units of kg/m⋅s. If we can eliminate the units of 1/m from this quantity then we will end up with something with the right units. This should make us think about integrating the quantity ρv with respect to length l along a curve C, but ρv is a vector quantity and we are asked for a scalar quantity, so we need to introduce a dot product somewhere. Since the water flows over S, we might naturally guess that the curve C should be the circle S and the vector should be the unit vector pointing out of the circle. It turns out that this is the correct approach, and the answer to the problem is the quantity This is called the flux of the vector field v over the boundary S. The flux of a three-dimensional vector field through a surface or a two-dimensional vector field through a curve can be interpreted as telling us how much that vector field “flows” over the surface or curve. 
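The flux integral above can be sanity-checked numerically. As a sketch (the values of k and ρ are made up for illustration): for the radial field v = (k/r) r̂, the mass flux ρ∮ v⋅n̂ dl through a circle of radius R works out to 2πkρ, independent of R, since every circle around the source must pass the same amount of water per second.

```python
import math

def flux_through_circle(R, k, rho, n=1000):
    """Midpoint-rule integration of rho * (v . n_hat) dl around a circle
    of radius R, for the radial field v = (k/r) r_hat."""
    total = 0.0
    dtheta = 2 * math.pi / n
    for i in range(n):
        # On the circle, v . n_hat = k/R and dl = R dtheta.
        total += rho * (k / R) * (R * dtheta)
    return total

f1 = flux_through_circle(R=1.0, k=2.0, rho=3.0)
f2 = flux_through_circle(R=5.0, k=2.0, rho=3.0)
```

Both calls return 2πkρ ≈ 37.7, a two-dimensional preview of Gauss’s Law: the flux depends only on the strength of the source, not on the boundary chosen to enclose it.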
If v is any vector field and S is a boundary (surface or curve) enclosing a region (volume or area) V, then we also have the critically important divergence theorem, which we present here without proof: Thus the total flux over the boundary of the volume is equal to the integral of ∇⋅v within the volume, so we can think of ∇⋅v as the flux leaving each point within V. The quantity ∇⋅v is called the divergence of v and it is the subject of the first two of Maxwell’s equations. Gauss’s Law This is the differential form of Gauss’s Law. Let’s first consider the integral form. Suppose that S is a closed surface and that the total charge in the region enclosed by S is Q. Then: So Gauss’s Law tells us that the flux of the electric field through S is the total charge enclosed by S divided by the permittivity. One of the great features of this law is that S can be any surface that completely encloses the charge distribution, and the flux through the surface will be the same. The differential form is obtained with the divergence theorem: Note that in general ρ is a function of position. We will not consider the case where it is also a function of time since that would require special relativity, although the follow-up to this article might. The differential form can be thought of as the integral form applying to infinitesimally small spheres enclosing every point in space. Gauss’s Law tells us a few useful things: If ρ(x,y,z) is positive then the flux leaving point (x,y,z) is positive and if ρ(x,y,z) is negative then the flux leaving point (x,y,z) is negative, which is equivalent to saying that a positive flux enters point (x,y,z). This formalizes our earlier claim that field lines originate at positive charges and terminate on negative charges. If there is no charge in a region of space, then any field line that enters that region must exit that region. Let’s derive Coulomb’s Law as a demonstration of Gauss’s Theorem. 
Suppose that a point charge Q is located at the origin and a test charge q is located at a distance r from the origin. Since F=qE, this problem can be solved by finding E due to Q. Let the Gaussian surface S be a sphere of radius r centered at the origin. Let’s start by writing down Gauss’s Law:

∮ₛ E⋅n̂ dS = Q/ε₀

The unit vector n̂ is in the radial direction, and the differential area element dS for the surface of a sphere of radius r is r²sin(θ)dθdφ, where φ is the azimuthal angle and θ the polar angle in the standard spherical coordinate convention (note: some authors switch φ and θ). Therefore:

∮ₛ E⋅r̂ r²sin(θ)dθdφ = Q/ε₀

By symmetry, we can see the electric field depends only on distance from the origin so we can pull it and r out of the integral:

(E⋅r̂) r² ∮ sin(θ)dθdφ = 4πr² (E⋅r̂) = Q/ε₀

E must be radial because electric forces act on the line between two charges:

E = Q/(4πε₀r²) r̂

Then we obtain Coulomb’s Law with F=qE:

F = qQ/(4πε₀r²) r̂

Gauss’s Law for magnetism

∇⋅B = 0

This law tells us that all magnetic fields are divergenceless. Using the ideas we developed in the last section, this means that: There is precisely zero net flux entering or leaving any region in the presence of a magnetic field. Any field line that enters any region must exit that region. All field lines form closed loops. There are no sources or sinks for magnetic field lines. Equivalently, we can say that magnetic charges, or monopoles, do not exist in classical electrodynamics (the jury’s still out on whether they exist in quantum electrodynamics). The most important use of Gauss’s Law for magnetism is in defining the vector potential. That would take us beyond the scope of this article so we will just move on.

Interlude: Conservative fields

We know from basic physics that the work done on a particle when that particle is made to move a distance ∆x by a constant force F is W=F∆x. If instead F varies continuously and the particle is made to move in a straight line from a to b then the work is

W = ∫ₐᵇ F(x) dx

What if instead the particle moves along a curve of arbitrary shape in a three-dimensional force field? 
Suppose that F is a force field and that the particle moves along a curve C, which may or may not form a closed loop. At any point along the curve, we can say that F is a vector with three components: one along êₙ, perpendicular to the curve, one along êₜ, tangent to the curve, and a component along êₙ⨯êₜ, perpendicular to êₙ and êₜ. Only the component of F in the direction of the path does work so we take its dot product with êₜ and integrate with respect to dl, the differential length element of the path: There exists a special class of force fields called conservative fields, which may be written as the negative gradient of a potential energy function: F=-∇U. If this is the case, then the fundamental theorem of calculus for line integrals tells us that: This means that for a conservative force field, the work done by a particle as it moves from point A to point B depends only on those points and not on the path chosen. In fact, this is usually given as the definition of a conservative force field, but the definition of a conservative field as a vector field arising from the gradient of a potential is exactly equivalent: a force field F is conservative if and only if we can say that F = -∇U. It is also clear from this equation that if the particle moves along any closed curve C, then the work done is 0. We write this as: The circle in the integral sign indicates that the path of integration is a closed loop. In addition to the divergence theorem, the other theorem you must know to get anywhere with learning electromagnetism is Stokes’ Theorem, which we present without proof. For any surface S for which C is the boundary: From this we can also see at once another equivalent definition of a conservative field: since if F is conservative then it is the gradient of a function and since ∇⨯(∇A)=0 for any function A, we see that ∇⨯F=0 if and only if F is conservative. We can think of ∇⨯F as the tendency of a field to cause rotational motion about a point. 
This quantity is known as the curl of F and it is the subject of the next two of Maxwell’s Equations.

The Maxwell-Faraday equation

∇⨯E = −∂B/∂t

This is the first of the two equations that connect E and B. It tells us that E is a conservative field in the absence of a magnetic field or if the magnetic field is constant in time. To interpret this, let’s start with what we know about potential and kinetic energy. Physical systems will evolve through time in a way that allows them to minimize their stored potential energy. They do so by transforming potential energy into kinetic energy by performing work. In a conservative force field, a particle does no work by moving in a closed loop, and so there is no way that a conservative force field can cause a particle initially at rest to move in a loop. Closed orbits can occur depending on the initial velocity, as in the case of planetary motion.

Suppose that we want a charged particle, initially at rest, to move in a closed loop. This means that we must make the electric field nonconservative, so we must give it a nonzero curl. We turn to dimensional analysis to guide our intuition. Since E has units of N/C, ∇⨯E has units of N/C⋅m. We know that B has units of Teslas, and 1T = N⋅s/C⋅m. Therefore ∇⨯E has units of T/s. Since ∇⨯E is a vector field, this means that we must expose a charged particle to a vector field with units of T/s if we want it to move in a closed loop. Since the quantity -∂B/∂t is a vector field with units of T/s, it’s possible that this is the quantity that we’re looking for, and Maxwell determined that it is indeed the case that ∇⨯E=-∂B/∂t by interpreting Faraday’s experimental data.

We can use this equation to derive Faraday’s Law of Induction, which states that:

ℰ = −d𝛷/dt

The quantity ℰ (script E) is called the electromotive force in a loop of wire and has units of volts, and 𝛷 is the flux of the magnetic field through the area enclosed by the loop. 
The EMF is the work done by a unit charge as it moves once around the loop, therefore: So according to Stokes’ Theorem: Then by the Maxwell-Faraday equation: The partial derivative with respect to time gets pulled out of the integral and turned into an ordinary derivative because the integral does not depend on position. By definition, this integral is the flux of B through the area enclosed by the loop of wire C so this completes the proof that This explains why a changing magnetic field near a circuit induces a current in that circuit. Ampere’s Circuital Law And finally we come to Ampere’s Law. Ampere’s Law allows us to finish the process of building a complete unified theory of electromagnetism and electromagnetic waves. Let’s start with the original form of this law, ∇⨯B=μ₀j, which holds when there is no time dependence. This tells us that the circulation of the magnetic field around a point is proportional to the current at that point. The integral form of this equation is: This says that the integral of B around a closed loop is proportional to the total current penetrating the area enclosed by the loop. As a demonstration, we can use this to find the magnetic field around a wire. In this case the tangent vector is in the polar angle direction and so if we put the wire at the center of an imaginary loop of radius r, then dl =rdθ. Then: This formalizes the right-hand rule for a magnetic field around a wire: In the time-independent case, E is always conservative because its curl is zero. But we’ve just seen that even in the time-independent case B is only conservative in a region containing no currents, and most interesting problems involve regions near current. Furthermore, the magnetic force F=qv⨯B is never conservative because it depends on velocity. So we can’t treat B as a conservative field in general, and those rare cases where we can are not important for us right now. 
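The wire-field result just derived (∮B⋅dl = μ₀I around a loop of radius r, so B = μ₀I/2πr) is easy to evaluate concretely. A small sketch, where the current and distance are assumed example values:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of vacuum, T*m/A

def wire_field(current, r):
    """Magnitude of B at distance r from a long straight wire,
    from Ampere's Law: B * (2 * pi * r) = mu_0 * I."""
    return MU_0 * current / (2 * math.pi * r)

# Assumed example: 1 A through the wire, measured 1 m away.
B = wire_field(1.0, 1.0)
```

At 1 A and 1 m this gives the conveniently round value 2×10⁻⁷ T, because μ₀ is defined as 4π×10⁻⁷ T⋅m/A in classical SI units.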
Now let’s talk about the second term on the right-hand side of Ampere’s Law, called the displacement current. The name comes from another field, called the electric displacement D=ε₀E, which is useful in problems involving fields in materials rather than empty space. Ampere’s Law as originally written did not include this term, and it was Maxwell who discovered that it was necessary. Before then, that incompleteness in Ampere’s Law caused some problems. Most significantly, it would have meant that electromagnetic waves couldn’t exist.

In physics, a wave equation is a differential equation with the form:

∇²f = (1/k²) ∂²f/∂t²

The operator ∇² is called the Laplacian and it is given by:

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

And k is the speed with which the wave propagates through space. If the function f is instead a vector field, then each component of the vector satisfies the equation. Let’s try to obtain wave equations in free space (no charges or currents), pretending to be ignorant of the displacement current. Maxwell’s equations would then be:

∇⋅E = 0, ∇⋅B = 0, ∇⨯E = −∂B/∂t, ∇⨯B = 0

Let’s start with the vector calculus identity ∇⨯(∇⨯A)=∇(∇⋅A)-∇²A where ∇²A means that the Laplacian operator is applied to each component function of A. Since ∇⋅E=∇⋅B=0, this means that ∇⨯(∇⨯E)=-∇²E for the electric field and ∇⨯(∇⨯B)=-∇²B for the magnetic field. If ∇⨯B=0 in free space then observe that we can’t get the correct wave equations. For E we get:

∇²E = -∇⨯(∇⨯E) = ∇⨯(∂B/∂t) = ∂(∇⨯B)/∂t = 0

And for B we get:

∇²B = -∇⨯(∇⨯B) = 0

To fix this, we have to add the displacement current. Of course, we can’t just plug in the displacement current because it will give us the equations that we want; we need to justify it. To do so, let’s anticipate that the problem is in the equation ∇⨯B=μ₀j. Let’s take the divergence of both sides: ∇⋅(∇⨯B)=μ₀∇⋅j. Since the divergence of a curl is always zero, this means that ∇⋅j=0 always. But this is not the case. Electric currents obey the continuity equation:

∂ρ/∂t + ∇⋅j = 0

This equation says that the rate of change of the charge in a region is the negative of the net current flowing out of the region. 
If ∇⋅j is positive so that there is a net outflow of charge then the charge in the region must decrease, and vice versa if it is negative. This means that we must instead have: This means that ρ is the divergence of some vector field. Happily, Gauss’s Law tells us that there is a vector field whose divergence is equal to ρ, that being ε₀E, the displacement. If we plug ε₀∇⋅E in for ρ and cancel the divergence from both sides, then we get the correct equation: This gives us the correct Maxwell equations for free space: Now we can get the correct wave equations: For E. Then for B: This is why these equations are named for Maxwell even though he didn’t discover the laws that they describe. It was his ingenious idea to introduce the displacement current into Ampere’s Law that allowed for the final unification of the theory of classical electromagnetism. Conclusion If you managed to make it to the end of this, then you should be proud of yourself. This is pretty difficult stuff and we’ve only just scratched the surface. If there’s enough interest, then the next article will be expand on some points about relativity that I alluded to in this article. Let me know in the comments if you think I should write that. I’ll end this article with a little teaser on what that would entail. Stinger: Covariance The principle of covariance says that the laws of physics must appear to be the same according to all observers in the universe, in the sense that the form of the equations describing those laws must be the same in all reference frames. Suppose that the origins of two coordinate systems S and S’ are in relative motion with constant speed V along the x-axis. The primed coordinates are related to the unprimed coordinates by the Galilean Transform: Let’s consider just the one-dimensional wave equation for the x-component of E. 
The principle of covariance tells us that if someone in frame S observes an electromagnetic wave and determines that the wave equation is then the observer in S’ measuring that same wave must observe the equation in terms of their coordinates as Can we accomplish this by making a Galilean transform on the coordinates? Using the transforms and the chain rule, the partial derivatives transform as: So the unprimed wave equation transforms into: This means that there’s a problem, and it’s either the wave equation or the transformation. It can’t be the wave equation because Maxwell’s equations are verified by experiment, so the problem is with the transformation. To solve this problem, we must let go of some of our most fundamental ideas about the world and introduce special relativity.
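Finally, the relation c² = 1/ε₀μ₀ quoted at the top of the article, which falls out of the corrected wave equations, can be verified directly from the tabulated SI values of the constants (the numbers below are the standard tabulated values):

```python
import math

EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
MU_0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A

# Speed of the wave solution: c = 1 / sqrt(mu_0 * epsilon_0)
c = 1.0 / math.sqrt(MU_0 * EPSILON_0)
```

This reproduces c ≈ 2.998×10⁸ m/s, the measured speed of light, which is what convinced Maxwell that light itself is an electromagnetic wave.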
https://medium.com/cantors-paradise/maxwells-equations-7484212839b1
['Panda The Red']
2019-10-20 13:18:45.278000+00:00
['Technology', 'Physics', 'Science', 'Tech', 'Mathematics']
Are You In The “Friend Zone”?
When I was in high school, the worst phrase a guy could hear from a gal was “I like you as a friend.” Oh, the anguish and trauma of unrequited love! It was a curse to be a ”friend” with someone you were sweet on. It was practically impossible to move from “friends” to “more than friends.” These days, my kids tell me this is called being in the “friend zone.” And from what my son says, it’s still the kiss of death.

You’re not in high school anymore, but you might still be trapped in the “friend zone” at work, especially if you got promoted to lead a team you used to be a part of. Or, if you own the company and hired folks who you’ve become friends with. You’re not in high school anymore… Being in the “friend zone” with your team is also a bad situation. From my experience, being in the friend zone with a programmer meant…

1. I resisted offering them honest, direct correction, for fear of offending them.
2. I constantly wanted to be seen as a “nice guy, a real friend.”
3. I was much more likely to “pretend something bad hadn’t happened” (For more on this, see yesterday’s email)
4. Team members who weren’t as close to me felt I played favorites (nepotism).
5. Firing them was 100x harder.

In short, when I managed people who were in my “friend zone,” everything was harder. This was because when I was leading the team, I had a larger obligation to the entire team, and the company. I was trying to balance two very different roles. If this is you, it’s time to take stock. You should always be friendly. You should always be nice. But most of the time you shouldn’t be “real friends.” You simply can’t afford to try and be both boss and friend. If you’re like me, you’ll end up neglecting one and focusing on the other, which confuses everyone.

To address this, I suggest you make a list of your developers. Then ask yourself which of them, if any, you are concerned about being in the “friend zone” with. Ask yourself if you’ve been treating them better (or worse?) 
than others and if this relationship needs to change. Remember, if you’re withholding feedback to be nice, they aren’t going to “just get the hint.” They deserve a real boss, who treats them like everyone else.
https://medium.com/maker-to-manager/are-you-in-the-friend-zone-ccf3d660b4d1
['Marcus Blankenship']
2017-06-15 17:59:13.793000+00:00
['Agile', 'Management', 'Leadership', 'Software Development', 'Startup']
The PTSD-like Affliction That’s Traumatizing Health Care Workers
The PTSD-like Affliction That’s Traumatizing Health Care Workers The invisible wounds of moral injury run deep for those on the front lines Photo: San Francisco Chronicle/Hearst Newspapers/Getty Images “Some experiences imprint themselves beyond where language can speak.” These are the words of psychiatrist and trauma expert Bessel van der Kolk. This is also the experience of many health care workers ensnared in the Covid-19 pandemic. “I just can’t… can’t find the words… there simply are none,” whispered a doctor friend working in a hospital in New York City’s viral epicenter. We were Zooming — both of our backgrounds dark. Through the screen’s dim glow, I watched her head fall into her hands and rock back and forth. Her shoulders slumped forward, and she started to shake. Marie, I’ll call her, and I had worked together in California’s Bay Area when I was doing palliative care chaplaincy and clinical ethics work. She was never at a loss for words. In fact, to call her highly verbal would be an understatement. And yet what she has recently witnessed, been forced to do, and could not prevent on the Covid-19 front line has brought about some kind of internal preternatural silence. “It’s as if part of my soul had been shredded with a knife,” she told me when she finally could speak; “the part that holds me in relation to my Hippocratic oath and personal values.” Marie, like many health care workers, entered the field of medicine because, as she would say, she cares about doing good and not doing bad. From the day she started medical school, she had a clear vision of who she was and how she could serve humanity. But after “death by a thousand cuts” from a pandemic that has made her betray her vow to “do no harm,” she’s now questioning who she is, who other people are, and what life is all about — generally speaking. Some might call this experience a loss of innocence — a recognition that the world is more of a babel of bad than Marie originally believed. 
What I also know is that this suffering is a moral injury. Moral injury is a transgression of conscience. It is what happens when a person’s deeply held values, beliefs, or ways of being in the world are violated. That violation could result from things the person did themselves, things they experienced, things they were made to do against their will or better judgment, or things they couldn’t stop from happening. And it’s more prevalent than many would think. Of the 2.7 million service people who served in Afghanistan and Iraq, reports show that roughly the same number who were diagnosed with post-traumatic stress disorder (11% to 20%) were also coping with moral injury. But moral injury is not unique to veterans. Moral injury is a pall that has blanketed individuals, families, and communities throughout time and across cultures. It can be found on the battlefield; at the front line of disaster; behind closed doors of churches and temples; in hospitals, bars, brothels, prisons, refugee tents, abortion clinics, soup kitchens, unemployment lines; at borders and in detention centers; on school playgrounds and social media; and even in the unsuspecting house or office next door. This is because wherever human beings are, so too dwells moral injury. Moral injury is a transgression of conscience. It is what happens when a person’s deeply held values, beliefs, or ways of being in the world are violated. We are, as a species, hardwired to embody goodness, love, compassion, empathy, and a sense of right and wrong. Moral expectations are at the heart of who we are as people and societies. But human beings are also imperfect and limited. We can’t always meet our own moral expectations nor can others always meet them. Sometimes life throws us into situations where the stakes are high and no outcome is good, and we or others act, doing what we or they otherwise know to be bad, aware that harm will come in one way or another to ourselves or to another. 
Sometimes that is simply life. Some have likened PTSD and moral injury. And while intrusive images of the past are similar in each experience, with moral injury, memories don’t trigger fear. Instead, they beget shame, guilt, rage, disgust, emptiness, and despair. With PTSD, the primary concern is physical safety. With moral injury, it is existential safety — or trust. Moral injury makes a person question themselves, others, life, or their God. It makes them question their or others’ ability to do right or be good. Moral injury deteriorates one’s character, ideals, ambitions, and attachments. It leaves people feeling contaminated in their being or that something they once held dear has been sullied. “Unworthy,” “beyond redemption,” “gone forever,” and “emotionally dead” are how many people have described the experience. “A soul divided against itself” is how Rita Brock, an author and the director of the Shay Moral Injury Center, defines it. “How can I be a saver of life and a monster in scrubs at the same time?” Marie asked, her eyes distant and dark. “We’re all killing ourselves to save everyone we can, and yet we have to play God and decide who lives and dies. Who am I — or any one of us, for that matter — to make such a call?” Anyone who has listened to the news in the last few months knows well the issue of limited personal protective equipment (PPE) and ventilators in this country: There simply weren’t enough. While politicians and talking heads debated the veracity of need, people like Marie were wading through jam-packed ER wards as if they were minefields, donning soiled or homemade masks. For the first time in their careers, many health care workers had to determine not if a patient needed a ventilator but rather who would get the high-value, vital air. “How can you look anyone in the eye … gasping patients, pleading family members, knowing that your decisions will send many to their graves … or more like the make-shift morgue in the U-Haul van outside? 
I struggle to look at myself in the mirror, let alone at any of the people I’m trying to help.” Marie mentioned that the sound of a cough is beginning to be what fireworks were to her Vietnam veteran father — a PTSD response. But the nausea in her belly — the sickness that comes from disgust at the situation — is the making of moral injury. Shortages of equipment, overloaded hospitals, overburdened staff, and insufficient testing made the U.S. Covid-19 response an ill-fated mission from the start. Having to be surrogate mothers, fathers, sons, and daughters for dying patients — holding up cellphones for family members to say their final goodbyes — was a task beyond medical training. Not being able to hold or breastfeed an infant child or tuck a scared youngster into bed at night because their essential work took precedence over essential familial love was felt by many to be a dereliction of duty. The “invisible wounds” that are injuring Covid-19 front line workers bear the markings of a system equally scarred. “A betrayal of what is right, by a person who holds legitimate authority, in a high stakes situation,” is how psychiatrist and author Jonathan Shay first defined moral injury when he coined the phrase in the 1990s. In the months since Covid-19 first reared its ugly head in the U.S., we have witnessed repeated stall tactics by leaders in the upper echelons of the government despite dire warnings from around the globe. We’ve heard the virus called a “hoax” and minimized in severity and threat. We’ve seen safety guidelines for the general public developed by respected public health officials and then flouted by the very leaders who employ said officials. We’ve heard the virus was contained when it wasn’t. We’ve discovered the organizations that ought to have been prepared for such a pandemic actually weren’t. We’ve endured a lack of testing and faulty testing. We’ve been exposed to inefficiencies and inequities in our health system. 
We’ve witnessed hospital administrations put finances above safety. We are, as a species, hardwired to embody goodness, love, compassion, empathy, and a sense of right and wrong. While no one person is to blame for the “injuries” that many are now suffering and while much is finally being done to stem the Covid-19 tide and get our society back on track, the above “betrayals” can also be summed up as widespread asystemic thinking and disorganization. Whatever the labyrinthine conditions and events that created the current morally injurious climate for Covid-19 health care workers, we, as a society, must do better to help them heal — because, as research shows, once the acute phase of a situation like this subsides, it is the following period that is often the hardest for people to come to terms with. While the rest of society hastens to return to normal life, shifting their attention from stories of the front line to the economy and getting kids reintegrated in school, there will be swaths of people like Marie feeling the weight of all their decisions, questioning how this all could have happened, overwhelmed by guilt, shame, anger, disgust, emptiness, and despair. Brock pointed out in a recent BBC article that the fight against the coronavirus is similar to battlefield medicine: “desperate and unrelenting encounters with patients, an environment of high personal risk, an unseen lethal enemy, extreme physical and mental fatigue, inadequate resources and unending accumulations of the dead.” I wouldn’t pretend to know the demons that Dr. Lorna Breen, the medical director of the emergency department at New York-Presbyterian Allen Hospital in Manhattan, was wrestling with when she took her own life after fighting Covid-19 herself and fighting against it on behalf of her patients. But I do know — and research with veterans shows — that moral injury is associated with increased suicide risk. 
Covid-19 health care workers are putting their lives on the line, and in some cases sacrificing their lives, so that others can live. We cannot allow the lives lost to be for naught nor the future of those who survived to be put at further risk or sacrificed because we, as a society, got distracted by our desire for normalcy and exhaustion with discomfort. Healing from moral injury requires a person to reconcile many difficult truths and to transform in difficult yet often unexpected ways. But it also requires communities and systems of shared values to support them. We all must do that now. To the women and men who were courageous enough to serve on the Covid-19 front, we thank you and honor your experience. To those struggling to heal from the wounds of moral injury, either now or in the future, please know you’re not alone. If you or someone you know is suffering from moral injury, here is a wealth of resources. If you are in crisis and in urgent need of help, please contact the National Suicide Prevention Lifeline at (800) 273-8255.
https://elemental.medium.com/the-ptsd-like-affliction-thats-traumatizing-health-care-workers-1864be616086
['Michele Demarco']
2020-07-10 05:31:01.407000+00:00
['PTSD', 'Mental Health', 'Morality', 'Trauma', 'Ethics']
Men Respect My Imaginary Husband More Than Me
Men Respect My Imaginary Husband More Than Me Why I can’t stop lying about being married Photo by mulugeta wolde on Unsplash From the corner of my eye, I saw him approaching me. Not in the mood for a conversation, I picked up my pace and focused on my podcast. Apparently, my body language was still too welcoming because he stepped in front of me and started talking. I gestured at my earbuds and moved around him, keeping my social distance. He spoke again and pointed at my ears. Annoyed, I pulled out one of my buds and raised my eyebrows. “Excuse me, I see you walking here often. Do you live here?” He sounded nice, but I wasn’t going to give a stranger information about my address. So I shrugged and started walking again. He walked with me. “You are such a beautiful woman. You seem calm and happy. But maybe I only have that impression because I don’t know you.” I flashed him a smile and regretted it immediately. I didn’t want my smile to encourage him to keep talking — I just wanted to walk home without a random stranger trying to pick me up. “Really, you are very beautiful. May I know your name?” I said no in a friendly but firm tone. When he asked why not, I explained that I wasn’t comfortable talking to strangers on the street. “But if we introduce ourselves, we aren’t strangers anymore.” I started walking and repeated that it didn’t feel right to talk to strangers. “Why not? Are you already married?” He sounded agitated, and I saw this as my way out. “Yes, I am married,” I lied. “And it is very disrespectful to my husband to stand here and talk to you. So I am going home now. Have a nice day.” This time he didn’t walk with me. He said my husband was lucky and walked away. My imaginary husband demanded more respect than I did. And that is exactly why I keep lying about having a husband.
https://medium.com/an-injustice/men-respect-my-imaginary-husband-more-than-me-d51f7b2f33a
['Judith Valentijn']
2020-09-27 20:36:02.905000+00:00
['Self', 'Relationships', 'Life Lessons', 'Equality', 'Society']
Pandas Functions— Heavy Rotation
Most of my day-to-day work can be put into three broad categories: Data Wrangling, Data Movement/ETL, and Data Analysis/Reporting. I’ve relied heavily on the Pandas library to accomplish some or all of the data tasks that make up these categories. Any process where a dataset is involved goes through at least one of the Pandas functions I’m going to talk about here. I’m calling these my “Heavy Rotation” because they’re consistently used to help me across many different tasks, and I think they’re core to learning how to use Pandas.

merge
loc/iloc
groupby
apply

Here are a few brief examples of work that fall into the three categories listed above. These are not definitions of these terms, rather personal interpretations based on some of my routine work. Data Wrangling — Scraping data from websites, then cleaning and aggregating it to usable levels. Specifically, gathering price information from a competitor to use as a benchmark for comparison. Data Movement/ETL — Reading data from one or more sources and bringing it together for reporting or storage in a database. Business users need data to work off of that changes rapidly. Often this means automating data extraction and loading into a Google Sheet or Dash app for them to use. Data Analysis/Reporting — This one is pretty straightforward. Summarizing data by grouping it into different subsets, displaying general statistics or data visualizations to help answer business questions. Automating workflows that aid data augmentation in Tableau reports. I’m simply trying to show a few patterns of use with some of the core Pandas functions. To do this I will share an exploratory data analysis example using data I found on data.world. Even with different data tasks, I’m consistently using the same set of techniques. You don’t need to know every single function to be effective using Pandas. Really, it comes down to being comfortable enough with a data structure that you are able to access, add, or transform any piece at any level of that data structure with as little complexity as possible.
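To make the four "heavy rotation" functions concrete, here is a minimal, self-contained sketch. The DataFrames, column names, and the "large order" rule are invented for illustration; the article's own data.world dataset is not reproduced here.

```python
import pandas as pd

# Toy data standing in for a real dataset (hypothetical columns).
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer": ["a", "b", "a", "c"],
    "amount": [10.0, 20.0, 15.0, 5.0],
})
customers = pd.DataFrame({
    "customer": ["a", "b", "c"],
    "region": ["east", "west", "east"],
})

# merge: bring region information onto each order row.
df = orders.merge(customers, on="customer", how="left")

# loc: label/boolean-based selection; iloc: positional selection.
east = df.loc[df["region"] == "east"]
first_row = df.iloc[0]

# groupby: aggregate order totals per region.
totals = df.groupby("region")["amount"].sum()

# apply: elementwise transformation, here flagging larger orders.
df["large"] = df["amount"].apply(lambda x: x > 12)

print(totals)
```

Each step feeds the next (merge to enrich, loc/iloc to slice, groupby to summarize, apply to transform), which is the pattern the article describes reusing across wrangling, ETL, and reporting work.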
https://medium.com/analytics-vidhya/pandas-functions-heavy-rotation-6c10f49031
['Spencer Guy']
2020-01-02 14:14:55.899000+00:00
['Jupyter Notebook', 'Pandas', 'Data Science', 'Data']
How To Create Your Own Python Library
Library, Package and Module A library is a collection of modules and packages that together fulfill a specific requirement. A Python module is a .py file that has variables, functions, classes, statements, etc. related to a specific task. A package is a collection of Python modules under a common namespace, created by putting different modules in a single directory along with some special files (such as __init__.py). In order for a folder to be recognized as a package, a file named __init__.py must also be stored in that folder, even if this file is empty. Note — A library can have one or more packages. Procedure for creating a library Step 1: First of all, we have to decide the basic structure of the library. Here, we’ll decide the names and number of the modules to be included. In our case, the name of our library will be mylibrary and the basic structure of our library is shown below. We’ll create this directory anywhere we like in our system. Step 2: Now let’s write the Python code for all the files. First of all, we’ll write the code for intro.py and it is given below. Then in the welcome directory, the hello.py file will contain the following code - And the whatsup.py file will contain the following code — Now, in the goodbye directory, we will write the code for seeyou.py as follows - Step 3: Since our package is ready, now it’s time to associate it with Python by attaching it to the site-packages folder of the current Python distribution in our system. We can import a library and package in Python only if it is attached to its site-packages folder. To find the location of this folder, we’ll open the Python IDLE and will type the following command.
>>> import sys
>>> print(sys.path)
['', 'C:\\Users\\mav\\AppData\\Local\\Programs\\Python\\Python38-32\\Lib\\idlelib', 'C:\\Users\\mav\\AppData\\Local\\Programs\\Python\\Python38-32\\python38.zip', 'C:\\Users\\mav\\AppData\\Local\\Programs\\Python\\Python38-32\\DLLs', 'C:\\Users\\mav\\AppData\\Local\\Programs\\Python\\Python38-32\\lib', 'C:\\Users\\mav\\AppData\\Local\\Programs\\Python\\Python38-32', 'C:\\Users\\mav\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages']
The sys.path attribute gives information about PYTHONPATH: it shows the directories that the Python interpreter will look in when importing modules. So, from the above output we’ll copy the path to the site-packages directory and navigate into it. There we’ll paste our mylibrary directory. And that’s how we’ll associate our own library with Python. Step 4: Now, we’ll open the IDLE and check whether our library is working or not. So, we can see that our own custom library is working properly. All the above code is also available in my GitHub repository.
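Since the tutorial's module listings were shown only as images, here is a hedged, self-contained sketch of the same layout. It builds the mylibrary tree in a temporary folder and puts that folder on sys.path, which has the same effect as copying mylibrary into site-packages (any directory on sys.path works). The module names follow the article (intro, welcome/hello, welcome/whatsup, goodbye/seeyou); their bodies are assumptions, since the originals are not shown.

```python
import os
import sys
import tempfile

# Recreate the article's directory structure in a temp folder
# instead of site-packages, so the sketch needs no install step.
root = tempfile.mkdtemp()
lib = os.path.join(root, "mylibrary")
os.makedirs(os.path.join(lib, "welcome"))
os.makedirs(os.path.join(lib, "goodbye"))

# Module bodies are hypothetical -- the article elides them.
files = {
    "__init__.py": "",
    "intro.py": "def intro():\n    return 'This is mylibrary'\n",
    os.path.join("welcome", "__init__.py"): "",
    os.path.join("welcome", "hello.py"): "def hello():\n    return 'Hello!'\n",
    os.path.join("welcome", "whatsup.py"): 'def whatsup():\n    return "What\'s up?"\n',
    os.path.join("goodbye", "__init__.py"): "",
    os.path.join("goodbye", "seeyou.py"): "def seeyou():\n    return 'See you!'\n",
}
for rel, src in files.items():
    with open(os.path.join(lib, rel), "w") as f:
        f.write(src)

# Adding the parent directory to sys.path mimics dropping
# mylibrary into site-packages.
sys.path.insert(0, root)

from mylibrary.welcome.hello import hello
from mylibrary.goodbye.seeyou import seeyou

print(hello())
```

Note the empty __init__.py in mylibrary and in each subdirectory: that is what lets Python treat each folder as a package, exactly as Step 1 of the tutorial describes.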
https://medium.com/python-in-plain-english/create-your-own-python-library-c2a8464cfbc
['Souvik Paul']
2020-11-27 15:13:12.733000+00:00
['Python', 'Programming', 'Software Development', 'Python3', 'Python Libraries']
“No Ho Ho Homo,” Says Michael Bublé as He Covers “Santa Baby”
Basically, Michael Bublé tried to make a song that was unequivocally about wanting to fuck Santa Claus platonic. The entire premise of the original song is sexual. Trying to make “Santa Baby” about two straight bros would be like insisting that Rudolph’s nose is red because of a crippling cocaine addiction. It just isn’t done. First of all, Bublé keeps the original title of the song and the first two words, which are “Santa baby.” But, as we will soon learn, there is no point in keeping these undeniably flirtatious lyrics in the song when the rest of it is spent trying to distance himself as far away from a gay interpretation as possible. Throughout the song, Bublé addresses Santa in various ways. While Eartha Kitt, and virtually every other female performer of the song, continues to refer to Santa as “baby,” Bublé mixes it up by calling him, “buddy” and “dude,” as well as “pally,” and “poppy.” You know, those nicknames that guys in the twenty-first century famously give each other? He also swaps out some of the girlier gifts for more masculine-sounding ones, like a sable fur coat for a Rolex watch. This particular decision makes sense, seeing as Michael Bublé is actually a spokesperson for Rolex. But some of the other switches he made were strange, to say the least. For instance, he changes a light blue ’54 convertible to a steel blue one from ’65. Now, a ’54 model was chosen in the original song because it was released in ’53, implying that the speaker wants only the newest model. Perhaps Bublé changed it because he would rather have something vintage, which I can understand. Nothing wrong with making a slight alteration like that to show your personal preference. However, changing “light blue” to “steel blue” is not a switch I would have anticipated. Apparently, “light blue” was too feminine for Bublé’s taste, despite the fact that we’ve been using that exact color to celebrate the birth of baby boys for decades. 
But that’s far from the only ridiculous lyric change. In the original “Santa Baby,” the speaker asks Santa for “a duplex,” which Michael Bublé swaps out for “Canucks tix.” The Canucks are a Vancouver hockey team. Just in case you forgot that he’s Canadian. Instead of asking for Santa to trim his Christmas tree with decorations from the luxury jewelry store Tiffany’s, Bublé asks for decorations bought at Mercedes. I can only assume he means the car company Mercedes-Benz. Because it wasn’t enough to mention cars in an earlier verse. And everyone knows that men who like cars simply cannot be gay. It is written in the laws of the universe. Also, like, do they even sell holiday decorations at Mercedes? If so, I have to imagine they’re cheaply made ornaments that you can pick up in their lobby for ten bucks at most. I just picture the owner of the dealership setting out some of the blotchy ornaments that his five-year-old painted at school, thinking that if he’s going to have to display his kid’s godawful art in order to prove he’s a good dad, he could at least make a quick buck out of it. The point is, if you’re willing to fuck Santa in exchange for whatever gift you want, I would at least ask for the more expensive kind. Eartha Kitt had the right idea. She knew her worth. But all of that pales in comparison to the astronomically heterosexual lyric change that Bublé made in the very last verse. The original contains the lyrics, “forgot to mention one little thing, a ring/ I don’t mean on the phone.” Michael Bublé, in all of his straight-bro glory and wisdom, changes this to “forgot to mention one little thing, cha-ching/ no, I don’t mean as a loan.” This line was written to imply that the speaker wanted Santa to propose to her. I struggle to find any other interpretation. But because that would sound too gay for Bublé, he decided to ask for Santa to be his sugar daddy instead. Because, you know, that’s the totally straight thing to do. 
The original speaker had clearly already committed herself to the idea of marrying Santa. She asks him to “think of all the fellas that I haven’t kissed,” implying she has saved herself for Santa, her one and only. Once again, there is absolutely no way to make this line platonic, but Bublé storms on anyway and gives it his best shot by changing the word “fellas” to “hotties.” “Ladies” also has two syllables, but apparently respecting women makes you gay. There’s also the unmistakable fact that the word “hotties” is gender-neutral. Is Bublé leading Santa on, dangling the idea of maybe possibly one day being more than “pallies” someday in front of him? If so, then that’s just cruel. There’s no need to lead Santa on like that, Michael. He’s a good man.
https://medium.com/the-haven/no-ho-ho-homo-says-michael-bubl%C3%A9-as-he-covers-santa-baby-95983d0319f4
['Danny Jackson H.']
2020-12-16 23:00:48.389000+00:00
['Christmas', 'Humor', 'Music', 'LGBTQ', 'Satire']
What’s Wrong with Blue Apron?
In the article that brought me to Gimlet Creative’s attention — and in others — two things are argued: Today’s unsustainable agriculture is aggregated, centralized, and global; technology and global markets drive production with little concern for distribution or ecology. Tomorrow’s sustainable agriculture should be distributed, decentralized, and micro-regional; ecology and local markets drive production, technology drives distribution. Blue Apron has both feet in the former model. via Blue Apron This is a diagram Blue Apron uses to convince you that its devouring of the farm-to-table supply chain is somehow good for you. Let’s start on the left with “Farm & Producers.” Blue Apron has “partnered” with some 150 farms and snapped up BN Ranch from serial rancher Bill Niman (of Niman Ranch, which is now owned by Perdue — because of course it is — albeit without Mr. Niman’s involvement, and is still carried by Whole Foods). Blue Apron is expanding the BN Ranch network as quickly as possible and sending its resident agroecologist to babysit the methods and practices of the farms it doesn’t own outright, all while setting the farms’ gate prices with an eye on its own profits.* In short, Blue Apron is setting itself up to either own or control the entire production end of the supply chain. This is exactly the way oft-maligned companies like Perdue and Tyson’s operate. Then we get to the middle of the diagram: Wholesalers, Regional Grocery Warehouses, and Grocery Stores. Blue Apron owns this outright: they’re the wholesaler, paying rock bottom gate prices to the farmers. They’re the warehouse, with three big distribution centers in the continental United States. And they’re the end retailer, replacing the grocery store with a website and its army of meal preppers, packers, and delivery trucks. In short, Blue Apron adopted the playbook of the conventional agricultural aggregator/integrator and extended it into the retail space with delivery trucks and meal prep. 
Despite this being little better than incremental progress, they go much farther and refer to it as “innovation.” It may be the single most offensive thing about Blue Apron. Or this. Via A Slob Comes Clean They’re not shy about being a vertical integrator, either. Co-founder and COO Matthew Wadiak (a former protege of the legendary Alice Waters, who herself speaks of Blue Apron with thinly veiled contempt) flat out stated: “We can pay farmers more, while paying less for meat,” Wadiak says. “By vertically integrating, we’re going to be more lean, and that’s better for everyone.” Pay farmers more than what, exactly? Anyway, this is where it’s important to mention that Blue Apron is a [terrible] publicly traded company whose sole legal obligation is to its shareholders. When the principles of sustainable agriculture collide with the purses of investors, the latter wins. Always. *In Blue Apron’s Vision page, it claims “guaranteed markets for farmers,” which is a polite way of saying “we set the prices, we are 100% of your market, we own your ass,” effectively turning farmers into serfs who take on all the operational risk at whatever price their sole customer will offer. This is the model of commodity farming that dominates American agriculture today. Perdue is particularly famous for this crap.
https://medium.com/sylvanaquafarms/whats-wrong-with-blue-apron-60bd26f31676
['Chris Newman']
2018-06-11 19:31:08.948000+00:00
['Blue Apron', 'Sustainability', 'Business', 'Farming', 'Food']
Another Amtrak Adventure
To be a star, you must burn. Goethe Once, from a train near Tucumcari, he watched a herd of Angels break into a stellar stampede. It was heavenly pandemonium, sudden Angelic anarchy. Bliss broke out all around. Rapture ruptured the desert. New Mexico suddenly became Elysium. But a psychiatrist and a computer arrived and proclaimed them delusions. The Reality Cowboys rode them down and corralled their auras. They burned out like sad super nova, in a flashing of destruction. New Mexico became New Mexico, and the train chugged on its way. In their bright burning, everything and nothing at all had happened.
https://medium.com/thenewnorth/another-amtrak-adventure-9371c9873c3d
['Mike Essig']
2017-03-31 18:18:55.456000+00:00
['Angels', 'Creativity', 'Surrealism', 'Poetry']
How I Was Productive During Election Week
SATIRE How I Was Productive During Election Week My landlord has to stop annoying me about paying rent that was due a week ago Photo by Markus Spiske on Unsplash This week, we had an election that gripped the attention of, well, everyone. To be productive, I decided that I needed to spend time looking at my phone every single minute of every day to see results from the election pile in. What’s wrong with you, Georgia? Why can’t you bring in results sooner? To be productive this week amidst a hectic week in the news cycle, I decided to scroll through my phone and read advice all over the Internet about how to be productive. They told me to stop reading the news, to stop doomscrolling the news. To stop reading the news and doomscrolling, I looked on Facebook and Twitter instead. That relieved a ton of stress, obviously, because social media is less harmful than actual media. When I complained to my boss about how I had trouble doing work this week because I was so distracted by election news and results, she seemed to be looking away from the screen. She didn’t hear a word I was saying. She kept looking at her phone, and shouted: “Why haven’t they called Wisconsin yet?” Fortunately, I knew I wasn’t alone, that no one was getting anything done around the office. How could they possibly expect us to function as normal? How could my boss expect me to finish the report that was due three weeks ago when the future of our country rides on this election? To be productive, I read advice online about “how to be more productive.” It’s pretty clear all those articles were insufficient to describe our current political crisis. They told me to sleep — how could I sleep when I was wondering if more ballots would come in from Pennsylvania at 3 a.m.? How could I wake up at 5 a.m. when I was already awake at 5 a.m. checking to see if Trump would come back in Arizona? I will say I’m particularly proud of one thing I did — I diversified my news sources.
My Trump-supporting friends will no longer call me a “libtard” since I tuned into Fox News’s election coverage when they prematurely called Arizona for Biden. Now, I no longer live in an echo chamber. My landlord has to stop annoying me about paying rent that was due a week ago — doesn’t he know about all the election stress going on? Doesn’t he know that I was trying to carry on being productive as normal during election week? How can I pay rent when the whole city roared in applause when the AP called the election for Biden? How does he have so much gall to ask for something as trivial as rent on a week so important to the future of our country?
https://medium.com/muddyum/how-i-was-productive-during-election-week-be75a04062bd
['Ryan Fan']
2020-11-09 02:23:52.191000+00:00
['Election 2020', 'Humor', 'Politics', 'Society', 'Satire']
Ranking All 57 of Madonna’s Billboard Hits in Honor of Her 60th(!) Birthday
Madonna Louise Ciccone, better known simply as Madonna, was born 60 years ago today — August 16, 1958. She was born in Bay City, Michigan to devoutly Catholic parents and her mother died of breast cancer when she was 5. She became a rebellious teen and dropped out of college at the age of 20, opting instead to pursue a career in dance in New York City. Five years after arriving in New York, she released her eponymously titled debut album. A few months after the album’s release, Dick Clark invited her to perform on American Bandstand. Before she took the stage Clark asked, “What do you hope will happen, not only in 1984 but for the rest of your professional life? What are your dreams? What’s left?” Without hesitation Madonna gave her legendary answer: “To rule the world.” And she did rule the world. At least the pop music world. The undisputed “Queen of Pop,” Madonna has an astonishing array of richly deserved superlatives. With over 300 million records sold, she is the best-selling female recording artist of all time. With a record 38 Top 10 songs, she is the most successful solo act in the history of the Billboard Hot 100 Chart. With box office receipts totaling over $1.4 billion (unadjusted for inflation), she is the highest grossing solo touring artist of all time. She has won a bevy of awards, including 7 Grammys, 2 Golden Globes (one for acting and one for songwriting), and an eye-popping 20 MTV Video Music Awards. She was inducted into the Rock and Roll Hall of Fame in 2008. To be fair, she has had her share of high profile failures. Some of her musical reinventions were markedly less successful than others, particularly in recent years. Despite appearing in several hit films like Desperately Seeking Susan, Dick Tracy, and A League of Their Own, she never successfully launched a career as an actress (and in fact has amassed more Razzie Awards than any other actress in history). 
And, to be honest, more often than not the controversies stirred by the boundary-pushing iconoclast feel more about grabbing headlines than truly making a statement. But despite her misses and her icy public persona, Madonna is an undisputed music legend. She has crafted some of the best pop songs of all time, revolutionized the art of the music video, and redefined the touring model. She has diversified into multiple successful business ventures and has maintained a tremendous amount of autonomy throughout her 35-year career. And arguably she is the last living pillar of 1980s popular music following the deaths of Michael Jackson, Whitney Houston, and Prince in the past decade. In honor of her 60th birthday, I decided to rank all of her songs that charted on the Billboard Hot 100, which coincidentally also turned 60 less than 2 weeks ago. She has made 57 entries on the chart, nearly one for every year of her life. Revisiting 35 years of her hits to make this list was a wildly entertaining ride, one I highly recommend. Note: This is not a list of what I think are the best songs Madonna has ever made, nor is it a ranking of her most popular or influential songs. Due to her vast back catalogue, I limited this list to songs that charted on the Billboard Hot 100. The rankings are based on my assessment of each song's overall merit in terms of songwriting, composition, vocals, and production. Just about everyone will disagree with my ranking, but that's part of the fun. *** 57. "Bitch I'm Madonna" (2015; Billboard Peak: 84) Madonna's most recent album, 2015's Rebel Heart, was a wildly uneven 22-track epic that had some truly brilliant songs. Unfortunately, the only song to chart from the album — this obnoxious, juvenile collaboration with Nicki Minaj — was not one of them. 56.
"American Life" (2003; Billboard Peak: 37) Perhaps the biggest commercial and critical failure of Madonna's career, the lead single from her disastrous 2003 album of the same name is almost listenable until the 3-minute-and-10-second mark, where she breaks into one of the most ill-conceived rap breaks in music history. (Sample lyrics include: "I'm drinkin' a soy latte/ I get a double shot-ay," "I drive my Mini Cooper/ And I'm feeling super-duper," and "I do yoga and pilates/ And the room is full of hotties/ So I'm checking out their bodies.") 55. "American Pie" (2000; Billboard Peak: 29) It remains unclear why Madonna thought an abbreviated dance remake of Don McLean's classic slice of Americana was a good idea. It wasn't. 54. "Die Another Day" (2002; Billboard Peak: 8) This wildly overproduced and lyrically nonsensical song was written for the 20th James Bond film, which shared the same name. It assuredly ranks toward the bottom of any list of Bond themes and Madonna singles. 53. "Give Me All Your Luvin'" (2012; Billboard Peak: 10) Madonna's 38th and most recent Top 10 single is this disposable and age-inappropriate ditty, which unnecessarily incorporates cheerleaders, a marching band, and raps by Nicki Minaj and M.I.A. 52. "Causing a Commotion" (1987; Billboard Peak: 2) There is nothing especially bad about this track from the soundtrack of Who's That Girl?, but it is profoundly forgettable, particularly in comparison with the brilliant singles that preceded and followed it. 51. "Keep it Together" (1990; Billboard Peak: 8) The fifth single from the terrific Like a Prayer album, this one pales in comparison to the four that preceded it and has not aged well. 50. "Hanky Panky" (1990; Billboard Peak: 10) Perhaps the strangest of Madonna's hits and perhaps the most unlikely Top 10 hit of the 1990s, this track from her Dick Tracy-inspired album is a throwback jazz and swing song with bawdy lyrics that certainly gets points for its boldness and audacity.
Yet it never truly rises above its novelty status. 49. "Celebration" (2009; Billboard Peak: 71) Clearly designed exclusively for the clubs and probably written and produced in the span of an hour, this song is hardly the best of her dance club hits, but it works much better than it needs to. 48. "Angel" (1985; Billboard Peak: 5) This sweet and catchy ditty features one of her better early vocal performances but deviates too little from the template of her early tracks to be a standout. 47. "Bedtime Story" (1995; Billboard Peak: 42) This song was far more famous for its music video (which at the time was the most expensive ever made) than for the quality of the song, but it's undoubtedly intriguing in its lyrics and production and clearly set the stage for the Ray of Light era. 46. "Nothing Really Matters" (1999; Billboard Peak: 93) This is a perfectly fine cut from her best album (Ray of Light), but there are several other songs on the album that are better and would have made more interesting singles. 45. "Me Against the Music" (2003; Billboard Peak: 35) Many found this hotly anticipated collaboration between Madonna and Britney Spears to be underwhelming. Although it clearly doesn't rank as either of their best songs, it has an infectious beat and some moments that really work. 44. "4 Minutes" (2008; Billboard Peak: 3) Much like the above collaboration with Justin Timberlake's famous ex-girlfriend, this collaboration with Timberlake and Timbaland is a serviceable dance floor hit that doesn't soar. 43. "Who's That Girl?" (1987; Billboard Peak: 1) In a Season One episode of the Netflix comedy Unbreakable Kimmy Schmidt, Lillian (Carol Kane) asks "Who's That Girl?" Kimmy answers, but it turns out Lillian was watching Jeopardy and the category was "Worst Madonna Songs." The Clue: "This 1987 one is terrible." Terrible is a strong word, but it is most definitely not one of her best. 42.
"You Must Love Me" (1996; Billboard Peak: 18) This Oscar-winning song was written by Tim Rice and Andrew Lloyd Webber for the film version of their stage musical Evita, in which Madonna starred as legendary Argentinian first lady Eva Perón. It is exceedingly earnest and not particularly interesting as a song, but it does feature one of the strongest and most elegant vocal performances Madonna has ever delivered. 41. "Beautiful Stranger" (1999; Billboard Peak: 19) Significantly more successful than her stab at a James Bond theme (see #54) was this theme song for the blockbuster Bond spoof Austin Powers: The Spy Who Shagged Me. It's bold and clever and interestingly bridges the Ray of Light and Music eras. 40. "Love Don't Live Here Anymore" (1996; Billboard Peak: 78) This song had a very strange evolution. It was a minor hit from 1978 by Rose Royce that Madonna first covered on her second studio album (Like a Virgin). A reworked version was featured on her 1995 ballad compilation Something To Remember. It was this version that charted. It marks a rare but intriguing and largely successful foray into remakes. 39. "Don't Cry for Me Argentina" (1997; Billboard Peak: 8) She's no Patti LuPone (the Tony-winning Broadway legend performed the definitive version of this song), but her vocals are excellent and the song is beautifully composed. It was elevated on the charts by its dance remix, which is bizarre in theory but works terrifically in execution. 38. "True Blue"(1986; Billboard Peak: 3) Despite being a huge hit and the title track off one of her biggest albums, Madonna excluded this song from her 1990 greatest hits compilation The Immaculate Collection. It's a charming pop ditty with no significant flaws, but clearly even Madonna knew then that it wasn't a standout in her catalogue. 37.
"Bad Girl" (1993; Billboard Peak: 36) Despite what I consider to be one of her weaker vocal performances, this song was justly praised by critics for its mature lyrical composition as it tells the story of a woman experiencing shame at her behavior in the aftermath of a breakup. 36. "Oh Father" (1989; Billboard Peak: 20) This affecting pop ballad found Madonna getting much more personal than usual as she explored her complicated dynamics with her father. 35. "You'll See" (1995; Billboard Peak: 6) Both the song and music video were intended as a sequel of sorts to the hit ballad "Take a Bow." It never quite reaches the heights of that Babyface-assisted chart topper, but this collaboration with legendary songwriter and producer David Foster is one of her stronger ballads. 34. "Lucky Star" (1983; Billboard Peak: 4) Madonna's first smash hit hasn't aged particularly well but is nevertheless an iconic and catchy early '80s pop gem that clearly announced her arrival. 33. "Give it 2 Me" (2008; Billboard Peak: 57) Madonna collaborated with Pharrell Williams on this upbeat, anthemic dance song about her perseverance in the music industry. With its clever lyrics and unique percussion, this is a truly underrated song in her catalogue. 32. "What it Feels Like for a Girl" (2001; Billboard Peak: 23) Some of Madonna's best lyrics can be found in this tenderly executed indictment of society's harmful treatment of young women, and with better production it could have been a true classic. 31. "Dress You Up" (1985; Billboard Peak: 5) Sure, it's juvenile and earnest and doesn't stray too far from the formula of her early hits, but it's a clever, compulsively listenable dance-pop song. 30.
"Rescue Me" (1991; Billboard Peak: 9) This collaboration with Shep Pettibone, the record producer who helped Madonna craft "Vogue" a year earlier, closed out The Immaculate Collection with a gospel-tinged anthem that combines some of the best elements of "Like a Prayer," "Express Yourself," and — yes — "Vogue." 29. "Crazy For You" (1985; Billboard Peak: 1) The theme song from the long-forgotten film Vision Quest was this memorable Madonna ballad, which was her first hit that found a home on adult contemporary radio rather than the clubs. 28. "Sorry" (2006; Billboard Peak: 58) The second single released from her terrific Confessions on a Dance Floor is a disco-infused banger that marks one of the best songs of her later career. 27. "Holiday" (1983; Billboard Peak: 16) Madonna's first chart hit was this infectious call to party that has some trite lyrics but an undeniable beat that has become ingrained in popular culture. 26. "Live To Tell" (1986; Billboard Peak: 1) The haunting lead single from True Blue is moody, mysterious, and — at nearly 6 minutes — massive. It's quite opaque (and admittedly a bit too long), but its mature lyrics and elegant vocals marked a major change for Madonna. 25. "Human Nature" (1995; Billboard Peak: 46) One of Madonna's best forays into R&B, this all-around-clever song features bold, unapologetic lyrics that serve as her unofficial manifesto. (Sample lyric: "Did I say something true? Oops, I didn't know we couldn't talk about sex/ Did I have a point of view? Oops, I didn't know we couldn't talk about you.") 24. "Cherish" (1989; Billboard Peak: 2) After the first two singles from Like a Prayer boldly tackled themes of religion and sexuality, Madonna switched things up for this optimistic, wholesome, and romantic pop song that is one of the most joyful that she has ever recorded. 23.
"Rain" (1993; Billboard Peak: 14) One of the best ballads of Madonna's career, this mature and heartfelt song never got the attention or respect it deserved (probably because it was released amidst the most controversial stage of her career). 22. "Don't Tell Me" (2000; Billboard Peak: 4) Madonna demands independence from her lover on this fascinating merge of country and dance that ranks high among the most unique songs of her career. 21. "I'll Remember" (1994; Billboard Peak: 2) This moving ballad was a better theme song than the forgettable film With Honors deserved. It effectively merges synthesizers, beautiful background vocals, mature vocals, and tearjerking lyrics. 20. "Material Girl" (1985; Billboard Peak: 2) It may present Madonna at her vainest and the lyrics may be immature and dated, but the fact is that this is one of the most iconic songs of the 1980s for a reason — it's brilliant pop music. 19. "The Power of Goodbye" (1998; Billboard Peak: 11) One of the most underrated songs of Madonna's career, this melancholy electronica ballad is one of her most deeply affecting. (Sample lyric: "Your heart is not open so I must go / The spell has been broken, I loved you so / Freedom comes when you learn to let go / Creation comes when you learn to say no / You were my lesson I had to learn / I was your fortress you had to burn / Pain is a warning that something's wrong/ I pray to God that it won't be long.") 18. "Erotica" (1992; Billboard Peak: 3) The wild controversy surrounding the music video and accompanying book, both of which explicitly portrayed S&M, overshadowed the fact that this is one of the most innovative songs of Madonna's career. It is an orgy of hip hop and electronica, with spoken word verses, a killer bridge, and an infectious chorus. 17. "Borderline" (1984; Billboard Peak: 10) One of the highlights of Madonna's early career, this flawless pop song features brilliant production and strong vocals. 16.
"Take a Bow" (1994; Billboard Peak: 1) To the surprise of many, this soulful ballad written and produced with R&B super producer Kenneth "Babyface" Edmonds is Madonna's longest running #1 hit. It features one of her most nuanced vocal performances and some of the most poetic lyrics of her career. 15. "La Isla Bonita" (1987; Billboard Peak: 4) Madonna's romanticism (perhaps fetishization) of Latin culture probably wouldn't play well if released in today's age of sensitivity to cultural appropriation, but her incorporation of Spanish lyrics and diverse instrumentation (including Spanish guitars and Cuban drums) make this one of the most unique and memorable songs in her catalogue. 14. "Like a Virgin" (1984; Billboard Peak: 1) Probably Madonna's most iconic song, it endures in popular culture for reasons other than the infamous MTV Video Music Awards performance in the wedding dress. It is stunning pop that merges provocative lyrics, a synth-fueled bassline, and a classic melody. 13. "Open Your Heart" (1986; Billboard Peak: 1) Originally written as a rock-oriented song for Cyndi Lauper, it was reworked by Madonna and frequent collaborator Patrick Leonard into a rousing, infectious, innuendo-laden dance-pop spectacular. 12. "Music" (2000; Billboard Peak: 1) Madonna's most recent #1 hit, this bold and aggressive electro-funk song may not be particularly lyrically inspired ("Music makes the bourgeoisie and the rebel"… do what exactly?) but it is rightfully hailed as one of her finest and most unexpected hours by music critics. 11. "Ray of Light" (1998; Billboard Peak: 5) The lyrics may be utterly nonsensical, but this warm, upbeat, EDM anthem starts slow and gradually builds to an utterly frenetic climax. It's an entrancing roller coaster for its entire 5-and-a-half-minute duration. 10. "Hung Up" (2005; Billboard Peak: 7) It may combine lyrics from her Prince collaboration "Love Song" and heavily sample ABBA's "Gimme! Gimme! Gimme!
(A Man After Midnight)” but somehow the song feels bold and fresh. It is arguably the best disco song of the past 20 years and the last masterpiece Madonna has given us to date. 9. “Secret” (1994; Billboard Peak: 3) This seductive and soulful ballad was the first single off of her tepidly received foray into R&B (Bedtime Stories), but nevertheless is a highlight of her career. The genre-blurring song combines an entrancing melody, acoustic guitar, passionately felt vocals, and intriguing lyrics. 8. “Deeper and Deeper” (1992; Billboard Peak: 7) My vote for the most underrated of all of Madonna’s hits, this disco anthem is 5 and a half minutes of pure bliss that marks another masterful collaboration between Madonna and Shep Pettibone. 7. “Frozen” (1998; Billboard Peak: 2) The lead single of her most acclaimed album (Ray of Light) exemplifies why her transformation into a goddess of electronica and spirituality was her most successful. The entrancing melody, grand production values, and epic length led many critics to declare it a masterpiece upon its release. 6. “This Used to Be My Playground” (1992; Billboard Peak: 1) When Madonna was asked to write and record a song for the Geena Davis-Tom Hanks film A League of Their Own (in which she costarred), one might have expected that she would have gone for a ’40s inspired swing song (a la her Dick Tracy inspired collection.) Instead she went for a gut-wrenching ballad that explores themes related to nostalgia, grief, and heartbreak. In my opinion, this is the best ballad of her career and one of the best ballads of the 1990s. 5. “Justify My Love” (1990; Billboard Peak: 1) This genre-defying song is a downtempo, sensuous slow-burn. Co-written and produced by Lenny Kravitz (and also featuring passionate backup vocals from him), this song features breathy spoken word passages conveying intense sexual longing over fresh beats. 
It is hardly her most accessible and crowd-pleasing song but it is an innovative and distinctive masterpiece. 4. "Express Yourself" (1989; Billboard Peak: 2) This dance-pop anthem is a call to arms for all women to demand more — from their men and from society. Its inspiring and empowering message aside, it's also just damn good pop music. With its insanely catchy chorus, commanding vocals, and horn-fueled instrumentation, it is one of the best pop songs of the 1980s. 3. "Papa Don't Preach" (1986; Billboard Peak: 1) Madonna's biggest controversy-generator at the time of its release, this song told the story of a young woman's unplanned pregnancy that throws her life into turmoil and strains her relationship with her traditional father. The lyrics are sophisticated and Madonna's gritty, lower-register vocal performance perfectly conveys their power. 2. "Vogue" (1990; Billboard Peak: 1) Even excluding the majesty of its legendary music video and seriously considering the reasonable allegations of cultural appropriation, this song is a masterwork. The trendsetting house song features straightforward and empowering lyrics, brilliant pacing and structure, and exquisite production. It is quite possibly the best dance song ever made. 1. "Like a Prayer" (1989; Billboard Peak: 1) Perhaps Madonna's most controversial song, this gospel-infused pop anthem provocatively united religious imagery with sexual innuendo and had an equally scandalous video. It's a shame that the controversy dominated so much of the song's legacy, as it is one of my nominees for best pop song ever made. It's a riveting, slow-building, and ultimately transcendent piece of popular music that only gets better with repeated listens. *** Bonus: 13 Great Madonna Songs that Should Have Charted!
“Everybody” (1982) “Burning Up” (1982) “Into the Groove” (1985) “Love Song” (Duet with Prince; 1989) “Swim” (1998) “Be Careful (Cuidado Con Mi Corazon)” (Duet with Ricky Martin; 1999) “Deserve It” (2000) “Hollywood” (2003) “Nothing Fails” (2003) “Push” (2005) “Miles Away” (2008) “Ghost Town” (2015) “Rebel Heart” (2015) ***
https://medium.com/rants-and-raves/ranking-all-57-of-madonnas-billboard-hits-in-honor-of-her-60th-birthday-b4f5e2d10fcd
['Richard Lebeau']
2020-02-06 23:46:38.134000+00:00
['Hollywood', 'Celebrity', 'Music', 'Pop Culture', 'Culture']
Managing Remote Teams — Use Checklists
On remote teams, conveying team norms is a different process from in the office. Office workers can usually stroll to another desk and ask somebody a question whenever one comes up. On remote teams, your team members may work at different times, or be busy with family errands in the middle of the workday (and that's OK — people should work when they can be most productive). So how should remote workers communicate the team's practices and procedures when they can't just shout, "Hey, how do we do code reviews around here?" Checklists. I've been using checklists for years. In software development we have many of them, like the SOLID principles of object-oriented design. But it wasn't until I read "The Checklist Manifesto" that I realized the true power of checklists, and started making them standard operating procedures on my software teams. The book describes a study which is particularly relevant today, because as I type this, the world is suffering from the worst global disaster since World War II: the COVID-19 pandemic. The study was conducted by Stephen Luby with support from Procter & Gamble to test the effectiveness of antibacterial soap, and it delivered incredible results: the incidence of various diseases fell 35%–52%. But what's really interesting about this study is that the kind of soap that was used didn't make a big difference, and the people already had and used soap. The difference was that the study instructions included two checklists.

When to wash hands:
Before preparing food or feeding it to others
After sneezing or coughing
After wiping an infant
After using a bathroom

Most people were already washing their hands after using a bathroom, but they weren't doing it properly. The checklist also included instructions:

Use soap.
Wet both hands completely.
Rub the soap until it forms a thick lather covering both hands completely.
Wash hands for at least 20 seconds [not included in this study, but we know now it takes at least 20 seconds to break down viral bugs including Coronavirus so I'm putting it here for posterity].
Completely rinse the soap off.

It was not the particular soap used — any soap is effective. It was the checklists that prevented illness. The checklists changed behaviors and taught people how and when to properly wash their hands. If you talk to members of my teams, you'll discover that we create checklists for lots of things. The best checklists are:

[ ] Short enough to memorize
[ ] Only include the key points

If they get too long, conformance to the checklist drops, as people begin to see checklist points as optional suggestions. Here are some real examples of the checklists we commonly use on our teams. A few of these (like FIRST and RAIL) are widely used and developed externally. Several others (including 5 Questions, RITE Way, Test Timing, and both CI/CD lists) were developed by me, but inspired by common industry best practices:

Code Review Checklist
Before merging a pull request, check that the following have been considered:
[ ] PR is small enough (otherwise, break it up)
[ ] Code is readable
[ ] Code is tested
[ ] The features are documented
[ ] Files are located and named correctly
[ ] Error states are properly handled
[ ] Bonus: Screenshots/screencast demo included

Code Test Checklist (RITE Way)
In quality software, developers must deliver tests which automatically prove that the code works. To test the RITE Way, each test should be:
[ ] Readable
[ ] Isolated (for unit tests)/Integrated (for integration tests)
[ ] Thorough
[ ] Explicit

Test Timing Checklist
[ ] Unit tests run in under 10 seconds
[ ] Functional tests should run in under 10 minutes
[ ] CI/CD checks should run in under 10 minutes

5 Questions Every Unit Test Should Answer
[ ] What is the component under test?
[ ] What is its expected behavior (in human readable form)?
[ ] What is its actual output?
[ ] What is its expected output?
[ ] How do you reproduce a test failure? (Double-check that the answers to the above answer this question.)

This checklist is unsurprisingly similar to the bug report checklist, because all failing unit tests should be good bug reports.

Bug Report Checklist
Each bug report should include:
[ ] Description (including location)
[ ] Expected output
[ ] Actual output
[ ] Instructions to reproduce
[ ] Environment (browser/OS versions, extensions)
[ ] Bonus: Screenshot/screencast demonstrating the bug

Component Checklist (FIRST)
Components should follow the FIRST principles:
[ ] Focused
[ ] Independent
[ ] Reusable
[ ] Small
[ ] Testable

Software User Interface (UI) Performance Checklist (RAIL)
Software UIs should conform to the RAIL performance model:
[ ] Respond to user interaction in under 100ms
[ ] Animation frames should draw in under 10ms
[ ] Idle time processes should be batched in blocks of less than 50ms
[ ] Load in under 1 second

Continuous Delivery Preparedness Checklist
[ ] A minimum of 80% of the code is covered by unit tests.
[ ] All critical user workflows are covered by functional tests.
[ ] All critical integrations are covered by integration tests.
[ ] A feature toggle system exists to toggle features on and off in the production environment. All unfinished features are toggled off by default.

CI/CD Checklist
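To make the 5 Questions checklist concrete, here is a minimal sketch in Python (the article doesn't prescribe a language; the `add` function and the test are invented purely for illustration):

```python
# A unit test that answers all five questions from the checklist.
# The component under test (add) and its values are invented examples.

def add(a, b):
    """Component under test: returns the sum of two numbers."""
    return a + b

def test_add_returns_the_sum_of_its_arguments():
    # 1. Component under test: add()
    # 2. Expected behavior (human readable): it's in the test name above
    actual = add(2, 3)    # 3. Actual output
    expected = 5          # 4. Expected output
    # 5. Reproducing a failure: the message records the inputs and both values
    assert actual == expected, f"add(2, 3) should be {expected}, got {actual}"

test_add_returns_the_sum_of_its_arguments()  # passes silently
```

A failing run prints the message from question 5, which doubles as a minimal bug report — matching the article's point that every failing unit test should be a good bug report.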
https://medium.com/javascript-scene/managing-remote-teams-use-checklists-301aae93e7a5
['Eric Elliott']
2020-05-06 01:02:26.683000+00:00
['Software Engineering', 'Technology', 'Software Development', 'Remote Work', 'Management']
How to improve your Y Combinator application
Y Combinator is the best startup accelerator in the world. It's no surprise that getting accepted is more competitive than getting into Harvard, Stanford, or MIT. Twice a year, there are roughly 30,000 applications for fewer than 150 spots, meaning the acceptance rate is under 1%! So what can you do to improve your chances? Read successful applications One of the best things you can do is learn from the people that got in. It's a bit tricky to find past applications because companies aren't required to publicly post them online. Lucky for us, after a lot of searching, I found about half a dozen companies that were nice enough to put their applications out in public: Take your time, read through them, and make some notes. Also, please give a big thank you to each of these companies for putting their applications online for us to learn from. Listen to the YC Partners The partners at YC (the people who make the decisions) don't hide what they want to see in applications. If you Google search most of the partners' names and "how to get into YC," a video or an article will pop up detailing what that partner wants to see. Another great source of information detailing what the partners are looking for is the Y Combinator blog. The blog has great advice on starting a startup, so even if you don't get into YC, I'd recommend bookmarking this page for when you need business advice. Get right to the point "Brevity is the soul of wit" - Shakespeare I think this is the hardest thing to get right in the Y Combinator application. One thing the application is testing for is your ability to concisely and persuasively get your point across. This is an essential skill that founders need, and it makes sense that YC values it so highly. Why is this such a challenge? Well, look at some of the past successful applications. Compare the answers Flex gave to the ones that Dropbox provided. Dropbox gave much shorter answers than Flex. Does that mean one is better than the other?
No, they both got in, which is the measure of success here. Even though Dropbox's answers were much shorter, there is one major thing in common. Both of the applications answer what the question is asking and nothing more. When the application asks what your company will make, don't get into the details of your marketing plan or try to convince them that you're the next Google/Uber/Amazon. Just stick to what your company is going to make. I know this sounds easy, but trust me, this is where most founders go wrong. Before you submit your application, go through each question, and ask yourself — "Did I answer more than the question was asking?" Use a bottom-up, not top-down, approach to estimate how much your company can make. YC wants to see that you are obsessed with your target customer. A bottom-up approach starts by identifying your target customer and then extrapolating, while a top-down approach begins with the total market size and then thins it down. It's no surprise that YC prefers the bottom-up method because it starts with the customer. Here's a bit of a breakdown of the two approaches: A top-down approach starts by determining your total market size, and then you assume a percentage of the market you'll get. For example, let's say my company sells brewed coffee. A top-down approach would say that the US market size for coffee is ~88 billion dollars. I plan only to locate my coffee shops in California. We're going to be the In-N-Out Burger of coffee shops. California accounts for ~12% of the US population, so my total market size is about 10.7 billion dollars (assuming an even distribution of coffee drinkers and no tourism). I'll be conservative and say I'll only capture 1% of the market. So my company has the potential to make 107 million dollars per year. A bottom-up approach starts with the customer and then works its way up. Let's say I find out that every day *1 million cups of coffee are sold in California.
That's 365 million cups per year (366 if it is a leap year). My company believes it can sell 1% of those cups of coffee at $3.25 per cup (it's artisanal single-origin coffee). So our potential market is 11.9 million dollars per year. If you'd like to do some more research on the two approaches, here are two great articles on the subject: *I made up these numbers. Do a rough draft on Google Docs (or any other word editor) Don't edit your application on the Y Combinator website. It's hard to catch mistakes in the small word boxes that YC gives you to fill in. Create a separate Google Doc to spell check easily, edit, and allow friends to review your work. This will save you a lot of time and errors. Application Red Flags Application red flags to watch out for: Answer all of the questions! Roughly half of the 30,000 applications get thrown out because the founders didn't completely fill out the application. Your equity should be evenly distributed. A big red flag is one founder having much more equity than another founder. Large differences in equity are a common cause of disputes between founders, and founder disputes are among the most common reasons startups fail. Summary 1) Read past successful applications Take notes on how past founders wrote their applications. 2) Listen to the YC partners Google search for the plethora of content YC partners have written and recorded about what they look for in applications. 3) Get right to the point Make sure you answer the question being asked and nothing more. 4) Use a bottom-up market analysis When estimating your market size use a bottom-up approach, not a top-down one. 5) Do a rough draft outside of YC's website Create a separate document for your YC application and then copy and paste the answers in. And most importantly! Good luck to everyone applying to YC. Whether you do or do not get in, I wish you the best on the crazy roller coaster ride that is starting a business. You got this!
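The two market-sizing estimates above can be reproduced in a few lines of Python. All figures are the article's invented coffee numbers; note the article rounds California's share a little higher than a flat 12%, which is why it reports ~$107M for the top-down estimate rather than the ~$105.6M computed here:

```python
# Worked version of the article's two market-sizing approaches,
# using the article's made-up coffee figures.

# Top-down: start from the total market and narrow it down.
us_coffee_market = 88_000_000_000    # ~$88B US coffee market
ca_population_share = 0.12           # California ≈ 12% of the US population
ca_market = us_coffee_market * ca_population_share
top_down_revenue = ca_market * 0.01  # "conservative" 1% market capture
print(f"Top-down:  ${top_down_revenue:,.0f} per year")   # → $105,600,000

# Bottom-up: start from the customer and extrapolate up.
cups_per_day_in_ca = 1_000_000       # cups of coffee sold daily in CA
cups_per_year = cups_per_day_in_ca * 365
our_share = 0.01                     # we believe we can sell 1% of those cups
price_per_cup = 3.25                 # artisanal single-origin pricing
bottom_up_revenue = cups_per_year * our_share * price_per_cup
print(f"Bottom-up: ${bottom_up_revenue:,.0f} per year")  # → $11,862,500
```

The roughly 10x gap between the two numbers is exactly why investors distrust top-down estimates: the "1% of a huge market" assumption does all the work.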
https://medium.com/the-innovation/tips-to-make-your-y-combinator-application-better-a152b12daf3e
['Joe Robbins']
2020-09-26 11:06:44.121000+00:00
['Startup', 'Business', 'Venture Capital', 'Ycombinator', 'Accelerator']
Podcast Episode #8: The Insights of a Data Science Academic with Justin Eldridge
Wilson Xie · Aug 12 This article recaps the main takeaways of our podcast episode with Justin Eldridge. Make sure to listen to the full podcast below or on Podbean. Follow us to stay tuned for more episodes! Justin Eldridge is an Assistant Teaching Professor of Data Science at UCSD, and has taught DSC 10 (Principles of Data Science), DSC 40A (Theoretical Foundations of Data Science I), and DSC 40B (Theoretical Foundations of Data Science II). In this podcast episode, Justin recounts his undeviating journey through academia and offers advice to students pursuing a potential double major or project. Educational Background and Research Works The entirety of Justin's undergraduate and graduate education was spent at the Ohio State University. He initially intended to pursue an undergraduate degree in Aerospace Engineering, but after realizing the subject was not for him, he switched to Physics and Applied Mathematics. During this time, he became interested in machine learning and artificial intelligence and decided to pursue a PhD in Computer Science at OSU. His dissertation concentrated on hierarchical clustering and sought to examine the accuracy of clustering algorithms. Justin primarily focused on developing definitions and theories that can prove a clustering algorithm correct. For example, when we listen to music, we often try to categorize songs by genre, mood, and more. But there is no one "right way" to categorize (cluster) our music, because we can organize it using different methods. So Justin attempted to define what correct clustering means, and to prove which algorithms cluster correctly. When Justin came to UCSD, he extended his focus on clustering into his classes, DSC 40A and DSC 40B.
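To give a flavor of what a hierarchical clustering algorithm actually does, here is a toy single-linkage agglomerative clusterer in pure Python. The 1-D data, the distance function, and the choice of k are invented for illustration; production implementations (e.g., SciPy's scipy.cluster.hierarchy) handle multidimensional data and build a full merge tree:

```python
# Toy single-linkage agglomerative (hierarchical) clustering on 1-D data.
# Every point starts as its own cluster; the two closest clusters are
# repeatedly merged until only k clusters remain.

def single_linkage(points, k):
    clusters = [[p] for p in points]  # start: each point is its own cluster

    def dist(c1, c2):
        # Single linkage: distance between the closest pair of points.
        return min(abs(a - b) for a in c1 for b in c2)

    while len(clusters) > k:
        # Find the pair of clusters with the smallest linkage distance.
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)  # merge j into i
    return [sorted(c) for c in clusters]

# Two obvious groups on the number line — any sensible clustering
# algorithm should recover them.
print(single_linkage([1, 2, 3, 10, 11, 12], k=2))
# → [[1, 2, 3], [10, 11, 12]]
```

Justin's research question can be read against sketches like this: for which datasets and which definitions of "correct" is an algorithm guaranteed to recover the true groups?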
These two courses concentrate on the mathematical theories and applications within Data Science, so knowing the concept behind clustering is extremely useful for students when learning about the foundations of Data Science. Teaching at UCSD Before becoming a professor at UCSD, Justin underwent the classic dilemma of what to pursue: academia or industry? Justin picked the academic path, and mainly focused on teaching rather than research. However, the University of California offered him the opportunity to continue his research while still teaching at UCSD as a Data Science professor. This opportunity attracted Justin because he always wanted to continue his research while becoming a teaching professor. Additionally, Justin liked working in San Diego because he always wanted to move back to California after graduating at OSU. When Justin arrived at UCSD, HDSI was newly established. So Justin was given the opportunity to further develop the Data Science program by creating new major classes and writing their corresponding textbooks. The Data Science presence in San Diego also contributed to HDSI’s success because there were so many opportunities. Justin added, “San Diego has many more jobs in Data Science especially in Biotech, such as Bioinformatics. It is definitely a popular destination for people in Data Science because the environment could help their curriculum.” Besides contributing to the department of Data Science, Justin also met many great people that shared the similar mindsets with him here at UCSD. He said, “There are so many people here who are famous in theory of machine learning. It’s ridiculous. A lot of people I see in textbooks or giving Keynote speeches at conferences are teaching and working here. It’s so cool to be in the same place as them.” Many professors with industry experience like to share and promote their experiences as they incorporate what they learned into their teaching style. 
However, Justin is a pure academic, having only seen industry through a short-lived consulting lens. When asked about this, he acknowledged the difference and replied that his teaching method was shaped by his physics background. His motives for enrolling in most of his physics classes at OSU stemmed from pure interest. Justin promotes similar ideologies in his Data Science classes, believing that students should have a parallel motive when taking his classes. "Treat people like they're interested in the thing," he said, "and even if they aren't, if you show them what's interesting about it, and have enthusiasm for the subject, then it's contagious to the students who weren't interested." Babypandas When Justin began teaching DSC 10, an entry-level Data Science class, he and professor Fraenkel intended to model it after UC Berkeley's Data 8 class. Although the material helped introduce Data Science programming — such as how to organize and make tables and develop simple data visualizations through the NumPy library — it did not prepare students for the Pandas module. Looking for a way to improve DSC 10, they decided to create a simplified Pandas library for the class. The Babypandas library introduced Pandas with fewer methods and arguments. It was designed so that students with no coding experience could learn Pandas through a simpler interface. It replaced Berkeley's Data Science module in DSC 10, allowing a smoother transition for students to learn full Pandas in the future. Moreover, Justin also rewrote the textbook to emphasize Babypandas, since the previous textbook used the Data Science module. Here is the textbook link. Advice for Data Science Students · Double Major Justin primarily recommends that students consult with their Data Science advisor when considering a double major.
But in his opinion, if a student is interested in academia, double majoring in Data Science and Math would be an interesting approach, because graduate school involves theory that requires math. If a student double majors in Data Science and another science subject, the student would have a great track toward a career in Bioinformatics. For students double majoring in Data Science and History, Justin believes they would gain unique and useful perspectives because they are majoring in two very different subjects, which takes a lot of time and effort. · Data Science Projects For students who want to work on data science projects, Justin recommends looking for the data first, before settling on the question. Although students sometimes come up with impactful questions, the data needed to answer them may not be available. Therefore, Justin suggests that students work on projects that are interesting to them and have already been answered by other people. Also, Justin advises students to always organize the work before starting a data science project. He adds, "you don't want to get to a point where it's a pain to do a simple thing in a project." So be sure to plan out the schedule before starting a project, or else the student may lose interest in it. Comments on the Data Science Society When asked about DS3, Justin was very satisfied with the organization. He said that Data Science professionals have to work on projects in order to stay competitive in the field, and DS3 plays an important role in giving students the opportunity to work on data science projects outside of classes. Future and More In the future, Justin will be teaching DSC 80, The Practice and Application of Data Science. He is also currently working on establishing DSC 40C, a new course in the DSC 40 series that features the application of probability in linear algebra.
If you’re interested, check out Justin’s personal website to learn more about him and his research on clustering.
https://medium.com/ds3ucsd/podcast-episode-7-the-insights-of-a-data-science-academic-with-justin-eldridge-61008057ce0e
['Wilson Xie']
2020-09-22 01:17:59.500000+00:00
['San Diego', 'Education', 'Data Science', 'Clustering', 'Academia']
James Durbin
James and I chatted about his post idol plans and charity work he does.
https://medium.com/a-teen-view/james-durbin-a9382b75ec9a
['Arin Segal']
2016-11-04 00:42:13.515000+00:00
['James', 'Idol', 'Music', 'Durbin', 'American']
Text Summarization on COVID-19 research data
Summarize massive text data into a short, precise, and fluent summary that helps us consume relevant information faster. Photo by Susan Yin on Unsplash The entire world is currently experiencing the COVID-19 virus. Ever wondered what researchers have been finding out about this virus? The White House, in collaboration with other leading research groups, prepared a COVID-19 research dataset for the global research community to help in understanding more about the disease by applying AI and Natural Language Processing (NLP) techniques. This dataset also includes publications on SARS-CoV-2 and other coronaviruses. Going through each research paper in the dataset is not practically feasible, as there are more than 130,000 articles and 60K+ full texts. To overcome the challenge of going through each paper in detail, I have developed an NLP-based text summarization model that produces a summary of an entire document, so that the reader can get a decent understanding of what the article is about and which topics the author focused on. Let us begin with the steps involved in summarizing text from the corpus, and then go step by step to accomplish text summarization on the COVID-19 dataset. One can use this approach on any text document to create shorter and meaningful summaries. Text Summarization: Text summarization is a statistical method to produce summaries from vast chunks of text. Text summarization can be achieved using Natural Language Processing techniques or deep learning algorithms. This article covers the NLP technique for text summarization. Five necessary steps for text summarization using the NLP approach:
1. Tokenize the sentences in the text.
2. Pre-process the text.
3. Create a sentence frequency matrix.
4. Determine a weighted matrix for each sentence.
5. Rank the sentences based on highest weight.
Now, let's jump into the COVID-19 dataset and perform the required steps for summarizing the text.
Import all necessary libraries

# Start with loading all necessary libraries
import re
import nltk
import heapq
import numpy as np
from os import path
from PIL import Image
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

### load stopwords
nltk.download('stopwords')

Before we get into summarizing the text, let us create some descriptive analytics and look at the distribution of the data on critical fields in the dataset.

### read the file
df_text = pd.read_csv('covid.csv')

Create a list of custom keywords and add them to the keywords from NLTK.

stopwords_custom = nltk.corpus.stopwords.words('english')
customize_stop_words = [
    'common', 'review', 'describes', 'abstract', 'retrospective', 'chart',
    'patients', 'study', 'may', 'associated', 'results', 'including', 'high',
    'found', 'one', 'well', 'among', 'Abstract', 'provide', 'objective',
    'objective:', 'background', 'range', 'features', 'participates', 'doi',
    'preprint', 'copyright', 'org', 'https', 'et ', 'et', 'al', 'author',
    'figure', 'table', 'rights', 'reserved', 'permission', 'use', 'used',
    'using', 'biorxiv', 'medrxiv', 'license', 'fig', 'fig.', 'al.',
    'Elsevier', 'PMC', 'CZI',
]
### append custom stopwords to the default stopwords from NLTK
for i in customize_stop_words:
    stopwords_custom.append(i)

Let's see which words are most frequently used across all 50K+ papers in the dataset. It looks like some of the most commonly used words are 'respiratory' and 'epidemiology', which are relevant in the current COVID-19 situation. Can we find any interesting critical words in the publication titles of all 50K+ research papers? Once again, there are some common words between the larger text and the titles of the papers. From the below word cloud, we can also observe that some authors submitted multiple publications. Also, 'paper published year' is one of the variables in the dataset. There are publications from the mid 20th century to date.
2021 as the published year must be an incorrect entry, because this paper was published in April 2020. Distribution of publication year in the entire dataset: Pre-process the data: Clean the text by removing special characters, extra spaces, etc.

def clean_text(text):
    '''Clean the text of special characters, extra spaces, etc.'''
    text = re.sub(r'\[[0-9]*\]', ' ', text)  # remove citation markers like [12]
    text = re.sub(r'\d', ' ', text)          # remove digits
    text = re.sub(r'\W', ' ', text)          # remove non-word characters
    text = re.sub(r'\s+', ' ', text)         # collapse extra whitespace
    return text

### select any text from abstract for summarization
text = str(df_text['abstract'][9999])
print(text)

The code in the cell above performs the pre-processing steps and generates clean text to feed into the model; the full text after pre-processing is shown below. Tokenize and remove stop words from the text. Now, pass the preprocessed text to the next steps to exclude stopwords and tokenize the sentences. The code in the following cell also creates a frequency matrix for each sentence. The function tokenise_text returns the tokenized sentences and the frequency matrix.

def tokenise_text(text):
    '''remove stop words, tokenize the text and create word counts with scores'''
    wordcnt = {}
    for word in nltk.word_tokenize(text):
        if word not in stopwords_custom:
            if word not in wordcnt.keys():
                wordcnt[word] = 1
            else:
                wordcnt[word] += 1
    ## Tokenize the text into sentences
    sentenses = nltk.sent_tokenize(text)
    ## Normalize each count by the highest raw frequency
    max_freq = max(wordcnt.values())
    for key in wordcnt.keys():
        wordcnt[key] = wordcnt[key] / max_freq
    return wordcnt, sentenses

wordcnt, sentenses = tokenise_text(text)

Create the sentence frequency matrix and the weighted sentence matrix, then rank the sentences based on the weights. The frequency matrix maps each word to the number of times it appears in the whole paragraph; the weighted matrix is created by dividing each word's count by the maximum count among all words. The length of the desired sentence is one of the hyperparameters.
One has to be mindful in choosing this, because longer sentences are sometimes not essential yet carry high weights, and that may cause the wrong sentence to be chosen as the top rank. Determining the number of best sentences is also our choice. I went with picking the top 2 sentences as the best sentences.

def summrize_text(wordcnt, sentenses):
    '''summarize the text!! choose the desired number of words per sentence and the number of sentences'''
    sent_score = {}
    for sentense in sentenses:
        for word in nltk.word_tokenize(sentense.lower()):
            if word in wordcnt.keys():
                if len(sentense.split(' ')) < 25:
                    if sentense not in sent_score.keys():
                        sent_score[sentense] = wordcnt[word]
                    else:
                        sent_score[sentense] += wordcnt[word]
    ### Pick the best sentences for the summarized text
    best_sent = heapq.nlargest(2, sent_score, key=sent_score.get)
    return best_sent

best_sent = summrize_text(wordcnt, sentenses)

Finally, print the summarized text for the text I selected from the dataset.

for sentense in best_sent:
    print(sentense)

As we can see, the long text that we passed as input to the model was converted to a short and meaningful summary. Here is the full code in readable format: GitHub link to the code: References: Please comment or provide feedback :)
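To make the five steps concrete end to end, here is a compact, self-contained sketch of the same frequency-based approach. It swaps NLTK's tokenizers for simple regex splits and uses a tiny illustrative stop-word list (both are assumptions made so the snippet runs without any downloads); the scoring mirrors tokenise_text and summrize_text above.

```python
import re
import heapq

# Tiny illustrative stop-word list (stand-in for the NLTK stopwords used above)
STOPWORDS = {'the', 'a', 'an', 'is', 'are', 'of', 'and', 'to', 'in', 'it'}

def summarize(text, n_sentences=2, max_words=25):
    # 1. Tokenize into sentences (regex stand-in for nltk.sent_tokenize)
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # 2. Pre-process and count word frequencies, skipping stop words
    words = re.findall(r'[a-z]+', text.lower())
    freq = {}
    for w in words:
        if w not in STOPWORDS:
            freq[w] = freq.get(w, 0) + 1
    # 3./4. Weight each word by the highest raw frequency, then score sentences
    max_freq = max(freq.values())
    weights = {w: c / max_freq for w, c in freq.items()}
    scores = {}
    for s in sentences:
        if len(s.split()) < max_words:  # skip overly long sentences
            for w in re.findall(r'[a-z]+', s.lower()):
                scores[s] = scores.get(s, 0) + weights.get(w, 0)
    # 5. Rank and keep the top-scoring sentences
    return heapq.nlargest(n_sentences, scores, key=scores.get)
```

On a toy input, the sentence that concentrates the most frequent words wins, which is exactly the behavior the hyperparameter discussion above is warning about: a long but unimportant sentence can accumulate weight, so the word-count cutoff matters.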
https://medium.com/analytics-vidhya/text-summarization-on-covid-19-research-data-a5ab28695e11
['Vamsi Krishna']
2020-06-02 15:53:47.336000+00:00
['Data Science', 'Python', 'Naturallanguageprocessing', 'Covid 19']
10 “Silicon Valley” Liners/Puns that are So Funny, Apt & Relatable to the Tech World
That’s exactly software engineering is all about. Are the project managers listening? Img Credits : Pinterest We all have that one guy in the team who is basically like “One Man Army”. Isn’t it? Image credits : Pinterest That guy who is 24*7 on his computer. He eats code, sleeps code and …… Image credits : Pinterest When Jared asks what it is that he (Gilfoyle) does? Gilfoyle answers — “What do I do? System Architecture. Networking and Security. No one in this house can touch me on that. But does anyone appreciate that? While you were busy minoring in gender studies and singing a cappella at Sarah Lawrence, I was getting root access to NSA servers. I was a click away from starting a second Iranian revolution. I prevent cross site scripting, I monitor for DDoS attacks, emergency database rollbacks, and faulty transaction handlings. The internet, heard of it? Transfers half a petabyte of data a minute, do you have any idea how that happens? Every dipshit who shits his pants if he can’t get the new dubstep Skrillex remix in under 12 seconds. It’s not magic, it’s talent and sweat. People like me ensuring your packets get delivered unsniffed. So what do I do? I make sure that one bad config on one key component doesn’t bankrupt the entire f**king company. That’s what the f**k I do. … Listen, wherever we end up here, I just want to say that I feel I should get more equity than Dinesh.”
https://medium.com/datadriveninvestor/10-silicon-valley-liners-puns-that-are-so-funny-apt-relatable-to-the-tech-world-a2ee797f7949
['Naina Chaturvedi']
2020-12-26 15:36:15.829000+00:00
['Humor', 'Programming', 'Tech', 'Software Development', 'Startup']
Using Apache Airflow to Manage Data Workflow POC
Configuration as Code In this section, I talk about Airflow DAG configuration. Before getting into this topic, you need to create a folder called dags inside your airflow project. This folder will store all the .py files. The default DAG configuration is shown below. These are the basic settings. 'provide_context' is used to pass a set of keyword arguments that can be used in your function. 'provide_context' should be placed in PythonOperator, but I place it inside default_args. Airflow uses XCom (cross-communication) to exchange messages, and XComs can be "pushed" or "pulled".

default_args = {
    'owner': 'DennyChen',
    'start_date': datetime(2020, 8, 14, 0, 0),
    'depends_on_past': True,  # With this set to true, the pipeline won't run if the previous day failed
    'email': [''],
    'email_on_failure': True,  # upon failure this pipeline will send an email to your email set above
    'email_on_retry': False,
    'retries': 5,
    'retry_delay': timedelta(seconds=5),
    'provide_context': True,
}

dag = DAG(
    'my_first_dags',
    default_args=default_args,
    description='A simple DAG',
    schedule_interval='*/10 * * * *'
)

After these basic configurations, we can create our own jobs. First, we create a bash job. This job simply prints a greeting and the time. Because 'provide_context' is placed inside 'default_args', I need to put **context in my function's signature in order to fetch the context variables.

def test(**context):
    print("Hi, I'm Denny")
    time = str(datetime.today())
    print("Time: " + time)

Next, we write a small function to get the ASUS stock price via Yahoo Finance. At the end of the function, I return three values.
def get_asus_stock_price(**context):
    r = requests.get('https://finance.yahoo.com/quote/2357.TW%3FP%3DASUS/')
    soup = BeautifulSoup(r.text, 'lxml')
    org = soup.find('h1', 'D(ib) Fz(18px)').text
    print(org)
    price = soup.find('span', 'Trsdu(0.3s) Fw(b) Fz(36px) Mb(-4px) D(ib)').text
    print(price)
    # up_or_down = soup.find('span', 'Trsdu(0.3s) Fw(500) Pstart(10px) Fz(24px)').text
    # up_or_down = soup.find('span', {'data-reactid': '33'})
    # print(up_or_down)
    date = soup.find('span', {'data-reactid': '35'}).text
    print(date)
    return org, price, date

After I fetch the data from Yahoo Finance, I need to insert it into a database. Here, I choose MongoDB.

def insert_to_mongo(**context):
    client = pymongo.MongoClient("")
    db = client["stock"]
    collection = db["stock_price"]
    dblist = client.list_database_names()
    # dblist = myclient.database_names()
    # print(dblist)
    if "stock" in dblist:
        print("Exist")
    org, price, date = context['ti'].xcom_pull(task_ids='get_stock_price')
    print(org)
    print(price)
    print(date)
    time = str(datetime.today())
    dict = {"Company": org, "Price": price, "Date": date, "Insert_Time": time}
    collection.insert_one(dict)

Here I use xcom_pull to get the data returned from get_asus_stock_price(). At last, I need to set up each job. python_callable links to the function I just wrote above.

# allow to call bash commands
bashtask = BashOperator(
    task_id='bash_task',
    bash_command='date',
    dag=dag,
)

# allow to call python tasks
python_task = PythonOperator(
    task_id='python_task',
    python_callable=test,
    dag=dag,
)

dummy_task = DummyOperator(
    task_id='dummy_task',
    # retries=3,
    dag=dag,
)

get_stock_price = PythonOperator(
    task_id='get_stock_price',
    python_callable=get_asus_stock_price,
    provide_context=True,
    dag=dag,
)

insert_to_mongo = PythonOperator(
    task_id='insert_to_mongo',
    python_callable=insert_to_mongo,
    dag=dag,
)

I mentioned above that get_stock_price and insert_to_mongo have a dependency relationship.
In Airflow, you can set downstream and upstream relationships:

get_stock_price >> insert_to_mongo

Easy, right? Or you can use another syntax:

get_stock_price.set_downstream(insert_to_mongo)

You can use downstream or upstream to set up any dependency relationship you want. In Airflow, it's called Relationship-builder.
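Conceptually, the >> syntax is just Python's __rshift__ operator being used to record edges in the task graph. The toy class below is invented for illustration (it is not Airflow's real operator class); it sketches how a >> b can register a downstream/upstream pair and return the right-hand task so chains like a >> b >> c work.

```python
class Task:
    """Toy stand-in for an Airflow operator, to illustrate dependency chaining."""
    def __init__(self, task_id):
        self.task_id = task_id
        self.downstream = []  # tasks that must run after this one
        self.upstream = []    # tasks that must run before this one

    def set_downstream(self, other):
        # Record the edge in both directions
        self.downstream.append(other)
        other.upstream.append(self)

    def __rshift__(self, other):
        # `a >> b` is sugar for a.set_downstream(b)
        self.set_downstream(other)
        return other  # returning `other` lets chains like a >> b >> c work

get_stock_price = Task('get_stock_price')
insert_to_mongo = Task('insert_to_mongo')
get_stock_price >> insert_to_mongo
```

With edges recorded this way, a scheduler only has to walk the upstream lists to know which tasks must finish before a given task may start.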
https://medium.com/analytics-vidhya/a-data-flow-poc-using-airflow-47820eac85b2
['Denny Chen']
2020-09-30 15:29:15.069000+00:00
['Airflow', 'Python', 'Data', 'Mongodb', 'Dataflow']
There’s no Harm in Slowly Migrating to React.js (Start ASAP)
There’s no Harm in Slowly Migrating to React.js (Start ASAP) For this reason, it’s always a good time to start folding your static HTML / CSS / JS webpages into a React.js web app Image by Alvaro Felipe on ed.team I’m not the first to admit it — I should have migrated my HTML / CSS / JS / jQuery site to React.js a long time ago. On the verge of 2021, the reasons are clear; React allows for better organization, truly reusable components, powerful routing, Server-Side Rendering (SSR), Static Site Generation (SSG), modern web features, and more. Still, it definitely wasn’t my first thought to jump ship to React. As my site grew in complexity and scale, manually editing dozens of .html files became inconvenient, but the thought of learning a new framework was more daunting. According to computer science proverbs, if it ain’t broken, why fix it, right? Wrong. In fact, there is hardly any consequence to migrating your website to React.js incrementally, at your own pace.
Step 1: Create your React Web App
At this point, you’re probably worried about how to set up your site in the first place. Luckily, the work is done for us, thanks to create-react-app! Note: feel free to use Next.js for SSR, or Gatsby.js for SSG. On your machine, initialize your React project with

npx create-react-app your-website

This will populate a directory your-website with all the resources you need to get up and running. Voila!

your-website
├── README.md
├── package-lock.json
├── package.json
├── public
│   ├── favicon.ico
│   ├── index.html
│   ├── manifest.json
│   └── robots.txt
└── src
    ├── App.css
    ├── App.js
    ├── index.css
    ├── index.js
    └── logo.svg

Notice that two directories are created: /src and /public. /src is the main home of the React app itself. Here, you will write components, assemble pages, configure routers, etc. /public is the home of all static content. This means images, sitemaps, your robots.txt and manifest.json files (as visualized), and you guessed it — static standalone webpages.
Step 2: Integrate Your Existing Webpages
Now that we’ve established /public to be the parent directory of our current static website, let’s drop in those files and directories. Imagine I have 3 webpages: about, shop, and contact. Each is a directory that contains .html, .css, and .js files. I also have a background image, sitemap, and terms and conditions document. Dropping these files into the public folder leads to the following structure:

your-website
├── README.md
├── package-lock.json
├── package.json
├── public
│   ├── about
│   │   ├── functions.js
│   │   ├── index.html
│   │   └── styles.css
│   ├── background.jpg
│   ├── contact
│   │   ├── functions.js
│   │   ├── index.html
│   │   └── styles.css
│   ├── favicon.ico
│   ├── index.html
│   ├── manifest.json
│   ├── robots.txt
│   ├── shop
│   │   ├── functions.js
│   │   ├── index.html
│   │   └── styles.css
│   ├── sitemap.xml
│   └── terms.pdf
└── src
    ├── App.css
    ├── App.js
    ├── index.css
    ├── index.js

Note: Remember to link any static files from inside the root index.html file, if applicable.
Step 3: Verify your Static Routes in the Browser
On your machine, start serving the web app over localhost with

npm run start

This should take you to this interface:
Resulting webpage after the command npm run start
This is your homepage. Remember the static index.html file create-react-app generated for you? That’s what you’re seeing! When your React app is built, its components will populate that index.html file, bringing the boilerplate to life. To test your static routes, visit them through the browser’s url bar. Remarkable! The urls localhost:3000/about, localhost:3000/shop, and localhost:3000/contact all appear and act as they did before. This is crucial to your migration because the static standalone webpages you provided do not affect the React app itself. In other words, both styles of web development can coexist on the same domain. Furthermore, the timeline of your site’s migration is entirely up to you.
As you begin to learn more about React.js with time, you can slowly morph these .html files into React components using tag-based JSX. On the other hand, you can choose to never bother with migrating the static webpages, and simply start publishing your future pages with React. The choice is all yours! That Sounds Great, but What’s the Catch? I knew this process sounded too good to be true. All described above is accurate, but there is one potential pitfall: Your root webpage (www.your-website.com/) needs to be built with React. Still, it’s not the end of the world; for most cases, this only requires changing the placeholder code in src/App.js to fit your needs. React has incredible documentation, and the web is an infinite hive of tutorials, blog posts, and other resources. Put simply, since you’ve already committed to learning and using React.js, building your homepage with it should not stand in your way. Conclusion Is migrating your static HTML / CSS / JS / jQuery page to React.js a flawless process? Not entirely. Should you still undergo the migration as soon as possible? Absolutely. Static web development is an acceptable practice… for now. As web technologies evolve, you’re going to want your framework to keep up. For those reasons, migrating to React.js is the obvious choice. Your static webpages are only a short term solution; slowly moving your codebase to a modern framework at your own pace will help future-proof your web application for the next ten years, with no sweat or tears involved.
https://medium.com/swlh/theres-no-harm-in-slowly-migrating-to-react-js-start-asap-b36bddccc60e
['Blake Sanie']
2020-12-14 06:52:18.965000+00:00
['Front End Development', 'React', 'Web Development', 'JavaScript', 'HTML']
Blockchain vs. iTunes: How Startups and Established Brands Change the Status Quo
Image source: Unsplash.com If we look at the music industry today, all of its key performance indicators grow exponentially from year to year, which marks a certain level of maturity. However, it remains a highly fragmented market, with large and famous record labels at the forefront and indie labels lagging behind in terms of revenue and digital transformation. To back this up, let’s take a look at some stats. Music streaming industry in a nutshell The total revenue of the global music industry grew by 9.7% to $19.1 billion in 2018. Almost half of the total revenue came from paid subscriptions to streaming audio services, according to a report by the International Federation of the Phonographic Industry (IFPI). The global streaming services user base increased by 44% and reached 255 million users last year. Copyright holders’ royalties from music streaming have increased by almost 10% in recent months and reached over $2 billion globally. All revenues from music streaming are shared between the authors (i.e., artists, bands, composers, other media content creators) and the companies that buy their copyrights and then share the profits (i.e., record labels, indie studios, producers). How different music industry actors can win from distributed ledger technology In 2016, TechCrunch was among the first media outlets to publish an article describing the prospects for blockchain solutions in the music industry, and streaming in particular. The article claimed that “one of the advantages of a blockchain ledger is that it can establish a more direct relationship between creators and consumers. Music can be published on the ledger with a unique ID and timestamp in a way that is effectively unalterable.
This can solve the historic problem of digital content being downloaded, copied, and modified at the leisure of users.” As a matter of fact, each record can store ownership and copyright metadata in an immutable and completely transparent database that anyone can verify and make sure that the right people get paid for their content use by 3rd parties. Among other benefits that blockchain can offer to music content creators are: New Monetization Opportunities Since all blockchain-based cryptocurrencies support micropayments, it helps eliminate any transfer costs for content owners. Also, blockchain allows users to select a record of their choice and reward stakeholders immediately with cryptocurrency. Independence from the Monopoly of Large Record Labels It can allow indie artists to finally gain independence from large record labels and, thus, make money and increase user base without giving the lion’s share of their royalties to expensive producers. Distributed ledger technology can help remove middlemen from the market and enable indie record labels to sell their content directly to users. “Token economies recognize and reward fans for coming to the stage and utilizing services. On this stage, an artist can sell tokens to fans directly in order to raise funds for creative projects. By leveraging and utilizing smart contracts, artists are then able to receive immediate payouts and monies generated from the song, rather than waiting months or even years to see their hard work paid off.” Andrew Rossow, Forbes This statement suggests that blockchain will also push indie artists to make an effort and increase their digital literacy and find new and outside-the-box ways of interacting with their target audience (e.g., selling memorabilia, collectibles and own virtual merchandise on the blockchain along with tracks and tunes). 
Despite the optimism of some experts, creating a highly secure royalty distribution system on the blockchain has yet to make a long way from ideation to the workable and effective solution. While it’s a matter of the future, some innovative startups and established brands are already working on it now. Let’s review a couple of demonstrative cases. Using Blockchain for Building Royalty Distribution and Management Solutions One of the projects that has offered the most feasible solution to the above issues is Choon founded by a famous DJ and music producer Gareth Emery and software developer John Watkinson. Choon goal is “to destroy the monopoly of major labels on the distribution of musicians’ works.” What Choon essentially does is when you pay to play a song on their platform, your money goes directly to the artist via their internal cryptocurrency called Notes ($NOTES). Besides, platform users get paid for listening to new music and thus helping unknown artists and bands gain popularity and spread the word of their music. Even playlist curators get paid for streams from their playlists. As we can see, Choon is going to evolve as a blockchain-based ecosystem that will provide different win-win opportunities for absolutely all stakeholders. All revenue splits are handled automatically thanks to smart contracts. Artists and listeners will be able to withdraw fiat money on any crypto exchange that lists Notes. How will Choon make money? Through advertising. At the moment, the platform already features over 5,000 registered artists, but mostly the indie ones. This can pose a particular risk as the platform may be struggling to attract really “big fish” in the future, i.e., famous labels and artists. Competitive platforms include Musicoin, Ujo, Resonate, Bittunes, and more. Another case is from the corporate world. 
Large and established brands that build multi-level blockchain-based platforms have better opportunities to create effective royalty management systems, as they have a better chance to attract the best blockchain architects and developers. In June 2018, two large corporate players, Microsoft and Ernst & Young launched a blockchain solution for media and entertainment content rights and royalties management. The system is designed to “enable increased trust and transparency between industry players, significantly reduce operational inefficiencies in the rights and royalties management process, and eliminate the need for costly manual reconciliation and partner reviews.” In addition, the solution provides real-time visibility of all sales transactions in the blockchain network and help all stakeholders respond faster and smarter to new market needs by providing in-depth analytics and insights and fostering informed decision-making. The system is based on the Azure platform; EY will be in charge of creating smart contracts and legal support of the solution along the entire value chain. Due to the lack of working blockchain systems for distribution of royalties, it is difficult to talk about the growth of profits for musicians and composers so far. However, according to a 2017 study of the Inter-American Development Bank (IADB), the introduction of such a solution will benefit first of all large platforms. “While the transformative nature of blockchain ought to be celebrated, it would be naïve to hope that, absent good policies, the industry as a whole (principally artists) will be well served merely by technological advances fueling novel business models,” said Ignacio De Leon and Ravi Gupta of IADB. When it comes to the music industry, cyber piracy inflates the cost of policing IP infringement, especially in jurisdictions that lack adequate legal protections for copyrights and enforcement. 
So the question is whether distributed ledger technology will be able to change the status quo and help artists and content creators overcome existing hurdles. We've yet to see whether blockchain can reduce the high fragmentation of the industry as a whole, and who will benefit from this technology the most — artists, indie studios, or large labels. The latter traditionally stay ahead of the curve because they can allocate huge budgets and attract top developer talent to bring their ideas to life faster and generate substantial word of mouth among all market participants. What's your take on this? Leave it in the comments below.
https://medium.com/hackernoon/blockchain-vs-itunes-how-startups-and-established-brands-change-the-status-quo-3f23e5966e0
['Vik Bogdanov']
2019-06-12 16:10:19.705000+00:00
['Music', 'Media', 'Business Case', 'Distributed Ledgers', 'Blockchain']
Thank You to My High School Crush
My junior year (and his senior year) I had a challenging school schedule. After the first week, I dropped a class to pick up an extra study hall. That single decision would profoundly impact not just my junior year but my life. I strolled into the study hall scanning for any familiar faces. Who should be there? My blonde crush. I summoned up the courage to say hello and found a nearby seat. He and I would spend every 6th period for the rest of the school year in those red auditorium seats. In those early afternoon hours, we were unknowingly forging a friendship of a lifetime over music (particularly the Smiths and Morrissey), mundane high school subjects, and unspoken things like the final throes of my parents' marriage as well as his own issues. But, mostly? We shared music, sarcasm, and laughter. He influenced my musical taste as much as anyone has in my life. I went by his house after he graduated. I'm sure I had the urge to kiss him, but I never acted on that. I never told him how I felt. He told me he had something for me. He grabbed a shabby silver-ish ring and thrust it into my hand. He didn't say much, just that he wanted me to have it. It wasn't even made of real metal. It was already starting to chip. And it was definitely too large. But I cherished that ring. In fact…I still have it. To anyone else it would be worthless. I'm no romantic. But I am sentimental. That scrap belongs with me. He moved away to attend college in another town. We stayed in touch, but it wasn't easy during those pre-Internet days. Eventually I drove down to see him over spring break of my freshman year at FSU. More music, more laughter. I was as enamored with him as I'd ever been. We ended up alone, listening to the Smiths (natch). Three years after that haircut, two and a half years since that fateful decision to add that study hall, countless hours of time spent together being silly, and lots of time spent apart while we were in different cities.
It all culminated in his leaning in to kiss me for the first time as “Back to the Old House” played. I had to return to FSU. He had a girlfriend. Life went on as it always had. But our lives would intersect again. And again. And again. And again.
https://bonniebarton.medium.com/thank-you-to-my-high-school-crush-1e2f1ce1bbfa
['Bonnie Barton']
2018-09-04 22:31:59.272000+00:00
['Gratitude', 'Relationships', 'Music', 'Friendship', 'Love']
8 Advanced Python Tricks Used by Seasoned Programmers
1. Sorting Objects by Multiple Keys Suppose we want to sort the following list of dictionaries: people = [ { 'name': 'John', 'age': 64 }, { 'name': 'Janet', 'age': 34 }, { 'name': 'Ed', 'age': 24 }, { 'name': 'Sara', 'age': 64 }, { 'name': 'John', 'age': 32 }, { 'name': 'Jane', 'age': 34 }, { 'name': 'John', 'age': 99 }, ] We don't want to sort it by name or age alone; we want to sort it by both fields. In SQL, this would be a query like: SELECT * FROM people ORDER BY name, age There's actually a very simple solution to this problem, thanks to Python's guarantee that sort functions offer a stable sort order. This means items that compare equal retain their original order. To achieve sorting by name and age, we can do this: import operator people.sort(key=operator.itemgetter('age')) people.sort(key=operator.itemgetter('name')) Notice how I reversed the order. We first sort by age, and then by name. With operator.itemgetter() we get the age and name fields from each dictionary inside the list in a concise way. This gives us the result we were looking for: [ {'name': 'Ed', 'age': 24}, {'name': 'Jane', 'age': 34}, {'name': 'Janet', 'age': 34}, {'name': 'John', 'age': 32}, {'name': 'John', 'age': 64}, {'name': 'John', 'age': 99}, {'name': 'Sara', 'age': 64} ] The names are sorted first; entries with the same name are ordered by age. So all the Johns are grouped together, sorted by age. Inspired by this StackOverflow question.
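The two-pass stable sort can be run end to end as follows, using the same data as above. The block also shows an equivalent single-pass variant: `operator.itemgetter` accepts multiple field names and returns a tuple key, which sorts by name first and age second.

```python
import operator

people = [
    {"name": "John", "age": 64},
    {"name": "Janet", "age": 34},
    {"name": "Ed", "age": 24},
    {"name": "Sara", "age": 64},
    {"name": "John", "age": 32},
    {"name": "Jane", "age": 34},
    {"name": "John", "age": 99},
]

# Sort by the secondary key first, then by the primary key.
# Python's sort is stable, so the age order survives within equal names.
people.sort(key=operator.itemgetter("age"))
people.sort(key=operator.itemgetter("name"))

# Equivalent single pass: a tuple key sorts by name, then by age.
people_single_pass = sorted(people, key=operator.itemgetter("name", "age"))

print(people)
print(people == people_single_pass)  # True
```

The two-pass form is handy when the keys sort in opposite directions (e.g. name ascending, age descending, via `reverse=True` on one pass); otherwise the tuple key does the same job in one call.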
https://towardsdatascience.com/8-advanced-python-tricks-used-by-seasoned-programmers-757804975802
['Erik Van Baaren']
2020-06-27 14:52:19.537000+00:00
['Technology', 'Software Development', 'Learning To Code', 'Python', 'Programming']
LinkedIn Job Postings Trends Visualized
LinkedIn has a community of professionals and is also a fitting online platform for job hunting, especially during a pandemic. In this story, we saved a data set of the number of job postings for the 35 job functions that LinkedIn is able to filter. These "functions" include broad categorizations for jobs like "design", "business development", "engineering", and more. By visualizing this data on line graphs and pie charts, we could observe similarities between the types of jobs and also gain insight into the distribution of the categories of jobs. The technical specifics of our process can be found in our other story, where we explain how to use Selenium and Beautiful Soup for scraping data or performing automated functions: https://medium.com/@sophie14159/linkedin-scrapper-a3e6790099b5 Our Dataset As a Canadian resident, I focused the data gathered with our program on jobs located in Canada. With this location filter on LinkedIn, I ran my code to search for the following types of jobs on LinkedIn. Below is the list of how many job postings fall into each job function that LinkedIn filters.
This is an example of the data gathered on August 15, 2020:

2020-08-15 Management 53147
2020-08-15 Manufacturing 49236
2020-08-15 Information Technology 44666
2020-08-15 Other 36069
2020-08-15 Sales 34510
2020-08-15 Business Development 24167
2020-08-15 Health Care Provider 19360
2020-08-15 Engineering 18664
2020-08-15 Customer Service 11247
2020-08-15 Finance 10042
2020-08-15 Administrative 8847
2020-08-15 Marketing 6185
2020-08-15 Design 5374
2020-08-15 Education 5210
2020-08-15 Training 5124
2020-08-15 Art/Creative 4919
2020-08-15 Project Management 4669
2020-08-15 Accounting/Auditing 4510
2020-08-15 Analyst 4108
2020-08-15 Science 3354
2020-08-15 Human Resources 2633
2020-08-15 Quality Assurance 2485
2020-08-15 Consulting 1983
2020-08-15 Research 1915
2020-08-15 Writing/Editing 1814
2020-08-15 Public Relations 1738
2020-08-15 Legal 1671
2020-08-15 General Business 1391
2020-08-15 Supply Chain 1173
2020-08-15 Product Management 897
2020-08-15 Purchasing 728
2020-08-15 Production 698
2020-08-15 Strategy/Planning 684
2020-08-15 Distribution 303
2020-08-15 Advertising 174

The data is stored in a text file that can be accessed in order to visualize it as either a series of line graphs or a single combined graph. Visualizing the Data With data gathered daily from August 15th to September 11th, we obtained graphs that demonstrated the trends of each job function. For example, the graph for "Health Care Provider" is posted below.
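A minimal sketch of the parsing step looks like this: turn the saved "date function count" lines into per-function time series, ready to feed into matplotlib. The line format follows the sample shown above; the `parse_postings` helper name is an assumption, not the authors' exact code.

```python
# Parse saved postings lines into per-function time series. The
# "date function count" format mirrors the sample data above; the
# helper name is illustrative, not the authors' actual code.
from collections import defaultdict

def parse_postings(lines):
    """Parse lines like '2020-08-15 Health Care Provider 19360' into
    {function: [(date, count), ...]}."""
    series = defaultdict(list)
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip blank or malformed lines
        date, count = parts[0], int(parts[-1])
        function = " ".join(parts[1:-1])  # function names may contain spaces
        series[function].append((date, count))
    return series

sample = [
    "2020-08-15 Management 53147",
    "2020-08-15 Health Care Provider 19360",
    "2020-08-16 Health Care Provider 19410",
]
series = parse_postings(sample)
print(series["Health Care Provider"])
# [('2020-08-15', 19360), ('2020-08-16', 19410)]
```

With the real file, `parse_postings(open("jobs.txt"))` yields the same structure; each function's dates and counts can then go straight into `matplotlib.pyplot.plot` to produce one line graph per job function.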
https://medium.com/swlh/linkedin-job-trends-2dd64f1d4541
['Sophie Zhao']
2020-09-20 20:18:20.550000+00:00
['Python', 'LinkedIn', 'Selenium', 'Data Visualization', 'Job Hunting']
Average Jeans Color by State, 2020
Want to help me improve the map? Fill out the 3-question survey here! Original map: Average Jeans Color per State, 2018 This month I bought the first pair of blue jeans I've owned in a decade. I don't know what inspired me to stray from my usual black-on-black, but regardless, I'm never looking back; I've been dressing like an androgynous Jerry Seinfeld every day since. With denim on my mind, I found myself revisiting a meme I had seen on Twitter months ago: Average Jeans Color Per State. In this post, I'm going to show you how I recreated this map with Python. Data Collection The first step in this process involved researching the source and methodology used to create the original map. I scoured the internet, and eventually came to the conclusion that the origin of this map will remain a mystery to us all. In a similar vein, I was also unable to find any datasets about denim color popularity, which led me to think the original map was altogether made up. Survey Question #2: Which is your favorite shade of denim to wear? Survey Thus, I decided that if the data didn't exist, I would make my own through primary data collection. I created a survey to collect age, State of residency, and the participant's favorite shade of denim to wear, chosen from a selection of eight possible shades. I shared the survey among friend/family circles, on social media channels, and in online forums specific to survey sharing, such as r/SampleSize on Reddit. At the time of writing, I had received 377 responses. Color Thief The next step in this process involved deriving RGB and hex values from each of the eight denim picture samples from the survey. This was accomplished using a Python library called Color Thief, which can be used to grab the RGB color palette from an image. Under the hood, Color Thief uses k-means clustering to return the k most dominant colors in an image. You can specify the number of colors to grab with the color_count parameter.
rgb_palette = ColorThief(img_path).get_palette(color_count=6, quality=10) After grabbing the RGB values for each image, I wrote a function to display each original image alongside its dominant colors in a pie chart, converted to hex values: display(Image.open(img_path, 'r')) hex_palette = [webcolors.rgb_to_hex(rgb) for rgb in rgb_palette] Here are some examples using sample photos: When applied to a denim swatch, this is the result: Average RGB Values After grabbing all of the RGB values from the eight survey images, I needed to map these colors to each survey response. To accomplish this, I constructed a Pandas DataFrame from the results .csv file, and then used np.select() to map each shade to its corresponding set of RGB values. The next task involved averaging all of the responses together to arrive at one color per State. I did this by grouping the data by State, and taking the mean of each RGB value to derive the average RGB per state. This average RGB tuple was then converted to its corresponding hex value and added to the DataFrame. grouped = df.groupby('state')[['r', 'g', 'b']].mean() Mapping The last step of creating the new map involved locating a shapefile representing United States borders. This file contains GIS data on a specific location's spatial and geographic information. I merged the shapefile with my main DataFrame by using a package called GeoPandas to create a GeoDataFrame. Finally, I used Matplotlib to plot the GeoDataFrame: Updated map: Average Jeans Color by State, 2020 Limitations The purpose of this project was to learn about extracting dominant colors from images with Python more so than to derive any legitimate insights about jeans.
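The per-state averaging step can be sketched end to end as follows: group survey rows by state, average the RGB channels, and convert each mean back to a hex color. The column names ("state", "r", "g", "b") mirror the snippet above; the sample rows and the `rgb_to_hex` helper are invented for illustration, not the author's exact code.

```python
# Hedged sketch of the averaging step: groupby-mean on RGB channels,
# then format each averaged color as a hex string. Sample data and the
# rgb_to_hex helper are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "state": ["MI", "MI", "NY"],
    "r": [21, 96, 8],
    "g": [30, 110, 16],
    "b": [61, 140, 43],
})

# One averaged RGB triple per state.
grouped = df.groupby("state")[["r", "g", "b"]].mean()

def rgb_to_hex(row):
    # Round each averaged channel back to an integer before formatting.
    return "#{:02x}{:02x}{:02x}".format(*(int(round(v)) for v in row))

grouped["hex"] = grouped.apply(rgb_to_hex, axis=1)
print(grouped)
```

Each state's hex value can then be joined onto the shapefile's GeoDataFrame and passed to the plot call as the fill color.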
I caution against drawing any conclusions from the mapping aspect of this project for the following reasons: Sample Size The results shown here represent 377 survey responses, averaging ~7 respondents per State, which is not nearly large enough of a sample to legitimize a relationship between color preference and location. Survey Question 3 Responses: What State do you currently live in? Sample Bias The survey respondents do not represent an independent, random sample. My primary method of collecting responses was through my own social media accounts, meaning most respondents live in either Michigan or New York. In addition, most respondents fit a similar demographic to me in age, socioeconomic status, and education level. As a result, this sample was not truly representative of the US population, and cannot be used to generalize denim preferences. Author Bias A data science instructor once brought to my attention that even the “raw” data we use is biased in some capacity. I experienced this first-hand through this exercise: I created the survey, meaning I selected the eight denim shades respondents had to choose from, based on my own subjective idea of what a representative sample of denim looks like. Had another colleague created the survey, the outcome of the project may have looked entirely different. The “raw” data we find ultimately was produced from an entity that is biased in some capacity, as all humans are. Thus, it is important to remember as data scientists, that even when our methodology is designed to eliminate as much bias as possible, each person’s perception of the world is subjective, and so is the data they create. Want to explore the full project? Check out the Github repository here!
https://medium.com/swlh/average-jeans-color-by-state-2020-c480baf25f0f
['Khyatee Desai']
2020-10-28 18:50:25.292000+00:00
['Python', 'Lol', 'Data Visualization', 'Denim', 'Maps']
An Interview with Marc Martel of The Ultimate Queen Celebration Who Appears at The Great Auditorium on July 14
By Spotlight Central · Jul 9, 2018 Queen — the superstar band which brilliantly fused classical and rock music — topped the music charts in the 1970s and '80s with hits like "We Are the Champions," "We Will Rock You," "Another One Bites the Dust," and "Crazy Little Thing Called Love." Led by frontman Freddie Mercury, the group — which also included guitarist Brian May and drummer Roger Taylor — went on to become one of the world's most popular bands, selling hundreds of millions of records, being inducted into the Rock and Roll Hall of Fame, and earning a prestigious Grammy Lifetime Achievement Award. Following Freddie Mercury's passing, Brian May and Roger Taylor created an official Queen tribute band — The Queen Extravaganza — and after holding a worldwide video competition, they hand-selected a talented young singer to take the place of Freddie Mercury. His name? Marc Martel. A musician from Montreal, Canada, Martel had spent the past dozen or so years touring with his contemporary Christian band, Downhere. With the encouragement of friends and family members, he submitted a video of himself singing "Somebody to Love." His audition not only earned him the job in the Queen Extravaganza tribute — it also made him a YouTube sensation, got him invited to appear on The Ellen DeGeneres Show, and was ultimately seen by over 14 million people! After performing with May and Taylor in The Queen Extravaganza, Martel created his own Queen tribute band — The Ultimate Queen Celebration — a group that will perform LIVE! at Ocean Grove, NJ's Great Auditorium on Saturday, July 14, 2018 at 8pm. Unlike other tribute artists, Martel's Ultimate Queen Celebration band doesn't try to replicate the look of the original group.
Instead, Martel and his musicians concentrate on carefully recreating the sound, energy, and excitement that Freddie Mercury and Queen brought to auditoriums and amphitheatres around the world. During their upcoming Great Auditorium performance, audience members can expect to hear Martel's Mercury-like renditions of "Radio Gaga," "Killer Queen," "Tie Your Mother Down," "Fat Bottomed Girls," "Bohemian Rhapsody," and many more. At this historic venue, they will not only learn how much Martel sounds like Mercury, but that he also embodies the heart-and-soul that Freddie poured into every song he ever performed. Spotlight Central recently had a chance to interview Marc Martel, who took time off from working on a pair of Queen recording projects to chat with us about his childhood, his work as a contemporary Christian artist, and his experience performing the music of Freddie Mercury and Queen all over the globe. Spotlight Central: As a child growing up in Montreal, how did you first get involved in music? Marc Martel: That would have been from the earliest days of just being a kid with musical parents. My mom is the piano player in my dad's church — she's also the choir director — and she's just a musician through and through. I remember, as a really young child, at probably the age of four or five, that she would play the piano to put us kids to sleep. So she'd play Beethoven and stuff like that — "Moonlight Sonata," for example — and when she'd play it, I can remember analyzing the notes in my head. You see, I'd started playing piano at the age of five. And I remember hearing her play the piano, and I could tell the difference between the sound that she was able to make, as compared to what I was able to make. I remember thinking, "Why does the low end of the piano sound so full for her?" and I soon discovered it was because her hands were big enough to make an octave. And, so, I remember, as a kid, just having this penchant for analyzing music in my head.
I always loved music — it was always in the house — and my younger brother is musical, too. Plus I always had a stage — at church, for instance — where I could practice my performing chops. Spotlight Central: And what were some of your favorite types of music to listen to as a youngster? Marc Martel: I grew up on a lot of gospel music. The first artist I really fell in love with was a guy named Keith Green, who was a really fiery piano player, singer, and songwriter. He died in the early ’80s in a tragic plane crash at the age of 28. His music left a big mark on me because he was so full of energy, and the way he played the piano — he just tore it up; he lit it on fire! So I ate up his music as a budding player and I learned all of his music by ear. And then I got into a lot of Top 40 stuff — I didn’t listen to rock and roll, honestly! I was born in ’76, but I didn’t get into rock until the early ’90s when the grunge stuff started coming out. So in the ’80s, I was into a lot of softer rock like George Michael and Richard Marx, aside from the gospel music. Spotlight Central: Did you play in any bands as a teenager? Marc Martel: No. Bands were something that were really foreign to me. I was on my own doing music. Aside from being in the high school concert band and stuff like that, the rock band thing didn’t start until college. When I was 18 years old, another musician and I were put in a band to lead the music for this conference that was coming up. We had a mutual friend who put us together; he was like, “I know these two guys who are really good at music — I bet they would do a really good job doing the music for this event.” And, so, that musician and I really hit it off. We became best friends and started a band in college. I had never been in a band before and realized that it was really fun to play music with other people [laughs] and ever since, I’ve, sort of, been a band guy! 
Spotlight Central: And, of course, the group you’re talking about is your band, Downhere. Marc Martel: Yes, that’s the band we formed. Spotlight Central: And you guys started that band when you were in college in Saskatchewan, but you eventually moved to Nashville, where the group achieved international success with all sorts of recordings and awards. Do you still enjoy creating and performing Christian music today? Marc Martel: Yeah, I do — especially around Christmastime, when you can give yourself the freedom to talk about the Christmas story. I don’t do so much non-Christmas Christian music these days because doing Queen takes up a lot of my time — it has actually taken up what would be writing time for me — but I am doing a lot of recording this year, including a Queen EP and a Queen Christmas EP, which people will be able to hear in the near future. Spotlight Central: Tell us a little about the year 2011 when you auditioned to go on the official Queen tribute tour — The Queen Extravaganza Live Tour — and you became a YouTube sensation. What was all that like? Marc Martel: It was pretty nuts! It was one of those things that doesn’t really happen to people very often — something that you literally fantasize about that literally comes true. People would compare me to Freddie Mercury incessantly. 
It just became this joke after awhile: “I wonder how many people are gonna tell me I sound like Freddie Mercury after this show?” Usually, it was at least five people who would come up to me and say, “Do you know who you remind me of?” And I’d say, “Yeah, I know exactly who I remind you of!” So I started thinking, “I don’t know that it’s a particularly unique ability to sound like Freddie Mercury — there are lots of tenor rock singers in the world who I think have a similar sound — but maybe I have that little bit more that gives the music that uncanny feeling?” And so this friend of mine in Nashville got wind of an audition contest that Roger Taylor of Queen was putting together, and he forwarded the information about it to me. And as soon as I went to the website, I was thinking to myself, “There’s a good chance I could win this — I mean, if what thousands of people have told me over the past ten years is true, I don’t know how I’m not gonna win this!” And, sure enough, after I uploaded my audition video the next day, my life went haywire for about a month or two. Spotlight Central: And, today, you’re still doing Queen’s music in your own Ultimate Queen Celebration concerts! Do you have a specific Queen song or songs that you especially enjoy performing? Marc Martel: Yeah. I mean, I have to say “Bohemian Rhapsody” — it’s hard to get away from that one. And “Under Pressure” is always a favorite. Also, “Love of My Life” is a nice moment — especially when doing it live because it’s so stark and beautiful. And, actually, I love “Who Wants to Live Forever” because I’m doing a lot of symphony work lately and that’s always a highlight of those shows — it’s just a beautiful piece. Spotlight Central: So what can concertgoers expect to see and hear at your upcoming July 14 Ultimate Queen Celebration Concert at The Great Auditorium in Ocean Grove, NJ? Marc Martel: Well, pretty much everything you’d expect from a Queen concert, except for the dressing up. 
It’s high energy — a lot of fun — and people say we sound more like Queen than Queen does, so if that’s what you’re after — an authentic Queen sound with lots of energy and a great-looking show with all the emotional ups and downs — it’s all there. We have a lot of fun playing this music, people sing along with us all night long, and everyone has a great time! The Ultimate Queen Celebration starring Marc Martel will appear at The Great Auditorium in Ocean Grove, NJ on July 14, 2018 at 8pm. Ticket options for the show include $60 VIP tickets with a meet-and-greet with the band and an autographed photo; reserved seating for $30 and $50; and at-the-door general admission for $20. The Great Auditorium is located at Pilgrim and Ocean Pathways in Ocean Grove, New Jersey. All facilities are handicapped accessible. For reservations: phone 800–590–4064 or go online to www.oceangrove.org.
https://medium.com/spotlight-central/an-interview-with-marc-martel-of-the-ultimate-queen-celebration-who-appears-at-the-great-auditorium-73e2e0490d81
['Spotlight Central']
2018-07-09 12:20:39.304000+00:00
['New Jersey', 'Queen', 'Music', 'Rock', 'Tribute']
Best Python Books
Let’s ask Wikipedia what kind of language Python is. Python is a widely used high-level programming language for general-purpose programming […]. An interpreted language, Python has a design philosophy which emphasizes code readability […] and a syntax which allows programmers to express concepts in fewer lines of code than might be used in languages such as C++ or Java. So what are the top Python books? Python Crash Course is a fast-paced, thorough introduction to programming with Python that will have you writing programs, solving problems, and making things that work in no time. In the first half of the book, you’ll learn about basic programming concepts, such as lists, dictionaries, classes, and loops, and practice writing clean and readable code with exercises for each topic. You’ll also learn how to make your programs interactive and how to test your code safely before adding it to a project. In the second half of the book, you’ll put your new knowledge into practice with three substantial projects: a Space Invaders-inspired arcade game, data visualizations with Python’s super-handy libraries, and a simple web app you can deploy online. As you work through Python Crash Course, you’ll learn how to: Use powerful Python libraries and tools, including matplotlib, NumPy, and Pygal; Make 2D games that respond to keypresses and mouse clicks, and that grow more difficult as the game progresses; Work with data to generate interactive visualizations; Create and customize simple web apps and deploy them safely online; Deal with mistakes and errors so you can solve your own programming problems. Learning Python (eBook — $33.03, paperback — $42.44) Get a comprehensive, in-depth introduction to the core Python language with this hands-on book. Based on author Mark Lutz’s popular training course, this updated fifth edition will help you quickly write efficient, high-quality code with Python. 
It’s an ideal way to begin, whether you’re new to programming or a professional developer versed in other languages. Complete with quizzes, exercises, and helpful illustrations, this easy-to-follow, self-paced tutorial gets you started with both Python 2.7 and 3.3 — the latest releases in the 3.X and 2.X lines — plus all other releases in common use today. You’ll also learn some advanced language features that recently have become more common in Python code. Explore Python’s major built-in object types such as numbers, lists, and dictionaries; Create and process objects with Python statements, and learn Python’s general syntax model; Use functions to avoid code redundancy and package code for reuse; Organize statements, functions, and other tools into larger components with modules; Dive into classes: Python’s object-oriented programming tool for structuring code; Write large programs with Python’s exception-handling model and development tools; Learn advanced Python tools, including decorators, descriptors, metaclasses, and Unicode processing. Python’s simplicity lets you become productive quickly, but this often means you aren’t using everything it has to offer. With this hands-on guide, you’ll learn how to write effective, idiomatic Python code by leveraging its best — and possibly most neglected — features. Author Luciano Ramalho takes you through Python’s core language features and libraries, and shows you how to make your code shorter, faster, and more readable at the same time. Many experienced programmers try to bend Python to fit patterns they learned from other languages, and never discover Python features outside of their experience. With this book, those Python programmers will thoroughly learn how to become proficient in Python 3. 
This book covers: Python data model: understand how special methods are the key to the consistent behavior of objects; Data structures: take full advantage of built-in types, and understand the text vs bytes duality in the Unicode age; Functions as objects: view Python functions as first-class objects, and understand how this affects popular design patterns; Object-oriented idioms: build classes by learning about references, mutability, interfaces, operator overloading, and multiple inheritance; Control flow: leverage context managers, generators, coroutines, and concurrency with the concurrent.futures and asyncio packages; Metaprogramming: understand how properties, attribute descriptors, class decorators, and metaclasses work. It’s easy to start writing code with Python: that’s why the language is so immensely popular. However, Python has unique strengths, charms, and expressivity that can be hard to grasp at first — as well as hidden pitfalls that can easily trip you up if you aren’t aware of them. Effective Python will help you harness the full power of Python to write exceptionally robust, efficient, maintainable, and well-performing code. Utilizing the concise, scenario-driven style pioneered in Scott Meyers’s best-selling Effective C++, Brett Slatkin brings together 59 Python best practices, tips, shortcuts, and realistic code examples from expert programmers.
Drawing on his deep understanding of Python’s capabilities, Slatkin offers practical advice for each major area of development with both Python 3.x and Python 2.x. Coverage includes: Algorithms; Objects; Concurrency; Collaboration; Built-in modules; Production techniques; and more. Each section contains specific, actionable guidelines organized into items, each with carefully worded advice supported by detailed technical arguments and illuminating examples. Using Effective Python, you can systematically improve all the Python code you write: not by blindly following rules or mimicking incomprehensible idioms, but by gaining a deep understanding of the technical reasons why they make sense. Python Cookbook (eBook — $27.72, paperback — $30.45) If you need help writing programs in Python 3, or want to update older Python 2 code, this book is just the ticket. Packed with practical recipes written and tested with Python 3.3, this unique cookbook is for experienced Python programmers who want to focus on modern tools and idioms. Inside, you’ll find complete recipes for more than a dozen topics, covering the core Python language as well as tasks common to a wide variety of application domains. Each recipe contains code samples you can use in your projects right away, along with a discussion about how and why the solution works. Topics include: Data Structures and Algorithms; Strings and Text; Numbers, Dates, and Times; Iterators and Generators; Files and I/O; Data Encoding and Processing; Functions; Classes and Objects; Metaprogramming; Modules and Packages; Network and Web Programming; Concurrency; Utility Scripting and System Administration; Testing, Debugging, and Exceptions; C Extensions. More Python ebooks are available here for free. You may also like: Best Swift Books In 2017
https://medium.com/level-up-web/best-python-books-in-2017-b064dfac287
['Bradley Nice']
2018-09-25 13:23:32.370000+00:00
['Coding', 'Python', 'Data Science', 'Programming', 'Web Development']
Programming Has No Age: How to Learn Java Even if You Think It’s Too Late
Alex Vypirailenko · Dec 16 Photo by Martin Reisch on Unsplash The older we get, the more often we think that it’s too late for us to learn new things, especially coding. We become convinced that our brain doesn’t work the way it did in our youth and that we won’t be able to grasp the nuances of programming. In other words, we write ourselves off. But the truth is that we can all learn Java and other languages regardless of how old we are. I’ve met a lot of people in my life who started to learn coding at a mature age. And they succeeded, because people at that age see things differently and make informed choices, especially when it comes to finding the right way of learning. That, in turn, helps them master a new skill more easily. Beyond that, the IT industry is known for its friendly, accepting community, where specialists are valued for their skills, not their age. If that still sounds unconvincing, I recommend considering the following 5 arguments for why you should fulfill your wishes and start learning to code at any age. 5 Reasons Why Age is Not a Hindrance in Programming 1. Acquiring New Knowledge Keeps the Brain Working Learning to program is a mental exercise: the more you exercise your brain, the sharper your ability to focus becomes. Back in 2013, Cesar Quililan examined the impact of sustained engagement in learning new skills on mental acuity in a study published in Sage Journals. The experiment involved individuals between the ages of 60 and 90 and encouraged them to try a new hobby or craft, such as photography or quilting. After spending months acquiring a new skill, this group of participants showed the largest memory gains compared to those watching movies or playing simple games. All of this means you shouldn’t worry: at 30 or 40, your brain is working great! The main thing is to keep it in shape.
I have a happy-ending story about a 32-year-old specialist. He started with zero technical competence and had a hard time learning to code from scratch. Then he came across an online course, and after completing it he was qualified to apply for a web developer position. Nobody in the company cared about how old he was.

2. Programming Is Not About Body Flexibility or the Speed of Young Neurons

I also had a friend, Arnold, who decided to make a fresh start at 38 and, like many other adult learners, doubted his abilities. When he encountered his first challenge, he came to me and said: "What if I don't have enough energy? And indeed, why did I decide that I could do this?" Sure enough, once he stopped doubting himself and committed to repeated practice, he overcame all the difficulties. While training, students should remember that only patience and a systematic approach can lead them to success. Programming is not a discipline that requires physical preparation or has an early retirement age.

3. Educational Sources Don't Ask Your Age

These days, the web is full of interactive online courses you can use to learn Java. A few of them include:

CodeGym, an online platform that offers over 1,200 tasks for learning Java programming. Right after registering for the course, you will write lots of code to polish your skills and land a job in the future. Thanks to built-in code validation, you can have each task checked instantly and receive feedback from the virtual mentor. The course, developed using techniques such as gamification and storytelling, will keep you engaged and motivated.

Codecademy, an education company created to enhance your learning experience and keep you motivated to continue your training by providing interactive, real-world code challenges.

CodeChef, a unique platform that encourages you to learn coding by participating in programming contests and challenges hosted three times a month.
Absolute beginners, in their turn, can start with video tutorials on Coursera or Udemy, such as:

The Complete Java Masterclass, a practical class that teaches Java from the fundamentals up to the level at which you can write programs using OOP, interfaces, generics, and other concepts.

Java Programming for Complete Beginners (in 250 steps), a course that guides you through major Java concepts, from Java basics to collections, generics, exception handling, multithreading and concurrency, functional programming, networking, and file handling.

Java Certification by Duke University, an introductory Java course for newcomers that sheds light on fundamental programming concepts and provides the tools needed to solve problems.

Mentoring support from professionals can significantly help you on your path to Java programming. Other than that, Java coders are known for their friendly community, which means specialists are willing to help fresh learners who get stuck on a specific problem. So, here's a list of platforms where you can get answers to your coding questions or ask for guidance:

Java Forum, a standard programming forum that covers various topics and is separated into sections to ensure a fast, hassle-free search.

JavaWorld, a platform that brings together Java news, how-tos, features, reviews, blogs, and other Java-related things.

CodeGym Help, a community created to provide novices with answers to frequently asked questions along with fast, adequate support.

r/learnJava, a subreddit that puts together resources for learning Java.

r/learnprogramming, a subreddit for all questions related to coding in any programming language.

Besides practicing Java through online courses and video tutorials and hanging out on forums, I recommend putting blogs on the list as well. Their authors keep an eye on updates and newly added features and share them with you to enhance your coding experience.
I would single out the following two blogs as worth considering: Java Geek, a source that provides specific cases and problems clearly explained, and Bench Resources, another source covering issues and cases related to Java.

4. Your Age Doesn't Really Matter

I often tell my adult friends who doubt they can start all over again that age is just a measure of experience. After all, who said that humans should work the same job for the rest of their lives?! We are all mature individuals here who know what they want and what results they expect to see. So, if you feel like you desperately want to learn Java or any other programming language, don't put your desire off until later; start learning right now, and your effort will be rewarded shortly. Besides, don't compare yourself to other specialists, especially if they are already halfway through the journey you are just beginning. The only person you should compare yourself to is you at the jumping-off point. You'll be pleased to realize how much you are progressing compared with the previous phase. I have another positive example, a programmer from my course: he came without any previous technical experience, yet after passing a few online courses he was able to land his dream job. This and other examples once again prove that it makes no sense to worry about limited expertise or imperfect code. Everyone makes mistakes at any age, especially when they are just starting their programming journey. But once you sharpen your skills, the mistakes will fade and you will feel more confident about programming. Apart from that, both younger and older students are equally worried about whether they can get a job without sufficient work experience. First of all, technologies are developing at lightning speed, which makes it hard for any specialist to master everything at once. Secondly, many companies prefer to hire specialists with little experience and train them for specific projects. So, there is no real reason to worry.

5.
There Won't Be a Better Time Than Today

Let's face it: people are so programmed that they often wait for the right time to start something new. But to tell you the truth, the right time doesn't exist; now is the best time ever. Moreover, in the pandemic reality, working in IT is relatively safe and stable, as the tech sector, together with pharmaceuticals, logistics, and healthcare, has been less affected by Covid-19. Bev White, CEO at Harvey Nash, also said that 82% of IT managers in the United Kingdom expected their headcount to remain the same or even increase. Many companies are now looking for quick access to specialists who can help deliver digital projects at short notice. With all that said, now is high time to start learning Java or any other programming language and join the IT market.

Summing Things Up

Programming is not ballet or figure skating, where you need to be young, flexible, and flourishing. Learning to code will rather require some time, effort, and the proper mindset to start with, while your age is not something that should interfere with the training. Instead, think of your age as something that can help you notice things that younger professionals don't. In short, learning programming will keep your attention and focus at a high level.
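For readers wondering what the very first steps look like, here is a sketch of the kind of opening exercise beginner courses typically set. The class and method names are illustrative, not taken from any particular course:

```java
// A minimal first-exercise style Java program: define a small method,
// call it from main, and print the result.
public class AgeIsJustANumber {
    // Returns a short encouragement for a learner of the given age.
    static String encouragement(int age) {
        return "At " + age + ", you can absolutely learn Java.";
    }

    public static void main(String[] args) {
        System.out.println(encouragement(40));
    }
}
```

Compiling and running a tiny program like this end to end (javac, then java) is itself a useful first lesson, regardless of the learner's age.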
https://medium.com/javarevisited/programming-has-no-age-how-to-learn-java-even-if-you-think-its-too-late-3f0835e7d0f8
['Alex Vypirailenko']
2020-12-16 07:05:57.968000+00:00
['Programming', 'Java', 'Coding', 'Learn To Code', 'Learning To Code']
Cap Your To-Do List at Three Things Every Day
Cap Your To-Do List at Three Things Every Day When you get clear on what matters, you weed out what doesn’t Photo: Hero Images / Getty Tell me if this sounds familiar: You wake up thinking about all the things you have to do. But after working like mad all day, you manage to complete only half the tasks on your list. You spend the rest of the evening feeling annoyed. There’s a way to remedy this common feeling of overwhelm: Cap your to-do list at three things every day. The advice comes from Dan Sullivan, founder of Strategic Coach. Unsurprisingly, he gets a lot of pushback whenever he suggests it — high-performing entrepreneurs tell him that three tasks a day just isn’t enough to be successful. But after trying it myself, I suggest giving it a shot. If you’re like me, it may even help you to achieve your biggest goals. Capping your daily to-do list at three items forces you to ask yourself: What actually matters most today? What truly moves your needle? Is posting multiple times a day on social media really taking your business to the next level? Or would spending time honing your craft be a better long-term investment? When you get clear on what matters, you naturally weed out what doesn’t. It’s also simply good for your mental well-being. If each day you complete all the things you plan to get done — which is much more possible when there are three items rather than 23 — you can spend the rest of the evening celebrating your wins, rather than stewing over unfinished tasks. According to Sullivan, prioritizing time away from our work is one of the best things we can do for our work. After all, it’s impossible to see the opportunities around us if we only focus on what’s in front of us. At first, I was skeptical that this approach would be helpful to me. But as a dad who works from home, I often struggle to transition between work and family mode, so I decided to give it a shot. 
This past summer, while my two boys were home from school, I capped my daily to-do list at three things and cut off work at 2 p.m. With that extra time to be present with my family, I ended up feeling more inspired than I ever had. I finally managed to turn my side hustles of writing and coaching into my full-time job. It’s easy to try it for yourself. Every morning before you start working, sit down for 10 minutes and envision the three most important things you want to finish by the end of the day. Here’s a little trick I picked up from my wife: Instead of keeping this list on your phone or in your journal, try writing out each task on individual notecards. There, you can also note the steps you need to take to accomplish the task. Then, after you finish a task, place the card on top of the pile of the other actions you’ve already completed. As the days, weeks, and months pass, this growing stack of completed cards will serve as a powerful reminder of all you’ve already accomplished. Looking at it will grant you the much-deserved permission you need to walk away from your work at the end of the day, guilt-free.
https://forge.medium.com/cap-your-to-do-list-at-three-things-every-day-6bd7bd37f6fd
['Michael Thompson']
2019-10-16 14:18:59.981000+00:00
['Succeed', 'Personal Development', 'Business', 'Productivity', 'Inspiration']
On the Run (2020)
Imagine what happens if you let two artificial-intelligence entities loose in one room. Well, this:
https://medium.com/merzazine/on-the-run-2020-c227e67290e3
['Vlad Alex', 'Merzmensch']
2020-11-25 09:19:55.613000+00:00
['Chatbots', 'Artificial Intelligence', 'Merznlp', 'Art', 'Videos']
10 Important Maven plugins for Java Developers
Plugins extend Maven: with the right plugins you can do a lot of super cool stuff during the build. In this article, let's discuss some of the most frequently used Maven plugins.

Photo by Anton Nazaretian on Unsplash

If you are a Java developer, you are almost certainly aware of Maven. Maven is one of the great tools for the build process. If you integrate your project with Maven, you don't need to worry about build complexity, because Maven makes the build process smooth. In this article, we discuss some advanced Maven plugins that you should bookmark.

build-helper-maven-plugin

This plugin is mainly for adding additional source or resource directories. It is helpful if your project has a source directory other than the src folder: using this plugin, you can include more source directories as part of the build. For example, a pom.xml configured with this plugin can make the additional source directory gen/src part of the build.

maven-shade-plugin

Using the maven-shade-plugin, you can create an uber jar. The uber jar contains everything, including the dependencies of the project, so when executing it you don't need to add the dependent jars to the classpath. This plugin is very helpful for making single executable jars. In the maven-shade-plugin configuration, you can specify what should be included in or excluded from the uber jar. A typical pom.xml creates an uber jar that includes the commons-lang3 dependency while excluding unwanted files inside META-INF and other folders, based on the configuration.

sql-maven-plugin

The sql-maven-plugin is used to execute SQL statements as part of the build process. This plugin helps execute test cases that need a database connection.
With the help of this plugin, you can clean up the database, run init scripts, and populate the database based on the environment.

spring-boot-maven-plugin

The spring-boot-maven-plugin enables Spring Boot support in Maven. Using this plugin, you can control the artifact output: it can repackage the artifact as a jar or war, specify the main class explicitly, and define custom layers for the application. In a typical pom.xml with this plugin, the layout of the artifact is set to war and the main class is specified explicitly in the plugin. The build then produces two outputs: the original artifact and the one repackaged by this plugin.

maven-deploy-plugin

This plugin is mainly used for handling deployment. Using it, you can deploy a file to a remote or local repository. There are two main goals: deploy:deploy deploys the project artifact to a remote repository, while deploy:deploy-file uploads an arbitrary file into a remote Maven repository. For executing the latter, the user needs to pass the repository path, the artifact file, the group id, artifact id, version, and repository information.

maven-surefire-plugin

The maven-surefire-plugin executes test cases as part of the build and reports the results in plain-text or XML format. The plugin supports test execution tools such as JUnit and TestNG. In a sample surefire report, the build executes two test cases, one succeeding and one failing, and the report shows the execution status of both along with the reason for the failure.

maven-compiler-plugin

This plugin is used to compile the source.
By configuring this plugin in pom.xml, you can compile the source with any Java version, which is specified in the plugin itself. For example, with source and target both set to 1.8: a source of 1.8 means you can use Java 1.8 features in your source code, and a target of 1.8 means the compiled classes are compatible with Java 1.8.

launch4j-maven-plugin

This plugin is used for making Windows executables. You can create a console-based or GUI-based launcher using launch4j. To integrate the launch4j-maven-plugin with a Swing application, I created a Swing JFrame, built it as a jar file, and added the launch4j plugin to the application's pom.xml. The plugin then produced two types of executables: one console-based and one GUI-based.

appbundle-maven-plugin

This plugin is used to create an application bundle for OS X. The bundle contains your project, its dependencies and the necessary metadata files. It is also possible to embed a Java JDK in the bundle.

debian-maven-plugin

This plugin is used to build DEB packages for a Maven project. These packages can be used on DEB-based operating systems such as Ubuntu and Debian.
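To illustrate the configuration style these plugins share, here is a minimal pom.xml fragment combining the maven-compiler-plugin (source/target 1.8, as discussed above) with a maven-shade-plugin execution. The plugin versions and shade filters are illustrative, a sketch rather than a canonical setup:

```xml
<build>
  <plugins>
    <!-- Compile with Java 1.8 language features, targeting 1.8 bytecode -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.8.1</version>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
    <!-- Build an uber jar at package time, stripping signature files -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.4</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <filters>
              <filter>
                <artifact>*:*</artifact>
                <excludes>
                  <exclude>META-INF/*.SF</exclude>
                  <exclude>META-INF/*.DSA</exclude>
                  <exclude>META-INF/*.RSA</exclude>
                </excludes>
              </filter>
            </filters>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

Running mvn package with a fragment like this produces both the original jar and the shaded uber jar in the target directory.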
https://medium.com/javarevisited/10-important-maven-plugins-for-java-developers-330b98b71720
['Anish Antony']
2020-12-28 09:21:33.487000+00:00
['Web Development', 'Coding', 'Maven', 'Java', 'Programming']
Khazer
When it comes to bagels, all bets — and diets — are off!
https://backgroundnoisecomic.medium.com/khazer-5d1b4c0752cd
['Background Noise Comics']
2019-02-13 21:33:47.100000+00:00
['Health', 'Jewish', 'Diet', 'Comics', 'Cartoon']
I went a little crazy trying to choose Charted’s colors.
It was actually one of the hardest parts of building and designing it. Here were the general goals: Feel bright and colorful, but still professional Work well in order so that each growing subset of 2, then 3, then 4, etc. still looks good as a complete set Have each color work decently well with every other color, in case they end up next to each other Increase color contrast to improve accessibility Work on both white and black backgrounds, and as both lines and bars I took inspiration from Airbnb’s recent brand refresh and a couple artists. Then I started throwing things together in Illustrator, trying different combinations and approaches until my eyes fell out. I completely replaced them a few times. Eventually I settled on these seven, in this order: I’m still not totally satisfied with them. Internally at Medium, too, I get requests all the time to improve them and add more. We’ll see, I say, maybe eventually. But for now, months later, I’m still letting my eyes rest. If any color experts out there have better suggestions, there’s a list of hex values in the open source code awaiting your love.
https://medium.com/data-lab/i-went-a-little-crazy-trying-to-choose-charted-s-colors-8d4182c1d324
['Mike Sall']
2017-05-22 05:04:15.539000+00:00
['Design', 'Colors', 'Data Visualization']
Software Development Life Cycle
Let us explore the stages involved in a typical software development process. There are six stages:

Requirements phase
Analysis phase
Design phase
Development phase
Testing phase
Deployment & Maintenance phase

Gathering Requirements

This is the most important phase in the SDLC. In this phase, the technical and business teams involved in the project gather complete information about the requirements from the customers. It is critical not to make any assumptions about requirements. Practice active listening and document every elicitation activity. It involves the following activities:

1. Requirements Elicitation
The developers and stakeholders meet; the stakeholders are asked about their needs and wants regarding the software product.

2. Requirements Analysis
Requirements are identified and conflicts among stakeholders are resolved. Various written and graphical tools, such as user stories and UML, are used.

3. Requirements Specification
Requirements are documented in a formal artifact called a Requirements Specification (RS), which is officially approved only after validation, for example a Software Requirements Specification (SRS).

4. Requirements Validation
This process involves checking that the documented requirements and models are consistent and meet the stakeholders' needs. Only if the final draft passes this validation process does the RS become official.

5. Requirements Management
This involves managing all the activities pertaining to the gathered requirements, supervising them as the system is developed, and adapting to changing requirements.

Requirements engineering is crucial for the success of software products.
https://medium.com/nerd-for-tech/software-development-life-cycle-cde7f069d5f3
['Vaidhyanathan S M']
2020-11-22 03:37:31.196000+00:00
['Sdlc', 'Software Engineering', 'Software Development']
The Beauty of Big Data
Teradata’s Art of Analytics helps people make an emotional connection with the numbers that rule their world. In her day-to-day work, Yasmeen Ahmad tackles immensely complex datasets, deploying an arsenal of approaches and methodologies that would sound intimidating to most lay people. From predictive modelling to text analytics, time series analysis to development of attribution strategies, few people can easily wrap their heads around what such terms mean, and even fewer are capable of drawing actionable insights from them — which is why data scientists such as herself are always in such high demand. Ahmad worked in the life sciences industry before pivoting to commercial work, where she is now Director of Think Big Analytics, the consulting branch of IT service management company Teradata. Over many years helping clients across a variety of industries to make sense of their data, however, she realised that the best way help them see meaning in those datasets was to literally paint them a picture. “Visualisation is a core component of any data science and analytical project,” she explains. “It is almost always used at the beginning to understand the datasets you are working with, and can help to quickly identify anomalies, outliers and strong correlations in the data.” As far as data is concerned, she says, a picture really is worth a thousand words, as visualisation helps to add meaning on top of data that is much easier to assimilate for humans than descriptive words or single numbers and values alone. Her team would therefore routinely include such visuals when they presented their key results to clients, and found that even people who might not be well versed in data science or technology could still connect with them. The visualisations supported storytelling around a project, engaging business stakeholders to understand connections, relationships and associations in the data. 
“As more investment goes into data platforms and analytical technologies, the artwork helps to provide a face to this investment. We had business leaders commenting on how beautiful the visualisations were. Colour, shape and layout were all dimensions that were used to convey meaning. The choice of how a visualisation was formed is actually a creative process — like creating a piece of art.” From there, she explains, it was a short leap to the idea behind the Art of Analytics project, which brings together a range of those visualisation pieces from their previous projects. Ahmad believes one of the main strengths of the project is its ability to bring data to life for lay audiences, creating a connection between data insights and observers and bridging the technical gap. “The visualisations push the human to look beyond individual numbers and values, to thinking about data as a series of connections to be explored. They make it particularly easy to see associations, connections, pathways etc. The art is providing form to the complex fields of big data and data science — making them accessible to a wider audience.” Yet the usefulness of data visualisation is not limited to non-technical people. According to Ahmad, it is also a key component of a data scientist’s toolkit: “During my life sciences research, I was working in a field where everything was abstracted. The data I analysed often came from human cell samples that could only be seen under microscopes. Hence, collecting data about these samples, analysing it to create insight and then relating the insight back to reality was somewhat difficult. Visualisation was key to help portray not only the insights, but also how they linked to human cells and biology in general.” Art of Analytics was also an opportunity for Ahmad to bring together those creative and scientific sides.
Data science, she says, is actually a highly creative discipline that combines technical know-how with lateral thinking and the ability to tease out stories from complex datasets. “I believe that art and science need to come together to help to solve the world’s most complex problems, and the best data scientists are not only great at statistics, maths and analytical subjects, but are also creative problem solvers who can translate their work into meaningful messaging that connects with their business and commercial audiences.” This is an on-going project, and there are plans to create new data representations as they work with new datasets they haven’t encountered before. Ahmad is also keen to explore how other media such as animation, video and perhaps even VR could help add other dimensions to that work. By creating a video, she says, it would be possible to create another level of emotional connection with audiences, by representing how those relationships have evolved over time.
https://medium.com/edtech-trends/big-data-is-beautiful-d8397e10e4f8
['Alice Bonasio']
2017-11-22 08:57:47.281000+00:00
['Data Science', 'Big Data', 'Data Visualisation', 'Analytics', 'Art']
Why and how you should protect your Web Applications in the cloud
The public Internet is brutal. It is essential to have a Web Application Firewall (WAF) and powerful Content Delivery Network (CDN) capabilities to protect your Web applications and Web sites. But which vendor should we choose, and why?

The average cost of a data breach has risen to $3.92 million. Reports show a 1.6% increase in costs in 2018 and a 12% rise over the last five years. Fines for violating the regulation can range up to €20 million ($22.5 million) or 4 per cent of a company's annual global revenue — whichever is greater.

Top cases of data breaches

Globally, just under 30% of organizations are likely to suffer at least one breach over the next 24 months. U.S. organizations face the highest costs, with an average of $8.19 million per breach, driven by a complex regulatory landscape that can vary from state to state, especially when it comes to breach notification. In the UK the figure is slightly lower than the global average, at $3.88 million. The size of the average data breach is now 25,575 records, an increase of 3.9% compared to 2018. The average breach size in the U.S. is higher, at 32,434 records, and slightly lower in the UK, at 23,600 (both figures up over 2018). Each record lost costs around $150 on average globally; in the U.S. that figure rises to $242, while in the UK the cost is $155 per record. While the loss of thousands of records at a time is becoming common, Equifax-level breaches involving millions of records are still relatively rare. A "mega-breach" of 1 million records could cost a company $42 million — up from $40 million last year — while the loss of 50 million records might cost a company $388 million.

On the other hand, the prices of attack services are becoming very low.

Attack services are inexpensive

For example, for $327 per week, bad actors can run a DDoS attack on your Web application, paralysing your business and costing you thousands or millions. Executives are starting to get the message, and companies are starting to adopt better security practices.
The penalties significantly outweigh the savings from inaction. One of the most effective ways to protect your Web applications is to introduce proactive defence mechanisms. A proactive defence infrastructure "predicts" cyberattacks before they happen and mitigates them in real time. Modern cyberattacks are sophisticated and massive. For example, a distributed denial-of-service (DDoS) attack is a malicious attempt to disrupt the normal traffic of a targeted service by overwhelming the target with a flood of Internet traffic. A DDoS attack is like a traffic jam clogging up a highway, preventing regular traffic from arriving at its desired destination. Unlike other types of cyberattacks, DDoS defence requires extensive infrastructure that can absorb malicious traffic while letting regular traffic through.

A DDoS attack in layman's terms

Other types of cyberattacks involve sending maliciously formed requests intended to disrupt business services. Many attack types can be detected in real time by Web Application Firewalls (WAF). A WAF analyses incoming traffic and automatically blocks undesired communications.

WAF blocks malicious traffic

There are hundreds of security software and hardware solutions on the market. This post is about modern cloud web applications, and therefore we shall analyse only modern cloud platforms capable of protecting against enormous DDoS attacks. We shall go through four major players on the market.

Microsoft Azure

The Microsoft Azure solution has a rich set of functionality built from various Azure components. Building and deploying multiple components may bring higher costs and may be prone to errors and misconfigurations. Ongoing support may also require more advanced Security Operations knowledge and skills. The Azure DDoS Protection service provides defence against DDoS attacks.
There are two options: Basic, which comes at no extra cost, and Standard, a paid option that provides better service, access to logs, monitoring, and L7 protection via WAF.

Azure DDoS protection

Azure Application Gateway with WAF is a web traffic load balancer that manages Web application traffic while providing centralised protection of web applications from common exploits and vulnerabilities. A centralised WAF makes security management much simpler and gives application administrators better assurance against threats or intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location rather than securing each individual web application.

Azure Web Application Firewall

Cloudflare

Cloudflare is one of the world's largest networks. Cloudflare provides security solutions to businesses, non-profits, bloggers, websites and apps. More than 20 million Internet properties are on Cloudflare, and the Cloudflare network is growing by tens of thousands each day. Cloudflare powers Internet requests for roughly 10% of the Fortune 1,000 and serves more than 1 billion unique IP addresses per day. Cloudflare provides security by protecting Internet properties from malicious activities like DDoS attacks, malicious bots, and other nefarious intrusions. Cloudflare has an excellent reputation, with very advanced DDoS protection, WAF, Content Delivery Network (CDN) capabilities, TLS traffic encryption, automatic certificate management and many other features.

Cloudflare model

Cloudflare administration includes a common control plane over multiple well-integrated services. Configuration can be done via an intuitive, secure Web portal.

Configuring a DNS zone in the Cloudflare portal

Akamai

Akamai is a very advanced solution for Web application security and content delivery. The combination of Akamai solutions covers the requirements of the most demanding customers. However, the solution can be overkill for specific applications.
Akamai has a multitude of products. The Kona and Ion products cover the majority of requirements related to cybersecurity and Web performance. Kona Site Defender provides application security at the Edge — closer to attackers and further from applications. With 178 billion WAF rule triggers a day, Akamai harnesses unmatched visibility into attacks to deliver curated and highly accurate WAF protections that keep up with the latest threats. Flexible protections help secure the entire application footprint and respond to changing business requirements, including APIs and cloud migration, with dramatically lower management overhead. Akamai reported that it successfully protected a customer experiencing the largest (1.3 Tbps) DDoS attack recorded at the time.

Akamai Kona Site Defender features

Ion is a suite of intelligent performance optimisations and controls that helps deliver superior web and mobile app experiences. Built on the SLA-backed availability of the globally distributed Akamai Intelligent Edge Platform™, Ion continuously monitors real user behaviour — applying best-practice performance optimisations automatically — and adapts in real time to content, user behaviour, and connectivity changes.

Akamai Ion

Amazon AWS CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS — both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers' users and to customize the user experience.
AWS CloudFront logo Amazon CloudFront is a highly secure CDN that provides both network- and application-level protection. Your traffic and applications benefit from a variety of built-in protections, such as AWS Shield Standard, at no additional cost. You can also use configurable features such as AWS Certificate Manager (ACM) to create and manage custom SSL certificates at no extra cost. CloudFront edge locations map Conclusion We have described four major players on the market. It is important to go through a particular system’s requirements to make your choice. We have deliberately not included Gartner Magic Quadrant charts here: these companies move around the quadrant quite fast, and the quadrant only reflects part of the security feature set. It is much better to look at the feature sets and prices of the services that you want to use. In my opinion, in general, the Cloudflare solution provides the best combination of well-integrated security services at a very reasonable price. Cloudflare also has entry-level plans, including a free plan for a simple single domain. The solution has a proven history of defending against massive-scale attacks: the Cloudflare network is capable of absorbing an attack with traffic 15 times higher than the world’s largest registered DDoS attack to date. Cloudflare is relatively easy to implement. It is a solution that works out of the box without extensive engineering effort, and it offers a very pleasant and easy administration Web portal. John Yoon Solution Architect
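As a concrete illustration of how scriptable the recommended Cloudflare setup is, creating a proxied DNS record (which routes traffic through Cloudflare’s CDN, WAF and DDoS protection) is a single call to the v4 REST API. The sketch below only builds the request pieces; the zone ID and token are placeholders, and actually sending the call (e.g. with `urllib.request`) requires real credentials.

```python
import json

API_BASE = "https://api.cloudflare.com/client/v4"

def dns_record_request(zone_id, api_token, name, origin_ip):
    """Build the URL, headers and JSON body for creating a proxied
    A record via Cloudflare's v4 API (placeholder credentials)."""
    url = "%s/zones/%s/dns_records" % (API_BASE, zone_id)
    headers = {
        "Authorization": "Bearer %s" % api_token,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "type": "A",
        "name": name,           # e.g. "www.example.com"
        "content": origin_ip,   # the origin server's IP address
        "proxied": True,        # route through Cloudflare (CDN, WAF, DDoS)
    })
    return url, headers, body
```

Setting `"proxied": True` is what puts the record behind Cloudflare’s edge rather than exposing the origin IP directly.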
https://medium.com/the-cloud-builders-guild/how-to-protect-your-cloud-web-applications-with-waf-and-cdn-what-vendor-to-choose-874e31534058
['John Yoon']
2020-02-26 23:12:57.976000+00:00
['Cdn', 'Cloudflare', 'Waf', 'Azure', 'AWS']
Khao Kha Moo (Braised Pork Leg on Rice): More Than Just a Lunch
Written by a frontend developer at Central JD FinTech Co., Ltd
https://medium.com/23perspective/%E0%B8%82%E0%B9%89%E0%B8%B2%E0%B8%A7%E0%B8%82%E0%B8%B2%E0%B8%AB%E0%B8%A1%E0%B8%B9-%E0%B8%97%E0%B8%B5%E0%B9%88%E0%B9%80%E0%B8%9B%E0%B9%87%E0%B8%99%E0%B8%A1%E0%B8%B2%E0%B8%81%E0%B8%81%E0%B8%A7%E0%B9%88%E0%B8%B2%E0%B8%A1%E0%B8%B7%E0%B9%89%E0%B8%AD%E0%B8%81%E0%B8%A5%E0%B8%B2%E0%B8%87%E0%B8%A7%E0%B8%B1%E0%B8%99-6de876a343ab
['Tuanrit Sahapaet']
2018-01-31 07:57:35.992000+00:00
['Technology', 'Engineering', 'Tech', 'Office Culture', 'Developers']
Birth Control Methods are Targeted at the Wrong Gender
Both Partners Are Responsible For Conception With the progress of technology and science, we all know how fertilization takes place, and both partners are equally responsible and answerable for conception. But that knowledge also resulted in putting the pressure of birth control primarily on women. It somewhat translated into freedom for women, so it was welcomed with open arms. But with time, the need for male-centric birth control methods was felt by everyone. So how come we still rely only on women-centric methods of contraception? There are so many products aimed at women: Oral pills IUD Contraceptive Implants Contraceptive Injections Emergency Pill or Morning-After pill Contraceptive Ring Diaphragm Sterilization or Ligation and the list goes on And for men? Only two: Condoms Vasectomy Of these, only condoms are used widely, because they protect against STDs too. When it comes to vasectomy, very few males actually go ahead, fearing their masculinity will go away with it. Yet these same men have no such problem when their wives go through the sterilization process, even though it is recommended again and again to sterilize the male, because the procedure is much simpler compared to a female’s. It’s reversible too, whereas for women the reversal operation is much more complicated and has a low success rate. That is one more reason why men should consider sterilization instead of making their wives do it. The onus of childbearing and rearing has always naturally been on women, but males can at least take some responsibility when it comes to contraception; as always, though, they don’t want to be bothered. Since childbirth affects a woman’s health primarily, she has to take steps to protect her health by guarding against unwanted pregnancy.
https://medium.com/an-injustice/birth-control-methods-are-targeted-at-wrong-gender-a402aecd4024
['Richa Khare']
2020-12-24 20:17:01.040000+00:00
['Health', 'Feminism', 'Equality', 'Culture', 'Women']
40. Slowdown landscapes: Meadow
And although Infield is deeply rooted in historical research, it is not looking backwards to some pre-Acceleration Arcadian vision. Kieran Long, ArkDes director, suggests that it is asking different questions of public spaces, as environments that could “work with nature instead of against it, making space for non-human species and sharing the city with them.” Indeed, Tegg notes that due to these intensive human-plant interactions, “Sweden’s remnant infields are among the most species-rich plant communities on Earth.” Our challenge is to live well amongst such species-rich biodiversity, as one of these active species; not by placing nature ‘over there’, cultivated across agricultural infields or lying fallow in outfields, but in public space, in urban space, amidst the infield equivalents in our One Minute Cities. Creating gardens and meadows that are curated, cared for and owned by residents is a powerful act, particularly given the current context of streets and spaces regulated and maintained by abstract others. Outside of activism, today’s streets offer little opportunity for meaningful participation. In fact, such shared gardens and meadows necessitate engagement. They require care. This creates a pull on people, which can only be fulfilled by reorganising the way we live, by flattening time and power relations, by slowing down. In that flattening, we learn not only to live with nonhumans, but as humans too. Paraphrasing Richard Sennett, the point of cities is to learn to live well with people who are not like us. Now we know we must learn to live well with nonhumans too. Morton again makes clear how all these patterns are entwined when he says “we make a thin, rigid and untenable distinction between the human and the nonhuman.
And the big reason for that is racism.” Through the traditional lenses of disciplines and institutions, it is unlikely that we immediately connect an art installation in a car park about a Swedish meadow to the Black Lives Matter protests on the streets. Yet this is the powerful idea in Morton’s thinking: that all of these false separations, and ‘over yonder’-ing, and othering, are forms of racism, essentially. They deny a basic truth about our relationships. With other ways of seeing, we can look to the decisions and actions we take about gardens, meadows, streets, and neighbourhoods as continuous interconnected tangled systems, the matter in which the otherwise intangible dark matter is played out. (The first of Büscher and Fletcher’s suggested ‘convivial conservation’ actions in The Conservation Revolution is “Historic Reparations”, both material and non-material.) Given our tendency to move on from crises — in fact, often to move backwards — we must keep putting these issues onto the table, keep searching for new ways of seeing, keep looking for clues in the environment around us, and in the work that we make. One of the joys of Olalekan Jeyifous’s drawings, for example, is that nature is not “clandestine”, just as infrastructure is not invisible. Culture, whether of the Caribbean or of code or of both, is to the fore too. The nonhuman, whether root systems or robots, is entangled with the humans and vice versa. The insights from Tegg’s Infield include a sensation of something looser, more open, foregrounding the experiential, infrastructural and cultural at the same time. As an installation, and as a garden or meadow, it demonstrates a process, a way of acting, a set of behaviours, as well as attitudes to nature, time, production, public space, urbanism, ownership and collective decision-making.
Despite its smallness — or perhaps because small is beautiful — it stands for the necessary everyday struggles and contests we will face in this shift of gears from acceleration to slowdown. Meadows over lawns I visited Infield many times, during project meetings for Street, and got to watch its progress unfold. In July, visiting with my children, we got talking to one of the ArkDes curators, who described the struggle that Infield itself had gone through. Initially, the production was derailed by corona, with Tegg coordinating the work from Australia and leaning on the Swedish crew 10,000 kilometres away. Then, when installed, a series of existential challenges, as if foregrounding the changing climate: first, an unusually late very cold period, with frost through to late May. Then a week later, intense heat with 35C+ days on the tarmac through to late June. Then … the geese! Then huge amounts of rain during July; warmish rain, but gallons of it, in continuous downpours. Who knows what next. The curator was clearly quietly impressed by the meadow’s resilience, fond of how it was facing these challenges and still thriving. (If Tegg were around, perhaps she might invoke the Australian colloquialism ‘little battler’.) The curator then did a wonderful small thing. We’d described how my son and I cut the grass on our lawn at home to deliberately leave a couple of small squares of uncut meadow. Doing so was my son’s idea, actually, though after some gentle nudging from me (in fact, one of my favourite old Fabrica projects was Project Meadow, which is a story for another day.) Listening carefully to my son, the curator led my kids over to a clump of flowering grasses (Lotus corniculatus, often known as ‘Egg and Bacon’ in English) and kneeling down, she began to describe how they could also help pollinate, like a bee.
After feeling her way through a few of the grasses with her fingers, she carefully plucked a couple of seed pods and pressed them into my son’s hand, saying we could simply pop them into our small ‘meadows’ at home. She was quick to add she was an artist, not a botanist, and that it might not work — but it was worth a try “as an experiment.”
https://medium.com/slowdown-papers/40-slowdown-landscapes-meadow-fac47d00b076
['Dan Hill']
2020-12-13 11:04:02.513000+00:00
['Cities', 'Gardening', 'Design', 'Landscape', 'Art']
4 Simple Ways to Make Someone Feel Incredibly Valued
The world doesn’t necessarily need more people who talk. It needs more people who listen, and it needs more people who care. Because really, nobody likes a phony. And in today’s world, there’s a lot out there that’s fake. People are so often posting their “best selves” online, trying to appear perfect for friends and strangers alike. The pressure to look like you have everything together and be the best version of yourself is intense. In part because of this, it can sometimes be difficult to find true authentic relationships. Not just romantic ones, but day-to-day interactions too. And it can be hard to feel like other people actually care. People want to be heard, seen, respected, and listened to. But sometimes, a person who truly makes you feel valued is hard to find. You can be that person. “People will walk in and walk out of your life, but the one whose footstep made a long lasting impression is the one you should never allow to walk out.” — Michael Bassey Johnson If someone walks away knowing that you actually listened to them as an individual and they feel truly appreciated, they’re going to remember you. So how do you go about making someone feel incredibly valued? Four simple things to keep in mind:
https://medium.com/curious/4-simple-ways-to-make-someone-feel-incredibly-valued-13520cca3d6a
['Samantha Blake']
2020-12-04 07:52:25.570000+00:00
['Relationships', 'Self', 'Advice', 'Society', 'Life']
‘Stoney’: How Post Malone Forged His Musical Identity
Rashad Grove Post Malone’s emergence onto the pop music scene reads almost like a fairy tale. During his improbable rise to the mainstream, he navigated all the hurdles before him while the universe aligned in his favour, making his debut album, Stoney, one of the most highly anticipated releases of 2016. Listen to Stoney right now. Refusing to conform Austin Richard Post has never been just a rapper or a singer. He is a musician with the rare ability to move in and out of various musical styles. His superpower as an artist is that he refuses to conform his vision of music to fit labels or definitions. He’s not beholden to any of it — and that stance, while controversial in some circles, allowed him to reach the masses. After grinding it out in relative obscurity, Malone was discovered by the FKi production team and, in August 2015, released ‘White Iverson’, which went viral and catapulted him from SoundCloud rapper to a bona fide star. As his profile began to grow, so did the status of his collaborators. He worked with Kanye West, landed a coveted spot as Justin Bieber’s tour opener and dropped his well-received August 26th mixtape, with guest appearances from Larry June, 2 Chainz, FKi 1st, Jeremih, Lil Yachty, Jaden Smith and Teo. After making a name for himself and getting co-signed by the crème de la crème of the music industry, Post Malone prepared Stoney. Released on 9 December 2016, it solidified him as a star. Artistic fluidity Spanning 18 tracks and clocking in at just over an hour in length, Stoney introduces Post Malone as a versatile artist who’s not afraid to be brutally honest about his demons. From the outset, his artistic fluidity refuses to be boxed in by critical perceptions: he integrates all his influences, from hip-hop, pop and even country music, to create a unique sound. Throughout the album, Malone addresses his struggle with drugs and alcohol addiction, and how his newfound fame has magnified those issues.
But he also knows how to break out of his melancholy shell and enjoy the fruits of his labour. Stoney taps into the full emotional range of the human experience. Though released a year before the album, ‘White Iverson’, even in its more polished album version, remains magical. An ode to the basketball Hall Of Famer, it set the tone for Stoney, launched Malone’s career and was eventually certified five-times platinum for sales of over five million digital copies. Life-changing success Featuring Quavo of Migos, ‘Congratulations’ is a celebratory anthem that encapsulated the life-changing successes both artists were experiencing. Produced by the trio of Metro Boomin, Frank Dukes and Louis Bell, ‘Congratulations’ surpassed even ‘White Iverson’, peaking at №8 on the Billboard Hot 100 to become Malone’s highest-charting single at the time. Malone invites his friends to contribute to the atmosphere he creates on Stoney. Guest co-stars include R&B star Kehlani (‘Feel’), Pharrell Williams (contributing sleek, soulful production to ‘Up There’), Justin Bieber (‘Cha-Cha’) and the minimalist motif of River Tiber (‘Cold’), all helping to round out Stoney’s diversity. Through it all, Malone delivers heartfelt lyrics and vocals over a variety of styles that make the album a unique listen. A promising debut All in all, Stoney was a promising debut album that foreshadowed the enormous success Malone would achieve. It debuted at №6 on the US Billboard 200 — an extremely strong showing for a new artist in the mid-2010s. On 6 June 2018, the album was certified triple-platinum by the RIAA, proving Malone’s assertion that the new wave of music could be genre-blind and still commercially viable. It’s Malone’s penchant for melodic hooks combined with lush trap production that make Stoney a notable debut from a burgeoning superstar. Still finding himself as an artist, it was evidence that the best was yet to come. Stoney can be bought here.
https://medium.com/udiscover-music/stoney-how-post-malone-forged-his-musical-identity-205024654d90
['Udiscover Music']
2019-12-09 14:58:48.635000+00:00
['Hip Hop', 'Music', 'Features', 'Culture', 'Pop Culture']
Bruce Shapiro’s Tech-Art Movement
Though they involve computer-driven mechanisms that imbue objects with surprising, lifelike qualities, Bruce Shapiro hesitates to call his creations robots: That term would not be inaccurate, but it conjures human-like machines popularized by Hollywood. Roboticists tend to work on more and more complex machines, often with the goal of equaling or besting human capabilities. I take an almost opposite approach: the simpler, the better… To me, nothing could be better than a single-motor robot moving real materials in a captivating way. Shapiro’s kinetic sculpture Sisyphus comes close to achieving this minimalist ideal, using a two-motor mechanism beneath a table to guide a steel ball in precise paths across a bed of sand. The result is a mesmerizing mandala-like drawing that the machine constantly creates and erases, recalling the repetitive struggle of its mythological namesake, eternally condemned to push a boulder uphill. From its first incarnation as an installation piece for science museums to the new iteration intended to live in people’s homes, Shapiro has made and remade Sisyphus for nearly twenty years. But there’s no sense of futility in this long journey—rather, there’s a broadening of possibilities. Shapiro’s current Kickstarter campaign for Sisyphus has gained the support of nearly two thousand backers, becoming the Art category’s highest-funded project to date. Shapiro sees these supporters not just as financial patrons, but also as creative collaborators who will explore his work in ways he’s never imagined: Sisyphus is not just a kinetic artwork. It is also an instrument. The paths it “plays” are just as important as the sculpture— no different than a musical instrument and the songs played on it. My hope is that by getting it out there — into more hands, eyes, and minds — people who are moved by it may spend their time and energies to compose for it. 
Time-lapse of Sisyphus’ algorithmic paths In the late ’80s, Shapiro, then working as a physician in Minneapolis, stumbled upon a bin of unusual motors labelled “steppers” while browsing a local surplus store. They had eight wires instead of the traditional two. “I had no idea how to run them,” he recalls, “but I instantly grasped that they must work by breaking rotation into discrete steps — motion pixels!” Shapiro had recently learned about computer-generated fractal art and describes the “light bulb in head” moment of realizing that he could do something similar with the physical world. Stepper motors are what allow machines like 3D printers and CNC (computer numerical control) routers to execute precise, computer-controlled movements. Today, the growing interest in DIY electronics and the emergence of open-source platforms like Arduino means that a Google search yields a wealth of tutorials, control boards, and starter kits that make it relatively easy to build something with steppers. But Shapiro’s pre-internet, pre-maker movement learning process involved going to the library to research these then-obscure components. After several months, I finally had two steppers and some crude circuits connected to the printer port of my PC running DOS. Friends and family knew that I had been consumed by this obsession for months, and I was at last ready to show off my great achievement. I gathered them down in my basement shop and presented my grand demonstration: two motors sitting on a desk with pieces of tape stuck to their shafts, looking like little flags. When I typed something on the keyboard, each motor rotated exactly one revolution. Then variations of doing that one motor at a time, different directions. At the end of the demo, I turned my gaze from the motors to my audience, and was met with boredom, annoyance, and a lot of worried looks. 
This was my first — but hardly my last — lesson that my feelings of fascination and excitement could not be depended upon to correlate with those of others! Realizing that this raw technical demo failed to convey the beauty of his vision, Shapiro set out to find a more exciting application. Easter was around the corner, so he rigged up a machine that could decorate eggs with intricate patterns, mostly to entertain his kids. It was a hit. Dubbed EggBot, the device is now available as a popular DIY kit from Evil Mad Scientist Laboratories and has even made appearances at the White House Easter Egg Roll. EggBot’s official business EggBot’s enthusiastic reception gave Shapiro the affirmation he needed to pursue his creative work full time. I left my day job practicing internal medicine to devote myself to this growing obsession with exploring motion control for making art, while at the same time being able to spend more time taking care of our kids at home. I wasn’t unhappy with medicine — I liked it. But this felt different. I loved it — and still do, more than ever. Early on, this exploration took the form of CNC machines built from scavenged industrial parts that, like EggBot, used digital control to make static sculptures with elaborate, often algorithmically generated designs. Sisyphus represented a conceptual leap. Shapiro explains, “eventually, one of my CNC machines did escape from the ‘drudgery’ of fabricating sculpture, to become the kinetic artwork itself.” This vision of the machine as a factory worker suddenly freed to express itself conveys the endearing, Geppetto-like affection Shapiro has for his creations. Beyond that, it points to the inspiration he draws from observing life and nature. 
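The “motion pixels” idea Shapiro describes, steppers breaking rotation into discrete steps, can be sketched in a few lines. Below is a toy full-step drive sequence for a two-coil (bipolar) stepper; the wiring polarities and the 200-steps-per-revolution figure are common conventions used here purely as illustrative assumptions, not the specifics of Shapiro’s surplus-store motors.

```python
# Each tuple is the polarity applied to coils (A, B); cycling through
# the sequence advances the shaft one discrete step at a time.
FULL_STEP_SEQUENCE = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def steps_for_rotation(steps_per_rev, revolutions):
    """Yield the coil states needed to turn the shaft by `revolutions`."""
    total = int(steps_per_rev * revolutions)
    for i in range(total):
        yield FULL_STEP_SEQUENCE[i % len(FULL_STEP_SEQUENCE)]
```

Driving a motor through exactly one revolution, as in Shapiro’s basement demo, is then just replaying this sequence the right number of times.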
While never taking the literal approach to mimicking human and animal behavior embraced by eighteenth-century builders of automata and the roboticists of today, Shapiro keeps a notebook of movements and phenomena he encounters in daily life, ranging from the way bubbles form in water to the way fabric moves. I look for movement in nature that I find beautiful or interesting and that my bag of motion control “tricks” might allow me to capture. The way a cheetah runs? Gorgeous, but no chance with my simple machines. But once in a while, I can tease out at least some essence of a movement I find beautiful, like the flow of silk through the air in traditional Chinese ribbon dance. My Ribbon Dancer installations came out of striving toward this goal. Ribbon Dancer on display at Questacon — The National Science and Technology Centre in Canberra, Australia Indeed, Shapiro’s work is notable for how organic and unmechanical it feels. While artists going back to the Italian Futurists in the early twentieth century have celebrated the aesthetic power of machines, Shapiro doesn’t fetishize the impressive technology behind his work, treating it as a tool for creating beautiful experiences. And though the mechanisms in pieces like Sisyphus are often hidden, he is unequivocal about wanting to inspire audiences’ curiosity about how they work: It is no accident that my large public works are installed in science centers… While I needed no convincing as a child, working as an artist-in-residence at a science museum taught me that many kids view the practice of science as sterile, full of memorization, and generally not fun. But when they see work like mine, they can see all this science and technology being used for exactly that — fun! My hope is to entice people to want to learn how to do motion control for making stuff, be it in the guise of a CNC machine or 3D printer or a whimsical art-machine. 
Sisyphus’ inner workings This generous, democratizing impulse is certainly found in Shapiro’s decision to create a version of Sisyphus that people can experience in their homes—an idea that grew out of his involvement with a now-defunct Minneapolis makerspace called The Mill. Aside from professional CNC machines that made fabrication faster and easier to repeat, he found a community of collaborators who could help him push his ambitions further. The machines were invaluable, but perhaps more so were the other makers I met there. One in particular made a lasting impression: Micah Roth, who had already developed a reputation for being able to do and make anything. When that makerspace closed, we decided to form a new one called Nordeast Makers, which will serve as our center of production. Shapiro and Roth build a Sisyphus table And this appreciation of community is also at the heart of Shapiro’s effort to share his work on Kickstarter. In contrast to the commercial art world, which thrives on exclusivity, Shapiro sees value in opening up his work and making it more accessible. He views the backers of his campaign less as collectors or customers but as collaborators who will use Sisyphus as a creative instrument for their own imaginations. There’s little precedent for offering such a complex, idiosyncratic work at this scale — artistic editions generally center around printmaking or other techniques designed for easy reproduction. But in carving out an approach that leverages the recent accessibility of automated manufacturing, Shapiro hopes to inspire others to follow suit:
https://medium.com/kickstarter/bruce-shapiros-tech-art-movement-7f9c9c3545dd
[]
2019-05-16 17:01:19.401000+00:00
['Technology', 'Design', 'Robotics', 'Art', 'Kickstarter']
On Society
Imagine a utopian society where no one engages in robbery or stealing in any form. If one day there comes the exceptional case of a single individual who has stolen something, one can easily attribute the occurrence to the personally unique traits of that individual and put all of the responsibility upon them. The pragmatic solution might as well be to incarcerate that individual and isolate them from our utopia to revert it to a state of zero stealing. The problem is the individual, and obviously removing the individual would easily solve the problem. Now imagine a more realistic-looking society, where for example five percent of people indulge in stealing. The society is still mostly free of that crime, but as a direct result of the sheer number of individuals that constitute this society, five percent would mean a number far larger than one outlier individual. Statistically, it is highly unlikely that there exists a unique and rare personal trait that results in this situation: even if it is the result of a personality divergence, it is one that afflicts five percent of the population and hence is more logically attributable to the society as a whole rather than to distinct individuals. Factoring in the massive role of context and circumstance in the decision making of each individual as well, it is highly unlikely that these “stealing people” would be distinguishable from the rest by just considering the individual. It becomes simply not pragmatic to blame the individual, as it is rather evident that the issue is indeed with the collective. The same argument can easily be made for a lot of divergent and “undesirable” social phenomena, the deciding factor simply being the statistical significance of the phenomenon among members of the collective.
Poverty, addiction, joblessness and crime, bribery, corruption, biased journalism or misinformed scientific research, aggressive advertising and breach of personal privacy in the name of security: all are far more prevalent than could be attributed to individual misalignment, and practically can only be blamed on the society itself. However, in many cases we still tend to see these issues only through the lens of individual ethical or social failures and treat them as such, as if the reality of our society were indeed the unrealistically utopian case of our first example. More progressive societies might have started tackling a number of these issues with a more practical approach; for example, in many parts of the world joblessness is no longer understood as personal unfitness but rather as a socioeconomic phenomenon. However, we still love to declare wars on drugs and political corruption and indulge in name calling and frowning on biased journalism, while turning a blind eye to the system that causes the phenomenon. The reason for this short-sightedness is rather obvious: a single individual or a specific group is a tangible external target for the negative emotional reactions one might have towards these undesirable phenomena. The easiest and probably least fruitful, if not mostly counterproductive, response to high crime rates is of course an expanded police force and authority. The idea of taking responsibility as a collective also subjectively implies taking personal blame for issues not involving “me”, which naturally results in an emotional guard that impedes the necessary logical reflection under a non-personal and objective light.
This inappropriately personal interpretation stems from a negligence of what constitutes a society in the first place: a society is not a large group of independent individuals living in physical proximity; it is rather the rules and mechanisms that bind these individuals (who are most probably in each other’s physical proximity merely as a direct result of our now rapidly fading limits in communication, and hence by no means an essential aspect of the concept of society) to form a bigger collective with a higher level of internal synergy. As a result, when a particular problem is said to be one of the society as a whole rather than the individual, it simply means that there is an aspect of that mechanism, game if you will, that is not well understood and has resulted in an unintended and undesired outcome. This lack of knowledge can of course itself be the result of a lack of full understanding of the decision-making process of the players, i.e. the members of the society. For example, we now know that even statistically these players aren’t making rational decisions: they behave more conservatively in the face of perceived gain and take more risk in the face of perceived loss, even given mathematically identical situations. Misunderstanding of this core aspect of human decision making played an important role in 2008’s financial crisis, and although the dire effects of that crisis still affect the more susceptible economies around the world, we have yet to change the rules of the economic game that was designed on that false assumption. That economic game was among the few that were actually (mostly) designed from a systemic perspective of the collective involved rather than from individual expectations. More often than not, the mechanisms and rules fail due to an inherent flaw in the framework they were designed within.
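The gain/loss asymmetry described above has a standard formalisation in prospect theory (Kahneman and Tversky). A small sketch, using the commonly cited 1992 parameter estimates purely as illustrative assumptions:

```python
# Prospect-theory value function: concave for gains, convex and steeper
# for losses, so a loss of a given size is felt more than an equal gain.
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha           # diminishing sensitivity to gains
    return -lam * (-x) ** beta      # losses loom larger than gains
```

With these parameters a 100-unit loss weighs roughly twice as heavily as a 100-unit gain, which is exactly the asymmetry the essay points to in explaining risk-taking under perceived loss.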
Our laws are (mostly) written as guidelines to outline the exemplar citizen rather than to establish a mechanism that would in effect encourage individuals to behave as such. Crime is mostly seen as a result of ethical shortcomings on a personal level rather than as a symptom of the system that causes it. The punishments that typically accompany undesirable behavior will of course dampen the gains of such behavior and hence do affect the game; however, when the gains seem high enough, that simply changes the goal to “try not to get caught” or, in cases with much higher rewards, “let’s cheat the system responsible for enforcing the law”, which inevitably leads to deep corruption within the legal system itself. What is worse is that even after all of that we would see the situation as a result of individuals’ ethical weakness and seek the solution in the hands of “moral heroes”, while practically we are turning a blind eye to the actual problem and refusing to even attempt to understand it, let alone resolve it. For another example, one can take a deeper look at how most modern democracies are maintained. The key for a democracy to be truthful is of course the guarantee that governance is representative of the collective and of public interests. Since governance always includes access to a great amount of social power, it is always extremely beneficial for the individuals who constitute that representation to act against their own mission by sacrificing public wellbeing for private interests, most probably their own and/or those of certain select groups (such as corporations). The main mechanism we have for countering that is constant public vigilance in such affairs. However, alongside other shortcomings, this mechanism merely changes the game to that of being publicly perceived as truly a representative rather than actually being one.
In this context, one optimal solution for the game is a government that dedicates the majority of the resources at its disposal to maintaining that public image by the narrowest possible margin, i.e. that only the minimum sufficient percentage of the population holds the minimum sufficient belief in that public image. These margins of sufficiency in turn depend on how much the government, once established, is dependent on that public image to maintain function; so, for example, governments with higher reliance on tax and/or constant public votes tend to maintain a more democratic image than those reliant on military power or natural resources. All this simply means that the political game we have enacted in order to maintain democracy does not naturally lead to efficient and benevolent leadership; instead it gradually converges on populists portraying that public image in a marginally believable and acceptable manner. It is not a matter of personal ethics, as even the person who wants to utilise that leadership status towards charitable prospects for the society needs to be playing the same game. It is also notable that in the face of perceived loss, due to the aforementioned increase in risk acceptance on the part of the public, that margin of believability decreases, coming to encompass otherwise deemed impossible individuals as serious policy makers and leaders. As the cost of maintaining a believable public image drops in such cases, if the costs of propagating the perception of loss and danger aren’t too high, the strategy of criticising the status quo in an overly exaggerated manner in support of an abnormal individual or group becomes highly effective, as it yields better results for lower costs. As such, there should be no surprise that in a sociopolitical environment susceptible to such fear-mongering tactics, the rise of controversial and outlandish populists becomes all but inevitable.
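The gain/loss asymmetry invoked in the essay is the core of Kahneman and Tversky’s prospect theory. Below is a minimal sketch of its value function, purely for illustration; the parameter values are the commonly cited 1992 estimates, and the function name is mine, not drawn from the essay:

```python
# Illustrative sketch of the Kahneman-Tversky value function,
# which is concave for gains and convex (and steeper) for losses.
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha            # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)     # losses loom larger than gains

# A sure gain of 50 feels worth more than half the value of a gain of
# 100, so a 50/50 gamble on 100 is unattractive: risk aversion in gains.
sure_gain = prospect_value(50)
gamble_gain = 0.5 * prospect_value(100)

# Symmetrically, a sure loss of 50 feels worse than half the value of a
# loss of 100, so people gamble to avoid it: risk seeking in losses.
sure_loss = prospect_value(-50)
gamble_loss = 0.5 * prospect_value(-100)

print(sure_gain > gamble_gain)   # True: prefer the sure gain
print(sure_loss < gamble_loss)   # True: prefer to gamble on the loss
```

Under this value function the sure gain beats the gamble while the sure loss loses to the gamble, which is exactly the conservative-in-gains, risk-taking-in-losses pattern described above, despite the two choices being mathematically identical in expectation.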
https://medium.com/datadriveninvestor/on-society-648403a218f
['Eugene Ghanizadeh']
2018-06-21 08:13:33.413000+00:00
['Game Theory', 'Economics', 'Politics', 'Mechanism Design', 'Society']
How to Build a Tesla Data Dashboard with the Tesla API
*Update 11/13/2020: Unfortunately, Tesla has started restricting access to their API through most major services including AWS. This decision has broken this tutorial. Introduction If you own a Tesla, there is a good chance you fell in love with all of the advanced technology your vehicle puts at your fingertips. Geeking out about all of the things your car can do has become a favorite pastime of Tesla owners. While autopilot, sentry mode, auto-presenting door handles, frequent firmware updates, ludicrous mode, easter eggs, etc. are all go-tos when showing off unique capabilities, there is one really amazing feature that you probably haven’t taken advantage of yet — the Tesla API. Tesla’s instrumentation panels give you unmatched real-time information about what is happening in and around your car while you drive. However, they cannot show you a historical log of information like where you have been, your battery charge history, or how hot/cold it got in your car last night. Your Tesla cannot do things like automatically text your child when you are arriving to pick them up from swim practice. Or can it? This tutorial will show you how you can connect your Tesla to a web service to do these things and more. This tutorial will leverage Initial State as the connecting web service since Initial State provides a simple, codeless Tesla API integration. Background Info: Initial State Initial State is an Internet of Things platform for data visualizations. You can stream real-time data into your Initial State account and build dashboards and custom trigger alerts from your data. There are several nifty integrations with other services that let you build your dashboards/triggers with data from those services with only a couple of clicks (i.e. no coding required). The Tesla API integration is particularly useful for this tutorial. Initial State has a 14-day free trial and costs $8.33/mo for a paid account unless you are a student, in which case it is free. 
Background Info: Tesla API Like all good web-connected software, Tesla has an API. This API is used by the Tesla app and their internal diagnostics tools. However, Tesla does not publicly support this API or provide official documentation. That has not stopped the Tesla community from reverse-engineering their API endpoints and providing excellent documentation (e.g. https://www.teslaapi.io ). The information available through this API is extremely useful and interesting. Your vehicle’s information is secured by your Tesla credentials, specifically via OAuth, a standard protocol for authorization. This means you (and Tesla) are in control of your vehicle’s information. Getting Started Initial State account registration The first step to getting started is to create an Initial State account. Go to https://www.initialstate.com and click on the GET STARTED button to register for an account. This will start a 14-day free trial. You do not need to enter anything other than your email address and a password to create an account and go through this tutorial. Initial State app landing page after account creation After creating an account, you will be automatically logged in to the Initial State web app, which will look similar to the screenshot above. To get to the data integrations section, click on the View all integrations link or go to https://integrations.initialstate.com . This is where you will find the Tesla API integration. Click on the Tesla integration, then the Begin Setup button. Tesla API authorization To use this data integration, you will need to authorize it by clicking on the Authorize button and entering your Tesla login information. This will generate an OAuth access token and allow access to your Tesla vehicle’s information. Tesla integration setup — authorizing your Tesla Once authorization is successful, you will return to the setup page where you should see that you successfully authorized your Tesla. Let’s complete the setup. 
Tesla integration setup — starting the integration Click the Begin Setup button to set up the integration parameters. Fill out the required information on the right Give your integration a name (e.g. Tesla Vehicle Data). Select your vehicle from the drop-down (required). Leave Create a new bucket selected under “Choose a Bucket”. Leave All available data selected under “What data would you like?”. This will pull in all available information from the Tesla API. Select Every minute under “How often do you want new data?”. This is how often data will be fetched from your Tesla. Click Start Integration at the bottom right to start receiving data from your Tesla. You are ready to start viewing your Tesla data. Return to the app starting page by clicking on the icon in the top left. Tesla integration setup complete A data bucket containing your Tesla data will be automatically created and shown in the bucket shelf on the left. Click on this data bucket to open up a dashboard of your Tesla data. Tesla data dashboard Your dashboard will look something like the one above. This dashboard only shows data since you started your integration. Hop in your car and drive around to start collecting some interesting data. Create a Mobile Optimized Dashboard The default dashboard created is great for viewing on your desktop but terrible for viewing on your phone. Let’s create a second view of your data optimized for your phone. Data bucket settings Click the bucket shelf icon on the top left to show a list of your data buckets. Click the settings icon under your Tesla Vehicle Data bucket. Creating a duplicate dashboard view Click the Duplicate button. Change the name to something like “Tesla Vehicle Data (Mobile)” in the top left and click the Create button at the bottom to create a duplicate view of your Tesla data. Duplicated dashboard view A second data bucket will be shown in your bucket shelf, an identical copy of your other Tesla data view. 
Let’s change this duplicate copy’s view to a mobile optimized view. Click on the settings link under the new data bucket. Next, click on the Import View button. Importing a mobile template Paste the following URL in the input field at the top: https://go.init.st/7uzcl7i . This is a dashboard template built for a Tesla mobile view. Click on the “Tesla Data (mobile)” link below the input field and then click the Import button at the bottom to import this template. Tesla Mobile Dashboard Log in to your Initial State account on your phone and open the mobile dashboard view. It should look like the above screenshot. The same Tesla data will drive both the desktop and mobile dashboard. If you are using an iOS device, you can add an icon to your home screen and get rid of the top and bottom toolbars for a fullscreen view by following the instructions here. Cool Things You Can Do In addition to having a nice looking web dashboard, this integration adds a couple of features to your Tesla that can be used for several applications. The biggest new capabilities are historical data storage and real-time triggers (e.g. text message / email alerts). Let’s go over some cool things you can do with these new features. Create a detailed trip log
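For readers who would rather poll the API directly instead of going through Initial State, a rough sketch follows. The endpoint path and JSON field names come from the community documentation at teslaapi.io and are not officially supported (and, per the update at the top of this article, Tesla may block such requests from cloud providers); the helper function names here are my own, not part of any official SDK:

```python
# Hedged sketch of reading vehicle data from the community-documented
# (unofficial) Tesla owner API. Endpoint and field names may change.
import json
import urllib.request

API_BASE = "https://owner-api.teslamotors.com/api/1"  # unofficial

def fetch_vehicle_data(vehicle_id, access_token):
    """GET .../vehicles/{id}/vehicle_data using an OAuth bearer token."""
    req = urllib.request.Request(
        f"{API_BASE}/vehicles/{vehicle_id}/vehicle_data",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

def summarize(vehicle_data):
    """Pull a few dashboard-friendly fields out of the raw payload."""
    charge = vehicle_data.get("charge_state", {})
    climate = vehicle_data.get("climate_state", {})
    return {
        "battery_pct": charge.get("battery_level"),
        "range_mi": charge.get("battery_range"),
        "inside_temp_c": climate.get("inside_temp"),
    }

# Example with a stubbed payload (no car or token required):
sample = {
    "charge_state": {"battery_level": 81, "battery_range": 237.4},
    "climate_state": {"inside_temp": 21.5},
}
print(summarize(sample))
# {'battery_pct': 81, 'range_mi': 237.4, 'inside_temp_c': 21.5}
```

In a real script you would obtain the access token via Tesla’s OAuth flow and call `fetch_vehicle_data` on a timer, which is essentially what the Initial State integration does for you every minute.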
https://medium.com/initial-state/how-to-build-a-tesla-data-dashboard-with-the-tesla-api-4ebee4b9827c
['Jamie Bailey']
2020-11-13 15:25:37.911000+00:00
['Technology', 'Dashboard', 'Internet of Things', 'Data Visualization', 'Tesla']
How to Publish Shortform
Medium 101 How to Publish Shortform You have more flexibility than you think with your new profile pages We’ve been having a lot of fun experimenting with Medium’s new publishing tools at GEN. We publish a lot of different kinds of writing about politics and culture, including longform investigative features, opinion pieces, big ideas essays, interviews, series—anything from 500 words to 5,000 words. But sometimes the news is moving fast and we want to respond in kind. Or we want to shout out a great piece from one of our fellow anchor publications or platform writers. We want to write short, usually under 100 words. Here’s how we’re going about it, and why. Some goals for shortform Vary the writing on your profile page. Not all of our articles are created equal, right? Sometimes we want to respond to breaking news with a quick comment, other times we’ve seen a great piece floating around that relates to something we’ve already published. Think of your profile page as a feed. What’s long, what’s medium length, and what’s short? Your new Medium profile isn’t just a gathering of your writing — it’s a living, breathing thing. Introduce the reader into your profile stream. A great thing about shortform is that it doesn’t have the “read more” break under 150 words. This means shortform can be a gateway to get people into your stream, reading your work. Shout out an article, or another writer. 
Did you read something you really enjoyed? Write a short post about it, tag the writer, and link to their piece. A few technical parameters Under 150 words. This size article will appear in its entirety on your profile page. It won’t break to “read more.” Over 150 words. This will break at 100 words with a “read more.” Consider if you want people to click to read more, or if you want to cut down. The tools You’ve always had the tools for shortform. Here are a few we like to use together. Bold the first sentence of the paragraph. See, I just did it. Think of this as your headline. Now your reader knows what they’re getting into. Recirculation. Copy and paste a link into your post. Now hit “enter.” Give it a minute and the link should pop up looking like this. If you’re writing about a person, tag them. This lets them know you’re writing about them. Just type “@” and then begin to type the name of the person. It should pop up and the text will turn green when selected. A few examples Here’s a super short post. We wanted to highlight the great LEVEL series, “The Only Black Guy in the Office” by giving it a shoutout. This is about 100 words. Here we’re saying a bit more. More commentary on the piece, more stats, and more quotes. This is about 130 words. Here we’re using a quote in the Header font. Make sure you give that quote attribution below by tagging the writer. And why not link above and throw in a recirc link, too! The more you cite your sources, the better. Here’s a really cool way the Medium Coronavirus Blog team uses shortform: The pinned post at the top of their feed updates every day with the latest news. Since the news is moving so quickly, this is a great way to keep readers updated. When it comes to publishing shortform, variation—and consistency within that variation—is the key. Once you develop some formats that work for you, stick with them!
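The word-count thresholds above amount to a simple rule. Here is a small sketch of that rule in Python; the behavior mirrors what this post describes, but the function name and the exact truncation rendering are illustrative, not Medium’s actual implementation:

```python
def profile_preview(text, full_limit=150, break_at=100):
    """How a shortform post renders on a profile page, per the
    thresholds described above: under 150 words it appears in full,
    otherwise it breaks at 100 words with a "read more" link."""
    words = text.split()
    if len(words) < full_limit:
        return text, False                          # shown whole
    return " ".join(words[:break_at]) + " …", True  # truncated

preview, truncated = profile_preview("word " * 120)
print(truncated)  # False: a 120-word post is shown in full
```

The takeaway for writers is the same as in the post: staying under 150 words keeps the whole piece visible in the reader’s feed, with no click required.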
https://medium.com/creators-hub/how-to-publish-shortform-c059b5fda2d5
['Michelle Legro']
2020-11-20 22:27:04.516000+00:00
['Creativity', 'Writing Tip', 'Medium 101', 'Short Form']
Mobile-First Indexing is Here — Is Your Website Ready?
As if SEO wasn’t already complicated and time consuming enough, Google has changed the rules of the game once again. Googlebots are crawling your website a little differently now, and it’s important to understand how this can and will affect the SEO ranking of your website. Ever since “Mobilegeddon” in 2015, there has been a significant shift of focus on how important mobile-friendly websites are to SERP rankings. It only makes sense that there would be another change and people are actually wondering what took so long. WHAT IS MOBILE-FIRST INDEXING? The term simply refers to the way that Google is now crawling websites to determine where to rank them in the search engine results pages (SERP). Instead of crawling your site as a desktop user, Googlebots are now simulating mobile user experiences. According to a November 2016 article in TechCrunch, “combined traffic from mobile devices, which includes mobile and tablet devices, surpassed traffic from desktop devices for the first time by 2.5%.” With the mobile traffic shift, it makes sense that Google would change the way it measures the relevance of your site. In fact, according to Search Engine Land, Google confirmed mobile searches had surpassed desktop searches back in May of 2015, so the switch from desktop to mobile indexing appears to be long overdue. HISTORY OF MOBILE-FIRST INDEXING The Google Webmaster Central Blog first announced the testing of mobile-first indexing on November 4, 2016. They stated the reasoning behind the change is that it further improves upon their mission to continually deliver the most relevant search results to users. Although it’s technically live, it still has a way to go. If you’re in the process of designing a new website or redesigning an existing website, it’s an important factor to consider. 
Even if you weren’t considering a web redesign, if your current site isn’t responsive or mobile-friendly, it may be time to make some critical updates in order to improve the user experience for your viewers, and for Google. THE IMPACT While it’s still only one factor that goes into how Google ranks your website, it’s a highly significant one. Other major SEO factors, such as having properly working links and short page loading times, are directly correlated with how mobile-friendly or responsive your website is. If your website was originally built for a desktop user, and was later modified to meet the needs of mobile users, a mobile visitor (or Googlebot) may experience slower load times and arrive at broken links. It’s easy to see how this would negatively affect your site rank. Whether your website is mobile friendly/responsive or not, it still can be indexed at this time. Though Google’s mobile-first indexing is still in its beginning stages of implementation, it’s likely that it will continue to have more of an effect on website rankings. If you don’t make the switch, there is a high possibility that you could see a decrease in your site rank sooner than later. KNOWING WHERE YOUR WEBSITE STANDS If you aren’t sure how your current website is seen by Googlebots, you can actually get a glimpse by using the Google Search Console. Simply click on the “crawl” button, located on the left-hand side bar, and choose the option, “Fetch as Google.” To view your site as a mobile user, simply click on “mobile smartphone”, and then, “Fetch and Render.” Find out how your website is viewed by Google by running a “Fetch as Google” test. After you’ve gotten a chance to see your site from Google’s perspective, it may be time to make some changes. Take a look at the content, and determine if the desktop and mobile content is the same. If it isn’t, you may have a problem. You’ll also want to ensure that your site speed is acceptable. 
You can run a page speed test on each page of your site to check for any load time issues. Take a look at your keyword strategy, and make sure that you’re optimizing for mobile keywords and desktop keywords. There is actually a significant difference, and it will affect how easily mobile users find your site. Don’t forget to update your company information on local listing sites, such as Google My Business, Yelp, and other local review sites. According to Think with Google, “50% of consumers who conduct a local search on their smartphone visit a store within a day, and 18% of those searches lead to a purchase within a day.” Ensuring that your company information is complete, accurate, and consistent is becoming increasingly important for mobile success. Unsure if your company NAP (name, address, phone) is being displayed correctly across local review sites? Request a free local listings scan from us! Mobile-first indexing is here. Is your website ready?
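If you want a crude automated first pass over many pages, you can at least check whether each one declares a responsive viewport. The sketch below is only one small heuristic signal, nothing like Google’s actual mobile-friendly evaluation, and the function name is mine:

```python
import re

def has_responsive_viewport(html):
    """Heuristic: does the page declare a responsive viewport?
    One small mobile-friendliness signal; not a substitute for
    Google's Mobile-Friendly Test or Search Console."""
    pattern = re.compile(
        r'<meta[^>]+name=["\']viewport["\'][^>]*width=device-width',
        re.IGNORECASE,
    )
    return bool(pattern.search(html))

print(has_responsive_viewport(
    '<meta name="viewport" content="width=device-width, initial-scale=1">'
))  # True
print(has_responsive_viewport("<html><body>desktop-only</body></html>"))  # False
```

Pages that fail even this check almost certainly need attention before mobile-first indexing will treat them kindly; pages that pass still need the full render-and-speed checks described above.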
https://medium.com/commonplaces-interactive/mobile-first-indexing-is-here-is-your-website-ready-ac7edbfe9c47
['Commonplaces Interactive']
2017-05-03 17:11:21.922000+00:00
['SEO', 'Google', 'Web Design', 'Website', 'Mobile First']
Climate Change: An Appeal to the UN Committee on the Rights of Children
On the day Greta Thunberg gave her emotion-filled speech at the United Nations’ (UN) Climate Summit, another historic event involving the Swedish activist and 15 other youthful climate hawks — representing 12 countries — took place: the filing of the first-ever legal complaint about climate change to the UN’s Committee on the Rights of the Child. The communication is titled Sacchi et al. vs. Argentina, et al. Like the plaintiffs in the case of Juliana vs. US, the young petitioners — all ranging in age between 8 and 17 — are seeking to protect themselves and future generations from the harsh consequences of global climate change. Impacts like extreme droughts and rising sea levels that most of the world’s scientists have been warning of for decades; warnings that have gone mostly unheeded in terms of needed state actions. The Republic of the Marshall Islands, home to three of the petitioners, formally declared a National Climate Crisis on September 30, 2019. Portions of the Marshall Islands, a low-lying archipelago in the southern Pacific Ocean, were the site of 67 nuclear weapons tests by the United States, including the 15-megaton Castle Bravo hydrogen bomb test that produced significant fallout in the region. Having survived those tests, the Marshall Islands now face the prospect of being uninhabitable by 2050 — swallowed by the waters that have sustained their populations for hundreds of centuries. Its 29 atolls average only 6.5 feet above sea level. The petition — or communication as it is termed — was filed with the United Nations Committee on the Rights of the Child (Committee or CRC) that was established under the United Nations Convention on the Rights of the Child (UNCRC). The Committee monitors the implementation of the Convention that protects the human rights of children around the globe. The convention was signed by every country in the world, save for the United States. 
Of the 196 signatories, 45 have agreed to the Third Optional Protocol, allowing children to petition the UN directly about treaty violations. The five respondent nations named in the communication — Argentina, Brazil, France, Germany, and Turkey — are among the 45. The named respondents are aware of the causes and consequences of global warming, emitters of greenhouse gases, and signatories of the Paris Accord. The UN Committee on the Rights of Children is comprised of 18 independent experts elected by the states that oversee the operation of the Convention on the Rights of the Child by its parties. When countries ratify the CRC, they accept that they: · are bound by its clauses; · have a duty to incorporate its provisions into their domestic laws; · must subject themselves to the scrutiny and jurisdiction of the CRC. The communication filed with the Committee is quite similar in content to the pleadings filed by the Juliana plaintiffs in the US Federal District Court for Oregon. It offers a science-based description of the causes of climate change and references some of the impacts, e.g., forest fires, lost food sources, insect-borne diseases, permanently inundated coastlands, more violent and frequent weather-related events, etc. The communication is careful to point out that the consequences of Earth’s warming are being suffered now and states they will only get worse if nations don’t increase current efforts. It introduces the petitioners and describes specific harms they have already experienced: In the Marshall Islands, Petitioner Ranton Anjain contracted dengue fever in 2019, now prevalent in the islands, and Petitioner David Ackley III contracted chikungunya, a new disease there. In Cape Town, South Africa, drought has made Petitioner Ayakha Melithafa’s family, and 3.7 million other residents prepare for the day municipal water supplies run dry. The subsistence way of life of many indigenous communities is at stake. 
In Akiak, Alaska, Petitioner Carl Smith learned to hunt and fish from the elders of the Yupiaq tribe, but the salmon population on which they rely has been dying from heat stress in record numbers, and the warming temperatures have prevented his tribe from accessing traditional hunting grounds. According to the communication, the respondent nations have contributed to the climate crisis with their past emissions and are failing to put themselves on a pathway consistent with keeping the climate’s temperature rise under 2.0 degrees Centigrade over the 21st century. The failure of the named respondents is essentially the failure of all nations, including the signatories on the Paris climate accord. The pledged reductions are inadequate to the task of achieving both the aspirational 1.5 degrees and the agreed-upon 2.0 degrees Celsius targets. The cumulative sum of the respondents’ historical emissions shows that they are major emitters, responsible for a significant share of today’s concentration of GHGs in the atmosphere. Each of the respondents ranks in the top 50 historical emitters since 1850, based on fossil fuel emissions: Germany ranks 5th, France 8th, Brazil 22nd, Argentina 29th, and Turkey 31st. When land-use, such as deforestation, is factored in, Brazil surpasses France in its historical share. Establishing the reality of climate change and identifying actual harms suffered by the petitioners at the hands of the respondents by their delay in taking the steps necessary to decarbonize their economies are preludes to the central point of the petition. The communication alleges that the respondents have shifted the enormous burden and costs of climate change onto children and future generations. The relief requested in the petition is a series of findings by the Committee. The primary finding being asked for is a declaration that climate change is a children’s rights crisis. 
Also requested are findings that the respondent nations have knowingly disregarded the science-based evidence of the causes, consequences, and remedial steps necessary to protect children everywhere from the ravages of climate change. The petitioners are also asking the Committee to recommend to the respondent nations that they amend their laws and policies to make the best interests of the children a primary consideration when allocating the costs and burdens of climate change mitigation and adaptation. The petitioners are further requesting the Committee to encourage the respondents to provide for the direct access of children and their representatives to the decisionmaking process where they would have the right to express their views freely. It is not clear whether the petition is admissible. Ordinarily, complaints about states can only be brought by other states. Optional Protocol 3 (OP3) provides an avenue for the admissibility of a communication directly from a child petitioner if they can meet two requirements: has the respondent nation ratified OP3; and, have the complainants exhausted domestic remedies? The first requirement has been met, as all five respondent nations have ratified OP3. The second requirement has not been met. The petition addresses their noncompliance with the second prerequisite by suggesting there are practical problems that prevent them from complying with the condition. Each of the petitioners can point to reasons why they would not be able to pursue their complaints in their own countries. For example, petitioner Sacchi would not be able to challenge Argentina’s failure to use diplomatic means to protect her from US emissions in an Argentine court. Petitioners Alexandria Villasenor and Carl Smith are able to point to several state and federal cases, e.g., Juliana vs US, in which the question of a child’s standing to bring such suits has not been established at the federal level. 
Standing has already been denied in some state courts. Critics of the case point to the futility of it. They claim that even if the communication is taken up by the 18-member Committee, there’s very little that it can do without needed enforcement authority. Edentulism is a trait shared by many international agreements. Implementation of agreements like the Paris Accord and the UNCRC relies on the goodwill of signatories to live up to their promises. In the end, many of the commitments are little more than hollow gestures. Given that even a win for the petitioners will have little immediate or practical impact, as the respondent nations are unlikely to act on any recommendations the CRC may make, the question bluntly becomes — why bother? The meteoric rise of the youth climate movement is born of fear and frustration. If the mainstream climate-science community is accurate, the young are right to have these feelings. Surveys consistently show that the young do, in fact, suffer from climate anxiety because they won’t have the opportunities that the generations before them have had. Earth’s warming will cause the destruction of critical ecosystems and leave scarcity in its wake. A recent Washington Post-Kaiser Family Foundation poll found that 86 percent of US teenagers believe what the world’s scientists have been saying about the causes, consequences, and needed responses to the Globe’s continued warming. Their concerns are both science- and sensory-based in that they see evidence almost daily of what the experts are telling us is going to happen as the planet heats up. Youth around the world share these same fears. 
(See Figure 1) Unlike many adults, youth around the world accept the straight-line relationship between acts and consequences. Moreover, the young are not nearly as burdened with partisan bias or hesitancy about asking governments and international bodies to do the duties they’ve sworn to. Without constructive political action, they are convinced, Earth’s warming will cause the destruction of critical ecosystems and leave scarcity in its wake. Climate advocates like Ms. Thunberg and Representative Ocasio-Cortez (D-NY) have tapped into and released a force that hadn’t been paid attention to by older generations. Not only is the force being tapped, but it’s being focused on constructive ways to effectuate change. The young should be commended, not condemned, for what they are doing. Young climate hawks are looking to excise the portions of “the system” that have consistently failed to take the steps needed both to slow Earth’s warming and increase community resilience by adapting to changes in advance of their occurring. The petitioners in Sacchi, like the plaintiffs in Juliana, are looking to expand the enforceable rights of all individuals to a habitable environment. Their tender ages are being highlighted for strategic reasons. The UN and most governments acknowledge, at least on paper, that they owe a higher degree of protection to the young. The UNCRC offers a new avenue of pursuit to establish a habitable environment as a human right, and the Sacchi petitioners are willing to travel down it. Win or lose in the formal venues in which they’ve chosen to pursue protection, these cases and the publicity that surrounds them are winning in the court of public opinion. The petitions — whether filed with the CRC or a federal district court in Oregon — stimulate an open dialogue that has been missing for too long. A year ago, there was no debate in Congress about climate change. 
Now conservative Republicans are being forced into a dialogue they had hoped to avoid about a problem they’ve been unwilling to admit even exists. The Democrats, for their part, have embraced climate change as a central theme of their 2020 political campaign. Would landmark cases like Brown v. Board of Education and Roe v. Wade have ever been decided if the plaintiffs had been dissuaded by earlier attempts that failed? With each failed case, something more was learned; another step closer was taken. The Juliana plaintiffs have been standing in a federal district court in Oregon for four years — waiting to find out if they even have the right to sue. Should they be denied standing, it cannot accurately be said they failed to move the needle closer to a sustainable future. The case has inspired others in and beyond the US to challenge their legal and legislative institutions to make a habitable environment a fundamental right. The Sacchi petitioners are a diverse group, as are their stories of how climate change is affecting their lives and those of their communities. Their petition may be rejected, but that doesn’t mean they shouldn’t have bothered. Finally, consider the reaction of President Macron to France’s being named a respondent in the communication. Macron called the filing of the petition “very radical” and likely to “antagonize societies.” What does it say about the future of the planet when the truth is called radical? In my experience, cultural change never comes about without antagonizing the keepers of the status quo. Make no mistake: doing what’s necessary to combat climate change, rather than what is politically possible, requires nothing less than the cultural shift the youth climate movement has in its sights.
https://thejbsgroup78.medium.com/climate-change-an-appeal-to-the-un-committee-on-the-rights-of-children-28fca67e42d6
['Joel B. Stronberg']
2019-10-15 05:52:31.546000+00:00
['Environment', 'Politics', 'Greta Thunberg', 'Climate Action', 'Climate Change']
Code Smell 51 — Double Negatives. Not operator is our friend. Not not…
Code Smell 51 — Double Negatives Not operator is our friend. Not not operator is not our friend Photo by Daniel Herron on Unsplash Problems Readability Solutions Name your variables, methods, and classes with positive names. Sample Code Wrong Right Detection This is a semantic smell. We need to detect it in code reviews. We can tell linters to check for regular expressions like !not or !isNot as warnings. Tags Readability Conclusion Double negation is a very basic rule we learn as junior developers. There are lots of production systems filled with this smell. We need to trust our test coverage and make safe renames and other refactors. Relations More info
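The article’s Wrong and Right samples are embedded as images on Medium and don’t survive in this text dump; a minimal Python sketch of the smell and its fix (the predicate names here are illustrative, not from the original) might look like:

```python
# Wrong: a negatively named predicate forces "not ... not" at every call site.
def is_not_eligible(age):
    return age < 18

# Right: a safe rename to a positive name makes the condition read in one pass.
def is_eligible(age):
    return age >= 18

# The two are logically equivalent; only the readable one should survive review.
for age in (10, 18, 40):
    assert (not is_not_eligible(age)) == is_eligible(age)
```

The call site `not is_not_eligible(age)` is the double negative the smell warns about; renaming the predicate removes it without changing behavior, which is exactly the kind of safe refactor test coverage lets you make.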
https://medium.com/dev-genius/code-smell-51-double-negatives-993b6160f3e1
['Maximiliano Contieri']
2020-12-24 09:22:48.346000+00:00
['Software Engineering', 'Clean Code', 'Software Development', 'Programming', 'Code Smells']
Under Pressure
Under Pressure How being a voice within the LGBTQ community is like being shoved into another closet Recently I had someone from the Trans community reach out to me through the Facebook page for this publication. She wanted me to promote the messages of goodwill and hope that her “Civil Unrest Bicycle Club” was putting up on Portland area bridges. I really liked what she was aiming for, but the belief that I am somehow in a journalistic position is where that conversation gets skewed. Needless to say, I explained my position in very straightforward terms… My exact response to this individual’s plea for publication. Immediately after this response she said “OK” and unfriended me. This isn’t my issue. I was honest and forward. I did not try to placate this woman with “maybes” or “mights.” I told her the God’s honest truth, and her response was to write me off. Is this the life that’s relegated to those who gain some level of stature? I surely didn’t start writing about my transition and the issues of the trans community to be treated like dog-shit by those within the community. If you disagree with what I say or who I accept myself to be, then why bother trying to be my friend in the first place? A friend is not someone from whom you seek to extract favors, but rather someone you relate to and with whom you seek to share experiences as equals. Catharsis is a wonderful thing. Maybe this is going to end up short on context, but I am trying to make my way in a world that says I should “just die;” you know, because the narrative says I’m “mentally ill.” But we are the same. We should see one another struggle and lift one another up when we can. I cannot do a lot for many of the people who reach out to me; believe me that causes me great emotional strife. But I refuse to be emotionally blackmailed by people who seek to exploit my position. I have done NOTHING that is beyond the scope of any normal individual. I am only known because I chose not to go stealth.
I made this decision for the purpose of giving visibility to a community that often knows the safest place is the shadows. This means that I expect that the brunt of judgement should come at me from the conservative/religious crowds; instead, it seems to come most directly from within the community itself. Seriously, most of my haters are actually within the community; that’s a fact that has been really alarming. So my question is this… If you were me, and I was asking YOU to do something you are unable to give proper consideration to, or even offer a proper platform for, how would you respond given such a limited audience? The people who predominantly read my publication are already in the community. I cannot deliver a message of hope beyond my own… I am here. I choose to be seen. I will continue to exist, and I will persist. Every breath I take is an act of defiance. Every step I take is an act of self-preservation. And every word I speak is proof that the naysayers have not crushed my spirit. If that is the hope that you seek to put into the world, you do not need me waving your flag. That’s the hope that you need to put into the world. Take a breath, live out loud, and speak your truth. You don’t need me to do that for you. At the end of the day, I’m just a truck driver. I work nearly 70 hours a week; I don’t get as much rest as I’d like, my spouse is dating outside our marriage, and I am very much alone whilst doing the same. This life is soul-crushing and lonely because I have chosen to live as the woman I feel I should have always been. I assume all the hurt and consequence that comes with that; but what really catches me off guard is being disregarded by people of equal circumstance. Just because I cannot help you does not make my existence devoid of merit; if we were only friends so that you could manipulate me, were we ever “friends” to begin with? Mirrors are often an enemy of the Trans community. When you look at yourself, who do you see?
Someone who can only be exalted by my voice, or a strong-willed person who takes full ownership of that reflection and all the consequence of that acceptance? Are you an independent and strong individual who is ready to proclaim their own sovereignty, and their own public identity? If so, be your own Kira. I’ll be right here cheering you on. I want to see you be awesome, but I can’t be awesome for you. You need to do you!
https://medium.com/the-transition-transmission/under-pressure-4a120d740912
['Kira Wertz']
2019-09-25 15:29:49.342000+00:00
['Life Lessons', 'LGBTQ', 'Advocacy', 'Facebook', 'Transgender']
Pibble Sponsors the Hangang 42K Nightwalk in Seoul
At the foundation of Pibble is People. We want everyone to live their best life, engage with each other, and act as a support structure for each other across the globe, regardless of where they are living. You can see this within the main functions of the Pibble app: we provide a way for individual users to raise money for charities and crowdfund their own dreams. But, beyond that, we are looking for traditional partners that can help us spread awareness for blockchain and cryptocurrency while providing a positive message of the impact that it can have. With this in mind, Pibble is proud to sponsor the Hangang Nightwalk 42K. What is the Hangang Nightwalk? The ‘Hangang Night Walk 42K’ is a walking marathon program held at the Han River in Seoul (gang=river in Korean). This marathon has been held for 4 years and almost 10,000 participants join each year to walk along Hangang from night until the next morning. The event offers 3 courses for participants: 15km, 25km, and 42km routes that they can walk through the night. Before the start of the walk, everyone will gather for a concert and do some group stretching. However, the event is more about camaraderie and creating a shared experience, as it is a non-competitive walking marathon. It is about challenging your limits, focusing on your inner self, and honestly a great excuse to spend the night out with your friends and loved ones while enjoying the beauty of the Han River. With this in mind, Pibble jumped at the opportunity to sponsor an event about community building that also promotes both physical and mental health. More information soon about what Pibble will be doing as part of our sponsorship. If you are attending, come check us out at the Pibble booth! Early nightwalk from 2018 Event Details Where: Starting from the Yeouido Park at the Han River in Seoul When? 
5pm July 27th 2019 (Saturday) — 8am July 28th 2019 (Sunday) Unfortunately, all the tickets have already been sold out. If you cannot attend the Nightwalk but would still like to show your support, please check out their social media links or support us or any of the other sponsors. We hope to see anyone who has already registered at the event! As always, follow us on our other social media: Homepage: https://www.pibble.io/ Medium: https://medium.com/pibbleio Telegram: https://t.me/pibbleio_eng Facebook: https://www.facebook.com/pibbleio/ Twitter: https://twitter.com/pibbleio Youtube: https://www.youtube.com/channel/UCkMYrB69xASlhSW3DcYsPsA Need to ask a question? Contact me directly on Telegram https://t.me/bryanhigh
https://medium.com/pibbleio/pibble-to-sponsor-the-hangang-42k-nightwalk-in-seoul-1c410ac58f2
['Bryan High']
2019-07-15 09:39:52.995000+00:00
['Health', 'Blockchain', 'Charity', 'Cryptocurrency', 'Social Media']
Envisioning a Post-Trump America
Envisioning a Post-Trump America Where does the anger go? Like so many Americans, I’ve spent the last four years in an ever-increasing state of disbelief about the occupant of our highest office. A man who once mocked a reporter's disability, bragged about grabbing women “by the pussy,” and lies like it’s the only way he can draw breath has continued to redefine the social contract we all at least previously pretended to live by. With just over a week to the election and polls showing an overwhelming likelihood of a Biden victory, the question of what happens after Trump leaves office is more pertinent than ever. Assuming he loses (and I acknowledge, this is by no means a foregone conclusion), this will not be as simple as wiping a slate clean. Some of the vilest, most vitriolic parts of American society have been exposed as alive and well, and we can no longer continue to believe they are a fading relic of a time past. It would be naive to a dangerous extreme to think we won’t have to deal with a new reality in a post-Trump world. You may have witnessed neighbors you once thought benign wearing white supremacist regalia in your neighborhood store, or had family-redefining blowouts over Thanksgiving dinner. These experiences and memories will endure long after the final ballots are cast and counted, and the time to prepare for that new world is now. Where will the racists go? White supremacists and neo-Nazis emboldened under a Trump presidency sadly will not simply slink back into the countryside we like to believe they emerged from. The truth is they’ve always been here, in our cities as well as the rural areas, and as we’ve learned, ignoring them did not make them go away. America cannot unsee what it has seen in these last four years. Racism has had a comeback the likes of which even Hollywood rarely has the audacity to attempt.
From systemic and institutional racism, police violence against Black Americans, all the way down to interpersonal racist interactions captured on cell phones, our twisted history of violence around race must be reckoned with openly and honestly in a way we’ve been avoiding since our inception. The social dynamics will take some time to play out, and could get far worse before (if) they get better. Humiliation, anger, and self-delusion are dangerous states of mind that easily breed violence, and are likely to become commonplace should Trump lose. Americans will need to choose to confront and denounce racism, xenophobia, and misogyny at every turn. Coupled with the emergence of widespread race and equity training, the tools to build a better, more equitable society will be there. A 52-state solution Puerto Rico has been a commonwealth territory of the United States for over 100 years. Its residents are U.S. citizens and pay taxes. The District of Columbia pays the highest per-capita federal income taxes in the country. Neither is an actual state. Puerto Rico has no electoral votes, and the District of Columbia has no representation in the Senate, while Puerto Rico enjoys one non-voting resident commissioner in the U.S. House of Representatives. Collectively they represent 4 million non-represented American citizens, about one and a half times the number of people in Chicago. Arguments for statehood for both have been ongoing for years, but appear to be coming to a head as the United States begins to grapple with the shortcomings of its electoral college and its disparities with the popular vote. It’s been over sixty years since a new state has been added to the union, but a post-Trump consolidation of all U.S. citizens could include finally recognizing statehood for these under-represented Americans. The Supreme Court Last night, Amy Coney Barrett was confirmed and sworn into the United States Supreme Court.
She became the third justice seated on the highest court in the land under President Trump and solidified a jarring 6–3 conservative lean. The implications for reproductive rights, LGBTQIA+ rights, healthcare, and electoral sovereignty are potentially staggering, and a sad coda to the life of Ruth Bader Ginsburg. Discussions about “court-packing” have abounded in recent weeks, and are likely to intensify if Democrats do win the election, and even more so should they flip the Senate at the same time. Debates around term-limiting Supreme Court justices have also come much more to the forefront than at any time in our country’s history. It’s entirely possible we see movements to do both. Expanding and term-limiting the court are essentially the only weapons left to rebalance the scales in the justice system. Beyond that, legislative options will become much more important as new and bulletproof laws will need to be passed should the Supreme Court strike down reproductive rights, healthcare, or LGBTQIA+ protections.
https://medium.com/bigger-picture/envisioning-a-post-trump-america-6b38d7c134a0
['Ryan Nehring']
2020-11-03 01:04:02.853000+00:00
['Election 2020', 'Trump', 'Politics', 'Society', 'Social Justice']
As if it happened only yesterday
Jim Steinman’s shanty town website is neither modest nor modern. It is a dog’s dinner even by the standards of the internet dark ages, from which it is surely a relic. I’ve seen better-designed Bebo fan pages. A bunch of self-aggrandising facts, figures and celebrity chum-shots have been puked onto a homepage which offers no hint of navigation. Instead each careless click is a one-way ticket into rabbit holes which are crammed full of superlatives and from which there is no way back. It is very trippy. And for brazen brashness it is off the scale. But, if your name is Jim Steinman, it is a mirror-mirror-on-the-wall website that never gives the wrong answer to the question, “Who is the baddassest rock lyricist of all time?” Jim is. Unquestionably so. As Kerrang! magazine said back in 1983: Jim Steinman is a man who accepted excess into his heart in the way a Christian must accept God into his soul…Probably the ultimate definition of the genius-as-madman producer since Phil Spector…creator of the most gloriously pounding, emotionally derailed, headily deranged, chrome hard, wildly demented, madly powerful, too real, wholly unreal, sexually monumental, totally melodramatic all-time masterpiece album ever made. The album in question is Bat Out Of Hell by Meatloaf, for which Jim Steinman wrote all the words and all the music. It has sold over forty-three million copies since it was released in 1977. Bat Out Of Hell entered the world just as I was entering my teens. I was an impressionable young lad at high school on the outskirts of Wigan. And for several years I became an attentive student of hard rock and heavy metal music. My school timetable included five periods, also known as lunch hours, that were devoted to the subject. A group of us would eat our sandwiches at Stuart Beveridge’s house whilst listening to his elder brother’s record collection and reading his elder brother’s latest copy of Sounds magazine. 
The playlist in those days — back when records were records (including limited edition coloured vinyl and picture discs) — was a who’s who of long-haired, leather-clad, guitar-smashing egotists. We stepped out of school into a world of Marshall amp stacks, Fender Stratocasters and patchouli oil. Writing a list of bands from those days gives me chills of nostalgia: Deep Purple, Rainbow, AC/DC, Motörhead, Saxon, Led Zeppelin, Scorpions, UFO, Whitesnake, Black Sabbath, Magnum, Rush, Hawkwind, Jethro Tull, Gillan, Iron Maiden, Judas Priest, along with lesser known bands such as Michael Schenker Group, Tygers Of Pan Tang and even Budgie. And Meatloaf of course. Bat Out Of Hell was an instant icon. The album transcended both genre and playground factions. Mods, rockers, new wavers and punks alike were swept along by its originality, its swagger and its grand ambition. The songs are epic, and I do not use that word lightly. The album is a mixture of the divine and the comedic. It is meaty, beaty and big throughout, bouncy in places. The title track, which opens the album, is 9 minutes 48 seconds long. And the closing track, For Crying Out Loud, comes in at 8:45. Its first verse alone is longer than many pop songs. These majestic anthems cradle and bracket five songs which may be shorter in length but which lack nothing in stature: You Took The Words Right Out Of My Mouth, Heaven Can Wait, All Revved Up With No Place To Go, Two Out Of Three Ain’t Bad, and Paradise By The Dashboard Light. There is no padding on this record, no making up of numbers. Bat Out Of Hell is an album without album tracks. It is a clever album. Steinman worked with the standard primary themes of heavy rock — sex, drink, fast moving vehicles, and Gothic fantasy — but his stories are layered and complex, and his writing is poetic. 
Like a bat out of Hell I’ll be gone when the morning comes But when the day is done And the sun goes down And the moonlight’s shining through Then like a sinner Before the gates of Heaven I’ll come crawling on back to you It is no surprise to learn that Steinman cut his teeth writing for musicals. And it is almost no surprise that he is pictured with Andrew Lloyd Webber on the homepage of his aforementioned website. Jim Steinman is the Tim Rice of heavy rock. Bat Out Of Hell could easily be the soundtrack album for a Broadway show*. Mama Mia with power chords. Back in school I was an equally attentive student of the poems of Robert Browning. Along with Shakespeare’s Twelfth Night and some classic novels whose names escape me, the poems were part of my English Literature O-Level syllabus. I had to study them. But the attentiveness was down to my teacher. Mr Jones was very tall, but the thrall in which he held his classes came not from his physical stature but from his infectious enthusiasm for the English language. His unashamed passion made him a charismatic teacher and we gladly set aside our teenage self-consciousness and cynicism and opened our hearts and minds to the import and enchantment of literary works. Mr Jones made it cool to be keen. After years of dark grey he turned up proudly one day in a new slate green suit which drew favourable comments from colleagues and pupils alike. Encouraged by our affirmative responses he wore the new suit every day for a week or so before a cheap ballpoint pen leaked into the breast pocket of the jacket. No amount of dry cleaning could remove the stain but he continued to wear the suit nonetheless. On reflection it was entirely fitting that a man who inspired so many children to appreciate the written word should have ink so conspicuously close to his heart. 
It is down to him that now, more than thirty years later, a list of poems gives me the same nostalgic chills as a list of rock bands: Porphyria’s Lover, My Last Duchess, The Laboratory et al. Robert Browning is acknowledged as a master of the dramatic monologue; poems written in the first person, expressing the thoughts and feelings of someone other than the poet, to an assumed audience that is obviously present but not overtly referred to. And it strikes me, in a crossing of nostalgic streams, that Bat Out Of Hell is a pretty mean dramatic monologue as well as a very mean rock anthem. It shares some important characteristics of the form. For one it opens in medias res, in the middle of things: The sirens are screamin’ And the fires are howlin’ Way down in the valley tonight Compare that with the opening of Porphyria’s Lover: The rain set early in tonight, The sullen wind was soon awake, It tore the elm-tops down for spite, And did its worst to vex the lake: In both instances the language is extreme, urgent and portentous. In a dramatic monologue the speaker reveals his or her personality and attitudes through their words. And Messrs Steinman and Browning are both adept at portraying subjects with nihilistic streaks. From Bat Out Of Hell: Nothing ever grows In this rotting old hole And everything is stunted and lost And nothing really rocks And nothing really rolls And nothing’s ever worth the cost And from The Laboratory: Now that I, tying thy glass mask tightly, May gaze thro’ these faint smokes curling whitely, As thou pliest thy trade in this devil’s-smithy — Which is the poison to poison her, prithee? He is with her, and they know that I know Where they are, what they do: they believe my tears flow While they laugh, laugh at me, at me fled to the drear Empty church, to pray God in, for them! — I am here. 
Both writers are fascinated by love-crazed protagonists and the lengths to which they will go in the name of jealousy or possessiveness or insecurity in various other forms. Their heroes will do anything for love. And more often than not they will do “that”. Porphyria’s lover, concerned that Porphyria will not remain captivated by him forever, decides to preserve the moment at which he is most sure of her devotion by killing her. Happy and proud; at last I knew Porphyria worshipped me: surprise Made my heart swell, and still it grew While I debated what to do. That moment she was mine, mine, fair, Perfectly pure and good: I found A thing to do, and all her hair In one long yellow string I wound Three times her little throat around, And strangled her. And the dick-led teenage boy in Paradise By The Dashboard Light, driven to hormonal distraction by his desire to score a home run, expediently decides to say whatever he has to in the heat of the moment, leaving him with the rest of his life to repent at leisure. Lord, I was crazed And when the feeling came upon me like a tidal wave I started swearing to my God and on my mother’s grave That I would love you to the end of time I swore that I would love you to the end of time So now I’m praying for the end of time To hurry up and arrive ‘Cause if I gotta spend another minute with you I don’t think that I can really survive I’ll never break my promise or forget my vow But God only knows what I can do right now I’m praying for the end of time, It’s all that I can do Praying for the end of time so I can end my time with you Paradise By The Dashboard Light is the most clever, most knowing and most complex song on Bat Out Of Hell. It is formed of three distinct parts: Paradise, Let Me Sleep On It and Praying For The End Of Time, which navigate from naive optimism, through sexual morality and frustration, to desperate, acrimonious regret. It is a duet rather than a monologue. 
And it is five and a half minutes of brilliant storytelling. The lyrics are super-saturated with meaning and metaphor, such as the wickedly knowing baseball commentary, set to a funky guitar backing, that punctuates parts 1 and 2. Commentator: There, almost daring him to try and pick him off, the pitcher Glances over, winds up, and it’s bunted, bunted down the third Base line, the suicide squeeze is on, here he comes, squeeze Play, it’s gonna be close, holy cow, I think he’s gonna make it Girl: Stop right there! I gotta know right now Before we go any further do you love me? Will you love me forever? Do you need me? Will you never leave me? It takes me back. It takes me back because we’ve all been those teenagers. And it takes me back on a vivid reminiscence trip. I am reminded of people, places and seemingly trivial incidents that I haven’t thought about for over thirty years — ink-stained pockets, Stuart Beveridge’s neurotic dog, that Whitesnake album cover… As the first line of Paradise By The Dashboard Light says: Well, I remember every little thing as if it happened only yesterday Bat Out Of Hell is a timeless time capsule. And Jim Steinman is a genius. But, if his website is anything to go by, he’d probably think I was damning him with faint praise by saying so. *Bat Out Of Hell did become a musical, written by Steinman and incorporating songs from the three eponymous albums
https://medium.com/a-longing-look/as-if-it-happened-only-yesterday-cf6cc2f03166
['Phil Adams']
2018-11-16 21:11:37.808000+00:00
['Meatloaf', 'Music', 'Lyrics']
Busy In The Village
Today, we volunteered at a public hospital in a small agricultural village approximately 10km East of Faridabad. The hospital serves as a hub for the nine surrounding villages and Tuesdays is the day for pregnant women to receive their pre and post-natal checkups. Checkups are simple, including a basic urinalysis for glucose and protein, a simple hemoglobin test, blood typing, vitals, delivery date estimate, etc. Ultrasounds are available in large hospitals or private clinics, but it sounds like many of the women just do without them. Determination of a child’s gender before birth is illegal in India due to male favoritism. The female volunteers here were allowed to help at the ultrasound clinic last week and found the doctor quickly scanning over regions where the patient might be able to guess her child’s sex. Often, the doctor would know the gender but be unable, by law, to tell her patient. As a result of this law, the pre-natal checkups also include a questionnaire as to the mother’s caste, number of attempted deliveries, number of abortions, number of living children, and the gender of those living children. The government will investigate any suspicious activity regarding abortions. For instance, a mother who has a living female child and who aborts her next fetus will likely be scrutinized. [The following paragraph has been updated due to a misunderstanding during translation] If you read this post previously, you will recall that I said the cost of pre and post-natal healthcare depended upon one’s caste. I have since learned, however, that it’s the other way around. Mothers are actually paid different amounts depending on their caste. Because education and health care are so limited in rural villages, it has been common for mothers to deliver their children at home (using midwives) without seeking professional medical attention. This has resulted in abnormally high maternal and infant mortality rates. 
Therefore, the government has instituted the “National Rural Health Program” that actually pays certain mothers to give birth at the hospital. Today, the women were grouped into three caste categories: the General Caste (highest), Scheduled Caste (middle), and Backward Caste (lowest). A woman from the Backward Caste, for instance, is given Rs 1500 ($30) as an incentive to have three pre-natal checkups, three post-natal checkups, and deliver her child in a hospital. Women from the Scheduled Caste receive Rs 500 ($10), and women from the General Caste do not receive any financial benefit because they are believed to be wealthier and more educated. Furthermore, the hospital staff members and people called “ashas” receive a financial bonus (appx. Rs 500-Rs 1000, or $10-$20) for each child delivered in a hospital. An asha is a person who, from what I understand, serves as a motivator for the proper health care decisions of approximately 1,000 people. He or she keeps records of the people under his or her care and educates people about different health options such as the benefits of going to a hospital to give birth. These are just some of the government’s methods of incentivizing proper health. I spent the majority of the day determining blood types (AB positive, O negative, etc.), performing hemoglobin tests, conducting simple urinalyses with basic equipment, and preparing samples for tests of malaria. The most shocking aspects of the day were the frigid temperatures within the unheated hospital, the lack of alcohol swabbing before pricking, and the staff’s handling of blood-covered slides and bloody fingers without gloves or even thorough washing. This is not due to any oversight or negligence. It is just common practice here. And for those of you who are curious, yes, I wear gloves that I bring with me.
https://medium.com/squalor-to-scholar/busy-in-the-village-42c537361a4c
['John Schupbach']
2017-06-29 01:00:20.069000+00:00
['Travel', 'India', 'Health']
Facebook Won’t Accept New Political Ads in Week Before US Election
Facebook won’t accept new ads in the last week of the campaign, but that doesn’t mean you won’t see them on the social network. Anyone who buys ads before that one-week deadline is in the clear. By Chloe Albanesius Facebook this morning announced a number of updates intended to avoid misinformation and chaos in the lead-up to Election Day in the US, including a ban on new political ads a week before Nov. 3. No New Political or Issue Ads “We’re going to block new political and issue ads during the final week of the campaign,” CEO Mark Zuckerberg writes in a Facebook post. “It’s important that campaigns can run get out the vote campaigns, and I generally believe the best antidote to bad speech is more speech, but in the final days of an election there may not be enough time to contest new claims.” Facebook won’t accept new ads in the last week of the campaign, but that doesn’t mean you won’t see them on the social network. Anyone who buys ads before that one-week deadline is in the clear. “Advertisers will be able to continue running ads they started running before the final week and adjust the targeting for those ads, but those ads will already be published transparently in our Ads Library so anyone, including fact-checkers and journalists, can scrutinize them,” Zuckerberg says. Expanded Crackdown on Voting Misinformation Facebook is also expanding its crackdown on voting misinformation. “We already committed to partnering with state election authorities to identify and remove false claims about polling conditions in the last 72 hours of the campaign, but given that this election will include large amounts of early voting, we’re extending that period to begin now and continue through the election until we have a clear result,” Zuckerberg writes. The company is consulting with state election officials on whether certain voting claims are accurate, he adds. 
“We already remove explicit misrepresentations about how or when to vote that could cause someone to lose their opportunity to vote — for example, saying things like ‘you can send in your mail ballot up to 3 days after election day,’ which is obviously not true,” Zuckerberg explains. “We’re now expanding this policy to include implicit misrepresentations about voting too, like ‘I hear anybody with a driver’s license gets a ballot this year,’ because it might mislead you about what you need to do to get a ballot, even if that wouldn’t necessarily invalidate your vote by itself.” Forwarding Limits on Facebook Messenger (Image: Facebook) Like it did with WhatsApp, meanwhile, Facebook will limit message forwarding on Messenger so you can only send something to five people or groups at a time. “Limiting forwarding is an effective way to slow the spread of viral misinformation and harmful content that has the potential to cause real world harm,” says Jay Sullivan, Director of Product Management, Messenger Privacy and Safety. The WhatsApp limits came about amid a surge in coronavirus misinformation, and Facebook expects that to continue in the run-up to Election Day. As such, it’ll remove Facebook posts that say you’ll get COVID-19 if you vote, and add links to authoritative information about the coronavirus to posts that use COVID-19 to discourage voting. Don’t Expect Results on Election Night Zuckerberg also expects that a jump in mail-in voting means we might not have a definitive answer on election night. “It’s important that we prepare for this possibility in advance and understand that there could be a period of intense claims and counter-claims as the final results are counted,” he writes. Facebook will make this clear with notices in its Voting Information Center and will partner with Reuters and the National Election Pool on election results. 
“We’ll show this in the Voting Information Center so it’s easily accessible, and we’ll notify people proactively as results become available,” he says. “Importantly, if any candidate or campaign tries to declare victory before the results are in, we’ll add a label to their post educating that official results are not yet in and directing people to the official results.” The social network also plans to label posts that look to delegitimize legal voting methods: “for example, by claiming that lawful methods of voting will lead to fraud,” Zuckerberg says. Foreign Interference Is Still a Thing Though Zuckerberg famously argued in 2016 that fake news swaying an election was a “crazy idea,” he has come around to the impact of foreign election meddling. “This threat hasn’t gone away,” he writes today. This week, Facebook took down a network of 13 accounts and two pages that were trying to mislead Americans and amplify division, he says. “However, we’re increasingly seeing attempts to undermine the legitimacy of our elections from within our own borders.” “I believe our democracy is strong enough to withstand this challenge and deliver a free and fair election, even if it takes time for every vote to be counted. We’ve voted during global pandemics before. We can do this,” Zuckerberg concludes. “But it’s going to take a concerted effort by all of us (political parties and candidates, election authorities, the media and social networks, and ultimately voters as well) to live up to our responsibilities.” That’s All, Folks Don’t expect any other changes to Facebook’s policies, meanwhile. “We’ll enforce the policies I outlined above as well as all our existing policies around voter suppression and voting misinformation, but to ensure there are clear and consistent rules, we are not planning to make further changes to our election-related policies between now and the official declaration of the result,” according to Zuckerberg.
https://medium.com/pcmag-access/facebook-wont-accept-new-political-ads-in-week-before-us-election-3fab53222c23
[]
2020-09-03 12:56:51.961000+00:00
['Election 2020', 'Technology', 'Politics', 'Facebook', 'Social Media']
The Friend OS Open Source Project
Since 2014, the developers in Friend Software have been laboring on a new operating system that aims to use the Internet like a computer. The project has been a “closely kept secret” while the team has been working hard to both prove the technology as well as its viability in the marketplace. Now, after more than five years in the making, the project is finally ready for the limelight. Even though Friend is very early in adoption, being run in about a dozen companies in Scandinavia, the existing team has achieved a great deal against high odds. Friend already offers its own integrated office suite. Chat and video conferencing. File sharing and file management. Mobile apps for iOS and Android with a full touch environment. Its own Integrated Development Environment. A range of productivity utilities and applets. Third party apps with backing companies who have chosen Friend OS as the base of their products and services. What we have proven is that Friend OS is a viable and compelling solution that can offer combined functionality that other projects can’t. It delivers a scalable system that makes deploying web-based software simpler, with fewer password prompts and powerful operating system features that accelerate feature development for developers. We’ve spent years talking to advisors and taking part in Blockchain, decentralization and other high-tech software conferences to learn where Friend fits and with whom to collaborate, resulting in numerous partnerships. An Operating System Friend OS was started as a meta operating system based on the TRIPOS architecture, a networking operating system first developed by Dr. Martin Richards, and later adopted by Commodore in their Amiga line of computers as part of the Amiga OS. Friend is heavily inspired by AROS (Amiga Research Operating System), but adopts many concepts from contemporary operating systems too, like Haiku OS, Linux, Windows and Mac OS X.
Friend OS aims to become an easy-to-use, beautiful and expressive operating system throughout its design, from the Friend Core co-kernel that augments the underlying operating system kernel, to the end-user graphical user interface, APIs and command line interface. Friend has been present all over Europe, looking for talent, building a community and getting feedback from developers. Here, in Warsaw in 2017. Open Source First The Open Source project is a core part of Friend Software Corporation’s commercial strategy. From the start, a decision was made to release the entire Friend Operating System as Open Source Software. Friend is a values-based company, and as part of our values system, we support moving away from software patents and endorse copyleft and Open Source licenses as a viable alternative. The Friend Open Source project is filled with open-minded people who share values and who are engaged in building a sprawling and fun community around the technology and software ecosystem. It is a place where startups are formed, and where existing businesses find new opportunities. It is a place where IT professionals find new tools. And new friendships and partnerships are formed. As a horizontal technology, Friend can drive any kind of desktop as well as mobile workflow. Ecosystem Friend, as a horizontal technology, can integrate with all manner of third-party applications — web applications in particular. As an ecosystem, the Friend teams are very open to working with external projects that can run in the Friend Workspace or mobile apps. With the Friend Marketplace (currently in development), Friend cloud instances can be seen as fantastic opportunities for distribution to end users. The Friend Marketplace app installs third party software into the Friend Workspace for users with a Friend account. Friend interviewed at an Ethereum conference in Paris in 2018. We see Friend OS as an exciting chance to build a whole new operating system for the Internet age.
With a great vision of the future. With security, privacy and freedom at its heart. Later, we will build Friend Books, Linux distributions and smartphone versions of the OS so that users and developers can get deeply immersed in the platform and help us build a new market and software ecosystem that may challenge the OS and Cloud players dominated by Big Tech. Participation The Friend Open Source project consists of multiple GitHub repositories for various technologies and applications. Additionally, team members have different roles and different capabilities that make the project a diverse place. This gives new participants several tasks and areas to choose from for their first attempt at helping Friend grow and improve. Developers The most important people for the project will always be its developers. Friend developers mainly write code in JavaScript, PHP and C for the front-end and back-end systems. Developers work on the core technology, the user interface, the built-in apps as well as supporting applications for managing the OS. Additionally, they bring in own- and third-party software and integrate them into the Friend Workspace. Document authors Technical users and developers write documentation, tutorials and example snippets for use by newcomers — in addition to more in-depth reference materials for seasoned Friend developers. Documentation is an integral part of the project, and we continuously endeavour to improve it. The system has a huge pool of functionality and many use cases which would benefit from easy-to-read manuals and interactive, searchable material. Beta testers Our testers make sure we do not aggregate too many issues across our supported devices. They feed the team tasks which are prioritized by our task master. Security / penetration testers Server administrators and white-hat hackers help us to bring a high level of security to the Friend OS.
Across our networking protocols, and across our APIs, we need to make sure that users’ data remain safe by being vigilant and proactive. Our current security team needs to grow as we spread our technology across the world. Community managers Our community managers evangelize and bring people into the project. They make sure that the open source team and the users are aligned. They make sure developers have what they need, and that the whole community feels part of the project and vision. Designers / artists Graphics artists and designers build great UX, artwork and material that can be used in documentation and marketing. Any OS needs to look great! The design needs to compel users to use it. Additionally, designers and artists will have a choice of sub projects and apps to work with, which will mean a continuous stream of challenges and fun opportunities. Where to start If you are unfamiliar with the Friend project, your first depot of information is friendos.com. If you feel like engaging with the Friend team, you will find that our Discord server contains multiple channels and approachable people who can answer most of your questions. Friend is hosted on GitHub. Here, you will find all of the repositories, the source code, the current Open Source teams and documentation. Choose Freedom | Choose Friend
https://medium.com/friendupcloud/the-friend-os-open-source-project-2ebba21b08f7
['Hogne Titlestad', 'Co-Ceo', 'Founder', 'Friend Software']
2019-09-25 14:13:40.692000+00:00
['Operating Systems', 'JavaScript', 'Open Source', 'Programming', 'Cloud Computing']
HOW THE CORONAVIRUS (COVID-19) MIGHT BE STOPPED BY DATA SCIENCE
HOW THE CORONAVIRUS (COVID-19) MIGHT BE STOPPED BY DATA SCIENCE By Alteryx · Mar 27 We know that data and analytics play a role in everyday products — from recommendations on what music we might like to hear to automated re-routing by our GPS system. But how might the power of analytics be brought to bear on a disease that is currently threatening the health and economic welfare of people across the globe? If we rewind the clock to the 1850s, there are two significant examples of how early pioneers in data science made incredible impacts on the world that can provide some insight into what we might see happen next. The first is a powerful case of data and analytics being leveraged to drive a significant change to the course of a spreading disease. It was 1854, and the cholera pandemic had made its way to London. Over 23,000 people had died already. To make matters worse, unbalanced press reporting led people to believe that victims were more likely to die in the hospital than in their homes, and that doctors would kill their patients for anatomical dissection, an outcome known as “Burking.” John Snow, who is frequently described as the father of epidemiology, began to geospatially analyze the deaths that were occurring in London and isolated the source of the disease, a contaminated well that supplied water in the Soho area of London — the Broad Street Pump. (Map of John Snow’s recordings of cholera cases in London.) Using his analysis, he convinced local officials to remove the handle of the pump, and the cholera cases rapidly dropped, ending the spread of the disease in London. At around the same time, in nearly the same geography, a young nurse, Florence Nightingale, solved another significant medical problem. The British Empire was at war against the Russian Empire, and thousands of soldiers were being hospitalized. The conditions at the hospitals were horrid and the odds of surviving once admitted were less than 60%.
Nightingale was data-driven, and as she implemented new procedures (like hand washing), she methodically recorded data on how each performed and analyzed the results. One of the most famous reports showed how her practices in these field hospitals reduced the mortality rates from 42% to 2%. And if that wasn’t compelling enough, Nightingale gathered data on these same rates from the best London hospitals to show that these innovative practices needed to be instituted everywhere. Many of these methods used to reduce the spread of disease are still practiced today. Believe it or not, during that period, most believed that foul odors were the cause of the spread of disease. These two early pioneers in data science set the stage for many that followed. In both cases, they were domain experts — trained in medicine. They had access to data and an understanding of how to analyze the data to drive outcomes. And this pattern continues to repeat in more modern-day examples. In a different kind of disease outbreak, during the avian flu pandemic in 2009, we saw Alteryx leveraged by the USDA to respond with incredible speed to stop the outbreak. Utilizing geospatial data and the modern-day analytics platform Alteryx, the agency was able to drive analysis to the field faster than before, helping to end the outbreak quickly and reduce the economic impact. WHAT BREAKTHROUGHS MIGHT OCCUR TO SLOW DOWN OR END THE CORONAVIRUS (COVID-19)? There are current reports out of China that one of the biggest enablers to slowing down the spread has been the use of Artificial Intelligence (AI). By logging where reported cases were occurring and joining that data with GPS movement of cell phones, the government was able to create analytic models to predict which neighborhoods were most likely to have future cases. With this information, they could rapidly quarantine and put measures in place to reduce and/or stop the spread of the disease.
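The core of the approach described above (joining case reports to locations and flagging areas with unusually many recent cases) can be illustrated with a deliberately simplified sketch. All neighborhood names, counts, and the threshold below are invented for the example; the systems reportedly used in China are not public, and a real pipeline would join GPS coordinates against neighborhood boundaries rather than use pre-labeled strings.

```python
# Simplified illustration of hotspot flagging: count reported cases
# per neighborhood and flag any neighborhood whose recent case count
# exceeds a threshold. All data and the threshold are made up.

from collections import Counter

# Each report is (case_id, neighborhood). In a real pipeline the
# neighborhood would come from a geospatial join of a case's
# coordinates against neighborhood polygons.
reports = [
    (1, "riverside"), (2, "riverside"), (3, "riverside"),
    (4, "old town"), (5, "hillcrest"), (6, "riverside"),
]

THRESHOLD = 3  # flag neighborhoods with more than 3 recent cases

def flag_hotspots(reports, threshold=THRESHOLD):
    counts = Counter(neigh for _, neigh in reports)
    return sorted(n for n, c in counts.items() if c > threshold)

print(flag_hotspots(reports))  # ['riverside']
```

The real systems add a predictive layer on top of counts like these (movement data, contact patterns), but the "count, join, flag" core is what makes rapid targeted quarantine possible.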
While this level of data sharing would likely not occur in many other countries, early indications suggest the actions have meaningfully reduced the impact of the disease, with China already reporting fewer new cases than several other countries. CASE IN POINT Deanna Sanchez, a phenomenal data scientist who is focused on geospatial relationships, also has domain knowledge, with a concentration in Medical Geography. Applying this to the coronavirus, she’s already seeing patterns in the data. “By using Alteryx we were able to create the maps below, showing the spread of the coronavirus in the U.S. over the period of a few weeks. Each dot represents confirmed cases of the disease with color variations illustrating one or more instances of the disease. Note: Maps show data of confirmed cases as of 02/11/20 10:50. “The spread and reach of the disease are both visually palpable while providing instant insights, such as the disease’s limited impact, its containment to major cities, and its non-contiguous spread. The ‘where’ of the coronavirus, its propagation patterns and the types of people it affects can also be effectively analyzed using GIS.” — Deanna Sanchez, Alteryx ACE, Practice Manager — Intelligence & Analytics, PK — the Experience Engineering Firm (How Spatial Analytics Can Help Fight the Coronavirus) MIGHT DATA SCIENCE CONTINUE TO BE LEVERAGED TO STOP THE SPREAD OF THE CORONAVIRUS? When I was debarking a plane recently, I was interviewed by the CDC based on analytics that showed I had traveled to a high-risk area. Certainly, this is a great analytic use case and one that is incredibly easy to implement on modern-day analytic platforms. But I believe there are still more breakthroughs to come with even more significant effects, whether in vaccine analytics or containment methodologies, in treatment efficacy analysis, or new procedures to protect first responders. 
I expect amazing people with great subject matter experience and knowledge to continue to leverage advanced analytic tools and techniques to change the world, and fully expect to hear more examples of how COVID-19 meets its match with today’s superhero — the knowledge worker with data science skills. KEY TAKEAWAYS Analytics and Big Data are critical to understanding and combating the spread of deadly diseases. Domain knowledge and access to data — along with an understanding of how to analyze the data — are key drivers of positive outcomes. Data science and Alteryx can help you change the world. This blog was originally published here: https://www.alteryx.com/input/how-coronavirus-covid-19-might-be-stopped-by-data-science
https://medium.com/input-by-alteryx/how-the-coronavirus-covid-19-might-be-stopped-by-data-science-65b79d01b68c
[]
2020-04-02 19:52:46.516000+00:00
['Data', 'Covid 19', 'Data Science', 'Coronavirus', 'Analytics']
On the Health Record: Interview with Ted Blosser of Workramp
Founder and CEO of learning software company Workramp takes us from his early days as a student athlete to his time working on product at Box and eventually starting Workramp. He also shares the insights he’s learned from business leaders — from Aaron Levie to Bob Iger and Marc Benioff — and how he models those lessons in his own company. Our transcript has been edited, but you can listen to the full podcast episode here. You were a student athlete at Santa Clara University. Can you tell us about being a student athlete in college and what your mindset was like back then? Yeah, I played tennis at Santa Clara. I’d played competitively most of my life and then walked on to the team and played for two years. I did that while doing electrical engineering. It was definitely a challenge. I remember my grades were slipping because you’re putting in 20–25 hours a week, you’re traveling, and you’re missing labs while all of your friends are studying and going to all the extra learning sessions. But what it taught me was how to manage my time while in college. Especially as a CEO now, I have to learn to juggle multiple things. I wouldn’t want to trade that experience for the world. I want to talk about Box where you were a product manager. Was that your first job out of college? And did you know that you wanted to go straight into product, or is that something you had to discover? That was not my first job out of college. I actually had two stints before that. One was at Cisco right after school. I got recruited into essentially a mini MBA program out in Raleigh, North Carolina, and I was there for about three and a half years. Then I had a stint of a failed startup that I founded called StuffBox. When that startup failed, I realized I had to go learn how to actually build a startup and learn from the best. That was when I started at Box. I knew I was going to go back into product.
When I was interviewing around right before I joined Box, I had a plethora of different opportunities I was chasing. I decided to do sales and get paid a boatload of money at the time compared to what I was making, which was nothing. I knew that if I could get that background in selling a SaaS product, that’d be a good entryway. So it was always with a short-term mindset, but I knew I wanted to go crush it for a couple of years and then move back into product. Can you paint the picture for me of what that team looked like at Box? They have a great product, so I imagine that their product team is reflective of that. What did it actually look like? At Box, product was definitely one of the top teams in the company. It was definitely a team that people aspired to join. When you go into a lot of these different companies, the DNA of the company typically comes from the CEO or one of the co-founders, and Aaron Levie (Box co-founder and CEO) was definitely a product guy at heart, even though he is pretty funny and vocal. That’s where he spends most of his time. What I liked about that team was that they actually owned some core parts of the product that they consider a “platform”. So you got some actual experience on the product itself, but you also got some experience into the ecosystem, like the APIs and the technical nitty-gritty. I was really excited to kind of straddle both sides of the fence there, both with the core product and also the technical ecosystem side of the house as well. What were some of your key takeaways from being part of that product team? And was there anything unexpected you learned from Aaron? Probably one of the coolest things I got to do was spend time watching Aaron do his magic. There are so many cool startup CEOs now, but at that time there weren’t many cool startup CEOs like Aaron. He was really trailblazing. I think the biggest learning from him was that he knew what the market wanted and where you should position yourselves.
For example, everyone now during COVID is talking about digital transformation, like Benioff and ServiceNow CEO McDermott. Aaron was talking about digital transformation back in 2013. He had this foresight where he knew exactly what messaging would resonate, even if he had built nothing around that specific area. If you think about Box, it’s really just file sharing, right? There’s not anything extremely special about that, but then he would reframe that to say, “Hey, as a business, your company runs on your files. In the next 5 to 10 years, the whole world’s going to digitally transform, so that’s why you need a service like Box to be the foundation of that digital transformation.” So he took something as boring as dropping a file in an uploader to saying, “This is the crux of your entire revenue stream of your business.” He would reframe the products he was selling to match what the industry and customers wanted. Benioff does that extremely well too. I would say the two of them are some of the best I’ve seen in terms of that reframing. Interesting. And they weren’t wrong. The value of Box is huge. Now I’d like to transition to talking about delivering value to users. This is something that comes up all the time today, especially in product circles. Amazon focuses on customer delight for example. What does delivering value mean to you and how have you done it in practice? I had this founder tell me something really insightful a couple months ago. He runs a pretty big business, and he said that companies will spend $10 million with Accenture, for example, and say, “I want you to go deliver a faster time to market for our product team.” That’s the outcome they want to buy, and they’re going to write a $10 million check and guarantee that they’re going to reduce that time by let’s say 20%. And the ROI for them on that $10 million is $100 million, so it’s $90 million ROI that they’re getting. 
He said that is what needs to happen in the software space, especially in enterprise. You need to guarantee your clients the business outcomes they want to see. For us, a great example is that you might buy us to reduce ramp time of your sales team. If you could reduce ramp time from five months to four months, you’re seeing a 20% reduction in ramp time, which for one individual rep might mean a hundred thousand dollars for you. So that ROI for one rep can be easily quantifiable by the software you bring in. That’s really how I think about the value you can deliver with software. What are the outcomes you drive for your client? And then everything else comes after that — the features, the usability, the prettiness of the design. But if you could deliver the business outcome, the client will keep coming back to you. I want to go back a bit and ask about starting your company in 2015. Box had just gone public, and you were working there. Why did you decide to start a company at that point? And what was the core issue you were trying to solve? In 2015, we actually just had our first daughter. For me, I think that was a kick in the butt. I told myself that if I wanted to do this, I should do it now. I think I had just turned 30 at the time, and I thought that there was probably no better time to do this than now. I didn’t want to wait until my forties or fifties to start my own company for the first time. Then once I got into that mindset, it was just about what market that I want to go into. What was I passionate about? My criteria was pretty simple. What is a big pain point? What has a huge TAM (total addressable market), and what is a market that I’m pretty equipped to service with my background? When we looked at learning software, it basically checked all those points. The TAM is gigantic. 
There’s $270 billion per year spent on corporate training, and then there are many people that have a sales and product background in enterprise SaaS that want to go start a company in Silicon Valley in the learning space. I recruited one of my good friends and now co-founder who worked early at Box as well, and then we just started building the product from there. Can you tell me what your most exciting moment has been so far as a CEO? I’m sure you’ve had plenty, but is there one that stands out? There’s so many. I would say there are a lot in every different phase of the business, but there is a good one when we were just starting out. We’d just got our first significant paying client, $5k a year. I remember we were basically living off savings, trying to make this work, and I remember opening up the mailbox and seeing this $5,000 check, and we didn’t have a bank account at that time. I don’t even think we had incorporated officially yet. I remember that right away, we filed all the paperwork to incorporate, and we opened up the bank account. I was just so proud walking to the teller and saying I wanted to deposit this $5,000 check into the business. When you’re working at other companies, you think that stuff is all automatic, but when you see it tangibly, you’re in awe of it. You realize you’re actually providing an outcome to clients through online software, and they’re going to pay you money for it. That was pretty exciting. Obviously in all the other phases, there are fun stories like that, but from the early days that was one of the most exciting. That’s an exciting moment. So you took me to the top of the mountain. Now let’s go to the bottom of the valley. What was your biggest moment of despair as a founder and CEO? I remember very early on, we were experimenting with the market and looking for product market fit. There was a very large tech company testing out our platform. 
At that time, I would sell to someone’s mom or brother or cousin, anybody who wanted to buy our software. I was driving up to San Francisco every day from San Carlos and meeting with this buyer. I was bragging about them to potential future investors, and then after a good two months, this buyer emailed me and said that they weren’t going to move forward with us. And I remember getting that email, walking into our master bedroom and just shutting off the light in the middle of the day and laying on the bed in silence and just thinking, “Man, is this ever going to work?” I was pitying myself for about an hour, and then I got out of the bed, opened the blinds and was like, “You know what, that’s ok. That was not the best use case.” I kind of knew in my heart of hearts that it wasn’t a good fit, but I’m so glad that happened because it showed me that you shouldn’t force a square peg into a round hole, even though it’s a great name brand. In the micro evaluation of it, you think the world has ended, but when you look back on it, it’s probably the best thing that could’ve happened to the company, even though it was really painful in the beginning. You have done so much at your age, and you seem like a guy that gets a lot done and has it all together. So I want to know what inspires you. Do you have a person or another company that you sort of model yourself after? It’s funny, I only read non-fiction business books. Before COVID, I was listening to business books on tape pretty much every single commute. I would crush through a book a week or listen to a really good business podcast. I’m kind of a junkie on business literature, so I would say I do admire a lot of different executives, the most recent one being Bob Iger from Disney. I read The Ride of a Lifetime about a year ago. What I loved about him was that he took such a humble approach to being a CEO, and he had such a clear strategy of what he wanted to get done at Disney.
During our all-hands kickoff for 2020, before COVID, we modeled our themes of the business around how he did that at Disney. There were basically three things that he wanted to get accomplished. It was to become more international and integrate with countries like China. The second one was to move digital, and he basically created Disney Plus from thin air. The last was to essentially focus on the content, so he acquired a ton of companies during his tenure, like Marvel and Lucas. It made Disney the powerhouse that it is. So for me, I strive to be humble, but also have a very, very clear direction with the team and get everyone flowing in the same direction. Just like Bob, I started this year with three key messages to the team and said that we’re going to go conquer those three things, and every quarter we’ve come back to those three things — even with COVID they were still really relevant — and we got everybody rowing in the same direction. So that’s probably who I’ve admired the most, especially with his approach and also his strategy. Listen to the full podcast episode here.
https://drchrono.medium.com/on-the-health-record-interview-with-ted-blosser-of-workramp-e1fa738ab089
[]
2020-11-30 16:36:42.455000+00:00
['Software Development', 'Leadership', 'Technology', 'Entrepreneurship']
The Breakup Song
written in your lines, all my once upon a times — I can’t escape, you break me wide open — snatch pieces of soul, scatter me like confetti and celebrate — as you take me under your waves I drown a little — but learn to swim, I lose my breath, you find my strings — and play, pump adrenaline through my veins, take me over — night and day you ignite my tears as your symphony of misery — sings my life, blue days are just so long, the nights melodies — just sound so unkind you call my name with your crazy beat, damn — you make such sense to me but I can’t explain, set fires to my bones, I like the way it burns — dance all over me with flames you play your song, heart skips to your beat you shoot me down, I trip over my feet — fall back onto those memories, flipping through ex files once again I lose myself — and lose my way, nostalgia paints me over — and I’m never quite the same, never quite the same you’re so bad for me — but feel so good at the same time, you’re killing me so endlessly — but making me feel so alive, feel so alive your lyrics cut — and bleed me dry it’s such a bloody, big mess in my mind, your chorus breaks my heart, bends my spine, dislocates my hip — sways my waistline, sways my waistline I turn the radio up as the DJ rewinds — just one more time, when the music dies — we say goodbye just one more time, when the music dies — we say goodbye but freedom doesn’t feel so free — when you’re out of love like you and me, you and me — out of time
https://dabboh76.medium.com/the-breakup-song-2b08fd1bc5d
['D Abboh']
2020-03-03 11:07:23.363000+00:00
['Breakup Songs', 'Music', 'Heartbreak', 'Song Lyrics', 'Poetry']
On Digital Irony, Sincerity and the Rhythm which Binds the Two*
Another tool for irony detection? This disconnect has led to sparse and mechanized investigations in the field of data science for the sole purpose of detecting irony and sarcasm. Accounts like [1] [2] [3] look at Machine Learning (ML) methods of detecting sarcasm. The underlying assumption about irony which all these accounts make is that it’s absolute and self-contained. They isolate single statements for analysis, disregarding the context in which the statements were said. For example, [1] uses an ML classifier to detect the difference between an apparent sentiment and a database of objective truths as a marker for irony. For example, “I just love being stuck in traffic” provides an apparent contrast between “I love”, which is a positive sentiment, and “being stuck in traffic”, which is an objectively negative experience. However, as online discourses and cyberspaces have increased in complexity, more ML efforts attempted to center context-aware understanding of irony. For example, [3] looks at specific user histories in order to define and understand each user’s attitude towards irony. Meanwhile, [4] tries to look at the corpus of nouns from an entire community to understand the underlying sentiment of the community. As some of these approaches might be costly, and still gamble on cultivating a coherent understanding of the lingo and subculture of each of the cyberspaces in question, a different framework was needed to better grasp the networked nature of online irony. Here debuts the notion of Lefebvrian rhythm; French philosopher Henri Lefebvre, in his posthumously published book Rhythmanalysis, provides a meditation on everyday rhythms and how they reflect and influence social processes. His study of rhythm has inspired multiple investigations in the fields of urban studies, reclamation of space and collective memory.
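The sentiment-contrast heuristic attributed to [1] above can be made concrete with a deliberately toy sketch. The word lists and the matching rule below are invented for illustration and are not from the cited work, which trains a classifier rather than using fixed lexicons:

```python
# Toy sketch of sentiment-contrast irony detection: flag a statement
# as potentially ironic when an explicitly positive sentiment word
# co-occurs with a phrase from a small list of commonly negative
# situations. Both lexicons are invented for this illustration.

POSITIVE_WORDS = {"love", "adore", "enjoy", "great", "wonderful"}
NEGATIVE_SITUATIONS = [
    "stuck in traffic",
    "waiting in line",
    "monday morning",
    "flight delayed",
]

def looks_ironic(statement: str) -> bool:
    text = statement.lower()
    has_positive = any(word in text.split() for word in POSITIVE_WORDS)
    has_negative = any(s in text for s in NEGATIVE_SITUATIONS)
    return has_positive and has_negative

print(looks_ironic("I just love being stuck in traffic"))  # True
print(looks_ironic("I love this song"))                    # False
```

A sketch like this makes the article's criticism tangible: the decision is entirely context-free, which is exactly the limitation that pushes [3] and [4] toward user histories and community-level corpora.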
Seeing as cyberspace is also a space where rhythms contest, compete, and reflect techno-cultural battles, and that irony is a multi-use cultural marker, how can we better expose the relationship between rhythm and irony? The scope of technocultural rhythms would be anything that falls within the human-machine collaboration that makes Web 2.0 possible: from the flow of packets as regulated by a protocol, to the sorting of a feed of information by a social media algorithm, to the flow of clicks, reactions, and comments. Two different rhythms imposed on the same content [source] A digital study of rhythm, as pointed out by Shintaro Miyazaki, can be an important analytical tool in studying the techno-cultures of the web. What can rhythm teach us about the way irony is used online? Figure 1: a rhythmic experience of irony Figure 1 is an illustration of a single user’s experience of irony; the user in question is experiencing a feed of posts. Based on the user’s cultural understanding of the content, they’re bound to project their own un(ironic) meaning on the post. Since irony is also a shared and networked experience, the variety of ways in which different users can experience the same feed of content must also be considered [figure 2]. Figure 2: Networked rhythm of Irony This sparsity of cultural understanding, and the different projection of rhythms when experiencing a feed of content, teach us two things:
https://medium.com/digitalsocietyschool/on-digital-irony-sincerity-and-the-rhythm-which-binds-the-two-8f6686481403
['Abdo Hassan']
2019-03-04 13:58:02.172000+00:00
['Irony', 'Facebook', 'Track Insight', 'Online', 'Rhythm']
How To Run a React App As a Container On GCP VM
Example Project This is a simple project which demonstrates developing and running a React application with NodeJS. We have a simple app in which we can add users, count and display them at the side, and retrieve them whenever you want. Example Project Since we are focusing on running the whole project on the GCP VM, we are not going to develop this project in this post. If you are not familiar with the whole process of developing a React app with a NodeJS backend, you can go through the below post. Here is the GitHub link to this project. You can clone it and run it on your machine.

// clone the project
git clone https://github.com/bbachi/react-nodejs-minikube.git

// start the api
cd api
npm install
npm run dev

// start the react app
cd my-app
npm install
npm start

Run it On Docker We use multi-stage builds for efficient Docker images. Building efficient Docker images is very important for faster downloads and a smaller attack surface. In this multi-stage build, the first step is building the React app and putting those static assets in the build folder. The second step involves taking those static build files and serving them with the Node server. Let’s build an image with the Dockerfile. Here are the things we need for building an image. Stage 1 Start from the base image node:10 There are two package.json files: one is for the nodejs server and another is for the React UI. We need to copy these into the Docker file system and install all the dependencies. We need this step first to build images faster in case there is a change in the source later. We don’t want to repeat installing dependencies every time we change any source files. Copy all the source files. Install all the dependencies. Run npm run build to build the React app; all the assets will be created under a build folder within the my-app folder. Stage 2 Start from the base image node:10 Take the build from stage 1 and copy all the files into the ./my-app/build folder.
Copy the nodejs package.json into the ./api folder. Install all the dependencies. Finally, copy server.js into the same folder. Set the command node ./api/server.js in the CMD instruction; it runs automatically when we run the image. Here is the complete Dockerfile. Let’s build the image with the following command.

// build the image
docker build -t react-node-image .

// check the images
docker images

Once the Docker image is built, you can run the image with the following command.

// run the image
docker run -d -p 3080:3080 --name react-node-ui react-node-image

// check the container
docker ps

Once you run the container, you can access the application on port 3080. Project Running on Docker Publishing the Docker Image Let’s publish the Docker image to Docker Hub with the command docker push <repo name>. Before that, you need to create a Docker Hub account if you don’t have one. Here is the link for it. Let’s create a repository; it’s bbachin1 in my case. We need to log in, tag the image, and finally push it.

// login
docker login

// tag the image
docker tag react-node-image bbachin1/react-node-webapp

// push the image
docker push bbachin1/react-node-webapp
https://medium.com/bb-tutorials-and-thoughts/how-to-run-a-react-app-as-a-container-on-gcp-vm-1e27970f5bfc
['Bhargav Bachina']
2020-10-26 05:02:48.104000+00:00
['Google Cloud Platform', 'Programming', 'Docker', 'React', 'JavaScript']
Masculinity+: Sexual Violence
Masculinity+: Sexual Violence What does research say about the links between masculine norms and sexual violence? The Man Box: A Study on Being a Young Man in the US, UK, and Mexico April was Sexual Assault Awareness Month (SAAM), a time to stand with survivors of all forms of sexual violence, raise awareness of the issue and what drives it, and reflect on what we can all do to prevent it. As the month comes to a close, it’s also time to reflect on what we know from Promundo’s and partners’ decades of research when it comes to understanding what masculinity has to do with sexual violence. Sexual harassment and sexual assault are, unfortunately, ubiquitous in the US: a 2019 study co-sponsored by Promundo shows that nationwide, 81 percent of women and 43 percent of men report having experienced some form of sexual harassment and/or assault in their lifetime. Empowering sexual assault survivors to seek support and justice while holding those who use violence accountable is vital. It’s also important to foster conversations around preventing sexual assault and advocating for evidence-based methods to end violence. What does our research tell us about the links between harmful masculinity and sexual violence? Promundo and partners often define ‘harmful masculinity’ as the narrow set of norms and expectations associated with the “right” way to “be a man.” These have negative consequences not only for an individual’s mental, physical, and psychological well-being, but also for the well-being of those around him and society as a whole. These traits and tendencies include: the expectation to be self-sufficient, to always act tough, to conform to rigid gender roles, and to use aggression and violence — including sexual violence — as a way to assert dominance over others. 1. Harmful masculinity drives men’s perpetration of sexual assault and violence.
According to The Man Box, Promundo’s study of young men ages 18 to 30 in the US, UK, and Mexico, harmful ideas around masculinity are linked to a higher likelihood of perpetrating violence. The study found that men who identify more strongly with stereotypical notions of masculinity — or who are in the “Man Box” — are up to six times more likely to report having perpetrated sexual harassment, and up to seven times more likely to have used physical violence. The associations between harmful masculine norms and violence are so strong that if we got rid of the Man Box altogether, we could reduce sexual violence in the US by at least 69 percent annually. The Man Box: A Study on Being a Young Man in the US, UK, and Mexico 2. Childhood experiences also link with men’s adherence to harmful masculine norms and their likelihood of using violence, especially sexual violence. Childhood experiences can be extremely influential in shaping an individual’s likelihood to perpetrate sexual violence against others. A study by the International Center for Research on Women and Promundo found that men who experienced trauma at home during childhood, such as being neglected by parents, witnessing violence between parents, or experiencing physical and sexual violence as a child, are much more likely to perpetrate intimate partner violence and sexual violence in their adult life. This is not to say that these links are automatic, of course; many individuals are able to use adverse childhood experiences as partial reason to resist the use of violence later in life. But the intergenerational transmission of violent behaviors remains a strong and consistent challenge for violence prevention efforts. 3. Sexual assault is not the problem of a few individual men, but rather is part of the social cultures at schools, workplaces, and other institutions.
We should always hold those who use sexual violence accountable in ways that reflect true justice according to the wishes and vision of those most harmed by violent acts. It’s also important to remember that the drivers of sexual harassment are often deeper and more systemic than simply individuals’ pathologies or decisions. Research on college sexual assault reveals that individuals’ decisions are informed by social interactions and by observations of social settings. If college-age individuals find themselves in an educational environment where peers and colleagues hold disrespectful beliefs about women and relationships, and those harmful attitudes are tolerated, this creates an enabling environment for violence. Whether on college campuses or in wider society, the prevalence of sexual violence relates strongly with so-called “rape culture” above and beyond individual decisions. As such, all of us are responsible for social and cultural shifts to reject all forms of violence and discrimination. 4. Sexual violence and harmful masculinity don’t only have social and physical costs, they have economic costs as well. Harmful masculine norms and sexual violence pose enormous economic costs to society (in addition to, of course, the massive range of unquantifiable traumas, pains, and lost opportunities that disproportionately impact cis women and trans and nonbinary individuals in patriarchal societies). According to a study by the National Sexual Violence Resource Center, rape alone costs the US more than any other criminal act, at $127 billion annually. A recent costing study done by Promundo, focused on men 18–30 years old, has found that sexual violence attributed to harmful masculine norms cost the US a bare minimum of $631 million a year, a cost incurred by those who survive and those who are harmed by men’s use of violence. 
The Cost of the Man Box: A study on the economic impacts of harmful masculine stereotypes in the United States How can we prevent sexual violence and create lasting change? In order to prevent future violence and sexual violence, listening to women and including their voices at all levels of programming and leadership is crucial. Communities must elevate the voices of women of underrepresented identities and create safe spaces for all sexual assault survivors to be heard and to heal. Institutions must put into place zero-tolerance policies with public support from leadership, and conduct trainings to sensitize staff and improve bystander intervention skills. We also need to engage and empower youth, targeting boys, in cultivating discussions and critical reflections around gender norms and forming healthy relationships. Whether through school curricula, after-school programs, or community programming, conversations on healthy masculinities should start early in life. An example is Promundo’s Manhood 2.0 group education initiative, in which boys and young men participate in open, direct conversation around shaping masculinities based on respect, care, empathy, consent, and rejection of violence.
https://medium.com/masc-conversations-on-modern-masculinity/what-we-know-about-harmful-masculinity-and-sexual-violence-764517e4c61f
[]
2020-04-29 18:37:07.684000+00:00
['Research', 'Sexual Assault', 'Masculinity', 'Society', 'Gender Equality']
Thom Yorke
Did you have any trepidation about embarking on the Atoms For Peace project? That was the real head-masher. During the first day of rehearsals it was clear that everyone had really done their homework. So when I got there with Nigel, we just started up and it was just there for the taking, it was fucking mental. It was really the first time I’d played properly with another band, ever, since I was like, 16. No kidding, it was a headfuck. I was buzzing for weeks. It was all informed by what I’d done on my own on a laptop, which I just thought was really wild. You have such a diverse back catalogue now. Would you ever go back through your Radiohead archives and remix it all? I could do, yeah. I love remixing because you can take something people already identify with and claim it for something else. You can actually spend your whole life going back and sampling yourself — but that would be a bit like masturbation. “I can’t say I love the idea of a banker liking our music, or David Cameron. But who cares? As long as he doesn’t use it for his election campaigns, I don’t care. I’d sue the living shit out of him if he did” – Thom Yorke Does the fact that your music attracts everyone from teenagers and middle-aged dads to bankers and prime ministers annoy or delight you? I can’t say I love the idea of a banker liking our music, or David Cameron. I can’t believe he’d like King of Limbs much. But I also equally think, who cares? As long as he doesn’t use it for his election campaigns, I don’t care. I’d sue the living shit out of him if he did. I’m now getting this thing where a cute 18-year-old girl will come up to me and she’ll say, ‘Aww man, will you sign this for my mum? She turned me onto your music when I was tiny.’ And I’d be like ‘Ohhh, fuck’s sake!’ That spins me out on a number of levels. I’ve got two generations now. You wrote The Eraser’s ‘Harrowdown Hill’ about the suicide of biological-warfare expert David Kelly.
Do any of your new songs have a political agenda? The David Kelly thing was very much an exception. I thought it was just so horribly English, so fucked up. I get obsessed and that often ends up in lyrics. Politics is not a fun thing to write about. Now it’s too fucking dark. I went to the Copenhagen summit (on climate change), and that permanently flipped my lid, because the whole thing was so wrong. Obama stormed straight past me after the meeting he had with China, and it was just horrible. It sort of spun me out permanently to be honest. But shouldn’t that have provoked you to write something? Yes, but when you’re presented with that level of stupidity, it kind of blows your mind. Which sounds terrible, because I don’t want to be the person that goes, ‘We’re all fucked,’ because I don’t think we are. I’m trying to convince myself not to care. It’s like this phrase I keep seeing around — ‘I couldn’t care less, it’s such a mess.’ Are you sick of people saying that you only write and sing miserable songs? It used to piss me off and then I thought, ‘Well, people hear something in my voice and respond to it, and there’s nothing I can do about it.’ You could say the same thing about Scott Walker. Recently it’s not as heavy, it’s a lot lighter, because I’m more into rhythm and the fact that it dances through the track rather than grabbing you and being the centre of attention. Sometimes I don’t want it to be. Sometimes I just want it cruising through the rhythm. Do you ever feel caged in by your voice? Absolutely. Maybe not as much now, but certainly it can be quite frustrating. I’ve done enough stuff now that it’s not such an issue — at some point you’ve got to say, ‘This tone is me, there’s no getting round it.’ Now, in a way, having that signature is licence to do more. 
It’s kind of liberating to say, ‘Well, that’s my instrument, and that’s a very clear limitation right there.’ But what’s nice is you can make a really complicated piece of music and then just put a simple line through it and suddenly you don’t see any of the complications there at all. What about your image? Have you become more or less confident about your looks over the years? I’m never confident about how I look, but I’m always into being shocking and visually interesting. It comes down to whether I’m comfortable or not. It takes me a long time to get my head around that. I was deeply uncomfortable with the ‘Lotus Flower’ video. I did the whole thing, it was such a crack, and then they showed me the rushes the next day and I was like, ‘This ain’t going out.’ It was like paparazzi footage of me naked or something. It was fucked up. But if it’s a risk that’s probably a good thing. “I was deeply uncomfortable with the ‘Lotus Flower’ video. I did the whole thing, it was such a crack, and then they showed me the rushes the next day and I was like, ‘This ain’t going out.’ It was like paparazzi footage of me naked or something. It was fucked up. But if it’s a risk that’s probably a good thing.” — Thom Yorke Are you surprised that ‘Lotus Flower’ has now been watched over 20 million times on YouTube? It’s a massive kick. That’s what everybody wants. If it’s something you’ve worked at and it goes over the edge like that then that’s great. If you do a few duffers it puts you off for a while. Which are the duffers? Oh, I couldn’t possibly say… (laughs) Which is your favourite? ‘Karma Police’ is still my favourite, because when I watch it or see clips it just reminds me how much of a laugh I had shooting that. It was brilliant. Especially because I’m totally wasted in it. What were you doing? All sorts. (laughs) Do you think your videos and your reluctance to do much press have built up this mythology around you? Do you encourage the mythmaking? 
No, I think it just extends to whatever you’re doing next, to see if you can bend people’s heads out of shape. It’s the old art student in me really. If you’re going to do something you should at least shock or mess with their expectations — not that it’s necessarily art. When you were last on the cover of Dazed you talked about not recognising your reflection. Was that true? I really didn’t. It was quite scary. It’s hard to explain. It was all part of this weird catatonic headspace I was in. I can’t do a lot of photographs because I become too aware of that projected image and I can’t handle it. It sounds really precious but that’s just what I know and how I know it is. “I was so driven for so long, like a fucking animal, and then I woke up one day and someone had given me a little gold plate for OK Computer and I couldn’t deal with it for ages” You also admitted that you had always wanted to become famous. Seventeen years later, have your feelings changed? I guess it depends what you’ve become famous for. Fame for fame’s sake, or for working your nuts off at what you do. Also when I was a kid, I always assumed that it was going to answer something — fill a gap. And it does the absolute opposite. It happens with everybody. I was so driven for so long, like a fucking animal, and then I woke up one day and someone had given me a little gold plate for OK Computer and I couldn’t deal with it for ages. I moved down to Cornwall, went out to the cliffs and drew in a sketchbook, day in, day out. I was allowed to play the piano and that was it, because that was all we had in the house. I did that for a few months and I started to tune back into why I’d started doing it. That’s how I remember it anyway. I remember having nothing in the house, except a Yamaha grand piano. Classic. And the first thing I wrote was ‘Everything in Its Right Place’. Do you have much recollection of what life was like before you became famous? 
I’m now painfully aware that I’ve been doing this for longer than I haven’t. Which is pretty fucking mental. Am I aware of how it was before that? I think so. I mean, we did sign our deal when I was 22, so for my whole 20s and 30s I was working. I don’t even remember it. It’s quite weird. How well do you think you’ve aged? My favourite quote of Tom Waits is, ‘I wish to age disgracefully.’ And I’m doing that, that’s me. I’m probably easier to deal with but I wish to remain disgraceful, if at all possible. (laughs) Why do you think you have been portrayed as such a mercurial character over the years? I’m not as volatile as I used to be, which is good ’cos I’d have burned out if I was. I can still be a nightmare though. There’s a quote that I thought would be a good way to finish. It’s from your friend Stanley Donwood talking about his artwork series Lost Angeles, from which the cover for Amok is taken: ‘There is no future, there is only the present… no one seems to care much about the present.’ What do you care most about right now? The present. Trying to stay in the present because that’s how to not get ill. Don’t overthink. Let it go. Originally published at www.dazeddigital.com on February 12, 2013.
https://medium.com/tim-noakes/splitting-atoms-with-thom-yorke-c67fe2fbc631
['Tim Noakes']
2017-06-11 21:05:13.632000+00:00
['Music', 'Thom Yorke', 'Radiohead']
The Magic Behind Zynga Poker’s Hand Strength Meter
How It Works The way this feature currently works is relatively straightforward: 1. Using the player’s 2 hole cards and visible community cards, the best possible 5 card (or fewer, if the community cards are not yet visible) hand the user could make is found on our game server. 2. That hand is classified as one of the following: High Card, One Pair With A Kicker, Two Pair, Three-Of-A-Kind With A Kicker, Straight, Flush, Full House, Four-Of-A-Kind, or Straight Flush. 3. That hand will win in a showdown on the river a certain percentage of the time. For example, the full house “Three-of-a-kind twos and a pair of Jacks” has historically won, let’s say, seventy percent of the time when shown down on the river. Note that this excludes all the times where someone holding that hand won because all their opponents folded. This percentage is already stored in one of our databases. It is retrieved whenever a new card is dealt and sent down to our client. 4. This percentage is translated on our client into the visual filling of a meter displayed to a user. For example, a winning percentage of fifty percent would be translated into a meter filled halfway. Zynga Poker’s Hand Strength Meter Tradeoffs of This Design The Good Because the winning percentages have already been pre-calculated and stored in Part 3 of “How It Works” rather than calculated on-the-fly, the process of looking up the winning percentage for a hand is very fast. This allows us to show the same information to all of our users while allowing them a reasonable time with which to take their turn. The Bad The percentage calculated in Part 3 of “How It Works” is usually fairly accurate, but in some cases (particularly degenerate ones) this percentage can be far from accurate. As one of the most extreme examples, let’s suppose we have Our Hole Cards and the current community cards are Community Cards For simplification, let’s assume that we need at least a straight or a flush to win the hand.
In that case, any of the following cards would give us the winning hand: Outs Both the turn and the river would have to not be one of these cards for us to lose the hand, a likelihood of 100 * (34/47 * 33/46) = 51.90% (34 of the 47 unseen cards miss us on the turn, and then 33 of the remaining 46 miss us on the river). This in turn means that we have a 48.10% chance of winning the hand. However, the percentage found in Part 3 from “How It Works” only takes into account our current hand, Jack high, not our potential for improving our hand. Since our hand has a high potential for improving, we’re missing key information in our estimate. Why We Chose This Tradeoff The degenerate case above raises the question, “Why did you do this approximation instead of actually calculating the exact percentage? How much time could that really take to calculate?” Let’s find out, for the most extreme case, just how long this takes. For this specific example, we’ll assume there are eight opponents, all still in the hand, with all 5 community cards yet to come. To find our exact chance of winning, we need to: 1. Enumerate the total possible ways the yet unknown cards could be revealed. 2. For each of these possibilities, find a. The best hand we could make b. The best hand each of our opponents could make c. Whether our hand is better than all of our opponents’ hands. 3. Take the number of possibilities that would have resulted in us having a winning hand, divide it by the total number of possibilities, and multiply it by 100 to give us our percentage chance of winning. We have 8 opponents, each with two unrevealed cards, and 5 unrevealed community cards. This means that there are (50 choose 2) * (48 choose 2) * (46 choose 2) * (44 choose 2) * (42 choose 2) * (40 choose 2) * (38 choose 2) * (36 choose 2) * (34 choose 5) = 50!/(2 * 48!) * 48!/(2 * 46!) * 46!/(2 * 44!) * 44!/(2 * 42!) * 42!/(2 * 40!) * 40!/(2 * 38!) * 38!/(2 * 36!) * 36!/(5! * 29!) * 34!/(5! * 29!) = 50!/2 * 1/2 * 1/2 * 1/2 * 1/2 * 1/2 * 1/2 * 1/2 * 1/(5! * 29!) = 50!/(256 * (5! * 29!)) = 50!/(30720 * 29!)
= (50*49*48…*30)/30720 = 111,973,393,664,173,216,721,145,600,000 possible outcomes. All of these would need to be checked to see if you would have the winning hand. For the sake of argument, let’s say we can make that check (step 2 from above) for a given outcome in one millisecond. This means you would complete step (2) from above in 3,548,222,731,265,153,773 years. According to scientists who hypothesize an eventual collapse of the universe (The Big Crunch), you wouldn’t be done until ~ 35 quadrillion years after the universe collapsed on itself! Knowing that it’s physically impossible to find an exact percentage chance that we will win a given hand in this situation, our current method of providing a “best guess” is a decent alternative despite the existence of degenerate cases. Future Hand Strength Meter Design In order to consistently present our users with a much closer approximation to their exact chance of winning a hand, we devised an adaptation of Monte Carlo simulation. Rather than search through all possible eventual outcomes of a hand for the ones in which you would win, we instead randomly generate a subset of those outcomes and see how many of that subset you would win. There are key tradeoffs regarding the size of this sample subset. A smaller sample size will result in faster completion of the simulation such that a solution is presented to the user with more time with which they can act. However, that smaller sample size will lead to a potentially less accurate approximation. Conversely, a larger sample size will require more time to complete while providing a more accurate approximation. One way to allow for the benefits of both these approaches while lessening their respective downfalls is for us to show to users the current value of the approximation as the algorithm progresses. That is, on your turn we can show you the current approximation that you will win the hand at showdown. 
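Both halves of this argument — the intractable exact count and the Monte Carlo shortcut — can be sketched in a few lines of Python (an illustrative sketch, not Zynga's production code; the draw scenario reuses the 13-out example from earlier):

```python
import math
import random

# Exact number of ways the unseen cards can fall: 8 opponents draw
# 2 hole cards each, then 5 community cards come from what's left.
def total_outcomes():
    total, remaining = 1, 50          # 50 cards we cannot see
    for _ in range(8):                # each opponent's two hole cards
        total *= math.comb(remaining, 2)
        remaining -= 2
    total *= math.comb(remaining, 5)  # the five community cards
    return total                      # ~1.1e29, far too many to enumerate

# Monte Carlo shortcut for the earlier draw example: 13 outs among
# 47 unseen cards; we win if the turn or the river hits an out.
def estimate_draw(outs=13, unseen=47, samples=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        turn, river = rng.sample(range(unseen), 2)  # cards 0..12 are outs
        if turn < outs or river < outs:
            hits += 1
    return hits / samples             # exact value is 1 - (34/47)*(33/46)
```

Running more samples tightens the estimate, and a client could surface the running value as it converges — the progressive display described above.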
That number will be constantly changing, becoming ever more precise until you either take an action or run out of time. This allows us to show the user as accurate an answer as possible while also allowing them as much time as possible with which to act. Why Our New Hand Strength Meter Hasn’t Been Released In a word: fairness. We aim to make our game equally fair to all players. A user with a 5-year-old phone would be at a distinct disadvantage to a user with a brand new, state of the art phone should this feature be released today. We support such a wide range of devices in our game that the number of simulations one device can perform in a given time can be drastically different than the number of simulations that another device can perform in that same time. More simulations means higher accuracy in this approximation, and higher accuracy leads to a competitive advantage. There is hope, however. After a certain number of iterations, the discrepancy between the current approximation and the approximation after further iteration diminishes to the point of inconsequence. As our minimum supported devices continue to improve and allow for faster iterations, we get closer to being able to provide this amazing feature to all our players without introducing any unfairness. Conclusion Calculating the exact chance of winning a poker hand at the beginning of a hand of 9 players is currently unsolvable. In order to solve this problem, we are currently approximating that chance and presenting it to our users. This solution has its own downfall of inaccuracy, though. We’ve created a feature that goes a long way in solving this problem, but we are currently waiting for the right time to present it to our users while allowing for complete fairness. We have a U.S. Patent No. 9,940,783 (issued Apr. 10, 2018) regarding this design if you are interested in learning more about it. Thank you for the read! If you found this post interesting or helpful, give it a share. 
If the ramifications of insurmountable combinatorics in our game are of interest to you, I encourage you to read our previous post regarding random number generation. If you have any input you’d like to give us, reach out to us at Poker-Engineering-Blog@zynga.com or shoot me an email at jarredwsimmerengineering@gmail.com. References: https://www.iflscience.com/physics/big-crunch-back-possible-end-universe/ https://en.wikipedia.org/wiki/Monte_Carlo_method
https://medium.com/zynga-engineering/the-magic-behind-zynga-pokers-hand-strength-meter-90b63cff17ba
['Zynga Poker Engineering']
2018-10-22 15:15:46.191000+00:00
['Software Engineering', 'Poker', 'Video Game Development', 'Mathematics', 'Zynga']
I Wasn’t Sleeping Through the Night. Here’s What I Did to Fix It.
Something happened when I became pregnant with my daughter. I started waking up in the middle of the night. For a long time, this didn’t bother me at all. I’d wake up, do some reading, and go back to sleep. As my daughter grew older, however, waking up in the middle of the night became problematic. It’s not too unusual to carry odd hours with an infant. It’s another story when you’re trying to balance work and a school-aged child. For the past couple of years, I’ve tried to curb my split-sleep habit. Occasionally, I’d still wake up in the middle of the night and wind up getting some writing done. But I still spent most nights with the intention of sleeping through it without any interruption. For the most part, it worked. Things shifted again in 2019. It seemed like no matter how hard I tried, or how desperate I was for a good night’s sleep, I kept waking up at random times like 2AM or 4AM — and then I couldn’t get back to sleep. After several months of that, I felt exhausted. I knew that something needed to change, but I was having a hard time making anything positive happen. The coronavirus pandemic certainly didn’t help me out, either. Basically, I was sleeping “well” for about 2 to 4 hours each night, waking up tired yet unable to fall back asleep, and then I was drowsy all day. The lack of sleep left me feeling grumpy and foggy. Over the course of a year, I tried sleeping pills, gummy vitamins, and all sorts of supplements, but nothing seemed to help. My friends swore by things like CBD, melatonin, and lavender essential oil. And do you know what? I was desperate enough to try them all, so, I did. But none of those suggestions actually worked for me. A lot of them made me feel even more exhausted. After several months of experimenting with my daily routine, here are the steps that actually got me sleeping through the night with zero interruptions — save the occasional wakeup from my 6-year-old, of course.
https://medium.com/honestly-yours/i-wasnt-sleeping-through-the-night-here-s-what-i-did-to-fix-it-9c18b22d868d
['Shannon Ashley']
2020-12-29 15:18:22.376000+00:00
['Lifestyle', 'Sleep', 'Self', 'Health', 'Life']
The Next Step to Build Better APIs — Consistent Data Structure
By Andrew Turner You don’t build good APIs through coding alone. Like any other part of your business, APIs are best when they are developed as part of a detailed, end-to-end strategy. However, there are multiple parts of a comprehensive strategy and understanding each piece is essential to building better APIs. In our first post about APIs, we talked about the need for a documentation-first strategy and how 3 main tools can help you build more effectively. The next natural step in the process is to choose an appropriate data structure formatting convention for API endpoint responses and ensure it is applied consistently — here’s how. UNDERSTAND YOUR API USE CASE We talk a lot about “starting with the why” for product development — starting small discovery projects and using design thinking to get to the heart of the problem your product will solve. Understanding the “why” is also an important concept when building APIs. The key is to determine your API use case. Is the API for a mobile app? Is it for front end use? Will it be public for third party developers? The “why” of your API will help you choose the right data structure convention for development. However, there are many different options to choose from. COMMON API RESPONSE FORMATTING CONVENTIONS Navigating the various conventions for formatting API data structures might seem overwhelming at first. Here are a few of the more popular data structure conventions to consider: JSON API JSON API is a community-driven specification for building APIs and formatting API responses. EmberData Ember is a framework for creating ambitious web applications. If you’re building an API for an Ember application, you may want to consider using the EmberData convention for your API responses. Core API Another community-driven convention is Core API. Flat Response This is a simple and straightforward way to format your data structure that simply returns the data requested without a namespace object. 
Twitter and GitHub are popular APIs that follow this data structure formatting convention. Seeking the pros and cons of each data structure convention seems like a logical next step once you’ve listed your options. However, there is no single correct data structure convention. You should choose a convention based on what is pragmatic and intended for your API use case. It’s possible that a specific data structure convention would work best for your API’s use case — for example, Vinli’s need to return telemetry data could command a different convention than a project management API that returns tasks and project updates. Generally, though, keeping your data structures consistently formatted will result in a better API. When you establish your data structure convention from the outset of your project, you can better coordinate the “why” of your API and the way you’re building it. Then, you can include the data structure convention in your documentation and use Dredd to validate its use throughout the project. After you establish a formatting convention for your API’s data structures, you should implement meaningful HTTP status codes to ensure your API responses are accurate. STATUS CODES SHOULD ACCURATELY REFLECT YOUR API RESPONSE As you use Dredd to validate your data structure formatting, make sure you use the appropriate HTTP status code for each type of response. However, because there are so many HTTP status codes, it is important to pay attention to the right ones throughout your development process. At a higher level, there are 4 main HTTP status code categories: 200 Level: The HTTP 2xx codes convey successful responses. 300 Level: HTTP 3xx codes are reserved for redirects. 400 Level: This block of codes conveys errors that originate on the client side. 
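To make the contrast between conventions concrete, here is a small Python sketch of the same record rendered flat versus wrapped in a JSON:API-style envelope. The user record is a made-up example, and real JSON:API responses can carry additional members (links, relationships) beyond what is shown here.

```python
# Sketch: one resource, two response conventions.

def flat_response(user):
    """Flat convention: return the requested data as-is, no namespace object."""
    return user

def jsonapi_response(user):
    """JSON:API convention: wrap the record in a typed 'data' envelope."""
    return {
        "data": {
            "type": "users",
            "id": str(user["id"]),  # JSON:API ids are strings
            "attributes": {k: v for k, v in user.items() if k != "id"},
        }
    }

user = {"id": 7, "name": "Ada", "email": "ada@example.com"}
print(flat_response(user))
print(jsonapi_response(user))
```

Neither shape is "correct" on its own; what matters is picking one and applying it to every endpoint.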
500 Level: Similar to the 400 level codes, HTTP 5xx codes convey errors, but on the server side instead. To give you a better idea of the specific codes any engineer should know, we’ve compiled a list of the most important ones: By using these meaningful status codes, you can avoid sending 200 level status codes when your API is returning an error! You want your API to be predictable and easy to work with — ensuring status codes match the true API response is essential to building a better API. THERE’S MORE TO APIS AND PRODUCT DEVELOPMENT THAN CONSISTENT DATA STRUCTURES You won’t be able to build an effective API without proper documentation and consistent data structures. And you won’t be able to succeed with digital transformation projects if you can’t build better APIs. However, there’s much more to digital transformation product development than just APIs. If you want to learn more about the product development process and how your better APIs fit into the bigger picture, download our free End-to-End Product Development Guide and discover how we combine business and innovation consulting, user experience design, software engineering and hardware engineering to create products that users love. Originally published at https://by.dialexa.com/consistent-data-structures-the-next-step-to-building-an-api.
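The category guidance above can be sketched as a tiny handler that picks a meaningful code per outcome instead of returning 200 for everything. The endpoint shape and the in-memory user store are hypothetical, chosen only to illustrate the mapping.

```python
# Sketch: one status code per outcome, not 200 for everything.

USERS = {1: {"name": "Ada"}}  # hypothetical in-memory store

def get_user(user_id):
    """Return (status_code, body) for a GET /users/{id} request."""
    if not isinstance(user_id, int):
        return 400, {"error": "user id must be an integer"}  # client-side error
    user = USERS.get(user_id)
    if user is None:
        return 404, {"error": "user not found"}  # resource does not exist
    return 200, user  # success

print(get_user(1))   # → (200, {'name': 'Ada'})
print(get_user(99))  # → (404, {'error': 'user not found'})
```

A caller (or a test suite like Dredd) can then rely on the code alone to tell success from failure, without parsing the body.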
https://medium.com/back-to-the-napkin/the-next-step-to-build-better-apis-consistent-data-structure-38667444f37e
[]
2017-10-03 17:47:43.746000+00:00
['API', 'Technology', 'Software Engineering', 'Software', 'Software Development']
Tommy Ramone, RIP
I was born the year the Ramones played their first show making me a generation too young for their revolution. But still the simplicity and power in their sound impacted me like no other band. In my early teenage years I dubbed a copy of Ramones Mania (for the Ramones were already being anthologized in the late 80s) from a friend and listened until the tape wore out. At 16, I would have listed hardcore punks the Dead Kennedys as more influential to me but it was the Ramones who stuck around. As I grew into a proper fan, each Ramone came to mean something special to me. Johnny was the curmudgeon with integrity I wanted to be. Joey was the sensitive guy I identified with. Dee Dee was the troubled kid I knew. But Tommy might have been the closest in temperament and personality to me. Tommy was the architect of the band. He put the other elements in place and when he couldn’t find a suitable drummer, he stepped in. After three albums and the constant touring, he called it a day…as drummer. Tommy continued to be the architect. He literally sat behind Marky at the drum kit to teach him the Ramones style of drumming. He produced Road to Ruin and Too Tough to Die. He stepped into the role of historian after the band broke up. It’s easy to miss how much of a struggle the Ramones had. They rebooted music. They created punk rock: its ethos, sound, and look. But they didn’t start making money until the bitter end. Johnny and Joey spent decades hating each other but traveling in a van together for months out of the year. A van. They couldn’t afford tour buses. But more than anyone else, Tommy and Johnny knew the importance of the Ramones and Johnny kept them moving until their legacy could take hold. Tommy is the last of the original band to pass away. None of them made it to old age. When I saw the headline of his death this morning, I went to Arturo Vega’s website to see if he had posted a tribute. When the website didn’t load, I had a sinking feeling. 
Arturo — the creator of their iconic logo, their lighting and design director for their entire career — passed away last year. I’d missed the news. I bought my Ramones shirt from him. He was the fifth Ramone. I wrote a novel largely about one fan’s obsession with the Ramones. In it, the fan mourns Johnny’s death wrapped in her leather jacket watching old clips of the band. I feel like doing that today to mourn the passing of four blue-collar weirdos from Queens who reshaped rock music. They were accidental futurists with retro impulses. They were deliberately confrontational to the status quo but cartoonish enough to be lovable. They were — arguably — geniuses. I don’t experience fandom the way I see it in others but I’m unabashedly a fan of the Ramones. They mean more to me than their music. I feel a kinship with them. I’m sad to hear of Tommy’s passing. I’ll remember him and his band fondly and I’ll do my part to keep their legacy alive.
https://medium.com/hey-todd-a/tommy-ramone-rip-c85bdcc514b
['Todd A']
2018-07-26 17:49:57.501000+00:00
['Pop Culture', 'Ramones', 'Music', 'Zine']
Using the Nearest Neighbors Algorithm in a Simple Machine Learning Model to Detect Online Taxis in a Parking Area
Easy read, easy understanding. Good writing is writing that can be understood easily.
https://medium.com/easyread/penggunaan-algoritma-nearest-neighbors-pada-machine-learning-sederhana-untuk-memprediksi-taksi-726a7fdfa186
['Ilyas Ahsan']
2018-01-11 01:48:52.072000+00:00
['Python', 'Data Science', 'Scikit Learn', 'Machine', 'Algorithms']
Why You Shouldn’t Use AdWords Express
Google’s AdWords Express is an advertising platform that is widely used by small business owners. Google promises: “more customers with easy advertising!” And then: “Set up your online ad in 15 minutes and let Google do the rest.” With such bold promises, it’s no wonder so many businesses choose Express. If you’re thinking about running AdWords Express ads, read this post before committing. If you’re already using AdWords Express and are frustrated by a lack of results (or an empty bank account), read the FAQ at the end of this post for help switching from AdWords Express to AdWords. Unfamiliar with advertising on Google? You can pay to get your website shown at the top of the search results right away with Google AdWords! That sounds great, doesn’t it? If your website doesn’t rank high organically in the search results, AdWords is a great way to get your website in front of people that are searching for what you have to offer. For example, all of these plumbers are paying Google to get their website in front of people searching for plumbing services: What is AdWords Express? AdWords Express is “easy online advertising”, according to Google. In short, AdWords Express is an oversimplified version of AdWords that is fast and easy to set up. But if there’s one thing we’ve all learned in life, it’s that the easy route isn’t always the best route. “But Google says that they’re practically the same! Just look at this chart:” — Small Business Owner Sure, AdWords Express and AdWords are similar, as outlined in the chart above — but AdWords gives you control and offers customization options. You wouldn’t give advertising money to a random person on the street and say “advertise my business!” would you? That’s the same reason why you shouldn’t sign up for AdWords Express — you’re practically telling Google, “Here, take my money!” And spend it, they will. So, Why Do Small Businesses Choose AdWords Express? The main reason: it’s easy. 
Google promises that you can set up your ad in 15 minutes. And who doesn’t have 15 minutes to spare? It’s true that AdWords Express is less intimidating than AdWords, and Google’s copywriters make business owners feel okay to “set and forget” their AdWords Express accounts. However, if you don’t put time into managing your advertising accounts, all you are doing is making Google money and hurting your own bottom line. People inherently trust Google simply because it’s Google. They assume that Google knows what’s best and are therefore fine with letting Google take the wheel with AdWords Express. However, what small businesses don’t know is the true price they are paying in comparison to what ROI they are (not) seeing. Remember: Google is a business and they want your money. Just like how you want your customer’s money! Online advertising works, but you need to control it, nurture it, and keep Google’s hands out of it. Why AdWords Express is Awful Here are a few reasons to not use AdWords Express: 1. It uses broad match keywords. Broad match is exactly what it sounds like — your ads will appear on similar keywords to the ones you select, such as synonyms and related searches. This gives Google more freedom to show your ads when they want, and increases the likelihood that your ads will appear for irrelevant searches. For example, one of our clients (a caterer) who used AdWords Express had a bid for the keyword edmonton catering companies. Sounds like a great keyword for that type of company, right? Think again. Because Express uses broad match, the ad ended up showing to people searching for edmonton pig roast companies, which is something they absolutely don’t offer. The real kicker? You are only able to see this if you view the account in real AdWords — AdWords Express doesn’t show the actual search term the user searched to get to your site. Such a good secret keeper, that sneaky, sneaky Google! 2. 
You cannot specify the keywords you want to use or how much you want to bid. Instead, you choose categories, as shown below: This gives Google full control over what keywords your account bids on — and if they don’t have a keyword in one of their lists you want to bid on, you can’t add it manually. You can only toggle keywords on or off, but since they are broad match keywords it doesn’t really matter if you turn off a keyword, because a search for another broad match keyword could easily trigger a similar irrelevant ad result. Not being able to specify how much money you want to bid on certain keywords is also a huge downside of AdWords Express. The control Google asserts over your account with Express can easily lead to higher cost-per-clicks and tons of money being spent on keywords that aren’t converting. 3. You cannot add negative keywords. Negative keywords are keywords that you don’t want your ads to appear for. This is very important! If you offer a premium product, you can put in cheap, low cost, and inexpensive as negative keywords in real AdWords so your ads don’t show to people that are looking for budget options. In Express, you can turn off keywords that you don’t want to bid on, but this won’t actually keep Google from showing your ad for that keyword. Your ad could still show to a searcher using one of your negative keywords through another broad match keyword. 4. You cannot create ad extensions to enhance your ad. Ad extensions include callouts, sitelinks, and structured snippets. These different options enable you to show more information to the searcher and give them more incentive to click your ad. Below are two ads. One is an ad with no ad extensions (probably using AdWords Express) and the other is an ad that uses the callouts (text separated by the dots) and sitelinks extension (additional links at the bottom of the ad). Which ad would you click on? 
In summation, a lack of control over what keywords you’re bidding on — and how much you’re paying for them — is why AdWords Express is terrible. If you’re using AdWords Express right now, you’re probably wasting money on irrelevant keywords. The customizability and control you get in real AdWords is what separates it from AdWords Express. But don’t just take our word for it! Read this Twitter thread where other industry professionals express their distaste for AdWords Express. Or, simply check out the search results for the query “AdWords Express Bad”. Mini FAQ: I have ads running on Google right now. How can I tell if I’m using AdWords Express or AdWords? The easiest way to tell if you’re using AdWords Express or AdWords is to look at your browser tab. If you see AdWords Express as the title, you are using Express, but don’t panic! You can switch to using AdWords. I am using AdWords Express but want to use AdWords instead. How do I make the switch? If you see three dots in the top right hand corner of AdWords Express, click on them and choose “View in AdWords”. Now you’re in AdWords! If you do not see the three dots, contact Google support and they will change your account from AdWords Express to AdWords: 866–246–6453 Available Monday–Friday from 9am to 8pm EST Important: Just switching from AdWords Express to AdWords isn’t enough! When you switch, the exact campaigns you had in AdWords Express are moved to AdWords — if you do not change them they will continue to waste your money. Now that you’re in AdWords, you have more control over what keywords you’re bidding on and how your ads appear. You can either edit your AdWords Express campaigns or start from scratch and set up new campaigns. When updating keywords or adding new ones, remember not to use broad match keywords! If you do, you’ll be in the same boat as if you were using AdWords Express (using broad match modifier is okay though). 
I’m interested in learning more about AdWords so I can manage my account. Do you have any recommended resources to get started? The best place to start is Google’s plethora of articles, videos, and guides. Here are a few quick links to help you get started: Of course, there are a ton of companies blogging about AdWords too — a quick Google search for “AdWords Guide” or “AdWords for Beginners” will get you started in the right direction! I am using AdWords Express, but don’t have time to manage my own campaigns with AdWords. Can you help me? Yes, we can manage and optimize your account for you. We can also train you to manage AdWords in-house, and we offer coaching! AdWords isn’t as scary and intimidating as it seems, we promise. Managing your own AdWords account is more cost-effective and you can put the money you’d be spending on an agency managing it for you towards a larger advertising budget. Get in touch with us and we’ll help you decide how to move forward.
https://medium.com/kick-point/why-you-shouldnt-use-adwords-express-dbd12cf80df2
['Brittany Zerr']
2017-11-16 19:28:12.873000+00:00
['SEO', 'Google', 'Advertising', 'Adwords', 'Help']
Training a k-NN model to Predict Bank Customer Churn using AWS
This article is a summary of one of the case studies in the Full Stack ML course by AICamp. The problem is that of predicting customer churn, which is the fraction of customers lost by a business. Feature engineering The first step towards data preparation is to gather all your data in one single table, and apply feature engineering (the set of techniques used to transform the raw data) to obtain features in a format that can be used by the model. For AWS SageMaker specifically, you need to prepare the data in a specific format: the first column should contain the labels, and there should be no headers. You can apply other feature engineering techniques to prepare your dataset : One hot encoding Numerical encoding (when categories are hierarchical) Removing unnecessary columns that do not contain useful data Aggregating columns Normalization Missing values replacement (interpolation, frequency, removal…) The Garbage In, Garbage Out principle When deploying models, we need to make sure that we are integrating the feature engineering process in the production pipeline too. Data coming as input in the production pipeline will be in raw form, and it is the responsibility of the ML pipeline to transform the data. It is very important to make sure that the data is constantly monitored, as even minor changes in the pipeline could cause predictions to fail. Even though the pipeline might not break as a result of those changes, predictions will be wrong: as long as the model sees the data, it will make a prediction, but the model itself won’t check whether the input data is accurate. An example of how this might happen is if two columns are at some point accidentally swapped in the input dataset: the number of features will still be the same, so the model will make a prediction, but the results will be unreliable. This is the concept of GIGO: Garbage In, Garbage Out. 
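Two of the transformations listed above can be sketched in plain Python to show what they actually do (in practice you would use pandas or scikit-learn; the category and balance values here are made up):

```python
# Sketch: one hot encoding and min-max normalization by hand.

def one_hot(values):
    """Map each categorical value to a 0/1 indicator vector, one column per category."""
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

def min_max(values):
    """Normalization: scale numeric values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(one_hot(["France", "Spain", "France"]))   # → [[1, 0], [0, 1], [1, 0]]
print(min_max([0.0, 50000.0, 125000.0]))        # → [0.0, 0.4, 1.0]
```

Keeping these transformations inside the production pipeline (rather than applying them once by hand) is exactly what guards against the GIGO failure described above.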
Data preparation For the bank churn exercise, we have a dataset with a total of 10000 entries (rows) and 8 features plus the label (columns): The label is our “Exited” column, as we want to predict whether a customer will exit or not. A useful step for transforming the dataset to the SageMaker format is to check whether the label is in the first column, and if not, swap it:

label = "Exited"

# Rearrange the dataset columns
cols = data.columns.tolist()
colIdx = data.columns.get_loc(label)

# Do nothing if the label is in the 0th position
# Otherwise, change the order of columns to move label to 0th position
if colIdx != 0:
    cols = cols[colIdx:colIdx+1] + cols[0:colIdx] + cols[colIdx+1:]

# Change the order of data so that label is in the 0th column
modified_data = data[cols]

In this case there are no categorical columns, so we will not do any categorical or one hot encoding. The first feature engineering we perform is to fill out missing values. We can use the scikit-learn Python library that has a few built-in functions to replace missing values. 
One is SimpleImputer, which by default replaces missing values with the mean calculated along each column:

from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer

# Initialize Simple Imputer and fill missing values with the median value
numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median'))])
numeric_features = data_without_label.select_dtypes(include=['int64', 'float64']).columns

# Create the column transformer
preprocessor_cols = ColumnTransformer(
    transformers=[('num', numeric_transformer, numeric_features)])

# Create a pipeline with the column transformer, note that
# more things can be added to this pipeline in the future
preprocessor = Pipeline(steps=[('preprocessor', preprocessor_cols)])
preprocessor.fit(data_without_label)
modified_data_without_label = preprocessor.transform(data_without_label)

We also need to split the dataset between the train and test dataset. There are different techniques for making this split; in this case we are using the default train test split function in scikit-learn, which randomly samples data points to create two new datasets in a given proportion for train and test (in this case we select an 80:20 split):

from sklearn.model_selection import train_test_split

modified_data_array = np.concatenate(
    (np.array(modified_data[label]).reshape(-1, 1), modified_data_without_label),
    axis=1)

# Split the file into train and test (80% train and 20% test)
train, test = train_test_split(modified_data_array, test_size=0.2)

Storing the dataset in AWS S3 Now that the dataset preparation phase is done, we need to upload our dataset to our S3 bucket. To create the S3 bucket, go to the AWS Management Console and search for S3. If it’s your first time creating buckets, you should see something like this: Go ahead and select “create bucket”. You’ll be asked to input a name for your bucket and select a region. 
It is preferable to use a region close to where you are located, but most importantly you should use the same region throughout your project (a training job created in one region cannot access data from another region, even if you’ve uploaded it using the same account). Other useful options are public access and bucket versioning. The first enables different levels of accessibility, while the second enables saving all versions of your datasets uploaded. Click on “create bucket” at the bottom of the page. Now you can click on your bucket (I called mine ‘francesca-aicamp’) and you should see these options: Click “upload” and add your modified dataset as produced in the feature engineering step. k-NN model training We now want to train our model using k-nearest-neighbors (k-NN). K is the parameter representing how many neighbors the model is considering to classify the data. The algorithm looks at its k nearest neighbors in the training dataset to classify each new data point. In this classification problem, the new data point is assigned to the class most common among its k nearest neighbors in the feature space. K-NN can also be used in regression problems, where the result of the prediction is given by the average of the neighbors. To launch our training job, we use the k-NN algorithm provided by SageMaker. Go back to the AWS console and search for SageMaker. In the sidebar, click on training jobs: Click on “create training job” and choose a name for your training job; I chose AICamp-KNN-churn-Dec03. You’ll also be asked to select an IAM role, the set of permissions to control access between AWS services. Here we give permission to SageMaker to access S3. Create a new IAM role or use an existing one that gives access to any S3 bucket: In “algorithm options”, choose Amazon SageMaker built-in algorithm and select k-NN. Leave the remaining options to default and jump to the Hyperparameters section. 
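As an aside, the console upload described a few steps back can also be scripted. A minimal sketch with boto3 follows; the bucket and key names are hypothetical, and the actual upload call needs AWS credentials, so it is left commented out:

```python
# Sketch: building the S3 URI for the prepared dataset.
bucket = "francesca-aicamp"  # your bucket; must be in the same region as SageMaker
key = "churn/train.csv"      # hypothetical key for the prepared training file

s3_uri = f"s3://{bucket}/{key}"  # the URI you paste into the training job channel
print(s3_uri)                    # → s3://francesca-aicamp/churn/train.csv

# Needs AWS credentials to actually run:
# import boto3
# boto3.client("s3").upload_file("train.csv", bucket, key)
```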
For this exercise, we are using the following parameters: Feature dimension (the number of features we have in the dataset): 8 Min batch size (the number of data points at each iteration): 100 K value (how many neighbors to consider): 5 Predictor type: classifier Next, we need to specify the location of our input data. To specify the S3 location, go to S3 in a new tab (to avoid losing the parameters you configured so far). Go to the location of your file and copy the URI: Add another channel and repeat for the test dataset, specifying the name “test” for the channel (SageMaker expects two channels for the k-NN algorithm, specifically “train” and “test”). Specify a location where to save your model output, which can be the S3 bucket that you just generated, for instance: You’ve now finished configuring the training job! Click on “create training job”. It might take 3–5 minutes for the job to be completed. Once the job is completed, you can click to get more details. Scroll to the “Monitor” section and click on “view algorithm metrics”: you can see different metrics, click on “test accuracy”. The CloudWatch console will open, where you can plot different values monitored during training. For the parameters used in this problem, the accuracy is 0.77: To test a different value of k, we can clone the training job and modify the hyperparameter k. Go back to the training jobs dashboard, select the job, then go to “actions” and “clone”. You’ll see a copy of your training job: You can see the accuracy slightly increased to 0.78: Finally, you can go back to your output directory in S3: you’ll find the trained model saved in a tar.gz format. Once you’re happy with the level of accuracy reached with a certain set of hyperparameters, you have finished your training, and can use the model to make predictions on new data! Full code for data preprocessing:
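To see what the K hyperparameter is doing, here is a toy k-NN classifier in plain Python. SageMaker's built-in algorithm implements the same idea at scale; the points below are made up, with the label playing the role of the "Exited" column.

```python
# Sketch: majority vote among the k nearest training points.
from collections import Counter

def knn_predict(train, point, k=5):
    """Classify `point` by majority vote of its k nearest training rows.
    `train` is a list of (features, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    nearest = sorted(train, key=lambda row: dist(row[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy churn-style data: (features, exited)
train = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.15, 0.25), 0),
         ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
print(knn_predict(train, (0.85, 0.85), k=3))  # → 1
```

Raising k makes the vote smoother but can drown out small classes, which is why it is worth cloning the training job to compare a few values, as done below.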
https://francesca-donadoni.medium.com/training-a-k-nn-model-to-predict-bank-customer-churn-using-aws-86284da5094a
['Francesca Donadoni']
2020-12-05 17:45:47.347000+00:00
['Machine Learning', 'Knn', 'AWS', 'Aws S3', 'Churn']
If Only I’d Known …
If Only I’d Known … Limericks let loose for Christmas If I had written a different story, this picture would have been the perfect fit. (Photo by v2osk on Unsplash) Pulled out of my driveway still unscathed, was running late, but so glad I bathed. Backed down to my neighbor’s, whose wife still belabors, “Together, you’ve never behaved” What does she mean? What what and what where? We’re normal buds — just a fun-loving pair. At Hooters, no leering, just burgers and beering. If only I’d known, temptation was there. A bit embarrassed, not sure I should share. Went to the restroom to comb my hair. Missed sign, “Please step down.” Tripped, fell, hit my crown. If only I’d known, that warning was there. Seems a buxom server had heard me swear. She saw my head bleeding, bump big as pear. Applied forehead band-aid, healing kiss (sweet handmaid.) If only I’d known, she left lipstick there. Now heading home, looking worse for wear. Took alley short cut, my friend’s stupid dare. Heard a pop, what was that? Can’t believe, tire’s flat. If only I’d known, that nail was there. The bad news was, I had no cab fare. The good news was, there’s air in my spare. We’re late and we’re speeding, Speed limit exceeding. If only I’d seen that cop hidin’ there. “No license, Sir” — he started to glare, then smiled and said, “It wouldn’t be fair. Since it’s almost Christmas, a gift for your missus.” If only I’d known, a heart was in there. Back underway, I stopped at The Square to buy sexy nightie — less fabric, more bare. Clerk said, “Is she my size?” I flirt, “Not your big eyes.” If only I’d known, my wife’s in swimwear. Dropped my friend off, was still unaware my wife thought I was in an affair. Lingerie was gift wrapped, held it out, but got slapped. Turns out she had seen lipstick near my hair. She walked out on me, now living elsewhere, left me with the kids, who need childcare. I’ve made some mistakes, but hit the sweepstakes. That Hooter handmaid’s now my live-in au pair. ©2020 HHThorpe. 
All rights reserved.
https://medium.com/no-crime-in-rhymin/if-only-id-known-57a6e404e678
['Harper Thorpe']
2020-11-30 04:45:21.452000+00:00
['Humor', 'Music', 'Love', 'Poetry', 'Satire']
How to Ace Your First Code Change at Your New Job
CodeRevGuru Jun 30 · 4 min read Photo by Charles Deluvio on Unsplash Meet CodeRev, a career focused software developer that wants to take over the world. CodeRev studied hard. CodeRev leetcoded hard. CodeRev did tons of side projects. CodeRev even learned React in his spare time. CodeRev passed the technical interview and successfully got a developer job at a new company. Congrats to CodeRev! CodeRev got his first issue assigned. CodeRev rubbed his hands together and wanted to make a good impression with his first pull request / code review and banged out the code in a jiffy. Submit pull request. Boom. Mic drop. But then… CodeRev got 40 comments on the PR. Turns out the styling is all messed up. Use spaces! CodeRev missed an edge case in his unit test. CodeRev had a spelling error in a variable name. Actually, CodeRev’s PR simply doesn’t build at all. CodeRev reads through the comments with horror. He’s done. His career is over at this company. Better find a new job. Photo by Jackson Simmer on Unsplash Don’t be like CodeRev. Here are some common mistakes to avoid on your first PR at your new job. Breaking the Build Do not break the build. Do not check in broken code. Do not ask people to review broken code. Do not assume other people will check out your changes locally and build the code. Do not assume. Common gotchas include not rebasing / merging the latest changes and building your changes on top, not running checkstyle / jslint / whatever linter tools the existing codebase uses, and not running all the legacy unit tests. Yes, all of those tests. Run the build locally. Run all tests. Make sure checkstyle passes. Don’t break the build and don’t waste others’ time. Not Actually Solving the Problem First and foremost, your code needs to be correct. That is, it must solve the problem that you’re trying to solve. Look through the JIRA issue carefully. Ask the issue creators for clarifying questions. 
Identify all potential edge cases. For example, can the input ever be invalid or null? What should the UI display if the backend returns 400? Should option A actually be enabled if option B is already involved? Ask lots of questions. Ask your colleagues for help. Ask the PM before you become enemies. Ask developers that worked on the same code base. Assume nothing. Write good tests. Do the right thing. Not Respecting Convention / Existing Styles and Patterns Yes, you are ambitious and you want to do the right thing. You want to drive change and write beautiful code with elegant abstractions. Those are all good things, but there is a place and time for everything. If your first issue assigned is something like changing the color of this banner, don’t rewrite all the CSS files with LESS because it’s cooler and much easier to read. The original dev probably used CSS because LESS wasn’t a thing yet. Other developers on your team probably only know plain CSS. The original dev is not less hip than you; the code base is simply an artifact of the times. Respect what came before. People do things for a reason. Try to follow what is established in the code base. If the existing stuff is hot garbage, bring up refactoring as a separate issue. Communicate with your teammates. Communicate. Not Proof-Reading Your Code Change You want to bang out the PR ASAP to show that you’re an elite coder with legendary productivity to establish clout. But then your PR is plagued with copy-paste errors, typos, weird indentations, just downright messy. Not proof-reading / double-checking your changes expresses to others that you do not value their time. You are taking away precious focus time from other developers by not being careful about your code. Honest mistakes are OK, but double, triple check your changes to avoid obvious errors. A good way to attack this is to use a code review checklist like this. Want One-on-One Help for Your First PR? 
At coderevguru.io we believe in creating a safe space for developers to ask for help. From one-on-one Slack sessions for general questions to individual code feedback with actionable items, coderevguru.io will help you get to the next level. Reach out to us for help today at coderevguru@gmail.com.
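The edge-case advice in the post above can be made concrete. Below is a minimal sketch (the `parseQuantity` helper is hypothetical, not from any real codebase) of handling null and invalid input explicitly, backed by the kind of unit checks a reviewer expects to see alongside the change:

```javascript
// Hypothetical helper illustrating the edge-case advice above: handle
// missing and invalid input explicitly instead of assuming the happy path.
function parseQuantity(input) {
  if (input === null || input === undefined) {
    return 0; // a missing value defaults to zero rather than crashing
  }
  const n = Number(input);
  if (Number.isNaN(n) || n < 0) {
    throw new RangeError(`invalid quantity: ${input}`);
  }
  return Math.floor(n);
}

// The kind of unit assertions a reviewer expects to see with the change:
console.assert(parseQuantity(null) === 0, 'null input defaults to 0');
console.assert(parseQuantity('3.9') === 3, 'numeric strings are floored');
let threw = false;
try {
  parseQuantity('abc');
} catch (e) {
  threw = e instanceof RangeError;
}
console.assert(threw, 'non-numeric input throws a RangeError');
```

Covering the null, invalid, and boundary paths in tests up front is exactly what keeps the "missed an edge case" comment off your first PR.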
https://medium.com/dev-genius/how-to-ace-your-first-code-change-at-your-new-job-efba52dd4a27
[]
2020-07-01 07:41:05.487000+00:00
['Developer', 'Software Engineering', 'Software', 'Software Development', 'Code Review']
Getting Things Right With Checklists
👉 This article has moved and is now available here.
https://medium.com/production-ready/getting-things-right-with-checklists-24455a99dddf
['Mathias Lafeldt']
2019-04-20 13:06:40.489000+00:00
['Microservices', 'Software Development', 'Complexity', 'Checklists', 'Safety']
Foo Fighters — Concrete and Gold. June 1st began like any other day of…
June 1st began like any other day of Summer 2017 for me; I was working as a student tech for one of the colleges at Western Michigan University and was appropriately slacking off by chatting with the other techs and attendants. While half-listening to my coworkers, I checked my phone and saw that I had a notification that the Foo Fighters had released a new single: “Run.” I bolted from the room without so much as a “hang on a second.” You see, Foo Fighters have been my favorite band since sometime in late 2006. I’m not entirely sure what it is about them; most of their career has been spent being a safe, vanilla flavor of alternative rock music that doesn’t exactly bleed originality beyond some distinctly Grohl flavors. However, no other band really connects with me like they do. I know the history of the band in and out. All of the members who’ve ever touched a Foo record, even down to the one guitar track by Greg Dulli on the self-titled album’s “X-Static.” Something about Dave’s songs just resonates with me on a level that no other artist does. My best friends from middle and high school and I formed a rock band back in 8th grade, and the first song we ever performed (or even played all the way through without messing up) was the Foo Fighters’s “Everlong,” easily one of my favorite songs of all time. In fact, we always played one, if not two, Foo covers at the few shows that we played. One of those friends and I still have what sometimes feels like scheduled deep conversation every few months about Foo Fighters; how their sound has changed, what we miss, what we love, which producers we want to record the next album, you name it. So to put it in fewer words: a surprise single was a big deal for me. I wish that I could say that I was as excited after listening to it as I was prior; my expectations for the album were tempered by a surprisingly safe single that didn’t elicit much enthusiasm from me. 
Still, I marked my calendar for September 15, 2017: the release of the new album Concrete and Gold. Foo Fighters — Concrete and Gold (2017) I pre-ordered the album as soon as I was able, and listened the first moment that I could, forcing my stream of consciousness upon my Foo-loving friend as I listened. Afterwards, and to this day, we still talk about this album a lot, but not for the reasons Dave Grohl would likely hope; frankly, we were disappointed in it. It sits as my least favorite album from the group. Even still, we each have things that we do enjoy about the album, and while I can’t speak for him, I can at least describe my own thoughts. I’ll try to find at least one highlight from each song that I really enjoy. The first song, “T-Shirt,” is a short-but-sweet number that has two modes: lullaby and stadium-shaking rock. It starts with just Dave and an acoustic guitar singing how he “don’t wanna be king” and is “just trying to keep his t-shirt clean.” But just when you may reach to turn up the volume during this nearly-too-quiet section (I did, and regretted it), the song shatters its shell and explodes, a wall of guitars and choir advancing outwards in all directions. Dave ominously sets a darker tone, announcing “There’s one thing I have learned//If it gets much better//It’s going to get worse//And you get what you deserve.” I enjoy both halves of the song; Dave’s intimate acoustic numbers always feel delightfully honest, and he sure as hell knows how to write an impactful rock chorus. If anything, this song is too short, but it does an intro’s job. The highlight: The climax after the 1:00 mark. The problem with “Run” is that it’s just so painfully forgettable. What is supposed to be a lazy, dreamy sing-along intro comes across as merely boring, and the crunchy riff that defines the song is just two repetitive notes, not far from a toddler playing with a toy electric guitar. I do appreciate the guitar work in the chorus, though.
While it’s obvious they mostly just follow the VI-v-i chord progression, they walk over enough notes between the bars that it feels fresh. The guitar solo and bridge sections feel frantic and energetic, and are easily the height of the song. The highlight: The chaotic guitar solo section at 4:06. “Make it Right” is an interesting number; a riffy, bluesy song that features Dave doing a bit less singing and a bit more preaching. The chorus pulls off a clever key change that is a bit jarring at first, but settles in just fine. The real beauty in this song, though, is in the bridge. It features a riff that climbs up the bluesy scale continuously in a very groovy way. In the background is a very un-Foo-like “La” on every beat, sung by none other than Justin Timberlake. While it’s a bluesier section than I’d ever heard from the band, it ended up being my favorite part of the song. Oh, and at the end of the bridge is this tasty drum roll from Taylor Hawkins set under a phaser effect of some sort. It’s trippy, but it’s memorable in all the good ways. The highlight: The groovy signature guitar riff under double-time hi-hats. Next up is the single that had the most airplay: “The Sky is a Neighborhood.” This song accomplishes exactly what it set out to do: be a sleepy, rhythmic stadium sing-along. It begins with a haunting chorus and string section behind an arpeggiating guitar before that all stops, leaving a hollow and reverb-heavy Dave singing more or less just with a drum beat. If it was supposed to give the impression that it’s just a regular dude singing and not a professional, it works, and it’s charming. This whole song really is just a sing-along number, and it kicks in once the whole band joins in for the huge choruses. The highlight: The sing-along chorus. “La Dee Da” has a lot to love, in my opinion.
Laden with flavors of groovy and faster, harder rock from their excellent 2011 album Wasting Light, it strums along through the verse before exploding into a scream-heavy chorus. While I’m not a fan of the guitar solo in this song (for as exciting and energetic as the choruses surrounding it are, the solo is just so flat and boring), it’s not enough to kill it, and it’s still a winner. The highlight: The locomotive energy Dave’s screams bring in the chorus. The real crown jewel of the album is here at the halfway point: “Dirty Water.” It’s unlike anything the Foos have really done much before. Starting with simple finger-plucked acoustic dyads that sound influenced by Spanish classical music, it blossoms into a lazy languor of Dave describing his dreams. The chorus is very choir heavy again, a recurrent element on Concrete and Gold. Halfway through the song, we get the heavy repetitive riff that Dave and the Foos are known for, a simple repeating of E octaves at first that transforms into more from there. Dave repeatedly warns, “Bleed dirty water//Breathe dirty sky,” which is much more effective in practice than it sounds. The way the melody resolves is incredibly satisfying, and Dave adds enough intensity on each repetition to keep you hooked, constantly climbing his vocal range until he’s scratching his grated ceiling. Finally, the song hits its satisfying last chorus, and ends with a taste of the E octave riff again before finally screeching to a halt. The highlight: The whole damned thing. “Arrows” is an interesting song. A unique drum rhythm gives it a sort of asymmetric rhythm, feeling more like two 3/4 bars followed by a 2/4 and then repeating than two 4/4 bars. The song builds up to a claustrophobic chorus as Dave sings “Arrows in her eyes//Fear where her heart should be.” It hits me with a feeling that I’m frantically searching for something that I’m unable to find, and time is running out. 
The bridge section does what the opening verses allude to, turning to 3/4 time for a few straightforward rock bars before going back to the asymmetric rhythm (there’s likely a better word for that). This is the highlight of the song for me. The song sheds its distressed vibe and reveals a commanding tank of rock music. The highlight: When the song stops teasing you and gives you a few 3/4 bars of pure rock in the bridge. This is where the album takes a tonal shift in my mind. The next song, “Happy Ever After (Zero Hour),” wears its Beatles inspiration on its sleeve. One can’t listen to this song without thinking about the ’60s group’s “Blackbird.” Dave laments about drinking whiskey while sending resignation letters, pondering “Where is your Shangri-La now?” as “There ain’t no superheroes now//They’re underground.” Altogether, it’s a really enjoyable acoustic tune that is a nice blend of that “Blackbird” Beatles sound with the Foos’ relatively recognizable version of alternative rock. The highlight: The beautiful chorus hook. Speaking of the Beatles, and especially “Blackbird” singer Paul McCartney, drums on the next track, “Sunday Rain,” are played by none other than McCartney himself. And that’s not the only thing different about the track; once in a while the Foo Fighters like to throw a wrench into things and have drummer Taylor Hawkins sing a song (first on In Your Honor’s “Cold Day in the Sun,” and frequently during covers when performing), and this is that song on Concrete and Gold. “Sunday Rain” is a slow burn that is reminiscent of darker Tom Petty tunes to me. The verses are quiet with a simple, unimaginative guitar riff. It’s also the longest song on the album, coming in at just over six minutes. If anything, the song does overstay its welcome, but Taylor is an enjoyable, smoky singer, so it’s a mostly relaxing, sexy jam. The highlight: Taylor Hawkins’s voice is the best thing about this song.
“The Line” is the penultimate song on the album, and is one of the spots where the album’s less-than-stellar (read: bad) production is most evident. The song starts immediately, with low piano and bass notes ringing as Dave sings right out of the gate. However, when the band comes in about 15 seconds in, what I imagine is supposed to feel like a spirited eruption is capped by the ridiculous lack of dynamics this album has. Apart from the first track, there is little difference in volume between the “quiet” and the “loud” sections, which makes moments like this lose a ton of their energy. I think “The Line” is one of the better songs on the album musically, but the lack of the energy it’s supposed to have makes it harder to enjoy. The highlight: The intro, before the band comes in, an experimental sound by Foo Fighters standards. I wish I could say better things about the closing title track, “Concrete and Gold.” It’s clearly supposed to be a tired hangover of a song that unfolds into a huge, dreamy chorus. But I think Dave stuck the landing with the “hangover” part a little too well; his vocals are often too quiet and slurred to understand. He sings, “Our roots are stronger than you know//Up through the concrete we will grow” repeatedly, and maybe the slowness is the point. His lyrics reference the flowers, weeds, grass, and various life that grows through the cracks in asphalt, as if they’re fighting and finally bursting through after ages of continuous effort. Sadly, it’s probably the biggest miss on the album, and frankly, it’s boring. The highlight: I enjoy the quiet bridge section; it reminds me of an old cowboy western flick.
The mysterious acoustic “Dirty Water” and its patented Foo Fighters repetition-based build up to the final chorus. The melancholic “Happy Ever After (Zero Hour)” that wears its Beatles inspiration on its sleeve. There’s still quite a bit to love in this album, just not as much as other Foo Fighters records.
https://notlikethesoup.medium.com/foo-fighters-concrete-and-gold-42f8dbf07915
[]
2020-02-19 21:35:10.321000+00:00
['Review', 'Foo Fighters', 'Music']
Used to ache sometimes and swell
With Grandma’s Hands, Bill Withers tells us the story of his earliest years in flashback, and in close-up. What he lets us see is controlled, our view narrowed; we’re made to see what a child would see. Spielberg’s camera made us do this too, once, in ET, by placing us at a child’s height: the height of door handles, of the dinner table, the height of the waists of shadowy men. In Grandma’s Hands, Bill makes us see through a camera that is the shape of his memories. He remembers being Billy, back when he was knee high, and tours us around his town, sometimes using Grandma’s voice, but mainly her hands. Hands that were signposts for Billy’s learning and for a moral code that, as Bill, he still lives by. Those hands loom large, especially for a small child sat in a pew. Grandma’s hands clapped in church on Sunday morning Grandma’s hands played a tambourine so well Grandma’s hands used to issue out a warning She’d say, “Billy don’t you run so fast, Might fall on a piece of glass, Might be snakes there in that grass” Grandma’s hands These hands, they believe. Their faith resounds through the church, joining in with everyone else, asserting the sense of belonging that Bill finds so reassuring. The hands perform, too, staking out the beat that tethers the music and the ritual to the community that understands itself through both. Hands mark out the rhythm of worship, but also the familiar rhythm of the week, the weeks, and hence life. We imagine the hands, perhaps, to be clean, proud hands, protruding from neat, pressed cuffs; dressed in Sunday Best, presentable, as society expects, but never stiff or starched. The church in which these hands clap and play tambourine is warm and welcoming. That’s clear enough when the hands (which speak in gesture as well as music) issue out a warning that isn’t fire and brimstone, but friendly, and protective. It’s the voice of experience. Grandma helped Bill to learn; she was saving him not from damnation, but from himself.
These hands did a lot of saving. Grandma’s hands soothed a local unwed mother Grandma’s hands used to ache sometimes and swell Grandma’s hands used to lift her face and tell her, She’d say, “Baby, Grandma understands, That you really love that man, Put yourself in Jesus’ hands” Grandma’s hands The hands understand. They give comfort. They know how to navigate a world that causes fear and anger — a grown-up world where snakes, the dangerous, figurative ones, still hide in the grass. These hands, as someone else once said, know too much to argue or to judge, embodying values indivisible from the woman to whom they belong and from the faith she carries with her. They ache sometimes, these hands, and swell, but they cannot help but love. It’s instinctive, or maybe learned. Either way, they’re there when the burden is too much for others to take. They are hands that never cross the road. Billy knows this, instinctively. But Bill, looking back, understands it. Grandma’s hands used to hand me piece of candy Grandma’s hands picked me up each time I fell Grandma’s hands, boy, they really came in handy She’d say, “Matty don’ you whip that boy What you want to spank him for? He didn’t drop no apple core” But I don’t have Grandma anymore If I get to Heaven I’ll look for Grandma’s hands Those hands were always giving, always helping, always saving. Always able to see things from your point of view. You hear it when Billy’s Dad gets a word or two. There’s no moralising, just logic — he didn’t do anything wrong, Matty; you might fall on a piece of glass, Billy. The patience of a saint. And now she’s up there herself. No doubt still helping people to help themselves. Her faith repaid. And from here, no longer knee high, Bill knows that Grandma’s hands stand for safety, for comfort, for strength. Young Billy learned by them, and learned to trust too; from what they did, who they touched, and how. 
They’re still the focal point of his love for her and of hers for him; signposts for the world around him, and the values he’s grown up with. Those hands carry a lot of weight. Bill Withers says that of all the songs he wrote, Grandma’s Hands is his favourite.
https://medium.com/a-longing-look/used-to-ache-sometimes-and-swell-93021629a15b
['James Caig']
2017-02-03 08:52:23.046000+00:00
['Lyrics', 'Music', 'Family', 'Soul', 'Love']
How “Homeland” Raised the Bar for Television Drama
The Evolution of Homeland The Emmy-winning espionage thriller Homeland premiered on October 2, 2011. For context, its premiere occurred 10 years and a few weeks after the historic events of 9/11 that in many ways serve as the catalyst for the show, and 16 months after the series finale of the Fox juggernaut 24, an espionage thriller that mined similar narrative territory but with a decidedly more network TV spin. Created by Howard Gordon and Alex Gansa (both of whom had worked on 24), the series was an adaptation of an Israeli series called Prisoners of War, which had premiered a year prior to great acclaim. Season One promotional image (Copyright: Showtime/20th Television) The show centered on CIA agent Carrie Mathison (Claire Danes), a brilliant, brave, and determined woman battling some internal demons, including a longstanding struggle with bipolar disorder. In the pilot episode, she became convinced that a U.S. Marine Corps Scout Sniper named Nicholas Brody (Damian Lewis), who was being touted as an American hero after being held as a prisoner of war by Al-Qaeda, had actually been turned by the enemy and was a threat to national security. One of the only people who believed her (or was at least open to what she had to say) was her mentor Saul Berenson (Mandy Patinkin), who was at that time the CIA’s Middle East Division Chief. Homeland was a breakout hit in many ways. It instantly won over critics, as evidenced by its first season’s stunning 100% approval rating on Rotten Tomatoes and average rating of 92 out of 100 on Metacritic. Its ratings were solid by the pay cable network’s standards and steadily grew over the course of the season.
The show generated enormous buzz both for its quality and its tantalizing premise, not to mention the fact that it featured highly praised turns by a pair of actors quite familiar to American audiences (Claire Danes had starred on the cult favorite TV drama My So-Called Life as a teenager and in hit films like Romeo + Juliet, whereas Mandy Patinkin had a long career on Broadway, won an Emmy for his work on the medical drama Chicago Hope, and is well known to film fans for his role in The Princess Bride). Its breakout status was cemented at the 64th Annual Primetime Emmy Awards, where it almost made a clean sweep of the main categories, winning Outstanding Drama Series, Outstanding Lead Actress (Danes), Outstanding Lead Actor (Lewis), and Outstanding Writing. Claire Danes with one of her Emmys for her role as Carrie Mathison (Copyright: ABC/ATAS) Its second season, in which it was firmly established that Carrie was right about Brody all along and their fates became intertwined, grew even bigger in the ratings and scored even more Emmy nominations than the first. (This time around it didn’t sweep the Emmys, but it did score repeat wins for Danes and its writing.) Even by its second season, however, the show was becoming uneven. This was largely due to too much focus on the meandering domestic drama of Brody’s wife and children and the provocative dynamic between Carrie and Brody that frequently threatened to stretch believability too far. Nevertheless, it produced some truly masterful episodes and rallied for a jaw-dropping season finale that was one of the most electrifying television episodes of the decade. Everything that was starting to falter during the second season came to a head in the third. The writers were simply unable to generate interesting and believable content for the Brody family. The season was all over the place, a mish-mash of tepid domestic drama, explosive espionage, and logic-defying soap opera antics.
Many critics and fans wrote off the show as a two-season wonder. The Emmys followed suit, giving it only two major nominations (for Danes and Patinkin, who admittedly continued to turn in brilliant work even when the narrative lost its way). Season Four promotional image (Copyright: Showtime/20th Television) What most people assumed was a flameout, however, turned out actually to be just a bump in the road. The show returned the following fall with its best season yet. The fourth season rebooted the series by dropping the Brodys entirely, giving Carrie and Saul new jobs, moving the action to Afghanistan and Pakistan, and introducing a host of compelling characters. The show clawed its way back into key Emmy categories like Outstanding Drama Series and Outstanding Directing, which is highly unusual. (More often than not, once the Emmys drop a show, it rarely gets back in their good graces.) Season Five rebooted the series again, relocating Carrie to Germany and introducing one of its most brilliant (albeit short-lived) characters in Russian double agent Allison Carr (played by Miranda Otto). That season, too, gained Emmy nominations in several major categories, including Outstanding Drama Series. The sixth season premiered in the final weeks of the tragic 2016 Presidential election campaign. Sensing the shift in American viewers’ interests and the global political and military conversations, the series redirected its focus to domestic affairs. The sixth and seventh season both largely focused on Carrie’s strained relationship with Saul, her complicated relationship with motherhood, and the tumultuous presidency of Elizabeth Keane (played brilliantly by Elizabeth Marvel). Although frequently compelling, the seasons were a bit uneven, particularly the seventh. 
The show delved into subplots like a custody battle for Carrie, an epic relapse of her bipolar disorder, and her complex relationship with former colleague Quinn (Rupert Friend, who rose to the occasion with a brilliant performance in Season Six after his character became physically, emotionally, and cognitively scarred by exposure to a chemical weapon). There was brilliant acting, strong production values, and compelling sociopolitical commentary, but the show had lost some of the spark it had in its early seasons with the central Brody dynamic and the previous two seasons that benefitted enormously from the change of scenery. Season Eight promotional image (Copyright: Showtime/20th Television) Thankfully, the writers recognized this and moved the action back to the Middle East for the reinvigorated final season, which wrapped last Sunday. The season began with Saul enlisting Carrie’s help as he tries to end the war in Afghanistan in his new capacity as National Security Advisor. Unfortunately, he is just about the only person who trusts Carrie, given that she was recently released from seven months as a political prisoner in Russia. It’s a brilliant reversal of the dynamic that kicked the show off — this time Carrie is the returning prisoner of war who no one trusts. It is also an infuriating, but fitting and realistic, journey for her character given that Carrie Mathison has spent the show’s whole run breaking rules, taking enormous and reckless risks, and refusing to play nice. The fact that it all plays out against the backdrop of an imminent political crisis involving Afghanistan, Pakistan, and Russia makes it all the richer. It’s not as consistently brilliant as its best season (that would be Season Four), but it came tantalizingly close. And, thankfully, it spent its final episodes focused on Carrie and Saul, the show’s most fascinating and enduring relationship.
https://medium.com/rants-and-raves/how-homeland-raised-the-bar-for-television-drama-3b6d95a593fb
['Richard Lebeau']
2020-05-25 23:05:56.816000+00:00
['Television', 'Feminism', 'Media', 'Culture', 'Society']
Getting Started With Adobe I/O Runtime (Project Firefly)
#1 Design Overview The requirements are very similar to the previous GDPR UI but here’s a quick recap of what we’re trying to achieve. The requirements for the UI are that the user can: i) specify whether it is an “Access” or a “Delete” request ii) add a Ticket ID for internal records iii) add one or more declared IDs (used as the custom key for each request) iv) receive confirmation when the requests are registered successfully The end result should look something like this: #2 Environment & Project Set Up Once you have Runtime access, the environment setup is very straightforward; just follow the steps listed here in the “Local Environment Set Up” section. The next step is to create a project; if you’re familiar with Adobe IO then you’ve probably been creating from “Empty project” but for Runtime we’ll choose the “Project from template” option: Then select Project Firefly: Choose a Project title and App name: Navigate into the Stage workspace (Stage & Production workspaces are created by default but you can add more if you need to), click “Add service” and select API. There is a wide range of APIs available but we’ll go ahead and select the Privacy Service: The next step is to create the JWT for the service account. A nice, relatively new feature in Adobe I/O is the option to have Adobe generate the key pair for you, which saves a little bit of time: Once the step above is completed, the public/private keys will be downloaded to your device. The Privacy Service API should now be listed in the Stage workspace of the project; however, there won’t be any user-defined actions: To generate an action we need to switch across to the Terminal; the Runtime documentation describes the required steps, so I won’t recreate them here. Just follow the help guide, starting from the section called “3. Signing in from CLI”.
Once those steps are complete, the Stage workspace should show that some actions have been defined: #3 App UI Development The completed build includes a sample app as a starting point, which provides useful signposting for any custom code additions. We can now use the “aio app run” command in the Terminal to start up the development server and take a look at the sample app. I won’t go through every component part of the build but there are a few parts that are worth pointing out. The first part to mention is that the sample app is built using React; you don’t necessarily need to create your UI using React — you could quite easily store your API response(s) in a window variable and use jQuery if you really wanted to — but if you want to achieve the Adobe look and feel then it’s a good idea to make use of the React Spectrum libraries. The second piece of the puzzle is the App.js file, which has two main parts to it. The first part provides some helper functions, such as invokeAction, which allows you to send data from the UI to your backend action: The other, equally important, part of App.js is the render() method; I’m not going to dwell on this part too much because this is definitely not a React tutorial but all you need to know is that it’s this block of code that renders the UI elements: Most of the UI is really simple but I think it’s worth going over how each component is created: The first part is a dropdown that allows the user to specify if it is an Access or Delete request, which translates into the code below: The first Ticket ID / Master Consumer ID row displays when the app is loaded and looks like this: The most complex part is dynamically adding & removing rows based on user interaction. Fortunately, I found this really nice codepen example, which is a very good fit. 
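The dynamic add/remove-row behaviour described above can be sketched as pure state-update helpers; the React component's event handlers would then just call these and pass the result to setState. Function names and the row shape below are illustrative assumptions, not the article's actual code:

```javascript
// Pure helpers for the dynamic consumer-ID rows, kept outside the component
// so they are trivial to test. Each returns a new array (no mutation),
// which is what React's setState expects.
function addRow(rows) {
  return [...rows, { consumerId: '' }];
}

function removeRow(rows, index) {
  return rows.filter((_, i) => i !== index);
}

function updateRow(rows, index, consumerId) {
  return rows.map((row, i) => (i === index ? { ...row, consumerId } : row));
}

// Simulating a user adding a row, typing an ID, then deleting the first row:
let rows = [{ consumerId: '' }];
rows = addRow(rows);
rows = updateRow(rows, 1, 'ABC123');
rows = removeRow(rows, 0);
console.log(rows); // [ { consumerId: 'ABC123' } ]
```

Keeping the row logic as pure functions mirrors the codepen approach the article links to, while leaving the render() method responsible only for mapping rows to Spectrum components.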
I don’t think it is an exact copy & paste but it’s not too far off: Finally, the Submit button looks like this: #4 Sending data to the backend action The invokeAction function is included as part of the sample app build, so it is a straightforward task to make some minor updates to include the data needed for the API requests. The params object forms the request body, so needs to include the ticketID, consumerId & requestType variables that the Privacy API expects. Note that invokeAction is just triggering the backend action and passing data to it, it’s not calling the Privacy API directly: The screenshot below shows another part of the invokeAction function; this is also included in the sample app build, so it’s just a question of being aware that the API response will be stored in actionResponse: Part of the invokeAction function includes a call to actionWebInvoke, which is what ultimately triggers the backend action. The main thing here is to ensure that the requestDetails structure fits with what the Privacy Service API is expecting, otherwise you will see an error response: When you execute “aio app run” in the Terminal, you have the choice to view the app on localhost or in the Experience Cloud shell. If you choose localhost you will need to ensure that you generate & pass the access token in the API request header. However, if you choose the shell option, the access token is generated automatically and passed into the request header. As shown below, the actionWebInvoke request sets new headers but also includes anything else that might already be in the headers object (i.e. 
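As a rough illustration of the params object described above, a hypothetical builder might validate the UI inputs before handing them to invokeAction. The field names (requestType, ticketID, consumerId) follow the article; the real Privacy Service request body is richer than this sketch:

```javascript
// Hypothetical builder for the params object that invokeAction passes to the
// backend action. Validation here is illustrative, not the article's code.
function buildParams(requestType, ticketId, consumerIds) {
  if (!['access', 'delete'].includes(requestType)) {
    throw new Error(`unsupported requestType: ${requestType}`);
  }
  // Drop blank rows the user may have left in the UI before submitting.
  const ids = consumerIds.filter((id) => id && id.trim() !== '');
  if (ids.length === 0) {
    throw new Error('at least one declared ID is required');
  }
  return { requestType, ticketID: ticketId, consumerId: ids };
}

const params = buildParams('delete', 'TICKET-7', ['id-1', ' ', 'id-2']);
console.log(params);
// { requestType: 'delete', ticketID: 'TICKET-7', consumerId: [ 'id-1', 'id-2' ] }
```

Validating on the client like this avoids a round trip to the backend action only to receive an error response from the Privacy Service API.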
the access token) via the …headers statement: The final part of actionWebInvoke handles the response that is returned from the backend action: #5 Privacy Service API requests The last step in the sequence is to make the request to the Privacy Service API, which is done in the index.js file: This part is pretty straightforward, you just need to ensure that the final structure of the request body meets the API requirements: Anything that’s included in the actionWebInvoke request can be referenced by using the params.__ow_ prefix, which you can see examples of in the screenshots above and below: The final part is to return the API response, which will ultimately find its way back to the actionResponse object: #6 Handling the API response The final task is building a table based on the response data; again, the sample app points us in the right direction by including some code that is triggered when a successful response is received. However, this just adds the “Success!” message to the UI, so I need to create a Table component that dynamically builds itself: Essentially this just passes the existing state object — which includes the actionResponse object that contains the Privacy Service response data — through to the Table component when a successful API response is received. That state object is then accessible via the props keyword, which means the data can be re-structured into a format that makes sense for a table: Once the data is re-structured, the map method can be used to dynamically construct the table elements: The final step is to insert the table using the render() method: The end result is a GDPR UI that displays Privacy Service response data in a table, which allows the employee to easily keep a record of the request details that have been submitted:
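The response-to-table restructuring can be kept as a small pure function that the Table component calls before mapping rows to elements. The response shape used here is a simplified assumption for illustration, not the exact Privacy Service payload:

```javascript
// Flattens the actionResponse held in state into plain rows for the table;
// render() would then map these rows to table elements. The jobs array and
// its fields are an assumed, simplified shape.
function toTableRows(actionResponse) {
  const jobs = (actionResponse && actionResponse.jobs) || [];
  return jobs.map((job) => ({
    jobId: job.jobId,
    requestType: job.requestType,
    status: job.status,
  }));
}

const sample = {
  jobs: [
    { jobId: 'j-1', requestType: 'access', status: 'submitted' },
    { jobId: 'j-2', requestType: 'delete', status: 'submitted' },
  ],
};
console.log(toTableRows(sample).length); // 2
console.log(toTableRows(null)); // [] (no response yet, so render an empty table)
```

Because the function tolerates a missing response, the Table component can render unconditionally instead of guarding every access into the state object.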
https://medium.com/swlh/getting-started-with-adobe-i-o-runtime-project-firefly-ac8d8aa77eaa
['Alex Bishop']
2020-08-22 19:03:14.292000+00:00
['Nodejs', 'React', 'Gdpr', 'Firefly', 'Adobe']
Asha Ashok, Meesho’s first woman tech employee, on how solving tech challenges & taking ownership at Meesho has been truly empowering for her
Asha Ashok, Meesho's first woman tech employee, on how solving tech challenges & taking ownership at Meesho has been truly empowering for her Amrita Bose Dec 13, 2019 · 6 min read At Meesho, 35 percent of our workforce comprises women, and we are constantly working to change this statistic and build more equity in diversity and gender representation at the workplace. As Meesho is on a constant mission to empower women across India to start their own businesses with zero investment, and help them achieve not only financial freedom but also a sense of identity, we bring the same kind of ethos to our workforce and are strong believers in gender equality at the workplace. According to research conducted by AnitaB.org, a global, California-based nonprofit (founded by computer scientists Anita Borg and Telle Whitney) that works towards advancing opportunities for women in the field of technology, women IT employees in India currently comprise 34 per cent of the workforce. Compared to the US and Europe, India is ahead in its ratio of women in tech. However, the ratio in India is still heavily skewed towards male tech employees. Asha Ashok was the first ever Senior Software Engineer to join Meesho's Tech team in January 2019. Since then our women workforce across Product, DevOps and QA has been steadily growing — surely a sign of the times. Here's Asha talking about her journey at Meesho, how she learnt to Take Ownership (a core Meesho value) here, and what it means for her to be a woman in tech. From Kochi to Benaras I grew up in Kochi and never really had to step out of Kerala till I completed my 12th in sciences. Once I got admission at the Indian Institute of Technology (BHU) Varanasi to pursue a BTech in Electrical Engineering, I actually travelled out of Kerala for the first time and that too all the way to Varanasi.
Apart from studying my engineering textbooks, I also picked up Hindi, because for the next four years I had to survive there. I am proud to say that not only did I become fluent in Hindi but also managed to acquire a slight UP accent. At Meesho, you will most likely find me conversing in Hindi! Once I graduated from IIT (BHU) in 2016, I joined Oracle as part of a direct campus placement and spent two and a half years there. In Oracle, I was part of a team that was partly working in India and partly in the US, so it wasn't really like being part of the bigger picture. At Oracle, I got a good exposure to various technologies, but somewhere along the way I began to feel a bit stagnant. I wanted a new challenge and hence began my search for the next opportunity that would let me do so. Being the first While I was hesitant to join a startup after coming from an organised corporate set-up, Meesho welcomed me eagerly on board when I applied. I joined Meesho in January 2019 as a Senior Software Engineer, at a time when it was just beginning to scale aggressively, and the Tech team was also starting to expand. I was the first woman tech employee at Meesho. And today, I am happy to say that many more have joined across the Product and Tech teams (in DevOps and QA), and there are 16 of us and counting! If you ask me, from school to college and work, I have always been in groups where the ratio of women to men has been lean. Being a science student and then an engineering student means your classmates are most likely to be men. Techies are infamous for not being very social, but I actually love interacting with people. It took me a while to break the ice with my team at Meesho — because there is nothing that a round of beer after work cannot solve! As soon as I joined Meesho, our Director of Engineering Samir Ranjan handed me my first project. I had to fly solo from day one by taking ownership of my first project — related to payments.
Though it took me a while to understand the Meesho tech, taking up that first big project got me accelerating. Being able to successfully execute this project along with my colleagues was a big confidence booster for me. This was when I felt like I could now easily fit into the startup way of life! Currently I am part of the Payments team and whatever actions we take as part of this team, our biggest goal is to make our resellers' lives easier and better, and to make them even more successful (Make Resellers Successful is a value all of us live by at Meesho). For instance, when our resellers get paid out their earnings by Meesho, we wanted them to know when they would receive the payment by sending them a notification. If you look at it, it is just a small action, but even that small step can make their lives easier, knowing that the money is now in their account and not having to manually check it. Building this would be a minor tech feature, but it would still make a small but meaningful impact in the resellers' lives. Whatever we do as a team, the impact on a reseller's life has to be huge. And not only for our end users, but even for Meesho: if we can make processes easier by, say, automating the payments flow from our end, then we are making the finance team's work easier (Putting Company>Team>Individual is another Meesho value). Among several small and big projects, I have contributed towards releasing features for the suppliers side of the business, and have ensured that these features have also shipped out fast. At Meesho, we believe in Speed over Perfection (a Meesho value). You can keep perfecting a feature, but at some point you have to release the feature, let it go, see the impact and then it can come back to you for further tweaking as required. Because otherwise you won't know whether something is working or not. We develop features quickly, ship them out fast, but at the same time keep monitoring their metrics and impact.
On an equal footing At Meesho, I haven’t felt like I have missed out on any professional opportunity because of my gender. There is no difference or preference shown to me vis-a-vis my teammates (most of whom are guys, which you must have guessed by now). What is expected out of each and every member of the team — is expected out of me — gender no bar. The other thing I love at Meesho is that here I can try new things, people are always welcome to change here. If I want to try a new tool, one that is not being currently used in our existing tech, as long as there is logic and reasoning behind its use, I can go ahead and experiment with it. Autonomy in decision making is expected of you at Meesho. I love participating in Tech team activities including Cricket Wednesdays and the recently held Meesho Premier League and also Meesho Hackathons. Asha (extreme left) at the Meesho Premier League Participating in Meesho hackathons has been an eye opener for me. I actually understood the importance of product while participating. As engineers, what we think would make for a successful product is not necessarily how it will work out. Because when you build a product, it might be too much of a tech feature. You might think you have built an amazing product, but a layman/or end user may or may not find it beneficial. And you won’t see that until and unless you start thinking like how a user/reseller thinks. So the challenge lies in how much can you define the tech within those user parameters. This was a great learning for me at the hackathon. Asha and the Tech team during Hack.Mee (2.0) — The Meesho Hackathon In conclusion, I think one of my biggest learnings at Meesho has been that I have learnt to take ownership here. And that you have to fail sometimes in order for you to be able to understand and fix those failings. Fixing your mistakes is a very important step to development and unless you get an error or make one, you will never know how to fix it. 
And this can only happen when you learn to take ownership. Want to strengthen Meesho's women tech force and help create 20 million entrepreneurs by 2020? Then apply here. We are hiring!
https://medium.com/meesho-tech/asha-ashok-meeshos-first-woman-tech-employee-on-how-solving-tech-challenges-taking-ownership-a6fa2b8f16bd
['Amrita Bose']
2019-12-13 10:21:19.863000+00:00
['Startup', 'Culture', 'Women In Tech', 'Software Development']
White Box AI: Interpretability Techniques
While in the previous article of the series we introduced the notion of White Box AI and explained different dimensions of interpretability, in this post we'll be more practice-oriented and turn to techniques that can make algorithm output more explainable and the models more transparent, increasing trust in the applied models. The two pillars of ML-driven predictive analysis are data and robust models, and these are the focus of attention in increasing interpretability. The first step towards White Box AI is data visualization, because seeing your data will help you gain insight into your dataset, which is a first step toward validating, explaining, and trusting models. At the same time, we need explainable white-box models with transparent inner workings, as well as techniques that can generate explanations for the most complex types of predictive models, such as model visualizations, reason codes, and variable importance measures. Data Visualization As we remember, good data science always starts with good data and with ensuring its quality and relevance for subsequent model training. Unfortunately, most datasets are difficult to see and understand because they have too many variables and many rows. Plotting many dimensions is technically possible, but it does not improve the human understanding of complex datasets. Of course, there are numerous ways to visualize datasets and we discussed them in our dedicated article. However, in this overview, we'll rely on the experts' opinions and stick to those selected by Hall and Gill in their book "An Introduction to Machine Learning Interpretability". Most of these techniques have the capacity to illustrate all of a data set in just two dimensions, which is important in machine learning because most ML algorithms would automatically model high-degree interactions between multiple variables. Glyphs Glyphs are visual symbols used to represent different values or data attributes with color, texture, or alignment.
Using bright colors or unique alignments for events of interest or outliers is a good method for making important or unusual data attributes clear in a glyph representation. Besides, when arranged in a certain way, glyphs can be used to represent rows of a data set. In the figure below, each grouping of four glyphs can be either a row of data or an aggregated group of rows in a data set. Figure 1. Glyphs arranged to represent many rows of a data set. Image courtesy of Ivy Wang and the H2O.ai team. Correlation Graphs A correlation graph is a two-dimensional representation of the relationships (i.e. correlation) in a data set. Even data sets with tens of thousands of variables can be displayed in two dimensions using this technique. For the visual simplicity of correlation graphs, absolute weights below a certain threshold are not displayed. The node size is determined by a node's number of connections (node degree), its color is determined by a graph community calculation, and the node position is defined by a graph force field algorithm. Correlation graphs show groups of correlated variables, help us identify irrelevant variables, and discover or verify important relationships that machine learning models should incorporate. Figure 2. A correlation graph representing loans made by a large financial firm. Figure courtesy of Patrick Hall and the H2O.ai team. In a supervised model built for the data represented in the figure above, we would expect variable selection techniques to pick one or two variables from the light green, blue, and purple groups, we would expect variables with thick connections to the target to be important variables in the model, and we would expect a model to learn that unconnected variables like CHANNEL_Rare are not very important.
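The thresholding step behind such a graph ("absolute weights below a certain threshold are not displayed") can be sketched in a few lines. The column names, toy values, and the 0.5 cutoff below are purely illustrative:

```python
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def correlation_edges(columns, threshold=0.5):
    """Return (name_a, name_b, r) edges whose |r| exceeds the threshold,
    i.e. the edges that would actually be drawn in a correlation graph."""
    edges = []
    for (na, xa), (nb, xb) in combinations(columns.items(), 2):
        r = pearson(xa, xb)
        if abs(r) > threshold:
            edges.append((na, nb, round(r, 3)))
    return edges

cols = {
    "income": [30, 40, 50, 60],
    "loan_amount": [15, 21, 24, 30],
    "noise": [4, 6, 6, 4],
}
print(correlation_edges(cols))
```

Here only the strongly correlated pair survives the cutoff, which is exactly how a correlation graph stays readable even with thousands of variables.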
2-D projections Of course, 2-D projection is not merely one technique: there exist many ways of projecting the rows of a data set from a usually high-dimensional original space into a more visually understandable two- or three-dimensional space, such as Principal Components Analysis (PCA), multidimensional scaling (MDS), t-SNE, and autoencoder networks. Data sets containing images, text, or even business data with many variables can be difficult to visualize as a whole. These projection techniques try to represent the rows of high-dimensional data by projecting them into a representative low-dimensional space and visualizing them using the scatter plot technique. A high-quality projection visualized in a scatter plot is expected to exhibit key structural elements of a data set, such as clusters, hierarchy, sparsity, and outliers. Figure 3. Two-dimensional projections of the 784-dimensional MNIST data set using (left) Principal Components Analysis (PCA) and (right) a stacked denoising autoencoder. Image courtesy of Patrick Hall and the H2O.ai team. Projections can add trust if they are used to confirm machine learning modeling results. For instance, if known hierarchies, classes, or clusters exist in training or test data sets and these structures are visible in 2-D projections, it is possible to confirm that a machine learning model is labeling these structures correctly. Additionally, it shows if similar attributes of structures are projected relatively near one another and different attributes of structures are projected relatively far from one another. Such results should also be stable under minor perturbations of the training or test data, and projections from perturbed versus non-perturbed samples can be used to check for stability or for potential patterns of change over time. Partial dependence plots Partial dependence plots show how ML response functions change based on the values of one or two independent variables, while averaging out the effects of all other independent variables.
Partial dependence plots with two independent variables are particularly useful for visualizing complex types of variable interactions between the independent variables. They can be used to verify monotonicity of response functions under monotonicity constraints, as well as to see the nonlinearity, non-monotonicity, and two-way interactions in very complex models. They can also enhance trust when displayed relationships conform to domain knowledge expectations. Partial dependence plots are global in terms of the rows of a data set, but local in terms of the independent variables. Individual conditional expectation (ICE) plots, a newer and less widespread adaptation of partial dependence plots, can be used to create more localized explanations using the same ideas as partial dependence plots. Figure 4. One-dimensional partial dependence plots from a gradient boosted tree ensemble model of the California housing data set. Image courtesy of Patrick Hall and the H2O.ai team. Residual analysis Residuals refer to the difference between the recorded value of a dependent variable and the predicted value of a dependent variable for every row in a data set. In theory, the residuals of a well-fit model should be randomly distributed because good models will account for most phenomena in a data set, except for random error. Therefore, if models are producing randomly distributed residuals, this is an indication of a well-fit, dependable, trustworthy model. However, if strong patterns are visible in plotted residuals, there are problems with your data, your model, or both. Breaking out a residual plot by independent variables can additionally expose more granular information about residuals and assist in reasoning through the cause of non-random patterns. Figure 5. Screenshot from an example residual analysis application. Image courtesy of Micah Stubbs and the H2O.ai team.
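Returning to partial dependence: the "averaging out" procedure behind a one-dimensional partial dependence plot can be sketched directly. The `predict` function here is a toy stand-in for a trained model, not any model from the article:

```python
def predict(row):
    # Toy "black-box" model with an interaction between two features.
    x1, x2 = row
    return x1 ** 2 + 0.5 * x1 * x2

def partial_dependence(data, feature_idx, grid):
    """For each grid value, fix the chosen feature at that value for every
    row and average the model's predictions, averaging out the effects of
    all the other features."""
    pd_values = []
    for g in grid:
        preds = []
        for row in data:
            modified = list(row)
            modified[feature_idx] = g
            preds.append(predict(modified))
        pd_values.append(sum(preds) / len(preds))
    return pd_values

data = [(0.0, 1.0), (1.0, -1.0), (2.0, 0.0), (3.0, 2.0)]
grid = [0.0, 1.0, 2.0]
print(partial_dependence(data, 0, grid))
```

Plotting the grid against the returned averages gives exactly the kind of one-dimensional curve shown in Figure 4.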
Seeing structures and relationships in a data set makes those structures and relationships easier to understand and is a first step toward knowing if a model's answers are trustworthy. Techniques for Creating White-Box Models Decision trees Decision trees, predicting the value of a target variable based on several input variables, are probably the most obvious way to ensure interpretability. They are directed graphs in which each interior node corresponds to an input variable. Each terminal node or leaf node represents a value of the target variable given the values of the input variables represented by the path from the root to the leaf. The major benefit of decision trees is that they can reveal relationships between the input and target variable with "Boolean-like" logic and they can be easily interpreted by non-experts by displaying them graphically. However, decision trees can create very complex nonlinear, non-monotonic functions. Therefore, to ensure interpretability, they should be restricted to shallow depth and binary splits. eXplainable Neural Networks In contrast to decision trees, neural networks are often considered the least transparent of black-box models. However, the recent work in XNN implementation and explaining artificial neural network (ANN) predictions may render that characteristic obsolete. Many of the breakthroughs in ANN explanation were made possible thanks to the straightforward calculation of derivatives of the trained ANN response function with regard to input variables provided by deep learning toolkits such as TensorFlow. With the help of such derivatives, the trained ANN response function prediction can be disaggregated into input variable contributions for any observation. XNNs can model extremely nonlinear, non-monotonic phenomena or they can be used as surrogate models to explain other nonlinear, non-monotonic models, potentially increasing the fidelity of global and local surrogate model techniques.
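The derivative-based disaggregation just described can be illustrated with a finite-difference sketch. The `response` function below is a toy stand-in for a trained ANN response function (real toolkits compute these gradients analytically):

```python
def response(x1, x2):
    # Toy "trained response function" standing in for an ANN.
    return 3 * x1 + x1 * x2

def input_gradients(f, point, eps=1e-6):
    """Approximate the partial derivative of f with respect to each input
    at `point` via central differences; these gradients serve as
    per-variable contribution scores for that single observation."""
    grads = []
    for i in range(len(point)):
        hi, lo = list(point), list(point)
        hi[i] += eps
        lo[i] -= eps
        grads.append((f(*hi) - f(*lo)) / (2 * eps))
    return grads

print(input_gradients(response, [2.0, 1.0]))
```

For the observation (2, 1) the first input's gradient is 3 + x2 = 4 and the second's is x1 = 2, so the first variable contributes roughly twice as strongly to the local prediction.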
Monotonic gradient-boosted machines (GBMs) Gradient boosting is an algorithm that produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. Used for regression and classification tasks, it is potentially appropriate for most traditional data mining and predictive modeling applications, even in regulated industries and for consistent reason code generation, provided it builds monotonic functions. Monotonicity constraints can improve the interpretability of GBMs by enforcing a uniform splitting strategy in constituent decision trees, where binary splits of a variable in one direction always increase the average value of the dependent variable in the resultant child node, and binary splits of the variable in the other direction always decrease the average value of the dependent variable in the other resultant child node. Understanding is increased by enforcing straightforward relationships between input variables and the prediction target. Trust is increased when monotonic relationships, reason codes, and detected interactions are parsimonious with domain expertise or reasonable expectations. Alternative regression white-box modeling approaches There exist many modern techniques to augment traditional, linear modeling methods. Such models as elastic net, GAM, and quantile regression usually produce linear, monotonic response functions with globally interpretable results similar to traditional linear models but with a boost in predictive accuracy. Penalized (elastic net) regression As an alternative to old-school regression models, penalized regression techniques usually combine L1/LASSO penalties for variable selection purposes and Tikhonov/L2/ridge penalties for robustness in a technique known as elastic net.
Penalized regression minimizes constrained objective functions to find the best set of regression parameters for a given data set that would model a linear relationship and satisfy certain penalties for assigning correlated or meaningless variables to large regression coefficients. For instance, L1/LASSO penalties drive unnecessary regression parameters to zero, selecting only a small, representative subset of parameters for the regression model while avoiding potential multiple comparison problems. Tikhonov/L2/ridge penalties help preserve parameter estimate stability, even when many correlated variables exist in a wide data set or important predictor variables are correlated. Penalized regression is a great fit for business data with many columns, even data sets with more columns than rows, and for data sets with a lot of correlated variables. Generalized Additive Models (GAMs) Generalized Additive Models (GAMs) hand-tune a tradeoff between increased accuracy and decreased interpretability by fitting standard regression coefficients to certain variables and nonlinear spline functions to other variables. Also, most implementations of GAMs generate convenient plots of the fitted splines. These fitted splines can be used directly in predictive models for increased accuracy. Otherwise, you can eyeball the fitted spline and switch it out for a more interpretable polynomial, log, trigonometric, or other simple function of the predictor variable that may also increase predictive accuracy. Quantile regression Quantile regression is a technique that tries to fit a traditional, interpretable, linear model to different percentiles of the training data, allowing you to find different sets of variables with different parameters for modeling different behavior. While traditional regression is a parametric model that relies on assumptions that are often not met, quantile regression makes no assumptions about the distribution of the residuals.
It lets you explore different aspects of the relationship between the dependent variable and the independent variables. There are, of course, other techniques, some based on applying constraints to regression models and others on generating specific rules (like the OneR or RuleFit approaches). We encourage you to explore possibilities for enhancing model interpretability for whichever algorithm you choose as most appropriate for your task and environment. Evaluation of Interpretability Finally, to ensure that the data and the trained models are interpretable, it is necessary to have robust methods for interpretability evaluation. However, with no real consensus about what interpretability is in machine learning, it is unclear how to measure it. Doshi-Velez and Kim (2017) propose three main levels for the evaluation of interpretability: Application level evaluation (real task) Essentially, it is putting the explanation into the product and having it tested by the end user. This requires a good experimental setup and an understanding of how to assess quality. A good baseline for this is always how good a human would be at explaining the same decision. Human level evaluation (simple task) It is a simplified application-level evaluation. The difference is that these experiments are not carried out with the domain experts, but with laypersons in simpler tasks like showing users several different explanations and letting them choose the best one. This makes experiments cheaper and it is easier to find more testers. Function level evaluation (proxy task) This task does not require humans. This works best when the class of model used has already been evaluated by humans. For example, if we know that end users understand decision trees, a proxy for explanation quality might be the depth of the tree, with shorter trees having a better explainability score.
It would make sense to add the constraint that the predictive performance of the tree remains good and does not decrease too much compared to a larger tree.
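The tree-depth proxy described above can be made concrete with a toy scoring function. The penalty weight and the accuracy/depth numbers below are arbitrary illustrative choices, not values from any published benchmark:

```python
def proxy_interpretability_score(accuracy, tree_depth, depth_penalty=0.05):
    """Function-level (proxy) score: reward predictive accuracy but
    penalize tree depth, so a shallower tree with comparable accuracy
    scores higher. Higher is better."""
    return accuracy - depth_penalty * tree_depth

# A deep tree with slightly better accuracy vs. a shallow, simpler tree.
deep = proxy_interpretability_score(accuracy=0.93, tree_depth=12)
shallow = proxy_interpretability_score(accuracy=0.90, tree_depth=3)
print(deep, shallow)
```

Under this proxy the shallow tree wins despite its slightly lower accuracy, which captures the constraint in the text: predictive performance should remain good without growing the tree much larger.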
https://medium.com/sciforce/white-box-ai-interpretability-techniques-93ef257dd0bd
[]
2020-04-07 13:46:54.408000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Data Science', 'Data Visualization']
E-Learning Theory. Keywords and ideas
Keywords and ideas E-learning theory → using multimedia to enhance learning; Multimedia → using more than one way of communication e.g. text and audio; Cognitive load → the amount of working memory resources used; Three types of cognitive load → extraneous, intrinsic, germane; Personalization principle → presenting text that is imaginable in first-person e.g. Einstein imagining himself as a light beam; Autism and high extraneous cognitive load. Abstract The e-learning theory describes how using multimedia enhances learning e.g. a combination of text, audio, and graphics. The e-learning theory further rests on the idea of cognitive load, namely how making use of multimedia reduces cognitive load, and therefore enhances learning. I will attempt to make a summary here of what I have learned this past week. Reducing cognitive load to enhance learning So first of all, there are three main types of cognitive load: extraneous, intrinsic, and germane: Extraneous cognitive load is the amount of working memory resources used to process irrelevant stimuli i.e. distractions; Intrinsic cognitive load is the amount of working memory resources used to process the task itself e.g. solving a math problem; Germane cognitive load is the amount of working memory resources used to access long-term knowledge as well as to create long-term knowledge e.g. creating first principles. Each type of cognitive load can be reduced by the following examples: Extraneous cognitive load can be reduced through removing irrelevant stimuli: I. Removing distracting sounds like your phone; II. Worked-out examples; III. Isolating elements; IV. Integrated examples and avoiding split-attention examples, etc. These things follow the coherence principle. Intrinsic cognitive load can be reduced by splitting the task into smaller parts, each part requiring a smaller intrinsic cognitive load to be processed. This can be done through: I. Worked-out examples; II. Incremental learning; III. Isolating elements; IV.
Integrated examples; V. The signalling principle (e.g. making keywords bold), etc. These things follow the segmenting principle. Germane cognitive load can be reduced by making the principles, fundamentals, and ideas more clear. This can be done effectively through: I. Multimedia (combining text with images); II. The signalling principle (e.g. making keywords bold); III. Inductive reasoning (putting the puzzle pieces together); IV. Worked-out examples, etc. These things follow the modality principle. Indeed, some things like the worked-example effect tend to reduce all of them. Now, I haven't explained very well what the difference between integrated examples and examples that split your attention is, but I guess this image will suffice: Image source: https://en.wikipedia.org/wiki/Split_attention_effect — Split attention effect, Wikipedia Now, another way to reduce probably all three types of cognitive load is through the personalization principle, which means to present text in a way that is imaginable in first-person for the reader. It somewhat reminds me of how Einstein tried to imagine himself as a light beam to understand light better. In the follow-up chapter, I want to connect this idea with autism, namely how people with autism can experience a rather high extraneous cognitive load, hindering their ability to act and think clearly (intrinsic cognitive load), and therefore also having fewer resources for germane cognitive load to learn from situations as effectively (or at least during the situation itself). You can see my Quizlet set here: https://quizlet.com/438904895/e-learning-theory-worked-example-effect-expertise-reversal-effect-diagram/ — [Quizlet] E-learning (theory), Worked-example effect, Expertise reversal effect And for the complete information, see: https://en.wikipedia.org/wiki/E-learning_(theory) — E-learning (theory), Wikipedia Subscribe for more: https://mailchi.mp/261ae9e13883/autibiography
https://medium.com/superintelligence/10-16-2019-e-learning-theory-ae1349144eb8
['John Von Neumann Ii']
2019-11-10 20:27:28.153000+00:00
['Teaching And Learning', 'Learning', 'Learn', 'Teaching', 'Neuroscience']
TensorFlow is dead, long live TensorFlow!
If you’re an AI enthusiast and you didn’t see the big news this month, you might have just snoozed through an off-the-charts earthquake. Everything is about to change! What is this? The TensorFlow logo or the letter you use to answer tough True/False exam questions? Whatever it is, we’re celebrating TF 2.0’s full release! Last year I wrote 9 Things You Need To Know About TensorFlow… but there’s one thing you need to know above all others: TensorFlow 2.0 is here! It is out of beta and officially yours to enjoy as of September 30, 2019! The revolution is here! Welcome to TensorFlow 2.0. It’s a radical makeover. The consequences of what just happened are going to have major ripple effects on every industry, just you wait. If you’re a TF beginner in 2019, you’re extra lucky because you picked the best possible time to enter AI (though you might want to start from scratch if your old tutorials have the word “session” in them). In a nutshell: TensorFlow has just gone full Keras. Those of you who know those words just fell out of your chairs. Boom! A prickly experience I doubt that many people have accused TensorFlow 1.x of being easy to love. It’s the industrial lathe of AI… and about as user-friendly. At best, you might feel grateful for being able to accomplish your AI mission at mind-boggling scale. You’d also attract some raised eyebrows if you claimed that TensorFlow 1.x was easy to get the hang of. Its steep learning curve made it mostly inaccessible to the casual user, but mastering it meant you could talk about it the way you’d brag about that toe you lost while climbing Everest. Was it fun? No, c’mon, really: was it fun? You‘re not the only one — it’s what TensorFlow 1.x tutorials used to feel like for everybody. TensorFlow’s core strength is performance. It was built for taking models from research to production at massive scale and it delivers, but TF 1.x made you sweat for it. 
Persevere and you’d be able to join the ranks of ML practitioners who use it for incredible things, like finding new planets and pioneering medicine. What a pity that such a powerful tool was in the hands of so few… until now. Don’t worry about what tensors are. We just called them (generalized) matrices where I grew up. The name TensorFlow is a nod to the fact that TF’s very good at performing distributed computations involving multidimensional arrays (er, matrices), which you’ll find handy for AI at scale. Image source. Cute and cuddly Keras Now that we’ve covered cactuses, let’s talk about something you’d actually want to hug. Overheard at my place of work: “I think I have an actual crush on Keras.” Keras is a specification for building models layer-by-layer that works with multiple machine learning frameworks (so it’s not a TF thing), but you might know it as a high level API accessed from within TensorFlow as tf.keras. Incidentally, I’m writing this section on Keras’ 4th birthday (Mar 27, 2019) for an extra dose of warm fuzzies. Keras was built from the ground up to be Pythonic and always put people first — it was designed to be inviting, flexible, and simple to learn. Why don’t we have both? Why must we choose between Keras’s cuddliness and traditional TensorFlow’s mighty performance? Why don’t we have both? Great idea! Let’s have both! That’s TensorFlow 2.0 in a nutshell. This is TensorFlow 2.0. You can mash those orange buttons yourself here. “We don’t think you should have to choose between a simple API and scalable API. We want a higher level API that takes you all the way from MNIST to planet scale.” — Karmel Allison, TF Engineering Leader at Google The usability revolution Going forward, Keras will be the high level API for TensorFlow and it’s extended so that you can use all the advanced features of TensorFlow directly from tf.keras. All of TensorFlow with Keras simplicity at every scale and with all hardware. 
In the new version, everything you’ve hated most about TensorFlow 1.x gets the guillotine. Having to perform a dark ritual just to add two numbers together? Dead. TensorFlow Sessions? Dead. A million ways to do the exact same thing? Dead. Rewriting code if you switch hardware or scale? Dead. Reams of boilerplate to write? Dead. Horrible unactionable error messages? Dead. Steep learning curve? Dead.

TensorFlow is dead, long live TensorFlow 2.0!

You’re expecting the obvious catch, aren’t you? Worse performance? Guess again! We’re not giving up performance.

TensorFlow is now cuddly and this is a game-changer, because it means that one of the most potent tools of our time just dropped the bulk of its barriers to entry. Tech enthusiasts from all walks of life are finally empowered to join in because the new version opens access beyond researchers and other highly motivated folks with an impressive pain threshold.

One of the most potent tools of our time just dropped the bulk of its barriers to entry! Everyone is welcome. Want to play? Then come play!

Eager to please

In TensorFlow 2.0, eager execution is now the default. You can take advantage of graphs even in eager context, which makes your debugging and prototyping easy, while the TensorFlow runtime takes care of performance and scaling under the hood.

Wrangling graphs in TensorFlow 1.x (declarative programming) was disorienting for many, but it’s all just a bad dream now with eager execution (imperative programming). If you skipped learning it before, so much the better. TF 2.0 is a fresh start for everyone.

As easy as one… one… one…

Many APIs got consolidated across TensorFlow under Keras, so now it’s easier to know what you should use when. For example, now you only need to work with one set of optimizers and one set of metrics. How many sets of layers? You guessed it! One! Keras-style, naturally.
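That “dark ritual just to add two numbers together” really is gone. A minimal before-and-after sketch, with the 1.x version shown only in comments since Sessions are gone from the default 2.0 API:

```python
import tensorflow as tf

# TF 1.x style (now dead): build a graph, then run it inside a Session.
#   a = tf.placeholder(tf.int32)
#   b = tf.placeholder(tf.int32)
#   with tf.Session() as sess:
#       result = sess.run(a + b, feed_dict={a: 2, b: 3})

# TF 2.0 style: eager execution is the default, so the op just... runs.
a = tf.constant(2)
b = tf.constant(3)
print(int(a + b))  # 5

# Want graph-level performance anyway? Decorate with tf.function and the
# runtime traces a graph for you behind the scenes.
@tf.function
def add(x, y):
    return x + y

print(int(add(a, b)))  # 5
```

This is the “graphs even in eager context” idea: you write imperative code, and `tf.function` hands the performance work to the runtime.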
In fact, the whole ecosystem of tools got a spring cleaning, from data processing pipelines to easy model exporting to TensorBoard integration with Keras, which is now a… one-liner!

There are also great tools that let you switch and optimize distribution strategies for amazing scaling efficiency without losing any of the convenience of Keras.

The catch!

If the catch isn’t performance, what is it? There has to be a catch, right? Actually, the catch was your suffering up to now. TensorFlow demanded quite a lot of patience from its users while a friendly version was brewing. This wasn’t a matter of sadism. Making tools for deep learning is new territory, and we’re all charting it as we go along. Wrong turns were inevitable, but we learned a lot along the way.

The TensorFlow community put in a lot of elbow grease to make the initial magic happen, and then more effort again to polish the best gems while scraping out less fortunate designs. The plan was never to force you to use a rough draft forever, but perhaps you habituated so well to the discomfort that you didn’t realize it was temporary. Thank you for your patience!

The reward is everything you appreciate about TensorFlow 1.x made friendly under a consistent API, with tons of duplicate functionality removed so it’s cleaner to use. Even the errors are cleaned up to be concise, simple to understand, and actionable. Mighty performance stays!

What’s the big deal?

Haters (who’re gonna hate) might say that much of v2.0 could be cobbled together in v1.x if you searched hard enough, so what’s all the fuss about? Well, not everyone wants to spend their days digging around in clutter for buried treasure. The makeover and clean-up are worth a standing ovation. But that’s not the biggest big deal.
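The distribution strategies mentioned above live under tf.distribute. A minimal sketch with a made-up toy model; MirroredStrategy replicates the model across available GPUs and falls back to a single device when none are present:

```python
import tensorflow as tf

# Pick a scaling strategy while keeping plain Keras code.
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Everything created inside this scope is mirrored across replicas.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
```

Swapping to multi-machine training is largely a matter of swapping the strategy object; the Keras model code stays the same.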
The point not to miss is this: TensorFlow just announced an uncompromising focus on usability. It’s an unprecedented step in AI democratization!

AI lets you automate tasks you can’t come up with instructions for. It lets you automate the ineffable. Democratization means that AI at scale will no longer be the province of a tiny tech elite.

Now anyone can get their hands on the steering wheel! Imagine a future where “I know how to make things with Python” and “I know how to make things with AI” are equally commonplace statements… Exactly! I’m almost tempted to use that buzzword “disruptive” here.

The great migration

We know it’s hard work to upgrade to a new version, especially when the changes are so dramatic. If you’re about to embark on migrating your codebase to 2.0, you’re not alone — we’ll be doing the same here at Google with one of the largest codebases in the world. As we go along, we’ll be sharing migration guides to help you out.

We’re giving you great tools to make your migration easier. If you rely on specific functionality, you won’t be left in the lurch — except for contrib, all TF 1.x functions will live on in the compat.v1 compatibility module. We’re also giving you a script which automatically updates your code so it runs on TensorFlow 2.0. Learn more in the video below; it’s a great resource if you’re eager to dig deeper into TF 2.0 and geek out on code snippets.

Your clean slate

TF 2.0 is a beginner’s paradise, so it will be a downer for those who’ve been looking forward to watching newbies suffer the way you once suffered. If you were hoping to use TensorFlow for hazing new recruits, you might need to search for some other way to inflict existential horror.

If you’re a TensorFlow beginner, you may be late to the AI party, but yours is the fashionable kind of late. Sitting out might have been the smartest move, because now’s the best time to arrive on the scene.
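The compat.v1 escape hatch from the migration section really does keep 1.x idioms running inside TF 2.0. A sketch reproducing the old Session ritual purely for illustration (you would only do this while migrating, not in new code):

```python
import tensorflow as tf

# Legacy 1.x code can keep working under tf.compat.v1 while you migrate.
# The old graph-and-Session behavior requires turning eager mode off first.
tf.compat.v1.disable_eager_execution()

a = tf.compat.v1.placeholder(tf.int32)
b = tf.compat.v1.placeholder(tf.int32)

with tf.compat.v1.Session() as sess:
    # The once-mandatory ritual for adding two numbers, preserved verbatim.
    print(sess.run(a + b, feed_dict={a: 2, b: 3}))  # 5
```

This is a stopgap, not a destination: the point of 2.0 is that none of this ceremony is needed anymore.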
This article was written just after TensorFlow 2.0 was released in alpha (that’s a preview, you hipster you), but as of September 30, 2019, the full release is available to you in all its glory! If ever there was a time to dive in, this is it!

Following the dramatic changes, you won’t be as much of a beginner as you imagined. The playing field got leveled, the game got easier, and there’s a seat saved just for you.

Welcome! I’m glad you’re finally here and I hope you’re as excited about this new world of possibilities as I am.

Dive in!

Check out the shiny redesigned tensorflow.org for tutorials, examples, documentation, and tools to get you started… or dive straight in with:

pip install tensorflow==2.0.0

You’ll find detailed instructions here.
https://medium.com/hackernoon/tensorflow-is-dead-long-live-tensorflow-49d3e975cf04
['Cassie Kozyrkov']
2019-10-04 05:19:46.055000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'TensorFlow', 'Hackernoon Top Story']
What do Terre Haute Prisoners, Tuskegee Sharecroppers, and Guatemalans Have in Common? The US Government Gave Them Venereal Disease.
It started in Tuskegee, Alabama. The US Public Health Service (PHS) began a study of 600 black sharecroppers who were offered free medical treatment. Many of the men had never been to a doctor. Of the men, 399 already had latent syphilis and the control group of 201 men was syphilis-free. They all were told they were being treated for “bad blood,” a name given to a variety of ailments at the time.

While most of the men already had the disease, some of their spouses and unborn children had not yet contracted it, and the government effectively gave it to them. Part of the purpose of the study was to track the full progression of syphilis, and the men were given only placebos or aspirin while the disease took its course, ravaging their bodies. By 1947, penicillin had been proven an effective treatment for syphilis, but the men were not offered treatment, only watched as they went blind, insane, and, in many cases, eventually died.

In the mid-1960s, PHS investigator Peter Buxton voiced his concerns that the study was unethical. A committee was formed to review the program, but the Public Health Service elected to see the study through to its completion, until all the men were dead. Years later, Buxton leaked the story to a reporter, who told Associated Press reporter Jean Heller, who broke the news nationwide. The public outcry forced the study to shut down. At that time, 28 patients had died from syphilis, 100 more from related complications, 40 spouses had contracted the disease, and 19 infants got syphilis at birth.

In 1973, Congress authorized a $10 million settlement for the remaining participants and heirs. New laws were established as to what could happen in US Government-funded research.

Many people are generally aware of the Tuskegee Experiments, but far fewer know of two other government-backed studies on captive populations, involving some of the same doctors who should have known better.
In 1943, at the Federal Correctional Institute in Terre Haute, Indiana, the government injected 241 men with gonorrhea so that treatments could be studied for their effectiveness. The men were offered $100, a certificate of merit, and a letter in their file for their parole board hearings. After several months, the study was concluded in 1944 because injecting men in their penises proved an unreliable method of passing along the disease. There is no information available as to the racial make-up of the Indiana subjects, as opposed to the black ones in Tuskegee and the brown ones in Guatemala. One can’t help but wonder.

In September 2011, a report titled “ETHICALLY IMPOSSIBLE” STD Research in Guatemala from 1946 to 1948 was released. The following is from the preface:

On October 1, 2010, President Barack Obama telephoned President Álvaro Colom of Guatemala to extend an apology to the people of Guatemala for medical research supported by the United States and conducted in Guatemala between 1946 and 1948. Some of the research involved deliberate infection of people with sexually transmitted diseases (“STDs”) without their consent. Subjects were exposed to syphilis, gonorrhea, and chancroid, and included prisoners, soldiers from several parts of the army, patients in a state-run psychiatric hospital, and commercial sex workers. Serology experiments that did not involve intentional exposure to infection, which continued through 1953, also were performed in these groups, as well as with children from state-run schools, an orphanage, and several rural towns. President Obama expressed “deep regret” for the research and affirmed the U.S. government’s “unwavering commitment to ensure that all human medical studies conducted today meet exacting” standards for the protection of human subjects.

While the United States expressed “deep regret,” it later declared itself not liable for these tests conducted outside the United States.
Guatemalans were not entitled to the $10 million the remaining participants and heirs of the Tuskegee Experiment got. They didn’t get the $100 the Terre Haute prisoners got. The United States of America said Guatemalans were entitled to nothing at all. Judge Reggie Walton said he was following Federal law but was deeply troubled by the study. He urged the government to provide assistance to the affected. “This lawsuit is simply not the appropriate vehicle for remedying those wrongs.”

Lawyers representing the Guatemalans are now seeking redress against the private companies involved, and in January 2019 a Federal Court allowed a lawsuit to proceed against the Rockefeller Foundation, the Johns Hopkins Hospital and related entities, and Bristol-Myers, whom the plaintiffs describe as “the driving force” behind the study. Several of the doctors worked for Johns Hopkins and received support from the Rockefeller Foundation. Bristol-Myers provided drugs to the small percentage that received treatment.

In response to the $1 billion lawsuit, lawyers for Johns Hopkins said, “Johns Hopkins expresses profound sympathy for individuals and families impacted by the deplorable 1940s syphilis study funded and conducted by the U.S. government in Guatemala. We respect the legal process, and we will continue to vigorously defend the lawsuit.” A spokesman for the Rockefeller Foundation said the lawsuit has no merit and that they had no role in the funding, management, or design of the study. Bristol-Myers had no comment at the time.

The US Military had an interest in finding a treatment for syphilis, which had no cure when the Tuskegee Experiment began. They estimated that as many as 350,000 soldiers might contract syphilis while fighting wars overseas and at home. Someone in the government thought it was just fine to infect targeted populations of prisoners (race undetermined), poor black people, and the brown people of Guatemala. The government was shamed into a settlement with the Tuskegee survivors.
They had no concern for the Guatemalans they infected, most of whom received no treatment. The trial against the private companies is in discovery, and we don’t know what information will be revealed. Here’s hoping that $1 billion is the floor and not the ceiling for compensation.
https://medium.com/recycled/what-do-terre-haute-prisoners-tuskeegee-sharecroppers-and-guatemalans-have-in-common-1b9009025804
['William Spivey']
2020-03-26 04:40:49.059000+00:00
['Tuskeegee', 'Health', 'Syphilis', 'Racism', 'Guatemala']