Dataset fields: title (string, 1–200 characters), text (string, 10–100k characters), url (string, 32–885 characters), authors (string, 2–392 characters), timestamp (string, 19–32 characters), tags (string, 6–263 characters).
Google Cloud Data Catalog — Live Sync Your On-Prem Hive Server Metadata Changes
Disclaimer: All opinions expressed are my own, and represent no one but myself…. They come from the experience of participating in the development of fully operational sample connectors, available on GitHub. The Challenge Entering the big data world is no easy task; the amount of data can quickly get out of hand. Look at Uber’s story of how they deal with 100 petabytes of data using the Hadoop ecosystem: imagine if a full run were executed every time they synced their on-premises metadata into a Data Catalog. That would be impractical. We need a way to monitor changes executed at the Hive server, so that whenever a table or database is modified we capture just that change and incrementally persist it in our Data Catalog. If you missed the last post, we showcased ingesting on-premises Hive metadata into Data Catalog; in that case we didn’t use an incremental solution. To grasp the situation, a full run with ~1,000 tables took almost 20 minutes, even if only one table had changed. In the Uber story, that would be no fun, right?
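To make the incremental idea concrete, here is a minimal sketch. It assumes the Hive metastore publishes change events to a Kafka topic and that a helper persists one table at a time into Data Catalog; the topic name and the sync_entry_to_data_catalog() function are hypothetical placeholders, not the sample connectors' actual code.

```python
# Minimal sketch: consume Hive metastore change events and sync only the
# affected tables, instead of re-scraping the whole server on every run.
# Assumes events arrive as JSON messages on a Kafka topic; the topic name
# and sync_entry_to_data_catalog() are hypothetical placeholders.
import json

from kafka import KafkaConsumer  # pip install kafka-python


def sync_entry_to_data_catalog(database: str, table: str, event_type: str) -> None:
    """Placeholder: upsert (or delete) a single table's metadata in Data Catalog."""
    print(f"{event_type}: syncing {database}.{table}")


consumer = KafkaConsumer(
    "hive-metastore-events",              # hypothetical topic fed by a metastore listener
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Only the changed object is processed, so a single ALTER TABLE does not
    # trigger a 20-minute full scrape of ~1,000 tables.
    sync_entry_to_data_catalog(
        database=event.get("database", ""),
        table=event.get("table", ""),
        event_type=event.get("event", "UNKNOWN"),
    )
```

The point is simply that each event touches a single object, so the cost of a sync is proportional to what changed rather than to the size of the whole Hive server.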
https://medium.com/google-cloud/google-cloud-data-catalog-live-sync-your-on-prem-hive-server-metadata-changes-4f5e661626d8
['Marcelo Costa']
2020-05-19 21:44:35.355000+00:00
['Database', 'Big Data', 'Google Cloud Platform', 'Apache Hive', 'Data']
W. E. B. Du Bois’ staggering Data Visualizations are as powerful today as they were in 1900 (Part 1)
The Exhibit of American Negroes Exposition des Nègres d’Amerique, Exposition Universelle, Paris 1900. W. E. B. Du Bois Papers (MS 312). Special Collections and University Archives, University of Massachusetts Amherst Libraries “The Exhibit of American Negroes” at the Exposition Universelle of 1900 in Paris was created by activist and sociologist W. E. B. Du Bois, in collaboration with educator and social leader Booker T. Washington, prominent black lawyer Thomas J. Calloway and students from historically black college Atlanta University. In his remarkable 1968 Autobiography, Du Bois at the age of 90 recounts a lecture from a lifetime earlier in 1897. “The American Negro deserves study for the great end of advancing the cause of science in general. No such opportunity to watch and measure the history and development of a great race of men ever presented itself to the scholars of a modern nation. If they miss this opportunity — if they do the work in a slipshod, unsystematic manner — if they dally with the truth to humor the whims of the day, they do far more than hurt the good name of scientific truth the world over, they voluntarily decrease human knowledge…” Du Bois describes the exhibition as “Thirty-two charts, 500 photographs, and numerous maps and plans form the basis of this exhibit. The charts are in two sets, one illustrating conditions in the entire United States and the other conditions in the typical State of Georgia”. The data visualizations in “The Exhibit of American Negroes” are therefore split into two sections: “A Series of Statistical Charts Illustrating the Condition of the Descendants of Former African Slaves Now in Residence in the United States of America”, which focuses on the national view of the data, and a companion work done the same year called “The Georgia Negro”. Introductory chart from “The Georgia Negro”, 1900, via Library of Congress Prints and Photographs Division In the article “The American Negro at Paris” he writes: “It was a good idea to supplement these very general figures with a minute social study in a typical Southern State. It would hardly be suggested, in the light of recent history, that conditions in the State of Georgia are such as to give a rose-colored picture of the Negro; and yet Georgia, having the largest Negro population, is an excellent field of study.” Du Bois continues in his Autobiography: “I wanted to set down its aim and method in some outstanding way which would bring my work to the thinking world. The great World’s Fair at Paris was being planned and I thought I might put my findings into plans, charts and figures, so one might see what we were trying to accomplish.” The resulting exhibition was more than just a scientific report. It was a targeted attempt to sway the world’s elite to acknowledge the American Negro in an effort to influence cultural change in the USA from abroad. The charts in the exhibition are arranged to tell a story with data that presents a complex picture of a people, their struggle and perseverance despite more than a century of abject slavery. This is where I’d like to begin. “Proportion of Freemen and Slaves among American Negroes”, 1900, via Library of Congress Prints and Photographs Division The above chart is a masterwork of data journalism. It’s hard to look at the chart above and not feel like you’ve been kicked in the gut. The mountainous black area punctuated by the word SLAVES sits immovable under a green ribbon that opens to the right of the chart.
The story it tells is simple: for 76 years no less than 86% of all Negroes in the USA were slaves. But like most charts, the subtleties might be easy to miss. The Emancipation Proclamation was signed on Jan 1, 1863, yet it takes an additional 7 years (and a Civil War) for the remaining 6,675,000 slaves to find their freedom. “Slaves and Free Negroes” 1900, via Library of Congress Prints and Photographs Division The above is a breakout chart focusing just on “The Georgia Negro”. If one can visualize this chart rotated 90 degrees, this serves as a “double click” on the preceding chart above. It shows the percent of free Negroes only in the state of Georgia, which at its greatest point was only 1.7% over a 73-year period. Let’s look to the 1860 census to get some sense of scale. Of the total population of 1,057,286 people in Georgia, 462,198 were slaves — 44% of the entire population. Remember, the audience for the exposition was the elite leaders in science and business from Europe and the western world. Slavery in America was still very fresh in everyone’s mind. Du Bois knew a logical argument presented in scientific terms would provoke conversation and the brutally graphic truth of each of these charts would be impossible to deny. Du Bois writes “…[the] exhibit which, more than most others in the building, is sociological in the larger sense of the term — that is, is an attempt to give, in as systematic and compact a form as possible, the history and present condition of a large group of human beings.” Notice the emphasis on the term ‘human beings’ consistently linked with ‘American Negro’. By acknowledging slavery as a foundation for the American Negro, he also establishes a baseline by which to show how far this large group of human beings has progressed. In the charts below Du Bois focuses on population growth: “Increase of the Negro population in the United States of America” 1900, via Library of Congress Prints and Photographs Division By reducing each chart to its essence, Du Bois adds successive arguments to the larger message. The above chart shows a fairly steady 68%–88% population growth over a 140-year period. As early as 1807, an Act Prohibiting Importation of Slaves was promoted by President Thomas Jefferson, which sought to block the flow of slaves into the southern states. Then in 1820 slave-trading became a capital offense, and as promising as that sounds, only 74 cases were raised, few captains were convicted and only one miserable bastard was actually executed. The chart above is proof that the measures taken to end slaving in the mid-1800s were a failure. Du Bois understood that his cultured audience knew the events and politics more than the raw data he provides — the data itself was an incrimination. “Comparative rate of increase of the White and Negro elements of the population of the United States” 1900, via Library of Congress Prints and Photographs Division The above chart shows the explosion of the overall US population from 1830 to 1890 with only a marginal rate of increase for the Negro population in general. Despite a huge boom in European immigration, few Negroes immigrated to the US during this time. Natural population growth and a decrease in the mortality rate after the 1865 passing of the Thirteenth Amendment abolishing slavery were the likely causes of the increase. That said, the overall size of the Negro population was still massive, which Du Bois brilliantly compares against the entire populations of several European countries below.
“Negro population of the United States compared with the total population of other countries” 1900, via Library of Congress Prints and Photographs Division Again in “The American Negro at Paris” Du Bois says “At a glance one can see the successive steps by which the 220,000 Negroes of 1750 had increased to 7,500,000 in 1890; their distribution throughout the different States; a comparison of the size of the Negro population with European countries bringing out the striking fact that there are nearly half as many Negroes in the United States as Spaniards in Spain.” “Proportion of Negroes in the Total Population of the United States” 1900, via Library of Congress Prints and Photographs Division By visualizing Negro population growth as a small nation growing inside of the American silhouette, Du Bois elegantly crafts a complex argument. As the silhouette of the country grows, the ‘Negro’ population also grows, not at a faster rate but as a distinctly different entity. This is not a line or bar chart to compare numbers. Du Bois visualizes the data in terms of distinct nations. When viewed alongside the preceding image showing a fully ‘Negro’ populated United States in comparison to European countries, Du Bois clearly implies the existence of a separate Negro nation/state. “The Amalgamation of the White and Black elements of the population in the United States” 1900, via Library of Congress Prints and Photographs Division One of my favorites, the above chart not only shows the fluidity of race as applied to the term ‘Negro’ but also slyly asserts that a sizable portion of the White population had ‘Negro blood’. By crafting a dispassionate argument focused on the numbers, Du Bois makes an argument an African-American would be prevented from articulating verbally. The massive black area is a sleight of hand to distract from the not-so-subtle accusation on the right side of the chart. “Race Amalgamation in Georgia” 1900, via Library of Congress Prints and Photographs Division Another “double click” into the smaller Georgia demographic, a single stacked-bar chart puts the emphasis on the values of black and brown to create an 84% block. The blood-red “40%” is a corporeal smear only semi-visible in the center block. But like the preceding chart, the grouping of the dark values points away from the uncomfortable data showing that a 56% majority of Negroes were of some mixed blood.
https://towardsdatascience.com/w-e-b-du-bois-staggering-data-visualizations-are-as-powerful-today-as-they-were-in-1900-64752c472ae4
['Jason Forrest']
2020-07-09 01:26:15.306000+00:00
['Design', 'Dvhistory', 'African American History', 'Data Science', 'Data Visualization']
Data Visualization for Product Designers
Data visualization for Product Designers Designers work with different types of data on a daily basis. It’s one of the most crucial aspects of the work. This article will explain the most common mistakes we make and provide tips on improving how our data is perceived. It is sometimes problematic to choose the proper chart type. Source: FlowingData What is data visualization and why should I care? Data visualization is perceived as a modern aspect of visual communication. It involves representing data visually so that it can be more easily understood by the viewer. The main goal is to simplify it in a way that makes the data more accessible, understandable and usable. As designers, we have a vast impact on what the end user will perceive while reading our data. It is crucial that we leave no doubts for the user while presenting complex chunks of data. The main goal of data visualization is to communicate information clearly and effectively through graphical means. — George Friedman Showing current trends The most frequently misused chart types are line charts and histograms. By no means can they be substituted for one another. In fact, line charts should only be used for data that is correlated and depicts a continuous interval or a time period. There are loads of examples that would be great for line charts, including stock prices or sales numbers. When we want to visualize data that has no correlation, such as our spending or the number of users, we should use histograms. This is because they visualize the distribution of data over a certain time period for types of data that aren’t correlated. While showing data that is not correlated, we shouldn’t connect the values together. Messing with the scale When we first think of manipulating data, we mostly think of changing the values. Sure, this is undoubtedly the most common form of manipulation. However, there are many other ways we can actually create false impressions for the users. One way is to change the scale of our chart. This results in a false perception of the real values. By no means is 40% twice as much as 36%, but as you can see in the example below, changing the scale of your chart can change the way the data looks. Changing the scale can have a dramatic impact on the end perception. Same values, different perception Even if we use the right chart type and scale, we can still give the users an incorrect perception of our data. This is an example that comes from an influential mobile network provider. When it comes to the amount of mobile data we can consume, we are used to focusing on the amount of data we have used, rather than the amount of data that is left. It can give us a totally different impression when presented the other way round. The same data can be perceived in a dramatically different way. Accessibility is vital Another key aspect of presenting data, which we undoubtedly must consider, is accessibility. We must take into account that not every user will be viewing our data in perfect conditions, for instance, a well-lit office or apartment. In fact, most of our users might be using our products in conditions we have never considered. It is vital that the way we present data is accessible to everyone. The second aspect when it comes to accessibility is visual impairment, which many people experience. Due to the extensive use of smartphones and computers on a daily basis, the number of people with visual impairment is dramatically increasing.
It’s estimated that around 14 million people in the US are visually impaired. While choosing colors that represent various sets of data, we must think about this rapidly growing group of users. Here’s a simulation of what the same chart would look like for a person with deuteranomaly. Is it real life, or is it just Dribbble? This platform is undoubtedly one of the most prominent sources of design inspiration. However, if we decide to investigate this phenomenon further, we will see a lot of really cool looking designs for data visualizations, but in reality, they are often completely unrealistic. The work that is shared on Dribbble has no limits. We come across designs that are based on unrealistic data (or no data at all), use charts that don’t match the type of data, or are simply impossible (or extraordinarily expensive) to implement. Even without real data, we can save a lot of time while designing charts. Writing chart libraries from scratch is insanely time-consuming. We must be aware of which libraries the developers will be using and examine what the styling possibilities are and what we can achieve. Examples could be Chart.js or NVD3. In the video above I based my styling limits on the Chart.js library so that my design is possible to implement in this specific instance. Designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information — George Friedman What can designers do about it? We have a tendency to undervalue the form and function of the data we want to represent and just focus purely on the visuals. This can have a dramatic impact on the end user and their understanding of the information we are trying to portray. There are a few good practices that will undoubtedly improve the way we work with data. Make sure that the way you present the data meets the business requirements. Use real data provided by the client whenever possible. If not, try to base your designs on data that is commonly available. Use color values that are easy to differentiate. This makes your data visualization accessible to people who are visually impaired or colorblind. Be familiar with chart libraries and know what their limitations are. Don’t draw charts from scratch; it’s too time-consuming. It’s better to focus on the form. Data visualization is a powerful tool Data visualization is one of the most powerful weapons in any designer’s arsenal. It has a significant impact on the business end of every product. My advice is to keep it simple and focus on what’s actually important for the business. It’s so important to select the correct type of chart for the data you are going to display, while also making it accessible!
https://medium.com/elpassion/data-visualization-for-product-designers-fcfbbdb47c59
['Jakub Wojnar-Płeszka']
2019-04-15 06:44:17.325000+00:00
['Accessibility', 'UI', 'UI Design', 'Design', 'Data Visualization']
Who is a Polymath? Are you one? Can you claim to be one?
Who is a Polymath? Are you one? Can you claim to be one? @sebaspenalambarri unsplash.com I have a problem with the word Polymath. According to Wikipedia, this is a Greek term that means an individual whose knowledge spans a significant number of subjects and who draws on complex bodies of knowledge to solve complex problems. They are supposed to be Renaissance thinkers. Leonardo da Vinci is probably one of the most well-known Polymaths. I learned of the word Polymath from people’s profiles on social media. There are influencers (often from technology) who are polymaths. I can understand how technologists and thinkers are often people who love to learn. They are fluent and curious about many topics spanning science, arts, literature, technology, and more. But isn’t that just someone who is a curious individual? When my family immigrated from China, my parents were scientists. They were shunned by the Maoist era politicians because they were “polymath” people or “well-rounded intellectuals”. They were people who were curious about all things. They could write, investigate, experiment and develop theories. They were curious individuals. Then, as I grew up in China, at school, teachers advocated for “well-rounded” individuals who can achieve in all disciplines. It wasn’t enough to be good at math and science; you also had to be good at sports and sing in a choir. Like all such children from the top 5% of overachieving families in a country with over a billion people, I grew up under the tall order of being a “polymath”. Like all such children under the weight and pressure of the system, I hated the words “well-rounded” and “polymath”. Due to my curiosity about all things, I became a pretty “well-rounded” individual. My well-roundedness was a by-product of my curiosity. My knowledge in many subjects is the result of careful studies with opportunities provided by the societies that I’m fortunate enough to live in and the circumstances that I live in. My well-roundedness is not renaissance or special. When I look around, there are a lot of people who grew up in my generation who became technologists, inventors, entrepreneurs, scientists, analysts, writers, etc…. These people often work at the intersection of many disciplines. They are big-picture people. They are not specialists. The fact that they are big-picture people, the fact that they are curious about many subjects, allows them to straddle many different subjects in their professional lives. These people are no more special than the specialists who are content with working in detail on one subject matter. So, I don’t have a problem with the term “polymath” or “well-roundedness”. I have a problem with people who use the term to assert authority or gain special status. Our knowledge is not ours to claim. We come to it on our path of living. We learn the knowledge that is shared by all who we can learn from. It is as if to say that Science is yours. It is not yours or mine. We might know more than others because we studied more, we read more literature, and we worked more in this one area. The reason we did all of that is because we had fortunate circumstances. Our expertise does not make us more special. It simply allows us to venture on our paths with more depth. I think of my adventures into the depths of subjects as deep-sea explorations. I think of my adventures around the big picture as snorkeling around the shallow sea.
I think of my adventures around multi-disciplines as living with a vast sea of sea creatures and exploring their homes. Does that make me a polymath? No. It simply makes me curious.
https://medium.com/jun-wu-blog/who-is-really-a-polymath-51f5dcd6fca
['Jun Wu']
2019-11-15 22:44:37.632000+00:00
['Polymath', 'Self', 'Learning', 'Creativity', 'Education']
How to Support Your Dying Loved One
How to Support Your Dying Loved One I watched my husband go through different stages. Here’s what to expect. Our display at my husband’s memorial service. They told us my husband’s cancer was terminal: He had months to live. Maybe as long as a year. From that moment in the doctor’s office, we waited for death. One day, Brock napped for an extra hour. Was this the end? He woke up with a cough. Was this it? With both of us in our late thirties, we had never witnessed death up close. On television, we watched Steve Jobs and Jack Layton (a Canadian politician) become skeletal as their cancers progressed. “Will that happen to me?” Brock asked. We didn’t know. I patted his healthy tummy, assuring us both we weren’t in that final stretch, yet. I read the books, found a spouses’ support group, and was matched with a hospice counselor. We met with Brock’s palliative care doctor and chatted with the home care nurses. We were surrounded by supportive experts. And yet, I felt very alone. In that final year, Brock slept or was sleepy much of the time. Eventually, he was not capable of making decisions. It was up to me to decide when to call for help, and when to let a new symptom play out. If it was after clinic hours, I had to figure out whether our questions and concerns justified calling an after-hours emergency line, or even texting our doctors’ home numbers. One evening, after many months of this anxiety and uncertainty, Brock was suddenly unable to swallow or communicate. It was 11:30 p.m. First thing the next morning, I texted an update to our palliative care doctor and she told me this was the end. After living with stage four kidney cancer for three years, my husband was dying. Of all the emotions I could feel at hearing this, I felt relief. I was relieved because, while this was the moment we’d feared and dreaded, at least we knew what was happening. Finally, there was certainty. There are books, hospice resources, and palliative care pamphlets that describe the common stages of death. Here is our own experience with those stages, shared with the hope that hearing our story will alleviate your own anxieties when caring for a loved one in their final year/months/weeks/days. What dying looks like… months before the end During our final year of Brock’s life, I joined a Facebook group for caregivers, and someone posted a photo of her husband in the afternoon: asleep on his side in bed, in a dark bedroom. I looked up from my chair in our darkened bedroom at my own view: my husband was asleep in bed. Other group members posted their own identical photos. This was when I realized how similar our experiences were: like pregnancy and childbirth, death is a common experience. There is a predictability to how we die. The earliest sign that Brock had cancer was his afternoon naps. My energetic, entrepreneurial husband never napped. This need to rest and sleep increased over the three years he was sick, and in that final year he spent more and more time in bed. How to support this stage To make Brock’s life easier, I sourced a fancy hospital bed that let him raise the back, legs, and feet with a remote control. (Our preschooler loved this bed.) When Brock found the plastic-lined mattress too hot, I bought a cooling pillow-top mattress. I bought extra sheets and changed the bed daily, whenever he went to the washroom or sat in the living room. In addition to these fancy beds, the Canadian Red Cross and other agencies loan out easy-rise chairs, bath benches, commodes, toilet seat risers, and much more. 
To protect Brock’s limited energy reserves, I became his gatekeeper: when family or friends visited, I set a timer and asked them to keep their visits to the half-hour limit (a limit Brock set with me beforehand). When we visited family, I explained he might need to close his eyes mid-conversation, to rest. As Brock got weaker and less able to advocate for his own needs, I became his tough guardian. Some days, Brock didn’t want to eat anything, while other times he craved the same foods day after day. He went through phases where all he wanted to eat was Spitz sesame seeds and apple juice, then chicken burgers from the pub, then chocolate sauce cake with whole cream poured on top. I enabled him, shopping and ordering takeout and baking whatever he felt like, at any time of the day. And if he didn’t want to eat anything, I didn’t force him. My retired farmer plants seeds for a backyard garden, not knowing whether he’ll live long enough to harvest the vegetables or admire the flowers. (He did.) What dying looks like… one month before the end When we get close to death, that line between being awake (conscious) and being asleep (unconscious/subconscious) starts to break down. A common metaphor is having a foot in both worlds at the same time. Seeing ghosts Brock and I weren’t spiritual people, so it surprised me when he told me he was seeing faces. He saw them around the room, in the patterns of the dresser’s wood, and so on. He didn’t recognize these faces. While seeing mysterious faces sounds scary, they didn’t make Brock feel anxious. He knew it was a weird thing to see, and was embarrassed to tell me about it, in case I’d judge him. I am not a neuroscientist, yet I interpret these hallucinations to be an example of how our brains play a role in the dying process; those synapses misfire and start shutting down, leading us to misinterpret our sensory input. Psychologically, our brains assure us: “Yes, death is scary, but look at all these people around you, keeping you company.” I think this explains why some people close to death see their loved ones at their bedsides, or ghosts. Fragmentation of the self Brock also started to feel like he was (sometimes) three people. For example, he said he knew he’d have a good sleep if all three of him were ready to go to bed. If one or two of him were missing, he’d have a hard time falling asleep or wouldn’t sleep well. I once brought him a glass of chocolate milk, because he’d asked for it, and he said: “Phew. I can drink that. I thought I’d have to drink all three glasses.” Or when he made a physically strenuous journey to the washroom, with help from me and a nurse, and he said: “Oh, that wasn’t as hard as I expected. I didn’t have to do it three times.” Brock and I brainstormed where his three people came from. The father, the son, and the holy ghost. Ego, superego, and id. Given Brock’s experience, maybe there’s a reason that a lot of cultural patterns occur in threes. Here are some of the things Brock said during that half-conscious, half-unconscious stage: “I was about to offer you whatever I was eating in my head.” “I think we’re done with the bread, if you want to put that away. And, as I’m saying this, I’m realizing there is no bread.” (Said while I gave Brock a back massage.) “Is that the smallest letter?” How to support this stage When someone sees something that (to you) isn’t there, it’s tempting to argue. But I wanted Brock to share his experience with me, and so I accepted everything he said.
I listened without comment, without joking or questioning the validity of what he was seeing, thinking, and saying. When Brock would see or share something especially weird, I resisted gossiping about it with other people, or being silly. Instead, I wrote it down. (Brief tangent here: I wrote these funny quotes and stories down, thinking that I would share them with Brock when he was better. Some day, I thought, we will laugh together about how funny he’s being. That’s how powerful denial is: I was watching my husband die, and yet some part of my monkey brain thought there would come a time when he’d be back to normal.) On our last family road trip: Brock teaches our son the fine art of toasting marshmallows. What dying looks like… weeks before the end A familiar theme among the dying is travel. My theory is that some part of our brain understands we’re about to undertake a significant departure or change (i.e. death). In order to break that reality down so we can psychologically grasp and accept the idea without freaking out or getting depressed, our brain tells us to expect travel. I asked Brock once if he was scared of dying, and he said no. Again, I thank the miraculous human brain for easing his passage here. He was slowing down in every way — mentally, physically… He was very tired and weak by the end, and the idea of permanent rest wasn’t scary. How to support this stage Half asleep on the toilet one day, Brock asked: “Where’s the car parked?” I assured him the car was parked nearby, and I knew where it was. At this stage, it’s all about reassurances, listening, and keeping our loved one calm. A rare time Brock became agitated was when he couldn’t understand which medication to take. Nothing had changed, I marked the containers as usual. He asked me again and again to explain the doses to him, and suggested we write down the (very simple) steps. I had to stay patient, assure him I would just give him what he needed, and take that level of decision-making away from him. While his physical decline was obvious, I hadn’t realized until this point that his cognitive ability was also disintegrating. Brock uses up his day’s energy to copilot our son to his first day of preschool, 12 days before he dies. What dying looks like… when we shift from living to dying Three years after his diagnosis, following a year of extreme weight loss and declining energy and strength, I woke Brock up to take a pill and he wasn’t able to swallow it. He was foggy and not really conscious, although he was able to move around. That night, he would not stay still; he kept sitting up and trying to stand, as if he had somewhere else to be. At first, that uncertainty I’d felt over the year returned. Did he need the toilet? Was he more comfortable upright? Should I help him get to wherever he was going? It wasn’t until the morning, after a difficult night of constantly easing Brock back onto our bed, that I learned about this stage. Restlessness, wandering… It’s a normal stage in the dying process that they watch for in hospices and care facilities, because if someone gets up and starts walking in the middle of the night they can fall and injure themselves. I think it’s a continuation of that “travel” calling. The dying person knows their journey is about to start, so they get out of bed and start walking. How to support this stage When we expect this stage, we can have an injection ready that will calm the dying person, and keep them safely in bed. (We did not have this injection.) 
Similarly, one of Brock’s painkiller medications was in oral-only form; we should have had an injection form on hand, for when he was no longer able to swallow. It bothers me to think he was in pain for that one night without his top-up painkillers. In my husband’s final years of life, we visited all his favorite places. What dying looks like… at the end Over four days, Brock gradually slipped away from us. The word is “comatose” — he couldn’t control his body anymore, including his eyes, which stayed half-open and glazed. That first morning, he could grunt and gesture enough to tell us he was extremely thirsty, but since he couldn’t swallow anymore we could only wet his mouth with a sponge. Our friends and family arrived, camping out in the living room to work, or making food in the kitchen, in between visits with Brock. We spoke to him, watched his favorite movie together (Lord of the Rings: The Two Towers) and listened to his Spotify playlist. We adjusted his position in our bed every hour, taking care not to lay him on the tumor-y side. We sponged his mouth, changed the moisture-absorbing sheet under him as he sweated, cleaned him up when his body purged his bladder and bowels, and dosed him with painkillers religiously. His body became hot in some places and ice cold a few inches away as his circulation slowed. I lay beside him and felt like he was trying to tell me something. It was frustrating not being able to communicate. How to support this stage Brock’s mom, a nurse, assured us he could still hear us. We tried not to talk about him; instead, we spoke to him. I reached out to all the important people in his life. Many were able to come and say goodbye. When his best friend entered the room, Brock suddenly shouted “Johnny!” When another close friend (who had ordered a Tesla) phoned from New York, Brock yelled: “I want to drive a Tesla!” I’d done enough reading to know the things I should say to my husband at this point: Explain what was happening. Assure him he was still in control. Give him permission to die (I had to lie). Over and over, I said: “This is what’s happening: This is the end of your life. We love you and are doing our best to keep you comfortable. I’m really sorry if we aren’t doing it right. Your job is to let go when you’re ready.” On the fourth day, at a rare moment when his parents, brother, and I were all with him in the bedroom, he died. Death is a natural process I believe that death is a right: We should be able to end our life before our natural time, if we choose. “Medical assistance in dying” is legal in Canada, as it should be. At the same time, witnessing Brock’s death made me realize that death is a natural process; just like pregnancy and childbirth, our bodies and brains (usually) know what to do. When we trust this process, monitor it, and work with it, death can be a peaceful, loving experience. (To be clear: Sometimes these natural processes need medical support. I had preeclampsia and an emergency C-section; Brock needed hefty doses of fentanyl and morphine, plus an oxygen tank.) Understanding death lets us enjoy the life we have left It’s terrible that my husband died at age 38. We love and miss him every day. I have no regrets about those final years together, about our choices, or how we were able to support him. We gave him the best death we could.
If you find yourself loving or caring for someone nearing the end of their life, I hope our story and knowing what to expect will allow you to enjoy the time you have together in those final years, months, weeks, or days.
https://humanparts.medium.com/how-to-support-your-dying-loved-one-36d4726db3db
['Heather Mcleod']
2020-11-04 15:03:27.166000+00:00
['Cancer', 'Death', 'Health', 'This Is Us', 'Family']
Follow Policy: Twitter, Facebook, LinkedIn, et al.
Social networks are powerful, meaningful tools I have had the opportunity to leverage well for personal and professional benefit. Just last week I was trying to find good camping locations and received several good recommendations. I have also gone overboard at times with my virtual social life by, for example, facebooking when I should be playing with my kids, or being glued to twhirl instead of my work. It’s time to make some changes to who and why I follow/friend/add people to these networks. In the past, I have allowed LinkedIn to be my most conservative network, but time and life dictate that I need to be much more selective about my other social interactions, too. So, with a hat tip to Shel, here’s my Twitter, LinkedIn and Facebook (“TwinkedInBook”?) follow policy: As a general rule, I only add/follow people I actually know. That means we have had some meaningful interaction and that your connection aligns with the things I care about. I welcome conversations. If I have said something you want to comment on, please do, but be real, and open about who you are. I don’t have to follow you to hear you. This is the Internet. @robertmerrill me, and I will find it. If you really need to reach me, um, my full name at gmail dot com is a pretty good starting point. I get that networking is not just socializing [via @jibberjobber]. I like both, but I am primarily online for networking in the sense of a professional relationship. I believe networking is about what you give and get. I won’t follow you if I don’t think I can meaningfully give to you and/or that I can receive from you appropriately as well. By the way, I still believe the appropriate balance is to GIVE 10x what you RECEIVE in any relationship, but I will not follow you unless you’re providing a relevant, useful and thought-provoking connection. If I don’t follow you, or just stopped following you, please don’t be offended. Really. It’s not you. It’s me. I will follow companies when I want announcements or information. I will follow people when I see relevance in what you’re talking about. I will unfollow and/or block both when the conversation becomes you selling something, or regurgitating, or just “@username lol!”-ing all day long. The things I say are my opinions and beliefs. Yours are your own. Keep that clear and we will get along just fine :) I reserve the right to change this or update it at any time, or at no time, for any or no reason. And now begins the purging of the accounts…
https://medium.com/connected-well/follow-policy-twitter-facebook-linkedin-et-al-bb844c0583c1
['Robert Merrill']
2016-03-27 20:25:15.803000+00:00
['Facebook', 'Follow Policy', 'Follow']
Creating a Shared and Cached Fetch Request
There are cases when you have multiple components that display the same data from the same source. If every component that requests the data needs to fetch it from the source, you end up with too many fetch requests, consuming a lot of the user’s network resources. Typically, this can be solved by requesting the data only once in the parent component, then passing the data down to the components that need it. However, now you have to responsibly manage the state of the data in the parent component and pass it around. What if we could reuse the same code, written as if every component were requesting the data, but the data was actually shared and cached instead of being fetched from the source every time? The concept is simple: wrap the fetch call in a function that handles sharing the fetch state and cache expiration. You can play with this example on https://codesandbox.io/s/shared-cached-fetch-9szql Another Approach What do you think about this approach? Do you have a better alternative or addition to this implementation? Feel free to share your thoughts! I remember using saga patterns that handle this kind of issue, but adding a whole saga pattern seems like overkill for my small project.
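The article's own implementation is the TypeScript one linked on CodeSandbox. Purely as an illustration of the underlying pattern, here is a rough sketch of the same idea in Python: concurrent requests for the same URL share one in-flight fetch, and results expire after a TTL. The class name, the ttl_seconds default, and the use of aiohttp are assumptions for the example, not part of the original code.

```python
# Sketch of a shared, cached fetch: one in-flight request per URL, plus a TTL cache.
import asyncio
import time
from typing import Any, Dict, Tuple

import aiohttp  # pip install aiohttp


class SharedCachedFetcher:
    """Deduplicate concurrent requests for the same URL and cache results with a TTL."""

    def __init__(self, ttl_seconds: float = 60.0) -> None:
        self._ttl = ttl_seconds
        self._cache: Dict[str, Tuple[float, Any]] = {}    # url -> (expiry, data)
        self._in_flight: Dict[str, asyncio.Task] = {}     # url -> running fetch

    async def fetch(self, url: str) -> Any:
        now = time.monotonic()
        cached = self._cache.get(url)
        if cached and cached[0] > now:
            return cached[1]                               # fresh cache hit
        if url in self._in_flight:
            return await self._in_flight[url]              # join the running request
        task = asyncio.create_task(self._fetch_fresh(url))
        self._in_flight[url] = task
        try:
            return await task
        finally:
            self._in_flight.pop(url, None)

    async def _fetch_fresh(self, url: str) -> Any:
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                data = await response.json()
        self._cache[url] = (time.monotonic() + self._ttl, data)
        return data
```

A caller simply awaits fetcher.fetch(url); whether that triggers a network request, joins one already in flight, or returns a cached value is hidden inside the wrapper, which is exactly the behaviour described above.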
https://medium.com/clearview-team/creating-a-shared-and-cached-fetch-request-4deb4381a05f
['Aditya Purwa']
2020-06-08 04:45:22.690000+00:00
['Typescript', 'Frontend', 'Web Development', 'JavaScript', 'Software Development']
Recording MP3 audio using ReactJs under 5 minutes
Hello folks. I was building a website using ReactJs and I came across a requirement to record an audio file on the web. Like everyone, I started googling it. After googling for some time, I came across an npm package called mic-recorder-to-mp3 that records MP3 files. You can find it at the link below. https://www.npmjs.com/package/mic-recorder-to-mp3 Install and Import To install the above npm package into your React project, do the following npm install --save mic-recorder-to-mp3 After installing it you have to import the package in the project. import MicRecorder from 'mic-recorder-to-mp3'; Configure MicRecorder Now set the bit rate for the audio to be recorded to 128 kbps const Mp3Recorder = new MicRecorder({ bitRate: 128 }); Check for Browser Permissions First set some basic state values like this.state = { isRecording: false, blobURL: '', isBlocked: false, } Before we start recording we should check if permission to use the microphone is allowed in the browser. To do that we need to use the navigator.getUserMedia() function. You can read more about it here. I recommend using the below code in componentDidMount() navigator.getUserMedia({ audio: true }, () => { console.log('Permission Granted'); this.setState({ isBlocked: false }); }, () => { console.log('Permission Denied'); this.setState({ isBlocked: true }) }, ); The above code checks the permissions of the web browser to access the microphone for recording audio. If permission is granted, isBlocked is set to false; if it is denied, isBlocked is set to true. Recording Audio Set up two buttons and an HTML5 audio tag in the render() function. <button onClick={this.start} disabled={this.state.isRecording}> Record </button> <button onClick={this.stop} disabled={!this.state.isRecording}> Stop </button> <audio src={this.state.blobURL} controls="controls" /> To start the audio recording call the start() function in the Mp3Recorder which in turn returns a Promise. start = () => { if (this.state.isBlocked) { console.log('Permission Denied'); } else { Mp3Recorder .start() .then(() => { this.setState({ isRecording: true }); }).catch((e) => console.error(e)); } }; Now the audio starts recording once the above function is called. It continues to record until the stop() function is called. After the recording is stopped, getMp3() is called, which returns a Promise. A buffer and a blob are received as arguments when the Promise is resolved. We can create a “blob URL” from the received blob using URL.createObjectURL(). Now set the received URL as the src for the audio tag. stop = () => { Mp3Recorder .stop() .getMp3() .then(([buffer, blob]) => { const blobURL = URL.createObjectURL(blob) this.setState({ blobURL, isRecording: false }); }).catch((e) => console.log(e)); }; Hooray! Now you can record audio in MP3 format by clicking the Record button. The complete example is available in the GitHub link given below. Say Hi, It’s free @matheswaaran_S or https://matheswaaran.com
https://medium.com/front-end-weekly/recording-audio-in-mp3-using-reactjs-under-5-minutes-5e960defaf10
[]
2019-10-17 07:33:39.017000+00:00
['React', 'ES6', 'MP3', 'JavaScript', 'Audio Recording']
My Favorite Books & Online Courses for Statistical Learning, Data Visualization & Machine Learning
Statistics and Probability I had my statistics and probability courses in university, but if you’re looking for an online course, I believe that the MIT course Introduction to Probability and Statistics is an excellent choice. This course provides an introduction to basic probability definitions and theorems, and it also covers basic statistics topics — Bayesian Inference, Frequentist Inference (NHST, Null Hypothesis Significance Testing), Confidence Intervals, and Regression. For Bayesian Inference, I also enjoy two Coursera courses from the University of California, Santa Cruz: Bayesian Statistics: From Concept to Data Analysis and Bayesian Statistics: Techniques and Models. And if you want to know something about the history of statistics, the book “The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century” talks about the revolutionary ideas that changed our lives.
https://medium.com/better-programming/my-favorites-books-and-online-courses-to-learn-statistical-learning-data-visualization-machine-168915195969
['Niannian Wu']
2020-09-15 08:38:42.745000+00:00
['Python', 'Data Visualization', 'Data Science', 'Statistics', 'Programming']
Image Captioning with Attention: Part 2
Model Training In the first part of the article, we covered the overall architecture of the Encoder-Decoder model for image captioning. Now let’s discuss the training process in detail. You can find the training notebook via the GitHub link. Hyperparameters Note that a smaller batch_size results in a stronger regularization effect and makes it easier to fit one batch in memory. I’ve chosen a batch size equal to 64. vocab_threshold - the total number of times that a word must appear in the captions before it is used as part of the vocabulary. The higher the threshold, the stricter the limit we impose on creating our vocabulary. Based on experimentation, the number of epochs was set to 14. In practice, we keep training as long as the training and validation errors keep dropping. Transformers Data transformation First, we resize the original image with transforms.Resize(256) and randomly crop it to get a 224x224 image sample: transforms.RandomCrop(224). Subsequently, we flip the sample horizontally with transforms.RandomHorizontalFlip(), convert it to a tensor with transforms.ToTensor(), and normalize. Note that normalization is applied to all channels (depth=3) of an image sample, transforms.Normalize((0.485, 0.485, 0.485), (0.229, 0.229, 0.229)), given a mean of 0.485 and a standard deviation of 0.229. Training loop To complete the training of a single epoch, we define a function which receives the following arguments: epoch — number of the current epoch. encoder — model’s Encoder, which is set to evaluation mode with eval(). decoder — model’s Decoder, which we aim to train. optimizer — model’s optimizer (Adam in our case). Adam is a common choice for training that possesses the properties of both AdaGrad and RMSProp⁶. criterion — loss function to optimize. We use Cross-Entropy Loss CrossEntropyLoss(), which combines the effect of Negative Log-Likelihood NLLLoss() applied to the probabilities produced by a softmax function. num_epochs — total number of epochs. We used 14 epochs to train the model. data_loader — specified data loader (for training, validation, or testing). write_file — file to write the training logs. We store the stats in two separate txt files. save_every - to save the results after each trained epoch. Note, we store the captions that we train on without the first word in the captions_train variable and the target captions without the last word in the captions_target variable. The full code for the training loop is shown below. Training function Validation loop The validation function follows the same logic as the training one, except for the BLEU score we calculate for the model’s hypotheses. At each validation step: We pass the output indices terms_idx generated by the model to the get_hypothesis() function and get the list of hypotheses hyp_list for the image batch.
Next, we populate the hypotheses and references lists with hyp_list and caps_processed, respectively. The first list stores all hypotheses we get within a single epoch. The second list contains all processed target captions caps_processed, returned by the get_batch_caps() function. Calculate the BLEU score using the corpus_bleu() function from the NLTK package. BLEU score in a nutshell The BLEU score is a standard metric, widely used in the NLP and CV domains for evaluating a machine-generated translation or caption against the human one. The details of the BLEU algorithm can be found in the original paper¹ and this famous Deep Learning course by Andrew Ng. I also provide the formula and a supportive example to demonstrate the BLEU score calculation for a single image, according to the NLTK implementation. BLEU score calculation BP — “brevity penalty”. We set it to 1.0 if the length of a candidate is the same as any translation length. Brevity penalty calculation Let’s have a look at BLEU score estimation based on 1-, 2-, 3- and 4-gram precision for a single image from the batch.
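As a small self-contained illustration of that last step, the snippet below computes corpus-level BLEU with NLTK the same way the validation loop does, averaging 1- to 4-gram precisions. The two captions are invented for the example and are not taken from the dataset.

```python
# Minimal illustration of the corpus-level BLEU computation used during validation.
from nltk.translate.bleu_score import corpus_bleu

# One entry per image: a list of reference (target) captions, each tokenized.
references = [
    [["a", "dog", "is", "running", "on", "the", "beach"],
     ["a", "dog", "runs", "along", "the", "shore"]],
]
# One generated (hypothesis) caption per image, also tokenized.
hypotheses = [
    ["a", "dog", "is", "running", "on", "the", "shore"],
]

# Default weights average 1- to 4-gram precisions (BLEU-4).
bleu_4 = corpus_bleu(references, hypotheses)
bleu_1 = corpus_bleu(references, hypotheses, weights=(1.0, 0, 0, 0))
print(f"BLEU-1: {bleu_1:.3f}, BLEU-4: {bleu_4:.3f}")
```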
https://medium.com/analytics-vidhya/image-captioning-with-attention-part-2-f3616d5cf8d1
['Artyom Makarov']
2020-12-15 16:34:45.404000+00:00
['Deep Learning', 'Image Captioning', 'Computer Vision', 'Pytorch']
Rockset’s RocksDB-Cloud Library — Enabling the Next Generation of Cloud Native Databases
Authored by David Cohen. Rockset and I began collaborating in 2016 due to my interest in their RocksDB-Cloud open-source key-value store. This post is primarily about the RocksDB-Cloud software, which Rockset open-sourced in 2016, rather than Rockset’s newly launched cloud service. In it, I will explore how RocksDB-Cloud can be used to build an open-source cloud-friendly storage system. Rockset’s emergence from stealth mode deserves some reflection on a key observation underlying their platform: there is a core set of services common to offerings across the largest public Cloud Service Providers (CSPs). Two in particular, REST-based Object Storage (e.g. Amazon S3) and Event Streams (e.g. Amazon Kinesis), are used to compose other services, serving as a shared storage service for these caching systems. Rockset’s open-source RocksDB-Cloud library provides an interesting illustration of how existing caching systems can be adapted for the cloud. What is meant by a “caching system”? This is a system that manages its state across main memory and primary storage. RocksDB employs an implementation of the Log Structured Merge Tree (LSM-Tree) to achieve this goal. Underlying the LSM-Tree is a rule of thumb that has held for over 30 years. The “Five-Minute Rule” captures succinctly the inherent economic trade-off between memory and storage in data store design (Appuswamy/ADMS@VLDB 2017). What’s unique about RocksDB-Cloud’s vision is how this trade-off is adapted to the cloud. A Cloud Native Database (CNDB) is a local database system built specifically for the cloud era. Such a database is designed to take full advantage of the abundance of networking, processing, and storage resources available in the cloud. To this end, a CNDB maintains a consistent image of the database — data, indexes, and transaction log — across cloud storage volumes to meet user objectives, and harnesses remote CPU workers to perform critical background work such as compaction and migration. How does a database system designed to operate on a local host get refactored so that a consistent image of its state resides on Cloud Storage? The answer is twofold. First, the database must be a caching system as opposed to a memory system. Caching systems maintain the full image of the database on local persistent storage while only the active state is in memory. Once identified, the transformation of such a caching system to a CNDB requires that this persistent state be mapped on to Cloud Storage constructs so that it can be accessed by remote workers. The persistent state of a caching system consists of point-in-time snapshots, metadata/primarily DDL, and the transaction log. One stipulation, however, is that the caching system must do “blind updates.” That is to say all mutations applied to the database must be appended to the log. Indexing schemes are then employed to manifest the current image of the database from within the local host’s memory. The RocksDB library provides the means of building such caching systems (Lomet/DaMoN18). For example, Facebook deploys MySQL configured to use the MyRocks storage engine, which internally uses the RocksDB library. RocksDB implements the Log Structured Merge Tree (LSM-Tree). This means all mutations are appended to the Write-Ahead Log (WAL) and then applied to in-memory mem-tables. Over time these mem-tables become immutable, are flushed to sstable files, and subsequently the associated resources are freed. Transforming MySQL w/MyRocks into a CNDB is therefore a function of mapping the WAL and sstable files on to the appropriate Cloud Storage constructs. Enter Rockset’s RocksDB-Cloud library, an extension of RocksDB that maps local sstable files on to an S3-style bucket and WAL entries on to a Cloud Native Log such as a Kafka or Kinesis Partition.
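As a purely conceptual sketch of that mapping (this is not the RocksDB-Cloud API, which is a C++ library; the bucket, stream, and function names below are invented), the idea looks roughly like this: append every mutation to a cloud-native log, and push each immutable sstable file to object storage once it is flushed.

```python
# Conceptual sketch only: mapping a caching system's persistent state onto
# cloud primitives, in the spirit of RocksDB-Cloud. Names are illustrative.
import json

import boto3  # assumes AWS credentials are configured

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")


def append_to_wal(stream_name: str, mutation: dict) -> None:
    """WAL entries go to a cloud-native log (a Kinesis stream here)."""
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(mutation).encode("utf-8"),
        PartitionKey=mutation["key"],
    )


def flush_sstable(bucket: str, sstable_name: str, sstable_bytes: bytes) -> None:
    """Immutable sstable files go to object storage (an S3 bucket here)."""
    s3.put_object(Bucket=bucket, Key=f"sst/{sstable_name}", Body=sstable_bytes)
```

With the durable state held in the bucket and the log, any remote worker can reconstruct or compact the database image without touching the original host, which is the property the CNDB definition above relies on.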
Intel has been collaborating with several end-customers and the Rockset team to enable this deployment scenario. Thus far we’ve successfully enabled the database to operate with MariaDB configured to maintain its sstable files locally and in a Minio-based S3 bucket. The WAL is configured to be local. We rely on the MariaDB instance’s binlog to maintain the database’s current change-state. Rockset also uses RocksDB-Cloud as the foundation for its own cloud service. The Rockset service is a serverless search and analytics CNDB that indexes semi-structured documents using RocksDB-Cloud. RocksDB-Cloud is a great addition to the arsenal of data tools that the open-source community can leverage to build other CNDBs as well. What is Intel’s interest in enabling Next-Gen CNDBs? Intel’s anticipated introduction of Optane DC Persistent Memory will afford the database ecosystem the opportunity to revisit trade-off orthodoxy that has held for generations. One trade-off that will remain, however, is the Five-Minute Rule (Appuswamy/ADMS@VLDB 2017). The cache system model of CNDBs is the embodiment of this trade-off. We therefore believe CNDBs provide the foundation for widespread adoption of Intel’s Persistent Memory over time. Storage engines that use the RocksDB-Cloud library to incorporate Intel’s Optane DC Persistent Memory into RocksDB’s cache substrate are a big step in this direction. References (1) Barr, “New — Parallel Query for Amazon Aurora,” 2018 https://aws.amazon.com/blogs/aws/new-parallel-query-for-amazon-aurora/ (2) Lomet, “Cost Performance in Modern Data Stores: How Data Caching Systems Succeed,” 2018 https://dl.acm.org/citation.cfm?id=3211927 (3) Wu et al, “Eliminating Boundaries in Cloud Storage with Anna,” 2018 https://arxiv.org/abs/1809.00089 (4) Arpaci-Dusseau, “Cloud-Native File Systems,” 2018 https://www.usenix.org/system/files/conference/hotcloud18/hotcloud18-paper-arpaci-dusseau.pdf (5) Borthakur, “The birth of RocksDB-Cloud,” 2017 http://rocksdb.blogspot.com/2017/05/the-birth-of-rocksdb-cloud.html (6) Appuswamy et al, “The five-minute rule thirty years later,” 2017 http://www.adms-conf.org/2017/camera-ready/5minute-rule.pdf
https://medium.com/rocksetcloud/rocksets-rocksdb-cloud-library-enabling-the-next-generation-of-cloud-native-databases-fb6a5aef9659
['Shawn Adams']
2019-07-30 20:12:46.310000+00:00
['Database', 'Rocksdb', 'Cloud Storage', 'Data', 'Cloud Computing']
Interactive and scalable dashboards with Vaex and Dash
In what follows, we are going to assume a reasonable familiarity with Dash and will not expose all of the nitty-gritty details, but rather discuss the most important concepts. Let us start by importing the relevant libraries and loading the data: Note that the size of the data file does not matter. Vaex will memory-map the data instantly and will read in the specifically requested portions of it only when necessary. Also, as is often the case with Dash, if multiple workers are running, each of them will share the same physical memory of the memory-mapped file — fast and efficient! The next step is to set up the Dash application with a simple layout. In our case, these are the main components to consider: The components of the “control panel” that let the user select trips based on pick-up time (a Range Slider) and day of week (dcc.Dropdown(id='days')); The interactive map dcc.Graph(id='heatmap_figure'); The resulting visualisations based on the user input, which will be the distributions of the trip costs and durations, and a markdown block showing some key statistics. The components are dcc.Graph(id='trip_summary_amount_figure'), dcc.Graph(id='trip_summary_duration_figure'), and dcc.Markdown(id='trip_summary_md') respectively; Several dcc.Store() components that track the state of the user on the client side. Remember, the Dash server itself is stateless. Now let’s talk about how to make everything work. We organize our functions in three groups: Functions that calculate the relevant aggregations and statistics, which are the basis for the visualisations. We prefix these with compute_; Functions that, given those aggregations, make the visualisations that are shown on the dashboard. We prefix these with create_figure_; Dash callback functions, which are decorated by the well-known Dash callback decorator. They respond to changes from the user, call the compute functions, and pass their outputs to the figure creation functions. We find that separating the functions into these three groups makes it easier to organize the functionality of the dashboard. Also, it allows us to pre-populate the application, avoiding callback triggering on the initial page load (a new feature in Dash v1.1!). Yes, we’re going to squeeze every bit of performance out of this app! Let’s start by computing the heatmap. The initial step is selecting the relevant subset of the data the user may have specified via the Range Slider and Dropdown elements that control the pick-up hour and day of week respectively: In the above code-block, we first make a shallow copy of the DataFrame, since we are going to use selections, which are stateful objects in the DataFrame. Since Dash is multi-threaded, we do not want users to affect each other’s selections. (Note: we could also use filtering, e.g. ddf = df[df.foo > 0], but Vaex treats selections a bit differently from filters, giving us another performance boost). The selection itself tells Vaex which parts of the DataFrame should be used for any computation, and was created based on the choices of the user.
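For illustration, a minimal version of such a selection step might look like the following. The file name and column names (pickup_hour, pickup_day) are assumptions for the example, not the article's actual schema.

```python
# Minimal sketch of the per-request selection step with Vaex.
import vaex

df = vaex.open("yellow_taxi_2012.hdf5")   # memory-mapped; opens instantly

# One shallow copy per request, so concurrent users don't share selections.
ddf = df.copy()

# Mimic the Range Slider (hours 7-9) and Dropdown (weekdays only) widgets.
hours, days = (7, 9), [0, 1, 2, 3, 4]
ddf.select(
    (ddf.pickup_hour >= hours[0])
    & (ddf.pickup_hour <= hours[1])
    & ddf.pickup_day.isin(days)
)

# Every subsequent aggregation can now be restricted to the selection.
print(ddf.count(selection=True))
```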
We are now ready to compute the heatmap data. All Vaex DataFrame methods, such as .count(), are fully parallelized and out-of-core, and can be applied regardless of the size of your data. To compute the heatmap data, we pass the two relevant columns via the binby argument to the .count() method. With this, we count the number of samples in a grid specified by those axes. The grid is further specified by its shape (i.e. the number of bins per axis) and limits (or extent). Also notice the array_type="xarray" argument of .count(). With this we specify that the output should be an xarray data array, which is basically a numpy array in which each dimension is labelled. This can be quite convenient for plotting, as we will soon see. Keep an eye on the decorator attached to compute_heatmap_data as well; we will explain its purpose over the next few paragraphs.

Now that we have the heatmap data computed, we are ready to create the figure which will be displayed on the dashboard. In the figure-creation function, we use Plotly Express to build the actual heatmap from the data we just computed. If trip_start and trip_end coordinates are given, they are added to the figure as individual plotly.graph_objs.Scatter traces. Plotly figures are interactive by nature: they readily capture events such as zooming, panning and clicking.

Now let's set up a primary Dash callback that will update the heatmap figure based on any changes in the data selection or changes to the map view. This callback is triggered if any of its Input values changes. It then calls compute_heatmap_data, which computes a new aggregation given the new input parameters, and uses that result to create a new heatmap figure. Setting the prevent_initial_call argument of the decorator prevents this function from being called when the dashboard is first started.

Notice that compute_heatmap_data is called whenever update_heatmap_figure is triggered by changes to trip_start or trip_end, even though they are not parameters of compute_heatmap_data. Avoiding such needless calls is the exact purpose of the decorator attached to compute_heatmap_data. While there are several ways to avoid this (we explored many), we finally settled on using the flask_caching library, as recommended by Plotly, to cache old computations for 60 seconds — fast, easy, and simple.

User interactions with the heatmap, such as panning and zooming, are captured with one Dash callback, and click events are handled by another. Note that both of these callback functions update key components needed to create the heatmap itself. Thus, whenever a click or relayout (pan or zoom) event is detected, updating the key components will trigger the update_heatmap_figure callback, which in turn will update the heatmap figure. With these functions we create a fully interactive heatmap figure, which can be updated using external controls (the RangeSlider and Dropdown menu), as well as by interacting with the figure itself. Note that due to the nature of a Dash application — stateless, reactive, and functional — we just write functions that create the visualisations; we do not need distinct functions to create and to update those visualisations, which not only saves lines of code but also protects against bugs. Now, we want to show some results, given the user input.
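Continuing the sketch above, here is a minimal, assumed version of the aggregation, caching, figure, and callback steps just described; the grid shape, limits, colour scaling, and component ids are illustrative assumptions:

from flask_caching import Cache
from dash.dependencies import Input, Output
import numpy as np
import plotly.express as px

cache = Cache(app.server, config={'CACHE_TYPE': 'simple'})

@cache.memoize(timeout=60)  # identical aggregations are reused for 60 seconds
def compute_heatmap_data(days, hours, heatmap_limits):
    ddf = create_selection(days, hours)
    # Parallel, out-of-core bin count on a 256x256 grid; the labelled
    # xarray output keeps track of which axis is which.
    return ddf.count(
        binby=[ddf.pickup_longitude, ddf.pickup_latitude],  # assumed columns
        selection=ddf.has_selection(),
        limits=heatmap_limits,
        shape=256,
        array_type='xarray',
    )

def create_figure_heatmap(counts, trip_start=None, trip_end=None):
    # px.imshow understands the labelled xarray axes directly.
    fig = px.imshow(np.log1p(counts).T, origin='lower')
    if trip_start and trip_end:
        # origin and destination as individual scatter traces
        fig.add_scatter(x=[trip_start[0]], y=[trip_start[1]], mode='markers', name='origin')
        fig.add_scatter(x=[trip_end[0]], y=[trip_end[1]], mode='markers', name='destination')
    return fig

@app.callback(
    Output('heatmap_figure', 'figure'),
    [Input('days', 'value'), Input('hours', 'value'), Input('heatmap_limits', 'data')],
    prevent_initial_call=True)
def update_heatmap_figure(days, hours, heatmap_limits):
    counts = compute_heatmap_data(days, hours, heatmap_limits)
    return create_figure_heatmap(counts)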
We will use the click events to select trips starting from the "origin" point and ending at the "destination" point. For those trips, we will create and display the distributions of cost and duration, and highlight the most likely values for both. We can compute all of that in a single function, and we define a helper function to create a histogram figure from already aggregated data.

Now that we have all the components ready, it is time to link them to the Dash application via a callback function. That callback "listens" to any changes in the "control panel", as well as any new clicks on the heatmap that define new origin or destination points. When a relevant event is registered, the function is triggered and in turn calls the compute_trip_details and create_histogram_figure functions with the new parameters, thus updating the visualisations. There is one subtlety here: a user may click once to select a new starting point, at which moment the new destination is not yet defined. In this case we simply "blank out" the histogram figures with a small helper function.

Finally, to be able to run the dashboard, the source file should conclude with the conventional Dash entry point (a sketch appears at the end of this section). And there we have it: a simple yet powerful interactive Dash application! To run it locally, execute the python app.py command in your terminal, provided that you have named your source file app.py and you have the taxi data at hand. You can also review the entire source file via this GitHub Gist.

Something different

Plotly implements a variety of creative ways to visualise one's data. To show something other than the typical heatmaps and histograms, our complete dashboard also contains several informative, but less common, ways to visualise aggregated data. On the first tab, you can see a geographical map on which the NYC zones are coloured relative to the number of taxi pick-ups. A user can then select a zone on the map and get information on popular destinations (zones and boroughs) via the Sankey and Sunburst diagrams. The user can also click on a zone on these diagrams to get the most popular destinations of that zone. Creating this functionality follows the same design principles as for the Trip Planner tab that we discussed above. The core of it revolves around groupby operations, followed by some manipulations to get the data into the format Plotly requires. It's fast and beautiful! If you are interested in the details, you can see the code here.

Performance

You may wonder: how many users can our full dashboard serve at the same time? This depends a bit on what the users do, of course, but we can give some numbers to get an idea of the performance. When the zone on the geographical map (the choroplethmapbox) is changed and no hours or days are selected, we can run 40 computations (requests) per second, or 3.5 million requests per day. The most expensive operations happen when interacting with the Trip Planner heatmap with hours and days selections. In this case, we are able to serve 10 requests per second, or 0.8 million requests per day. How this translates to a concurrent number of users depends very much on their behaviour, but serving 10–20 concurrent users should not be a problem. Assuming the users stay around for a minute and interact with the dashboard every second, this would translate to 14–28k sessions per day exploring over 120 million rows on a single machine! Not only cost-effective, but also environmentally friendly. All these benchmarks were run on an AMD 3970X (32 cores) desktop machine.
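Before turning to scaling, here is a minimal sketch of the trip-detail pieces described above (the aggregation, the histogram helper, and the closing entry point); column names, limits, and shapes are assumptions:

@cache.memoize(timeout=60)
def compute_trip_details(days, hours, trip_start, trip_end):
    # Narrow the selection further to trips between the two clicked points,
    # then aggregate the fare distribution in one out-of-core pass.
    ddf = create_selection(days, hours)
    # ... additional ddf.select(...) calls for trip_start / trip_end go here ...
    limits = [0, 50]  # assumed fare range in dollars
    counts = ddf.count(binby=ddf.total_amount, limits=limits, shape=25,
                       selection=ddf.has_selection())
    return counts, limits

def create_histogram_figure(counts, limits, title):
    # Build bin centers from the limits and plot the pre-aggregated counts.
    edges = np.linspace(limits[0], limits[1], len(counts) + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    return px.bar(x=centers, y=counts, title=title)

if __name__ == '__main__':
    app.run_server(debug=True)  # conventional closing lines of a Dash app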
Scaling: More users

Do you want to serve more users? Because Dash is stateless at the server side, it is easy to scale up by adding more computers/nodes (scaling horizontally). Any DevOps engineer should be able to add a load balancer in front of a farm of Dash servers. Alternatively, one can use Dash Enterprise's Kubernetes autoscaling feature, which will automatically scale your compute up or down according to usage. New nodes should spin up rapidly, since they only require access to the dataset; starting the server itself takes about a second thanks to the memory mapping.

Scaling: More data

What about dashboards showing even larger data? Vaex can easily handle datasets comprised of billions of rows! To show this capability, we can also serve the above dashboard with a larger version of the NYC taxi dataset that numbers over half a billion trips. Due to the computational cost, however, we do not share this version of the app publicly. If you are interested in trying this out, please contact either Plotly or Vaex.

Let your data talk. All of it.

The combination of Dash and Vaex empowers data scientists to easily create scalable, production-ready web applications utilising rather large datasets that otherwise cannot fit into memory. With the scalability of Dash and the superb performance of Vaex, you can let your data tell the full story — to everyone.
https://medium.com/plotly/interactive-and-scalable-dashboards-with-vaex-and-dash-9b104b2dc9f0
['Jovan Veljanoski']
2020-06-23 14:37:08.608000+00:00
['Dash', 'Vaex', 'Dashboards', 'Machine Learning', 'AI']
The Social Network Is the Computer
The Social Network Is the Computer

What does the explosion of social media and the Hype Machine mean for society and our world? By Irving Wladawsky-Berger

“Human beings have always been a social species,” writes MIT professor Sinan Aral in the opening paragraph of The Hype Machine, his recently published book. “We’ve been communicating, cooperating, and coordinating with one another since we were hunting and gathering.” Our increasingly complex social interactions have been the critical factor in the exponential increase of human cranial capacity over the past few million years. “But today something is different. Over the last decade, we’ve doused our kindling fire of human interaction with high-octane gasoline. We’ve constructed an expansive, multifaceted machine that spans the globe and conducts the flow of information, opinions, and behaviors through society.”

Professor Aral has been studying social media since its beginning in the early 2000s, when it was driven by the idealistic vision of connecting the world and providing free access to information. The Hype Machine is what he now calls our global social media ecosystem. This ecosystem has been designed to stimulate and manipulate the human psyche, “to draw us in and persuade us to change how we shop, vote, and exercise, and even who we love.” His book nicely explains how the social media industrial complex works, how it affects us, and what we can do to help achieve its original vision while avoiding its later perils. Let me discuss a few of its key points.

The Social Network Is the Computer

For years, The Network Is the Computer was used as a marketing tagline by Sun Microsystems — now part of Oracle — to emphasize that in the Internet age, networked systems and applications were far more powerful than any single computer. Aral tells us that on a visit to Facebook’s Menlo Park headquarters, a particular mural caught his attention, so he took a picture with his phone. As he was trying to understand the inner workings of the Hype Machine, the mural’s image kept coming to mind. It simply said The Social Network Is the Computer. For Facebook, this was a reasonable marketing tagline. “But for me the mural had a deeper meaning,” said Aral. “It described a view of the world in which society is essentially a gigantic information processor, moving ideas, concepts, and opinions from person to person, like neurons in the brain or nodes in a neural network, firing synapses at each node in the form of decisions and behaviors — what products to buy, who to vote for, or who to date — billions of times per minute, every day. In this analogy, we are the nodes, and the architecture of the information-processing machine we collectively inhabit is the social network.”

If the social network is the computer, what is its underlying architecture? How does it work? And how does it change our behavior?

The Technology Trifecta

Three technologies make the Hype Machine possible by transforming how information is produced and consumed: digital social networks, machine intelligence, and smartphones. The digital social network is the substrate at the core of the machine, the underlying architecture which determines how information flows. Social networks have seen explosive growth since their inception as Web 2.0 about 20 years ago.
According to Our World in Data, Facebook grew from 100 million users in 2008 to 2.4 billion in 2019; YouTube grew from 20 million users in 2006 to around 2 billion in 2019; and WhatsApp from 300 million users in 2013 to also around 2 billion. Behind these three leaders are another set of fast-growing social networks, including Instagram, WeChat, Tumblr, TikTok, Reddit and Twitter. Social networks are now used by one in three people in the world and over two-thirds of Internet users. The social network has truly become the computer.

Machine intelligence, the second trifecta technology, is the process that controls what information flows over the network and how it’s distributed around the world. After decades of promise and hype, machine intelligence has finally reached a tipping point of market acceptance thanks to advances in machine learning algorithms, powerful and inexpensive computer technologies, and access to huge amounts of data, including data about human behaviors and opinions. The cyclical interplay between machine and human intelligence determines what we focus on, what stories and pictures we see, suggestions on potential colleagues and dates, as well as the ads shown alongside the content. The Hype Machine then observes how we consume this information and make decisions, learning what and who we like and how we think, so that its subsequent suggestions will be even more engaging.

Smartphones are the medium, the key input/output devices for interacting with the Hype Machine. With roughly 3.5 billion users around the world, smartphones are far and away the primary devices for engaging with social media, an always-on environment through which information is provided to and received from the Hype Machine. Future medium devices could also include augmented and virtual reality headsets, in-home virtual assistants, and wearable technologies.

The Hype Loop

The Hype Loop is the engine of the Hype Machine — a constantly evolving feedback loop with four components. At the heart of the Hype Loop are its two fundamental elements: machine intelligence and human behavior. The two are intimately intertwined by the sense and suggest loop, which structures how machine intelligence influences human behavior, and the consume and act loop, which determines how human agency, our response to the machine’s recommendations, influences what the machine does next. Let me briefly describe each component.

Machine intelligence. Machine intelligence analyzes what’s happening inside the Hype Machine in order to optimize its overall objectives, such as maximizing engagement or increasing viewership. “The machine ingests what we post, how we read, who we follow, how we react to the content we see, and how we treat one another. It then reasons over this data to display new content, friend suggestions, and advertisements that maximize specific goals.” We tend to choose among the machine’s suggestions because we don’t have the time or attention to search more broadly. “By providing us with an algorithmically curated set of options, technology both enables and constrains us. In this way, the Hype Machine influences what we read, who we friend, what we buy, and even who we love.”

Sense and Suggest Loop. The machine then senses our behavior based on the massive amounts of information posted and consumed on social media every minute of every day — e.g., what we read, the videos we watch, who we friend, what we buy.
After analyzing all this information, it then makes suggestions that nudge us in the directions that maximize its overall objectives. “The major social media platforms use deep learning neural networks to analyze the text we type, the audio we speak, and the facial expressions and body positions in our pictures and videos to understand what we are doing, what we are interested in, what makes us happy or sad, and how the things that motivate us are related to our engagement, purchasing patterns, and connectivity.”

Human Behavior. Technology is a major part of the story. But so is human behavior — how we respond to the machine’s nudges and recommendations. “Although the Hype Machine helps to create our reality, we are the ones who ultimately appropriate and act on this technology. Human agency shapes the inputs that our machines analyze to suggest new alternatives. Our behavior — what we post, what we read, how we make friends, and how we communicate and interact with one another — shapes how the Hype Machine interprets what we want from technology and how we want to live and be treated.”

Consume and Act Loop. While machines attempt to structure our reality, humans are responsible for consuming and acting on those suggestions. “[T]he Consume and Act Loop is our process of turning recommendations into action and feeding the resulting behaviors, reactions, and opinions back into the Hype Machine.”

“There’s been a lot of debate about how the Hype Machine influences, polarizes, and incites us,” writes Aral. “But it’s important to remember that we control how we react to and use social media. The norms we develop as a society play an important role in our relationship with this technology, and a linear view of technology as only impacting us removes our agency and our responsibility to consider how our appropriation of technology contributes to the outcomes we are experiencing…”

“One thing I’ve learned, from twenty years researching and working with social media, is that these technologies hold the potential for exceptional promise and tremendous peril — and neither the promise nor the peril is guaranteed,” he writes. “Social media could deliver an incredible wave of productivity, innovation, social welfare, democratization, equality, health, positivity, unity, and progress. At the same time, it can and, if left unchecked, will deliver death blows to our democracies, our economies, and our public health. Today we are at a crossroads of these realities.”
https://medium.com/mit-initiative-on-the-digital-economy/the-social-network-is-the-computer-449202844c
['Mit Ide']
2020-09-29 19:43:53.357000+00:00
['Information Technology', 'Social Network', 'Social Media', 'AI', 'Digital Marketing']
Impressions from Google I/O 2019: Android Q — Gesture Navigation
Gesture Navigation has come to Android Q, and it has a large impact on Android app layouts.

What is it?

Google has decided to include Gesture Navigation as an alternative to the usual 3-button mode that Android users have become accustomed to. Gestures are often used by Android power users. Gesture Navigation was introduced to unify the gestures used across all Android phones (apparently third-party phone makers were creating too many different and confusing gesture sets). There are 3 main gesture areas: left, right, and bottom.

(Gesture Navigation Areas)

Users can swipe up from the bottom to return home, see recent apps, or kill the app. Users can also swipe from the left or right gesture areas to go back.

Drawing Full Screen

Gesture Navigation mode means that the app should be rendered across the full screen. The status bar should be set to be transparent in this mode, and the navigation bar should also be made more transparent. Android already takes care of automatically changing the navigation bar's transparency depending on what content is behind it in this mode. This means that the best practice is to render content behind the status bar and navigation bar, taking up the full screen.

Staying out of System Gesture Areas

The system gesture areas now belong to the OS and will not register touches to send to the app. This means that anything that is clickable, swipe-able, etc. needs to be moved out of the gesture areas.

(Floating Action Button pre-adjustment for system Window Insets)

Fortunately, Android has provided us with some system Window Insets that can be used to adjust your layouts away from the gesture areas.

(Floating Action Button post-adjustment for system Window Insets)

Some issue areas include:

- Floating Action Buttons
- Recycler view clickable content at the bottom of the screen
- Sliders, Seek bars, etc. at the bottom of the screen
- Navigation Drawers (left)
- Bottom sheets (peek content below the gesture area)
- View Pagers (left & right)
- Landscape mode

Google strongly suggests that developers test every screen in their app in both the button and gesture modes, and in landscape mode as well. Unfortunately, most of the suggestions on how to fix these layout issues were simply to add padding or margins around the gesture areas and move clickable content closer to the center of the screen.

Workaround: Opt out of Gesture Areas

If fixing the layout on your screen for the left and right gesture areas is too daunting a task, there is a workaround. Developers can have their layout opt out of the gesture navigation zones on the left and right with View.setSystemGestureExclusionRects(..). It was suggested at Google I/O that app developers try to stay away from using this as common practice, as it defies the user's expectation of being able to use the left and right gesture navigation, but it is an option. Unfortunately, apps cannot opt out of the home gestures.

Summary

Gesture Navigation mode has opened up some new pain points that developers will need to handle. On one hand, the new mode should allow users to use the whole of their screen to see content. On the other, it is likely to require fixes in every layout of every screen in every app, and moving content closer to the center may cause issues viewing content on smaller phones. Window insets will help keep clickable areas where they can be used, and if all else fails, developers can try to opt out of gesture navigation in a layout (on the left and right, at least). Hopefully new solutions and developer feedback will help make this a smoother transition.
“All screenshots were taken from the Google IO 2019 Dark Theme & Gesture Navigation session, available from the Google IO video sourced above.” “The Google IO logo, names, images, and all goodwill therein are the property of Google, Inc.”
https://medium.com/ua-makers/impressions-from-google-i-o-2019-android-q-gesture-navigation-80c28ce45b1a
['Lauren Yew']
2019-05-28 13:57:44.893000+00:00
['Android', 'AndroidDev', 'Gestures', 'Android App Development', 'Google']
You Know Frank Lloyd Wright, but Did You Know His Wisconsin Home Was the Site of a Massacre?
An American architect

Even if you are not a student of architecture, chances are you have heard of Wright. He began his career in Chicago, but his most famous works are the Solomon R. Guggenheim Museum in New York and Fallingwater in Pennsylvania. His Prairie style of residential architecture, featuring simple decor, horizontal lines (to mimic the midwest prairie), and flat roofs, was widely popular in the US. His home in Oak Park, Illinois, is an example of Prairie style, and it is open for tours to the public. Additionally, there is a walking tour that can be taken starting from his house through the Oak Park neighborhood, which highlights some of his most significant houses in the area. He shared his home and studio with his wife and six children. It's remarkably modern looking for a house built in 1889. One of the most striking features of the house is the expansive playroom on the second floor.

(Children’s playroom in Wright’s Oak Park home and studio) Image by Carol M. Highsmith, Public domain, via Wikimedia Commons

You will learn lots of interesting facts on the tour of his Oak Park home. It's fascinating, and I recommend it if you are ever in the area. But don't expect to learn from your tour guide the story of how Wright scandalously left his wife and children for a mistress in 1909. And then built a home for her in Wisconsin that he dubbed his "love cottage," where she was tragically murdered along with her two children and four of Wright's employees.
https://medium.com/crimebeat/you-know-frank-lloyd-wright-but-did-you-know-his-wisconsin-home-was-the-site-of-a-massacre-63347fa306a9
['Jennifer Geer']
2020-12-18 01:14:36.777000+00:00
['True Crime', 'Relationships', 'Society', 'Architecture', 'History']
How to Create a Hub-and-Spoke Plot with Plotly
How to Create a Hub-and-Spoke Plot with Plotly

Plot lat/long data on the map with lines connecting the locations

A while ago, I needed to plot lines connecting the latitude (lat) and longitude (long) coordinates of some locations on the map. What I needed to draw is known as a "hub-and-spoke" plot. Knowing that Plotly already has some samples, such as the one here, I was able to do it pretty easily. But real-life data won't always lend itself perfectly to Plotly's examples, and there may be other requirements that out-of-the-box examples don't meet. In this blog, I'll show you how to build one of these graphs on your own. In particular, we'll assume that:

- The data is not always from the same region (like the US). Our geospatial data can be for North America, South America, or Europe.
- The plot needs to be saved as an HTML file.
- The graph should be offline*.

*This was actually something that required more effort prior to Plotly version 4, especially since their documentation emphasizes online mode, as you can see here. But version 4 is offline only. You can read more about the migration of online features here.

Some Background on Plotly Figures

There are multiple ways to represent a figure in Plotly. Two of the most common are using (a) dictionaries or (b) graph objects. Though each has its pros and cons, Plotly recommends the latter for its additional benefits, which you can check out here. To create a figure, we need two main components: the data and a layout. From the documentation:

You can build a complete figure by passing trace and layout specifications to the plotly.graph_objects.Figure constructor. These trace and layout specifications can be either dictionaries or graph objects.

The layout dictionary specifies the properties of the figure's layout, while data is a list of trace dictionaries with configuration options for individual traces (trace is the word Plotly uses to describe plots). Now that we know how figures are constructed, we can prepare our data and plot them on the map. Although the code below is for Plotly version 4, you can find the implementation for both versions 3 and 4 in this Jupyter Notebook.

Data Preparation

The data (as you can see in this sample .csv file) consists of information about each origin and destination location (namely, city, state, country, lat, and long). To draw the plot, we only care about:

- Lat/long of origins and destinations.
- Location attributes on the map, such as size, color, and shape.
- Any other information about a location, or about the path between locations, that we'd like to show on the map. For example, you may want to show the name of each location, or the length (in miles) of each path when a user's mouse hovers over it. (We only do the former here.)

So first, let's load the data:

import pandas as pd
import plotly.graph_objects as go

data = pd.read_csv('sample_us.csv')

Next, we create the location dataframe with all needed attributes. After that, we add each location and its attributes to the figure, then each path, and finally we specify the layout of our map (a sketch of these steps follows below). Since we want to show the map as an HTML file, we need one more line:

fig.write_html(file=f'{title}.html', auto_open=True)

That's it! This works as intended, but only if the lat/long data is confined to the US. To make it more responsive to different kinds of underlying data (like European locations, for example), we need to make one small change.
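The embedded snippets from the original post are not part of this extract; the following is a minimal sketch of those steps under stated assumptions (the column names, styling, and the construction of the locations dataframe are illustrative, not the article's actual code):

import pandas as pd
import plotly.graph_objects as go

data = pd.read_csv('sample_us.csv')
title = 'hub_and_spoke'  # assumed title used for the output file

# Collect all unique locations (origins and destinations) with their attributes.
origins = data[['origin_city', 'origin_lat', 'origin_long']].rename(
    columns={'origin_city': 'city', 'origin_lat': 'lat', 'origin_long': 'long'})
destinations = data[['destination_city', 'destination_lat', 'destination_long']].rename(
    columns={'destination_city': 'city', 'destination_lat': 'lat', 'destination_long': 'long'})
locations = pd.concat([origins, destinations]).drop_duplicates()

fig = go.Figure()

# One marker trace for all locations; the city name appears on hover.
fig.add_trace(go.Scattergeo(
    lon=locations['long'], lat=locations['lat'], text=locations['city'],
    mode='markers', marker=dict(size=6, color='crimson')))

# One line trace per origin-destination pair (the "spokes").
for _, row in data.iterrows():
    fig.add_trace(go.Scattergeo(
        lon=[row['origin_long'], row['destination_long']],
        lat=[row['origin_lat'], row['destination_lat']],
        mode='lines', line=dict(width=1, color='royalblue'), showlegend=False))

fig.update_layout(title=title, showlegend=False, geo=dict(scope='usa'))
fig.write_html(file=f'{title}.html', auto_open=True)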
General Data

If you look closely at the code above, you'll see that in the layout dictionary we specified scope='usa'. According to Plotly's documentation, scope accepts one of the following values: "world" | "usa" | "europe" | "asia" | "africa" | "north america" | "south america". In order to decide which scope to use, we should look at the range of lats and longs in our data. We can then compare our range against the approximate range of each scope to decide which one to use. (I found these approximate ranges by searching for them online; they should serve our purposes just fine.) All that's left to do is to change scope='usa' to scope=scope in the layout, where scope is chosen from the data (a rough sketch of this helper follows below), and we're all done. And voila!

Let's test out this method on two different datasets.

(Map of US data)

(Map of Europe data)

Note that the sample data and the more-detailed code for this blog are available on GitHub.
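A minimal sketch of the scope-selection idea described above; the bounding ranges are rough, assumed approximations rather than authoritative values:

def get_scope(lats, longs):
    # Approximate lat/long extents for a few Plotly geo scopes,
    # checked in order from most to least specific.
    extents = {
        'usa':           (24, 50, -125, -65),
        'europe':        (35, 71, -10, 40),
        'north america': (7, 72, -170, -50),
        'south america': (-56, 13, -82, -34),
    }
    for scope, (lat_min, lat_max, lon_min, lon_max) in extents.items():
        if (min(lats) >= lat_min and max(lats) <= lat_max
                and min(longs) >= lon_min and max(longs) <= lon_max):
            return scope
    return 'world'  # fall back to the global map

scope = get_scope(locations['lat'], locations['long'])
fig.update_layout(geo=dict(scope=scope))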
https://towardsdatascience.com/how-to-create-a-hub-and-spoke-plot-with-plotly-d11d65a4200
['Ehsan Khodabandeh']
2020-01-08 14:42:57.860000+00:00
['Data Science', 'Programming', 'Data Visualization', 'Python']
Highlights: Week #24
Dear lovely people of KTHT and beyond, Thank you for all the submissions, engagement and constant efforts to make this community a welcoming, warm, uplifting, safe place for us to open our hearts and share things we wouldn’t otherwise share. I am humbled, grateful and excited to be on this journey with you. Looking at 2020 in hindsight, I don’t think it was all doom and gloom. Although I felt very lonely at times, as I am sure a lot of you have, having this platform to share my thoughts and emotions on has been a blessing, a game changer. I really don’t know how I would’ve survived quarantine without it… Fast forward to today, I have another list of revelatory, insightful, awe-inspiring stories to share with you that I hope will keep you warm this cold winter day. Find a cozy place, get comfortable and enjoy the ride these articles will take you on. Ready?
https://medium.com/know-thyself-heal-thyself/highlights-week-24-2480fe9c5217
['𝘋𝘪𝘢𝘯𝘢 𝘊.']
2020-12-20 20:33:06.383000+00:00
['Short Story', 'Inspiration', 'Highlights', 'Energy', 'Creativity']
R.I.P 2020
As our online fall semester ends with a first-time elective at ISDI Parsons, ISU Mumbai (AIML | A Whole New World), Communication Designers Shrishti Sahani and Siddhi Mandora design a brave new world where AI plays a predominant role in our life as well as our death.

In a New World, consumer impact due to AI/ML will shape design decisions for most businesses. Just as the Internet has been such a great driver of change across so many spheres over the past 20 years, we will see machine intelligence in the same role over the coming decades. Artificial Intelligence (AI) and Machine Learning (ML) are poised to disrupt our world. With intelligent machines enabling high-level cognitive processes like thinking, perceiving, learning, problem-solving and decision making, coupled with advances in data collection and aggregation, analytics and computer processing power, AI presents opportunities to complement and supplement human intelligence and enrich the way people live and work.

A relationship manager, along with an AI-based tracking system, makes it possible to keep track of the vitals and well-being of the policy holder. An inbuilt camera in a device of choice, non-intrusive and remotely controlled, can let you know if your dear ones are indeed safe. No longer will we need to worry about our little ones left stranded outside a mortuary, or senior citizens abandoned in a single apartment unit only to be discovered days after their demise. The rationale for forcing people to die alone is that isolation and social distancing will slow the spread of Covid-19 and prevent deaths; however, being stripped of dignity in one's last hours can be traumatizing for both the victim and the survivors, who will carry the scars of the event forever in their living world.

We design how we live, often discounting the fact that death is also a part of one's life, and people often leave this planet without planning for it! Our pampered and secured existence is suddenly compromised when we leave those decisions to people we have never come in contact with. The pandemic has shown us one of the most gruesome faces of departing this world, with bodies piled up in morgues, unclaimed due to fear of Covid infections. Mass burials have been a norm, from New York's islands to Ballari. Religious faiths have been compromised on a global scale as people had to seek closure in the most unexpected ways.

Modern applications of AI have already given us self-driving cars and virtual assistants, and have helped us detect fraud and manage resources like electricity more efficiently. Now we can apply the same smart AI approach to insurance policies, and come up with a system that protects one even after one's death. A systematic approach to death entails details of a person's faith, documentation and handling of finances, choice of medical facilities, and the aftermath of one's demise and the handling of one's remains. All can be designed and predetermined.

Policy Details: Pre-plan your departure - Camera for the security of your loved ones - Personal Relationship Manager - Regular check-ups at your convenience - Security chip.

AI Health Assistance: A Smart Camera for a Smart Generation - Tracks movement - Thermal body temperature - Detection of position and rotation of body parts - AI chip helps in medical data collection - Attached to wheelchairs, spectacles, support cane.

App: Unique ID provided for policyholders - Secured personal data - Data monitoring and collection - Easily accessible - Gives alerts on any kind of abnormalities.
A much-needed innovation in the ‘afterlife’ sector for senior citizens — and a new approach to death. As AI-based solutions permeate the way we live and do business, questions on ethics, privacy and security will also emerge. Students have engaged in various speculations and predicted scenarios.

Disclaimer: All creatives are hypothetical classroom projects. All rights reserved — ISDI 2020

Creative Mentor: Prof. Utkarsha Malkar

ISDI CAMPUS Indian School of Design & Innovation ISDI Tower, One Indiabulls Centre, Senapati Bapat Marg, Lower Parel, Mumbai 400013. General Enquiries: enquiries@isdi.in Placements & Industry Connect: placements@isdi.in
https://medium.com/datadriveninvestor/r-i-p-2020-7861daea0c58
['Utkarsha Malkar']
2020-12-27 04:18:22.803000+00:00
['Senior Care', 'AI', 'Technology', 'Pandemic', 'Healthcare']
The Ariana Grande Albums Ranked
I remember once being annoyed by pop music. Oh, for sure, a broad swath of pop music is still terrible, as is a broad swath of music or any other media form. But in the realm of contemporary pop music, Ariana Grande looms large for me. Effecting (or being subjected to) a transformation from young, family-friendly Nickelodeon teen to a sexualized diva, Grande has benefited from an array of music producers and image stylists over the course of the seven years she has released her six albums (and before that as well). Her latest, POSITIONS, is the most recent and most explicit affirmation of Grande's "grown up" attitude, and it's the impetus for this piece. I am unashamedly an Ariana Grande fan, even though I don't care about the Pete Davidson drama or the constant tabloid surveillance of her life. I'm all about the music, man, the semi-shallow, manufactured-by-committee music, man.

#6 — MY EVERYTHING (2014)

Favorite track: "One Last Time"

Grande definitely faced the sophomore slump with MY EVERYTHING, her second album. Its array of hit singles, which really stand as my earliest awareness of the singer, are certainly the weakest in the span of her discography. The record feels like an attempt to "mature" Grande's sound, as a half step between YOURS TRULY and DANGEROUS WOMAN, but in the process, MY EVERYTHING feels the blandest of all of her releases. "One Last Time" is a nominal favorite, but it's also indicative of my lukewarm response to the album. It's a middling pop record, and not much else.

#5 — DANGEROUS WOMAN (2016)

Favorite track: "Bad Decisions"

But DANGEROUS WOMAN was clearly the definitive image shift. Adorning the cover is a picture of Grande in a latex rabbit mask, evoking a dominatrix Playboy Bunny. The title, and its single of the same name, cements the step away from relatively inoffensive lyrical content. Oh sure, the music on DANGEROUS WOMAN is still in the same vein of inoffensive (in a different way) modern pop music that is present on MY EVERYTHING. It's just much better. The hooks are stronger, the production more memorable, and the vocal delivery from Grande more varied. "Bad Decisions" has a catchy chorus, overriding even the groove of "Dangerous Woman" and the jangle of "Be Alright." DANGEROUS WOMAN was definitely where I took more notice of Grande, but there was clearly better to come.

#4 — POSITIONS (2020)

Favorite track: "Nasty"

POSITIONS is the third in a trilogy of Grande albums that seems geared to more personal expression and the cultivation of critical praise. As with her previous two albums, all of the track titles on POSITIONS are stylized in lower case, continuing the lowkey vibe set by SWEETENER. It's rendered with more, I guess an appropriate word is "sultriness?" Look, POSITIONS, as you might guess from the title, is just about sex. The best song, "Nasty," is groovy, and the title single is definitely catchy. And with names like "34+35" and "POV," well, you get the point. But all of this innuendo, and then explicit sexual content, is contained within songs that just don't hit the same as Grande's other recent releases. It carries an impressively cohesive sound across its 14 tracks, like its two predecessors do, but there aren't as many songs that stand out to me.

#3 — YOURS TRULY (2013)

Favorite track: "Honeymoon Avenue"

In fact, the opposite bookend to Grande's career, her debut YOURS TRULY, stands as a greater pop achievement.
As opposite from POSITIONS as you can get within the singer's discography, YOURS TRULY has simpler, more innocent lyrical content and a '60s, doo-wop influence. The latter fleshes the album out into something much richer than you might expect. Modern production and pop styles still primarily define YOURS TRULY, but a song like "Honeymoon Avenue" has staying power. The record is as silly as you get in an already pretty silly poppy body of work, but YOURS TRULY is simply an enjoyable, catchy listen.

#2 — SWEETENER (2018)

Favorite track: "Successful"

SWEETENER was actually Grande's best received album since her debut, at least critically. And as much as you can make fun of some of his work, I think that can be chalked up to the influence of Pharrell Williams' production on the record. SWEETENER is probably the most "experimental" of Grande's albums, and you can hear that in a song like "The Light Is Coming;" N.E.R.D.-ish sounds abound on the songs Williams produces, and the others feel a little more conventional. "Successful," another Williams joint, is the best track on the album, with some great bloops and blips (sorry I can't get more technical than that), and indeed the rest of SWEETENER paints an intriguing and immersive soundscape.

#1 — THANK U, NEXT (2019)

Favorite track: "Needy"

But the album that really took the world by storm is, so far, Grande's best. THANK U, NEXT is a moody, powerful piece of pop music, compelling like not much else coming out in that field today is. Lord knows I played "Needy" on repeat for a while there, as a real theme song for someone who came out of a five-year relationship just over three months before the record came out. The rest of the album is, of course, uniformly great, full of absolute bangers that often outshine the huge hit that was the titular single. I listen to THANK U, NEXT and wonder, if this is where Grande had taken her career (or where others had taken her career) in just (at that time) six years, where will she be in another six? Or sixteen? Will she be able to mature into a new sound, or will her reliance on a huge batch of producers end up leaving her in the dust? Truly, that speculation is too cerebral after listening through THANK U, NEXT. This is as good as pop music gets right now.
https://trettleman.medium.com/the-ariana-grande-albums-ranked-f58ba2c3d8e1
['Tristan Ettleman']
2020-11-16 17:47:55.745000+00:00
['Music']
What is Artificial Intelligence?
AI, Technology, Philosophy, Self.

What is Artificial Intelligence?

A Starting Point for Exploring the Technology and Social Implications of AI.

(Not AI. Photo by Rudi Endresen on Unsplash)

Seriously, What Do We Think When Someone Says "AI"?

Are we just mystified by AI, or perhaps we just ignore it? Maybe some have visions of killer robots or totalitarian regimes controlling our minds? Misinformation, hype and ignorance abound, and I decided to ignore all this rubbish and start with the original explanation of "Artificial Intelligence". On August 31, 1955, J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon proposed that a "study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". The proposal outlines the interests of participants and areas of work. The precise description would be mathematical in nature and based on the concepts under development by the participants. The implication of the words "simulate it" is key to a faithful interpretation of AI.

"Simulate" and "Precisely Described"

Simulation is key to our discussion. AI does not have an objective to create intelligence, but rather to "simulate it". The success of the simulation depends upon the precision of the description of human intelligence and the method adopted for validating the simulation. The Turing test, dating to 1950, was adopted as an early validation technique. Here, a human would judge conversations between a human and a machine designed to generate human-like responses. If the judge could not tell the difference between human and machine, then the simulation would be considered valid. The Turing test and its variants are still the subjects of much controversy.

The phrase "precisely described" implies a reductionist approach to the study of intelligence. However, intelligence emerges from the complex electrochemical interactions between the brain's neurons. Using reductionism to study intelligence means separating and simplifying phenomena and ignoring the neuronal interactions. Reductionism fails in the goal of providing a precise description of intelligence. This explains why, after nearly 60 years of intensive research booms and busts, the optimism of the 1956 participants has not been realised. The phrase "Artificial Intelligence" has been shown to be a misnomer, but the term has penetrated the common lexicon to such a degree that it's here to stay. The mathematical tools developed as subsets of AI are extremely useful and valuable in the way they underpin our modern interconnected society.

The Tools of AI: Machine Learning and Deep Learning

While "Artificial Intelligence" (AI) is concerned with human intelligence, the commercially successful tools of AI research focus on data parsing and pattern matching. These tools are categorised under the broad church of "Machine Learning". Machine Learning has been widely deployed in fields including facial recognition, cargo distribution, sales and marketing, climate science, social media, and financial trading. Machine Learning algorithms can be deployed to many applications without the need for changing any code. Instead, standard sets of labelled data are used to train the algorithms to recognise desired patterns.
These "trained" algorithms can then be applied to new data sets to discover the desired patterns. A Machine Learning technique called Artificial Neural Networks (ANNs) is designed to mimic the electrical interactions between the neurons of the human brain. In Machine Learning, an ANN consists of three levels: input, parsing and output. "Deep Learning" is applied when a neural network has more than three layers; some modern Deep Learning neural networks have many thousands of layers. Every layer can be optimised through a technique called backpropagation, which gives Deep Learning the ability to independently learn and make decisions on pattern recognition. Deep Learning is ideal for large unstructured data sets.

Going Forward

This introductory article establishes the framework for my articles on AI. Future articles will give examples for programming in Machine Learning with a view to "learn by doing" and showing exactly what AI can and can't do. I want to discuss the social impacts of AI, particularly in view of the new data laws and privacy considerations for protecting people's rights to own their own data. If you have any questions or suggestions, please comment on this article.

Acknowledgement

I want to thank Dr Mehmet Yildiz for his vision in creating the new Technology publication. I have always wanted a broad platform of technologists to share and learn new knowledge.
https://medium.com/technology-hits/what-is-artificial-intelligence-aaf562bb48f1
['Dr John Rose']
2020-12-08 06:44:32.176000+00:00
['Philosophy', 'Neural Networks', 'Deep Learning', 'Technology', 'AI']
Efficiency of the Shotput Packing Algorithm
Abstract: The Shotput packing algorithm is used at present for determining the best box size for an order we receive into our system. Through a series of tests comparing the accuracy of the Shotput box packing algorithm against the pyShipping packing algorithm, it was determined that Shotput packs significantly and consistently more efficiently than pyShipping.

Note: The links in this article no longer work. If you have any questions, please feel free to reach out.

Context: Though the algorithms being tested are specifically compared on their box packing efficiency, the problem lies in the greater context of selecting the correct box for shipment. After originally implementing pyShipping into the Shotput codebase and discovering inaccuracies, I was tasked with writing a new packing algorithm. I wrote the box packing algorithm needing to achieve the following goals:

- Determine the smallest box and the fewest number of parcels required to pack and ship a predetermined set of products/items.
- Reduce the number of items needing to be packed into the last parcel, thereby allowing it to be repacked into a smaller box.
- Maintain and improve upon the speed and efficiency of generating details for a shipment.

We use the algorithm for every order that goes into the system. We take the boxes available to a team — some teams we work with have specific boxes they use, whereas others use boxes available to everyone — and from there we send each box, accompanied by all of the products for an order, through a weeding-out process. To cut down on runtime and the number of boxes we need to look at, we check in a speed-optimal way that all the items, with their given dimensions, will fit into a box. For each of the acceptable boxes we send the products through the packing algorithm, which returns, for each box, an arrangement of how the products will be organized into different boxes. We then determine which box both requires the fewest number of parcels to pack and is the smallest in volume relative to others that require the same number of parcels; this box is selected as the box for the order. Because there can be a variety of sizes of products shipped, and the products have been packed selecting for the largest products in the first parcels, the last parcel often contains fewer or smaller products, so we resend the contents of the last box through the algorithm to select the smallest box which will fit all the products. By reducing the number of parcels, or the size of the box required, we can reduce the shipping cost and the environmental impact of each order we ship out, while upholding and improving customer and client experiences.

Methods: In comparison to pyShipping, an open source Python library for 3D bin packing.

What was tested:

- The number of bins of a certain dimension required to pack a set of items
- When the number of bins is equal, the number of items packed in the last bin
- Runtime

How it was tested: I tested using three types of datasets:

- Large — a large range of box and item sizes, and a large number of items packed
- Medium — a medium range of box and item sizes (closer in dimensions), and a moderate number of items packed
- Small — a small range of box and item sizes (closest in dimensions), and a small number of items packed

Through use of the Python random library, I generated boxes with random dimensions and a random number of items with random dimensions. I then sent the data through both packing algorithms (pyShipping and Shotput).
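A minimal sketch of how such randomized trials can be generated; the ranges match test case 1, and the two packing functions are placeholders standing in for the Shotput and pyShipping implementations, not their real APIs:

import random

def random_dims(low, high):
    # A box or item is described by three random side lengths.
    return [random.randint(low, high) for _ in range(3)]

def run_trial():
    box = random_dims(50, 400)                      # random box size
    n_items = random.randint(100, 1000)             # random number of items
    items = [random_dims(1, 50) for _ in range(n_items)]
    shotput_bins = shotput_pack(box, items)         # placeholder for Shotput's packer
    pyshipping_bins = pyshipping_pack(box, items)   # placeholder for pyShipping's packer
    # Record the number of parcels and the item count in the last bin for each.
    return (len(shotput_bins), len(shotput_bins[-1]),
            len(pyshipping_bins), len(pyshipping_bins[-1]))

results = [run_trial() for _ in range(10_000)]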
The number of parcels required and the number of items in the last box (with a qualifier for when all items fit into a single box) were recorded. I repeated those steps 10,000 times and retrieved the packing and efficiency data. Though 10,000 trials with 3 different data sets may seem excessive, as bin packing is an NP-hard problem, I wanted my results to be unconditionally evident, significant, and repeatable. To test the processing time, I used the dimension ratios from test case 2 and incrementally increased the number of items packed, with 1,000 trials at each item-set size. Rather than using a random range for the number of items, as I do for trials 1, 2, and 3, I used a set, hard-coded number of items.

According to its readme, pyShipping has been optimized to have similar overall speed to David Pisinger's 3D bin packing solution in C. PyShipping was found to be a little slower and a little more accurate than Pisinger's algorithm, from which we can deduce the speed and accuracy of our algorithm relative to a third algorithm. pyShipping's own research determined that it uses, on average, fewer parcels to pack 500 items. Because I use randomly selected numbers of items to be packed, with a wide range, I cannot use the same metric. However, I gathered statistical data on the number of parcels saved through use of our algorithm vs. pyShipping, as well as the percent reduction in the overall number of parcels needed. From that we can deduce, relative to pyShipping's success against Pisinger's, the success of Shotput's.

Results: For the sake of using only data which can be interpreted, we can exclude from our statistical analysis the cases where all items for both algorithms were packed into a single box, though they will remain reported.

Test Case 1 (Large): Inputs:

- Box size: 50–400 units for each dimension
- Item size: 1–50 units for each dimension
- Items packed: 100–1000 items

The Shotput algorithm improved upon pyShipping in the number of parcels required to pack the same randomly sized items 77.93% of the time. All instances of a tie in the number of parcels required resulted in either all items fitting into one box for both algorithms, or Shotput packing fewer items in the last box — and therefore being more efficient. If we exclude the ties where all items fit into a single parcel, we are left with 8,416 times that we performed better, and 18 where pyShipping outperformed Shotput; in other words, 99.78% of the time the Shotput algorithm packed more efficiently. In the cases where pyShipping performed better, all were instances where pyShipping required 1 box and the Shotput algorithm fit all but one item into the first box. (Example: with 138 items to be packed, pyShipping fits all in one bin; Shotput fits 137 in one bin, and the last item in a second bin.)

Test Case 2 (Medium): Inputs:

- Box size: 100–200 units for each dimension
- Item size: 20–100 units for each dimension
- Items packed: 20–100 items

This test case uses boxes that are closer in size to the items being packed, with less variability and a smaller number of items. Shotput packed using fewer parcels 97.8% of the time, and packed more efficiently — including fewest items in the last bin, and excluding instances where all items fit in one bin — 99.55% of the time. 0.21% of the time there was a tie, and 0.45% of the time pyShipping performed better. The large mean and median are an indicator of the large average number of parcels needed to pack an average of 60 items.
Test Case 3 (Small): Inputs:

- Box size: 100–200 units for each dimension
- Item size: 40–100 units for each dimension
- Items packed: 10–30 items

This test case is closest to how Shotput's algorithm is currently used. For this case I used smaller boxes and smaller items. As a result, there were more ties and more variation. 85.88% of the time, Shotput used fewer parcels to pack the same items, while 0.58% of the time pyShipping required fewer parcels. 92.24% of the time the Shotput algorithm packed more efficiently, while pyShipping packed more efficiently 3.2% of the time.

Time Component: The importance of the time component was placed lowest in the evaluation of the two algorithms, as our immediate needs at Shotput require precision over scalability; however, I think it is important to mention that pyShipping's time component is faster across the board. For the packing of fewer parcels — which is closest to our real-world application — the difference is calculable but negligible from a user standpoint; as complexity increases, however, so does the disparity between the pyShipping and Shotput time components. From an exponential best fit of both algorithms, we see that Shotput's runtime increases at a slightly higher exponential rate than pyShipping's; moreover, if we write `y = cx^p`, where p is 1.8 and 1.7 respectively, the linear component, c, is nearly 4x greater in Shotput's algorithm. As we also know that Pisinger's algorithm runs slightly faster than pyShipping, these results leave Shotput something to shoot for as we continue to build and adapt the algorithm and put greater emphasis on scalability.

Of note: When I tested the medium box and item size ranges with 500 items to be packed, the mean percent of parcels saved was 53.2% with a standard deviation of 5.4%, while the average number of parcels saved was 56, with a standard deviation of 25. Over all 1,000 trials, Shotput packed fewer parcels 100% of the time.

Conclusions: The results were very clear. After compiling the data from 28,230 trials over test cases 1, 2, and 3 (30,000 trials minus ties where all items were packed in one bin for both algorithms), Shotput's box packing algorithm successfully required fewer parcels for the same items 92.5% of the time, and packed more efficiently 97.0% of the time. The z-score of these results on a binary distribution is 156, with a standard deviation of 86.6. In other words, we can say with off-the-chart certainty that the Shotput shipping algorithm is more precise and more accurate than pyShipping.

In our current phase of development, and in the current use cases of the Shotput box packing algorithm (represented by test cases 2 and 3), the algorithm is fast, and from a user standpoint does not perform badly relative to pyShipping. Both algorithms have a Big O better than O(n^2), which is far better than a brute-force approach at O(n^6). Because Shotput scales at a slightly higher exponential rate and a noticeably greater linear rate, as complexity increases, so does the difference in runtimes. This data can be used to motivate the next stage in optimization and development, while maintaining accuracy. From the statistical analysis, we see the most striking success in test case 2, at 41% fewer parcels used on average with the Shotput algorithm vs. the pyShipping algorithm. Test cases 1 and 3 show the largest standard deviations; however, from further analysis of the results we can see that the data is skewed.
In test case one, the mean (3.68) number of parcels saved is significantly higher than the median (2), while the median percent of parcels saved is 50%, vs. the mean of 45%. This is an indicator that a large portion of the test cases resulted in 1 or 2 parcels being packed by both Shotput and pyShipping, resulting in a greater percentage difference from a small difference in parcel counts. The mean being so high is an indicator that the dataset contains a wide range of parcel counts across trials. In general, the large standard deviations are an indicator of wide ranges of results; but as we have seen from the low number of successes of pyShipping in outperforming Shotput, where the median parcels saved is always lower than the mean, that wide range is in favor of the Shotput algorithm. (Note: additional tests were run with 1,000 trials each for item counts of 10, 100, 200, and 300, with small item and box dimensions.)

One thing that isn't tested is the remaining volume in the final parcel, which, when considered in the larger context of our use case, will affect the overall success. Since all the products in the test cases have random dimensions, which items are placed in the final parcel could be significant. The Shotput algorithm iterates through the items to pack from largest to smallest, so the smallest items which won't fit in the other parcels end up in the final parcel; but without additional testing, we do not know the volume left available in the final parcel by the pyShipping algorithm, and it is possible that in cases of ties where one algorithm packs more items into the last parcel, the other has an overall better efficiency.

Because there are checks ahead of time for boxes that will not be usable, we are able to limit the number of times the algorithm needs to be hit per use case. With improvements in packing efficiency this significant, the increased runtime for a more accurate result is favorable to a quicker but less efficient option. In summary, the Shotput algorithm performs significantly better at efficient packing than pyShipping's algorithm, from which we can deduce that its performance is also more efficient than Pisinger's. Documentation for using the Shotput Box Packing API can be found at https://warehousing.shotput.com/developers/box-docs.
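As an aside, the power-law fit (y = c*x^p) used in the time-component analysis above can be reproduced with a few lines; the arrays below are hypothetical, not the article's actual timings:

import numpy as np
from scipy.optimize import curve_fit

def power_law(x, c, p):
    return c * np.power(x, p)

n_items = np.array([10, 100, 200, 300])      # hypothetical item counts
runtimes = np.array([0.02, 0.9, 3.1, 6.8])   # hypothetical seconds per run

(c, p), _ = curve_fit(power_law, n_items, runtimes)
print(f"runtime ~ {c:.4f} * n^{p:.2f}")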
https://medium.com/the-chain/efficiency-of-the-shotput-packing-algorithm-a690e914d49c
['Stephanie Hutson']
2018-02-25 17:26:08.662000+00:00
['Algorithms', 'Software', 'Logistics', 'Startup', 'Data Science']
Why developers are falling in love with functional programming
Functional programming has been around for the last 60 years, but so far it's always been a niche phenomenon. Although game-changers like Google rely on its key concepts, the average programmer of today knows little to nothing about it. That's about to change. Not only are languages like Java or Python adopting more and more concepts from functional programming; newer languages like Haskell are going completely functional. In simple terms, functional programming is all about building functions for immutable variables. In contrast, object-oriented programming is about having a relatively fixed set of functions, and you're primarily modifying or adding new variables. Because of its nature, functional programming is great for in-demand tasks such as data analysis and machine learning. This doesn't mean that you should say goodbye to object-oriented programming and go completely functional instead. It is useful, however, to know about the basic principles so you can use them to your advantage when appropriate. It's all about killing side effects To understand functional programming, we need to understand functions first. This might sound boring, but at the end of the day it's pretty insightful. So keep reading. A function, naively stated, is a thing that transforms some input into some output. Except that it's not always that simple. Consider this function in Python: def square(x): return x*x This function is dumb and simple; it takes one variable x , presumably an int , or perhaps a float or double , and spits out the square of that. Now consider this function: global_list = [] def append_to_list(x): global_list.append(x) At first glance, it looks like the function takes a variable x , of whichever type, and returns nothing since there is no return statement. But wait! The function wouldn't work if global_list hadn't been defined beforehand, and its output is that same list, albeit modified. Even though global_list was never declared as an input, it changes when we use the function: append_to_list(1) append_to_list(2) global_list Instead of an empty list, this returns [1,2] . This shows that the list is indeed an input of the function, even though we weren't explicit about it. And that could be a problem. Being dishonest about functions These implicit inputs — or outputs, in other cases — have an official name: side effects. While we were only using a simple example, in more complex programs these can cause real difficulties. Think about how you would test append_to_list : Instead of just reading the first line and testing the function with any x , you need to read the whole definition, understand what it's doing, define global_list , and test it that way. What's simple in this example can quickly become tedious when you're dealing with programs with thousands of lines of code. The good news is that there is an easy fix: being honest about what the function takes as an input. This is much better: newlist = [] def append_to_list2(x, some_list): some_list.append(x) append_to_list2(1,newlist) append_to_list2(2,newlist) newlist We haven't really changed much. The output is still [1,2] , and everything else remains the same, too. We have changed one thing, however: the code is now free of side effects. And that's great news. When you now look at the function declaration, you know exactly what's going on. Therefore, if the program isn't behaving as expected, you can easily test each function on its own and pinpoint which one is faulty. Keeping your functions pure is keeping them maintainable.
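As a small illustration of the same idea, a fully mutation-free variant might look like the sketch below: rather than appending in place, it returns a brand-new list, so the caller's data is never touched. (The name append_pure is mine, not from the original article.)

def append_pure(x, some_list):
    # Build and return a new list; the input list is never mutated.
    return some_list + [x]

first = append_pure(1, [])       # [1]
second = append_pure(2, first)   # [1, 2]; `first` is still [1]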
Photo by Christina @ wocintechchat.com on Unsplash Functional programming is writing pure functions A function with clearly declared in- and outputs is one without side effects. And a function without side effects is a pure function. A very simple definition of functional programming is this: writing a program only in pure functions. Pure functions never modify variables, but only create new ones as an output. (I cheated a bit in the example above: it goes along the lines of functional programming, but still uses a global list. You can find better examples, but it was about the basic principle here.) Moreover, you can expect a certain output from a pure function with a given input. In contrast, an impure function may depend on some global variable; so the same input variables may lead to different outputs if the global variable is different. The latter can make debugging and maintaining code a lot harder. There's an easy rule to spot side effects: as every function must have some kind of in- and output, function declarations that go without any in- or output must be impure. These are the first declarations that you might want to change if you're adopting functional programming. What functional programming is not (only) Map and reduce Loops are not a thing in functional programming. Consider these Python loops: integers = [1,2,3,4,5,6] odd_ints = [] squared_odds = [] total = 0 for i in integers: if i % 2 == 1: odd_ints.append(i) for i in odd_ints: squared_odds.append(i*i) for i in squared_odds: total += i For the simple operations that you're trying to do, this code is rather long. It's not functional, either, because you're modifying global variables. Instead, consider this: from functools import reduce integers = [1,2,3,4,5,6] odd_ints = filter(lambda n: n % 2 == 1, integers) squared_odds = map(lambda n: n * n, odd_ints) total = reduce(lambda acc, n: acc + n, squared_odds) This is fully functional. It's shorter. It's faster because you're not iterating through many elements of an array. And once you've understood how filter , map , and reduce work, the code isn't much harder to understand either. That doesn't mean that all functional code uses map , reduce and the likes. It doesn't mean that you need functional programming to understand map and reduce , either. It's just that when you're abstracting loops, these functions pop up rather a lot. Lambda functions When talking about the history of functional programming, many start with the invention of lambda functions. But although lambdas are without doubt a cornerstone of functional programming, they're not the root cause. Lambda functions are tools that can be used to make a program functional. But you can use lambdas in object-oriented programming, too. Static typing The example above isn't statically typed. Yet it is functional. Even though static typing adds an extra layer of security to your code, it isn't essential to make it functional. It can be a nice addition, though. Functional programming is easier in some languages than in others. Photo by Christina @ wocintechchat.com on Unsplash Some languages are getting more functional than others Perl Perl takes a very different approach to side effects than most programming languages. It includes a magic variable, $_ , which makes side effects one of its core features. Perl does have its virtues, but I wouldn't try functional programming with it. Java I wish you good luck with writing functional code in Java.
Not only will half of your program consist of static keywords; most other Java developers will also call your program a disgrace. That's not to say that Java is bad. But it's not made for those problems that are best solved with functional programming, such as database management or machine learning applications. Scala This is an interesting one: Scala's goal is to unify object-oriented and functional programming. If you find this kind of odd, you're not alone: while functional programming aims at eliminating side effects completely, object-oriented programming tries to keep them inside objects. That being said, many developers see Scala as a language to help them transition from object-oriented to functional programming. This may make it easier for them to go fully functional in the years to come. Python Python actively encourages functional programming. You can see this by the fact that every method has, by default, at least one explicit input, self . This is very much à la the Zen of Python: explicit is better than implicit! Clojure According to its creator, Clojure is about 80% functional. All values are immutable by default, just like you need them in functional programming. However, you can get around that by using mutable-value wrappers around these immutable values. When you open such a wrapper, the thing you get out is immutable again. Haskell This is one of the few languages that are purely functional and statically typed. While this might seem like a time-drainer during development, it pays off bigly when you're debugging a program. It's not as easy to learn as other languages, but it's definitely worth the investment! This is just the beginning of the era of big data. Photo by Austin Distel on Unsplash Big data is coming. And it's bringing a friend: functional programming. In comparison to object-oriented programming, functional programming is still a niche phenomenon. If the inclusion of functional programming principles in Python and other languages is of any significance, however, then functional programming seems to be gaining traction. That makes perfect sense: functional programming is great for big databases, parallel programming, and machine learning. And all these things have been booming over the last decade. While object-oriented code has uncountable virtues, those of functional code shouldn't be neglected. Learning some basic principles can often be enough to up your game as a developer and be ready for the future. Thanks for reading! If you'd like to know how to implement more elements of functional programming in your Python code, stay tuned. I'll cover this in my next story.
https://towardsdatascience.com/why-developers-are-falling-in-love-with-functional-programming-13514df4048e
['Rhea Moutafis']
2020-08-05 20:14:11.855000+00:00
['Python', 'Software Development', 'Towards Data Science', 'Functional Programming', 'Programming']
My Top 7 Tips for Saving Money at Whole Foods
My Top 7 Tips for Saving Money at Whole Foods By: Kate Doubler If you saw the title of this post and thought, "Whole Foods? No way can I shop there," let me promise you that there are great ways to save big bucks at Whole Foods. How do I know? Because I do it all the time! I'm going to give you my best-kept secrets for how I personally save money shopping at a store that is well known for its high prices. My uncle jokingly calls it "Whole Paycheck", but it does not have to be that way. You just have to have a plan! Here are my tips for saving money at Whole Foods. 1. Ask about deliveries The best time to hit your local produce section of the grocery store is immediately after they get a delivery. This ensures that your produce will stay fresh longer so there's less waste. On the flip side, if you are planning to use it up quickly, say for a hearty veggie soup, it's smart to go to the Whole Foods produce section the day before they get their new produce delivery. They often put produce that's past its prime on deep discount so they don't have to pitch it. So finding out when their produce is delivered each week can ultimately save you in 2 different ways. 2. Is it Wednesday yet? Wednesday is an important day at Whole Foods because it's when the new ad comes out. But did you also know that their ads run Wednesday through the next Wednesday? That means that if you time it right, you can cash in on this week's special deals, but also last week's ad deals! Double bang for your buck! #winning. They also have an app where you can find coupons for Whole Foods! You can download it here. 3. Buy frozen Of course it's better to buy fresh produce if possible because of the nutritional value, but if you need veggies or fruit that aren't in season, you will save a ton of money by getting them frozen instead. We buy frozen organic berries year round because the prices are so great, and frozen berries are delicious!!! Plus when they are on sale, you can stock up! 4. Check your pantry Before you head out, make a list of things you are really running low on and another column for things that you could use if the price is right. Also note what you have an abundance of. It's easy to get carried away when you see all types of yumminess they offer in the store, but don't be swayed by a so-so price if you already have a heap of the stuff. Avoid the "I'm not sure if I have this, so I'd better get some" trap. Know what you really need before heading out. Also, do not be lured in by the samples. Yes they're tasty, but are those chips really worth $7 a bag? 5. Don't buy everything on your list there Although you can get good prices on items at Whole Foods when you shop smart, I don't recommend using it as the grocery store for everything. Some of their prices are just too high for most people's budget, including mine. For example, they offer excellent meats and produce, so those are always on my list. Also, buy in bulk there because not all health food stores offer that cost-saving option, but avoid buying prepackaged snacks there. You can find those types of healthy snacks elsewhere for much cheaper. HERE is where I buy all of our prepackaged snacks and nonperishables. 6. Buy "generic" You can get superb quality while saving money by buying their 365 organic brand foods. You will find them all over the store. 7. Take your calculator Okay, yes, you may look like a geek or a cheapskate, but I think saving cash is worth it. A calculator is your best friend in many stores, including Whole Foods.
You will need it to quickly determine which price is better when trying to decide between different sizes of packages. So go ahead and geek out! I do it on the sly; my calculator is on my phone. Pretend you are checking your grocery list on your phone while you do some quick calculations. I won't tell. See? I told you that you could brave the aisles of Whole Foods and walk out of there without having to hand over your car keys in payment. You just have to be smart, make a list, and have a plan. What are YOUR best money saving tips at Whole Foods? I would love to hear about them in the comments below!
https://medium.com/the-paleo-post/my-top-7-tips-for-saving-money-at-whole-foods-4c184a447aa8
[]
2016-10-11 08:53:22.135000+00:00
['Republished', 'Food', 'Health', 'Paleo', 'Budget']
How I Built a REST API Using Google Sheets
Details for Selected Steps Step 2. Create Node project and install Express framework mkdir node-push-up-tracker // Change directory to the created project folder cd node-push-up-tracker // Create Node Project npm init -y // Install Express Framework npm install express Step 3. Write the business logic and create the API endpoint We will create two endpoints in this step. The first endpoint is to retrieve today's push-ups, while the second endpoint is to retrieve the push-ups within a date range. Step 5. Perform authentication using googleapis and retrieve Google Sheets data We will create the pushupService.js that's responsible for retrieving Google Sheets data. But before that, we will have to perform authentication using the googleapis library. // Install Googleapis dependency npm install googleapis Step 6. Create the basic Dockerfile on how to start your application In this step, I copy the basic Dockerfile that describes how to start the Node application. Step 7. Build your container image and submit it to Cloud Run In this particular step, we will build the container image and submit it using the gcloud command. If you have yet to install gcloud, please refer to the installation guide. Before we start, you'll have to know your Google Project ID. Go to the Google Cloud Console and select your project. You'll see the Project ID in the Project Info section. Refer to the screenshot below. Get your Project ID in the Project Info section Let's build the container image now and submit it to Cloud Run. I have written a bash script for the build process to avoid having to repeatedly type the build command (which I always forget, so I end up referring to the documentation again). Run the build script via sh build.sh . What this build script does is: It will assign your Google Project ID to the GOOGLE_PROJECT_ID variable and use this variable to identify which project this container image belongs to. You will also have named the container image under the CONTAINER_IMAGE_NAME variable. Thus, if you're using the same container image name, each build process will generate a revision for you. Refer to the screenshot below where I have two revisions since I submitted the built image twice. I have two revisions of images Step 8. Deploy your container Let's deploy the container image you have built and submitted. For your information, you can actually deploy the container image using the gcloud command too. However, I would like to show how you can deploy the service using the Google Cloud Platform website — it's easy. 1. Go to your Google Cloud Console and select the project that you have created. 2. Click the Navigation Menu, and then click Cloud Run within the navigation menu. (You can also refer to the red box in the screenshot below.) Cloud Run on Navigation Menu 3. You will see the Cloud Run Services screen, where you can create the service using the container image you have built. Each Cloud Run service has a unique endpoint. 4. Let's create a new service by clicking Create Service. Create Service 5. Fill in the input fields (deployment platform, service name, and authentication) and then click Next. You can refer to the example below. Example of how you can fill it in 6. Select the container image you have built by clicking Select. You should see a list of built container images. 7. Before you click Create, click the Variables tab to add environment variables, if you have any.
In my use case, I would need to add SHEET_ID and TAB_ID so it knows which spreadsheet and which tab it should retrieve the data from. Adding Environment Variable at Variables Tab 8. Click Create, and your service will be created and deployed. You will see the screen below if your service is successfully created and deployed. You can verify by calling the API endpoint you created with the URL. The service is ready now!! Step 9. Grant permission to the compute engine to access your Google Sheets If you have followed the steps up to here and tried to call the API, you might get "permission denied" when trying to access Google Sheets. By default, you're the only one who can access your Google spreadsheets, due to security and privacy concerns. Copy the compute account number that you find in the Permissions tab and add it to the spreadsheet's sharing settings. Now you will be able to successfully retrieve the spreadsheet data. Refer to the screenshots below. Copy compute account member Share your spreadsheet to the compute account Now you should have a working Express application, and you should be able to retrieve the total push-ups from the spreadsheet. Here is the screenshot where I successfully retrieved my push-up data.
https://medium.com/better-programming/how-i-built-a-rest-api-using-google-sheets-5bbf356b01f0
['Tek Loon']
2020-07-31 02:10:47.128000+00:00
['JavaScript', 'Serverless', 'Software Engineering', 'Google Cloud Platform', 'Programming']
Enabling students to practice on Bitesize
Introduction Bitesize recently celebrated its 20th anniversary last September, complete with new branding and personalisation features allowing students to customise their Bitesize experience via the My Bitesize page. Bitesize has great content that students enjoy; one of those pieces of content comes in the form of a test. Each test consists of 10 questions at the end of a revision guide, and students use these to check their knowledge of the guide ahead of revising it. We believe that by enabling students to practice content within the Bitesize site, they will be able to retain content in memory for longer. We also want to drive students back to the Bitesize web and Bitesize app via e-mail and in-app notifications. To implement these new features we have partnered with a third-party company who provide practice and notifications features as a platform. How did we do it? The BBC has its own sign-in service; this service falls under the Audience Platform team within the BBC and they provide services to manage user accounts (login, password, e-mail addresses, under-13 account management, etc.) and to enable personalisation and participation on all BBC online products: We wanted integration with the third-party to use the BBC's account management system to have a joined-up BBC user experience around accounts We wanted to strictly limit the data we sent to the third-party (so no sharing of e-mail addresses, etc.). Before we started this project there was no common way within the BBC of doing things like: Providing third-parties with anonymised BBC iD tokens to create accounts in third-party systems Providing one-to-one notifications (e-mail or app) for people with BBC iD accounts Our third-party requires a unique iD to reference the student (and an e-mail address, but we don't want to provide that to them). Our requirements "We want to be able to embed the third-party's iframe within Bitesize and associate the student's BBC account with the third-party (whilst not giving away any personally identifiable information!)" "We need to handle Subject Access Requests for students that wish to see what data the BBC is holding about them" "We want to be able to send one-to-one notifications to students; the third-party platform can trigger and send notifications, but the notification needs to be Bitesize branded, link to Bitesize content, and we want to be able to send either e-mail notifications or in-app notifications via the Bitesize app" Integrating the third-party into our account system To solve this we added a new endpoint to our Bitesize personalisation service ( Newton) to enable integration — this endpoint uses a plugin written by the Audience Platform team to verify that the user's identity cookie is valid and to extract the unique iD that references that user — Newton then takes that unique iD and encrypts it using Amazon KMS. It is that encrypted iD that is then sent to our third-party to launch the practice session within Bitesize. The flow to launch a practice session within Bitesize Handling Subject Access Requests We also needed to be able to handle SARs (Subject Access Requests) for users if they wish to request the data that the BBC holds about them. We integrated with the existing SAR architecture that Audience Platform had defined — this means that if a user requests their personal data from the BBC, we get informed via an AWS SNS topic that a user has requested their data; this executes a Lambda on our side and we make a call to our third-party's SAR API.
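To make that flow concrete, here is a minimal sketch of what such an SNS-triggered SAR Lambda could look like. It is an assumption-laden illustration, not the BBC's actual code: the third-party endpoint URL and payload shape are hypothetical, and only the SNS event structure and standard-library calls are real.

import json
import urllib.request

# Hypothetical endpoint; the real third-party SAR API is not public.
THIRD_PARTY_SAR_URL = "https://thirdparty.example.com/sar"

def lambda_handler(event, context):
    # Each SNS record carries the anonymised (KMS-encrypted) user iD.
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        payload = json.dumps({"userId": message["encryptedUserId"]}).encode("utf-8")
        request = urllib.request.Request(
            THIRD_PARTY_SAR_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        # Any network or HTTP error raised here is ours to handle and retry.
        urllib.request.urlopen(request)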
We take responsibility for handling errors that might occur whilst contacting the third-party to initiate a SAR request, while our third-party is responsible for uploading the user's data to the appropriate place. Respecting users' Subject Access Requests Sending personalised notifications We want to be able to send one-to-one notifications to Bitesize students to drive them back to the web-site. We felt that the rest of the BBC would benefit from this functionality and so we worked with other teams across the BBC to deliver a single solution that is scalable and shareable. Our third-party notifies us of a notification via an SNS topic; this invokes a Lambda that we've created to proxy the notifications. This Lambda decrypts the iD the third-party holds and forwards it on to the notifications API to initiate an e-mail send request. Sequence of actions for us to send a notification to a BBC user In future we'd like to use this system to expand notifications out to the Bitesize App. Where are we now? We have integrated our anonymised BBC iD user token service with our third-party provider in order to launch the practice iframe within Bitesize test chapter pages. Students can practice content over a variety of guides in different subjects within GCSE ( Biology, Chemistry, Computer Science, Drama, German, and others). We've released a dashboard on the My Bitesize page that enables students to see which content they need to practice next. And finally, we've also released personalised one-to-one email notifications to students. What's next? Releasing more and more practice content into subjects, and a fullscreen experience! We'll also be releasing a version of the Bitesize app that supports practice, which will enable us to provide personalised one-to-one in-app notifications via the app.
https://medium.com/bbc-design-engineering/enabling-students-to-practice-on-bitesize-1ade4564dd66
['Duncan Mcdonald']
2019-08-12 08:39:06.463000+00:00
['AWS', 'Bitesize', 'Learning', 'BBC', 'Education']
Temas en Dataviz: Una Base para Empezar
https://medium.com/nightingale/temas-en-dataviz-una-base-para-empezar-3790440b7b7b
['Martin Telefont']
2019-04-27 14:00:33.513000+00:00
['Typography', 'Process', 'Storytelling', 'Communication', 'Data Visualization']
How to Come Up With Programming Project Ideas
Elements of Good Project Ideas I have project ideas all the time. But the good project ideas are the ones that I end up pursuing for months on end. This is the framework I go by when picking good project ideas. The way I see it, every good project idea has the chance of turning into something more. The starting point of the project is as important as the journey itself. You want to orient your journey by the correct compass. For me, my compass is always applications in the real world. I don't program just for myself. I program to use my skills to solve some problems. Fun — First and foremost, are you having fun? Going through life clocking in from 9 to 5 can get repetitive at any job. Are you having fun in your weekend projects? I don't sit in front of my computer if I'm not having fun on my weekends. You shouldn't either. Solve a problem — What is the one thing that you want to improve in your life? How can you develop a product to solve that? Provide value — Providing value with your product is one thing. But is the project providing you with value? Are you learning new skills? Are you gaining new information? Starting from scratch — Often, programmers like to start from scratch. Can you google and see if anyone in the community has developed a project similar to yours? Not starting from scratch means time saved. Build to keep — Many programmers don't think of side projects as anything that they want to keep. But good side projects can be turned into startups. You want to build and design it so that you can keep it. Side projects can generate side revenues. You're investing in side projects. Start simple — Every project I've ever started was so simple in the beginning. You want to concentrate on just one idea and one functionality. Think microservices. Take the workflow you want apart. Just automate one piece of the puzzle. Synergy — You're a programmer. But who are you as a person? When you have synergy between who you are and what you do, now that's the recipe for a good life. Create projects that you're passionate about because they align with who you are as a person. You can also create synergy between work projects and side projects. Once I wrote some functionality to solve one of my problems. Later on, I applied that code to a work project as well.
https://medium.com/better-programming/how-to-come-up-with-programming-project-ideas-50f7281b294d
['Jun Wu']
2020-02-04 17:13:03.335000+00:00
['Learn To Code', 'Software', 'Programming', 'Software Engineering', 'Software Development']
The Surprising Traits of Successful Sales Leaders & Their Explosive Wealth
Courtesy: The Greatest Salesman in the World, readingraphics Movies such as "The Wolf of Wall Street" have made us believe that sales is a dirty word. That sales folks are persuasive and focus solely on making a fortune by selling us products through mesmerizing language. But I believe that sales is about rallying people to buy your product, vision, or cause. Not everyone can be successful in a sales role. To be in sales requires being at the top end of the money & risk pyramid. It is daunting, risky and tough. And, very unfairly, because they make big money selling, the world hates salespeople. Salesmen are considered greedy, money-mongering, superficially sweet-talking, loud and pompous, evil daemons that sell customers stuff they do not need. Even now, when we say someone is in sales, we think "Oh, that guy who made his way into millions". There is half jealousy and half suspicion. But I set out to investigate if that could really be true. Could it really be possible that salespeople rake in millions of dollars by bluffing their way through customers? Even if they did, was it sustainable? You can fool others once or twice, but can you for long? In our current times, where customers are well equipped with information? And so I went about interviewing some 30-odd really successful sales guys across various professions. Some really rich and very successful ones. I read and re-read the book "The Greatest Salesman in the World" and many other books on sales published for modern times. And the perspective and the learning that came out of this exercise was the complete opposite of what I had imagined about sales guys earlier. Unlike what we all imagine, a sales job is not just about wining and dining and having fun. That is what it seems on the surface. But behind all that loud talk and laughter lies a secret code. Only the sales guys that crack that secret code survive in the long run. What are those traits? a) They are patient as a saint "Sales guys can never lose patience", I am told. "If you are short-tempered, and do not have the ability to live through hours of repeating the same product pitch to hundreds and thousands of restless, impatient, sometimes rude, different types of customers, don't consider sales." And to be patient with people, one needs to love & be empathetic with people. With all their flaws, warts and all. A highly judgmental person that expects people to behave in a certain way, and cannot stand resistance, won't be patient with people or successful in selling. And sales usually takes excruciatingly long cycles. Customers debate, analyze, procrastinate and then walk away after a long conversation. In spite of all of this, and a success rate of just 15% of total prospecting, a sales guy needs to maintain patience. Patience is tolerance and kindness exhibited to people that think you are selfish. The first test of eligibility for a sales job is to have patience. Patience through countless hours of brutal days — of waiting, rejection, loss and insults. And persisting. If the front-end sales guy is a snob and restless, the business stands to lose. b) They are masters in psychology In a good way. Sales guys need to change themselves with every customer they meet. "One of the greatest proven human strategies to influence the other to agree with you is this: People like people that like them & People like people that ARE like them.
With every sale that is successfully made, you have made the buyer believe that you are trustworthy, one among them, their guide that has understood their need and empathizes with them". A sale is always based on trust. In a situation where there is an element of doubt, no sale happens. And how to build trust? By using a little imitation, a little flattery, and by completely shedding your own perspective and getting into a customer's shoes & mind. Almost like a chameleon, a sales guy needs to shed his skin and adopt his customer's behavior, mindset and body language. Good sales people research and understand their target customers very well. They often ask themselves the questions that a customer would ask. How would I like to be treated? What are the features I am looking for in the product? What is the price point? How do I make buying decisions? By always putting the customer first, understanding and shifting to a customer's mindset, successful sales people sell through trust. Hence they don't start with a pitch; they probe by asking empathetic questions & completely mapping the product to a customer's mindset. And then framing the pitch and product for the customer's needs. c) They are happy by practicing emotional intelligence "Grumpiness, being sad, loathing etc. are the privileges of the back-end team — the engineering/manufacturing and design teams. People in the field, mainly the sales folks, cannot be grumpy. There are moments where I want to have "ME time" after an overwhelming event. But we don't have that luxury. Imagine doing business with a dull sales guy. A lifeless, brooding person that sits across the counter. Emotions are contagious and you need to be happy and radiate optimism in spite of what happens in your internal world" And no one can fake happiness for long. Happiness is an inward thing and the most adept sales people are happy because they are also light-hearted and keep their emotions in check. Learning to stay the course amid the noise, and to perform for the customers through day and night, requires a different level of emotional intelligence. Warren Buffett said "If you cannot handle your emotions, you cannot handle money". How does one build emotional intelligence? By constantly observing and being watchful of oneself, through several years of practicing to handle resistance in life, resisting impulsive behavior and by taking total responsibility for one's actions in life. But more than all that, sales people are born people pleasers too. They love people, they love making people happy. They light up on seeing customers. d) They are big thinkers and focus on acceleration We are used to believing that sales guys focus on extracting money out of everyone who walks their path. What I observed was quite the opposite. "Successful sales people focus on strong prospects. A select set of prospects that can build 10X the momentum, as compared to a whole bunch of prospects that give 1X the momentum. You need to understand the difference between catching a whale and a bucket of jellyfish. The focus is therefore to create a big impact and a large enough return from the best prospect(s) rather than try to extract a sale out of every customer. For instance, I do not let my team spend their energy on prospects that do not fall in line with our acceleration journey" Yet, they make it a point to refer the prospect to where their demands will be met — thus serving as a guide to help customers achieve their needs rather than serving every customer.
A tight control on focus, straight-line execution, and keeping focus on high-value prospects is what gives the successful sales guys an edge over the less successful ones. e) They have extraordinary creativity The last one. Somehow, creativity is called out only when it is in some art form. Such as a painting, drawing or a sculpture. And hence sales folks have always been typecast as empty talkers with no substance. Nothing could be further from the truth. Creativity is anything built anew to create something better than what was before. "Sales people need spontaneous, creative thinking as they are thrust into a world of unknowns and objections every day. Prospecting requires creativity in getting the prospect interested and to speak up. Interrogative questioning requires creativity to get the desired answers. Objections require smart responses to overcome them in the moment without referring to a playbook of questions and answers. Every minute of the sales life requires creativity in selling and bringing home deals. The creativity of sales people may not be found in museums but in the greatest businesses that have been built all over the world." Businesses, products and institutions run because of the big risks taken and the intelligence brought by the sales teams. Trashing sales teams because of a few poor business practices does not make sense anymore. Courtesy: DREAMSTIME — "BUSINESS WARRIOR" Only the tough warriors are cut out for long-lasting success in sales. Imagine being on the battlefield every day, day after day in your life for the next 30–40 years, and helping your teams win. Do you have what it takes? Write to me: subhashrinivasan@gmail.com
https://medium.com/swlh/the-surprising-traits-of-successful-sales-leaders-their-explosive-wealth-f32fafe63730
['Subha Shrinivasan']
2020-11-18 12:14:57.546000+00:00
['Growth', 'Sales', 'Entrepreneurship', 'Business', 'Money']
ITSM Analytics
Information Technology has become the backbone of almost every business in the last few decades, driving productivity and efficiency in every business function. Today, IT organizations have more data than ever before. Whether it's service ticket management, asset tracking, budgeting, staffing, or infrastructure and platform monitoring — that data has the power to speed up and simplify your job. The 4 core IT processes ITSM (or IT Service Management) refers to all the activities involved in designing, creating, delivering, supporting and managing the lifecycle of IT services. What are IT services? Think of any piece of information technology at your workplace — from business-critical services like ERP systems to less critical stuff like your laptop and the apps installed on it. They're all services provided by internal or external IT service providers who are responsible for the end-to-end service lifecycle, from design through deployment to continuous improvement and termination. Why do you need ITSM? · Improve (internal) customer experience · Better control and governance · Better Business — IT alignment · Reduce costs and risk · Increased efficiency · Transparent Service Levels · Standardization There are many tools that provide service desk and/or monitoring functionalities, but ITSM tools generally lack the sophisticated analytics and dashboarding capabilities we data heads are used to. Luckily we can always use our favourite analytic tools to tackle these kinds of challenges. Why do you need ITSM Analytics? In a complex, heterogeneous environment of tools, infrastructure and organizations, there is no single window to access and analyze IT Service Management data. This lack of visibility is a barrier to sufficient governance, driving inefficiencies and increasing the cost of delivering services. Platforms like Tableau or Power BI allow companies to bridge the gap between information silos and to analyze all IT service-related data in one single place. This integrated approach enables you to create detailed domain-specific dashboards for the various roles, including subject matter experts, service managers and executives, to provide insights into the performance, costs, health and availability of IT services. These dashboards need to be easily customizable and be able to provide drill-down and drill-across functionality. This will allow businesses to understand how well IT services are meeting business objectives and agreed service levels. Our two example ITSM dashboards help application owners and service managers track the performance and availability of all applications, the number of incidents and requests, as well as trends in resolution time. Let me know if you have feedback, suggestions or questions.
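As a concrete footnote to the dashboard discussion above, the sketch below computes monthly incident counts and mean resolution time per application with pandas, the kind of aggregation such dashboards visualize. The CSV file and its column names are assumptions for illustration, not part of any specific ITSM tool or of the dashboards described here.

import pandas as pd

# Hypothetical export of service-desk tickets; column names are assumptions.
tickets = pd.read_csv("itsm_tickets.csv", parse_dates=["opened_at", "resolved_at"])

# Resolution time in hours for each ticket.
tickets["resolution_hours"] = (
    (tickets["resolved_at"] - tickets["opened_at"]).dt.total_seconds() / 3600
)

# Monthly incident counts and mean resolution time per application.
summary = (
    tickets[tickets["type"] == "incident"]
    .groupby([pd.Grouper(key="opened_at", freq="M"), "application"])
    .agg(incidents=("resolution_hours", "size"),
         mean_resolution_hours=("resolution_hours", "mean"))
)
print(summary.head())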
https://medium.com/starschema-blog/itsm-analytics-7b14a585f0f1
['Ivett Kovács']
2019-04-11 09:25:00.485000+00:00
['Tableau', 'Data Visualization', 'Management', 'Itsm', 'Dataviz']
React Native Animations — Zero to Hero
After briefly covering Animated , useNativeDriver and LayoutAnimation , next on the menu is interpolation. One of the more interesting and useful features to know when talking about animation in React Native is interpolation. Animated allows us to interpolate the animated value and provide some logic for how our transition should behave. If, for instance, I would like to create a fade animation, I would animate the opacity prop, changing its value from 0 to 1 . It should look something like this Animated.timing(this.state.fadeAnimation, { toValue: 1, duration: 300, }).start(); <View style={{opacity: this.state.fadeAnimation}}/> While this is a common and simple use case, sometimes the range of values we want to animate is more complex. Let's say we want to animate a clock hand; we would want to use the rotate transformation and move in a circle shape. In this case, the values we'll be using are degrees ( 0deg–360deg ). Because an Animated value can't hold these types of values, we will use a simple number between 0–360 instead, then interpolate this value into degrees. It should look something like this… renderClockHand() { const clockSize = 150; const rotation = this.timeAnimation.interpolate({ inputRange: [0, 360], outputRange: ['0deg', '360deg'], }); return ( // rotating container <Animated.View width={clockSize} height={clockSize} style={{transform: [{rotate: rotation}]}}> // clock hand <View width={clockSize / 2} height={clockSize / 2} style={{borderRightWidth: 3}} /> </Animated.View> ); } Unfortunately, react-native doesn't support transform-origin; that's why we're rotating a container and placing the clock hand in its first quarter, creating an illusion of a rotating clock hand. Interpolation generates a transition function for your animation. It can be a linear function like the one we used for the clock hand or a non-linear function, depending on the given inputRange and outputRange . When performance hits In some cases, even when following the rules and when you think you got everything right, there are factors you can't control. We were going after this animation Again, a pretty trivial use case, which can't be too difficult to implement, and it isn't, at least not until you add performance to the equation. In our case, this is the first screen our users see when they open the app. While this screen is being loaded, lots of other things are happening in the background, especially because it's the initial phase of the app, when network calls and other initialization logic usually occur. This, of course, affects the (RN) bridge and performance in general. Now, let's go back to our animation. How would we tackle this type of animation? Well, the first approach with this animation was to trigger each card entrance separately with a delay of 100ms using a timeout. I guess this is a pretty intuitive solution most people will come up with. Improving this can be done by using the Animated.stagger API, which behind the scenes handles it the same way — using a JS timeout. But why is that a problem? Because we rely on a delay of 100ms between each card entrance, and because each card entrance animation is created and invoked one at a time in the JS thread, we might get a flaky animation just because the bridge is overloaded, which can cause some inconsistency in the card entrances. You can see here what's happening when the JS thread is busy while the entrance animations are happening. To simplify, imagine you're in line for a roller coaster ride.
On this ride, each person goes up alone and sits in a solo cart. The premise is to send a single person each minute. Now, here comes a group of 4 friends. They want to enjoy the ride together, but unfortunately, on this ride, each one of them will go up alone and have to wait for the rest of his friends at the end of the ride. Apart from everyone riding alone, the ride's premise of sending a person each minute is not doable either: since the staff is slow and lazy, and the line is crowded, you might have to wait a couple of minutes more. What will be the solution then? Bigger carts! By having a cart of four, a group of friends can enjoy the ride together and are guaranteed to finish together. This is exactly how we should treat our animation. Instead of seeing it as 4 different card animations, we should see it as a single entrance animation of the whole screen. If the ride's line is the JS thread and the actual ride is the native thread, then by moving a group of animations as one, the animation starts once and goes all the way through until it ends on the native thread, achieving a smooth, performant animation. How should all of this be reflected in our code? The Animated Value! Instead of having 4 different animated values, one for each card, and instead of transitioning 4 values, we should have a single animated value — a single transition. Master of Interpolation Ok, enough talking. How are we going from 4 animations to 1? First, we create a single animated value in our screen this.entranceAnimation = new Animated.Value(0) Next, we start the transition, in our case, when the screen mounts. componentDidMount() { Animated.timing(this.entranceAnimation, { toValue: 100, duration: 1300, useNativeDriver: true }).start() } Here, we're going from 0 to 100 in 1300ms . Meaning our whole animation will take 1.3 seconds. Why did I pick the value of 100 to transition to? Because it's an easy number to work with. The way I see it, our animated value represents a timeline, and during this timeline different animations are triggered. With interpolation, we can have different output values for each card by using different input values. So for example, our code for the first card entrance will look like this renderFirstCardForNow() { translateX = this.entranceAnimation.interpolate({ inputRange: [0, 25], outputRange: [-100, 0] // the card will translate from the left side of the screen to its natural position }); return (<Animated.View style={{transform: [{translateX}]}}> // Card content </Animated.View>) } Let's take a look at the following illustration to understand how the whole animation takes place. Each card's animation starts at a different time — on a different animated value. Therefore the inputRange for each card is different: for the first card it's [0, 25] , for the second card it's [25, 50] and so on. Now, the actual animation effect is similar for each card, meaning all card entrances are the same. They just run at different times. So the outputRange should also be the same; for our example we use [-100, 0] . And the code for rendering each card should look like this renderCard(cardIndex) { translateX = this.entranceAnimation.interpolate({ inputRange: [cardIndex * 25, cardIndex * 25 + 25], outputRange: [-100, 0] }); return (<Animated.View style={{transform: [{translateX}]}}> // Card content </Animated.View>) } If you would like to make the animation look even slicker you can play with the easing of the animation and try overlapping the interpolation's inputRange of each card.
The final result should feel smooth and clean, even when the JS thread is busy. For this animation, I used the following ranges const translateX = this.entranceAnimation.interpolate({ inputRange: [ index * 25, (100 - index * 25) / 2 + index * 25, 100 ], outputRange: [-30, 10 - index * 10, 0], extrapolate: 'clamp' }); A bit confusing, I know, but if we take a look at the timeline illustration, it should look something like this New Players Recently, two amazing libraries, react-native-reanimated & react-native-gesture-handler by Krzysztof Magiera, have become the number one go-to for creating sophisticated, native-like components based on complex animations and native gestures. Those libraries might not seem trivial at first, but after diving into their API and learning how to use them correctly you might get to a new level of components and animations. I will not go into them (that will probably require a whole different post), but I recommend checking them out if you feel safe enough with everything else we've talked about.
https://medium.com/wix-engineering/react-native-animations-zero-to-hero-17ebf7e8be81
['Ethan Sharabi']
2019-12-01 13:19:17.042000+00:00
['Animation', 'JavaScript', 'React Native', 'Mobile App Development']
How to Survive the Dutch Winter
Darkness. Rain. Punishing winds. Gray skies. “Oh God, when will it end?” These are some of the words that come to mind when thinking of Dutch winters. Photo credit: Dan Geddes September, when the sun still occasionally shines in Holland, is a good moment to prepare yourself mentally, physically and spiritually for the coming Dutch winter, which I half-affectionately call “the dark time.” Darkness will descend upon the land very quickly. One September morning you wake up around 7:00 a.m., and it’s still light out and the birds are singing, but seemingly only a few mornings later you will notice it’s as dark as midnight. And the same happens in the evening: during your after-dinner walk you suddenly notice that you’re shrouded in darkness. Every day it seems like you lose a half hour of sunlight. Some people embrace the winter. “Maybe we will get snow and ice this year!” they say hopefully, as if that’s a good thing. But not all are so enthusiastic. One Dutch woman told me the only sensible thing to do would be to “move all of Amsterdam, brick by brick, to the south of France.” The short days and long dark nights lead many to despair. Don’t let this happen to you! Here are some tips for surviving the Dutch winter. * Book a trip to a sunny location for December, January or February. Do it now. Right now. Today. You need something to look forward to. There must be light at the end of the tunnel. * Find a cozy café to make your second home. Look for one with a wooden interior where they put candles on the table and string up Christmas lights to create a winter ambience. Try some seasonal autumn beers, Glühwein, or a hot chocolate with whipped cream. * Put up Christmas lights and light candles to re-create this same magical ambiance in the comfort of your own home. * Get into the spirit of the winter holidays, which cleverly serve to break up the long dark time with festivals and other distractions: Sint Maarten, Sinterklaas, Christmas, New Year’s Eve and New Year’s Day are some of the best known holidays. As Americans my family and I also celebrate Halloween and Thanksgiving, so we have a holiday celebration, featuring truly excessive overeating, almost every week. Create your own holiday traditions! * However, if you have children, adopt some rules about holiday sugar intake. No child should eat more than 2,000 pepernoten per day, except of course on the day Sinterklaas arrives from Spain by ship and on 5 December. Upon these days, by sacred tradition, the 2,000 pepernoten limit per child is not enforced. * Oliebollen wagons will soon materialize on busy corners. Try to resist the siren call of the oliebollen (deep-fried sugary balls of white-flour dough, i.e., Dutch donuts). But if you must indulge, the apple beignet is the king of all the deep-fried winter-time sugar rushes. And while apple is indeed a fruit, remember that the apple beignet does not count as one of your recommended five daily fruits or vegetables. * Book some tickets for a show. Remember to book many weeks in advance for the most popular shows. This is a densely populated country! See http://www.amsterdamsuitburo.nl/extra/uitkrant for tickets. * Learn to appreciate traditional Dutch comfort food, such as stamppot, erwten soup, and boerenkool. * Winter, of course, is cold and flu season. Wash your hands frequently, especially when you find a bathroom that provides opulent amenities such as hot water and soap. 
* Despite all precautions, you may develop a condition known as "perma-cold," whereby you have cold symptoms for four solid months and you may believe you are dying. No matter how miserable you become, most doctors will be philosophical about your suffering and prescribe only tea, sleep, and maybe, if they are feeling magnanimous, a few tablets of paracetamol (if you are a weakling or a foreigner). Other people will suggest traditional remedies such as jenever or Jägermeister. Use such remedies with caution. * Find indoor shelter for your bicycle, even if you must put it in your bathtub. If you leave it outside the whole winter, your bike will quickly be reduced to a useless pile of rust. * If you cycle frequently, buy at least two sets of rain gear. If you have a talent for design, consider designing some shirts and pants using rain-resistant materials. Your new all-weather fashion line is bound to sell well here in Holland. * Don't even think of bicycling on the icy streets unless you are Dutch. If you are Dutch, consider petitioning the International Olympic Committee to introduce "Ice Cycling" as a new Olympic sport, preferably for various distances (100 m, 500 m, 1 km, 5 km) with bridges as obstacles. This will help inflate Holland's medal count at the next Winter Olympics. * Consider taking a short walk at lunch time, as this may be your only chance to have the sun shine on your face during the dark time (on the small chance that the sun is shining). Vitamin D is very important to stave off thoughts of despair or emigration! * Further develop and refine your convictions about Dutch weather. Is it better when it's freezing and icy because at least you can see some blue sky? Or do you actually prefer the perpetual gray weather? Is the weather predestined, or just a matter of chance? If predestined, is the weather God's way of punishing the Dutch for past transgressions or imperial hubris? Libraries are filled with learned commentary on the philosophical implications of Dutch weather. Start talking early and often about how much you are looking forward to spring. Keep telling yourself that "spring is coming soon" even when it's December, January, or February. The endless dark mornings may remind you of the classic movie Groundhog Day (1993), where the Bill Murray character keeps waking up on the same day (2 February), trapped, with no hope of escape, in a never-ending winter. Try to take inspiration from Murray's character, Phil Connors, who eventually comes to the realization that no matter what situation you are in, you should strive for excellence in yourself (learn piano, French, or ice-sculpting) and compassion for others (perform small kindnesses), even if it seems like you have been cast into eternal darkness forever by some malevolent force. Take heart: it's only for six months. First published in The Satirist
https://dan-geddes.medium.com/how-to-survive-the-dutch-winter-7ebb85ae5009
['Dan Geddes']
2019-11-01 16:15:54.346000+00:00
['Humor', 'Winter', 'Netherlands', 'Amsterdam', 'Weather']
Pessimism Is the Superpower of the Future
I’m generally not on people’s speed dial for pep talks. And yet there she was, popping the question: “Things are going to get better, right?” My friend needed to hear some hope. “Well,” I said. “I don’t really know.” Her voice grew emphatic. “It has to. It just does. Last semester was so awful. I don’t know if I can go through that again.” She was obviously talking about the pandemic. It owns a penthouse in everyone’s mind, and it’s not vacating anytime soon. To her insistence I replied: “I mean, it doesn’t have to get better…” Her voice was starting to tremble. So I did my best to calm her down. “Look, I don’t know if it’s going to get better. I just know we’ll get more used to it. We’ll get better at handling things. We don’t have a choice.” “That’s not very comforting,” she said. I laughed. This is a fairly standard conversation for me, especially over the last year. You’d think people would learn. It’s safe to say, my pessimism didn’t help her, but it brings me comfort every day. I’ll tell you why. Hope is bad for you. The higher the hope, the deeper the disappointment. Philosophers have known this for a while, going back a few thousand years. There’s a reason why Schopenhauer called hope a “folly of the heart,” and Nietzsche described it as “the worst of all evils.” Hope blinds you to reason. It makes you dumb and gullible, easy to manipulate. Most people don’t realize it, but hope is their greatest weakness. Optimism is just an expectation. Pessimists are keen on letting go of expectations. It’s the quickest route to what we’d call “happiness.” The problem with optimism is that it’s not a strategy. It’s just an expectation for things to work out. Nothing about optimism prepares you for setbacks and failures, and definitely not bad luck. That’s why it doesn’t work, even if people say so. You’re better off with pessimism. Pessimists plan for the worst. A pessimist is someone who anticipates bad outcomes and then prepares for them. When they happen, they’re ready. Imagine that. One of my friends is an Olympic athlete. A couple of years ago, she fell during a race and went from first place to last. That would devastate a normal person. My friend just shrugged it off. “I was actually thinking about that the other day,” she said. “And I told myself if I fell, I would just get up and finish.” Because she visualized a failure, my friend was able to get through it without a meltdown. She won her next race. That’s how pessimism works. Positive thinkers trick themselves into thinking nothing will ever go wrong. When something does, they freak out. They shut down. They get defensive. Or they just ignore reality. It makes everything worse. Pessimists don’t get worked up over disaster. So they get over them faster. Pessimists are honest. Most brands of sunny optimism don’t allow you to speak your mind. You’re not allowed to say anything that could be interpreted as negative. You just have to push it all down. All that suppressed feeling erupts in the ugliest ways. Pessimists don’t have this problem. They get to say whatever they want. Here’s the thing about telling the truth, even if it’s bleak: It makes you happier. Pessimists are tough. We don’t need anyone to tell us that “everything’s going to be okay.” In fact, we can spot that phrase for what it is. It’s a red flag. When someone has to say that, things are actually pretty bad, and you need a plan for when they get even worse. 
While the positive thinkers are sitting around trying to make each other feel better, pessimists are dealing with the reality in front of them. Optimists spend most of their time building a fake world. When their delusions fall apart, so do they. Pessimists don’t have this problem, either. They keep their cool. Pessimists do better work. We have critical minds, and we anticipate negative reactions. We don’t try to block out negative voices. We listen to them. The standard advice is to block out or ignore negative voices, but that’s not effective when you’re a professional. Pessimists learn how to harness their inner critic and put them to work. They don’t get married to outcomes, either. They don’t visualize success. They visualize process. They get lost in the flow of their work. They don’t worry about failing, because they assume they will at some point. That’s why pessimists perform better. Pessimists focus on action. We know that rousing speeches and platitudes wear off. The only way to really make life better is to make it that way. We’re not waiting for the universe to send down miracles. We just get to work. Pessimists know how to improvise. Positive thinkers get stuck on trying to make a bad plan work, even when everyone knows it’s doomed. When you assume your first plan could fail, you don’t get as worked up when it eventually does. You’re more open to improvising. Your mind is more flexible. That’s a valuable skill. Pessimists are more careful. Pessimists wind up better off in the long run, emotionally and financially. They don’t take as many dumb chances. They hang back and watch first before making a decision. Skepticism is another valuable skill. Pessimists are more attentive to details. People tend to look down on you when you’re critical. Being critical means you’re more likely to find problems. That’s a good thing. Finding flaws and inconsistencies, even tiny ones, can save your life and your job. Pessimists keep it simple. A pessimist doesn’t need to dress up the world. They see things for what they are. They don’t mince words. They don’t get worked up over bad news. They spend less time looking for the silver linings and more time looking for actual solutions. They don’t mess around. They get things done. Pessimists know when something’s done. Optimists look for perfection. They’ll call something “finished,” and convince themselves to feel good about it. They focus on strengths, when they should be paying more attention to weaknesses. Pessimists look for flaws. They review their work over and over looking for imperfections. They only stop when they don’t see anything else to add or change. When they finish their work, they don’t judge it. They know it’s everyone else who has the final say on the quality — not them. We’re better off without hope. At least in America, people dismiss pessimists as “glass half empty” types who sit around and complain all day. That’s not true. Pessimists are probably the most productive people you’ll ever meet. They don’t spend all day trying to make themselves feel good about the future. They don’t try to trick themselves into a good mood. They know what they’re good at, and what needs to be done. Hope is a dangerous thing. Too much gives us permission to sit back and let life happen. You can waste your entire life hoping. Don’t hope. Do.
https://medium.com/curious/pessimism-is-the-superpower-of-the-future-4bedadc9613e
['Jessica Wildfire']
2020-12-22 07:37:05.382000+00:00
['Life', 'Self Improvement', 'Mindfulness', 'Health', 'Humor']
OpenAI GPT-2: Understanding Language Generation through Visualization
OpenAI GPT-2: Understanding Language Generation through Visualization How the super-sized language model is able to finish your thoughts. In the eyes of most NLP researchers, 2018 was a year of great technological advancement, with new pre-trained NLP models shattering records on tasks ranging from sentiment analysis to question answering. But for others, 2018 was the year that NLP ruined Sesame Street forever. First came ELMo (Embeddings from Language Models) and then BERT (Bidirectional Encoder Representations from Transformers), and now BigBird sits atop the GLUE leaderboard. My own thinking has been so corrupted by this naming convention that when I hear “I’ve been playing with Bert,” for example, the image that pops into my head is not of the fuzzy unibrowed conehead from my childhood, but instead is something like this: I can’t unsee that one, Illustrated BERT! I ask you — if Sesame Street isn’t safe from NLP model branding, what is? But there was one model that left my childhood memories intact, an algorithm that remained nameless and faceless, referred to by its authors from OpenAI simply as “the language model” or “our method.” Only when authors of another paper needed to compare their model to this nameless creation was it deemed worthy of a moniker. And it wasn’t ERNIE or GroVeR or CookieMonster; the name described exactly what the algorithm was, and no more: GPT, the Generative Pre-trained Transformer. But in the same breath that it was given its name, GPT was unceremoniously knocked off the GLUE leaderboard by BERT. One reason for GPT’s downfall was that it was pre-trained using traditional language modeling, i.e., predicting the next word in a sentence. In contrast, BERT was pre-trained using masked language modeling, which is more of a fill-in-the-blanks exercise: guessing missing (“masked”) words given the words that came before and after. This bidirectional architecture enabled BERT to learn richer representations and ultimately perform better across NLP benchmarks. So in late 2018, it seemed that OpenAI’s GPT would be forever known to history as that generically-named, quaintly-unidirectional predecessor to BERT. But 2019 has told a different story. It turns out that the unidirectional architecture that led to GPT’s downfall in 2018 gave it powers to do something that BERT never could (or at least wasn’t designed for): write stories about talking unicorns: From https://blog.openai.com/better-language-models/ . Edited for length. You see, left-to-right language modeling is more than just a pre-training exercise; it also enables a very practical task: language generation. If you can predict the next word in a sentence, you can predict the word after that, and the next one after that, and pretty soon you have… a lot of words. And if your language modeling is good enough, these words will form meaningful sentences, and the sentences will form coherent paragraphs, and these paragraphs will form, well, just about anything you want. And on February 14, 2019, OpenAI’s language model did get good enough — good enough to write stories of talking unicorns, generate fake news, and write anti-recycling manifestos. It was even given a new name: GPT-2. So what was the secret to GPT-2’s human-like writing abilities? There were no fundamental algorithmic breakthroughs; this was a feat of scaling up. GPT-2 has a whopping 1.5 billion parameters (10X more than the original GPT) and is trained on the text from 8 million websites.
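For readers who want to reproduce this kind of left-to-right generation themselves, a minimal sketch is shown below. It assumes the Hugging Face transformers package and its publicly released "gpt2" checkpoint (the small model discussed in the next section); neither is named in the article itself, and the sampling settings are illustrative rather than the ones OpenAI used.

```python
# A minimal generation sketch. Assumes the Hugging Face "transformers" library
# and its public "gpt2" (small) checkpoint; both are stand-ins chosen for
# illustration, not the exact setup OpenAI used for its demos.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The dog on the ship ran"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Left-to-right generation: predict the next token, append it to the context,
# and repeat -- the "next word, then the word after that" loop described above.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=25,                      # prompt plus a short continuation
        do_sample=True,                     # sample rather than take the argmax
        top_k=40,                           # limit sampling to the 40 likeliest tokens
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Each call to generate simply repeats the predict-next-word step over and over, feeding every new token back in as context.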
How does one make sense of a model with 1.5 billion parameters? Let’s see if visualization can help. Visualizing GPT-2 OpenAI did not release the full GPT-2 model due to concerns about malicious use, but they did release a smaller version equivalent in size to the original GPT (117M parameters), trained on the new, larger dataset. Although not as powerful as the large model, the smaller version still has some language generation chops. Let’s see if visualization can help us better understand this model. Note: Code for creating these visualizations can be found on GitHub. An Illustrative Example Let’s see how the GPT-2 small model finishes this sentence: The dog on the ship ran Here’s what the model generated: The dog on the ship ran off, and the dog was found by the crew. Seems pretty reasonable, right? Now let’s tweak the example slightly by changing dog to motor and see what the model generates: The motor on the ship ran And now the completed sentence: The motor on the ship ran at a speed of about 100 miles per hour. By changing that one word at the beginning of the sentence, we’ve got a completely different outcome. The model seems to understand that the type of running a dog does is completely different from the type that a motor does. How does GPT-2 know to pay such close attention to dog vs. motor, especially since these words occur earlier in the sentence? Well, GPT-2 is based on the Transformer, which is an attention model — it learns to focus attention on the previous words that are the most relevant to the task at hand: predicting the next word in the sentence. Let’s see where GPT-2 focuses attention for “The dog on the ship ran”: The lines, read left-to-right, show where the model pays attention when guessing the next word in the sentence (color intensity represents the attention strength). So, when guessing the next word after ran, the model pays close attention to dog in this case. This makes sense, because knowing who or what is doing the running is crucial to guessing what comes next. In linguistics terminology, the model is focusing on the head of the noun phrase the dog on the ship. There are many other linguistic properties that GPT-2 captures as well, because the above attention pattern is just one of the 144 attention patterns in the model. GPT-2 has 12 layers, each with 12 independent attention mechanisms, called “heads”; the result is 12 x 12 = 144 distinct attention patterns. Here we visualize all of them, highlighting the one we just looked at:
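The interactive pictures themselves come from the visualization code linked on GitHub above, but the raw numbers behind them are easy to inspect. As a rough sketch, again assuming the Hugging Face transformers library (which the article itself does not prescribe), the 144 attention patterns can be pulled out of the small model like this:

```python
# A sketch of extracting the attention weights behind views like the ones above.
# Assumes the Hugging Face "transformers" library and the public "gpt2" small
# checkpoint; the article's own visualizations use the code linked on GitHub.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

inputs = tokenizer("The dog on the ship ran", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One tensor per layer, each shaped [batch, heads, seq_len, seq_len]:
# 12 layers x 12 heads = 144 distinct attention patterns.
attentions = outputs.attentions
print(len(attentions), tuple(attentions[0].shape))

# Where does the final token ("ran") look in layer 0, head 0?  This is one row
# of one of the 144 patterns -- the kind of line drawn in the figures above.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
weights = attentions[0][0, 0, -1]
for tok, w in zip(tokens, weights.tolist()):
    print(f"{tok:>8s}  {w:.3f}")
```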
https://towardsdatascience.com/openai-gpt-2-understanding-language-generation-through-visualization-8252f683b2f8
['Jesse Vig']
2020-10-11 22:38:20.529000+00:00
['Machine Learning', 'NLP', 'Artificial Intelligence', 'Deep Learning', 'Data Visualization']
The Beautiful Consistency of Mathematics — Alexander Yessenin-Volpin
Mathematics is often believed to bring people to madness. We hear many stories like those about Gödel, Cantor, Nash, and Grothendieck, describing geniuses haunted by insanity that develops along with their mathematics. And there is something to it. A certain psychologist said that A paranoid person is irrationally rational. . . . Paranoid thinking is characterized not by illogic, but by a misguided logic, by logic run wild. Mathematics is the paradigm of rationality, and maybe if rationality takes over all aspects of life, we can talk of a mental issue. But this time I want to bring to light an opposite example. This time I want to share a story about a mathematician who was the voice of reason and sanity in a world that had run wild. And one whose mathematics was the model of his approach to social life. Meet Alexander Yessenin-Volpin (1924–2016). Alexander Sergeyevich with his mother in 1928 The Son of a Poet Born in an era of power struggle in the USSR and raised under Stalin’s rule, young Alexander experienced the birth of one of the most oppressive political systems on earth. But this was not obvious from the beginning: many Russian intellectuals strongly supported the Bolshevik ideas of overthrowing the rotten tsarism and bringing power to the people. Volpin’s father, Sergei Yesenin (1895–1925), was surely one of them. As one of the most influential Russian poets of the 20th century, he stood up for the revolutionists. Although he never met his son, the atmosphere of alliance between the Russian intelligentsia and the communist government must have accompanied the growing Alexander. It must have contributed to the shock of discovering how the idyllic idea met real life in Soviet Russia. Volpin recounts in Free Philosophical Tractate, which he wrote when he was two decades old, his “adolescent crisis” of April 1939, when he pledged himself to reason over the mundanely-understood “emotion”. The latter was propagated by the Russian communist ideologues of the era as the antidote to the bourgeois abstract non-materialistic attitude of the anti-Marxist philosophies. Volpin, however, felt the need to free oneself from the ties of down-to-earth pragmatism. In his early writings he repeated again and again that Life is an old prostitute whom I refused to take as my governess. He believed that liberation should emerge through authenticity and precision of language, understood ideally as a mathematically-inspired formalisation of the language of the areas closest to practical and social life: ethics and jurisprudence. Without a language that is transparent and unambiguous we will not be able, he believed, to trust our thoughts. He sought a tool for making the legal language more exact by applying modal calculi to the juristic dictionary. Today we know that various deontic logics turned out to be quite handy instruments in legal theory (but not practice). Indeed, they make legal inferences more transparent, but only on a rather superficial level: problems always arise when it comes to specifying good and life-fitting definitions of “permitted” and “obligatory”, two basic operators in deontic logics, along with the formalisation of other legal terms. His Law Volpin did not give up. He was one of the first initiators of the civil rights movement in the USSR. And his approach was quite exceptional given the system he lived in.
He would explain to anyone who cared to listen a simple but unfamiliar idea [...]: all laws ought to be understood in exactly the way they are written and not as they are interpreted by the government, and the government ought to fulfill those laws to the letter. Yessenin-Volpin in fact praised the 1936 “Stalinist” constitution for the various civic rights it granted. Vladimir Bukovskii, a friend of Volpin and later a dissident who criticized the Soviet abuse of psychiatry for political purposes, recounted that Volpin was the first person in our life who spoke seriously about Soviet laws. [ . . . ] We laughed at him: ‘what kind of laws can there be in this country? Who cares?’ ‘That’s the problem,’ replied Alik, ‘Nobody cares. We ourselves are to blame for not demanding fulfillment of the laws.’ He rebuked Russians for acting as if they had no rights. Surreal as it might sound, it was this “literal” approach of Volpin’s that forced the Soviet authorities to let the political opposition meet at Maiakovsky Square in Moscow to publicly read (it’s obvious which kind of) poetry. And it was Volpin who convinced the court guards to let him into the courtroom during the trial of the writers Bakshtein, Osipov and Kuznetsov by pointing to the applicable paragraphs in the copy of the Soviet Criminal Code he always carried with him. This “concrete” approach to law was a surprisingly effective method of opposition, as it openly demanded that the authorities observe their own laws. But Volpin took consistency and transparency to the next level. He applied the same hard-core concretist reasoning in the most exact of sciences. His Mathematics Yessenin-Volpin believed that the traditional style of doing mathematics is hypocritical in much the same way as the style of handling legal issues in the Soviet Union. He claimed that the unreasonable and careless inclusion of the concept of infinity in mathematical discourse is the culprit that deprives it of the very exactness it was supposed to grant. Therefore he urged a radical revision of the foundations of mathematics, based on the claim that the concept of infinity, both potential and actual, is utterly nonsensical. He repudiated the existence of the infinite and so confined the domain of mathematical objects to finite ones only. Such an approach might ring a bell when we think of the finitists or finitistically-inspired mathematicians like Hilbert or Skolem. But Volpin went much further: recall that Hilbert’s Program did not reject the existence of the “infinitary” part of mathematics, but only strove to found it on the more concrete “finitary” part. Apart from that, Hilbert allowed for what we now call recursive algorithms ranging over infinite domains, whilst for Volpin operations involving them were meaningless. The expression f(n) (for any n) was completely senseless for Volpin, since it involved an unspecified number n, when one cannot be sure whether f is applicable to all numbers or whether what mathematicians call “all numbers” even exists, for that matter. Note that it is not even real numbers and the continuum that we are talking about. Volpin rejected even the idea of the set of natural numbers, hence he called his stance “ultrafinitism”: he assumed that one can operate only on specific numeric symbols expressing finite numbers, and on those alone. And so conventional (especially real) analysis, irrational numbers, calculus, and traditional number theory, along with other fields, get annihilated. To say nothing of topology or set theory.
Such an approach is possibly even more heretical to a mathematician than the idea of allowing assemblies and a free press was to the Soviet authorities. But Volpin did not create it out of mere whim. As in ethics, he struggled for conceptual precision. If he was shown a symbol, he wanted to be given its exact meaning — and not the metaphorical or unspecified “any” or “some”. For, and I believe that we have to grant him at least that, when we talk about transfinite numbers, beginning with ω, we do take their meanings as metaphors of some sort and we do make a leap of faith that one can operate on infinity as if it were a number. Volpin wanted to achieve his required level of exactness by founding the mathematical endeavor on the more concrete and down-to-earth physical world. Hence he even contested the existence of numbers too big to occur in a sensible physical description of the universe. Harvey Friedman in his lectures related that I have seen some ultrafinitists go so far as to challenge the existence of 2¹⁰⁰ as a natural number, in the sense of there being a series of “points” of that length. There is the obvious “draw the line” objection, asking where in 2¹, 2², 2³, … , 2¹⁰⁰ do we stop having “Platonistic reality”? Here this is totally innocent, in that it can be easily be replaced by 100 items (names) separated by commas. I raised just this objection with the (extreme) ultrafinitist Yessenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to start with 2¹ and asked him whether this is “real” or something to that effect. He virtually immediately said yes. Then I asked about 2², and he again said yes, but with a perceptible delay. Then 2³, and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take 2¹⁰⁰ times as long to answer yes to 2¹⁰⁰ then he would to answering 2¹. There is no way that I could get very far with this. This anecdote perfectly captures Volpin’s consistent approach: if 4 is twice as much as 2, we should need twice as much time to realize the “shape” or the four-ness of this number. The tacit idea here is that numbers are not all cognized in the same kind of mental act, but are composed of other numbers, so in order to come to grips with the idea of a bigger number, one first has to grasp the idea of a smaller one. The procedure of answering “yes” to each of Friedman’s questions with the respective delays aptly pictures the ultrafinitistic stance on mathematical reality. The latter is understood as a structure built up from the most concrete “atoms” of mathematics — units. And this is the concretist, anti-metaphorical approach that made Volpin interpret mathematics in this manner. We can imagine him saying ‘Look, here are the “bricks” of mathematics — the starting point of mathematical reflection. One can operate on them in various ways: add and multiply them and do all sorts of operations on them, but without external presumptions about their nature or other metaphysics.’ As in ethics, Volpin wanted to free mathematics from what he believed to be unjustified dogmatism, from which originated all the murky considerations about the infinite. He wrote in 1959 that the fallacy lies in the deceptive dogma that what is useful is also true: We desire some kind of practical result, and we divide the sphere of all possible assumptions into two parts. One corresponds to “yes”; the other to “no”.
We explore reality and also divide the sphere of possible assumptions into two parts corresponding to “yes” and “no”. … We very often forget that these two divisions differ from one another, and as a result we adopt as reality that which is favorable. We can note that such a link recurs in the statements of various Platonists, whether it be G.H. Hardy connecting the beauty of mathematics with its truthfulness, or W.V.O. Quine stating that the usefulness of mathematics in explaining nature necessitates its truthfulness. Yessenin-Volpin dubbed this fallacy ignoratio elenchi (ignorance of refutation) and believed that it was “the intellectual basis for every kind of demagogy.” Yessenin-Volpin’s most renowned work in mathematics may be found in the following proceedings: His Fight Thus we see Alexander Yessenin-Volpin struggling against demagogy in the two most fundamental realms of human intellectual activity, pure and practical reason. His efforts for civic rights in the USSR earned him a number of periods in psychiatric asylums and even a 5-year exile in Siberia. Most interestingly, the official “diagnosis” that put him into the asylum in 1968 was, as Vladimir Bukovskii reported, pathological honesty. Whether being honest with others and oneself could cause a mental issue is a topic for psychiatrists, but it is certain that Volpin, with his independence and simple sincerity, did not fit into the oppressive society he lived in. And he inspired others with his inner freedom: he stood behind the famous Glasnost (transparency) demonstration in 1965, was called the intellectual godfather of the civil rights movement in Russia, and contributed to the awakening of a generation of political dissidents a decade before Solzhenitsyn. When he was incarcerated in the asylum in 1968, 99 Soviet mathematicians sent an open letter to the authorities requesting his release. After the case became international, Volpin was set free. He emigrated to the USA in 1972. Ironically, he was as alienated in the free world for his mathematical ideas as he had been in the Soviet Union for political ones. I believe this says something about traditional mathematics. The upshot is that either Volpin mistakenly interpreted the philosophical and foundational assumptions underpinning mathematical practice, or his thought aptly pictures the intellectual inconsistencies of the so-called free society. It is certainly valuable to study his ultrafinitism in search of misconceptions, whether it be for recovering the philosophical justification for mathematics, or simply for the development of scholarship. But regardless of whether there is some point to Yessenin-Volpin’s heresies, what is exceptional about this figure is his overarching struggle for independence and unity of thought. To me, the story of his life and fight is the realization of a deep message about the abstract and the practical being not so distant from each other. I interpret it as the manifestation of Weininger’s words that Logic and ethics are fundamentally the same, they are no more than duty to oneself. Reading List For more about Yessenin-Volpin’s life see: For a critique of Volpin’s ultrafinitism see: For a discussion of a more contemporary account of ultrafinitism see: http://users.uoa.gr/~apgiannop/zeilberger.pdf
https://medium.com/cantors-paradise/the-beautiful-consistency-of-mathematics-alexander-yessenin-volpin-b3c672f8ce96
['Jan Gronwald']
2020-12-16 10:35:36.846000+00:00
['Math', 'Philosophy', 'Ethics', 'Philosophy Of Mathematics', 'Science']
Sound
Haiku is a form of poetry usually inspired by nature, which embraces simplicity. We invite all poetry lovers to have a go at composing Haiku. Be warned. You could become addicted.
https://medium.com/house-of-haiku/sound-56eaaa4ae06d
['Sean Zhai']
2020-12-15 16:12:41.799000+00:00
['Poetry', 'Environment', 'City Living', 'Civilization', 'Art']
Are You Beautiful Enough…
To wear a bikini? Photo by Lauren Richmond on Unsplash She. Is. Gorgeous. Trendy cut-out bikini, tanned body, laughing with friends in the pool. Every man’s gaze is on her. Beautiful face, thick glossy hair, pert breasts and pretty much everything I want right now. She’s twenty-ish and totally carefree. I’m forty-ish with two kids and things that are not where they once were, including my self-esteem. Don’t get me wrong. I have rebuilt my body from debilitating illness. I love that I’m lean, healthy and strong and, above all, still here. But put me in front of a taut twenty-something and I can’t help feeling a little sad. I am a self-care superstar, dance queen and wannabe yogi, but I’m never going to get my twenty-something body back. And that’s never more apparent than when I’m looking at someone else’s twenty-something body. I really took my twenty-something body for granted. I poured alcohol into it, partied endlessly and then wallowed in self-pity when I became seriously ill. In truth, I didn’t like myself that much, so I didn’t take care of myself. As Oscar Wilde said, Youth is wasted on the young. And I know better than to compare myself to other women, really I do, but as a bio-hacker I find bodies weirdly addictive. Besides, I’m wearing dark glasses and the beautiful young thing in the pool is as oblivious to my scrutiny as she is to her own loveliness. She laughs without lamenting about lines and recklessly tans her face without worrying about age spots. She may live to regret those things but right now she is, as my retired contraceptive-dispensing mother-in-law calls it, “ripe.” I am not withered by any standards but, up against teeny bikini and friends, I don’t feel as plump and juicy as I used to. So I hide behind my holiday read and secretly watch as the bathing beauty eases herself out of the water and saunters past to get a towel. I gasp in surprise. She has cellulite! Shockingly, she does not seem to care! I am busy feeling slightly superior as I have never had cellulite, when Teeny Bikini’s friend, with hair down to her minuscule waist and perfect almond-shaped eyes, skips past. Clearly not worried about slipping, or showing off her stocky calves and thick ankles. What?! My jaw drops as they keep coming. One with plump upper arms I would have trained tighter and one sporting unusually long boobs that are slightly lopsided. No one else seems to notice. Least of all, them. It seems youth has made them impervious to their imperfections. I gulp and wonder when I became so body obsessed. Was it from rebuilding my own? Or is this what all women do? I spend the rest of the day shamelessly staring at women. They are everywhere. The thirty-something latino woman with the perfect hourglass figure and incongruous pot belly. The lean forty-something with her rock-hard runner’s body and wrinkled, outdoorsy face. The fifty-something stunner with the creamiest, smoothest skin I’ve ever seen, balanced out by a huge, wobbly bottom. We are all in a semi-naked state so the assortment of pre- and post-baby bodies, wrinkles, dimples, sagging and surgical scars is endless and mesmerising. I play the trading game — would I give my voluminous breasts for a tighter tummy? Would I cash in my flat bottom for more curve? Would I give my slender thighs for fuller, sensuous lips? I could do all this cosmetically of course. With the right budget and a higher pain threshold I could easily form the Frankenstein version of me. Then I realise there’s a method to nature’s madness.
No woman’s body is truly perfect or without flaw as the media would define it. I see how, against my best intentions, I have been groomed by every poster, TV ad or insta shot that offers only perfection. When maybe we’re only ever meant to be perfect to a point. What if our imperfections are deliberate? What if each thing we don’t like is really an opportunity to accept our humanity? Look at the anxiety and chaos comparison creates on social media. We have a generation of young women all trying to look like Kimmy K or the latest love island loser. We are conditioned to fill, fake and filter to be acceptable insta fodder. This level of judgement undoes our uniqueness, ignores our individuality and smothers our self-expression. The message we all need to hear is: Don’t fit the mould, break it. Maybe it’s the sunshine, but I start looking at every woman and seeing beauty. From what the world would define as beached whale to beach babe. From sumptuous hips to slender ankles. From tiny boobs to tanned painted toes. I feel the feminine in each woman. I witness the beauty of each woman radiating most when she is just being herself. It’s not how you look, it’s how you feel about how you look It’s often said the most beautiful women are those who are happiest in their own skin, regardless of its shape or size. There’s something seriously sexy about self-confidence. This is why the latino with the perfect hourglass and pot belly was being fawned over by a gorgeous younger man. This is why men care less about our looks and more about our essence. There is something gorgeous about a woman who glows with self-love. A woman who does not aim for perfection but simply shows up as herself. In a world where women struggle with self-worth, self-acceptance is a start. We have to love what we’ve been given. We have to work with what we’ve got, because when we step out of self-judgement we open a door to inner strength. Sure, I still have scars and wrinkles, but after staring at semi-naked women I am photoshopping less and loving myself more. My twenty-something body is gone but I finally REALLY like myself and that feels kinda ripe and juicy. We have been taught to look at women’s bodies and judge them instead of love them. We don’t need scalpels and surgery, or the chaos of competition that comes from an outdated beauty benchmark. We just need the self-esteem super pill of knowing we are perfectly imperfect. When we see the beauty in every woman, including ourselves, we start to accept our own radiant deliciousness, regardless of age. We flaunt our femininity and feel divine and that, my friend, is beautiful. I dare you to go look in a mirror and love what you see. I guarantee there’s a gorgeous woman waiting there…and maybe she wants to wear a bikini…
https://medium.com/publishous/are-you-beautiful-enough-caf26f64ba75
['Jacqueline Escolme']
2019-05-07 15:33:11.872000+00:00
['Beauty', 'Body Image', 'Life Lessons', 'Women', 'Health']
How To Use A Business Model Canvas To Create Your Own Assets
2. Entrepreneurial Business Proposal (Option B) In this project, you will imagine yourself as an entrepreneur establishing a business after graduation. You will prepare a focused and creative entrepreneurial business proposal for your enterprise. Think of this as an ‘elevator pitch’ where you present your business idea to potential funders. As you come up with your business proposal, try to identify real problems in society, create innovative solutions, and commercialize those solutions to a target market. You can create a one-person digital business for example. Or you can learn to sell on Amazon or Etsy. You can establish your freelance company and use online platforms to share your online work. You can start a café, a restaurant, a consulting/training company, a technology company, a social media company, and whatnot. The sky is the limit. Try to demonstrate creativity, insight, reflection, and depth. Innovation, integration, and synthesis are critical. Use your best creative skills and talents. What are you really curious and passionate about? Your business proposal should have the following sections (each of these should be very focused/brief): 1. Executive summary: Summarize your business idea as an elevator pitch. Where is your unique value proposition? 2. Business description: What is your core or your “secret sauce” which is not easily duplicated? How do you define your sustainable competitive advantage? How does this business idea relate to your passion and goals? 3. Your product /service: How do you design and build your product or service? How do you develop the idea, technology and passion necessary to get started? Think about your impact and contribution you want to make if this business is successful. 4. Market analysis: What about the competition in the market? How do you differentiate your business in this market? 5. Marketing plan: Who are your customers? (A profile of your targeted customer) How do your customers access/reach your product or service? What is the process for acquiring a targeted customer? 6. Financial/operational plan: What metrics need to be put into place to determine if your product/service is successful (or not) early in the process? How do you make money? How do you scale to widen and diversify the revenue stream? 7. Creative advertisement/poster: Create an A4 poster for your business; which will be your advertisement. Please note that the points above are intended to help you structure and write your business idea, but you do not have to strictly follow this structure or answer all the questions from 1 to 7. It is your business idea, you are the owner and the entrepreneur and it is up to you how to narrate or present it. The applicable word limit for the written part (main body) of this assignment is 1500 words. Please feel free to use concept maps, figures, tables, and visuals as these do not count towards the word count. Appendices (i.e. seminar and lecture evidence materials) are not counted. Indicative Notes: 1) At the heart of this assignment is independent thinking and creativity. We want you to take ownership of your ideas and express them passionately and eloquently. The structure is less important. How you visualize and represent your ideas in an engaging and interesting way is more important. Think of it as an exercise in imagination. We encourage you to play, create stories, dream of possibilities, and incorporate your own strengths and passions. 2) There is no one best way to write up this assignment. There is no one right answer. 
You will need to find and develop your own ‘right’ answer. It is your playground and you are encouraged to play/experiment with crazy, risky, creative, imaginative business ideas. You will need to find your own voice and incorporate that voice into your pitch. 3) This project does not require you to write an advanced business plan with all its functional, operational, and financial details. In this sense, do not think of this assignment as a traditional business plan. It is more like an elevator pitch for your business idea. As we live in a world of information overload, people do not want to read very long business plans or business reports. Try to make it compelling, visual, and creative. Make it interesting to capture attention. 4) Try to illustrate the basics of how you would pitch and initiate your business idea. We do not want you to delve into all the details of your market analysis report or your marketing communications strategies or your projected financial statements for the next five years. Please remember that this module focuses on the big picture, rather than the managerial functions. We want you to conceptualize, design, and integrate your business ideas into an exciting business proposal. 5) Demonstrating evidence of your module learning and engagement with module materials is a critical aspect of this assignment. Therefore, the appendices are very important. You will need to incorporate and apply the toolkits/models/skills we have discussed and learned during the lectures and seminars. Please incorporate these in the form of tables, figures, and visuals. Feel free to apply multiple perspectives, frameworks or toolkits that are relevant (such as business model canvas, design thinking tools, benchmarking, six hats thinking).
https://medium.com/an-idea/how-to-use-a-business-model-canvas-to-create-your-own-assets-ebebc254b2d7
['Fahri Karakas']
2020-11-27 03:56:29.965000+00:00
['Personal Development', 'Art', 'Entrepreneurship', 'Business', 'Self Improvement']
Retro-Review: ‘Friends of Hell’ by Witchfinder General
Believe me, there’s more to this picture, but I’ve got to keep this page PG-13. Witchfinder General’s 1983 album Friends of Hell is for those who just couldn’t get enough of Black Sabbath’s sound from the early to mid 1970s. Their flavor of doom metal, which really wasn’t thought of as a genre then, would satisfy the fans of the dark sounds that Ozzy Osbourne, Tony Iommi, Geezer Butler and Bill Ward churned out from Master of Reality in 1971 to Sabotage in 1975. The dark guitar tones, crushing slow riffs and Ozzy-esque wailing vocals are all there. Plus, they wrote some pretty damn good songs too. Much like Black Sabbath, they took their name from a movie. In their case, it was the 1968 flick Witchfinder General (released under the alternative title The Conqueror Worm in the U.S.) starring Vincent Price in the title role. And much like Black Sabbath, they sang songs about the occult and the darker side of life. Founded in Stourbridge, England, in 1979, they released their first album, Death Penalty, in 1982, quickly followed by their second album, Friends of Hell, in 1983. They would disband in 1984 and reform in 2006, releasing the album Resurrected in 2008, disbanding again that year. Handling vocals on Friends of Hell is Zeeb Parkes, who sings in an Ozzy-esque wail, but doesn’t quite sound like Ozzy. But he didn’t need to sound like Ozzy. His vocals are the driving force on Friends of Hell, helping give them their own sound. Guitarist Phil Cope and bassist Rod Hawkes, on the other hand, could pass for Iommi and Butler to those who aren’t diehard Sabbath fans. They don’t rip off Sabbath’s riffs; Witchfinder General’s music is definitely their own, but they do sound like Sabbath if Sabbath decided to just stay with their early sound. Grounding things is drummer Graham Ditchfield. He’s pretty consistent as far as keeping things nailed down with a steady beat. The album The first three tracks on the album are also the best in my opinion. “Love on Smack” kicks off Friends of Hell in kick-ass fashion. It’s reminiscent of Sabbath in their faster moments. It could easily be mistaken for a Sabbath track if it weren’t for Parkes’ vocals. From the main riff to the solo, it just drips with Sabbath and their blues-influenced heaviness. It’s one of the best tracks on the album and you’ll remember it long after listening. You’ll find yourself saying the closing verses “she’s dying, she’s dying, she’s dying … she’s dead.” “Last Chance,” on the other hand, is pretty much doom metal all the way through, from its heavy and slow opening riffs to its whirling guitars throughout the verses. If you like “Children of the Grave,” then this one is probably right up your alley. After two rather dark, Sabbathy tunes, Witchfinder General kicks into rock anthem gear with “Music.” It’s kind of a shock after the first two tracks, but man, is it catchy. After the first time through the chorus, you’ll find yourself singing “I need music, everyday!” It’s like a fusion of 70s and 80s arena rock with a big sound and fist-pounding bravado. Now, they don’t slouch on the rest of the album, but after those first three songs, it’s difficult to maintain that level. Not that other songs don’t come close. “Friends of Hell” is, lyrically speaking, about as Sabbath as you can get. It’s a narrative song about a protagonist hunting down some devil worshippers who he watches melt by the end of the song. While pretty good, it doesn’t match the songs before it as far as energy goes.
Witchfinder General stays in narrative territory throughout most of the album, with most of the songs telling a story of their own, ranging from one about someone who considers themselves a coward for not being able to kill themselves (“Requiem for Youth”) to another about contemplating suicide because of a lost love (“I Lost You”). Parkes really shines as a vocalist on “Shadowed Images.” You can hear quite a bit of the sound that would go on to become the doom metal we know today intermingle with a lot of Sabbathy riffs here. It’s an enthralling song with an almost ethereal feel to it. “Quietus,” though, comes closest to matching the first three songs on the album. It kicks off like you’re watching elephants walk up a hill, then takes off as the elephants hit the pinnacle and start stampeding downhill. It goes a lot of different places and has a great instrumental section to it. In a way, it reminds you of how Sabbath would meander into almost unrelated instrumental territory at the end of a song. Definitely recommend this one. The verdict Friends of Hell is definitely worth checking out, especially if you’re a fan of doom metal, early Black Sabbath or both. The songs are strong, memorable and you’ll be singing the choruses before you realize it. Also, if you love doom metal, then this is a must-listen. You’ll hear one of the roots from which the modern doom metal movement springs.
https://medium.com/earbusters/retro-review-friends-of-hell-by-witchfinder-genral-b8122c9632f0
['Joseph R. Price']
2018-12-18 01:13:29.884000+00:00
['Doom Metal', 'Witch', 'Black Sabbath', 'England', 'Music']
The Mindset to Deal With Rejection From the Company You Admire
The Mindset to Deal With Rejection From the Company You Admire “Fall down seven times, get up eight.” — Japanese Proverb Photo by CHUTTERSNAP on Unsplash In the last few days, I was a bit restless because I wanted to hear the feedback from the company I had interviewed with. I really wanted to work there and that’s why I was feeling the fear of rejection. Gradually I noticed that that fear was affecting my work at my current workplace, and because I was waiting for feedback from that company, I could not concentrate on my current work or my personal life. All the time, I was eagerly waiting to get positive feedback from that company and for sure that’s a really bad thing. So I was thinking about how I could get rid of that thinking and concentrate on my current job. Then I started studying tons of blogs, articles, and books on how to overcome that situation. So today I am going to share those experiences that helped me to train my mind to get rid of that fear of rejection. We’re all going to get rejected one day. It’s not possible to get every single job you apply for. Even the greatest engineers sometimes get rejected. Rejection doesn’t mean anything, especially when it comes to an interview. It doesn’t go on the resume or in your LinkedIn profile. It just disappears. Rejection doesn’t make someone a bad coder. It just means the candidate wasn’t the right fit for the interviewer or the company. Just think: they rejected me, but maybe I would not enjoy working with them. I would never have gotten my current job if I had not been rejected for the previous one. Rejections are never a predictor of success. If you are rejected, then try to get as much feedback as possible from the company. Tell them that you are trying to constantly improve and that feedback from them is crucial for your improvement. You need to think of a technical interview as an opportunity to learn. One of the best things about the software industry is sharing knowledge. I learned so many things during these exchanges that have helped me grow into a better developer. Actually, we don’t spend nearly as much time throughout our lives applying for a job as doing the job. Getting rejected after trying for one week doesn’t mean anything because maybe you will stay at the place you join next for a long time. So spending some time applying for a job and getting prepared for that job interview is always worth it. Sometimes luck also plays a role in the interview, but that’s not the only thing. So don’t take the rejections personally. Sometimes you just have a bad day. Companies are more likely to say “no” than “yes” because it’s a big risk to hire a new employee. Sometimes they are also torn over whether you are the right person for the job or not. If you fail ten times then just try ten more times. You just need one success, that’s it. It’s not the end of the world if you fail. In spite of all these rejections, you will grow significantly from this experience. Each interview helps you get ready for the next. Eventually, you will receive an offer, but you never would have made it there without all the rejections that came before. The most important part is to make sure that you turn every rejection into a positive learning experience. If you’re learning each day, then your skills are getting better and better. One day you will get the offer for sure. Searching for jobs can be stressful and the rejections you get can be very frustrating.
Use the rejection experiences to grow and become a better developer. The best mindset might be to think about how you can become a better developer by filling the technical gaps once the interview is over.
https://medium.com/better-programming/the-mindset-to-deal-with-rejection-from-the-company-you-admire-a74833c1fa9f
['Ashraful Islam']
2020-12-04 15:53:10.141000+00:00
['Programming', 'Software Development', 'Interview', 'Startup', 'Work']
“Dr. Google” Thinks I Have a Tumor
We have all had this experience of the cancer diagnosis on Google. We can laugh it off, but sometimes these doomsday experiences can make us less likely to believe something is wrong when there is an actual problem. For example, one night I suddenly became quite ill. I was in terrible pain and had a bunch of weird symptoms. I tried a few things that didn’t work and then, alone and freaked out in the middle of the night, I huddled over the laptop and started googling. Soon I realized I was dying. At least according to Dr. Google. I was sad at first, but then I took a deep breath and told myself, “wow this is ridiculous, you are not dying, you are just being a huge hypochondriac”. I got up and went to work, because I’m a 50-something-year-old woman who grew up in the Midwest. Unless there’s a bone sticking out or you’re bleeding from the head, I was raised that you should get your butt to work. I felt a little better with the distraction of work. But later in the morning, I suddenly felt like someone had hit me with a truck. I went from feeling vaguely unwell to “I’m about to pass out and die at my desk” in the course of five minutes. I called the Advice Nurse, who said I needed to get to the ER right away. I argued, because who wants to spend the day in the ER for something that’s probably nothing? Despite the sudden onset of symptoms and how terrible I felt, I continued to work for a little while longer. But it kept on, so I finally left and headed to the Emergency Room for only the second time in my life. While I didn’t have any of the terrible things that Dr. Google or the Advice Nurse suggested, I did actually need care for something. There was an actual problem. I suffered in pain for hours and worried that something terrible was wrong with me because I stubbornly avoided going to the ER. The therapeutic benefits of an IV, pain meds, and a battery of tests to rule out your worst fears cannot be overstated. I say that recognizing my privilege as a person with good health insurance and coming from a group that traditionally receives better healthcare treatment. That experience made me think about catastrophizing versus minimizing. It’s such a balance. You don’t want to be that person who runs to the doctor every other day for every tiny symptom, but you also don’t want to be the person who ignores something serious.
https://medium.com/illumination/dr-google-thinks-i-have-a-tumor-55cf2162602e
['Rose Bak']
2020-10-14 10:41:35.110000+00:00
['Self', 'Health', 'Medical', 'Internet', 'Self Help']
Niramai, early detection of breast abnormalities
NIRAMAI has raised $7M in total. We talk with Dr. Geetha Manjunath, the cofounder, CEO and CTO of NIRAMAI health analytix. She holds a PhD from the Indian Institute of Science and has over 25 years of expertise in IT innovation at Hewlett Packard Labs and Xerox Research. She has proposed and led multiple research projects in artificial intelligence, mobile, and distributed computing. Her research in the above areas has resulted in innovative prototypes, patents, publications, new products, and many national and international recognitions. She cofounded NIRAMAI along with Nidhi Mathur in July 2016, after she saw two of her close family members suffer from breast cancer in their late 30s. PetaCrunch: How would you describe Niramai in a single tweet? Geetha Manjunath: Early detection of breast abnormalities in a completely privacy-aware manner. PC: How did it all start and why? GM: I was a Senior Director in a corporate research lab working on multiple interesting use cases of AI with my team. The trigger to start working on this technology research problem was actually when I lost two of my young cousin sisters to breast cancer due to late detection. When I researched this issue, I found out about thermography, which had the ability to detect abnormalities in all age groups, but had accuracy issues. I created a small team to explore the use of machine learning algorithms to address that gap, and when I started seeing early promising results, I decided to do this full time and founded NIRAMAI with some of my earlier team members Nidhi, Himanshu and Siva. PC: What have you achieved so far? GM: Niramai now has over 30 installations at hospitals and diagnostic centres across 10 Indian cities. We have screened over 12000 women so far. The innovative methods used in NIRAMAI’s solution have led to 9 US patents and 1 in Canada. We have also conducted employee wellness camps with 20+ corporates. We partner with NGOs and cancer societies to conduct free screening camps for the underprivileged. We obtained permission to conduct the screening test in government hospitals in Bangalore as well as a few other states. Most importantly, we now have a bright team of 20+ innovators and an able leadership team. NIRAMAI has won several national and international awards. We have received support from BIRAC, Karnataka Startup and recognition from Accenture, Philips, Google and Amazon as one of the top startups. We won the Best Preventive Insurance Idea award in Milan and the Gold prize in the Hack Osaka 2019 International competition held in Japan. We are very proud to be the only Indian company listed in the 2019 cohort of AI 100 startups in the world by global business data intelligence platform CB Insights. PC: How will you use your recent funding round? GM: Our Series A funding is being used for scaling our operations in India, hiring top talent, and getting the international regulatory approvals. PC: What do you plan to achieve in the next 2–3 years? GM: NIRAMAI is a unique non-invasive, safe solution for early detection of breast abnormalities in women of all age groups. There is a need to create awareness about the need for preventative screening in women. Our single goal in the next few years is to spread this awareness and enable more women to get access to our solution across multiple cities. We would love to partner with corporate employee welfare organizations, insurance providers, and large hospital and diagnostic chains to take it to many urban women.
We would also like to work with community outreach entities, NGOs and cancer societies to conduct population screening in rural areas and enable affordable healthcare for the underprivileged. Our vision is to help every woman get screened early and address any issue, rather than wait for a lump or another symptom.
https://medium.com/petacrunch/niramai-early-detection-of-breast-abnormalities-ce4251d8a8d5
['Kevin Hart']
2019-09-13 14:00:05.349000+00:00
['Health', 'India', 'Healthcare', 'Cancer', 'Breast Cancer']
Vistrates: A Unified Platform for Data Analytics
What is Vistrates and why is it needed? Current tools for data analysis are optimized for specific analytical tasks. Tableau is used for exploration of data and presentation of stories from the data. Adobe’s Data Illustrator is used for creating custom narrative visualizations. Microsoft’s ChartAccent is ideal for annotating charts in a data-driven fashion. Jupyter notebooks enable exploratory analysis through interactive programming. However, to be truly useful, each of these tools needs to be used together by teams of analysts working on different devices (laptops, tablets, or even smartphones). This means when using these tools, the team needs to come to a collective compromise about the protocols, jobs, and results expected to reach the final outcome. Vistrates is built on top of existing web platforms (Codestrates and Webstrates) that promote shareability, distributability, and malleability of applications. It adapts these platforms to support data analysis and promotes unification and interoperability between interfaces, devices, and users. Vistrates, in essence, is built on two ideas: Flexible user interface: The user interface of Vistrates is flexible and can be changed based on the analytical activity in question. During visual exploration, for example, the interface can be turned into a dashboard (an abstraction) to filter and analyze data across multiple attributes. Similarly, during storytelling, the interface can be converted into a slideshow along with speaker notes to enable presentations (Figure 1 — right). A vistrate can even appear as an interactive visualization tool to an end-user. Composable, extensible, and reusable components: Each vistrate is made of analytical components that feed each other and together support a complex activity. A classification method can be an analytical component with a data output representing the class/group of each data point. A bar chart component can take the input from the classification and provide a visual output of the groups. Furthermore, components are editable, reusable, and extensible. What does a Vistrate document look like? A vistrate is simply a web document containing executable code, text, and media. Like Google Docs, vistrate documents are inherently collaborative — you can share a URL with your teammate to work together on a document. Like Jupyter notebooks, a vistrate is an interactive programming environment with code cells integrated with cells of rich text. The building block of a vistrate is a component consisting of a controller (written in Javascript), a state specification (resembles JSON), and a view (in HTML). Each component can get input from and output data to other components along with maintaining the state (such as user interaction) in the state specification. More levels of UI abstractions can be accessed in a Vistrate. For instance, in contrast to a linear layout for interactive programming, a dashboard abstraction can be used to show all component views together in a grid layout. A mobile abstraction level is also present for a small-screen device. The figure below shows these different abstractions for a crime analysis scenario.
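Vistrates components themselves are authored as a JavaScript controller, a JSON-like state specification, and an HTML view, so the snippet below is not Vistrates code. Purely as a conceptual sketch of the dataflow idea described above (components whose outputs feed other components), written here in Python with invented names, the classification-into-bar-chart pipeline might be modeled like this:

```python
# A conceptual sketch only: real Vistrates components are a JavaScript
# controller, a JSON-like state specification, and an HTML view. None of the
# names below belong to the actual Vistrates API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Component:
    name: str
    run: Callable[[list], list]                    # controller: inputs -> outputs
    state: dict = field(default_factory=dict)      # persisted state (e.g. user interaction)
    subscribers: List["Component"] = field(default_factory=list)

    def update(self, data: list) -> None:
        out = self.run(data)
        for sub in self.subscribers:               # outputs feed downstream components
            sub.update(out)

# A "classification" component groups records; a "bar chart" component
# consumes the groups and renders a (textual) view of them.
def classify(rows):
    return [{"label": r["type"], "value": r["count"]} for r in rows]

def bar_chart(groups):
    for g in groups:
        print(f'{g["label"]:<10s} {"#" * g["value"]}')
    return groups

classifier = Component("classifier", classify)
chart = Component("bar chart", bar_chart)
classifier.subscribers.append(chart)

classifier.update([{"type": "theft", "count": 7}, {"type": "fraud", "count": 3}])
```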
https://medium.com/hcil-at-umd/vistrates-a-versatile-tool-for-data-analytics-18c8ec3cfd9b
['Karthik Badam']
2019-04-19 17:37:20.077000+00:00
['Analytics', 'Research', 'Data Science', 'Data Visualization', 'Storytelling']
Why the Music Industry Needs Blockchain
Let’s think about some things blockchain is good for: Processes with numerous middlemen Over-centralized industries Tracking the flow of assets or digital items Creating trust by removing the option of dishonesty One of my favorite explanations of blockchain and its applications is this excerpt from TheConversation. You can find the original article here. Blockchains can be used for a wide variety of applications, such as tracking ownership or the provenance of documents, digital assets, physical assets or voting rights. Blockchain technology was popularized by the Bitcoin digital currency system. But, essentially, a blockchain is just a special kind of database. The Bitcoin blockchain stores cryptographically signed records of financial transfers, but blockchain systems can store any kind of data. Blockchains can also store and run computer code called “smart contracts”. What makes a blockchain system special is that it doesn’t run on just one computer like a regular database. Rather, many distributed processing nodes collaborate to run it. There can be a full copy of the database on every node, and the system encourages all those nodes to establish a consensus about its contents. I’ll refer back to the excerpt throughout this article; it’s important to completely understand the potential solutions if you want to understand the depth of the problem. Now let’s look at the music industry. There are two main areas where I see blockchain disrupting the music industry. Artist Funding Record labels and their A&Rs (artist and repertoire agents) are the gatekeepers to the industry. They decide who gets the capital. Think of record labels as venture capitalists (VC), except they’re investing in higher-risk investment vehicles than any sane VC ever would. The chances of an artist succeeding are extremely slim. Because of this, record labels get to put terms in their artist contracts that are extremely favorable to the label. The typical deal is 70–80% in favor of the record label. These deals tend to come with an advance, anywhere from $50,000 up to millions of dollars depending on the speculative value of the artist in the future. The catch is, that 20% you get from the money you generate as an artist — you don’t see a dime of that until you recoup the advance you’re given. On top of the fees you’re now paying to your label, you have even more people you need to cut in if you want to succeed. You need a manager, you need a PR team, a booking agent, a tour manager — I can go on and on. Some of these expenses are covered by the label but more often than not at least the manager is taking a cut of whatever % the artist is making. After the label’s cut. “Record labels can work both in your favor and against you. All depends on the type of deal you sign, but if you sign the wrong deal you very easily can get lost in the system, lose creative control & always be in debt to your label. Also you can easily get shelved if you don’t compete with the labels top artists, and can become a low priority to the team. But for Kid Buu, that will never happen haha!” — Kid Buu, Hip Hop Artist Independent artists: Kid Buu and Colacino (1/2 of OohdemBeatz) Kid Buu is an example of an artist who is paving his own path, and exactly the kind of artist that can benefit from decentralization. He built up an impressive audience around his persona and music before he started conversations with record labels. He’s bringing something to the table (an audience), and therefore he’s getting more favorable offers.
However, with the size of his fanbase, he’d be able to completely fund his career and only give up a fraction of his royalties if the fundraising process were decentralized. If the risk were spread across his fans, and not resting solely upon record labels, the system would be more balanced.

I’m not saying that record labels are the enemy. There are plenty of cases where record labels help up-and-coming artists capture momentum and create great success in their careers. The only problem is, this is only the case for a small fraction of artists who sign deals with labels. Many of them get “shelved”, meaning their projects are put on the back burner so the label can focus on artists that they see as more promising revenue opportunities.

“We honestly always dreamed of being apart of a major label just to see what it is like, but the honest truth to the music industry nowadays is you have to build it by yourself. You can not expect a label to help develop your career. You basically have to do absolutely everything independent to the point where, by the time a label is even interested in signing you, you basically don’t even need them anymore! This is why we have stayed independent so long. Now that labels and management etc want to get involved after 6 years of blood sweat and tears, we almost don’t even need them anymore. It’s almost like getting a credit card. You need credit to get a credit card, but you can’t get credit without a credit card. Ironic!” — OohdemBeatz, Producers

The low-hanging fruit here would be to ask record labels and royalty distributors to move over to the blockchain, so the artist, and the fans, could see where all the artist’s revenue was going.

There can be a full copy of the database on every node, and the system encourages all those nodes to establish a consensus about its contents.

This would establish trust amongst artists, labels, and fans. If one artist on a label’s roster wasn’t being financially supported as much as they should be, it would be known. But in all honesty, the likelihood of this occurring is, in my opinion, close to zero.

So let’s get back to record deals and these “advances” that entice most artists into bad contracts. The problem with advances is that they’re a catalyst to a form of financial slavery. Most artists never pay off their advances; they live their creative careers constantly paying back the record label they signed with. When and if they ever do pay off their advances, they get the huge reward of 20–30% of the revenue their music generates for the rest of their contract term.

Blockchain can be used to decentralize the process of fundraising for artists. If you distribute the risk amongst, let’s say, 100 stakeholders as opposed to 1 (the record label), then the artist could potentially see a more favorable royalty split. The centralization of capital sources in the music industry also means that record labels essentially get to dictate what music is “good” music. Unless an artist makes it independently (which is becoming more common, but still rare), they were at one point signed off on by a record label. Decentralizing the industry to allow fans and music enthusiasts to decide which artists receive funding democratizes the process of discovering new talent.

Royalty Distribution

Royalty distribution in its current form is a mess. There are several companies that handle digital distribution and royalty distribution, but the list of individuals that need to be paid out on most professionally made songs is extensive.
For example, check out the credits for Drake’s song “Glow”.

“Glow” f/ Kanye West

Preliminary Publisher Information: Sandra Gale/EMI Pop Music Publishing (GMR), Please Gimme My Publishing/EMI Blackwood Music Inc. (BMI), Mavor & Moses Inc./Kobalt (ASCAP), EMI April Music Inc. (ASCAP), Downtown DMP Songs (BMI), Sony/ATV Songs LLC (BMI), WB Music Corp. (ASCAP), Kobalt Music (ASCAP)

Written by A. Graham, K. West, N. Shebib, L. King Jr, M. Yusef, J. Sakiya Sandifer, N. Goldstein, Phillip Bailey, Maurice White, Aubrey Graham, Carlo Montagnese, Majid Al Maskati, Gabriel Garzón-Montano, Anthony Jeffries, Ilsey Juber, Kenza Samir, Noah Shebib, Jordan Ullman, C. Young

Sample Credits: Contains samples from “Devotion” written by Phillip Bailey and Maurice White published by EMI April Music Inc. (ASCAP). Used by permission. All rights reserved. Excerpts from “Devotion” performed by Earth Wind & Fire courtesy of Sony Music Entertainment. Contains excerpts from “6 8” performed by Gabriel Garson-Montano, Courtesy of Styles Upon Styles, Inc. Used by Permission. Contains excerpts from “Jungle” performed by Drake courtesy of Universal Music Enterprises

Produced by Noah “40” Shebib for Mavor Moses Inc. and Kanye West/Additional Production by Noah Goldstein

Recorded by Noel Cadastre, Noah Shebib & Harley Arsenault for Evdon Music Inc. & Noah Goldstein at SOTA Studios, Toronto, CA, Park Hyatt, Paris & No Name Studios, CA

Mixed by Noel “Gadget” Campbell for Evdon Music Inc. / T.O. Music Group at SOTA Studios & Studio 306, Toronto, ON

Kanye West appears courtesy of Getting Out Our Dreams II, LLC

This isn’t even the longest credit list I’ve seen; if you have the time, Google the credits for the song “Fade” by Kanye West. The moral of the story is, the majority of the people in that list have to be paid out royalties, and I’ve heard many cases (and experienced firsthand) where producers or songwriters don’t get paid out at all.

Think about cases where an artist is under the radar for years before starting to build a lot of momentum. In these situations, there can be hundreds of songs the artist has already released independently. All the producers and features that artist worked with deserve royalties when the artist “makes it”. Yet, it’s not uncommon for the creatives who contributed to the start of an artist to be completely forgotten when record labels get involved. They don’t see royalties, recognition, some don’t even get mentioned in credit lists. The process of tracking and distributing royalties is far from transparent; whoever uploads the song essentially controls who gets paid as it generates revenue.

Blockchains can be used for a wide variety of applications, such as tracking ownership or the provenance of documents, digital assets, physical assets or voting rights.

Blockchain can be used to create an immutable system that’s completely transparent, so if someone isn’t getting paid, they’ll know, and be able to address it. If an artist was able to tokenize each project they launched, and offer tokenized royalty shares, royalties could be distributed to each person contributing to the album automatically. Further, this would also allow the decentralized funding we were just talking about. An artist can issue 100 tokens, each token equaling 1% of the royalties generated from their album. The tokens can be distributed to everyone who contributed to the album, a portion can be set aside for the artist as their share of the profits, and the rest can be sold off to fans in order to raise funds to produce the album.
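To make the mechanics concrete, here is a minimal Python sketch of that 100-token royalty split. It illustrates the accounting logic only, not a real smart contract; all names and numbers are hypothetical.

import json

# Hypothetical token ledger: who holds how many of the 100 album tokens.
token_holders = {
    "producer": 15,   # tokens given to contributors
    "feature": 10,
    "artist": 40,     # the artist's reserved share of the profits
    "fans": 35,       # tokens sold to raise production funds
}
assert sum(token_holders.values()) == 100  # each token = 1% of album royalties

def distribute(revenue):
    """Split incoming royalty revenue pro-rata across token holders."""
    return {holder: revenue * tokens / 100 for holder, tokens in token_holders.items()}

print(json.dumps(distribute(10_000.00), indent=2))
# {"producer": 1500.0, "feature": 1000.0, "artist": 4000.0, "fans": 3500.0}

On a blockchain, the ledger and the distribute step would live in a smart contract, so every payout is visible and no single uploader controls who gets paid.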
I don’t think blockchain can disrupt the music industry; I think it needs to disrupt the music industry.
https://medium.com/hackernoon/why-the-music-industry-needs-blockchain-55aa41e16516
['Reza Jafery']
2018-08-07 23:21:21.395000+00:00
['Blockchain', 'Blockchain Music', 'Cryptocurrency', 'Bitcoin', 'Music']
The Importance of Refactoring
The Importance of Refactoring

Nobody likes it, everyone should do it

After spending the past two days refactoring a major system in my game, I was reminded of the importance of refactoring code. A lot of programmers don’t like refactoring, and I get it. You have a perfectly functional piece of code and you willingly break it, hoping that it turns out better after you are done. It goes against the “never change a running system” mentality a lot of people have, myself included. I always get a bit anxious when I am about to refactor a bigger code section because no matter how good your plan is, there are always problems you only realize after you are halfway done with the changes. But in most cases, refactoring your code is worth it. It will be better structured, less complex, and hopefully more efficient.

Photo by Markus Spiske on Unsplash

My history with refactoring

When I first started programming I never refactored any code at all. I always preferred to start over and make the code better the second time around. I actually started over so often that it became a bit of a meme between me and a friend who was also learning programming at that time. While starting from scratch may have been a valid strategy in the beginning, as my number of libraries, classes, and lines of code grew, it became a lot less viable to do so.

When you are just learning to code, refactoring isn’t an issue, because your codebase is small and you can easily keep the entire scope in your mind. Even in university, hardly any project was big enough to require any real refactoring, so despite being taught the concept, I only learned how to refactor once I got to work on bigger programs. At this point, working on a game and my engine, refactoring is necessary to keep the code maintainable. Obviously, perfect code doesn’t exist and there are always ways to improve the code quality, but I have been getting better at refactoring regularly.

When to refactor?

I’m sure there are people, even college professors, out there telling you to refactor after every little bit of code you’ve added. But I don’t think that’s necessarily true. In most cases, you can keep adding new features and functionality for a while before you need to think about refactoring. However, the time to refactor is a double-edged sword. If you wait too long and add too many features, the code gets messier and messier and will be a lot harder to refactor once you get around to it. If you refactor too often, your code will be very well structured and clean, but you spend more time refactoring than actually making progress. As with most things in life, you need to find a balance.

Especially when you are developing something completely new and you are not sure about the best way to do it, you try different solutions and the code ends up being a mess. I generally focus on getting the code working correctly before I worry too much about the structure. However, I have been getting better at spending some time refactoring and polishing the code after I am done with a feature, before I continue to the next new and shiny thing, while all the details are still fresh in my head.
https://medium.com/swlh/the-importance-of-refactoring-c2d5c259332d
['Christian Behler']
2020-06-05 08:43:16.799000+00:00
['Programming Tips', 'Programming', 'Software Engineering', 'Software Development', 'Refactoring']
Yes, Singleton Can Be Better
Recently, at my university, I was taking a course on design patterns where we were required to use design patterns in our homework and projects. I had been using the Singleton pattern with the standard implementation, where I defined every type that I wanted to use, but then I saw an interesting implementation from another student. As a software enthusiast I always try to find ways to make a piece of code more effective, so I started to use his approach.

“In software engineering, the singleton pattern is a design pattern that restricts the instantiation of a class to one object. This is useful when exactly one object is needed to coordinate actions across the system.” according to Wikipedia.

There are some discussions about how Singleton is bad and has earned the nickname “anti-pattern”, but I will not enter that discussion in this article; I will just show this approach. In this approach we will define just one HashMap with Object as its value type; this way we can put any object that we need without worrying about defining a new variable in our singleton class.

Implementation

First we need to create a class Registry and define a protected constructor. Note that the lazy initialization check below is wrapped in a synchronized block (double-checked locking); with the volatile field alone, two threads could otherwise create two instances.

import java.util.HashMap;

public class Registry {

    private static volatile Registry instance = null;

    protected Registry() {
    }

    public static Registry getInstance() {
        if (instance == null) {                 // fast path: no locking once initialized
            synchronized (Registry.class) {
                if (instance == null) {         // second check prevents double instantiation
                    instance = new Registry();
                }
            }
        }
        return instance;
    }
}

As I said before, we will not define every variable; we will have just one HashMap, so we will add that to our Registry class and put its initialization in our constructor.

    private final HashMap<String, Object> items;

    protected Registry() {
        items = new HashMap<>();
    }

Also, we need setters and getters for our data in the Singleton; we will do that through these two methods.

    public void set(String key, Object value) {
        this.items.put(key, value);
    }

    public Object get(String key) {
        return this.items.get(key);
    }

You should use it in this way:

    // Store any object under a string key...
    Registry.getInstance().set("model.cat", new Cat("Lion", 1));

    // ...and check the type before casting it back when you read it out.
    Object object = Registry.getInstance().get("model.cat");
    if (object instanceof Cat) {
        System.out.println(((Cat) object).getName());
    }

That’s it, we are done; we have our Singleton and we can start working with it! The full source code is released on my Github profile. If you are interested in the full implementation (example included), check out RegistryExample.
https://medium.com/it-works-locally/yes-singleton-can-be-better-ed2e181f9289
['Alen Huskanović']
2016-11-01 12:54:30.721000+00:00
['Android', 'Software Development', 'Design Patterns']
Build Intuition for the Fourier Transform
Build Intuition for the Fourier Transform

A Magical Algorithm for Convolution and Signal Processing

The Fourier Transform and its cousins (the Fourier Series, the Discrete Fourier Transform, and the Spherical Harmonics) are powerful tools that we use in computing and to understand the world around us. The Discrete Fourier Transform (DFT) is used in the convolution operation underlying computer vision and (with modifications) in audio-signal processing, while the Spherical Harmonics give the structure of the Periodic Table of the Elements. While it’s easy enough to write down a formula, there is a broader view revealing that each arises from simple considerations about symmetry. We will pursue that view here. As always, I will strive to avoid unnecessary technical jargon and complications. If you’re interested in them, try the footnotes. There will, however, be some complex numbers: you’ve been warned.

The Fourier Series

The Fourier Series is the oldest of the bunch and was originally studied by a Frenchman, Joseph Fourier. Fourier, a friend of Napoleon’s, wrote about it in the 1820s while studying heat flow, shortly before he discovered the Greenhouse Effect we spend so much time worrying about today.

The natural setup for the Fourier Series is complex-valued functions on a circle.¹ As you will recall, a function is something that takes an input and gives an output. In this case, the output will be a complex number (we denote the complex numbers as ℂ). The input will be a point on the circle. Note that we aren’t including the interior of the circle, just its boundary. We denote the circle as S¹. We will think about this circle as having radius 1 if it comes up. We will also require our functions to be “nice.” They won’t try to do mean things like jump around (be discontinuous), go to infinity, be undefined, or call you rude names.

Symmetry

Symmetry is one of those mystical things people love to talk about. For example, research has consistently shown that if your face is symmetric, people find you more attractive. It also shows up in physics: the best theory of the world we currently have can cryptically be described as the symmetry SU(3)×SU(2)×U(1), whatever that means. But when we talk about symmetry, we mean something laughably simple: if you rotate a function on a circle, you get a function on a circle.

Now, it’s a little tricky to draw a function on a circle. So please recall that if you take a piece of string and connect the two ends, you can make a circle. With this in mind, we can graph a function on a circle by graphing it on a line segment; the fact that the left and right ends have to be connected you have to visualize in your head. The figure below shows a simple function on a circle (left). To rotate it, we just “shift” with the proviso that whatever goes off the right end comes back on the left. In this case, the rotation is by π/2 radians (which is 90°). The input to the function is just the angle around the circle (going clockwise starting from the point all the way on the right, say).

A Function on a Circle (left) and its rotation by 90° (right)

Another way to think about this is, if you have a function on a circle, you can just rotate the circle (because the circle is, tautologically, circularly symmetric).² Hopefully you are convinced so far that there is no particular wizardry going on yet. All we said was that if you rotate the function (or the circle), you get another perfectly good function.
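If you like to see ideas in code, here is a minimal numpy sketch (my own illustration, not from the article) of “rotation is a shift with wraparound” for a function sampled around the circle:

import numpy as np

# Sample a function at 360 points around the circle (one per degree).
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
f = np.cos(2 * theta)

# Rotating the circle by 90 degrees is a cyclic shift of the samples:
# whatever falls off the right end comes back on the left.
f_rotated = np.roll(f, 90)

# The shifted samples match the function evaluated at the rotated angles.
print(np.allclose(f_rotated, np.cos(2 * (theta - np.pi / 2))))  # True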
Representation Theory

The study of the fact that you can rotate a function on a circle to get another function is called representation theory by mathematicians, and it involves one extra step that opens up a whole world of understanding. The extra step isn’t too complicated in itself. In the previous setup, when you wanted to rotate the circle, you just rotated it. Now we will add an intermediary. I want you to imagine that you are going to give instructions to rotate the circle and I am going to interpret them and do the rotation. For example, you might say “rotate the circle 73°” and I will rotate the circle 73°.

Now, I never said I have to do exactly what you said. There are some rules that apply to what I do, and we’re going to explore them by example. If you say “rotate the circle 360°” I am not going to do anything. Similarly, if you say “rotate the circle 370°”, I will only rotate it 10°. Why? Well, hopefully you see that if I rotate 360°, we are just going to get back to where we started. You could object and say “no no I really wanted to see it spin around, it’s fun” but actually you should be picturing all of these rotations as happening instantaneously. We are not interested in the process of rotating; we’re interested in the result.

Now imagine that when you say “rotate the circle 10°” I instead rotate it 15°. Likewise, if you say 20°, I will rotate it 30°! In general I rotate by 1.5 times what you ask. This may seem all well and good (I never did make any promises) but there is a problem. If you say “rotate the circle 360°” you are, in fact, entitled to the assumption that nothing will change. So if I were to follow this “150% effort overachiever” strategy, I would be breaking the rules because I would rotate 180° (the same as doing 360×1.5 = 540 = 360 + 180). It is a rule that, whatever number of degrees you say, I have to do some fixed multiple.

So what multiples are allowed? 1 is obviously fine. So is 2! If you ask for 360°, you’ll get no rotation (720°), exactly what you should expect. So are 3, 4, 5, etc. Also 0, -1, -2, etc. These are in fact the only options.

It would be quite a diversion to go deep into the details of where these rules come from. But very briefly, the “rotations of a circle” come with a certain mathematical structure (called a group). Roughly, the structure is the obvious fact that rotating by 1° and then again by 1° is the same as rotating by 2°. Plus the fact that 360° is the same as 0°, which does nothing. The rules:

1. I have to rotate by a constant multiple of what you say
2. If you say 360°, I have to end up doing nothing

follow from the requirement that we preserve this structure and in turn imply that I can only multiply by an integer. This consequence is called quantization and is precisely what is meant by the “quantum” in quantum mechanics. In fact, it turns out that the reason that electric charges come in discrete (quantized) amounts follows directly from the argument we have just made. For this reason, the integer multiple I use is sometimes called charge!

To recap, recall that we have:

- You, asking for a certain number of degrees of rotation. Let’s call the collection of possible rotations G (for “Group”).
- Me, interpreting those instructions. In a sense, I am a function that takes an input (what you tell me) and gives an output (what to do). We’ll call this ρ.
- Functions, which are being acted on.
We’ll call the collection of all the nice-enough functions V (for “Vector” because functions are vectors: they can be added, multiplied by a constant, and there is a 0 function).

Taken all together, these three mathematical objects form a representation.³ I imagine it is so called because I am “representing” your abstract instructions with a concrete action.

Irreducible Representations

This is where the magic happens. In many contexts (like the one here), the thing being acted on (the functions) can be decomposed into constituent parts.⁴ Roughly what’s going to happen is:

1. A function could be really complicated.
2. Imagine we have a bunch of buckets labeled …, -3, -2, -1, 0, 1, 2, 3, 4, etc. and inside each bucket is a very simple function like cos(2θ).
3. Every function can be made by adding up components, 1 from each bucket.
4. If we have done this, we have broken down a complicated thing into the sum of a number of simple parts.⁵

This is good. If we do this, we win. Now, proving all of this is going to work is beyond the scope of this article. If you are interested, it’s often the first thing covered in any textbook on representation theory and it isn’t crazy hard or anything. Instead, we are just going to introduce a sleight of hand and then describe the buckets.

Rotation is Multiplication

Consider the function f(θ) = e^{inθ}, and look at what happens when we rotate it by an angle a. As in the previous diagram, rotation is the same as shifting the argument to the function by a:

f(θ − a) = e^{in(θ − a)} = e^{−ina} · e^{inθ} = e^{−ina} · f(θ)

i is the imaginary unit, the square root of -1 (don’t worry about it for now). Everything else follows from the exponent rules you learned in math class. Rotation is multiplication: rotating f by a multiplies it by the constant e^{−ina}.

It turns out (by Schur’s lemma) that the functions in our buckets have to satisfy the property that rotating them is the same as multiplying them. And the functions e^{inθ}, where n is any integer, are exactly the functions that do satisfy this property. Therefore, we can write any function on the circle as a sum of such functions:

f(θ) = Σₙ aₙ e^{inθ}   (the sum running over all integers n)

where the coefficients aₙ just tell us how much of each function to take. This decomposition is called the Fourier Series. If you don’t like complex numbers as much, you may be more comfortable using the fact that

e^{inθ} = cos(nθ) + i·sin(nθ)

which means that we can write every function f on the circle in the form

f(θ) = Σₙ (aₙ cos(nθ) + bₙ sin(nθ))   (the trigonometric version of the Fourier Series)

and the complex numbers are hiding in the coefficients a and b if they are necessary. (Note: we combined terms like n=2 and n=-2 so we only sum over non-negative n. And for n=0, we only really need one of these terms.)

Finally, note that we might need infinitely many terms whereas in practice we only get finitely many. The animation below shows what happens as we take more and more terms. The blue lines are a “sawtooth wave” which you can think about as a discontinuous function on the circle (it jumps every 360°). The red curve is the Fourier Series approximation with 1, 2, 3, etc. terms.

The Cousins

Let’s recap what we did. We realized that if you rotate a circle you get a circle back (the symmetry). We expanded this so that we worked with functions on the circle and allowed our rotation instructions to be modified (the representation). We learned that the functions can be decomposed into simpler parts (the irreducible representations) and that the simpler parts had to satisfy a simple property that rotation is multiplication (Schur’s lemma). With this we could write down those simple parts.
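Before leaving the circle for its cousins, here is a minimal numpy sketch (my own, with this article’s normalization conventions assumed rather than guaranteed) of those partial sums. It estimates the coefficients aₙ for a sawtooth wave and watches the approximation improve as more buckets are included:

import numpy as np

# Sample the circle and define a sawtooth wave on it (it jumps once per revolution).
theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
f = theta / (2 * np.pi)

# a_n = (1/2π) ∫ f(θ) e^{-inθ} dθ, approximated as a mean over the samples.
def coefficient(n):
    return np.mean(f * np.exp(-1j * n * theta))

# Partial sum over the buckets n = -N, ..., N.
def partial_sum(N):
    return sum(coefficient(n) * np.exp(1j * n * theta) for n in range(-N, N + 1)).real

for N in (1, 5, 50):
    rms_error = np.sqrt(np.mean((partial_sum(N) - f) ** 2))
    print(f"N = {N:3d}   RMS error = {rms_error:.4f}")
# The error shrinks as N grows, apart from the stubborn wiggle right at the jump.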
To recover everything else, all we need to do is consider functions on X where X is something other than the circle.

The Fourier Transform

Okay, let’s make X into ℝ, the real numbers. The symmetry is that you can translate the real numbers (add 5, say) and end up with what you started with. Or stated with functions, a regular function that takes as input a real number and has as output a real number can be translated and you get back another perfectly good function.⁶ Formally, this is similar to what we did on a circle where rotation also looked like a shift. Unlike before, however, the result will not be quantized. Instead of having discrete buckets, we will have one bucket for every real number. To add up a continuous set of things, we need to use an integral instead of a summation; but conceptually this is the same as with the circle.

- Change the variable from θ to x
- Change the sum to an integral
- Change the bucket index from n to p (p stands for “momentum”)
- Change the coefficients a so that there is one for every bucket-index p

f(x) = ∫ a(p) e^{ipx} dp   (the Fourier Transform decomposes a function)

Note this is not the formula you will see on Wikipedia for a couple of reasons. First, the Fourier transform is usually written down with a slightly different normalization (we need a 2π term somewhere to make some later formulas nice). Second, it is the inverse transform: the transform computes the coefficients a(p), and the inverse transform is how we write down the decomposition of the function f.

The Discrete Fourier Transform

Now let’s change the underlying space to be a “discrete” version of the circle. Let’s say there are N points equally spaced around a circle. The symmetry is that you can rotate by 360/N degrees or any integer multiple of it. A “function” on the “discrete circle” is just a list of N numbers, 1 for each point. Rotating the function is just shifting the list of N numbers (and moving anything that goes off the edge back to the other side). Now there will only be N buckets. The smallest rotation is by an angle of 2π/N radians. The bucket index changes from p to k, where k is an integer, one of 0, 1, 2, …, N-1.

f_j = Σₖ aₖ e^{2πi·jk/N}   (the Discrete Fourier Decomposition, summing k from 0 to N-1)

Again, slightly different than what you will see on Wikipedia for the same reasons.

The Discrete-Time Fourier Transform

Now let’s change the underlying space to be a discrete version of the line (the real numbers). So we get the integers. The symmetry is shifting by any integer. A “function” is an infinite list of numbers, one for every integer. This time the buckets are indexed by points on a circle, so the decomposition is an integral over an angle:

f_n = ∫ a(θ) e^{inθ} dθ   (the Discrete-Time Fourier Decomposition, integrating θ around the circle)

Again, we aren’t being careful with our 2π normalizations.

The Spherical Harmonics

What if the underlying space is a sphere? By sphere, I mean the boundary and exclude the interior. This space is denoted S². The symmetry is any rotation of 3-dimensional space. Because this symmetry group is a little more complicated, the number of functions in each “bucket” is not always one. The result is called the spherical harmonics and you learned about them in chemistry class. The size of these buckets also gives the structure of the periodic table. This is pretty amazing: just by trying to do a Fourier Transform on a sphere, you can basically figure out chemistry. (There are some more details: everything gets doubled because electrons can be spin up or spin down.)

I will briefly try to describe the buckets and their relation to the periodic table. The first bucket, called “s”, has only one function in it.
Double that and you get 2, the number of elements in the first row of the periodic table (the 1s orbital). The second bucket is called “p” and has three functions in it. Double that and you get 6 electrons that can go in the 2p orbital (alongside the 2 in the 2s orbital). The third bucket is called “d” and has 5 functions in it. Double that and you get 10, the number of electrons that can fit in the 3d orbital (Sc–Zn, the blue elements in the middle below). The fourth bucket is called “f” and has 7 functions in it. Double that and you get 14, the number of electrons that can go in the 4f orbital (pink La–Yb below, and excluding the last one, Lu, which should properly be colored blue). The fifth bucket is called “g” and has 9 functions in it. Double that and you get 18, the number of electrons that could go in a g-orbital if the atom were stable enough to exist. And so on (1, 3, 5, 7, 9, 11, etc.).

Periodic Table (E. Generalic, periodni)

To spell it out, the quantum state of an atom is like a function on a sphere (the radial part factors out and accounts for the 1s, 2s 2p, 3s 3p, 4s 4p 4d part of the structure). If you rotate that state you get another perfectly good state (you’d better!). So the representation theory applies and we can do our “Fourier” analysis.

Conclusion

We’ve seen that the Fourier transform and its cousins can be viewed as doing the same process to a function, with slightly different results depending on the data that make up the function. The table below captures the 5 versions we described.

Space         Symmetry              Buckets                        Name
Circle S¹     Rotations             Integers n                     Fourier Series
Line ℝ        Translations          Real numbers p                 Fourier Transform
N points      Rotations by 2π/N     Integers 0, …, N-1             Discrete Fourier Transform
Integers ℤ    Integer shifts        Angles around a circle         Discrete-Time Fourier Transform
Sphere S²     3D rotations          Buckets of sizes 1, 3, 5, …    Spherical Harmonics

And of course you can do it on any shape that has a symmetry. For example, you could work out Fourier analysis on a function that is defined on a plane (2-dimensional real numbers) or for an image (which is a finite 2d array of numbers that we can take to loop back on itself, forming a discrete torus).

We’ve also seen just how powerful this tool is. It gives us the periodic table of the elements (essentially our whole world). In computer vision, it turns out that it’s faster to do a (discrete) Fourier transform, multiply the results pointwise, and then inverse transform than to compute the convolution naively. In audio signal processing, the Fourier transform and its variants (like the wavelet transform) are powerful tools for feature extraction. Tune in next time for applications.
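As a quick appendix, here is a minimal numpy sketch (mine, not the author’s) of that convolution speed claim, comparing a naive circular convolution against the FFT route:

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1024)
b = rng.standard_normal(1024)
N = len(a)

# Naive circular convolution: an O(N^2) double loop, one sum per shift.
naive = np.array([sum(a[j] * b[(k - j) % N] for j in range(N)) for k in range(N)])

# FFT route: transform, multiply pointwise, inverse transform. O(N log N).
fast = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

print(np.allclose(naive, fast))  # True: identical result, far fewer operations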
https://towardsdatascience.com/build-intuition-for-the-fourier-transform-b0bd338c6d4f
['Ravi Charan']
2020-06-28 05:30:11.022000+00:00
['Computer Vision', 'Mathematics', 'Data Science', 'Chemistry', 'Machine Learning']
Americans Love to Suffer, and That’s Why Nothing Ever Changes
Americans Love to Suffer, and That’s Why Nothing Ever Changes

Requiem for the American Dream.

Photo by Allef Vinicius on Unsplash

My uncle lives out in the woods, on a hundred acres my family stole generations ago. There’s a cemetery out there I’ve never been to, but it’s where most of my ancestors are buried. He doesn’t care about the pandemic at all. It doesn’t affect him. He’s been alone for years, and doesn’t care to be around people. He came back from the Vietnam War broken, and stayed that way.

That’s the America I know. You’re supposed to suffer. It’s noble.

The rest of the world looks at us and scratches their heads. They don’t get us. They think something went horribly wrong, but the truth is simpler than that. We were flawed from the start. This pandemic, and all the chaos that comes with it, is a fulfillment of our destiny. We could change it, but do we want to?
https://medium.com/the-apeiron-blog/americans-love-to-suffer-and-thats-why-nothing-ever-changes-46e8d29c73e7
['Jessica Wildfire']
2020-12-21 17:50:51.937000+00:00
['Politics', 'Society', 'Opinion', 'Culture', 'Economics']
Patient Empowerment is a Dangerous Euphemism
Photo by rawpixel on Unsplash

Patient Empowerment is a Dangerous Euphemism

How to end the patient-physician power struggle and become a world-class patient.

It gnaws at you like a rodent, the nagging suspicion that your physician has missed something. The physical exam was rushed and the interview was abruptly halted. Your anxiety grows as your physician retreats behind the computer and its unintelligible charts and graphs. No one knows your body and how you’re feeling like you do. You’ve done your homework. You know there’s more to these symptoms than what’s been discussed, and the consequences are dire if they’re misdiagnosed. Such is the feeling of disempowerment. And if you’re advocating for your child or a loved one, these concerns can boil over to a full-blown panic.

No question, you have the moral high ground to do something about it. But I want to explain why you shouldn’t, at least not in the way these matters are typically negotiated.

Let me surface my bias immediately. My wife Deana and I have many inspiring, heroic, and yes, haunting memories of the last 16 years, advocating for the needs of our son. It’s hard to speak of the fragility of these things in academic terms when you’ve seen your child’s heart beating through an open cavity in his chest. In a very literal and serious way, our lives are in the hands of our healthcare professionals.

With that said, I won’t argue some weak-kneed deference to power. Quite the opposite, really. I’m serious when I say patients will lead, and I’ll close with some suggestions for how to do that. But if we want to get good at this, to be world-class patients, we need to understand this issue deeply. We need to unpack what empowerment really means, and more importantly, what it entails. I’ve learned that patient empowerment is a dangerous euphemism, masking a range of issues and competing priorities. It advances arguments for good behaviour, when what patients need most is good knowledge. There’s a deep tension between authority and knowledge. Ideally, both you and your physician are subordinate to the pursuit of good knowledge. The villain here isn’t the turf-protecting physician or the self-aggrandizing patient, it’s the prioritization of authority. Against the life and death reality of your health, knowledge rules.

What does patient empowerment really mean?

There are many definitions of patient empowerment in the medical literature. Commonly, it focuses on autonomy, the freedom from coercion and the independence to make your own decisions. It has both relational and psychological aspects: Empowerment is about the patient-physician relationship and how it makes you feel. So what could be wrong with that? Nothing, when examined in isolation. But here, in the swirling waters of relationships and feelings, any literal interpretation of patient empowerment drowns.

Patient empowerment entails a shifting of authority.

Critically, autonomy isn’t just about you. Your autonomy and freedom only make sense when evaluated from the perspective of where that power currently resides. The Pulitzer prize-winning author and physician Siddhartha Mukherjee pulled no punches about the fundamental relationship between a patient and a doctor. “Even today, the power lies in one direction: the doctor knows that the patients are there to try to seek help.” Physicians have always held the power, and if this is changing, it’s changing slowly.
Mukherjee recognizes that his mere acknowledgement of this reality will raise hackles, and it says nothing about how the power dynamic should change in the future. But like it or not, patient empowerment needs to be understood as a shifting of authority.

Perhaps power can be shared.

You’re happy to cede authority to your physician in medical matters. But you also understand the important experiential and personal aspects that demand your input. And your claims here are unimpeachable. Unfortunately, this perspective is insufficient to support the mandate of patient empowerment, to enable you to make autonomous decisions about your treatments. Where is the boundary between your symptoms, their underlying cause, and your treatment? It’s everywhere at once, your experience and the objective reality of it interwoven like a quilt. Laura Nimmon and Terese Stenfors-Hayes discovered that for experienced physicians, “power sharing” is rhetorical. Power is handled by the physicians, something they inherently own and use.

People will rightly point out that medicine routinely weighs decisions where there are no universally correct answers. Treatments create a distribution of effects, some good, some bad. Here, proponents of empowerment argue that patients need all the relevant information about these compromises and trade-offs to make an informed decision. The challenge is that weighing medical decisions is itself hard-won expertise. There’s a great deal of work in decision theory and decision-support systems directed to the problem of how highly trained physicians can make good decisions. In this, medical decision-making is better understood as a means of production for creating good knowledge. And once produced, good knowledge often leaves little room for subsequent decisions.

Consider the case of a sick child. Assuming the parents only want their child to live a long, happy and healthy life, what choice is afforded them? Good knowledge leading to safety or bad knowledge leading to death? If your child’s life is on the line, would you still cherish “your right to make the wrong decision”? There are rare exceptions, and I’ll speak to these in a moment. But given these goal-driven constraints, the question becomes, which party is better placed to assimilate the most relevant information and adjudicate it fairly? In this light, transparent and collaborative decision-making is more about how it makes us feel, satiating our scepticism and building trust, than any literal notion of autonomous decision-making.

Similarly, there’s nothing wrong with medical literacy, when examined in isolation. The internet has profoundly enabled patients in this area. I encourage people to learn as much as they can about their condition and treatment options. In a moment, I’ll introduce you to someone who’s taken medical literacy to dizzying heights. But in so doing, you have to acknowledge that there’s a limit to the understanding you’ll acquire, to say nothing of your lack of experience in applying that understanding. Peter Schulz and Kent Nakamoto, reflecting on the limits of patient literacy, are concerned with the unintended consequences.
“For it may be that the precise and complete provision of information makes a patient interrupt or cancel a therapy that is beneficial to him.”

There’s an old saying, “A physician who treats himself has a fool for a patient.” Do you want to be a third-rate physician or a world-class patient? Even the most knowledgeable people are prisoners of their own perspective. And when you’re sick, or worse, when your loved ones are sick, your recall and decision-making abilities are diminished. Consideration of psychological factors and biases led Oxford philosopher Neil Levy to the paradoxical conclusion that patient autonomy is increased by limiting it! This conclusion, that effective and responsible autonomy involves limiting the degrees of patient freedom, motivates the development of aids in support of patient decision-making. Based on an analysis of over 100 studies, patients that use decision aids are more knowledgeable about their options, feel better informed, and are more involved in their care. And there are no adverse effects on health outcomes or satisfaction, at least among patients. But it raises the question, how does patient empowerment make your physician feel?

Your physician has feelings, too

As with decision-making, out of enlightened self-interest, I’ll argue that you should care deeply how your physician feels. Pamela Wible marshalled evidence from a Medline poll of physicians, as well as her personal experiences as a physician, to the question of whether patient empowerment is a myth. Remarking on the obvious skew in power towards physicians, she wonders if patient empowerment is an oxymoron. She reflects on the increasing lack of autonomy felt by physicians as we move towards “assembly-line medicine” and the deep resentments that follow from these struggles for authority.

“So who holds the real power in the healthcare system? It’s not the patient. And lately, it’s not the doctor. Most docs I know are disillusioned — or worse.”

You don’t want a disillusioned doctor — or worse — providing your care. Too often, tension and conflict are what autonomy-shifts deliver. Few people welcome criticism or questions they feel are superfluous to the job at hand. But at its core, their negativity stems from a genuine concern over your health and wellbeing. You’re in their care and physicians take that responsibility seriously. This is why many are frustrated with empowerment programs. They have a duty to the quality of service, and it’s difficult to serve more than one master.

In her otherwise illuminating essay, there are two aspects that Wible gets wrong. “The concept [of patient empowerment] makes little sense. And it makes even less sense in the newborn intensive care unit, and emergency department.” First, we know that patient empowerment makes a lot of sense; that’s why it’s such an explosive idea. When it fails, it fails against a competing priority. Rarely should the priority of empowerment and authority-shifts be elevated above the priority of good knowledge. And contrary to Wible’s examples of emergencies and intensive care, the most dire of circumstances may be the only time when the priority of patient authority makes sense. I can count on one hand the moments when Deana and I felt authentic autonomy, and they were the worst experiences of our life.
When Sam faced particularly dire prospects, when he was expected to die, his physicians sat us down for “the talk” about the quality of his life and the balance of his experience. These are moments where subjectivity reigns, where claims to good knowledge are fuzzy at best. Patients are not empowered in this way during the less extraordinary moments of care, and for good reason!

I was reminded of the difference between knowledge and empowerment recently through a poignant essay by the physician Shekinah Elmore. She describes how she learned of the genetic condition that predisposes her to cancer, and the profound impact it had on her psyche. “My knowledge has both empowered and broken me — I don’t know which it’s done more.”

You’re the CEO!

I want to conclude with a few suggestions on how to reimagine your relationship with your physician. Meet Larry Smarr, a physicist and pioneer in the application of computer engineering to medicine. Using himself as the guinea pig, Larry is pushing the idea of the quantified self to unprecedented levels. He measures and monitors what he eats and drinks, the calories he burns, and what he excretes (seriously). His DNA has been sequenced, he’s had extensive and regular imaging, and he maintains 3D models of his insides. Supercomputers crunch all that data to support decision-making, which he uses to deliver precise instructions (some would say marching orders) to his physicians.

I don’t think Smarr is a reasonable exemplar for patients. However, he does illustrate where things are heading. Medicine is continuing its relentless march away from subjective symptoms towards objective signs, signs that may be increasingly plumbed with technology. We can debate what’s lost in the process, but it’s undeniably impacting both patients and physicians. Soon, we may no more strive for self-control in healthcare than we do in air travel. We want to choose where to go and when, but thereafter we entirely cede control to others, humans and machines, for our safety.

So where does that leave you, the patient? Smarr encourages us to become “the CEO of our own body,” and this is the posture I recommend. CEOs are legitimately empowered people, but they don’t behave in ways routinely prescribed by empowerment programs. CEOs don’t tolerate turf wars, however legitimate the underlying feelings. They don’t get bogged down in the weeds, however collaborative the spirit. They delegate authority to those that are closest to the truth, those that possess good knowledge and the experience to apply it. They support them, giving them the context, values and purpose they need to make good decisions, but then they let them do their job.

But with delegation comes responsibility. CEOs hold their team accountable. If the Russian proverb, trust but verify, can navigate nuclear disarmament, it can navigate patient-physician relationships. Any physician worth their salt will evolve a clinical hypothesis to guide your treatment. Your physician should be able to communicate the explanations that underlie their decisions. Start there and return there at every step through the course of your treatment. Patient empowerment programs often come loaded with questions to ask your physician. But the first question, perhaps the only question, is Why? Why do you think this treatment will help me? You’ll frequently be surprised by the answers.
You may learn that your treatment has proved effective on only a small proportion of the populations studied, or that its specific effects and mechanisms are not well established. You may learn that alternative treatments, perhaps the ones you’ve Googled, carry undesirable side effects or dubious explanations. This focus on the upstream sources of the knowledge, the received wisdom, its limits and veracity, neutralizes the perception of a power struggle with your physician. It subordinates authority to the pursuit of the best explanation for your condition and how to evolve a treatment plan that works for you.

I won’t pretend that the role of CEO will make your job any easier. There will be times when tough decisions rise to your office, and you’ll have to assert the vision you hold. You may find your physician is uncomfortable with your role as CEO and you’ll have to make a change. But if you’re doing your job right, leading with your values, focusing your questions on the veracity of the received wisdom and the evolving explanation of your treatment plan, you’ll be rewarded, even empowered.
https://petersweeney.medium.com/end-the-battle-with-your-doctor-f18f92e85345
['Peter Sweeney']
2020-10-31 15:24:45.498000+00:00
['Health', 'Doctors', 'Healthcare', 'Patients', 'Medicine']
How to Build a Reporting Dashboard using Dash and Plotly
A method to select either a condensed data table or the complete data table

One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present:

Code Block 17: Radio Button in layouts.py

The callback for this functionality takes input from the radio button and outputs the columns to render in the data table:

Code Block 18: Callback for Radio Button in layouts.py File

This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the dates selected, using the callback statement Output('datatable-paid-search', 'data') , this callback changes the columns presented in the data table based upon the radio button selection, using the callback statement Output('datatable-paid-search', 'columns') .

Conditionally Color-Code Different Data Table Cells

One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric’s value; red for negative numbers, for instance. However, conditional formatting of data table cells has three main issues:

1. There is a lack of formatting functionality in Dash Data Tables at this time.
2. If a number is formatted prior to inclusion in a Dash Data Table (in pandas, for instance), then data table functionality such as sorting and filtering does not work properly.
3. There is a bug in the Dash data table code in which conditional formatting does not work properly.

I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide:

Code Block 19: Conditional Formatting — Highlighting Cells

The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash.

*This has since been corrected in the Dash Documentation.

Conditional Formatting of Cells using Doppelganger Columns

Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and the Dash data table. These doppelganger columns hold either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug where the decimal portion of a value is not considered by conditional filtering).
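To make the pattern concrete, here is a minimal pandas sketch of building such a doppelganger column (the data values are made up; the column names follow the ones used later in this article):

import pandas as pd

df = pd.DataFrame({"Revenue YoY (%)": [-3.2, 0.8, 12.5]})

# Keep the raw numeric value in a "doppelganger" column, multiplied by 100 so the
# integer-only filtering bug cannot truncate away the decimal part.
df["Revenue_YoY_percent_conditional"] = df["Revenue YoY (%)"] * 100

# Format the visible column for display (with the sorting/filtering caveats noted above).
df["Revenue YoY (%)"] = df["Revenue YoY (%)"].map("{:.1f}%".format)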
Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements:

Code Block 20: Adding Doppelganger Columns

Then, the conditional cell formatting can be implemented using the following syntax:

Code Block 21: Conditional Cell Formatting

Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values.

The complete statement for the data table is below (with conditional formatting for odd and even rows, as well as highlighting cells that are above a certain threshold using the doppelganger method):

Code Block 22: Data Table with Conditional Formatting

I describe the method to update the graphs using the selected rows in the data table below.
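Since the numbered code blocks above are embedded separately, here is a hedged sketch of what the conditional formatting in Code Blocks 20–21 might look like. It assumes a Dash version from the 1.x era where dash_table supports filter_query and hidden_columns; only the column names and table id come from the article, the rest is illustrative:

import dash_table

table = dash_table.DataTable(
    id="datatable-paid-search",
    columns=[
        {"name": "Revenue YoY (%)", "id": "Revenue YoY (%)"},
        {"name": "conditional", "id": "Revenue_YoY_percent_conditional"},
    ],
    data=df.to_dict("records"),  # df from the pandas sketch above
    hidden_columns=["Revenue_YoY_percent_conditional"],  # present in data, hidden from view
    style_data_conditional=[
        {
            # Filter on the hidden numeric doppelganger column...
            "if": {
                "filter_query": "{Revenue_YoY_percent_conditional} < 0",
                "column_id": "Revenue YoY (%)",  # ...but style the visible column
            },
            "color": "red",
        }
    ],
)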
https://medium.com/p/4f4257c18a7f#b125
['David Comfort']
2019-03-13 14:21:44.055000+00:00
['Dashboard', 'Towards Data Science', 'Data Science', 'Data Visualization', 'Dash']
Do You Lose Hours?
Do You Lose Hours?

And where do those hours go?

My studio! Photo by the author.

The best days for an artist are when we have uninterrupted time doing our art. If we are pulled away, there is a tendency to call it “lost hours.” In the life of an artist, there are many hours or days where roadblocks come along to keep us from doing our art. A comforting thought: doing something non-art-related might inspire the next painting. The time away from our art, the visual distance or a needed break, brings fresh eyes to the art piece we are currently working on.

Being creative in some way is what makes us happy. Getting caught up in our creativity is called being in the zone! If I get outside to paint “plein air,” I can usually be assured minimal interruptions, and I end up in the zone. When you are in the zone, you don’t really realize it. Later you think about where the time went, but the feeling you do remember is one of euphoria! Many of us talk about those hours as lost. Time flew by! Those are not really what I mean by “lost hours.” Here are a few things that happen to me that feel like lost hours, and I bet they happen to you also.

Organizing is a big item taking time away from creating! Can you imagine trying to organize the studio above? It looks like chaos, right? To do so brings a big feeling of lost hours. I have to tackle it about every three months just so I can find something I worked on months ago! When I do take time to organize, it usually becomes a full day of work. Or it may be only four hours of moving stuff around that makes more sense on that particular day. It is a good thing to do occasionally. Organizing frees my mind from the creating, and the “lost hours” eventually become a good feeling and productive time.

If I am not doing art, I am usually thinking about something art-related. While doing some of these “hours,” I might be planning what my next painting will be. So maybe they are not really lost! As artists we are always planning. Planning is another of those things that takes time away from the studio easel and feels like lost hours. This is a big one that gets in the way of the creativity and being in the zone mentioned above. It has been said that 50% of our time is spent on the non-creative aspects of our art, mostly on promotion. We can use some of those lost hours to think about a variety of plans that may promote our work.

If you wish to sell, then the artwork needs visuals. Taking photos, writing descriptions and logging the work into inventory are big items that take part of that time. This leads to the next item on my list. These things are what you need to promote on your website or on social media. They are especially needed for your newsletter, which should go out on a consistent basis. You are going to point your people in the direction of your website and your galleries if you are in some. This definitely feels like lost hours, but is not at all!

Marketing, Galleries and Selling!

There are times I need to think about what the next marketing move should be or when I need to implement one I have already planned. Being a professional artist, I create paintings and have a few galleries that represent me. Many paintings never make it to a frame, and I must do something with those. I always need to be thinking of where those paintings are going or where to store them. The photo below shows a recent event to sell some of my pieces at my home. It took me a couple of weeks, a few hours each day away from my art, to put the sale together.
It required creating and sending emails and a newsletter to promote the sale.

Ready for a Sale! Image courtesy of the author.

Most professional artists never have a sale because it undermines the current value of their art and of the art already purchased by their clients and patrons. But if we are overwhelmed with stacks of art, it becomes a thorn in our side. We need to do something with all those studies and experiments! The above-referenced sale was my first “sale” ever … in 49 years of being a practicing professional artist. Yes, it created a lot of what might be considered lost hours, but they were necessary ones!
https://medium.com/the-innovation/do-you-lose-hours-638a046ba05a
['Marsha Hamsavage']
2020-11-30 11:44:23.415000+00:00
['Planning', 'Painting', 'Artist', 'Organization', 'Creativity']
Why Every Successful Artist Throughout History Started Their Careers By Creating For an Audience of One
In a world where we’ve quantified the value of people’s artistic endeavors with reviews, rankings, fan, and follower counts, modern-day artists have become excessively concerned with validation from strangers on the Internet, and with metrics over mastery, when it should really be the other way around. It seems counterintuitive that you could reach an audience of millions by creating for an audience of one (aka yourself). Yet successful artists throughout history started their careers by creating for an audience of one.

Because Everybody Starts at Zero

Most people who start making art don’t know they’re going to become famous. They love the work. So they keep doing it. You have to love the work to build a career in the arts. If you don’t, you won’t be able to navigate the geography of a creative life and get past those periods of lingering obscurity, which are a rite of passage for artists. Every writer starts with zero readers. Every podcaster starts with zero listeners. Every musician starts with zero fans. The only way you get past zero is by starting with an Audience of One. If you aren’t willing to persist when nobody is consuming your art, then you’ll never actually get to the point where people are consuming your art.

Because of the Eternal Gap Between Ambition and Ability

If you keep making your art, make better art, and eventually make unmistakable art, you will become so good they can’t ignore you. You’ll build an audience for your work. But you can’t rest on your laurels. There is no “I’ve made it” moment because there’s an eternal gap between your ambitions and abilities. After becoming the most famous composer in Bollywood, A.R. Rahman started to dabble in making his own movies. Olivia Wilde went from acting to directing with her debut film, Booksmart. Your ambitions will change and there will be a new gap to bridge. There will be new projects, new ideas, and new goals.

Because You Want to Play the Infinite Game

The purpose of being commercially successful with your creative work isn’t external recognition, fame, or wealth. Those things are byproducts. The biggest benefit of being commercially successful with your creative work is that you get to keep doing it. If you write a successful book, you get to write more books. If you’re in a successful movie, you get to make or be in another one. A lot of authors write books with the goal of using their book as a business card to get public speaking engagements. Writing my books has definitely helped with that. But I don’t want to sell more books for more speaking gigs. I want to sell more books so I can write more books. The real value of being commercially successful with your art is that you get to make more art.

Because It’s More Rewarding

When you let go of your expectations, the work becomes much more rewarding, and your outcomes might exceed your expectations. You tap into the joy of being immersed in your work, experiencing the high of being in the flow, so that the work becomes its own reward. I didn’t have a contract for a third book with my publisher, but I wasn’t going to let that stop me from writing another book. Free from rules, deadlines, and any expectations, The Scenic Route was one of my most rewarding creative projects to date. I can’t tell you how many people read the free version or bought the Kindle version on Amazon. I didn’t write it to sell copies. I wrote it to touch hearts, and I wrote it for myself.
Artists keep making art long after they’ve been commercially successful because they’re more addicted to the process than the results. The results are out of your control, the process is not. So that’s where you should be spending the bulk of your time, energy, and attention. Because You Leave the World a Bit Different for Your Having Been Here Maybe you won’t build an empire, become the next Steve Jobs, Oprah, or Beyoncé, or put a dent in the universe. But you can, in the words of Neil Gaiman, “Leave the world a bit different for your having been here.” Every time you create something that didn’t exist before, you change the world, you give the people in your life a gift, and you leave us something to remember you by. You get to say, “I was here. This is what I’ve made.” Because Making Good Art Can Be a Lifesaver After 41 years of living a life that hasn’t gone according to plan, battling depression, and a molotov cocktail of bullshit in my head, I’m convinced that surfing and writing saved my life. Surfing gave me passion. Writing gave me a purpose. While interviewing people might be something I do for an audience of listeners, writing was something that I’d always done for myself. My writing skills pale in comparison to my interviewing skills. But, I get so much joy from it that for more than half a decade, I’ve woken up in the morning and written 1000 words a day. Because the Work is Where You’ll Spend the Bulk of Your Time For the one day an author gets to upload pictures of his book on Instagram, or share the news on Facebook, there are hundreds of days, and thousands of hours of sitting quietly in a room, tapping away at a keyboard, battling fear, resistance, doubt, procrastination, and that voice in your head that says, “This stuff is really coming out the wrong end.” Moments in the spotlight are finite. This is the reality of life in the arts. This is what you are signing up for. The question you have to ask yourself is if you’re up for this. Are you willing to focus on the process instead of the prize, do what it takes to master your craft, and honor your commitments day after day, month after month, year after year? Are you willing to go far past where the average person quits? Whether that’s crickets chirping after working on a blog for months or getting rejected by one publisher or casting director after another? Are you willing to keep putting messages in bottles and keeping your fingers crossed that one of them will eventually reach its intended recipient? Paul Graham has said you have to endure a million dollars worth of pain to make a million dollars. You might have to endure a million hours worth of pain to make good, great, and unmistakable art. It’s up to you to decide if the juice is worth the squeeze.
https://skooloflife.medium.com/why-every-successful-artist-throughout-history-started-their-careers-by-creating-for-an-audience-6797a095f071
['Srinivas Rao']
2019-06-28 00:07:08.184000+00:00
['Creativity']
12 (+Bonus) amazing Youtube Channels To Learn Python Programming for Free
Photo by CardMapr on Unsplash 12 (+Bonus) amazing Youtube Channels To Learn Python Programming for Free Here you have a great list of Youtube channels if you want to dig into Python programming and learn from the best. A few days ago, I published an article with 21 of the best channels on Youtube where you can learn Data Science, AI, and Machine Learning for free and... Boom! It was a great success; many people wrote to me that the article was handy and helped them find great content! If you are looking for the best Youtube channels to dig into Python programming and learn from the best, you have a great list with 12 (my lucky number) amazing programmers who share tips and secrets that will help you to become a master!
https://medium.com/towards-artificial-intelligence/21-great-youtube-channels-for-you-to-learn-python-programming-for-free-d6470c444f7d
['Jair Ribeiro']
2020-12-08 19:37:18.811000+00:00
['Python Programming', 'Programming Languages', 'Programming', 'Python', 'Learning To Code']
Making Interactive Maps of Public Data in R
For a version of this post with interactive maps, check it out on GitHub. Introduction Oftentimes, when working with public data, there will be a geospatial component to the data — the locations of public libraries, for example, or which neighborhoods of a city are most bike-friendly. In this tutorial, we will walk through how to import, transform, and map data from a public dataset using R. The data we will be using comes from the City of New Orleans Open Data Portal, and concerns grants from the Department of Housing and Urban Development (HUD) given to local partners to help build and rebuild affordable housing in the city after Hurricane Katrina. HUD distributes government funding to support affordable housing, community development, and reduce homelessness. This dataset contains information about each funding award and the location of the housing development the funding was used for. The data has the potential to provide insight into how New Orleans is continuing to recover from the long-term impact of Hurricane Katrina. This tutorial came about when, after looking at a static map of HUD grants in New Orleans (pictured below), we wanted to be able to zoom in and examine grant sites in more depth than a static map provides. Interactive maps are great for this sort of deep, customizable exploration. Step One: Importing Data Downloading data from the open data portal First things first, we need to download the data we want to work with. To create this map, we’ll need two different datasets: the main dataset of HUD grants, found here on the New Orleans data portal, and a dataset that contains a shapefile of New Orleans neighborhoods, which can be found here. We’ll download these by clicking on the Export link, selecting CSV format for the first file, and shapefile for the second (since we’re going to be using that data to map the neighborhood boundaries, it’s easiest to download it in a predefined geospatial format). Download the files, unzip the shapefile, and place the HUD grant CSV and shapefile folder in your R working directory. If you aren’t sure where that is, run getwd() in your R console. Setting up an R session Now that we have our datasets, let’s make sure our R session has the proper setup. Load the tidyverse (used here for data wrangling), sf (simple features package used for geospatial data), leaflet (an R implementation of the Leaflet Javascript plotting library), and viridis (better color maps) packages like below: library(tidyverse) library(sf) library(leaflet) library(viridis) Next, we’ll read in the neighborhood and HUD grant data. read_sf() and read_csv() are standard functions to read in data, but we’ll want to make sure that our HUD grant data is properly recognized as geospatial data. To do that, we use the st_as_sf() function to transform the Longitude and Latitude columns of the CSV into simple features for plotting. neighborhoods <- read_sf("new_orleans_neighborhoods") hud_grants <- read_csv("new_orleans_hud_grants.csv") %>% st_as_sf(coords = c("Longitude", "Latitude"), crs = 4326, agr = "field") Step Two: Transforming Data Now that we have our data read into R, let’s clean it up a bit and create some readable labels for our map. 
The below code is fairly standard data cleaning, and will be different for each dataset, so I’m not going to go into it too much here: capwords <- function(s) { cap <- function(s) paste(toupper(substring(s, 1, 1)), {tolower(substring(s, 2))}, sep = "", collapse = " ") sapply(strsplit(s, split = " "), cap, USE.NAMES = !is.null(names(s))) } neighborhoods_clean <- neighborhoods %>% st_transform(4326) %>% mutate(neighborhood_label = paste0('<b>Neighborhood:</b> ', capwords(GNOCDC_LAB))) hud_grants_clean <- hud_grants %>% rename(OCD_Program = `OCD Program`) %>% filter(OCD_Program != "None") %>% mutate(popup_label = paste(paste0('<b>Partner: ', Partnership, '</b>'), paste0('Address: ', Address), sep = '<br/>')) There is an important point about geospatial mapping here, however; in the st_transform(4326) we take the neighborhoods dataset and transform it into the proper projection system. When mapping, you need to make sure all your spatial datasets are in the same projection system — in this case, 4326, which corresponds to WGS 84. Another general point about leaflet: when we create our grant project label, we use HTML formatting codes like <br/> and <b>. It’s not necessary to have a full understanding of HTML to make interactive maps — certainly, these labels don’t need formatting — but knowing how to bold, italicize, and put line breaks in your labels will make your maps that much nicer and easier to use. Step Three: Mapping Data It’s helpful, when making a map using leaflet, to think about the process in terms of building a map up from its constituent pieces: first you have the map, then the neighborhood areas, then the grant project markers. Each piece is layered on top of each other — physically, in the map, and virtually, in your code. To emphasize this point, we’ll walk piece-by-piece through making our map, with images of the map along the way. Your Base Map The first step in any mapping project, once you’re ready to build your map, is to choose what your base map is going to be. The leaflet package has a variety of basemaps, which run the gamut from highly-detailed, realistic maps (e.g., OpenStreetMap) to stylized art (Stamen’s watercolor tiles). For this map, we’ll use the OpenStreetMap basemap, since it has a number of nice details that will help contextualize the grant locations, like landmark names and types. Below is what the basemap looks like when we’ve only loaded the map, no data in it yet. We can still zoom in and move around and explore the OpenStreetMap tiles, though. leaflet() %>% addTiles() In this code, we used addTiles() to add the OpenStreetMap basemap, which works because OpenStreetMap is the default basemap for Leaflet. If we wanted to use a different basemap, e.g. Stamen Toner, we’d do something like addProviderTiles(“Stamen.Toner”). You’ll notice that the initial view is zoomed out to the whole world; we could manually set it to be zoomed in on New Orleans, but if we add our data, then the map will automatically focus around the data in the map. Let’s try it by adding our neighborhood data. Adding Polygons (Neighborhoods) leaflet() %>% addTiles() %>% addPolygons(data = neighborhoods_clean, color = 'white', weight = 1.5, opacity = 1, fillColor = 'black', fillOpacity = .8, highlightOptions = highlightOptions(color = "#FFF1BE", weight = 5), popup = ~neighborhood_label) To replicate the static map above, the line color is set to white and the fill to black. 
The opacity of the fill is .8, which is only slightly transparent, to both preserve the starkness of the initial display of the map, which will look a lot like the static image, and allow people to zoom in and still be able to see the basemap detail underlying the polygons. Information overload is a danger when creating interactive visualizations; having relatively opaque neighborhood areas in this map (literally) blocks out some of the detail, focusing the reader’s eyes on what you want them to look at. Adding Points (Grant Projects) Now, let’s add in the grant programs. To replicate the static image, we’re going to have to create a color palette to color our markers, which we do through the use of the colorFactor function in leaflet. Our palette is a slightly modified version of the magma palette from the viridisfamily, compressed so that each color is visible on a dark background (surprisingly, dark purple and black don’t go that well together). Another important thing to note here, which you may have noticed from the addPolygons call, is that in order to use the name of a variable from our data in a leaflet function parameter, we need to precede it with a tilde (~). This creates a one-sided formula, which leaflet knows to evaluate in the context of your input data. For example, when we run addCircleMarkers(data = hud_grants_clean, popup = ~popup_label), the popup = ~popup_label will be evaluated as using the popup_label column from the hud_grants_clean table. By now, we’ve started to add interactive and clickable components to our maps. You can click on neighborhoods and see the neighborhood label, or click on different project markers to see which partner organization worked on the project and the address of the project. pal <- colorFactor( palette = viridis_pal(begin = .4, end = .95, option = 'A')(3), domain = hud_grants_clean$OCD_Program ) leaflet() %>% addTiles() %>% addPolygons(data = neighborhoods_clean, color = 'white', weight = 1.5, opacity = 1, fillColor = 'black', fillOpacity = .8, highlightOptions = highlightOptions(color = "#FFF1BE", weight = 5), popup = ~neighborhood_label) %>% addCircleMarkers(data = hud_grants_clean, popup = ~popup_label, stroke = F, radius = 4, fillColor = ~pal(OCD_Program), fillOpacity = 1) Adding a Legend The final step in creating our map is to add a legend to indicate which programs are represented with which colors. Thankfully, doing this is easy — we just use the addLegend function and point it to our data, the color palette we used, the values we’re representing on the legend, and add a title. leaflet() %>% addTiles() %>% addPolygons(data = neighborhoods_clean, color = 'white', weight = 1.5, opacity = 1, fillColor = 'black', fillOpacity = .8, highlightOptions = highlightOptions(color = "#FFF1BE", weight = 5), popup = ~neighborhood_label) %>% addCircleMarkers(data = hud_grants_clean, popup = ~popup_label, stroke = F, radius = 4, fillColor = ~pal(OCD_Program), fillOpacity = 1) %>% addLegend(data = hud_grants_clean, pal = pal, values = ~OCD_Program, title = "HUD Grant Program") In this tutorial, we’ve walked through how to import, transform, and map public data to create an interactive map in place of a static map. Along the way, we’ve touched on important visualization concepts like avoiding information overload, the appropriate degree of stylization, and choosing compatible colors. We’ve also created a pretty awesome map! Hopefully, this tutorial will help you be able to create your own interesting and useful interactive maps in the future. 
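One extra, lightly hedged step that isn't in the original walkthrough: if you want to share the finished map outside of an R session, you can write it to a standalone HTML file with the htmlwidgets package. The sketch below assumes the full pipeline above has been assigned to a variable (hud_map is just an illustrative name) and reuses the neighborhoods_clean, hud_grants_clean, and pal objects we built earlier.

library(htmlwidgets)

# Assign the finished map (basemap + polygons + markers + legend) to a variable
hud_map <- leaflet() %>%
  addTiles() %>%
  addPolygons(data = neighborhoods_clean, color = 'white', weight = 1.5, opacity = 1,
              fillColor = 'black', fillOpacity = .8, popup = ~neighborhood_label) %>%
  addCircleMarkers(data = hud_grants_clean, popup = ~popup_label, stroke = F, radius = 4,
                   fillColor = ~pal(OCD_Program), fillOpacity = 1) %>%
  addLegend(data = hud_grants_clean, pal = pal, values = ~OCD_Program, title = "HUD Grant Program")

# Write a self-contained HTML file you can email or host anywhere
saveWidget(hud_map, "hud_grants_map.html", selfcontained = TRUE)

The resulting file opens in any browser with the same pan, zoom, and popup behavior you get in RStudio's viewer.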
If you’re interested in walking through this tutorial yourself in R, the RMarkdown file is available here; just put it in the same directory as the city data files and you’ll be able to run all the code in this tutorial yourself. Take a look at our GitHub page to play with the interactive maps yourself. And while we’ve only worked with this New Orleans HUD grant data in this tutorial, the import and mapping steps should be applicable to any sort of public geospatial data. Many cities have easily accessible data in their open data portals; go out and give it a shot!
https://medium.com/civis-analytics/making-interactive-maps-of-public-data-in-r-d360c0e13f13
['Civis Analytics']
2019-03-20 20:06:28.435000+00:00
['Govtech', 'Dataviz', 'Rstats', 'Civictech', 'Data Visualization']
Making climate science a human story
Making climate science a human story A chat with ClimateAdam: the YouTuber who uses gin & tonics to explain rising sea levels Adam Levy is a climate scientist and a journalist who combines science and storytelling on YouTube. ClimateAdam, as he is known to his almost 3,000 followers, has been revealing nuggets of climate science in a playful and accessible way for the past five years. He made his YouTube debut explaining sea level rise with the help of a gin & tonic in a pub. He believes that science and journalism is the perfect combination to turn the grim reality of climate change statistics into human stories. Ahead of our upcoming News Impact Summit on climate change, we talked with ClimateAdam about his unique approach to making climate change topics accessible. Ask the doctor, on YouTube ClimateAdam thrives on curious people who dare to ask him honest questions about climate change. “I got a question recently from somebody who said they were in Year 9 at school in the UK. They heard something in class that we were all going to die because of climate change in 18 months, and if that was true.” “That’s the best possible comment I can get because it shows someone has been misinformed but they’re trying to find videos like mine and are brave enough to ask these questions. (…) That is one of the wonderful things about new media like YouTube or Twitter or Facebook, it’s not passive, it gives an opportunity to initiate a conversation with a random kid at school.” Adam’s YouTube persona, “a slightly more confident and more clueless version” of himself came into existence when he noticed a huge gap between the scientific understanding and the public conversation on climate change. “The goal and hopefully the impact of ClimateAdam is not only to explain some key ideas around climate science but also to humanise them and the process around them,” Adam said. In his first and most-watched video to date, ClimateAdam uses a gin & tonic to explain the science behind rising sea levels and their connection to climate change. His priority is to make videos that are consistent with the platform and enjoyable. “When I think about a topic, I try and boil things down to the single key question that I want to talk about,” Adam said, whether it’s what causes sea levels to rise or how to spot pro-climate policies. “Then I consider essential elements of each question and what elements can be made into something more playful without actually running away from the serious topic.” The climate crisis is no laughing matter, but ClimateAdam found that “his own ridiculousness” would get folks smiling at the screen even when watching videos about climate change. “Sometimes the hardest bit is to work out where you can afford to be playful and what you can be funny about without making a serious topic seem lighthearted,” explained Adam. Changing the narrative through social media Adam has witnessed how the tide has turned in favour of the climate — with movements like Extinction Rebellion and the Youth for Climate. “I wasn’t sure I would ever see any real action on climate change. Even the thought of having a global climate agreement felt like a distant prospect, never mind an agreement which had keeping temperature increase below 1.5 degrees Celsius as its target. That in itself is incredible.” He has seen how much the narrative has changed in the last year. “People everywhere are talking about it. The volume and the quality of the conversation have improved hugely,” the YouTuber notes. 
“But we still have to ask whether that conversation is achieving what needs to be achieved. Although the narrative is shifting, the reality, unfortunately, is still behind.” Social media like YouTube has played a major factor in stirring debate and mobilising action for the climate in the way it allows people from all over the world to interact, relate and organise. “I think it’s been a really huge factor in it, especially speaking to a lot of youth climate strikers. [Social media] is just a fundamental part of the conversation for any young people. Social media isn’t a secondary medium, it is a primary medium and a fundamental part of their social interactions. It’s really formed the backbone for a lot of the new activism we are seeing.” But the YouTuber notes that social media wasn’t the only medium to bring momentum to the movement. “Extinction Rebellion has gone out of their way to do things in person and to really do things in a face-to-face, human way, having meetups, small local events, knocking on doors.” Collaboration between scientists and journalists is key Adam understands the two paths he walks — scientist and journalist — can be perfectly aligned to discuss climate. “The job of the scientist is to find results, show relationships, find patterns, and search for truth and information. It is the job of the journalist to tell stories and to make information human,” says Adam. He believes journalists and scientists can team up to show people how scientific findings impact their lives. Collaboration between science experts and journalists is key to changing the pace and the tone of the narrative on climate change. “News is usually about a specific event in a specific place involving specific people but climate change is everywhere. And when something is everywhere, it can feel like it is nowhere,” he explains. “It is the job of journalists to localise climate change, to make it human, to make it heard, to make it feel close. That is a really big challenge. A lot of journalists are getting much better at it, but for a long time both the volume and quality of the way we talked about climate change has been insufficient.” According to Adam, we have failed to cover climate change in a way that does it justice, shown by the slow pace at which action is being taken in particular when it comes to government policies. ClimateAdam has two tips for journalists covering the climate: focus on the story and on the solutions. “Always focus on the story. A story is different from a fact, a story is something human.” On the other hand, a lot of stories tend to be depressing, and can even lead to despair for not presenting a solution to the problem. ClimateAdam says:
https://medium.com/we-are-the-european-journalism-centre/making-climate-science-a-human-story-a46654c368a6
['Vera Penêda']
2019-09-26 12:05:18.404000+00:00
['Climate Change', 'Environment', 'Climate Crisis', 'Insights', 'YouTube']
Why ethical reasoning should be an essential capability for Data Science teams
Why ethical reasoning should be an essential capability for Data Science teams And two concrete actions to kickstart your team on ethical knowledge Image source: mbolina via iStock Wherever new technology is introduced, ethics and legislation will trail behind the applications. The field of data science cannot be called new anymore from a technical point of view, but it has not yet reached maturity in terms of ethics and legislation. As a result, the field is especially prone to make harmful ethical missteps. How do we prevent these missteps right now, while we wait for — or even better: work on — ethical and legislative maturity? I propose that the solution lies in taking responsibility as a data scientist yourself. I will give you a brief introduction on data ethics and legislation, before I reach this conclusion. Also, I will share a best-practice from my own team, which gives concrete actions to make your team ethics-ready. “But data and models are neutral in itself, why worry about good and bad?” Image source: Kirill_Savenko via iStock If 2012 denoted the kickoff of the golden age of data science applications — through the crowning of data science as the ‘Sexiest job of 21st century’, 2018 might be the age of data ethics. It is the year where the whole world started forming an opinion on how data may and may not be used. The Cambridge Analytica goal of influencing politics clearly fell in the ‘may not’ camp. This scandal opened up major discussion about the ethics of data use. Multiple articles have since then discussed situations where the bad of algorithms outweighed the good. The many examples include image recognition AI erroneously denoting humans as gorillas, the chatbot Tay which became too offensive for Twitter within 24 hours and male-preferring HR algorithms (which raises the question: is data science the sexiest, or the most sexist job of the 21st century?). Clearly, data applications have left neutral ground. In addition to — or maybe caused by — attention from the public, large (governmental) organisations such as Google, the EU and the UN now also see the importance of data ethics. Many ‘guidelines of data/AI/ML’ have been published, which can provide ethical guidance when working with data and analytics. It is not necessary to enter the time-consuming endeavour of reading every single one of these. A meta study on 39 different authors of guidelines shows a strong overlap in the following topics: Privacy Accountability Safety and security Transparency and explainability Fairness and non-discrimination This is a good list of topics to start thinking and reading about. I highly encourage you to deeper investigate these yourselves, as this article will not explain these topics as deeply as their importance deserves. Legal governance, are we there yet? Image source: serts via iStock The discussion on the ethics of data is an important step in the journey towards appropriate data regulation. Ideally, laws are based on shared values, which can be found by thinking and talking about data ethics. To write legislation without prior philosophical contemplation would be like blindly pressing some numbers at a vending machine, and hoping your favourite snack comes out. Some first pieces of legislation aimed at the ethics of data are already in place. Think of the GDPR, which regulates data privacy in the EU. Even though this regulation is not (yet) fully capable of strictly governing privacy, it does propel privacy — and data ethics as a whole — to the center of the debate. 
It is not the endpoint, but an important step in the right direction. At this moment, we find ourselves in an in-between situation in the embedding of modern data technology in society: Technically, we are capable of many potentially worthwhile applications. Ethically, we are reaching the point we can mostly agree what is and what is not acceptable. However, legally, we are not in a place where we can suitably ensure that the harmful applications of data are prevented: most data-ethical scandals are solved in the public domain, and not yet in the legal domain. Responsibility currently (mostly) rests on the shoulders of Data Scientists Image source: Asergieiev via iStock So, the field of data cannot be ethically governed (yet) through legislation. I think that the most promising alternative is self-regulation by those with the most expertise in the field: data science teams themselves. You might argue that self-regulation brings up the problem of partiality, I do however propose it as an in-between solution for the in-between situation we find ourselves in. As soon as legislation on data use is more mature, less–but never zero–self-regulation is necessary. Another struggle is that many data scientists find themselves in a split between acting ethically and creating the most accurate model. By taking ethical responsibility, data scientists also receive the responsibility to resolve this tension. I am persuadable with the argument that the unethical alternative might be more expensive in terms of money (e.g. GDPR fines) or damage to company image. Your employer or client may be harder to convince. “How to persuade your stakeholders to use data ethically” sounds like a good topic for a future article. My proposal has an important consequence for data science teams: next to technical skills, they would also need knowledge on data ethics. This knowledge cannot be assumed to be present automatically, as software firm Anaconda found that just 18% of data science students say they received education on data ethics in their studies. Moreover, just a single person with ethical knowledge wouldn’t be enough, every practitioner of data science must have basic skill in identifying potential ethical threats of their work. Otherwise the risk for ethical accidents remains substantial. But how to reach overall ethical knowhow in your team? Two concrete actions towards ethical knowledge Image source: davidf via iStock Within my own team, we take a two-step approach: group-wide discussion on what each finds ethically important when dealing with data and algorithms construct a group-wide accepted ethical doctrine based on this discussion In the first step we educate the group on the current status in data ethics in both academia and business. This includes discussing problems of data ethics in the news, explaining the most prevalent ethical frameworks, and conversation about how ethical problems may arise in daily work. This should enable each individual member to form an opinion on data ethics. The team-wide ethical data guidelines constructed in the second step should give our data scientists a strong grounding in identifying potential threats. The guidelines shouldn’t be constructed top down; the individual input that comes out of the group-wide discussions forms a much better basis. This way, general guidelines that represent every data scientist can be constructed. The doctrine will not succeed if constructed as a detailed step-by-step list. 
Instead, it should serve as a general guideline that helps to identify which individual cases should be further discussed. Precisely that should be a task of the data scientist: ensure that potentially unethical data usage will not go unnoticed. Unethical usage not only by data scientists, but by all colleagues who may use data in their work. This way, awareness for data ethics is raised, which enables companies to responsibly leverage the power of data. In short: start talking about data ethics We are technically capable of life-changing data applications, however a safety net in the form of legislation is not yet in place. Data scientists walk a tightrope over a deep valley of harmful application, where overall knowledge of ethics acts as the pole that helps them balance. By initiating the proper discussion, your data science team has the tools to prevent expensive ethical missteps. As I argue in the article, discussion on data ethics propels the field towards maturity, such that we can arrive at a “rigorous and complex ethical examination” of data science. So, engage in discussion: be critical about this content, form an opinion, talk about it, and change your opinion often as you encounter novel information. This not only makes you a better data scientist; it makes the whole field better.
https://towardsdatascience.com/why-ethical-reasoning-should-be-an-essential-capability-for-data-science-teams-5be1b9da67d3
['Tom Jongen']
2020-07-23 14:03:25.454000+00:00
['Ethics', 'Data Science', 'Legislation', 'Philosophy', 'Artificial Intelligence']
You’ve probably missed one of the greatest ML packages out there
Get Mathematica Before you ask if it’s free — it is free-ish. Officially it’s proprietary and, of course, expensive — at the time of writing, a home desktop license runs $354. But there’s two large caveats here: If you’re a student or affiliated with a university, your institution probably provides it for free — this is probably the most common exposure to this great tool. If you’re not a university — Raspberry Pi OS (formerly Raspbian) ships with Mathematica for free. That means you can pick up a little $5 Raspberry Pi Zero (maybe even the ZeroW with built in WiFi!) and get a free working copy of Mathematica. From $354 down to $5 — how absurdly cool is that! Three ways to make a neural network Load a pre-made one = NetModel . Make a simple layer-to-layer one, like the ones you might see in an introductory ML textbook = NetChain . Make a general graph = NetGraph . Let’s do each one in turn. Load a pre-made network The main command here is NetModel : net = NetModel["LeNet Trained on MNIST Data"] Note: You always can (and should) click the little plus button to get a more detailed breakdown of the network. You can just fire up NetModel[] to see a list of all possible pre-loaded networks — I’ll put it at the bottom of this article. Simple neural networks A “simple” neural network in this case means a graph of layers where the i-th layer is connected to the i+1-st layer. This is done with the NetChain command: net = NetChain[{ LinearLayer[10], Tanh, LinearLayer[20], LogisticSigmoid }, "Input" -> 10, "Output" -> 20 ] This introduced a couple new things: Layers: a NetChain is composed of layers that are connected together. A LinearLayer is just one like W * input + b . There’s lots more, like ElementwiseLayer , SoftmaxLayer , FunctionLayer , CompiledLayer , PlaceholderLayer , ThreadingLayer , RandomArrayLayer . Probably the most useful one is FunctionLayer , where you can apply a custom function (I’ll show you this layer). is composed of layers that are connected together. A is just one like . There’s lots more, like , , , , , , . Probably the most useful one is , where you can apply a custom function (I’ll show you this layer). Activation functions: you can stick in pretty much any function you like! Tanh , Ramp (ReLU), ParametricRampLayer (leaky ReLU), and LogisticSigmoid are obvious choices, but you can use anything here. You can even repeat activation functions (for whatever reason) — no constraints here. You can see the network is “uninitialized”. We can initialize the net with random weights and zero biases: net = NetInitialize[net] Let’s get those weights out. You can use Information[net, “Properties”] to query all the possible parameters. The layers can be extracted with: layers = Information[net, "LayersList"] linearLayers = Select[layers, ToString[Head[#]] == "LinearLayer" &] The weights and biases can be obtained with: weights1 = NetExtract[linearLayers[[1]], "Weights"] Normal[weights1] // MatrixForm biases1 = NetExtract[linearLayers[[1]], "Biases"] Normal[biases1] // MatrixForm Note that Normal converts NumericalArray to the standard Mathematica list object that we can manipulate easily. Make any arbitrary graph Mathematica is great at graphs. The main command to make neural networks is NetGraph . This is where it gets really cool! 
Let’s start with a network that takes a target vector of size 2 and another vector of size 2, and computes the L2 loss: layerDefns = <| "l2" -> FunctionLayer[(#MyTarget - #MyLossInput)^2 &, "MyTarget" -> 2, "MyLossInput" -> 2, "Output" -> 2], "sum" -> SummationLayer[] |>; layerConn = {{ NetPort["MyTarget"], NetPort["MyLossInput"]} -> "l2" -> "sum" -> NetPort["MyLoss"]}; l2lossNet = NetGraph[layerDefns, layerConn] We see the format for NetGraph : An association (the Mathematica term for a dictionary of key/values) of name to layer. (the Mathematica term for a dictionary of key/values) of name to layer. A list of layer names connected with arrows -> . Also introduced is the notion of a NetPort . A NetPort is a named input/output of a network. You can clearly see the input ports on the left side of the layerConn definition, and the output ports on the right side: layerConn = {{ NetPort["MyTarget"], NetPort["MyLossInput"]} -> "l2" -> "sum" -> NetPort["MyLoss"]}; The inputs are "MyTarget" and “MyLossInput” , and the output is “MyLoss" . You can clearly see these in the graph as well: Look at this carefully — it shows you the dimensions of each connection in the graph, which is extremely helpful. The inputs are vectors of length 2, while the output is a scalar. You can now reuse this graph in yet larger graphs, for example with another subnetwork: subNet = NetChain[{LinearLayer[2], Tanh, LinearLayer[10], Tanh, LinearLayer[1]}] and the combination of the sub network and the loss network: layerDefns = <| "sn1" -> subNet, "sn2" -> subNet, "l2loss" -> l2lossNet, "catentate" -> CatenateLayer[] |>; layerConn = { NetPort["MyInput"] -> "sn1" -> "catentate", NetPort["MyInput"] -> "sn2" -> "catentate", {NetPort["MyTarget"], "catentate"} -> "l2loss" -> NetPort["MyLoss"] }; net = NetGraph[layerDefns, layerConn] Let’s look at that more closely: We have an input vector that get’s passed to in two separate channels of the subnetworks for a size of 2, then through different activation functions. These are then catenated (stacked), and passed into the loss function along with a target to finally obtain the loss. Train networks The command to train neural networks is NetTrain . Let’s train the net from the previous example. First, we need some data: data = Flatten@Table[ <| "MyInput" -> {x, y}, "MyTarget" -> {Cos[x + y], Sin[x + y]} |> , {x, -1, 1, .02} , {y, -1, 1, .02}]; data[[1]] We can see each data sample is of the format of an association (dictionary) with every NetPort labeled. Next the training: trainedNet = NetTrain[net, data, LossFunction -> "MyLoss"] This is great — we get a simple overview and can visualize the training live. You can set different options — for example, the maximum number of iterations with: NetTrain[..., MaxTrainingRounds->50000] Finally, let’s evaluate the trained network. The network trainedNet outputs the full value of the loss function. Since we are interested more in the actual output, we can use NetTake to grab all layers up to and including the CatenateLayer : trainedSubNet = NetTake[trainedNet, "catentate"] trainedSubNet[data[[1]]] data[[1]]["MyTarget"] Let’s plot that and do some regression! 
pts = Flatten[Table[ {x, y, trainedSubNet[{x, y}][[2]]} , {x, -1, 2, .05} , {y, -1, 2, .05}], 1]; ptsTrain = Select[pts, #[[1]] <= 1 && #[[2]] <= 1 &]; ptsExtrapolate = Select[pts, #[[1]] > 1 || #[[2]] > 1 &]; pltPtsTrain = ListPointPlot3D[ptsTrain, PlotStyle -> Black]; pltPtsExtrapolate = ListPointPlot3D[ptsExtrapolate, PlotStyle -> Blue]; pltTrue = Plot3D[Sin[x + y], {x, -1, 2}, {y, -1, 2}, PlotStyle -> Opacity[0.3], Mesh -> None, PlotLabel -> "Sin[x+y]", BaseStyle -> Directive[FontSize -> 24]]; Row[{ pltTrue, Show[pltPtsTrain, pltPtsExtrapolate, PlotRange -> All], Show[pltPtsTrain, pltPtsExtrapolate, pltTrue, PlotRange -> All] }] Final thoughts You can read Mathematica’s more complete guide on neural networks here. However, it’s bit long…. If you prefer to just read through the API pages, here’s a better list from Wolfram.
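One closing sketch that goes slightly beyond the tutorial (so treat it as a hedged suggestion rather than part of the original workflow): once you're happy with the trained network, you can persist it to disk in Wolfram's WLNet format and reload it later, in another notebook or on another machine. This assumes the trainedSubNet object from the regression example above.

(* Save the trained network to disk *)
Export["trainedSubNet.wlnet", trainedSubNet]

(* Later, or elsewhere, load it back and evaluate it on a new input point *)
reloaded = Import["trainedSubNet.wlnet"];
reloaded[{0.3, -0.5}]

Because the file stores the full architecture plus the learned weights, the reloaded network behaves exactly like the one you trained, which is handy when training takes a while and you don't want to repeat it.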
https://medium.com/practical-coding/youve-probably-missed-one-of-the-greatest-ml-packages-out-there-ff22549acee3
['Oliver K. Ernst']
2020-12-29 19:40:02.175000+00:00
['Coding', 'Programming', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
Quadrant December Update: Quadrant’s First Year, AMA Recap, A Look at the Year Ahead
As we prepare to say goodbye to 2018, we reflect on the month of December and also the amazing year Quadrant has had. From our successful token sale to our Mainnet launch, we have come a long way in 12 short months. Here are some highlights from a good month and a great year: 2018: The Year that Was Develop and Deliver — that has been Quadrant’s guiding principle throughout the year. Ever since our customers came to us with pressing data needs in the fall of 2017, we have been focused on building a solution that allows these users to trust and utilise their data in new and beneficial ways. From an idea to use the blockchain to provide a better solution to verify and map data, a strong business has grown and flourished. In July, Quadrant completed its token sale, and turned its focus to the commercialisation of our Testnet and the launch of our Mainnet. Keeping our promise to the community to solve real-world issues facing the data economy was, and is, of the utmost importance. With that in mind, in November we successfully launched the Quadrant Mainnet. This means that Quadrant is now up and running and ready to meet the needs of our customers. We are a strong business with top-notch technology and a growing team, and the support of the community has had a great deal to do with that success. From an idea to a functioning Mainnet and customers, all in one year. Wait until you see what 2019 has in store. Recap of Quadrant’s December 6 AMA On December 6, Quadrant CEO Mike Davie hosted an AMA to answer a wide range of questions posed by the community. This came on the heels of a busy November that saw the launch of the Quadrant Mainnet and the unlocking of our token, as well as the company’s presentation at a major event sponsored by the Government of Singapore’s Info-communications Media Development Authority (IMDA). Mike detailed the company’s current business priorities and upcoming near, as well as long-term milestones on our roadmap. He detailed some of the business reasons behind our decision to unlock the token when we did, and explained Quadrant’s key goals for the coming year (hint: adoption, adoption, adoption). Mike finished the session with a note of thank you for the continued support of the community. We were, as always, encouraged by your involvement and we will continue working hard to build a big data ecosystem that offers the solutions your businesses and organisations need. You can read the full AMA recap here. What’s in Store in 2019 With our Mainnet up and running, Quadrant is laser focused on growing our sales team and achieving enterprise partnerships in the first half of 2019. Just as we hit our milestones for our token sale and our Mainnet launch, our energies are now fully concentrated on winning new customers in the new year. To that end, we have hired a new head of sales, Senior Sales Consultant Glenn Harrison, who brings decades of experience working with enterprises in the data space. The fact that he has chosen to join Quadrant speaks to the strength and promise of our platform. We are delighted to have Glenn as part of the team and we are confident he will play a key role in meeting our adoption goals. You can read an introduction to Glenn here. Finally, 2019 will be a year spent building toward another big milestone: the launch of our Guardian Nodes program. While the distribution isn’t scheduled until 2020, the groundwork is already being laid. Expect to learn more about the specifics of the program as the new year unfolds. 
Come Get the Most out of Data with Quadrant If you are an organisation that requires multiple location datafeeds to meet your business needs and optimise your products or services, we invite you to set up a conversation with one of our sales professionals. Quadrant’s expertise in tracking the data supply chain provides access to auditable location datafeeds that you can have confidence in. Our blockchain platform can help you gain the insights that will help you solve critical problems and scale your business. You can set up a conversation here. To all the members of our community, thank you again for your continued support. It has been an amazing year, and we can’t wait to move forward into 2019 together. All the best, The Quadrant Protocol Team
https://medium.com/quadrantprotocol/quadrant-december-update-quadrants-first-year-ama-recap-a-look-at-the-year-ahead-d45b27da0a59
['Nikos', 'Quadrant Protocol']
2018-12-28 11:56:33.025000+00:00
['Community Update', 'Big Data', 'Data Science', 'Data', '2019']
Album Review: Idiot Prayer-Nick Cave Alone at Alexandra Palace // Nick Cave
North London’s Alexandra Palace, also known as “The People’s Palace,” the sister location to South London’s Crystal Palace, opened on Queen Victoria’s 54th birthday in May 1873. Just 16 days later on the morning of 9 June “Ally Pally,” as it is affectionately called, broke out in flames caused by what the Sydney Morning Herald called “the heedless conduct of a plumber.” The local fire brigade couldn’t handle the blaze alone and the Metropolitan Fire Brigade was called into assist. Not only were the large, open auditoriums and corridors inside The People’s Palace a problem that too easily caused the fire to spread quickly, the edifice was precariously situated atop Muswell Hill, and the arduous seven-mile trek to the top of the landmark overcame the 120 firefighters and horse-drawn and manual engines tasked with dousing the flames. By 3 p.m. Ally Pally would be gutted and three people would ultimately be dead as a result of the inferno. Of course Alexandra Palace was rebuilt and reopened two years later. Over the years the structure has become home to recreation and entertainment events as well as serving as refugee camp for Belgians during the First World War. In 1980, the curse of irony struck and Ally Pally suffered yet another instance of “heedless conduct” as it were, this time during a jazz festival, and was claimed by a second fire. The Palace reopened in 1988 and has remained home to countless sporting events, concerts, and performing arts to this day. In 2020, as the world metaphorically burns by the light of a relentless pandemic amidst “heedless conduct,” one has to wonder if singer-songwriter-punk rock’s ex-patriate, and the world’s most-likely singular living specimen of a vampire-Nick Cave, was aware of the venue’s cursed, ironic history as he, without a trace of irony, chose the location for his solo piano performance Idiot Prayer-Nick Cave Alone at Alexandra Palace. After the pandemic forced the Bad Seed to cancel his European and North American tours for 2020, in a stroke of genius, as the world ironically called for everyone’s unanimous isolation, in a “hold my beer” move, Nick Cave released the streaming concert event Idiot Prayer-a title taken from a song of the same name on 1997’s The Boatman’s Call-featuring only himself alone at a piano inside London’s creepy, Victorian-chic Alexandra Palace. As I said, not an ounce of irony. This week, Idiot Prayer was finally released as an album. If you’ve attended any “Conversations With Nick Cave” events in the past couple of years, you’ll know that Cave has forged a newfound, intimate bond with his fans, right up to and including putting himself in the dangerously uncomfortable position of inviting them on stage and giving them permission to ask him literally anything they want. During those events, between questions awkward and profound, Cave performs surprisingly stripped down solo piano versions of classic Bad Seeds and Grinderman tunes. “I loved playing deconstructed versions of my songs at these shows, distilling them to their essential forms,” Cave says in Idiot Prayer ‘s liner notes. “I felt I was rediscovering the songs all over again.” He goes on to say that the self-reflective silence of the pandemic inspired him to record and film these songs “as a prayer into the void,” resulting in “a souvenir from a strange and precarious moment in history” that is Idiot Prayer. 
Loaded with customary Bad Seeds standards “Stranger Than Kindness,” “The Ship Song,” “Into My Arms,” and “The Mercy Seat,” the album shines brightest with surprises like “Sad Waters,”-a song only previously available this beautifully on the concert DVD The Abattoir Blues Tour and the very rare Secret Life of the Love Song & the Flesh Made Word -and”He Wants You” from the weakest Bad Seeds album Nocturama, and isn’t necessarily considered a fan favorite, but is a delightful moment that mixes up expectations. I daresay, ironic. The Grinderman tracks “Palaces of Montezuma” and “Man In the Moon” are even thrown into the mix and translate magnificently as solo piano numbers. By the way, “Palaces of Montezuma” might be one of the finest love songs ever, chock full of boyish romantic intent, complete with ghastly imagery in one of the greatest lyrics in rock ‘n roll: “the spinal cord of JFK wrapped in Marilyn Monroe’s negligee, I give to you,” Cave desperately pleads for his lover’s affections. That’s about as punk rock as you can get. And that’s not all! Idiot Prayer is also the first time we hear the tracks “Spinning Song,” “Waiting For You,” and “Galleon Ship” from last year’s Ghosteen played live, period. The biggest surprise is the one new track titled “Euthanasia” which lyrically moves away from the death and afterlife themes of the past couple of Bad Seeds albums. In this song, Nick gets back to what he’s best at, the good old fashioned love song: “When you stepped out of the vehicle / And attached yourself to my heart / It was a kind of dying / A kind of dying / Dying of time.” See what I mean. I’m sure its macabre title will give pause to lots of little old ladies at many a wedding in the years to come. It’s fascinating to hear these tunes played only on piano, many for the first time, like the near melody-less “Higgs Boson Blues.” Here, the song comes across as a prescient and ominous meditation amidst these “strange and precarious times.” And “Jubliee Street,” when played live with the band is worth whatever you paid for admission, doesn’t hit quite as hard in solo piano form, yet its bareness ironically reveals the song’s desperate, violent edge. Having seen the streaming event myself earlier this year, and while Cave had the best intentions, Idiot Prayer is far more interesting to listen to than to watch. But if you’re up for it, it’s certainly worth checking out and the solo concert is scheduled to play in select cinemas across Europe and the UK in January. My favorite track on Idiot Prayer is “Papa Won’t Leave You, Henry,” a song originally housed in a cacophony of music as it lyrically spirals into depraved madness on the album Henry’s Dream: “I entered through, and the curtain hissed / Into the house with its blood-red bowels / Where wet-lipped women with greasy fists / Crawled the ceilings and the walls.” Here, in solo piano, the song plays hauntingly and far more unsettling than Henry could have ever dreamed. I’ve been saying for years that Nick Cave should release a solo album, and now that that day is here, I truly only have a ubiquitous plague to thank for it. Who says nothing good could come from “heedless conduct,” isolation, and sanitizer-chapped hands? Even though the intent behind Idiot Prayer is the furthest thing from irony, I’m sure even un-ironic Nick Cave can see the delightful irony in such a wonderful thing to come from something so terrible in place that’s had its share of troubles. 
Idiot Prayer-Nick Cave Alone at Alexandra Palace is now available on CD, vinyl, streaming, and digital download. 9/10 Words by Lucas Hardwick.
https://medium.com/the-indiependent/album-review-idiot-prayer-nick-cave-alone-at-alexandra-palace-nick-cave-ab791e50164c
['Lucas Hardwick']
2020-12-01 15:20:59.111000+00:00
['Covid 19', 'Music Review', 'Nick Cave', 'Album Review', 'Music']
Strict Types in PHP
In December 2015, PHP 7 introduced scalar type declarations and with it the strict types flag. To enable the strict mode, a single declare directive must be placed at the top of the file. This means that the strictness of typing for scalars is configured on a per-file basis. This directive not only affects the type declarations of parameters, but also a function’s return type. The good thing about declaring a PHP file as strict is that it applies ONLY to the current file. It ensures that this file has strict types, but it doesn’t apply to any other file in the whole project. It allows you to migrate, step by step, from non-strict code to strict code, especially for new files or projects.
Strict types affect type coercion
Using type hints without strict_types may lead to subtle bugs. Prior to strict types, int $x meant $x must have a value coercible to an int. Any value that could be coerced to an int would pass the type hint, including: a proper int (example: 42 -> 42), a float (example: 13.1459 -> 13), a bool (example: true -> 1), a null (example: null -> 0), and a string with leading digits (example: “15 Trees” -> 15). By setting strict_types=1, you tell the engine that int $x means $x must only be a proper int, with no type coercion allowed. You have the assurance that you’re getting exactly and only what was given, without any conversion or potential loss.
Who should care about this “strict type” line?
Actually, declare(strict_types=1); is more for the reader than for the writer. Why? Because it explicitly tells the reader: the types in this current scope (file/class) are treated strictly. The writer just needs to maintain that strictness while writing the expected behavior. That said, as a writer, you should care about your readers, which also includes your future self. Because you are going to be one of them.
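To make the coercion rules above concrete, here is a minimal, hedged sketch of the difference between the two modes; addTen is just an illustrative helper, not something from the article or from PHP itself.

<?php
declare(strict_types=1); // must be the very first statement in the file

function addTen(int $x): int
{
    return $x + 10;
}

var_dump(addTen(5));   // int(15): a proper int passes in both modes
// addTen("5");  With strict_types=1 this throws a TypeError;
//               without the directive, "5" would be coerced to int(5).
// addTen(7.9);  Same story: TypeError here, truncation to int(7) in coercive mode.

Drop the declare line (or set it to 0) and rerun the commented calls to see the coercive behavior described above.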
https://medium.com/swlh/strict-types-in-php-d4166bd25394
['Jose Maria Valera Reales']
2020-08-11 08:32:52.938000+00:00
['Software Engineering', 'PHP', 'Programming', 'Software Development', 'Web Development']
Do the Rosling
Anyone who has learned Tableau online (via Udemy or any other similar platform) did the Hans Rosling chart as an exercise. So did I, and I felt it was time to do justice to that old visualization. Bear with my story on how I designed this dashboard, and you can see that early viz of mine at the end of the post. Despite all the bad news we face in media’s coverage of our modern times, the world is getting better in so many ways. This visualization shows how life expectancy has risen all over the globe in the past five decades, while the fertility rate has declined. The causes of these trends are rooted in several factors, but one thing is for sure: we’re living longer than any generation before us. Click here for the interactive version. Final dashboard design in Tableau I might not be the most technical Tableau specialist in the world, but my main drive is conveying straightforward messages in a clear and minimalist style. When I’m doing visualizations for fun, design is my №1 principle to follow. I usually combine Tableau with Adobe Illustrator to pre-design the background and add it back to the software as an image. This way I have better control over the look and feel of the dashboard and don’t have to load all the texts and images one by one. Another reason I use Illustrator is that I fell in love with the Futura typeface after listening to a TED talk about its rise in popularity after being used on the Apollo 11 mission. In fact, if I had to choose only one font to use till the end of my life, I’d say Futura in the blink of an eye. Unfortunately, Tableau only supports a couple of web safe options and Futura is not one of them.
https://medium.com/starschema-blog/do-the-rosling-9b350d30acaf
['Judit Bekker']
2020-02-27 14:36:52.570000+00:00
['Data Visualization', 'Dataviz', 'Social Change', 'Hans Rosling', 'Tableau']
Managing a Software Team in Sri Lanka with Dirk Jan — Conxillium
Managing a Software Team in Sri Lanka with Dirk Jan — Conxillium Winning practices for managing a successful software offshore team. Dirk Jan van Kessel, Software Development Manager for Conxillium Relocating across the world to build a team, develop software and travel. We spoke with Dirk Jan van Kessel, Software Development Manager for Conxillium, to discuss why Sri Lanka is the perfect place for him, how he manages his team and what he finds interesting about running a software team that is remote. Q: Could you start by telling us about Conxillium and about your role? At Conxillium, our goal is to develop, commercialize and support a broad range of software for the local government in The Netherlands. The applications pool is diverse, from developing software for election systems, civil affairs to tax applications and workflow management systems etc. We partnered with Gapstars in 2016 to scale up and build an extended team of software developers with varied expertise. Today, our team consists of 12 Senior Engineers working at the Gapstars tech centre in Sri Lanka. My role mainly revolves around building bridges between our teams in The Netherlands and Sri Lanka. I travelled to Sri Lanka 4 years back as an in-house Software Manager to lead the development team. I’m more involved in team management, developing a high-performance environment, and keeping the team motivated. It’s my job to extend the Netherlands teams with Developers from Sri Lanka and make sure the distributed teams serve the product managers to the best of their abilities. Q: With even more companies looking to set up an offshore development team. Could you share the management practices needed to lead a successful offshore team? When it comes to managing distributed teams, especially if team members hail from different cultures, mutual understanding doesn’t always come naturally. Hence investing in some real face-time at the start of your cooperation is essential. Build trust from the start. Personally spend some quality time in selecting your offshore developer team. We practice agile methods within our team. The Agile framework provides teams with the freedom to organize themselves and manage their own work. Which cuts out unnecessary micromanagement. Another practice is to have a clear communication discipline. Incorporate a reporting structure, meeting schedules, communication tools, and team roles at the start. Do: face calls. Don’t: chat and email. Select the right tools to work with. The Target process is a good tool when it comes to tracking and reporting. Microsoft teams for collaboration. Q: What type of skills do you need to thrive in this role? First, I believe you need to be a leader more than a manager. Leading remote teams requires managers to develop unique leadership skills to engage remote workers and maintain effective collaboration. Listen to your team. Provide team members with the space to express. Creating a “bottom-up” culture within the team. This is only possible if a strong layer of trust is developed. We are allowed to make mistakes and we learn from them. Communicate openly and honestly. Share what’s really going on. Provide support whenever required but empower them to be responsible for the result. Q: What are your favourite things about Sri Lanka? Culture, conversation, and community? Believe it or not, I consider myself a Sri Lankan now. Sri Lanka is definitely a very interesting place to live and work. 
When I first came here, I was surprised at the level of hospitality and friendliness of the people in Sri Lanka. Everyone treats you with respect and you’re made to feel a sense of belonging. The Conxillium group is recognized by Gapstars as a “Star Partner” for achieving the highest standards of innovation and performance in remote development. One of the keys to its success is its adoption and practice of agile methods to build engagement.
https://medium.com/life-at-gapstars/managing-a-software-team-in-sri-lanka-with-dirk-jan-conxillium-cc31bd39727e
['Jonathan Francis']
2020-11-16 06:02:05.881000+00:00
['Remote Working', 'Software Development', 'Agile', 'Startup', 'Netherlands']
The Next Wave of the Digital Economy — Promises and Challenges
By Irving Wladawsky-Berger “The next wave of digital innovation is coming. Countries can welcome it, prepare for it, and ride it to new heights of innovation and prosperity, or they can ignore the changing tide and miss the wave,” writes Robert Atkinson in The Task Ahead of Us. Atkinson is founder and president of the Information Technology and Innovation Foundation (ITIF), a think tank focused on science and technology policy. We’re now entering the third wave of the digital economy, says Atkinson. The first was based on personal computing, the Internet, Web 1.0, and e-commerce. The second brought us Web 2.0, big data, smartphones and cloud computing. The emerging third wave promises to be significantly more connected — including higher bandwidth and a wide variety of devices; more automated — with more work being done by machines while integrating the physical and digital worlds; and more intelligent — leveraging huge volumes of data and advanced algorithms to help us understand and deal with our increasingly complex world. “Building and adopting the new connected, automated, and intelligent technology system will lead to enormous benefits globally, not least of which will be robust rates of productivity growth and improvements in living standards. Moreover, these technologies will help address pressing global challenges related to the environment, public health, and transportation, among others.” We’re in the early stages of this third wave. 5G, IoT, robotics, AI, and other promising technologies are being embraced by early marketplace adopters, but their full-scale impact is still five to 10 years away. We’re in a period not unlike the late 1980s, when it was clear that IT was on the brink of a major transition, but the Internet revolution didn’t arrive until the mid-1990s. (Image source: HBO-VICE News) According to Atkinson, this transition will be more complicated and take longer to come to full fruition than the first two. In both previous eras, “consumers needed only Internet-connected devices, and companies needed little more than websites (and to be sure, logistics changes and new payment systems). Moving forward, progress will depend on a much more complex reworking of organizations’ production systems and business models — not just within organizations, but between them.” Moreover, beyond the technical and organizational challenges, one of the biggest risks standing in the way is the rising neo-Luddite opposition to the ongoing digitization of the economy and society. “Implementing the next wave of digital technologies will be much more difficult from a sociopolitical perspective than it was during the last two digital transformations because there is broader and stiffer opposition today. In past digital transitions, the technology industry was largely seen as a force for positive societal change: Computers helped organizations become more productive, and the Internet spread access to knowledge. Today, by contrast, ‘Big Tech’ is increasingly demonized and challenged on a host of issues, from privacy to job disruption.” Given its compelling benefits, the next digital wave will largely be inevitable, says Atkinson. But its support need not be based on unrealistic optimism. There will be serious challenges, as has been the case with technological transformations over the past two centuries, including cybersecurity and the need to provide transition assistance for displaced workers.
As noted in a recent McKinsey report on the future of work, “while there may be enough work to maintain full employment to 2030 under most scenarios, the transitions will be very challenging — matching or even exceeding the scale of shifts out of agriculture and manufacturing we have seen in the past.” “But societies have managed to address similar challenges in past transformations, and there is no reason to believe they cannot do so again going forward, especially if more of civil society shifts from opposing technology implementation to supporting proper rules and governance frameworks,” writes Atkinson. Markets and firms will play the biggest role in developing and implementing next-wave digital technologies and their ensuing organizational transformations. But governments have a major role to play. They need to make the next-wave digital evolution a central policy goal. More specifically, governments should enact policies that support and enable digital transformation; remove institutional and regulatory barriers to implementation; and encourage citizens to embrace digital evolution. Here’s a closer look at each of these policy recommendations. 1. Support policies where the benefits are largely unequivocal Such policies include “supporting R&D, digital skills, and digital infrastructures; transforming the operations of government itself; embracing global market integration; and encouraging the transformation of systems heavily influenced by government (e.g., education, health care, finance, transportation).” As I read this list of policies, I was reminded of the National Innovation Initiative (NII), a 2005 report based on 15 months of intensive study and deliberations — which I was part of — on the changing nature of innovation at the dawn of the 21st Century, and what it would take for the U.S. to effectively compete and collaborate in an increasingly interconnected world. The findings and recommendations in the NII report were organized into three broad categories: Talent: The human dimension of innovation, including knowledge creation, education, training and workforce support. Investment: The financial dimension of innovation, including R&D investment; support for risk-taking and entrepreneurship; and encouragement of long-term innovation strategies. Infrastructure: The physical and policy structures that support innovators, including networks for information, transportation, health care and energy; intellectual property protection; and business regulation. It’s not surprising that calls for policies supporting talent, investment and infrastructure remain as prominent today as they were 15 years ago. While we may already be in the third wave of digital technologies, their transformational impact on economies and societies is still in the early stages. 2. Remove institutional and regulatory barriers But the bloom is off the rose. In the earlier waves, we mostly viewed digital technologies as enhancing communications, disseminating knowledge, and improving productivity. Now, digital technologies are also viewed as threatening privacy and security, providing access to polarizing and hateful information, and seriously disrupting jobs and the well-being of many workers.
“The most strident opposition to digitally driven economic progress comes from a growing, vocal minority that seeks to ban or heavily regulate emerging digital technologies such as robots, autonomous vehicles, and biometrics to dramatically limit their adoption.” As has long been the case, it’s a matter of balance and trade-offs. We need policies that support the positive benefits of digital technologies while addressing their negative impacts. Overly stringent data privacy policies will hamper the potential advances that AI might bring in medicine, drug design and public health. “For example, giving users the right to opt out of data collection (rather than mandating they opt in), will protect privacy while limiting negative effects on digital innovation.” While eschewing policies that limit digital advances, policymakers should actively target illegal or unethical activities. For example, policies that seek to regulate negative activities — “such as ‘revenge porn,’ spam, financial fraud, hacking, ID theft, malware, and Internet piracy — do little or nothing to limit digital transformation (and in most cases advance it), but they achieve important social goals.” 3. Encourage citizens to embrace digital evolution Finally, the trap of anti-technology groupthink will seriously limit and slow down digital transformation. “Government officials and other elites need to embrace and advance an optimistic narrative about how digital transformation will lead to increased living standards and better quality of life, and actively counter self-promoting fearmongers seeking to instigate techno-panics.” Anti-technology narratives blame digital innovations for a number of societal challenges, including “inequality; loss of jobs and worker rights; addiction; surveillance; algorithmic bias and manipulation; cybercrime; social media coarseness and polarization; lack of diversity; political bias; concentrated economic and political power; and tax evasion. The truth is, digital technologies are not the principal cause of most of these challenges; and where they contribute, measured responses can often provide effective solutions without harming innovation.” “At the end of the day, nations’ success in embracing next-wave digital technologies will depend on a combination of awareness and strategic action,” writes Atkinson in conclusion. “Each nation needs to ask itself where it stands on both fronts. Do policymakers truly understand the technologies and competitive strengths, weaknesses, opportunities, and threats they present?… In taking strategic action, are nations focused on learning from global best practices in the wide range of policy areas affecting next-wave digital technologies, and then ensuring they adapt those lessons to fit the realities of their own nations? Getting this right will have a significant, positive impact on the living standards and quality of life of future generations.”
https://medium.com/mit-initiative-on-the-digital-economy/the-next-wave-of-the-digital-economy-promises-and-challenges-ff0d245d17
['Mit Ide']
2019-05-07 14:46:31.507000+00:00
['AI', 'Technology', 'Automation', 'Innovation', 'Digital Economy']
Feel-good Marketing
Feel-good Marketing Create a Marketing Plan that Doesn’t Lead to Burnout I’ve been working in marketing for years, and the more I think about it, the more I realize how much I don’t really like marketing. Or, at least I don’t like the kind of marketing that I see everywhere, invading my social feeds and inbox on a daily basis. Most of it feels…well…not great. It’s why I’ve struggled so much to market myself. Because while I can get behind a lot of these tactics for someone else’s product or service, I have a really hard time doing those same things for myself. It took a long time for me to admit that to myself, let alone to the world. But if I’m not marketing myself in a way that feels aligned, then all the marketing in the world isn’t going to bring me the kinds of clients I actually want to work with. After a lot of consideration of the way people market to me, the way I like to be marketed to, and how I’ve marketed things in the past, I came up with a way of marketing that feels good to me (hence, “Feel-good Marketing”). I’m still in the early days of actually implementing these ideas, so I can’t say yet how effective they’ll be. But I can say that I’m not feeling drained by the idea of marketing myself anymore. And that’s a win in and of itself. Marketing That Doesn’t Feel Good Let’s start with the marketing methods that don’t feel good. If you’re trying to figure out your own feel-good marketing plan, this is where I’d suggest you start. Look at all the marketing activities that leave you feeling “meh” or “bleh” as a starting point. Disregard whether they work or not for now. This is all about how these methods make you feel. Your list might vary from mine, and that’s 100% okay. The Bait and Switch I responded to a free webinar ad recently that promised to reveal a marketing secret that sparked my curiosity. I entered my email address and sat through a 30-minute video that never revealed what the ad promised. Instead, it invited viewers to book a free call in order to find out. Curiosity still piqued, I booked a call. Instead of finding out the “secret” marketing method, I ended up on a sales call (which was expected, but at least deliver on the promise) that turned into the person attempting to make me feel insecure about what I was doing and then trying to guilt me into investing tens of thousands of dollars. Nope, no thank you, we’re done here. And I still never got the “secret.” (I wonder if there even was one…) Dozens of Super Salesy Launch Emails If you’ve ever signed up for a newsletter from a coach, consultant, or online educator, you’ve likely been subjected to at least one launch email sequence. Any time a company releases a new program or offering, they send out anywhere between five and fifteen (or more) launch emails, designed to grab your attention and get you to buy. There’s nothing inherently wrong with this strategy, and obviously it works or you wouldn’t have thousands of people using it. But there’s a big difference between launch sequences that provide value and those that are just focused on sales. If you’re going to send a dozen emails, make sure that each of those provides value to the person reading it. If it’s just telling them how awesome your new offer is, you’re likely going to turn off way more people than you turn on. Being “Authentic” (But Not Really) Authentic is one of those buzzwords that has gotten way overplayed, to the point it’s almost meaningless. We have to be “authentic” in everything we do online. Well, duh. 
Antonyms of authentic include untrustworthy, false, corrupt, and unreliable. None of us want any of those words associated with our products or services, so of course we need to be authentic. But the way that many people show up in an “authentic” way doesn’t really ring true. They’re still painting a pretty rosy picture of their life and business. The word is just played out at this point. Authenticity should be the bare minimum, not something that we strive for. Shaming and Making People Feel Insecure I already talked about the business coach who tried to shame me for not spending tens of thousands of dollars with them. The only thing it accomplished was that I will never spend money with them. But there are many marketers who use more subtle versions of this. Their sales pages are focused on making people feel like they’re lacking something. They aim to drum up all the insecurities a person might have in order to convince them that the only way out is to buy their product. Again, this is one of those tactics that definitely works. But I’m not crazy about preying upon people’s insecurities and doubts. Spending a Ton of Money on Ads Don’t get me wrong, I’m not anti-advertising. But ads can be used as a replacement for building a genuine audience. Need more sales? Spend more on ads! It’s a quick and easy solution. But if you’re not also building an audience organically, your success is forever tied to your advertising budget. Convincing People They Absolutely Need Whatever You’re Selling Selling people something they want or need is admirable. But convincing people they need what you’re selling when they don’t isn’t. You shouldn’t need to convince people they need what you’re selling. That doesn’t mean you don’t offer it to them (they may decide they need it after all). But you want customers and clients who are excited about what you’re selling, not who regret their purchase three days later because they realized they don’t need it after all. Giving Away Your Best Content This is a popular marketing trope in the infopreneur world: that you should give away your best content in order to attract customers. But if you’re giving away your best content, why would someone buy your other content? And if they do, they might be disappointed in the value they get. Because it won’t measure up to the freebie you gave them first. You should give away good content for sure. But keep your very best for your paying clients. Following the Latest Marketing Trends I’m a big fan of experimenting, especially when it comes to marketing. But that doesn’t mean you have to jump on every single marketing trend that comes along. Video is huge right now, especially live video. Personally, I hate doing videos. I’ve tried to force myself to do them. And I’ve done a few. But they’re uncomfortable and I end up procrastinating for hours (days…weeks…) and waste a lot of time and energy. So I gave myself permission to stop doing them. Does that mean I’ll never ever do a live video? Of course not. If inspiration strikes and I feel drawn to making a video or going live, I’ll do that. But I’m not going to hinge my marketing strategy on doing it. Because if I’m uncomfortable and don’t like it, that’s going to show and deter people from wanting to work with me. Try the trends if you feel called to try them. But don’t jump on every single trend that comes along because you feel like you should. A Marketing Plan that Feels Good There might be other marketing strategies that don’t feel good to you. Make a list. 
Then start thinking about what does feel good. Here’s what I came up with: Honesty About What You’re Selling If you promise something to your customers, deliver on that promise. Being honest about what you can — and can’t — do is key to marketing that feels good. That also means ruling out customers who aren’t a good fit or wouldn’t benefit from what you’re selling. Don’t be afraid of not being all things to all people. Find the people who are really going to benefit from what you have to offer and forget the rest. Emails That Feel Like Conversations Email marketing has been my jam for a long time. I can double open rates and click-through rates in my sleep. But just getting opens and clicks isn’t all there is to effective email marketing. Connecting with the people who have explicitly given you permission to communicate with them is vital to long-term success. That’s why writing and sending emails that feel like conversations is a better strategy if you’re trying to build a business that lasts. Instead of constantly striving for that higher open rate or a few extra clicks, focus on building a relationship with each and every one of your emails. Being Vulnerable Authenticity is the bare minimum. Vulnerability is more like the gold standard. Especially if what you’re selling is yourself. Showing your potential clients the darker side, the nitty-gritty, and the less-than-fun parts of what you do is a way to form connections. Telling them about the times you failed, the times you were wrong, the times you’ve struggled — all of those are ways to show them that what you’re offering is genuine. Very few people want to work with someone who’s perfect. They don’t want to work with someone who’s never struggled with the same issues they have. They want to know that the person they’re turning to for help actually understands where they’re at. And besides, no one is perfect. Anyone claiming to be so (even in one area) is lying (either to you or to themselves). Uplifting People and Giving Them Hope Obviously, we create solutions that solve problems, no matter how big or small those problems are. But our goal should be to give people hope about their problems, not make them feel worse and like we’re the only people who can help them. We should show people the options for how we can help them, but that doesn’t mean guilting or shaming them into buying. Uplift them, give them hope, and make them want to work with you out of joy, not out of fear. Organic Connections First Your goal with any marketing activity you do should be to build organic connections with people. People want to work with people they like. They want to work with people they admire and feel connected to. That means showing up consistently and being real about where you’re at, what you do, and what you can do. It means having conversations, being vulnerable when you feel called to, and genuinely wanting to help people. And organic connection isn’t built through advertising or paid methods. It can be reinforced with those methods. But it can’t be built from there. So spend time building an audience organically before you worry about paid marketing. Calling in Soulmate Clients Don’t let the term “soulmate” put you off as being too woo-woo. A soulmate client is simply someone who you’re excited to work with and who’s excited to work with you. When both parties are thrilled with the work you’ll do together, you’ll end up with better results.
The way you call in those soulmate clients is by building connections, being vulnerable, and being honest about what you can do for them and how you’ll do it. Feel-Good Marketing Should Feel Good for Everyone The key to feel-good marketing is that it should feel good for everyone involved. That means it should feel good to you as a marketer as well as for those you’re selling to. There needs to be balance. Taking time to build a marketing plan that you enjoy and that connects you with the perfect clients is worthwhile. It may take some trial and error and definitely takes commitment and planning, but overall it creates a marketing strategy that doesn’t leave you feeling burnt out.
https://medium.com/swlh/feel-good-marketing-16c43155acbd
['Cameron Chapman']
2020-12-04 07:06:58.148000+00:00
['Marketing', 'Business', 'Social Media', 'Clients', 'Social Media Marketing']
Is your advertising risk-assessed?
Here’s a true story that happened to someone I know. Ruth gets a windfall Meet Ruth — we’ll call her that. Ruth is a widow with a daughter and a new grandson. She works a full-time job on a fixed salary. She recently inherited a nice lump sum — say $100,000 — and wants to invest it well. To go towards retirement, but also the occasional splurge. Of course, Ruth goes to a bank. Ruth asks for the highest-interest investment at the highest risk level that she qualifies for. She puts her money down in a tax-free investment vehicle and off she goes. Happy that she has the extra support and can rest a little easier. Looking forward to the 12% returns. Time passes. Ruth goes to work on weekdays and visits her grandson and daughter on weekends. She sleeps just a little better every night. Her morning coffee tastes just slightly better every day. She worries just a little less. 6 months later, Ruth plans a getaway Six months later, Ruth decides to plan a little getaway. She wants to treat her daughter and her husband and baby grandson to a family vacation. Something simple, not too far away. She does the math and goes to the bank with a big smile on her face. $100,000 at 12% annually after 6 months — that’s a healthy $6,000. She won’t even need all of her earnings, she thinks. Here is where it all turns. Of course, that’s not how investment works. To her dismay, the banker explains that her portfolio has historically returned 12% over the years, but the return has never been guaranteed. In fact, in the past 6 months she’s lost money. Whether the bank’s explanation was adequate or whether Ruth read her contracts appropriately are issues for another day. What I’d like to do is make a greater point about investments, mindsets, and how all this relates to advertising. So how does this relate to advertising? If Ruth’s line of thinking sounds ridiculous to you, consider this. How many times have you been in a boardroom or reporting to your boss 3, 6 or 12 months after they signed off on a campaign budget and been faced with the question: “I thought you said we would see a sales lift of X% by now. What happened?” You could say that advertising does not always have a linear effect on the bottom line. You could say that advertising’s effect is compounded over the years, as is your brand’s. You could say that advertising is but one small piece of what makes a good marketing mix; product, people, places, and so much more play an integral role in ROI calculations. True, but. The crux of the issue here is: we too often predict advertising performance based on historical data. In the investment community, investors are mandated to explain to clients that past performance is no guarantee of future results — but even still, that communication fell apart with Ruth. And in our industry, it’s not even mandated. Worse yet, we tend to think of advertising as a spend, which leads us to make bad decisions. We think in “campaigns” — spend on advertising for 2 months, then stop for 5, then on again for another 3. Imagine if you invested like this? So I ask, if advertising is an investment into a business — I think we can agree that it is, or at least ought to be — why not treat it as such? As a thought experiment … how about managing media budgets like investment portfolios? Choosing high, medium, and low risk “assets” — read media channels — based on factors like our company’s risk appetite, media savvy, buyer journey, and product complexity. Because just like the stock market, the media landscape is in constant flux.
New platforms emerge and fall. Consumers are diverted by new tech, new content and new toys. Publishers pivot business models constantly. Isn’t it time to stop pretending that we know the future? And infuse our advertising choices with intelligent risk assessment? We marketers need to take the lead to educate our colleagues about what data we have, what it means, and how it can be used to project — not predict or foresee — potential revenues. Can we do it? What do you think?
https://medium.com/empathyinc/is-your-advertising-risk-assessed-3637bc90ee25
['Mo Dezyanian']
2018-11-07 16:18:15.490000+00:00
['Investing', 'Business', 'Marketing', 'Advertising']
Daft Punk Releases 6 Minutes of Samples From TRON: Legacy Soundtrack
So it’s Monday and you’re back at the office/Fortress of Solitude/evil science lair of evil preparing for the long push until next week when you get to kick back for Thanksgiving. Perhaps this will help. Disney just dropped 6 minutes of score samples from the upcoming TRON: Legacy, and I must say that I find them quite awesome. As Third has pointed out before, Daft Punk is what some scholars have referred to as “fucking awesome,” and the six minutes of audio splendor presented here are no exception. I’ve had it running on repeat all morning. Thanks to Consequence of Sound for the news.
https://medium.com/nerd-news-reviews/daft-punk-releases-6-minutes-of-samples-from-tron-legacy-soundtrack-840d42d4ec31
['Wade Tandy']
2017-06-26 21:21:38.738000+00:00
['Daft Punk', 'Movies', 'News', 'Disney', 'Music']
5 of the Best Themes for VS Code
Get Themes From the Extension Marketplace VS Code has a large number of themes contributed by the community. Users can browse these themes on the Visual Studio Code Marketplace in a web browser. There are hundreds of themes to select from, ranging from simple colour schemes to themes inspired by popular TV series. Each theme has a rating and a “Getting Started” guide to make life easier for you. Alternatively, you can select and install themes directly from the “Extensions” view in your VS Code window. If you find one you want to use, simply install it, restart VS Code, and the new theme will be available. (Image: installing a theme from the VS Code Extensions view) Let’s dive into our top five themes!
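For readers who prefer working from the terminal, here is a minimal sketch of the same install-and-activate workflow using the VS Code command-line interface and the user settings file. It assumes the code command is available on your PATH; the Dracula extension ID and theme name below are purely a hypothetical example, so substitute the ID and label of whichever theme you pick from the Marketplace.
# Install a theme by its extension ID (shown on the theme's Marketplace page).
# "dracula-theme.theme-dracula" is only an example ID.
code --install-extension dracula-theme.theme-dracula
# Confirm that the extension is now installed.
code --list-extensions
# Activate the theme by setting its display name in your user settings.json:
#   "workbench.colorTheme": "Dracula"
You can also switch between installed themes interactively via the Command Palette entry "Preferences: Color Theme", which previews each installed theme as you move through the list.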
https://medium.com/better-programming/10-of-the-best-themes-for-vs-code-e97ad80d2728
['Uditha Maduranga']
2020-07-27 15:29:06.683000+00:00
['Vscode', 'Software Development', 'Software Engineering', 'Programming', 'Software']
Ask (Another) Abortion Provider: Roe vs. Wade, 39th Anniversary Commemorative Edition
by Lola Pellegrino Nearly 40 years ago, abortion was legalized in the United States. To mark the occasion, Lola McClure, a registered nurse, interviewed Dr. Nancy Stanwood, an obstetrician/gynecologist, abortion provider, mother, and board member with the Physicians for Reproductive Choice and Health. Hello Dr. Stanwood, it’s wonderful to meet you today! I knew I would like you instantly when I saw that you were wearing a zebra print shirt under your lab coat; I thought, “Dr. Nancy Stanwood is cool.” I guess that I’ll start there: why is the only 100% true stereotype in medicine that people who work in reproductive health are the coolest super-smart people who have excellent senses of humor and are always clinically current and up to date on evidence? [Laughs] That’s a great question. I think there are a couple different pieces to that. I think those of us who feel prompted to help women in this way, and feel capable of doing this work and handling the controversy that comes with it, have a certain baseline balance and sense of humor. I think the second thing you asked — being up to date on evidence — I think all people are hopefully out there to be excellent doctors, no matter what we do, but I know with myself that I was raised with the five-P rule: Prior Preparation Prevents Poor Performance. Whoa. That’s why I’m in family planning! To plan. Because I think planning is good and healthy. Being prepared, doing the right thing, and doing it well are what matter. I think a lot of us [abortion providers] sense that extra need to do it twice as well as everybody else. It’s like those women in the ’70s and ’80s: to be able to do everything the boys can do but twice as fast. There still persists to this day almost 40 years after Roe this perception that any doctor who would do abortions on a regular basis — not the casual, four patients once a year, but those who make it a part of their integrated practice — that they must be quacks or bad doctors. There’s this stigma of the abortionist that — two generations later — still looms large. We feel like we need to prove all that much more that we’re caring, thoughtful, educated physicians who think carefully about what we do for our patients, how we counsel them, how we understand the incredible delicacy of this issue, and how we recognize the privilege it is to help women in this way. So I think that’s primary; but we’re also cool to begin with. [But] there is this presumption of guilty-until-proved-innocent in anything that we do, which is unfortunate. That’s part of why the stigma persists for women who have abortions, which is why some women don’t talk about it, which is how people can say “I don’t know anybody who’s had an abortion.” It’s like 20 years ago, nobody thought they had gay friends — well, yes you do, they’re just not out. Because of the stigma — it’s that whole thing all over again, silence equals death. The quieter it is, the less people recognize that it’s normal, that lots of people do this, that it’s an integrated part of medicine. It’s so hard for anybody to be out no matter what their position is — it’s hard for people to say that they had an abortion and to tell their friends. And it’s hard for providers to come out, because there’s that — let’s call it what it is! — threat of terrorism. Domestic terrorism, yes. Even on an interpersonal level, you don’t want to be “impolite” or bring up a “weird” subject. But then you think, what could be more normal? I feel like it’s a radical act to just speak plainly about it. 
There’s the question of how your work factors into your life, especially with pro-life friends and coworkers. How has the bigger “us versus them” manifested for you personally? How do you navigate what you do versus the fact that you have to, you know, live your life? Day by day I think: what is the venue? What are the upsides and downsides to talking about my work? Certainly, you know, I have colleagues and acquaintances who know what I do. But in the work sphere I carry the title, I do the work, and I don’t necessarily have to keep outing myself with people. It’s who I am and what I do at work. It’s more in the social sphere … you’re out meeting with some friends and you just want to have cocktails and eat some good food. So you don’t want to — invest the energy in your advocacy work in your downtime. But there are times when it feels like the right, necessary thing to do, especially if conversation is going in the direction of “pro life, pro choice” and people are saying crazy stuff. Wrong stuff! I feel obliged to speak up — but in my downtime, I don’t necessarily seek that conflict. Any moments that stick out to you? One thing that comes to mind was early — I was in residency and had just started moonlighting at Planned Parenthood. I was out with some friends, and the person sitting next to me asks, “oh, what do you do?” “I’m a doctor.” “Oh, what kind of doctor?” “I’m a gyn.” The next question he asked me: “do you do abortions?” I’m like, “yes!” And he was clearly quite bothered by that. We didn’t get into the philosophical discussion of why, but apparently he felt that he needed to know that. It was early on for me, but it showed me: “Oh! If you mention this during cocktails, weird things can happen.” My partner’s grandmother is the nicest lady, and this past Christmas we were preparing food and she said, “So. You know, the laws, they’re terrible. It’s going to be illegal again soon. It’s so bad. What can we do? How does someone help?” My answer for myself was — I threw myself into [being a nurse] but not everybody is going to be able to do direct service work. So I smiled and kind of … didn’t really have an answer. I had nothing to say to this helpful grandma! So, help: what should we do!? I think the first thing is to be an informed citizen. Because this is a very polarized and hot-button topic, there’s so much misinformation and propaganda out there that’s not accurate medically. Find sources of information that are reliable and fact-based — I’ll give a shout out to the Guttmacher Institute, they’re a non-partisan public health research group that specifically looks at pregnancy, contraception, reproduction, and abortion to let us know what’s really happening. Let’s not just deal with propaganda, let’s look at a public health view of what’s really happening in people’s lives. So that’s first and foremost, to be informed and try to pass through the miasma cloud of misinformation and outright lies that are out there. Second, find some local thing to do! You know, “live globally, act locally” is a very good strategy. There are a lot of opportunities to donate money, donate time, talk with like-minded people, build a group of local activists. Doesn’t need to be something grandiose that needs to change the world. There’s a quote from JFK that said “One person can make a difference, and every person should try.” Sometimes people feel overwhelmed by the issue — “oh, I can’t fix that” — no, you can’t fix it, but you can be a small, incremental part of the solution. 
Be informed and then take some thoughtful action. I read recently something to the tune of, “Roe was so important, but rich women could always go to Puerto Rico or England and get a safe abortion.” I absolutely see this happening again, especially since the first reason people seem to have is often a financial one: “I can’t afford to have a baby right now” or “If I had a baby I wouldn’t be able to support it.” And that’s reproductive justice, right? Framing this so people can have the children that they want, not just not-have the children they don’t want. I’m curious about what you think about that — how even though abortion is “legal,” the distribution of access is so much along class lines. Just to be a little historical here, it was that burden of morbidity, mortality, disease, and death that fell on the poor who couldn’t get a safe abortion illegally that led to the activism in the medical community to decriminalize abortion. I think theoretically, 39 years later, part of what’s happened is that not only can rich women get an abortion more easily, but they can get birth control more easily as well. So what I’ve seen is the proportion [of women] who are poor having abortions has increased. That disparity exists in access to reproductive healthcare in general, too. The most effective methods of contraception, like IUDs and implants, are unfortunately more expensive, and those can be out of reach. [This is the truth for] a lot of women in our country, and you reap what you sow … because those women have fewer resources to care for an unexpected potential child, they are then more boxed in. The circumstances of their lives unfortunately predict what they feel like they have to do. So then I think recognizing the increasing disparity is very important, and recognizing that when those women are not able to get what they need through safe channels, some of them do unsafe things. Fortunately, it’s still relatively rare in the US, but there are reports of self-induced abortion and of women going to clinicians who aren’t well trained, and it’s harkening back to the pre-Roe era. The fundamental issue, again, is that making abortion less available doesn’t stop it from happening, it just means that more women suffer and die. It’s that simple. And that, unfortunately, is not a part of the public consciousness around abortion anymore, because it’s been safe and legal and accessible for the majority of women for the past 39 years. In that way, we can’t necessarily use that argument anymore, because people don’t necessarily remember “Oh yeah, I remember when Aunt Millie died, it was all hush-hush and 10 years later I found out she had an unsafe abortion. That’s why my cousins grew up with my brother and sister.” Not that that’s my story, but things like that — that story happened in that era. I don’t think that discussion hits anybody at the visceral level anymore, but it’s still important to make the point. I agree! My own grandmother is first on the waiting list for when they make marrying Catholic priests legal — she’s right there. She wants one. One of the most Catholic people I’ve ever met in my whole life. She was the oldest of many, many children, and because of that, she had to give up her full scholarship to college to stay home and take care of her little siblings. She told me that it ruined her life. So she’s very Catholic, but she’s also, “Give them all birth control! I love what you do! They should have abortions!” You see it, you see what happens, and there’s that conversion reaction.
I think that what might replace that visceral reaction in the age of legal abortion is speaking very plainly about your own experiences. And what you said! Or JFK said: you have to try. It’s almost easier to make change happen with the issue of abortion rather than other issues, because there’s still so, so much silence around it as an experience that actually happens to people; that if you just talk about it, you’re doing so much good already. Along those lines, it’s sometimes sadly easy to help my patients become grateful. Women come in expecting to be judged, treated impolitely, and degraded, and if you show them even the slightest bit of normal human courtesy — not even going to the point of affirming your trust in them, and your belief that they’re doing the best they can — it’s so easy to make them grateful. Because sadly, they expect to be disrespected. They expect to be treated shamefully. Or they’re being punished. Or like they should act like they’re going to a funeral. Part of the way I envision it when we talk about “when does it feel safe, or good, or worthwhile to speak out and step out of the silence, or the closet” — the times when I do that, one of the things I envision is that all of my patients are standing behind me. I have this big group of patients standing behind me and they want me to share what I know, because they can’t. And it’s that much more important, for their sake, that I let people know the truth, that aside from what we talked about, we doctors who do abortions are not just “abortionists,” that we’re thoughtful, caring, compassionate people who have chosen this work because we want to, not because we want to do anything else or that we’re in it for the quick buck. That we have made a conscious, moral, ethical decision that this is important. I think the flipside there — Pause for high five. [high five] I think the flipside there is there’s this narrative of women who have abortions that goes along with the welfare-queen narrative of the ’80s. The idea that these are fallen women, women who allowed their sexuality to run rampant. This incredibly negative, demeaning perception that also has a lot of sexism, racism, classism in it — it’s all the isms tied in together. For me to share the stories of my patients and portray them accurately, to let people know that’s not who we’re talking about here — we’re talking about your mother, sister, daughter. People you know who are thoughtful, careful, compassionate, and doing the best that they can with what they have. It’s that idea of: how can we get our society to trust women, and to realize that this is something that women know best, and that needs to remain private, in the sphere of the doctor-patient relationship? Was abortion what took you to Ob/Gyn, or was it something you found in residency along the way? I would say that my feminist awakening came when I was a resident. I had been a passive feminist, passively pro-choice … raised in a relatively liberal family, where I was taught that girls were as good as boys, that girls can do anything they want, and that having access to abortions is important. I went into Ob/Gyn to go into Ob/Gyn and kind of figured … of course I’ll do abortions! Don’t all of us? And … there was some naivete to it. And I really had my feminist awakening just with my patients — that’s part of why they call it practicing medicine, because your patients teach you. I was awed and at times terrified by what women had to go through in childbirth and the dangers that could occur.
Like many laypeople, gradually, as your medical training occurred, you realized, “Oh! Not all pregnancies go well. Not all pregnancies are safe, and even things that look safe can suddenly become emergency situations.” I think I was just incredibly impressed by the fortitude of women in the obstetrical world, and then it started extending to [abortion]. I remember when I was in residency, I had a patient come to me for her first prenatal visit. Nobody had discussed options counseling with her before. So I naively went in there and took this full prenatal history and then she said, “You know, I actually kind of want to have an abortion.” I went, “Oh! Okay! Let me figure this out for you.” That’s not where my brain was going, you know? I hadn’t had any experience with options counseling before, so I’m sure I didn’t give her my best, but I gradually began to realize that wow, not every Ob/Gyn does this. And this is really important. She shouldn’t have to go through that emergency c-section. If a woman doesn’t feel ready to have the child of her abuser, she shouldn’t have to. If a woman doesn’t feel prepared for the rigors and responsibilities and joys of motherhood, she shouldn’t have to do that if she’s not ready. I think it was that commitment to how important motherhood is, and that it should be voluntary as opposed to drafted. I think that military analogy is kind of apt. I think it’s perfect! It’s similar to some of the language from the early 20th century. Margaret Sanger, one of her campaigns was “voluntary motherhood” — that’s why she’s talking about birth control and decriminalizing education about birth control. It’s the same thing: I think motherhood should be voluntary. It’s the toughest job you’ll ever love, and that’s what I came to see in the trenches as a resident — that it’s so important to do motherhood well, and to feel ready to do it. And women know when they’re ready. And I trust them to know that. And I recognize that a woman is the only one who CAN know that for her own life. Other people can tell her what to do, and have all kinds of assumptions and preconceptions about what her life is “really like,” but I think that’s immoral. And I use that term provocatively because I think, unfortunately, the idea of providing abortion or women having abortions has all been laden with this idea that it’s the “immoral” thing to do. I think it’s immoral to tell a woman to stay pregnant when she’s not ready. How about that having an abortion is not “taking responsibility for the pregnancy?” Right! And that the actual responsible choice is to wait until you’re ready. I see my patients as being very thoughtful, deliberative, and responsible in what they do in their lives. I want to support them in that. I heard [pediatrician and family-planning specialist] Rachael Phelps give a talk once, and she said something that really stuck with me: she said that no matter where you are in this issue, pro life or pro choice, whatever, everyone wants all children to be born wanted. We all want a baby to be born to a person who wants to become a parent and have that child. We all have different options that we consider and that other people consider, but that’s what it comes down to. And she’s a pediatrician, not an Ob/Gyn, who felt called to become a provider because she saw how unplanned parenthood was damaging to the families in her practice. Again, I think it’s the idea that abortion is about motherhood — people think that they’re polar opposite things, but they’re not. 
More than half of women who have abortions are already mothers. They know what it takes to become a mother. Which is why they sometimes say, no, not now. For myself — I’m a Unitarian Universalist, and one of the ministers became a friend of mine. We ran the Reproductive Rights and Social Justice task force at the church. She came to see me about three weeks after my daughter was born and I was on maternity leave. She came as a friend and a minister, reflecting on the amazingness and hardness of it, and she asked me, “How does it feel for you, doing what you do providing abortions for women and your dedication to that idea of helping women, how does it feel to be a mom now?” “Oh my god, all the more dedication to it, because nobody should have to do what I did unless they’re ready.” Parts of it are really, really hard and scary, and this is from an obstetrician who has been delivering babies for 16 years! To say, “that was really hard and should only be chosen” and you wonder why some people have PTSD after delivering. Certainly for me the transition to becoming a mother was that much more affirming of my work and my advocacy for my patients. Did you see how in 2011 they enacted 135 provisions that restricted abortion — that graph that goes like that. [draws air squiggly line with finger, then points straight up] I’ve seen the same graph. One of the things that seem to be moving is policing practice — the “demand” side instead of the supply side, laws like waiting periods. I’m thinking about the Texas ultrasound law, or something like reading scripts to patients with medically inaccurate lies in them. I’d like to talk about that — it’s very fascinating to me because I can’t imagine working in a clinic in Texas right now. Restrictions that are placed on medical practice within abortion care — and only in abortion care, singled out and stigmatized within medicine — are because there’s this presumption that we’re not doing it well, that’s part of it, and there’s the harassment factor to scare physicians away or make it harder to do their job. Specifically to the requirement that a woman would need to see ultrasound images before having an abortion — I think I can sort of understand what the anti-choice side thinks they’re doing. They think that women don’t understand, and that it’s going to change their minds. But in my experience, that’s just not the case. Women know why they feel the need to have an abortion, and seeing an ultrasound image doesn’t change the facts of their lives. They don’t feel ready for a baby, and having an ultrasound doesn’t suddenly make them ready. Again, it comes back to that respect for the responsibility of motherhood and the wish to do it well. It’s misguided to say that being shown an ultrasound will change your whole life. No! It won’t! In many cases this is a very difficult choice, let alone for people who wanted the pregnancy but now have to terminate. And I think that it’s important to see that even if abortion were no longer safe and legal, women would still do it. Which is why thinking about the anniversary of Roe v. Wade … my entire medical career has been after Roe. I have to think back to the things that my mentors taught me in residency — the old graybeards who were almost all men, but who became ardent feminists when they saw what was happening to women, and who advocated for the decriminalization of abortion. In medicine, if something is an intern’s task, it means it’s kind of — repetitive, not particularly important, kind of menial. 
And what interns end up doing is sometimes telling of how things are considered to be important in medicine. I had an old graybeard attending in residency who told a story from his residency, pre-Roe, in an inner-city hospital in Detroit. The intern every morning had to mix up the IV pressors for the women who would come in septic after an abortion, and they would use these pressors to avoid dying. The ward where they put them — gallows humor, you have to deal somehow — they called the septic tank. And that’s what he saw as a trainee. He saw women incredibly sick and incredibly maimed, dying, and dead. All because of their determination and recognition of “I am not ready to be a mother. I cannot do this.” Women will take really frightening risks when they don’t have access to safe care. Let’s say, thought experiment. Let’s say Roe v. Wade got overturned. There’d be 1.5 million women who had been seeking abortions who can’t have a safe one. Someone will have an unsafe one and will die or be damaged for life; some women will have the child and not be capable of taking care of it. And we know that women who have unplanned pregnancies who go on to deliver have a higher risk of complications in pregnancy, high rate of pre-term birth, a higher rate of the children having behavioral difficulty, poor achievement, cycles of poverty, domestic violence. And the whole idea that somehow adoption can solve it all is just not how the American public thinks. Only 1% of women with an unplanned pregnancy go forward with adoption in the US — very, very small. And I hear it from my patients for all different reasons: they never could do it, the interesting thing they say is that they don’t trust anybody else to raise their child. Will the child be loved? Will the child be well cared for? Again, it gets to the idea that they understand how important motherhood is — I don’t necessarily see out there the American public ready to adopt 1 million babies. So just from a practical point of view, if you do a thought experiment of making it illegal or ridiculously more restricted than it is now, more women will die, more families will suffer, and that’s not good. That is not a moral good. It’s scaring people. It’s to scare people, to tell them lies, it’s a version of domestic, psychological terrorism. It’s not in any way, shape, or form medically necessary to mandate these things. It’s apparently politically necessary and politically expedient. But it doesn’t help the issue. I think the other piece that I’ve been neglecting on the Roe anniversary is the whole “Where are we with birth control? Where are we with comprehensive sex education?” issue. Not so great. Half of all pregnancies are “oops” — it’s been that way for a really, really, embarrassingly long time. It’s all too much “blame the victim” — “oh, she didn’t take her pills” — but maybe it’s just that pills aren’t the right thing. Why blame women for the fact that methods most of them are presented with don’t fit into their lives? So, it’s that incredibly sad situation of creating the victims and then blaming them for their situation. We don’t put our money where our mouth is when we talk about women and children first. We’re looking at restricting funding to WIC, restricting funding to early childhood programs, and, I mean, this is not helpful. We need to support families, we need to help people rise up out of poverty, and then they won’t feel like they have to have an abortion because they can’t afford another baby. 
And the whole sex-education issue — we just had a whole generation come of age in the era of abstinence-only education. And people who don’t feel empowered to have responsible sex lives are still going to have sex lives, they just won’t be as safe, because they haven’t been equipped with the knowledge and access to contraception. Have you had any patients recently that stick out in your mind? I had a patient recently — and I think this gets to the issue of second-trimester abortion, which is of course a much more hot-button topic and has been used by the anti-choice side both out of proportion statistically for what it is and out of misunderstanding its complex nature. There’s what I call the triad of delay. It’s a natural question: “why did she wait? Why wasn’t she there at six weeks rather than 18?” Maybe she had irregular periods, she didn’t have Mother Nature’s early warning system. All kinds of reasons. Maybe she was raped — there’s another level of denial that goes with that. There’s also the decision-making process. Women assess all their responsibilities and resources, and ask — do I have enough to be a good mom and have a baby? For some people they do that really fast, and for other people it takes longer. Conversations about stress in a relationship, changes in employment status. Those decisions take longer. And then there’s the access to care — that gets into that issue of disparities. Poor women have to make the arrangements: time off work, time off school, childcare, travel. If you have a waiting period, you have to travel twice and it’s that much more expense. Those factors can all delay a woman. And we know that statistically women who show up in the second trimester are younger, poorer, and have lower education, typically, than women in the first trimester. They are a more vulnerable population who need that much more care, counseling, consideration, and compassion, so it’s really unfortunate that that whole aspect of it is being demonized when actually those are the people who need our help and thoughtful compassion the most. The patients who suffer most from the bureaucracy that’s been imposed on them. Yes, I was thinking about a recent patient — since you’re asking about the stories that stand out in my mind — I had a patient who had an unplanned pregnancy, and she thought she and her partner could make it work. She was getting prenatal care, but at 20 weeks she found out that he was married, had children with his wife, and also had children with another woman. She had to totally re-evaluate her life plans. She had two children from a previous relationship who were a bit older, and she had been in a partnership to raise them, and now she was looking at, “Do I have this baby while I’m with this big fat liar? Do I have this baby alone?” So she found out late that she needed to reconsider her ability to have another child. And she needed a long time to think about it correctly … and she had complete support from her family. I am, again, day by day, impressed by the genuine concern and thoughtful deliberation of patients regarding this issue, and I was so impressed by her careful thought process and that of her family and support people. So she did; she did decide to have an abortion. It was later. I can’t even imagine that kind of being blindsided. I’ll finish with my funny protester story? Okay. It was a procedure day at this clinic, so there were a ton of protesters outside.
Suddenly, a woman — this stately matron in a power suit — comes up to the group of protesters and yells, “EVERYBODY GET OUT OF MY WAY!! I HAVE A YEAST INFECTION!!” and busts through them, pushing everyone aside, to get to the clinic entrance. Took all of the power out of the protesters. It was magnificent. Thank you for that story. Previously: Ask an Abortion Provider. Lola Pellegrino is a registered nurse. Here is her tumblr. Photo via Jess Silk.
https://medium.com/the-hairpin/ask-another-abortion-provider-roe-vs-wade-39th-anniversary-commemorative-edition-69a8721cd9aa
['The Hairpin']
2016-06-01 21:17:23.775000+00:00
['Abortion', 'Nancy Stanwood', 'Health']
Team Update
We are delighted to announce and welcome Vincent Dsouza, our technical adviser, to the team. Vincent is a Cloud Computing and Blockchain Specialist. He is an accomplished IT professional with 16+ years of experience in the roles of lead solution architect, technical consultant, pre-sales/business development, service delivery, and hands-on deployment in IT Infrastructure Management Services, Cloud Solutions and Blockchain Services. Vincent’s expertise includes IT Infrastructure Management tools, Cloud Solutions (migration services) and Data Center migration services. Vincent is certified in IBM Blockchain, AWS, Microsoft Azure, Google Cloud, IoT, Big Data, Machine Learning and Artificial Intelligence. For more details on our Private sale, please visit www.YourDataSafe.io
https://medium.com/your-data-safe/team-update-b72c8c020417
['Your Data Safe']
2018-09-04 19:45:41.820000+00:00
['Blockchain', 'Crypto', 'Data', 'Cloud Computing', 'Microsoft']
Rethinking Revenue Models of Social Media Companies
The status quo Today’s major networks are mostly monetizing in the same way across the board: advertisements. Ads alone are annoying and damaging enough. On the surface, people are impulsively buying useless gadgets because the advertisement was perfectly tailored to their instincts. But the problem runs deeper. We are in the midst of a war of disinformation. People and bots are pushing conspiracy theories and disinformation into our daily feeds, hijacking our minds and our seemingly freely formed opinions. Disinformation has become such an apparent problem that even the European Union is currently figuring out what steps to take against it. One of these possible measures is particularly promising: membership fees. The EU proposed that social media companies switch from having an ad-based revenue model to monthly membership fees. To understand the reasoning behind this proposal, we first have to look at the actual danger of the current monetization. Ad-based revenue models When you are running a website that contains ad-banners, you have one goal: keep the user on your website for as long as possible. As a single blogger, the way you could go about this goal is to produce high-quality articles. I don’t think there is any harm in this, so you can probably stay with ad-banners as a form of monetization. It changes once a billion-dollar company is running the website. Today’s technology is unbelievably advanced and capable of achieving things once thought impossible. Tech companies can predict human behavior more precisely with every input we give them to analyze. These algorithms learn which picture we need to see next to stay as long as possible on the app. We voluntarily expose ourselves to the same mechanism used in slot machines to turn their users into addicts. And it makes sense. The more time we spend on a social network, the more revenue it can generate. If we are helplessly addicted to an app and revert to opening it up habitually, we will tremendously increase the company's profit. As social media companies have to optimize their revenue to please their shareholders, turning their social networks into an addictive slot machine is inevitable. Turning users into addicts is one problem. A completely new issue is which content should be used to achieve this. You can guess how much time someone spends on social media from the number of conspiracy theories they believe. Conspiracy content is more addictive than regular content or credible news sources and will be promoted and pushed by the algorithms. According to an Avaaz study, conspiracy theories attract four times as many views as information from credible sources. Conspiracy theories are attractive to users. Once you buy into a conspiracy theory, you will think of yourself as smarter than everybody else. You are the one who is intelligent enough to see through the schemes of whoever is currently controlling the world according to the newest conspiracies. As a result, ad-revenue models are not only turning us into addicts and worsening our lives but are also dividing our society by not only stirring us up but by providing different alleged facts to different people. With all this being said, there has to be an alternative to such a destructive practice. The European Union is regulating social media companies continuously and recently announced a new set of rules. “In my mind no doubt that platforms — and the algorithms they use — can have an enormous impact on the way we see the world around us.
We need to know why we are shown what we are shown.” - Margrethe Vestager As the executive vice-president of the European Commission, Vestager proposed another regulatory concept: membership fees. Membership models Adding another monthly fee to your expenses may seem like a bad idea. But I think membership models for social networks offer excellent benefits. With ad-based revenue models, social media companies’ goal is to turn users into addicts, ultimately worsening their lives massively. This changes once social networks demand a monthly fee. Suddenly, the companies see no value in users spending hours daily on their apps, but rather in them renewing their memberships. Optimizing an algorithm to addict people will become an unnecessary expense. When we are paying for a social network, we are not only paying for the service itself. We are also paying to stay in control. We are paying for the company to respect us and our interests. Addictive content would drastically decrease in value. And if a user is addicted, it's easier to quit by canceling the membership, and the hurdle for a potential relapse increases with the necessity to pay money for it. Another benefit membership models would provide us with is privacy. Our data is analyzed and used against us daily. We are training the same algorithms that are planning to addict and divide us further. This is why the EU recently prohibited Facebook from sending European users’ data to the U.S. By sending data to the U.S., Facebook is improving its algorithms to target people more directly and keep them engaged longer. The terrifying outcome of such improved algorithms became visible in the recent Cambridge Analytica scandal, in which a company used targeted ads and posts, with the help of Facebook's algorithm, to massively influence the U.S. election in 2016. As a response to the prohibited data transfers, Facebook threatened to leave the EU since its revenue would take a severe hit. Implementing a monthly fee would, therefore, not only keep the users healthier but also their data private. The bots on social media would also decrease massively once every account had to pay a fee. Using social media during the past few months has been a nightmare. Comment sections below pictures were dominated by conspiracy-theory-spreading bots, degrading my user experience, and increasing Weltschmerz — a German philosophical term describing the feeling of sadness for humanity's current course. My main concern has been the possibility that some people might fall for these bots’ conspiracy content. This terrible feeling alone made me quit social media during this pandemic. The challenge ahead With all the positive benefits of switching from ad-based to membership models, there is one seemingly insurmountable challenge ahead of us: the membership fee itself. One could argue that social networks are only popular because they are free. And this argument has a point. Social media would have never reached the growth it did if it wasn’t free. But it has already grown. People know about the possibilities of social networks, and although this article may have suggested otherwise, they can be fantastic tools, too. Hence, people would pay for using these tools, especially once the addictive and valueless features are removed. Also, membership sites are currently on the rise. If you are reading this article, you already know one of them. Magazines and other successful content-focused social networks are also running on fees instead of ads.
And I believe that the more people become aware of the negative impacts the attention-grabbing culture is having on us and realize the hidden cost of a seemingly free product, the more prone they will be to pay a membership fee. According to this analysis, users would have to pay between $7 and $14 per month to make up for the lost revenue caused by the removal of ads. And although this cost could even decrease, as it would be tolerable for Facebook to generate a little less money while transitioning between the two models, the price would be worth it. Switching the revenue models would increase the service's value while decreasing the adverse effects on society and the individual. Just imagine the mental relief you would achieve by a less addicting and dividing social network, as well as the additional time you will have at your disposal. And with several companies switching their models, they might even have to add discounts to compete for users, which would ultimately benefit the latter, instead of already insanely rich tech companies.
https://medium.com/swlh/rethinking-revenue-models-of-social-media-companies-5bebd7178c1b
['Julian Drach']
2020-12-18 11:02:22.644000+00:00
['Law', 'Politics', 'Business', 'Marketing', 'Social Media']
A Back-End Developer’s Guide to Vue.js Component Testing
Stylistic Changes to a Component What if our component contains a property that changes the color of the background? The truthiness of the success prop determines the outcome of the background-color attribute. Therefore, more tests are needed to validate that specific behavior. Notice that although the component has a prop for message , it is neither described nor mentioned in either of these tests. Why? Because message has no impact on whether or not the color of the component is modified. Another way to think about this is, instead of a Vue component, what if this object were described as a class: If you are testing the resulting value of success, you would set and call .success . Setting and getting the value of .message has no impact on the outcome of the .success field and should have its own separate set of validations. To go a bit further, in addition to setting the background color of the element to green, the message text needs to be underlined. The template could look like so: It is possible to write a test to validate the style of the div element and then validate the style of the p element. But what if more stylistic features are added or changed? That becomes a bit of a hassle and in turn makes the test more brittle. At the end of the day, component testing isn’t really about how a component looks, it’s about how it functions. So functionally this implementation is equivalent to the following. And the test would be this: And if you are shitty at stylesheets like I am, you could even get away with a template like this one:
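The post's embedded code snippets do not appear above, so here is a purely illustrative stand-in for the approach being described: assert on the presence of a class driven by the success prop rather than on inline styles. The component name, prop names, and import path are hypothetical placeholders on my part, and the spec uses the Vue Test Utils v1 API that matched Vue 2 at the time of writing.

// Hypothetical spec, not the author's original code.
// Assumes a component that toggles a "success" class on its root div based on the success prop.
import { shallowMount } from '@vue/test-utils'
import StatusMessage from '@/components/StatusMessage.vue' // placeholder path

describe('StatusMessage', () => {
  it('adds the success class when success is true', () => {
    const wrapper = shallowMount(StatusMessage, {
      propsData: { success: true, message: 'Saved!' }
    })
    expect(wrapper.find('div').classes()).toContain('success')
  })

  it('omits the success class when success is false', () => {
    const wrapper = shallowMount(StatusMessage, {
      propsData: { success: false, message: 'Saved!' }
    })
    expect(wrapper.find('div').classes()).not.toContain('success')
  })
})

Note that neither test asserts anything about message, mirroring the point above: the message prop has no bearing on whether the styling class is applied, so it deserves its own separate validations.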
https://medium.com/better-programming/a-back-end-developers-guide-to-vue-js-component-testing-b692fc80ef08
['Summer Mousa']
2020-12-11 16:49:07.379000+00:00
['Programming', 'Unit Testing', 'Vuejs', 'JavaScript', 'Vue']
What It Means to Lose 25 Pounds When You Weigh 460
What It Means to Lose 25 Pounds When You Weigh 460 It may not sound like much, but go pick up 25 pounds at the gym, and imagine carrying that around all day Credit: Malte Mueller/Getty My family doesn’t have a lot of Christmas traditions. We open presents on whatever day everybody can get together. Mama has always had an artificial tree. The ornaments are whatever’s on sale at Kmart. But we always had my sister Brenda’s peanut butter logs. Nobody remembers exactly when she started making them, or where the recipe came from. We’ve had them at Christmas for as long as I can remember. Every year Brenda and Mama would coat them together, getting chocolate all over their hands and the stove. They’d pack the logs in cookie tins. We’d get them out after the big Christmas meal. If you wanted to take some home, you had to hide them. Otherwise, they’d be gone by dark. But then, on Christmas Eve 2014, Brenda died from a leg infection caused by her excess weight. She was older than me, but I was even bigger than her. On New Year’s Eve that year, I weighed in at 460 pounds. I had always known I had to lose weight and get in shape or I wouldn’t live much longer. But Brenda’s death set that feeling in stone. I went to her funeral and saw my future. I set out to find a way to lose weight in a steady, sustained way. No crash diet. No fad of the month. Just make sure my calories in were less than my calories out. Little victories, every day. I craved a Big Lebowski life. Drift along, hang out with friends, keep a cocktail in hand at all times. A year later, not long before Christmas 2015, my wife found Brenda’s recipe in our recipe box. We decided to make some and take them to the family as a Christmas surprise. I was worried about two things with this plan. One, we might ruin Christmas. Two, I might eat all the logs before we got to Georgia. That’s the kind of thing the old me would have done. Most years, over the five or six days around Christmas, I’d eat 25 or 30 peanut butter logs. In 2015, I had four and a half. Years ago, I saw the author Tom Wolfe speak. His novels tend to be about people who let their lives get way out of hand. In his speech, he criticized the “loose life” he saw so many people trying to get away with — shaky morals, bad habits, ready-made excuses. Problem was, the loose life sounded great to me. That’s what I’d always hoped for — a life I could live however I wanted, without any real consequences. I didn’t want to murder or pillage or cheat on my wife. I didn’t mind working — I’m a writer and I love to write. But otherwise, I craved a Big Lebowski life. Drift along, hang out with friends, keep a cocktail in hand at all times. Skip the White Russians. Give me bourbon over ice. The loosest life I wanted was with food, because food has given me more pleasure than anything else. That doesn’t mean I like food more than sex. It just means I haven’t had sex three (or four or five) times a day for 51 years. I’ve eaten too much of too many bad things for the cheap thrill of it, trying to stay one step ahead of paying the price, like a grifter kiting checks. I knew how much it would cost me later. But I craved that moment of joy, now. That’s the way a child thinks. My wife Alix and I started using the word adulting around the time I got serious about losing weight. When we would wash the supper dishes right away instead of waiting until midnight, we were adulting. When we filed away papers instead of letting them pile up in a stack, that was adulting, too. 
I’d come to realize that adulting is the only way I can beat my addiction to food. My childhood didn’t give me a great start. I was a sedentary child who grew up on the normal Southern diet for people who stayed on their feet all day. I learned to love all those calories as friends when I didn’t have many. As I got older, my choices made things a whole lot worse. I gravitated to salt, sugar, and fat (and the fourth element: alcohol). I never cared enough about myself to think it mattered. My approach to life sent me off in the wrong direction, and it took me forever to turn around and head back. I don’t know how close I got to falling off the cliff. It’s foggy out there. I could slip and fall. All I know is that I’ve finally walked away from the edge. I wrote up a little guide to my adulthood: I have to lose weight to have a longer, healthier, more meaningful life. I have to do it in a way I can live with tomorrow and tomorrow and tomorrow. I have to find other sources of joy and solace, especially in hard times. I have to accept delayed gratification. I have to mute the self-hating voice in my head. I have to believe that I’m worth saving. I have to do all this not just for myself, but for the people who love me. I had resisted those things all these years because it felt like so much work. It is work. But the loose life — the life that looked like so much fun — turned out to be a fraud. It got me to 460 pounds. It made me an actuarial disaster. It threatened my life. It limited me more than a disciplined life ever could. Let me tell you what it felt like to lose just 25 pounds. That may not sound like much. We see the stories on the cover of People, all those women who lost 150 pounds in a year. Judged by that standard, you might think 25 pounds barely matter. But go to the gym and pick up a 25-pound dumbbell, or go to the hardware store and lift a 25-pound bag of mulch. Hold it for a while, let it be part of you. Think about the load you have to carry. Now set it down and walk away. That is how it felt for me. Food will never be just a quick, cheap pleasure for me. Sometimes it’s a path back to the best moments of my life with the people I love. Maybe I kept eating because I kept trying to find those moments. Only now do I realize they can be small and sacred. A peanut butter log and a sip of sweet tea wouldn’t pass for a church Communion. But here, now, it’s enough.
https://elemental.medium.com/what-it-means-to-lose-25-pounds-when-you-weigh-460-7c3a4d1fc0f4
['Tommy Tomlinson']
2019-01-17 14:01:01.015000+00:00
['Weight Loss', 'Food', 'Health', 'Self', 'Book Excerpts']
Spotlight: Surgical robotics
Q: Tell us a little bit about life in Boston, what you’re doing there and how it came about? A: I commenced an exchange at Harvard Medical School in February. It’s exciting to be at one of the world’s top research-oriented medical schools, in a city that is arguably the world’s most active robotics hub. The experience and opportunity to learn is just humungous. How did it come about? Short answer: I’m a big believer in the power of positivity. I wanted to do an exchange as part of my PhD, so I cold-called — well actually, ‘cold emailed’ — my current supervisor, Nobuhiko Hata. He’s a Professor of Radiology at Harvard Medical School (HMS) and leads the Surgical Navigation and Robotics Laboratory at the HMS-affiliated Brigham and Women’s Hospital. He gets hundreds of CVs every day, but he came back to me because he was interested in the fact I had experience working on the Da Vinci Surgical System at Imperial College London. I completed my Masters of Research there, at the Hamlyn Centre, before joining the Australian Centre for Robotic Vision. The big thing for me though — and getting back to my point on the power of positivity — is that Professor Hata was also interested in what makes me tick, and what I like to do in my free time. Dr Nobuhiko Hata, Professor of Radiology at Harvard Medical School (HMS), with Artur Banach who explores surgical robotics with the QUT Centre for Robotics. Q: So, what made Professor Hata interested in your personality? A: Well, I was very impressed with his outlook and being interested in the essence of what makes someone tick as opposed to just their research capabilities. I think that’s a sign of a great leader. I told him I’m a very social person and I have this drive to contribute to the happiness quotient of the world. I mean, what’s our purpose here if we’re not helping others and making a positive difference. Q: Does robotics research also make you happy? A: I’m actually on the edge of two disciplines — robotics and medicine. This can be challenging because it’s hard to be an expert in multiple things. My focus is to become an expert in surgical robotics. My PhD focuses on the challenges of robotic-assisted and image-guided intervention to increase the safety of minimally invasive procedures. That in itself, no matter how challenging, brings me great happiness because the end result is all about reducing patient suffering and improving quality of life. In the future, I hope to be able to help less fortunate people living in developing countries, opening the way to better medical and healthcare services. But in order to make an impact, I need to first understand robotics, the medical world and make connections with people across the two fields. I also like working in hospital environments around like-minded people, and I love working with surgeons. The beauty about this is that if you want to introduce a new system, you have to make sure it is robust and the surgeons are comfortable using it. You see, if a robotic system passes the surgeons’ test, then you know the technology is truly useful. Q: How difficult is it for robots to operate inside the human body? A: I often explain what I’m doing is like SLAM (simultaneous localisation and mapping) inside the body. Imagine you’re in a room with white walls, trying to navigate inside that room. Tissue inside our body is homogenous, a bit like being surrounded by white walls, which makes it difficult to know where you are. 
In surgical procedures — key-hole surgery, whether inside joints (knee, shoulder) or in the abdominal cavity or brain — poor vision is not the only challenge. There’s also blood, smoke (from burned tissue), water and surgical tools that obstruct a clear view — not forgetting that everyone’s body is slightly different. Environments inside our bodies are so unpredictable. You could compare surgical robotics challenges to those affecting the future operation of autonomous cars. Q: Why’s that? What do autonomous cars and robots designed to navigate inside your body have in common? A: They face a similar problem when it comes to successfully navigating unexpected scenarios. For autonomous cars on our roads, there are challenges like poor weather conditions, unexpected road works, other drivers or potential hazards like animals or pedestrians to avoid. In both cases (surgical robotics and autonomous cars), mistakes can be life-threatening. Q: As a kid, did you dream of a career in robotics? A: Actually, I thought I’d go to the Olympics. I’m from Poland. When I was still in high school, I was selected for the Polish National Team in Olympic Windsurfing Class. Unfortunately, they can only send one person to the Olympics. I was on the national squad for two years but didn’t compete at an Olympic Games. Q: What brought you to Australia? A: A big part of that is the adventurous spirit my parents instilled in me. I always wanted to travel! After graduating with a Bachelor of Engineering in Automatic Control and Robotics from Poznan University of Technology, in Poland, I spent four months at Universidad Politecnica de Cartagena, in Spain, as part of the Erasmus exchange program. I then spent time completing my Masters at Imperial College London, and after that went backpacking around Southeast Asia for three months. I came across an opportunity at the Australian Centre for Robotic Vision on LinkedIn and literally spent one day in Brisbane for an interview. The rest is history. I moved from London to join the Centre in 2018. Q: What do you love most about the Australian Centre for Robotic Vision? A: It wasn’t until I came to the Centre that my eyes were truly opened up to the ‘big picture’ of robotics and how robots able to see and understand can help make the world a better place for all. I’m talking robots that operate underwater; helping to protect coral reefs; flying robots or UAVs; self-driving cars; social robots; underground robots; and, of course, medical robotics. The Centre is just so inter-disciplinary. This is what makes it unique. There are so many opinions shared, points of view and expertise in the one place. It’s helped me broaden my context in robotics and engineering in general. I’ve also made some life-long friends and met so many interesting people who have helped me connect the dots in understanding how the different fields connect. Q: You’ve been in Boston since February. What do you love most about the city? And, what do you miss most about Brisbane? It was very cold when I arrived! The change from Queensland’s summer to Boston’s winter has been hard! But the weather is getting milder and I am getting more used to it. Boston is great in terms of meeting smart and cool people from the best universities in the world. Harvard is like Hogwarts (yes, I’m a big Harry Potter fan). It’s such an incredible place; strolling around its ‘castles’ is like being in another world. Longwood Medical Area, where all the biggest hospitals and Harvard Medical School are based, is enormous. 
Maybe close to the size of Brisbane CBD. I definitely miss Brisbane for quick trips to the beach, my great friends there and my wonderful supervisors. But, hey! I’m coming back next year.
https://medium.com/thelabs/spotlight-surgical-robotics-47dfbdd1aa28
['Qut Science']
2020-08-31 23:03:58.658000+00:00
['Technology', 'Research', 'Robotics', 'Engineering', 'Medicine']
Measuring Battery Voltage Is Simple, Or Is It?
Inside Pencil by FiftyThree stylus An important decision when designing a Bluetooth® Low Energy, aka “Bluetooth® Smart,” device like our stylus Pencil is selecting a battery; too small and the device will need constant recharging, too large and the device becomes too big, heavy, or expensive. Measuring battery characteristics provides critical data to help select the right battery for a device, and one of the characteristics we measure is how a battery’s voltage changes as it drains. This article details this specific measurement and the lessons we learned in automating the collection of voltage data over time. Why automate? Why not simply take a multimeter and stick the probes on the battery? Multimeters are great, but they’re expensive and require someone to manually capture their readings. Collecting large amounts of data from multiple batteries would become an onerous task, which means we needed some basic automation to get enough data to make informed decisions. Since we regularly use Arduino open-source hardware in our prototyping process, we decided to use them to automate the entire battery test process. This seemed straightforward enough, but we soon learned that measuring voltage accurately was more subtle than we first thought. LED display recording the battery voltage for Pencil What We Learned Lithium-polymer (LiPo) batteries are well suited for Bluetooth Low Energy (BLE) applications. Their low discharge rate is perfect for devices that don’t draw considerable current, and the cell’s compact footprint makes them easy to squeeze into tiny devices. The challenge with LiPo batteries lies in their charge and discharge profile because unlike nickel or lead-based batteries, LiPo cell voltage is not self-limiting. Without a specifically designed charger, the battery voltage would continually increase until it bursts into flames (a generally frowned upon outcome in electrical design). Discharging LiPo batteries without proper protection is only marginally better, and will result in cell damage without restricting the operating voltage to a very specific range. Most small LiPo batteries, like the ones used in Pencil, have purpose-built circuitry to prevent this damage, cutting off battery voltages below a certain threshold. Although convenient in practice, this provides a challenge for testing because the cutoff voltage is surprisingly inconsistent. To further complicate matters, the discharge profile of batteries and the resulting product performance heavily depend on the types of loads applied. Clearly, there was something to be learned about the voltage/time/load relationship here. To help steer design decisions, we developed a few core criteria that our battery measuring rig must comply with: Record the time/voltage relationship (real time clock involved) Measure voltage accurately (+/- 0.25%) Measure the battery current (+/- 1%) Vary the load applied to the battery Be compatible with any battery Record the external temperature (batteries are thermally dependent) Although the goal of accurately measuring battery voltage is very specific in scope, we were guided by the overarching theme of automating data acquisition. With this in mind, we established two more criteria: Allow automation of battery tests Allow scaling of the test automation Analog to Digital Conversion A fluctuating battery voltage is a quintessential example of an analog signal, and clearly a sign that we needed to convert to a digital one.
The ATmega chips found on most Arduinos come with built in 10-bit analog to digital conversion, so using this converter seemed like a good place to begin. Calling analogRead() enables the ADC, which converts the input voltage on a certain pin to a number between 0 and 1023. This number is directly proportional to the reference voltage used by the Arduino. For example, a 3.3 volt input with a 5V reference would yield an output of 675 (1023/5*3.3). The Arduino comes with a convenient 5V internal reference, which seemed suitable for the 2.5–4.2 volt operating range of the batteries we were testing. We optimistically uploaded the simplest possible sketch: //Positive (+) to pin A0 //Negative (-) to GND void setup() { Serial.begin(9600); analogReference(INTERNAL); } void loop() { Serial.println(analogRead(A0)/1023.0*5.0); } //Case closed... or not Reference and Gain Bias As we quickly realized, there were two main issues with this approach. The first was noise, indicated by rapid, seemingly random fluctuations in values. The second problem was one of offset. There was a consistent difference of approximately 100mV between the Arduino and more accurate voltmeter readings. We initially attributed this to gain bias, a phenomenon which plagues analog to digital converters (ADC). No ADC is perfectly made, and there will always be a certain amount of offset between the actual input voltage and what the ADC reads. The ATmega chip has specs about what kinds of gain bias to expect, which indicated that one hundred millivolts was too substantial to be ADC inaccuracy alone. So, what was the issue? The entire time we had been taking something for granted: the five volt internal reference on the Arduino. Applying a high quality voltmeter to the “5V” pin reveals a very unsatisfactory 5.12 volts. However, in our sketch we assumed that the reference voltage was precisely 5.0V, and the difference was skewing our readings downwards. The Arduino has a built-in solution to this problem: the analog reference pin (AREF). Applying a more accurate reference voltage to the AREF pin and calling analogReference(EXTERNAL) switches the reference voltage to the external source. There are a number of reference components of varying precision and accuracy, and our original measurement specification of +/- 0.25% dictated our use of a low dropout, high precision, 2.5V reference. A quick voltmeter check reveals that this device outputs 2.504 volts. Variable Current Draw The current drawn from a battery is more or less inversely proportional to the resistance seen by its terminals (real batteries have some intrinsic, internal resistance). So, varying current draw is as simple as changing the resistance seen by the battery's terminals. A variable resistor in parallel with our voltage divider performs exactly this purpose. The barebones schematic can be seen below. Handling Noise: The Low Pass Filter With everything in place, we were ready to run a test and see what our data looked like. Much to our disappointment, there still appeared to be considerable noise and an unacceptable lack of precision in our readings. With a solid reference, and precise, highly accurate resistors, what could be causing this issue? We had one last trick up our sleeve: a low pass filter. Low pass filters operate on the principle that the voltage across a capacitor cannot instantaneously change. Momentary spikes or drops in voltage are absorbed by the filter, and as a result, the signal is smoothed. 
The equation f = 1/(2*π*R*C) describes the relationship between the capacitor, the resistor, and the threshold frequency (above which signals are absorbed). LiPo batteries cannot be discharged quickly, and we were sampling every 3 seconds (f = 1/3s). Arduino analog input pins require as little as 0.5mA to operate, so R = 10kΩ is appropriate. Solving the aforementioned equation gives C = 47µF. Empirical testing, however, revealed that the noise was still greatly reduced with capacitors as small as 47 nF. This was some seriously high frequency noise! We ran a few more tests with this setup, and were extremely pleased with the result. Success! We now had the ability to collect high quality voltage data using an inexpensive Arduino micro controller, a high quality voltage reference costing less than $5US, and a handful of passive components. If you'd like more details on this project, or have any questions about our products or team, feel free to reach out to us @FiftyThreeTeam. We welcome any feedback and hope you find our experience helpful in your project! Makers David Ferris, Scott Dixon, Julian Walker & Rachel Romano
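To tie the pieces together, here is a minimal sketch of what the final measurement loop might look like. It assumes the external 2.5V reference is wired to AREF, that the battery feeds a divider of two equal resistors (so the pin sees half the battery voltage, keeping a 4.2V cell below the 2.5V reference), and that the RC low-pass filter sits between the divider and A0. The divider ratio, pin choice, and sample count are illustrative assumptions, not the exact values used in the FiftyThree rig.

// Illustrative sketch only; component values and wiring are assumptions.
// Battery (+) -> divider top, divider midpoint -> 10k/47nF low-pass -> A0, external 2.5V reference -> AREF.
const float V_REF = 2.504;        // measured output of the external reference
const float DIVIDER_RATIO = 2.0;  // two equal resistors halve the battery voltage

void setup() {
  Serial.begin(9600);
  analogReference(EXTERNAL);      // use the voltage on the AREF pin instead of the 5V rail
  delay(10);                      // let the reference settle before the first conversion
}

void loop() {
  long sum = 0;
  const int samples = 16;
  for (int i = 0; i < samples; i++) {   // average a burst of readings to knock down residual noise
    sum += analogRead(A0);
    delay(2);
  }
  float counts = sum / (float)samples;
  float pinVoltage = counts / 1023.0 * V_REF;
  float batteryVoltage = pinVoltage * DIVIDER_RATIO;
  Serial.println(batteryVoltage, 3);    // battery voltage in volts, three decimal places
  delay(3000);                          // roughly one sample every 3 seconds, as in the tests above
}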
https://medium.com/fiftythree-space-to-create/measuring-battery-voltage-is-simple-or-is-it-b54f12606a25
[]
2017-05-01 15:24:48.785000+00:00
['Hardware', 'Battery', 'Stylus', 'Engineering', 'Company']
This December, Improve Your Coding Daily with Advent of Code
What It Is Advent of Code is an annual series of daily coding challenges that happens during the year-end holiday season. While its name alludes to Christian traditions, it is a simple pun choice meant to convey the 25-day nature of the series (come one, come all!). Each day, between the 1st and 25th of December, a new puzzle is published. The puzzles usually string along a wonderfully-constructed storyline, keeping you entertained as you are challenged. Here’s an example from 2018: Example Puzzle (Day 1 2018) After feeling like you’ve been falling for a few minutes, you look at the device’s tiny screen. “Error: Device must be calibrated before first use. Frequency drift detected. Cannot maintain destination lock.” Below the message, the device shows a sequence of changes in frequency (your puzzle input). A value like +6 means the current frequency increases by 6; a value like -3 means the current frequency decreases by 3. For example, if the device displays frequency changes of +1, -2, +3, +1, then starting from a frequency of zero, the following changes would occur: Current frequency 0, change of +1; resulting frequency 1. Current frequency 1, change of -2; resulting frequency -1. Current frequency -1, change of +3; resulting frequency 2. Current frequency 2, change of +1; resulting frequency 3. In this example, the resulting frequency is 3. Here are other example situations: +1, +1, +1 results in 3 +1, +1, -2 results in 0 -1, -2, -3 results in -6 Starting with a frequency of zero, what is the resulting frequency after all of the changes in frequency have been applied? - Day 1 2018, Advent of Code At this point, a link is available on the page for you to obtain the full input sequence (about a thousand inputs) as a text file. This is where the rubber meets the road between your problem-solving skills and software implementation know-how. Be careful! Most people will happily nose dive into regurgitating code that reads the input, and then face-plant when they realize it’s time to implement the core algorithm. Take the time first to make sure you fully understand the problem at hand, and (at least mentally) form the algorithmic solution before rushing to the I/O code. Additionally, do not underestimate the complexity of the series. By design, it starts gently, allowing everyone to get their feet wet with it. Day after day, you will notice the complexity creep in, with the puzzles taking you minutes longer. Stick with it! You will be challenged before it is over. Most of the participants I know use Advent of Code as a way to simply practice. For the more competitive spirits, a global (and within-group, if you join one) leaderboard provides additional motivation to find the solution quickly. You can find the derivation of the scoring mechanism as well as the top 25 of 2019’s competition in the image below. Regardless of your motivation, it’s a lot of fun and good for your professional development.
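For a sense of the on-ramp, part one of that Day 1 puzzle boils down to a few lines once you have saved your personal puzzle input to a file. A possible Python solution (the filename input.txt is simply an assumption about where you saved the input):

# Advent of Code 2018, Day 1, part one: sum all the frequency changes.
# Assumes the puzzle input was saved as input.txt, one signed integer per line.
with open("input.txt") as f:
    changes = [int(line) for line in f if line.strip()]

print(sum(changes))  # for the example +1, -2, +3, +1 this prints 3

The gentle start is the point: the real test comes days later, when the puzzles stop fitting in five lines.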
https://towardsdatascience.com/this-december-improve-your-coding-daily-with-advent-of-code-9bd37f69dc3e
['Anthony Agnone']
2020-11-27 23:09:54.311000+00:00
['Machine Learning', 'Artificial Intelligence', 'Software Development', 'Software Design', 'Data Science']
A Survey of Attention Mechanism and Using Self-Attention Model for Computer Vision
A Survey of Attention Mechanism and Using Self-Attention Model for Computer Vision Original Survey Authors: Sneha Chaudhari, Gungor Polatkan, Rohan Ramanath, Varun Mithal 1. Introduction The Attention Model was first established for Machine Translation and gained massive popularity in the Artificial Intelligence community. Over the years, it has become a significant part of neural network architecture for various Natural Language Processing, Statistical Learning, Speech Recognition, and Computer Vision applications. There has been rapid advancement in attention modeling in neural networks because these models are state-of-the-art for multiple tasks. They can interpret neural networks and overcome the limitations of Recurrent Neural Networks (RNNs). The way the human biological system handles tasks like Vision Processing, Speech, Translation, and Summarization is the best example of the idea behind the Attention mechanism. These systems focus only on the parts of the input that are relevant for acquiring the required knowledge or performing a task, ignoring irrelevant details. The Attention model integrates this concept of relevance by focusing only on the relevant aspects of the given input, which is useful for a compelling performance of the task. 2. Use of Attention to overcome drawbacks of traditional Encoder-Decoder An encoder-decoder architecture is a part of a sequence to sequence model. The encoder and decoder are Recurrent Neural Networks (RNNs), in which the encoder takes an input sequence {x1, x2, x3,….., xT } of length T and encodes it to a series of fixed-length vectors {h1, h2, h3,….., hT }, also referred to as the context vector. The decoder takes these fixed-length vectors as input and generates the output sequence {y1, y2, y3,….., yT }. In this process, the encoder compresses the long input sequence and stores it in a single fixed-length vector, leading to a loss of information. The decoder cannot focus on the relevant input tokens to generate output. Passing fixed-length context vectors to the decoder limits its ability to align the input sequence to the output sequence. As a solution to this drawback of Encoder-Decoder architectures, the attention model induces weights α on the encoded input sequence, which give an idea of the relevance of each token of the encoded input (candidate state) to the current token of the decoder (query state). These attention weights αi are used to build the context vector that is fed to the decoder. This enables the decoder to access the entire encoded input sequence to generate the output. Each token of this context vector is a weighted sum of all candidate states of the encoder and their respective weights. 3. Categories of Attention There are four broad categories of attention models. Each category has different types of attention models. Although these categories are mutually exclusive, we can apply attention models using a combination of different categories. Hence, these categories can also be considered as dimensions. Types of Attention Models a. Number of Sequences: In Distinctive Attention models, the candidate state and query states from the encoder-decoder belong to two distinct input and output sequences. Co-attention models have multiple input sequences at the same time. Attention weights are learned based on all the input sequences. Co-attention models can be used for image inputs. In recommendation and text classification problems, the input is a sequence, but the output is not a sequence.
For such problems, Self-Attention is used, where the candidate state and query state both belong to the same input sequence. b. Number of Abstractions: Attention models with single-level abstraction compute attention weights just for the original input sequence. Multi-level abstraction attention models apply attention at multiple levels of abstraction of the input sequence. In this type of attention, the lower abstraction level’s context vector becomes the query state for the higher-level abstraction. Such models can be classified further as top-down or bottom-up models. c. Number of Positions: In this category of attention models, the models are further classified based on the input sequence positions where the attention function is calculated. In Soft Attention models, the context vector is computed using the weighted average of all hidden states of the input sequence. These models enable the neural network to learn efficiently via backpropagation. However, this comes at a quadratic computational cost. In hard attention models, the context vector is built using hidden states which are stochastically sampled from the input sequence. The global attention model is similar to the soft attention model, whereas the local attention model is midway between the soft and hard attention mechanisms. d. Number of Representations: Multi-Representational attention models determine different aspects of the input sequence through multiple feature representations. Importance weights are assigned to these multiple feature representations using attention to decide which aspects are most relevant. In multi-dimensional attention, the weights are generated to determine the relevance of each dimension of the input sequence. These models are used for natural language processing applications. 4. Network Architectures with Attention The following neural network architectures are used in combination with attention models. a. Encoder-Decoder: The ability of attention models to separate the input representations from the output enables one to introduce hybrid encoder-decoders. The popular hybrid encoder-decoders are the ones in which a Convolutional Neural Network (CNN) is used as the encoder, and a Long Short-Term Memory (LSTM) network is used as the decoder. This architecture is useful for image and video captioning, speech recognition, etc. b. Memory Networks: For some applications like chatbots, the input to the network is a knowledge database and a query, with some facts more relevant to the query than others. For such problems, end-to-end memory networks use an array of memory blocks to store the database of facts and use attention models to determine the relevance of each fact for answering the query. 5. Applications Attention modeling is an active area of research. I will discuss the application of attention modeling in four domains: a. Natural Language Generation: The Natural Language Generation domain involves tasks in which natural language texts are generated as outputs. Machine translation, question answering, and multimedia description are applications of Natural Language Generation which benefit from using attention models. b. Classification: Multi-level, multi-dimensional, and multi-representational self-attention models are used for the task of document classification. Sentiment classification can also be performed with the use of attention models. c. Recommender System: The attention mechanism is widely used in recommendation systems for user profiling.
It is used to assign attention weights to the items the user has interacted with to capture users’ long-term and short-term interests. The self-attention mechanism is used for this task. d. Computer Vision: Attention models are used for various problems in Computer Vision like Image Captioning, Image Generation, and Video Captioning. Many works augment self-attention models with Convolutional Neural Networks (CNNs) for computer vision tasks. 6. Stand-Alone Self-Attention Model for Computer Vision Convolutional Neural Networks (CNNs) have gained lots of popularity in the Computer Vision domain. They are considered a building block of computer vision architectures. However, CNNs scale poorly to large receptive fields. Hence, attention models are used to capture long-range interactions. The attention model is always used on top of other networks for computer vision tasks. Therefore, a group of researchers built a fully stand-alone self-attention vision model. This model was built by replacing all instances of spatial convolutions in an existing convolutional architecture (ResNet) with a form of self-attention, and by replacing the convolutional stem. 7. Experiments performed using Stand-Alone Self-Attention Model a. ImageNet Classification: The researchers experimented on the ImageNet classification task, containing 1.28 million training images and 50,000 test images. They replaced the spatial convolutional layers with self-attention layers and used a position-aware attention stem. The attention models outperform the baseline across all depths. b. COCO Object Detection: The stand-alone self-attention model was evaluated on the COCO object detection task using the RetinaNet architecture, with the attention-based backbone in RetinaNet. A fully self-attentional model performed efficiently across all vision tasks. 8. Conclusion This article has provided high-level information about attention models: their architecture, types, and applications. Along with that, I have provided an overview of a Stand-Alone Self-Attention model for Computer Vision. For more detailed information, kindly refer to the original survey paper. 9. Review about the survey paper a) Overall quality: The overall quality of the survey paper is acceptable. The authors have provided sufficient information on attention models and their application in Deep Learning. They have provided a summary of the key papers based on attention models, which is insightful. b) Critique of the paper: An ablation study is missing from the survey paper. c) What can be done to improve (b): An ablation study can be added to the survey paper to improve it. d) Future directions and suggestions: I think this survey paper is of good quality and gives sufficient information about attention models. Attention models are widely used in computer vision, and much research is being performed in computer vision using them. In future work, the authors can provide more information about attention models’ applications in the Computer Vision domain.
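As a small numerical sketch of the mechanism described in section 2, where scores between a decoder query state and the encoder candidate states are normalized into weights α and the context vector is their weighted sum, here is a dot-product attention function in NumPy. The shapes, the dot-product scoring function, and the toy data are illustrative choices on my part, not the only variants covered by the survey.

import numpy as np

def attention_context(query, encoder_states):
    # query: (d,) decoder query state; encoder_states: (T, d) candidate states h_1..h_T
    scores = encoder_states @ query                 # relevance score for each candidate state, shape (T,)
    scores = scores - scores.max()                  # shift for numerical stability before the softmax
    alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights, sum to 1
    context = alpha @ encoder_states                # context vector: weighted sum of candidate states, shape (d,)
    return alpha, context

# Toy example with 4 encoder states of dimension 3.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))
q = rng.normal(size=3)
alpha, context = attention_context(q, h)
print(alpha, context)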
https://medium.com/swlh/a-survey-of-attention-mechanism-and-using-self-attention-model-for-computer-vision-ed6195f486e
['Swati Narkhede']
2020-11-04 18:19:29.228000+00:00
['Computer Vision', 'Attention Model', 'Attention Mechanism']
Looking inside the technology that powers Pinterest
By Vanja Josifovski Pinterest began as a small startup, and has grown to a company that serves +175B Pins to +250M people. While we have one of the largest datasets online, we have 600+ engineers, and so technology needs to be accessible for each person to play their part in efficiently building and scaling this visual discovery engine. Underlying all of this growth is technology built into systems that organize and serve massive amounts of data on both clients and servers. In a relatively short period of time, our journey has forced us to build and rebuild our systems multiple times — sometimes with urgency. Therefore, many of our technical implementations are as much a result of historical circumstances and urgency to deliver as they are of rigorous design. As with all organically grown artifacts, several parts of our technical systems didn’t develop in a predictable and logically pre-designed manner. As a result, different parts of the stack sometimes collide and conflict with each other. Other times, there has been significant overlap and duplication between systems. While these issues largely resulted from the velocity and frugality of our efforts, they’ve also fueled our company’s growth to date. To continue scaling, last year we began optimizing for velocity and effectiveness by clarifying and codifying the direction of our technical foundation’s development, resulting in a set of technical strategies for key portions of our stack. Unifying these individual technical strategies is a framework for technology at Pinterest defining the mission, context and key underlying principles of our individual technical strategies. Mission Our technical strategy’s mission matches our company’s mission: to help people discover and do things they love. We will not develop technology just for the sake of developing technology. Instead, we will develop technology with a purpose — to support Pinterest’s mission. Context Let’s examine the context of product engineering at Pinterest and see what kinds of systems we need in order to provide the greatest possible experience for Pinners, as defined by our mission. Here are the key parameters of our technical environment: Complex product: Pinterest as a product is rather complex. The amount of data we generate, consume, process and serve is enormous. We have different surfaces requiring specific approaches to selecting presented Pins. For instance, the home feed requires personalization fundamentally different from Search and Related Pins in a Pin closeup. We have heterogeneous data — products and Rich Pins are very different artifacts than regular Pins. Finally, every surface has both organic and Promoted Pins, which are also fundamentally different in terms of corpus size, lifetime and user interaction patterns. Resources: As a growing company building a product for hundreds of millions of people, every engineering team at Pinterest could always use more resources.
We are a small organization that covers a lot of territory. Headcount allocation is one of the hardest decisions for the leadership team as there are always several different areas where added resources can improve the outcome for Pinners, Partners and Pinterest (always prioritized in that order). Ambition: As we grow our global user base, our systems must be geared toward achieving a long-term growth trajectory and not focus on small, incremental improvements. Strategy Principles Here are the key strategy principles defining how we approach technological advancement in general: Simplicity & Velocity: In alignment with our engineering principles, we focus on starting simple and then iterating. Our technical strategy is critical to ensuring we have a clear, well thought-out, yet flexible northstar for our engineering teams. This provides just the right amount of direction while giving us sufficient freedom to leverage the latest innovations. Everything we build needs to support rapid iteration. Scale: We build for impact systems that are Pinterest scale, looking ahead to billions of users and trillions of Pins. Our technical strategies ensure long-term thinking and allow us to anticipate and build for future scale instead of making short-sighted decisions that may require expensive rework later. Ownership: By keeping our strategies directional rather than strictly prescriptive, we encourage engineers to own local decisions about balancing velocity and quality. Additionally, the strategies themselves are developed with broad input from engineers across the company to ensure all teams have a real stake in charting the technical future of our stack. Additionally, we have identified the following principles to inform our strategic vision more specifically: Reusability: While focusing on the problem at hand, we look to see if there are other systems that could satisfy our needs. We join forces to build together, almost always crossing internal boundaries and often crossing corporate boundaries by using and contributing to open source projects. We think hard about how to bring use cases to a maximal common denominator. We consider many different ways of reuse and find the right balance with velocity.
Focused complexity: We are experts in our areas and need to be well-versed in technology outside of Pinterest. We are able to pick and choose the right cutting-edge technology and leverage key areas where we choose a complex solution. We accept complexity very deliberately and with deep understanding of the tradeoffs. Technical Strategies We have embodied these principles in the individual technical strategies below. We’re developing these strategies with appropriate input from technical leaders while avoiding unnecessary disruption to engineering teams. In the spirit of iteration, technical strategies are living artifacts and will continually evolve over time. Here are the current strategies that are either completed or in progress: Machine Learning: Machine learning is critical to the operation of many teams and technologies at Pinterest. A unified strategy helps us maximize the velocity of model experimentation. From this strategy, we developed a single model training and serving pipeline that powers the majority of both our organic and ads use cases. The technology developed from this strategy reduces the barrier to building new ML-based production applications at Pinterest. Content Distribution Infrastructure: Most core use cases at Pinterest serve organic or paid content based on a user input. Common to all of these are technical challenges like achieving low latency, supporting huge scale, avoiding system fragmentation, and so on. As such we have built a strategy based on a common set of building blocks and usage patterns that allow the right mix of reuse and customization. Among others, these building blocks include inverted indices with incremental updates, key-value stores, scatter-gather layers and graph traversal infrastructure. Data Management: Pinterest has a strong culture of data-driven decision-making via real-world experimentation. To ensure the continued success of our engineering and technical decision-making, we reinforce and maintain trust in our business-critical metrics, improve developer productivity, and increase ROI on data pipelines. The data management strategy track focuses on areas like data governance, quality, discovery, and encoding.
Data Processing: Building off the broader data management strategy above, this track provides strategic direction on specific logging, query processing, programming frameworks and data processing systems. Experimentation: Our iteration speed depends on the pace at which we can run experiments. Under the experimentation strategy, we have defined the evolution of our experimentation infrastructure and methodology to 10x experiment throughput. Cloud: The cloud strategy initiative aims to document our strategic approach to our foundational cloud infrastructure with a 2–4 year horizon. Clear infrastructure direction will ensure Pinterest remains highly available, resilient, performant, well-utilized, cost effective and predictable. Supported Languages: At Pinterest, we use a variety of programming languages in our work, including Java, Python, JavaScript, C++ and Go. Each language requires a certain level of broad support across our engineering organization. The languages strategy provides a framework for analyzing and building the language support required to develop our products quickly, securely and cost-effectively. Core Client Platform: This track charts out a strategic approach to building client technologies that produce a fast Pinner experience, align with platform conventions, take advantage of native device capabilities and quickly respond to changing experiments, network conditions, and server responses. API: The API track provides a coherent and consistent direction for all APIs and API endpoints at Pinterest. This includes both internal APIs for serving product features to first-party clients as well as external APIs for partners, third party application developers and third-party product integrations.
The strategies are live and evolving documents. They are used as a reference when starting new efforts as well as to onboard new engineers. The strategies also help drive internal clarity on technical issues that might span multiple organizations. Finally, the strategy documents are used to communicate with stakeholders outside engineering on the approaches used and the level of funding needed to support our core engineering goals. Now that you know more about our technical framework, check out our open engineering roles and join us!
https://medium.com/pinterest-engineering/looking-inside-the-technology-that-powers-pinterest-2e8bd1cfc329
['Pinterest Engineering']
2019-07-24 20:02:15.861000+00:00
['DevOps', 'Engineering', 'Machine Learning']
How to Build a Reporting Dashboard using Dash and Plotly
A method to select either a condensed data table or the complete data table. One of the features that I wanted for the data table was the ability to show a "condensed" version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present: Code Block 17: Radio Button in layouts.py The callback for this functionality takes input from the radio button and outputs the columns to render in the data table: Code Block 18: Callback for Radio Button in layouts.py File This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the dates selected, using the callback statement Output('datatable-paid-search', 'data'), this callback changes the columns presented in the data table based upon the radio button selection, using the callback statement Output('datatable-paid-search', 'columns'). Conditionally Color-Code Different Data Table Cells One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric's value; red for negative numbers, for instance. However, conditional formatting of data table cells has three main issues. There is a lack of formatting functionality in Dash Data Tables at this time. If a number is formatted prior to inclusion in a Dash Data Table (in pandas, for instance), then data table functionality such as sorting and filtering does not work properly. There is a bug in the Dash data table code in which conditional formatting does not work properly. I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide: Code Block 19: Conditional Formatting — Highlighting Cells The cell for New York City temperature shows up as green even though the value is less than 3.9.* I've tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition ("3" but not "3.9"). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash. *This has since been corrected in the Dash documentation. Conditional Formatting of Cells using Doppelganger Columns Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add "doppelganger" columns to both the pandas data frame and the Dash data table. These doppelganger columns hold either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug in which the decimal portion of a value is not considered by conditional filtering).
Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements: Code Block 20: Adding Doppelganger Columns Then, the conditional cell formatting can be implemented using the following syntax: Code Block 21: Conditional Cell Formatting Essentially, the filter is applied on the "doppelganger" column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding "real" column, Revenue YoY (%). One can imagine other uses for this method of conditional formatting; for instance, highlighting outlier values. The complete statement for the data table is below (with conditional formatting for odd and even rows, as well as highlighting cells that are above a certain threshold using the doppelganger method): Code Block 22: Data Table with Conditional Formatting I describe the method to update the graphs using the selected rows in the data table below.
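To make the doppelganger approach concrete, here is a minimal, hypothetical sketch of what such column definitions and conditional styles might look like. The column names Revenue YoY (%) and Revenue_YoY_percent_conditional come from the example above; everything else is an assumption, and the exact conditional-formatting syntax differs between the 2019-era Dash releases discussed here and current versions (for example, filter versus filter_query).

```python
# Hedged sketch only: filter on a hidden "doppelganger" column while
# styling the formatted "real" column.
import dash_table

columns = [
    {'name': 'Revenue YoY (%)', 'id': 'Revenue YoY (%)'},
    # Doppelganger column: raw numeric value, kept in the data but hidden.
    {'name': 'Revenue_YoY_percent_conditional',
     'id': 'Revenue_YoY_percent_conditional',
     'hidden': True},  # newer Dash versions use the hidden_columns property instead
]

style_data_conditional = [
    {
        # Filter on the raw doppelganger value (no commas or % signs)...
        'if': {'filter_query': '{Revenue_YoY_percent_conditional} < 0',
               # ...but apply the color to the corresponding formatted column.
               'column_id': 'Revenue YoY (%)'},
        'color': 'red',
    },
    # Zebra striping for readability, as in the complete table statement.
    {'if': {'row_index': 'odd'}, 'backgroundColor': 'rgb(248, 248, 248)'},
]

table = dash_table.DataTable(
    id='datatable-paid-search',
    columns=columns,
    data=[],  # filled by the date and radio-button callbacks described above
    style_data_conditional=style_data_conditional,
)
```

The key design point is that the filter condition and the styled column do not have to be the same column, which is what lets formatted strings stay sortable and filterable while the raw numbers drive the highlighting.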
https://medium.com/p/4f4257c18a7f#d574
['David Comfort']
2019-03-13 14:21:44.055000+00:00
['Dash', 'Dashboard', 'Data Science', 'Data Visualization', 'Towards Data Science']
Python’s Most Powerful Data Type
Using Dictionaries Dictionary view objects Some built-in dictionary methods return a view object, offering a window on your dictionary's keys and values. Values in a view object change as the content of the dictionary changes. This is best illustrated with an example: phone_numbers = { 'Jack': '070-02222748', 'Pete': '010-2488634', 'Eric': '06-10101010' } names = phone_numbers.keys() phone_numbers['Linda'] = 9876 print(names) The output of this code is dict_keys(['Jack', 'Pete', 'Eric', 'Linda']) . As you can see, Linda is part of the list too, even though she got added after creating the names view object. Access and delete a single key/value pair We've already seen how to access and delete a single key-value pair: >>> phone_numbers['Eric'] = '06-10101010' >>> del(phone_numbers['Jack']) To overwrite an entry, simply assign a new value to it. You don't need to del() it first. If the requested key does not exist, an exception of the type KeyError is thrown: >>> phone_numbers['lisa'] Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: 'lisa' If you know data can be missing (e.g. when parsing input from the outside world), make sure to surround your code with a try ... except . Get all the keys from a dictionary There are two easy ways to get all the keys from a dictionary: list() returns all the keys in insertion order, while sorted() returns all the keys sorted alphabetically. There's also the dict.keys() method that returns a view object containing a list of all the dictionary keys. The advantage of this object is that it stays in sync with the dictionary. It's perfect for looping over all the keys, but you might still opt for the list or sorted methods because those return a native list that you can manipulate as well. Check if a key exists in a dictionary You can check if a key exists inside a dictionary with the in and not in keywords: >>> 'Jack' in phone_numbers True >>> 'Jack' not in phone_numbers False Getting the length of a dictionary len() returns the number of key-value pairs in a dictionary: >>> phone_numbers = { 'Jack': '070-02222748', 'Pete': '010-2488634', 'Eric': '06-10101010' } >>> len(phone_numbers) 3 Looping through a dictionary
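As a short illustrative sketch (my own, not the article's code) of looping over the same phone_numbers dictionary, and of guarding a lookup that may raise KeyError as suggested above:

```python
phone_numbers = {
    'Jack': '070-02222748',
    'Pete': '010-2488634',
    'Eric': '06-10101010',
}

# Looping over keys and values together; items() yields (key, value) pairs
# in insertion order.
for name, number in phone_numbers.items():
    print(f'{name}: {number}')

# Looping over keys only; iterating the dict directly is equivalent to
# iterating phone_numbers.keys().
for name in phone_numbers:
    print(name)

# Guarding a lookup that may fail, e.g. when parsing outside input.
try:
    print(phone_numbers['lisa'])
except KeyError:
    print('No number stored for lisa')
```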
https://medium.com/better-programming/pythons-most-powerful-data-type-89628a9e1467
['Erik Van Baaren']
2020-11-17 20:36:36.251000+00:00
['Python3', 'Programming', 'Software Development', 'Python', 'Data Science']
Rust Adventures — A Java programmer understanding Rust Ownership
Hi there folks! As you know, I'm learning Rust as a 2020 goal, and today I decided to learn more about the key feature of the language: Ownership. Beginning My career up until now was based solely on Java stacks, so I built up some experience with Java, JavaScript and their libraries. For a long time I heard about the trials people from C and C++ went through managing the memory allocation of their programs, and how the Garbage Collector saved the day. The other side of the coin is that I always heard from low-level language programmers how poorly the VM languages performed. Sometimes, when I was writing programs that needed high performance, I wanted to use a low-level language, and for the most trivial programs I was just happy not to have to worry about memory management. As for concurrency… well, it's a pain no matter where you come from if you work with a pre-2010 language. So for some time the principal solutions we had were: 1 — Low-level programming languages, with the difficulty of memory allocation. 2 — VM languages with a GC that removed the memory problem, but at a cost in performance. Rust comes up with a third option: Ownership. This feature molds everything in the language and is crucial to understanding it. Stack and Heap People who come from a VM language like Java usually don't have to worry about stack or heap memory because of the GC, even though you are working with Strings, so let's try to summarize it a little bit. The stack is memory whose contents must have a predefined size and are stored in an ordered manner called LIFO (last in, first out). It's similar to a pile of plates: you put them on the top and remove them from the top. As this memory has a precise size and order, it's fast and secure. LIFO abstraction The heap is the opposite: it's a dynamic area that allocates values wherever they fit and hands back a pointer indicating where the value was inserted. This search has a cost and loses performance compared to the stack, but it doesn't require the size of a value's memory to be defined up front. In low-level languages it was the programmer's task to manage what is on the stack and what is on the heap, and even when the memory will be freed; in a VM language with a GC there is no need for the developer to worry about it. Rust uses ownership to keep track at compile time of what data is on the heap and minimizes the problems around it. But what are the problems we might have? Let's follow The Book and use the String as an example. Imagine a language that has a structure to hold a String value; it has some metadata, but the actual value will be allocated on the heap, because this String is mutable and we can't know its value at compile time. We can define a variable like that: variable text1 = String("value"); When the program runs it will calculate how much memory the text "value" needs and allocate it on the heap; the heap will give the variable a pointer to where this value is. Up until now everything is good, so in the next line comes the following code: variable text2 = text1; The language will now create a new structure with metadata for text2, but what about its value? How can we solve it? One solution is for each one to have its own metadata but share the pointer between them. What happens when we free both variables from memory, let's say first text2 and then text1? When text2 is released, "value" will be removed from the heap, and what will text1 release?
Exactly: we don't know, and it can cause problems in the application. It is known as the double-free error. You might be thinking, so let's duplicate the value, or in other words, make a deep copy. It certainly resolves the double-free error, but remember, the heap is not the fastest memory we have; the search for space and the following of pointers has a cost in performance, and we can end up with software that is slower than it should be, or that consumes a lot more memory than is advisable. So what does Rust do? Rust ownership has pretty simple rules, but with a lot of impact on the language. Here they are: 1 — Each value in Rust has a variable that's called its owner. 2 — There can only be one owner at a time. 3 — When the owner goes out of scope, the value will be dropped. A scope in Rust is a pair of {}; you can put it anywhere in your code and create a new scope. When it ends, the values are dropped as well. So what about our little example from before? Let's write it in Rust. The underscore before text2 is just to stop the compiler warning me that it is not being used, so up until now everything is ok: So why not print the values to see if everything is ok? The IDE already warns me that text1 used in the print statement is invalid, and the compiler confirms that. Do you remember the first and second rules? A value needs a variable as its owner, and it will only have one owner at a time. When we pass text1 as the value of text2 we are passing the ownership of "value" to it, so after that we cannot use text1 anymore. Let's see another example: Wait, did we just do the same thing with integers and it worked? How is that? Do you remember when we talked about the difference between stack and heap? Integer and scalar values have a known size at compile time, so they go on the stack; because of that, Rust implements a trait called Copy that makes a copy of the value in stack memory almost without cost. If you pay attention, the compiler says that String doesn't implement this trait: When Rust can't copy a value it moves the value, making another variable the owner. So in the String case only text2 has a value to release; text1 doesn't have a value anymore, and in the case of integers each one has a value of its own. We can call the clone method to solve this, but as we are speaking of heap memory this is not a desirable solution: Another context is about scope: ownership is passed even through parameters, because we are changing the scope from one function to another with {}: As we did before, scalar values will be copied as well if you pass them as parameters to another function: The return of a value changes the owner of the value as well; we can use it to move the value inside the function scope and return it back to the previous scope: We have another trick here: there is a literal value for String in Rust, the &str, and it needs a predefined value and size at compile time, so guess what? It implements the Copy trait! You noticed that it starts with &? It is called a reference in Rust. A reference is the way we can just borrow a value for another scope while keeping the ownership on the original value. Rust knows that the rightful owner of "value" is text1 and it is just lending it to text2 and the borrow function; when the scope ends only text1 has a value and only it will release memory.
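The original post shows these snippets as screenshots; below is a hedged reconstruction of the kind of code being described, using the variable names from the prose (a sketch, not the author's exact code).

```rust
fn main() {
    // Move: ownership of the heap-allocated value passes from text1 to text2.
    let text1 = String::from("value");
    let _text2 = text1;
    // println!("{}", text1); // error[E0382]: borrow of moved value: `text1`

    // Copy: integers have a known size, live on the stack and implement the
    // Copy trait, so both variables remain usable.
    let n1 = 5;
    let n2 = n1;
    println!("{} {}", n1, n2);

    // clone(): an explicit deep copy of the heap data; it works, but costs
    // an extra heap allocation.
    let text3 = String::from("value");
    let text4 = text3.clone();
    println!("{} {}", text3, text4);

    // Borrowing: &text5 lends the value to the function without moving
    // ownership, so text5 is still valid afterwards.
    let text5 = String::from("value");
    borrow(&text5);
    println!("{}", text5);
}

fn borrow(s: &String) {
    println!("borrowed: {}", s);
}
```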
If the borrow function tries to change the value, Rust will complain about that: If you need or want to change the value that the reference holds, you need to create a mutable reference with &mut; for that, text1 must be declared as mut as well: But for the sake of memory safety we can have just one mutable reference, and for this reason the following code will not compile: If you combine immutable and mutable references, the code will not compile either: With this, Rust can prevent data races, which happen when: 1 — Two or more pointers access the same data at the same time. 2 — At least one of the pointers is being used to write to the data. 3 — There's no mechanism being used to synchronize access to the data. References are kept alive just until their last use, so after that the variable is free to be referenced again, even with a mutable reference: This brings us to another compilation problem called a Dangling Reference, that is, a reference to a dangling pointer, which is a pointer that refers to something invalid. A method that returns a reference to a variable created inside its scope will make a dangling reference. As we said before, the value needs a variable as its only owner, and by the end of the scope Rust will release the variable and its value from memory; so we are creating a String inside the dangling_reference method and returning its reference, but when the method finishes the reference will point to nothing, because Rust will clean up the variable "s" along with its value. To fix it we must return a String; with that we move the ownership to another variable. So for references we can summarize: 1 — At any given time, you can have either one mutable reference or any number of immutable references. 2 — References must always be valid. We also have slices; they are another data type without ownership that represents a part of a collection, and Strings can be considered collections of chars. We can make them with the &[starting_index..ending_index] notation. In Rust sequence indices start at zero, so the minimum value starting_index can have is 0, and ending_index is exclusive, or in other words it is one higher than the last index you want. Please note that a slice is a reference to the values; the rightful owner of the value is the String it originated from. The ".." is the range syntax in Rust; by default it starts from 0 and terminates at the end of the collection, so we can change the code above and the result won't change: Note that this is valid only for ASCII characters; if you need to work with multibyte characters it can lead to errors. I have a story where I needed to work with graphemes because of that; later I'll write a more detailed article about this matter. Everything is OK, but this type &{unknown} is not the most descriptive one, so let's change the code to its real return type: As we discussed before, the &str is a reference and cannot be changed; with that we can make methods that receive a reference and return another: As we only use a reference, we are not moving ownership of the "Medium is great" value; it stays with text1 and ends with the main method's scope. But what happens if we declare text1 as mutable and try to clear it? Do you remember that we cannot have a mutable borrow and an immutable one? It happens with parameters as well, and the Rust compiler complains about that too. This makes an API easier to use and safer against errors as well. You might have noticed by now that the &str literal is immutable because it is a reference pointing to specific bytes in the binary.
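Again as a hedged sketch of the examples described here (not the article's screenshots): a mutable borrow, the fix for the dangling reference, and string slices.

```rust
fn main() {
    // One mutable reference at a time; it must come from a `mut` binding.
    let mut text1 = String::from("value");
    let r = &mut text1;
    r.push_str("!");
    println!("{}", r);
    // After r's last use, text1 can be used (and borrowed) again.
    println!("{}", text1);

    // Returning the String itself (moving ownership out) avoids the dangling
    // reference that returning &s would create.
    let owned = not_dangling();
    println!("{}", owned);

    // Slices borrow part of the String; text2 remains the owner.
    let text2 = String::from("Medium is great");
    let word: &str = &text2[0..6]; // "Medium"
    let rest: &str = &text2[7..];  // "is great"
    println!("{} / {}", word, rest);
}

fn not_dangling() -> String {
    let s = String::from("value");
    s // move ownership to the caller instead of returning a reference
}
```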
Conclusion Rust is for sure blazing fast and safe, and it is all thanks to Ownership, which makes possible a mix of the performance of low-level languages and the safety of garbage-collected ones. With that, Rust delivers code that is safe, very performant and fun to learn. I hope you enjoyed this story, until next time!
https://medium.com/analytics-vidhya/rust-adventures-a-java-programmer-understanding-rust-ownership-edbeb6b8001
['Floriano Victor Peixoto']
2020-05-15 15:47:21.613000+00:00
['Rust', 'Programming', 'Software Engineering', 'Software Development', 'Technology']
How the Online Pursuit of Fame at all Cost is Destroying Intimacy
How the Online Pursuit of Fame at all Cost is Destroying Intimacy We are giving away too much, too soon. To everyone. Photo by Marvin Meyer on Unsplash Without discernment, we shop our private lives piecemeal online in exchange for a click, a buck, the odd byline perhaps. Day in, day out, we crank out shockers inspired by our most intimate shame, those vexatious moments when others humiliated us. Sometimes, our body of work becomes an online manual to our sexual self, the one lusting for validation and compensation between the lines. One installment at a time, we disclose the secret life of our various pleasure orifices, our assorted fantasies as we attempt to exact revenge upon those who scorned or mistreated us by reliving those moments in print to spin them into gold. Often, we attempt to parlay our sexless and loveless lives into something edgy and marketable by passing it off as vulnerability. Soon, dignity devolves into incontinent pathos; we do not regain control of our narrative, we repackage it into something we can sell, striving to replicate what mainstream media has shown us works. As a result, the internet turns into Jerry Springer and everyone is writing National Enquirer style headlines. We reveal how the celebrities we slept with are, you know, humans like us. Only with more money, more fame than us and we want some of that so we take it as payment for having shared naked moments together. But kiss-and-tell is no longer the preserve of shoddy journalism, it’s a mindset now; we are all media, we are all potential celebrities. Even if our one claim to fame is that we talk when we have nothing of import to say.
https://asingularstory.medium.com/how-the-online-pursuit-of-fame-at-all-cost-is-destroying-intimacy-a4ddfa7a5822
['A Singular Story']
2020-01-23 17:44:23.661000+00:00
['Relationships', 'Philosophy', 'Future', 'Self', 'Social Media']
Customer Data Platform: The Hero behind User Engagement at Tokopedia
As one of the biggest Indonesian technology companies, Tokopedia has hundreds of millions of users who fulfill their daily needs through the various services it offers. One of the Tokopedia DNAs embodied in our daily work as Nakama* is to Focus on Consumer, as each of them is special and cherished. However, with the magnitude of our user base, the challenges we face to engage and embrace them are enormous. Consider the hypothetical case of John from the Tokopedia Internet Marketing team, who is in charge of building millennial user retention and at the same time promoting trending products from our merchants. Based on John's problem, we could formulate the problem statement as below: "Who should receive emails about the newly arrived Yeezy shoes?" "Which user should I notify when there are good deals on a particular Marvel comic book?" To solve the problem, and in turn reinforce our data-driven culture, we need a tool that can assist us in understanding our customers better. Specifically, customers are not limited to buyers; they can also be merchants or any partners within the Tokopedia ecosystem. The tool must be capable of organizing the data of millions of Tokopedia customers efficiently, and at the same time support multiple use cases of data retrieval. Illustration of customer segmentation (source tellius.com) Here comes the Customer Data Platform. A Customer Data Platform (CDP) collects and processes data from multiple sources and unifies them in a single data platform. This collection of customer profiles is made accessible to other systems, supporting multiple use cases from company-wide stakeholders. An example of its utilization would be as a data source for the Data Scientist team to build predictive models based on users' individual preferences and their spending patterns. John's internet marketing use case mentioned above can take advantage of the segmentation service that comes as one of the features of the CDP. The purpose of this service is to support marketing decisions based on certain customer criteria, divided into the following three types of data: User Data: email address, home and delivery address, etc. Transaction Data: loyalty points, payment status, etc. Behavioral Data: search and click history, wishlist, etc. Illustration of customer segmentation (source tellius.com) To elaborate on what the service does, we will get back to John, who plans to create a campaign to target "urban millennials who are inactive for the last three months". He needs to get the list of users belonging to the aforementioned segments and send them special deals on products that may draw their interest. John will input age (User Data) and activity (Behavioral Data) criteria in the segmentation dashboard that connects to the Segmentation Service. The service will handle the request and John will be notified once the process is complete and the results are available. The rest of this post will explain the technicalities of the process taking place in the background while John is waiting for the segmentation result. Illustration of the system In general, the CDP Segmentation Service is divided into the following parts: Data transformation Data from multiple origins is collected, cleaned and transformed into a single customer-profile database. We utilize the existing Tokopedia data ingestion platform, doing the data processing job on the data lake and managing separate storage in Google BigQuery, which holds the data in normalized form for the segmentation objective.
Cleansed and normalized data is appended on a daily basis using Apache Airflow. Security Layer The service is intended to accommodate broad use cases by different teams across the company. Consequently, users may have different roles and therefore disparate access to the data. To ensure that the user is authorized, auditing at the finest granularity must be enforced. Access to our customers' data is regulated down to the column level. Control layer The mechanics of the main segmentation system are abstracted in this part, including the normalization and translation of user input into the database-specific language. We use the typical Golang, Postgres and Redis stack. Additionally, the service logs and monitors all of the segmentation activity using Prometheus and Grafana in this module. Executor Acting as the last layer of the service, the executor abstracts the data storage from the logical layer and performs the data retrieval job defined by the user-specified criteria. Big-Data-capable tools live in this layer. At the end of the job, a notification is sent to report the status of the task. As each of the components mentioned is decoupled from the others, this setup provides scalability to the system. Changes in one layer are isolated, and each layer comes with an extensible set of tools it supports. The executor layer, for example, supports multiple ways and tools to retrieve data from our customer-profile database. You can think of this as the strategy pattern at the system level.
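As a purely illustrative sketch of the kind of query the executor might run for John's segment ("urban millennials who are inactive for the last three months"): the dataset, table and column names below are assumptions, not Tokopedia's actual schema.

```python
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT u.user_id, u.email
    FROM `customer_profiles.users` AS u
    LEFT JOIN `customer_profiles.events` AS e
      ON e.user_id = u.user_id
     AND e.event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 3 MONTH)
    WHERE u.age BETWEEN 24 AND 39                      -- User Data criterion
      AND u.city IN ('Jakarta', 'Surabaya', 'Bandung') -- "urban"
    GROUP BY u.user_id, u.email
    HAVING COUNT(e.event_id) = 0                       -- Behavioral Data: inactive
"""

# The control layer would normalize John's dashboard input into a query like
# the one above; the executor runs it and a notification reports the result.
for row in client.query(query).result():
    print(row.user_id, row.email)
```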
https://medium.com/tokopedia-data/customer-data-platform-the-hero-behind-user-engagement-at-tokopedia-f80658add046
['Yunita Ekawati Salim']
2019-06-26 16:18:54.641000+00:00
['Customer Segmentation', 'Tokopedia', 'Customer Data Platform', 'Data Engineering', 'Tokopedia Data']
“Stay Humble” 6 Insider Tips With Singer Songwriter, B. Howard
- Can you share the funniest or most interesting story that occurred to you in the course of your acting/ directing/performing career? Putting on a wig for "A Tale Of Two Corey's" was one of my funnier and stranger experiences recently. I was completely at a loss for words when I looked in the mirror lol but certainly honored to play my part in his story. - What are some of the most interesting or exciting projects you are working on now? Currently, I just released my new single "Nite and Day 3.0/ Girl You Gotta Know." I'm very excited about the energy of it. I'm also working on new music, so stay tuned for more great stuff! Also, I recently made my acting debut in the Lifetime film, "Tale of Two Corey's" which was a great experience and have a new Cartoon being developed in China for 4th quarter release. -Who are some of the most interesting people you have interacted with? What was that like? Do you have any stories? I really enjoyed spending time with the people in Legos. I met so many talented people there, and was fascinated by their rich culture. One of my fondest memories was standing by the ocean and I felt astounded by how much history this place had witnessed. -Which people in history inspire you the most? Why? There are so many but one in particular is Richard Branson. He has incredible work ethic and diversity, and it comes through in both his music and business ethic. -What do you do to "sharpen your craft"? Can you share any stories? I use my experiences to connect with people — it's important to live and engage in different experiences so that you can better connect with the people who are experiencing the stories within my music. -How have you used your success to bring goodness to the world? I donate parts of my royalties to different charities. One charity in particular that I support is Alicia Keys' "Keep a Child Alive" foundation. It's a wonderful group which fights to combat the physical and social impacts of HIV in children and families. -Can you share 6 "non-intuitive tips" to succeed in the music industry? • Hard work and determination always pay off. Hollywood is tough and perseverance is key. Artists need to be prepared to hear "no" and still keep pushing. • Make all the mistakes. Don't fear your mistakes, but rather use them to master your strengths. • You have to be prepared to work hard even after obtaining any level of success. You are never too big to stop working on your growth. • It's important to listen to feedback, but also to stand by your image and your strengths. • As artists it's important to support and strengthen one another. There is a lot to be learned by watching others, and those relationships can turn into collaborations. • Stay humble. -Some of the biggest names in Business, VC funding, Sports, and Entertainment read this column. Is there a person in the world, or in the US whom you would love to have a private breakfast or lunch with, and why? He or she might see this. :-) DJ Khaled, and I'd like to do a collaboration with him as well…he's got a great work ethic and it comes through in the quality of his work.
https://medium.com/thrive-global/stay-humble-6-insider-tips-with-singer-songwriter-b-howard-6467154a5978
['Yitzi Weiner']
2018-07-17 20:32:02.041000+00:00
['Inspiration', 'Culture', 'Wonder', 'Celebrity', 'Music']
What hurricanes teach us about the consumers and the economy…
What hurricanes teach us about the consumers and the economy… Deflation is the monster in the central banker’s closet What if I told you Amazon.com was in the business of predicting hurricanes? Seeing as they are getting into just about every business, it might not be a surprise. But that’s not the point here. What matters is the technology which produces this hurricane forecast map. It is exactly the same technology which enables Amazon to pull off same-day delivery. Let me explain. Knowledge Management and Storm Path Prediction The field of “Knowledge Management” (KM) breaks things down into three basic groups. There is “data” — which in meteorology would be a reading like temperature, atmospheric pressure, dew point, etc. The data originates from an instrument at a point in time. If we are looking at consumer purchases, a product would be a point of data (let’s use hot chocolate and marshmallows — more on that in a moment). When you take various points of data and bring them into context with each other you have what KM calls “information” — or data in context with other data. For hurricane forecasting, various meteorological data points are brought into a three dimensional information set called a “cube.” The first two dimensions would be like a spreadsheet. You might have latitude/longitude/altitude positions as rows and the various readings as columns. The spreadsheet itself would represent these readings and their locations at a single point in time. Time, then, becomes the third dimension of the cube. You basically have time slice spreadsheets stacked front-to-back as if in a filing cabinet. The third element of KM is Knowledge. In order to elicit knowledge from the cube you employ a sophisticated statistical algorithm. There can be any number of such algorithms — when you hear the weather reporter refer to “the models,” they are referring to various algorithms which analyze the information in the cube. These algorithms are fine tuned over time. We can go back to previous storm seasons and the information cube from a storm at its beginning (say, when it becomes a named storm), and we can take that “time slice” from the cube and run it through our model today to see what our model will “predict.” Of course, since we are taking information from the past, we already “know” how the storm will proceed. We can compare that with our model’s calculation and use the results to fine tune the model’s math. As a result, when a storm like Dorian forms, we have numerous finely tuned models. We take today’s time slice from the information cube we are building in real time and run it through the models. The models then create future time slices for our information cube. Those future time slices (think of the spreadsheet of rows and columns) is then the underlying data which produces the map we see on the news. Meteorology and Consumer Purchasing Now let’s go from one weather extreme to another. It’s snowing outside and very cold, so you settle down to a book and a cup of hot chocolate and marshmallows. Let’s say I own a hot chocolate company and I get from Big Data a comprehensive data set of consumer purchases of products like mine. I create an information cube, each slice showing me the store locations (rows) and the volume of sales of various products (columns — let’s just limit this to the hot chocolate and the marshmallows.) 
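As a toy sketch of the information cube and the kind of analysis described here (the file name, column names and products are all hypothetical):

```python
import pandas as pd

# One row per store and time slice, with weather readings and unit sales
# joined on the same place and time.
df = pd.read_csv('sales_with_weather.csv', parse_dates=['date'])

# The "cube": store locations as rows, readings and products as columns,
# stacked along the date index as the third (time) dimension.
cube = df.set_index(['date', 'store_id']).sort_index()

# Is colder weather associated with more hot chocolate and marshmallows sold?
print(df[['temperature_f', 'hot_chocolate_units', 'marshmallow_units']].corr())
```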
I can now create a time-based “heat map” — this is a lot like the storm path map in that I can hit “play” and see sales volume (think storm intensity) increase and dissipate. On this heat map, the colors will reflect the volume of sales — dark red will show me when and where hot chocolate and marshmallows are flying off the shelves. Now imagine we get meteorological data sets from the same period in the past. We bring the weather information into context with the consumer purchase information and create an entirely new set of information. When we press play on our heat map we see red and green ebb and flow to represent sales volume as it goes up and down. We also see weather data for the same times and places. Now what I am looking for is a statistically significant correlation between weather data (as it gets colder and stormier in the winter) and sales volume of hot chocolate and marshmallows. This was originally a heat map showing where fitness centers were located. Imagine the red shows us where our hot chocolate and marshmallows are flying off the shelves. Weather models are remarkably accurate — and getting better There is some consternation today about hurricane Dorian originally being forecast to hit the Florida coast, only now to be forecast to take a sharp turn north right before making landfall. Everyone is up in arms over the alarm because it was disruptive. But when we look at this strictly as an exercise in Knowledge Management the models were remarkably accurate, especially if we just look at the predictions made a day or so into the future. (This is why the maps show us a cone surrounding the path — the cone gets wider the further out the map goes in time, reflecting the decreased certainty about the prediction.) If we step back from the “breaking news” of a dangerous storm, we note that the run-of-the-mill weather forecasts we listen to on the news are really no different. The models are spitting out time slice spreadsheets a few days into the future and the guy or gal on TV is telling us what that data means for our commute or that picnic we had planned for Labor Day. But back to my hot chocolate and marshmallow sales… If I find a statistically significant correlation between weather data and sales volume, I can see where and under what weather conditions my product is selling best. I can then go to the weather models in the winter and I will “know” about a week in advance where the weather is going to be cold and stormy. I will also “know” this is likely to mean my sales volume will spike. What I cannot have, as a business owner, is for a potential customer to come to the shelf looking for my products, and not find them because I did not supply it to the store in the volume necessary to meet the demand. This is basically how Amazon pulls off same-day delivery — but only in certain markets. These markets have enough of the right kinds of data Amazon’s models require to produce highly reliable predictions. Amazon’s “map” shows them when and where various kinds of products will be in demand in a few days. This allows them to move those products into regional warehouses so, when they are ordered, the products are in a close-enough proximity to the customer that Amazon can deliver on the same day. And now a word from our sponsors…or not You might have found my subtitle odd: “Deflation is the monster in the central banker’s closet.” Janet Yellen as former Fed Chair? 
If you are my hot chocolate and marshmallow customer, and you have reason to believe I will lower my price tomorrow, why would you buy today? This “deflationary” mindset, if we extrapolate it across the economy, means supply chains start to back up with an excess of supply. If my hot chocolate and marshmallow supply chain starts backing up, in the minds of central bankers I will have to lower my price. This will only confirm my customer’s suspicions and they will decide to wait a little longer, hoping to get a better price tomorrow. Writ large, at least as the conventional wisdom goes, a depression-causing spiral begins. Expectations of lower prices mean transactions are delayed. That delay prompts producers to lower their prices to move product. That confirms the expectations of lower price, so that expectation persists and transactions are further delayed — causing prices to drop… Wash, rinse, repeat. Central bankers think this is a monetary problem. If my consumer has more money, they will be able to pay today’s price and my supply chain starts moving again. When the only tool you have is a hammer, every problem looks like a nail. So the central banker says we need to add money to the economy. But what if deflation is not, nor has ever been, a monetary problem? What if deflation is a supply-chain management problem? What if the deficit is not in the money supply, but in the “knowledge” of where certain products are likely to have enough demand to support today’s price. What if the real tool needed here is Knowledge Management? The central banker is worried about why people are not buying. As a businessman, I don’t care why people are not buying — I want to know where they are buying at today’s price and why they are buying at today’s price so I can supply the market at those places and times. It seems the central banking crowd has remained unaware that there is this thing in today’s economy that was not around during our grandparents’ time. This thing is the computer, and the fatal flaw of central banking (and academic economics) is they haven’t the foggiest idea of how real people in the real economy of today actually use computers to manage their supply chains. There are two kinds of people in the economy — those who do real things, and those who talk and write about those who do real things… Central banks and academic economists are clearly the latter. I actually learned to code on one of these! If they would simply stop intervening in what they clearly do not understand, the utility of today’s Knowledge Management technology would be allowed to come to the forefront of the economy. Instead, by injecting more and more money, businesses are basically excused from having to compete against each other to see who can best use today’s technology to manage their supply chains. Left alone, those who can, will. Those who can’t will go out of business (and probably end up teaching economics).
https://medium.com/swlh/what-hurricanes-teach-us-about-the-consumers-and-the-economy-541c469803d9
['John Horst', 'Cissp', 'Issap']
2019-09-04 06:11:01.186000+00:00
['Weather', 'Hurricane', 'Dorian', 'Central Banking', 'Federal Reserve']
The Linear Thinking Pitfall
The Linear Thinking Pitfall Reality rarely draws straight lines, but your mind doesn't know it (yet) "Compound interest is the eighth wonder of the world. He who understands it, earns it. He who doesn't, pays it." — Albert Einstein (Allegedly) There is a Twitter screenshot going around that says if you saved $10,000 every day since the height of Ancient Egypt, you wouldn't have as much money as the top five billionaires today. This is true. The Giza Pyramids were built in the 4th dynasty, around 4600 years ago. If you saved $10,000 every day for 4600 years, you would have $16,790,000,000, or around 17 billion dollars. At 16.8 billion, you would rank just under Ray Dalio at number 85 on the billionaire rankings. (1) The point of the screenshot was to point out how ridiculous it was for individuals to amass so much wealth. Implicit in this screenshot is the idea that nobody can make and save $10,000 every day for over four thousand years, and thus the billionaire wealth is illegitimate. However, if we examine the presuppositions of the screenshot we see that it conveniently falls victim to the linearity bias. Billionaires don't "make and save" money. Their money compounds. They get rich through the power of exponentiation. I am not here to defend billionaires (for I am neither one nor do I know any). However, I would like to offer an alternative example: suppose that you have $10,000 right now, and I offer you one of two choices: Use the 10k to purchase an investment that returns 10k every day for the next 50 years, OR Use the 10k to purchase an investment that compounds at 30% for the next 50 years Which option do you pick? We are naturally inclined to pick option #1: 10k a day! That means you make $3.6 million each year, or $182 million over the 50-year timespan. Sounds good, right? Indeed, though option #1 is a pretty slick investment in the abstract, it becomes nothing when you compare it to option #2. If you are compounding yearly at 30%, you will end up with $5 billion by the end of the 50 years. How does this work, you ask? The first year, you end up with $13,000. The second year, $16,900. The third, $21,970. It sounds pretty lame compared to option #1, which, by the end of the third year, has accumulated over $10 million. However, option #2 always grows at 30% every year while the other is constant. By year 18, option #2 will have finally made you a millionaire — and by year 50, one of the richest people in the world. What does this look like? Option 1 is the sum of 10,000 for 365 days for x number of years. Option 2 is 10,000*(1.3)^x where x is the number of years Now, let's zoom in to the first 40 years, and see where the two choices cross over. We see that had you picked option #2, you would have lived a mediocre life until you got extremely wealthy, seemingly overnight. Does this happen in real life? Indeed it does. Just take a look at the wealth curve of legendary investor Warren Buffett, for example: We see that Warren Buffett's wealth follows the exact same trend as option #2. Over time, the power of exponentiation prevails and Mr. Buffett sees almost overnight wealth. What we don't see here is that the linear relationship is hidden behind the exponential curve. With the first option, what is linear is the rate of accumulation. With the second option, what is linear is the time it takes to double an amount of money. With a linear rate of accumulation, the time it takes to double becomes longer and longer as the base amount grows.
With a linear time of doubling, the amount of accumulation grows at a higher and higher rate. With the first option, your money doubles once a day to begin with, then once a week, then once a year, then once every ten years, and so on. With the second option, your money is doubled once every 3 years or so, guaranteed. Perhaps then it would be better to rephrase the two options: Use the 10k to purchase an investment that takes longer and longer to double over time Use the 10k to purchase an investment that doubles over a constant time period This is illustrated through a log graph of both options: Suddenly, option 2 becomes linear while option 1 plateaus. Through reframing of the question, we are suddenly able to shed the bias of "wow! 10k a day", and reach the conclusion that option 1 is an inferior investment because it does not take advantage of exponentiation. Non-linear thinking in the real world We are accustomed to seeing the world in linear terms. Intuitively, we think linearly: if I buy a bag of popcorn for $3, then I assume twenty bags are $60. If it takes me 20 days to get 100 Instagram followers, then I assume it'll take another 180 days to get to 1000 followers. This is sometimes true, but far more often, it is a form of linear bias. In the real world, the current value of a variable often depends on its prior value. For example, the more Instagram followers you have, the easier it is to get followers. It is far easier to get from 18k to 19k than it is to get from zero to the first thousand. This could be the result of many factors. Maybe people are more likely to follow large accounts. Maybe the Instagram algorithm allows your content to reach more people, the more followers you have. Whatever the cause may be, the result is that followers at time x and followers at time y are not independent of one another. In other words, we have a non-linear relationship. What is likely true in the Instagram follower example is that the doubling period is constant. If it took you 10 days to get from 50 to 100, it will probably take you 10 days to go from 100 to 200. Eventually it will slow down, of course, but in the beginning the trend will hold. This means that for the majority of activities, an amount of effort in the beginning will produce orders of magnitude fewer results than the same amount in the end. Metaphorically, it's not so much that climbing the mountain gets easier after each hike, but rather that your steps literally become larger each time. This realization has major real-world implications. For example, whenever you are deciding to spend money you are making a trade-off between present enjoyment and future security. What is the trade-off when you spend $50 on a new pair of shoes you don't need? Your linear brain would think that $50 of enjoyment now is about $50 of enjoyment later, so you might as well spend it now. But this is actually not true! If you put $50 into an index fund in the stock market, it will return on average 7% per year. This means you will see roughly a 32-fold increase in "later" money (more specifically, 50 years later). This means that if, instead of buying a pair of shoes, you invest the $50, you will likely have $1,600 worth of enjoyment when you are older. However, you will not see this money for a very long time. Indeed, in the first year of saving, you end up with $54 — and you might think to yourself, "darn, four bucks.
I should have bought the shoes." Like how first-time investors often think that their returns are measly, and how you might see a $4 trade-off on buying shoes versus saving as not worth it, in an exponential relationship the initial effort (or time) often seems insignificant and not worth it. However, it is worth knowing that the majority of the returns will be found at the end of the road. Exponential relationships are an exercise in great patience. Practicing non-linear thinking "Rule №1: Never lose money. Rule №2: Don't forget rule №1" — Warren Buffett (rules for investing) Besides the obvious fact that you are losing money, why is this the number one rule? A linear mind might think, "if I can make a million dollars, surely I can make it back?" Yet, those who think this way are the same people who have never understood this enlightened quote by Warren Buffett. Warren thinks non-linearly. He realizes that the time and effort it takes to make a given amount of money depends on how much money you have in the beginning. It took him 30 years to get from a million to a billion, and if he goes back to a million, it will take him another 30 years. Indeed, to lose money is to squander the effort it took to produce returns in the beginning. Because only through growing the first 10 followers can you get to the next 100, and only through growing the first 1,000 can you get to the 100k you have always desired. If effort is rewarded non-linearly, as is the case for investing, then the most important thing suddenly becomes protecting the fruits of your time and effort at all costs. Then, it must be that Warren Buffett realizes that losing money is the equivalent of losing time. Perhaps we can reframe the quote: rule no. 1, never squander the time you have already spent; rule no. 2, never forget rule 1. We can see that the reason behind Warren's rule for investing is the exact same reason behind Robert Greene's 48 Laws of Power. In particular, Law 5 says: "Protect Your Reputation at All Costs." In this example we can see that reputation is a non-linear function, similar to followers or money. It is supremely difficult to gain a good reputation with one person. It is somewhat easier with 10. And, it is extremely easy to spread your good reputation if you have a million believers. Then, to squander this non-linear resource is to waste the time and effort it took to develop the first x number of pieces of goodwill. Practicing non-linear thinking will greatly reward your future self, whether it is with reputation, followers, health, or money. It may even make you better at making everyday decisions. Consider this final example from the brilliant HBR article on linear thinking: If you have two cars, one at 10 mpg and one at 20 mpg, and have only enough budget to change one car to become more efficient, which car do you change to maximize your savings? Change the 10 mpg to a 20 mpg Change the 20 mpg to a 50 mpg It turns out that despite the greater absolute degree of efficiency change in the second option, the first option saves far more fuel and money: over 10,000 miles, the 10 mpg car burns 1,000 gallons and a 20 mpg car burns 500, so the first swap saves 500 gallons, while going from 20 mpg to 50 mpg only cuts 500 gallons down to 200, a saving of 300 gallons. With non-linear thinking, you can be wealthy, healthy, reputable, well-followed, and environmentally conscious. Give it a try!
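As a quick, throwaway check of the arithmetic behind the two big examples above (my own sketch, not from the article):

```python
# Option 1: $10,000 added every day.  Option 2: $10,000 compounding at 30%/yr.
linear, compound = 0.0, 10_000.0
for year in range(1, 51):
    linear += 10_000 * 365
    compound *= 1.30
    if compound > linear:
        print(f"Compounding overtakes the daily $10k in year {year}")
        break

print(f"After 50 years: linear ~ ${10_000 * 365 * 50:,.0f}, "
      f"compound ~ ${10_000 * 1.30 ** 50:,.0f}")

# The mpg example: gallons burned over 10,000 miles at each efficiency.
for mpg in (10, 20, 50):
    print(mpg, "mpg ->", 10_000 / mpg, "gallons")
# 10 -> 20 mpg saves 1,000 - 500 = 500 gallons;
# 20 -> 50 mpg saves only 500 - 200 = 300 gallons.
```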
You don’t need to ask me, you don’t need to tell me. Just do it!
https://tengrong.medium.com/the-linear-thinking-pitfall-87f48a820ac
['Teng Rong']
2020-08-12 18:41:49.896000+00:00
['Investing', 'Thinking', 'Mathematics', 'Psychology', 'Compound Interest']
How We Built a Serverless E-Commerce Website on AWS to Combat COVID-19
2020 turned out to be radically different from what everyone had expected. COVID-19 has impacted our lives in many ways. As the pandemic spread, one question that baffled us was: what can we do with technology to combat the virus? A few days later, we got the answer. In a phone conversation in March 2020, Olalekan Elesin informed me of an idea he thought about while doing his regular grocery shopping at DM Drogerie. The idea is centered on a bracelet, worn like your typical watch, that dispenses sanitizer. Before this, sanitizer came packaged in bottles and cans of different sizes and kinds. Some are mounted on walls, and some are portable enough to be carried around in handbags. But not portable enough to be worn on the wrist. Having spent years building scalable applications and data applications at various startups, we understood that every idea must be validated. We have to know it's an idea that people want and are willing to pay for. As for us, this means doing some market research. Problem Discovery and Customer Validation We started out with one goal split into two hypotheses: Is this a big enough problem, and are customers willing to part with cash in exchange for the product? Validating such assumptions with software/digital products is relatively straightforward. One could build a simple landing page, explain the idea, and add a form — the lean startup way. But for physical products, this is quite different. Developing an MVP could mean designing and 3D-printing hand bracelets. After several iterations in coming up with the leanest and cheapest way to test, we arrived at one: a blogpost as MVP. In the blogpost, we embedded a pre-order Google form to collect some information about potential customers. The form included color, price, payment method and quantity fields. While doing market research, it's essential to collect potential customer contact details like email addresses, etc. If your product resonates with people, potential customers will not hesitate to give their contact details. Customers who provide contact details at the pre-launch phase are usually the first to be converted when you go live. Before our product hit the market, we recorded over $100,000 in pre-order bookings from mature markets such as the USA, Germany, Italy, France, and the United Kingdom through the Medium post with zero ad budget. More than 85% of customers were willing to pay online and have their SanitizerWristbands delivered to them. We validated our problem and customer hypotheses with these and other leading indicators: big enough pain, and customer willingness to pay. This informed our next decision to develop the first physical MVP. Manufacturing: An Uncharted Territory Physical goods manufacturing was uncharted territory for us. It was not long before we realized it's different from everyday app development. In manufacturing, you design and create a product and then replicate it repeatedly. In software, the product design is the product. This is not to say one cannot apply certain software development principles to manufacturing. The concept of lean, which is popular in tech, originated from the Toyota production system. With our first product design ready, we spoke to manufacturing companies to build the prototype. We learned quickly that our design was too cumbersome and would cost a substantial amount to produce.
Figure 1: An Earlier Iteration

Through customer feedback, and by trying to cut the production cost down to the amount customers were willing to pay, we arrived at the simplest possible design that works. We were ready, and it was time to put our store online.

Scalable eCommerce Website With $0 Commitment

There are many e-commerce platforms for online retailers that don't require technical expertise. Shopify and Square are popular choices. We wanted a platform that would allow us faster access to the market with little to zero cost commitment. Shopify and other popular platforms didn't meet our cost requirements, so we built a custom solution in 2 days with a $0 commitment. It's a static website hosted on S3, with serverless cart and inventory capabilities provided by Snipcart. The high-level architecture looks like this:

Figure 3: E-commerce Website Architecture

Right from the start, our goal was to automatically deploy changes made to our website whenever a PR is merged. We leveraged AWS CodeBuild and CodePipeline to achieve this and automatically deploy new changes to an S3 bucket.

Figure 3: CI/CD with AWS

We sent a campaign email to customers who had pre-ordered before we launched.

Automating Order Fulfilment

With orders coming in, there was yet one more problem to solve: automating order fulfilment. Our warehouse and logistics partners are in China, so we had to find a way of notifying them every time an order is placed. For a few orders per day, sending an email with an attachment to our logistics partners suffices. As the number of orders per day grew, this became tedious, and automating the process became a necessity. To automate order fulfilment, we created a serverless "order fulfilment addon" whose architecture looks like this:

Figure 5: Automating Order Fulfilment (WIP)

Making Informed Decisions with User Behaviour

Our conversion goal was to get potential customers to buy our product. To reach this goal, we needed to learn about the things that interrupt the user flow, right from landing on the website to actually placing an order. For us, this means conducting several A/B and multivariate tests weekly and implementing whatever gets us closer to our goal.
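The post stops at the architecture diagram, so below is a minimal, hypothetical Python sketch of what the core of such a fulfilment addon could look like: an API Gateway route receives the store's order webhook and forwards the shipping details to the logistics partner via SES. The webhook field names, environment variables and email addresses are illustrative assumptions, not taken from the original article.

```python
import json
import os

import boto3  # AWS SDK for Python, available in the Lambda runtime

ses = boto3.client("ses")

# Hypothetical configuration -- replace with your own values.
SENDER = os.environ.get("FULFILMENT_SENDER", "orders@example.com")
LOGISTICS_PARTNER = os.environ.get("LOGISTICS_EMAIL", "partner@example.com")


def handler(event, context):
    """Triggered by an API Gateway route that receives the cart provider's webhooks.

    For each completed order, forward the shipping details to the logistics
    partner by email instead of compiling attachments by hand.
    """
    payload = json.loads(event.get("body", "{}"))

    # Only act on completed orders; ignore other webhook event types.
    # The event and field names below are assumptions about the webhook shape.
    if payload.get("eventName") != "order.completed":
        return {"statusCode": 200, "body": "ignored"}

    order = payload.get("content", {})
    lines = [
        f"Order: {order.get('invoiceNumber')}",
        f"Ship to: {order.get('shippingAddressName')}",
        f"Address: {order.get('shippingAddressAddress1')}, {order.get('shippingAddressCity')}",
        f"Items: {[item.get('name') for item in order.get('items', [])]}",
    ]

    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": [LOGISTICS_PARTNER]},
        Message={
            "Subject": {"Data": f"New order {order.get('invoiceNumber')}"},
            "Body": {"Text": {"Data": "\n".join(lines)}},
        },
    )
    return {"statusCode": 200, "body": "forwarded"}
```

In a sketch like this, the handler reacts only to completed orders, so the volume of notifications stays proportional to real sales rather than to every webhook event.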
https://medium.com/swlh/what-we-learned-from-building-a-serverless-e-commerce-website-on-aws-to-combat-covid-19-2b66155f9b08
['Samuel James']
2020-10-13 19:58:44.369000+00:00
['Product', 'Manufacturing', 'AWS', 'Serverless', 'Ecommerce']
#RealLifeTransAdult
#RealLifeTransAdult

Electronic engineer, Katy Montgomerie

Katy Montgomerie is a British trans woman who works as an electronic engineer, designing computer chips that make the internet work! And she is great at it!

GR: Katy, what has been the most difficult situation as an electronic engineer?

KM: Probably the most difficult thing I've done at work was coming out to everyone. I'd worked with most of them for 5 years before coming out, and standing up in front of 30 people telling them that tomorrow you're coming in as a woman is terrifying! It went really well though.

GR: Wow. That is amazing. Can you recall any other moment where you felt so loved and accepted?

KM: My coming out wasn't one single big event, it was a series of one-on-one conversations over a series of months. Every time I told a friend about being trans I was dying of fear and worry, and every time they showed me love and respect. In most cases I felt much closer to them after sharing such a major part of my life with them. I still cannot get over how amazing all my friends are, how much I love them and how grateful I am for how much they love me.

"The more you tighten your grip, Tarkin, the more star systems will slip through your fingers." Leia — Star Wars

GR: What's your view on the women's rights movements of these last years?

KM: Women's rights is one of the things that is most important to me; I spend literally hours every day discussing and debating feminism on the internet and with friends. It's great to see things like the #MeToo movement and Australia's recent equal marriage plebiscite making progress, but at the moment it does seem every step forward women take, they are forced to take another backwards elsewhere, for example the Trump administration's actions to restrict the reproductive rights of women. I think the biggest thing at the moment where actual positive change is possible is Ireland's [upcoming] referendum on abortion on the 25th of May, look it up if you don't know about it, get involved! #RepealThe8th

*editor's note: the referendum was approved and abortion is now legal in Ireland

GR: What would you recommend to 16-year-old you?

KM: This is the kind of thing I think about a lot… #nerdlife. I've actually got a secret passphrase that I can say to myself from the age of about 8 onwards that would prove that I was in fact me from the future, so I'd open with that (obviously I can't say that here), then I'd say "dump your abusive partner; only sign up for 3 years at university; you're actually a trans woman, it's great, don't panic; invest in bitcoin, sell at £10k".

GR: Well Katy, you will reach hundreds of young LGBTQIA people, so what would you like to tell them?

KM: You are mega! There are going to be some things in your life that are harder for you than for other people, but that doesn't mean you can't ace it! You can have a happy life surrounded by loving friends and family, you can have a great job, you can do any hobby that you like! Hang in there ❤

http://www.noh8campaign.com/
https://medium.com/join-the-gender-revolution/reallifetransadult-4133bd4121ce
['Regina Carbonell']
2020-10-09 11:55:55.920000+00:00
['Engineering', 'LGBTQ', 'Transgender', 'Women In Tech', 'Women']
You Should Expect Equal Pay for Equal Work at Your New Remote Job
#2. Differences in government spending across countries

Public spending enables governments to produce and purchase goods and services, in order to fulfill their objectives — such as the provision of public goods or the redistribution of resources like social protection, education, and healthcare. Recent data on public spending reveals substantial cross-country heterogeneity. Relative to low-income countries, government expenditure in high-income countries tends to be much larger (both in per capita terms, and as a share of GDP), and it also tends to be more focused on social protection. — Our World in Data

In India, the government spent about 1,700 US dollars per head (adjusted for Purchasing Power Parity) in 2015; while in countries such as Norway, that figure is over 30,000 US dollars (PPP-adjusted), and in the USA it is over 21,000 USD. This lack of government spending is ultimately passed on to citizens, who have to pay for it with their own money — so it's reflected in their cost of living. A poor public schooling system means that employees need to send their children to expensive private schools. A broken public hospital infrastructure means that people need to use expensive private hospitals for healthcare. And many of these private enterprises are not above profiteering in times of crisis. How does cost-of-living-based compensation take into account differences in government spending across countries? Shouldn't employees be compensated for this difference?

#3. Relocations get complicated at best and unfair at worst

What happens if I relocate to a lower-paid region? Will I be compensated differently? Companies like GitLab are pretty transparent about this. "Yes, you take a pay cut". But what happens if I'm living in a cheap city and decide to move to a more expensive one? I asked the CEO of GitLab. Here's what he replied:

Coming from India, I know for a fact that a lot of people in lower-income countries will jump at this opportunity. What happens if I choose to be a digital nomad, changing cities every couple of months? What if I choose to get an official address in some expensive city while I actually live in the suburbs?

#4. Loose definition of "cost of living"

The most common argument against equal pay for equal work for remote employees is the difference in housing prices across cities or countries. So how do you define the "cost of living"? Is it just the housing prices? And groceries? Restaurant bills, maybe? All that seems incomplete — it's only a small percentage of many people's actual costs of living. What about the costs of caring for an elderly parent? What about living a single life versus being married? For that matter, how about having a stay-at-home spouse versus having a spouse in a high-paying job? What about the number of kids one has? Number of dogs? Cats? What's included in the cost of living? More importantly, who defines it? Should it be the employee who's actually incurring these costs? Or should it be the employer who's paying the employee?

Other tough questions

Who dictates the proportion in which I should be spending my money? Electronic devices from big brands are priced the same regardless of whether they are sold in the USA or Brazil. A MacBook would cost you the same in every city of the world (except for the import duties that people outside the USA probably need to bear). So, what if I choose to spend just 10% of my income on housing expenses and 30% surrounding myself with the latest tech gadgets in the world?
And maybe another 40% investing in NASDAQ stocks? It sure doesn't make sense to have 100% of my salary reduced based on just 10% of my expenses.

Where does the leadership in the company live? Leadership in companies that offer COL-based compensation often lives and works in high-wage markets, but they might feel differently if they were subject to lower pay for the same work.

How do you account for the costs of reduced opportunities that employees who don't live in primary talent markets incur? People who don't live in the tech hubs of the world might have to bear costs in the form of reduced networking advantages and fewer alternative job opportunities.
https://medium.com/better-programming/you-should-expect-equal-pay-for-equal-work-at-your-new-remote-job-beaa9a2d29a4
['Nityesh Agarwal']
2020-07-21 16:50:39.083000+00:00
['Remote Work', 'Software Development', 'Startup', 'Work', 'Programming']
Why Today’s Vinyl Resurgence is Nostalgia For Another Era
Albums, in theory, are themselves a singular concept, made of smaller cohesive components or layers (songs). The singles are only part of the story. Still, I had never really listened to a full album until Beyoncé dropped in 2013, and then again a few years later when I listened to Beyoncé’s full visual album, Lemonade. Before then, I’d only flipped through singles on this playlist or that. I’m 26 — this could be generational. It could be the byproduct of being young in the digital age with digital downloads and streaming. Still, I have this nostalgic haze of a vision that things were different decades ago; as though, in the 60s and 70s, we used to really listen to an artist’s album from beginning to end—and to stop somewhere in between would be to miss something from the story. And yet, the onset of the digital age made it easy to buy and download singles, which caused album sales to plummet in all formats. Some artists must have seen this coming. When iTunes was created in 2001, several of them pushed back against the idea of their albums being fragmented, cherry-picked, and sold for 99 cents a single. But ultimately, the digital song sales won. I know this because fragmented singles are all I’ve ever known of music. And when I noticed hipsters buying record players and playing vinyl albums, it seemed like such a novel concept to me. The novelty was not only the physical records themselves, but also the idea of listening to full albums.
https://ktmar10.medium.com/why-todays-vinyl-resurgence-is-nostalgia-for-another-era-fb89dd37a2c6
['Katie Martin']
2020-12-29 21:26:37.877000+00:00
['Self', 'Mindfullness', 'Music', 'Digital', 'Lifestyle']
Cracking Facebook's Machine Learning SWE Interview
Photo on Unsplash by Alex Haney

Cracking Facebook's Machine Learning SWE Interview

I recently passed Facebook's Machine Learning Software Engineer (Ph.D.) internship interview. In this post, I share my interview experience and the resources I used to prepare. To cover the complete interview process, I have divided this post into separate events based on the timeline. This will help you better evaluate the process and the preparation time required for each stage of the interview.

Important Takeaways (read further to learn about each stage of the interview):

Give yourself time to prepare; the more time you take to prepare, the better the odds.

Try to solve ~150 coding questions before actually appearing for the interview.

Practice coding in an actual interview scenario; don't just solve questions. Use additional resources like binarysearch.com and pramp.com to get acquainted with the interview tension and time-constrained coding.

Do a thorough walkthrough of your resume; you need to know every single detail of your projects and be able to explain why you did things a certain way.

About Me (at the time of the interview)

If you are looking to interview for a similar position, it is important to evaluate the profiles which get picked for the interview process. Now, I don't know exactly how my profile stood in the pool of candidates, but I am sharing my resume as a sample profile that got picked for such interviews.

My resume at the time of the interview. My Google Scholar page at the time of the interview.

As you can see, I had completed 3 years of my Ph.D. by this time, publishing mainly in the areas of Machine Learning, Data Visualization, and Computer Vision. I also applied for this position during my third year, but I wasn't interviewed since my graduation date was more than 2 years after the date of the application. Judging from this, it is evident that Facebook prefers hiring candidates who are graduating around a year from the date of the internship, as they have a higher chance of joining the team after graduating.

September 1st: The Recruiter Call

I interviewed for the Software Engineer, Ph.D. Intern/Co-op (Machine Learning) position at Facebook. It all started when I received an email from one of the recruiters asking if I was looking for an ML SWE internship for the summer of next year, and further asking the questions listed below:

About your PhD research

Month and year of expected graduation

Preference of locations for Facebook offices, i.e. Menlo Park, New York, Boston and Seattle

Are you actively interviewing for other positions

September 3rd: Providing Availability

Since it was still early in the hiring season for summer internships, I was asked to provide my availability within the next two months. Since I wasn't at all prepared for the coding interviews, I chose dates close to the end of October, so as to give myself ample time to prepare. In the meantime, you are asked to fill out the work authorization and other forms on their careers website. Following all this, my interviews were scheduled for October 26th, with two interview sessions, each 45 minutes long. As per the listings on their careers website and recruiter suggestions, you are expected to solve two medium/hard coding questions in each of the sessions. Facebook uses CoderPad to conduct their interviews, where your code is shared with the interviewer while you are on a video call with them on BlueJeans.
September 3rd — September 10th: Checking out Facebook's prep resources

The best part of interviewing with Facebook is that they want you to succeed as badly as you want it for yourself. Hence, they have prepared extensive resources to help you prepare well for these interviews, and they try their best to answer every question you might have about the process. So, I started off by spending a week carefully going through every resource my recruiter shared with me. This turned out to be the best decision in this process, since it let me clearly understand what acing a coding interview takes. These resources are listed below; make sure to check them out if you are planning to appear for any coding interview.

Facebook real interview session videos and guide: link. This is by far the most comprehensive guide to prepare for any technical coding screening. It lists all the important videos, tutorials, and some (easy) sample questions to help you brush up your coding skills.

Facebook's technical screening website: https://www.facebook.com/careers/swe-prep-techscreen?hc_location=ufi

A good read: https://medium.com/@XiaohanZeng/i-interviewed-at-five-top-companies-in-silicon-valley-in-five-days-and-luckily-got-five-job-offers-25178cf74e0f

Interviewer office hours: This is a unique thing, not very commonly seen in this area of interviewing. Facebook allows upcoming interviewees to talk to one of the employees about their questions on the technical screening rounds. Also, there is a mock interview session that can help candidates understand the process (dos and don'ts) better. Facebook hosts office hours for upcoming interviewees to talk to real Facebook interviewers.

September 11th — October 25th: Coding and Mock Interviews

Mock Interviews (pramp.com) — 32 sessions

I cannot emphasize enough the importance of mock interviews in getting ready for a coding interview. Of course, you need to practice solving problems, but it is equally important that you get used to solving problems in a tense, interview-like environment. It's more a matter of mental training and practicing how to think about an unseen problem during an interview. I really started enjoying these mock interviews, and they served as a warm-up round before the actual interview, so I ended up doing a mock interview even on the day of my actual interview, to get rid of the cold start my brain suffers from :D.

My mock interview sessions during the time of preparation.

Leetcode ~150 questions (September 11th — October 14th)

I solved ~70 top Facebook interview questions and ~80 top interview questions (Medium and Hard) from Leetcode. Since time was limited and I only spent <4 hrs/day preparing for the interview, if I was stuck and had no clue about a problem, I would look at the solution. Also, I tried not to spend more than 30 mins on each problem. For my notes for last-minute revision of the problems I solved, please refer to this post.

Top Facebook and general interview question decks on Leetcode.
BinarySearch.com — Race against time

The website binarysearch.com has many cool features for competing against other competitive programmers, but one feature which proved very helpful in my preparation for the interview was time-constrained coding practice. You can practice company-focused top interview questions across four difficulty levels with a running clock, for example, 30 mins to solve a hard problem. This is a great way to get used to the pressure of an interview, particularly with Facebook, where you are expected to solve 2 problems in around 35 mins during an actual interview. It is important to practice unseen problems and solve them under a tight time constraint to be able to perform your best during the interview.

Create Rooms for time-constrained problem solving on binarysearch.com

Revision (October 14th-25th)

Revision plays an important role when we try to learn too much in a relatively short period of time. It proved useful to mark important questions and take notes while I was solving the coding problems during my practice phase, so I could quickly revise what was needed. Also, I continued practicing with mock interviews and time-constrained coding during this phase. You'll notice that by this time, after giving several mock interviews, you'll be more confident going into the interview than before.

October 26th: Day of the interview

Sleep well before the interview day, and do a mock interview before the actual interview if your brain suffers from a slow-start problem. On the day of the interview, my experience was actually very similar to the mock interviews I had been giving for days. The interviewers were very nice, soft-spoken, and very clear in their communication about the problems. I was asked to solve two problems in each of my two interviews. The basic technique I used to solve the problems during an actual interview consists of six steps, thanks to Gayle Laakmann and the interview tips I shared in the previous section. The steps are:

Ask clarifying questions: Make sure you understand the question completely. For example, what's the data type of the array, is there any constraint on the length of the array, etc.

Discuss your algorithm over an example: After you understand the problem completely, make sure to come up with a medium-sized example to walk the interviewer through your algorithm. The example shouldn't be an edge case at this point.

Explain the time and space complexities: Make sure to point out the space and time complexity of your solution at this point and confirm that the interviewer is okay with the solution you provided.

Code: Once you know that the interviewer is okay with your solution, code it.

Code walkthrough: Walk through your code with the example you took previously and make sure everything is in place.

Discuss edge cases: At this point, when your code works for the general case, try covering all the edge cases.

October 27th: Coding Round Result

I received an email from my recruiter sharing my feedback and discussing the next, team-matching rounds.

A portion of the email from the recruiter discussing the coding round results.

October 28th — November 9th: Team Matching Preparation

Once you clear the coding round, Facebook technically wants you to join, but the offer depends on a successful team match. Based on the location preference you provided to your recruiter in the previous emails, you'll be given a list of teams currently looking for interns at those locations.
The first step is to fill out a survey of the teams you'd be most interested in interviewing with. I provided Menlo Park as my first preference and was asked to provide the names of 5 teams I would like to interview with. This number depends on the location and the time you are interviewing with Facebook, so if it's not 5 for you, don't panic. The Google form covers all the teams and a short summary of what each team is working on. This is to help you decide and come up with a ranked list of your favorite teams.

Survey to provide preferences of the teams based on your research interests.

After the preference is provided for some number of teams (depending on location and time of the interview), the recruiter will schedule interviews based on the feedback team managers give after reviewing your resume. Based on this feedback, my recruiter scheduled two team matching calls for me, with my first and third priority teams.

An email with the team matching calls.

Preparing for team matching rounds

A typical team matching interview call will have the following structure:

5–10 mins: Intros / Interviewer Team Overview

25–30 mins: Research Experience / ML Design Questions

5–10 mins: Q&A

Since this is a machine learning position, you'll be expected to know the basics of Machine Learning, Data Science, Probability and Statistics, and your research area (e.g. Computer Vision). There is a very high chance that you'll spend the 45 mins of your interview just discussing your resume and related projects, while the interviewer ponders the potential projects you might be interested in working on in case you end up joining the team. But you should be prepared for any design and basic ML questions which might come your way, for example: Why did you evaluate your XYZ system with method A instead of method B? So I did spend some amount of time revising my courses and prepared the revision notes which you can refer to from the links listed below:

But for the most part, I focused on revising my projects and papers, and preparing for potential questions and reasoning that could follow up in the discussion.

November 10th: Team Matching Interviews

My experience with the two team matching interviews was great; you get to know the teams and your potential manager. For me, both interviews were much less technical than the coding rounds, and the focus of the conversations was solely on getting to know me better. We discussed my research interests, my projects, the current projects the teams were working on, and the potential projects that I could be a part of. In the midst of these questions, there were instances where I had to explain the technical details of the tools I used in my papers/projects, so make sure to cover that before the interview. And it was just 45 mins per call, so time just flew by. The whole point of these interviews is not to reject you, but to come up with good potential projects that you can take on as an intern, which are relevant to both you and the team. In case you don't match with the teams for some reason (you don't want to join the current teams after interviewing, or the manager couldn't find any good potential projects), there might be further rounds with other teams from the list you provided, but that's not guaranteed according to Facebook's terms and conditions.

November 12th: Offer Letter

I got a call from my recruiter two days after the interview, asking about my preference among the teams I would like to join after the team matching rounds.
This preference is then matched with the interview feedback they have for me from both teams. This is used to decide whether or not you passed the team matching rounds; it's pretty subjective. Since my first-preference team had accepted me as an intern candidate after the interviews, I successfully matched with that team.

Things I could have done better

I took a lot of time (~2 months) to schedule my coding rounds. Though that time was required to practice the questions, I could have had more options with the teams if I had cleared the coding rounds a bit earlier in the hiring cycle. It is important to note that the choice of teams given to you keeps shrinking with time, as more and more teams get their positions filled in the meantime.

Conclusion

And that's it for my whole experience with the interview process. It's more focused on your coding skills, since the purpose of this internship is to scale existing machine learning algorithms to Facebook's large infrastructure, not to write papers discovering new techniques. I was really surprised to learn how hard it becomes to apply even trivial ML algorithms at the scale on which Facebook operates. Make sure you practice hard and keep calm during the whole process; things will eventually work out. Good luck!
https://medium.com/swlh/cracking-the-facebooks-machine-learning-swe-interview-d625133fe966
['Anjul Tyagi']
2020-11-24 22:57:38.828000+00:00
['Data Science', 'Interview', 'Facebook', 'Machine Learning', 'Interview Tips']
AWS Lambda Security Checklist — Don’t become SecureLess
One of the benefits of Serverless architectures, and AWS Lambda specifically, is that there is #NoOps required to get your code live. You provide the code, and deployment, patching and hardware are abstracted away from you. The issue is, there is still a server. Hot Lambda invocations (invoking the function while the cloud provider still has your runtime and code provisioned) can allow information to leak across invocations, networks are still a thing, underlying operating systems change, and you're adding code dependencies. From experience on several serverless projects, this is a checklist of security concerns to look out for.

When the temporary is persistent

The /tmp directory is the only place you can write to disk during a Lambda function invocation. It's useful for persisting connections and optimizing caching, but it does break the sandboxing concept of Lambda. Issues can arise if any application data, metadata or logs can be accessed by your (or injected) code. The best practice is, like your parents always told you, to clean up after yourself. Always remove any file written to the /tmp directory unless you are explicitly keeping it to optimize performance and have considered all the potential security risks.

Separation of Lambda Concerns

Separation of concerns is always a good way to win an architectural debate, and it's sound advice. One of the advantages of running code in a cloud environment like AWS is the ability to lock down permissions based on roles. In AWS, IAM policies allow fine-grained permissions over resources. If you're building a large monolithic application in a container environment, policies don't help you as much as they could. If, on the other hand, you're in a serverless environment and have broken down your application into fine-grained "micro-lambdas" that do one thing and do it well, then you can apply fine-grained policies that prevent your code (or third-party code) from accessing resources it shouldn't. Keep your Lambdas fine-grained, with a separation of concerns.

Don't go wild with IAM

As discussed above, IAM is a powerful tool to lock down security. Serverless development is massively empowering to the development team, in part because it allows them to work independently of a dedicated ops team. Spinning up new environments and microservices is easy. The risk is that badly configured IAM policies can leave Lambdas open to doing things their (or third-party) code should not be doing. If we have the ability to send an email, should all Lambdas in the project be able to send emails? Or run a machine learning model? Or access an RDS instance? Or call another Lambda? Ensuring your IAM policies grant the least privilege needed to perform a specific task will reduce the risk of a function doing things it's not supposed to do and mitigate the impact of such attacks. The simple version of this is preventing wildcards in roles and policies, but a more advanced version can be achieved: CloudFormation linters on the generated CloudFormation, a basic grep, or Serverless Enterprise Safeguards can provide automated protection from basic over-privileging as part of your CI pipeline. On top of this, training, code review and policy audits can provide further quality assurance. Ensure all IAM policies allow the least privileges needed to perform their task. Train your team on this principle and ensure basic checks are enforced in CI.

Gateway to Heaven

Don't expose Lambda functions to the wider Internet unless strictly needed.
If they are exposed, do not expose them for direct invocation, but put them behind an API Gateway. API Gateways provide DDoS protection, rate limiting and simple integration with "Authentication as a Service" providers like Cognito or Okta. Avoid exposing Lambda functions to the Internet, and if they are exposed, only allow invocation through API Gateway.

Know what's happening

Logging is nothing new, but the flexibility, granularity and distributed nature of serverless architectures make observability more challenging. CloudWatch logging is the basic requirement; adding X-Ray is a must if Lambda functions are chained or events are going through multiple services. On top of natively supported logging, several serverless-specific logging providers have emerged. For example, Lumigo and Thundra are viable options to add extra observability and alerting. Taking this a step further, logging is something to consider when designing your architecture. Splitting functions so that logging is clear and recoverable can become key as your architecture and request load scale. Basic logging is a given. Add onto this X-Ray logging and specialised third-party serverless logging providers.

Don't forget the basics

On the whole, serverless changes a lot about how you build applications, but the fundamentals of security do not change. You still need to follow best practices when it comes to application code, access control and encryption. Keep up to date with the OWASP Top Ten and apply those items that are relevant to your serverless architecture. There is now a Serverless OWASP Top Ten that is also worth keeping your team up to date with. Keep applying basic security principles and use the OWASP Top Ten as your reference.

In Conclusion

Serverless architectures reduce a lot of the ops and patching work needed to ensure security and availability. That said, they also introduce new attack vectors that are not as well known, their simplicity lets you get away without knowing how the underlying machine works, and security basics don't go away. The above principles are based on our team's experience across serverless projects. Any comments, queries, and additions are more than welcome.
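None of the checklist items above come with code in the article, but the "/tmp cleanup" advice is easy to illustrate. The following is a small Python sketch of my own (not from the post) showing a handler that writes a scratch file and always removes it, so nothing lingers into the next warm invocation; the handler name and scratch contents are placeholders.

```python
import os
import tempfile


def handler(event, context):
    """Illustrative handler: write a scratch file to /tmp, then always remove it.

    /tmp persists across warm invocations of the same Lambda instance,
    so anything left behind is visible to subsequent requests.
    """
    fd, path = tempfile.mkstemp(dir="/tmp")  # /tmp is the only writable path in Lambda
    try:
        with os.fdopen(fd, "w") as scratch:
            scratch.write("intermediate data for this request only")
        # ... do the real work with the scratch file here ...
        return {"statusCode": 200}
    finally:
        # Clean up after yourself so nothing leaks into the next warm invocation.
        if os.path.exists(path):
            os.remove(path)
```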
https://medium.com/serverless-transformation/aws-lambda-security-checklist-dont-become-secureless-6ec467271b77
['Ben Ellerby']
2019-09-07 12:17:41.880000+00:00
['Lambda', 'Owasp', 'AWS', 'Serverless', 'Security']
A Simple Online Marketing Sales Funnel that Converts Leads into Clients
To a lot of independent professionals, such as consultants and service providers, sales funnels may appear to be unnecessary and overly complicated. They may seem impersonal or cold. This may be true in some cases, but you can implement a simple online marketing sales funnel that allows your personality to shine through while guiding your prospects toward your goal. WHY IS A SALES FUNNEL NECESSARY? Sales funnels guide prospects towards a sale. In the B2B world, it’s unlikely that a prospect will make an immediate decision to work with you. Usually it takes several phone calls or meetings and emails back and forth. A prospect needs more interaction before they can make a decision. A sales funnel helps reduce the amount of time you spend on this process because your funnel helps answer questions and introduce your services. The goal of the sales funnel is to make your prospect feel at ease and comfortable with working with you. It increases their trust in your knowledge and abilities. WHAT DO I NEED TO GET STARTED? To implement a simple online marketing sales funnel, you need three main things: Email marketing software Opt-in offer Email content Your email marketing software will be used to deliver a series of emails. Be sure to choose a software platform that allows for automations. I recommend ActiveCampaign. To get people to opt-in for your email messages, you’ll need a strong free offer. This could be an ebook, a video series, or other resources. This is something valuable that you will give away for free. Finally, you’ll need content for your series of emails that prospects will receive after they sign up for your free offer. WHAT IS YOUR MAIN GOAL? Start with your goal in mind to create an effective sales funnel. For some people, the goal might be a free consultation, or for others the goal could be membership sign-ups. Work backwards from your goal to put the rest of the funnel in place. Consider how your opt-in offer relates to your goal. It should give your prospects the first step in achieving their desired result and lead them down the path to hiring you. HOW DO I PUT IT ALL TOGETHER? Step 1: Set up your email marketing software Use a system such as ActiveCampaign to create the form that will deliver your free opt-in offer when someone signs up. Create the automated sequence that will follow after the opt-in offer is delivered. Here’s a sample sequence you can use: (Day 1) Welcome Email: Let your new subscriber know that you are here to help, and ask them what they are currently struggling with. (Day 2) Mission Statement: Tell your prospects about yourself and why you want to help them. (Days 3–7) Additional Content: Send several pieces of your best content that will help your prospects with their goals. (This can be spread over several emails.) (Day 8) Pitch Your Introductory Service: Make an offer for your introductory service such as a free consultation. Step 2: Add your opt-in offer to your website Once the logistical side is set up, you’re ready to publish your opt-in offer. Place it prominently on your website. Make sure people know exactly what value they’ll receive by signing up. Step 3: Attract visitors to your website. There are many methods to drive traffic to your website. Here are few main ways to consider: social media, advertising, direct outreach, guest blogging, and content syndication. Step 4: Follow up with prospects who reach out to you directly The goal of your funnel is to bring qualified prospects to you. 
A small portion of the people who sign up will be ready to accept your introductory offer and become a client. Step 5: Continue to engage with your list of potential prospects Not everyone will be ready immediately. This is to be expected. However, you still have their email address, so you can continue to send valuable information on a regular basis. Send them new or curated content to stay top-of-mind. Here’s what the funnel looks like when it’s all set up: CONCLUSION Using a simple online marketing sales funnel will help you deliver value to your prospects and turn them into warmer leads. People buy from people they trust, and your sales funnel helps to build that trust with prospects. Go deeper into this process with The Essential Guide to Online Marketing for B2B Professionals.
https://medium.com/multiplier-magazine/a-simple-online-marketing-sales-funnel-that-converts-leads-into-clients-377ba9ebf757
['Cassi Lowe']
2017-10-13 21:01:53.051000+00:00
['Marketing', 'B2B', 'Business', 'Online Marketing', 'Digital Marketing']
Multithreading With ThreadLocal, Making Threads Work for You.
Before we start…

ThreadLocal is your answer if:

You want the object to be thread-safe without hampering the performance of your application.

You don't want to create a new object each time a new thread task is assigned.

You want to share the same object between multiple method calls without actually passing the object as a parameter.

You want to write better, cleaner code in a multithreaded environment.
https://medium.com/swlh/multithreading-with-threadlocal-making-the-threads-work-for-you-1523f038d268
['Prateek Nima']
2020-11-04 12:51:06.924000+00:00
['Java', 'Multithreading', 'Software Development', 'Programming', 'Interview']
Visualization and 5 Senses
It’s interesting to trace the origin of trends in business and technology back to the intrinsic needs that we as humans develop in the changing environment. People influence technology, technology influences people, and the cynical adage that laziness drives all progress proves to be true time and again. In my earlier articles I explored the Big Data trend showing how it might have originated from a human-natured desire to ease off on the responsibilities of prioritizing and making decisions. Or, why the evolution of Agile movement has purely human implications. Or, why they resorted to Kanban after trying Scrum because of Kanban’s “no deadline” philosophy. Today I want to share some thoughts on the origins and history of visualization. Why the trend to visualize has come to the fore in the recent years, acquiring more and more evangelists? I like to dig deep, so I’ll first take a look as far back as at the prehistoric times. This analysis will provide some food for thought, and maybe will help you see how other technology trends are downright rooted in the properties of human psyche and physique. The one with knowledge is the one who is powerful, so make yourself comfortable and read on. The Five Senses in Prehistoric Times Ancient hunters and gatherers relied on their five senses as the devices to signal an immediate threat, or a promise of food or water. Like, a certain smell in the air might have meant something. Or, a sound of a dangerous animal moving in the forest. Or any other sign of a danger. In fact, actually seeing some dangerous thing might have been too late. Some fast-running predator could grab a defenseless human in an instant, leaving zero option for a retreat. Eyesight worked as a part of the 5-component system and did not receive overload as compared to the other senses. One example of a visualization from those times would be cave paintings, created as a ritual for success at hunting. This concept differs from what we regard as “visualization” nowadays. Cave paintings represent a projected vision of those hunters who believed that if they visualize their plea, it would help them survive. I’ve singled out this particular case for the sake of showing that the sense of vision might not have meant that much to ancient humans as a sense of perception, but was more important as a sense of projection. I’m using a simplified perspective here on purpose. The Five Senses in the Information Age Taking a huge leap, we now move on to the Information Age which started somewhere in the second half of the 20th century and continues through now. I’ve skipped the industrial revolution as the changes that it brought about weren’t as drastic and lifestyle-altering as the changes coming along with the Information Age (at least, that’s the way it looks to me as to a contemporary human being :). Besides, information visualization haven’t made substantial advancements until the past decade or two. Next, let’s consider how the perceptual load is distributed between the five senses in the modern times. There’s hardly any need to visualize this as a distribution chart *ironic*, because it goes without saying that eyesight is the most overloaded out of all our five senses. Some people perceive information via their unlocked 6th sense ESP, but my article is not about that. It’s a given that most of the signals that we receive from the external environment are coming through our eyes. This is especially true for the information technology workers. 
Today, in the Information Age, we spend most of our lives looking at digital screens. Phones, laptops, TV. Hearing might stand a chance of competing with eyesight. But the senses of smell, taste, and touch have faded in their significance as compared to the prehistoric times. I pondered once how amazing would it be if we were able to smell a fragrance that we consider purchasing over the web. Or to touch this carpet to feel if it’s really that soft, thick, and comfy. But we only have our eyesight for anything that comes from a screen (mostly). We’ve traded the other senses for a more comfortable existence, and we have to deal with the consequences of this choice… that’s why our body — as well as our mind — instinctively wants to develop some coping mechanisms for this overload. Visualize for Watching Less? In view of the above, I regard the modern trend for data and information visualization simply as a consequence of the overloaded eyesight. Our collective unconscious makes us come up with ways to limit exposure to visual signals, striving to keep all the 5 senses balanced. It’s hardly that the sense of smell, or touch, or taste would gain precedence over eyesight and hearing as the primary information channels, but at least some coping mechanisms can be developed. That’s why we tend to present textual information as laconic visuals if there’s too much to read (as shown in this brief write-up on the taxonomy of names in sports leagues). Or, with data visualization, we now prefer to make sense of analytical reports presented as visuals, rather than as texts. It just takes too damn long to read them as texts! Have mercy on our poor eyesight, someone! That’s what the trend for visualization is about. By the way, you certainly have seen how office folks like to listen to whatever plays in their earphones. This might be a yet another unconscious attempt to ease up on the overloaded eyesight by shifting balance to the hearing receptors. Hearing is the only other sense that we can use as we work at screens. In fact, any attempt to distribute the load onto the 3 other senses — taste, touch, and smell — might turn out to be disturbing to the colleagues with whom we share office spaces :) No doubt, we can understand concepts and do analytics faster with visuals. Let alone quickness, a more powerful driving force for that trend is something that sits deep inside of us, humans, as we want to keep our senses in balance. Another example of an imbalance in the eyesight-hearing duo would be texting. Natural communication involves hearing and speaking. If for the most part people “talk” by means of typing, this adds up to the overload that our eyesight experiences processing all the other kinds of information. It feels like a huge energy drain if I’m expected to spend much time checking text messages or comments online. That’s the reason why I do this sparingly. I’m not sure which subliminal remedy is in store for that particular kind of overload. Will we develop some universal hieroglyphic writing as a replacement for phonetic-based written texts, for the sake of saving our eyesight? I don’t have answer to this question so far. Related: Cut Yourself Some Slack From Slack The Dietitians of Info-Sharing Visualization: Why The Fusion of Art and Tech Matters Visualizing Music Why Self-Organization Is a Luxury Further reading: My 12 Visualization Books
https://medium.com/quandoo/visualization-and-5-senses-b6c4475ca993
['Olga Kouzina']
2019-01-25 12:33:56.317000+00:00
['Information Visualization', 'Insights', 'Big Data', 'Data Visualization', 'Learning']
My PTSD from Workplace Bullying
It's back. Years after surviving a horrific episode of workplace bullying, I am experiencing a relapse in my recovery. My sleeping is sporadic. Dreams are nightmares again. Paranoia has returned. I can't calm down. Every now and then, I have episodes of post-traumatic stress disorder from the level of abuse I suffered. My therapist told me this is normal, and that I will continue to have this experience whenever my fear is triggered. I know what has triggered my PTSD today. The President of the United States.

The Return of My Fear

When I am doing well and getting beyond the trauma, I feel like I am unstoppable. I believe that I have moved on, and that I know exactly what to do. I even wrote a book about it. Then, when I watch as bullying is happening in front of me, even if I'm not the target, I start to have anxiety. I watched the most recent rally that the President led. It's not the first time I've heard the language of a bully coming from the most powerful person in the country. But this last time, with the crowd whipped into a frenzy, chanting "Send her home" after listening to a litany of depraved language from the President, something in me broke. I broke down. Whenever I watch in real time someone being belittled and disregarded, it hits me hard. It takes me back to meetings where I was publicly shamed and insulted, as my boss whipped subordinates into a frenzy, laughing at me and my work. I am struck by how similar my workplace bully is to the President. I feel like I keep reliving that terror, and that I will never escape it. I feel like our country is going to a place that we won't be able to leave. It makes me feel hopeless.

Finding My Way Back

The only way I know to fight my way out of the darkness is through authentic positivity. Even when I want to hide and not face the world, I know that I can't give in to the pain. I have to keep fighting or I will lose. So I look for inspiration and hope. Today, it arrived in the most unlikely way. My daughter and her friends decided to make a town out of chalk drawings in the driveway. At their version of Target, they did something that restored me. They included accessible parking in their artwork. My daughter's drawing of a pretend town included space for people of all abilities. The innocent act of drawing in a parking space that is inclusive and considers all people made me want to cry and celebrate simultaneously. At least the next generation seems to understand that kindness is more important than divisiveness. Today, my daughter's simple act saved me from despair. There is hope.
https://medium.com/rejectrevolution/my-ptsd-from-workplace-bullying-44e032824b5f
['Ree Jackson']
2019-07-19 02:19:18.031000+00:00
['Work', 'Bullying', 'Mental Health', 'Equality', 'Abuse']
Arbitrage: Get realtime lending rates from DeFi platforms
Arbitrage is defined as the simultaneous buying and selling of securities, currency, or commodities in different markets or in derivative forms in order to take advantage of differing prices for the same asset. In our case, an arbitrage opportunity arises on interest rates when you can borrow at a lower rate and lend at a higher rate than you borrowed at. As shown in the table above, highlighted in yellow, there are arbitrage opportunities for USDC and DAI. How can you profit from these 2 opportunities? As an example, we will be using a £10k loan with almost no exposure risk. Why do I say almost? Because you will still need to deposit some cryptocurrencies like BTC, ETH… in order to collateralize your loan, which exposes you to the price fluctuations of your collateral.

Case DAI**: Looking at Table 2 above, the lowest annual rate at which you can borrow DAI is 8% on NEXO* and the highest annual rate at which you can lend DAI is 11.48% on DYDX. So you can lend DAI at a higher rate than you borrow it. The difference in rates is 3.48%. So let's say, as in Table 3, you take a £10k one-year loan in DAI on NEXO and lend that DAI on DYDX for the same period of time. After a year you will have to repay your loan on NEXO with interest, £10,800 (£10,000*8% + £10,000), and you will receive £11,148 (£10,000*11.48% + £10,000) if the counter-party hasn't defaulted on your loan. So in total you will have made £348, which is 3.48% of your loan. This is not 100% risk-free, as the counter-party could default on your loan, and you take on the exposure risk of your collateral.

*On Nexo, you can deposit the following crypto assets as collateral: BTC, ETH, XRP, LTC, XLM, stablecoins, NEXO and BNB. Support for other major altcoins is coming later in 2019.

Case USDC**: As for USDC, the lowest annual rate at which you can borrow is 8% on NEXO and the highest annual rate at which you can lend is 9.25% on Celsius (Table 2). So you can lend USDC at a higher rate than you borrow it. The difference in rates is 1.25%. So let's say, as in Table 3, you take a £10k one-year loan in USDC on NEXO and lend that USDC on Celsius for the same period of time. After a year you will have to repay your loan on NEXO with interest, £10,800 (£10,000*8% + £10,000), and you will receive £10,925 (£10,000*9.25% + £10,000) if the counter-party hasn't defaulted on your loan. So in total you will have made £125, which is 1.25% of your loan.

*IMPORTANT: The borrowing rates may differ from what is shown, as more and more lending platforms are adapting their rates to the size/amount of the loan. Due diligence is necessary. Trinito shows how you can apply the arbitrage in practice.

EXAMPLE

Accounting

Compute the interest you earned and paid on your different lending platforms
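To make the arithmetic above explicit, here is a small Python sketch (not part of the original post) that reproduces both calculations, assuming simple annual interest and ignoring fees, gas costs and collateral requirements.

```python
def arbitrage_profit(principal, borrow_rate, lend_rate):
    """One-year profit from borrowing at borrow_rate and lending at lend_rate (simple interest)."""
    repay = principal * (1 + borrow_rate)    # what you owe the lending platform after a year
    receive = principal * (1 + lend_rate)    # what you get back from the platform you lent on
    return receive - repay


# Rates taken from the tables above, on a £10,000 loan.
print(arbitrage_profit(10_000, 0.08, 0.1148))  # DAI:  ~348, i.e. 3.48% of the loan
print(arbitrage_profit(10_000, 0.08, 0.0925))  # USDC: ~125, i.e. 1.25% of the loan
```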
https://medium.com/the-cryptocurious/arbitrage-get-realtime-lending-rates-from-defi-platforms-dee24830ee85
[]
2020-10-17 13:23:34.092000+00:00
['Finance', 'Google', 'Cryptocurrency', 'Lending', 'Analytics']
I Quit Drinking For A Month — & These 12 Things Happened
By Elettra Wiedemann A couple of years ago, I decided to do a Dry January. That means no booze at all, for any reason (yes, even at a birthday party / wedding / after a bad day / whatever) for the entire month. To some people, that might not sound like a big deal, but to me it sounded like a major commitment. Before I gave this a try, I wasn’t even a huge drinker or partier — I would do wine on weeknights, and maybe some cocktails on the weekends with friends. So, my Dry January was not about “detoxing” or turning around a serious bad habit. Mostly, I wanted to see if having a sober month was something I could do. I also wanted to see how it would make me feel (better? more focused? totally the same?). Going in, I figured I would probably miss having a drink with my friends on the weekends, but as it turned out, the effects were way more far-reaching than that. My first-ever Dry January not only totally changed my relationship with alcohol; it changed some of my friendships, and I’d even argue it changed my life. In fact, this January was my seventh Dry January. Intrigued? If you are planning to try a Dry January, there are some important things you need to know before you embark on this challenging, enlightening, and ultimately rewarding booze-free journey. Here we go. You might want to try not to get totally wasted on NYE I get the temptation to party hard on New Year’s Eve, to get in one last hurrah before your month of sobriety, but having a massive hangover is just going to weaken your resolve starting from Day 1 (after all, it’s hard to resist the hair of the dog). Of course, I’m not saying “don’t drink at all on NYE,” but I highly recommend resisting the urge — and the peer pressure — to get smashed. Trust me, you’ll need all your resolve and discipline, because… The first two weeks will be really hard. Yep, the first 14 or so days of your alcohol-free month are probably going to be really hard. I’m sorry to be the bearer of not-so-amazing news, but if you know that you’re going to be fighting an uphill battle, I think you’ll have a better chance of success. As I mentioned previously, I was not even a huge drinker when I tried this for the first time (other than two “too much” years in my 20s, and even then, I only blacked out once — and rugby-tackled my dad’s best friend to the ground. Zero recollection). But even so, that first half of the month took a lot of resolve, focus, and almost constant re-commitment for me. Even just one or two glasses of wine, or a couple of beers in the evenings, were sorely missed, because… You’ll realize that almost all social life is centered around food and drink. Being sober will make you recognize this. It’s actually kind of astounding, and not something you fully notice while you are participating in it. (Tip: Going to the gym really helped, mostly because it gave me something else to do and was another form of sociability.) It became hard for me to even have dinner with friends, though, because… A lot of people, including your close friends, will be SUPER annoying and unsupportive about your decision. This was the strangest thing of all about going dry for a month: other people. Almost everyone, including my own friends, was likely to get weird and even kind of pissy about it. People called me “boring,” rolled their eyes when I said I wasn’t drinking for the month, and put a lot of pressure on me to “just have one drink.” Some people even stopped calling me or inviting me out to gatherings or parties. 
The first time I faced this, I felt totally unequipped for how to deal with it. But ultimately, I realized it was about them and not about me. That said, it still totally sucked, and made me feel embarrassed (Oh God, am I boring?) and sad. Once I had some time to reflect, I realized that being sober can make some other people uncomfortable because they don’t know how to be social/interact without a drink. In my personal experience, I got the most flak from friends whom I always suspected had less-than-healthy relationships with alcohol. If you find yourself in a situation that just feels too peer-pressure-y, here are two tactics that worked for me to deflect. Tell people you’re on some SERIOUS antibiotics or, in very extreme cases, tip your bartender for club soda. In my experience, if you tell people you’re getting over strep/flu/some horrible illness, that you’re still not feeling 100%, and you’re on antibiotics, so you can’t drink, people will back off. If, after that, people are still being dickheads (and yes, that happened to me), go to the bar, give the bartender a fiver, and tell him/her that every time you order a vodka and soda, all you want is soda and lime. It’s sad to have to resort to that, but it did happen to me, which leads me to this doozy… You might decide you don’t like your friends anymore. This sounds horrible, but it needs to be said: Being dry for the month might make you realize that some of your friends kind of suck, or at the very least that you don’t have much in common with them apart from drinking. What happened for me was this: I realized that some of the people I was hanging out with would all get drunk together and tell the SAME STORY AND JOKES OVER AND OVER AGAIN AD INFINITUM. It was shocking. And it shocked me mostly because it made me reflect on myself: After all, just a couple weeks before, I was probably that guy. I was the one telling the same stories and jokes over and over again. Which led me to realize that… Maybe “just a drink or two a night” is much more than that. I am not trying to come across as a total geezer here, but this was a big “aha” moment for me. I had NO idea how much my light booze habit of one to two drinks per night, plus drinks on the weekends, was affecting me, my choices, and my life. Everything from the people I was choosing to hang out with to how I was choosing to spend my time to a kind of blasé laziness / “I’ll do that tomorrow” attitude was all tied to just a couple of seemingly innocent drinks. But take out a calculator, and you’ll soon realize that “just a couple of drinks per night” plus weekend drinking will amount to over 900 drinks per year (yep, really; just do the math). That ain’t nothing, friends! At the time of my first dry month, I was kind of “stuck” in a lot of areas (mostly deciding about the trajectory of my school and work life), and that mental fog, or “stuckness,” cleared up within a couple of weeks of no booze. Yikes. On the positive side, while I started to distance myself from one group of friends, there were all these other great people I started getting closer to. Which leads me to this nugget… Your Dry January will likely lead you to new friendships, relationships, and other cool people your slightly inebriated brain didn’t really notice before. When I take a look at my close friends right now, I realize that many of them came into my life around the time that I did my first Dry January. 
A lot of them had been casual friends or acquaintances, but I had just never really connected with them before, beyond surface conversations. I remember I ran into one of these people in the street around week three of my first Dry January, and she asked me to go get a drink. When I said, “Oh I can’t, I am not drinking this entire month,” she immediately answered, “Oh, that’s so cool! Well, let’s go get a coffee or see a movie then?” WHOA. After the weirdness I had endured from others, that response felt so refreshing and awesome, and I really appreciated it. So fret not if you find yourself wanting to branch out to new friends; you’ll find them (you might even already know them). And, what’s maybe even better than that is… By the end of week two, you’ll feel like a new person. Holy shit, guys, I have to tell you that you will feel SO FRESH AND SO CLEAN by the end of week two. I was sleeping so much better, my brain was firing on all cylinders, I was getting all my to-do lists done, my mood was sky-high, my PMS that month was so minor I almost didn’t notice it, my outlook on life was consistently positive, my energy levels were constant, I felt so clear-headed, and my skin was glowing. Which is why… You might be tempted to extend your Dry January to a Dry February. And, in fact, I did. I felt so great at the end of January that I decided to extend it another month. And after those two months without any booze at all… The first drink you have after the dry period will be like whoa. And hangovers? Well, they might never be the same. After about 60 days of no drinking, I had a glass of wine (which I had missed, despite all the upsides) and OMG did I feel it right away. I felt like a cartoon character with those little stars and birds flying around my head. Of course, I had another glass or two that night, which led to an insane hangover the next day. Seriously, the ratio of booze-to-hangover was just totally unjustified. And that, sadly, never went away. To this day, even when I don’t drink a lot, I feel the effects of those drinks. Honestly, I’m not sure if that’s a result of my sober months or a product of getting older, or both, but either way… This whole experience may change how and how much you drink forever. Listen, I still like to go out and have a drink with friends, and when I have a hard, frustrating, difficult day, I’ll come home and pour myself a glass of vino. But I have never gone back to the same drinking habits I had before, I never will, and I am so glad. In fact, in my mind, my first alcohol-free month marks the moment I went from grown-up-in-training to real grown-up. Now that you’re equipped with all the tips, go forth into your alcohol-free month with confidence: You CAN do this, you WILL do this, and you’ll be so glad that you did.
https://medium.com/refinery29/i-quit-drinking-for-a-month-these-12-things-happened-489727624937
[]
2020-08-12 15:56:01.047000+00:00
['Alcohol', 'Dry January', 'Stop Drinking', 'How To Quit Drinking', 'Health']
These Anti-election Tactics Could Save Your Sales
These Anti-election Tactics Could Save Your Sales 3 practical ways to correctly leverage election uncertainty Photo by visuals on Unsplash There's a simple rule for business: "don't mention politics or religion if you don't want to alienate half of your customer base". Those subjects are nuanced, subjective and open to misinterpretation. You'll never satisfy everyone's expectations. How, then, do you stay topical in your marketing without mentioning the biggest topic of the year: the 2020 US election? You may be forgiven for believing that COVID-19 is the biggest story of the year, but US politics said 'Nah' and stole that trophy too, turning protections, closures, stimulus and health into a war of two sides: left vs right. As a marketer, you need a lifeline. Ways to stand up and stand out, without losing sales from one or both of those two parties. Cue the music…
https://geraintclarke.medium.com/these-anti-election-tactics-could-save-your-sales-ec0972de314b
['Geraint Clarke']
2020-10-15 12:02:11.166000+00:00
['Work', 'Sales', 'Politics', 'Business', 'Marketing']
How Gastrointestinal Symptoms Affect Covid-19 Outcomes, Meta-analyses Find
The CDC acknowledges gastrointestinal symptoms as part of Covid-19. The question is: Does the appearance of gastrointestinal symptoms mean a worse disease outcome? What Meta-analyses Say Chinese researchers at the Department of Gastroenterology at Zhongnan Hospital of Wuhan University systematically searched the PubMed database and identified 21 studies for analysis. The pooled sample comprised 3,024 Covid-19 patients, of whom: 9.1% had diarrhoea (based on 19 studies). 5.2% had nausea and vomiting (based on 14 studies). 3.5% had abdominal pain (based on 4 studies). Importantly, the prevalence of these symptoms did not differ significantly between mild and severe cases, or between surviving and deceased cases of Covid-19. This study implies that gastrointestinal symptoms are not a typical trait of severe Covid-19. An earlier meta-analysis asking the same question arrived at a similar conclusion. Harmonizing data from 10 studies, the authors found that nausea, vomiting, or diarrhoea was not related to disease severity. But abdominal pain was associated with severe Covid-19, though this result rests on only three studies and narrowly missed statistical significance (p = 0.45). A third and most extensive meta-analysis — by eight professors and four medical doctors from China, the UK, and Hong Kong — provides conclusive insights on this matter. Their paper — published in the distinguished Lancet Gastroenterology and Hepatology journal — analyzed 6,686 Covid-19 patients from 35 studies, of whom: 9% had diarrhoea (based on 26 studies). 7% had nausea and vomiting (based on 18 studies). 21% had appetite loss (based on 8 studies). 3% had abdominal pain (based on 6 studies). "Patients with severe disease were more likely to have gastrointestinal symptoms compared with those with non-severe disease," they found. "More specifically, a higher risk of having abdominal pain (OR 7.10) was observed in patients with severe disease than in those with non-severe disease." An odds ratio (OR) of 7.10 means the odds of the event are 7.1 times higher. Loss of appetite, nausea, vomiting, and diarrhoea, on the other hand, showed no association with Covid-19 severity in this meta-analysis.
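To make the odds ratio figure concrete, here is a minimal sketch of how an OR is computed from a two-by-two table. The counts below are hypothetical and are not taken from any of the meta-analyses discussed above; only the arithmetic is the point.

```python
# Minimal sketch: computing an odds ratio (OR) from a 2x2 table.
# All counts below are hypothetical illustrations, NOT figures from
# the meta-analyses discussed in this article.

def odds_ratio(group1_with, group1_without, group2_with, group2_without):
    """Odds of the symptom in group 1 divided by the odds of the symptom in group 2."""
    odds_group1 = group1_with / group1_without
    odds_group2 = group2_with / group2_without
    return odds_group1 / odds_group2

# Hypothetical example: abdominal pain among severe vs non-severe Covid-19 patients.
# 20 of 60 severe patients had abdominal pain (odds 20/40 = 0.50),
# versus 7 of 107 non-severe patients (odds 7/100 = 0.07).
print(f"OR = {odds_ratio(20, 40, 7, 100):.1f}")  # -> OR = 7.1
```

Note that an odds ratio is a ratio of odds, not of risks; the two are close only when the symptom is uncommon, so an OR of 7.10 is best read exactly as the article states it, as roughly seven-fold higher odds.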
https://medium.com/microbial-instincts/do-gastrointestinal-symptoms-affect-covid-19-outcomes-meta-analyses-find-ab20ab5cfb6a
['Shin Jie Yong']
2020-05-24 08:35:30.245000+00:00
['Health', 'Ideas', 'Life', 'Covid 19', 'Advice']
Why Climate Action Is Crucial To Democracy
Why Climate Action Is Crucial To Democracy Trust in democracy is at its lowest in 25 years. There are many factors, but here’s why one of the most significant is climate inaction. Photo by Callum Shaw on Unsplash People simply do not trust democracy, their representatives, and their governments anymore. In America, 75% of adults believe that trust in the federal government has been shrinking. And across developed countries, dissatisfaction with democracy stands at 58%, up 10% from 25 years ago. There are, naturally, the standard, consensus theories about why. Most political scientists would tell you that there are two key reasons: trends, and events. Events are probably what you might have heard cited for the current lack of trust in politics most frequently. The Iraq war had a catastrophic impact on the electorate’s trust in politicians to be truthful, as did the expenses scandal in 2009 in the UK, where The Telegraph revealed the rather shocking abuse of power for monetary gain by MPs. More recently, the Syrian migrant crisis and the varying political reactions to it have fuelled a sense that governments don’t serve their own people properly. Trends deal mainly with how politics is conducted and how politicians go about their business, as well as long-term market patterns. The migrant crisis and the reaction to it have helped to fuel the trend of increasingly populist politics, which is characteristic of the wider political realignment through which we are going. This realignment is occurring both because left-wing ideas are becoming more and more unpopular, and because people’s trust in politicians to keep their word is declining. In terms of the economy, while it seems to be, on surface level, performing well, most of the world’s democracies (usually wealthier countries) are struggling with increased inequality as a result of the 1980s takeover of the New Right and the more pernicious aspects of it, some of which still remain today. Inequality creates social divisions and increases tensions, which has led to what many believe to be a culture war. Brexit and Trump’s election, say many commentators, are simply the result of this culture war. However, despite all of these significant factors, climate inaction has played a much greater role than most would have you believe. Trust in democracy was already fairly low 25 years ago. Nearly 50% didn’t trust democracy, and most were the typical disengaged group — the poor, ethnic minorities, former miners, and so on. The change, then, must have been fuelled to a great extent by a decline in trust from those who were more engaged in politics, and who care about the big issues of their time. The failure to effectively deal — or even to try to deal — with arguably the greatest threat to our long-term prosperity has surely played a crucial role in the declining pattern of trust in democracy and politics. Whether you think climate change really is that threatening or not, the perception of it is that it could lead to disaster — more than 80% in the UK class it as very or fairly threatening. Therefore, the electorate, and particularly those who currently feel most frustrated by the inaction, are not seeing solutions which match the scale of the problem, as they perceive it. This is undoubtedly playing a key role in the decline in trust in democracy. As we miss targets in international agreements, this is only getting worse. 
The UK, though, is currently experimenting with a climate change citizens’ assembly, one of the key demands of Extinction Rebellion. This could allow citizens to feel more engaged with the process, and will surely lead to some significant political action. Perhaps this will be the best way to restore trust on what is increasingly a pivotal issue, a bellwether of belief in politics. Trust in democracy isn’t just a story of ignorance anymore. It’s a story of how politicians are letting down those who care deeply about political issues — and this new fault line is alarming. As representative democracy fails, deliberative democracy may present the way forward.
https://medium.com/age-of-awareness/why-climate-action-is-crucial-to-democracy-ea17f8dc0704
['Dave Olsen']
2020-02-03 02:20:07.654000+00:00
['Climate Change', 'Environment', 'Democracy', 'Inequality', 'Politics']
Why data literacy is the key to future-proofing your career
Why data literacy is the key to future-proofing your career Jobs come and go, but data is here to stay Some years ago, I read a book that’s had a profound effect on how I look at careers and advise the people I mentor. That book is The 100-Year Life: Living and Working in an Age of Longevity by Scott and Gratton. In it, the authors talk about how the arc of a career history is so much more extended now than it was in our parents’ generation and our grandparents’ generation. They point out that a child born in the early years of this millennium has a 50% chance of living to 100. I’ll say that again: a 50% chance of living to 100. A lifetime of learning and resilience That means a working career is now arguably approaching 70 years long. But I don’t think we’ve really woken up to that yet. We haven’t updated our world view. We’re not used to the idea of having a career that stretches out for close to three quarters of a century. And many of us (myself definitely included) are not good at doing the same thing over and over again for six or seven decades. What does that mean? It means we’re all going to have to get really good at reinventing ourselves as we go, and re-skilling in a significant way 2–3 times over our lifetimes. At least! So we’ll need to be resilient and we’ll need to become lifetime learners. Now me, I work in tech. I have done since I left academia after eleven super enjoyable years of tertiary education and post docs. I love what I do and I believe that companies in various flavours of the ‘tech industry’ offer a wealth of career options for many people. No matter what role you have in the tech industry — engineer, product designer, agile facilitator, architect, business analyst — the work you do will be impacted by the rise of data, machine learning and artificial intelligence (AI) in your working lifetime. So how do you begin to understand this fascinating area and future-proof yourself and your skillset? It begins with data literacy. Data literacy needs to be in everyone’s job description At Xero, we want our people to do the best work of their lives. And in 2020, that means helping them improve their ability to source, use and understand the massive amounts of information that flows into and around our company. You might think data literacy is only important for financial analysts and data scientists. But we don’t think that’s right at all. Data literacy for our executive team might include their ability to look at a data visualisation and distinguish between situational and systemic movements. Data literacy for an engineer in one of our product pods could be all about knowing how to structure a database schema to support future-looking analytical queries, as well as current-day operational needs. To improve our data literacy, we’re developing a playbook for product teams to help them spot and triage opportunities where they can use machine learning techniques to build even more #beautiful customer experiences. The playbook helps them level up from ‘I’ve heard about AI and I think it’s cool’ to actually being able to apply some of the foundational techniques in a meaningful way — and knowing when the extra complexity is worth the extra effort. The exponential growth in the amount of data in the world isn’t slowing down anytime soon. And as these digital breadcrumbs generated by day-to-day activity become ubiquitous in our everyday experience, data and the conclusions we draw from it is also becoming easier and easier for bad actors to manipulate. 
Along with teaching numeracy and the ability to read, we believe it’s important for people to be taught how to think critically about data and statistics, and spot places where they are used badly, so they don’t get taken for a ride. These are lifelong skills and important ones. We hope our folk will stay with us for years, but even if they move on from Xero, they’ll take those new abilities with them. The ability to look at the world in a more quantitative way, and a logical framework to be able to question the things they see. Avoiding a dystopian data future There’s also a fair bit of talk — some of it rather daunting — about dystopian futures driven by this explosion of data. Some of the algorithms being developed and deployed really do have the potential to change our criminal justice and legal systems. We face a real risk of accidentally ‘freezing’ ourselves in the 2020s and enshrining the social and societal norms of today, because that’s ‘when’ we collected the data that pours into these massive, under-specified algorithms. That’s why it’s extremely important to make sure that the people involved in the conversations about these algorithms come from a diverse background and can bring a broad spectrum of views. I’m not suggesting that companies or governments deliberately set out to write things that are biased or lock in some particular status quo. But it’s really easy to do it unconsciously. Building strong data literacy skills across a workforce will therefore pay dividends for a company and for society, as it ensures we have a wide range of voices at the ‘algorithm decision-making table’. This reduces the likelihood of accidentally building something that turns out to be detrimental; to the company and to the world in general. Unlocking new career opportunities The rapid pace of this industry means that in many cases, the people we need to help us drive the savvy and thoughtful development of ‘augmented intelligence’ products aren’t in the conversation yet. Often it’s because the roles don’t exist yet and there aren’t many experienced and skilled people to lead the way. For example, we’re currently looking for data evangelists to help drive data literacy at scale across the organisation. It’s a role that doesn’t really exist in the mainstream right now, so finding them is tricky. These folk need to have a passion for making sure that diverse voices are being heard, and some of them will need a background in organisational development, or be able to teach adults new skills effectively (it’s very different from teaching children!). We work really hard to make sure that our data is clean and coherent and ordered, so we can build great insights and better support our customers. But everyone has a role to play in building great data and data products. High levels of data literacy empower everyone in the company to spot opportunities to use data well — helping to ensure we create products that consider a full spectrum of possible futures and reduce the unconscious bias that stems from too few people contributing ideas. Most importantly for the individual, lifting data literacy opens doors to new roles, even roles we have yet to imagine.
https://medium.com/humans-of-xero/why-data-literacy-is-the-key-to-future-proofing-your-career-53140c993d5a
['Kendra Vant']
2020-12-03 20:57:05.390000+00:00
['Machine Learning', 'Technology', 'AI', 'Data Science', 'Careers']