Dataset columns:
title: string, length 1 to 200
text: string, length 10 to 100k
url: string, length 32 to 885
authors: string, length 2 to 392
timestamp: string, length 19 to 32
tags: string, length 6 to 263
Facebook New 3D Effect
The exciting new feature of Facebook is available now. Any guesses what it might be? Let me make it clear: you can now convert photos from 2D to 3D with just a single camera. Isn't that great and unique? One camera, two versions, and a lot of fun.

3D's first introduction: To clear the air, this 3D feature was first introduced in 2018, but due to some issues it was restricted to users with high-end smartphones and supported portrait mode only. The company is back with the feature again, and this time users with an iPhone 7 or above, or an average Android phone, can take advantage of it as well.

All kinds of photos: One more exciting thing to share about this feature is that it converts not only photos that have just been taken, but old photos as well. The company further revealed that it used advanced machine learning to build this fantastic feature.

How to set everything up: There is one condition, but don't panic, it isn't difficult: you just need to update your Facebook app to get the feature. The other thing worth sharing is that you go to the news feed, open "What's on your mind", and then use the new 3D option.

A few limitations: The feature is new, but it matters because it lets almost everyone apply the 3D effect. One issue is that the picture quality will not match images taken with a high-end dual-camera phone. Still, you can enjoy the feature, tilt the photos, and have fun with this exciting new addition to Facebook's bucket of features.
https://medium.com/codixlab/facebook-new-3d-effect-9b3b1c22da2b
[]
2020-03-08 13:09:59.803000+00:00
['3d', 'Facebook', 'Technology', 'Trends', 'Tech']
NO STRINGS ATTACHED WITH JAVA.
Hello and welcome back to yet another exciting post on Java. So far, as at the time of writing this post, I have had no readers on this blog. But that isn't going to stop me from taking the time to share my ideas on how to understand Java better, because that's what we do for the things we love, hoping to make it big in the future. Enough of the stories that touch the heart. Looking at the picture above and the title of this post, you may begin to wonder what I am insinuating. Even I am kind of wondering and confused, though not about what this post is about ("coz I am obviously the author"). The big question is this: why would anyone enter into a relationship with Java, all in a bid to become a software developer, and then decide not to want more from the relationship? Even I could not help but turn her into my friend; I mean, she's clearly good at what she does, isn't she? Moving on to the reason this post was made: in Java we have something called a STRING, and it is simply a sequence of characters surrounded by double quotes, as opposed to the primitive datatype known as "char". CLICK HERE to find out about it. With strings, we can store as long a sequence of characters as we want in a variable (you probably know this already). I probably haven't said anything different from what you know yet, so let's move on to the freebies that come with strings in Java. A String in Java is not one of the 8 (eight) primitive datatypes used to store values of all kinds; it's actually an object, like the other objects you create from classes in Java. So I have a question for you. Have you ever subscribed to a website and then received an email that said hello followed by a version of your name? Or you may have used the find-and-replace tool in Microsoft Word or Adobe PDF reader. Well, those abilities are present in Java and can be used on its STRINGS. They are known as String methods in Java. What are String methods? Simply put, these are functions that can be carried out on a String. Strings, as noted earlier, are objects, and that makes it possible for methods to be called on them (methods can only be called on objects, not on primitive data). Here, I shall list a few String methods with a brief explanation of each.
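As a small sketch of my own (not code from the post; the class name and example string are just placeholders), here are a few of the standard String methods being alluded to:

```java
public class StringMethodsDemo {
    public static void main(String[] args) {
        String greeting = "Hello, Nonso!";

        // length(): number of characters in the string
        System.out.println(greeting.length());                   // 13

        // toUpperCase(): change the case of every letter
        System.out.println(greeting.toUpperCase());              // HELLO, NONSO!

        // replace(): the "find and replace" behaviour mentioned above
        System.out.println(greeting.replace("Nonso", "reader")); // Hello, reader!

        // substring(): take a slice of the string
        System.out.println(greeting.substring(0, 5));            // Hello

        // contains(): check whether a sequence of characters is present
        System.out.println(greeting.contains("Nonso"));          // true
    }
}
```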
https://medium.com/java-for-absolute-dummies/no-strings-attached-with-java-8abfcf7582b2
['Nonso Biose']
2019-01-12 11:16:41.440000+00:00
['Java', 'Programming', 'Strings', 'Software Development']
7 Foods You Should Consume for Total Body Transformation
Photo Credit: Pinterest
I pass this secret on to all my supporters, so please enjoy! The list below shows 7 food types that help cure each problem in its respective category.
1. Crown Foods. Problems: Issues with the sleep/wake cycle, feeling disconnected from your body or others, difficulty meditating, spiritual discomfort. Foods: Mother Nature :) Fresh Air, Sunlight, Nature.
2. Throat Foods. Problems: Thyroid Disease, Frequent Sore Throat, Difficulty Expressing Feelings. Foods: Blueberries, Blue Raspberries, Figs, Kelp.
3. Eye Foods. Problems: Depression, Poor Eyesight, Hormonal Imbalances, Poor Intuition. Foods: Purple Potatoes, Blackberries, Plums, Purple Grapes.
4. Heart Foods. Problems: Heart and Lung Problems, Asthma, Allergies, Fear of Intimacy. Foods: Broccoli, Kale, Chard, all other Leafy Greens.
5. Solar Foods. Problems: Gas, Bloating, Liver Issues, Stomach Ulcers, Eating Disorders, Lack of Confidence, Procrastination. Foods: Yellow Peppers, Yellow Lentils, Yellow Squash, Oats, Spelt.
6. Sacral Foods. Problems: Infertility, Hip Pain, Sexual Dysfunction, Emotional Imbalances, Creative Blocks. Foods: Seeds, Nuts, Oranges, Carrots, Pumpkins.
7. Root Foods. Problems: Colon Issues, Lower Back Pain, Varicose Veins, Emotional Issues Surrounding Money and Security. Foods: Beef, Parsnips, Rutabaga, Apples, Pomegranates, Protein.
If this blog offered you any value, please recommend it and share it with others! Also, please connect with me on my website, Facebook page, and Instagram if you want to stay in touch or give me any feedback!
https://medium.com/gethealthy/7-foods-you-should-consume-for-total-body-transformation-7178dd7cd5e2
['Jeremy Colon']
2017-02-16 04:58:08.396000+00:00
['Health', 'Diet', 'Nutrition', 'Metabolism', 'Food']
Creating a New .NET Project. C# From Scratch Part 2.1
Welcome to another part of the series C# From Scratch, a course dedicated to teaching you everything you need to know to be productive with the C# programming language. In the previous part of the series, we learned how to interact with .NET on our machines using the .NET CLI, a tool installed alongside the .NET SDK. That part of the series is available here. In this part of the series, we'll create our first C# project using the .NET CLI. Creating the Project: Run the command 'dotnet new' to create a new .NET project. The command returns some information about the different options that can be used to create a new .NET project.
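By way of illustration (a hedged sketch of my own, not taken from the article; "MyFirstApp" is just a placeholder project name, and the commands assume the .NET SDK is installed), scaffolding and running a console project with the CLI typically looks like this:

```bash
dotnet new console -o MyFirstApp   # scaffold a new console project in ./MyFirstApp
cd MyFirstApp
dotnet run                         # restore, build, and run the generated program
```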
https://medium.com/swlh/creating-a-new-net-project-762153b1100e
['Ken Bourke']
2020-12-16 15:22:47.838000+00:00
['Software Development', 'Csharp', 'Dotnet', 'Learning To Code', 'Programming']
Dear People Who Think They’re Culturally Superior for Ragging on Billy Joel
You're not too good for the King of Cheese n' Roll. I see you. Talking about stabbing your ears when “Piano Man” comes on the radio. Or driving off a cliff upon hearing the first few notes of “Uptown Girl.” Or setting fire to any remaining physical copies of “We Didn't Start the Fire.” You wax poetic about Billy Joel's music being made of saccharine, cheese and lard. A real crap smorgasbord Joel's music is, you casually say to your friends, who mutter “amen” in unison. A blight on modern music. You think you sound real cool — real highbrow — right? You're wearing your Pixies tee that no longer fits you and sporting an Elliott Smith hairdo. Your parents owned a copy of The Stranger, and you swore to God you would never, ever give that hound-dog-eyed Long Islander who uses gratuitous amounts of sound effects the benefit of the doubt. Well, I'm here to tell you: Billy Joel has more talent in his little cherub pinky than the whole lot of you. Billy Joel, a Jewish kid born and raised on Long Island, is America's working-class hero. Yeah, I said it. Lennon is to Britain what Springsteen is to New Jersey what Joel is to Long Island. As a baby boomer, Joel had a front-row seat to the struggles of both WWII and Vietnam vets. As a Long Islander, he witnessed the lack of opportunity facing his blue-collar peers. As a young person, he understood the fear and disillusionment in a post-Kennedy America. Much like Springsteen, Joel sang for the common man. He sang for and to the steel workers and vets in “Allentown,” the soldiers in “Goodnight Saigon,” the suffering fisherman in “The Downeaster ‘Alexa’,” the starving artist in “Piano Man,” the bored and depressed youth in “Captain Jack.” His songs painted detailed portraits of the Italians, the Jews, the Polish, the Irish living behind the Nylon Curtain. The ones who got up each day hustling down 52nd Street, who jumped the Turnstiles. The prom queen and king who got married and divorced — as a matter of course; the businessmen who shared a drink called loneliness; the Catholic girls who started much too late. These are the people he saw every day, and by God, he was going to tell their stories. Billy Joel is a time and a place; he's a state of mind. If you're from the Northeast, if you're from European roots, if you were born during the midcentury, if you're struggling to get by — Joel saw you. He was you. These truths are what endear me to Billy Joel. This essay might make one assume that I'm a fierce fan, but that is not the case. I enjoy Billy Joel's music — particularly as a working-class New Yorker who grew up with European immigrants — but what I appreciate most about him is that he sings for the proletariat. It's true his songs might sound like stories your grandfather regales you with as he drifts off to sleep. It's true that his lyrics might be as cliche as the inspirational posters found in dentists' offices. It's true he might have relied too heavily on motorcycle and factory sound effects. (Maybe.) It's true he might look more like your car mechanic who enjoys one too many Zimas than like a true rocker, but — Billy Joel is up there with Dylan and Springsteen and Simon as one of America's greatest musical storytellers. And I'm here to tell you: you are not too good for Joel. NONE OF US ARE.
https://medium.com/slackjaw/dear-people-who-think-theyre-culturally-superior-for-ragging-on-billy-joel-d229b55b327a
['Lauren Modery']
2018-08-25 17:12:12.484000+00:00
['Music', 'Music Criticism', '1980s', 'Pop Culture', 'Billy Joel']
Why do Cryptocurrency and Blockchain Matter?
This article has been updated but was published previously. First, what is a cryptocurrency? According to Wikipedia, a cryptocurrency is an asset that “uses strong cryptography to secure financial transactions, control the creation of additional units, and verify the transfer of assets.” Right now there are 5,737 cryptocurrencies listed on CoinMarketCap. A blockchain, which is just a series of chunks of information connected in order like a chain, is what makes many of those 5,737 cryptocurrencies tick. Blockchain matters largely because it enables cryptocurrency. I've seen lists of a lot of different things that have evolved out of Bitcoin that will “change the world”: things like smart contracts, blockchain, security, privacy, getting rid of the middleman, and so on. These things might change the world a little. The only crypto-related thing, in my opinion, that will change the world a lot is the potential for one or more of those 5,737 digital assets to displace the dollar and other national currencies. If and when that happens, central banks will lose their power, the people who run many of the world's private banks will lose much of their profit-making power, inflation as we know it today will all but disappear, and governments' ability to spend by printing money (or by adding zeros to bank accounts, as they do now) will all but disappear. It's so simple, yet so huge. The power over the money supply will no longer be in the hands of the rich and powerful, but rather in the hands of an algorithm. Do you remember that list of technologies that some people think can change the world? Here it is: smart contracts, blockchain, security, privacy, getting rid of the middleman, and perhaps more. Look at them. Most of these things can be achieved by multiple blockchains. The value of what they provide will probably be commoditized once there are many providers, and these technologies may disrupt a few major players, but that's all. The cryptocurrency or cryptocurrencies that disrupt the money supply will affect every penny of value in every unbacked currency that exists in the world right now. That is trillions and trillions of dollars' worth of currency: some estimates put it over $1.2 quadrillion. I think cryptocurrency will also get commoditized, but not without turning many trillions of dollars' worth of value upside down. Wars have been waged, in part, with the power to print money. Without the power to print money, wars should lessen. Wages have slumped in the United States since right around 1971, the year we fully went off the gold standard. By going off the gold standard, the U.S. federal government has been able to inflate the money supply and spend as it pleases without having to raise taxes nearly as much as it would have needed to under the gold standard. Because of the growth of the money supply, the government can spend more without us noticing, or at least it is much harder for us to notice. It is hard to understand why we seem to have to keep working harder to make ends meet when the government is quietly printing dollars. For instance, how many men and women now work, when before there only needed to be one earner per household? Many of us now have twice the workers per household with barely more pay, when pay should actually be about twice as high with double the workers. Compare this to how things were about 50 years ago, when many households could make ends meet with just one worker working outside the home.
A cryptocurrency can get rid of the “money power” throughout the world. The money power includes some of the super-rich: the people who own banks. But it also includes national governments, virtually all of which have a central bank that prints money with nothing backing it. The money power has been artificially strong for as long as it has been printing unbacked money, so strong that it helps cause asset bubbles that burst, wars, and unsustainable government spending. It's not so much that countries have horrible leaders and that's why their governments default under so much debt; it's that they were unnaturally able to acquire that debt in the first place, through one or more central banks. When people are given a power, they tend to use it, unfortunately. Cryptocurrency solves the problem of central banks, and it also puts down one huge piece of the puzzle for solving government involvement in wars and governments' over-taxation. Can cryptocurrency be stopped? So far, it cannot.
https://medium.com/predict/why-do-cryptocurrency-and-blockchain-matter-e64293aa1cd8
['Eric Martin']
2020-07-16 14:11:57.986000+00:00
['Cryptocurrency', 'Blockchain', 'Bitcoin', 'Money', 'Future']
Decision Tree Regression
Before we begin with regression trees, or decision tree regression, let's recall and review the simple decision tree. A decision tree is a supervised machine learning algorithm and one of the most popular ones. It is a tree-like structure constructed on the basis of attributes/features. Decision trees are a non-parametric supervised learning approach. You may have heard the term CART, which stands for Classification and Regression Trees. In my previous blog on decision trees, I covered the basics of decision trees and classification trees. Regression trees are a bit more complex than classification trees. Decision tree regression is a non-linear regression technique. A decision tree regression model is in the form of a tree structure: it breaks a dataset down into smaller and smaller subsets while, at the same time, an associated decision tree is developed. Decision tree regression is used to predict a target variable whose values are continuous in nature. Regression trees can easily handle complicated data. The impurity measures for a regression tree are: Least squares: least squares regression is a way to find the line of best fit for a set of data. It does this by creating a model that minimizes the sum of the squared vertical distances (residuals). Least absolute deviations: least absolute deviations attempts to find a function which closely approximates a set of data. Instead of minimizing the sum of squared errors, it minimizes the sum of the absolute values of the errors. So let us consider an example and implement decision tree regression. Here we have a dataset that has been given to us, and we have a scatterplot which represents it. In the dataset we have two independent variables, X1 and X2, and we have to predict a third, dependent variable. We cannot see it in the plot, because this simple two-dimensional chart only fits the two independent variables; Y is the third dimension. Once we run the decision tree regression, the scatterplot is split up into segments, and the splits are created by an algorithm. How and where these splits are made is determined by the algorithm. The right split is the one that most increases the amount of information, and the algorithm knows when to stop: when less than a certain minimum amount of information would be added by splitting, or when a leaf would contain less than 5% of the total points, that node is a leaf node and no further split takes place. These optimal splits then help us create the decision tree. In this example, the first split is at 20 (X1), the second at 170 (X2) on the right side of the first split, the third at 200 (X2) on the left side of the first split, and the fourth at 40 (X1) below the second split. As we split the data we add information to the system, and the information falls into the terminal leaves. This information helps us predict the value for a new element, and it works simply by taking the average of each segment.
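As a toy illustration of my own (not from the post, with made-up numbers), this is how a single split point can be chosen: try each candidate threshold on a feature and keep the one that gives the lowest residual sum of squares in the two resulting groups, the criterion formalized just below.

```python
# Minimal sketch: pick the threshold on one feature that minimizes the
# residual sum of squares (RSS) of the two resulting groups.
import numpy as np

def best_split(x, y):
    best_t, best_rss = None, float("inf")
    for t in np.unique(x)[:-1]:                     # candidate thresholds
        left, right = y[x <= t], y[x > t]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_t, best_rss = t, rss
    return best_t, best_rss

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.array([45, 50, 60, 80, 110, 150, 200, 300, 500, 1000])  # made-up target values
print(best_split(x, y))   # threshold with the smallest RSS
```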
In regression trees, each leaf node represents a numeric value. Splitting in a decision tree: in order to determine how to split the data into groups or segments, we try the different thresholds and look for the one that gives us the smallest sum of squared residuals. The splits are chosen to minimize the residual sum of squares between the observations and the mean in each node. The residual is simply the difference between an observed value and the value predicted by the node (the node mean): residual = y − ŷ. The threshold with the smallest sum of squared residuals becomes a candidate for the root of the tree. The residual sum of squares is RSS = Σ (yᵢ − ŷᵢ)², summed over the observations in each node. In order to find the best split, we must minimize the RSS. If we have more than one predictor, i.e. several independent variables, we find the optimal threshold for each one and pick the candidate with the smallest sum of squared residuals to be the root. When we cannot add any more information to the system and we have less than 5% of the total points in a leaf, that node becomes a leaf node; otherwise we repeat the process to split the remaining observations until we can no longer split them into smaller groups or segments, and then we are done. Regression trees can easily accommodate additional predictors. That's all for regression in the decision tree machine learning algorithm. Stay tuned for further blogs. Thank you. Implementation of the decision tree regression algorithm on the Position_Salaries dataset ~ Dataset: Position_Salaries dataset. Link: https://github.com/InternityFoundation/MachineLearning_Navu4/blob/master/Day%208%20:%20Decision%20Tree/Decision%20Tree%20Regression%20.ipynb
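A minimal sketch of the kind of implementation the linked notebook covers (my own code, assuming the usual Position_Salaries columns Level and Salary; the file name and the 6.5 query level are placeholders):

```python
# Decision tree regression with scikit-learn on a Position_Salaries-style dataset.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

data = pd.read_csv("Position_Salaries.csv")
X = data[["Level"]].values        # independent variable
y = data["Salary"].values         # continuous target variable

regressor = DecisionTreeRegressor(random_state=0)
regressor.fit(X, y)

# Predict the salary for an intermediate level, e.g. 6.5
print(regressor.predict([[6.5]]))
```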
https://medium.com/swlh/decision-tree-regression-c977b732eb51
['Navjot Singh']
2020-11-26 03:37:54.982000+00:00
['Machine Learning', 'Python', 'Decision Tree', 'Data Analysis', 'Data Science']
The value of UX design
As we spend more and more time with our digital devices, we are all what the industry describes as “users”. When we visit a website or use an app, all we are looking for is an effortless experience, without having to think too much or do too much. There are reasons why some search engines feel better to use, why some payment apps feel safer than others, why one website's customer care service is easy to use while on another we can't seem to find the options we are looking for. All those unforgettable user experiences come from well planned and well executed design solutions. Businesses that recognize the value of UX design understand the importance of providing memorable experiences to their users and believe in creating long-lasting relationships with them. “That's nice to read, but what exactly is the value of UX design?” To make the case for the value of UX design more concrete, let's speak the language of numbers. Effects of bad UX design: Companies lose $62 billion every year due to poor customer service. 70% of customers abandon purchases because of bad user experience. 91% of non-complainers just leave, and 13% of them tell 15 more people about their bad experience. Results of good UX design: Research shows that, on average, every $1 invested in UX brings $100 in return. That's an ROI of a whopping 9,900%. A well-designed user interface could raise your website's conversion rate by up to 200%, and a better UX design could yield conversion rates of up to 400%. 8 in 10 customers are willing to pay more for a better user experience. UX business case stats: Jeff Bezos invested 100X more into customer experience than advertising during the first year of Amazon. Airbnb's Joe Gebbia credits UX with taking the company to $10 billion. After Virgin America redesigned their digital travel experience, they saw a 14% increase in conversion rate, 20% fewer support calls, and flyers booking nearly twice as fast, on any kind of device. In the McKinsey Quarterly piece "The business value of design", they tracked 300 companies for more than five years and concluded that the diversity among companies achieving top-quartile MDI performance shows that design excellence is within the grasp of every business, whether product, service or digitally oriented. So, what do we see here? It's not just numbers; it's how UX design has proven itself to be the most credible part of a digital experience. Good design is an enabler and is the real differentiator. Creating a memorable experience that works in favor of both users and business owners comes only through design.
https://uxplanet.org/the-value-of-ux-design-bc22bcd482a4
['Priyanka J']
2020-09-27 09:08:36.797000+00:00
['UX Design', 'UX', 'Business', 'Design']
Countries where the future has already come
– I wonder what the future holds for us in 10, 20 or even 50 years? Great! So do I! Here is my foreign passport, some money, a suitcase… Let me embark on a journey to the future, starting from Domodedovo… Top 5 countries of the future. Switzerland: I am starting my journey to the future with a country well known among self-respecting officials: Switzerland. Notwithstanding its small territory, it is known worldwide for the gigantic investments it parks in technological development and research. The average annual capital allocated to innovative projects is estimated at 16 bn CHF (108,006,998,875 RUB). While domestic utility workers, sticking to old Russian traditions, lay asphalt only when the first November snow falls, in Switzerland 3D printers and robots build houses and bridges within several days. Moreover, in spite of all the transparency and incorruptibility of its election system, the inhabitants of this amazing country keep improving their anti-corruption methods. Besides, the first blockchain-based municipal elections were held here. The election was based on the state system eID, an element of Swiss digital infrastructure that allows voting by mobile phone. No need to be present at the polls; just click on this or that candidate and your vote is recorded. Imagine ordering a pizza, except these are elections) Apart from eID, digital services such as eGovernment, eVoting, eBanking, eHealth, eEducation, and eCommerce are planned to be developed and introduced. A fully transparent system of elections and state government turns out to be real in the future that has already come to Switzerland. Japan: What would a list of innovative countries be without Japan? It has been a long time since the Land of the Rising Sun became synonymous with words like innovation, robotics, computer technologies, sushi, anime, hentai… oops, that's for another blog; let's dwell on innovations. What is the secret of the Japanese technological wonder? It's pretty simple: innovations are generated through active scientific research and technological development, then commercialized (i.e. they bring profit), turning into promising startups that are the base of the country's continuous development. “Psst! Dude, want some Japanese innovations?” “Pigeon post has one doubtless advantage over the Russian Post: pigeons don't steal smartphones from packages.” Meanwhile, the Japanese distrust pigeons. What if they used to work for the Russian Post? That is why they created a special driverless vehicle that delivers correspondence directly to the addressee and sends SMS notifications about its arrival. Green technologies are also well known in Japan. Recently, the local company Eco Marine presented a project for ships that would use both solar and wind energy to move. This will make shipping cheaper and protect the waters of the World Ocean from contamination. Isn't that genius? The USA: Of course, it will take us a long time to forgive the Americans for what President Obama did in Russian entrance halls. However, like it or not, the Yankees keep a finger on the pulse of modern technologies. Moreover, they are among the technological heavyweights. Microsoft, Apple, IBM, and other giants of the tech universe are the creations of skilled and progressive specialists of the U.S. The latest hyped ongoing project is undoubtedly Hyperloop, a new high-speed means of transport. The American authorities have already approved the building of a Hyperloop between Washington D.C. and New York.
The planned travel time from one city to the other (330 km) is 29 minutes; a flight takes 55–75 minutes, a high-speed train almost 3 hours, and a highway trip 4–5 hours. 330 km in less than an hour? At last, you can visit your grandma in Yakutsk without taking a month's vacation at your own expense! Many thanks, Elon Musk! Sweden: A country with a comparatively small territory, Sweden is always ranked in the top 5 of the prestigious Global Innovation Index, a list of the most thriving and innovative countries. Unfortunately, Russia holds only 48th place. So, what did these white-haired Vikings do? They invented a lot of useful things: matches, the pacemaker, the PC mouse, Tetra Pak food packaging, the Spotify music service, Skype, Bluetooth, and IKEA, for example. My girlfriend would put the last one at the top of the list… Meanwhile, Sweden is one of the most progressive countries in environmental protection, renewable energy, waste disposal, and water cleanliness. For the Swedes, it is very important to save resources during manufacturing, use and disposal, as well as to reduce exhaust volumes and protect nature. Sweden is a Scandinavian country where no one knows what corruption is. Still, they do everything possible to keep officials far from it. Thus, a blockchain-based technology for real estate registration was tested over two years. The results showed that the technology is efficient enough for regular use. It should be noted that blockchain accelerated a registration process that used to take from 3 to 6 months. Now it takes several hours, and neither the buyer nor the seller needs to be in the country. Israel: Israel is considered one of the most developed countries of Southwest Asia in terms of economic and industrial development, notwithstanding the never-ending war on its territory. Even Israeli girls serve in the army, even local top models! A nice try, recruitment office) Besides, Israel is a world leader in technologies for protecting water resources and in thermal energy. It is no secret that the country's climate is not ideal for agriculture. However, it is one of the best agricultural states, first of all thanks to domestic innovations in this sphere. ROOTS Sustainable Agricultural Technologies is the brightest example. Its technology lies in placing water-filled tubes in the ground to reach the optimal temperature. Put otherwise, if the ground is very warm, ROOTS can cool it off, and vice versa. This technology can considerably boost yields. The tubes can also feed the roots with water and fertilizers, which allows plants to be grown under almost any conditions, from the Arctic to the Sahara. Conclusion: strange as it may sound, it is not a powerful army or vast mineral deposits (important as those undoubtedly are), but the absence of corruption, strong investment in modern development, and a drive for efficiency both in manufacturing and in everyday life that underpin a country's prosperity. What is more, please stop producing YotaPhones and LADA Kalinas. Let us stop blushing when we see the achievements of our foreign colleagues.
https://medium.com/smile-expo/countries-where-the-future-has-already-come-f9513432cd96
[]
2018-08-29 09:31:01.224000+00:00
['Future', 'Blockchain', 'Innovation', '3D Printing', 'Switzerland']
Artificial Intelligence and the Future of Retail
By Courtney Watts Two weeks ago, I joined over 33,000 others in New York City at NRF’s Big Show, the premier gathering for retail’s leaders and innovators. Hundreds of exhibitors were there to show off their latest and greatest in retail tech. Nearly every conversation I had with a retailer or industry expert was focused on AI and its impact on the future of retail. In this new and present era of digital transformation, businesses that do not adapt will find themselves falling behind very quickly. How AI will continue to shape the future of retail: Personalization Big data is increasingly being used to gain insight into retail at a faster pace and with more accuracy than human intuition. It allows retailers the ability to bring personalized, feel-good experiences that you might receive in-store, into the digital world. AI can already understand my style and adapt recommendations for me as I shop. Amazon has been doing this for a while now, and other online retailers are starting to follow suit. This ‘in the moment’ personalization makes every online shopping experience more valuable for shoppers and retailers. Chatbots Digital assistants and automated customer service bots are becoming more common across the industry. Having a well-made chat service will complement the welcoming sales associate in physical stores. A good chatbot can answer some of the common questions shoppers have, providing product recommendations and explaining deals and offers. The key, however, is to design an experience that feels authentic, and not robotic. Sephora, H&M and even high-end fashion brands are turning to chatbot tools as their modern concierges. Last April, Facebook opened up its chatbot tool, allowing companies to connect with customers using AI. This scenario will continue to be the norm for retailers, making shopping seem more like a conversation and less like interacting with a search engine. Visual Search Lately, brands are obsessed with making images ‘shoppable’. In a move ahead of its time, Pinterest introduced a visual search tool that lets you zoom in on a particular object in an image to discover similar images. As a site with a plethora of rich, visual data, it makes sense to leverage deep learning to optimize this experience, and it’s a major step in transforming how we shop and search online. Similarly, the ShopBot on Facebook Messenger searches eBay for you. You can text a picture of what you’re looking for and the bot will make recommendations based on similar images off its site. Making operations more efficient behind the scenes Retailers like Amazon have been forward thinking in embracing AI to optimize not only customer demand, but also supply chain logistics. AI can help make sure products are in the right places and predict which products will sell out faster than others. At every step of the buyer’s journey, AI is positioned to deliver tangible and important experiences for both retailers and consumers. It is no longer the lore of science fiction, but a reality that is already present in our everyday lives. Retail and nearly every other industry is shifting because of AI and machine learning. If you’re interested in the AI world and want to learn more sign up for our AI and Machine Learning newsletter! Courtney is a Client Partner at TribalScale, helping our clients bring innovative ideas to life. 
After spending time early in her career in human rights and international education nonprofits, Courtney became passionate about how businesses leverage technology and innovation to make the world a better place. After getting her Master’s in International Business in Boston, she ventured into the tech and digital space (both enterprise and startup) and hasn’t looked back. Originally from the states, her work and study experience spans across the US, Latin America, Spain, India and China… and as of last year, Canada! Connect with TribalScale on Twitter, Facebook & LinkedIn!
https://medium.com/tribalscale/artificial-intelligence-and-the-future-of-retail-e65dc4261a75
['Tribalscale Inc.']
2018-03-01 21:40:01.197000+00:00
['Retail Technology', 'AI', 'Technology', 'Retail', 'Ecommerce']
Three Rules for Successful Video Games
Three Rules for Successful Video Games No matter what video game you are making, there are three key elements you must get right A long time ago I wrote an article for Gamasutra talking about how to spot bad games, which was critiqued because people felt the list was too subjective and broad for the time. About eight years later (and now over 2,000 games played from all corners of the industry), I feel it’s time to revise this topic. Today I’m going to cover the basic building blocks that make a game successful. These aren’t just necessary requirements for a game to be commercially successful, they’re also markers that can identify the success of a game’s design. Game Feel “Game Feel” is a term to describe how a game feels in the player’s hands and represents what it’s like to actually play the game. It doesn’t matter what genre you’re designing for or your intended audience, if your game doesn’t feel right to play, people aren’t going to stick around. Some of you reading this may think that complex or deep games are a positive here, and you may be surprised to know that they’re not. No video game should be hard to control, and a confusing or poorly thought out UI is often the first red flag of an experience. A game that is simple to learn and challenging to master is always the goal of good game feel. Therefore I continue to harp on the importance of UI/UX and getting playtesters on your game. Pain points are often the antithesis of a good game feel, and the big ones that often end up frustrating players come with the UI/UX. A bad control scheme can ruin a game even before someone starts to play it. For RPGs or abstracted titles, game feel is also about how menus are set up, how critical information is displayed, and does the player feel like they know what’s happening. It’s easy to think that if your game is turn-based that you don’t need to worry so much about the UI because the player has all the time to process what’s happening. However, turn-based design really lives or dies by its UI, as that’s the player’s only window into figuring out how to play. Your game could have the deepest, most interesting game systems of any strategy game, but if the player has no idea what’s happening because of the UI, then none of it matters. Action-based games are not exempt from game feel, although it is a little easier for them. Surprisingly, this is a category where I tend to see newer developers succeed even if they don’t have much design experience with the genre or even with the industry. I tend to chalk it up to having played so many good and bad examples to draw from when designing. Good game feel is essential if you’re trying to design a challenging game. Good action-based gameplay can be drilled down to a basic tenet — the action on the screen should be 1:1 to the buttons pressed by the player. What that means is the player should be in complete control over their character and nothing should be happening that the player did not input. I’ve played plenty of platformers and action games where the general feel is sluggish — the character doesn’t perform an action when I push the button, gets stuck in long or short animations, or I feel like I’m fighting the controls. You can have input buffering, as with souls-likes, but your game should still be close to a 1:1 ratio. Often, good game feel in action games is about deciding on the mechanics first, playtesting and iterating on them until they feel right, and then building the game space to fully capitalize on it. 
A few years ago I would have argued that you can have a game where things feel purposely sluggish if you were trying to represent someone who isn’t a master fighter/soldier, etc. However, in today’s market, I feel that is just a poor excuse. Part of the overall topic in this article is that you are no longer designing games in a vacuum. If someone finds a problem in your game, they can just switch over to any of the other dozen/hundreds of games in their library and return your title. There are too many developers who adhere to a poor UI as part of the game experience, and that is one of the biggest mistakes you can make in terms of game feel: If your game feel is terrible, then your game is terrible. There are far more creative ways to have a “weaker” character without hurting your game feel, and conversely, has been a part of the successful games that have elevated their genres. The days where you could make a survival horror game frustrating to play, and write it off as part of the experience are over. This point also relates to story-driven titles and is another trap for developers. Even if your entire gameplay is just walking around and solving basic puzzles, that is not an excuse to make the feel of exploring the world poor, or make it cumbersome to interact with the environment. There are games where the player must wander around huge environments with either a limited run or no running ability that just pads out the tedium. I would argue that a narrative game should have an amazing game feel because there are fewer mechanics to design and balance. As I said a few paragraphs up, there are designers who, early in their game dev careers, nail the feel of their first games. Unfortunately, they often slip up on our next topic. Presentation “Presentation” for our purposes here is going to define everything about the look and aesthetics of your game. Video games are a visual medium, and being able to effectively present your game to the consumer is often overlooked by new developers. The “art” side of a video game can be either quite easy or exceedingly difficult depending on your background. Our chief problem when describing the presentation of a game is that art is subjective, so how can we put a label on a game to say that it has a good or bad presentation? What we have seen over the last decade is the difference between a game where the art of the title is simply there as assets, and where a developer is exploring what they can do within the constraints of their style. Take pixel art as an example: there is a monumental difference between good and bad pixel art in terms of look and animation. As I’ve said countless times, if you’re trying to emulate classic games and your game looks worse than them, you have a big problem. We have all heard the complaint of “asset-store rip off” to describe the look of a game when it appears to be low quality, even if the developer made (or retouched) all the assets. If things looked mismatched, not enough was done to change the assets, or things simply look low quality, people are going to view your game as low quality. I’m about to say something very mean to every developer reading this, but it needs to be said: General consumers are not going to buy ugly-looking games There is a lot to unpack with that damning statement. For myself, I can look past any issues with art and aesthetics to examine the gameplay underneath, which is how I find a lot of my hidden gems each year. 
For most people, however, they’re not going to give your game one second of their time if it doesn’t look presentable. I know there are a lot of developers out there who are programmers first, artists maybe third or fourth down the line; and let’s face it: art is hard to do right. But in today’s market with the consumers buying games, they’re looking for quality, whether your game is 20 minutes or 20 hours long, it doesn’t matter. One impressive trailer or unique screenshot can easily elevate a game in the consumer’s eyes. The quickest way for someone to judge a game is on its art, unfortunately, and often why games that have good art over good mechanics tend to do better market-wise. A solid aesthetic for your game can go a long way towards getting people interested in it. All it can take is one well-done trailer featuring a unique look or art style to quickly elevate a game in the consumer’s eyes (see Cuphead’s announcement for instance). Of course, it is easier to show off art and theme compared to gameplay. There is one other aspect of the presentation that is often overlooked and not discussed in this capacity — Are there noticeable bugs in your title? They could be big examples like the game crashing or even minor ones like characters clipping through objects. This is another point that could prove to be divisive, but the days in which consumers would ignore minor, or even major, bugs are over. Anything that lowers the quality of the presentation of your title is a major red flag to the consumer. People want to play a game that looks like the developer put a lot into it, not something that fell off the back of a metaphorical truck. Good presentation gets people to check out your game, good game feel and design is what keeps them playing and not refunding your title. What I’ve talked about so far are elements that designers can understand and work to improve, but there is one that is often the most damning for any video game. Marketing To explore the entirety of successful marketing of a game is beyond the scope of this piece and my own experience — I study game design, not marketing. However, for every developer large and small reading this right now, you need to have someone in your corner who gets it. You can do everything right with the first two points of this post, and still fail horribly if no one knows about your game. I have said this countless times, but any developer who refuses to market their game because they view it as “selling out” is not going to last in this industry. I go back to a quote from my friend Z over at Serenity Forge: “There is as much art in the business as there is business in the art.” People often gloss over the impact marketing and PR have on the success of a videogame, and only tend to understand the results, not the work itself. There is a reason why games like Darkest Dungeon, Slay the Spire, and Disco Elysium blew up, and it wasn’t just for their gameplay. The developers did the work of putting together attractive marketing campaigns, reaching out to people, and making sure that the world knew about their respective titles. Again, all three of those games were amazing on their own, but that became a part of their marketing plans, not the only element. You should be planning the marketing of your game as soon as you have something playable for people to look at. 
Some people tend to think that successful games just “magically appear” one day, but I can tell you from experience that the games that blow up like that had weeks or months of marketing their games and reaching out for coverage. A landing page for your title and company, press kits, general contact information; these are just a few check marks on the dizzying lists of tasks needed for the successful marketing of a game. The Truth of Videogame Success It’s time for one more salient point: Games that truly succeed do everything that we’ve talked about today correctly, not just one or two. I have personally played many games that have a great game feel to them, but the presentation was lacking and there was no marketing to them. There are games that have great gameplay and presentation but did very little marketing and most people didn’t hear about them. There are games that did a good job of marketing with a good look, but their gameplay was poor and people didn’t stick around. Once a game breaks out in terms of marketing, success usually follows for it. There are three comments I want to address, as I feel people are going to leave them after reading this post. The first has to do with lucky breaks like Five Nights at Freddy’s and Among Us — games that managed to blow up in a big way thanks to outside forces. Luck is something you never want to gamble your studio on, and why the three points mentioned here are important. With Five Nights, it has been said that Scott had been making games for around 20 years before he struck gold with it. I know at this very moment there is someone out there thinking that they can (and should) emulate Among Us and build a game that is just like it because that’s the game everyone is talking about. Here’s the truth as bluntly as I can put it: Among Us was a commercial failure. No studio should be designing their game under the idea that no one will buy it for two years and then it gets picked up by famous streamers and blows up. I’m saying this bluntly because I have seen way too many indie developers chase after popular games without understanding why they’re popular and what they did right/wrong, and often end up failing in the process. The second point has to do with passion projects or one-hit wonders like Dwarf Fortress, Minecraft, or Factorio, which spent an awfully long time in development. Passion is a major source of inspiration in game dev, but can unfortunately be blinding to the actual work that goes into a successful game. Understanding the difference between what you want to make and what you can make is difficult but ultimately so important to your success. No developer should make their dream game as their first title without any experience in game development. Passion projects end up turning into incredibly long development cycles, and sadly, are rarely able to support their cost. Building your entire studio on the hopeful mega-success of one game is another easy path to failure. The games I just mentioned are exceptions to the rule. For most of you reading this, you are not going to have that multi-million dollar game that succeeds on all fronts. I’m not trying to be mean, that’s just the law of averages at work. And the final comment is especially important to discuss. 
I know there is someone out there who is going to write something like the following: “My “ugly” unknown games get hundreds of sales and earn me enough money to live off of, so who the **** are you to tell me how to design a videogame?” In my previous post about understanding and celebrating game criticism, I talked about why developers need to do anything and everything they can to mitigate the risks of releasing a game. If you are earning enough money and don’t care about creating a successful commercial business, then you can ignore everything that I say. What I talk about is for people who are trying to earn a living and grow in this industry. This is not for people who have “made it” — having years of successful games and a sizable fanbase who will support them no matter what. A company cannot continue to grow and improve if it’s not extending and building its audience and consumer base. If you can make simple changes that start attracting people to your studio and titles, that can lead to more sales and better security in the industry. As a company, you don’t want your hope of remaining in business to rest upon the chance that all 200 of your fans will definitely buy your game. What you want is to have a fanbase several times more than that, and only needing a fraction of them to buy your game for it to be considered a success. I have said this many times — the bar of quality for video games has grown. You cannot just design a game in a vacuum and hope that it does well. Not every game is going to be the next Minecraft, and building a successful game company will require more than one game. By focusing on the points that we’ve outlined today, you can see how to improve yourself and do more to get your games out there. If you enjoyed my article, consider joining the Game-Wisdom Discord server. It’s open to everyone.
https://medium.com/super-jump/three-rules-for-successful-video-games-c6e86cd12288
['Josh Bycer']
2020-12-21 00:30:02.003000+00:00
['Marketing', 'Game Design', 'Gaming', 'Business', 'Game Development']
Critical Introduction to Probability and Statistics: Fundamental Concepts & Learning Resources
From Probability Theory to Statistics. Statistics was antedated by the theory of probability. In fact, any serious study of statistics must of necessity be preceded by a study of probability theory, since the theory of statistics is grounded in it. While the theoretical ends of statistics agree, at least as a common feature, that statistics depends on probability, the questions of what probability is and how it is connected with statistics have been the subject of disagreement [8]. There is an array of statistical procedures that are still relevant today; most of them rely on modern measure-theoretic probability theory (Kolmogorov), while others rely on related notions as a means to interpret hypotheses and relate them to data. Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means (Russell, 1929). What does probability mean? The mathematical notion of probability does not provide an answer to this. Hence, the formal axiomatization of probability does not guarantee that it be held meaningful for all possible worlds [11]. Interpretations of Probability Theory [10–11]. Since the notion of probability is deemed one of the foremost concepts in scientific investigation, and its relevance spans the philosophy of science in the analysis and interpretation of theories, epistemology, and the philosophy of mind, the foundations of probability and its interpretations, which are of utmost relevance in honing our understanding of statistics, bear, at least indirectly and sometimes directly, upon scientific and philosophical concerns. The probability function, a particular kind of function used to express the measure of a set (Billingsley, 1995), may be interpreted as either physical or epistemic. In addition, the American philosopher Wesley C. Salmon (1966) provides a set of criteria for an adequate interpretation of probability, briefly reviewed as follows [11]: Admissibility: the meanings assigned to the primitive terms in the interpretation transform the formal axioms, and consequently all the theorems, into true statements. Ascertainability: this criterion requires that there be some method by which, in principle at least, we can ascertain values of probabilities. Applicability: the interpretation of probability should serve as a guide relative to the domain of discourse (or field of interest). According to Salmon (as cited in Hájek, 2019), most of the work will be done by the applicability criterion. That is to say, more or less, that our decision on how to interpret probability should reflect the world we are interested in. For example, Bayesian methods are more appropriately used when we know the prior distribution of our event space, for instance when rolling a die, where there is a geometric symmetry that follows a natural pattern of distribution. For most experiments, however, Bayesian methods would require the researcher's guess for setting some prior distribution over their hypotheses. This is where other interpretations may seem more appropriate.
Because we are more concerned with honing our deep understanding of statistics, I limit this article to the most relevant set of interpretations, which can be classified into the physical and epistemic classes. For a more detailed treatment of interpretations of probability, the reader is invited to consult the entry from the Stanford Encyclopedia of Philosophy on Interpretations of Probability [11]. Physical: the frequency or propensity of the occurrence of a state of affairs, often referred to as chance. Epistemic: the degree of belief in the occurrence of the state of affairs, the willingness to act on its assumption, a degree of support or confirmation, or similar. According to the University of Groningen philosophy professor Jan-Willem Romeijn (2017), the distinction should not be confused with that between objective and subjective probability. Both physical and epistemic probability can be given an objective and a subjective character, in the sense that both can be taken as dependent or independent of a knowing subject and her conceptual apparatus. Meanwhile, the long-held debate between two different interpretations of probability, one based on objective evidence and one on subjective degrees of belief, led mathematicians such as Carl Friedrich Gauss and Pierre-Simon Laplace to search for alternatives more than 200 years ago. As a result, two competing schools of statistics developed: Bayesian theory and frequentist theory. Photo by Ellicia on Unsplash. Note that some authors may describe the classical interpretation of probability as Bayesian, while classical statistics is frequentist. To avoid this confusion, I will not use the term classical to refer to frequentist theory. In the following subsections, I briefly define the key concepts of the Bayesian and frequentist theories of statistics, which I got from [14]. 1.0 Bayesian Theory. The controversial key concept of the Bayesian school of thought is its assumption of prior probabilities, which relies on the researcher's naive guess or confidence about their hypotheses. But there are also good reasons for using Bayesian methods over the frequentist approach. The following highlights the key ideas as to why you should or should not use Bayesian methods for your analysis. Bayesian inference depends on one's degree of confidence in the chosen prior. Bayesian inference uses probabilities for both hypotheses and data; it depends on the prior and the likelihood of the observed data [14]. Criticism [14]: The subjective nature of selecting priors. There is no systematic method for selecting priors. Assigning subjective priors does not constitute outcomes of repeatable experiments. Reasons for using Bayesian methods [14–15]: Bayesian methods are logically rigorous, because once we have a prior distribution, all our calculations carry the certainty of deductive logic [14]. Philosophers of science usually come down strongly on the Bayesian side [15]. The simplicity of the Bayesian approach is especially appealing in a dynamic context, where data arrives sequentially and where updating one's beliefs is a natural practice [15]. By trying different priors, we can ascertain how sensitive our results are to the choice of prior [14]. It is relatively easier to communicate a result framed in terms of probabilities of hypotheses.
2.0 Frequentist Theory While Bayesian methods rely on priors, Frequentism focuses on the long-run behavior of statistical procedures. The frequentist approach uses conditional distributions of data given specific hypotheses. The frequentist approach does not depend on a subjective prior that may vary between researchers. However, there are some objections to keep in mind when deciding to use a frequentist approach. Criticism [14] It struggles to balance behavior over a family of possible distributions. It is highly experimental; it does not carry the template of deductive logic. P-values depend on the exact experimental set-up. P-values and significance levels (both forms of threshold for an inferential decision) are notoriously prone to misinterpretation [14]. Reasons for using Frequentist Methods [14] The frequentist approach dominated in the 20th century, and we have achieved tremendous scientific progress [14]. Frequentist experimental design demands a careful description of the experiment and the methods of analysis before starting — it helps control for the experimenter's bias [14]. Comparison The figure above is a comparative analysis made by Little (2005). The difference is that Bayesians aim for the best possible performance against a single (presumably correct) prior distribution, while frequentists hope to do reasonably well no matter what the correct prior might be [15]. Important notes The competition between the Frequentist approach to statistical analysis and Bayesian methods has been around for over 250 years, and each school of thought has been challenged by the other. The Bayesian method has been greatly criticized for its subjective nature, while the Frequentist method has been questioned over its justification of the probability thresholds from which it draws inferences (p-values and significance levels). It is worth noting that, although the Frequentist approach to statistics prevailed in 20th-century science, the resurgence of Bayesian methods has been greatly valued in 21st-century statistics. For a more detailed discussion of this matter, the reader is invited to consult [15].
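For contrast, a frequentist treatment of the same made-up coin-flip data estimates the bias from the observed frequency alone and attaches a confidence interval to the procedure rather than a probability to the hypothesis. The sketch below uses the crude normal-approximation (Wald) interval, again only as an illustration of the difference in interpretation, not as a recommended analysis.

```java
public class FrequentistCoinFlip {
    public static void main(String[] args) {
        // Hypothetical data: 7 heads out of 10 flips.
        int heads = 7;
        int n = 10;

        // Maximum-likelihood estimate of the bias: the observed relative frequency.
        double pHat = (double) heads / n;

        // Approximate 95% confidence interval (Wald / normal approximation).
        // Note: this approximation is rough for a sample as small as n = 10.
        double z = 1.96;
        double se = Math.sqrt(pHat * (1 - pHat) / n);
        double lower = pHat - z * se;
        double upper = pHat + z * se;

        System.out.printf("Point estimate: %.3f%n", pHat);
        System.out.printf("Approximate 95%% CI: [%.3f, %.3f]%n", lower, upper);
        // A frequentist reads this as: the interval-constructing procedure covers
        // the true bias in roughly 95% of repeated experiments. It is a statement
        // about the procedure's long-run behavior, not a probability for the hypothesis.
    }
}
```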
https://medium.com/dave-amiana/critical-introduction-to-probability-and-statistics-fundamental-concepts-learning-resources-95853e94308c
['Dave Amiana']
2020-06-09 14:45:16.916000+00:00
['Science', 'Statistics', 'Mathematics', 'Philosophy']
The Music Plays On — Mahler Symphony №1
Gustav Mahler, 1881 Gustav Mahler had every intention of becoming a composer. However, the professional success he attained and experienced during most of his career was based almost solely on his rather considerable skills as a conductor. Why did this happen? In 1880, three years after graduating from the Vienna Conservatory, Mahler composed his first large piece, the incredibly dramatic cantata Das klagende Lied (“The Song of Lamentation”), and submitted it in 1881 for the Beethoven Prize of the Vienna Conservatory. It didn’t win. Mahler would later complain that the conservative jury’s decision condemned him to a life conducting in the theater. Let’s take a look at the jury members — Johannes Brahms, Karl Goldmark, Johann Nepomuk Fuchs, Josef Hellmesberger and Franz Krenn. Conservative, perhaps, but certainly not an incompetent jury. Let’s take a listen to the piece that did win the Beethoven Prize that year, Robert Fuchs’s Piano Concerto in B flat minor, Op. 27. The decision prompted Mahler to throw himself into a career of conducting, and for the next four years he didn’t compose, slowly moving himself up the ladder of conducting positions in provincial opera houses. In 1884, while working as the kapellmeister for the opera house in Kassel and inspired by his hopeless infatuation with one of the sopranos, Mahler composed his song cycle Lieder eines fahrenden Gesellen (“Songs of a Wayfarer”). Four more years would pass until, in 1888, the twenty-seven-year-old Mahler, now the second conductor at the Leipzig opera house, befriended the grandson of the famous German composer Carl Maria von Weber, who shared with Mahler his grandfather’s unfinished opera, Die Drei Pintos. Mahler completed it using the unfinished material as well as adding his own compositions. It was his first major success. Unfortunately, Mahler fell in love (and probably had an affair) with Weber’s wife, Marion Mathilde. This hopeless romance resulted in Mahler throwing himself into his next composition, a five-movement Symphonic Poem. Mahler left his post in Leipzig in May of 1888 and assumed a new post as General Music Director of the opera house in Budapest in October. A year later, in November of 1889, this five-movement Symphonic Poem was premiered. In 1891, Mahler became the chief conductor for the opera house in Hamburg, and in 1893 there was a second performance of his Symphonic Poem, now with the subtitle Aus dem Leben eines Einsamen (“From the Life of a Lonely One”). It was not until a performance in Berlin in 1896 that he decided to omit the Blumine movement, finally creating what would become his Symphony №1. Let’s listen to this touching, omitted movement. It was never well received during Mahler’s lifetime, and the symphony didn’t really catch on until Bruno Walter and Leonard Bernstein performed and recorded it in the 1950s and ’60s. Here are my favorite performances. I can’t help but share the San Francisco Symphony Youth Orchestra’s triumphant performance in the Berlin Philharmonie in 2012.
https://donatocabrera.medium.com/the-music-plays-on-mahler-symphony-1-8fa57016eee0
['Donato Cabrera']
2020-05-20 20:50:15.409000+00:00
['Las Vegas Philharmonic', 'California Symphony', 'Donato Cabrera', 'Mahler', 'Music']
Cold Temperatures Can Make Your Workouts More Efficient
Photo: Jordan Siemens/DigitalVision/Getty Images In the Before Times, I was a frequent gym-goer. I’ve always felt more accountable in a fitness class. But the Covid-19 pandemic has significantly changed my workout habits. Because I live in a small Brooklyn apartment, most of my exercise happens outside. I was worried about how the cold temperatures would affect my ability to get cardio exercise, but it turns out exercising in chilly temperatures is not so bad. (Especially when you’re wearing a warm mask!) It also turns out there’s compelling science to support exercising in the cold. As Markham Heid reports, as long as you take precautions, cold temperatures can actually make your workouts more efficient. Exercise produces body heat, and the body has to work to regulate that temperature. This is why exercising in high heat or humidity can feel so miserable. Exercising in cooler temperatures can be more optimal for the body performance-wise. Claims that working out in the cold can lead to changes like weight loss, however, are overblown, as my colleague Dana Smith reports. So do not go on a winter run for that purpose. Rather, if you’re looking to push yourself a bit harder and run a bit longer — working out in the cold might be worthwhile.
https://elemental.medium.com/cold-temperatures-can-make-your-workouts-more-efficient-3e5b90bf273c
['Alexandra Sifferlin']
2020-12-21 19:11:17.660000+00:00
['Exercise', 'Fitness', 'Health', 'Life']
Forget Steve Jobs and Bill Gates. Elon Musk Is Redefining Innovation
Forget Steve Jobs and Bill Gates. Elon Musk Is Redefining Innovation To innovate will no longer be just releasing another website or app Photo by Matt Ridley on Unsplash We have always looked upon Steve Jobs and Bill Gates as the most significant innovation gurus in the world. I mean, Steve Jobs’ Apple revolutionized the way we use our phones, creating the first smartphone along the way, and Microsoft established the widespread use of personal computers. Even nowadays we refer to a general computer as a PC. These men were the starting point of an entire technological revolution that has happened (and is still happening?) since the ’80s, when people started using computers for personal use. This growth was even more noticeable when the modern Internet was made available in 1995. The following years brought new tools that made our lives easier almost every year. Windows was released, Google was created, Amazon was founded, PCs started to become cheaper, and more people could afford to have one at home. At the moment, it’s hard to find someone in the developed world without a computer and easy internet access. Things changed a lot in 30 years. Image from pplware Computers and Smartphones Are the Past in Terms of Innovation The computer and smartphone industries are starting to stagnate. There is way too much competition for new companies to thrive, and the established players are hardly bringing something new to the table. I mean, foldable smartphones? Yeah, it’s cool to show to our friends and all, but it is not something people were hoping for their phones to have. Don’t get me wrong: It is an engineering marvel. I wonder how they managed to make a display that can be folded in half. But the average consumer doesn’t care about the technical stuff. What can it do for me? The functional part of technology is what makes it either a success or a disaster. We have plenty of examples of great technologies that were never a success because they didn’t match users’ expectations:
https://medium.com/illumination/forget-steve-jobs-and-bill-gates-elon-musk-is-redefining-innovation-1dce15c0495e
['Emanuel Marques']
2020-07-27 04:59:35.185000+00:00
['Technology', 'Innovation', 'Space Exploration', 'Future', 'Inspiration']
How Long Does it Take To Rank #1?
How long does it take to rank #1? Well, this website is quite new … bought the domain on August 6th and started working on it immediately. The first thing I did was block search engines. This may be surprising, but it’s actually a good idea until you get the website presentable for the general public. I robots.txt blocked the site and implemented a no-index meta tag. So when did I really launch the site? On August 26th, I removed the no-index meta and allowed all search engines access again through robots.txt. I posted my first blog entry, and the website was off and running. The first thing to show up in search results was my blog post. FeedBurner is quite amazing; with Google Caffeine, blog posts get listed within minutes if not seconds. After that first post, it took quite a while for Google to display the rest of my pages in search results. Today is September 11th, and I can say that I now see all my pages indexed properly with accurate titles and meta descriptions. Plus I rank number one for my own name … on Google that is. Sixteen days after my site launched (which I think is pretty relevant for my name), Yahoo and Bing still haven’t listed JordanSilton.com in the top results. Just because Google indexes all its content really fast doesn’t mean that the content is more relevant for a specific query. Bing’s Stefan Weitz recently spoke about how Bing is trying to do search smarter. It may take a little longer, but validating the quality of search results is surely important. Both Google and Bing do a remarkable job of sifting through millions of web pages and choosing those most relevant for any individual query. It’s quite outstanding that my website, which gets 5–10 visits per day, can be crawled and indexed so quickly. It used to take search engines several months to update the results for websites with thousands of visits per day. Now, some results appear instantaneously.
https://medium.com/jordan-silton/how-long-does-it-take-to-rank-1-e27d29d3e57e
['Jordan Silton']
2016-04-29 01:35:09.633000+00:00
['Google', 'SEO', 'Digital Marketing']
Six of the greatest and most important live performances ever
Here is a list of some of the most important live music performances ever (in my opinion!). James Brown, Boston Garden, 1968 On April 4th, 1968 Martin Luther King Jr. was assassinated. America’s cities saw riots and violence. Amid this, James Brown was booked to play the day after. While the city considered cancelling it, they were convinced that doing so would have sparked even more anger and violence. So, they pushed on, and dedicated the concert to Dr. King. It was in the final act that James Browns true power came to the fore. As they reached a climax, young fans began rushing the stage, and the white police officers to the side jumped in to try and restore order. This was the moment that everyone had feared, and it could have easily escalated into large-scale violence. But Brown had full command — he interjected and told the crowd: “You’re not being fair to yourself and me or your race…. Now, are we together, or we ain’t?” — before launching into “I Can’t Stand Myself (When You Touch Me).” Brown arrived as a musician but left the stage more like a political leader. Brown later recalled: “I was able to speak to the country during the crisis, and that was one of the things that meant the most to me.” Queen, Live Aid, 1985 It wouldn’t be too controversial to say that the years leading up to this performance hadn’t been kind to Queen. They lost much momentum since their initial run of diverse records in the 70s. Additionally, at Live Aid they were wedged between U2 and David Bowie — bigger and more contemporary artists at the time. But Queen rose like a phoenix from the ashes. In 20 minutes they had re-established their legacy, delivering a performance that enraptured the audience. They furiously delivered their greats, Radio Ga Ga, Bohemian Rhapsody, We Will Rock You — and Freddie Mercury gave an energy rarely seen before, or since in live performances. He rushed around the stage, from piano, to marching around with his mic stand. He was in full control. “It was,” Brian May remembered, “the greatest day of our lives.” Radiohead, Glastonbury, 1997 Everything went wrong. It had been pouring for days, two stages had sunk into the mud, there were reported cases of trench-foot from attendees! The signs were bad, and when Radiohead went on stage it only got worse. The lighting rig was shining directly in Thom Yorke’s face, his monitor melted down, and they couldn’t hear themselves play. Yet, despite this, the chaos delivered one of the band’s most epic performances. Yorke seemed to feed off the rage, and persisted through their catalogue, adding twists and fire to each and every one. It was incredible. Yet Yorke only realised afterwards. “I thundered offstage at the end, really ready to kill,” Yorke remembered. “And my girlfriend grabbed me, made me stop, and said, ‘Listen!’ And the crowd were just going wild. It was amazing.” The Three Tenors, Baths of Caracalla in Rome, 1990 It was an event that was never expected, but always wanted. The three greatest operatic performers of the time — on stage together. Pavarotti said they had been asked over 50 times and yet had always said no. This was believed, in part, to be due to a rivalry between Pavarotti and Domingo. But football was to be the thing that brought them together. They were all fervent supporters, and the performance was to be a landmark moment in the World Cup, hosted by Italy. In addition, Carreras had recently defeated his leukemia. 
It became the start of a powerful force in classical music, and one that delivered time and time again. Daft Punk, Coachella, 2006 In the late 90s and early 00s EDM had gathered plenty of momentum. But live performances still left much to the imagination. In 2006 Daft Punk changed this. They took live lighting and staging to a new level — and unveiled the genre’s most incredible centrepiece. At the Coachella festival, the duo performed from a 24-foot pyramid, covered in LED panels, to over 40,000 fans. The performance influenced staging far beyond EDM, and importantly, it was a tipping point for EDM becoming a major force at festivals. The Beatles, Shea Stadium, 1965 The world was held hostage by Beatlemania, and the band chose this time to embark on a tour of North America. They opened at the home of the New York Mets and set the record for attendance — with 55,600 ticket holders. To give a full picture of the hype — the band arrived in an armoured truck, and there were 2,000 police officers in charge of security. This was an event to behold — and the noise from the crowd almost completely drowned out the music. Reportedly the Beatles themselves couldn’t hear what they were playing. The night also set a precedent — never before had a concert been held in an outdoor stadium. The night at the Shea Stadium was the start of a movement — and now the idea of such a performance seems commonplace. Indeed, some sports stadiums are choosing to be seen as the home of music and entertainment.
https://medium.com/exit-live/six-of-the-greatest-and-most-important-live-performances-ever-47fc21ef9ffc
['Giorgio Serra']
2018-02-14 14:42:56.876000+00:00
['Historical', 'Music', 'Live']
A Brief Overview of Garbage Collectors in Java, Because Cleanliness Is Necessary.
The following are the 7 types of garbage collectors in Java, along with a description of each and their appropriate use cases. I will try to explain each of them briefly so that you can differentiate between them without going through a long article. 1. Serial Garbage Collector The serial GC is useful when you only want a single thread to perform all the GC operations. Whenever the garbage collector runs, it freezes all the operations running in the background. Because of this, the serial garbage collector is not a good choice in a live server environment, where every millisecond counts for application performance. The other scenario where the serial GC is useful is when a large number of JVMs run on the same machine and the GC threads should not interfere with each other. Highlights:- Single thread, freezes application operations, not suitable for live environments, feasible when multiple JVMs run on a single machine. 2. Parallel Garbage Collector (also known as the throughput collector) This was the default GC before Java 9. As the name signifies, the parallel GC uses multiple threads to clear up memory in the heap space. The parallel GC likewise pauses all the processes in the background while performing its operations. One way to improve the throughput of an application is to increase the heap space, but on the other hand this can lead to long pauses when garbage collection takes place. Since multiple threads perform the garbage collection, we can control the number of threads that participate in it. By default, for N CPUs, N threads are used to perform garbage collection. This collector should usually be used when a lot of work needs to be done and long pauses are acceptable. Highlights:- Multiple GC threads, pauses background processes, a large heap size might lead to longer pauses, no concurrent execution. 3. Concurrent Mark Sweep (CMS) Garbage Collector As we have seen, the parallel GC might lead to long pauses in the application. The Concurrent Mark Sweep (CMS) GC helps us reduce this waiting time. As always, this comes with a trade-off: we need to share processor resources with the garbage collector while the application is executing. Comparing the parallel and CMS garbage collectors, CMS should be preferred only when it is feasible for your application to share resources with the garbage collector, which in turn reduces the wait time. Highlights:- Multiple threads, shorter pauses compared to the parallel GC, processor resources shared, mostly concurrent execution. 4. Garbage-First Garbage Collector (G1 Garbage Collector) The G1 garbage collector was first released in JDK version 7 and has been the default Java collector since Java 9. The GC, in this case, consists of two phases: a) Marking Phase:- In this phase, the GC divides the heap space into a number of equally sized contiguous regions and marks the live objects in them. b) Sweeping Phase:- After the marking phase, the GC knows which regions have the least amount of space occupied and clears those first, as they yield the largest amount of free space. Although using the G1 collector reduces the wait time, it can lead to degraded performance as the heap size increases, since with a very large heap it will spend most of its time in phase one, the marking phase. To overcome this issue, the Shenandoah GC was introduced, which we will discuss in a later section. Highlights:- Two phases, marks and then clears the space, less effective for very large heaps, concurrent marking.
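Before moving on to the newer collectors, a quick practical aside: if you want to check which of these collectors your own JVM is actually running with, the standard java.lang.management API exposes one MXBean per active collector. The short sketch below simply prints their names and pause statistics; the exact bean names it reports vary by collector and JDK build.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInspector {
    public static void main(String[] args) {
        // One MXBean is registered per active collector, e.g. "G1 Young Generation"
        // and "G1 Old Generation" when running under G1.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total pause%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```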
5. Epsilon Garbage Collector Introduced in Java 11, the Epsilon Garbage Collector was designed for the scenario where you want to allocate a fixed amount of memory for your application and never reclaim it, i.e. it won't perform any garbage collection. So your next question would be: what's the point of using it? Whenever the JVM runs the garbage collection process, it stops the other processes of your application while it's performing GC, but sometimes you want to test your application performance without it being affected by any other factors. Epsilon is ideal in such cases, as it removes the impact of GC on application performance. This lets you test your application and check its throughput without any interruption from the garbage collector. While using the Epsilon garbage collector you can specify how much heap memory you want to allocate for your application and then run it. The downside of this garbage collector is that it does not reclaim objects that are no longer in use. So if the heap space is exhausted, the application will run out of memory and throw a java.lang.OutOfMemoryError. Highlights:- Passive GC, test application performance, application might run out of memory. 6. Z Garbage Collector (ZGC) Introduced as an experimental feature in Java 11, ZGC enables concurrent execution of the application without hampering its throughput. The G1 GC has been the default collector since Java 9, but the major issue with the earlier GCs was that they stop the execution of the current application program while garbage collection is being carried out. This not only adds overhead to your application but also reduces throughput and adds to the latency of your application. Being a scalable low-latency GC, ZGC performs the expensive tasks concurrently within a time target of 10ms, i.e. it does not stop the execution of your application for more than 10ms. The most important factor with ZGC is setting up the heap size. The max heap size must be set so that it can accommodate your application; providing more memory generally helps, but there must be a balance between memory usage and how often the GC should run, since wasting memory is not acceptable. Highlights:- Concurrent execution, specify max heap size, pause times of at most about 10ms. Image via blogs.oracle.com. Pause Time Comparison between different garbage collectors. 7. Shenandoah Garbage Collector Introduced in Java 12, Shenandoah is, like the Z garbage collector, a concurrent garbage collector known for ultra-low pause times that reduce the overall GC pause time. G1 clears heap memory only while the application is paused; Shenandoah, on the other hand, can relocate objects without pausing application execution. Garbage collection pauses are really frustrating to many users, and they can largely be avoided using collectors such as ZGC and Shenandoah. Although both Shenandoah and ZGC are advantageous for concurrent usage, the former provides more tuning options.
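A simple way to feel the difference between these collectors is to run the same allocation-heavy toy program under each one and compare the GC logs. The workload below is deliberately artificial, and the flags listed in the comments are the usual HotSpot switches; their availability depends on your JDK version (ZGC and Shenandoah were experimental before JDK 15, Epsilon always needs the unlock flag, and CMS was removed in newer JDKs), so treat the command lines as a starting point rather than an exact recipe.

```java
import java.util.ArrayList;
import java.util.List;

public class AllocationChurn {
    public static void main(String[] args) {
        // Run this class with different collectors and compare the logged pauses, e.g.:
        //   java -Xlog:gc -XX:+UseSerialGC AllocationChurn
        //   java -Xlog:gc -XX:+UseParallelGC AllocationChurn
        //   java -Xlog:gc -XX:+UseG1GC AllocationChurn
        //   java -Xlog:gc -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC AllocationChurn
        //   java -Xlog:gc -XX:+UseZGC AllocationChurn
        //   java -Xlog:gc -XX:+UseShenandoahGC AllocationChurn
        // (-XX:+UseConcMarkSweepGC selects CMS on JDKs that still ship it.)
        List<byte[]> survivors = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            byte[] chunk = new byte[1024];   // mostly short-lived garbage
            if (i % 1000 == 0) {
                survivors.add(chunk);        // a small fraction survives, forcing promotion
            }
        }
        System.out.println("Retained " + survivors.size() + " chunks");
        // Under Epsilon nothing is ever reclaimed, so a large enough loop (or a small
        // enough -Xmx) ends in java.lang.OutOfMemoryError instead of a GC cycle.
    }
}
```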
https://medium.com/swlh/a-brief-overview-of-garbage-collectors-in-java-because-cleanliness-is-necessary-f3dd9babc2cb
['Prateek Nima']
2020-10-10 14:33:06.499000+00:00
['Java', 'Interview', 'Software Development', 'Better Programming', 'Programming']
Tesla is One of The Greatest Marketing Success Stories of Recent Times
Tesla is One of The Greatest Marketing Success Stories of Recent Times Tesla famously has zero sales and marketing budget yet has some of the best marketing of any company Photo by Tech Nick on Unsplash Have you ever seen an ad for Tesla on TV? Heard one on the radio? Seen one on Facebook, or off to the side in your Google search? Nope, that’s because Tesla does not do any paid advertising. Similarly, they don’t have a network of auto dealerships with ads all over your local TV, and clowns and banners and those dancing wind puppet thingies. Let me tell you how absurd this sounded to me when I first heard it, especially given my last job in the SaaS world where this ratio is king. Sales and Marketing (S&M) Cost to Annual Contract Value (ACV) I’ve sat in front of venture capitalists who insisted that in order to be a successful company we had to spend $1 in S&M cost for every $1 in ACV. Even better, maybe we should be spending $1.50 for every $1. Now, in case you’re not a finance person, let me tell you how absurd this sounds when you first hear it. Let’s say you sell a product that costs you $0.20 for every $1 in revenue (i.e. you have an 80% margin). Then you spend $1 in S&M costs to earn that revenue. Well then you’re already $0.20 in the hole. And we haven’t even gotten to overhead costs such as rent, utilities, and the salaries of everyone else in your company other than the developers, i.e. management, finance, HR, product, etc. By the time that gets added in, your bottom line is you are spending $2 for every $1 you make. Sometimes you’re spending $3 or $4. This is what companies do Well, this is just how it works. Have you ever wondered how Airbnb and DoorDash, to take the two most recent big IPOs, lose so much money year after year? As well as WeWork, Uber, Facebook and, well, you name it — all of the Silicon Valley startups lost huge amounts of money in the early years. This is standard practice. All big venture-backed tech companies (especially SaaS companies) take this approach. The theory is that the money is recurring revenue, so once you get that customer into the pipeline you will have that $1 of recurring revenue year after year for only spending $1 in year one. There is endless competition in these spaces and you have to hit critical mass (i.e. capture a certain customer share and hit a certain company size) before your competitors. So spend, spend, spend in the early years to hit that critical mass and then worry about profitability later. I’ve always thought there is a third reason that nobody is ever willing to talk about. In order for a venture capitalist to get as much of the cap table of a company in their pockets and out of the founders’ pockets, they need the company to spend as much money as possible. i.e. “here’s more money Mr/Mrs. Founder, now give us more equity… this is just how it’s done, have to spend spend spend if you want to succeed… oops now we own 90% of your company” How did Tesla get away with this? How did they grow to where they are without spending any money on marketing? They didn’t fall into the above practices. In fact, they spent the majority of their costs building their product, innovating and building out manufacturing capacity. There have been articles written about this walking through a ton of reasons. I just have two that I think really matter. Free Marketing The first reason is Tesla is a lot like Donald Trump. For whatever reason, they just receive a ton of free marketing.
Tesla owners and, probably even more critically, Tesla stockholders are to Tesla what CNN and MSNBC were to Donald Trump. Superior Product The second reason is that Tesla is the polar opposite of Donald Trump. That is to say, unlike Trump, Tesla just has a really damn good product. In fact, not just damn good, but ridiculously good, game-changing good. In fact, a product that is so good it generates its own press. Actually, let me just have Jay Leno summarize how good the product is and why he thinks Tesla will succeed. “Leno writes that he bought the Tesla because it is the fastest four-door car he could buy, and that it turned out to be electric was secondary. When he bought it, he wasn’t specifically thinking of the environment, so the reduction in emissions was just an added bonus.” by Jay Leno via www.tesmanian.com blog December 19, 2020. The Chicken vs the Egg Here’s the question everybody should have. How did this happen? I think everyone knows this general marketing concept… “You can have the best product in the world, but if nobody knows about it, what good is it?” Phil Knight, Chairman Emeritus, Nike So, Tesla made a great product, but how did anybody know how great the product was before anyone bought one? And in case we forget, for a long time nobody did. It took a long time to get the first Tesla rolled out. Well, that’s where the free marketing came in. You see, Tesla has a built-in advantage in that they have access to perhaps the single greatest influencer in the world (and definitely the single greatest influencer on Twitter). Of course, I’m talking about Elon Musk, who makes the news about once or twice a week due to a tweet. His latest one, which is all over CNBC and the rest of the news channels, was about how he almost sold to Apple a couple of years ago. Elon Musk tweet screenshot from Author’s Twitter Account So, that’s the answer. Tesla might not spend any money on sales and marketing. They just figured out how to get it for free. Elon built up his celebrity and reach and then leveraged that reach for free marketing worth the equivalent of many millions if not billions in ad spend. As a bonus, all of the money that most companies spend making sure you are aware they exist, Tesla is able to pump into its product. Not a bad strategy if you can make it work.
https://medium.com/datadriveninvestor/tesla-is-one-of-the-greatest-sales-and-marketing-success-stories-of-all-time-1457188c062a
['David Ferrara']
2020-12-29 16:52:49.177000+00:00
['Business', 'Investing', 'Finance', 'Technology', 'Marketing']
Kenza Ait Si Abbou, Deutsche Telekom: “Think twice about who you work for”
Kenza Ait Si Abbou works at Deutsche Telekom as a Senior Manager Robotics and Artificial Intelligence. In her job, she and her team develop solutions for all sorts of use cases ranging from HR processes to Q&A pages for customer support. She has also dedicated herself to the mission of making AI more understandable to consumers. Source: Unsplashed On this episode of REWRITE TECH, we talked to Kenza Ait Si Abbou about her passion for math and artificial intelligence and discussed solutions to make algorithms less biased. Why is AI so biased? There are several public examples that demonstrate that AI is not neutral. Like the Australian app Giggle that wanted to create a safe space for girls, but excluded trans-girls. Or even worse and potentially dangerous: facial recognition techniques that weren’t trained enough to differentiate people of colour but were used by law enforcement. Right at the beginning, Kenza makes clear that such cases are not the mistakes made by artificial intelligence: “The biases don’t come from AI, they come from humans. The problem is that we are the problem.” Specifically, our unconscious biases — stereotypes we have without being aware of them — are a big problem because we don’t see them. “We teach these biases to the machines and the machines then learn from this data”, Kenza explains. Kenza Ait Si Abbou: “We need responsible consumers” As a first step, Kenza believes, developers and companies must admit that they have an unconscious bias and consider those biases already in the ideation phase of a product or feature. “When the software is finished, it’s too late”, explains Kenza, who has more than ten years’ experience in this field. In the best case, checking software for bias is an obligatory requirement in each development process. But even with more ethical practices on the developer site, it doesn’t release the user to make better and thoughtful decisions. “We need responsible consumers”, says Kenza. We can’t just rely on the things that AI-powered software presents to us, we need to “use our mind” as Kenza puts it. “What we should keep in mind is that AI is just supporting activity in all sectors. It shouldn’t be the decision-maker. It should help to make a decision but not make the decisions.” How the Deutsche Telekom uses artificial intelligence Photo by Mika Baumeister on Unsplash Kenza spent the last ten years working at Deutsche Telekom in Berlin. As a Senior Manager Robotics and Artificial Intelligence, she and her team develop solutions powered by AI. This includes internal tools for human resources, but also consumer-faced technology like the Q&A section that helps clients to find the right answer to their problem automatically. One thing that makes Kenza especially happy is that Deutsche Telekom was one of the first corporations that developed guidelines for the use of AI. To do that, Deutsche Telekom held several workshops with people from different professions and age groups to understand what they expect. “Our guidelines represent the wish of our customers,” concludes Kenza. Teaching the art of AI Apart from her role at Deutsche Telekom, Kenza is a very vocal member of the tech scene in Germany. Her goal is to educate people on AI and explain how deeply integrated it is already in our lives. Recently she published her first book: “Keine Panik, ist nur Technik” (Eng: “Don’t panic, it’s just technology”). 
She wants kids and adults to have fun exploring new technologies and to understand that machines are only as smart as the people who develop them. At the end of our conversation, we asked Kenza about a piece of career advice she would like to pass on. After thinking about it for a moment she says: “Think twice about who you work for” and continues: “The impact you have is much higher than you might think.” Listen to REWRITE TECH with Kenza Ait Si Abbou You can listen to the whole conversation with Kenza Ait Si Abbou on the REWRITE TECH podcast. Our podcast is available on your favorite streaming service including Spotify, Apple Podcasts and Google Podcast.
https://medium.com/rewrite-tech/kenza-ait-si-abbou-deutsche-telekom-think-twice-about-who-you-work-for-6f462bcb31ce
['Sarah Schulze Darup']
2020-11-23 16:03:16.049000+00:00
['Robotics', 'Tech', 'Podcast', 'AI', 'Women In Tech']
The UTMB through the prism of data: typology of race management. (English version)
This ultra-trail is certainly the most challenging in the world, year after year. The distribution of finish times does not follow a normal distribution. The main pack gathers the “less good” runners, as shown in the graph. Time distribution (Standardized data) Typology of race management First analysis: Quick start, complicated final Note: The following study does not take into account participants who did not complete the race. First, we tried to understand how the runners manage the first part of the race. We consider that the first part of the UTMB ends at Courmayeur, at KM 78. The second part of the UTMB therefore stretches from Courmayeur to Chamonix over the remaining 92 KM. Each runner devotes a given percentage of his final time to completing the first part of the race, which is noted as T1. Thus the second part of the race takes T2% of each runner’s time, with T2 = 100 - T1. By doing so, we can compare the race management of runners with very different performances. François d’Haene and a mid-pack runner can, for example, both devote 42% of their final time to covering the first half of the course. We notice that the distribution of the variable T1 is normal. Distribution of variable T1 On average, participants spend 37.5% of their final time on the first part of the course (Courmayeur is not exactly in the middle of the course, so the average value of T1 is logically far from 50). Some go very fast and spend only 30% of their final time on the first part of the course. But they spend 70% of their final time on the second part! Conversely, other runners spend more than 43% of their final time on the first part, and 57% on the second part. Race management therefore varies a great deal from runner to runner. We tried to understand what race profile each runner adopted according to his final performance. On the graph below, the data has been normalized. The best runners have a final time (y-axis) close to 0, while the runners with the highest T1 values have a value close to 1 on the x-axis. Final result according to the percentage of time spent on the first part of the course (T1) We notice that the best runners spend a large part of their overall time on the first part of the course. As an example, François d’Haene and Kilian Jornet respectively spent 42.24% and 41.73% of their final time on this first part. However, “starting carefully” is a necessary but not sufficient condition for a very good performance. Legs still play a major role, of course. The runners who started quickly were among those who achieved the worst times. This is a well-known lesson. Finally, we note all the same that the majority of the runners (navy blue zone) adopt the same strategy.
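As a rough illustration of the metric described above (this is not the author's actual pipeline, and the split times are invented), T1 is simply the Courmayeur split divided by the finish time, expressed as a percentage:

```java
public class RacePacing {
    // Returns the percentage of the final time spent reaching Courmayeur (KM 78).
    static double t1(double courmayeurHours, double finishHours) {
        return 100.0 * courmayeurHours / finishHours;
    }

    public static void main(String[] args) {
        // Hypothetical runners: split time at Courmayeur and final time, in hours.
        double[][] runners = {
            {8.5, 20.2},   // fast finisher
            {11.0, 30.5},  // mid-pack
            {13.0, 46.0},  // back of the pack
        };
        double sum = 0;
        for (double[] r : runners) {
            double firstPart = t1(r[0], r[1]);
            double secondPart = 100.0 - firstPart;   // share spent on the second part
            System.out.printf("T1 = %.1f%%, T2 = %.1f%%%n", firstPart, secondPart);
            sum += firstPart;
        }
        System.out.printf("Mean T1 over this sample: %.1f%%%n", sum / runners.length);
    }
}
```

With these invented splits, the fast finisher has the highest T1, mirroring the article's observation that the best runners devote a larger share of their total time to the first half of the course.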
https://medium.com/sports-data-analytics/the-utmb-through-the-prism-of-data-typology-of-race-management-c648beb9542b
['Maxime Bataille']
2020-05-11 14:31:00.777000+00:00
['Data Science', 'Clustering', 'Trailrunning', 'Machine Learning']
The Self-Titled Interview: Dave Porter
Is New York the place you like to vacation in rather than live? [Laughs] I lived in New York for a very long time, during my twenties. I went to school just outside New York City, at Sarah Lawrence College. And I lived in New York until I was 30. Then I was part of an exodus of people who left New York after 9/11. I moved to LA in 2002. And that’s where I’ve been since. But I still have a New York number because when I first moved to LA, I had clients in both places and shuttled a lot back and forth. You must have had no choice but to live in LA once you got involved with more major TV shows. That’s totally true. I wish I’d done it earlier in my life. I hemmed and hawed about it for a bunch of years, but I was having a fun time and making a good living writing music for commercials and documentaries — the kind of work you find in New York as a composer. But to do the dramatic stuff, you absolutely have to be in LA. It was a sobering transition, moving to LA, because I thought my New York credits would help me leapfrog ahead of people. But of course, it didn’t, so I spent a couple of years on the couch watching Law and Order, waiting for the phone to ring. Is being a composer as competitive and cutthroat as being an actor? It is. You’d be amazed at how many composers are cycling throughout LA. It’s very competitive, right up there with acting, directing, and writing. There’s just a lot of talented folks. The good news is there’s more work than there’s ever been in terms of quality television to work on. But it’ll never be enough for all the people who want to do it. Right; it’s not just Law and Order spinoffs anymore. Yeah, I never had a TV in my twenties, so it was novel when I moved to LA. LA is a very different town; in New York, you can walk out the door and entertain yourself. In LA, you kinda have to have a plan. You have to have your people, and things going on, and that takes a while. The first few years are pretty tough for most people who move to LA. But you said you were part of an exodus. Does that mean you already had friends there? It was a different world. I had one friend who lived in LA but there were others coming from New York around the same time. It was kinda find-your-own-way. Lots of people will tell you that the first friends you make in LA won’t be the ones that last; you’re just desperate for people to hang with, to get out of the house. It’s a process of finding your own crew. But now I feel like I have as good of friends in LA as I once did in New York. I still miss my friends in New York though. Yeah. I’ve been away for about a year and a half after living about a decade there, and that’s the one thing I miss the most — the people. That, and the feeling that you could walk for five minutes and feel like you’re part of something, whereas LA can feel very compartmentalized. Yes. You can see a lot going on but you’re not invited.
https://medium.com/self-titled/composer-dave-porter-discusses-his-work-on-breaking-bad-better-call-saul-the-blacklist-and-2af0aca60a0e
[]
2016-08-03 03:59:37.385000+00:00
['Music', 'Better Call Saul', 'Breaking Bad', 'Culture', 'Preacher']
Microsoft learning pathway for data engineers (2020)
Hello! My name is Jack and I help lead the Microsoft practice at Servian. A big part of my role is helping grow the skills and capabilities of our consultants in Azure and other technologies. I often get asked what Azure learning pathways people should consider studying, based on their specific background and experience. Whilst I am always happy to spend the time with people discussing their goals, I thought it would be good to capture and publish some of the more common guidance that I give to our consultants and our clients. I hope that this guide will be helpful to others when they are starting their journey into Microsoft and Azure, specifically for a data engineer. I work with lots of very smart data engineers. I’m pretty handy with a pipeline myself, but some of these folk make me look like a plumber, by comparison. So, for our article this week on Microsoft Learning Paths, I’ll be exploring part of Servian’s Azure data curriculum that our very own Chithra Koushik has pulled together for our learning squads and client teams to follow. Ahh yes, Data Factory, my old friend — Image credit: @brucebmax This is the second in our Microsoft Learning Pathways series, with our first article for those considering their AZ-104 Azure Administrator exam. We’ll add more learning paths over the next few weeks, including for DevOps, Security and Power BI, so stay tuned. TL;DR: AZ-900 (3 days) -> DP-900 (5 days) -> DP-200 (4 weeks) -> DP-201 (4 weeks)
https://medium.com/weareservian/microsoft-learning-pathway-for-data-engineers-2020-696220834778
['Jack Latrobe']
2020-10-21 04:23:35.766000+00:00
['Certification', 'Azure', 'Data Engineering', 'Learning', 'Cloud']
What Will Survive A Plastic Ocean
What will survive a plastic ocean? to recycle is to use a potion, a cure supernatural is its power to protect not only humans but creatures that call the ocean home What will survive a plastic ocean? to recycle is to cause a commotion amongst those who deem our planet's health a non-issue What will survive a plastic ocean? the notion that if we ignore and deny the slow-destruction of our climate’s change our planet’s health is no hoax What will survive a plastic ocean? to care is to display emotion that humans are not the only beings — that dive deep in a sea, beautiful the ocean is not single-use.
https://medium.com/creative-humans/what-will-survive-a-plastic-ocean-7db26b12e18d
['Zach J. Watson']
2020-07-10 15:23:20.416000+00:00
['Environment', 'Poetry', 'Writing Prompts', 'Climate Change', 'Oceans']
International Organisations Increasingly a Target of Cyber Attacks
The global health crisis has seen an exponential increase in cyber attacks on hospitals and international organizations. The UN Brief interviewed Alex Urbelis, Partner at Blackstone Law Group, a cybersecurity firm in New York. His firm has developed a way to anticipate threats by using domain name registration scanning, drawing on his wide experience working for the private sector and government agencies on both sides of the Atlantic. We spoke about his firm, the Blackstone Law Group, his education in both cybersecurity and law, and his career working in Europe and the US for the luxury sector and government agencies. Urbelis discusses the different ways that organizations can keep their staff and systems updated when handling devices and carrying out their day-to-day work with email and the cloud. Listen to the interview on our SoundCloud channel: Urbelis is also a co-host of the Hacker Radio Show. Alex Urbelis is an infosec lawyer / CISO | Words at @CNNOpinion and @TheIntercept | Co-host of @HackerRadioShow | UL Security Council Member / http://blackstone-law.com
https://medium.com/digital-diplomacy/hospitals-increasingly-a-target-of-cyber-attacks-a2b1d7ee75a0
['Maya Plentz', 'Editor', 'Founder', 'The Un Brief']
2020-10-29 20:06:42.564000+00:00
['Health', 'Cybersecurity', 'UN', 'Government']
Pop Triple Plays: A Tribute to Three-Hit Wonders
Pop Triple Plays: A Tribute to Three-Hit Wonders It’s an elite club with three iconic acts and a flock of seagulls. The Cure’s Robert Smith in the “Lovesong” video (Photo: Vevo) Although it’s a fate I wouldn’t wish on my worst enemy — or on Portugal. The Man — being a one-hit wonder isn’t all bad. They’re celebrated on retro lists and flashback countdowns and by fans who will forever love and remember songs like “Afternoon Delight,” “99 Luftballons,” and “Puttin’ on the Ritz,” even if they can’t always immediately recall who sang them (Starland Vocal Band, Nena and Taco, respectively, by the way). “Two-hit wonder” doesn’t have the same cachet, but they’re often mistaken for one-hit wonders, so they inadvertently get nearly as much play. And three-hit wonders? Well, compared to the Adeles and Taylor Swifts of pop, they might be regarded as simply not having been big enough to have swung more hits or lasted longer in the game. It’s a dismissal with nearly as many exceptions as there are acts to support it. The chart trajectory of three-hit wonders often goes a little something like this: One big hit followed by, or sandwiched between, two lesser ones that are easily forgotten by people who erroneously classify them as one-hit wonders, until some oldies celebration reminds them otherwise. The pair of Top 40 addenda can sometimes make an act seem less notable in retrospect: If Starland Vocal Band had enjoyed two more minor Top 40 entries after reaching number one with “Afternoon Delight” and winning the 1976 Best New Artist Grammy, would they have gone down in history in bold print? They’d probably be relatively forgotten as just another act with a short hit list. Despite the reputational hazards of landing only a trio of Top 40 singles on Billboard’s Hot 100, I’m a firm believer in the power of three. Don’t think Carly Rae Jepsen wasn’t relieved to graduate from two- to three-hit wonder the week “I Really Like You” climbed into the top 40 in 2015 (peaking at 39). When that happened, four years after “Call Me Maybe” became one of the most inescapable number ones of the 2010s, she joined the elite group celebrated here: assorted mid-tier acts and also-rans, two Rock and Roll Hall of Famers, a disco icon, and a soul queen — all of whom (with three exceptions) made it into the top 10 at least once. A Flock of Seagulls Everyone remembers “I Ran” (number nine, 1982), but here’s the twist — make that twists: The British new-wave band’s two follow-up singles — “Space Age Love Song” (number 30) and “Wishing (If I Had a Photograph of You)” (number 26), both from the same year — are actually better songs. And in the group’s native UK, their fortunes were reversed: “Wishing” was a number 10 hit, while “I Ran” missed the Top 40 entirely, peaking at a lowly number 43, and “Space Age Love Song” climbed to a commensurate (to its U.S. high) number 34. Quarterflash Ah, the joy of sax (solos)! Despite the moments of pleasure provided by Quarterflash’s Rindy Ross on the woodwind, I still don’t understand how a female-led band that came across as a poor woman’s Pat Benatar enjoyed a bigger hit than Benatar ever did, with “Harden My Heart” (number three, 1981). But then, the Hot 100 was as unfair as ever in the ’80s. 
The group’s superior next single, “Find Another Fool,” only got as high as number 16, two notches lower than 1983’s equally better “Take Me to Heart.” Animotion Since two completely different versions of Animotion — one featuring the incomparably named Astrid Plane and the other with Cynthia Rhodes, the former Mrs. Richard Marx — were responsible for the band’s first two Top 40 hits — “Obsession” (number six, 1984) and “Let Him Go” (number 39, 1985) — and its belated third one, “Room to Move” (number nine, 1989), can we call Animotion a two-hit wonder and a one-hit one with the same name? And where’s the justice in an ’80s pop world where my second-favorite “Animotion” single (after “Obsession” — natch!), 1986’s “I Engineer,” only crept up to number 76? Sylvester I’m utterly confused by the career of the late Sylvester. He’s a bonafide disco legend, but he’s the only act on this list who never managed to make it into the Top 10 on Billboard’s Hot 100. His biggest chart hit, 1978’s “Dance (Disco Heat)” reached number 19, and although it’s a fantastic song, featuring the future Weather Girls on backing vocals and sampled by Byron Stingily in his 1997 number one U.S. dance and number 14 UK. pop hit “Everybody (Get Up),” it’s not exactly a universally remembered classic of the genre. His follow-up, “You Make Me Feel (Mighty Real),” is, however, yet it peaked way down at number 36. On the plus side, Sylvester’s musical legacy does include the third highest-charting (in the U.S.) version of the pop standard “I (Who Have Nothing),” which Sylvester took to number 40 in 1979. Black Box Regardless of who was doing the lead singing (and yes, it was Martha Wash, the back-up vocalist on the left in the Sylvester video above, and not the beautiful model in the group’s videos), Black Box’s trio of Top 40 U.S. singles — “Everybody Everybody” (number eight), “I Don’t Know Anybody Else” (number 23) and “Strike It Up” (number eight), all from 1990’s Dreamland — sound as good today as they did 30 years ago. Bonnie Tyler She’s only made it to the U.S. Top 40 three times in four decades, yet many artists with more extensive hit lists probably would kill for three as big as Tyler’s. Her 1977 U.S. breakthrough, “It’s a Heartache” (number three), earned her one-hit-wonder status until 1983, when “Total Eclipse of the Heart” topped the Hot 100 for four weeks, keeping another Jim Steinman-penned hit, Air Supply’s “Making Love Out of Nothing At All,” which later would be covered by Tyler, out of the number-one spot, and becoming Tyler’s second signature tune. The following year’s Footloose single “Holding Out for a Hero” only peaked at number 34 but went on to achieve pop immortality as a gay anthem. Terence Trent D’Arby Introducing the Hardline According to Terence Trent D’Arby, his 1987 debut, launched his only three Top 40 hits in the U.S. — “Wishing Well” (number one), “Sign Your Name” (number four) and “Dance Little Sister” (number 30) — but the best (1989’s Neither Fish Nor Flesh, 1993’s Symphony Or Damn, and 2001’s Wildcard) was yet to come. Winger and Slaughter I was wrong on three counts about ’80s hair metal: 1) I was certain more bands from the genre would qualify for this list. 
2) I was convinced that like most ’80s hair-metal bands, Winger made it to the pop Top 10 at least once, but the closest the band came was not with its two most memorable singles — 1989’s “Headed for a Heartbreak” (number 26) or “Seventeen” (number 19) — but with one I don’t even remember, 1990’s “Miles Away,” which went to number 12. 3) Slaughter actually did a lot better than I thought, with its trio of Top 40 successes, the second of which I bought as a cassette single back in the day: “Up All Night” (number 27, 1990), “Fly to the Angels” (number 19, 1991), and “Spend My Life” (number 39, 1991). Stephanie Mills She may not consider herself “unsung” (which is reportedly why she refused to participate in Unsung, TV One’s Behind the Music-style documentary series for Black artists), but she’s definitely been underrated. A singer with so much talent and so many well-known songs (if you’re Black) deserves more than three Top 40 hits, one of which, 1981’s “Two Hearts” (number 40), she had to share with Teddy Pendergrass, who, astonishingly, was a one-hit wonder as a Hot 100 solo act. Alone, Mills also scored with 1979’s “What Cha Gonna Do with My Lovin’” and 1980’s number six “Never Knew Love Like This Before” (number six). The Sylvers When I watched the Unsung documentary on The Sylvers, I was surprised to discover that there was so much more to the ’70s family act than its number one bubblegum signature, 1976’s “Boogie Fever,” and the two lesser top 40 hits that followed, “Hot Line” (number five, 1976) and “High School Dance” (number 17, 1977). Leon Sylvers was super-producer and hit songwriter for other acts in the ’70s and ’80s, including Shalamar, The Whispers, and Gladys Knight & The Pips. And then there was little Foster, who gave Michael Jackson a brief run for his child stardom in 1973 with the top 20 “Misdemeanor.” Sister Sledge In 1982, when the four Sledge siblings were climbing the Top 40 with their cover of the Mary Wells’s 1964 number one “My Guy,” eventually getting to number 23, I recall having the same unimpressed feeling that I did when their biggest hit, “We Are Family,” was going to number 2 in 1979. So how is it possible that I can’t even remember “He’s the Greatest Dancer,” their 1979 breakthrough single, my favorite Sister Sledge song, and one of my Top 10 favorite disco hits period, except as a golden oldie? The Knack A three-hit wonder with rapidly diminishing fortunes. They packed all three top 40 singles — My Sharona” (number one, 1979), “Good Girls Don’t” (number 11, 1979) and “Baby Talks Dirty” (number 30, 1980) — into a little over six months. Bananarama A three-hit, three-album wonder! Although the female trio logged a string of memorable singles successes in its native U.K., Bananarama only managed to enter the U.S. Top 40 three times (once less than Seduction, which accomplished its quadruple play with one album, 1989’s Nothing Matters Without Love), each time making it into the Top 10, with the 1983 number nine “Cruel Summer,” the 1985 number one “Venus,” and the 1986 number four “I Heard a Rumour.” Talking Heads It took the Rock and Roll Hall of Famers eight years — half of its 16-year existence — to score their triple play: “Take Me to the River” (number 26, 1978), “Burning Down the House” (number nine, 1983), and “Wild Wild Life” (number 26, 1986). 
I still can’t believe “And She Was,” which I remember seeing on MTV all the time in 1985, didn’t go higher than number 54, despite my prayers to God and Casey Kasem at the time that it would go high enough to disqualify the new-wavers from this list. The Cure Alas, “Just Like Heaven” (number 40, 1987), “Lovesong” (number 2, 1989) and “Friday I’m in Love” (number 18, 1992) don’t even begin to hint at the awesomeness that is these Rock and Roll Hall of Famers’ oeuvre. Gary Wright When Wright peaked at number two with two consecutive hits in 1976 — “Dream Weaver” and “Love Is Alive” — the world no doubt expected great things from him chart-wise. Great things, sadly, were not meant to be. It would take him five years to reach the Top 40 again (with “Really Wanna Know You,” number 16), and he’d never again trouble the Hot 100. Hamilton, Joe Frank & Reynolds In between “Don’t Pull Your Love” (number four, 1971) and “Winners and Losers” (number 21, 1975), the trio whose name sounded like a law firm achieved dreamy perfection with 1975’s “Fallin’ in Love” (number one). Franke and the Knockouts “Sweetheart” (number 10, 1981) far eclipsed either of their two top 40 follow-ups: “You’re My Girl” (number 27, 1981) and “Without You (Not Another Lonely Night)” (number 25, 1982). Franke would go on to eclipse all three as co-writer of the Dirty Dancing smashes “(I’ve Had) the Time of My Life” and “Hungry Eyes,” the former of which scored him a Best Original Song Oscar. The Time In a reversal of the typical three-hit wonder’s chart fortunes, the Morris Day-led band’s third time was the charm. They finally hit the top 10 in 1990 with “Jerk Out” (number nine), six years after “Jungle Love” petered out at 20 and “The Bird” stopped flapping at 36. Klymaxx On their three Top 40 ’80s hits, the Black Go-Go’s blunted the sexy edge of their superior early R&B hits “The Men All Pause” and “Meeting in the Ladies Room” and two later ones, “Sexy” and “Divas Need Love Too.” From a sonic standpoint, the results were mixed: “I Miss You” (number 5, 1985) may have been the number-three Hot 100 song of its year, but I missed Klymaxx’s cheeky spunk. To borrow from “I’d Still Say Yes” (number 18, 1987), I’d still say yes to “Man Size Love” (number 15, 1986) over their biggest hit any day. The Greg Kihn Band I used to think they’d end up being what Huey Lewis and the News were to the ’80s, but then they disappeared from Billboard’s Hot 100 a couple of hits after “Lucky” (number 30, 1985) became their third and unlucky final Top 40 trip. Kihn and Co. may have stopped well short of defining the ’80s (maybe it was too-punny album titles like Rockihnroll and Kihnspiracy), but they did produce a trio of great top 40 hits. The two biggies — “The Breakup Song (They Don’t Write ’Em)” (number 15, 1981) and “Jeopardy” (number two, 1983) — both sound a lot better nearly four decades later than I ever expected them to at the time. Freda Payne Nope, she wasn’t a one-hit wonder. After “Band of Gold” (number three, 1970), she went top 40 twice more, with “Deeper & Deeper” (number 24, 1970) and “Bring the Boys Home” (number 12, 1971). In the late ’70s and early ’80s, she’d go on to marry one-hit wonder Gregory Abbott, who topped the Hot 100 in January of 1987 with “Shake You Down,” and have a relationship with Edmund Sylvers of the aforementioned The Sylvers. Brenda Holloway Like the aforementioned Franke Previte, Holloway’s greatest claim to fame — and a lifetime of royalties — may not be her three top 40 hits but her work as a songwriter. 
The elegant lady’s third and final top 40 single, which she co-wrote, went to number 39 in 1967, two years before it became a number-two smash for another act. The song: “You’ve Made Me So Very Happy,” best known as Blood, Sweat & Tears’ first big hit.
https://jeremyhelligar.medium.com/pop-triple-plays-a-tribute-to-three-hit-wonders-77b9bcf8ba3d
['Jeremy Helligar']
2020-07-18 15:52:01.586000+00:00
['Music', 'Casey Kasem', 'One Hit Wonders', 'History', 'Nostalgia']
“Snowplay”
Who needs to romp in the snow more: grownups or kids?
https://backgroundnoisecomic.medium.com/snowplay-9f9a23eefe2d
['Background Noise Comics']
2019-03-05 02:38:24.503000+00:00
['Comics', 'Cartoon', 'Weather', 'Snow', 'Relationships']
An Opinionated Way to Structure React Apps
An Opinionated Way to Structure React Apps Based on my experience acquired building several big projects Photo by Dayne Topkin on Unsplash. When we first develop a React app, we can just put every component in a single folder and it works. But when it comes to larger projects, it can be difficult to find our way between files if we keep using React this way. So how can we handle a bigger project? Dan Abramov has a way: his well-known advice is simply to move files around until they feel right. You don’t think this is very helpful? Actually, it is. It’s the best way to find the perfect architecture that will fit your needs, but at the cost of many iterations of folder creation and removal. Today, I’m introducing the result of my many moves: a base structure for people seeking a way to improve their own.
https://medium.com/better-programming/an-opinionated-way-to-structure-react-apps-10f87bf29952
['Bruno Sabot']
2020-05-05 02:35:37.125000+00:00
['JavaScript', 'Architecture', 'React', 'Reactjs', 'Programming']
How The Toppling of Statues Emphasises the Need to Address History
Heads must roll The issue with the statue is clear. Colston profited from the misery of thousands of people. No matter how much he donated to good causes or how many buildings bear his name, these facts can’t be airbrushed away. His legacy is tarnished. When a statue of Sadaam Hussein was toppled in Baghdad in 2003, aided by the actions of American troops, the image was beamed around the world as the liberation of a people from a tyrannical dictator. The sentiment was similar when statues of Lenin in former Soviet republics were toppled following its dissolution. These scenes were portrayed as liberation from oppression. No one was complaining about the removal of Nazi symbols from Germany and occupied territories at the end of the Second World War but by the logic of those opposing the toppling of Colston’s statue, does this not amount to historical revisionism? Of course, no one objects to this. The crimes committed by the Nazis were abhorrent as were those by the Soviets and Sadaam, so why the objection to the removal of a statue of a slave trader? My first reaction to the removal of the statue was disbelief that it existed in modern Britain. The claim is that the toppling of the statue was unlawful, undertaken by an out of control mob. Instead of taking direct action, the protestors should have expressed their displeasure through democratic means. Except, that’s what had happened. Only the process ended in gridlock as neither side could agree upon how to proceed. Often, the responses to these events are binary. The reality is more nuanced. For black people living in Bristol, the statue will be a daily reminder of the oppression their ancestors suffered. Is it right that in 21st century Britain, a statue of a prominent slave trader is in the middle of a city? It’s hard to argue it is. This brings us to an important question on statues, what is their purpose? A statue is a way of remembering someone. Often, they are an act of vanity, a way of asserting ideals or the importance of a figure. After all, not everyone is lucky enough to be commemorated in this manner. Statues can serve as a form of power and propaganda that serve to reinforce myths about the figures they depict. History is written by the winners as they say and statues are a way of enforcing a narrative of history. The reason the statue went up was because of some reverence for Colston. The population of Bristol, and more importantly, the decision-makers back then will have been mainly white men. They were free to decide what should and shouldn’t be remembered. The statue of Colston remembers one man, a morally bankrupt one, at the expense of the thousands of victims of the slave trade who were silenced and, in effect, have been expunged from history. Are their stories any less valid than Colston’s? Are their stories not worthy of being commemorated? Those that say the removal of statues is rewriting history miss the point. History is always being revised and always has been. It’s a living, breathing thing. The history that we remember says a lot about us as a society. To leave the statue standing is a tacit acknowledgement of what Colston did, to bring it down is to reject it. To say that the history of this man or that man is final is just wrong and ignores the fact that cultural standards change as time passes. Was it right that Colston’s statue was taken away down? It’s hard to argue against it. 
There is also a historical irony in the fact that he was dumped in the harbour as many of the slaves he owned were dumped in the ocean on voyages to the Caribbean. The incident highlights the need for greater awareness of the past. As a history graduate, there is not enough of this in Britain. I never learnt anything about the crimes committed by the British empire. Nor was I taught about the British occupation of Ireland and the misery this brought upon the residents of the island, including my ancestors. I only found out today that Britain had paid off a loan taken out in 1835 by the Chancellor of the Exchequer to compensate slave owners in 2015. That’s right, a total of £20 million, equivalent to £300 billion today. No penny of that money went to the slaves who were freed, instead, it was given to slave owners as compensation for the introduction of the Slavery Abolition Act in 1833. It’s uncomfortable to think that my taxes were used to pay off this loan but without wider recognition, I was blissfully unaware. History is contentious and powerful. When we approach as a tool for remembrance, it can help people heal and acknowledge its chequered past, but if wielded in the wrong way it can gloss over the unsavoury parts. A museum dedicated to the slave trade in the UK should be set up to offer greater awareness of the role the British played. One such museum exists in Liverpool, but more awareness of this part of our history needs to be raised. It’s a part of our history whether we like it or not. Then you have stories such as this, where the British government destroyed evidence of crimes committed during the final years of the British Empire. Without reconciliation and an acceptance of what happened, it’s no wonder there is a furore around the removal of the statue. As George Orwell wrote in his novel, Nineteen Eighty-Four, “who controls the past controls the future.” The fact that a man who died 299 years ago is household news today in Britain is evidence of Orwell’s quote and that we have yet to come to terms with our past. To do so, we must acknowledge that many of the riches we enjoy today were the result of the exploitation and misery of thousands of innocent people. Irrespective of when they were put up, the statues we see today are a mirror of the society we live in. Do we tolerate the historical amnesia we display when we preserve such statues, or do we tackle the issues they present? Maybe the toppling of Colston’s statue is the event we need to push us in the right direction and finally face up to our past.
https://tom-stevenson.medium.com/how-the-toppling-of-statues-emphasises-the-need-to-address-history-36caa78c9323
['Tom Stevenson']
2020-06-12 11:25:44.023000+00:00
['Racism', 'Society', 'History', 'BlackLivesMatter', 'Culture']
How to land your first freelance design job
How to land your first freelance design job You have the design skills, but where do you start? Designed by Monica Galvan You’re just starting on your journey to becoming a freelance designer. You have the design skills, you’ve been practicing, you know your craft. You just need help finding freelance jobs, where do you start? In this article, we’ll dive into the best way to find freelance design work. But before you even search and inquire about freelance design opportunities, there are a few things you need to work on first. Follow these steps and you’ll land your first freelance design job in no time. Decide what type of freelance design jobs you want to work on As a designer, you have the opportunity to take on various types of freelance design jobs. Logo design, branding, and web design are just some of the options. Decide whether you want to generalize or specialize. If you choose to generalize then you may decide to pursue multiple types of freelance design jobs. Or maybe you want to specialize and focus on web design or work for a specific industry like tech. Knowing the type of design jobs you want, will help you stay focused in the next few steps. Curate your portfolio projects Every designer needs to have a portfolio website. It’s the best way to show potential clients what you offer as a designer. Your portfolio site should include your best projects, an about page, a contact page, and links to social if it’s relevant and you’re sharing your work on those platforms. It’s important to curate the projects you include in your portfolio. When you’re just starting out as a designer, it’s tempting to include every project you ever produced but that’s a huge mistake. Less is better. Try to limit yourself your portfolio to 3–5 projects total. These should be projects you’re most proud of. You will only be hired to do the work you show in your portfolio so only show work you want to continue doing in the future. If your portfolio doesn’t reflect the work you want to be hired for, then create your own projects or modify them so they are. Design and develop a killer portfolio website After you’ve gathered images, screenshots, and write a backstory for each portfolio project, it’s time to organize this content and create a killer website. You can choose to design and develop the website yourself with a tool like or you can use a website builder tool like Squarespace or Wix. Since your goal is to land freelance design jobs, give yourself a time limit on how long you’ll spend creating your portfolio website. You can always update and redesign it later. Right now you need to get something up and running to start attracting clients and apply for freelance design jobs. For tips on how to design your portfolio site, check out this YouTube playlist. I break down the process of how I redesigned my portfolio site from start to finish. How to design a website in 5 steps Create an online presence Creating a portfolio website is only half the battle. How will clients find you? How will they discover your work? You need to become your own marketer. Fortunately, we have tools at our disposal that can help us get our work out and generate potential leads. Here are some of the best platforms to share your work on. Instagram Instagram is one of the top ways clients can find you. It’s a mini-portfolio gallery of your design work with the power to reach millions. The key is to keep your account curated and focused on design. 
If your current Instagram profile is focused on selfies, meals with family and friends, or past travels, then consider creating a new Instagram profile dedicated to sharing your work only. Use a professional photo of yourself, write a concise bio that explains the work you want to be hired for, and don’t forget to include a link to your new portfolio site. Create your first 12 posts right away. You want to have enough posts so potential clients can get a taste of your design style and give them a reason to click on your link to reach out. The key with Instagram is to keep posting fresh content so try to remain consistent. Not every post has to be a polished design. Take photos of wireframe sketches, crop into screenshots of your designs, tell a story. Use Instagram Stories, Reels, and if you’re ambitious maybe even go live. Research the best hashtags to use for your niche. Avoid overpopulated ones and instead, focus on ones with a range of 50,000–300,000 posts that way you have a higher chance of ranking in the top 9 posts for that tag. Top Instagram posts for #uidesigner Behance Behance is a platform to share and discover creative work. A lot of clients use this site to find designers. Take the same portfolio projects you have on your website and reformat them for Behance. Use the appropriate title and tags so people can easily find you. On Behance, your projects have the potential to be featured or show up higher in search results the more likes and comments you receive. It is a social platform so be sure to like and leave thoughtful comments on other creator’s projects, which will potentially help bring more views to your profile. Behance.com homepage Pinterest Pinterest is a massive search platform. The best way to use it is to design pinworthy images you’ll use to create new pins with links back to your portfolio site. If you want to dive even deeper, create blog posts featuring progress work, case studies, or curate work from other designers, then design pins that point back to those webpages. Over time traffic will build and help rank your website but it is a long-term strategy. Dribbble Dribble is a platform made specifically for designers. You can search, find inspiration, and connect with other designers. But it’s also a great place for potential clients to find you. Dribble.com homepage Originally, Dribbble allowed you to post 400x300px images or shots of your work. Now they’ve expanded and you can post up to 1600x1200 pixel images and even multi-shots if you upgrade to the Pro version which also gives you a little more exposure. Over time, these shots form a portfolio gallery on your profile page. You can choose to include a “Hire me” button on your page to show you’re open for freelance opportunities. LinkedIn You might think LinkedIn is only for those who work for large corporate companies or looking for a new full-time position but you can actually use it to find clients and allow them to discover you. First, update your LinkedIn profile with accurate past work experience, upload a professional photo, write a short bio, include your portfolio link, and best contact email address. Once your profile is ready, you can apply for freelance design jobs in the job section or reach out to business owners directly (more on this later). Spread the word to everyone you know Especially in the beginning, you want to take any design job that comes your way. You need the experience. 
With every small job, you’ll get better as a designer and you’ll potentially have a new project to share in your portfolio. Business owners Try reaching out to business owners on LinkedIn. Take a look at their website (if they have one) and identify opportunities for improvement. Try not to be too forceful with your outreach, not everyone needs design help at this time. If you contact the same person multiple times with no answer, take it as a hint and move on. Don’t spam a bunch of random people with generic message requests. Instead, take the time to target your leads. Write thoughtful messages that show you understand the work their business does and offer specific ways you can help. The more details you give, the more likely you will receive a response. Don’t forget about all the business owners you personally interact with on a daily basis. Whether it’s your dentist, a real estate agent, or a restaurant owner, there’s someone who needs the design skills you have to offer. Content creators YouTubers, bloggers, and social media influencers often need design help. Whether it’s designing YouTube thumbnails, eBook layouts, Instagram and Pinterest posts, or personal websites, there’s no end to the design services you can offer content creators. It helps if you’re already a fan of their work. Reach out with a friendly email and offer to help for free (they may be hesitant to hire) or a low fee to start. The key is to get your foot in the door. Once you deliver high-quality design and help make their job easier, they’ll want to hire you for more on an ongoing basis. You’re also creating connections. They may refer you to other content creators they know who are looking for help. Search design job board sites Not all job postings are for full-time opportunities. In fact, many small and large businesses post freelance and contract positions. Where can you find design jobs? LinkedIn — Use it to find and apply for full-time and freelance jobs. It’s one of the largest job search sites, companies large and small use it to find top talent. Indeed — Another well-known job board site, they used to specialize in tech-related jobs such as Software Engineers and UX Designers but they’ve since expanded. Behance — Use it to post your portfolio projects so potential clients can find you. They also have a job list section. Dribbble — Not just for sharing your fancy UI design work, there’s also a creative job section. Coroflot — A job board site specifically for designers. Krop — Helps you build a portfolio site and search for creative jobs. AngelList — Focuses on jobs for startup companies. AIGA — This isn’t free to the general public but if you’re already an AIGA member or decide to become one, you can access their job board section. Craigslist — This one is last for a reason. Be careful of scams and those who just want free work but sometimes you can find a hidden gem. Try freelance marketplaces Post a gig on Fiverr Fiverr is an online marketplace for freelance services. It’s geared toward the business owner looking for freelance services ranging from logo design, social media, web design, app design, and so much more. Originally, Fiverr only allowed you to charge $5 for a gig so that clients could take a chance with little financial risk. Over time the company expanded and now allows tier pricing with add ons. However, if you’re just starting out on the platform it’s expected you keep your rates low until you build up enough reviews when you can charge more. 
If you ask for a high rate and have zero reviews, you’re not likely to get any sales. While Fiverr is not the best place to earn the most as there is a profit limit with a steep 25% fee, it’s a good option for landing freelance design jobs in a quick and easy way. 99designs 99design includes access to a variety of designers working in marketing, branding, advertising, illustration, merchandising, and more, this is one of the most popular freelance graphic design websites with a global reach. Clients upload a brief for a specific project to 99design’s global freelancer network. But with such a wide net of freelancers, be prepared for competition. Upwork Upwork is another popular and easy-to-use site for freelancers at any level in their career. They focus on hiring “proven pros with confidence using the world’s largest, remote talent platform.” It’s not just for designers. Clients can find remote workers for web and mobile software development, writing, sales and marketing, admin support, customer support, analytics, engineering, and more. Upwork allows you to create a highly detailed freelancing profile and gives you the ability to chat with clients before accepting work to ensure you’re pursuing the right freelance design job that’s relevant and valuable to you and your skillset.
https://medium.com/design-bootcamp/how-to-land-your-first-freelance-design-job-5a2aaabe9083
['Monica Galvan']
2020-11-19 02:45:03.450000+00:00
['Business', 'Visual Design', 'Design', 'UX', 'Freelancing']
Building a Chatroom using ReactJS and Pusher
Many of our personal and professional interactions are facilitated by real-time communication tools in the digital age. In 2014, Tim Cook said that Apple handles “about 40 billion iMessage notifications per day” worldwide. Given that every conversation in iMessage is essentially a micro-chatroom, we’re going to explore the inner workings of a chatroom by building one using ReactJS and Pusher. Could you imagine writing messages and not seeing your chatroom in real-time? That’s what would happen if bi-directional communication via a client and server were not possible. Most solutions use Websockets for bi-directional communication, but it requires manual deployment and configuration. Today, we’re going to use Pusher’s Chatkit API to update the chatroom in real-time. Step 1: Download and open Pusher’s ReactJS template This tutorial will be largely guided by the template and instructions listed in Pusher’s repo here. The repo is pre-configured with instructions for the Node.js server that handles the Client and Server-side chatroom requests. Step 2: Create a Pusher Chatkit instance Go to Pusher’s dashboard to create a Chatkit instance. Once you’ve created your instance, save your instance locator and secret key values. Step 3: Set up Node.js server for Chatkit API Server-Side interactions In Pusher’s third step, the Client-side interactions are limited to users being able to join chatrooms, send messages, and view when other users are typing. The Server-side interactions (creating and managing user accounts) can only happen if we install @pusher/chatkit-server via npm install --save @pusher/chatkit-server and update ./server.js with values from Step 2. Step 4: Create login flow See Step 4 instructions here. Create a controlled form by making a component called UsernameForm.js in the ./src/components/ folder. Track the username in the state of UsernameForm.js. Then, update the parent component, App.js , with a function called onUsernameSubmitted(username) that receives the username from UsernameForm.js and makes a POST request to /users route to create new Chatroom users. The POST request also pessimistically sets the state of App.js currentUser to the username value stored in UsernameForm.js ’s state. Finally, pass onUsernameSubmitted(username) as a prop to UsernameForm.js and add the prop to an onSubmit listener in the form to bring the loop full-circle. Step 5: Conditionally render the chatroom component Create a ChatScreen.js component in the ./src folder that will eventually display the user’s messages. Create a toggle for the state in the App.js component called currentScreen that will default to “Login”. Then, update currentScreen to “ChatScreen” when the onUsernameSubmitted(username) function in Step 4 is called. It will only be called when a username is submitted in the Login form. Update App.js to render the code below: // App.js component render() { if (this.state.currentScreen === 'Login') { return <UsernameForm onSubmit={this.onUsernameSubmitted} /> } if (this.state.currentScreen === 'ChatScreen') { return <ChatScreen currentUsername= {this.state.currentUsername} /> } } Step 6: Connect your Chatkit instance In Step 2, we created a Chatkit instance and now, we are ready to connect to our instance via Pusher’s Client-Side API. 
Install @pusher/chatkit-client via npm install --save @pusher/chatkit-client and update ./ChatScreen.js with the code below: // ChatScreen.js component componentDidMount () { const chatManager = new Chatkit.ChatManager({ instanceLocator: 'YOUR INSTANCE LOCATOR', userId: this.props.currentUsername, tokenProvider: new Chatkit.TokenProvider({ url: 'http://localhost:3001/authenticate', }), }) chatManager .connect() .then(currentUser => { this.setState({ currentUser }) }) .catch(error => console.error('error', error)) } Use the instance locator to instantiate, userId (which is your username), and token to create a chatManager object that will allow the user to join chatrooms, send messages, and view when other users are typing. Note: If you encounter the TOKEN EXPIRY TOO FAR AHEAD error, this means your Node.js server time is ahead of the current time. An easy fix is updating your local machine's time. Click on the time in the upper righthand corner and click "Open Date & Time Preferences" to update your local time. Then, restart your server with npm install. Step 7: Create chatroom within Pusher Chatkit In a more robust app, the functionality to create a chatroom via the Client or Server should exist by default, but we are creating a room through the Chatkit dashboard here. Click on the instance you created in Step 2. Then, click on the Console tab in the upper middle area of the page. Then, click on the Rooms tab slightly to the left and click the Create New Room button. Once the room is created, make note of the ID that appears under the room name. You’ll use the ID in Step 8. Step 8: Subscribe to new messages Conceptually, you can’t see a chatroom without “subscribing to” or joining one, so we’re going to update ./ChatScreen.js with the code below: // ChatScreen.js component state = { currentUser: {}, currentRoom: {}, messages: [] } componentDidMount () { const chatManager = new Chatkit.ChatManager({ instanceLocator: 'YOUR INSTANCE LOCATOR', userId: this.props.currentUsername, tokenProvider: new Chatkit.TokenProvider({ url: 'http://localhost:3001/authenticate', }), }) chatManager .connect() .then(currentUser => { this.setState({ currentUser }) return currentUser.subscribeToRoom({ roomId: "YOUR ROOM ID", messageLimit: 100, hooks: { onMessage: message => { this.setState({ messages: [...this.state.messages, message], }) }, }, }) }) .then(currentRoom => { this.setState({ currentRoom }) }) } Replace “YOUR ROOM ID” with the Room ID from Step 9. When the user logs in, the ChatScreen component will render, which will initialize the ChatManager object that gives you the ability to perform Client-Side and Server-Side actions. We’re going to add an empty currentRoom object and empty messages array to the state. The Chatkit API has a subscribeToRoom method that will add the user to the chatroom and subscribe them to all and any actions taking place in the chatroom. subscribeToRoom takes an event handler called onMessage that is called in real-time each time a new message is posted via Chatkit’s webhooks feature. When a message is posted, it is appended to the messages array in the state of ./ChatScreen.js . Step 9: Rendering messages in the Chatroom In Step 8, we created a messages array in ./ChatScreen.js , so now, we’ll create a component called MessageList.js in the ./src/components folder that will receive the messages array as a prop. The MessageList.js component will then iterate over the array to render every message in a chatbox. 
Follow the steps for MessageList here to render messages in your Chatroom. Step 10: Allowing users to send messages Similar to Step 4, we can make a copy of the UsernameForm.js component to repurpose it as SendMessageForm.js . The new component should have a state for “text”, so the onChange event can update this.state.text and onSubmit event that calls a new function called sendMessage(text) . The new function will be passed down to SendMessageForm.js as a prop for the onSubmit even of the form. // SendMessageForm.js component state = { text: '', } onSubmit(e) { e.preventDefault() this.props.onSubmit(this.state.text) this.setState({ text: '' }) } onChange(e) { this.setState({ text: e.target.value }) if (this.props.onChange) { this.props.onChange() } } <form onSubmit={this.onSubmit}> <input type="text" placeholder="Type a message here then hit ENTER" onChange={this.onChange} value={this.state.text} /> </form> // ChatScreen.js component sendMessage(text) { this.state.currentUser.sendMessage({ text, roomId: this.state.currentRoom.id, }) } render ( <SendMessageForm onSubmit={this.sendMessage} /> ) Remember, the subscribeToRoom method from Step 8 takes an event handler called onMessage that is called in real-time each time a new message is posted, so new messages will appear in the MessageList.js component made in Step 9. Step 11: Adding real-time typing indicators Create a TypingIndicator.js component in the ./src/components folder using the instructions in Pusher’s document here. At the end of this step, you’ll wind up having a state for usersWhoAreTyping in ChatScreen.js and the following code: // ChatScreen.js component state = { currentUser: {}, currentRoom: {}, messages: [], usersWhoAreTyping: [], } sendTypingEvent() { this.state.currentUser .isTypingIn({ roomId: this.state.currentRoom.id }) .catch(error => console.error('error', error)) } return( <TypingIndicator usersWhoAreTyping={this.state.usersWhoAreTyping} /> <SendMessageForm onSubmit={this.sendMessage} onChange={this.sendTypingEvent}/> ) Step 12: Adding who’s online list Create a WhosOnlineList.js component in the /src/components folder using the instructions here. Step 13: Adding auto-scrolling for new messages The MessageList.js component renders all messages sent by users, but I was surprised to see the chatroom did not automatically display the most recent message at the bottom of my screen. I looked further into what it would take to make the MessageList auto-scroll down to the newest message and came up with the following code: // ChatScreen.js component state = { currentUser: {}, currentRoom: {}, messages: [], usersWhoAreTyping: [], scrolled: false, } sendMessage = (text) => { this.state.currentUser.sendMessage({ roomId: this.state.currentRoom.id, text: text }) this.setState({ scrolled: false }) } onScroll = () => { this.setState({ scrolled: true }) } // MessageList.js component componentDidUpdate() { setInterval(this.updateScroll,1000); } updateScroll = () => { let element = document.querySelector('.message-list-container') if (this.props.scrolled === false) { element.scrollTop = element.scrollHeight; } } //update the div for message-list-container with onScroll event <div className="message-list-container" onScroll={() => this.props.onScroll()}> Step 14: Cosmetic add-ons Changing color of user’s posts I’m a fan of rendering the logged in user’s name differently from other users to make posts easily identifiable. To accomplish this goal, I wrote the ternary statement below in the MessageList.js component. 
// MessageList.js component { this.props.currentUser.id === message.senderId ? <font className="special-text">{message.senderId}</font> : {message.senderId}} Every time you or another user posts a message, the senderId for the message is set equal to your username. The currentUser.id is set equal to the username of the logged in person. If the ternary is true, the username of the logged in person is wrapped in a className called “special-text”, which styles the username using the CSS in Step 15. Displaying time next to user post This functionality is a staple in any chatroom application, but it’s not as easy as it seems to implement. The Chatkit API returns the time of messages in UTC time in a format that’s not as simple as HH:MM. I had to write a function called parseTime to convert UTC to military time and then, convert military time to standard time with the AM/PM designation. <span className="message-list-time">{this.parseTime(message.createdAt)}</span> parseTime = (createdMessage) => { let date = new Date(createdMessage); let hours = date.getHours(); let mins = date.getMinutes(); let merid = " AM" if (hours > 12 && mins < 10) { hours = hours - 12 mins = "0" + mins merid = " PM" return hours + ':' + mins + merid } else if (hours > 12 && mins > 10) { hours = hours - 12 merid = " PM" return hours + ':' + mins + merid } else if (hours < 12 && mins < 10) { mins = "0" + mins return hours + ':' + mins + merid } else { return hours + ':' + mins + merid } } Displaying “hit return to submit” when length is greater than 2 Slack has a really cool feature that’s subtle in design, but lets users with minimal experience know how to communicate in its app when they type a specific number of characters. // SendMessageForm.js component { this.state.text.length > 2 ? <div className="message-form-return">hit return to send</div> : "" } Step 15: Style the chatroom components The standard for chatroom dictates a navigation menu on the left panel with information about the users on that same left panel. To the right of the panel, the chatroom should render all chat messages and have a text field at the bottom to submit messages. See my styled sheet to update the styling of your chatroom app here.
https://medium.com/javascript-in-plain-english/building-a-chatroom-using-reactjs-and-pusher-ec8c33b5f660
['Hector P.']
2020-01-23 21:04:27.593000+00:00
['JavaScript', 'Web Development', 'React', 'Pusher', 'Programming']
An Introduction to ILLUMINATION Slack Workspace
Introduction Some writers and readers say that ILLUMINATION is growing with the speed of light. This statement is not an exaggeration. Within less than nine months, 6,000+ writers and 52,000 readers support this publication. With the addition of new services such as special collection ILLUMINATION-Curated and technology collection called Technology Hits, the growth turned into an exponential state. Our special collection of ILLUMINATION-Curated is also rapidly growing. To adapt this fast growth, we need to improve our services and make them more efficient to serve thousands of writers contributing to these publications serving thousands of readers. We upgraded our Slack infrastructure, capability and processes. I invited all applicants who are interested in contributing to our publications. Around 5,000 Medium writers have already joined and are collaborating joyfully. In this post, my goal is to introduce Slack workspace briefly, how writers can benefit it and use it efficiently to achieve their writing and reading goals. I will also introduce other resources that can help you learn Slack quickly and easily. For the beginners, let me introduce what Slack is. What is Slack Slack is an online collaboration and productivity tool. It has many useful features to enhance communication and collaboration and deepen relationships among members. In a nutshell, with Slack, members can work together more effectively, connect their software tools and services to find the information that they need to do their work. It is a secure platform with the enterprise-grade environment. It is used by thousands of organizations globally. A Slack workplace may look overwhelming initially, but many users get used to it very quickly. It is an intuitive program and easy to use platform. Slack is used by millions of people for various purposes. Slack is one of the most popular communication tools. It supports both personal computers and mobile devices. You can run Slack application on your Windows or Mac machine and any smartphone or tablet devices. Your profile and information details are automatically move to the preferred device. A Slack workspace has a team of members with different permissions. Let me introduce our Slack workspace so that you can start your journey with ease and comfort. ILLUMINATION Slack workspace As the owner of the workspace, I created and configured the workspace. I invite members and set permissions for other administrators, members, and guests. As a complementary service, I invite all contributing writers as a member for the workspace. I plan to invite some guests to use specific channels as part of our extended projects. We run multiple collaboration projects to give more visibility to the great work of our writers and reach out to a broader audience. Slack invitations work with email address. For example, if you applied to ILLUMINATION as a writer using our application tool, I send you an invitation to join our Slack workspace. The invitation you receive would say “Join your team on Slack. Dr Mehmet Yildiz has invited you to use Slack with them, in a workspace called ILLUMINATION.” You need to click on the JOIN NOW button. If you have Slack application, you will automatically be introduced to the workspace, if not you need to install the app. When you join the first page you see is channel called #1_start_here. The core architecture of Slack is related to the provision of channels. You can learn the concept of Slack channels from the attached short video. 
Messaging is the primary communication way in Slack. Slack also allows phone calls and video calls. However, our version does not support video calls yet. We may consider video version in the future. The main inhibitor is the cost. Considering our publications do not generate any income, we cannot justify the cost for video calls. Each member has a profile on Slack workspace. You can add a photo and contact details such as email, phone number, and your role in the workspace. The Slack workspace also links all the files you own in the space. Here is my member profile on Slack as a sample. image screen capture by author I recommend our writers to use Medium account ID as display name so that other writers can easily find you on Medium. As the purpose is collaboration, presenting your profile with sufficient publicly available information can be useful. Adding information about yourself is optional, of course. If you prefer to stay anonymous, it is fine; we respect your choice. You can leave Slack anytime without being dependent on the administrator. Slack has several automation tools as well. For example, I forward all ILLUMINATION articles via Medium RSS to a particular channel so that the members can easily access them, read them, comment on them, and share them with other members. You can ask your questions to our Slack editors and champions Tree Langdon, Dew Langrial, The Maverick Files, Agnes Laurens, Liam Ireland, Stuart Englander, Britni Pepper, Ntathu Allen, Joe Luca, Thewriteyard, Dr John Rose who actively use the workspace. We operate all publication activities in a secure channel dedicated to editors. You can easily access editors via channels or even sending a private message. Sending private messages on Slack is much easier, faster, and reliable than sending private messages on Medium. For example, your private messages on stories submitted to publications can be viewed by all editors. On Slack, only the owner of the account can see it. This feature is handy for communicating sensitive matters with your target audience. You can learn more about Slack by reading the instruction in the attached link by official Slack organization: https://slack.com/intl/en-au/help/categories/200111606 You can also check this excellent article developed by our Slack champion and senior editor Tree Langdon. Tips for using ILLUMINATION Slack workspace Let me give you a few tips to make your journey comfortable and productive. Start your first visit with #1_start_here channel. This is the channel you can introduce yourself. Then proceed with other channels that you want to engage. The hashtag (#) sign depicts the channel. When you type, # in a chat box, Slack prompts available channels in the workspace. You can search all channels from the Channel browser tool. Use your Medium account ID to be recognized by other writers. Use channel browser to see relevant channels. Use People & use groups to find members. One of the best ways to catch up with conversations is by using threads. You can use “threads” to see all the new discussions on any channel. You can also star channels so that you can find them easily in the future. And of course, you can remove the stars if you don’t need them any more. Ground Rules As the Slack workspace is a social platform, we need to act ethically. We act with common sense. The golden rule is to treat other members as you want to be treated. Respecting privacy and opinions of other people is critical. 
We don’t allow harassment, hate speech, discrimination, and toxic behavior in this collaboration environment. Our members want to have secure, joyful, and pleasant conversations. Our Slack is not like common social media. It is managed privately, and we have the right to choose our members. If you observe or experience toxic behavior on our Slack workspace, please contact me directly or reach out to our editors in my absence. We will remove the privilege of offending members and will not allow them to use our services. Now and then, conflicts may arise. We are all humans. Mistakes and misunderstandings can happen. We approach the problems case by case. Please always give the benefit of the doubt until a matter is understood clearly and resolved amicably by our administration team. Low hanging fruits for members 1 — Introduce yourself, create visibility, and start the collaboration 2 — Add your writer bio to the channel called #writerbios 3 — Share your story links on the #post-your-own-writing-here 4 — Say “hi” to a few members and introduce writing practice. 5 — Share your mailing list and other materials in promotion channels. You can also show your reactions with emojis on the comments. Let me introduce you our promotion channel. Promotion Channels Visibility of writers require smart promotion. Writers are usually shy creatures by nature. I encourage writers to think and act like an entrepreneur. There is no other way to survive and thrive in this economic climate. There is no shame in promoting your great services. You deserve recognition and appreciation. The best approach is to find creative and meaningful ways to reach out to your audience. Therefore, I created this innovative platform and encourage our writers to promote their profiles, content, external products and freelancing services, newsletters, and mailing lists. Our Slack workspace is an excellent promotion tool for writers to create visibility. Here are the channels that you can use to promote your materials. #promotion-fiction-books #promotion-nonfiction-books #promotion-nonfiction-books #promotion-poetry-books #promotion-blogging-links #promotion-training-education #mailinglists-and-newsletters We also have clubs such as #poetry-club #fiction-club #technology-science-entrepreneurship-club Conclusions Our Slack workspace is designed as an engagement, visibility, promotion, and collaboration tool. Our Slack workspace is a privilege that we give to our contributing writers of ILLUMINATION, ILLUMINATION-Curated, and Technology Hits publications. You can use ILLUMINATION Slack workspace to meet other writers, engage in their content, collaborate with them, and promote your materials. Joining and engaging in our Slack workspace can help you enhance your network and make you feel joyful in your writing journey. I hope you enjoy your journey. I am one message away to connect with you. Thank you for reading my perspectives.
https://medium.com/illumination/an-introduction-to-illumination-slack-workspace-33e10b9e890c
['Dr Mehmet Yildiz']
2020-12-10 18:10:54.784000+00:00
['Social Media', 'Business', 'Technology', 'Writing', 'Freelancing']
Understanding Java Streams
After having had a deep introduction to functional programming in my last article “A new Java functional style”, I think it’s now time to look at Java Streams more in depth and understand how they work internally. This can be something very important when working with Streams if our performance is going to be impacted. You’ll be able to see how much easier and efficient is processing sequences of elements with Java Streams compared to the “old way” of doing things and how nice is to write code using fluent interfaces. You can now say good-bye to error-prone code, full of boilerplate code and clutter that was making our lives as developers much more complicated. Let’s start by having a brief introduction to Java Streams first! Introduction Java Streams are basically a pipeline of aggregate operations that can be applied to process a sequence of elements. An aggregate operation is a higher-order function that receives a behaviour in a form of a function or lambda, and that behaviour is what gets applied to our sequence. For example, if we define the following stream: collection.stream() .map(element -> decorateElement(element)) .collect(toList()) In this case the behaviour what we’re applying to each element is the one specified in our “decorateElement” method, which will supposedly be creating a new “enhanced” or “decorated” element based on the existing element. Java Streams are built around its main interface, the Stream interface, which was released in JDK 8. Let’s go into a bit more of detail briefly! Characteristics of a stream As it was mentioned in my last article, Java Streams have these main characteristics: Declarative paradigm Streams are written specifying what has to be done, but not how. Streams are written specifying what has to be done, but not how. Lazily evaluated This basically means that until we call a terminal operation, our stream won’t be doing anything, we will just have declared what our pipeline will be doing. This basically means that until we call a terminal operation, our stream won’t be doing anything, we will just have declared what our pipeline will be doing. It can be consumed only once Once we call a terminal operation, a new stream would have to be generated in order to apply the same series of aggregate operations. Once we call a terminal operation, a new stream would have to be generated in order to apply the same series of aggregate operations. Can be parallelised Java Streams are sequential by default, but they can be very easily parallelised. We should see Java Streams as a series of connected pipes, where in each pipe our data gets processed differently; this concept is very similar to UNIX pipes! Phases of a stream A Java Stream is composed by three main phases: Split Data is collected from a collection, a channel or a generator function for example. In this step we convert a datasource to a Stream in order to process our data, we usually call it stream source . Data is collected from a collection, a channel or a generator function for example. In this step we convert a datasource to a Stream in order to process our data, we usually call it . Apply Every operation in the pipeline is applied to each element in the sequence. Operations in this phase are called intermediate operations . Every operation in the pipeline is applied to each element in the sequence. Operations in this phase are called . Combine Completion with a terminal operation where the stream gets materialised. 
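To make these characteristics and phases concrete, here is a minimal, self-contained sketch; the class name and sample values are made up purely for illustration. Nothing is printed by the intermediate operations until the terminal operation runs, and trying to reuse the consumed stream fails.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamPhasesDemo {

    public static void main(String[] args) {
        List<String> names = Arrays.asList("anna", "bob", "carol"); // made-up data

        // SPLIT: the collection becomes a stream source
        Stream<String> pipeline = names.stream()
                // APPLY: intermediate operations, declared but not executed yet
                .peek(name -> System.out.println("processing " + name))
                .map(String::toUpperCase);

        System.out.println("Nothing processed so far (lazy evaluation)");

        // COMBINE: the terminal operation materialises the result
        List<String> upperCased = pipeline.collect(Collectors.toList());
        System.out.println(upperCased);

        // A stream can be consumed only once: a second terminal operation fails
        try {
            pipeline.count();
        } catch (IllegalStateException e) {
            System.out.println("Stream already consumed: " + e.getMessage());
        }
    }
}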
Please remember that when defining a stream we just declare the steps to follow in our pipeline of operations, they won’t get executed until we call our terminal operation. There are two interfaces in Java which are very important for the SPLIT and COMBINE phases; these interfaces are Spliterator and Collector. The Spliterator interface allows two behaviours that are quite important in the split phase: iterating and the potential splitting of elements. The first of these aspects is quite obvious, we’ll always want to iterate through our data source; what about splitting? Splitting will take a big importance when running parallel streams, as it’ll be the one responsible for splitting the stream to give an independent “piece of work” to each thread. Spliterator provides two methods for accessing elements: boolean tryAdvance(Consumer<? super T> action); void forEachRemaining(Consumer<? super T> action); And one method for splitting our stream source: Spliterator<T> trySplit(); Since JDK 8, a spliterator method has been included in every collection, so Java Streams use the Spliterator internally to iterate through the elements of a Stream. Java provides implementations of the Spliterator interface, but you can provide your own implementation of Spliterator if for whatever reason you need it. Java provides a set of collectors in Collectors class, but you could also do the same with Collector interface if you needed a custom Collector to combine your resulting elements in a different way! Let’s see now how a Stream pipeline works internally and why is this important. Stream internals Java Streams operations are stored internally using a LinkedList structure and in its internal storage structure, each stage gets assigned a bitmap that follows this structure: So basically we could imagine this representation as for example: Why is this so important? Because what this bitmap representation allows Java is to do stream optimisations. Each operation will clear, set or preserve different flags; this is quite important because this means that each stage knows what effects causes itself in these flags and this will be used to make the optimisations. For example, map will clear SORTED and DISTINCT bits because data may have changed; however it will always preserve SIZED flag, as the size of the stream will never be modified using map. Does that make sense? Let’s look at another example to clarify things further; for example, filter will clear SIZED flag because size of the stream may have changed, but it’ll always preserve SORTED and DISTINCT flags because filter will never modify the structure of the data. Is that clear enough? So how does the Stream use these flags for its own benefit? Remember that operations are structured in a LinkedList? So each operation combines the flags from the previous stage with its own flags, generating a new set of flags. Based on this, we will be able to omit some stages in many cases! Let’s take a look at an example: In this example we are creating a Set of String, which will always contain unique elements. Later on in our Stream we make use of distinct to get unique elements from our Stream; Set already guarantees unique elements, so our Stream will be able to cleverly skip that stage making use of the flags we’ve explained above. That’s brilliant, right? We’ve learned that Java Streams are able to make transparent optimisations to our Streams thanks to the way they’re structured internally, let’s look now at how do they get executed! 
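Before moving on to execution, here is a minimal sketch of the Set-and-distinct scenario just described, with made-up data; the source spliterator of a Set advertises the DISTINCT characteristic, which is the information the stream uses to skip the redundant distinct() stage.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.Spliterator;
import java.util.stream.Collectors;

public class StreamFlagsDemo {

    public static void main(String[] args) {
        // A Set already guarantees unique elements (made-up data)
        Set<String> languages = new HashSet<>(Arrays.asList("java", "kotlin", "scala", "java"));

        // The source spliterator reports DISTINCT, so the distinct() stage below can be optimised away
        Spliterator<String> spliterator = languages.spliterator();
        System.out.println("Source reports DISTINCT: "
                + spliterator.hasCharacteristics(Spliterator.DISTINCT));

        List<String> result = languages.stream()
                .distinct()                 // redundant for a Set source
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(result);
    }
}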
Execution We already know that a Stream is lazily executed, so when a terminal operation gets executed what happens is that the Stream selects an execution plan. There are two main scenarios in the execution of a Java Stream: when all stages are stateless and when NOT all stages are stateless. To be able to understand this we need to know what stateless and stateful operations are: Stateless operations A stateless operation doesn’t need to know about any other element to be able to emit a result. Examples of stateless operations are: filter, map or flatMap. A stateless operation doesn’t need to know about any other element to be able to emit a result. Examples of stateless operations are: filter, map or flatMap. Stateful operations On the contrary, stateful operations need to know about all the elements before emitting a result. Examples of stateful operations are: sorted, limit or distinct. What’s the difference in these situations then? Well, if all operations are stateless then the Stream can be processed in one go. On the other hand, if it contains stateful operations, the pipeline is divided into sections using the stateful operations as delimiters. Let’s take a look at a simple stateless pipeline first! Execution of stateless pipelines We tend to think that Java Streams will be executed exactly in the same order as we write them; that’s not correct, let’s see why. Let’s consider the following scenario, where we have been asked to give a list with those employees with salaries below $80,000 and update their salaries with a 5% increase. The stream responsible for doing that would be the one shown below: How do you think it’ll be executed? We’d normally think that the collection gets filtered first, then we create a new collection including the employees with their updated salaries and finally we’d collect the result, right? Something like this: Unfortunately, that’s not actually how Java Streams get executed; to prove that, we’re going to add logs for each step in our stream just expanding the lambda expressions: If our initial reasoning was correct we should be seeing the following: Filtering employee John Smith Filtering employee Susan Johnson Filtering employee Erik Taylor Filtering employee Zack Anderson Filtering employee Sarah Lewis Mapping employee John Smith Mapping employee Susan Johnson Mapping employee Erik Taylor Mapping employee Zack Anderson We’d expect to see each element going through the filter first and then, as one of the employees has a salary higher than $80,000, we’d expect four elements to be mapped to a new employee with an updated salary. Let’s see what actually happens when we run our code: Filtering employee John Smith Mapping employee John Smith Filtering employee Susan Johnson Mapping employee Susan Johnson Filtering employee Erik Taylor Mapping employee Erik Taylor Filtering employee Zack Anderson Mapping employee Zack Anderson Filtering employee Sarah Lewis Hmmm, that’s not what you were expecting, right? So actually how Java Streams are processed is more like this: That’s quite surprising, right? In reality the elements of a Stream get processed individually and then they finally get collected. This is VERY IMPORTANT for the well functioning and the efficiency of Java Streams! Why? First of all, parallel processing is very safe and straightforward by following this type of processing, that’s why we can convert a stream to a parallel stream very easily! Another big benefit of doing this is something called short-circuiting terminal operations. 
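For reference, here is a minimal sketch of the employee pipeline discussed above, with a log line added inside each lambda; the Employee class shown here is a simple illustrative holder and the salary figures are made up (only Sarah Lewis is above $80,000). Running it produces the interleaved Filtering/Mapping output shown above, element by element rather than stage by stage.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StatelessPipelineDemo {

    // Simple illustrative holder; the article's actual Employee class may differ
    static class Employee {
        final String name;
        final double salary;

        Employee(String name, double salary) {
            this.name = name;
            this.salary = salary;
        }

        @Override
        public String toString() {
            return name + " ($" + salary + ")";
        }
    }

    public static void main(String[] args) {
        List<Employee> employees = Arrays.asList(
                new Employee("John Smith", 60_000),
                new Employee("Susan Johnson", 70_000),
                new Employee("Erik Taylor", 50_000),
                new Employee("Zack Anderson", 75_000),
                new Employee("Sarah Lewis", 95_000));

        List<Employee> updated = employees.stream()
                .filter(employee -> {
                    System.out.println("Filtering employee " + employee.name);
                    return employee.salary < 80_000;
                })
                .map(employee -> {
                    System.out.println("Mapping employee " + employee.name);
                    return new Employee(employee.name, employee.salary * 1.05);
                })
                .collect(Collectors.toList());

        // Each element flows through filter and map individually before the next one starts
        System.out.println(updated);
    }
}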
We’ll take a brief look at them later! Execution of pipelines with stateful operations As we mentioned earlier, the main difference when we have stateful operations is that a stateful operation needs to know about all the elements before emitting a result. So what happens is that a stateful operation buffers all the elements until it reaches the last element and then it emits a result. That means that our pipeline gets divided into two sections! Let’s modify our example shown in the last section to include a stateful operation in the middle of the two existing stages; we’ll use sorted to prove how Stream execution works. Please notice that in order to use sorted method with no arguments, Employee class has now to implement Comparable interface. How do you think this will be executed? Will it be the same as our previous example with stateless operations? Let’s run it and see what happens. Filtering employee John Smith Filtering employee Susan Johnson Filtering employee Erik Taylor Filtering employee Zack Anderson Filtering employee Sarah Lewis Mapping employee Erik Taylor Mapping employee John Smith Mapping employee Susan Johnson Mapping employee Zack Anderson Surprise! The order of execution of the stages has changed! Why is that? As we explained earlier, when we use a stateful operation our pipeline gets divided into two sections. That’s exactly what has happened! The sorted method cannot emit a result until all the elements have been filtered, so it buffers them before emitting any result to the next stage (map). This is a clear example of how the execution plan changes completely depending on the type of operation; this is done in a way that is totally transparent to us. Execution of parallel streams We can execute parallel streams very easily by using parallelStream or parallel. So how does it work internally? It’s actually pretty simple. Java uses trySplit method to try splitting the collection in chunks that could be processed by different threads. In terms of the execution plan, it works very similarly, with one main difference. Instead of having one single set of linked operations, we have multiple copies of it and each thread applies these operations to the chunk of elements that it’s responsible for; once completed all the results produced by each thread get merged to produce one single and final result! The best thing is that Java Streams do this transparently for us! That’s great, isn’t it? One last thing to know about parallel streams is that Java assigns each chunk of work to a thread in the common ForkJoinPool, in the same way as CompletableFuture does. Now as promised, let’s take a brief look at short-circuiting terminal operations before we complete this section about how Streams work. Short-circuiting terminal operations Short-circuiting terminal operations are some kind of operations where we can “short-circuit” the stream as soon as we’ve found what we were looking for, even if it’s a parallel stream and multiple threads are doing some work. If we take a closer look at certain operations like: limit, findFirst, findAny, anyMatch, allMatch or noneMatch; we’ll see that we don’t want to process the whole collection to get to a final result. Ideally we’d want to interrupt the processing of the stream and return a result as soon as we find it. 
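A quick way to observe that interruption is to count how many elements are actually visited before a short-circuiting operation returns; the numbers below are made up, and the counter is only there for demonstration.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ShortCircuitDemo {

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(3, 7, 42, 11, 99, 5);
        AtomicInteger inspected = new AtomicInteger();

        boolean containsBigNumber = numbers.stream()
                .peek(n -> inspected.incrementAndGet())   // count visited elements
                .anyMatch(n -> n > 40);

        System.out.println("Match found: " + containsBigNumber);
        // Only the first three elements are inspected; the stream stops as soon as 42 matches
        System.out.println("Elements inspected: " + inspected.get());
    }
}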
That’s easily achieved in the way Java Streams get processed; elements get processed individually, so for example if we are processing a noneMatch terminal operation, we’ll finish the processing as soon as one element matches the criteria. I hope this make sense! One interesting fact to mention in terms of execution is that for short-circuiting operations the tryAdvance method in Spliterator is called; however, for non short-circuiting operations the method called would be forEachRemaining. That’s it from me! I hope now you have a good understanding of how Java Streams work and that this helps you design stream pipelines easier! If you need to improve your understanding of Java Streams and functional programming in Java, I’d recommend that you read “Functional Programming in Java: Harnessing the Power Of Java 8 Lambda Expressions”; you can buy it on Amazon in the following link. Conclusion Java Streams have been a massive improvement in Java language; not only our code is more readable and easier to follow, but also less error-prone and more fluent to write. Having to write complex loops and deal with variables just to iterate collections wasn’t the most efficient way of doing things in Java. However, I think the main benefit is how Java Streams have enabled a way to do concurrent programming for anyone! You don’t need to be an expert in concurrency to write concurrent code anymore; although it’s good that you understand the internals to avoid possible issues. The way Java processes streams in a clever way has cleared our paths to process collections and write concurrent programs and we should take advantage of it! I hope that what we’ve gone through in this article has been clear enough to help you have a good understanding about Java Streams. I also hope that you’ve enjoyed this reading as much as I enjoy writing to share this with you guys! In the next article we’ll be showing many examples on how to use Java Streams so if you’ve liked this article please subscribe/follow in order to be notified when a new article gets published. It was a pleasure having you and I hope I see you again!
https://medium.com/swlh/understanding-java-streams-e0f2df12441f
['The Bored Dev']
2020-07-21 10:52:24.409000+00:00
['Functional Programming', 'Java', 'Technology', 'Programming', 'Software Development']
How To Automate Web Scraping using BeautifulSoup for Dummies
1. Using Requests to download page To start scraping a web page, first we need to download the page using the Python requests library . The requests library will make a GET request to a web server, which will download the HTML contents of a given web page for us. There are several different types of requests we can make using requests, of which GET is just one. You can easily install the library by running code below pip install requests 2. Using BeautifulSoup to parse the HTML content To parse our HTML document and extract the 50 div containers, we’ll need to install a Python module called BeautifulSoup: pip install BeautifulSoup4 Then, we will: Import the BeautifulSoup class creator from the package bs4. Parse response.text by creating a BeautifulSoup object, and assign this object to html_soup . 3. Understanding the HTML structure Before you get all hyped up for web scraping, you need to understand the HTML of the website which you want to scrape from. Take note that every website has different structure. Hover to the items you want to reach its HTML code Right click on the website Left click on Inspect Turn on the hover cursor button on top left. Each movie is in a div tag with class lister-item-mode-advanced . Let’s use the find_all() method to extract all the div containers that have a class attribute of lister-item mode-advanced : As shown, there are 50 containers, meaning to say 50 movies listed on each page. Now we’ll select only the first container, and extract, by turn, each item of interest: The name of the movie. The year of release. The IMDB rating. The Metascore. Directors The number of votes. Gross Let’s get started with the first_movie 4. Scraping Data From the first_movie html which we had stored, we are going to use find and find_all with str slicing to work out the magic. The name of the movie. The year of release. IMDB rating Metascore Directors This is more complicated as this class contains Directors and Stars. So I used slicing and splitting to extract only the directors. You may use the same logic to extract Stars as well. The number of votes Gross 5. Changing the URL’s parameters (Where automation starts) The URLs follow a certain logic as the web pages change. As we are making the requests, we’ll only have to vary the values of only two parameters of the URL: release_date : Create a list called years_url and populate it with the strings corresponding to the years 2000-2017. : Create a list called and populate it with the strings corresponding to the years 2000-2017. page : Create a list called pages, and populate it with the strings corresponding to the first 4 pages. 6. Controlling the crawl-rate We need 2 functions: sleep() : Control the loop’s rate. It will pause the execution of the loop for a specified amount of seconds. randint() : To mimic human behavior, we’ll vary the amount of waiting time between requests. It randomly generates integers within a specified interval. 7. Monitoring the loop as it’s still going Monitoring is very helpful in the testing and debugging process, especially if you are going to scrape hundreds or thousands of web pages in a single code run. Here are the following parameters that we are gonna monitor: The frequency (speed) of requests: make sure our program is not overloading the server. Frequency value = the number of requests / the time elapsed since the first request. The number of requests : can halt the loop in case the number of expected requests is exceeded. 
The status code of our requests: make sure the server is sending back the proper responses. Let's experiment with this monitoring technique at a small scale first. 8. Piece everything together (Loops) 🎉 Phew~ the tough work is done, now let's piece together everything we've done so far (a condensed sketch of this full loop is shown at the end of this article). Import the necessary libraries. Re-declare the list variables so they become empty again. Prepare the loop. Loop through the years_url list in the interval 2010–2019 and loop through the pages list in the interval 1–4. Make the GET requests within the pages loop. Give the headers parameter the right value to make sure we get only English content. Pause the loop for a time interval between 8 and 15 seconds. Throw a warning for non-200 status codes. Break the loop if the number of requests is greater than expected. Convert the response's HTML content to a BeautifulSoup object. Extract all movie containers from this BeautifulSoup object. Loop through all these containers. Extract the data if a container has a Metascore. Extract the data if a container has a Gross, or else append "-". 9. Transform the scraped data into CSV In the next code block we: Merge the data into a pandas DataFrame. Print some information about the newly created DataFrame. Show the last 10 entries. Sneak peek of the dataset Last but not least, save the DataFrame to CSV, so that we can do data wrangling and EDA later on: FREE access to all running code Here's the GitHub link to get the Python code. 👉 To be published next: Data Wrangling and EDA of movie ratings dataset Stay tuned! 😊
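As promised above, here is a condensed, hedged sketch of the loop from steps 5–8 (a subset of the fields, without directors and gross, for brevity). The IMDb URL, query parameters and CSS class names follow the article's description of the page markup at the time of writing and should be re-checked against the live site; the article mentions both 2000–2017 and 2010–2019 as the year range, and this sketch uses the latter.

from time import sleep, time
from random import randint
from warnings import warn

import pandas as pd
import requests
from bs4 import BeautifulSoup

years_url = [str(year) for year in range(2010, 2020)]   # release_date parameter values
pages = [str(page) for page in range(1, 5)]             # first 4 result pages
headers = {"Accept-Language": "en-US, en;q=0.5"}        # ask the server for English content

names, years, imdb_ratings, metascores, votes = [], [], [], [], []
start_time = time()
requests_made = 0

for year in years_url:
    for page in pages:
        # URL pattern and parameters assumed from the article; verify against imdb.com
        response = requests.get(
            "https://www.imdb.com/search/title/",
            params={"release_date": year, "sort": "num_votes,desc", "page": page},
            headers=headers,
        )
        requests_made += 1
        sleep(randint(8, 15))                            # control the crawl rate
        elapsed = time() - start_time
        print(f"Request {requests_made}; frequency: {requests_made / elapsed:.4f} requests/s")
        if response.status_code != 200:
            warn(f"Request {requests_made}; status code {response.status_code}")
        if requests_made > len(years_url) * len(pages):
            warn("Number of requests was greater than expected.")
            break

        soup = BeautifulSoup(response.text, "html.parser")
        containers = soup.find_all("div", class_="lister-item mode-advanced")
        for container in containers:
            # Only keep entries that have a Metascore, as described in the article
            if container.find("div", class_="ratings-metascore") is not None:
                names.append(container.h3.a.text)
                years.append(container.h3.find("span", class_="lister-item-year").text)
                imdb_ratings.append(float(container.strong.text))
                metascores.append(int(container.find("span", class_="metascore").text))
                votes.append(int(container.find("span", attrs={"name": "nv"})["data-value"]))

# Step 9: merge into a DataFrame and save for later wrangling and EDA
movie_ratings = pd.DataFrame({
    "movie": names, "year": years, "imdb": imdb_ratings,
    "metascore": metascores, "votes": votes,
})
movie_ratings.to_csv("movie_ratings.csv", index=False)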
https://medium.com/analytics-vidhya/automated-web-scraping-using-beautifulsoup-for-dummies-free-python-code-41925125774e
['Shan Yi Tan']
2020-05-30 14:03:46.415000+00:00
['Python', 'Automation', 'Data Science', 'Web Scraping', 'Data Visualization']
How to Run a 5k Under 20 Minutes
Train for speed 6:26 pace is quick. Damn quick. 9.36 mph quick. To race fast, you have to train fast. Rocky Balboa’s trainer, Mickey, said it best: “We need speed. Speed is what we need!” Tempo runs, fartlek runs and high intensity intervals are excellent training modalities to build speed. *Always check in with your doctor before starting any exercise program* Tempo run / threshold training Tempo runs, also known as lactate-threshold runs, are run at a pace about 25-30 seconds per mile slower than your 5K race pace. So for a sub-20 minute 5k, 7 minute per mile pace. Without getting too technical, tempo pace is the effort level at which your body clears as much lactate — a chemical byproduct of exercise (webmd.com) — as it produces. Basically, the fastest pace you can maintain without the “dead-leg sensation” setting in. While this training method is most commonly used for 15k efforts and above, tempo runs have their place in a quality 5k training program for several reasons: They get the body used to running fast without the repeated physical stresses of running at or above race pace They’re generally dictated by time, not mileage — great for those just starting a running program or who travel a lot They help develop mental toughness and concentration — very useful come race day They stimulate and build both fast and slow twitch muscle fibers, leading to gains in both speed and endurance Fartlek runs Swedish for “speed play”, Fartlek runs are as fun to run as they are to say (well, for running geeks like me they are). Unlike tempo runs, Fartleks are unstructured. Unregulated. Alternating between slow, moderate and hard efforts within a single run. See that sign a quarter mile up the road? Sprint hard, reach the sign, then jog for a couple minutes. See that hill over there? Run up it as hard as you can. Walk down. Then fall back into your pace as you make your way home. Think about how you used to run and play as a kid. Do that. Think about how you first felt when you started running. Bring that same mental attitude. Feel your body. What hurts? What doesn’t? What feels good? What needs to be stretched and rolled out? Fartleks are perfect for taking stock of your body, for identifying problem areas, and for having fun! The goal is to keep your runs as free-flowing as possible. To keep your body guessing in order to avoid the dreaded plateau. Plateaus occur when we do the same thing, perform the same workout, day in day out, week in week out. We stop improving. Stop making gains. Stop seeing reults. Basically, plateau halts progression. Plateau is a larger topic deserving of it’s own article, but we’ll get to that later. We always want to continue getting better, to continue getting stronger. Fartleks help keep the plateau at bay. High intensity interval training Now the real fun starts. High Intensity Interval Training (H.I.I.T) workouts are, as the name suggests, intense. Dialed up to 11 intense. That shouldn’t scare you, that should excite you! When most runners think H.I.I.T., they think sprints. And they’d be right. Sprints are the best form of interval training a runner can perform. Here’s an example: Warmup 8–10 minutes with an easy jog 60 second CONTROLLED sprint at hard to maximum effort Like a car at 8000 rpm, your body should be in the red (reaching hard for air, legs churning, unable to hold a conversation, counting the seconds until you can stop). Get that heart rate up as high as possible. Three minutes of easy jogging or walking to catch your breath Read: easy jog. 
Your heart rate should come down as quickly as possible. The secret is in the recovery between sprints. The difference between your heart rate during the sprint and your heart rate during recovery is the “interval” part of interval training (menshealth.com). You want to run the subsequent intervals strong and finish the workout fatigued, but not completely spent. Repeat this process 8 times Cool down with a 5–7 minute easy jog or walk As your fitness improves, play with your timing a bit. Up your sprint from 60 to 90 seconds. Shorten your recovery from 3 minutes to 2 minutes. Up your interval count from 8, to 10, to 12. Make sure to never skip the initial warm-up and cool down periods. Don’t neglect long runs and recovery runs Long runs and recovery runs have their proper place in all types of training for all types of distances. Long slow runs, where you can have a conversation with a friend, are essential for not only building a foundation for faster, tougher workouts, but for improving body physiology. You’ll foster growth and development of new capillaries and blood vessels in muscle tissues. Your heart will get stronger. Your feet will get tougher. You energy efficiency will improve. You’ll be a better machine. Recovery runs encourage blood flow to stiff, sore areas in need of rehabilitation. They aid in the removal of scar tissue, lactic acid, and any other negative chemical byproducts built up from physical activity. Regular maintenance on your car is just as important as the performance of the car itself.
https://medium.com/in-fitness-and-in-health/how-to-run-a-5k-under-20-minutes-db611f20ac54
['Scott Mayer']
2020-10-04 01:24:03.312000+00:00
['Racing', 'Health', 'Running', 'Fitness', 'Life']
Top 5 BI Tools Widely used for Data Visualization
Business Intelligence (BI) — The topic of discussion in the business domain since quite a while now. Nearly all kinds of businesses are convinced about the potential of good BI tools and are using them for the betterment of their business. With the rise of BI phenomenon, advanced BI tools and technologies were introduced in good numbers. This made a lot of potentially efficient BI tools available in the market for customers. Today I am sharing the details of top players in the domain of Business Intelligence this means top BI tools for data visualization. Let’s start the discussion with the introduction to Data Visualization. What is data visualization? The term Business Intelligence refers collectively to the tools and technology used for the collection, integration, visualization and analysis of raw data. In today’s time, with ever increasing amounts of free-flowing data, efficient BI tools are very important to make the most of the knowledge that hides itself in raw and unprocessed data. Data visualization plays a crucial role in the entire business intelligence dynamic. In simple terms, data visualization is pictorial representation of a given set of data. A text-based data is visualized graphically in the form of charts, graphs, tables, infographics, maps etc. With the help of visualizations, new insights and hidden patterns in data can be detected. The motive of data visualization is to detect patterns, trends and correlation between different data sets which can’t be studied otherwise from data in simple (non-graphic) form. It helps users gain a better understating of the market’s current situation and evaluate customer’s needs. Also, an enterprise can evolve through new strategies and techniques to enhance and foster their business. And this is precisely why all the data science software companies focus on making their BI tools best in data visualization capabilities as it helps in unveiling the hidden information in the huge reservoirs of raw data. Top BI tools for Data Visualization Let’s start with our discussion on the top BI tools for data visualization for 2019. We will discuss about the leading players in the realm of BI such as Microsoft Power BI, Tableau, QlikView and Qlik Sense. These BI tools are listed by Gartner’s Magic Quadrant for Analytics and Business Intelligence platforms 2019 (a survey series published by Gartner) in different category. Tableau Tableau is a new age data analytics and business intelligence platform which offers flexibility and ease-of-use to its users. Tableau’s core strengths are considered to be its interactive dashboards, quick responsiveness and real-time data analysis features. It offers eye-catching graphics (visualizations) to represent your data set pictorially. Fundamentally, Tableau provides all the necessary capabilities for data extraction, processing, representing and sharing the final reports/dashboards/worksheets with others. The primary reason for Tableau’s popularity is its easy drag-and-drop functionality to create visualizations. It is faster than other BI tools and is highly intuitive making it a perfect self-service BI tool. It also offers connectivity to a huge number of data and big data sources such as Oracle, Teradata, SAP HANA, MongoDB, Excel, Text files, JSON, Google Cloud, SQL, Hadoop, Amazon Redshift etc. It is not needed to buy connector license to connect to these data sources. Also, Tableau is designed for all kinds of users and does not require any specific skillset or knowledge to work on it. 
All types of users, from all over the enterprise can easily perform all the data analysis and visualization capabilities. I recommend you to explore this Tableau Tutorials Series to gain expertise in Tableau Features of Tableau Ask data Tableau prep conductor Tableau mobile for iOS and Android Connectors and connections Data sharing Install and deploy Design view and visualization Given below are some data visualization samples of a Tableau dashboard. 2. Microsoft Power BI Microsoft Power BI is a powerful data visualization tool and a very popular one. It is a cloud-based software which is available in two versions; Power BI Desktop and Power BI Mobile. Microsoft Power BI is well known for its easy-to-use functionality for data preparation and data visualization. Power BI comes with a lot of visualization features such as custom visualizations, creating visualizations using natural languages, having Cortana personal assistant etc. Microsoft Power BI offers connectivity to a wide range of data sources such as Oracle, IBM, SQL Server, Salesforce, Google analytics, Azure DevOps, Excel, text files, JSON, Zendesk, Mailchimp etc. In addition to this, integration with big data sources is also easy with the help of direct connections using web services. Learn everything about Power BI in just 4 weeks Have a look at some sample Microsoft Power BI apps using different kinds of visualizations in them. Features of Microsoft Power BI Access to on-premise and cloud-based data sources Intuitive and graphically rich visualizations Quick response to complex Mobile compatibility Easy insight (dashboards, reports etc.) sharing within the organization Publish data reports and dashboards to web Help and Feedback buttons Pattern indicators Informative and intuitive reports with Power BI Desktop You must explore these Power BI features in detail 3. QlikView QlikView is one of the leading BI tools according to the Gartner Magic Quadrant reports for 2019. QlikView provides in-memory storage feature which makes collecting, integrating and processing of data very fast. The reports are generated using visualization tools and the relationship between data is derived automatically by the QlikView software. In other words, QlikView is a data discovery tool that facilitates the creation of dynamic apps for data analysis. QlikView is predominantly a data discovery tool and so it has some distinct data visualization features. Data Discovery is a user-driven search for patterns and trends in data sets. It helps users to understand and see these patterns by providing visual aids like graphs, tables, maps etc. QlikView is also unique because of its flexibility, in-memory features and collaborative aids. Have a look at some sample QlikView apps using different kinds of visualizations in them. Features of QlikView Unique data discovery and global search Interactive visualizations Collaboration Absolute control over data Secure working environment Flexibility and Integrations Consistent Reporting It is the right time to upgrade your skills — Learn QlikView from Experts 4. Qlik Sense Qlik Sense is also a popular data analysis and visualization software. At its core it operates with an associative QIX engine. This engine enables the user to link and associate data from different sources to carry out analysis. Qlik Sense serves as a data analytics platform for a wide range of users i.e. from non-technical to technical users. Qlik Sense focuses more on data visualization as it has augmented graphics. 
However, in QlikView you can manipulate data in a lot of technical ways through scripting. If your motive of using Qlik Sense is visualizing and analysing data in the best possible graphics, then you have made the right choice. Qlik Sense provides a lot of flexibility to the users as they can carry out completely independent operations with the self-service visualizations and analysis. Also, they can be guided by the automated machine-guided analysis by the cognitive engine of Qlik Sense. Qlik Sense uses an Associative Model in which users are free to explore the vast and complex data and draw intuitive insights from it. Integrating large data files from multiple sources is possible in Qlik Sense. The clients can share data applications and reports on a centralized hub. Along with this, they can share secure data models, export the data stories etc. to enhance their business. Have a look at some sample Qlik Sense apps using different kinds of visualizations in them. Learn Qlik Sense to become a master of Business Intelligence Features of Qlik Sense Associative model Smart visualization and analytics Self-service creation Centralized sharing and collaboration Data storytelling and reporting App Mobility Data preparation and integration The QIX engine Enterprise governance and scalability 5. SAP Lumira SAP Lumira has also made its place in the list of top 10 BI tools. According to the Gartner’s Magic Quadrant for Analytics and Business Intelligence platforms 2019, SAP Lumira is categorized as a visionary BI tool having great potential. SAP Lumira is a self-service data visualization and analytics tool known for its ease-of-use and intuitive applications. SAP Lumira provides rich and interactive visualizations such as tables, graphs, charts, maps, infographs, etc. There are two editions of SAP Lumira based on the purpose of use; a Discovery edition and a Designer edition of SAP Lumira. In Discovery edition, you can create self-service data visualizations and publish them directly to the SAP BusinessObjects BI tools. Whereas, in the Designer edition, you can use these self-service visualizations to create detailed analytic applications. SAP Lumira is a user-friendly tool having a home screen where all the data sources are available. Input controls, so that users can work on the application freely. The application screen provides a single platform to create visualizations and applications using the imported data. Users can access real-time data such as governed data, Universe data, cloud data, metadata, data from big data sources etc. Have a look at some sample SAP Lumira apps using different kinds of visualizations in them. Connecting SAP Lumira to SAP HANA As SAP HANA is an in-memory database technology, data from it is taken into SAP Lumira for data visualization and analysis by users. SAP Lumira connects directly to SAP HANA using a JDBC connection (i.e. an OLAP connection). The language used for communication between SAP HANA and SAP Lumira is SQL. You can connect SAP Lumira to SAP HANA to use the data stored in HANA database. To connect to SAP HANA database, go to the File menu of SAP Lumira and add a new dataset by establishing a connection. The complete steps for connecting SAP Lumira to SAP HANA are given in a separate tutorial “Connecting SAP Lumira with SAP HANA”. Please refer to that. Once you establish a connection with SAP HANA and import the data set from it, you can create numerous visualizations in Lumira. 
You can select from a range of visualizations such as pie-charts, bar charts, tree-map charts, donut charts, heat-map chart etc. Features of SAP Lumira Easy application development for data visualization using low-level JavaScript programming. Template-based guided designing of data visualization dashboards. Access to Lumira apps through web and mobile platforms. Embedded visualizations and customizable extensions. Integrate data from multiple data sources. Integrate with SAP BusinessObjects BI tools for analysis. Create ad-hoc reports and dashboards Storytelling Although, Power BI, Tableau, Qlik Sense, QlikView are the leading BI tools for data analysis and visualization as of 2019. There are other tools in the BI space such as ThoughtSpot, Sisense, Salesforce, Looker, Domo, SAP (SAP Lumira) etc. that are proving their potential of being a perfect BI tool for data visualization. It is not too far in future, that there will be dozens of new BI tools available in the market. This will provide a wide range of options having advanced BI capabilities for the customers to select from. Hope you enjoyed reading this article.
https://towardsdatascience.com/top-5-bi-tools-that-you-must-use-for-data-visualization-7ccc2a852bd3
['Rinu Gour']
2019-05-23 03:50:25.089000+00:00
['Qlik View', 'Business Intelligence', 'Tableau', 'Power Bi', 'Data Visualization']
Interview with the BehaviourExchange team
1) What problem are you solving, why do you need blockchain technology, and what are the main pillars that define your company and differentiate you from your competitors? Online B2C businesses struggle with two major things: they don't get enough traffic to their websites and, even more importantly, they don't know who their visitors are in real time or what their demographic and psychographic characteristics are, such as gender, age, interests, etc. Therefore they cannot interact with their online visitors in real time or offer them products and services that fit their profile. BehaviourExchange will enable B2C online businesses to identify visitors the moment they enter their website, customize the website's content in real time, and engage with visitors proactively, offering them products and services that correlate with their needs and interests. Blockchain technology will help fuel growth by enabling BehaviourExchange to create visitor profiles faster, and, with the visitor's consent, profiles will be safely stored on the blockchain. BehaviourExchange will be the first profiling model to offer, in a decentralized way, services that have so far only been available as centralized profiling, to all online B2C businesses. It will also be the first profiling model to offer profile recognition and content customization at the exact moment a visitor enters the website. 2) What value are you adding to your industry value chain and which are the main obstacles to the success of your solution? Already in the Middle Ages, merchants wanted to know as much as possible about their customers in order to personalize their offer and consequently increase their sales and the efficiency of customer management. Nowadays, with the enormous amount of offers on the internet, truly knowing and engaging with customers has become a bigger challenge. Knowing and understanding the customer has become crucial for any B2C business. There are millions of B2C companies today that have difficulty getting to know their online customers. With BehaviourExchange they will be able to solve that problem and truly know who their visitors and (potential) customers are. BehaviourExchange will revolutionize the way companies behave and communicate with all of us today. 3) Why did you decide to launch an ICO and why do you need a public Token Sale? BehaviourExchange will use BEX tokens to stimulate the growth of the network. Our goal is to create a BEX token economy ecosystem which will include one billion visitor profiles on one hand and one million B2C companies using BehaviourExchange services on the other, connected within one business model. 4) How does your token function within the platform and why is it needed? How did you decide the total supply and distribution among stakeholders? BEX tokens are needed to stimulate the economic ecosystem that connects three parties. (1) Online visitors will be rewarded with BEX tokens for sharing their personal data, as will the websites that help us profile visitors. (2) B2C companies will be able to pay for BehaviourExchange services with BEX tokens and will be rewarded with discounts for doing so. (3) Online visitors will be able to pay for the services or products B2C companies offer with BEX tokens and be rewarded with discounts as well. BehaviourExchange will use BEX tokens to fuel both the website partner network and the number of active visitor profiles. The total token supply is based on our business plan.
Distribution of tokens: 67% of the tokens will be sold through the crowd sale campaigns, 15% will go to the founders, 10% will be used as a faucet to encourage the growth of the network, 5% to endorsers, supporters and advisors, and 3% to early contributors and legal funds. 5) Where do you see the value of your token in the medium to long term and the ultimate benefit for the token holders? Personal profiles are the currency of the future, and the BehaviourExchange business model creates a constant flow of, and need for, new profiles by connecting three parties: web visitors, B2C companies and websites. BehaviourExchange will provide benefits to each of the parties involved: - websites are offered the free service of exchanging traffic with other websites from the partner network to boost their reach - B2C businesses will understand who their visitors are in real time and will use our service to show the most appropriate product or service to each visitor - visitors will immediately find a product or service that fits their needs and won't be bombarded with irrelevant content/ads. One of the biggest benefits for token holders will be the ability to enjoy special offers from the companies that use BehaviourExchange services. Apart from that, token holders will have a better online user experience thanks to personalized website content. 6) Thinking about the future, what are your plans after ending the ICO? Are you afraid that the volatility of the cryptocurrency market might affect the economy of your project in any way? Very soon after the ICO ends we will enter the B2C market with BEX services, which will enable the use of the BEX token. The advantage of the BehaviourExchange project is that it's not only an idea on paper but an already working platform with 1.5 million profiles and a beta version of the final product around the corner. That is why we are very calm and certain that our project, and therefore its economy, won't be affected by the volatility of the cryptocurrency market. By the end of 2018 we are planning massive growth of the BEX token economy and its functionality, and global expansion in 2019.
https://medium.com/crypto-unveil/interview-with-the-behaviourexchange-team-9945fb392dfa
['Crypto Unveil']
2018-04-19 22:13:41.698000+00:00
['Blockchain', 'ICO', 'Token Sale', 'AI', 'Ico Interviews']
Data extraction from XML using Cloudant database view and expose it as a service
In this article, we will discuss how we could easily extract data from the XML file and expose it as a service using the IBM Cloudant view. Prerequisites An active IBM Cloudant account Basic JavaScript knowledge Knowledge in Java or any other language to work with the XML file Introduction We all have dealt with XML files and used numerous tools to parse XML and extract information out of it. However, the code to extract information could get complicated while dealing with XML with complex structures. In this article, we will discuss how we could use the IBM Cloudant view to simplify XML data extraction and also expose it as a service. Let’s demonstrate this through an example! Note: For this article, we would be using Java to convert the below XML file to JSON first and store it in the Cloudant database. You could use any other language of your choice to accomplish the same. Sample XML File used for this tutorial: Click to download Overall Process Convert the XML to JSON and store it in the Cloudant database Configure Cloudant view to extract data and expose it as a Service Objective: The attached XML file contains country names and their respective codes in multiple languages. We would extract the list of country names and codes in the English language and expose it as a service using Cloudant views. Step-1: Convert the XML to JSON and store it in the Cloudant database Step-i: Create a Java project using any IDE. e.g. Eclipse Step-ii: Add the json jar and commons-io jar in the Java Build Path. Step-iii: Create a Class “CreateJSON” and paste the code given below. Note: To keep the tutorial simple, we would use the below java code to convert the XML to JSON format and store it in a File. Then we would copy the content of the file to create a database document. This could also be done programmatically. package com.test; import java.io.BufferedOutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.OutputStreamWriter; import org.apache.commons.io.IOUtils; import org.json.JSONObject; import org.json.XML; public class CreateJSON { public static void main(String[] args) throws Exception { FileInputStream fis = new FileInputStream("src/Country_List.xml"); String xmlStr = IOUtils.toString(fis,"UTF-8"); JSONObject jsonObject = XML.toJSONObject(xmlStr); String data = jsonObject.toString(); String updatedData = data.replaceFirst("\"", "\"_id\":\"countrylist_id\",\""); OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream("src/Country_List.txt"), "utf-8"); writer.write(updatedData); if (writer != null){ writer.close(); } System.out.println(">>> Finished"); } } Step-iv: Open your Cloudant account and create a database in the name of sample_db. Step-v: Click on database sample_db > click on "New Doc" > copy the content of the Country_List.txt and paste it in the document and then click on “save” to create the document. Step-2: Configure Cloudant view to extract data and expose it as a Service Step-i: Create a view by clicking on the "New View" option on the plus symbol present in "All Documents" or "Design Documents" and let's give the Index name as "countries-list-us-en" and _design as "data". Step-ii: Add the below JavaScript in the "Map function" textbox. 
function (doc) { if(doc._id == "countrylist_id") { var length = doc.picklist.entry; for(var i in length){ var desclength = length[i].description; for(var j in desclength){ if(length[i].description[j].language == "en-US"){ emit(length[i].description[j].content,length[i].name); } } } } } Step-iii: Now click on the “Create Document and then Build Index” button. This would start the process of creating the view. Step-iv: Once the view creation is completed, click on the view name and again click on the JSON option(highlighted in red) as shown in the below image. Step-v: This would open a unique URL(as shown below) through which the data could be consumed. Here “key” represents the country name and “value” represents the corresponding two-digit code of that country. Note: Similarly, other views could easily be created for different languages available in the attached XML file showing similar data by making minor changes in the javascript code like below. function (doc) { if(doc._id == "countrylist_id") { var length = doc.picklist.entry; for(var i in length){ var desclength = length[i].description; for(var j in desclength){ // Changed language below to show data in Italian if(length[i].description[j].language == "it-IT"){ emit(length[i].description[j].content,length[i].name); } } } } } Summary As you saw in this article, with minimal coding we could easily extract data from XML and expose it as a service using IBM Cloudant’s inbuilt feature. This service which has a unique URL can be accessed through REST-based GET call and make it convenient for adopting applications to consume it.
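To illustrate the REST-based GET call mentioned above, here is a minimal Java sketch that reads the view. The path /sample_db/_design/data/_view/countries-list-us-en follows the standard Cloudant/CouchDB view addressing scheme using the names created in this tutorial; the account host format and credentials are placeholders and may differ depending on how your Cloudant instance is provisioned.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CountryViewClient {
    public static void main(String[] args) throws Exception {
        // Placeholder account and credentials — replace with your own Cloudant details
        String account = "YOUR_ACCOUNT";
        String credentials = Base64.getEncoder()
                .encodeToString("YOUR_USERNAME:YOUR_PASSWORD".getBytes(StandardCharsets.UTF_8));

        // Standard Cloudant/CouchDB view path: /<db>/_design/<design-doc>/_view/<view-name>
        String viewUrl = "https://" + account + ".cloudant.com"
                + "/sample_db/_design/data/_view/countries-list-us-en";

        HttpRequest request = HttpRequest.newBuilder(URI.create(viewUrl))
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The body contains {"total_rows":..., "rows":[{"key":"<country name>","value":"<code>"}, ...]}
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}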
https://debatosh-tripathy.medium.com/data-extraction-from-xml-using-cloudant-database-view-and-expose-it-as-a-service-d2fb4b54132d
['Debatosh Tripathy']
2020-11-29 06:40:39.241000+00:00
['Couchdb', 'JavaScript', 'Cloudant', 'Java', 'Data Extraction']
Combining Data Science and Machine Learning with the Aviation Industry: A Personal Journey through a Capstone Project (Part I)
Combining Data Science and Machine Learning with the Aviation Industry: A Personal Journey through a Capstone Project (Part I) Christopher Kuzemka Follow Jul 22 · 21 min read Inspiration and Blog Summary As a Mechanical Engineer turned Data Scientist, a large part of my time is spent on discovering ways in which both fields can be combined for the greater good. In my personal opinion, the mechanical engineering industry is in severe need of reform, through a manner that allows new and passionate graduates to directly implement their knowledge into industry, without the need for a PhD. Through this capstone project, I attempted to showcase such connections between engineering and data science by creating a potential business model used in aviation, by implementing a machine learning algorithm to solve a real-world problem, and by showcasing what issues are present when conducting such study during a pandemic. In this two-part series, we will uncover what is a standard data science workflow, research the airline and aviation industry, navigate through problems within our workflow in Python, expose ourselves to data collection APIs, and add as much engineering thought relevant to solving the problem expressed here. This series will read as a very personal story regarding the stress dealt with the project and what solutions were implemented to navigate around various obstacles. In this particular blog, we will discuss the thought process used in understanding the problem, discuss the APIs used as tools for data collection, discuss regime change, as well as showcase the process of data cleaning. Let’s begin by addressing the issue. This Current Aviation Study Consider the situation in which we as data scientists are approached by a business to answer the question: “What is the minimum cost threshold an airliner can charge their passengers per flight and how can we make a model that discovers this?” It isn’t uncommon for data scientists to be contracted out for freelance work; in this case, a startup business is trying to set itself up within the airline industry — a very unfortunate time to start up with the current pandemic….more on this later. The details on whether this business is an airliner or a third-party service working in conjunction with an airliner are unimportant, at the moment. The problem in its entirety is made up, which allows us some creativity in considering a solution. So, what should a data scientist consider when approached with this question? I would think the most direct approach in figuring out the minimum cost threshold is to analyze different flights that currently fly with different airlines. Every day, thousands of flights transpire above our heads around the world, which leaves us a plethora of data to work with in finding a solution. It may be possible to collect different features from these flights and observe different patterns across them, which may play a role in how airfares are determined for these flights. Our target label will be the price, implying that the models we plan to use here are supervised machine learning models. Our label, measured in U.S. dollars, is considered to be continuous — not in the infinite & uncountable mathematical sense, but in the computer scientist sense where there exist so many steps between a range of discrete values that it can be treated as continuous — which will ultimately mean that regression models will be best implemented. 
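To make that concrete, here is a minimal sketch of the kind of supervised regression setup this implies. The feature names and values below are made-up placeholders purely for illustration, not the feature set used later in the project.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical flight features — the real feature set is decided later in the project
flights = pd.DataFrame({
    "distance_miles": [2446, 733, 1745, 954],
    "duration_minutes": [355, 125, 260, 165],
    "seats": [160, 143, 180, 128],
    "price_usd": [189.0, 79.0, 142.0, 99.0],   # the continuous target label
})

X = flights.drop(columns="price_usd")
y = flights["price_usd"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestRegressor(random_state=42)
model.fit(X_train, y_train)
predictions = model.predict(X_test)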
In working with pricing data and simultaneously addressing a business, we will want to optimize for a metric that makes most sense in describing the performance of the model. For this study, we will resort to using the mean absolute error due to its strength of being expressed in original units of our label and due to its resistance to being heavily affected by outliers. Other regression metrics could be used for optimization, but a data scientist should consider who his or her audience is and how much sense it will make to describe performance in a metric without any tangible significance; imagine using the mean squared error and expressing your model’s performance in squared dollars — doesn’t make much sense to speak like this, does it? Now, what should a mechanical engineer consider when approached with this question? I would focus on the details of the project. It may be worth looking into the distance traveled on flights, types of planes used, altitudes, weather patterns, time durations, travel trends throughout the year, and airlines. I would dive deeper, looking into technicalities such as force and energy output of the engines, temperature and pressure states across each cycle stage of the plane’s engines, changing fuel-mass ratio of the plane during a flight, average mass of passengers occupying the plane, changing air density of the surrounding environment, tail-wind effects, average fuel-burn rate during a plane’s taxi-time, and size of the plane. I would go further into analyzing other varying costs such as pilot and staff salary, maintenance cost, fuel-pricing, lawsuits, sponsoring partners, and stock trends. How much time do we have for this study? When does this business need an answer by? “This business needs a working proof of concept within 30 days.” This is good to know when considering the scope of work present. The total project has the ability to scale astronomically with just the above possible features and methods mentioned; it would not be known until later if all of the features expressed here would be needed for our models, as some models perform worse when there are too many features and are considered to be overfit. This time constraint is not enough time to make a thorough and effective investigation, but is enough to takeoff a working model and initial approach. It is now within our power to decide how we would like to approach the fake business. We will go over ways in which I approached this problem. We will see how the pandemic plays a role in our analysis and discover ways to maneuver around the data domain hurdles present. What Access do we have to Data? The FlightXML API It was discovered that FlightAware’s FlightXML API would be a great tool to use to access the data we would need for this study. FlightAware.com is considered to be the most popular flight tracking website out there, providing details on all flights that have flown in recent history. It offers a very comprehensive API, used extensively by real businesses, to help individuals collect flight data in a very programmatic way. The tradeoff to using this API is that it is not a free tool to use. This will add some complexity to the study as we will want to gather as much data possible to help legitimize our study, while simultaneously keep the cost of the API’s usage down. 
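To give a feel for what programmatic access looks like here, below is a hedged sketch of a FlightXML query using Python's requests library. The endpoint pattern, the Enroute method name and its parameter names are my assumptions based on the FlightXML 2 documentation rather than anything taken from this article, and the credentials are placeholders; double-check them before running, since every request is billed whether or not it returns useful data.

import requests

FXML_URL = "https://flightxml.flightaware.com/json/FlightXML2/"   # FlightXML 2 JSON endpoint (assumed)
USERNAME = "YOUR_FLIGHTAWARE_USERNAME"                             # placeholder credentials
API_KEY = "YOUR_FLIGHTXML_API_KEY"

def fxml_get(method, **params):
    """Issue one billable FlightXML query and return the parsed JSON payload."""
    response = requests.get(FXML_URL + method, params=params, auth=(USERNAME, API_KEY))
    response.raise_for_status()
    return response.json()

# Example: the next 15 flights en route to JFK — method and parameter names
# ("Enroute", airport/howMany/filter/offset) should be verified against the docs.
enroute = fxml_get("Enroute", airport="KJFK", howMany=15, filter="airline", offset=0)
print(enroute)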
Through the FlightXML API, we should have access to: weather data, altitude data, velocity data, aircraft type, flight schedules, distances, airlines, flight codes, origins, destinations, flight details on services, number of seats per aircraft, flight durations, etc. The Skyscanner API Skyscanner.com is a travel website meant to help people find the most affordable prices for hotels, flights, rentals, and more. Their [API] can be offered through approved developer consent. However, to eliminate the time spent negotiating with developers in establishing a secure and personalized API key, RapidAPI offers free use of the Skyscanner API, by acting as a host website for query searching. RapidAPI was extremely useful in that it only required a Google account to start using the API and offered the ability to perform “key rotations” — the act of discarding compromised API keys. There were some minor limitations in using RapidAPI’s access to Skyscanner’s API (such as the inability to access all of the commands available in the API’s documentation), but this did not prevent the ability to search prices of flights. What Data do we have Access to? The Skyscanner API It took approximately 3–5 days to navigate the documentation of the Skyscanner API and understand its limitations. A programmatic way was needed to access various prices for flights and it didn’t take long to discover access to previous flights was not easily accomplished. Searching through different sites manually for “yesterday’s flight” was unsuccessful as no listings were available — perhaps this saves some server space for the airline business. The irony in this is that one can honestly plan a trip, months to years in advance, but will never be able to observe past flight pricing individually without having it being tracked some other way. This served as a complication towards our study because we had the intention of using past flight price trends to associate with past flight data, but even the API we were reliant on proved unsuccessful in this. The alternative suggestion was to take the flights gathered from the FlightXML API and replicate as many conditions as possible for each pricing in the future — that is, to make sure weekdays match up, months match up, airlines match up, etc. However, this approach also proved to be time-wasting as some flights gathered and observed within the FlightXML API could not be found across similar time periods, airlines, or on similar weekdays. Furthermore, the Skyscanner API lacked the ability to provide ticket pricing on a class basis, which hindered our ability to perform a more comprehensive study. The Skyscanner API will only search for the lowest price available, which was not the worst limitation as technically the solution we must provide is a minimum cost threshold model. This limitation is believed to be the reason why we would not be able to find class differentiation in pricing. In some cases, more than one price would be returned for a flight, but many of these prices are close together that their differences were negligible across carriers. The FlightXML API It took approximately 7+ days to learn how to programmatically navigate through the FlightXML API, understand its limitations, and discover solutions. To start, it was reminded that each search we performed costed “parts of a penny”, keeping us conscious that the tool must be navigated through efficiently. 
While “parts of a penny” does not sound like much money, it adds up when query searches are iteratively processed, and different sized data batches are always returned — often times, returning thousands of data points per query search. Costs and time penalties are typically expected with different APIs, as a user’s access is traffic and wear on a server. Our testing of the sophisticated API showed some searched data would be overlapped, some outputs gained from certain methods could not be pipelined as inputs to other methods, some searches returned null and still charged us money, some methods were deprecated, and some searches returned data which proved to be useless for our own purposes. To our dismay, we discovered that FlightXML API did not have the ability to search flight data past three months of the current date — at least with the non-subscription-based license we were paying for. This heavily limited the capabilities of our study as it is preferred to gather as many different flights as possible across a larger date range to show patterns with seasonality — no longer an available approach for us, here. To make matters more complicated, a different method available from the API, which could allow us to gather thousands of data points on individual altitudes, velocities, latitudes, longitudes, and position/technicalities of a previous flown flight, ONLY will search as far back as two weeks! If we wanted to capitalize on the ability to incorporate a thermal engine analysis in our model, we would have to sacrifice all aspects of a seasonal approach on our study. The initial approach taken was to choose six different routes, domestically within the US, and to analyze over a smaller timeframe. Even with these six flights chosen, there technically should be enough data to gather considering the six flights being analyzed are popular routes. According to Flights.com, “there are, on average 2,197 flights per month” made between JFK (John F. Kennedy International Airport) and MIA (Miami International Airport). This initially was one of the routes we planned to search. This meant we should be finding about 550 flights per week and 74 flights per day. When testing this claim, it was found that within a two-week span in the early chunk of May 2020, only about 14–15 flights were documented. To add more complexity, the method we could use to gather speed and altitude data on these flights would only return data for three of these flights! The reason behind this was because we were initially using an idealized departure coordinated universal time as an input for such method. Unfortunately, the method we wanted to use would only work best if we knew the actual departure coordinated universal time of the flight (and we didn’t have any immediate approach on how to programmatically search for actual departure times, due to unknown delays). To finalize the complexity, it was then found that the past days we wished to match up with future pricing on the Skyscanner API (parsing through to the exact date a year from the searched flight) would not return any pricing. This was thought to be due to differences in airlines flying for these future dates and possible differences in weekdays. The above mentions on data access highlights only some of the complications found across both APIs integrating with one another. 
There were a few more aspects playing a role across the APIs, such as dealing with codeshare flights as duplicated searches and converting unique ICAO codes to IATA codes for airports and weather stations. While there may be a lot to juggle here, I have done my best to discuss most of the issues, but the largest one that stood out to me were the lack of flights within a two-week span on a popular route. Through some more testing, it was discovered that many flights were lacking worldwide, which heavily hindered our ability to gather all of the data needed for the study. It also increased usage cost of the FlightXML API as we were still being charged for searching through empty lists of flights — and searching through these empty lists was necessary to continue our study. How Outside Elements Affect the Data Domain: The Covid-19 Pandemic When historians analyze the year 2020, one of the biggest topics of discussion will be on the Covid-19 pandemic. This pandemic originated in Wuhan, China towards the end of 2019 and has persisted as a global threat since then. It affected a large majority of the world and continues to play a role in the way people live their lives, today. Ultimately, we have what is considered to be a “regime change”, which is a dramatic change to a way of life in society. Regime changes are very important to recognize in data science as they heavily affect the performance of machine learning models. If you are currently living in the pandemic, the way I am, you may notice how there are a large majority of individuals wearing masks, how establishments ask their customers to socially distance within their stores, and how more people have been carrying hand sanitizer with their belongings. Months before Covid-19 gained its notoriety, machine learning models, predicting the purchasing rates of masks and hand sanitizer, have been trained under a completely different regime; just as machine learning models which study the foot traffic of people in restaurants, subways, and in tourist attractions have been trained under a different regime. They are all inaccurate now with Covid-19 inducing a dramatic regime change in the previous patterns recognized. Observe the below figure: The blue line represents the total travelers tracked through TSA checkpoints around the world, from early March to late May. The foot traffic had decreased immensely when compared to the orange line, which showcases the TSA-tracked foot traffic a year earlier (same weekday). As a result, the airline industry in its entirety has suffered a major loss in profit. For solidified context on how individual airports were affected by the pandemic, refer to the below figure: This decrease in foot traffic has much to do with the panic surrounding the pandemic as well as closures and limitations of travel. Coincidentally, the FlightXML API’s range of searches will almost entirely capture this lower traffic trend, leaving us with not much choice other than to conduct such study through this regime change. The benefit from doing this is that we will have a model working for a regime that has severely limited the airline business’ outreach; in case there were to be another possible world-wide disaster directly affecting the airline business, we technically would have a model that can predict through such disaster. 
The Approach Taken With all of the above considerations, the final approach taken to efficiently gather data and create a model was as follows: Analyze airports of our choosing and all the flights currently flying around there. The following airports chosen were: — — New York John F. Kennedy International — — — — Chicago O’Hare International — — — — Los Angeles International — — — — Houston George Bush Intercontinental — — — — Miami International — — — — Hartsfield-Jackson Atlanta International — — — — Portland International — — These airports were chosen for their spread across the United States. With each search conducted through the FlightXML API, the 15 next flights approaching each airport were gathered. This search occurred at night, on May 27th 2020. Ultimately, we gathered 60 different flight combinations. With each individual flight gathered, we would programmatically utilize the associated flight number as a unique identifier to gather data on its past scheduling for the month of May. By iterating in 8-hour increments on a flight number with the associated destination airport, we would be able to see all the times in which such flights occurred throughout the month of May. This helps us scale up our data collection process, by receiving a month’s worth of flight schedules for different flight combinations observed. We ultimately gathered about 5,000 individual flights. From these flights, we would use ICAO flight code combinations and convert to IATA code flight combinations to programmatically gather prices of flights in the Skyscanner API (the Skyscanner API only accepts IATA code inputs). Finally, we would have multiple dataframes constructed from the data collections to combine into one dataframe meant for modeling. We would hope that this method can match several different prices along with similar flights while, also match as many flights with pricing as possible in way that does not force us to drop data. Data Cleaning Nuances In every data scientist study, cleaning data will most often be the most time-consuming process undertaken. Data cleaning is important, in that it’s the main way to utilize any useful tools and methods available in your programming language; without proper input for methodologies, you’ll encounter a lot of error. We won’t go over all of the cleaning necessary with this entire project other than highlighting some of the most arduous cleaning processes observed. Overall, a large amount of time was spent on cleaning the data. More of this cleaning process can be seen when looking at the source code folder of this project on my Github. In particular, we will look at some of the difficulties observed in cleaning the pricing dataframe gathered from Skyscanner API. The Skyscanner API was extremely useful in being able to provide us a variety of quotes for the study, but the manner at which the data was received was not ideal for modeling whatsoever. See the below two images respectively reflecting the head and tail of the dataframe: Self-created image of a pricing dataframe’s head. Referenced source code. Self-created image of a pricing dataframe’s tail. Referenced source code. What we can take away from here is that some of our collected data returned empty. The “ValidationErrors” column would be an indicator that there was an error in receiving prices, but if this column is null, then it means a successful search was conducted. 
Ultimately, these empty rows stand for a search which was successfully conducted to find nothing (quite a useless feature in my personal opinion). Simply removing the empty lists was no simple task either. Observe the following figure while pondering on the above indexed rows 715 and 717, for the next read: Self-created image of a pricing dataframe’s head. Referenced source code. The above output showcases information regarding this pricing dataframe. Each row, as shown in the above image, under the “Dtype” column for the dataframe information table, shows a data type of “object” (except in the unnecessary “Unnamed: 0” column, which is an “int” or integer). Further investigation has classified each cell in this dataframe as a “string”. What we ultimately discovered in this cleaning process was that our pricing dataframe had each cell interpreted as a list of dictionaries, with keys meant to be interpreted as strings, where all such data types were entirely interpreted as strings. If you are an experienced programmer, you might certainly want to learn how to navigate through such “toxic dataframe” (as my data analytics friend puts it). Once the “empty string lists” were removed and all other nulls were removed, we ran the below function to convert the strings into what the interpreted datatypes should be: import pandas as pd #imports the pandas package import ast #imports the ast package ## Converts the strings as literal expressions def take_as_literal(dataframe): for column in dataframe.columns: dataframe[column] = dataframe[column].apply(lambda element: ast.literal_eval(element)) #utilizes the ast package return dataframe Self-created image of cleaned literal expressions in pricing dataframe. Referenced source code. The above image showcases our new dataframe’s structure. We have taken each string as a literal expression through the ast package and now have a dataframe of lists containing dictionaries. Unfortunately, we seem to only have 110 rows of pricing. This is a strong foundation for future error in our model as we are now needing to consider what data have we lost in our cleaning and in our data gathering. Until we successfully merge our dataframes, we will not be able to recognize if we will have to drop more rows in our features dataframe for our flights. However, we can luckily at least recognize this dilemma by only noting the dramatic size different between our 5,000 row features dataframe and our 110 row pricing dataframe. Furthermore, there is some hope in how to salvage this study and how to navigate around this issue. Note the next two images below regarding the pricing dataframe’s dynamic sizing: Self-created image of each element’s contents in a single row of the pricing dataframe. Referenced source code. Self-created image of each element’s contents in a different row of the pricing dataframe. Referenced source code. Each individual row showcases different sized lists of dictionaries as elements. Each row is specific to an individual flight within a certain month. The above images show that we encounter multiple “low” prices for such flights across multiple carriers. The lists showcasing quotes information and carrier information dynamically change length throughout our dataframe. This may be enough to salvage our study as we are only relying on there being more than 60 flight combinations to keep our features data. If we are lucky, we would find many quotes per list for this pricing dataframe. 
The below function was used to unpack the dataframe and create a final dataframe to be merged with our features for modeling. We also ultimately utilized the extra informational details in this dataframe as features to add to our final model dataframe. We need not worry about this acting as a form of data leakage, as such features are also considered to be the types of data we could possibly find already within our 5,000-row flights dataframe.

def create_targetframe(price_dataframe):
    quotes = pd.DataFrame(columns = ['QuoteId', 'MinPrice', 'Direct', 'CarrierIds', 'OriginId', 'DestinationId', 'DepartureDate'])
    places = pd.DataFrame(columns = ['PlaceId', 'IataCode', 'Name', 'CityName', 'CountryName'])
    carriers = pd.DataFrame(columns = ['CarrierId', 'Name'])

    ## Makes the Quotes Dataframe
    for i in range(len(price_dataframe)):
        for j in range(len(price_dataframe.loc[i, 'Quotes'])):
            quoteid = price_dataframe.loc[i, 'Quotes'][j]['QuoteId']
            minprice = price_dataframe.loc[i, 'Quotes'][j]['MinPrice']
            direct = price_dataframe.loc[i, 'Quotes'][j]['Direct']
            carrierid = price_dataframe.loc[i, 'Quotes'][j]['OutboundLeg']['CarrierIds']
            originid = price_dataframe.loc[i, 'Quotes'][j]['OutboundLeg']['OriginId']
            destinationid = price_dataframe.loc[i, 'Quotes'][j]['OutboundLeg']['DestinationId']
            departuredate = price_dataframe.loc[i, 'Quotes'][j]['OutboundLeg']['DepartureDate']
            individual_quotes_dict = {'QuoteId': quoteid, 'MinPrice': minprice, 'Direct': direct, 'CarrierIds': carrierid, 'OriginId': originid, 'DestinationId': destinationid, 'DepartureDate': departuredate}
            individual_quotes_df = pd.DataFrame(individual_quotes_dict, columns = individual_quotes_dict.keys())
            quotes = pd.concat([quotes, individual_quotes_df])

    ## Makes the Places Dataframe
    for i in range(len(price_dataframe)):
        for j in range(len(price_dataframe.loc[i, 'Places'])):
            placeid = price_dataframe.loc[i, 'Places'][j]['PlaceId']
            iatacode = price_dataframe.loc[i, 'Places'][j]['IataCode']
            name = price_dataframe.loc[i, 'Places'][j]['Name']
            cityname = price_dataframe.loc[i, 'Places'][j]['CityName']
            countryname = price_dataframe.loc[i, 'Places'][j]['CountryName']
            individual_places_dict = {'PlaceId': placeid, 'IataCode': iatacode, 'Name': name, 'CityName': cityname, 'CountryName': countryname}
            individual_places_df = pd.DataFrame(individual_places_dict, columns = individual_places_dict.keys(), index = [j])
            places = pd.concat([places, individual_places_df])

    ## Makes the Carriers DataFrame
    for i in range(len(price_dataframe)):
        for j in range(len(price_dataframe.loc[i, 'Carriers'])):
            carrierid = price_dataframe.loc[i, 'Carriers'][j]['CarrierId']
            name = price_dataframe.loc[i, 'Carriers'][j]['Name']
            individual_carriers_dict = {'CarrierId': carrierid, 'Name': name}
            individual_carriers_df = pd.DataFrame(individual_carriers_dict, columns = individual_carriers_dict.keys(), index = [j])
            carriers = pd.concat([carriers, individual_carriers_df])

    ## Cleans the Quotes DataFrame
    quotes.drop_duplicates(inplace = True)
    quotes.reset_index(inplace = True)
    quotes.drop(columns = 'index', inplace = True)
    quotes.rename(columns = {'CarrierIds': 'CarrierId'}, inplace = True)

    ## Cleans the Places Dataframe
    places.drop_duplicates(inplace = True)
    places.reset_index(inplace = True)
    places.drop(columns = 'index', inplace = True)
    places['OriginId'] = places['PlaceId']
    places['DestinationId'] = places['PlaceId']
    places.drop(columns = 'PlaceId', inplace = True)

    ## Cleans the Carriers Dataframe
    carriers.drop_duplicates(inplace = True)
    carriers.reset_index(inplace = True)
    carriers.drop(columns = 'index', inplace = True)
    carriers.rename(columns = {'Name': 'CarrierName'}, inplace = True)

    # Merging of three dataframes
    quotes = pd.merge(quotes, right = places, how = 'inner', on = 'OriginId')
    quotes.drop(columns = 'DestinationId_y', inplace = True)
    quotes.rename(columns = {'DestinationId_x': 'DestinationId', 'IataCode': 'OriginIataCode', 'Name': 'OriginName', 'CityName': 'OriginCityName', 'CountryName': 'OriginCountryName'}, inplace = True)
    quotes = pd.merge(quotes, right = places, how = 'inner', on = 'DestinationId')
    quotes.drop(columns = 'OriginId_y', inplace = True)
    quotes.rename(columns = {'OriginId_x': 'OriginId', 'IataCode': 'DestinationIataCode', 'Name': 'DestinationName', 'CityName': 'DestinationCityName', 'CountryName': 'DestinationCountryName'}, inplace = True)
    quotes = pd.merge(quotes, right = carriers, how = 'inner', on = 'CarrierId')

    return quotes

When unpacking both of our pricing dataframes (one of which is not shown) and performing such cleaning, we ultimately found there to be 6,350 new rows of data — that is, 6,350 individual quotes. This is less than what was ideally intended for our study, but is workable in that it is larger than our features dataframe. The more quotes we could gather, the more meaningful our study is, as we would then have more data to use for regression modeling. Too few quotes could force us to resort to classification modeling, which would not make much sense with regard to meaningful price prediction.

Bootstrapping

Bootstrapping is a useful practice often used in data science for understanding metrics such as standard error, for constructing confidence intervals, and for performing hypothesis testing on data. It is an extremely helpful technique. It resamples data randomly with replacement to generate multiple datasets, all stemming from a root data set. This technique can be illustrated in the below image of a fake dataset: Note that each column represents the original dataset in terms of length and elements used. While it may not contain all of the same unique elements within the original, it will never contain unique elements separate from the set of original unique elements — that is to say, the unique elements of each bootstrapped dataset form a subset of the original dataset's unique elements. For our case, it was most wise to implement a bootstrapping technique on our model dataframe. What we ultimately had was a features dataframe and a target dataframe with no simple way of determining the meaningful count of different quotes per flight. In order to take full advantage of the various quotes which may exist for similar flights, a for loop was implemented that repeatedly changed the order of prices observed in our target dataframe while following along the same index as our features dataframe, creating a bootstrapped frame of prices of equal size to the flights dataframe. This adjusted quotes dataframe was now ready to be fully integrated with our features and allowed us to begin modeling and data observation.

Final Words & Observing the Model Dataframe

Before we end this blog, and save the modeling and observation for the concluding Part II of this series, observe the final model dataframe shown below: Self-created image of the model dataframe's head. Referenced source code. Self-created image of the model dataframe's tail. Referenced source code. Self-created image of the model dataframe's columns. Referenced source code.
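As a rough sketch of the resampling idea described in the Bootstrapping section above (variable names are illustrative; the actual implementation in the source code also matched quotes to their corresponding flight combinations, which this simplified version does not attempt), equalizing the two dataframe lengths could look like this:

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # seed chosen arbitrarily for reproducibility

# quotes_df: the unpacked quotes (about 6,350 rows); flights_df: the flight features (about 5,000 rows)
sampled_positions = rng.choice(len(quotes_df), size=len(flights_df), replace=True)  # resample with replacement
bootstrapped_prices = quotes_df.iloc[sampled_positions].reset_index(drop=True)

# Stack the resampled prices alongside the flight features, one price row per flight row
model_df = pd.concat([flights_df.reset_index(drop=True), bootstrapped_prices], axis=1)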
The dataframe is already quite large and we won't be able to show the entire table in a single image, but the third image above (the list of column names) shows all the columns we included for our exploratory data analysis and our modeling. We will ultimately analyze, and potentially use, all of these features to understand the patterns observed in the "MinPrice" column. Our final modeling dataframe will include the one-hot encoding of certain columns containing strings, which will astronomically increase the size of our dataframe to around 4,100 columns. This one-hot encoding will be necessary for implementation into models, as all string-object data types will then be expressed as numerical inputs for the modeling. To conclude, I would like to give a final thanks to everyone who followed me on my journey through this capstone. I thank you all for reading Part I of this journey, and please be on the lookout for Part II of this series. In Part II, we will dive into the visual analysis and supervised machine learning modeling of our data, which includes KNN Regression, Decision Tree Regression, as well as Ensemble Modeling. I originally intended to showcase those conclusions within this blog, but ultimately decided that breaking the read into two segments would be best so as not to overwhelm ourselves with the information intake. Leaving off on this cliffhanger, ponder which models could best be implemented in our study, how we can continue to navigate the previously discussed problems, and hypothesize which model would be best at minimizing the mean absolute error of its performance (hint: it will not be Linear Regression, which is often shown to be a very reliable regression model for pricing data). If you wish to explore the project in its entirety, the source code is linked here, on my GitHub. Note that some parts of this project are continuously being improved upon and it will certainly be conducted again in a post-Covid world.
https://medium.com/analytics-vidhya/combining-data-science-and-machine-learning-with-the-aviation-industry-a-personal-journey-through-132e59d8380b
['Christopher Kuzemka']
2020-07-24 13:57:32.043000+00:00
['Machine Learning', 'Aviation', 'Engineering', 'Covid 19', 'Data Science']
A Short Introduction to VADER
nltk.sentiment.vader Valence Aware Dictionary and sEntiment Reasoner (VADER) What it is. VADER is a module in the nltk.sentiment package of the NLTK Python library that was specifically created to work with text produced in a social media setting; however, it of course also works with language that originates in other contexts. VADER is able to detect the polarity of sentiment (how positive or negative) of a given body of text when the data being analysed is unlabelled. In traditional sentiment analysis, the algorithm is given the opportunity to learn from labelled training data. A classic example would be predicting the star rating of a movie review based on the written review of a given critic. The star rating would be the target variable and the text would provide the predictor variables. In the sentiment analysis of song lyrics, we do not have any labelling. The lyrics are not rated on a scale of 0 to 10 (0 being negative and 10 being positive). So how is VADER able to gauge sentiment, then? VADER uses a lexicon of sentiment-related words to determine the overall sentiment of a given body of text. Below is an example of how the lexicon is structured, with each word having a valence rating: VADER built this labelled lexicon using Amazon's Mechanical Turk, which is a crowdsourcing platform that pays 'crowdworkers' to perform tasks en masse, resulting in an impressively efficient method for doing this. Preprocessing. Nope. The incredible thing about VADER is that it doesn't require a great deal of preprocessing to work. Unlike with some supervised methods of NLP, preprocessing necessities such as tokenisation and stemming/lemmatisation are not required. You can pretty much plug in any body of text and it will determine the sentiment. VADER is even smart enough to understand the valence of non-conventional text, including emoticons (e.g. :-( ), capitalisation (e.g. sad vs SAD) and extended punctuation (e.g. ? vs ???). This is what makes the module so good at analysing social media text. Additionally, VADER removes stop words automatically, so there is no need to do so yourself. Working with VADER We can use pip install nltk in the command line to install the library on our device. Then we need to import VADER into our programming environment using the first line of the code snippet below. We can then initialise the sentiment analysis object and use the polarity_scores() method to determine the sentiment of a string. The output of this is four normalised scores:
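A minimal sketch of that workflow (my own example rather than the original post's snippet; it assumes NLTK is installed and the VADER lexicon has been downloaded):

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-time download of VADER's sentiment lexicon

# Initialise the sentiment analysis object
sia = SentimentIntensityAnalyzer()

# polarity_scores() returns the four normalised scores: neg, neu, pos and compound
scores = sia.polarity_scores("I LOVE this song!!! :-)")
print(scores)  # a dict of the form {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

The neg, neu and pos scores are proportions that sum to roughly 1, while the compound score is a single normalised value between -1 (most negative) and +1 (most positive).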
https://towardsdatascience.com/an-short-introduction-to-vader-3f3860208d53
['Ravi Malde']
2020-07-06 15:22:21.163000+00:00
['Machine Learning', 'NLP', 'Spotify', 'Data Science', 'Music']
The welcome party recap Pt 1. So, for the sake of community member…
So, for the sake of community member that were not able to join the welcome party due one or two busy schedule or funny enough the connections(internet connection, you know), Here is a recap of what was covered and discuss in the welcome party, sit back, grab your pen and paper or stick note whichever way you want to put it and enjoy a recap. Okay, It is our first meetup and it is tagged Welcome Party. Prior to this participant (interested community members) already opt-in and an email detailing the rules and guide were sent out and the next line of action was also stated, and from the moment the email got delivered, participant are onboarding to the community with the next steps as highlighted in the email. The D-day later comes and all ready for the real welcome party. The party as fondly called commenced with the Lead, Adeniyi Temidayo welcome everybody, do some basic house rule, then highlight the agenda of the day, thus: i. Basic guideline ii. Introduction of the community leads iii. A brief introduction of the community iv. Some background development trend and evolvement v. clarity on some evolved buzzwords vi. Duties and what to expect going forward. I. Basic Guideline Then the guide serves as the outline that helps in handling and moderates the party. Adeniyi Temidayo relay the guideline, rules and regulation that will help put things in other in the community and things under check too. Part of the expectation of what the community aim to achieve given thus, Participants will be given access to good/best learning resources/material to help them understands the convergent area of data science and its technological implementation. Here are some housekeeping rules & guidelines, it also serves as an introductory note as well. Do your best to abide by these rules. And then this is further reiterate thus, We will have an hour for the session, 30min(clarification and Feedback) for the first day and an hour(Q&A, Comments and Discussions). Follow us via our social media handle. Schedule: Wednesday 5 pm and Saturday at 5 pm. Once again if you do have questions ahead of the day assigned for Question and Answer and you do not want to forget or have the privilege to jot it down, send to our DM or you tweet at us via twitter(@ds4africa). II. Introduction of the Community Leads The community leads were introduced one after another, thus Jolayemi Olubunmi, Joshua and oni Stephen and quickly add that they are ready and willingly drive the community and provide support all the way and across the board as regards the community. III. A brief introduction of the community: Over time we have heard of various different buzzwords around data which range from AI, ML, Neural network, and all sort, but then very few people know the meaning of these buzzwords and very very few people know the real major role that buzzword plays, hence, the need to go to the very root of all of these buzzwords and get the clearer picture and be knowledgeable about them, hence, the very need to start at the very basic of it all. in the course of the session, some things we experience daily and how those buzzwords play a role in them will be made clear without forgetting that we connect with them all along. 
We are the very person that can proffer a solution to our problem simply because we understand the problem better, and in other to do that we need to understand what tools can help in achieving this, and with the advent of technology some of the processes can as well be automated, so why not learn the tools and tech that can be used to achieved solution and leads more progress. These we deem fit to know and get clear and not waiting for the foreigner to come to explore our land simply because they have the right tool and are knowledgeable about it uses, then applying it on our raw materials. To not waste much time on this, the community is set to go from the very beginning and that is why you solve the basic mathematics to get everybody along, which means to a certain level we will have to solve some basics maths and take it up from their to how they apply and their application in our daily activities. several daily challenges that can be resolved will as well be highlighted and treated as a community because some of the solutions might be open-sourced too from our end and share with others to explore. IV. Some background development and evolvement: some 25yrs ago we have a job role of Statistician saddle with the responsibility of (i) Gathering of data and cleaning of the same data and (ii) Applying statistical methods. But as data growth evolves plus radical tech improvement 5yrs down the lane (20yrs now) we then have the job role changed to Data Mining these with the former job role then combine with extracting pattern from the data collected and thus it helps to make more accurate forecast, moving forward to 10yrs ahead which in addition to the evolvement of duties and tech improvement and data growth from data mining then, we now have Predictive Analytic Specialist that now add new mathematical and statistical method on the same data and the growth of it, hence, the new job role: Data Scientist while things still evolve and radical improvement of technology which happens drastically fast then the need for AI(Artificial Intelligent), Machine Learning (which aside pattern deals with modelling), and more and more… more light will be shed on some of these in details in the cause of subsequent meetups. Lets quickly add that meetup will go virtual for a while and when there is a need for a physical meetup you all will be informed ahead and for now we will remain virtual and keep to schedule. V: Clarity on some evolved buzzword: Due to time this section was limited this to these two words that are interchangeably used Analytics and Analysis these words are used in their literal English meaning but when it comes to the world of Data, which then becomes Data Analytics and Data Analysis they play distinctly different roles and in different spheres, lets touch a few areas on this. Remember we collect data, which can be simply named, age, gender, level, grade, or even from hospital blood groups, rhesus, or in the banking industries, demographics, and cut across different industries with a different set of datasets. These data that were collected can be Analyze (Analysis) since they are very large, by splitting them into different smaller chunk and study separately. 
these simply makes it clear that Analysis has to do with a past event which helps us to collect the data, and from the studying of the data we can be able to explain WHY there is a surge in admission in a certain year, why the rate of male applying for engineering reduced in the recent year, why the male gender are moving into banking and finance as a course, why banks are rolling out different saving plans for the teenagers, and as well explain HOW it all happens that way due to the outcome of the study of data collected. Summary: Analysis relies on past events (Data of event that happened), a study in chunks and helps in explaining the WHY it happens and How it happens Been conscious of time and considering other participants to quickly catches up when they come online, He quickly then wraps up with the Analytic part, Analytics, he then explains that it deals with the future exploration of potential events or put this way, Analytics explore future potentials based on the pattern observed from Analysis. Analytics also make use of the application of logical and components parts which is obtained in analysis, which means we have to have good reasoning in interpreting the outcome of the analysis, look for patterns that can be explored in future. if time permits in the next session we will look into the Qualitative and Quantitative aspect. To also ensure that participants were following and if it worth the time, He then asked if the participants gain something in this intro party? Then ask for feedback from the active participants to rate the session and in his word the way he puts it. Rate this session on a scale of 5 (1 not what I expect at all while 5 it worth it and more of such am expecting)? let quickly give a response to this in the next 2min.
https://medium.com/ds4africa/the-data-science-for-africa-welcome-party-recap-part-1-70ed920b65c8
['Temidayo Adeniyi']
2019-06-05 14:23:00.424000+00:00
['Analytics', 'Big Data', 'Data Analysis', 'Data Science', 'Statistics']
Why I am A Big Fan Of Colonizing The Ocean
Image Source: Canva Pro/Kamchatka One of the things I am big on for the future is colonizing oceans. In fact, for years I have been talking about oceanic colonization. There are ways I believe we can start ethically doing it, and it would be much easier than fully colonizing other planets. While one of my main wishes is for us to become an intergalactic species (as people like Elon Musk believes), I still think the concept of colonizing oceans should be explored. This may also help with the whole “limited resources and land space” concerns that scientists have. A big aspect in terms of my contribution to this is working on underwater wireless networks for around half a decade (at least). There were a series of telemetry solutions I have been developing in regards to making modules for underwater signals. This includes technology in regards to extending the signals, offsetting the doppler effect underwater, and connecting to sensory networks in observance of things ranging from harmful algae blooms to ecoli. Infact the project appropriately titled “Reinvent the Internet” talked about a whole new approach into garnishing signals underwater. An approach I detailed on this post here: Now outside of trying to do some dull deployment for testing signal strength in the pond, I actually have done variations of this that been quite more complex. In regards to actually testing latency and such, I already went over it in some of the above posts and project specifications. I entered this project in the InternetofH20 challenge and was one of the few losing teams, the same with GigabitDCx where I was a semi-finalist. However, I did notice that it is much harder as an individual to win a prize pool as opposed to a team. Not sure if that played a role in me losing, but it made me reiterate my focus on either collaborations or trying to get a grant. In regard to this technology, I think underwater wireless networks is a huge part of my fantasy in terms of oceanic colonization. I’m envisioning everything from specially designed underwater telecom towers to smart elevators, submarine cities and artificial land. It sort of brings an aquatic internet of things if you will. In regards to where I stand, outside of wireless networks and trying to track information about our oceans, I’m also working on a variety of other things. These include things from hydroelectric grids to nutrient loading. On a separate note, I am also trying to make these large reverse HVAC devices for water harvesting in third world countries. That project had to be on the side for limited resources. I made this old graphic on the gist of building a hydroelectric transmission tower as an attempt to compete in the Ocean Observing Prize, but also ran to some resource limitations. This is just a mock-up of my idea outside of the already developed sensors. My main goal currently revolves around utilizing what I have for large technological collaborations or partnerships, and just finding better ways to expand my technology further. Outside of saying, “hey let us colonize the oceans over space”, I also want to say that there isn’t anything to colonize if we don’t go out of our way to protect it. Regardless what political spectrum you are, people should really like the idea of both expanding and protecting the earth’s capabilities.
https://medium.com/an-idea/why-i-am-a-big-fan-of-colonizing-the-ocean-93f32788fb34
['Andrew Kamal']
2020-11-27 04:20:31.823000+00:00
['Climate Change', 'Environment', 'STEM', 'Innovation', 'IoT']
Lyft Data Science Interview. As of January 2018, Lyft could count 23…
The sole motivation of this blog article is to learn about Lyft and its technologies helping people to get into it. All data is sourced from online public sources. I aim to make this a living document, so any updates and suggested changes can always be included. Please provide relevant feedback.
https://medium.com/acing-ai/lyft-data-science-interview-questions-463f2d5bdea4
['Vimarsh Karbhari']
2020-02-26 05:45:25.215000+00:00
['Data Science', 'Artificial Intelligence', 'Machıne', 'Interview', 'Data']
Software Engineering Artifacts — Let’s agree on Terminology
Commonly used "Epic" or "User Story" artifacts cover just a fraction of the entire picture, and it is worth understanding that software design and development is not so flat and goes beyond these two primitives. The presented diagram covers just the highest level of detail, the so-called "Level 0", to denote the "width" of the model, not yet paying much attention to its "depth". If you need more information on the Layers, take a look at the AIFORSE Framework — Software Engineering Enterprise Processes Map (08/Jan/2019).
https://medium.com/ai-for-software-engineering/software-engineering-artifacts-lets-agree-on-terminology-4f009b351361
['Valentin Grigoryevsky']
2019-11-04 21:29:06.291000+00:00
['Software Engineering', 'Software Development', 'Software Testing', 'Web Development', 'Software Architecture']
A Rap Connoisseur’s Favourite Poetic Rap Lyrics Explained
Aesop Rock, Daylight I got a friend of polar nature, and it’s all peace You and I seek similar stars but can’t sit at the same feast Metal Captain, this cat is askin’ if I’ve seen his bit of lost passion I told him: “Yeah,” but only when I pedaled past him Aesop Rock is another Rhymesayer artist. His lyrics are incredible but it is also essential to read them and decipher them as you would a poem. Many of his lyrics are deep metaphors. I still don't understand many of his lyrics, despite being a fan of his for many years. This song is my favorite of Aesop’s. If you get a chance, you should definitely listen to this song and read the lyrics: They will blow your mind. This song is a combination of thoughts Aesop has about his life. In this passage, he is talking about a friend who is very different from him. Although they have the same goals, they can’t seem to collaborate professionally. There is a great level of respect between them even if they can’t work together. Some people believe “metal cat” is a reference to the amazing MF Doom, another wordsmith rapper who performs while wearing a metal cat mask. The last line was my favorite in this song. He has been asked by his friend if he ever has regrets in his life and he replies that he only regrets his mistakes when he goes back to the past and reflects on them. This is my interpretation, but other fans may have a different view. Either way, Aesop Rock never ceases to amaze me.
https://acotterized.medium.com/a-rap-connoisseurs-favourite-poetic-rap-lyrics-explained-eddd34a55319
['Amy The Maritimer']
2020-10-17 20:00:57.863000+00:00
['Hip Hop', 'Lyrics', 'Music', 'Rap', 'Poetry']
Seyifunmi Adebote, Others To Speak At UN’s #Youth4ClimateLive Series — IG Live
Seyifunmi Adebote, a young environmental leader in Nigeria will be speaking through the UN Youth Envoy’s Instagram account as part of a panel on a special Instagram Live on Friday, August 21st between 06.00 pm-06:45 pm WAT. This is part of activities to set the stage for #Youth4ClimateLive Episode 3: Driving Youth Action. Other youth speakers on the panel include India’s Environmentalist and Wildlife filmmaker, Malaika Vaz and Dominican Republic’s Climate and Youth Activist, Claudia Taboada. The IG Live session will be hosted by Ahmed Badr on Connect4Climate’s channel in partnership with the office of the UN Secretary-General’s Envoy on Youth as part of the #31DaysOfYOUth campaign and the #Youth4ClimateLive Series. Seyifunmi Adebote As a run-up to Pre-COP26 in Milan, Italy and the COP26 which will now hold in Glasgow, the United Kingdom from Monday, 1 November 2021 to Friday, 12 November 2021, the Youth4ClimateLive Series is hosted by the Italian Ministry for the Environment, Land and Sea, in collaboration with Connect4Climate — World Bank Group and the Office of the UN Secretary-General’s Envoy on Youth. Seyifunmi’s innovative approach to climate education through his widely-listened-to Climate Talk Podcast has earned him this spot on Friday’s Instagram Live. He is expected to address a global audience, discussing “What Meaningful Youth Engagement means”, and highlighting effective climate-focused initiatives and opportunities in Nigeria and internationally.
https://medium.com/climatewed/seyifunmi-adebote-others-to-speak-at-uns-youth4climatelive-series-ig-live-49d2de1fb73b
['Iccdi Africa']
2020-08-20 23:09:20.912000+00:00
['Climate Change', 'United Nations', 'Renewable Energy', 'Wash', 'Youth Development']
Cresta CEO Zayd Enam on AI That Brings Out Our Best
Audio + Transcript Zayd Enam: The really big change that happened was with the printing press. Folks could extract the lessons that they had and write it down in a book, and then replicate that book and share it with everyone else. AI is going to become the book that writes itself. James Kotecki: This is Machine Meets World, Infinia ML’s ongoing conversation about artificial intelligence. I’m James Kotecki. And my guest today is the co-founder and CEO of Cresta, Zayd Enam. Thanks so much for joining me. Zayd Enam: Thanks, James, for the invitation. James Kotecki: So as I understand Cresta specifically, what you’re doing is, I’m sitting in a call center, maybe I’m a sales person or I’m a customer service representative, and I’m looking at a screen and Cresta is using AI to provide me with a script of things that I should be saying to the person on the other side of the call. Am I basically getting that right? Zayd Enam: Yeah. So what it’s doing is it’s a system that’s constantly learning and listening from all the conversations that are happening and identifying what is resulting in success, measured by sales, conversion. It might be measured by customer satisfaction or average handle time. And it’s identifying what are the behaviors, the responses, what are the best people doing to result in success? And then it’s, in real-time, prompting everyone else, “Hey, here’s what’s being really effective and here’s how to handle this conversation to result in the best possible conversation.” Our vision for companies is that every single representative within the company is armed to be the very best representative of the company. James Kotecki: And is it learning just from my company, my reps, or is it kind of an anonymized synthesis of everybody who’s doing that kind of work? Zayd Enam: For particular verticals that we go into, we’re able to build a model, 60% of the model based on the best practices of that vertical and industry. So for financial services, identifying what are the best practices of all the financial services companies or what are the best practices of the best SAS companies or the best retail companies. And then the rest of that 40% is company-specific because that’s specific to the product, the knowledge, the promos, the offers, all these companies’ specific information. James Kotecki: And what kind of things might it suggest? Zayd Enam: It’s a wide range of things. So if it’s a customer support interaction, for example, it might be that it identifies the answer within the knowledge base. So a customer might come up with a question and it identifies where that answer is in the knowledge base and gives the rep that answer. But there’s subtleties to that. Sometimes the way the customers frame the problem is different [than] the way the answer exists in the knowledge base. Just as an example, a customer might say that their mattress has gone flat. If you search within the mattress industry and all the knowledge bases for all these companies, they won’t have any knowledge base for flat mattresses because the words that they use is bladder. And so that’s the word that they use for the internal part of a mattress with these air mattresses. James Kotecki: Bladder? Like the human body bladder? Is that what you said? Zayd Enam: Yeah. James Kotecki: Ok. Zayd Enam: And this was just one of those humorous things that we found in the course of deployment. The language that they use is just dramatically different than the way the customer perceives the problem. 
What we’ve been able to do is really identify how do you map those customer intents and those customer conversations, and the way that the answers might exist in these systems into a shared semantic space. James Kotecki: So would you say that you’re using AI to increase human empathy? Zayd Enam: I think naturally humans are, I think humans are empathetic. When a person is sort of in between multiple conversations, is trying to find the answer to the conversation, doesn’t have the confidence to know how to answer a customer question, that’s when they become less empathetic. But when they feel confident that, “Hey, I have the systems and the tools to be successful with these conversations, I know the answer here,” then they can worry less about the answers, the technical piece of it, and more on the empathy side of it and how to really connect with the person and understand them and really seek to understand them. That’s the right combination of humans and machines is how do you give a person confidence to give the answer so they can focus on the empathy and building that empathy with the customer? James Kotecki: If I was a cynical person, I would say, “Okay, in the short term, what you’re saying makes sense. It’s humans and machines together accomplishing something better than either one of those two entities could do alone.” In the long-term maybe, what one might suspect is that you’re really training a system that ultimately could replace humans, right? Because you’re collecting more and more data about what’s effective, voice analysis and synthesis technology is getting better and better. All of us talk to bots all the time on the phone right now. And so if those bots got more and more sophisticated, one could imagine them replacing more and more of the tasks that people do. So even though you might not be looking at totally replacing people, is the long-term vision to move a lot of people out of these roles so that you only need a few kind of hardcore specialists in them? And you can really eventually, using the data that you are collecting now, automate everything else? Zayd Enam: In any company, in any type of work, there’s always more strategic work to be done. There’ll be simple things that can be automated. There’ll be basic FAQ’s like, “what’s my order?,” all these things that can be automated. But there’s actually very strategic roles in any company, and that’s building relationships with customers, really focusing on higher-level strategy, creative things. And that’s what we really want to enable and make possible. The best companies, I think, will invest in these roles from a strategic perspective because they recognize that customer relationships are really the most important thing for a modern company. James Kotecki: I suppose another way you could maybe answer that question is “Look, have you ever been on hold for more than a couple minutes? Didn’t you hate that experience? Wouldn’t it have been better if there was a person to answer the phone? And the fact that you were on hold meant that there wasn’t a person to answer the phone. So clearly we still have more jobs for people to do.” Zayd Enam: Yeah, absolutely. I actually think there’s a latent demand for conversations in the United States or the world. There would be a hundred times as many conversations if calling your phone provider or your bank was as easy as picking up your phone and immediately getting connected to somebody knowledgeable, a very talented specialist who really understood your problem. 
The reason you don’t do it right now is because when you call them, it takes 30 minutes to find someone, and they’re not really clear on what your problem is. You need to transfer to somebody else. Zayd Enam: And I think in some ways it’s actually a little bit like the car industry in [the] early 1900s. There was a latent demand for transportation in the sense that folks wanted to get from point A to point B, but it wasn’t until we could build efficient cars with the assembly line that everyone could afford cars and roads could be built. And all of a sudden, everybody in the United States could drive a car and get from point A to point B. I think the same thing is holding back conversations where people just aren’t having conversations because they’re painful and they’re just, they’re not solving the things that they want to solve. James Kotecki: Okay. So you are one of three co-founders. Can you just quickly tell me what the other two co-founders are kind of bringing to the table and how you work together to put this puzzle together? Zayd Enam: My two co-founders are Tim Shi and Sebastian Thrun, who is the chairman of our board. And Tim and I were PhD students together at Stanford, working with Sebastian. And Sebastian had sort of, has a very interesting background, sort of having built the first self-driving car that won the DARPA Grand Challenge and then founding Google X and Udacity, and his sort of mandate to us in the lab was basically, “Let’s go figure out what are the biggest problems that we can solve with AI, and let’s focus singularly on solving those problems.” James Kotecki: Stanford, AI, Google X, these are very sexy things. And then when you think, when you tell someone, “Oh, I’m working on technology for call centers,” that might initially to people seem kind of unsexy. How do you see what you’re doing fitting into this broader picture of AI? I’ve often wondered, is AI going to transform the world to make it into some kind of Star Trek utopia, or is it going to basically just make the world a little bit better so that I have more good days that I recognize? I have fewer crappy experiences on the phone with a customer service rep, but it’s not creating some holographic doctor to solve all my problems. Zayd Enam: I generally believe that the vision for Cresta is, perhaps, probably the largest possible vision that you could have with AI. For us, the recognition was that the biggest impact that artificial intelligence will have is it’s a machine that will learn from the very best of any team, of all of us, identify what traits are we being successful at, and really coach and improve and share that knowledge with everyone else. Zayd Enam: And so if you look at it before, the way this was done, you have cave paintings and folks sharing knowledge through these — kind of talking to each other. And the really big change that happened was with the printing press, when you had this — when folks could sit down, identify and extract the lessons that they had, and write it down in a book, and then replicate that book and share it with everyone else. But the challenge there in these books is it’s actually really hard to write a book, like sitting down and extracting what’s happening and identifying what’s happening, and then you have to write it and then folks have to read, have to absorb it. 
And in some sense, what we believe is AI is going to become the book that writes itself, where it’s constantly learning and identifying what are things that are successful and what is the knowledge to extract from each example — training example in the world. Zayd Enam: And we’ll build these networks that identify these things and then are able to distribute them in a way to everybody in the world that’s helping them live in the moment, coaching them in those conversations, identifying how do we distribute this knowledge to everyone. It’s not just in the call center, it applies to every type of knowledge work and every type of office work. We think that’s the biggest thing that we can impact. James Kotecki: Zayd Enam, CEO and co-founder of Cresta, thanks for joining us today on Machine Meets World! Zayd Enam: Thanks James, it was such a pleasure to be here. James Kotecki: Thanks so much for being part of the conversation, and don’t forget you can email the show: mmw@infiniaml.com. You can also like this, share this, rate this, you know what to do! I’m James Kotecki and that is what happens when Machine Meets World.
https://medium.com/machine-meets-world/cresta-ceo-zayd-enam-on-ai-that-brings-out-our-best-dd310528f2e8
['James Kotecki']
2020-12-02 15:49:03.586000+00:00
['Customer Experience', 'Artificial Intelligence', 'Business', 'Technology', 'Communication']
5 Must Know Active Record Methods
Ordering First up, we have the ordering method. This can be used to organize our database records in a specified way. We can order alphabetically or numerically, in ascending or descending order, and so on. How do we use it? To use the ordering method, we can call .order on a class like so: This will order our movie instances by title in ascending order. We should pass in an argument to specify which attribute will be used to create this order. We can also pass in an argument for how we would like the order to be (asc or desc), but if this is not specified, the order will be ascending.
https://medium.com/swlh/5-must-know-active-record-methods-7475b1578270
['Yahjaira Vasquez']
2020-06-21 10:13:23.138000+00:00
['Activerecord', 'Software Engineering', 'Ruby', 'Ruby on Rails', 'Coding']
The MIT Media Lab and the Open Music Initiative
Cryptocurrencies and their underlying technology provide a way for people to transfer value and make payments without using a trusted intermediary like a bank or credit card company. But at the MIT Media Lab, we’re interested in more than just payments: this technology has the potential for far-reaching changes across industry and government by enabling open, decentralized data platforms. So when we heard about the BerkleeICE Open Music effort, we knew this was a good application for the research being undertaken by the MIT Media Lab Digital Currency Initiative. One of the biggest problems the music industry faces today is a lack of transparency. There is no uniform way for participants to identify ownership regarding any piece of music. Because of this, it’s hard to send money to the people who should be making it. Artists often don’t even know how revenue is shared or when their songs are played. There have been a few attempts to fix this in the past; the most recent being the Global Repertoire Database. One reason that effort failed was contention over who would own the data. We think that a distributed system, in the form of a digital ledger of music contracts, might be the answer to this particular problem. We can design a common format and an open platform for licensing data. In a distributed system, many entities can work together to maintain an open, transparent database that everyone runs, but no one owns. This technology is not a panacea for all the problems afflicting the music industry. It’s going to be a long, hard road to bring together stakeholders and design a solution that the entire industry can get behind. We’ll have to solve numerous challenging problems around data integrity and integration, and deal with legal complexities across jurisdictions. But we now have the tools to build an open architecture for music rights, using a decentralized platform. We’re excited to work with Berklee College of Music and the Open Music Initiative to create a foundation for innovation, not only for rights management but also for music itself. Neha Narula Director of Research, MIT Media Lab Digital Currency Initiative Marko Ahtisaari MIT Media Lab Director’s Fellow and Co-founder and CEO, Sync Project
https://medium.com/mit-media-lab-digital-currency-initiative/the-mit-media-lab-and-the-open-music-initiative-24ccacd126f4
['Neha Narula']
2016-06-13 16:21:44.267000+00:00
['Open Data', 'Music', 'Bitcoin', 'Blockchain']
Why Is Estimation (i.e. — Story Pointing) and Tracking Velocity Important?
Q: Why is estimation (i.e. — story pointing) and tracking velocity important? Estimation is a forward-looking prediction of the effort required to complete an increment of work, most commonly (but not always!) measured in story points. This is typically done at the individual user story level, but can also be applied to larger work items such as features and epics when planning large-scale, multi-team releases. Velocity is a measure of the amount of work a team has completed in the past — sometimes referred to as the work shipped within a single sprint (“our velocity last sprint was 25 story points”) or as an average across multiple sprints when in the context of referencing a team’s capacity to complete work (“our team’s velocity is 25 story points per sprint”). The purpose of tracking a team’s velocity is two-fold: Predict Future Performance: Teams most commonly average their velocity over a couple of recently completed sprints as a general indication of the amount of work they could do in future sprints. During Sprint Planning, this average is used to determine how much work the team can commit to completing within the sprint — they simply pull in work until the sum of story points in the sprint backlog is equal to or just below the team’s averaged velocity. For longer-term planning, the team’s averaged velocity is used to estimate when a large body of work could be completed by first estimating the backlog size (in story points), then deriving a duration (“with a team velocity of 25 story points per sprint, our backlog of 250 story points is likely to be released in 10 sprints”). Image by Author “Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.” — 8th Agile Principle Promote Continuous Improvement: By tracking a team’s velocity over time and plotting it on a velocity trend chart, teams can monitor the consistency of their velocity sprint-over-sprint to identify opportunities for improving the way they work in an effort to become more predictable by first stabilizing, then increasing velocity over time. At the end of the day it’s up to your team to decide how important these benefits are to you. To help with this, ask yourself these questions: Do I need to improve the speed in which my team is able to get to done? Does the amount of work my team is able to complete sprint-over-sprint need to become more predictable? Am I constantly being asked to provide release date estimates? Do we struggle to provide said release date estimates with a reasonable level of accuracy? Does the work my team performs require a high level of cross-team coordination with others? If the answer to any of the above questions is yes, then tracking the team’s velocity is probably a good idea. If you decide to do so, consider the following: Never Compare Velocity Between Teams: Each team is unique — their size, developer experience levels, estimation methods, product and technology they work on, etc. are so different that any comparison of velocity across team’s is futile. Story points are a completely arbitrary, unitless measurement that only make sense within the context of the team performing the estimation. Instead, analyze trends of a specific team’s velocity to gauge the overall health of the team and identify opportunities for improvement. 
Improvement Doesn't Always Mean Increasing Velocity: Remember, velocity is primarily used to predict a team's future output, either to plan the next sprint or provide accurate release dates, and is typically calculated as an average of the story points completed in prior sprints. The team's actual output is likely to be different from the average, either lower or higher, depending on the complexity of work being done, unforeseen roadblocks, and other miscellaneous factors that inevitably arise throughout a sprint. Using averages to predict team output hides the volatility of sprint-over-sprint performance. Teams should first focus on achieving a stable velocity with the goal of becoming more predictable; then, emphasize increasing their velocity while continuously promoting and maintaining a sustainable pace of development. Photo by Scott Adams from Dilbert Cartoons (Source) "It's better to be roughly right than precisely wrong." — John Maynard Keynes Don't Fall into the Precision Trap — When estimating release dates for large bodies of work using your team's average velocity, avoid estimates that imply a greater level of precision than is realistically possible. Estimates need to be accurate, but should only be as precise as the data available at the time. In general, the farther out a project's end date is, the less precision is possible in predicting the end date. Try using the following guide that proposes a reasonable resolution to use for release estimates based on how far out the estimated release date is:

- 6+ Months Away: Quarter of Release
- 3-to-6 Months Away: Month of Release
- 1-to-3 Months Away: Sprint of Release ± 1 Sprint
- <1 Month Away: Day of Release ± A Couple Days

There are certainly more advanced methods of estimation using historical sprint data to calculate confidence intervals and provide date ranges rather than a single-point estimate to demonstrate work that probably will be completed vs. likely to be completed vs. probably won't be completed — I'll cover these in future stories, so stay tuned! In a nutshell, velocity is a great tool teams can use to continuously improve their processes by first stabilizing, then increasing the amount of work they complete each sprint in the name of providing reasonably accurate estimates that are necessary for strategic business planning.
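As a back-of-the-envelope illustration of the velocity arithmetic discussed above (the numbers here are assumed, not taken from any real team):

import math

backlog_points = 250                    # estimated remaining backlog size in story points (assumed)
recent_velocities = [22, 27, 24, 26]    # story points completed in the last few sprints (assumed)

average_velocity = sum(recent_velocities) / len(recent_velocities)
sprints_to_release = math.ceil(backlog_points / average_velocity)

print(f"Average velocity: {average_velocity:.1f} points per sprint")
print(f"Estimated sprints until release: {sprints_to_release}")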
https://medium.com/swlh/why-is-estimation-i-e-story-pointing-and-tracking-velocity-important-24fd77fced1c
['Goodspeed']
2020-11-05 20:43:36.729000+00:00
['Software', 'Scrum', 'Software Development', 'Agile', 'Development']
Real world application of DSA to model and monitor corona-virus.
I am sure every beginner computer science student has at some point wondered why we find the shortest path in a graph, what trees are actually used for, or whether we are ever going to play the Josephus circle using a circular linked list. So, this article talks about the importance of data structures and algorithms in the real world, with reference to COVID-19. The present outbreak of a coronavirus acute respiratory disease called COVID-19 has resulted in a major epidemic. The main reason why coronavirus is a major problem is that its spread can be modelled by a tree. Before the world took lockdown measures, estimates stated that each infected person was infecting between two and four other people. In the study of infectious disease, this number is called R0 (R-naught), a mathematical denotation that indicates how contagious an infectious disease is. For instance, if a disease has an R0 of 15, a person who has the disease will transmit it to an average of 15 other people. Three possibilities exist for the transmission or decline of a disease, depending on its R0 value:

· If R0 is less than 1, each existing infection causes less than one new infection. In this case, the disease will eventually die out.
· If R0 equals 1, each existing infection causes one new infection. The disease will stay alive and stable, but there won't be an outbreak.
· If R0 is more than 1, each existing infection causes more than one new infection. The disease will be transmitted, and there may be an outbreak or epidemic.

Importantly, a disease's R0 value only applies when everyone in a population is completely vulnerable to the disease, as in the case of COVID-19, where no one has been vaccinated, no one has had the disease before, and there's no way to control the spread of the disease. In our model, R0 is the average number of children each node in the tree has. This means each node in our tree has (on average) between two and four children. If you've studied trees to any depth, you know this is going to get very large, very quickly. The early objective of health organisations worldwide was to reduce R0 to around one (or less). If R0 = 1, then each leaf node in our tree becomes the head of a linked list. Each person is infecting exactly one other person, in the same way that a (singly) linked list has a reference to the next node in the list. If R0 < 1, then at some point a person will infect no one else, and the line of infection (for that leaf) is broken. We can model that in code by having the node point to a null reference, meaning it is the last node in the linked list. One way to "solve" the coronavirus situation is to change the behaviour of the virus so it can be modelled by a collection of (eventually finite) linked lists, rather than a tree. Well, I am no medical expert. I am here just to make you realise that Trees, Graphs & Linked Lists aren't only used in FAANG interviews, but are actually very useful in modelling and solving real-world phenomena.
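To make the tree analogy concrete, here is a small illustrative simulation (my own sketch, not from the original article) showing how the expected number of new infections per generation grows or shrinks for different R0 values:

# Simple branching-process sketch: each infected "node" produces R0 new "children" on average.
def infections_per_generation(r0, generations, initial_cases=1):
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r0)   # expected new cases in the next generation
    return cases

# R0 > 1 behaves like a rapidly widening tree; R0 == 1 like a linked list; R0 < 1 dies out.
for r0 in (0.8, 1.0, 2.5):
    print(f"R0 = {r0}: {[round(c, 1) for c in infections_per_generation(r0, 6)]}")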
https://medium.com/datadriveninvestor/graphs-trees-in-real-world-df24ef23b358
[]
2020-07-19 20:09:24.287000+00:00
['Covid 19', 'Algorithms', 'Trees', 'Data Structures', 'Coronavirus']
You should eat the avocado toast
Some complain that young people are irresponsible and foolish for spending their money on Starbucks or avocado toast when they could be saving for a deposit for a house. Even setting aside any misunderstandings on the complainers’ part about the cost of deposits and toast I think such irresponsibility is rational, and I’m going to present what I think is a novel philosophical argument for it — one, at least, that moves me quite a lot. The argument makes use of several interesting ideas from recent analytic philosophy, so I’ll also introduce those ideas. If it wasn’t for this, you’d have a home by now Future Selves The form of responsibility that the avocado toast complainers are interested in is responsibility towards your future self. You shouldn’t do things now that are going to cause problems for yourself later, like eat healthy fats on wheat that will lead you to be still renting at 40. You’re mistreating your future self — depriving them of money they could put to better use. I don’t think one is mistreating one’s future self, because, in essence, for those of us living now, we don’t have future selves. The world is going to get either much worse or much better in the coming years, and this is going to change us fundamentally, so that what we want and are concerned about will change fundamentally. But we are, plausibly, just what we want and are concerned about, and so the future parts of us will be different beings from the current parts of us. Here’s a rough analogy. Say you knew that in five years the world, including where you lived, would be plunged into war, a war in which you had to fight. Alternatively, and more optimistically, say you knew that you would suddenly come in to a life-changing sum of money: for example, a relative gives you a million dollars. Either of these scenarios will change you unrecognisably. At the moment, you may desire to travel the world. It could be your deepest held desire. But that’s likely to change in either of the above scenarios. In the war scenario, your deepest held desire might be simply an adequate supply of water; in the millionaire scenario, it might be, say, to be the president. There’s a lot of difference, you might think, between those whose deepest desire is just for water, those for whom it’s world travel, and those for whom it’s the presidency. If you were to learn that your desire was going to change that way, you should feel a disconnect between who you are now and who you will be; moreover, you should feel that it’s not irresponsible to act in a way that favours yourself at the expense of future you. It’s pointless to save for a house if war-induced hyperinflation is going to reduce your savings to nothing; it’s pointless to save for a house if you’re just going to inherit enough money to buy one soon. So, in the war-or-millionaire scenario, you should get the avocado toast. In a second, I’m going to argue that we should expect something like the war-or-millionaire scenario. But first I want to take a detour to consider some recent work in philosophy that helps understand these sorts of decision problems, and which influenced this post. Transformative Experience And Externalism A few years ago the philosopher Laurie Paul introduced the notion of transformative experience to capture how we make decisions about things that, if we act on them, reshape our nature in such a way that us before and us after have different values and desires that are hard to integrate. An accessible account can be found here. 
She has several vivid examples; I will consider just one. Thus consider the decision to have a (first) child. Whether or not to have a child is a hard decision to make. It’s hard for several reasons — for one, it’s very hard to know what it’s like to be a parent in advance of actually being one. And for a second, becoming a parent changes one’s desires fundamentally. Before doing so, you might be able to know what it would be like to care for an infant. And you may not may not find that appealing — they can be cute but also annoying. Caring for a child is very different from caring for your own child, though. You can’t really imagine what that’s like. And you can’t anticipate how it’ll change your desires — how you might come to want nothing more than to spend all your time with this young child, whereas the thought of spending all your time with some random child leaves you, as it does most people, cold. Having a child is a transformative experience. The point that I want to eventually make is roughly that going to war or becoming a millionaire is something like a transformative experience, in the sense that it fundamentally reshapes one’s desires and concerns about the world, and that we avocado toast considerers should anticipate war or millions in the 2020s or 2030s. But the fit isn’t quite perfect at the moment — in Paul’s cases, we are faced with a decision that we are in charge of. We decide actively to have a child; in the war and inheritance case, though, it’s something that happens to us. To make the idea of transformative experience play nice with the concerns of this post, I will introduce a second concept that receives a lot of attention by analytic philosophers (indeed, it’s considerably more venerable, dating from around the 1970s, although it has been constantly refined and developed), namely what’s called externalism. The last fifty or so years have seen a movement away from the thought that one’s mental life is something to which one has special privileged access, in favour of a view according to which the external world impinges upon and determines the nature of that life. That’s jargony. So let’s consider an example. The view first arose in the philosophy and language in the mind with the theory that what your beliefs and desires are about is not something you can learn just by introspecting. To show this, Hilary Putnam came up with a famous thought experiment. Water, on planet earth, is H2O. Imagine there was another universe exactly like ours except the stuff that was in rivers and came from taps and which we could drink was the chemical element XYZ. Apart from that, it’s exactly like earth. In particular, there are people who speak English and who go around saying things like “I sure would like a glass of water!” Consider my duplicate in this alternative reality saying this, and consider me saying it here on earth. The two scenarios are identical but for the nature of the stuff in rivers. That means they’re identical as far as the introspectible mental states of me and my duplicate. We have the same mental states; at least, introspection makes it seem like they’re identical. But we’re talking about different things: I am expressing my desire for H2O — that’s what my thought is about — while my duplicate is expressing a desire for XYZ. So you can’t learn what your desires are about just by looking at your mental states, because your mental states don’t uniquely determine any one object of aboutness. 
My duplicate and I have the same mental state but want different things. As Putnam famously says, meanings ain’t in the head. A core theme in much contemporary analytic philosophy is that many things ain’t in the head. The world presses on us and shapes our mental states. It seems quite reasonable to extend this to our personality. Who we are isn’t something that is in the head: rather, it’s something, at least in part, determined by our environment. Change the environment, change the personality. And that makes sense of our view that we would be very distanced from how we are now in the war-or-millionaire scenarios: because the environment has changed markedly, so has our personality. But what, you might ask, has this to do with avocado toast? What it has to do is this: we’re currently like a person contemplating a war-or-millionaire future, and since it’s rational for such a person to eat the toast, it’s rational for us to do so. Next, I make this point. What The Future Brings For millennia, a story goes, things didn’t change much. The world one found was similar to the world one’s parents found. Although events were unpredictable, they were predictably so: there was always a change of bad harvests or war, but there had always been such changes. Economic output and and other measures of progress proceeded at a more or less constant speed. Our world is increasingly unlike the world of those who came before us. We’ve left our mark on it and will continue to do so. There are new forms of war and new belligerents, a warming planet that no one is doing anything about, and humans are living for longer than it seems the economy, as currently set up, can bear. Any of these three factors, or at least so I believe, could lead to an event of world-historical importance in the next couple of decades. That could be nuclear war or complete denuclearisation, or massive death tolls among those most exposed to global warming and the elderly. Alternatively, and more positively, maybe these things could end well: maybe the world en masse will refuse to let millions in the exposed global south die, or — more likely — will refuse to let first-world elderly die through neglect leading to the re-emergence or the creation of new social support systems at a scale we’ve never seen before. That’s impressionistic, and maybe you can’t get more than impressionistic when trying to predict the future, but it’s worth noting that according to some of those who see patterns in history, we’re overdue a change. According to the idea of Kondratiev waves (a decent first source to consult here is the wiki), economic development is cyclical: periods of growth, marked by the introduction of new technology, are followed by periods of stagnation and depression, a whole wave lasting between 40–60 years. Although very controversial, one could view the development of information technology beginning roughly at the start of the last third of the 20th century as beginning a wave that we’re currently at the tail end of, and thus that we should expect a new wave to come (the internet of things, some think), bringing with it prosperity, or, at least, a markedly different social, economic and political world (think of how different life was in 1965 relative to now). Similarly, the theory of cliodynamics has it that history cycles between periods of political stability and periods of instability, with one such unstable phase in the West beginning in the mid-70s. 
It’s plausible to think that Trump and the return of the far right are marking an endpoint in this, and thus that we’re due for change. (For more on cliodynamics, see this by its founder. In the first bit of this post I outline the theory.) The Upshot While you may think such predictions are a waste of time, I think you should at least be open to the possibility of widespread change soon, either for good or bad. The closest to home for me and probably for most of my readers is the treatment of an ageing population. There are too many years of life, and expensive, ailing years of life at that, chasing too little money, either in the form of government benefits or in the form of a younger generation that can spend time/money to care for the elderly. Something will have to change either for the better, or the worse. But if things are going to change drastically, who I am is going to change too. I suggested two ways to understand this: what I desire is going to change, and I am what I desire. Or my society is going to change, and I am externalistically shaped by my society, so, again, I am going to change. I accordingly think that future me is going to be markedly different from current me; perhaps unrecognisably so. Note that this is out of my control. I can’t stop those changes in myself, because they’re the result of external changes over which I have no control. I’m not responsible for nuclear war or political instability or social welfare policies. I thus think that I will not only be different from what I am, but that I don’t have the capacity to control those differences. And so I think it’s somewhat pointless even to try. And this brings us back to toast and irresponsibility. If I expect to be a different person to who I am now, and one whose nature is determined by things outside of my control, then I shouldn’t be too concerned to act on behalf of that person. I should act on behalf of my current self, over which I do have control. If, to a large extent, I am not responsible for how I’m going to be in the future, then it’s not irresponsible of me to concentrate on today, and to eat the toast.
https://mittmattmutt.medium.com/you-should-eat-the-avocado-toast-d04cba779dda
['Matthew Mckeever']
2018-08-22 10:03:49.810000+00:00
['Politics', 'Society', 'Philosophy', 'Futurology', 'Millennials']
Returning to India with Mom
Over the weekend, Mom and I embarked on the longest flight of our lives, a 12,000 km 15-hour trip aboard an Air India 777 direct from Chicago to New Delhi. Surprisingly, we both felt that the flight seemed much shorter than we thought it would. We cleared customs and met Shri just outside the only international arrival door. Mom agreed that the flight itself was nothing compared to the two-hour drive from the airport to Faridabad that, even on Sunday afternoon, felt like riding a roller coaster through a cloud of exhaust and dust. At the first stop in traffic, a haggard beggar pressed her young face and hands against my window. Her ring and fingernails scraped against the glass. Mom could hardly watch. Mamta, Naysa, and Naima were waiting with open arms when we arrived just after sunset. Mamta was already preparing rajma rice and malai paneer, two of my favorite dishes. I then went over to the convent to say hi to the sisters, who had also been looking forward to our arrival. We talked and laughed and were excited to be reunited. Exhausted by the 28 hour journey door-to-door, Mom and I were in bed by 9. We were up again by 3:30 a.m. and, after everyone else woke up, walked Naima around the corner to school. Naima also attends the Carmel Convent School now, where 76 of our students across 4 grade levels and 11 classes attend. We were outside the entrance for the youngest children when some of our oldest students, Priyanka, Ankit, Neha, and Kajal, spotted us from around the corner. They waved and jumped up and down yelling, “John bhaiya, John bhaiya!,” then sprinted toward us with smiles from ear to ear. After giving me huge hugs and an outpouring of optimism, they turned to Mom and did the same. They had certainly been looking forward to the moment as much as we had. Mom needed no introductions. We went home to wash up and eat breakfast before making the complete rounds of Carmel Convent School, KL Mehta School, and the slum. Mom met Sisters Pushpa, Asha, Sweta, and Namrata for the first time. In every classroom and office, we were greeted with songs and poems and even dances that all the children had learned. It was incredible to see their progress over just a few short months. Nearly all of the students are making rapid progress. Many are even excelling with almost perfect grades and evaluations. Many of the youngest students have learned to read and write both English and Hindi since April. Some of the kindergarten students are even multiplying already! We made it to the slum by late afternoon. Children and adults came out from every building to say hello and shake our hands. Many of the men made a point to shake my hand, look in my eyes, and say, “Thank you.” I’d never had that happen before. Some of the children who had never seen Mom before even came up and said, “Hello Mary ma’am!” Please take a second to blow up the previous three photos and try to digest the emotions of these sisters and our students Anita and Sindu. They live the hardest lives of any healthy children I know. I’ll discuss their situation and circumstances later as they are complex and we have some work to do to get to the bottom of it all. We also passed by an intellectually disabled boy in the slum who was being held in a woman’s lap while healing from a burn sustained from an open fire. Despite the difficulties of living in a slum and occasional pockets of extreme despair, life is largely vibrant and enthusiastic. At one point, my shoulder was grabbed by some of the fathers and local men. 
Despite my demonstrative objections, Mom and I were all but forced to sit down on a bed in the street and enjoy a cold orange soda. A crowd of 30 people of all ages gathered around to watch us sip. I enjoyed mine, as I knew our hosts would be disappointed if I did not. However, I think Mom was a bit overwhelmed by the situation. It’s tough to receive a gift here, especially when you know that person worked for a few hours to be able to afford that soda. Although the first 24 hours here were as much of a roller coaster as the ride in, excitement was the overarching feeling of the day. In the photo above, Ankit runs to greet us as fast as his little legs will carry him.
https://medium.com/squalor-to-scholar/returning-to-india-with-mom-923010dcd428
['John Schupbach']
2017-07-22 22:57:12.906000+00:00
['Health', 'Travel', 'Education', 'Squalor To Scholar']
How to Add Graphs and Charts to a React App
How to Add Graphs and Charts to a React App Building an app to display current and historical exchange rates Photo by Markus Spiske on Unsplash Business apps often display graphs and charts. The hard work of React developers has resulted in graphing libraries that make it easy to meet the requirements for displaying graphs and charts in apps. One popular graph display library is Chart.js, which can used in any JavaScript application. For React apps, we can use react-chartjs-2 , a React wrapper for Chart.js, to easily add graphs and charts to our application. In this piece, we will build an app to display current and historical exchange rates. The current exchange rates against the Euro will be displayed in a bar graph and historic rates will be displayed in line graphs. The data is obtained from the Foreign Exchange Rates API located at https://exchangeratesapi.io/. It’s free and does not require registration to use. It also supports cross-domain requests, so it can be used by web client-side apps. To start we run Create React App to create the scaffolding code. Run npx create-react-app exchange-rate-app to create the app. Next, we need to install our libraries: run npm i axios bootstrap chart.js formik react-bootstrap react-chartjs-2 react-router-dom yup to install the libraries. Axios is our HTTP client for making requests to the Exchange Rates API. Bootstrap is for styling, React ChartJS is our graph library. React Router is for routing URLs to our pages. Formik and Yup are for handling form value changes and form validation, respectively. Now we have all the libraries installed, we can start writing code. Code is located in the src folder unless otherwise stated. In App.js , we replace the existing code with this: import React from "react"; import { Router, Route, Link } from "react-router-dom"; import HomePage from "./HomePage"; import { createBrowserHistory as createHistory } from "history"; import "./App.css"; import TopBar from "./TopBar"; import HistoricRatesBetweenCurrenciesPage from "./HistoricRatesBetweenCurrenciesPage"; import HistoricRatesPage from "./HistoricRatesPage"; const history = createHistory(); function App() { window.Chart.defaults.global.defaultFontFamily = ` -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen", "Ubuntu", "Cantarell", "Fira Sans", "Droid Sans", "Helvetica Neue", sans-serif`; return ( <div className="App"> <Router history={history}> <TopBar /> <Route path="/" exact component={HomePage} /> <Route path="/historicrates" exact component={HistoricRatesPage} /> <Route path="/historicrates2currencies" exact component={HistoricRatesBetweenCurrenciesPage} /> </Router> </div> ); } export default App; We defined the routes with React Router here since it’s the entry point of our app. We also set the font for the graph here, so it will now be applied everywhere. In App.css , we replace the existing code with this: .center { text-align: center; } This centers the text in our app. We add a file to add the list of currencies that we will use. Create a file called export.js and add this code: export const CURRENCIES = [ "CAD", "HKD", "ISK", "PHP", "DKK", "HUF", "CZK", "AUD", "RON", "SEK", "IDR", "INR", "BRL", "RUB", "HRK", "JPY", "THB", "CHF", "SGD", "PLN", "BGN", "TRY", "CNY", "NOK", "NZD", "ZAR", "USD", "MXN", "ILS", "GBP", "KRW", "MYR", ]; Now we can use this in our components. Next, we create a page to display the historical exchange rates between two currencies. 
Create a file called HistoricRatesBetweenCurrenciesPage.js and add the following: import React, { useEffect, useState } from "react"; import { Formik } from "formik"; import Form from "react-bootstrap/Form"; import Col from "react-bootstrap/Col"; import Button from "react-bootstrap/Button"; import * as yup from "yup"; import { getHistoricRates, getHistoricRatesBetweenCurrencies, } from "./requests"; import { Line } from "react-chartjs-2"; import { CURRENCIES } from "./exports"; const schema = yup.object({ startDate: yup .string() .required("Start date is required") .matches(/([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))/), endDate: yup .string() .required("End date is required") .matches(/([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))/), fromCurrency: yup.string().required("From currency is required"), toCurrency: yup.string().required("To currency is required"), }); function HistoricRatesBetweenCurrenciesPage() { const [data, setData] = useState({}); const handleSubmit = async evt => { const isValid = await schema.validate(evt); if (!isValid) { return; } const params = { start_at: evt.startDate, end_at: evt.endDate, base: evt.fromCurrency, symbols: evt.toCurrency, }; const response = await getHistoricRatesBetweenCurrencies(params); const rates = response.data.rates; const lineGraphData = { labels: Object.keys(rates), datasets: [ { data: Object.keys(rates).map(key => rates[key][evt.toCurrency]), label: `${evt.fromCurrency} to ${evt.toCurrency}`, borderColor: "#3e95cd", fill: false, }, ], }; setData(lineGraphData); }; return ( <div className="historic-rates-page"> <h1 className="center">Historic Rates</h1> <Formik validationSchema={schema} onSubmit={handleSubmit}> {({ handleSubmit, handleChange, handleBlur, values, touched, isInvalid, errors, }) => ( <Form noValidate onSubmit={handleSubmit}> <Form.Row> <Form.Group as={Col} md="12" controlId="startDate"> <Form.Label>Start Date</Form.Label> <Form.Control type="text" name="startDate" placeholder="YYYY-MM-DD" value={values.startDate || ""} onChange={handleChange} isInvalid={touched.startDate && errors.startDate} /> <Form.Control.Feedback type="invalid"> {errors.startDate} </Form.Control.Feedback> </Form.Group> <Form.Group as={Col} md="12" controlId="endDate"> <Form.Label>End Date</Form.Label> <Form.Control type="text" name="endDate" placeholder="YYYY-MM-DD" value={values.endDate || ""} onChange={handleChange} isInvalid={touched.endDate && errors.endDate} /> <Form.Control.Feedback type="invalid"> {errors.endDate} </Form.Control.Feedback> </Form.Group> <Form.Group as={Col} md="12" controlId="fromCurrency"> <Form.Label>From Currency</Form.Label> <Form.Control as="select" placeholder="From Currency" name="fromCurrency" onChange={handleChange} value={values.fromCurrency || ""} isInvalid={touched.fromCurrency && errors.fromCurrency} > <option>Select</option> {CURRENCIES.filter(c => c != values.toCurrency).map(c => ( <option key={c} value={c}> {c} </option> ))} </Form.Control> <Form.Control.Feedback type="invalid"> {errors.fromCurrency} </Form.Control.Feedback> </Form.Group> <Form.Group as={Col} md="12" controlId="currency"> <Form.Label>To Currency</Form.Label> <Form.Control as="select" placeholder="To Currency" name="toCurrency" onChange={handleChange} value={values.toCurrency || ""} isInvalid={touched.toCurrency && errors.toCurrency} > <option>Select</option> {CURRENCIES.filter(c => c != values.fromCurrency).map(c => ( <option key={c} value={c}> {c} </option> ))} </Form.Control> <Form.Control.Feedback type="invalid"> {errors.toCurrency} 
</Form.Control.Feedback> </Form.Group> </Form.Row> <Button type="submit" style={{ marginRight: "10px" }}> Search </Button> </Form> )} </Formik> <br /> <div style={{ height: "400px", width: "90vw", margin: "0 auto" }}> <Line data={data} /> </div> </div> ); } export default HistoricRatesBetweenCurrenciesPage; The page has a form to let users enter the date range for the historical rates they want and the currency that they are converting. Once the user enters the data, it’s validated against our form validation schema in the schema object, provided by the Yup library. We require the dates to be in YYYY-MM-DD format and all fields are required, so they’re checked against the schema for validity. We filter out the currency that has been selected for the forCurrency from the choices of the toCurrency and vice versa so we won’t end up with the same currency for both dropdowns. When the form submission is done we submit the data to the API and get the rates. We have to massage the data into a format that can be used by react-chartjs-2 , so we define the lineGraphData object with a datasets property to be an array of historical exchanges rates. label is the title of the line chart, borderColor is the border color of the line, and fill false means that we do not fill in the line with color. Once we set that with the setData(lineGraphData); function call, the graph is displayed. Next, we create a page to search for historical exchange rates with the Euro as the base currency. To do this, we add a file called HistoricRatePage.js , and add this: import React, { useEffect, useState } from "react"; import { Formik } from "formik"; import Form from "react-bootstrap/Form"; import Col from "react-bootstrap/Col"; import Button from "react-bootstrap/Button"; import * as yup from "yup"; import "./HistoricRatesPage.css"; import { getHistoricRates } from "./requests"; import { Line } from "react-chartjs-2"; import { CURRENCIES } from "./exports"; const schema = yup.object({ startDate: yup .string() .required("Start date is required") .matches(/([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))/), endDate: yup .string() .required("End date is required") .matches(/([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))/), currency: yup.string().required("Currency is required"), }); function HistoricRatesPage() { const [data, setData] = useState({}); const handleSubmit = async evt => { const isValid = await schema.validate(evt); if (!isValid) { return; } const params = { start_at: evt.startDate, end_at: evt.endDate, }; const response = await getHistoricRates(params); const rates = response.data.rates; const lineGraphData = { labels: Object.keys(rates), datasets: [ { data: Object.keys(rates).map(key => rates[key][evt.currency]), label: `EUR to ${evt.currency}`, borderColor: "#3e95cd", fill: false, }, ], }; setData(lineGraphData); }; return ( <div className="historic-rates-page"> <h1 className="center">Historic Rates</h1> <Formik validationSchema={schema} onSubmit={handleSubmit}> {({ handleSubmit, handleChange, handleBlur, values, touched, isInvalid, errors, }) => ( <Form noValidate onSubmit={handleSubmit}> <Form.Row> <Form.Group as={Col} md="12" controlId="startDate"> <Form.Label>Start Date</Form.Label> <Form.Control type="text" name="startDate" placeholder="YYYY-MM-DD" value={values.startDate || ""} onChange={handleChange} isInvalid={touched.startDate && errors.startDate} /> <Form.Control.Feedback type="invalid"> {errors.startDate} </Form.Control.Feedback> </Form.Group> <Form.Group as={Col} md="12" controlId="endDate"> 
<Form.Label>End Date</Form.Label> <Form.Control type="text" name="endDate" placeholder="YYYY-MM-DD" value={values.endDate || ""} onChange={handleChange} isInvalid={touched.endDate && errors.endDate} /> <Form.Control.Feedback type="invalid"> {errors.endDate} </Form.Control.Feedback> </Form.Group> <Form.Group as={Col} md="12" controlId="currency"> <Form.Label>Currency</Form.Label> <Form.Control as="select" placeholder="Currency" name="currency" onChange={handleChange} value={values.currency || ""} isInvalid={touched.currency && errors.currency} > <option>Select</option> {CURRENCIES.map(c => ( <option key={c} value={c}> {c} </option> ))} </Form.Control> <Form.Control.Feedback type="invalid"> {errors.country} </Form.Control.Feedback> </Form.Group> </Form.Row> <Button type="submit" style={{ marginRight: "10px" }}> Search </Button> </Form> )} </Formik> <br /> <div style={{ height: "400px", width: "90vw", margin: "0 auto" }}> <Line data={data} /> </div> </div> ); } export default HistoricRatesPage; It’s similar to the previous page, except that we only choose the currency to convert to to display since the currency to convert from is always Euro. Once again we have a lineGraphData , with the datasets being an array and within it, data is an array of historical exchange rates. label is the title of the chart. borderColor and fill are the same as the previous graph. Both forms are created by React Bootstrap form components. The Form components correspond to the regular Bootstrap 4 components. Then we create HistoricalRatesPage.css and put the following: .historic-rates-page { margin: 0 auto; width: 90vw; } This adds some margins to our page. Next, we create our home page. Create a file called HomePage.js and add the following: import React, { useEffect, useState } from "react"; import Card from "react-bootstrap/Card"; import { getExchangeRate } from "./requests"; import "./HomePage.css"; import { Bar } from "react-chartjs-2"; function HomePage() { const [rates, setRates] = useState({}); const [initialized, setInitialized] = useState(false); const [date, setDate] = useState(""); const [base, setBase] = useState(""); const [chartData, setChartData] = useState({}); const getRates = async () => { const response = await getExchangeRate(); const { base, date, rates } = response.data; setRates(rates); setDate(date); setBase(base); const filteredRates = Object.keys(rates).filter(key => rates[key] < 50); const data = { labels: filteredRates, datasets: [ { backgroundColor: "green", data: filteredRates.map(key => rates[key]), }, ], }; setChartData(data); setInitialized(true); }; useEffect(() => { if (!initialized) { getRates(); } }); const options = { maintainAspectRatio: false, legend: { display: false }, scales: { yAxes: [{ ticks: { beginAtZero: true } }], }, title: { display: true, text: "EUR Exchanges Rates", }, }; return ( <div className="home-page"> <h1 className="center">Rates as of {date}</h1> <br /> <div style={{ height: "400px", width: "90vw", margin: "0 auto" }}> <Bar data={chartData} options={options} /> </div> <br /> {Object.keys(rates).map(key => { return ( <Card style={{ width: "90vw", margin: "0 auto" }}> <Card.Body> <Card.Title> {base} : {key} </Card.Title> <Card.Text>{rates[key]}</Card.Text> </Card.Body> </Card> ); })} </div> ); } export default HomePage; In this page, we display the list of current exchange rates from the API. 
We make a data object, with the currency symbols as the labels , and we also have a datasets property — an array of objects with data in the object being the current exchange rates. Also, we display the exchange rates in Bootstrap cards, provided by React Boostrap. For styling this page, we create HomePage.css and add the following: .home-page { margin: 0 auto; } This gives us some margins on our page. Next, we create a file to let us make the requests to the Foreign Exchange Rates API. Create a file called requests.js and add the following: const axios = require("axios"); const querystring = require("querystring"); const APIURL = " https://api.exchangeratesapi.io ";const axios = require("axios");const querystring = require("querystring"); export const getExchangeRate = () => { return axios.get(`${APIURL}/latest`); }; export const getRateBetweenCurrencies = data => axios.get(`${APIURL}/history?${querystring.encode(data)}`); export const getHistoricRates = data => axios.get(`${APIURL}/history?${querystring.encode(data)}`); export const getHistoricRatesBetweenCurrencies = data => axios.get(`${APIURL}/history?${querystring.encode(data)}`); This will get the exchange rates the way we want them, with requests to get the latest rates and historical rates, with or without specifying currency symbols for the base currency and currency to convert to. Next, we create the top bar. Create a file called TopBar.js and add the following code: import React from "react"; import Navbar from "react-bootstrap/Navbar"; import Nav from "react-bootstrap/Nav"; import { withRouter } from "react-router-dom"; function TopBar({ location }) { const { pathname } = location; return ( <Navbar bg="primary" expand="lg" variant="dark"> <Navbar.Brand href="#home">Currenc Converter App</Navbar.Brand> <Navbar.Toggle aria-controls="basic-navbar-nav" /> <Navbar.Collapse id="basic-navbar-nav"> <Nav className="mr-auto"> <Nav.Link href="/" active={pathname == "/"}> Home </Nav.Link> <Nav.Link href="/historicrates" active={pathname.includes("/historicrates")} > Historic Rates </Nav.Link> <Nav.Link href="/historicrates2currencies" active={pathname.includes("/historicrates2currencies")} > Historic Rates Between 2 Currencies </Nav.Link> </Nav> </Navbar.Collapse> </Navbar> ); } export default withRouter(TopBar); This adds the navigation bar provided by Bootstrap to our pages and a link to the pages we created before. It also adds highlights for the link on the currently opened page. We wrap the component with the withRouter function, so we can get the currently opened route to let us highlight the links. Finally, we replace the code in index.html with this: <html lang="en"> <head> <meta charset="utf-8" /> <link rel="shortcut icon" href="%PUBLIC_URL%/favicon.ico" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <meta name="theme-color" content="#000000" /> <meta name="description" content="Web site created using create-react-app" /> <link rel="apple-touch-icon" href="logo192.png" /> <!-- manifest.json provides metadata used when your web app is installed on a user's mobile device or desktop. See --> <link rel="manifest" href="%PUBLIC_URL%/manifest.json" /> <!-- Notice the use of %PUBLIC_URL% in the tags above. It will be replaced with the URL of the `public` folder during the build. Only files inside the `public` folder can be referenced from the HTML. work correctly both with client-side routing and a non-root public URL. Learn how to configure a non-root public URL by running `npm run build`. 
--> <title>React Currency App</title> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous" /> </head> <body> <noscript>You need to enable JavaScript to run this app.</noscript> <div id="root"></div> <!-- This HTML file is a template. If you open it directly in the browser, you will see an empty page. You can add webfonts, meta tags, or analytics to this file. The build step will place the bundled scripts into the <body> tag. To begin the development, run `npm start` or `yarn start`. To create a production bundle, use `npm run build` or `yarn build`. --> </body> </html> This is so we get some Bootstrap styles and can change the title of the app. We replaced the title tag with our own and added the Bootstrap stylesheet link between the head tags. Source code: https://bitbucket.org/hauyeung/react-chart-tutorial-app/src/master/
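As a final sanity check, separate from the repository above, it can help to see the data shape react-chartjs-2 expects in isolation. The sketch below is only an illustration written against the same react-chartjs-2 2.x style API used throughout this article; the sample labels and numbers are made up purely to show the shape:

import React from "react";
import { Line } from "react-chartjs-2";

// Hypothetical sample data, just to show the labels/datasets shape the chart consumes.
const sampleData = {
  labels: ["2019-09-01", "2019-09-02", "2019-09-03"],
  datasets: [
    {
      data: [1.1, 1.11, 1.09],
      label: "EUR to USD",
      borderColor: "#3e95cd",
      fill: false,
    },
  ],
};

function MiniChart() {
  return <Line data={sampleData} />;
}

export default MiniChart;

Rendering <MiniChart /> anywhere in the app should draw a single line without any API calls, which makes it easy to confirm that the charting setup works before adding the exchange rate requests.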
https://medium.com/better-programming/how-to-add-graphs-and-charts-to-a-react-app-339ed2dc4c05
['John Au-Yeung']
2019-09-19 17:19:37.773000+00:00
['Charts', 'Graph', 'Programming', 'React', 'JavaScript']
Women Aren’t Crazy
For whatever reason, “crazy ex-boyfriend” doesn’t carry the same ring as its female counterpart. Maybe that’s because we hear “crazy ex-girlfriend” all the damn time. Some people insist that’s because women are genuinely bonkers. More bonkers than men, because our cultural narrative says that crazy men are the exception. Crazy women? Supposedly, we’re the rule. So women are routinely written off as being crazy throughout their lives. Oh, don't mind her--she's just overreacting. As little girls, we’re called crazy bossy and warned that’s a bad thing, even though men get to be bossy with few complaints. As teenagers, we girls are warned against being boy crazy, while boys are applauded for being little heartbreakers. Once we begin menstruating, we hear that periods and PMS also make us crazy. In motherhood? Don't be that crazy helicopter mom. And single ladies? We're in danger of being baby crazy, man crazy, or cat crazy. As much as people complain about the mere mention of toxic masculinity, feminists recognize that we’re not complaining about all masculinity. Only the toxic kind. But when it comes to calling women crazy, it’s not even about calling out bad female behavior. Instead, women are called crazy in an effort to keep us in line. To shut us up. After all, bitches be crazy. Right? In dating Women are under enormous pressure in dating to not appear crazy. Self-help books and articles are littered with advice for women explaining that many of their natural inclinations are wrong. In a world that incessantly cheers, "Be yourself," women have long been on notice that in the dating game it's all a rouse. Dating rules might change here and there, but through the centuries, women have been advised to behave less emotional to avoid scaring men away. Don't cry, don't talk too much, and never seem over eager. If you don't watch yourself, you'll surely behave too crazy. Even when the "don't act cray cray" advice ventures into reasonable tips for anybody wanting to avoid an unhealthy relationship, women are the ones singled out as if our gender is particularly prone to go off the deep end. Of course, it isn't just women who go off the deep end in dating--as every woman knows far too well. Men are just as able to exhibit needy or unhealthy behaviors. Domestic abuse statistics show that men are more than capable of "acting crazy" and getting away with it. On a less violent scale, the internet has made the proof of men's crazy all too easy to find. Dick pics, revenge porn, and text tirades are practically all par for the course and expected from men today--yet we don't collectively call men crazy in dating. Just women. Among the biggest red flags I've encountered in dating, are the men who claim that all of their ex-girlfriends are crazy. First of all, if that's true, it doesn't say anything positive about the man's judgment. In my experience, the men who complain about crazy exes take little responsibility for themselves, and often gloss over their own bad behavior which preceded a woman's alleged emotional collapse. Believe it or not, women do get sick of your shit, and when it comes to any relationship, they are best handled with care. Any man or woman who treats others like dirt in dating really shouldn't be surprised to receive an emotional reaction. But yes, we know--it's often easier to call someone else crazy rather than handling your own damn issues or working through your own shitty behavior. 
A man with a string of "crazy ex-girlfriends" may seem cliche in a culture that calls women crazy, but I’d say avoid them at all costs. In parenting I think we're all a little bit guilty of promoting the notion that mothers are crazy. People with perfectly kind and wonderful mothers often joke about having "crazy moms" who care too much. We roll our eyes at mothers who worry. As if all moms are unable to maintain healthy boundaries. New moms are frequently told to calm down. Just relax. *Eye roll.* When my daughter was a month old, I was convinced that she had tongue tie. But her pediatrician assured me I was just a worried new mom. "She’s fine," he said. "Quit worrying about everything--new moms drive themselves crazy." My daughter's father and pediatrician exchanged knowing looks whenever I discussed a concern. Here she goes again. Her dad also asked the pediatrician if it really mattered that our daughter was breastfed. He wanted some go-ahead to go against my wishes and take our daughter home to his new girlfriend for overnights. Nobody cared that it was already my choice as the mother and legal guardian. Once again, I was written off as an overprotective new mom. "Breastfed babies still have ear infections and other illnesses," I was told. I replied, "That's fine, but I'm still choosing to breastfeed." I caught plenty of more eye rolls. Colic, GERD, painful breastfeeding, food regression, along with speech and occupational therapy--every issue that came up and every decision I made was met with the same "crazy mom" eye rolls. And for a while, I even believed it myself. My daughter wound up needing an expensive frenectomy for her tongue tie at two years old. All because I believed for so long that I was a crazy new mom. And it’s not over. I still run into family, friends, strangers, and professionals who try to convince me that any particular choice I've made for my daughter is silly or stupid. Don't pay attention to her--she's just a crazy single mom. In healthcare It isn't just pediatricians telling women to calm down. Healthcare offers an entire history of labeling women as crazy. If a woman has her uterus surgically removed, it’s called a hysterectomy. But the words "uterus" and "hysterectomy" come from the Greek word hystera which means womb. That, of course, leads us back to hysteria, which the experts used to believe was a female mental disorder stemming from our anatomy. To be hysterical was to be sick in the uterus, and physicians recommended marriage as the best cure for a hysterical woman. Nothing a steady dose of penis can’t fix. Of course. Eventually, 19th-century doctors began giving women orgasms (which they called paroxysms) through "genital massage" as the treatment for female hysterics. Their hands cramped up so much that one doctor invented the mechanical vibrator as a result. That’s right — vibrators were invented to help save men some trouble. It doesn't even end there in the land of history. Even today, women battle the label "crazy" and fight to be heard by their doctors. As everybody knows, "overemotional" is just one more way we call women crazy to set their concerns aside. We might talk about equal rights being so certain, but physicians notoriously take women's pain less seriously. In the workplace Thank God I have no boss these days because I am entirely done with male managers who can’t take female employees as seriously as males. It’s 2019, and many of you don’t think this happens anymore. 
Yet I just got out of a company that told me to quit being so emotional when I offered reasonable feedback. Smile more! Catch those flies with honey! Male managers--no, not all but far too many--love to tell a female employee that she’s unprofessional every time she tells them something they don’t want to hear. We’re not supposed to disagree, discuss our pay and worth, or bring up uncomfortable issues. If we do, we get eye rolls. We get passed up for promotions and labeled as crazy or any number of hysterical synonyms. Women aren't crazy In every arena of life, when you call women crazy, what you're really saying is that women aren't to be heard or trusted. That we don't know what we're talking about. Except that women aren't crazy. Men aren't crazy. "Crazy" is a useless way to shut somebody down, and it's time we quit letting the notion slide that poor men throughout history have been saddled with the arduous task of getting women to calm down, chill out, and overcome their "irrational" ways. Crazy is not a female affliction, so let’s quit telling women they’re nuts.
https://medium.com/awkwardly-honest/women-arent-crazy-5e59fb0e8e6d
['Shannon Ashley']
2019-02-05 09:47:18.439000+00:00
['Culture', 'Women', 'Feminism', 'Health', 'Life']
Movie Categorizer With Binary Search Tree
flow diagram Hi, The aim of this project is to make it easier for users to rate and categorize the movies they have watched in each genre. The program lets the user rate movies and then lists them in a tree so the user can look up their ratings again later. The program has 5 main functions: adding a movie to the list, seeing all movies in the tree, updating a movie, deleting a movie, and the "save and exit" function. Linked lists are also used in this project. The idea came from wanting to make an offline version of IMDB: if you watch a lot of films and want to keep your own rankings, this project is for you. The program uses two different data structures, a binary search tree and a linked list. The binary search trees hold the films, and the linked list holds the binary search trees, one for each kind of movie. So there are 5 different binary search trees, one per genre; a rough sketch of this layout is given at the end of this post. Program Functions 1-)Adding a Film To The Tree: Add film In the start menu, to add a movie to the list, the user enters 1 to run the "Press 1 to add a new film" function. After that, the movie genre is chosen by pressing the button assigned to that genre (sci-fi, horror, comedy, drama, romance). Once the genre is chosen, the user enters the movie name, the movie year, and their rating for the movie, in that order. The movie is then added to the tree, the program prints "Film, year and rate added to the sci-fi tree", and it returns to the start menu. 2-)See All Movies In The Tree: See all movies In the start menu, to see all the movies in the tree, the user enters 2 in the console to run the "Press 2 to see all films in the tree" command. This command lists all movies with their year and rating, each under its own genre. The program then prints "All films listed" and returns to the start menu. 3-)Updating a Movie Already In The Tree: Update In the start menu, to update a movie's name, year, and rating, the user enters 3 to run the "Press 3 to update film" command. The user then enters the movie's genre, followed by the movie's previous name and previous rating, and can then enter the movie's new name, year, and rating. After that, the program goes back to the start menu. 4-)Deleting a Movie From The Tree: delete To delete a film from the list, the user enters 4 to run the "Press 4 to delete film" command. After that, the user chooses the movie's category, then enters the movie's name and rating, in that order. This deletes the movie from the tree and returns to the start menu. Wrong Command When the command entered in the console is wrong, the program does not throw an exception; instead, it tells the user about the mistake. wrong command 5-)Save & Exit Save & Exit Simply put, when the user enters 0 to run the "save and exit" command, the program converts all the trees to strings and saves them in a txt file. The next time the program starts, it reads the txt file and turns the contents back into trees. That is all. Thank you for reading! If you want to take a look at the full project, you can visit my GitHub account here: https://github.com/bllhlskr/School-Project-Movie-categorizer-with-Binary-SearchTree
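To make the data layout above a bit more concrete, here is a rough TypeScript sketch of a single genre tree. It is only an illustration written for this post, not code from the repository linked above: the actual project may be written in a different language, may key its tree on a different field, and chains its genre trees in a linked list rather than the Map used here. The film names and the alphabetical-by-name ordering are assumptions.

interface Film {
  name: string;
  year: number;
  rating: number;
}

class FilmNode {
  left: FilmNode | null = null;
  right: FilmNode | null = null;
  constructor(public film: Film) {}
}

class FilmTree {
  root: FilmNode | null = null;

  // Insert a film, ordering nodes alphabetically by name (an assumed key).
  add(film: Film): void {
    const node = new FilmNode(film);
    if (this.root === null) {
      this.root = node;
      return;
    }
    let current = this.root;
    while (true) {
      if (film.name < current.film.name) {
        if (current.left === null) { current.left = node; return; }
        current = current.left;
      } else {
        if (current.right === null) { current.right = node; return; }
        current = current.right;
      }
    }
  }

  // In-order traversal prints the films in sorted order, like "see all films in the tree".
  listAll(node: FilmNode | null = this.root): void {
    if (node === null) return;
    this.listAll(node.left);
    console.log(`${node.film.name} (${node.film.year}) - ${node.film.rating}/10`);
    this.listAll(node.right);
  }
}

// One tree per genre, matching the five genres described above.
const genres = new Map<string, FilmTree>([
  ["sci-fi", new FilmTree()],
  ["horror", new FilmTree()],
  ["comedy", new FilmTree()],
  ["drama", new FilmTree()],
  ["romance", new FilmTree()],
]);

genres.get("sci-fi")!.add({ name: "Interstellar", year: 2014, rating: 9 });
genres.get("sci-fi")!.add({ name: "Arrival", year: 2016, rating: 8 });
genres.get("sci-fi")!.listAll();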
https://medium.com/dev-genius/movie-categorizer-with-binary-search-tree-5161ee824fd1
['Halis Bilal Kara']
2020-06-16 08:06:43.940000+00:00
['Software Development', 'Linked Lists', 'Software Engineering', 'Binary Search Tree', 'Data Structures']
When a Cancer Surgeon Becomes a Cancer Patient
For me, one of the hardest parts of getting this diagnosis, I think, was who to let know. I decided I had to tell my work colleagues in breast oncology, my family, my close friends, and my fellow MIT classmates and professors. People reacted to hearing my diagnosis in different — sometimes disappointing — ways. But I learned how much so many people cared about me, and that allowed me to ignore the reactions of those who let me and my family down. Many patients were already scheduled to have their surgery with me. Until I started treatment at the end of the week, I operated on as many as I could. The rest I had to call and share my news with. These were some of the hardest calls I have ever made. I was letting my patients down. After my “last” case, I let some members of the operating room team know about my diagnosis. The news spread like wildfire. Everyone knew I had cancer now. A few days later, it was time for my port-a-cath to be placed for chemotherapy to start. The port-a-cath is inserted under the skin so that drugs can easily reach a person’s veins. So many of my patients have this done, and I’ve always thought of it as such a minor procedure. Well, it turns out I hated it. It became a badge of sickness for me, something that constantly reminded me that I had cancer. I could see the port, and I constantly felt it. When I get back to practicing, I will remember the feeling. I was told the hospital would call me to let me know what I should do to prepare for the port-a-cath placement. I waited, but no one called. So I had to call them, twice. How could they forget? So often we hear about the need for patients to take charge of their own care; it was strange to experience it on the other side. As a physician, I just assumed the systems work. When I arrived at the hospital a couple days later, my provider went over all the pros and cons of having a port-a-cath placed. I couldn’t help but think, “Do I really have a choice to say no?” The procedure went fine, and when I awoke from the sedation, I was craving a cup of coffee. It was one of the last cups of coffee I’d enjoy for awhile. For 20 years, I used to drink a cup a day, but during the many months of receiving chemotherapy, I couldn’t stomach it, not a single cup. The day after the port placement, despite the pain and still recovering from the procedure, my wife and I drove up to Maine. I had accepted an invitation to give a presentation at a major cancer conference there a year prior to my diagnosis. I sat through a panel of esteemed cancer surgeons who spoke about various treatments for lung cancer, breast cancer, and, of course, “my” cancer. When it was my turn, I stood in front of the audience and began my usual lines of how great we are at treating cancer — all the data and statistics. No one knew I was sick, but I couldn’t move my head to the right. Each time I tried, I felt the port-a-cath, and it hurt. I was trying to hide my cancer, but it was sticking its ugly head out. Despite covering my bandages that stretched to my neck and biting through the pain, the cancer was winning that day. As fall turned to winter, I started to notice things I hadn’t seen when I had been busy studying for exams, training to be a surgeon, or working. I noticed the leaves change color and fall as we took walks in the neighborhood. I found myself wondering if I’ll experience the seasons change again next year. 
Of course I will, but sometimes it’s hard to believe much of anything in those moments when all you know is pain and helplessness — and I will never forget that. Today, nine months after my cancer diagnosis, I’ve completed all treatment, and I am cancer free. I’ve managed to complete my MBA program, and I graduated June 7. I’m making many promises to my future patients and to myself, including to always remember what this was like. I will not forget how terrifying it was to be told you have cancer. How painful each needle and catheter felt, how chemotherapy ruined my appetite, what it was like when my providers forgot to call. I’m still learning so much as a physician, now in a new way.
https://elemental.medium.com/when-a-cancer-surgeon-becomes-a-cancer-patient-3b9d984066da
['Mehra Golshan']
2019-06-25 14:34:41.579000+00:00
['Health', 'Bod', 'Doctors', 'Cancer', 'Medicine']
The Role of Politics in Leadership
It's not commonly known, but the actual reason Rome fell was that baseless rhetoric caused irrational behavior in leaders. The ability to argue (and win) was considered the ultimate virtue. (To some degree this remains true in cultures in the region.) Politicians used rhetoric to gain control over the population. This lack of integrity with respect to what was good for the Roman populace resulted in the collapse of their society. Politics is a necessary evil, needed to gain kinship in order to obtain a position of authority. Authority is given as a vehicle to lead the populace. However, after the leadership position is obtained, politics no longer has a place. This transition from running for office to running the office no longer occurs in the U.S. In fact, all elected officials spend many hours per week trying to get reelected. A few have noted the irony of trying to find time to do the job they were "hired" to do while moonlighting to satisfy the machine that got them into office. The stupidity of a system like this is obvious, but it is also unlikely to change. The fundamental characteristics of a good leader are integrity and the ability to influence. A bad leader uses manipulation. Navigating the political process to obtain a leadership position through influence is difficult, but not impossible. Attack ads are used in nearly all modern political campaigns as a means to manipulate voters. Some ads don't even identify the person running for office, focusing instead on the opponent. This is incredibly stupid and often ineffective. Why? Because, subliminally, the human brain does not understand "not". The subconscious is an emotion engine that only understands nouns. Therefore, when an attack ad mentions an opponent, it is actually advertising for the opponent. Often, the end of the ad will show the candidate in a pleasing, trustworthy setting to provide an immediate alternative. This doesn't always work. It would be far better to repeat the candidate's name without referring to the opponent at all; voters' subconscious would then focus only on the candidate. Candidates who resort to these forms of manipulation are unlikely to be good leaders. They will use manipulation to get what they want and act as proxies for what their supporters want. They are OK with the ends justifying the means. So long as they do the things they were "hired" to do, they'll have support. That is not leadership. But it has become normal; it's not even new. In closing, I urge support for people who work through influence to overcome manipulative forces in leadership. There will always be a battle between "good" and "evil", but we're here to do good, so let's not give up on it. Maybe fewer people will be out of their minds as a result.
https://medium.com/out-of-your-mind/the-role-of-politics-in-leadership-24c05fcd2d17
['Joe Bologna']
2020-11-08 01:57:45.114000+00:00
['Society']
Creating a Task Queue with TypeScript
I have always wanted to write a super basic task queue. I didn't have any specific language that I wanted to implement it in, so I just decided to go with TypeScript. Before we carry on, I would just like to say that the code that follows is untidy and does not always use types. For example, the variable that holds all the queue items should have a type, but it is just a normal JS object. OK, let's get into the code. Queue I am going to start off with the Queue class. It is extremely basic. It has one property and two methods. The property holds all the tasks, or "queueItems" as I have called it. This is just a simple blank JavaScript object. The next part of the code is the "addQueueItemForTopic" function. All this does is take a queueItem of type IQueueItem (we will get to this later) and a topic. A topic is just a string which will be a key in the "queueItems" property. The queueItem parameter will be the task that needs to be run. The next function is "processItemsForQueueTopic". This basically does what it says: it just runs the code for each task or "queueItem" that the queue currently has. class Queue { queueItems = {}; addQueueItemForTopic(queueItem: IQueueItem, topic: string) { if (this.queueItems[topic] === undefined) { this.queueItems[topic] = [queueItem]; } else { this.queueItems[topic].push(queueItem); } } processItemsForQueueTopic(topic: string) { for (let item of this.queueItems[topic]) { item.main(); } this.queueItems[topic] = []; let numberOfItemsLeft = this.queueItems[topic].length; if (numberOfItemsLeft > 0) { console.log("Number of items left in Queue for topic " + topic, this.queueItems[topic].length); } else { console.log("No more tasks for topic " + topic); } } } let queue = new Queue(); QueueItem Now we get onto the "IQueueItem" interface. // Interface that an item added to the queue needs to conform to interface IQueueItem { main<T>(something?: T); } This is what allows the "processItemsForQueueTopic" function to run. Every item that gets added to the queue needs to conform to this interface. It has a single method called "main". When you create an object that conforms to this interface, all the processing that you want to happen will happen in this "main" function. When the "processItemsForQueueTopic" function gets called, it will loop through every item in the queue for the topic that you have specified and call the "main" function on each "queueItem" or task. Creating the tasks Creating a task is super simple. It just needs to conform to the IQueueItem interface. let task1: IQueueItem = { main: function<T>(something?: T) { let calculation = 1 + 1; console.log(calculation); } } The reason the task needs to conform to the IQueueItem interface is that it needs to have the main function you see above. That function will hold all the logic you need for processing whatever you need to process. Then, when you tell the queue to process the items for a specific topic, it will run through all the items for that topic and call that main function on each task, which will do all the processing that you wanted it to do.
Example of another task, this time with "email" as the topic: let task2: IQueueItem = { main: function<T>(something?: T) { let email = "test@email.com"; console.log("Send mail to:", email); } } Adding to the queue Now all that is needed is to add the tasks to the queue like so: queue.addQueueItemForTopic(task1, "calculation"); queue.addQueueItemForTopic(task2, "email"); That is pretty much all one needs to do to add an item to the queue. After that, we just tell the queue to process the items, which calls each task's main function. Processing the tasks in the queue queue.processItemsForQueueTopic("calculation"); queue.processItemsForQueueTopic("email"); When we want to process all the tasks for a topic, we just call the processItemsForQueueTopic function with the topic that we want it to run for, and it is done. That will run all the tasks that we have added to the queue for that topic. Please note that this is an extremely basic version of a queue. Many other features could be added to make it much better and actually usable; one possible direction is sketched below. Let me know if there is something that I can improve with this code in the comments!
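As a rough illustration of that possible direction (this is my sketch, not part of the original post): the untyped queueItems object mentioned at the start could be given a proper type with Record<string, ...>, and processing could be made async-friendly by widening main's return type so that tasks returning promises are awaited. To keep the sketch self-contained, it defines its own IAsyncQueueItem interface rather than reusing IQueueItem, and it assumes nothing beyond standard TypeScript:

interface IAsyncQueueItem {
  main<T>(something?: T): void | Promise<void>;
}

class TypedQueue {
  // A typed map from topic to its pending tasks, replacing the plain {} used above.
  private queueItems: Record<string, IAsyncQueueItem[]> = {};

  addQueueItemForTopic(queueItem: IAsyncQueueItem, topic: string): void {
    if (this.queueItems[topic] === undefined) {
      this.queueItems[topic] = [];
    }
    this.queueItems[topic].push(queueItem);
  }

  async processItemsForQueueTopic(topic: string): Promise<void> {
    // Take the current batch and clear the topic before running it.
    const items = this.queueItems[topic] ?? [];
    this.queueItems[topic] = [];
    for (const item of items) {
      await item.main(); // synchronous tasks resolve immediately, async ones are awaited
    }
    console.log("No more tasks for topic " + topic);
  }
}

// Usage mirrors the original queue:
const typedQueue = new TypedQueue();
typedQueue.addQueueItemForTopic(
  { main: async () => console.log("Send mail to:", "test@email.com") },
  "email"
);
typedQueue.processItemsForQueueTopic("email");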
https://medium.com/quick-code/creating-a-task-queue-with-typescript-3993ed2cc303
[]
2018-02-02 17:07:24.874000+00:00
['JavaScript', 'Web Development', 'Front End Development', 'Typescript', 'Frontend']
What is a LEEP Procedure and Why Do I Need It?
Loop Electrosurgical Excision Procedure for abnormal pap smears Photo by Jonathan Cosens Photography on Unsplash The doctor called. She says I have precancerous cells on my cervix. I am so scared. What if it is cancer, and what is a LEEP procedure? LEEP stands for Loop Electrosurgical Excision Procedure. It’s a treatment to prevent cancer after precancerous cells are identified during cervical cancer screening. Precancerous cells are caused by HPV, the human papillomavirus. 80% of Americans will contract HPV, making it the most common sexually transmitted infection. HPV causes genital warts, and persistent strains lead to cervical, vaginal, anal, throat, and neck cancer. Despite screening programs, 4,000 US women die from HPV related cervical cancer annually. A LEEP procedure saves lives. A small wire loop is used to remove abnormal cells from your cervix. The thin wire loop is attached to an electrical current to cut away the top layer of cervical cells and remove the effects of HPV. We detect HPV effects during routine paps smears, the first step in cervical cancer prevention. When someone has an abnormal pap smear, the next step is a diagnostic procedure called a colposcopy. A colposcopy is an office procedure that allows your doctor to visualize the cervix more closely using a microscope. The colposcope identifies abnormal cervical tissue that cannot be seen with the naked eye. Areas of the cervix concerning for pre-cancer or cancer can then be biopsied (sampled) during the exam. If the biopsy shows a precancerous lesion then, your healthcare provider may recommend a LEEP (loop electrosurgical excision procedure.) Where is a LEEP Procedure performed? A Loop electro excision procedure can be performed in a variety of settings. Most commonly, Obgyns perform this procedure in the office setting. The office, surgery center, or hospital are all reasonable and appropriate surgical settings. Photo by Pam Sharpe on Unsplash Can my family visit me? Most LEEP procedures are performed in a medical office setting. A trusted family member should drive you to and from the appointment. If the procedure is done in a hospital or Ambulatory Surgery Center, your family is welcome to stay with you before and after the procedure. Does my procedure require an anesthetic? Anesthesia is required for a LEEP procedure. The type of anesthesia will vary depending on the surgical setting, the surgeon’s experience, and the availability of office equipment. Oral sedation, paracervical block, IV sedation, and general anesthesia are all potential anesthetic options. In the office setting, anesthesia is provided via a paracervical anesthetic. A paracervical block is an anesthetic technique done by a gynecologist to numb the uterus. Medication is injected into the cervical tissue to reduce pain during surgery. For a LEEP procedure, a medication called epinephrine is mixed with the anesthetic to reduce the risk of intraoperative bleeding. Some gynecologists also recommend oral medication to reduce anxiety. What’s the procedure when I check-in? Most surgeries will involve a preoperative visit with your surgeon. The risks and benefits of the procedure will be discussed in detail and questions regarding your procedure are discussed. The surgical consent form is reviewed, signed, or updated with any changes. In most settings, patients will receive a preoperative phone call by a nurse or medical assistant one to two days before surgery. If any blood work or preoperative testing is required, it will be scheduled and confirmed. 
When a LEEP procedure is performed in an office setting, the experience will feel like a normal office visit. After checking in, you will be taken to a procedure room. The medical assistant will prepare the room and provide a gown or leg coverings. When all is prepared, your surgeon will come and review any last-minute questions. If a LEEP is scheduled in a hospital or Ambulatory Surgery Center, the staff will guide you to the preoperative holding area to change into a surgical gown and store your valuables. If an IV is required, it will be placed at this time. You will meet the nursing team who will provide care during your stay. The anesthesia team will come to interview you and answer questions. Typically your surgeon will also come and review any last-minute questions. What happens in the operating room? For an office-based procedure, your surgeon will help position your legs into the stirrups. A speculum is placed into the vagina to allow visualization of the cervix, the opening of your uterus located at the back of the vagina. The cervix is cleaned to make the area sterile. A paracervical block anesthetic is then gently injected into the cervical tissue. The medication absorbs into the surrounding area to numb the nerves and make the procedure more comfortable. The surgeon selects the appropriate sized LEEP wire to match the size and appearance of your cervix. Because a low dose electrical current is used to do the cutting, a grounding pad is placed on the outside of your leg. The doctor will take extra precautions to ensure an adequate and safe view of the cervix. A grounding pad is placed on the outside of your leg. Once all preoperative safety checks are confirmed, the surgeon will activate the electrical current to pass the wire across the top layer of the cervix. This action removes a small, pancake layer of cervical cells. This specimen is sent to a pathologist for analysis. The electrical current is then used to stop any bleeding through a process called cauterization. Often, a drying chemical called Monsel’s solution is painted onto the cervix to prevent bleeding later on. This chemical is messy and will cause a brown, coffee-ground vaginal discharge over the next few days. In the hospital setting, things function a little differently. After the preoperative evaluation, the team will guide you to the operating or procedure room. You will move from the mobile bed to the operating table. Once you are positioned comfortably and safely, the anesthesiologist will give you medication through your IV if the procedure is being done outside of the office setting. The OR nursing team will cover your body with sterile drapes and prep the vagina for surgical sterility. The team then performs a “surgical time-out.” A surgical safety checklist is read out loud requiring all surgical team members to be present and attentive. The surgeon then performs the surgical procedure as described above. Once the procedure is complete. A post-procedure review is done together as a surgical team. All instruments and equipment are counted and verified. Once complete, the anesthesiologist will begin to assist the patient in waking up for transfer to the recovery room. How long will I be in the operating room? Once the patient enters the operating room a series of safety steps must occur. This process takes about 20 minutes. A LEEP procedure takes approximately 10–15 minutes of surgical time. 
This includes the surgical time as well as accounting for positioning, the speculum insertion, a paracervical block anesthetic, and removal of the instruments. When can I go home? After an office-based LEEP procedure, patients may go home after getting dressed as long as you are feeling normal. Hospital-based procedures under general anesthesia will follow a different process. Postoperative recovery time will vary from person to person. Each patient must meet certain discharge criteria. The patient’s vital signs must be stable. The patient must be alert, oriented, and able to walk with assistance. Postoperative nausea, vomiting, and pain must be controlled as well as confirmation of no postoperative bleeding. The nursing team will go over discharge instructions, and the plan for postoperative pain management options will be confirmed. LEEP procedures require a minimal amount of postoperative recovery. Patients are often discharged as early as 30–60 minutes after the procedure. What is the usual recovery time You should be able to resume all work and household activities the day after your procedure. You should expect to feel a little vaginal soreness for 2–3 days. Mild uterine cramping is also common. Some patients will require mild pain medication like NSAIDs or even low dose narcotics for a brief period of time. It is wise to wear a sanitary pad for a few days as you may experience vaginal spotting or dark vaginal discharge. You will be instructed to abide by pelvic rest for approximately one week. This includes no douching, no sex, and no tampons. You should call your doctor if you experience heavy vaginal bleeding, fevers, or worsening abdominal pain. What aftercare is required? Most women should be able to return to normal daily activities the next day. You should speak with your physician regarding the resumption of sexual activity. Typically, the recommendation is no intercourse for 1–2 weeks. You should not use tampons for up to seven days after the procedure to reduce the potential risk of infection. Light bleeding, spotting, and brown or black discharge is common and expected. Sanitary napkins are advised. Your doctor will schedule a postoperative examination to evaluate your cervix 1–2 weeks after the procedure. The cervical specimen pathology report will be reviewed during this visit. A follow-up pap smear will be scheduled to confirm all of the abnormal cells have been successfully removed and do not come back. Photo by Brooke Lark on Unsplash Danger Signs to look out for after the procedure After a LEEP procedure, we expect light spotting and vaginal discharge. If you experience heavy bleeding, abdominal or pelvic pain, a fever, or pain that increases over time beyond 24 hours, call your physician. After any surgery contact your physician if you meet any of the following criteria: Pain not controlled with prescribed medication Fever > 101 Nausea and vomiting Calf or leg pain Shortness of breath Heavy vaginal bleeding Foul-smelling vaginal discharge What should I pack at home to take with? Nothing special is required after a LEEP procedure. A supply of sanitary napkins will help keep your clothing clean. What information should I provide to my doctors and nurses? It is very important to provide your doctor with an updated list of all medications, vitamins, and dietary supplements prior to surgery. All medication and food allergies should be reviewed. 
Share any lab work, radiologic procedures, or other medical tests done by other healthcare providers with your surgeon prior to your procedure.
https://medium.com/beingwell/what-is-a-leep-procedure-and-why-do-i-need-it-f0b2b798036c
['Dr Jeff Livingston']
2020-07-03 02:33:44.286000+00:00
['Womens Health', 'Cancer', 'Women', 'Surgery', 'Health']
Z- Statistics, T-Statistics, P-Statistics are Still Confusing you?
Z-Statistics, T-Statistics, P-Statistics are Still Confusing you? Definitions and concepts in Statistics for machine learning Photo by Ruthson Zimmerman on Unsplash Understanding statistics can feel like a side road running parallel to data science and machine learning. Learning it is worth the detour, though, because statistics is what lets us draw inferences and solutions from data, and it deserves a place in our daily practice even if many people skip it. In this article we will discuss the Z, T and P statistics distributions and try to learn why we use them in data science. Before diving into that concept we will cover some basic definitions and terms as shown below: Topics to be covered: Section 1: Types of Data, Histogram and Scatter plot Section 2: Central Measure values and Measures of Spread Section 3: Covariance and Correlation Section 4: Z, T — Distributions and confidence intervals Section 5: Hypothesis and P — Distribution Section 1: Types of Data Acquiring knowledge about statistics is a necessity in a data science career. Before jumping in we should know what we are dealing with: obviously, "DATA". Data will not come to you and announce its inferences; we need a way to deal with its different types. Data is made up of numbers and words that can be in measurable or observational form. We cannot apply the same operations to every kind of data, so we first need to identify the type and, based on that, decide how to test and visualize it. Numerical Data: This type of data deals with numbers in quantitative form, either discrete or continuous. Discrete data consists of whole numbers (2, 10, 20, 15, etc.) that directly specify a quantity we can easily count. Continuous data falls within some particular measuring range (kg, km, cm, etc.). Categorical Data: This type deals with qualitative data that we can describe. It comes in groups of two or more different categories. Example: binary values (0 and 1). Nominal Data: These have no order but show groups or categories, like seasons, brand names, flower names, etc. Ordinal Data: These are values we rank or put in some order, such as ratings. Histogram and Scatter plot If the data is very big we cannot sit all day and check every record to extract information. That's where graphs and plots come into the picture. Different types of plots are bar, line and pie charts, histograms and scatter plots, etc. Histogram Histograms are similar to bar charts, but in a histogram each bar covers a range of values, i.e. continuous data. Bar charts can have gaps between bars; a histogram doesn't. Difference between Bar and Histogram Chart. A photo from mathisfun Scatter Plot The scatter plot shows the relationship between two variables in the record. Scatter plot between Time and Marks. A photo by Author Section 2: Central Measure values When we want to summarize a big record we choose one value that is representative of all the others. So, for numbers, we choose a typical central value through the Mean, Median and Mode. Measures of Spread The Range When we arrange the data in ascending order, the difference between the largest and the smallest number is called the range. Mean Deviation This spread tells us how far the values lie from the measured central value. First, take the mean of the values. Then take the difference between the mean and each value.
Then take the average of those distances, ignoring their sign; that average is the mean deviation. Example:
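To make these measures concrete, here is a minimal Python sketch; the marks below are made-up numbers, not data from the article:

```python
import statistics

# Made-up exam marks, purely for illustration.
marks = [35, 50, 50, 60, 75, 90]

mean = statistics.mean(marks)      # 60
median = statistics.median(marks)  # 55.0
mode = statistics.mode(marks)      # 50 (appears twice)

# Range: largest value minus smallest value.
data_range = max(marks) - min(marks)  # 90 - 35 = 55

# Mean deviation: average absolute distance of each value from the mean.
mean_deviation = sum(abs(x - mean) for x in marks) / len(marks)  # 15.0

print(mean, median, mode, data_range, mean_deviation)
```

Here the values sit, on average, 15 marks away from the mean of 60, which is exactly what the mean deviation summarizes.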
https://medium.com/towards-artificial-intelligence/z-statistics-t-statistics-p-statistics-are-still-confusing-you-87557047e20a
['Amit Chauhan']
2020-12-20 01:03:18.644000+00:00
['Machine Learning', 'Artificial Intelligence', 'Data Visualization', 'Data Science', 'Programming']
That’s how you were born girl!
If she cooks, she is docile If she doesn’t, she is a rebel She parties with her friends, she is loose The one who comes home after work always, she is a bore You do this, you do that You do it all or you don’t give a damn You will always be labelled and judged You’ll be forced to feel guilty Step back and shrug off all these labels Look inside and find yourself Peel away each frame one by one The one given by your parents The one gifted by friends The one that is holding you tight… Who are you? What do you want? What are your likes? What are your dislikes?
https://medium.com/polar-tropics/thats-how-you-were-born-girl-45fa9f6153f3
[]
2019-11-22 05:32:17.270000+00:00
['Feminism', 'Love', 'Poetry', 'Society', 'Self Improvement']
Marketers: We Don’t Need More Data!
Remember the industry in 2018? You know, last year? Facebook was facing Congress — privacy issues were front and centre. Amazon’s ad growth was making Google look over its shoulder. And we were just one unexpected ad away from unplugging Alexa. And yet! We marketers continued to drone on about the promised and predictable land of data-driven, AI-supported marketing. ROI-positive from hour one! Zero-waste — real-time — hyper-efficient! Substitute ‘2018’ above with any of the past ten years and you’ll see the unhappy pattern. Yet throughout, ad tech has stubbornly sold us on the perfect promised land of more data. We’re a few cold weeks into 2019 and … …Odds are, the answer to improved marketing is not more data. Because the reality is most marketers are challenged to use the data we already have in this fragmented industry. So how do we solve for that? Some might say you need proper Business Intelligence. But what does that even look like? Walk with me for a bit, let’s tackle the issue together. First stop: Business Intelligence (BI) In the world of data processing, the term BI has its own, software-centric definitions. You can look it up here or here. For our purposes today, BI is what takes business from having data to getting results. That’s quite a big leap. We’ll take it step by step. Data, Insights, Stories, Results At the risk of oversimplifying, let’s agree there are 3 types of organizations. Each is a little more mature with their data: Beginning: businesses who have the data, or at least access to it Little further ahead: Those who can draw insights and stories from their data Achievers: Those who can drive results from those stories. I propose that access to even more data can slow your progression along this path. Unless you know what to do with it. What does that mean? Well, let’s start with the firehose problem. Drinking from that ol’ firehose In 2019 most marketers have more input signals than they know what to do with! Because market feedback is abundant and immediate. But saying we have access to data when we only have a way to record input — say an ad platform — is misleading. It’s the equivalent of opening a firehose and saying you have access to running water. You don’t. What you have is a flood. You have access to data when you can tap into the information you need when you need it. And how do you do that? By making sure your infrastructure is connected. Hello, integration! By making sure your data is filtered. Filtered for the right situation and for the right audience. What is presented and relevant to the C-suite should be different from what is used in day-to-day marketing — i.e. cold water to drink, hot water to shower. Simple, right? Only when you start to connect and filter are you in a position to move ahead a step on your path to excellence. Consider a simple PR campaign. It will likely have some mainstream press activity, a paid social component, and some influencer outreach. Yet a campaign like that spikes my cortisol levels, just thinking about the data gaps and blind spots. Without a conscious effort to connect and filter, you’ll be left shrugging your shoulders when asked the critical questions. Now, you can use your team’s analytics skills to draw insights. If you haven’t already, sharpen the analytical blade in your marketing utility belt, that thing that allows you to make meaning from the numbers. Training and education for your team are key. Making room for roles like data analyst on your team helps, too! 
Finding patterns and starting dialogues With your dedication to your team’s skills and a clear and filtered view of your data, you are going to start seeing patterns over longer time periods. Patterns in customer behaviour that span across days, weeks, months, maybe even years of interactions with your brand. Think back to our PR campaign. How does it fit into the overall marketing ROI calculation, year-over-year? How did it affect the customer, the brand, and more importantly the bottom line — not just this quarter but beyond? Answers begin to pop when the right people are looking at the right information. But this is just the start. From there, you can spark dialogues with other business functions such as finance, and accounting, and operations … and HR. I have long advocated that marketers need to have a better understanding of all business disciplines. I’m now saying that we not only need that, we need to deeply speak their language. We need to access their data, analyze it, report it, draw it, chart it, digest it. For example: If we want to tell the story of our brand success, we need to know the experience of front line staff. To illustrate and prove the profitability of our advertising, we have to be able to speak to the underlying accounting. Or maybe your e-commerce advertising is not profitable because your shipping and fulfilment operations aren’t locally optimized! Sidenote: this was actually the case with one of our clients, and I take particular pride that we helped them solve for that. In sum, if we want to identify and articulate the true story behind our data, we need to be, hire, and train true business analysts. And yet, that is still not quite enough. We also need to spread the word. From bonfire to boardroom: educate and set expectations If all business disciplines came together on a beach around a bonfire, singing and telling our data-driven stories to each other … we wouldn’t produce any real, tangible results. Because producing results needs one more thing: action. And action — at least in the context of an organization — comes from leadership. Raise your hand if your sound analytical decision-making has ever been overruled by someone else’s ‘instinct’ in a boardroom. Mine seems to be perennially raised — I was a foolish outspoken junior in our industry and I’m not sure I’m cured yet. The reality is that many business decisions are still made based on what feels right. You might say that’s outdated, or “how could they?” But stay with me here — it’s not them — it’s you. We as marketers need to: Educate our leaders Champion a culture of intelligence-based decision-making Assist in the making of sound data-driven decisions. That means helping CEOs and decision-makers understand and manage marketing numbers. It means iteratively educating, setting expectations, and having thoughtful worst-case scenario conversations in the boardroom. It also means being critical of our own numbers so that we are better prepared to answer the tough questions. At the end of the day, marketers can be business leaders only when we move our organizations from data to results. Not by investing more in our data. But by investing more in our people.
https://medium.com/empathyinc/marketers-we-dont-need-more-data-bf5802ce2be5
['Mo Dezyanian']
2019-03-15 13:21:46.572000+00:00
['Marketing', 'Analytics', 'Digital', 'Data', 'Advertising']
KennyHoopla dives gracefully into the pop punk world with new single “Estella”
KennyHoopla dives gracefully into the pop punk world with new single “Estella” The Wisconsin artists new single brings a fresh new voice and energy into pop punk, teaming up with Blink-182’s Travis Barker KennyHoopla’s gigantic ambitions have really come to fruition this year. After releasing a couple singles, including his smash hit “how can i rest in peace if i’m buried by a highway?//” and his EP of the same name he’s utilized this quiet year for music to shout his moody and electrifying music into the internet’s memory. His EP and it’s self titled single have helped propel him into the forefront of the alternative scene just this spring. His videos for his singles have amassed millions of views along with the live performances of the title track and “plastic door//” (my personal favorite by the artist) both garnering attention. KennyHoopla seems primed to become a staple in the alternative scene with his charm, creativity and eagerness to avoid categorization. The new single by KennyHoopla out this week sees the artist once again toying with a whole new sound and, dare I say, a new genre. While Kenny’s music beforehand has blended the moody shoegaze guitars with hip-hop and electronic to create an energetic and introspective sound that is all his own, Kenny’s new single “Estella” brings the artist fully into the familiar pop punk soundwave. That’s not saying the artist is taking a step back however, rather Kenny is utilizing this time to approach a genre he has not dipped his feet in and exploding out with a bombastic and highly infectious pop punk rager. The artists spotlight shows on this track as it features production and drum work from Blink-182’s Travis Barker, and his work brings Kenny’s lyricism and vocals higher than they ever soared before. Pop punk has seen a lot of ups and downs over the last 20 years. With Warped Tour coming and going and giving the genre a humongous platform to grow it fell once again into stagnation by the late 2010s as bands such as The Story So Far and State Champs were releasing the same record each year and making fans bored. Pop punk seems to be chasing its tail these days looking for new ways to expand its ever growing reach. The expansion it needed has come in the most unexpected ways this year with hip-hop artists citing their affinity for the genre. Emo rap has been on the rise in the last couple years since Lil Peep’s death and with that has come artists embracing the “emo” music of the 2000s and even incorporating it into their own albums. Machine Gun Kelly most recently released his long awaited pop punk album Tickets To My Downfall in September and in a way the album confirms the suspicion that hip-hop can mix with any genre if put in the correct artists hands. KennyHoopla continues that movement with his new single “Estella” boasting the loud and bouncy riffs that have become trademarked within the pop punk circle. This single is the sound of an artist giving himself a chance to try new things and he does so with a particular energy that the pop punk scene itself needs in order to stay afloat. When shows finally come back after the quarantine I hope to be able to see the transition of Kenny playing his softer and moodier tracks before killing the mellow and blasting off into this energetic and lively track. It would be interesting to see how the crowd would react to such a change in tone. 
What Kenny’s new single does is provide himself a wider range of genres, sounds and techniques to open himself to which has always been where his potential lies and now we get to see this in action with a drastically new sound. Before he had dwelled within the indie rock underground mixing hip-hop and electronic beats with guitars and the result was a mix of up and down moods, all superb. Here he brings a newer, cleaner and downright ass-kicking song with soul in its lyrics (his bread and butter). “Estella” truly establishes Hoopla as a useful voice in the alternative scene, you need vocals on a slow-paced dirge-influenced indie song? Kenny’s your boy. You need a feature for your next pop punk anthem? Give Kenny a call, he’ll deliver the goods. 2020 has not been anyone’s year, it’s been a stagnant time for a majority of major artists but it has inspired hardworking underground artists, rooted in the present, to rise up and make their music known since the platform is wide open. KennyHoopla is an artist who has taken advantage of this years stagnation and as we close out the year he seems to be making bigger moves than a lot of major artists in his field. He’s shown us that he can collaborate with a lot of talented artists and he’s stretched himself across a wide range of musical styles while blending them with his own sound. We’re in the last days of 2020 and who knows what next year will bring not only in popular music but the world, it will be interesting to see where the Midwestern artist will jump to next in his daring climb to stardom.
https://medium.com/clocked-in-magazine/kennyhoopla-dives-gracefully-into-the-pop-punk-world-with-new-single-estella-40ceb2678a5e
["Ryan O'Connor"]
2020-12-05 17:52:28.536000+00:00
['New Music', 'Review', 'Punk', 'Indie', 'Music']
Generative Line Art With Blender
Simulation of natural phenomena, geometrical properties of different shapes, and travel schedule optimization are some applications of advanced mathematics. Nonetheless, art generation is far from being perceived as a possible application for mathematics. Due to its pattern generation capabilities, math can offer a series of tools to create several pieces of generative art. The following describes how to create geometrical patterns with trigonometric functions in blender. Generative art Generative art refers to artwork created by the use of an autonomous system. Generally, the autonomous system is non-human and can independently determine the features in the artwork that otherwise require the decisions made by the artist. In this case, the artwork will consist of a series of patterns generated with trigonometric functions. To add the patterns into the scene we need two things, one: a function to create and manipulate a curve in the scene, and a function to calculate the coordinates in the curve. To create a curve, first, a curve object is created and linked to the scene, and from the curve object, the kind of curve is defined. Then from all the coordinates in the generated pattern a new point in the curve line is added and located at the appropriate coordinates. Finally, a bevel is added to the curve to create a solid geometry. The pattern calculating function will take as argument two wrapper functions and a limit for a mesh grid. The wrapper function arguments will be the x-axis and y-axis coordinates. That design will be helpful to create new patterns. Then those functions will be evaluated through a mesh grid object. Finally, the resulting patter values will be scaled so it can fit in the camera. With all in place, a simple material can be added to the curve object to generate a pattern.
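As a rough illustration of that workflow, here is a minimal sketch (my own simplification, not the author's exact script; the helper names add_curve and pattern_curves are made up). It can be pasted into Blender's scripting tab, where numpy ships with Blender's bundled Python:

```python
import bpy
import numpy as np

def add_curve(coords, name="pattern", bevel=0.02):
    """Create a poly curve through coords ((x, y, z) tuples) and link it to the scene."""
    curve_data = bpy.data.curves.new(name, type='CURVE')
    curve_data.dimensions = '3D'
    spline = curve_data.splines.new('POLY')
    spline.points.add(len(coords) - 1)           # a new spline already holds one point
    for point, (x, y, z) in zip(spline.points, coords):
        point.co = (x, y, z, 1.0)                # poly points use 4D coordinates
    curve_data.bevel_depth = bevel               # the bevel turns the curve into solid geometry
    obj = bpy.data.objects.new(name, curve_data)
    bpy.context.collection.objects.link(obj)
    return obj

def pattern_curves(fx, fy, limit=4.0, steps=60, scale=1.0):
    """Evaluate two wrapper functions over a mesh grid and return one polyline per grid row."""
    axis = np.linspace(-limit, limit, steps)
    X, Y = np.meshgrid(axis, axis)
    Z = fx(X) * fy(Y)
    Z = scale * Z / np.max(np.abs(Z))            # rescale so the pattern fits in the camera view
    return [list(zip(X[i], Y[i], Z[i])) for i in range(steps)]

# Trigonometric wrapper functions produce the geometric pattern.
for row in pattern_curves(lambda x: np.sin(3 * x), lambda y: np.cos(2 * y)):
    add_curve(row)
```

Swapping the two lambda wrapper functions for other trigonometric expressions changes the pattern without touching the curve-building code, which mirrors the wrapper-function design described above.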
https://medium.com/swlh/generative-line-art-with-blender-2340a9fc5e63
['Octavio Gonzalez-Lugo']
2020-11-06 08:26:28.587000+00:00
['Python', 'Blender', 'STEM', 'Mathematics']
How to Build a Reporting Dashboard using Dash and Plotly
A method to select either a condensed data table or the complete data table. One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present: Code Block 17: Radio Button in layouts.py The callback for this functionality takes input from the radio button and outputs the columns to render in the data table: Code Block 18: Callback for Radio Button in layouts.py File This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below is changing the data presented in the data table based upon the dates selected using the callback statement, Output('datatable-paid-search', 'data' , this callback is changing the columns presented in the data table based upon the radio button selection using the callback statement, Output('datatable-paid-search', 'columns' . Conditionally Color-Code Different Data Table cells One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table to be highlighted based upon a metric’s value; red for negative numbers for instance. However, conditional formatting of data table cells has three main issues. There is lack of formatting functionality in Dash Data Tables at this time. If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly. There is a bug in the Dash data table code in which conditional formatting does not work properly. I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide: Code Block 19: Conditional Formatting — Highlighting Cells The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash. *This has since been corrected in the Dash Documentation. Conditional Formatting of Cells using Doppelganger Columns Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). 
Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements: Code Block 20: Adding Doppelganger Columns Then, the conditional cell formatting can be implemented using the following syntax: Code Block 21: Conditional Cell Formatting Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values. The complete statement for the data table is below (with conditional formatting for odd and even rows, as well highlighting cells that are above a certain threshold using the doppelganger method): Code Block 22: Data Table with Conditional Formatting I describe the method to update the graphs using the selected rows in the data table below.
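Since the embedded code blocks (17 to 22) are not reproduced here, the snippet below is only a hedged sketch of the doppelganger idea: the data values are invented, and the filter syntax shown is the filter_query form used by recent dash-table releases, which may differ from the 2019 syntax in the original code blocks.

```python
import pandas as pd
import dash_table  # in newer Dash versions: from dash import dash_table

# The display column is pre-formatted in pandas; the hidden "doppelganger"
# column keeps the raw number so the conditional filter still works.
df = pd.DataFrame({
    "Channel": ["Brand", "Non-Brand"],
    "Revenue_YoY": ["12.3%", "-4.7%"],                 # formatted for display
    "Revenue_YoY_percent_conditional": [12.3, -4.7],   # raw values for filtering
})

table = dash_table.DataTable(
    columns=[
        {"name": "Channel", "id": "Channel"},
        {"name": "Revenue YoY (%)", "id": "Revenue_YoY"},
        {"name": "raw YoY (hidden)", "id": "Revenue_YoY_percent_conditional"},
    ],
    data=df.to_dict("records"),
    hidden_columns=["Revenue_YoY_percent_conditional"],   # keep the helper out of view
    style_data_conditional=[
        {
            # Filter on the hidden helper column...
            "if": {"filter_query": "{Revenue_YoY_percent_conditional} < 0",
                   "column_id": "Revenue_YoY"},
            # ...but apply the colour to the visible, formatted column.
            "color": "red",
        }
    ],
)
```

The key point is that the filter runs against the hidden raw-valued column while the colour rule lands on the visible, pre-formatted column.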
https://medium.com/p/4f4257c18a7f#b300
['David Comfort']
2019-03-13 14:21:44.055000+00:00
['Dash', 'Dashboard', 'Data Science', 'Data Visualization', 'Towards Data Science']
Linear programming and discrete optimization with Python using PuLP
The discrete optimization problem is simple: Minimize the cost of the lunch given these constraints (on total calories but also on each of the nutritional component e.g. cholesterol, vitamin A, calcium, etc. Essentially, in a casual mathematical language, the problem is, Notice that the inequality relations are all linear in nature i.e. the variables f are multiplied by constant coefficients and the resulting terms are bounded by constant limits and that’s what makes this problem solvable by an LP technique. You can imagine that this kind of problem may pop up in business strategy extremely frequently. Instead of nutritional values, you will have profits and other types of business yields, and in place of price/serving, you may have project costs in thousands of dollars. As a manager, your job will be to choose the projects, that give maximum return on investment without exceeding a total budget of funding the project. Similar optimization problem may crop up in a factory production plan too, where maximum production capacity will be functions of the machines used and individual products will have various profit characteristics. As a production engineer, your job could be to assign machine and labor resources carefully to maximize the profit while satisfying all the capacity constraints. Fundamentally, the commonality between these problems from disparate domains is that they involve maximizing or minimizing a linear objective function, subject to a set of linear inequality or equality constraints. For the diet problem, the objective function is the total cost which we are trying to minimize. The inequality constraints are given by the minimum and maximum bounds on each of the nutritional components. PuLP — a Python library for linear optimization There are many libraries in the Python ecosystem for this kind of optimization problems. PuLP is an open-source linear programming (LP) package which largely uses Python syntax and comes packaged with many industry-standard solvers. It also integrates nicely with a range of open source and commercial LP solvers. You can install it using pip (and also some additional solvers) $ sudo pip install pulp # PuLP $ sudo apt-get install glpk-utils # GLPK $ sudo apt-get install coinor-cbc # CoinOR Detailed instructions about installation and testing are here. Then, just import everything from the library. from pulp import * See a nice video on solving linear programming here. How to formulate the optimization problem? First, we create a LP problem with the method LpProblem in PuLP. prob = LpProblem("Simple Diet Problem",LpMinimize) Then, we need to create bunches of Python dictionary objects with the information we have from the table. The code is shown below, For brevity, we did not show the full code here. You can take all the nutrition components and create separate dictionaries for them. Then, we create a dictionary of food items variables with lower bound =0 and category continuous i.e. the optimization solution can take any real-numbered value greater than zero. Note the particular importance of the lower bound. In our mind, we cannot think a portion of food anything other than a non-negative, finite quantity but the mathematics does not know this. Without an explicit declaration of this bound, the solution may be non-sensical as the solver may try to come up with negative quantities of food choice to reduce the total cost while still meeting the nutrition requirement! 
food_vars = LpVariable.dicts("Food",food_items,lowBound=0,cat='Continuous') Next, we start building the LP problem by adding the main objective function. Note the use of the lpSum method. prob += lpSum([costs[i]*food_vars[i] for i in food_items]) We further build on this by adding calories constraints, prob += lpSum([calories[f] * food_vars[f] for f in food_items]) >= 800.0 prob += lpSum([calories[f] * food_vars[f] for f in food_items]) <= 1300.0 We can pile up all the nutrition constraints. For simplicity, we are just adding four constraints on fat, carbs, fiber, and protein. The code is shown below, And we are done with formulating the problem! In any optimization scenario, the hard part is the formulation of the problem in a structured manner which is presentable to a solver. We have done the hard part. Now, it is the relatively easier part of running a solver and examining the solution. Solving the problem and printing the solution PuLP has quite a few choices of solver algorithms (e.g. COIN_MP, Gurobi, CPLEX, etc.). For this problem, we do not specify any choice and let the program default to its own choice depending on the problem structure. prob.solve() We can print the status of the solution. Note, although the status is optimal in this case, it does not need to be so. In case the problem is ill-formulated or there is not sufficient information, the solution may be infeasible or unbounded. # The status of the solution is printed to the screen print("Status:", LpStatus[prob.status]) >> Status: Optimal The full solution contains all the variables including the ones with zero weights. But to us, only those variables are interesting which have non-zero coefficients i.e. which should be included in the optimal diet plan. So, we can scan through the problem variables and print out only if the variable quantity is positive. for v in prob.variables(): if v.varValue>0: print(v.name, "=", v.varValue) >> Food_Frozen_Broccoli = 6.9242113 Food_Scrambled_Eggs = 6.060891 Food__Baked_Potatoes = 1.0806324 So, the optimal solution is to eat 6.923 servings of frozen broccoli, 6.06 servings of scrambled eggs and 1.08 servings of a baked potato! You are welcome to download the whole notebook, the data file, and experiment with various constraints to change your diet plan. The code is here in my Github repository. Finally, we can print the objective function i.e. cost of the diet in this case, obj = value(prob.objective) print("The total cost of this balanced diet is: ${}".format(round(obj,2))) >> The total cost of this balanced diet is: $5.52 What if we want a solution with whole numbers? As we can see that the optimal result came back with a set of fractional numbers of servings for the food items. This may not be practical and we may want the solution to be forced to have only integer quantities as servings. This brings to us the technique of integer programming. The algorithm used for the previous optimization is simple linear programming where the variables were allowed to assume any real number value. Integer programming forces some or all of the variables to assume only integer values. In fact, integer programming is a harder computational problem than linear programming. Integer variables make an optimization problem non-convex, and therefore far more difficult to solve. Memory and solution time may rise exponentially as you add more integer variables. Fortunately, PuLP can solve an optimization problem with this kind of restrictions too. 
The code is almost identical as before, so it is not repeated here. The only difference is that the variables are defined as belonging to Integer category as opposed to Continuous . food_integer = LpVariable.dicts("Food",food_items,0,cat='Integer') For this problem, it changes the optimal solution slightly, adding iceberg lettuce to the diet and increasing the cost by $0.06. You will also notice a perceptible increase in the computation time for the solution process. Therefore, the optimal balanced diet with whole servings consists of -------------------------------------------------------------------- Food_Frozen_Broccoli = 7.0 Food_Raw_Lettuce_Iceberg = 1.0 Food_Scrambled_Eggs = 6.0 Food__Baked_Potatoes = 1.0 A cool application of integer programming is solving a driver-scheduling problem which can be an NP-hard problem. See this article (also note in the article, how they compute the costs of various actions and use them in the optimization problem), How to incorporate binary decisions in a linear programming problem? Often, we want to include some kind of ‘If-then-else” kind of decision logic in the optimization problem. What if we don’t want both broccoli and iceberg lettuce to be included in the diet (but only one of them is fine)? How do we represent such decision logic in this framework? Turns out, for this kind of logic, you need to introduce another type of variables called indicator variables. They are binary in nature and can indicate the presence or absence of a variable in the optimal solution. But for this particular problem, there is an apparent problem with using indicator variables. Ideally, you want the cost/nutritional value of a food item to be included in the constraint equation if the indicator variable is 1 and ignore it if is zero. Mathematically, it is intuitive to write this as a product of the original term (involving the food item) and the indicator variable. But the moment you do that, you are multiplying two variables and making the problem nonlinear! It falls under the domain of quadratic programming (QP) in that case (quadratic because the terms are now the product of two linear terms). The popular machine learning technique Support Vector Machine essentially solves a quadratic programming problem. However, this general concept of using an indicator variable for expressing binary logic in a linear programming problem is also extremely useful. We have given a link to a problem of solving Sudoku puzzle by LP in the next section where this trick is used. It turns out that there is a clever trick to incorporate such binary logic in this LP without making it a QP problem. We can denote the binary variables as food_chosen and instantiate them as Integer with lower and upper bounds of 0 and 1. food_chosen = LpVariable.dicts("Chosen",food_items,0,1,cat='Integer') Then we write a special code to link the usual food_vars and the binary food_chosen and add this constraint to the problem. for f in food_items: prob += food_vars[f]>= food_chosen[f]*0.1 prob += food_vars[f]<= food_chosen[f]*1e5 If you stare at the code long enough, you will realize this effectively means that we are giving food_vars importance only if the corresponding food_chosen indicator variable is 1. But this way we avoid the direct multiplication and keep the problem structure linear. 
To incorporate the either/or condition of broccoli and iceberg lettuce, we just put a simple code, prob += food_chosen['Frozen Broccoli']+food_chosen['Raw Iceberg Lettuce']<=1 This ensures the sum of these two binary variables is at most 1, which means only one of them can be included in the optimal solution but not both. More applications of linear/integer programming In this article, we showed the basic flow of setting up and solving a simple linear programming problem with Python. However, if you look around, you will find countless examples of engineering and business problems which can be transformed into some form of LP and then solved using efficient solvers. Following are some of the canonical examples to get you started thinking, Many machine learning algorithms also use the general class of optimization of which linear programming is a subset — convex optimization. See the following article for more information about it, Summary and conclusion In this article, we illustrated solving a simple diet optimization problem with linear and integer programming techniques using Python package PuLP. It is noteworthy that even the widely-used SciPy has a linear optimization method built-in. Readers are encouraged to try various other Python libraries and choose a good method for themselves.
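To tie the snippets above together, here is a compact, self-contained sketch of the same workflow with a toy three-item food table; the prices and nutrition numbers are invented for illustration and are not the diet dataset used in the article:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value

# Toy data: the numbers are invented for illustration only.
food_items = ["Broccoli", "Eggs", "Potato"]
costs    = {"Broccoli": 0.16, "Eggs": 0.11, "Potato": 0.06}
calories = {"Broccoli": 74,   "Eggs": 100,  "Potato": 171}
protein  = {"Broccoli": 8.0,  "Eggs": 6.0,  "Potato": 3.7}

prob = LpProblem("Toy_Diet_Problem", LpMinimize)
food_vars = LpVariable.dicts("Food", food_items, lowBound=0, cat="Continuous")

# Objective: minimize the total cost of the diet.
prob += lpSum([costs[f] * food_vars[f] for f in food_items])

# Constraints: a calorie band and a minimum protein intake.
prob += lpSum([calories[f] * food_vars[f] for f in food_items]) >= 800.0
prob += lpSum([calories[f] * food_vars[f] for f in food_items]) <= 1300.0
prob += lpSum([protein[f] * food_vars[f] for f in food_items]) >= 60.0

prob.solve()
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    if v.varValue and v.varValue > 0:
        print(v.name, "=", v.varValue)
print("Total cost: $", round(value(prob.objective), 2))
```

Changing cat="Continuous" to cat="Integer" in the LpVariable.dicts call is all it takes to turn this toy LP into the integer-programming variant discussed above.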
https://towardsdatascience.com/linear-programming-and-discrete-optimization-with-python-using-pulp-449f3c5f6e99
['Tirthajyoti Sarkar']
2019-04-25 04:22:35.208000+00:00
['Programming', 'Python', 'Mathematics', 'Data Science', 'Machine Learning']
Tensorflow 2 YOLOv3-Tiny object detection implementation
In this tutorial, you will learn how to utilize YOLOv3-Tiny the same as we did for YOLOv3 for near real-time object detection. The YOLO object detector is often cited as being one of the fastest deep learning-based object detectors, achieving a higher FPS rate than computationally expensive two-stage detectors (ex. Faster R-CNN) and some single-stage detectors (ex. RetinaNet and some, but not all, variations of SSDs). However, even with all that speed, YOLOv3 is still not fast enough to run on some specific tasks or embedded devices such as the Raspberry Pi. To help make YOLOv3 even faster, Redmon et al. (the creators of YOLO), defined a variation of the YOLO architecture called YOLOv3-Tiny. Looking at the results from pjreddie.com (image below) the YOLOv3-Tiny architecture is approximately 6 times faster than it’s larger big brothers, achieving upwards of 220 FPS on a single GPU. The small model size and fast inference speed make the YOLOv3-Tiny object detector naturally suited for embedded computer vision/deep learning devices such as the Raspberry Pi, Google Coral, NVIDIA Jetson Nano, or desktop CPU computer where your task requires higher FPS rate than you can get with YOLOv3 model. In this post, you’ll learn how to use and train YOLOv3-Tiny the same way as we used in my previous tutorials. The downside, of course, is that YOLOv3-Tiny tends to be less accurate because it is a smaller version of its big brother. For reference, Redmon et al. report ~51–58% mAP for YOLOv3 on the COCO benchmark dataset while YOLOv3-Tiny is only 33.1% mAP — almost less than half of the accuracy of its bigger brothers. That said, 33% mAP is still reasonable enough for some applications. As I said, on a standard computer with a Graphics Processing Unit (GPU), it is easy for YOLOv3 to achieve real‐time performance. However, in the miniaturized embedded devices, such as Raspberry PI, the conventional YOLOv3 algorithm runs slowly. The YOLOv3‐Tiny network can basically satisfy real‐time requirements based on limited hardware resources. Therefore, in this tutorial, I will show you, how to run the YOLOv3‐Tiny algorithm. YOLOv3‐Tiny instead of Darknet53 has a backbone of the Darknet19, the structure of it is shown in the following image: You Only Look Once v3‐Tiny (YOLOv3‐Tiny) network structure This above structure enables the YOLOv3‐Tiny network to achieve the desired effect in miniaturized devices. Same as in the YOLOv3 tutorial, seeing Darknet-19 and above YOLOv3-Tiny structure, we can’t fully understand all layers and how to implement it. This is why I have one more figure with the overall architecture of the YOLOv3-Tiny network. In the picture below, we can see that the input picture of size 416x416 gets 2 branches after entering the Darknet-19 network. These branches undergo a series of convolutions, upsampling, merging, and other operations. Two feature maps with different sizes are finally obtained, with shapes of [13, 13, 255] and [26, 26, 255]:
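The architecture figure that the colon above points to is not reproduced here. As a rough stand-in, below is a hedged tf.keras sketch of that two-branch structure; it is my own simplified approximation of the layer stack, not the official Darknet configuration and not necessarily the exact code used later in this tutorial:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_bn(x, filters, kernel):
    # Convolution followed by batch norm and leaky ReLU, as in Darknet-style blocks.
    x = layers.Conv2D(filters, kernel, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.LeakyReLU(0.1)(x)

def yolov3_tiny(num_classes=80, num_anchors=3):
    out_ch = num_anchors * (num_classes + 5)      # 3 * (80 + 5) = 255
    inputs = layers.Input((416, 416, 3))

    x = conv_bn(inputs, 16, 3)
    x = layers.MaxPool2D(2, 2)(x)                 # 208
    x = conv_bn(x, 32, 3)
    x = layers.MaxPool2D(2, 2)(x)                 # 104
    x = conv_bn(x, 64, 3)
    x = layers.MaxPool2D(2, 2)(x)                 # 52
    x = conv_bn(x, 128, 3)
    x = layers.MaxPool2D(2, 2)(x)                 # 26
    route = conv_bn(x, 256, 3)                    # 26x26x256, kept for the second branch
    x = layers.MaxPool2D(2, 2)(route)             # 13
    x = conv_bn(x, 512, 3)
    x = layers.MaxPool2D(2, strides=1, padding="same")(x)   # stride-1 pool keeps 13x13
    x = conv_bn(x, 1024, 3)
    x = conv_bn(x, 256, 1)

    # Branch 1: coarse 13x13 grid for larger objects.
    big = conv_bn(x, 512, 3)
    big = layers.Conv2D(out_ch, 1)(big)           # (13, 13, 255)

    # Branch 2: upsample, merge with the earlier route, finer 26x26 grid.
    small = conv_bn(x, 128, 1)
    small = layers.UpSampling2D(2)(small)
    small = layers.Concatenate()([small, route])
    small = conv_bn(small, 256, 3)
    small = layers.Conv2D(out_ch, 1)(small)       # (26, 26, 255)

    return tf.keras.Model(inputs, [big, small])

model = yolov3_tiny()
print([o.shape for o in model.outputs])  # two heads: (None, 13, 13, 255) and (None, 26, 26, 255)
```

The two output shapes match the ones quoted above, with 255 channels coming from 3 anchors times (80 classes plus 5 box parameters).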
https://medium.com/analytics-vidhya/tensorflow-2-yolov3-tiny-object-detection-implementation-c3ea5d4d0510
['Rokas Balsys']
2020-10-05 19:18:06.678000+00:00
['Machine Learning', 'Data Science', 'Computer Vision', 'Computer Science', 'Deep Learning']
Native Tongue
Native Tongue It leaves you, once again Image by Bashar Alaeddin on Unsplash They visit you sometimes, running across your colonized tongue as it clumsily stumbles over their sharp edges. When they start to retreat, you savour the taste and let them go, as they will come back when you are overflowing with desperation and leave you all over again.
https://medium.com/self-ish/native-tongue-1a26c966ae30
[]
2020-09-05 08:36:33.282000+00:00
['Language', 'African', 'Creativity', 'Culture', 'Colonization']
On Facebook Charging Subscription
On Facebook Charging Subscription Why paying $2.57 per month is the best solution to the modern ‘social media dilemma’ and even a way to avoid ‘Armageddon’. Check out our new platform: https://thecapital.io/ Mark Zuckerberg F8 2018 Keynote — Anthony Quintano on Flickr How much would you need to pay Facebook to ensure that you didn’t get any advertisements in its family of social apps — Facebook, Instagram, Messenger, and WhatsApp. US$2.57 per month. Yes, as little as that. How did I arrive at that figure? Well, it’s easy. In 2019, Facebook made US$69.655 billion from advertising. There were 2.26 billion users who used one or more of its family of apps at least once a day — what Facebook defined as Daily Active People (DAP). Simple division tells us that each active user, on average, created US$2.57 in ad revenue per month for the company. In other words, if each of those users were willing to pay Facebook that amount in monthly subscription fees, Facebook would have made the same amount of money in 2019 without having to sell any ads. (Advertising accounted for 98.5% of total revenue for Facebook.) Think about it. It’s entire business model could then be inverted to serve the needs of its users — instead of advertisers. Wouldn’t you pay US$2.57 per month to ensure you have no data privacy issues, no annoying ads and perhaps even the ability to control how much time your addicted child gets to spend on social media each day? Will everyone pay? All right… I admit; I’m getting ahead of myself. The reality is: Many of Facebook’s users are from developing countries. US$2.57 a month could be drag for them. That’s fine. Keep a basic version free. For the US$2.57 per month, paying users will get a better experience — no ads, unlimited connections, unlimited messaging, advance privacy options, cooler emoticons, special e-shopping deals, etc., etc. Is US$2.57 too high? Too low? Not worth the money? Well, the cheapest Apple Music or Spotify subscription fee — $9.99 per month. Netflix — $8.99, Amazon — $12.99, Hulu — $5.99. What about cloud storage? iCloud — $0.99, Dropbox — $9.99, Google Drive — $1.99. Starbucks latte — $2.95… Okay, you get the drift. Proven models in many other businesses within the modern digital economy. Time to think within the box Facebook! You can have your cake and eat it too. The need to attain quick and massive adoption as a startup is over. There’s no more need for free. It is time to look after the folks (aka your users) who got you there in the first place and give them some options. Could it be that with all the brains you’ve hired, there is no better way to come up with solutions other than relying on AI and an army of human curators to keep out objectionable ads? What about fake news and politically motivated ads? Admittedly bad actors with the financial means won’t be deterred. But at least you can make it such that it will no longer be free to generate thousands of fake accounts with unlimited abilities to share and like fake news. It could also potentially shut down all the so-called digital marketing firms in less-developed countries selling ‘likes’ and ‘followers’ to anyone looking for a quick boost to their accounts and posts. To avoid Armageddon… Ok, I’ll admit it. Over discussed topic. Is this whole ‘evils of social media’ debate worth your time? Well, it wasn’t me. Netflix was the one that got the world going at it again. 
The renewed debate over social media ills has been sparked off again by a rather ‘in-your-face,’ awe and shock documentary called “The Social Dilemma.” Prominently recommended by Netflix to its subscribers, the film strings together interviews of many eminent technologists and researchers — including former employees of social media giants who were responsible for creating the features and business models that made them so successful. The filmmakers also hired actors to enact scenes of politically charged civil unrest, teenage depression, cyberbullying, and three ‘AI bots’ manipulating users to keep them addicted. The conclusion at the end of the film — the world will fall into social dysfunction and civil war at this rate, i.e. social media will cause Armageddon if left unchecked. (Yes, I’m not exaggerating. That’s what the ‘experts’ literally said.) Screenshot from documentary “The Social Dilemma” CNBC published an article on the film, saying that while the interviews were “interesting,” the documentary “offers few solutions.” “Despite the confessionals and doomsaying, however, the final recommendations to the average consumer of these tech products are disappointingly unoriginal.” — “Popular Netflix movie ‘The Social Dilemma’ slams social media but offers few solutions,” CNBC Ironic though, because that same CNBC article proposed no solutions either… So I shan’t be guilty of the same. Social media has become a way of life; a need for survival even — especially for the average teenager. It’s not going away any time soon. Asking people to delete their social media accounts is like environmentalists saying we should all ditch automobiles and air travel. Some could do it, but the majority would find it too uncomfortable and radical. In the end, the debate heats up but the problem doesn’t get solved… Let me resolutely propose that $2.57 per month is the answer. Mark, my words Is putting a price tag on it really the cure? And why nag just Facebook? Aren’t there other social apps out there just as responsible? Well, look, Facebook is certainly the biggest and most powerful. Great change requires power like that. So Mark, your responsibility is greater than most. Therefore I repeat, providing a differentiated service and charging for it is the simplest and best way you can meet the needs of both your shareholders and your users — plus, even your advertisers; because there is a free-ride segment of users they can still hit. In any case subscription revenue is far more stable than finicky cost-per-click or cost-per-impression ads. Would competition come in with something free to steal your users? Sure they will try, but the novelty is over and you are the leader by a huge mile, with a huge cash pile and the best talent in the business — the advantage is clearly on your side to out-innovate and out-build anyone trying to compete. There’s no excuse for not trying — you are a ‘God’ now, and the fate of humanity rests in your hands (or at least that’s what the folks who made ‘The Social Dilemma’ thinks…) So Mark, how about it? Sometimes K.I.S.S. is the answer (Keep it Simple, Stupid). At least give people the option? Hashtag #dollar257. Peace out.
https://medium.com/the-capital/on-facebook-charging-subscription-f534d63ce2b3
['Lance Ng']
2020-10-08 00:40:40.090000+00:00
['Mark Zuckerberg', 'Facebook', 'Technology', 'Social Media', 'Finance']
Your Rocket Is Now Boarding
Spaceports This sounds like something you read in science fiction but there are currently 22 spaceports in the world today and dozens more are on the way. Many of these aren’t going to be your stereotypical launch sites where miles of barbed wire surround cement pads in the middle of nowhere. No, many will look like this spaceport design that was developed for the Spaceport Japan Association. designboom.com There are so many spaceports coming that it’s hard to keep track. In the US, the planned and active sites can be found all over the country from Kodiak island in Alaska down to the new Spaceport America in New Mexico. The UK will have at least three and Japan just approved a spaceport for Virgin Galactic. India is building a second in the country’s south and Indonesia is building one off the coast of New Guinea. New Zealand, Kenya, the Azores islands and Brazil all have sites planned or in use already. SpaceX (who else) is even planning floating spaceports that will be connected to shore by hyperloops. All of these spaceports have one thing in common — serving the explosive space economy that is expected to be worth $1 trillion this year. Whether it’s tourism in orbit, carrying cargo to the several new space stations now being built or sending up thousands of more satellites, our future in space promises to be incredibly lucrative. At some point we will even have factories in space, where micro-gravity environments can be extremely beneficial for manufacturing. The company Made in Space, for example, will be sending the first ceramics factory to the ISS station soon. This site regularly updates a long list of companies planning to use space for building new products. These include the very high-tech things you expect to make in space like complex proteins and fiber optics, but also stuff like new types of beer and roasted coffee.
https://medium.com/datadriveninvestor/your-rocket-is-now-boarding-454202f96700
['Craig Brett']
2020-10-18 17:33:39.720000+00:00
['Travel', 'Space', 'Life', 'Future', 'Technology']
Split overlapping bounding boxes in Python
Split overlapping bounding boxes in Python Complete tutorial in 5 steps from problem formulation up to creating a PyPI package, tested using data from the Global Wheat Detection Kaggle competition. In object detection, it is usual to have bounding box targets to identify objects in images. These bounding boxes may sometimes overlap. In some models like Mask RCNN the bounding boxes are predicted directly and the overlap of the bounding boxes is not a problem. Another possible approach is to convert the bounding boxes to masks and use a semantic segmentation model like U-Net. In that case, overlapping masks may be a problem if at the end you want to separate the individual objects. In this story, I cover how to develop an algorithm in Python to separate overlapping bounding boxes, setting a margin between them. I will be using data from the Global Wheat Detection Kaggle competition. Note that I'm not arguing that this method is the best for the Wheat Detection competition. In the future, I may write another story covering that topic. Table of contents Problem formulation Writing the math Coding the algorithm in Python Testing the code in a real-world problem Create a Python package with fastai nbdev 1. Problem formulation Let's start by considering the case of two overlapping bounding boxes. We want to separate the bounding boxes leaving some margin in between them as suggested in the image below. Example of splitting two bounding boxes with some margin. Image by the author. To define the margin the procedure will be the following: Consider the line defined by points A and B — the centroids of each bounding box — let's call it AB. Then consider a line perpendicular to that at the median point between A and B — let it be ABp. Finally, the two lines in the figure above are parallel to ABp at a distance given by the margin value of our choice. Later in section 4, I will show that the code can be easily applied to the case of multiple intersecting bounding boxes. 2. Writing the math Consider the image and equations below where the bold notation denotes vectors. The vector AB is simply the vector that goes from A to B, defined in the first equation. Then we can consider a vector perpendicular to AB — that I call ABp — using the second equation. Finally, the point M on the margin line for box A can be defined by the third equation. Formulation of the margin lines. Image by the author. The rationale is that: 1) you start in A; 2) you move in the direction of AB but only halfway, to reach the median point; 3) you move slightly backwards in the same direction by a factor m multiplied by the unit vector in the direction of AB; 4) the point you reach is what I define as point M. Using the point M and the vector ABp it is straightforward to define the margin line, as I will show in the next section. 3. Coding the algorithm in Python The function below receives two bounding boxes and returns the sliced region for box A. The input boxes are shapely Polygons, as is the output returned by the function. Let's now dive into the code line by line. Line 1: The inputs are two bounding boxes — box_A and box_B of type shapely Polygon. The margin sets how big the distance between the boxes should be and line_mult just needs to be high enough to guarantee that the line crosses the polygon completely.
Line 3: The vector AB ( vec_AB ) is defined using the centroids of the boxes. Line 4: Similar to line 3, but for the perpendicular vector ( vec_ABp ), following the equation in section 2. Line 5: The norm of vector AB is computed, as it will be needed later. Line 6: The split_point (point M) is defined according to the equation of section 2. Line 7: The line is defined using the shapely LineString class, which allows defining a line given two points. Notice that the points are also shapely geometries. The line is therefore defined from the point M minus a multiple of vector ABp up to the point M plus a multiple of vector ABp. Line 8: A shapely utility function is used to cut the polygon into two using the line. Line 10: For each polygon obtained on line 8, check if it contains the centre — point A. Line 11–15: In these lines, I separate the polygon containing the centre point (the one that matters for this purpose) and return it together with the other polygon (not containing the centre) and the line used for the slice. The extra returned objects are kept just in case they are useful for future applications. 4. Testing the code in a real-world problem The code defined in section 3 only applies to two bounding boxes. In order to apply it to several bounding boxes, I defined the following code, which I will now explain briefly. intersection_list — a function that computes the intersection of all polygons in a list. slice_all — this function receives as input a GeoDataFrame (see table below) containing all bounding boxes for an image and calls slice_one for each bounding box. slice_one simply applies the slice box function for a given box_A, considering all boxes that intersect with it. When there are several intersecting boxes, the intersection_list function is used to obtain the final polygon. Sample of the input GeoDataFrame. Image by the author. The result of slice_all is a similar GeoDataFrame but with the sliced boxes. The image below shows the original bounding boxes (left) and the result of slice_all (right). As you can see, there are several overlapping regions in the original data but none after applying the methodology just developed. Original bounding boxes (left) and result after applying the method described (right). Image by the author. 5. Create a Python package with fastai nbdev fastai nbdev is arguably the easiest and most user-friendly way to create a Python package and upload it to PyPI. When you start a new project, go to the nbdev instructions and use the link to create a repository from the template. This will get you started easily. Then clone the repository to your local machine, pip install nbdev and, in the project directory, run nbdev_install_git_hooks . Now you can open Jupyter Notebook as usual. The settings.ini file contains setup information that you need to fill in, such as the project name, your GitHub username and the requirements for your package. The nbdev template includes an index.ipynb and a 00_core.ipynb . The index will become the GitHub README.md file.
Then, in 00_core.ipynb , you can develop your code as usual, but remember to add the #export comment at the top of the cells that should end up in your package's core.py file — generated by nbdev from the notebooks. You can read in detail how to use nbdev in their documentation. I highly recommend it! It will change the way you code. After you are ready, run nbdev_build_lib and nbdev_build_docs on the terminal. Then you can commit your changes and push to the GitHub repo. Check the repo to see if all tests passed. A common error is ModuleNotFoundError when you import external packages: you need to include them under the requirements in settings.ini so that the package will be installed together with all required dependencies. When everything is green, you can upload your package to PyPI by running the command make pypi. If you have not done it before, you will need to create an account on PyPI and set up a configuration file (the detailed instructions are on the nbdev page here). And that's it! My package is now on PyPI at https://pypi.org/project/splitbbox and can be pip installed. Final remarks
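As a closing illustration, here is a minimal sketch of the two-box slicing function described in section 3. The gists embedded in the original post do not survive in this text, so this is a reconstruction from the line-by-line description rather than the author's exact code; it assumes the boxes are shapely Polygons, and the name slice_box and the defaults for margin and line_mult are only illustrative.

import numpy as np
from shapely.geometry import LineString, Point
from shapely.ops import split

def slice_box(box_A, box_B, margin=2, line_mult=10):
    # Centroids of the two boxes
    a = np.array(box_A.centroid.coords[0])
    b = np.array(box_B.centroid.coords[0])
    vec_AB = b - a                                   # first equation: vector from A to B
    vec_ABp = np.array([-vec_AB[1], vec_AB[0]])      # second equation: perpendicular vector
    norm_AB = np.linalg.norm(vec_AB)
    # Third equation: go halfway to B, then pull back by the margin along the unit vector of AB
    split_point = a + vec_AB / 2 - margin * vec_AB / norm_AB
    # A long line through M, perpendicular to AB, long enough to fully cross box_A
    line = LineString([split_point - line_mult * vec_ABp,
                       split_point + line_mult * vec_ABp])
    pieces = split(box_A, line)
    box_A_sliced, other_piece = None, None
    for piece in pieces.geoms:
        # Keep the piece that contains the centroid of box_A
        if piece.contains(Point(a[0], a[1])):
            box_A_sliced = piece
        else:
            other_piece = piece
    return box_A_sliced, other_piece, line

The slice_one / slice_all / intersection_list helpers from section 4 would then call a function like this for every pair of intersecting boxes in the GeoDataFrame.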
https://towardsdatascience.com/split-overlapping-bounding-boxes-in-python-e67dc822a285
['Miguel Pinto']
2020-06-23 15:00:41.100000+00:00
['Python', 'Software Development', 'Tutorial', 'Data Science', 'Programming']
‘Against The Wind’: A Bob Seger Classic Of Ballads And Barnburners
Brett Milano If you’ve only heard the hit singles on Bob Seger’s Against The Wind, you probably got the wrong idea about the rest of the album. Those singles were something new for Seger, the most polished country-rock songs (or maybe just country songs) he’d yet done. First impressions were that Seger had grown up, mellowed out and done all those things that grizzled heartland rockers weren’t supposed to do — including hanging out with the Eagles and other members of the Los Angeles in-crowd. But appearances can deceive, because those impeccably crafted singles — ‘Fire Lake’, ‘You’ll Accomp’ny Me’ and the title track — all sit alongside the most hellraising rock tunes of Seger’s commercial-peak years. Listen to Against The Wind right now. A split musical personality At this point in his career, a split musical personality was Seger’s calling card. Credit that to the ten years he spent making good albums for his homegrown fans. By the time he got to 1976’s Night Moves — his commercial breakthrough and his ninth studio album — his reflective side was at least as strong as his rock’n’roll roots; thoughtful tracks like ‘Main Street’ sat next to barnburners like ‘Rock And Roll Never Forgets’, and FM radio loved them both. This also started his tradition of recording part of an album with session players (mostly the funky Muscle Shoals studio crew) and the others with his own Silver Bullet Band, sweetening both with guest backup singers. Against The Wind ‘s commercial success topped that of all Seger’s previous records and it remains the bestselling album of the singer’s career, despite its slightly schizophrenic tracklist. Indeed, the advance single ‘Fire Lake’ was a minor shock when it hit the airwaves in February 1980. With its loping country groove and uncharacteristically smooth production, it sounded just like an Eagles record (and there were, indeed, three Eagles singing on it). But, of course, this was 1980: Eagles were one of the world’s biggest bands and sounding like them wasn’t going to do anybody any harm. They were also returning a favour, since Seger had co-written and sung backup on the recent Eagles hit ‘Heartache Tonight’. A trio of ballad hits Against The Wind ‘s title track, however, couldn’t have been anybody but Seger. Lyrically and musically, it’s an obvious follow-up to ‘Night Moves’, containing one of his most quotable lines (“Wish I didn’t know now what I didn’t know then”) and lamenting the lost youth that a not-too-ancient songwriter was feeling at 35. Completing the trio of ballad hits is the timeless ‘You’ll Accomp’ny Me’. Arranged as a country-soul ballad, it’s one of those template songs that outlines where the Nashville sound was going in the next few decades. But you could also imagine the song working equally well with a rootsier treatment. For proof, seek out the gorgeous Cajun-styled version (in French, with fiddle and accordion) that Kate and Anna McGarrigle recorded a few years later. Seger’s inner adolescent The antidote to those three thoughtful, grown-up hits was the album’s other three FM-radio hits, on which Seger’s inner adolescent came out to play. This wasn’t a familiar mode for him, since even the rockers on his last couple of albums had some gravitas to them, such as ‘Hollywood Nights’ and ‘Fire Down Below’. But Against The Wind ‘s opener, ‘Horizontal Bop’, is pure illicit fun, with the Silver Bullet crew doing their best Saturday-night bar-band shuffle. And that’s not even the rowdiest thing on the album. 
‘Her Strut’ sports an even raunchier groove and a lyrical pun that high-school guys everywhere took to heart. Completing the libidinous trilogy is ‘Betty Lou’s Gettin’ Out Tonight’, whose lyrics are a late-50s/early-60s slice of teenage life. Betty Lou’s been grounded for misbehaving, now she’s free and ready to misbehave some more. The song may be more sophomoric in tone, but it allows this bunch of pros (here including ex-Manassas keyboardist Paul Harris) to rock out with uncharacteristic abandon. On an album with six hits, only a few tracks qualify as deep cuts, but ‘Long Twin Silver Line’ is Against The Wind ‘s great lost rocker, a train song that evokes the deep South at every turn (it’s the only rocker here that’s helmed by the Muscle Shoals crew instead of Seger’s Silver Bullet regulars). ‘No Man’s Land’ is the latest in a string of Seger songs that paint the rock arena as a war zone, but here the mood is even more tense than it was on ‘Turn The Page’ and ‘Sunburst’. Looking to the future The album’s closing track, ‘Shinin’ Brightly’, is, of all things, an optimistic song. It was the first song Seger had written that dared suggest that life and love might just get better when your impulsive youth is over. Seger pulls out all the stops for this one, from the uplifting chord strums that open the song, to the gospel choir and the reassuring vocal interjections throughout. Alto Reed’s majestic sax solo (framed by soaring Hammond organ) is one of his greatest moments, and Seger closes the track with a spontaneous, “It’s gonna be OK, yeah!” He also employs some of the same vocal tricks as ‘Night Moves’, but here he’s looking to the future instead of missing the past. Seger would continue going for depth on his next few albums. The studio follow-up The Distance evokes the downside of the relationships that Against The Wind largely celebrates — and the narratives would get even more epic. But with Against The Wind, Seger proved he could do deep thoughts and have a hell of a good time while he was at it. Celebrating its 40th anniversary, Against The Wind has been reissued on vinyl. A limited-edition blue pressing also includes a bonus 7". Buy it here. Listen to the best of Bob Seger on Apple Music and Spotify. Join us on Facebook and follow us on Twitter: @uDiscoverMusic
https://medium.com/udiscover-music/against-the-wind-a-bob-seger-classic-of-ballads-and-barnburners-88f570cdffb5
['Udiscover Music']
2019-11-22 18:11:04.241000+00:00
['Rock', 'Features', 'Pop Culture', 'Culture', 'Music']
Utilizing Natural Language Processing Methods With Supervised Learning on Reddit Data
One of the largest schools of interest in the vast world of data science is machine learning. Machine learning is a field of study focusing on having a computer make predictions as accurately as possible, from data. Today, there are two main types of machine learning used: supervised and unsupervised learning. In supervised learning, a programmer will have access to data that depicts an outcome from a certain pattern of features. The “outcome data” can then be used as a prediction measure for modeling a machine learning process, through training. In supervised learning, classification scoring metrics such as accuracy and precision are unlocked for a data scientist to estimate how well their model performs where as in regression learning, scoring metrics such as mean absolute error and mean squared error are often utilized to complete the same objective of measuring the effectiveness of a machine learning algorithm. Many times a programmer may not have access to “outcome data” or prediction data and will have to resort to unsupervised learning. In unsupervised learning, different approaches are taken to help a data scientist determine useful information from data. Often, unsupervised learning will encompass the analysis of the relationship between features. Methods such as “clustering” and “principal component analysis” are used to classify and determine significance of features for other models. Ultimately, both types of machine learning are a subset of AI development and have their roots embedded in computational statistics. Each type of machine learning is powerful and utilizes unique methodologies for uncovering pattern trends in data. Today, we are going to focus primarily on supervised machine learning models and their use with natural language processing practices. While data science may intuitively seem like a field for only analyzing numerical data (which is correct in a literal sense), a big focus in industry is placed on interpreting human language data in a programmatic way. Businesses thrive from learning in what their users are saying about them (particularly in sentiment analysis), and machine learning is an excellent way to tackle the challenge of “how does one understand what people are saying from a large data set of human language?” Let’s explore that by going through a classification process involving Reddit data… Consider This Problem Statement… Reddit, the popular, “community run” social media website, has a dilemma in which two subreddit content are mixed together into one subreddit. You as a data scientist are tasked with creating a machine learning model to classify which content comes from which subreddit to help the Reddit developers and subreddit moderators clean up this blend of content. This form of a problem statement can be interpretable for different situations which may occur in the world across various domains — imagine the case scenario where a hacker maliciously blends categorical content for pleasure or where a developer may want to create a classification application for users to navigate through. This problem statement can also be transferable for situations where sensitive and valuable information is accidentally merged with data that does not associate. The key takeaway here is that this is a binary classification problem, where an outcome can be determined with a simple question of asking whether the data associates with one class or another. 
For this particular problem statement, we are going to analyze whether submission content belongs to the subreddit “r/aww” or “r/natureismetal.” What are These Subreddits and Why is Researching Them Important for Our Problem? Each subreddit has a vast difference in type of content it showcases. This is useful to understand when considering how well our model will perform. Similar content may be more difficult for a computer or a person to classify, where as polarity in content is more intuitive to identify. In “r/aww”, textual submission content is relevant to posts that nurture feelings of positive sentiment; content is not graphic nor intimidating to experience. For “r/natureismetal”, posts are more closely related towards negative sentiment, where NSFW (“not safe for work”) content is allowed; content is often more graphic and more intimidating to witness. In any given data science problem, a good portion of one’s time should be spent on researching the involved subject. A lack of subject matter knowledge will more often yield a poor result and ultimately may spread misinformation to anyone who chooses to study from your work. Furthermore, research on a relevant topics may also help one discover more about the different and more efficient kinds of approaches one can make to solve a problem. In this case, it was discovered through the subreddits’ rules and guidelines that each subreddit expresses a different image for what it chooses to popularize — this affects how post titles are made. For instance, r/natureismetal requires that submission titles are descriptive while r/aww does not have such requirement. Furthermore, r/aww prohibits any “sad” content whereas r/natureismetal endorses the showcasing of graphic animal content or grand acts of nature. It was also noticed that each subreddit domain has a vast difference in size in userbase: r/aww showed about 24.4 million users while r/natureismetal showed about 1.4 million users. A portion of our time was also spent on analyzing Reddit’s platform in general. It was discovered that subreddits undergo a judicial system of sorts when it comes to posting content. Each subreddit has moderators and auto-moderators (robotic moderators) which patrol the subreddit for content not appropriate for the community’s image. Many times, ill-suited posts are removed from a subreddit, but some posts manage to squeeze by the judicial system. The next filter is a userbase reaction where more popular submissions will often receive more upvotes, comments, and virtual awards to increase a post’s popularity — which often yields more upvotes, comment, and virtual awards as if it were a positive feedback loop. Such applauded submissions will serve as better choices for a user to discover what image a subreddit expresses. All of the above aforementioned regarding the subreddits and Reddit as a platform must be taken into consideration when analyzing results. For now, we will continue with analyzing the data. What Data are We Analyzing? Luckily, we have access to 2,500 submission texts from each subreddit before their fictional merge, where each post in our dataframe has an associated label showcasing what subreddit the post is from. The data was scraped, prior to the merge, and may be enough for us to make a working model. With access to such data, we can very easily integrate a supervised machine learning model. Let’s Explore What We are Working With…
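The exploration itself continues beyond this excerpt, but to make the eventual modeling step concrete, here is a hedged sketch of the kind of baseline one might build for this binary classification problem. The file name and column names below are illustrative assumptions rather than the project's actual data layout: a bag-of-words representation of the submission titles feeding a logistic regression classifier.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical layout: one row per scraped submission, with its title and subreddit label
df = pd.read_csv('reddit_submissions.csv')            # assumed columns: 'title', 'subreddit'
X = df['title']
y = (df['subreddit'] == 'natureismetal').astype(int)  # 1 = r/natureismetal, 0 = r/aww

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

pipe = Pipeline([
    ('vect', CountVectorizer(stop_words='english')),  # turn titles into token counts
    ('model', LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print('baseline accuracy:', pipe.score(X_test, y_test))

Because the two classes are balanced here (2,500 titles scraped from each subreddit), plain accuracy is a reasonable first metric; the null model would sit at about 50%.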
https://cmkuzemka.medium.com/utilizing-natural-language-processing-methods-with-supervised-learning-on-reddit-data-e223c6b249a7
['Christopher Kuzemka']
2020-07-08 14:12:09.737000+00:00
['Machine Learning', 'Python', 'Naturallanguageprocessing', 'Reddit', 'Data Science']
React Hooks {useState, useEffect} v16.9 with examples
{ useEffect } useEffect is mostly used for data fetching, subscriptions or manual DOM changes in functional components. These operations are called effects because they can affect other components. There are three main lifecycle events we can cover with useEffect: componentDidMount componentDidUpdate componentWillUnmount Let's use useState and useEffect together, with the counter example we made with useState.

import React, { useState, useEffect } from 'react';

function Example() {
  const [count, setCount] = useState(0);

  // Similar to componentDidMount and componentDidUpdate:
  useEffect(() => {
    // Update the document title using the browser API
    document.title = `You clicked ${count} times`;
    // Only re-run the effect if count changes
  }, [count]);

  return (
    <div>
      <p>You clicked {count} times</p>
      <button onClick={() => setCount(count + 1)}>
        Click me
      </button>
    </div>
  );
}

useEffect will be called whenever count changes in the functional component. Each time count changes, the effect updates document.title with the new count. If we wanted to write this as a class component, the componentDidMount and componentDidUpdate lifecycle methods would need to be added, as below.

class Example extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }

  componentDidMount() {
    document.title = `You clicked ${this.state.count} times`;
  }

  componentDidUpdate() {
    document.title = `You clicked ${this.state.count} times`;
  }

  render() {
    return (
      <div>
        <p>You clicked {this.state.count} times</p>
        <button onClick={() => this.setState({ count: this.state.count + 1 })}>
          Click me
        </button>
      </div>
    );
  }
}

This does the same thing as the useEffect version, but we have to duplicate the code in componentDidMount and componentDidUpdate. What does useEffect do? By using useEffect, the component listens for changes after it is rendered. The component remembers the function passed to useEffect and, by default, runs it after every render. This is manageable with the second argument of useEffect. As you can see in the example below, we added count as the second argument.

useEffect(() => {
  document.title = `You clicked ${count} times`;
}, [count]); // Only re-run the effect if count changes

Because we passed [count] as the second argument, React compares the count value with the one from the previous render. If it has not changed, React skips the effect. Skipping effects increases the performance of your components. Effects with Cleanup With useEffect we can add a cleanup function, very similar to componentWillUnmount. This is useful when there are subscriptions to external data sources, or intervals, that we want to remove once the component is removed from the DOM. If an effect returns a function, that function is used for cleanup: React will run it when it is time to clean up the effect. Let's create a setInterval with useEffect and return a function at the end of the effect to clear the interval. That means that when our component is removed from the DOM, the cleanup function will be executed and the created interval will be cleared. Since we didn't add any dependencies in the second argument of useEffect, the effect behaves like componentDidMount and its cleanup like componentWillUnmount. If we wanted to write it as a class component, we would have to do it as below. You can check it on CodeSandbox. Let's create another example, fetching data from a bitcoin API. The component will fetch the bitcoin price every minute and print it out.
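The embedded code for this component is not reproduced in this text, so here is a hedged sketch of what it might look like, consistent with the description that follows. The fetch endpoint, response shape and the fetchBitcoinPrice helper name are purely illustrative assumptions.

import React, { useState, useEffect } from 'react';

function BitcoinPrice() {
  const [price, setPrice] = useState(null);
  const [minutes, setMinutesState] = useState(0);

  // Illustrative endpoint -- any JSON price API would do here
  const fetchBitcoinPrice = () =>
    fetch('https://api.example.com/bitcoin/price')
      .then(res => res.json())
      .then(data => setPrice(data.usd))
      .catch(console.error);

  useEffect(() => {
    // Tick once a minute; each tick bumps `minutes`, which re-runs the effect
    const interval = setInterval(() => {
      setMinutesState(m => m + 1);
    }, 60000);
    fetchBitcoinPrice();
    // Cleanup: clear the interval before the effect re-runs or when the component unmounts
    return () => clearInterval(interval);
  }, [minutes]);

  return <p>Current bitcoin price: {price ? `${price} USD` : 'loading…'}</p>;
}

export default BitcoinPrice;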
The full example is also available on CodeSandbox. If we wanted the same behaviour with class lifecycles, we would need componentDidMount and componentDidUpdate to refetch the bitcoin price and componentWillUnmount to clear the interval; with useEffect, all of that lives in one place. You can check the bitcoin price interval example with useEffect on CodeSandbox. This is one of the most useful cases for clearing an interval when the component is removed from the DOM: otherwise it causes a memory leak and keeps trying to fetch bitcoin prices while the user is looking at another page. We added [minutes] as the second argument to useEffect so that the effect only re-runs when minutes has changed:

useEffect(() => {
  const interval = setInterval(() => {
    setMinutesState(m => m + 1);
  }, 60000);
  fetchBitcoinPrice();
  return () => clearInterval(interval);
}, [minutes]);

Conclusion The React team integrated very powerful things into functional components with {useState, useEffect}, and I highly suggest you use them in your applications :) You don't have to change all of your class-based components to functional components in a day, but it is good practice to have. Thank you for reading this far. If you enjoyed this post, please share, comment, and press that 👏 a few times. Maybe it will help someone. Follow me on Medium or Github if you're interested in more in-depth and informative write-ups like these in the future. 😀 Melih
https://medium.com/quick-code/react-hooks-usestate-useeffect-v16-9-with-examples-60ba2e78fd1e
['Melih Yumak']
2020-02-05 03:19:40.287000+00:00
['JavaScript', 'Web Development', 'React', 'Reactjs', 'Hooks']
Building Robust Components with React Children
Children: so fun! A Quick Introduction to React Children At Hootsuite we use a global front-end component library that we share across applications to create a cohesive and intuitive user experience. This practice often requires creating versatile components. While some components will naturally have a relatively narrow scope, others might be useful across a variety of different products. One way to build components that mitigate tight coupling is to make use of React’s built-in children prop. Building components using React children is a great way to write structured yet reusable components. It allows us to have a large degree of control over components while preserving flexibility and safely handling arbitrary props. Let’s take a simple navigation drawer as an example: Imagine we want to share the same styles and animations between all our navigation drawers, creating a predictable and comfortable user experience across all of our apps. Although we may want all these drawers to have identical styling and animations, we might need our menu items to differ depending on context. Maybe we need a drawer with links only in one app, and in another we want some menu items that act as accordions — buttons that open a secondary list of submenu items. In order to make the drawer robust enough to handle these potential differences, we can use React.Children . React children can be passed down to a component by nesting them inside the component’s root JSX tag. We can then access them in that component using props.children . Here is what our navigation drawer component might look like if we are passing it some menu items: Our NavDrawer component handles children by rendering them in an expression container: Which outputs the following in HTML: React children utilities What if we want a little more uniformity in our menu items, like giving each menu item some specific styling by wrapping each one in a styled component? For brevity’s sake, let’s say we want to wrap each menu item in a div to give each menu item a bottom border, regardless of what that menu item may be: We can use React.Children.map to accomplish this! React.Children ’s map utility is especially useful because it returns null if children is either null or undefined , and it also automagically adds a key to each rendered child! Here’s what React.Children.map looks like in action: Our NavDrawer now looks like this under the hood in HTML: Using Slots to Designate Structure What if we decide we want to enforce just a bit more structure and split up our NavDrawer component, separating its contents? We can actually pass JSX as a prop, then designate these elements to be rendered in specific “slots” in the component they are passed into. Let’s say we want to divide our NavDrawer into two sections: a “title” slot at the top, and anything else below it. We can do this by having NavDrawer accept a “title” prop: Note that the prop we are passing is named “title”. We can actually pass children as props without using the standard “children” prop name. Next, let’s pass our title prop to NavDrawer, where NavDrawer handles and styles it as necessary (I’m placing it in an <h2> for simplicity): Keep in mind the above code is a simplified example. There are more React.Children utilities available for use, such as React.Children.forEach , React.Children.toArray , etc. Conclusion Using React children props can help you construct flexible, robust components that are useful in a variety of different scenarios while adhering to the DRY principle. 
Don’t be afraid to dive in and take advantage of this built-in React feature in your next project!
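The code snippets embedded in the original post are not reproduced in this text, so here is a hedged sketch of the pattern it describes — a NavDrawer that renders a title slot and wraps each child with React.Children.map. Names, styling and class names are illustrative; this is not Hootsuite's actual library component.

import React from 'react';

function NavDrawer({ title, children }) {
  return (
    <nav className="nav-drawer">
      {/* "Slot" for the title prop, rendered at the top of the drawer */}
      {title && <h2>{title}</h2>}
      {/* Wrap each menu item in a styled div; React.Children.map safely handles
          null/undefined children and adds a key to each rendered child */}
      {React.Children.map(children, child => (
        <div className="menu-item">{child}</div>
      ))}
    </nav>
  );
}

// Usage: arbitrary menu items are nested inside the component's JSX tag
function App() {
  return (
    <NavDrawer title={<span>Menu</span>}>
      <a href="/home">Home</a>
      <a href="/settings">Settings</a>
      <button>Log out</button>
    </NavDrawer>
  );
}

export default App;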
https://medium.com/hootsuite-engineering/building-robust-components-with-react-children-2f29757ebc7c
['Jennifer Macfarlane']
2018-08-31 20:59:21.213000+00:00
['React', 'JavaScript', 'Hootsuite', 'Co Op', 'Front End Development']
Tufte is Dead; Long Live Tufte
Tufte is Dead; Long Live Tufte Review of The Visual Display of Quantitative Information Ok. Stick with me here. I’m going to review a book from the King: Edward R. Tufte. The man who made one of the earliest and most beautiful arguments for why we have to design better visualizations. In this review I am going to take a critical look at this book and try to to set aside the fact that this book should be on every UXer’s bookshelf. I’m going to try to ignore that Tufte printed these books himself because he was unsatisfied with regular pubishers. And, I’m going to ignore the fact that he is a legend within the field. Instead, I am going to present a critical argument regarding the quality of the material presented in the book. Of all of Tufte’s books, The Visual Dispaly of Quantitative Information is my favorite. This is mostly because I make a whole lot of visualizations based on very large data sets for my job. The visualizations that I design are incredibly powerful and what wake me up in the morning excited to go to work. But, at the same time, getting them wrong is also incredibly frustrating. This book was the first one I read that talked about data visualizations as an art and really spoke about bridging the gulf of a user’s understanding. Here is a great video where Tufte and a few others argue for importance of data visualizations: If you haven’t read this book, the visualizations are more than tantalizing. The book is also beautifully designed. The pages are cream and the paper has a density that makes you want to touch them. The written content takes up about 2/3 of the page, following the golden ratio for beautiful composition. There are references anchored in the remaining space that are small but still engaging. And, best of all, most pages have a visualization example to solidify the argument being made. The visualizations are printed in color and sometimes take up the whole page so that you can see all of the small details (even in the visualizations that Tufte argues are poor). The book is divided into two sections: Part 1 that focuses on the history of quantitative visual design and why it matters; and, Part 2 that focuses on why graphs and charts really are an art but that there are some guidelines we should all be following. The sections flow easily from one to another and the conversational style of writing makes the book one that you can read in an afternoon but also one that you can easily return to multiple times. Overall, the book is pretty much a masterpeice. Now, lets put that aside for a second and talk about what Tufte is trying to say here. He is arguing that really good quantitative visualizations provide insight that you cannot get from just looking at the data. The classic example is shown below of Napoleon’s March through Russia and how the march had a severe impact on the amount of troops he has. Sure, you can look at the numbers. You can also look at Napoleon’s route. But, it isn’t until you put those two together than you really understand the magnitude that this march had on the army. The visualization is what enables that deeper level of understanding. Tufte’s argument on this topic is solid and has been proven time and time again. Napoleon’s March from Wikipedia Commons https://en.wikipedia.org/wiki/Charles_Joseph_Minard However, in this book, Tufte also coins the term “chart junk.” Chart junk is all of the stuff that people add to visualizations to jazz them up. It is all the flare that gets added to a visualization to make them more interesting. 
Tufts writes: When a graphic is taken over by decorative forms or computer debris, when the data measures and structures become Deisgn Elements, when the overall design purveys Graphical Style rather than quantitative information, then that graphic may be called a duck in honor of the duck-form store, “Big Duck.” How funny is that? That building is a duck. And, it is a great metaphore. It really quacks me up. Once the reader hits the Big Duck stage of the book, the argument rolls downhill into something that would make the Bauhaus Art School look like they are full jazz and whimsy. Tufte makes the case that everything that isn’t about presenting the data should be removed. No more bars in a bar chart. You only get lines that reach a certain height! No more axis. You get empty space! Everything must be stripped down to the bare essentials so as to only represent the data and only the data. I like to say that a Tufte visualization is like soylent of visualizations: all function and little form. Here is the problem with Visualizations… most of them are so stupid boring. They are so stupid boring that no one wants to pay attention to what you are trying to tell them. People want the data to sing. The problem is that sometimes the data is just humming its story rather than singing like they are Bey. All data cannot be Bey. There is an argument to be made for chart junk. Maybe if you add a little chart junk users are going to pay attention to what you are trying to say. Maybe a little chart junk helps turn up the beat a little. Now I’m not saying that everyone should make their visualizations like this ass hat in the video below making a PowerPoint presentation. But I am saying that a little color doesn’t hurt. Axis can be your friend. Two dimensions can help fill out the story you are trying to tell. The last thing I want to say about this book is that all of the visualizations in this book are static. The users cannot play with the data and that is a real shame. Because, playing with the data in a dynamic set of visualizations that allow for on-click filtering is the future. As a field, we really need to spend some time thinking more about how to make the data tell a story rather than assuming the user can click to get the insight they need. So far, I haven’t found the Tufte of dynamic visualizations, even though I’m a big fan of Stephen Few’s work. Ultimately, quantitative data visualizations are powerful. Tufte is still king. But, his work is a bit dated and draconian to be practical for mundane datasets. Book Club Questions
https://medium.com/the-ux-book-club/tufte-is-dead-long-live-tufte-21f830a0cfa8
['Laurian Vega']
2016-09-19 23:20:07.782000+00:00
['User Research', 'Design', 'Thumbs Up', 'UX', 'Data Visualization']
Why Too Many Hints are Bad for Improvement
Making Sense Through Systemic Analogies When Kepler noticed in his astronomical observations that the planets moved irregularly, he began an investigation to understand the forces that animated their motion. As he was the first to follow this trail, however, he had very few conceptual resources to explain the phenomenon. To fill the gap, he decided to rely on more distant concepts borrowed from chemistry and elementary physics; for example, he imagined the body moving like waves on water or like the vibration of sound. What Kepler was using was the power of analogy to study and clarify new and complex problems. These comparisons proved especially useful for discovering structural similarities between apparently different objects. This conceptual use of analogy was later studied by management researchers at the Oxford Business School. Analyzing the way consultants and investors evaluated the risks of their own projects compared with those of others, they noticed that those risks were systematically underestimated. Too focused on the specifics of their own project, people were locked into an "inside view" that prevented them from seeing the results of other, similar projects. For example, consultants designing a new tram system for a city in Scotland overran their budget estimates by a factor of two because they focused on rigorous financial analysis of every aspect of their own project rather than comparing it systematically with similar projects across Europe. What the researchers called the "outside view", by contrast, relies not on our own experience or familiar analogies but on distant and truly relevant ones. When you base your thinking on varied perspectives, you are better able to focus on real metrics and implications you would not have found otherwise. If you want to broaden your mind, you can better judge the risks and probabilities of your project by comparing it with examples from other horizons. Your improvisational thinking will then shine and carry you through unexpected consequences.
https://medium.com/thinking-up/why-too-many-hints-are-bad-for-improvement-dedaa62889ba
['Jean-Marc Buchert']
2020-12-14 20:05:30.123000+00:00
['Growth', 'Creativity', 'Self Improvement', 'Life Lessons', 'Learning']
George Harrison & Friends: The 1971 Concert for Bangladesh
There were two trigger events that led to the concert, the first a natural disaster and the second a monumental man-made disaster. I’ve just finished reading Paul Thomas Chamberlin’s The Cold War’s Killing Fields, subtitled Rethinking the Long Peace. I can’t recall when I’ve been so moved by a single book. While reading it I have mentioned to several friends that “this is the saddest book I’ve read in my life.” The underreported human suffering that has been perpetrated in the course of our lifetimes since World War II is nothing short of shocking when you lay it all out in one book. What Chamberlin does, probably unique, is to show how a single thread actually connects all these disparate atrocities, that thread being the cold war and corresponding fears of the major superpowers. So much of what has happened these past 70 years was delivered through the media piecemeal so that Americans not only were left in the dark much of the time, the general impression has been that Americans have always been the good guys, the white horse heroes. The tragedy of Bangladesh was two-fold. The first was a destructive cyclone of historic proportions that devastated the country and left as many as 500,000 dead in its wake. Because East Pakistan was located 1000 miles from Pakistan there was a move for liberation which led to a military incursion by the Pakistan army that resulted in the deaths of a quarter million civilians and seven million refugees fleeing to India. This latter had been building for years and did not occur overnight, but the timing of its escalation couldn’t have been worse. Bookcover photo by the author. The Chamberlin book outlines how WW2 changed the face of the world’s power game. We tend to forget that before the World Wars European powers were colonialists whom for hundreds of years had their fingers in every corner of the known world. Suddenly this all changed. The aftermath of WW2 resulted in a variety of complicated conflicts as groups within various regions struggled for freedom and autonomy. Looking back, we’ve forgotten the relationship between the collapse of Colonialism and the various mini-wars in all corners of the world. The subsequent power struggles occurred against a new backdrop, the Cold War. The big players in this new game interpreted events through their own lenses. Pakistan was an ally of the U.S. so when it began committing horrors against its own people, President Nixon and his advisors chose to support Pakistan with arms and did nothing to restrain the genocidal horror under the pretext that we need an ally like Pakistan in this part of the world. China was breaking with Moscow, and we wanted to be tightly embedded in the region. After the cyclone the United States initially wanted to help alleviate suffering, but then National Security Advisor Henry Kissinger weighed in, indicating it would make Pakistan look bad in the world’s eyes if we did more than they did for their own people. After West Pakistan’s bungling relief efforts, a December election showed how divided East Pakistan sentiments were from West Pakistan. In their divine wisdom the West Pakistani leaders decided in March that instead of meeting needs they would invade and slaughter, using American made M-24 tank units. Within a few days there were radio reports of three hundred thousand killed. 
Reports like this were easily dismissed as Bengali exaggerations, but when Nixon’s own foreign office reported how brutal the atrocities were Nixon and Kissinger applauded the success of the Pakistan army in crushing the “uprising.” I don’t need to repeat all the details, only that U.S representatives in Pakistan wrote a scathing indictment of our leaders that begins with this: “Our government has failed to denounce the suppression of democracy.” The London Times reported “This is genocide, conducted with amazing casualness.” Millions of refugees fled to India. Cholera and smallpox began breaking out, taking even more lives. You can be sure that all these horrors weighed heavily on Ravi Shankar, the Bengali musician who taught George Harrison how to play the sitar which is featured on “Within You, Without You,” the opening track on side two of the Beatles’ Sgt. Pepper album. Ravi Shankar, who had remained a friend of Harrison since that time, had relations in East Pakistan and he (Shankar) was well aware of the trauma there. The Concert for Bangladesh took place at the beginning of August 1971 featuring “a supergroup of performers that included Harrison, fellow ex-Beatle Ringo Starr, Bob Dylan, Eric Clapton, Billy Preston, Leon Russell and the band Badfinger. In addition, Shankar and Ali Akbar Khan — both of whom had ancestral roots in Bangladesh — performed an opening set of Indian classical music. Decades later, Shankar would say of the overwhelming success of the event: ‘In one day, the whole world knew the name of Bangladesh. It was a fantastic occasion.’”* The account here is much abbreviated from that which is in The Cold War Killing Fields. When I think back on that period in my life I can’t recall a single word about the atrocities that took place after the initial devastation of the cyclone. The war in Viet Nam was the focus of our media and the complexities surrounding the political struggles of these various nations made it easy to not really hear much. Americans were too distracted by other things to really try to figure out what was happening here or what happened in Indonesia where in 1965–66 500,000 to a million civilians were similarly slaughtered by their own government for reasons of their own while the U.S. simply stood by and watched. All this to say that it was a beautiful thing what these performer did. But it makes me sad to reflect on how little I knew about the world we’ve lived in all the years. And this is but one chapter.
https://ennyman.medium.com/george-harrison-friends-the-1971-concert-for-bangladesh-ddde20570e4e
['Ed Newman']
2018-10-14 07:26:53.863000+00:00
['Dylan', 'Bangladesh', 'Cold War', 'George Harrison', 'Music']
Chapter 3 : Numpy, Scipy and Matplotlib
Hello Learners and Welcome Back! There's a new chapter in town! We're glad to introduce you to our new friends: NumPy, SciPy and Matplotlib. Populating the environment Together with Pandas (go back to Chapter 2 for a brief introduction), they constitute the very foundations of scientific programming in Python. As we told you before, scientific programming is an appetizer for Machine Learning, so take your time to complete these notebooks. You can find them on our GitHub; remember to follow us there for updates. Have fun! Simone, MLJC P.S. If you find any bug or error, or you have problems with the code, feel free to contact us at mljcunito@gmail.com
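P.P.S. For a quick taste of the three libraries before you open the notebooks, here is a tiny stand-alone example — not taken from the chapter notebooks themselves — that generates data with NumPy, fits a line with SciPy and plots the result with Matplotlib:

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# NumPy: generate some noisy linear data
x = np.linspace(0, 10, 100)
y = 2.5 * x + np.random.normal(0, 2, size=x.size)

# SciPy: fit a simple linear regression to it
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)

# Matplotlib: plot the data and the fitted line
plt.scatter(x, y, s=10, label='data')
plt.plot(x, slope * x + intercept, color='red', label=f'fit (r={r_value:.2f})')
plt.legend()
plt.show()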
https://medium.com/mljcunito/scientific-programming-chapter-3-numpy-scipy-and-matplotlib-8e215b4ffe99
['Simone Azeglio']
2019-08-11 20:29:13.358000+00:00
['Machine Learning', 'Python', 'Data Science', 'Programming', 'Data Visualization']
Why Machine Learning?
Machine Learning, AI, Deep Learning… these words need no introduction at the moment! We all have an opinion on, or sometimes even fantasize about AI taking over the world, thanks to numerous movies and TV series. Image source : https://i.imgflip.com/1b7szd.jpg But what makes AI and Machine Learning such a buzz topic? Why are so many people interested in that? Here’s five reasons why I love ML so much and want to start a career on it! My love for mathematics! Ever since I was a kid, I loved mathematics. The reason was partly because I was good at it, and partly because it was so interesting. I would like to quote Neil deGrasse Tyson from his famous book, “Astrophysics for People in a Hurry”. He starts the book by saying “The universe is under no obligation to make sense to you!”. That’s true, but we have used our 3 pounds of grey matter in our head to invent Mathematics. And Math is the language of the universe. All of our accomplishments as a collective human race wouldn’t be possible without our understanding of mathematics. With no surprises, every concept in Machine Learning has it’s deep roots in the concepts of mathematics. In fact, every other algorithm in computer science is actually a subset of the ocean, that is, mathematics. Image source : https://medium.com/nybles/understanding-machine-learning-through-memes-4580b67527bf Advent of Data In the 50,000 to 300,000 year old human civilization, data or information has never been readily available ever, as we have it now. We have managed to generate more data in the last half a century, than we have in the previous 50 centuries. There is not only such huge amounts of data, there is also huge variety of data. The real question is, what have we managed to do with so much information? That’s when Data science and Machine Learning comes into picture. With so much data and so many different types of data, it was only a matter of time before we began to think about doing something useful with it. And the insights we get from the data, has helped us make informed decisions in so many crucial places. image source : https://quantumcomputingtech.blogspot.com/2019/05/big-data-analysis-meme.html Cricket(Sports) and Data Science I live in India, and here there are very few things which has the same influence on society as Cricket. The amount of people who sit and watch a World Cup match in India is easily in the billions. Cricket sure did have an influence on me. I grew up watching cricket and even imagined being part of the national team, although that remained a dream for me. I took up computer science as my under grad, and around the time I was in my first year in college, I got to know about a really cool job. I came to know that there is a data scientist as part of every IPL team, who analyses players’ data and comes up with a game plan or strategy to improve the team. This was a revelation for me, as I thought it was only the coaches, support staff and the captain who make decisions. Little research got me to realise that there is a data scientist for the Indian Cricket team as well, and data scientists actually play a crucial role in every sports team in the world. This amplified my interest in Machine Learning as I could combine two things I love and start a career with that. How cool! Image source : https://www.timesofmedia.com/dhoni-will-always-be-my-captain-kohli-reiterates-6863.html Possibilities are endless Once I was into Data Science, I started exploring the various algorithms and techniques. 
One thing that struck me the most was, there was no fixed algorithm for a particular domain. The number of parameters you can tune to improve your algorithm is endless. This gives us a playground to explore so many endless possibilities given the amount of data we have. You want to improve your accuracy? Try getting more data. You cannot get more data? Try amplifying the data you already have. You think this algorithm is not fitting the data properly? Try a different one. You think this is the best algorithm that can fit this data? Try tuning the parameters to improve it. This gives an excitement to building algorithms. There are also endless possibilities in the domains you can apply machine learning. Since data is omnipresent, and in such huge volumes, machine learning cannot be restricted to any particular part of our lives. This has led to humans using machine learning in sports, weather forecast, stock market prediction, AI based robots and machines, health care, Image recognition, Speech recognition etc. The world is your oyster! Image source : https://memegenerator.net/img/instances/43802050.jpg …Cause it’s really interesting! The reason why machine learning is a buzz word these days is just because it is so damn interesting. And for me, the most interesting part of machine learning has been Image Processing! Here, we segue into Deep Learning. The various algorithms in Deep Learning tries to mimic the way the human brain works. We all learn from our experiences. Deep learning algorithms are no different. Given enough examples of a particular class of images, the algorithm can learn various features of the image and predict a completely new image under the same class. If we understand linear algebra, the underlying mathematics behind this is pretty easy, but the fact that we somehow applied that to images, to actually build various algorithms is very interesting and fascinating. And trust me, image processing has actually taken over our everyday lives. Facial recognition systems are available in our phones, our offices, and it is used in crime investigations (we have all seen in movies :P). Moreover, the insights we get from analysing data and the process we follow to get there, is something very engaging. Image Source : https://www.analyticsvidhya.com/wp-content/uploads/2016/12/10-ceo.jpg And there we go. That’s some of the reasons I love Data Science and Machine Learning! I believe ML and AI is gonna rule the future, as we have still not mined all the data and information we produce. And as a consequence, there will be demand for more data scientists in the near future. I hope to see huge advancements in Machine Learning in the next decade! Here are some useful courses to begin your Machine Learning journey : Happy learning! :D
https://vvenkataramanan12.medium.com/why-machine-learning-c9b8e73bd467
['V Venkataramanan']
2020-12-16 13:59:55.473000+00:00
['Machine Learning', 'Mathematics', 'Artificial Intelligence']
DoWhy — Python Library for Causal Inference by Microsoft
DoWhy is a recently published Python library that aims to make causal inference easy. Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed. Difficulties in finding cause-and-effect relationships Current methods for finding relationships in data represent a simplified version of causal explanation. Most models that try to find causes in data rely on empirical analysis, whereas pure causal inference relies on counterfactual analysis, which is more closely related to how people make decisions. Causal inference depends on predicting outcomes that the model has never observed. This leads to a fundamental problem: it is impossible to objectively evaluate causal inference algorithms using a test sample, which imposes restrictions on the data-generation process. Traditional machine learning methods turn a blind eye to this fundamental problem, which limits the generalizability of such models. More about the DoWhy library DoWhy. Separating identification and estimation of a causal effect. Source: Microsoft Blog The objective is modeled as a causal graph so that all assumptions are explicitly specified. DoWhy provides a unified interface for popular causal inference techniques and combines two large frameworks: graphical models and potential outcomes. DoWhy lets you test the validity of assumptions where possible and evaluates the robustness of the resulting estimates. In DoWhy, the identification of a causal effect and its estimation are divided into two separate parts, and the whole problem is modeled as a workflow with four steps: modeling, identification, estimation, and refutation. Workflow visualization in DoWhy. Source: Microsoft Blog Code Example Causal inference in four lines. A sample run of DoWhy. Source: Microsoft Blog Read More If you found this article helpful, click the 👏 button below or share the article so your friends can benefit from it too.
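The sample run referenced above appears only as an image in the original article, so here is a sketch of the four-step workflow using DoWhy's CausalModel API. The dataset, column names and chosen estimator are illustrative assumptions.

import pandas as pd
from dowhy import CausalModel

# Illustrative data: did a treatment (e.g. a discount) cause an outcome (e.g. a purchase)?
df = pd.read_csv('some_data.csv')   # hypothetical file with 'treatment', 'outcome', 'age' columns

# 1. Model: encode assumptions (here via common causes, from which DoWhy builds the graph)
model = CausalModel(
    data=df,
    treatment='treatment',
    outcome='outcome',
    common_causes=['age'],
)

# 2. Identify: find an expression for the causal effect, if the assumptions allow one
identified_estimand = model.identify_effect()

# 3. Estimate: compute the effect with a chosen estimator
estimate = model.estimate_effect(identified_estimand,
                                 method_name='backdoor.propensity_score_matching')
print(estimate.value)

# 4. Refute: test how robust the estimate is to violations of the assumptions
refutation = model.refute_estimate(identified_estimand, estimate,
                                   method_name='random_common_cause')
print(refutation)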
https://medium.com/deep-learning-digest/dowhy-python-library-for-causal-inference-from-microsoft-336394a1ecba
['Mikhail Raevskiy']
2020-09-07 11:16:01.532000+00:00
['Machine Learning', 'Python', 'Microsoft', 'Programming', 'Software Development']
Efficient Pandas: Using Chunksize for Large Data Sets
Efficient Pandas: Using Chunksize for Large Data Sets Exploring large data sets efficiently using Pandas Data Science professionals often encounter very large data sets with hundreds of dimensions and millions of observations. There are multiple ways to handle large data sets. We all know about the distributed file systems like Hadoop and Spark for handling big data by parallelizing across multiple worker nodes in a cluster. But for this article, we shall use the pandas chunksize attribute or get_chunk() function. Imagine for a second that you’re working on a new movie set and you’d like to know:- 1. What’s the most common movie rating from 0.5 to 5.0 2. What’s the average movie rating for most movies produced. To answer these questions, first, we need to find a data set that contains movie ratings for tens of thousands of movies. Thanks to Grouplens for providing the Movielens data set, which contains over 20 million movie ratings by over 138,000 users, covering over 27,000 different movies. This is a large data set, used for building Recommender Systems, And it’s precisely what we need. So let’s extract it using wget. I’m working in Colab, but any notebook or IDE is fine. Unzipping the folder displays 4 CSV files: links.csv movies.csv ratings.csv tags.csv Our interest is on the ratings.csv data set, which contains over 20 million movie ratings for over 27,000 movies. # First let's import a few libraries import pandas as pd import matplotlib.pyplot as plt Let’s take a peek at the ratings.csv file ratings_df = pd.read_csv('ratings.csv') print(ratings_df.shape) >> (22884377, 4) As expected, The ratings_df data frame has over twenty-two million rows. This is a lot of data for our computer’s memory to handle. To make computations on this data set, it’s efficient to process the data set in chunks, one after another. In sort of a lazy fashion, using an iterator object. Please note, we don’t need to read in the entire file. We could simply view the first five rows using the head() function like this: pd.read_csv('ratings.csv').head() It’ s important to talk about iterable objects and iterators at this point… An iterable is an object that has an associated iter() method. Once this iter() method is applied to an iterable, an iterator object is created. Under the hood, this is what a for loop is doing, it takes an iterable like a list, string or tuple, and applies an iter() method and creates an iterator and iterates through it. An iterable also has the __get_item__() method that makes it possible to extract elements from it using the square brackets. See an example below, converting an iterable to an iterator object. # x below is a list. Which is an iterable object. x = [1, 2, 3, 'hello', 5, 7] # passing x to the iter() method converts it to an iterator. y = iter(x) # Checking type(y) print(type(y)) >> <class 'list_iterator'> The object returned by calling the pd.read_csv() function on a file is an iterable object. Meaning it has the __get_item__() method and the associated iter() method. However, passing a data frame to an iter() method creates a map object. df = pd.read_csv('movies.csv').head() # Let's pass the data frame df, to the iter() method df1 = iter(df) print(type(df1)) >> <class 'map'> An iterator is defined as an object that has an associated next() method that produces consecutive values. To create an iterator from an iterable, all we need to do is use the function iter() and pass it the iterable. 
Then once we have the iterator defined, we pass it to the next() method and this returns the first value. calling next() again returns the next value and so on… Until there are no more values to return and then it throws us a StopIterationError. x = [1, 2, 3] x = iter(x) # Converting to an iterator object # Let’s call the next function on x using a for loop for i in range(4): print(next(x)) >> 1 2 3 StopIterationError # Error is displayed if next is called after all items have been printed out from an iterator object Note that the terms function and method have been used interchangeably here. Generally, they mean the same thing. Just that a method is usually applied on an object like the head() method on a data frame, while a function usually takes in an argument like the print() function. If you’d like to find out about python comprehensions and generators see this link to my notebook on Github. It’s not necessary for this article. Ok. let’s get back to the ratings_df data frame. We want to answer two questions: 1. What’s the most common movie rating from 0.5 to 5.0 2. What’s the average movie rating for most movies. Let’s check the memory consumption of the ratings_df data frame ratings_memory = ratings_df.memory_usage().sum() # Let's print out the memory consumption print('Total Current memory is-', ratings_memory,'Bytes.') # Finally, let's see the memory usage of each dimension. ratings_df.memory_usage() >> Total Current memory is- 732300192 Bytes. Index 128 userId 183075016 movieId 183075016 rating 183075016 timestamp 183075016 dtype: int64 We can see that the total memory consumption of this data set is over 732.3 million bytes… Wow. Since we’re interested in the ratings, let’s get the different rating keys on the scale, from 0.5 to 5.0 # Let's get a list of the rating scale or keys rate_keys = list(ratings_df['rating'].unique()) # let's sort the ratings keys from highest to lowest. rate_keys = sorted(rate_keys, reverse=True) print(rate_keys) >> [5.0, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0, 0.5] We now know our rating scale. Next, is to find a way to get the number of ratings for each key on the scale. Yet due to the memory size, we should read the data set in chunks and perform vectorized operations on each chunk. Avoiding loops except necessary. Our first goal is to count the number of movie ratings per rating key. Out of the 22 million-plus ratings, how many ratings does each key hold? Answering this question automatically answers our first question:- Question One: 1. What’s the most common movie rating from 0.5 to 5.0 let’s create a dictionary whose keys are the unique rating keys using a simple for loop. Then we assign each key to value zero. ratings_dict = {} for i in rate_keys: ratings_dict[i] = 0 ratings_dict >> {0.5: 0, 1.0: 0, 1.5: 0, 2.0: 0, 2.5: 0, 3.0: 0, 3.5: 0, 4.0: 0, 4.5: 0, 5.0: 0} Next, we use the python enumerate() function, pass the pd.read_csv() function as its first argument, then within the read_csv() function, we specify chunksize = 1000000, to read chunks of one million rows of data at a time. We start the enumerate() function index at 1, passing start=1 as its second argument. So that we can compute the average number of bytes processed for each chunk using the index. Then we use a simple for loop on the rating keys and extract the number of ratings per key, for each chunk and sum these up for each key in the ratings_dict The final ratings_dict will contain each rating key as keys and total ratings per key as values. 
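The loop itself was embedded as a gist in the original post and is not reproduced in this text; a minimal reconstruction of what the paragraph above describes could look like this (it assumes the ratings_dict and rate_keys defined earlier, and the exact counting step may differ from the author's code):

total_bytes = 0
n_chunks = 0

for n_chunks, chunk in enumerate(pd.read_csv('ratings.csv', chunksize=1000000), start=1):
    total_bytes += chunk.memory_usage().sum()      # track bytes processed per chunk
    counts = chunk['rating'].value_counts()        # vectorised count of ratings in this chunk
    for key in rate_keys:
        ratings_dict[key] += counts.get(key, 0)    # accumulate running totals per rating key

print('Total number of chunks:', n_chunks)
print('Average bytes per chunk:', total_bytes / n_chunks)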
Using the chunksize attribute we can see that:

Total number of chunks: 23
Average bytes per chunk: 31.8 million bytes

This means we processed about 32 million bytes of data per chunk, as against the 732 million bytes if we had worked on the full data frame at once. This is computing and memory-efficient, albeit through lazy iterations of the data frame.

There are 23 chunks because we took 1 million rows from the data set at a time and there are 22.8 million rows, so the 23rd chunk had the final 0.8 million rows of data.

We can also see our ratings_dict below, complete with each rating key and the total number of ratings per key

{5.0: 3358218, 4.5: 1813922, 4.0: 6265623, 3.5: 2592375, 3.0: 4783899, 2.5: 1044176, 2.0: 1603254, 1.5: 337605, 1.0: 769654, 0.5: 315651}

Note that by specifying chunksize in read_csv, the return value will be an iterable object of type TextFileReader. Specifying iterator=True will also return the TextFileReader object:

# Example of passing chunksize to read_csv
reader = pd.read_csv('some_data.csv', chunksize=100)
# The above reads the first 100 rows; if you run it in a loop, it reads the next 100 and so on

# Example of iterator=True. Note iterator=False by default.
reader = pd.read_csv('some_data.csv', iterator=True)
reader.get_chunk(100)

This gets the first 100 rows; running it through a loop gets the next 100 rows, and so on.

# Both chunksize=100 and reader.get_chunk(100) return the same TextFileReader object.

This shows that chunksize acts just like the next() function of an iterator: an iterator uses next() to get its next element, while the get_chunk() function grabs the next specified number of rows of data from the data frame, which is similar to an iterator.

Before moving on, let's confirm we got the complete ratings from the exercise we did above. Total ratings should be equal to the number of rows in the ratings_df.

sum(list(ratings_dict.values())) == len(ratings_df)
>> True

Let's finally answer question one by selecting the key/value pair from ratings_dict that has the max value.

# We use the operator module to easily get the max and min values
import operator
max(ratings_dict.items(), key=operator.itemgetter(1))
>> (4.0, 6265623)

We can see that the rating key with the highest value is 4.0, with 6,265,623 movie ratings. Thus, the most common movie rating from 0.5 to 5.0 is 4.0.

Let's visualize the plot of rating keys and values from max to min. Let's create a data frame (ratings_dict_df) from the ratings_dict by simply casting each value to a list and passing the ratings_dict to the pandas DataFrame() function. Then we sort the data frame by Count descending.

Question Two: 2. What's the average movie rating for most movies?

To answer this, we need to calculate the weighted average of the distribution. This simply means we multiply each rating key by the number of times it was rated, add them all together, and divide by the total number of ratings.

# First we find the sum of the product of rate keys and corresponding values.
product = sum((ratings_dict_df.Rating_Keys * ratings_dict_df.Count))
# Let's divide product by total ratings.
weighted_average = product / len(ratings_df)
# Then we display the weighted-average below.
weighted_average
>> 3.5260770044122243

So to answer question two, we can say the average movie rating from 0.5 to 5.0 is 3.5.
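A hedged sketch of how the ratings_dict_df data frame used just above could be built and plotted — the column names Rating_Keys and Count follow the article, while the rest (including the use of plt, imported earlier) is illustrative rather than the author's exact code:

# Sketch only: build a small data frame from the tally dictionary and sort it by Count.
ratings_dict_df = pd.DataFrame({
    'Rating_Keys': list(ratings_dict.keys()),
    'Count': list(ratings_dict.values())
}).sort_values(by='Count', ascending=False).reset_index(drop=True)

# Bar plot of the number of ratings per key, from most to least common.
ratings_dict_df.plot(x='Rating_Keys', y='Count', kind='bar', legend=False)
plt.ylabel('Number of ratings')
plt.show()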
It's pretty encouraging that on a scale of 5.0, most movies have a rating of 4.0 and an average rating of 3.5… Hmm, is anyone thinking of movie production?

If you're like most people I know, the next logical question is:- Hey Lawrence, what's the chance that my movie would at least be rated average?

To find out what percentage of movies are rated at least average, we would compute the relative-frequency percentage distribution of the ratings. This simply means: what percentage of movie ratings does each rating key hold? Let's add a percentage column to the ratings_dict_df using apply and lambda.

ratings_dict_df['Percent'] = ratings_dict_df['Count'].apply(lambda x: (x / (len(ratings_df)) * 100))
ratings_dict_df
>> (table: percentage of movie ratings per key)

Therefore, to find the percentage of movies that are rated at least average (3.5), we simply sum the percentages of movie keys 3.5 to 5.0.

sum(ratings_dict_df[ratings_dict_df.Rating_Keys >= 3.5]['Percent'])
>> 61.308804692389046

Findings: From these exercises, we can infer that on a scale of 5.0, most movies are rated 4.0, the average rating for movies is 3.5, and finally, over 61.3% of all movies produced have a rating of at least 3.5.

Conclusion: We've seen how we can handle large data sets using the pandas chunksize attribute, albeit in a lazy fashion, chunk after chunk. The merits are arguably efficient memory usage and computational efficiency, while demerits include computing time and the possible use of for loops. It's important to state that applying vectorized operations to each chunk can greatly speed up computing time.

Thanks for your time.

P.S. See a link to the notebook for this article on Github.

Cheers!

About Me: Lawrence is a Data Specialist at Tech Layer, passionate about fair and explainable AI and Data Science. I hold both the Data Science Professional and Advanced Data Science Professional certifications from IBM. I have conducted several projects using ML and DL libraries, and I love to code up my functions as much as possible, even when existing libraries abound. Finally, I never stop learning and experimenting, and yes, I hold several Data Science and AI certifications and I have written several highly recommended articles.

Feel free to find me on:- Github Linkedin Twitter
https://medium.com/towards-artificial-intelligence/efficient-pandas-using-chunksize-for-large-data-sets-c66bf3037f93
['Lawrence Alaso Krukrubo']
2020-10-04 11:54:19.079000+00:00
['Artificial Intelligence', 'Technology', 'Data Science', 'Programming', 'Data Visualization']
10 Skills to Help You Become a Product Manager
The Hard Skills 1. Problem-solving attitude When we are given a problem can we find a unique, different, and feasible solution to it or we just feel like giving up on it? That problem-solving attitude is the thing we need to become a great product manager. Programming definitely helps us improve the way we think and approach a problem when we are trying to solve it, hence that skill will pay-off so much when we move into product management. Be someone who exercises the mind in an effort to reach a decision. 2. Passion and understanding of technology The best part is that we don’t really have to be a kick-ass coder or a gadget geek but we should be able to appreciate and understand how different apps, technologies, and products work. This skill would also come in handy if we don’t want to be laughed at by the engineers you would be working with. 3. Deep understanding of the users Many times, we might fail enough to really understand your end-users who actually are gonna use your product. There might be people from different cultures, backgrounds, very different from you and your perspective. Research and invest your time in understanding them. You should be able to build an application that is inclusive. 4. An eye for a good user experience When there is a particular problem you might be trying to solve through a feature, there can be multiple ways to implement that feature where your user will go through a journey of steps to actually complete it. You should be able to think and compare these different user-experiences and evaluate them to make the decision on which path to chose for your users. Connecting the use of technology with a great user experience is a must-have skill for a product manager. 5. Strong business sense This is where I have seen a lot of developers lacking, including me. The sooner we realize and start appreciating the business side of the problems, the better for us and our companies. It really doesn’t matter how technologically good our system/application is if it doesn’t fit the business timeline and feasibility. As a Product Manager, we should be able to help make the trade-off wherever required to build the right application for the users with the right technology meeting the business goals. Keeping an eye out for the companies business goals while delivering the product is another important skill to have. 6. Understanding analytics Data is what drives businesses today. The ability to understand and extract out insights from the data that your application is generating can help you make the right decisions for your product as well. Again you might have data-analysts or data engineers in your team, you should be able to communicate and understand them, you don’t have to be the expert at it.
https://medium.com/better-programming/becoming-a-product-manager-53684704a291
['Dhananjay Trivedi']
2020-09-09 02:00:28.031000+00:00
['Careers', 'Programming', 'Business', 'Startup', 'Product Management']
What Is Reddit Scared of?
What Is Reddit Scared of? Ranking the Most Popular Phobia Subreddits What fears do we share with the others around us? A particular type of fear that I find especially interesting are phobias, or an irrational fear of something that’s unlikely to cause harm. If you were to try to learn more about phobias on Google, all that comes up are a bunch of generic “Top 10” lists based on therapist interviews. I wanted to take a more data-driven approach to rank phobias, and Reddit is the perfect source to give us insight into the most engaging fears that we share.
https://williamchon.medium.com/what-is-reddit-scared-of-c932370aa572
['William Chon']
2020-08-03 14:53:36.572000+00:00
['Society', 'Life', 'Culture', 'Social Media', 'Data Science']
Falling Down — The Reality of Developer Burnout
Preventing Burnout Burnout is not considered to be something that has a permanent effect on life because it can be managed. Most people get through it but it can take time to figure out. Here are some strategies to try. Rest, exercise, and health Try to exercise at least two or three times a week and eat more healthy food like fruit and vegetables. Drink plenty of water and make sure you are getting a full quota of sleep. Meditation is another good way of managing stress and resetting your mind. Apps like Headspace can really help with this. Be open about your burnout to friends and family as their advice and support can be crucial in getting you back on track. Another strategy is to become an early riser if you are not already. Getting up early can make you more productive while maintaining energy and mental sharpness. You could exercise before work or have a more productive day as a freelance developer. Getting your work done earlier to free up your evenings for leisure is a positive move. Take time for other interests According to Coderhood, it’s important to make time for doing things that you like doing. This can include spending time reading books, attending conferences and meetups, listening to development podcasts, or starting a personal blog about the things you enjoy the most. Writing can also be excellent therapy for helping your mind and mood. Hobbies are another great way to combat burnout. Knowing when to take a break and do something else is an important way to manage yourself. If you feel work is becoming too much to manage, set some time aside to do the enjoyable things that you have been putting off. It may be that you like gaming, sport, or photography. It’s easy to deprive yourself of the fun things in life. Don’t do that. Fun activities generate energy and that is what you are looking for to create a better life balance. The main point is equilibrium. There is a time to work, time to sleep, and time to enjoy life with family, friends, and hobbies. Split your time up wisely and you will be a far more content person. Be sure to self-reflect According to Towards Data Science, an excellent strategy is to take the time to self-reflect at the end of each day. This can be done using a written journal or an online resource like Evernote. You could even send emails to yourself. The point is that you take time to look back at what you have achieved that day. This can include passive and active events. Passive events are when something happened without you actually doing anything, such as bumping into an old friend and enjoying a good long conversation. An active event is when something has been done as a direct action from you. This can be in the form of completing a task on your to-do list such as downloading some software or walking the dog. The point of self-reflection is to look back at what you achieved but realize that things can happen that you have no control over. These things may have shifted your schedule and set you back. The trick is to understand that you don’t have full control over everything in life. Things can happen out of the blue, but that’s OK. It’s important to accept that it happens, so don’t let frustration burn you out. By reflecting you can see what is happening in your daily life and learn about how to adapt your plans as you go.
https://medium.com/better-programming/falling-down-the-reality-of-developer-burnout-59c9079446ef
['Rob Doyle']
2020-11-27 13:36:54.433000+00:00
['Software Development', 'Mental Health', 'Technology', 'Work', 'Programming']
Poor Little Gay Boy
Poor Little Gay Boy An English translation of Renaud’s classic ballad Cover art from Remaud’s 2002 studio album ‘boucan d’enfer,’ or “noise from hell.” Renaud’s celebrated “Little Faggot” Singer/songwriter Renaud is a French legend, an icon of pop music who has sung for decades in a voice brimming with love and respect for ordinary French people. He composes lyrics that poke gentle fun of their foibles, criticizing their faults (and his) while celebrating their strength, humanity and basic goodness. The French take on homosexuality is complex, far more so than many English speakers imagine. We remember how Oscar Wilde found refuge in Paris after being imprisoned at hard labor in England for crimes against nature. But we forget that French legal tolerance has had more to do with philosophies of secularism than with widespread cultural acceptance of gender and sexual differences. Growing up gay or transgender in France has never been easy, particularly in the provinces. When I first heard Renaud’s Petit Pédé (Little Faggot) in about 2010, it blew me away with its raw power and unswerving honesty. Renaud released the song in 2002 with his album Boucan d’enfer, or noise from hell. He dared a lot with that album, exploring despair in the death of a Puerto Rican girl caught in the 9/11 attack in New York City, mourning the death of a young Afghan girl who died in the American counter-attack, and commiserating with the widow of a Corsican nationalist — all politically hot topics. But Petit Pédé touched me the most of all the songs on the album. Some call it crude or even condescending, asking how a straight man like Renaud has the right to sing the words he wrote. I cried when I heard it the first time, and I still cry sometimes. I’ve never encountered an English translation that does the song justice, so I’m attempting my own. As poor as my effort is, I believe I’ve captured something, and I hope you have a listen and a read at the same time. Just press play and read on. For those interested in how I produced my very loose (but I hope emotionally faithful) translation, I’ve made some notes down below under the original French. 
Little Faggot You fled the hills moldering Under clouds of jeer and mock Your neighbors’ contempt And your parents’ cruel blind shock At 15 when you learned You were made for the queers When you announced to your mother How well I imagine the tears Poor little faggot If you’d been born Black, no sweat No need for news or bulletin But such a different story For you to confess, “I only love men” It’s not you fault, it’s just nature As Aznavour¹ told us all so well But at the time of first love Oh, how it must have been hell Poor little faggot You tried all your life to pretend To be what they call couth You worked so hard to behave like a man And to hide your childhood truth In the hills where you were born They treated you like a mange scarred mutt It’s no good to be a faggot When you’re surrounded by the fucked Poor little faggot In Paris you landed without delay In the backrooms of the Marais There you began to find yourself In that ghetto oh so gay The brazen homos of the bars Found in you their inner lost boy Oh how they adopted you fast Even if just for your ass Poor little faggot You let yourself run wild Fucking much more than was wise Reveled in freedom denied as a child But thank God you always played nice You’ve protected yourself from the scourge That struck down so many loved pals With Satan’s own virus sounding a dirge You never ever go out unsheathed Poor little faggot One day soon you’ll find a man With a mustache or maybe shaved smooth For just a few days or even for life But into his home you will move You may dream of a child, you two So many orphans pining for love But no, such care is not for the likes of you The law says it cannot be done Poor little faggot You know life is often to rue Only seldom sweet or much fair Just look at me who is nothing like you Of pain life dished more than my share No matter you’re queer or you’re straight In the end life is so much the same For only love can heal all our wounds And for you I shall wish it with haste Poor little faggot Poor little faggot Original French for comparison. See notes below for explanations. 
T’as quitté ta province coincée Sous les insultes, les quolibets Le mépris des gens du quartier Et de tes parents effondrés À quinze ans quand tu as découvert Ce penchant paraît-il pervers Que tu l’as annoncé à ta mère J’imagine bien la galère Petit pédé T’aurais été noir, pas de lézards Besoin de l’annoncer à personne Mais c’est franchement une autre histoire Que d’avouer “j’aime les hommes” C’est pas de ta faute, c’est la nature Comme l’a si bien dit Aznavour Que c’est quand même sacrement dur À l’âge des premiers amours Petit pédé Toute sa vie à faire semblant D’être “normal”, comme disent les gens Jouer les machos à tout bout de champ Pour garder ton secret d’enfant Dans le petit bled d’où tu viens Les gens te traitaient pire qu’un chien Il fait pas bon être pédé Quand t’es entouré d’enculés Petit pédé À Paris tu as débarqué Dans les backrooms du Marais Dans ce ghetto un peu branché Tu as commencé à t’assumer Pour tous les homos des bars gays Tu étais un enfant perdu Tu as été bien vite adopté Même si c’était pour ton cul Petit pédé Tu t’es laissé aller parfois À niquer plus que de raison C’est ta liberté, c’est ton droit T’as heureusement fais attention Tu t’es protégé de ce mal Qui a emporté tant de tes potes Grâce à ce virus infernal Ne sortez jamais sans capotes Petit pédé Bientôt tu trouveras un mec Un moustachu ou un gentil Alors tu te maqueras avec Pour quelques jours ou pour la vie Rêverez peut-être d’un enfant Y en a plein les orphelinats Sauf que pour vous papa-maman C’est juste interdit par la loi Petit pédé Tu seras malheureux parfois La vie c’est pas toujours le pied Moi qui ne suis pas comme toi Le malheur j’ai déjà donné Qu’on soit tarlouze ou hétéro C’est finalement, le même topo Seul l’amour guérit tous les maux Je te le souhaite et au plus tôt Petit pédé Petit pédé Translator’s notes In daring to translate Renaud’s classic Petit Pédé, I faced enormous difficulties. Much of the power of the song lies in its contradictions. The most evident contradiction of this ballad is its minor key. Renaud transforms a jaunty little tune into something like a dirge merely by transposing it. It’s still catchy as hell, but it sounds infinitely sad. But perhaps the most important contraction lies in the clever lyrics, which manage to look like simple rhyming verse while juxtaposing high language with street language. From the beginning, Renaud sets us up for a shock. His first verse is slightly elegant, rhymey but not singsong, peppered with elevated vocabulary. T’as quitté ta province coincée Sous les insultes, les quolibets Le mépris des gens du quartier Et de tes parents effondrés To describe a kind of insult, he chooses quolibets from Latin, not as unusual as it would seem in English, but not ordinary. For “shocked,” he selects effondrés, probably for the rhyming scheme, but also because of its high, metaphoric flavor. He teases us a bit in the second verse with pervers for perverse, warning us something unusual is about to happen. But pervers is still high language rather than street language, and he rhymes it with the fairly elegant galère, in the sense of trial or tribulation. Then he slaps us in the face with his first use of the mini-refrain petit pédé, which is a vulgar expression that closely matches the emotional power of ‘little faggot” in English. I have taken the liberty of translating it as “poor little faggot” to match the tone in Renaud’s voice and to more accurately capture the sense of the lyrics. 
The rest of the song continues a cycle of high language contrasted with vulgar expressions, presumably for emotional effect. Each verse ratchets up the emotional disconnect, the contradictions as disconcerting as the lively melody sung in a minor key. Renaud reaches a climax of vulgarity in the final verse with his use of tarlouze, a street slur whose Parisian connotation is impossible to capture in translation. It means roughly “faggot” or “queer,’ but those words don’t capture the classism and snobbery of tarlouze, which comes from québécois argot rooted in language long forgotten in metropolitan French. It derides not only the queer, but the provincial or rustic. It implies a sense of weakness or femininity, and some translators choose “sissy,” but I feel that’s rather wide of the mark. In recognizing all the contradictions and language peculiarities in Renaud’s work, I quickly abandoned any notion of direct translation. Likewise, I have chosen not to attempt a strict rhyming scheme. Instead, I’ve done my best to capture Renaud’s underlying emotional contradictions while using just enough meter and rhyme to preserve the idea that his work is verse. I have never encountered a satisfying English translation of Petit Pédé, and while this one is lacking in many ways, I offer it in the hopes of introducing you to a remarkable French singer and his astoundingly powerful song.
https://medium.com/prismnpen/poor-little-gay-boy-29a20c9f4d5b
['James Finn']
2020-12-19 22:19:51.572000+00:00
['LGBTQ', 'Music', 'French', 'Equality', 'Creative Non Fiction']
Wave Properties of Matter
It has always been known that there is an elegant relationship between matter and waves, albeit in elementary particles. This was proposed way back in 1924 by Louis de Broglie — a French physicist¹. The hypothesis was not well received then, and even today it is unfortunately still a subject that is mainly introduced much later in physics education. Many previous attempts to extend these connections to classical mechanics, let alone general relativistic studies, have failed miserably. The reverse has also been the case when transitioning from studies of objects of larger extent to the minute scale of quantum mechanics. That is to say, we cannot look at a kilogram mass and tell straight away what its wave-equivalent properties are. There is one missing detail that could be used to elucidate this, and it is the subject of this essay.

Unifying general relativistic studies with the calculations involved at the quantum level represents an active area of research today. We will take a step in this thought experiment to abstract a simple concept of classical mechanics, as well as attempt to broaden it to fit our daily understanding of both worlds.

To construct the scenario, a piece of stone with a known dimensional extent (radius) of r meters is elevated to a height of H meters above the ground, with the intention to drop it and measure the instantaneous positions and velocities under free fall. Sir Isaac Newton predicted that these heights would be modeled as H = 0.5gt². With the intention of introducing the wave properties of matter, we will suppose that the initial position H was the radius of a sinusoidal wave of wavelength λ, such that:

Eq. (1)

Read up to the end to know why Eq. (1) is true. Since the resulting wavelength may also be represented as a function of another (smaller) radius, this process can be conducted iteratively, with each successive cycle corresponding to the instantaneous position (and time, t) of the free-falling object. Rather than tracking the velocity of the stone v = gt as usual, in this case we will be recording the properties of the wave generated, i.e. the wave's velocity — renamed u — and the resulting wave frequency, f. We would then be able to draw some conclusions and attempt to generalize the phenomenon based on the thought experiment.

It turns out that the relationship between such a wavelength and the wave's speed can be used to predict the Newtonian gravitational potential and free-fall times in a similar way to Newtonian mechanics, yielding the same results in the following manner:

Eq. (2)

The resulting form(s) in Eq. (2) are no surprise. However, replacing the earlier form of the wavelength λ in Eq. (2), we find that the wave's linear and angular speed ω, and thus its frequencies and period, may then be estimated through the following methods.

Eq. (3)

We know that for classical waves, the associated frequency can be treated as a stand-alone entity, since it can be measured independently as a vibration. What is more intriguing is that multiplying the frequency f by the wave's speed predicts an exact value of Newtonian gravity g, in that g = uf.

Extending the analogy to general relativity

While the methodology described may seem oversimplified, it has far-reaching implications upon up-scaling to the general relativistic context. The wave properties instantly transform into electromagnetic entities. For instance, we know that planets are in constant "falling" around their respective stars, and thus governed by Newton's universal laws of gravity.
An equivalent wavelength in that case would be given as a function of the Schwarzschild radius as follows:

Eq. (4)

Where r is the radius of the object under consideration (of the earth in this case ≃ 6375087.54925801 m) as it is free-falling in the gravitational field of the sun², G is Newton's universal gravitational constant, M is the mass of the object, and c is the speed of light in a vacuum. The rest of the calculations, namely frequency and velocity, are of a purely electromagnetic nature, and thus related by the generic formula λf = c. Most importantly, unlike in the thought experiment, the retarded time of the wave t₀ becomes a known constant such that:

Eq. (5)

Where T is the period of the wave, and fᵣ is a known set of constant ultra-low frequencies (ULF) for each celestial body large enough to exert its own gravity. A more detailed description of these frequencies was presented earlier, on extending Einstein's field equations. The physical implication is that the frequencies are presumed to be position dependent, rather than belonging to the matter. This is the case even though both the matter and the frequencies occupy the same spot in Einstein's coordinates of x, y, z, and t in spacetime.

Once more, since the frequencies described in the preceding paragraph are independently measurable entities, this would imply that pointing a radar at the position where a massive body is located should read off these frequencies which, when multiplied by the speed of light in a vacuum, yield the gravitational acceleration component of the body as depicted in Eq. (5). For the sake of generalization, a quick test on our nearby planetary bodies would indicate that Jupiter has the highest radio frequency of 77.05 nHz, while Pluto has the lowest of only 2.335 nHz. For planet Earth, this value stands at 32.71 nanohertz. Multiplying the above frequencies by c indeed yields the respective values g = 23.1, 0.7, and 9.80665 m/s² for Jupiter, Pluto, and Earth respectively (a quick numerical check appears at the end of this section).

Characterizing black holes is another benefit of understanding this methodology. For those objects characterized as non-rotating black holes, their radii are known to be equal to the Schwarzschild radius by definition. This means that r = r_s in Eq. (4), and thus we can state with confidence that the equivalent wavelength for a Schwarzschild black hole is simply given as 2r_s, or twice the radius. A metric tensor solution that fully characterizes the nature of an object in a gravitational field based on this method can be found by letting the wavelength recede by a factor of Δh = λ - r. When the λ from Eq. (4) is replaced in this difference, the solution to the quadratic equation formed yields a metric solution to Einstein's field equations as follows:

Eq. (6)

Eq. (6) implies that an object will be characterized as a black hole if and only if Δh ≤ rₛ. The mass of the resulting black hole, if and when formed — for instance due to gravitational collapse under its own gravity³ — will be given as follows:

Eq. (7)

The classical question as to what exactly is waving does not arise here, since the understanding is that matter itself is a wave, in the same way elementary particles are characterized by de Broglie–Compton relations at the subatomic level.
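As a quick arithmetic check of the planetary figures quoted above — this is only an illustrative verification of the stated relation g = c·fᵣ, assuming c ≈ 2.998 × 10⁸ m/s, and is not part of the original essay:

# Sketch only: verify that multiplying each quoted ultra-low frequency by c
# reproduces the quoted surface gravity of each body.
c = 2.998e8  # speed of light in m/s (approximate)
for body, f in [('Jupiter', 77.05e-9), ('Earth', 32.71e-9), ('Pluto', 2.335e-9)]:
    print(body, round(c * f, 2), 'm/s^2')  # prints roughly 23.1, 9.81, and 0.7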
You could also conclude that when no losses occur, gravitational potential energy must transform fully into kinetic energy, since:

Eq. (8)

with the fact that the object's speed is v = gt, and thus K = mv²/2, as we know from classical mechanics.

Conclusion

We have therefore transformed a simple abstraction of the equations of motion to give a full description of how matter and waves interact in a gravitational field. We then extended the generalization to solve for previously counterintuitive subjects in both classical mechanics and general relativity. In general, matter tends to (broad)cast a wave around it that is much larger in extent than its own radius and can be analysed in every way like a classical wave. The essay thus presents an easier means for new physics readers to look at matter as no different from waves, with mathematical correlations that do not conflict with the way objects behave in classical mechanics.

References

¹ Feynman, R.P., 2006. QED: The Strange Theory of Light and Matter. Princeton University Press.
² https://www.britannica.com/science/free-fall-physics
³ Eppelbaum, L., Kutasov, I. and Pilchin, A., 2014. Applied Geothermics (pp. 99–149). Springer Berlin Heidelberg.
⁴ https://www.nasa.gov/multimedia/imagegallery/image_feature_532.html
https://medium.com/discourse/wave-properties-of-matter-cb878b867cfc
['Nicolus Rotich']
2020-02-16 22:08:52.216000+00:00
['Science', 'Education', 'Physics', 'Mathematics', 'General']
My cholesterol is sky high — here’s why I’m not acting on it
The evidence is clear, so why am I not on drugs yet? To be able to understand why I’m not worried by my severely elevated cholesterol levels, we’re going to need an analogy to simplify why high cholesterol is not always a problem. To give credit where it’s due, my analogy is heavily inspired by Dave Feldman’s work, who is a “Software Engineer, Low Carb enthusiast, N=1 Adventurer, and Cholesterol Controller” as he puts it himself. A tale of two takeout restaurants Imagine a city with millions of people living in it, but just two restaurants that deliver takeout food. One of the restaurants, Glyco’s, is accredited for their fast delivery and huge portions. The other restaurant, named Lippy’s, can be a tad slower in terms of delivery, but they do offer a special service that we’ll get back to in a second. Nevertheless, people just don’t seem to be into Lippy’s that much. Glyco’s is king. So, every day when dinner time comes, the streets are flocked by Glyco’s delivery dudes, easily recognizable by their bright orange caps and jackets. They’re blazingly fast. As the streets color orange, everyone is happy with their big portions of food delivered right to their doorstep. Meanwhile, there’s not a Lippy’s delivery guy to be found. But who cares — this is how it has been forever, and everyone is fine with it. As far as anyone knows, this is how it should be. Unfortunately, there’s a problem. Glyco’s is fast and affordable, but people tend to have a bunch of leftovers. So they chuck them in their freezers and fridges. Some people even buy bigger freezers, just to be able to save all of those Glyco’s leftovers. Over time, the leftovers pile up quite badly. And that’s where Lippy’s delivery guys come into play. Not only do they deliver great food — but they’re also willing to take some of your Glyco’s leftovers with them, and then share those with the community. This way, the food gets distributed more evenly, and people who aren’t in the position to order takeout can get their hands on a decent meal, too. Plus: nothing goes to waist — pun intended. If only there were enough Lippy’s drivers out at night to distribute all of these leftovers… Now imagine that, quite coincidentally, Glyco’s runs into a terrible plumbing problem, greatly limiting their output. The city grows hungry, and, lazy as the people are, they refrain from cooking their own meals. Instead — you’ve guessed it — they turn to Lippy’s. Within a few days, the orange delivery guys make way for Lippy’s riders to appear in their bright green jackets, painting a completely different picture. Anyone returning from a holiday right now would be shocked to see how many Lippy’s riders there were out on the streets. They might even try to put the green riders to a halt, just because it doesn’t add up to see so many of them. But looking at the entire picture — knowing that Glyco’s is currently only delivering a minimum amount of meals — it makes perfect sense that there are so many Lippy’s riders out on the streets. Besides, there is a collateral benefit. Not only does Lippy’s do just as good a job to supply the city with food — having this many riders out also makes it much easier for people to clear out their fridges over time. Maybe it wouldn’t be that bad of a deal to have Glyco’s on the low every now and then. Bringing it home Of course, the analogy misses nuance and details. Human metabolism is wildly complex. 
But the gist of it is that, when there are many carbohydrates to be delivered in the form of glucose (by Glyco’s), there are naturally few lipids (Lippy’s) being mobilized. Leftovers pile up and the body has no means to do anything about it. When carbohydrates are limited, more and more lipids can be transported (because insulin remains low). Cholesterol is neatly tied into the processes of transporting lipids, including body fat. So when someone is on a diet that is high in fat and low in carbohydrates, it makes sense that there are many more lipid transporters found in the bloodstream compared to someone that mainly runs on carbs. And there we have the exact reason why I’m not going to take drugs to lower my supposedly dangerously high cholesterol levels. As I see it, those levels are bound to be elevated compared to a normal Western diet. On a high-fat diet, the majority of energy being carried around is fat. As cholesterol is related to much of this work being done, you will naturally see more of it in the bloodstream, and taking drugs to fight this would make no sense.
https://medium.com/edible-future/my-cholesterol-is-sky-high-heres-why-i-m-not-acting-on-it-de4698516ca9
['Reinoud Schuijers']
2019-09-25 05:26:38.234000+00:00
['Diet', 'Food', 'Health', 'Lifestyle', 'Technology']
A Global Look at Cancers Affecting Women
by Alexandra Chang Each year, more than a quarter million women die from cancer in the United States alone, according to the Centers for Disease Control and Prevention. Breast and cervical cancer, in particular, are among the most common diseases affecting women across the globe. Silvia Chiara Formenti, Chairman of the Department of Radiation Oncology and the Sandra and Edward Meyer Professor of Cancer Research at Weill Cornell Medicine, has spent much of her professional career studying and developing cancer therapies, with an especial focus treating cancers affecting women. “As an oncologist I find it necessary to do research, because we continue to lose people to cancer,” says Formenti, who is also radiation oncologist-in-chief at New York-Presbyterian/Weill Cornell Medical Center. “It is our duty to better understand these diseases, and initiate research to make a difference.” The Outcome of Children Whose Mothers Die of Cancer While most of Formenti’s research focuses on studying cancer immunology and immunotherapy, she recently took a wide lens to the disease to ask a more global question. After noticing the high incidence — and earlier death — from female cancers in developing countries, she examined how maternal death from breast and cervical cancer affects the outcome of their children. “We chose breast and cervical cancer because I was interested in the size of the collateral damage of women dying young,” says Formenti. “Another reason is that in both cancers we know how to make a difference. In breast cancer, one can reduce mortality with early detection and appropriate treatment, and cervical can be prevented with vaccination against HPV virus — its main causative agent.” Formenti worked with one of her former residents Raymond Mailhot, currently a faculty member at University of Florida, to study child mortality. “There is a robust literature that shows if the mother dies when the children are younger than 10 years old, the children acquire an increased risk of dying that has nothing to do with disease,” says Formenti. “It has to do with losing their moms.” The questions they sought to answer included these: Can the association between a mother’s death from cancer and her child’s mortality be predicted? How much does maternal mortality from breast and cervical cancer affect child mortality overall in a given country? Thus, how much could child mortality be reduced if maternal death from breast or cervical cancer was prevented? Global Studies on Mothers’ Deaths from Breast and Cervical Cancer and Child Mortality Using available large-scale population data from three countries with distinct levels of economic development and medical infrastructure — Bangladesh, Burkina Faso, and Denmark — the researchers looked at incidences of cervical and breast cancer rates in women during their fertile years. They combined this with available data on baseline child mortality, as well as the predicted mortality rates for children who lost their mothers before the age of 10. They then created a simulation model that specifically analyzed the impact of mothers’ deaths from breast and cervical cancer on child mortality in these countries. The results of the study, which was published in the journal Cancer in 2019, showed that child deaths associated with mothers’ deaths from breast and cervical cancer resulted in notable increases in cancer-related mortality — as high as 30 percent in certain African countries. 
Formenti says that this demonstrates how the burden of cancer affects more than those who have the disease. “When you look at both incidents of mortality — mother and child — it is clear that in developing countries it’s much younger than in richer countries.” In countries with more resources, however, the increase of child mortality was much smaller, for example, less than one percent in Denmark. “When you look at both incidents of mortality — mother and child — it is clear that in developing countries it’s much younger than in richer countries,” says Formenti. “The disease occurs earlier, resources are limited, and women often present with advanced diseases, so they’re much more likely to die. The effect is amplified by the collateral death of children.” Formenti hopes that broad research like this can help shape policy and health changes in the countries most affected. Combining Radiotherapy and Immunotherapy, a Novel Therapeutic Approach to Cancer When it comes to working directly with patients with cancer, Formenti has long been interested in how radiotherapy can be combined with immunotherapy to control cancerous tumors. Her earlier work has shown that radiation could modify tumors to make them more recognizable by an individual’s immune system. Based on these findings, she moved on to studying breast cancer in mouse models. More recently, she and her colleague Sandra Demaria, Professor of Radiation Oncology at Weill Cornell Medicine, performed successful clinical trials using radiotherapy alongside immunotherapy in solid tumors (lung and breast cancer). Building on that evidence, Formenti and Demaria were recently awarded a $5.7 million grant from the Department of Defense to run clinical trials on breast cancer patients. Led by Formenti, this project consists of a consortium of five groups of investigators — including those at University of Pittsburgh, Cedars-Sinai Medical Center, and the Mount Sinai Hospital — to conduct a novel clinical trial in newly diagnosed breast cancer patients. The research will focus on breast cancers that are hormone receptor-positive (HR+), which make up approximately 75 percent of cases in the United States. Despite therapeutic advances, nearly a third of HR+ breast cancer recurs, and it accounts for the most frequent cause of death from breast cancer. Current available immunotherapy does not have a high success rate with HR+ breast cancer with only a small minority of patients responding to it. Formenti and the consortium of researchers are addressing this barrier with a multipronged approach. They will perform a randomized trial that tests standard therapy with different immunotherapies targeting specific barriers previously found in mouse models and confirmed in some clinical studies. “Nobody else has used radiotherapy directed to the tumor with immunotherapy before surgery,” says Formenti. “It’s completely new.” The goal is to demonstrate evidence of a treatment that can convert HR+ breast cancer tumors into an individualized vaccine — essentially immunizing the patient to the tumor. Formenti says that she hopes data from these trials will also spur larger studies and, if confirmed, change the treatment of HR+ breast cancer, potentially reducing tumor recurrence and death. Weill Cornell Medicine, Ideal for Collaborative Research As an expert in breast cancer, Formenti has received consultant honoraria and research grants from various commercial entities. 
Formenti says, “Weill Cornell Medicine is an ideal site to develop forms of interdisciplinary collaboration — not only in the delivery of integrated clinical care, but also for collaborative research.” Formenti says, “This type of environment is especially conducive to somebody working both as a clinician and a researcher.”
https://medium.com/cornell-university/a-global-look-at-cancers-affecting-women-4ef2e06bde73
['Cornell Research']
2020-02-03 20:01:01.360000+00:00
['Health', 'Cornell University', 'Cancer', 'Women', 'Medicine']
How Your Imagination Can Make You More Successful
As we approach the end of a tough, tough year, what keeps you motivated? Maybe it takes a bit of daydreaming, daydreaming about a better life? Did you know that your view of the future determines your future success? It’s true. Your view of the future shapes your current behavior, which in turn drives your future results. Years ago, whenever a business colleague and I got stuck on a complicated problem, he would say, let’s pursue the art of the possible. I love that phrase because creative thinking is art, and thinking about the future is to think about the possible. Creating your future is to pursue the art of the possible. Your future is yours to paint and to write. Your life is your canvas and your journal. So close your eyes and imagine your future — what is your “possible?” Your future depends on this Prospection is our ability to pre-experience the future by simulating it in our minds. (Greater Good Science Center “GGSC” at UC Berkeley) Twenty years ago, my wife and I dreamed of moving to the sun-soaked beach life in Southern California. Over the next year, I lobbied my boss hard and finally convinced him to approve my relocation to our posh office on Wilshire Boulevard. I walked into our Chicago city townhome late that evening bursting with excitement to give my wife our update. What I didn’t anticipate was her surprisingly good news— we were expecting our second, precious little girl! In a blink of an eye, we put our California dream on indefinite hold. But as they say, you can always dream, right? And that’s where the skill of prospection comes into play — prospection is our ability to envision the future; to plan, predict, and daydream. Over the following decades, we kept our California dream alive. We drafted and edited our future story, and our future narrative affected the decisions we made over those many years. The picture we painted motivated and inspired us when our future got hazy. Unknowingly, my wife and I were employing what is known as prospective psychology. Prospective psychology surmises that both your present and future is shaped by how you think about your future. Prospective psychology flies directly in the face of historical practice. For decades psychologists explained that your past experiences affect your present, which in turn affects your future. Effectively, you’re a prisoner to your past experiences. Do you see the important distinction between these two approaches? Experience the future before it happens If you don’t know where you’re going, you’ll end up somewhere else. (Yogi Berra) I became fascinated by the idea of my future driving my present while reading Dr. Benjamin Hardy’s book Personality Isn’t Permanent. Professor Martin Seligman conducted much of the original research on prospection and outlined his findings in the 2013 paper Navigating Into the Future or Driven by the Past. If you are a future-focused person, you get why I’m so excited by this idea. In any so-called personality test, I score highest in future-mindedness, planning, and strategic thinking. Maybe it sounds geeky to you, but I’ve done an annual life plan for the last 10+ years. Envisioning the future comes naturally to me. Now, if you’re not naturally future-focused, don’t lose hope! Research done by Chandra Sripada demonstrates your brain is wired for prospection too; it’s just a matter of how you use this gift of imagination. Here’s what’s important to know — prospection plays a massive role in your life. 
According to the paper Future-Mindedness by the Greater Good Science Center at UC Berkeley, prospection directly impacts your ability to: • Imagine different futures. • Make decisions based on different outcomes. • Experience happiness in anticipation of future events. • Envisioning the benefits of relationships and community. In short, prospection affects your decisions, happiness, relationships, and ultimately your future. Pursue your possible A plan in the heart of a man is like deep water, but a man of understanding draws it out. (Proverbs 20:5 NLTse) Fast forward twenty years into my story, and today we live 50 yards from the Pacific Ocean with the sound of the rolling surf and the smell of saltwater lulling us to sleep every night. I run along the sandy beach or in the green coastal canyons almost every day, and I swim in the dark blue Pacific. I’ve dedicated this season of life to sculpting a whole new set of dreams and goals. So as you sit on the precipice of a new year, I encourage you to exercise your gift of prospection. Imagine your future. Focus on your future self. Outline the goals and dreams that inspire and excite you. Picture the future you dream of across all areas of your life — family, personal, health, financial, spiritual, intellectual, and social. Take out your canvas and start drawing. Write something beautiful about your life. Pursue your art of the possible. Be wise and successful
https://medium.com/curious/why-your-imagination-can-make-you-successful-in-2021-e26a7cfad875
['Greg Longoria']
2020-12-18 06:36:08.602000+00:00
['Life Lessons', 'Self Improvement', 'Life', 'Self', 'Creativity']
How to Avoid a Catastrophe: Lessons for Communicators from Futurist Amy Webb
A highlight of each year’s Online News Association conference is futurist Amy Webb’s presentation on the trends that will impact the future of news, technology, and information. This year marked a turning point in Webb’s 10 years of presenting at ONA. Source: Future Today Institute, 2017 Instead of presenting an optimistic vision of how technology will improve the quality and distribution of news, Webb painted a dystopic future, cautioning that if communicators and publishers do not recognize where some trends are leading, they could contribute to the decline of the industry, society, and possibly democracy. These trends include the rising volume of fake news, the market dominance of a small number of technology companies, the commoditization of personal data, and the likelihood that other countries are harnessing social platforms to sow confusion and chaos. “What’s about to happen is going to fundamentally alter journalism,” she says. “We are going to wind up on the other side of this with a media landscape that we may not recognize.” But Webb believes there are three possible versions of what’s to come. In her optimistic prediction, media leaders take immediate actions to preserve publishers’ independence and foster a strong democracy. Social and business leaders think through the implications of technological advances to avoid threats to individual rights. In her pragmatic prediction, media leaders are reactive and have minimal influence in shaping technology. Some businesses will thrive, but the public’s trust in institutions may further wane. This is the path that Webb sees as most likely to happen at current levels of innovation. Her final prediction is the catastrophic prediction. In this scenario, a handful of tech companies have an unbalanced influence on the rest of society, the media industry fails to innovate and loses public trust while bad actors use technological advances to create further chaos in the news cycle. The changes pose an existential risk to democracy and global security. If it sounds grim, that’s because it is. That’s why Webb encouraged all attendees to be proactive and to prepare for the technology as if it exists today. Her recommendations apply to all communications leaders, not just news publishers, editors, and reporters. Here are the three key insights from Webb’s presentation. For each we map out the best- and worst-case scenario and one thing you can start doing today to support the best-case scenario for your organization and society. 1. What happens after websites? Soon, you won’t type information into a browser or click through an app. Instead, you will ask a question to your mobile device or smart speaker and you’ll receive an audio response. Webb calls this the “zero-user interface” setting. Users will have conversations with machines, and the computers will know the type of information the user is looking for and will respond instantly. The interactions will be intuitive, but there will be a trade-off in knowing whether the information is coming from a credible source. “In a zero UI setting, it will be awkward to cite sources and news brands,” Webb says. How will communicators ensure that their organization is properly cited when it is the source of a voice search? Which voice queries should surface information about the organization? How do marketers build a brand reputation without a visual identity? How do companies plan to surface in voice results without SEO or content marketing? 
Optimistic case: In a best-case scenario, organizations will begin to test business models, ask what happens next, and plan for a future that’s less visual and more conversational. But right now, Webb says organizations are building voice applications for Alexa and failing to think about how voice will impact their revenue. Catastrophic case: Without a proactive business plan, organizations could lose control of the media landscape. Nine organizations — groups like Facebook and Google — would control the revenue opportunities for news organizations and the advertising channels for other businesses. Without trusted news sources, fake news could thrive, possibly increasing the risk of global turmoil. One action you can take today: Prepare for the future by observing the voice search habits of a three-year-old. Children who can’t type or read are using their parents’ smartphones and tablets with the help of voice assistants. The head of design for Google Search and Assistant, Hector Ouilhet, uses the interactions between his daughter and Google’s voice search to understand how the technology should evolve to meet user needs. 2. The importance of the open web The current atmosphere of fake news combined with concerns about privacy, monopolies, free speech, and automation will drive governments to regulate how these platforms can be used to share information. Webb and others predict that this will result in a “splinternet” with each country having varied access to news and information. Optimistic Case: One way to avoid this future is for organizations to advocate for better news and information practices, such as verification for trusted sources of news and information, Webb says. This could cut down on the rise of fake news, ease the distrust that audiences are already feeling toward media and other institutions, and decrease the pressure for governments to regulate access. Catastrophic Case: Without these regulations, communicators could spend more time customizing content approaches, studying the legalities of online distribution, and contextualizing the news environment in places where they want to advertise or share information. The proliferation of systems will be difficult to manage and thus have a higher risk of cyberattacks. In this environment, there is greater potential for widespread misinformation campaigns, says Webb. One action you can take today: Use this study of the most trusted news sources in 2017 to shape your approach to earning the public’s trust. In an analysis of the most common three-word phrases used to describe sources, the researchers found that the most credible reporting showed multiple sides of a story or was shared by multiple news organizations. 3. Computer recognition gets eerily accurate Platforms like Snapchat and Instagram are already using visual recognition technologies to create photo filters that interact with people and objects in the real world. “Visual computing allows us to do things like recognize and interpret human health, emotion, and characteristics,” Webb says. This technology has the potential to be creepy — for example, some cities in China have used visual recognition to call out jaywalkers — but it can also be useful. In the future, communicators may be able to use recognition technologies to customize experiences for their audiences. Optimistic Case: More organizations use visual computing data to create and distribute stories in unique ways. 
These groups recognize the risks of cognitive bias and establish procedures to avoid inadvertently putting others at a disadvantage. Catastrophic Case: The worst-case scenario is nearly the opposite, with people and organizations negatively impacted by the decisions of algorithms and many dealing with what Webb calls “digital graffiti” — malicious digital content created by bad actors and overlaid onto real-world settings through augmented reality. One action you can take today: Play with Google’s Teachable Machine. It’s a demo that illustrates how artificial intelligence can learn from images. You train the machine, and watch it learn. You’ll come away with an understanding of visual identification and machine learning. Train the machine accurately and you can see the benefits of AI, but mislead the machine, and you can begin to understand concerns about algorithmic biases. Become adapters, not adopters The key to preparing for these technologies is to become early adapters, rather than early adopters, Webb says. “I don’t want you to go out and find all of the latest, coolest tech,” she says, “… I want you to start thinking in a different way. I want everybody to start taking incremental actions on trends each and every day.” We hope the exercises we’ve shared here will help you begin. With contributions from Liza Kaufman Hogan, director of Atlantic Media Strategies’ editorial team.
https://medium.com/atlantic-57/how-to-avoid-a-catastrophe-lessons-for-communicators-from-futurist-amy-webb-34112650ddbb
['Sarah Harkins']
2017-10-20 18:57:39.056000+00:00
['Communication', 'Media Trends', 'Future', 'Technology', 'Social Media']
Young Leonardo Da Vinci
Quick Intro It’s only appropriate that the third submission to this Masters of Many series lands on the month of the 500th anniversary of his passing. Known for his feverishly inventive imagination & unquenchable curiosity, the original Renaissance man, Leonardo Da Vinci, is consistently revered as one of the most important & creative minds of all time. Arguably the greatest painter to have ever lived, his lifetime accomplishments spanned far across multiple disciplines. From military inventions, to anatomical observations, to sketching utopias, Leonardo Da Vinci sets the bar for polymaths. In this mini-series, we previously explored Benjamin Franklin & Bertrand Russell’s formative years. We’ll maintain that same focus by zeroing in on a specific time-period of Leonardo Da Vinci’s life — what was he like in his twenties? Note-Worthy Accomplishments — One of the greatest painters of all time, his portfolio contains multiple, timeless works such as Mona Lisa, Last Supper, Vitruvian Man, Virgin of the Rocks & Salvator Mundi — Extraordinarily innovative military engineer who sketched out a plethora of inventions that’d eventually see the light of day, such as a tank, a multi-barrel gun, a giant crossbow, a helicopter, & a parachute — Renowned anatomist who evolved the field of anatomy through careful autopsies, comparative studies & incredibly detailed sketches — Wide-spanning scientist who made serious contributions to the fields of optics, botany, cosmology, & hydrology 20s To 30s (1472–1492) Da Vinci was identified as possessing great artistic talent early on in life. This is evident from the apprenticeship he gained on his fourteenth birthday. Having spent the majority of his childhood in Vinci, a town in a valley near modern-day Florence, at fourteen he moved to the city in order to work directly under one Andrea del Verrocchio. Verrocchio was an artistic legend in his own right, and his studio was likely one of the best possible learning opportunities for any young artist in that time-period. Though Verrocchio was the best painter in Florence, his studio offered a far wider education — other famous figures who stepped through the studio & rubbed shoulders with a young Leonardo are Domenico Ghirlandaio, Pietro Perugino, Sandro Botticelli & Lorenzo di Credi. With access to such diverse technical talent & theoretical principles, it’s little surprise that Leonardo picked up multiple disciplines such as drafting, carpentry, drawing, painting, sculpting, & modeling. It’s widely known that Verrocchio, as the master of the studio, often let various apprentices & assistants contribute to his commissioned work. As history records, judging from the type of brushstrokes & the attention to light, Leonardo’s influence is seen in at least one of Verrocchio’s pieces — The Annunciation, in particular, is often cited as the first “work” by Da Vinci: The Annunciation — Verrocchio Da Vinci turned twenty in the year 1472. This kick-off year is marked by two large life events. First, after a total of seven years, Leonardo finally left Verrocchio’s studio, setting up his own workshop with his father’s help; however, he quickly realized that he much preferred the lively collaboration that Verrocchio’s studio offered & returned before the year ended. Second, based on his work under Verrocchio, he was asked to join the Painters’ Guild of Florence. In most of Europe, ever since the expansion of towns & cities, crafts & professions had been governed by local bodies known as guilds. 
Essentially city-wide unions, these associations controlled trade, limited outside competition, established standards of quality, & set rules for the training of apprentices. Membership was usually compulsory — only guild members could practice their trades within a city & its territory. Rarely were such young members invited, which points to the degree to which his artistic talent stood out. The next year, at twenty-one, he published his first, official, original (though not commissioned) work. Dated August 5th, 1473, the drawing is a landscape depicting the valley of the Arno & Montelupo Castle: At twenty-two, in 1474, he received his very first commission: a portrait of one 16-year-old Ginevra de’ Benci. While the identity of the patron is shrouded in history, guesses range from her betrothed, Luigi Niccolini, to her admirers, one of whom was the aristocratic Lorenzo de’ Medici, part of the elite Medici family. Ginevra de’ Benci — Da Vinci circa 1474 The next year, 1475, Da Vinci likely started multiple individual projects that would be released later; however, it’s impossible to determine exactly which paintings were started. It is confirmed, however, that this is the year that he & Verrocchio finished The Baptism of Christ. The following year, 1476, twenty-three year-old Da Vinci suffered quite a personal & public embarrassment. Just on the cusp of becoming a master in his own right, he was suddenly plagued by a scandal: Da Vinci, along with three other young men, was anonymously accused of sodomy (still a criminal offense in Florence). The accusation was particularly embarrassing for Da Vinci because his charge was with one Jacopo Saltarelli, a notorious prostitute. Five years of deep despair, isolation, commercial failure & unfinished works followed. For the next two years immediately following the charges, 1477 & 1478, almost nothing is known of twenty-four & twenty-five year-old Leonardo. Neither his whereabouts nor his works in progress are known — only that he intermittently popped into Verrocchio’s studio. During the three years that followed, Da Vinci again went out on his own; only to, once again, fail commercially. Due to his growing renown, he at least managed to receive a total of three commissions, yet he failed to complete a single one. He started two commissions, leaving them unfinished, then flat-out ignored his third commission. Nevertheless, one of the two unfinished commissions, The Adoration of the Magi, despite its lack of completion, propelled a twenty-eight year-old Da Vinci’s reputation forward as a generational artistic talent: Adoration of The Magi — Leonardo Da Vinci circa 1481 In 1482, Leonardo created a silver lyre in the shape of a horse’s head. The local aristocrat, Lorenzo de’ Medici, saw an opportunity here to rekindle a strained relationship by sending Leonardo to Milan, bearing the lyre as a gift, to secure peace with Ludovico Sforza, Duke of Milan. In a rather uncharacteristic & unorthodox tactic, Leonardo also saw an opportunity here to leave his hometown & escape what he perceived as a tarnished reputation by seeking employment with the recipient of the message. Along with the lyre, Leonardo delivered a hand-written letter claiming an extraordinary & diverse skillset, some of which he very clearly had no prior experience in. A few of these outlandish claims are: I have a sort of extremely light & strong bridges…that are secure and indestructible by fire & battle …I have methods for destroying every rock or other fortress, even if it were founded on a rock. 
I have mortars…that can fling small stones almost resembling a storm; & with the smoke of these causing great terror to the enemy And when the fight should be at sea I have many machines most efficient for offence & defence; & vessels which will resist the attack of the largest guns While it’s entirely possible that Leonardo had sketched many of these inventions out, or even considered them in thought experiments, it was really out of the blue for him to hyperbolize so greatly. It was also quite gutsy to challenge the credibility of the claims by ending the note with: If any one of the above-named things seem to any one to be impossible or not feasible, I am most ready to make the experiment in your park, or in whatever place may please your Excellency Virgin of the Rocks (earlier) — Leonardo Da Vinci circa 1484 Leonardo was perhaps aiming to kill two birds with one stone, both moving locations & re-setting his career as an artist-engineer instead of a painter, and his risky play worked: twenty-nine year-old Da Vinci was en-route to Milan as the official “Painter & Engineer of the Duke.” The closing year of this mini-series, at thirty, our protagonist engages in his first commission in Milan: the Virgin of the Rocks. Like his previous commissions, however, this particular project dragged on for twenty-five years before he collected payment, resulting in two different final versions. Quirks, Rumors & Controversies Once again we reach the point in this series where the rubber meets the road: just how infallible are these seemingly-perfect polymaths? Quirk-wise, it’s worth noting Leonardo Da Vinci lived out one personality trait above all: curiosity. As noted from his journal & observations, Da Vinci simply fed this unquenchable curiosity his entire life — which is easy to identify as critical in his path towards universal genius. One of the oft-quoted best examples of his curiosity is a self-written note to study a particular subject of intrigue: Describe the tongue of a woodpecker Like Bertrand Russell, Leonardo undoubtedly waded into the dark side of his emotions & psyche throughout his life (though never outright suicidal). Unlike Benjamin Franklin, Leonardo was a disaster with his personal finances & almost equally irresponsible with his commercial commitments. As evidenced by the personal turmoil caused during the sodomy charges, Leonardo, at times, flipped from his gregarious self to an isolated creative marked by depression, misery & guilt. Another such moment occurred when he was crushed by Michelangelo in a battle of the artistic geniuses, quitting before the competition even began. Apart from slight & intermittent mental health & emotional disturbances, Leonardo Da Vinci was absolutely abysmal when it came to either his personal finances or his commercial commitments. Count the track-record above — just within this ten-year summary he delivered only a single one out of a total of five commissions. Ask any freelancer: leaving project scopes incomplete means diving headfirst into Pandora’s box — it obviously makes collecting payment & achieving financial stability laughably impossible. Thankfully, Da Vinci’s ever-growing network, particularly in high-class circles, provided him with room & board. He experienced a wealthy lifestyle even though he did not have much personal wealth. 
If he displayed that same apathy towards commitments in today’s hyper-critical & reputation-based social media space, it’s highly likely that he would’ve been shamed & perhaps never contracted for commissions again. In Closing Who was Leonardo Da Vinci in his twenties? An extraordinarily talented painter with a feverish curiosity on the cusp of vastly expanding his repertoire. Was he accomplished in his twenties? Undoubtedly, but with a caveat. While he was brimming with talent during these formative years, it’s hard to say he “accomplished” much in terms of completing commissions or commercially managing his own studio. In fact, looking at just this time period, with the exception of the audacious letter he wrote to the Duke of Milan, Leonardo Da Vinci was quite one-dimensional. He held extraordinary talent as a painter & displayed great potential in other endeavors, but nothing more. In his twenties, the Renaissance Man was far from the universal genius he’d grow into — so don’t worry about diversifying your skillsets this early on. It’s much more critical to achieve dominance in one area during these years, & then apply those mental models to additional skillsets afterward. Additional Entries Part I — Benjamin Franklin Part II — Bertrand Russell Part IV — Thomas Young Part V — Mary Somerville Part VI — Richard Feynman Part VII — Sir Francis Bacon Part VIII — Jacques Cousteau Part IX — Nikola Tesla Part X — Isaac Newton Part XI — Thomas Jefferson Part XII — Sir Jagadish Chandra Bose Part XIII — Charles Babbage Part XIV — Emanuel Swedenborg Sources Leonardo Da Vinci — Walter Isaacson World History Project — Leonardo Da Vinci Da Vinci Life — Timeline
https://medium.com/young-polymaths/in-their-20s-leonardo-da-vinci-5655c3d8ef82
['Jesus Najera']
2020-06-23 21:12:33.953000+00:00
['Creativity', 'Life Lessons', 'History', 'Education', 'Life']
Training models using Satellite imagery on Amazon Rekognition Custom Labels
Satellite imagery is becoming a more and more important source of insights about changes that happen worldwide. There are multiple satellites that provide publicly available data with almost full earth coverage and almost weekly frequency. One of the main challenges with satellite imagery is extracting insights from a large dataset that is continuously updated. In this blog post, I want to showcase how you can use Amazon Rekognition Custom Labels to train a model that will produce insights based on Sentinel-2 satellite imagery which is publicly available on AWS. The Sentinel-2 mission is a land monitoring constellation of two satellites that provide high-resolution optical imagery. It has around a 5-day frequency and 10-meter resolution. In our example, we will use satellite imagery to detect and classify agricultural fields. How to access Sentinel-2 imagery There are multiple ways to access satellite imagery: Use one of the browsers mentioned on the page https://registry.opendata.aws/sentinel-2/ Download imagery from the Amazon S3 bucket directly. Amazon S3 is a storage service that provides scalable and secure access for downloading and uploading data. A browser is the best option for finding single images for specific dates and places. Amazon S3 is a better option if you want to automate a workflow or develop an application using satellite imagery. I want to showcase the usage of Sentinel-2 Amazon S3 storage as it’s the best way to use imagery if you need to organize a pipeline for its processing. Find Sentinel-2 scene One of the simplest options to find the necessary scene is to use a Sentinel-2 browser. There are multiple browsers out there. In our example, we will use the EO browser as it provides an S3 path for the image. Basically, you just need to define search parameters and then copy the AWS path field. Download image bands When dealing with satellite imagery we work with multiple images from different parts of the spectrum. The idea here is that each spectral band can give a different insight. Because of that it is sometimes more useful to construct images not with RGB bands, but with other bands, for example, NIR, which stands for Near Infra-Red imagery. This band gives great insight into vegetation. That is why the combination of NIR-Red-Green bands is popular and it’s called False Color since it’s not a true RGB image. Once we have the AWS path we can use AWS CLI commands to download the necessary bands. In our case, we want to construct a False Color vegetation image which will have higher contrast for vegetation areas so we would need to download the NIR-Red-Green bands (8, 4, 3 respectively). Prepare False Color Sentinel-2 image Once bands are downloaded, we can use the rasterio Python library to compile the False Color image and scale the bands, since initially they are provided in uint16 format but their values actually lie in the range 0–2¹². Example code for this step is sketched at the end of this section; the result is a False Color composite in which vegetation stands out clearly. Once we have this False Color Vegetation image we can crop it into multiple areas to use some of them as examples and some of them for the training and test datasets. How to train and run Amazon Rekognition model Once we have images we can create a dataset and train our model. One of the main advantages of using Amazon Rekognition is that you don’t need to know deep learning frameworks or write code for deep learning training or inference. You also don’t need to manage deep learning infrastructure and can start using the Amazon Rekognition model right away. 
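As a concrete illustration of the band download and False Color composition steps described above, here is a minimal Python sketch. It is not the original post's code: the tile prefix and file names are placeholders following the public Sentinel-2 bucket layout, and the scaling constant simply maps the 12-bit reflectance range to 8 bits. The Sentinel-2 L1C bucket is requester-pays, hence the extra argument on the download.

```python
import boto3
import numpy as np
import rasterio

# Placeholder tile/date prefix in the public sentinel-s2-l1c bucket (not from the post)
TILE_PREFIX = 'tiles/36/N/YF/2020/1/1/0/'

s3 = boto3.client('s3')
for band in ('B08', 'B04', 'B03'):  # NIR, Red, Green
    s3.download_file(
        Bucket='sentinel-s2-l1c',
        Key=f'{TILE_PREFIX}{band}.jp2',
        Filename=f'{band}.jp2',
        ExtraArgs={'RequestPayer': 'requester'},  # bucket is requester-pays
    )

bands = []
for name in ('B08.jp2', 'B04.jp2', 'B03.jp2'):
    with rasterio.open(name) as src:
        profile = src.profile
        # Values are stored as uint16 but effectively span 0..4096 (12 bit),
        # so rescale them to 0..255 for an ordinary 8-bit image.
        data = src.read(1).astype('float32')
        bands.append(np.clip(data / 4096.0 * 255.0, 0, 255).astype('uint8'))

profile.update(driver='GTiff', dtype='uint8', count=3)
with rasterio.open('false_color.tif', 'w', **profile) as dst:
    for idx, band in enumerate(bands, start=1):
        dst.write(band, idx)
```

Rescaling to 8 bits here is a deliberate choice: Rekognition Custom Labels expects ordinary JPEG or PNG training images, so the composite is easier to crop into training chips once it is in a standard 8-bit range.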
Create and label Dataset Create dataset Choose “Upload images from your computer” On the dataset page click “Add images” On the pop-up window, click Choose Files and choose files from your computer. Then click “Upload images” Create labels “active field”, “semi-active field”, “non-active field” Click “Start labeling”, choose images, and then click “Draw bounding box” On the new page, you can now choose labels and then draw rectangles for each label. After you’ve finished labeling you can switch to a different image or click “Done”. Train and run the model On the projects page click “Create Project” On the project page choose “Train new model” Choose the dataset which we just created and then choose “Split training dataset” for the test dataset. Then click “Train”. Once the model is trained you can start making predictions. You can evaluate the model using one of the crops from the image which we’ve processed before. Rekognition will return JSON results with predicted regions that we can visualize. Visualize the result You can visualize the results in a Jupyter notebook using code along the lines of the sketch at the end of this post. This code parses the response from the Amazon Rekognition model and draws prediction boxes on the image. It also takes into account the type of labels and marks predictions with different colors based on class. The output is the cropped image with colored prediction boxes overlaid on the detected fields. Conclusion We’ve trained and deployed a model for finding agricultural fields on satellite imagery using Amazon Rekognition. As you can see, setting everything up was pretty simple, and you can use this example to develop more complex models for other satellite imagery tasks, for example, forest monitoring and building detection. Feel free to check the code in the following repo: https://github.com/ryfeus/amazon-rekognition-custom-labels-satellite-imagery
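For reference, the inference and visualization step described above might look roughly like this sketch. The project version ARN, file name and colors are placeholders rather than values from the original repo; the response fields are the standard ones returned by DetectCustomLabels, and the model version must be started before it can serve requests.

```python
import boto3
from PIL import Image, ImageDraw

# Placeholder ARN of the trained Custom Labels model version (must be in the RUNNING state)
MODEL_ARN = 'arn:aws:rekognition:us-east-1:123456789012:project/fields/version/fields.2020/1'
COLORS = {'active field': 'lime', 'semi-active field': 'yellow', 'non-active field': 'red'}

client = boto3.client('rekognition')

with open('crop.png', 'rb') as f:
    response = client.detect_custom_labels(
        ProjectVersionArn=MODEL_ARN,
        Image={'Bytes': f.read()},
        MinConfidence=50,
    )

image = Image.open('crop.png').convert('RGB')
draw = ImageDraw.Draw(image)
width, height = image.size

for label in response['CustomLabels']:
    geometry = label.get('Geometry')
    if not geometry:
        continue  # classification-only labels come back without a bounding box
    box = geometry['BoundingBox']  # relative coordinates in 0..1
    left, top = box['Left'] * width, box['Top'] * height
    right, bottom = left + box['Width'] * width, top + box['Height'] * height
    color = COLORS.get(label['Name'], 'white')
    draw.rectangle([left, top, right, bottom], outline=color, width=3)
    draw.text((left, top), f"{label['Name']} {label['Confidence']:.0f}%", fill=color)

image.save('crop_predictions.png')
```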
https://ryfeus.medium.com/training-models-using-satellite-imagery-on-amazon-rekognition-custom-labels-dd44ac6a3812
['Rustem Feyzkhanov']
2020-11-18 21:49:09.055000+00:00
['Machine Learning', 'Agriculture', 'Satellite Imagery', 'Deep Learning', 'AWS']
AWS Sentiment Analysis of Web-Scraped Employee Reviews
Introduction Hi! This post was born as a means to fulfill a homework assignment on putting various AWS products to use in a practical setting. Alternatively, a simple step-by-step guide would have sufficed, but that probably would be more dreadful both to do & to read. Anyhow, when thinking of use-cases for speech recognition, translation, or comprehension, you would be forgiven for letting your imagination run wild. Indeed, using AWS translation & comprehension/sentiment analysis services on various news websites’ articles to compare sentiments came to my mind. Being more practical myself, & generally feeling the need to come up with something with more return on invested effort, I looked for something more immediately relevant to my professional circumstances: I’m a 26-year-old MSc Business Analytics student at CEU, with an undergrad in Business Administration in Hotel Management & 5 years of experience in that field. As a relative newbie to Data Science, Analytics & all that, 2 things constantly on my mind are a. Finding employment in the field & b. Trying to gain a level of domain/market understanding similar to what I came to know with Hotels. Obviously, 1 project is not going to achieve either, but I think I might just be able to put my web-scraping skills to the test, and incorporate AWS Comprehend into the workflow, to build something that helps me get a sense of what employers in this field are like, based on what employees are posting on a major employment website, Glassdoor. For purposes of this post I’ll focus on the IT sector, though obviously condensing down Data Science & Analytics to 1 sector is rather limited. The generalized use-case is figuring out how the company is perceived by its employees. You’d be wise to say the approach below does not answer this question in itself; after all, Amazon Comprehend can ‘only’ categorize reviews by their detected sentiments, as well as detect key phrases within each review. I will illustrate this with 1000 reviews on the 10 best-ranked Tech companies, per Glassdoor’s company ranking, having scraped their Companies’ & Reviews’ sections.
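The original analysis is written in R, but as an illustration of the two Comprehend operations this workflow leans on, sentiment and key-phrase detection, here is a minimal Python sketch using boto3. The sample review text is made up and only stands in for one scraped Glassdoor review.

```python
import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')

# Made-up review snippet standing in for a single scraped Glassdoor review
review = ("Great mentorship and interesting projects, "
          "but work-life balance suffers during launches.")

sentiment = comprehend.detect_sentiment(Text=review, LanguageCode='en')
print(sentiment['Sentiment'])       # e.g. MIXED
print(sentiment['SentimentScore'])  # per-class probabilities

phrases = comprehend.detect_key_phrases(Text=review, LanguageCode='en')
print([p['Text'] for p in phrases['KeyPhrases']])

# For thousands of reviews, batch_detect_sentiment accepts up to 25 documents
# per call, which keeps the number of API requests manageable.
```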
https://medium.com/swlh/aws-sentiment-analysis-of-web-scraped-employee-reviews-2fc20ff5b7b2
['Helmeczy Brúnó']
2020-12-16 23:09:29.614000+00:00
['R', 'AWS', 'Web Scraping', 'Amazon Comprehend', 'Glassdoor']
The Future of Automation: Are Robots Coming for Our Jobs?
Photo by Franck V. on Unsplash The last time I walked into a McDonald’s restaurant, I didn’t talk to anyone. I walked in, pressed a few buttons on the touch-screen kiosk, waited a few minutes, then grabbed my order from the counter and left. McDonald’s isn’t just getting rid of humans to save money on labor costs, either; customers tend to buy more from kiosks than they do from workers at the counter. On the flip side, a poll conducted by MSN found that 78% of folks are less likely to go into a restaurant that has self-service kiosks. The refusal to go into restaurants that offer self-service kiosks sounds an awful lot like the initial refusal to use self-checkout machines at the grocery store, which means it probably won’t last and any resistance is futile. Does anyone out there still refuse to go through self-checkout at the grocery store? Anyone? It’s unclear how much of a risk automation poses to society right now, but it’s obvious that it could be a big issue in the future if we don’t do anything about it. One Democratic presidential candidate, Andrew Yang, has built his entire campaign around tackling the issue of automation. He believes that if we don’t act now we could soon face Great Depression-level unemployment and a societal meltdown. As part of the solution to the problem, Yang wants to give a freedom dividend of $12,000 per year to every American adult over age 18. How big of an issue is automation? Automation doesn’t feel like a big issue right now. It seems like we have much more pressing problems to deal with, such as climate change. Tackling automation and taking on the robots now would mean getting out in front of a problem before it exists, and when have we ever done that? It feels much safer to sit on the sidelines and wait until automation can no longer be stopped before trying to get involved. McKinsey & Company, Pete Buttigieg’s alma mater, said in a 2017 report that as many as one-third of American jobs could disappear due to automation by 2030. If they’re right, in just 10 short years 33% of the population might be replaced by machines. That is an alarming statistic and one that requires action immediately, not sometime in the next few decades. Who is in danger of losing their job? I’m not worried about losing my job to a robot anytime soon, but even wealth management is becoming automated through robo-advisors like Betterment and Wealthfront. Investing through an app is a fraction of the cost of going through a traditional financial advisor. Wealthfront charges 0.25% of assets under management (AUM), and a traditional financial advisor charges around 1.00% to 1.50%. (Although a human advisor can provide comprehensive financial planning that an app cannot. At least not yet…) Those hardest hit by job losses due to automation will almost certainly be blue-collar unskilled workers (this is not to say they have no skills; unskilled labor is the term used to describe jobs that require no special skills or training). Over 3 million Americans work as truck drivers, and another 4.6 million work in fast food. Overall, about 45% of jobs are susceptible to automation. Restaurants like McDonald’s have already begun getting rid of human workers in favor of machines, and self-driving cars and trucks are expected to take over the roads by 2030. Some of those workers replaced by machines will land on their feet. Humans are resilient, if nothing else. Many, though, may have nowhere else to go. 
In a world where we can buy groceries, food, and almost anything available on Amazon with no human interaction, where do those displaced workers end up? It’s hard to imagine a future where we end up creating as many jobs as we replace. Robots certainly will create some new jobs, though. Someone has to be in charge of designing, manufacturing, and deploying our new metal overlords. The human element I can be just about as dystopian as it gets, but even I admit that in some industries we’ll always need or want a human touch. It’s hard to ever imagine a society with automated police officers (at least without a robot uprising), and jobs that require a certain level of skill and human interaction (teachers and lawyers, for example) are unlikely to be replaced by robots anytime soon. In some workplaces, though, humans are not required. Amazon warehouses almost seem like they’re made for machines instead of humans; workers need to skip bathroom breaks to keep their jobs, and the temperatures inside some warehouses aren’t hospitable to humans. Robots don’t need bathroom breaks (that I know of) and could conceivably operate at temperatures much colder or warmer than humans can. If I worked in an Amazon warehouse, I would be shocked if I wasn’t eventually replaced by a robot. What’s the solution? Some politicians, like Andrew Yang, believe that a universal basic income will eventually be required for many Americans to get by. Others, like Bernie Sanders, want to enact a federal jobs guarantee to ensure every American has a stable job. Most politicians, though, don’t seem to be concerned about automation at all. I think we need to get out in front of this problem early. Ideally before one-third of the country loses their jobs. We need a plan for displaced workers. This means re-training workers to do new jobs and creating new opportunities for those who end up being replaced by machines. Even then, I’m not sure if it will be enough. The robots are assembling and I don’t know if we can stop them.
https://medium.com/anti-dote/the-future-of-automation-are-robots-coming-for-our-jobs-97cbb58ad0e5
['Daniel May']
2020-02-07 14:06:17.887000+00:00
['Automation', 'Money', 'Robots', 'Future', 'Dystopia']
The Dark Backstory Behind Some Popular Christmas Carols
Every year we sing the same carols without really thinking about the words. The tunes are just so catchy! Plus no one wants to believe that those tunes they associate with their favorite holiday are actually harboring dark stories behind their bright façades. We Wish You a Merry Christmas The song starts off innocently enough, with the carolers wishing the residents of a home a, “Merry Christmas.” However, in the second verse, the crowd’s demand for figgy pudding is unmet. They then ominously sing, “We won’t leave until we get some.” And they are true to their word. After breaking into the household, they hold the family hostage for hours. Finally, the police negotiate 3 figgy puddings per caroler. Though they go to jail, the sadistic singers know that the family will forever think of them when they hear that familiar song. That feeling of power is sweeter than any pudding. Jingle Bells Yes, it is very fun to go, “Dashing through the snow in a one-horse open sleigh.” With bells and warm gloves what could go wrong? Well, one thing a sleigh lacks is a seatbelt. Tom found this out the hard way when he took his sweetheart on what he thought would be a romantic date. He even recorded the words to the song as they sped through the park. All was well until the sled hit a root, and Tom was thrown into a tree. The “OH,” in the song is often mistranslated for a fun breath of air to take rather than the very real surprise of being launched from a sleigh. No one was laughing all the way to the emergency room, because of the horrific injuries and stuff. 12 Days of Christmas Many see this as a sweet song about someone’s significant other sending them wonderful gifts for 12 days straight. Truthfully, this song is about a person being harassed by a stalker who keeps sending bigger, and more annoying, gifts to get a reaction. The ladies, lords, maids, and musicians beg the person to respond to the stalker since they’ve been threatened with death if they stop their designated action or try to contact the authorities. Oh Christmas Tree Who doesn’t love a picturesque Christmas tree? Those who have never seen one, that’s who. Few know this song is actually a transmission from the future. The “leaves so unchanging” refer to the fake leaves that are made of a synthetic material that doesn’t burn. Which was the fate of most of the trees in the world. In the time the song sent from, no one would even dream of cutting down a rare tree just so they could put it in their living room for a week. To do so would be immoral, and a crime punishable by 20 years in prison.
https://medium.com/jane-austens-wastebasket/the-dark-backstory-behind-some-popular-christmas-carols-fd92a6bf4199
['Kyrie Gray']
2020-12-12 19:41:10.358000+00:00
['Satire', 'Humor', 'Culture', 'Christmas', 'Music']
The Medical History of Sex Toys
And here’s a wonderful article about that story of vibrators being used to induce orgasms as a medical treatment.
https://medium.com/sexedplus/the-medical-history-of-sex-toys-eba4a4efabd8
['Sexedplus Dan']
2018-11-14 16:35:46.507000+00:00
['Sex', 'Sexuality', 'History', 'Health', 'Comics']
Python's import mechanism __main__ __init__
Exploring Python’s Import Machinery Write better structured modules and packages If you’ve been working with Python for a while, you’ve probably come across the “__main__ idiom”. It consists of a couple of lines of code that usually look like this: In this article, I would like to explore the meaning of these lines in some greater depth and use this common pattern as a starting point for an exploration of Python’s import machinery. This should help you understand better what is happening during import, and also help you to bring structure into your own modules and packages. (This article is written referring to the standard CPython implementation and Python version 3.6.) Executing a Module With the Interpreter When a module like the one shown above ( module_a.py ) is passed to the interpreter (e.g. as python module_a.py ) on the command line, Python’s import machinery collects information about the module, and defines and sets several attributes that can be used to control the module’s behaviour. These attributes are set before any of the code in the module is executed and are accessible from within the module. A list of those attributes can be found here. Another thing that happens when the interpreter is invoked with a file is, the __main__ module gets initialised, and the statements in the file get executed and become part of the module’s namespace. The __main__ module’s __name__ attribute is set to the string value __main__ . More on this below. To make more sense of the paragraphs above, let’s add a few statements to module_a.py in order to inspect the attributes set by the Python interpreter: globals() is a built-in function that returns a dictionary containing all the symbols (variables, methods, etc) defined in the current namespace. Line 4 in the code (see the sketch below) copies the dictionary returned by globals() before iterating it and printing its keys and values. (It is necessary to operate on the copy because the variables k and v become part of the namespace for this module and change at every step. Lines 4–5 are only there to print out the variables in a more readable way; you might as well replace them with print(globals()) or a similar statement.) When you execute this code via: $ python module_a.py you should see output that looks very similar to the following (slightly truncated for readability): $ python module_a.py globals Module A: '__name__': '__main__' '__doc__': 'Module A' '__package__': None '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': None '__annotations__': {} '__builtins__': <module 'builtins' (built-in)> '__file__': 'module_a.py' '__cached__': None Hello World As you can see, most of the attributes described in the official documentation are defined and have some value assigned to them. You can see that __file__ contains the file name (this will typically be the relative or full path to the file), __doc__ contains the module’s docstring and __name__ is set to the string __main__ . These attributes are now defined in the __main__ module’s namespace and can be directly accessed, as is done in line 15 of module_a.py . Since the value of __name__ is __main__ in this case, the main() function gets called, which in turn calls function_a() , which prints out Hello World . It is only by convention that the method called after the __name__ check is called main . It is up to the script’s author to decide what should happen in this place. 
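For reference, the idiom referred to at the start of this article is nothing more than:

```python
if __name__ == '__main__':
    main()
```

and a module_a.py consistent with the output shown above could look like the following sketch. This is a sketch rather than the author's exact file; it is simply arranged so that the loop over globals() falls on line 4 and the guard on line 15, matching the line references in the text.

```python
"""Module A"""

print('globals Module A:')
for k, v in dict(globals()).items():
    print(f'{repr(k)}: {repr(v)}')


def function_a():
    print('Hello World')


def main():
    function_a()


if __name__ == '__main__':
    main()
```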
It is possible to call any (defined) function, method, etc, or have some more complex initialisation code. More on that later. Importing a Module If we want to use functions, classes, etc, which are defined in module_a in another module, let’s say in module_b , we can easily import module_a in there. Assuming that both files are in the same directory, the code could simply look like: If you pass module_b to the interpreter now, you’ll see output like the following (truncated for readability): $ python module_b.py globals Module A: '__name__': 'module_a' '__doc__': 'Module A' '__package__': '' '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': ModuleSpec(name='module_a', loader=<_frozen_importlib_external.SourceFileLoader ...>, origin='/path/to/module_a.py') '__file__': '/path/to/module_a.py' '__cached__': '/path/to/__pycache__/module_a.cpython-36.pyc' '__builtins__': {'__name__': 'builtins', ...} The output is generated by the print statements inside module_a.py , after it has been imported by module_b.py . Notice the differences between this and the first output: __name__ is now set to the module’s name rather than __main__ , __file__ is an absolute path to the file the module has been imported from, __spec__ is set to an instance of ModuleSpec (see here for more information), and __builtins__ is set to the builtins ’s module dictionary (this is a CPython implementation detail). Also, notice that Hello World does not get printed out. Since module_a ’s __name__ is now set to it’s name (rather than __main__ ), line 15 in module_a.py prevents main() from getting called when the module is loaded and imported, and therefore function_a() never gets called. To get a better understanding of the variables in module_b ’s namespace let’s add a print out to module_b.py : Running this, should produce output similar to the following: $ python module_b.py globals Module A: '__name__': 'module_a' '__doc__': 'Module A' '__package__': '' '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': ModuleSpec(name='module_a', loader=<_frozen_importlib_external.SourceFileLoader ...>, origin='/path/to/module_a.py') '__file__': '/path/to/module_a.py' '__cached__': '/path/to/__pycache__/module_a.cpython-36.pyc' '__builtins__': {'__name__': 'builtins', ...} globals module B: '__name__': '__main__' '__doc__': 'Module B' '__package__': None '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': None '__annotations__': {} '__builtins__': <module 'builtins' (built-in)> '__file__': 'module_b.py' '__cached__': None 'module_a': <module 'module_a' from '/path/to/module_a.py'> The first half should look the same as before, while the second half should look similar to the output when we ran python module_a.py but with module_a replaced with module_b in most places. On top of that, the namespace now also contains module_a , which makes it (and everything defined in it) accessible inside module_b . One more thing to notice is that __package__ in module_a ’s namespace is set to an empty string, while __package__ in module_b ’s namespace is set to None . Python will try to determine whether a module is part of a package. Since module_a is being imported by module_b in this case, it is at least possible that it might be part of a package, therefore the variable is set to an empty string, while module_b is directly executed which implies it cannot be part of a package (in this particular execution). 
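Putting the two changes to module_b.py together, its final state could look like this sketch (again an assumption as to exact layout, but consistent with the 'globals module B:' printout above):

```python
"""Module B"""

import module_a

print('globals module B:')
for k, v in dict(globals()).items():
    print(f'{repr(k)}: {repr(v)}')
```

With this in place, adding a call such as module_a.function_a() at the bottom of module_b.py would print Hello World, since the imported module's functions are bound to names inside module_a's namespace.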
The output shows us that module_a has been successfully imported into module_b , its function definitions have been loaded and can be accessed, e.g.: which would print out Hello World (and the variables inside module_a ’s namespace upon import). The __main__ Module As mentioned above, when a module is executed by invoking the interpreter directly, the __main__ module is initialised in order to provide the namespace for the top-level environment of the program. To get a better understanding of what that means, we can use Python’s sys module to get a list of loaded modules ( sys is one of the few modules that gets initialised on interpreter start-up). For this, let’s create a new module with the following content: The sys.modules variable holds a dictionary with all modules that have been loaded so far (but not necessarily imported). Sorting for convenience and printing them out gives something like the following (truncated for readability): $ python module_c.py modules '__main__': <module 'module_c' from '/path/to/module_c.py'> '_bootlocale': <module '_bootlocale' from '/usr/lib/python3.6/_bootlocale.py'> '_codecs': <module '_codecs' (built-in)> ... ... ... 'warnings': <module 'warnings' from '/usr/lib/python3.6/warnings.py'> 'weakref': <module 'weakref' from '/usr/lib/python3.6/weakref.py'> 'zipimport': <module 'zipimport' (built-in)> It is a mapping between module names (by which the modules can be accessed) and the module instances (a module is a Python object itself) for all loaded module. In other words, the modules listed are known to the interpreter and can be import ’ed inside the given module. This is also the first place Python will search for modules to import. As you can see, the first entry is the __main__ module which has been initialised with module_c ’s content. This means, we can use this module to further convince ourselves that the __main__ module’s and the current module’s namespace are the exact same thing. To do this, let’s create another module with the following content: Running this, should give: $ python module_d.py True False True True This shows us, not only is the value of __name__ the same in both cases, but they also refer to the same object in memory. Understanding __main__.py In addition to the “ __main__ idiom”, Python offers a way of achieving the same effect by creating a file called __main__.py inside a project directory, alongside the actual module files. This can be useful when a project has become very large and you would like to split the logic into multiple files/modules, or if you want to keep functionality strictly compartmentalised. 
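As an aside, the module_c.py used for the sys.modules printout above can be as small as the following sketch; module_d.py, which compares the current module with __main__, is similar but additionally imports __main__ and prints a few equality and identity checks between the two namespaces.

```python
"""Module C"""

import sys

print('modules')
for k, v in sorted(sys.modules.items()):
    print(f'{repr(k)}: {repr(v)}')
```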
Imagine a package with the following directory structure: my_package/ ├── __main__.py ├── module_x.py └── module_y.py and files with the following content: It is now possible, to pass the directory to the Python interpreter to execute, which gives output like the following (truncated for readability): $ python my_package globals Module X: '__name__': 'module_x' '__doc__': 'Module X' '__package__': '' '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': ModuleSpec(name='module_x', loader=<_frozen_importlib_external.SourceFileLoader ...>, origin='my_package/module_x.py') '__file__': 'my_package/module_x.py' '__cached__': 'my_package/__pycache__/module_x.cpython-36.pyc' '__builtins__': {'__name__': 'builtins', ...} globals Module Y: '__name__': 'module_y' '__doc__': 'Module Y' '__package__': '' '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': ModuleSpec(name='module_y', loader=<_frozen_importlib_external.SourceFileLoader ...>, origin='my_package/module_y.py') '__file__': 'my_package/module_y.py' '__cached__': 'my_package/__pycache__/module_y.cpython-36.pyc' '__builtins__': {'__name__': ...} globals main: '__name__': '__main__' '__doc__': 'Main module' '__package__': '' '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': ModuleSpec(name='__main__', loader=<_frozen_importlib_external.SourceFileLoader ...>, origin='my_package/__main__.py') '__annotations__': {} '__builtins__': <module 'builtins' (built-in)> '__file__': 'my_package/__main__.py' '__cached__': 'my_package/__pycache__/__main__.cpython-36.pyc' 'module_x': <module 'module_x' from 'my_package/module_x.py'> 'module_y': <module 'module_y' from 'my_package/module_y.py'> function x function y Most of the output is similar to what has been described above, but the fact that it is printed at all and the order in which it is printed, gives us insight into what the Python interpreter is doing. We see module_x ’s namespace variables, followed by the module_y ’s and the __main__ module’s namespace variables. Since the __main__ module is the only place we have done any imports so far, this tells us that the interpreter is automatically picking up whatever is in __main__.py and executing it as if it was specified on the command line directly (this is not exactly true, as the paths in most cases would be absolute instead of relative). The next thing to notice is that the name attribute __name__ for module_x and module_y are set to the respective names, as you would expect for modules being imported, while __name__ is set to __main__ for __main__.py . Notice how module_x and module_y are part of the namespace in __main__.py (as you would expect since we are importing them), and we can make calls to functions defined inside those modules. The last two lines show us that the two calls to functions defined in module_x and module_y are executed as well. Be aware of how every line in each module gets executed automatically upon import (the function definitions inside module_x and module_y are statements that get executed as well, while the functions themselves don’t). It is also possible to pass in the absolute path to the package, i.e.: $ python /path/to/my_package The result should be the same, with absolute instead of relative paths in the output. Advantages of Using the __main__ Idiom Whether it is via the if __name__ == '__main__' “guard” statement or by using a __main__.py file, one key advantage is separation of logic defined in your modules from its execution. 
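Before weighing those advantages in detail, here is roughly what the three files inside my_package could look like; the print strings and function bodies are inferred from the output above, so treat the layout as an assumption.

```python
# my_package/module_x.py
"""Module X"""

print('globals Module X:')
for k, v in dict(globals()).items():
    print(f'{repr(k)}: {repr(v)}')


def function_x():
    print('function x')


# my_package/module_y.py (same shape, with Y in place of X)
"""Module Y"""

print('globals Module Y:')
for k, v in dict(globals()).items():
    print(f'{repr(k)}: {repr(v)}')


def function_y():
    print('function y')


# my_package/__main__.py
"""Main module"""

import module_x
import module_y

print('globals main:')
for k, v in dict(globals()).items():
    print(f'{repr(k)}: {repr(v)}')

module_x.function_x()
module_y.function_y()
```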
The details of whether you should use it, how to structure a project and which pieces of logic should go where, will generally depend on what the code does and how it is intended to be used. Here are a few common patterns to consider. Testing Imagine module_a from the first example didn’t have the guard statement and made a call to the main function every time the module is imported somewhere. If you wanted to write a test for function_a , you would have to import module_a in your test script, which would immediately call main() and subsequently function_a() . In this particular case this might not be a big deal, but if function_a had a more impactful side-effect (maybe writing a file to a specified location), you would most likely want to avoid that, or at least have more control over it. Command Line Arguments Another use-case is a module or package that is designed to run stand-alone (as a command line script) but which also defines logic that might be used (via imports) in other modules. Since your project is designed to run from the command line, it is likely to have some form of command-line argument processing, using argparse or a comparable library. It might also be necessary to perform other initial steps like reading and checking configuration files, setting up a logger, etc. These and other things may be unnecessary or even counterproductive when the module is imported as part of another project. Imports If your project is using libraries that are only relevant when it is executed, it can make sense to import those libraries in __main__.py and that way, avoid having to import those when your code gets imported somewhere else. Understanding __init__.py A slightly more common thing to find in Python projects and modules is an __init__.py file. The official documentation tells us that when a regular package is imported, this __init__.py file is implicitly executed, and the objects it defines are bound to names in the package’s namespace. This means that __init__.py serves a different purpose than __main__.py and we can use the method described above to understand the differences in more detail. Let’s extend my_package from above and add an __init__.py file: my_package/ ├── __init__.py ├── __main__.py ├── module_x.py └── module_y.py where __init__.py has the content sketched below. When we now pass the package to the interpreter ( python my_package ), we should see the exact same output as in the previous example, since nothing has changed for the execution of a package in that way. The main difference comes in when we treat the package as an actual package and import it. Importing a Package In order to see the subtleties, I will describe a step-by-step approach. We start a Python interpreter session without any parameters and use our two-liner from above to get an idea of the current namespace, i.e.: $ python Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> for k, v in dict(globals()).items(): ... print(f'{repr(k)}: {repr(v)}') ... '__name__': '__main__' '__doc__': None '__package__': None '__loader__': <class '_frozen_importlib.BuiltinImporter'> '__spec__': None '__annotations__': {} '__builtins__': <module 'builtins' (built-in)> This looks very similar to the case where we passed in a module directly to the interpreter. As you might have expected, the interpreter created a module with name __main__ and populated some of the module-related variables. 
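An __init__.py consistent with the printouts in this section could be as small as this sketch (the docstring and print strings are taken from the output that follows; the later modification discussed further below simply adds an __all__ = ['module_x'] assignment near the top of the same file):

```python
# my_package/__init__.py
"""Init module"""

print('globals init:')
for k, v in dict(globals()).items():
    print(f'{repr(k)}: {repr(v)}')


def package_level_function():
    print('package level function')
```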
In the next step, we import my_package : >>> import my_package globals init: '__name__': 'my_package' '__doc__': 'Init' '__package__': 'my_package' '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': ModuleSpec(name='my_package', loader=<_frozen_importlib_external.SourceFileLoader ...>, origin='/path/to/my_package/__init__.py', submodule_search_locations=['/path/to/my_package']) '__path__': ['/path/to/my_package'] '__file__': '/path/to/my_package/__init__.py' '__cached__': '/path/to/my_package/__pycache__/__init__.cpython-36.pyc' '__builtins__': {'__name__': 'builtins', ...} and see the namespace variables for __init__.py printed out. Notice how, in this case, __name__ , as well as __package__ , is set to the string value my_package . This shows us that everything in __init__.py has been executed upon import (including the function definition), and that a new module (and also namespace) has been created that contains bindings to everything defined in __init__.py . After importing my_package in the interpreter session, let’s use globals() to inspect the name space and to ensure that the package is available: >>> for k, v in dict(globals()).items(): ... print(f'{repr(k)}: {repr(v)}') '__name__': '__main__' '__doc__': None ... ... 'my_package': <module 'my_package' from '/path/to/my_package/__init__.py'> The last line should show my_package now. To further convince ourselves, we can run a few checks like these: >>> my_package.__name__ 'my_package' >>> my_package.__file__ '/path/to/my_package/__init__.py' And eventually: >>> my_package.package_level_function() package level function The function defined in __init__.py is immediately accessible, the two modules ( module_x and module_y ), however, are not: >>> my_package.module_x Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'my_package' has no attribute 'module_x' Importing a Module From a Package In order to make the modules accessible, and to further explore what happens when we import one of the modules from my_package , let’s start a new interpreter session and run the following import statement: $ python ... >>> from my_package import module_x We see two things happen in this case: The namespace variables in __init__.py are being printed out, followed by the namespace variables in module_x : globals init: '__name__': 'my_package' '__doc__': 'Init module' '__package__': 'my_package' ... ... '__builtins__': {'__name__': 'builtins', ...} globals module X: '__name__': 'my_package.module_x' '__doc__': 'Module X' '__package__': 'my_package' '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': ModuleSpec(name='my_package.module_x', loader=<_frozen_importlib_external.SourceFileLoader ...>, origin='/path/to/my_package/module_x.py') '__file__': '/path/to/my_package/module_x.py' '__cached__': '/path/to/my_package/__pycache__/module_x.cpython-36.pyc' '__builtins__': {'__name__': 'builtins', ...} In other words, everything in __init__.py has been executed before importing module_x and executing everything inside it. Also notice how module_x ’s __name__ has been set to the module’s fully-qualified name, while __package__ has been set to my_package . Printing out the variables the main namespace shows us: >>> for k, v in dict(globals()).items(): ... print(f'{repr(k)}: {repr(v)}') ... 
'__name__': '__main__' '__doc__': None '__package__': None '__loader__': <class '_frozen_importlib.BuiltinImporter'> '__spec__': None '__annotations__': {} '__builtins__': <module 'builtins' (built-in)> 'module_x': <module 'my_package.module_x' from '/path/to/my_package/module_x.py'> In other words, the last line tells us that my_package.module_x is now bound to a variable called module_x in the namespace and that it is accessible (while my_package isn’t): >>> module_x.function_x() function x >>> >>> my_package.package_level_function() Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'my_package' is not defined This is not surprising, since my_package didn’t show up in the namespace variables. To further explore what is going on, let’s have a look at sys.modules . In the same session, run the following few lines (output truncated for readability): >>> import sys >>> for k, v in sorted(sys.modules.items()): ... print(f'{repr(k)}: {repr(v)}') ... '__future__': <module '__future__' from '/usr/lib/python3.6/__future__.py'> '__main__': <module '__main__' (built-in)> ... 'my_package': <module 'my_package' from '/path/to/my_package/__init__.py'> 'my_package.module_x': <module 'my_package.module_x' from '/path/to/my_package/module_x.py'> ... 'zlib': <module 'zlib' (built-in)> The long list that is printed out, again, contains all the modules that have been loaded by the interpreter up until this moment. We notice that my_package as well as my_package.module_x have been loaded but only my_package.module_x has been imported and bound to a name in the main namespace (which means it shows up when printing out globals and can be accessed in the interpreter). Importing a Package Module Let’s see what happens when we import the module using it’s fully-qualified name. To do so, we start a new Python interpreter session and type in the following: $ python ... >>> import my_package.module_x The output is very similar to the previous case, the variables in __init__.py ’s namespace are printed out, followed by those in module_x : globals init: '__name__': 'my_package' '__doc__': 'Init module' '__package__': 'my_package' ... globals module X: '__name__': 'my_package.module_x' '__doc__': 'Module X' '__package__': 'my_package' ... The difference becomes clearer, when we inspect the variables in the main namespace: >>> for k, v in dict(globals()).items(): ... print(f'{repr(k)}: {repr(v)}') ... '__name__': '__main__' '__doc__': None '__package__': None '__loader__': <class '_frozen_importlib.BuiltinImporter'> '__spec__': None '__annotations__': {} '__builtins__': <module 'builtins' (built-in)> 'my_package': <module 'my_package' from '/path/to/my_package/__init__.py'> We see, that in contrast to the method above, my_package (instead of module_x ) is now defined in the namespace, and we cannot access module_x directly but have to use its fully-qualified name: >>> module_x.function_x() Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'module_x' is not defined >>> >>> my_package.module_x.function_x() function x It is also possible to call functions and everything else defined in __init__.py : >>> my_package.package_level_function() package level function To further understand what is going on, let’s inspect my_package with help of Python’s dir function (which produces a list similar to globals.keys() ): >>> for v in dir(my_package): ... print(repr(v)) ... 
'__builtins__' '__cached__' '__doc__' '__file__' '__loader__' '__name__' '__package__' '__path__' '__spec__' 'module_x' 'package_level_function' As we can see, module_x has become part of my_package ’s namespace, which explains why we cannot call it directly. In other words, this way we have loaded, imported and bound my_package to a variable in the main namespaces, while binding module_x to a variable in my_package ’s namespace. This behaviour is also described in the documentation of the __import__ function which gets called during import. Importing * From a Package Python’s official tutorial does a great job in explaining what happens when you import * from a package. Let’s use the method described in this article to get a better understanding of this in a fresh interpreter session: $ python ... >>> from my_package import * globals init: '__name__': 'my_package' '__doc__': 'Init module' '__package__': 'my_package' '__loader__': <_frozen_importlib_external.SourceFileLoader ...> '__spec__': ModuleSpec(name='my_package', loader=<_frozen_importlib_external.SourceFileLoader ...>, origin='/path/to/my_package/ __init__.py', submodule_search_locations=['/path/to/my_package']) '__path__': ['/path/to/my_package'] '__file__': '/path/to/my_package/__init__.py' '__cached__': '/path/to/my_package/__pycache__/__init__.cpython-36.pyc' '__builtins__': {'__name__': 'builtins', ...} As expected, the code in __init__.py has been executed, but nothing else. We can further check this by inspecting namespace variables: >>> for k, v in dict(globals()).items(): ... print(f'{repr(k)}: {repr(v)}') ... '__name__': '__main__' '__doc__': None '__package__': None '__loader__': <class '_frozen_importlib.BuiltinImporter'> '__spec__': None '__annotations__': {} '__builtins__': <module 'builtins' (built-in)> 'package_level_function': <function package_level_function ...> The only (additional) available object is package_level_function , while module_x and module_y have not been imported or loaded. If we would like to change this, we can follow the instructions in the tutorial and add __all__ to the package. __init__.py is the ideal place to add this variable, so we modify the file to look like this: In a new interpreter session we repeat the import: $ python ... >>> from my_package import * As expected, we see the __init__.py printouts followed by the module_x printouts: globals init: '__name__': 'my_package' '__doc__': 'Init module' '__package__': 'my_package' ... '__all__': ['module_x'] globals module X: '__name__': 'my_package.module_x' '__doc__': 'Module X' '__package__': 'my_package' ... Notice that __all__ is now defined in __init__ ’s namespaces, which is the reason why module_x gets imported. We can now further inspect the main namespace: >>> for k, v in dict(globals()).items(): ... print(f'{repr(k)}: {repr(v)}') ... '__name__': '__main__' '__doc__': None '__package__': None '__loader__': <class '_frozen_importlib.BuiltinImporter'> '__spec__': None '__annotations__': {} '__builtins__': <module 'builtins' (built-in)> 'module_x': <module 'my_package.module_x' from '/path/to/my_package/module_x.py'> As expected, we see module_x in the namespace, however, package_level_function is not directly accessible in the namespace in this case (neither is my_package itself). In other words, excluding things from __all__ allows you to “hide” objects, functions, variables, etc, defined in __init__.py that you might use for the initialisation of your package but that are not supposed to be exposed to the user (e.g. 
because they contain a name that is likely to clash with other imports). Conclusion There are a lot of subtleties associated with Python’s import machinery, but it is a powerful tool that does a lot of heavy lifting for you when it comes to finding modules in your file tree, loading and importing them. It also gives you a lot of flexibility and convenience when importing modules. Python also allows you to segregate the logic in your project in a way that will help others understand the project’s structure more easily. There is much more to the import machinery, and generally the official Python Tutorial is a great reference for intermediate and advanced programmers. If you are like me, observing the internals in a simple way, like the one described in this article, is a great way of solidifying your knowledge and getting a better grasp of the language’s details.
https://papacz.medium.com/exploring-pythons-import-machinery-514fb21d5486
['Paul Papacz']
2020-12-19 15:11:40.043000+00:00
['Modules', 'Coding', 'Computing', 'Programming', 'Python']
Fellow Marketers: Please, Stop With the Buzzwords
Fellow Marketers: Please, Stop With the Buzzwords A go-forward actionable plan to optimize your vocabulary There are no points for poor communication. Photo by Joshua Miranda from Pexels I didn’t need coffee to fuel me that morning. The high of giving my first presentation to our biggest client at my new job provided all the adrenaline I needed. The strategy was sound, the creativity was stunning, and the market insights were fresh. This was my time to shine and prove to my colleagues and clients that they made the right choice in headhunting and relocating me across the country for the role. I envisioned looks of astonishment because I felt the presentation was that good. Heck, maybe even a high-five or proclamation of “this is exactly why we chose you!” from the client when we landed on the token “Thank You” slide. But instead, the vice president of my client’s business turned to me at the end and said, “This is great. Now can you say it again in English?” The problem was I did say it in English.
https://medium.com/better-marketing/fellow-marketers-please-stop-with-the-buzzwords-f8790061976e
['Liana Buenaventura']
2020-11-24 14:27:32.873000+00:00
['Marketing Tips', 'Communication', 'Marketing', 'Business', 'Communication Skills']