The what, why, and how of Message Queues
If you have ever heard of message queues like Apache Kafka and been confused about the what, why, and how of message queues in modern applications, this one is for you.

If you have had any amount of interest in things like scalable architectures and microservices, there is a good chance you have come across message queues. Perhaps you have heard of companies making use of Apache Kafka, or an alternative like RabbitMQ, those two being the most popular message queues in use today. And perhaps, just like me, you were confused by what message queues did and how they helped modern distributed architectures. Let’s try to understand that. First things first, let’s get the absolute basics out of the way: what even is a message queue?

What is a message queue?

Message queues or brokers are components used for inter-process or inter-microservice communication, whereby messages between multiple services are passed not through direct transfer of data, but through a common “queue” or buffer. The paradigm is similar to the publisher/subscriber model: multiple publishers or “producers” push messages into queues, and subscribers or “consumers” listen for new messages coming into the queue. When a consumer “consumes” or picks up a message, the message is removed from the queue. The model is asynchronous in nature: producers and consumers interact with the queue completely independently of one another. Maybe you’re still confused, so let’s look at some use cases for message queues to aid your understanding.

What are some use cases for message queues?

As you might have correctly guessed, message queues aren’t meant for real-time communication. To take a made-up, unrealistic scenario: using one as an intermediary for, say, an HTTP request where the user has to wait on the response is probably not a good idea.
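To make the producer/consumer model above concrete, here is a toy in-memory queue in NodeJS (the language used later in this article). This is only an illustration of the contract between producers and consumers, not a real broker: systems like RabbitMQ and Kafka add persistence, acknowledgements, and network transport on top of this idea.

```javascript
// Toy FIFO message queue, for illustration only.
class MessageQueue {
  constructor() {
    this.messages = []; // buffered messages, oldest first
    this.waiting = [];  // consumers waiting for a message
  }

  // Producer side: hand the message to a waiting consumer,
  // or buffer it until one asks for it.
  publish(msg) {
    const consumer = this.waiting.shift();
    if (consumer) {
      consumer(msg);
    } else {
      this.messages.push(msg);
    }
  }

  // Consumer side: resolves with the oldest message; consuming
  // removes it from the queue. If the queue is empty, wait.
  consume() {
    if (this.messages.length > 0) {
      return Promise.resolve(this.messages.shift());
    }
    return new Promise((resolve) => this.waiting.push(resolve));
  }
}

// Producers and consumers never talk to each other directly:
const queue = new MessageQueue();
queue.publish('first');  // producer A
queue.publish('second'); // producer B
queue.consume().then((msg) => console.log(msg)); // logs 'first' (FIFO)
```

Note that publishing succeeds whether or not a consumer is currently listening; that independence is exactly what makes the model asynchronous.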
Using message queues in a producer-consumer model gives us no guarantee of when the consumer will actually take up a message for processing. All we have is a guarantee that the message will eventually get consumed and processed. That’s not exactly true, because in real-world large-scale systems things like queue overflows may be a real problem, but for our understanding we will assume a relatively fail-safe system.

Owing to this asynchronous nature, message queues are a good fit for processes that are not crucial but are nice to have done. The processes performed by the consumers should, ideally, not be core to the functioning and overall architecture of the application, but they may be processes that help improve its functionality and/or performance. Some real-world use cases for message queues are listed below.

Sending emails

Emails are used for a large number of purposes, such as marketing campaigns, account verification, password resets, et cetera. If you think about these use cases, you may realize that none of them needs immediate processing. Delays in the dispatch of these emails are completely acceptable and do not detract from the core functionality of the applications using them. Message queues can help in such a case. Multiple “producer” services that generate emails can push email objects (by objects I mean formatted objects that contain all the necessary data for the email, such as the content, receiver, subject, etc.) into the queue. A single consumer microservice, dedicated to sending emails, works completely independently of where an email comes from. It consumes messages from the queue one by one and sends emails as specified in each email object.
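A minimal sketch of such an email worker might look like this (the names processEmails and sendEmail are illustrative assumptions, not from any particular library):

```javascript
// Hypothetical email consumer: drains a queue of email "objects"
// one by one, independent of which producer pushed each message.
const processEmails = async (emailQueue, sendEmail) => {
  let sent = 0;
  for (const email of emailQueue) {
    // Each message carries everything the worker needs:
    // { to, subject, body, ... }
    await sendEmail(email);
    sent += 1;
  }
  return sent;
};

// Any producer can enqueue an email without knowing who sends it:
const emailQueue = [
  { to: 'user@example.com', subject: 'Verify your account', body: '...' },
  { to: 'user@example.com', subject: 'Password reset', body: '...' },
];
```

Here the queue is just an array for simplicity; in the RabbitMQ example later in the article, messages would arrive through a broker channel instead.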
This approach scales well: if the number of messages coming into the queue gets too large, we can horizontally scale (read: add more) consumer email-service instances, all listening to the same queue and sending emails.

Data post-processing

Assume you have a blogging application that needs to handle large amounts of image data from images uploaded by the users themselves. Users can’t be expected to provide images that are web-optimized or small in size. However, rejecting uploads because of image size may not be the best user experience, and you may want to allow users to upload any images they want, provided your architecture and storage capacity can handle it. One solution that gives users this flexibility without negatively impacting your application’s load times (owing to larger assets) is to post-process and optimize every image that gets uploaded. This is not at all a crucial operation: while skipping it may impact user experience slightly, the optimization is in no way critical to the application’s functionality, nor does it need to happen instantly. A service may be employed in the application architecture whose sole purpose is optimizing images that get uploaded, and a message queue can help in this scenario. The flow of control and messages may be something like this: the user publishes a blog post with some high-quality, large images. The image gets pushed to storage (something like AWS S3 or Azure Blob Storage). A hook is triggered in the application, pushing a message with information about the newly uploaded image into the “image optimization” queue. The image optimization service listens to the queue; it pulls an image from S3, optimizes it, and then re-uploads the optimized image to replace the unoptimized one on S3.

Batch updates for databases

Databases are used for many purposes, and not all of them are crucial to application usability.
Assume you have an application like Spotify. It may have databases storing user data, which the user can update and/or view at any time. This data is important, and any changes to it may need to be reflected immediately. On the other hand, your application may have a statistical/machine-learning engine that analyzes user actions for any reason, be it for optimizing recommendations or for generating stats. This operation may be considered non-crucial in terms of how immediate the updates need to be: in general, delays in integrating the latest user activity into the machine-learning algorithms may be completely acceptable.

Here, a message queue may be used to optimize database querying. Establishing a database connection has some overhead irrespective of the amount of data that is transferred, and even if a persistent connection is used, transit times become a factor in large-scale systems. In general, if possible, batching is suggested for operations such as record insertion. In the scenario above, each user action such as listening to a song, liking it, or creating a playlist may be used to optimize user recommendations. However, creating a database request for every single such operation would be insanity, for lack of a better word. Instead, a better approach may be this: for every action, push the action data to a message queue. This will be magnitudes faster than sending the data directly to the database. A consumer service may cache these activities on a per-user basis as they come in from the queue. Caching here is acceptable for two reasons. One, this data is non-critical: losing it will not break the application, and the user will never be affected or even know. Two, using a cache is much faster for temporary data (of course, cost is also a factor, but let’s not dive into that for the purposes of this article).
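The per-user caching idea, together with a periodic flush into a single bulk insert, might be sketched like this (the ActivityBatcher name and the insertMany method are assumptions for illustration, not a real library API):

```javascript
// Sketch: collect queued user actions in memory, then write them
// to the database in one batch instead of one request per action.
class ActivityBatcher {
  constructor(db) {
    this.db = db;           // anything with an insertMany(rows) method
    this.cache = new Map(); // userId -> [actions]
  }

  // Called once per message consumed from the queue.
  record(userId, action) {
    if (!this.cache.has(userId)) this.cache.set(userId, []);
    this.cache.get(userId).push(action);
  }

  // Called on a timer: one bulk insert instead of one
  // round-trip per action. Returns the number of rows written.
  flush() {
    const rows = [];
    for (const [userId, actions] of this.cache) {
      for (const action of actions) rows.push({ userId, action });
    }
    this.cache.clear();
    if (rows.length > 0) this.db.insertMany(rows);
    return rows.length;
  }
}
```

A real deployment would trigger flush() on an interval (e.g. with setInterval) and use the database driver’s actual bulk-insert facility.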
On a regular interval, the consuming service may take the cached data, transform it into a single database insertion query, and commit that change to the database. How the machine-learning/data-warehousing engine later uses the data is a completely different story, but I hope you are able to get a grasp of the several use cases for message queues. Now that you have an idea of the basic concept, let’s look into some of the main features of message queues.

Features / Advantages of message queues

Decoupling: Operations of the consumer and producer are completely independent of one another.

Scalability: The number of producers and consumers can easily be scaled as required.

Buffering and load management: Message queues essentially act as data buffers. In case of a spike in the amount of data that needs to be processed by the consumer service, the latter need not even be made aware of the fact. The message queue buffers the data for the service, and the service only needs to process the data one message at a time, rather than having to manage a large amount of data all at once. This makes your architecture more robust overall.

Resiliency: Even if your consumer processes go down, it doesn’t mean that your application breaks. Messages for the consumer will remain queued in the message queue, and whenever the consumer service comes back up, it can start processing them without having to do any additional setup or work.

Delivery guarantees: Message queues offer a general guarantee that any message pushed to a queue will eventually get delivered to a consumer.

Order guarantee: Being a queue, an order is implicitly associated with the incoming messages, and the same order is followed in the consumption and processing of the data.

Now, hopefully, you have a slightly better understanding of the what and why of message queues compared to when you started reading this article. Let’s try using a message queue in a small application to understand how it would be used in practice.

Using a message queue in code

Disclaimer: the application that I’ll be using is a very simple one, and it honestly does nothing of value. However, it should allow you to understand where a message queue may fit into a real application. The architecture for the application will be something like this:

For the message queue, I’ll be using RabbitMQ, as it is one of the easier-to-use message queue systems. The concepts should be transferable to any other system such as Apache Kafka, which works on the same principles. For the application components, I use docker and docker-compose to bring up the containers. For the application code, I use NodeJS in the example; however, the concept is, of course, language-agnostic.

Setup

Let’s see the directory structure used for the application.
All the code for the repository will be available here.

|- consumer/
|  |- Dockerfile
|  |- package.json
|  |- app.js
|- producer/
|  |- Dockerfile
|  |- package.json
|  |- app.js
|- secondary_producer/
|  |- Dockerfile
|  |- package.json
|  |- app.js
|- docker-compose.yml

The package.json file in NodeJS applications is used for managing dependencies and application metadata. The dependencies for the three applications are identical, and the package.json files for each of the three applications are almost identical. You may change the “name” key to match the application, but even that’s not mandatory.

{
  "name": "producer",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "author": "Dakshraj Sharma",
  "license": "ISC",
  "dependencies": {
    "amqplib": "^0.5.6"
  }
}

Now, for the Dockerfiles. The Dockerfile for each of the three applications is exactly the same, since they are each nothing but simple NodeJS applications.

FROM node:alpine
WORKDIR /usr/app/
COPY ./package*.json ./
RUN npm install
RUN npm install -g pm2
COPY ./ ./
CMD [ "pm2-runtime", "npm", "--", "start" ]

This Dockerfile does the following things: pulls the node:alpine image; creates a working directory /usr/app for the application; copies the local package.json file to the container’s /usr/app directory and then installs dependencies by running ‘npm install’; installs pm2 using ‘npm install -g pm2’ to ensure that the application restarts if it errors; copies the code (essentially only app.js) to /usr/app; and runs the application using ‘pm2-runtime npm -- start’. If you want to learn more about pm2, the official website may be a good place to start, but that is beyond the scope of this article.

docker-compose is responsible for running the four containers of our application together and allows connectivity between them. The content of the docker-compose.yml file is very simple:

docker-compose.yaml

We simply create four ‘services’ or containers.
A RabbitMQ container is created from the latest rabbitmq image. The password and username for connecting to RabbitMQ are passed as environment variables:

RABBITMQ_DEFAULT_USER: rabbitmq
RABBITMQ_DEFAULT_PASS: rabbitmq

The service specification for the remaining three services is exactly the same, barring the directory paths and service names. We build from the Dockerfile in the specified path, and tell docker-compose to restart the container if it goes down and that the container should depend on the rabbitmq container:

build:
  context: ./consumer/
  dockerfile: Dockerfile

We set environment variables for the RabbitMQ connection (they will be used in the code via ‘process.env’):

RABBITMQ_HOST: rabbitmq
RABBITMQ_USER: rabbitmq
RABBITMQ_PASS: rabbitmq
RABBITMQ_QUEUE: messages

Note the RABBITMQ_HOST variable. In a real-world scenario, this would be replaced by the IP/URL of the machine running RabbitMQ; if a local installation is being used, localhost may be specified. Since we are using docker-compose, specifying rabbitmq as an address will automatically be directed to the container/service named rabbitmq.

We also bookmark /usr/app/node_modules so we do not copy it repeatedly, and mount the local files to /usr/app so code changes may be reflected in real time if using nodemon:

volumes:
  - /usr/app/node_modules
  - ./secondary_producer:/usr/app/

The code

Let’s write up our first producer, inside ‘producer/app.js’. To connect to RabbitMQ, we will use the NPM library ‘amqplib’, which is available for us to use since it was already added as a dependency in our package.json (see above).

const amqp = require('amqplib');

We then store the environment variables as local const values. Using the host, username, and password variables, we also form the connection string using JavaScript template strings and store it as RABBITMQ_CONNECTION_STRING.
const RABBITMQ_HOST = process.env.RABBITMQ_HOST; // Host
const RABBITMQ_USER = process.env.RABBITMQ_USER; // Username
const RABBITMQ_PASS = process.env.RABBITMQ_PASS; // Password

// Connection string: amqp://username:password@host
const RABBITMQ_CONNECTION_STRING = `amqp://${RABBITMQ_USER}:${RABBITMQ_PASS}@${RABBITMQ_HOST}/`;

// The name of the queue we will connect to
const RABBITMQ_QUEUE = process.env.RABBITMQ_QUEUE;

You might be confused as to why we need a queue name when we already have the host defined. This is because a single RabbitMQ instance can have many queues, independent of each other. Each of these queues is identified by a string identifier, which we have declared as an environment variable for the current application.

We define a function that returns a random number. In a real-world scenario, this may have been any function that supplied some data which had to be passed into the message queue.

const getRandomNumber = () => {
  return Math.random();
};

Fairly basic stuff up to here. Now, let’s try to connect to the RabbitMQ instance, set up a queue if one doesn’t already exist, and then push data into the queue. Connection and channel creation are asynchronous and return promises, so we use then and catch blocks.

producer/app.js

To summarize what the code is doing: we create a new connection to the RabbitMQ instance and a channel on it:

amqp.connect(RABBITMQ_CONNECTION_STRING)
  .then((conn) => {
    return conn.createChannel(); // create a channel
  })

We create (assert) the queue we will send to. Note that assertQueue itself returns a promise, so we pass the channel along for the next step:

  .then((chan) => {
    return chan.assertQueue(RABBITMQ_QUEUE, { durable: false })
      .then(() => chan); // specify the queue to send to, keep the channel
  })

Finally, we send data into the queue:

  .then((chan) => {
    chan.sendToQueue(RABBITMQ_QUEUE, Buffer.from("ANY MESSAGE HERE"));
  })

For messages to be sent into the queue, any buffer of bytes works. So, essentially, you could send a JSON.stringified object into the queue, which could later be parsed at the consumer end as a JSON object. Now we have a producer up.
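Since queue messages travel as raw bytes, a common pattern is to serialize structured data with JSON on the producer side and parse it back on the consumer side. A minimal sketch (the helper names encodeMessage and decodeMessage are illustrative, not part of amqplib):

```javascript
// Producer side: turn a structured object into a Buffer, the form
// expected by chan.sendToQueue(RABBITMQ_QUEUE, ...).
const encodeMessage = (obj) => Buffer.from(JSON.stringify(obj));

// Consumer side: amqplib delivers a message whose .content is a Buffer.
const decodeMessage = (content) => JSON.parse(content.toString());

// Round trip, e.g. for an email object from the earlier use case:
const email = { to: 'user@example.com', subject: 'Verify your account' };
const wire = encodeMessage(email); // Buffer of JSON bytes
```

On the consumer, decodeMessage(message.content) would recover the original object.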
Let’s create the secondary producer in ‘secondary_producer/app.js’ with nearly identical code, which will send a random number into the queue 50 times, once every 1.5 seconds. No explanation should be needed, as the code is exactly the same as for the first producer. Note that we ensure the second producer also sends messages to the same queue.

secondary_producer/app.js

Now, then, we have two producers that will dump, in total, 100 messages into the queue. Let’s now create a consumer that does something with those messages.

consumer/app.js

To summarize what the consumer is doing: the environment variables, connection to the RabbitMQ instance, channel creation, and queue assertion remain exactly the same as for the producers. To consume messages from the queue, we use the code:

  .then(() => {
    // consume messages from the specific queue
    chan.consume(RABBITMQ_QUEUE, (message) => {
      // do something with the message
      setTimeout(() => {
        if (message !== null) {
          // increment the consumer count
          consumer_count++;
          // log the received value to the console
          console.log(
            `[ RECEIVED: ${message.content.toString()} | COUNT: ${consumer_count} ] `.toUpperCase()
          );
        }
      }, 2000); // wait for two seconds to simulate some blocking operation
    });
  });

Something to note here is the fact that we do not use a loop in our consumer. Consumer connections work somewhat like WebSocket connections: the consumer constantly listens on the channel for incoming messages. Whenever the consumer service is free to do work, it reaches into the queue, gets the oldest message (first in the queue), and processes it as required. If there is no message, the consumer simply waits on the queue.

To run the app, run:

docker-compose up --build

Of course, this requires you to have docker and docker-compose installed locally. However, NodeJS and RabbitMQ installations are not needed. (Why?
Because docker is awesome like that.) Running docker-compose, you’ll see output that looks something like this:

Eventually, when all of the messages have been processed, there will be no more console logging: the producers do not send any more data into the queue, and the consumer continues waiting on it. If you look closely, you’ll see that the order in which messages are sent and received lines up perfectly, as expected from a queue. Further, we also see that the consumer works on the messages one by one, independent of how they arrive, logging a RECEIVED message exactly every 2 seconds.

That was all for this article. Although I have not discussed complex concepts and strategies used by modern MQ systems like Apache Kafka, such as distributed commit logging and partitioning of queues, you hopefully now have greater insight into what a message queue is and can imagine many more use cases where such a system may prove to be useful. Perhaps you could think of how you could integrate an MQ system into your own projects; the principles discussed here should be effectively transferable to any other MQ system. Until I find something else I feel worthy of sharing, take care and happy learning :)
https://medium.com/swlh/the-what-why-and-how-of-message-queues-a0bb5b579946
['Dakshraj Sharma']
2020-06-13 20:11:16.408000+00:00
['Backend Development', 'Software Architecture', 'Software Development', 'Microservices', 'DevOps']
Five Morrissey Albums That Deserve Reappraisal
British pop icon Morrissey reached the heights of music royalty when he fronted The Smiths, the band which catapulted him to fame in the 1980s. After the band’s demise, the much-loved ringleader of the outcasts embarked on a solo music career, which has (so far) spanned an impressive four decades. With classic tracks such as Everyday Is Like Sunday, Suedehead, The More You Ignore Me, The Closer I Get and Spent The Day In Bed, Morrissey has certainly proven his credentials time and time again as one of the greatest wordsmiths of a generation. A relentless talent, the singer refuses to rest on his laurels, prolifically moving forward with exciting new works as opposed to cashing in on the nostalgia gravy train of his legendary back-catalogue, as so many of his contemporaries have. Whilst many music lovers will continue to praise the genius of albums such as Viva Hate and You Are The Quarry, here are five Morrissey albums that you may be guilty of overlooking, each of which deserves your reappraisal…

SWORDS

Since the days of The Smiths, Morrissey has been renowned for his quality B-sides, and this collection (which spans his output from 2004 to 2009) shows that the artist has lost none of his quality control over the years. Released by Polydor in 2009, this album contains a treasure trove of recordings that sit easily alongside the artist’s best. From Ganglords to Munich Air Disaster, this 18-track compilation is not simply for completists: it features some of his most compelling work to date. Shame Is The Name (featuring guest vocals by Chrissie Hynde) is one of many memorable standouts that help make this album more than worthwhile for any music fan. At the time of its release, Morrissey condemned the music industry’s obsession with marketing and campaigns rather than with the music itself.
In a statement on his own website, he said, “Even though you see the death of culture all around you, you also want to raise whatever it is you do to a higher plane, yet there is no one, it seems, who can inch the Morrissey thing forwards.” Despite his break with his then record label, Universal, and the problems he perceived with promotion, Swords stands up as a testament to an artist who never lost his vigour for artistic growth.

YOUR ARSENAL

The third release from the Brit icon landed in the hands of his eager fans back in July 1992 on the HMV record label. Although it was still early days for Morrissey, he was already stretching his wings as a solo artist and proving his worth through tracks such as You’re Gonna Need Someone on Your Side and Seasick, Yet Still Docked, a dark, sombre tale that truly captivates. Few artists find their works covered by the music elite such as David Bowie, but this is exactly what happened in 1993, when that singer released his own version of Your Arsenal’s I Know It’s Gonna Happen Someday. It’s a fascinating rendition and a highlight of Bowie’s uneven release Black Tie, White Noise, but few could contend that it stands up to the masterful original, which is quite simply the crowning jewel of Morrissey’s 1992 album. This album, produced by Mick Ronson, is almost 30 years old yet still sounds bold and fresh.

I AM NOT A DOG ON A CHAIN

This is Morrissey’s latest release, having landed in March of 2020, yet the strength of the eleven tracks on this tight, flavourful album gives us more than enough reason to boast of its worth here. Morrissey’s vocals in this eclectic collection of songs, along with the deeply poignant lyrics and heartfelt sincerity, make this not only a moving, personal album, but a bloody good listen. The title track could serve as an anthem to the singer’s outlook on life, as he defiantly sings about his refusal to conform.
“I raise my hand, I hammer twice, I see no point in being nice…” With tracks Once I Saw The River Clean and My Hurling Days Are Done, we find the singer in an introspective mood, singing about his grandmother, his experiences of growing up, nostalgic memories, and his mother, whom he sadly lost earlier this year. “Mama, mama and teddy bear, were the first full firm spectrum of time,” he sings, his voice bold but the softness of his love for the woman who raised him more than evident. Never one for overly saccharine, maudlin songwriting, his reflective moments are almost always tinged with his trademark dry wit: “Time will mould you and craft you / But soon, when you’re looking away it will slide up and shaft you.”

MALADJUSTED

His sixth solo studio album, released in August 1997 on Island Records, is a masterful journey from beginning to end. If you’ve never heard Wide To Receive, Trouble Loves Me or Alma Matters, then you have some catching up to do. By this stage in his solo career, Morrissey more than owns it. Produced by Steve Lillywhite, with several tracks co-written with his long-time guitarist Boz Boorer, the album came when the landscape of the alternative music scene was largely dominated by Britpop bands. As always, Morrissey’s sound remained uniquely his own, competing with no one. The album may have slipped under the radar for some, but time has proven Morrissey right for holding his own, with many of his contemporaries from 1997 slipping into relative obscurity whilst Maladjusted sounds better than ever.

WORLD PEACE IS NONE OF YOUR BUSINESS

2014’s release from Morrissey almost sounds like a prophetic soundtrack to these unsettled, troubling times. Dark, political, critical and powerful, World Peace Is None of Your Business packs a powerful punch. His sole release with Harvest Records, World Peace has a distinctly world-music flavour: a rich and varied collection of songs that highlight the diversity of the artist’s sound.
With not a weak track to be found on the album, its sonic treasures include Mountjoy, Smiler With A Knife and I’m Not A Man. One of the standouts, the title track itself, is a scathing criticism of our modern world and the way in which we are enslaved by governments we no longer trust. It’s very Morrissey, and makes a bold and engaging start to what is, essentially, one of the artist’s best records.

Want to see him live? Morrissey is playing a string of dates in Las Vegas during summer 2021. Find out more at Ticketmaster
https://fionadodwell.medium.com/five-morrissey-albums-that-deserve-reappraisal-f03d606baa8f
['Fiona Dodwell']
2020-12-03 15:21:39.467000+00:00
['Morrissey', 'Music', 'Reviews']
Google to display a “Mobile-friendly” label in mobile search results
SiamHTML: a collection of articles for front-end developers
https://medium.com/siamhtml/google-%E0%B9%80%E0%B8%95%E0%B8%A3%E0%B8%B5%E0%B8%A2%E0%B8%A1%E0%B9%81%E0%B8%AA%E0%B8%94%E0%B8%87-mobile-friendly-%E0%B9%83%E0%B8%99%E0%B8%AB%E0%B8%99%E0%B9%89%E0%B8%B2%E0%B8%9C%E0%B8%A5%E0%B8%A5%E0%B8%B1%E0%B8%9E%E0%B8%98%E0%B9%8C%E0%B8%82%E0%B8%AD%E0%B8%87-mobile-23315cb3c7e6
['Suranart Niamcome']
2017-10-24 05:22:25.292000+00:00
['Highlight', 'News', 'Google']
Let’s Hear It for the Boy
Blast From The Musical Past

Let’s Hear It for the Boy, a song by Deniece Williams

My baby he don’t talk sweet,
He ain’t got much to say,
But he loves me, loves me, loves me,
I know that he loves me anyway…
And maybe he don’t dress fine,
But I don’t really mind,
Because every time he pulls me near,
I just want to cheer…
Let’s hear it for the boy…
Let’s give the boy a hand,
Let’s hear it for my baby,
You know you gotta understand…

1984 was the year for this hit song. Still fresh and groovy like when I first heard it. Lots of time has passed, like it all happened in a flash, and I’m still loving this song. Evergreen! Everlasting musical magic always happens and echoes forever when the right melody meets the right lyrics and they fall in love and get married in a beautiful song. Awesome! Check it out and have fun!
https://medium.com/blueinsight/blast-from-the-musical-past-8d81448d4344
[]
2020-12-10 14:04:53.464000+00:00
['Blue Insights', 'Culture', 'Songs', 'Artist', 'Music']
‘Don’t Worry Baby’ - Fitting All These Beach Boys’ Song Titles Into One Story — Musical Story Challenge
With Rhonda in my car, it was California Calling, California Dreamin’, and California Feelin’ with our California Girls. Without so much as a Mother May I we headed off on a Surfin’ Safari. Surfers Rule and we were going Surfin’ U.S.A., me, my Surfer Girl and my friends. We were off to Catch a Wave. You know what they say “Catch a wave and you’re sitting on top of the world.” While I drove, Rhonda sang out “Add Some Music to Your Day!” She’s my Little Surfer Girl. Oh Darlin’ And with the radio blasting we had Fun, Fun, Fun! all the way to the beach. That’s Why God Made the Radio. We heard everything from Alley Oop to You’re Welcome. Nothing but Rock and Roll Music. We were Stoked! We decided to head to Palisades Park to surf and play on the sand. Some locals from Malibu High were there, but our crew knows how to Be True to Your School. They’re loyal Til I Die. Once we hit the waves it was the Summer of Love. Some of the guys did Wipeout but all in all it was Wonderful. We’d be Still Surfin’ if the Sunshine shone all night. My girl Rhonda caught some great rides. She’s Got Rhythm. Then it was time to get Back Home. I needed some Time to Get Alone with my honey In My Room. In the Still of the Night that Island Girl gives me Island Fever. She’s the best of the Girls on the Beach when we Cuddle Up. When that happens It’s a Beautiful Day. Here She Comes and Happy Endings combined! Games Two Can Play. So we jumped In My Car and headed home so I could be Good to My Baby. God Only Knows why we decided to stop on the way back. One of the guys wanted to Chug-A-Lug and Dance, Dance, Dance I think. We stopped In the Parkin’ Lot of the Hully Gully drive-in restaurant, a place packed with Heroes and Villains. I Should Have Known Better. Not a place Where I Belong. I stayed in my car but some of the guys wandered over to a Little Honda. 
They were having Lonely Days and wanted to make Friends and answer the age-old question, “What is a Young Girl Made Of?” Got to Know the Woman ya know. Blondie started off with “Kiss Me, Baby” and “Come Go With Me” but he got the Little Bird. “I’ll Bet He’s Nice” but “Ding Dang” he came on too strong one of the girls said. “Oh Darlin’” let’s get outta here I thought, but Rhonda said “Don’t Worry Baby.” Then Blondie tried with “Hey, Little Tomboy” In the Parkin’ Lot. ‘I’m Waiting for the Day’ that line works I thought. I thought she’d just Walk On By, but that Car Crazy Cutie gave Blondie her digits. Then it was really time to head back to my Little Pad. “Let Us Go on This Way” I said, sounding like God Only Knows. Guess I’m Dumb, but it worked and everybody got back into the Little Deuce Coupe and the 409 and we headed home. I wanted to get there before White Christmas. Once I got Rhonda In My Room, well … I won’t tell ya How She Boogalooed It, but I was Good to My Baby and we both had a Good Time. Like I said, She’s Got Rhythm. Goin’ to the Beach then Goin’ South, that’s Hot Fun in the Summertime. Heads You Win — Tails I Lose I guess. And Your Dreams Come True, sometimes at least. Aren’t You Glad? I didn’t mention this before, but I’m Bugged at My Ol’ Man. So, after Rhonda split it was time for a Bull Session with the Big Daddy. He’d been home all along Busy Doin’ Nothin’. “Wouldn’t It Be Nice” he started in, if you just ate your Vegetables. Sometimes it’s just a Strange World. It surprised me that for Just Once in My Life my old man just wanted to Talk to Me. The Times They Are A-Changin’, and Strange Things Happen. In the end I said to my dad “You’re So Good to Me.” Wouldn’t It Be Nice if that’s the way it always was, I thought. You Still Believe in Me. Then, after that I had to Shut Down. It was time for some California Dreamin’. I Went to Sleep dreaming of a Daybreak Over The Ocean. Such a perfect day. I want to Do It Again.
https://medium.com/age-of-empathy/dont-worry-baby-i-can-fit-all-these-beach-boys-song-titles-into-one-story-musical-story-8d9be4682d60
['Michael Burg']
2020-10-18 02:40:59.680000+00:00
['Musical Story Challenge', 'Music', 'Innovation', 'Fiction', 'Short Story']
Air Frying
Air Frying Ever wondered how to deep-fry without oil? This is almost, if not quite, the same. Deep frying might be the only cooking technique that seems to make everything better, from the mundane, like french fries and chicken wings, to the outright strange, like the fried stick of butter at the Iowa State Fair, or fried jelly beans at the Massachusetts State Fair. The tragic part is that fried foods are notoriously bad for our health, but their crunchy texture and savory flavor make them difficult to resist. Lucky for us, we live in the age of the air fryer, a small kitchen appliance that promises the same flavour and texture of deep frying, but without all the oil and calories. Perhaps this sounds too good to be true? Or, you’ve achieved only mediocre results using your air fryer, which don’t quite stand up to deep fried food? That’s because air frying and deep frying are incredibly different cooking techniques that yield distinct textures, flavours, and colours. To understand why, we need to take a closer look at the physical changes that occur in food when deep frying or air frying. It might come as a surprise, but frying food is actually an ancient method of cooking, and the first written record is found in the Bible. The book of Leviticus, found in the Old Testament, outlines in excruciating detail how certain sacrifices should be carried out. We are talking intricate directions of how to properly sacrifice your cow, goat or sheep. Or maybe a smaller animal like a bird, if you were less wealthy. Interestingly, frying comes into play if you didn’t have an extra animal to sacrifice: you could do a grain offering instead. Following the instructions outlined in Leviticus, you should pick a flour and oil of your choice. It should be well-soaked in the oil and prepared on a griddle. Think of little fried pieces of dough, perhaps like tiny donuts. The directions even go on to say this shall make a pleasant aroma that can be presented to the Lord [1].
Fast forward to the mid-1900s, and deep frying is about to have a huge spike in popularity. A kitchen appliance salesman by the name of Ray Kroc is about to have his big break. Kroc was visiting the restaurant owned by the two McDonald’s brothers, and was so impressed that he gave up his sales career, instead purchasing the rights to their establishment. Kroc was obsessed with the speed at which food got to the customers and the unbelievably low prices advertised by the brothers. He understood the potential of this business model, the possibility of immense growth, and had huge visions for the future [2]. For better or worse, Kroc was integral in producing the present day fast food industry. He realized the importance of deep frying as a method for mass production of food. It was fast: not only to cook the food, but also to get it out to hungry customers. Any untrained employee could set the basket into the oil, wait for a timer to go off, then remove the basket from the fryer. It was cheap: the oil could be reused and you didn’t need a trained chef. You really didn’t even need a skilled adult; a teenager would be more than capable. It was Kroc’s standardized methods, almost assembly-line-like production, that led us to low skilled and low wage jobs in the fast food industry. The success and profits achieved by Kroc fuelled many imitators, giving us the modern fast food industry. In classical deep frying that we know and love, food is fully immersed in hot frying oil. Here, the oil acts as a heat transfer agent. Usually held at 120–180°C (~250–350°F), way higher than boiling water, it warms the food and makes it safe to eat [3]. We know that to cook food we need some type of heating medium — whether that be an open fire, oven, or frying oil. Because the oil touching it is so hot, the surface of the food begins to lose moisture. The water just boils away. So far, no problem with deep frying, right?
The nutritional problems with fried foods begin when the frying oil starts to act as a mass transfer agent. Any water that’s evaporated from the food’s surface ends up leaving open pores. And what does the frying oil do, but migrate right into these open spaces [4]! One study suggests that more than 40% of the final fried food is composed of frying oil [5]. This large increase in fat percentage is exactly what makes fried foods so unhealthy. Unfortunately, that same mechanism also generates that crispy, brown crust that makes these foods so craveable. The inside of the food doesn’t get quite as hot as the outer surface, never approaching the boiling point of water. That means the inner parts of the food have minimal moisture loss — so fried foods remain juicy in the centre, while crispy on the outside. An almost irresistible pairing of textures. The unfortunate combination of suboptimal nutrition and high craveability of deep fried foods ultimately led to the birth of the air fryer. This innovation is marketed as a small kitchen appliance that can be used at home as a healthier option to traditional frying. It promises more nutritious versions of foods, with fewer calories and less fat, while still retaining the texture and feel of deep fried foods — something that previously seemed truly impossible. Although air fryers come in many different shapes and sizes, the internal parts all operate on the same principles. Most come with a perforated basket that is used to hold the food as it cooks. There are typically small holes on the bottom of the basket and sometimes slits down the side. Right above the food basket is the main heating element. If you turn your air fryer upside down you can see this. Usually, it looks like the spiral coils you see on stove tops. The heating element (the coil) is placed as close to the food as possible. Directly above the heating element is a fan. The fan is positioned so that it draws air up through the heating coil.
As the hot air is pulled upwards, it eventually reaches the top of the cooking chamber, and is directed down the outer walls of the air fryer. Once the air hits the bottom of the air fryer, it follows a specially shaped element that directs the air upward through the perforated basket. The small holes in the basket allow this heat to migrate upwards as it heats the food. Since the fan is always drawing up air, this results in a circular air flow that continuously cooks the food. The arrangement of the heating coil and fan is key since it allows the food to be cooked by two different heat transfer mechanisms. For anyone who hasn’t taken a basic science class in a while, recall there are three main types of heat transfer: conduction, convection, and radiation. In the case of air frying, both radiation and convection are used. Radiation differs from convection and conduction since it doesn’t require direct contact to transfer heat. Instead, electromagnetic waves are used to warm things, and radiation occurs through empty space. This is one of the functions of the heating coil in the air fryer. It’s placed directly above the food and heats it via radiation. This is also how the sun heats the earth or a campfire keeps you warm in the evening. You can feel the heat even if you are not touching the heating source. On the other hand, convection uses a moving liquid or gas to directly contact and warm substances. This is where the fan in air fryers comes into play. The circular movement of air encouraged by the fan promotes hot air to continuously be cycled and come into contact with the food. This cycle or process is known as a convective current. You also use convective currents when boiling a pot of water. The bottom of the pan is hot, and it warms up the water near it. Hot water has a lower density than cold water, so the hot water migrates up, forcing cooler water down. Finally, conduction occurs when two objects are in direct contact, and the hotter one warms the colder one.
If you burn your hand by placing it on a hot stove, that’s conduction! Air fryers don’t use conduction since no heating element directly touches the food being fried. It’s this combination of radiation from above via the hot coil, and convection from below from the hot air, that gives air fried foods their “fried” properties. When air frying, “fried” really is meant to express that the food is heated evenly on all sides. Even if the food is piled up, it won’t need to be turned over during cooking, similar to deep frying. Because of this, the term “air frying” is a bit of a misnomer. Really, most air fryers act more like a convection oven and the food is baked, rather than fried. And if you’re confused where the oil comes in when air frying, you are not alone. Traditionally, the word “frying” implied the use of hot fat or oil when cooking food. However, air fryers actually work without adding any oil. And if you do want to add oil, you have to do it before putting your food in the air fryer. There aren’t any special compartments to hold oil in air fryers. You can add oil by lightly spraying the food, or using a brush to coat the food prior to air frying — but don’t add too much! Most air fryers are made to work best with little to no oil. Plus, having less oil is kind of the point of air frying. Although air frying is really not “frying,” we do know it makes healthier food compared to deep frying. If you are someone who needs to see some real numbers, one study found 70% less fat in air-fried versus deep-fried potatoes. Calorie-wise, this corresponds to a reduction of 45 kilocalories for every 100 grams of potatoes [3]. One of the biggest issues with deep fried foods is just how energy dense, or high in calories, these foods are while lacking key nutrients like vitamins and minerals. Of course, we need some fat in our diets — but deep-fried foods tend to provide way more than needed.
So much so, that frequent eating of deep-fried food is often studied by medical doctors for any associations with chronic illness. Unfortunately for us, high consumption of deep-fried foods has been linked to obesity [6], type 2 diabetes [7, 8], hypertension [9, 10], and heart failure [10]. On a more morbid side, one study that followed male physicians found an increased likelihood of death from cardiovascular disease associated with eating deep-fried food seven times or more a week [11]. On the bright side, all of these studies looked at people who ate deep-fried food frequently, typically four or more times a week. So I guess it’s still okay if I treat myself to the occasional indulgence. Nutritionally, there is no doubt that air frying beats deep frying by a long shot. But before you go out to purchase an air fryer, let me make a few final points. First, look to see if your oven has a convection mode. This would cook your food with the same cyclically flowing air method used in air fryers, but without spending the extra money, or making your kitchen more cluttered. It’s also good to be aware that air frying usually takes twice as long as deep frying, so anticipate cooking times more similar to baking rather than frying. If you are someone who frequently eats deep-fried food, and don’t mind sacrificing a little bit on taste and texture for a more balanced meal, then air frying is for you. However, if you are unwilling to compromise on the quality of your traditional deep fried food, the results from an air fryer will likely disappoint you. There is a reason we crave deep fried food, and the reason is that fat is delicious. Curious for more? Sources and references for this article can be found here.
https://medium.com/snipette/air-frying-e31380e051ea
[]
2019-08-16 07:01:01.290000+00:00
['Nutrition', 'Food', 'Air Fryer', 'Health', 'Kitchen']
Inside the World of Birth Tourism
The maternity hotel’s services begin at the Los Angeles International Airport. Qu picks up his guests and delivers them to their reserved room. Every day, the maternity hotel’s two on-staff chefs provide three meals (four dishes and one soup) as well as seasonal fruit. Qu drives his clients to shop and dine out twice a week. He also helps find obstetricians for them, drives them to prenatal care appointments, and takes them to the hospital for delivery. “Every pregnant woman has her own budget expectation for delivery,” says Qu, who worked as a technician in a pediatric hospital in Shanghai, China. “I give my clients a quotation list of different hospitals. After they choose a hospital, I will contact an obstetrician for them.” In China, new mothers are told to rest indoors for one month after giving birth. They’re supposed to have a traditional soup of chicken and ginseng, pork ribs and corn, or pork liver and sesame oil as well as vegetables with rice. They’re not supposed to do housework or touch cold water. The maternity hotel provides a mainland or Taiwanese matron, who follows through on these principles when taking care of new mothers and babies. “I chose a Taiwanese matron for my first and second delivery. She cooked five meals with nutritious soup a day,” Qin says, standing up and heading to the refrigerator for a glass of fresh orange juice, “So I recovered very fast.” Traditional Chinese customs don’t allow women to take a shower for a month after they give birth, but Qin says her matron “didn’t think so.” Ancient people were used to bathing in a tub, which is not clean for a woman who just gave birth, she says. Qin adds that poor bathing conditions made women suffer colds and fevers, both of which were serious illnesses at that time. “Now, things are different. I can take a shower and keep warm in a temperature-set room,” she explains, sitting back on the couch.
But her matron didn’t allow her to go outdoors, believing the wind could adversely affect her health. “I had to wear a warm hat when I went out,” Qin shrugs. “And wear fluffy socks all the time, even at home. Keeping warm is an important thing for a woman who gave birth.” Seeing Qin approach the couch from the kitchen, Qu adds that the maternity hotel impressed her because she received gifts for her newborn babies after delivery. “My first child received powdered milk, baby diapers, and cotton clothes. My second child received an electric cradle as well as baby diapers,” Qin says. “The owner is very kind.” Hearing Qin’s words, Qu smiles gently and says, “I hope my clients feel at home.”
https://medium.com/s/story/inside-the-world-of-birth-tourism-4d6e382346b0
['Yuming Fang']
2019-01-07 22:49:13.122000+00:00
['Birth Tourism', 'Birth', 'Immigration', 'Health', 'Medical Tourism']
Can you teach me how to code? Why my answer always starts with no and ends with yes
Can you teach me how to code? Why my answer always starts with no and ends with yes it’s not as glamorous as Hollywood makes it seem Can you teach me how to code? Sit me down and flip open a book, going from beginning to end like a textbook? Can you teach me to code, so I can make games and maybe hack the Pentagon? Can you teach me to code, so I can make millions and billions, doing almost nothing but drag and drop? Can you teach me to code? The short answer, my young dreamer, is no. The long answer is yes — just not how you want me to teach you. The thing about code is that it’s not that glamorous. It’s long hours, days and nights spent arguing with a computer, and the computer is always right. It’s a one-way relationship, with Google as your companion, and maybe a Stack Overflow friend. It’s a series of one confusion after the next, silent failures and red errors. Why? Because code is a language, and languages are tools of communication. You can rote-learn code, but to create something out of it — well, that’s a different story. It’s the difference between tracing and learning to draw. Anyone can trace a picture, but not everyone can draw. The ability to draw includes the ability to identify shapes and replicate them on a medium. That’s what coding is — a process of identifying the different necessary parts to make a system. After that comes creating the pieces, the struts and the beams that make up the application. When you code, you are the builder and the architect, the mastermind of your little plot of sandbox. But like in real life, you can’t just build a house and call it a day. There are resource consents, council permissions, engineering reports, provisioning, and even the weather forecast to take into account. The equivalent to this is your project manager, client demands, team members’ opinions and thoughts, and legacy code (if you’ve got any). Over time, you find yourself spending more time in the process than the actual code itself.
It’s easy to create code when you’re by yourself — but that’s rarely the case. Your work is absorbed into the multitude of moving parts that are supposed to work together like a seamless machine made of digital cogs mostly encased in curly brackets. Sometimes it gets edited by others, morphing into something completely different within a few months, if not weeks. So can I teach you how to code? Probably not — not the way you want me to teach you — the linear point A to B, x = y = z kind of formula. I’m not that kind of teacher. You probably have a project in mind, a dream that you want to fulfill, an app, a game, a something that will materialize if only you knew how to code. Let me tell you this: you’re doing it backward. If you’ve got an idea, work out the bits and pieces first. Figure out what you need as your features, why you need them, and how they’re going to work. Figure out how the bits and pieces group together and the relationships between them. Then you can start coding. Pick the smallest set of features — the minimum collection of things you need to get your app, dream, idea running. Then jump right into it. Get it working as quickly as possible. Learn to fail and fail often. Become friends with failure. Once you do, every time something falls apart, you get better at picking yourself up. You learn to fix things faster and recognize your potential failing points before they happen. Coding is nuanced to the language you choose, so learn the technical basics and get creating as quickly as possible. Read around topics that can supplement your app, dream, thing. Look up patterns. Create a series of dots, and trust that these knowledge points will eventually make sense. Code is a tool. It’s not some magical thing that will make all your dreams come true. Code is a thing that only materializes properly if you are clear on what you want to achieve from it. So can I teach you how to code? Probably not.
Can I give you some of the pieces of the puzzle that you might need? Probably yes.
https://medium.com/madhash/can-you-teach-me-how-to-code-why-my-answer-always-starts-with-no-and-end-with-yes-7c94f7800f56
['Aphinya Dechalert']
2020-08-04 06:01:01.075000+00:00
['Software Engineering', 'Software Development', 'Ideas', 'Web Development', 'Technology']
The Forgotten Champion of Polio Victims the World Over
Innovation The Forgotten Champion of Polio Victims the World Over How Australian bush nurse Elizabeth Kenny eased the suffering of infantile paralysis patients and invented physical therapy Sister Elizabeth Kenny, circa 1917 (public domain) When I was a child in the 1960s, it was not uncommon to see people with braces on their legs, their mobility dependent upon canes or crutches. They were survivors of infantile paralysis or poliomyelitis, polio for short. Polio is a highly infectious, debilitating, and sometimes fatal disease. The Centers for Disease Control and Prevention reports that the United States has been polio-free since 1979 due in large part to the polio vaccine developed by Dr. Jonas Salk in the 1950s. Students at a school for victims of the 1916 polio outbreak in Billings, MT, 1923 (image via ywhc.org) Salk’s vaccine was one of the significant medical breakthroughs of the 20th century, but it was of no use to those who already had polio. They needed an effective treatment for what is still an incurable disease. For that, many found relief from an unlikely source: an Australian bush nurse with no medical degree or formal training. Her name was Elizabeth Kenny, but the world would come to know her as “Sister Kenny.” “She had treated more cases than anyone else in the world — she gave the precise number, 7,828 — and no one else was in the position to speak with her authority. She is now almost forgotten by the world.” — virologist Sir Macfarlane Burnet Injury leads to inspiration Elizabeth Kenny was born in Warialda, New South Wales, Australia in 1880. Her parents were farmers. At the age of 17, Kenny fell from a horse and broke her wrist. She convalesced at the home of Aeneas McDonnell, a medical doctor in Toowoomba. During her recuperation, Kenny became interested in how muscles work. Dr. McDonnell allowed her to study his anatomy books and model skeleton.
Since skeletal models were reserved for medical students, she constructed her own to continue her studies when she returned home. Portrait of Elizabeth Kenny in 1915 (public domain) A series of jobs followed. She taught Sunday school and music and had a successful stint as an agricultural broker. Then, while working in the kitchen of a midwife’s cottage hospital, Kenny decided that she wanted a career in medicine. She used the money she’d earned from her brokerage job to pay a local woman to make her a nurse’s uniform and went to work as a bush nurse. She eventually opened a cottage hospital at Clifton, providing convalescent and midwifery services. The first polio patients In 1911, Kenny encountered two sick children who appeared to be suffering from polio. Having no formal training to diagnose their condition, she sent a wire to her former doctor and mentor Aeneas McDonnell describing their symptoms and asking for his advice. McDonnell wired back, “Infantile paralysis. No known treatment. Do the best you can with the symptoms presenting themselves.” Doll in full-body plaster cast used to introduce the treatment to young polio patients in England, circa 1930–1950 (image via sciencemuseumgroup.org.uk) The standard treatment for polio victims at the time was to immobilize their affected limbs in plaster casts. But Kenny decided to try something different. Noting the stiffness of their muscles, she applied hot compresses followed by passive movement of the affected areas of their bodies. According to Victor Cohn’s 1976 biography, Sister Kenny: The Woman Who Challenged the Doctors, the children recovered without any severe after-effects. Her process of muscle rehabilitation is credited by some to be the basis of modern-day physical therapy. The Kenny Method In 1915, Elizabeth Kenny convinced the Royal Australian Army Nursing Corps to accept her services as a nurse aboard troopships during the First World War despite her lack of formal credentials.
In 1917, the army promoted her to the rank of Sister, which is the Nursing Corps’ equivalent of a First Lieutenant. She continued to use this title throughout her life. In interviews with the Australian press during the 1930s, Kenny stated that she further developed her methods for treating polio by attending soldiers suffering from meningitis during the war. Elizabeth Kenny Clinic, Brisbane, Australia, 1938 (image via archivessearch.qld.gov.au) When the war ended, Kenny returned to Australia and continued to work as a nurse. She treated victims of the influenza pandemic of 1918, eventually settling in the rural town of Nobby in Queensland. In 1932, Queensland suffered a severe polio outbreak. With the help of local people, Sister Kenny set up a rudimentary polio clinic on the grounds of a hotel in Townsville. Although her methods were controversial among members of the medical establishment, her success with patients led to the creation of several Kenny Clinics throughout the state and eventually throughout Australia. International recognition Sister Kenny demonstrates her methods to medical personnel in the United States, 1943 (image via corpus.nz) Kenny’s reputation as a healer soon spread beyond her own country, and her techniques gained increasing popularity throughout the world. Unfortunately, her lack of a degree was still an obstacle to the acceptance of her practices by the medical establishment. Sister Kenny’s story was a compelling one, however, and in due time, Hollywood came calling. In 1946, RKO made the film Sister Kenny starring Rosalind Russell in the title role. The script, loosely based on Kenny’s book with Martha Ostenso, And They Shall Walk, took some liberties. Poster for the film “Sister Kenny” 1946. (fair use) As was typical for the time, the screenwriters inserted a fictional romance and altered the facts to boost the film’s marketability.
Sister Kenny was not a financial success for RKO, but Rosalind Russell won the Golden Globe for Best Actress, and the film helped promote the real Sister Kenny’s reputation. Polio cases skyrocket in post-WWII America In 1946, there were 25,000 reported cases of polio in the United States. By 1952, the number of annual cases had grown to 52,000. Parents across the country were terrified because the most common victims of polio were children. Patients in iron lungs at Rancho Los Amigos Hospital in Downey, CA, circa 1953 (image via pbs.org) Some patients were so severely affected that they had to use tank ventilators known as “iron lungs” to breathe. Many of them stayed in these devices for months or even years. The isolation that was a part of this life-saving treatment was an added burden to patients and their families. Several famous people who were victims of polio in the 1940s and 1950s credit Sister Kenny’s techniques for helping them recover. These include actors Alan Alda, who contracted polio at the age of seven, Martin Sheen, who was bedridden for a year, and Canadian singer Joni Mitchell. In Mitchell’s case, the treatment was so successful, according to ex-husband Chuck Mitchell in the book Girls Like Us by Sheila Weller, that she was able to run up and down stairs afterward without complaint. Post-vaccine cases Thanks to the vaccine pioneered by Jonas Salk and a global eradication initiative launched by the World Health Organization in 1988, polio is extremely rare today. According to the WHO, reported cases of polio worldwide dropped to just 33 in 2018. Sister Kenny’s treatment techniques continue to be effective, however, and today’s patients still benefit from her lasting legacy. Elizabeth Kenny died of Parkinson’s disease on November 30, 1952, in Toowoomba, Queensland, Australia.
https://medium.com/history-of-yesterday/the-forgotten-champion-of-polio-victims-the-world-over-21c27f99181e
['Denise Shelton']
2020-06-16 19:36:00.898000+00:00
['Innovation', 'Inspiration', 'Feminism', 'History', 'Health']
Thoughts on the state of Xarray within the broader scientific Python ecosystem
This is a follow-up on Ryan Abernathey’s blog post about supporting new Xarray contributors. How Xarray is positioned within Python’s scientific stack (modified from Travis Oliphant, 2017, and originally based on Jake VanderPlas, PyCon 2017). Xarray has slowly but steadily gained in popularity since it was created by Stephan Hoyer in late 2013. Although its growth has accelerated over the last two years (notably alongside the emergence of the Pangeo project), Xarray still remains widely unknown, especially outside the community of Atmospheric / Ocean / Climate (AOC) Sciences. This could easily be explained by the fact that early development of Xarray happened — and still happens mostly — within that community. Yet, Xarray nicely solves a fundamental problem — i.e., handling labels and metadata on top of raw arrays — that is encountered in so many domains. Of course, the goal of Xarray is not to solve everything, but clearly: Xarray does not seem to be used to its full potential within the broader scientific Python ecosystem. I’d like to illustrate this observation through two examples: My personal experience: I discovered Xarray some time ago while I was working in the AOC field. Since then I moved away from that field but haven’t stopped using Xarray, which still meets my current needs perfectly (I haven’t totally moved from Earth Science, though). I’m now trying to convince my colleagues that Xarray might help them too for some of their work and must admit that it is easier than I thought. Like me before, colleagues who have been using Python for quite a while were either using Pandas the hard (or wrong?) way or were spending effort on hacking their own labels/metadata wrappers around NumPy arrays (or even worse, they didn’t use any label/metadata explicitly in their code). Either they didn’t know the existence of Xarray, or they barely knew it and thought it was not relevant. Of course, we cannot blame them for that!
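The fundamental problem mentioned above — labels and metadata on top of raw arrays — is easiest to see in code. Below is a minimal sketch (the station names, months, and temperatures are made up for illustration), assuming NumPy and Xarray are installed:

```python
# The same 2-d array, wrapped in an xarray.DataArray with named
# dimensions, coordinate labels, and metadata attached.
import numpy as np
import xarray as xr

temps = np.array([[11.2, 12.5, 13.1],
                  [10.4, 11.8, 12.9]])

da = xr.DataArray(
    temps,
    dims=("station", "month"),
    coords={"station": ["paris", "lyon"], "month": [1, 2, 3]},
    attrs={"units": "degC"},
)

# Selection by label rather than positional index:
print(da.sel(station="lyon", month=2).item())  # 11.8

# Reductions name the dimension instead of an axis number:
print(da.mean(dim="month").values)
```

The equivalent NumPy code would rely on remembering which axis is which and which row is which station — exactly the boilerplate and bug surface that the post argues Xarray removes.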
But we can work towards a better discoverability of Xarray. : I discovered Xarray some time ago while I was working in the AOC field. Since then I moved away from that field but haven’t stopped using Xarray, which still meets my current needs perfectly (I haven’t totally moved from Earth Science, though). I’m now trying to convince my colleagues that Xarray might help them too for some of their work and must admit that it is easier than I thought. Like me before, colleagues who have been using Python for quite a while were either using Pandas the hard (or wrong?) way or were spending effort on hacking their own labels/metadata wrappers around NumPy arrays (or even worse, they didn’t use any label/metadata explicitly in their code). Either they didn’t know the existence of Xarray, or they barely knew it and thought it was not relevant. Of course, we cannot blame them for that! But we can work towards a better discoverability of Xarray. Recent developments within the scientific Python ecosystem: although I have limited background in the area of machine (deep) learning, a project like NamedTensor and this related post both seem to show that labeled arrays/tensors is a real concern also in that area, and yet Xarray isn’t really visible there. Two possible ways towards more visibility and adoption of Xarray throughout the scientific Python ecosystem. Here are some general (and hopefully useful) thoughts on how we could tackle the issue described above. Build a more robust stack on top of Xarray. Xarray itself does not solve any real scientific/business problem. It rather provides a solid infrastructure with core data structures and a consistent API for solving those problems. Anything at a higher-level is generally out of the Xarray scope. A lot of various features have been built on top of Xarray in third-party packages. Many of them are listed here and here. This is great! 
Without Xarray, those packages would probably have been built directly on top of NumPy, thereby missing important opportunities offered by a proper array labeling system such as great reduction of the amount of boilerplate code and potential bugs (and thus much easier maintenance). Unfortunately, this ecosystem of packages is not organized as a clear and consistent stack, which doesn’t make it very sustainable. Some development efforts have been duplicated and a number of those packages don’t seem to be actively maintained anymore (many of those are initiatives of individual developers as side projects). We are probably missing an intermediate level between the Xarray core library and those domain-specific packages, i.e., a few projects with a scope that is well-defined but large enough to gather individual contributors and ensure good maintenance in the mid or long term. There have been some initiatives in this way (e.g., xr-scipy, geo-xarray, geoxarray) but none of them has really gained momentum yet. It turns out to be a big challenge. If you’d like to share your point of view on this topic, please join us in this ongoing discussion on GitHub! Suggest or add support for xarray.DataArray and/or xarray.Dataset as core data structures in popular, third-party projects. This suggestion is complementary to the previous one: in addition to building new packages on top of Xarray, we could also try better advocating the utility of its core data structures in existing projects. Xarray has already a very good connection with the rest of the Python scientific / PyData stack. It wraps NumPy arrays, has an API heavily inspired by Pandas (including tools for easy conversion from/to Pandas DataFrame), provides powerful plotting capabilities built on top of Matplotlib, and integrates tightly with Dask for parallel computing.
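That two-way connection with Pandas can be sketched in a few lines. The variable names and values below are hypothetical, and the example assumes pandas and xarray are installed:

```python
# A small Dataset with a time dimension, selected by label and then
# converted to a tidy pandas DataFrame (useful for libraries that
# only accept DataFrame input).
import pandas as pd
import xarray as xr

times = pd.date_range("2019-01-01", periods=4)
ds = xr.Dataset(
    {
        "temperature": ("time", [2.1, 3.4, 1.0, 0.5]),
        "precipitation": ("time", [0.0, 5.2, 1.1, 0.0]),
    },
    coords={"time": times},
)

# Label-based selection using a date string instead of an integer index:
day2 = ds.sel(time="2019-01-02")
print(day2["temperature"].item())  # 3.4

# Conversion to a tidy pandas DataFrame indexed by "time":
df = ds.to_dataframe()
print(sorted(df.columns))  # ['precipitation', 'temperature']
```

Going the other way, `xr.Dataset.from_dataframe` rebuilds a labeled Dataset from such a table, which is what makes Xarray a natural bridge between array-oriented and table-oriented tools.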
By contrast, the connection in the opposite direction (i.e., Xarray as a dependency) could be much richer, given the core functionality that Xarray provides. More specifically, I’m thinking about two kinds of tools that could benefit from a better integration with Xarray (and vice versa): Machine (deep) learning libraries and frameworks Tools like Scikit-Learn, TensorFlow, PyTorch, etc. are taking up a large part of the Python scientific ecosystem. Very recently there has been much work towards better integration between Scikit-Learn and Pandas. While it is not yet clear to me which use cases would directly leverage tight integration between Xarray and Scikit-Learn, it is worth mentioning these two projects, both named sklearn-xarray. It is more obvious to see how Xarray would interplay with deep learning frameworks that handle high-dimensional tensors. Tight integration between Xarray and those frameworks is currently tricky, which may also explain why similar projects like NamedTensor and tf.contrib.labeled_tensor exist. Hopefully, planned support for flexible arrays in Xarray and upcoming improvements upstream (like NEP-0018) will greatly help here. Scientific visualization libraries I’m tempted to believe that the wide adoption of Pandas over the last few years had something to do with visualization libraries like Seaborn, which allows creating fancy figures easily through an elegant API and which added support for Pandas DataFrame objects early in its development. Regarding Xarray, the current situation is actually not bad! Perhaps the most relevant examples of third-party visualization libraries that support (and promote) Xarray data structures are the PyViz project and ArviZ. Other examples include ipyleaflet (which uses Xarray to draw velocity maps), PyGMT, and probably a few other examples that I have overlooked. This is all very exciting, but much more could be done!
For example, it would be awesome if libraries like Altair could natively support Xarray structures the way they support Pandas (even though there are some challenges). There are also some nice examples of Plotly figures created from data imported using Xarray, although those examples require prior conversion into Pandas DataFrame objects. Scientific visualization is important in all disciplines; it is pretty much all about displaying raw data + labels. The data model implemented in Xarray includes both of these. Perhaps one of the most underestimated potentials of Xarray is precisely its data model, which, while being perfectly suited for high-dimensional data, isn’t tied to this kind of data. For example, xarray.DataArray or xarray.Dataset objects are in practice widely used for storing collections of “basic” 2-d images or 1-d data with just one or two additional dimensions representing, e.g., time or batches of experiments. The extra dimension(s) and the global or variable attributes supported by the data model both enable storing those collections of data + metadata in a tidy format, which could be leveraged in many places, especially in (interactive) visualization libraries. Conclusion I have tried here to give a short but documented summary of the current position of Xarray within the scientific Python ecosystem. I encourage the reader to have a quick look at all the projects that I have mentioned in this post. I humbly hope that it will contribute to convincing more people from different backgrounds (both users and contributors) of the potential of Xarray and the benefits of working towards its wider integration within this very rich ecosystem.
https://medium.com/pangeo/thoughts-on-the-state-of-xarray-within-the-broader-scientific-python-ecosystem-5cee3c59cd2b
['Benoît Bovy']
2019-04-15 13:57:58.698000+00:00
['Machine Learning', 'Python', 'Pangeo', 'Data Science', 'Data Visualization']
VR games mature together with the technology, cinematic VR does not
I think my fascination for first-person storytelling started when I was about 8 years old. Around this age, I watched and thought along with games like Myst and Riven, which my father used to play on our CD-i player. We loved the first-person point of view (POV). We could walk around in those worlds together for hours, or so it felt. My father passed away in February 2013. He never got to try VR, but I think he would have loved it as much as I do. The guy who made this walkthrough of Myst is my age. He also used to play it with his father when he was 8 years old. First-person games have existed for quite some time now. They have developed together with other kinds of games, as over time the technology made more things possible. The last game my dad and I played together was ‘Heavy Rain’. We played it when he was sick and we finished it just in time. The game wasn’t in first-person POV, but it was a first-person narrative. I remember how overwhelmed he was by how real the world and the story of this game looked and felt on his PS3. Virtual Reality is close to being the ultimate first-person medium. Games have more experience with this POV. That makes it easier for VR to be awesome right away when applied to games. Two weeks ago I was in New York City. I visited Madame Tussauds, just to see the Ghostbusters hyper-reality experience. It was great. A first-person game in its purest form. The second player and I could walk around in multiple rooms and we even went up a floor in an elevator. Of course there wasn’t actually a second floor, but it looked and felt perfectly real. We could also see exactly what the other person was doing. We both held a gun that felt the same way it looked and that gave resistance when we fired. The vest I had on even let me feel the ghost that went right through me. When we finally defeated the big bad marshmallow man, it even smelled sweet!
So, game developers already know exactly what to do with a first-person POV, but filmmakers do not. Most filmmakers are used to writing third-person narratives. Many cinematic VR experiences struggle with this. It is still possible to tell a third-person narrative from a first-person perspective, but I think VR as a medium asks to be used differently. The way I look at it, a first-person POV is strongest with a first-person narrative. This means that the viewer should be watching through the eyes of one of the characters of the story, possibly even the main character. When a first-person POV is used in a movie, most of the time it is only in one scene, and there is always a reason why. A good example is the movie ‘Being John Malkovich’. In this movie people are able to put themselves into the body of actor John Malkovich. When they do, what they see is shown from the actor’s first-person POV. This is done to give the viewer a better understanding of what happens to these people. It has a purpose. One of the best uses of a first-person point of view in a movie must be ‘Being John Malkovich’. By definition, Virtual Reality shows a story from a first-person perspective. Therefore, I think that every story told in this medium should have a reason to use this perspective. Otherwise, why wouldn’t you just make a movie out of it? In my next blog, I will be writing more about first-person narratives for VR. I hope you enjoyed reading my theories about VR storytelling. If you did, you might want to follow me so you can read the sequels. If you can’t wait, please check out some of my earlier blog posts. They are worth some new attention. Thank you!
https://medium.com/hackastory-playgrounds/vr-games-mature-together-with-the-technology-cinematic-vr-does-not-8a487edfade3
['Nikki Van Sprundel']
2016-10-03 17:27:03.282000+00:00
['Gaming', 'Filmmaking', 'VR', 'Virtual Reality', 'Storytelling']
Good data chart examples in this Kleiner Perkins presentation
This Kleiner Perkins presentation introducing their iFund makes good use of data charts to prove points. Kleiner Perkins iFund Presentation at iPhoneDevCamp 3 View more documents from Raven Zachary. Quiet layouts that focus on making one point only Clever sequencing of charts: “you thought the iPod was big, but wait until you see it compared to the iPhone” Applying a brightly colored fill under a line chart to amplify the trend (lines on their own are not very visible) Click through the 14 pages to have a look yourself
https://medium.com/slidemagic/good-data-chart-examples-in-this-kleiner-perkins-presentation-171b49fafb1a
['Jan Schultink']
2016-12-27 08:41:35.256000+00:00
['PowerPoint', 'Presentation', 'Design', 'Data Visualization', 'Presentation Design']
When Civilizations Collapse
The survivors walked across the desert, their heads hung in sorrow and their hearts drained of hope. Some wondered if those who died were actually the lucky ones. The group of around one hundred refugees stopped every few days to bury another one of their lot who succumbed to the harsh conditions. The desert offered little sustenance but plenty of danger. Burned by the hot desert sun, the people had to fend off jackals and snakes and poisonous insects. Vultures followed them from above. Whenever civilizations collapse there have always been survivors. It is these survivors who plant the seed for new civilizations. These seeds carry the genetic imprint of the trauma experienced in the collapse of the previous civilization. History repeats itself over and over, providing endless opportunities to learn from and heal from that trauma. Their numbers dwindling, the refugees walked for months across the hills and valleys of the desert wasteland. One day they reached the top of a rise in the land and what they saw stopped everyone in their tracks. In the distance there were magnificent snow-capped mountains. In the little valley immediately below them was a wide river, its waters flowing from those distant mountains. Great joy spread through the people. Some began dancing and singing. All they had to do was follow that river and they would arrive at those mountains in just a matter of days. The glaciers atop those mountains could provide all the water the people would need for their bodies and their crops. The forests could provide the wood to build new homes and the game to further nourish them. The people could finally end their long journey and build a brand new civilization. The possibilities for true peace and joy and happiness filled the people with euphoria.
https://medium.com/grab-a-slice/when-civilizations-collapse-b62caea2fd9b
['White Feather']
2020-12-09 19:13:12.764000+00:00
['Life', 'Spirituality', 'Society', 'Fiction', 'History']
An Ode to Handwashing
Photo by Austin Kehmeier on Unsplash The rumble begins from beneath the sink, bubbling up a delightful stream of water that emerges from the faucet. A trickle at first, then a great gush of beautiful H2O, tiny droplets misting into the air. It reminds me of a sweet summer rain, the flow of a mighty waterfall, the gentle rocking of the waves. Water. Here to wash away filth and poison. My hands, the purveyors of my touch and the tools of my daily tasks, are contaminated. I desperately shove them beneath the water, sighing in relief as the cool fluid provides its soothing touch. I reach for the soap dispenser, an unassuming vessel of gooey goodness. Pump, pump, squirt. A glob of soap swirls around my palm. It looks impotent, disgusting. Just wait. I blend the soap with the stream of water, rubbing my hands quickly together. The goo transforms into a bubbly delight. It soaks my skin in a comforting lather, a germ-slaying concoction. Scrub, rub, scrub again. Twenty seconds’ worth. Under the nails, over the knuckles, down the wrists, across the palms. The soap sweeps the landscape of my hands, washing the dirt and microbes away. Now comes the best part: the rinse. Luscious water rolls over the suds, pushing them into waves like those that lick the shoreline. The warmth travels up my arms and into my heart. Clean. My hands are finally clean. I gently dry away the excess water, enjoying the plushy comfort of the towel. I examine my hands. They feel renewed, ready to take on the world again. Such a simple ritual, yet so powerful. I wonder why more people don’t do it.
https://rachelwayne.medium.com/an-ode-to-handwashing-70a96549894b
['Rachel Wayne']
2020-03-18 18:00:22.338000+00:00
['Humor', 'Poetry', 'Self Care', 'Health']
Clustering vs Distributed Systems
This post covers high-level concepts of clustering, load balancing, and distributed systems. It mainly focuses on the what, the why, and the advantages and disadvantages. What is clustering? In a client-server application model, the client sends requests to the server in order to execute a task and get results. This server can be a single endpoint containing a single server entity, or in other words, one service entity. Let’s say a client sends a request containing multiple sub-tasks to this single service endpoint. Once the endpoint gets the request, it starts processing the sub-tasks in order to complete the request, but it has to perform all the sub-tasks using its own resources, so it will take some time to execute all of them. With this approach, imagine what happens if this server (the single service endpoint) has thousands of clients and all of them start sending requests at the same time. The server may run out of resources during the execution of the tasks, so some clients will have to wait until the server frees up resources to fulfill their needs. The availability of resources becomes low, scalability is limited, and the end result is a huge decrease in performance. What happens if the server goes down? It makes the services unavailable to the clients, and until the server comes back up, clients will have to hold their tasks. This may introduce problems for the application’s users, and it may cause data loss as well. Now let’s see how to overcome the above problems. In order to overcome the resource problem and increase performance, we can add more resources to the same endpoint to execute sub-tasks in parallel. In other words, we can add more service entities that communicate with each other and execute sub-tasks in parallel.
All these service or server entities will share one service endpoint to which all clients connect and send requests. So if the number of requests or connected clients gets high, we can add more service entities to the same endpoint. This concept is called “scalability” in clustering. Unlike the single service entity or single server, we now have multiple service entities to fulfill the requests. Let’s assume one service entity goes down due to an error; the requests going to that server entity will now be redirected to the other available service entities, minimizing the downtime of the services. With this ability, the users or clients will not experience a major loss of availability, since other service entities can take over the execution. This concept is known as “high availability”. Now let’s take a look at a summary of the things we discussed above. Clustering in a nutshell Clustering is the concept of multiple service nodes or entities working as one single entity by sharing a single endpoint, enabling the scalability and high availability of the services. As explained above, the main advantages of clustering are scalability, high availability, increased performance, and simplicity. Now another question comes to mind: how do we handle requests that come to a single endpoint and distribute them among the service entities or nodes in a cluster? The answer is that we use a load balancer to handle the requests that come to the endpoint and distribute them among the service nodes in the cluster using a proper mechanism. (I will explain more about load balancing in a different blog post in the future.) Cluster using a load balancer What are distributed systems?
Now let’s assume there is a system that can interpret an incoming request, divide it into sub-tasks, and re-route those tasks to different endpoints (in other words, different service modules) that are dedicated to completing those tasks. This concept of having dedicated servers or endpoints for different tasks or service modules is called a distributed system. Here the functionalities are distributed among different dedicated servers. distributed system This adds a few advantages, such as increasing the capacity for concurrent access to a particular service, reducing resource consumption, enabling reusability of common services between applications, and making it easy to extend the functionality by adding to it in a distributed way. What is the difference between a distributed system and a cluster? There are two major differences between a distributed system and a cluster. The first major difference is in the way the services are executed. A distributed system distributes sub-tasks across different servers, so if there are 10 main sub-tasks, then there will be 10 endpoints (in other words, 10 different dedicated servers) to get the work done. In clustering, there is one endpoint, and that endpoint is mapped to several servers (through a load balancer), each with the ability to execute all 10 sub-tasks by itself. Given the above, think about what happens if one server fails in a distributed system. The whole system has to wait until that server comes back up to complete the task, or else abort it. But what happens if one server fails in a clustering system?
The load balancer redirects the incoming requests to another service entity or server node to handle the task. Let’s take another example. Assume there is an application with thousands of clients sending multiple requests at a time. In order to decrease the execution time of a request, we can distribute the functionality as sub-tasks across a distributed system. Due to the dedicated, highly available resources, this decreases the execution time of a single sub-task and increases efficiency. A cluster, by contrast, increases the number of tasks that can be executed in a given time period, and increases efficiency through that; this is the second major difference. In simple terms, assume a task consists of 10 sub-tasks and one sub-task takes 1 hour to complete; a single server then takes 10 hours to complete the task. Now assign that task to a distributed system where those 10 sub-tasks are distributed across 10 different dedicated servers; with parallel execution it completes all the sub-tasks within 1 hour. In a clustering system, a single server still takes 10 hours to complete the whole task, but we can increase the number of servers in the cluster; say we increase it to 10 servers, and at any given time the cluster is now capable of handling 10 requests in parallel. Now comes the main question, which is… When to use which strategy? Well, I would say it depends heavily on the properties of your application, like the number of users in a given time period, the number of requests in a given time period, the number of database connections at a given time, the data load that needs to be handled, etc. There are many things you have to consider when selecting the approach that is most suitable for your application.
But most of the time, if you have a large-scale application, a combination of both will give you better results. As an example, you can use clustering inside of a distributed system by clustering the dedicated servers for more availability and scalability. But still, everything depends on your application’s properties.
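The timing arithmetic above, plus the round-robin dispatch a load balancer might use, can be sketched in a few lines of Python. All the numbers and node names here are illustrative, not taken from any real deployment:

```python
# Toy illustration of the timing arithmetic above:
# a task = 10 sub-tasks, 1 hour each (numbers are made up).
SUBTASKS = 10
HOURS_PER_SUBTASK = 1

# Single server: sub-tasks run one after another.
single_server_hours = SUBTASKS * HOURS_PER_SUBTASK       # 10 hours

# Distributed system: each sub-task on its own dedicated server, in parallel.
distributed_hours = HOURS_PER_SUBTASK                    # 1 hour

# Cluster of 10 servers: one request still takes 10 hours,
# but 10 requests can be served at once (throughput, not latency).
cluster_hours_per_request = SUBTASKS * HOURS_PER_SUBTASK
cluster_requests_in_parallel = 10

def round_robin(nodes):
    """Minimal round-robin dispatch a load balancer might use."""
    i = 0
    while True:
        yield nodes[i % len(nodes)]
        i += 1

balancer = round_robin(["node-1", "node-2", "node-3"])
targets = [next(balancer) for _ in range(5)]
# targets == ['node-1', 'node-2', 'node-3', 'node-1', 'node-2']
```

The sketch makes the trade-off concrete: distribution shortens a single request, while clustering multiplies how many requests can be in flight at once.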
https://medium.com/swlh/clustering-vs-distributed-systems-51eb15836eb7
['Lakshitha Samarasingha']
2020-07-21 22:20:07.087000+00:00
['Distributed Systems', 'Distribution', 'Cluster', 'Load Balancing', 'Clustering']
The Solar Plexus: Our Power is Taken Away When This Chakra is Blocked
A SIMPLE GUIDE TO THE CHAKRA ENERGY SYSTEM The Solar Plexus: Our Power is Taken Away When This Chakra is Blocked How to strengthen and align your solar plexus chakra Image by Peter Lomas from Pixabay The entire universe is made up of energy, and our bodies are no exception. The solar plexus (celiac plexus) is a complex system of radiating nerves and ganglia located in the pit of the stomach in front of the aorta. It is part of the sympathetic nervous system and therefore responsible for fight-or-flight. The solar plexus chakra is located in the solar plexus. The study of the chakras originates in Eastern spiritual traditions that consider the seven primary chakras a person’s life force or prana — the energy that moves us. In the two previous articles, we learned that the first chakra, the root chakra, is responsible for keeping us safe and grounded on earth, and that the second chakra, the sacral chakra, is the spot where our creativity lies, ruling all things sensual and sexual.
https://medium.com/spiritual-secrets/the-solar-plexus-our-power-is-taken-away-when-this-chakra-is-blocked-db58021f801b
['Kimberly Fosu']
2020-12-21 19:46:21.546000+00:00
['Spirituality', 'Energy', 'Inspiration', 'Leadership', 'Creativity']
Your Relationships Have An Energy Vibration
Relationships | Inspiration | Creativity Your Relationships Have An Energy Vibration So Does Everything You Create fractal-2573303_1280 by dawnydawny on Pixabay When you create something new, pay attention to your intentions. Whether it’s a new business or product or a new relationship, your energy will affect how your creation is viewed by others. It’s not what you do, it’s how you do it. Everyone’s soul has a unique energy vibration. There are gifted people who can read soul records (called Akashic records). These records are energy imprints. Each one contains information about the unique gifts, energy vibrations, rhythms, talents and abilities of that soul. Similar souls resonate. They are drawn into soul groups with each other. Because they resonate, they interact in relationships called clusters or families. I call them familiarity groups. It’s an interesting way to perceive the world. When your eyes meet across a crowded room and you make a connection, it’s your soul talking. There’s something about the other person’s energy that interests you. Sometimes it’s the opposite. There are people you avoid on sight. People who make you feel uncomfortable for no reason. You can’t define it, but you are instantly sure you won’t get along. When you start to pay attention to the energies you feel, you may begin to notice patterns in how others relate to the world. You notice their attributes and see where their talents lie. Sometimes they resonate with your energy print and it “just feels right” to be near them. They feel familiar. If you are observant you can see how various rhythms come together. They interact and a relationship builds when they resonate. Or things don’t work out because the energies conflict. Energy is unlimited. Your creations have an energy vibration. Each creation is a being, with its own energy centre and life. 
When you begin to create the consciousness of something, like a blog or a website, a new relationship or a job, you attract things that match the energetic profile of what is being created. When you’re inspired and creative you infuse your creation with your inspiration. If you are creating something for the sole purpose of making money, that’s the vibration or feeling you’re infusing it with. That creation may be attractive to some, but others will feel manipulated in a subtle way. They won’t be interested in pursuing it. Take a moment and sit with your creation. Feel the energy profile you see in the things you create. It has its own rhythm, gifts, attributes and essence. Ask yourself questions. Why am I creating this? How do I want it to be perceived by customers? What does it need? Once you’re sure your creation resonates with you, ask it what it wants and see where it leads you. It’s like enticing a new lover. Before you can do anything with this creation, get to know it well so you can recognize its vibration in other things. Quietly coax it to come closer so you can become better acquainted. The universe is waiting to entice you to experience more in your life.
https://medium.com/illumination-curated/your-relationships-have-an-energy-vibration-147375eafa47
['Tree Langdon']
2020-07-14 16:34:37.782000+00:00
['Relationships', 'Inspiration', 'Self Improvement', 'Mindfulness', 'Creativity']
Interactive Data Graphing in React
Interactive Data Graphing in React Create compelling visualizations with Cytoscape.js Photo by NASA Visualizing data has become one of the most powerful tools available to software engineers. Making large amounts of data interactive and user-friendly produces an attention-grabbing product, and can reveal essential correlations that might otherwise go unnoticed. Cytoscape.js offers a convenient solution for spinning up accurate graph models quickly. With impressive displays, like this interactive map of the Tokyo Railway System, Cytoscape has become the first choice network visualizer for many web developers. When working recently on a React project, I had to sift through various sources to get a fully-featured implementation of Cytoscape.js up and running. Thanks to the power of blogging, my former headaches can help guide you through the setup process, so you can quickly implement stunning data visualizations in your next React project.
https://medium.com/better-programming/interactive-data-graphing-in-react-95d7963247ff
['Jeremiah Tabb']
2019-10-08 15:42:18.358000+00:00
['Data Visualization', 'Code', 'JavaScript', 'Web Development', 'React']
The Rise of Turntable.fm
Has Turntable.fm taken over your days and nights? The site for people to gather in “rooms,” play music, and feel embarrassingly giddy when friends and strangers vote their approval is still in beta mode but already one of the most addicting internet developments in recent memory. By ingeniously combining good and bad music with friendly competition, a chat feature, and cute avatars that have changeable outfits and boppable heads, the site ruins your life. For another reason to avoid it, go read these interviews with some of the site’s top DJs, who say things like: I think my personal record in the DJ chair is 18.5 hours. I had things keeping me in the house that day, but not demanding too much of my attention so I just never found a reason to step down. That probably makes me sound pretty lifeless. It does, yes, but also yes! That’s exactly what it does. Why would you ever step down when you could just keep going? Go start a room, never leave. You’ll see.
https://medium.com/the-hairpin/the-rise-of-turntable-fm-92ce04f7adb6
['Edith Zimmerman']
2016-06-01 11:55:16.053000+00:00
['The Internet', 'Music']
Store a Card on File using Reader SDK
Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog Last year, we announced the release of Square Reader SDK for iOS and Android, along with plugins for React Native and Flutter. While Reader SDK makes it easy to take in-person payments within your own app using Square hardware, you may also want to store customer card information securely for future use (for example, to create a recurring billing plan). To help you create a seamless experience for returning customers and enable recurring payments, Reader SDK now supports storing a card on file using Square hardware. This new functionality works in tandem with the Customers API and Transactions API, allowing you to create a customer profile; swipe, tap, or dip a card to store it securely; and then charge the card for future purchases at any time. Here’s how it works: Create a Customer. Use the Customers API to create a customer profile, including contact information such as name, address, email, and phone number. Store a Card on File. Use Reader SDK to swipe, tap, or dip a customer’s card and store it as a card on file. Reader SDK handles all payment information securely and creates a Customer Card object for you, so you don’t have to worry about handling raw card details or dealing with PCI compliance. On Android, you create a CustomerCardManager, write a callback to handle the result, and add code to start the Store Customer Card flow. On iOS, you implement the StoreCustomerCardControllerDelegate methods and add code to start the Store Customer Card flow. Charge the Card on File. Once you’ve stored a Card on File for a given Customer, you can use the Transactions API to charge the card for future payments.
Meet your buyers wherever they are With Square’s omnichannel developer platform and the addition of Card on File to Reader SDK, you can now add cards to a customer profile regardless of where the customer relationship starts: online, in-store, or in-app. We are excited to see what you’ll build! You can read more about Reader SDK in our documentation. Be sure to read and follow the requirements for obtaining customer consent and disclosing terms and conditions outlined in the documentation.
https://medium.com/square-corner-blog/store-a-card-on-file-using-reader-sdk-1a8e89d13953
['Gabriel Jinich']
2019-04-18 20:32:57.348000+00:00
['Payments', 'API', 'Ecommerce', 'Software Development', 'Mobile App Development']
The beginning of a deep learning trading bot — Part1: 95% accuracy is not enough
Start experimenting — Finding the right data Before training production-grade models we first have to find out how explanatory stock prices and financial news are when forecasting stock returns. In order to get a first impression of how well stock prices and news indicate future stock price changes, we initially train multiple models on a smaller dataset. The dataset that we will use to start proving our assumptions consists of the historical price and volume data of the IBM stock. IBM has a fairly long price history on Yahoo; prices reach back as far as 1962. The easiest way to get the historical IBM prices is to simply download the dataset from Yahoo’s IBM page. For each trading day, Yahoo provides Open, High, Low, Close prices and the Volume (OHLCV). Once downloaded and loaded into a notebook, the IBM OHLCV data looks as follows. IBM’s price and volume data To get an idea of how the prices have changed over time, we plot the daily closing prices. The IBM price graph starts on January 2nd, 1962, ends on February 3rd, 2020, and spans a price range of $7.5–$225. IBM daily close price 1962–2020 Let’s also have a look at the volume data, which we will use as an additional feature alongside our price data points. The volume for each day is calculated by multiplying, for each trade, the number of shares by the trade price; the products of all trades during the day are then summed to form that day’s volume data point. IBM’s daily trading volume 1962–2020 Preprocessing our data — I know it’s boring but necessary 😊 Feeding raw price and volume data into a deep learning model is usually a bad idea. When looking at IBM’s price graph you can see that the prices from 1962 to 1991 ($7–$48) are on a totally different level than the prices around the years 2000 and 2020 ($140–$220). In essence, these two price ranges have little to do with each other.
This means that the range from 1962–1991 (average price $25) has little explanatory value for the price range of 2000–2020 (average price $130). In order to bring past price points to the same level as recent ones, and thus make them more useful for training our neural networks, we have to do a couple of preprocessing steps. Let’s start the preprocessing; I promise it’s going to be quick. Firstly, we are going to convert prices and volumes into price returns/percentage changes. The easiest way to do this is by using the pandas function pct_change(). The advantage of price returns is that they are more stationary than raw price data. A simple explanation of stationarity: Stationarity = Good, because past data is more similar to future data, making forecasts easier. IBM’s daily stock returns and volume changes The next graph illustrates nicely that converting stock prices to stock returns removes the trend of increasing stock prices. Secondly, a min-max normalization is applied to all price and volume data, scaling our data to the range 0–1. Compared with raw price returns and volume changes, normalized data has the advantage that it allows a deep learning model to train more quickly and stably. Thirdly, we will split the time series into training, validation and test datasets. In most cases a training and validation split is sufficient. However, for time series data it is crucial that the final evaluation is performed on a test set. The test dataset has not been seen by the model at all, and thus we avoid any look-ahead or other temporal biases in the evaluation. Having calculated stock returns, normalized the data, and split it into three sections, the datasets now have the shapes illustrated below.
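The three preprocessing steps can be sketched in a few lines of pandas. The table, column names, and split ratios below are made up for illustration and stand in for the downloaded IBM data:

```python
import pandas as pd

# Made-up OHLCV-style data standing in for the downloaded IBM CSV.
df = pd.DataFrame({
    "Close":  [100.0, 102.0, 101.0, 104.0, 103.0,
               105.0, 108.0, 107.0, 110.0, 112.0],
    "Volume": [1.0e6, 1.1e6, 0.9e6, 1.3e6, 1.2e6,
               1.0e6, 1.4e6, 1.1e6, 1.5e6, 1.6e6],
})

# 1) Raw levels -> percentage changes (more stationary).
returns = df.pct_change().dropna()

# 2) Min-max normalization to the 0-1 range.
normalized = (returns - returns.min()) / (returns.max() - returns.min())

# 3) Chronological split: no shuffling, so the test set stays unseen "future".
n = len(normalized)
train = normalized.iloc[: int(n * 0.7)]
val = normalized.iloc[int(n * 0.7): int(n * 0.85)]
test = normalized.iloc[int(n * 0.85):]
```

Keeping the split chronological (rather than random) is what prevents the look-ahead bias mentioned above.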
https://towardsdatascience.com/the-beginning-of-a-deep-learning-trading-bot-part1-95-accuracy-is-not-enough-c338abc98fc2
['Jan Schmitz']
2020-05-20 02:37:41.011000+00:00
['Trading', 'Deep Learning', 'Finance', 'AI', 'Data Science']
Sharing Private Data for Public Good
“Data collaboratives,” an emerging form of partnership in which participants exchange data for the public good, have huge potential to benefit society and improve artificial intelligence. But they must be designed responsibly and take data-privacy concerns into account. (Reposted from Project Syndicate) After Hurricane Katrina struck New Orleans in 2005, the direct-mail marketing company Valassis shared its database with emergency agencies and volunteers to help improve aid delivery. In Santiago, Chile, analysts from Universidad del Desarrollo, ISI Foundation, UNICEF, and the GovLab collaborated with Telefónica, the city’s largest mobile operator, to study gender-based mobility patterns in order to design a more equitable transportation policy. And as part of the Yale University Open Data Access project, health-care companies Johnson & Johnson, Medtronic, and SI-BONE give researchers access to previously walled-off data from 333 clinical trials, opening the door to possible new innovations in medicine. These are just three examples of “data collaboratives,” an emerging form of partnership in which participants exchange data for the public good. Such tie-ups typically involve public bodies using data from corporations and other private-sector entities to benefit society. But data collaboratives can help companies, too — pharmaceutical firms share data on biomarkers to accelerate their own drug-research efforts, for example. Data-sharing initiatives also have huge potential to improve artificial intelligence (AI). But they must be designed responsibly and take data-privacy concerns into account. Understanding the societal and business case for data collaboratives, as well as the forms they can take, is critical to gaining a deeper appreciation of the potential and limitations of such ventures. The GovLab has identified over 150 data collaboratives spanning continents and sectors; they include companies such as Air France, Zillow, and Facebook.
Our research suggests that such partnerships can create value in three main ways. For starters, data collaboratives can improve situational and causal analysis. Their unique collections of data help government officials better understand issues such as traffic problems or financial inequality, and design more agile and focused evidence-based policies to address them. Moreover, such data exchanges enhance decision-makers’ predictive capacity. Today’s vast stores of public and private data can yield powerful insights into future developments and thus help policymakers plan and implement more effective measures. Finally, and most important, data collaboratives can make AI more robust, accurate, and responsive. Although analysts suggest AI will be at the center of twenty-first-century governance, its output is only as good as the underlying models. And the sophistication and accuracy of the models generally depend on the quality, depth, complexity, and diversity of data underpinning them. Data collaboratives can thus play a vital role in building better AI models by breaking down silos and aggregating data from new and alternative sources. Public-private data collaborations have great potential to benefit society. Policymakers analyzing traffic patterns or economic development in cities could make their models more accurate by using call-detail records generated by telecom providers, for example. And researchers could enhance their climate-prediction models by adding data from commercial satellite operators. Data exchanges could be equally useful for the private sector, helping companies to boost their brand reputation, channel their research and development spending more effectively, increase profits, and identify new risks and opportunities. Yet for all the progress and promise, data collaboration is still a nascent field, and we are only starting to understand its benefits and potential drawbacks. 
Our approach at the GovLab emphasizes the mutual benefit of collaboration and aims to build trust between data suppliers and users. As part of this process, we have begun designing an institutional framework that places responsible data collaboration at the heart of public- and private-sector entities’ operations. This includes identifying chief data stewards in these organizations to lead the design and implementation of systematic, sustainable, and ethical collaborative efforts. The aim is to build a network of individuals from the private and public sectors promoting data stewardship. Given heightened concerns over data privacy and misuse — the so-called techlash — some will understandably be wary of data-sharing initiatives. We are mindful of these legitimate worries, and of the reasons for the more general erosion of public trust. But we also believe that building rigorous frameworks and more systemic approaches to data collaboration are the best ways to address such concerns. Data collaboratives bring together otherwise siloed data and dispersed expertise, helping to match supply and demand for such information. Well-designed initiatives ensure that the appropriate institutions and individuals use data responsibly to maximize the potential of innovative social policies. And accelerating the growth of data collaboratives is crucial to the further development of AI. Sharing data involves risks, but it also has the potential to transform the way we are governed. By harnessing the power of data collaboratives, governments can develop smarter policies that improve people’s lives.
https://medium.com/data-stewards-network/sharing-private-data-for-public-good-29ee24c78b57
['Stefaan G. Verhulst']
2019-08-29 16:02:08.446000+00:00
['Data', 'Data Collaborative', 'Data Steward', 'AI', 'Public Good']
10 Simple Ways to Think Positively Every Day
10 Simple Ways to Think Positively Every Day Positive thinkers are more successful, less stressed, healthier, and live longer Photo by Nick Fewings on Unsplash When your state of mind is positive, you’re able to handle everyday stress in a constructive way. Positive thinking builds your confidence, improves your mood, and reduces your stress, according to research. “Once you replace negative thoughts with positive ones, you’ll start having positive results.” — Willie Nelson, an American musician and actor Positive thinking doesn’t mean you avoid life’s unpleasant situations. You just approach tough times in a positive and productive way. Think the best outcome is going to happen. Not the worst. Thinking positively requires focus, dedication, and discipline. Let’s dive into 10 ways that you can make sure you think positively every day.
https://medium.com/illumination/10-simple-ways-to-think-positively-every-day-9aa7fc0f775c
['Matthew Royse']
2020-10-21 13:09:43.580000+00:00
['Health', 'Personal Development', 'Life', 'Self Improvement', 'Positive Thinking']
How to Launch a Tech Startup in 2020
But why is this happening? Why do some ideas work while others don’t? CBInsights conducted a study to find out the most common causes of failure among startups. The top three were lack of market demand, lack of money, and choosing the wrong team. The results of the study make it clear that the idea is not everything for a successful business, and to achieve results, you need to make a lot of additional effort. So how do you launch your startup? Most articles on this subject contain voluminous paragraphs with tips on how to develop your brand, hire a sales team, start social networks, and so on. That’s all correct, but several other crucial stages precede these. Remember the main reason for startup failure? Lack of market demand. Therefore, the first stage depends only on you — test your idea. Unfortunately, the idea is worthless if it doesn’t benefit people. The fundamental rule for creating any new product is to answer the question of what problem it solves. You must clearly understand who will use your product, how it will be helpful, and how it will improve someone’s life. Otherwise, no one will pay for it just because it’s an interesting idea. Ask yourself as many questions as possible about the future product:

- What problem do you solve?
- Who are your future users?
- Who are your competitors?
- What market are you going to work in?
- What result do you expect after launching your product?

After that, do market research to find out the main trends and determine whether your startup will fit into the system. You can even ask representatives of your target audience if they would be interested in the product. Want to develop a retail app? Ask sellers about their challenges and offer your idea. “If you’re competitor focused, you have to wait until there is a competitor doing something.
Being customer focused allows you to be more pioneering.” — Jeff Bezos, American entrepreneur, founder of Amazon.com You shouldn’t create a product solely for yourself, relying on your opinion if you want to sell it to other people. Tech startups always bring something new but try to turn your idea into an understandable, simple product that people will gladly use.
https://medium.com/ideasoft-io/how-to-launch-a-tech-startup-in-2020-c36895b7fc64
[]
2020-09-04 09:17:50.032000+00:00
['Startup Lessons', 'Software Development', 'Tech Startups', 'Startup', 'Technology']
Exploratory data analysis guide
Step 1: Basic exploration🐇 By basic exploration, you may recall that we are referring to any exploration that is common to most data. To streamline this step, we will prepare a template by leveraging an automation tool as well as custom functions. Therefore, let's split the basic exploration step into two parts: Part 1: Leverage an automation tool Part 2: Use custom functions for the remaining basic exploration The idea behind the template is that we just have to update a few parts of the template when we have new data, and can get basic exploration with little effort. Once basic exploration is completed, it's important to carefully analyse all the charts and aggregate statistics, and create a summary of key findings. This analysis will help shape step 2 of the exploration. Part 1. Leverage automation tool We will use the Pandas Profiling package to automate parts of the basic exploration. We can create a Pandas Profiling report (just 'report' from here onwards) in a .html file like this:

# Create profile report
profile_train = ProfileReport(train)
# Export it to html file
profile_train.to_file("training_data_report.html")

With just 2 lines of code, we get an awesome report containing a data overview, descriptive statistics, univariate and some bivariate distributions, correlations and a missing value summary. The output report will look similar to this. You will notice that you can use the tabs at the top right corner to jump to sections and look at more detailed information by clicking on toggle buttons. You can change the title that appears in the top left corner with the title argument and turn on dark mode with the dark_mode argument.
For instance, when creating a report, extending the first line to the following will get you a report titled 'Training Data Report' in darker mode (which I find nice):

profile_train = ProfileReport(train, title="Training Data Report", dark_mode=True)

While I think creating an html report is the best way to use the report, it can also be accessed in a Jupyter Notebook using one of the following options:

# Option 1
train.profile_report()

# Option 2
profile = ProfileReport(train)
profile.to_widgets()

# Option 3
profile = ProfileReport(train)
profile.to_notebook_iframe()

I recommend trying out these options to find your preference. If you want to learn more about the package, see their documentation page. Part 2. Use custom functions for the remaining basic exploration It's likely that automation tools like Pandas Profiling may not cover all parts of basic exploration. If you find yourself doing the same exploration over and over again when working on different projects, and it is not included in the report, then create custom functions or use pandas built-in functions or methods to fill the gap. One key trick to keep the template as clean as possible is to abstract certain functionality away into custom functions and save them in a separate module. Examples in this post use functions from a custom module called 'helpers'. The code used for each function is available at the end of this post. In this section, we will look at a few example explorations one can do to complement the report. Let's start with a summary of all variables:

helpers.summarise(train)

Although this output may seem like a duplicate of what the report shows, looking at this information side by side for all variables can be useful. From this output, we can see that deck is missing in roughly 4 in 5 records and age is missing in about 1 in 5 records.
Let's split features into two groups:

# Define feature groups
continuous = train[features].select_dtypes(['float']).columns
discrete = train[features].columns.difference(continuous)

# Convert to category dtype
train[discrete] = train[discrete].astype('category')

Checking out summary statistics for continuous variables is also useful. The following function extends pandas DataFrame's .describe() method:

helpers.describe_more(train, continuous)

Here, we can see the descriptive statistics and the summary of outliers. An outlier here is defined using John Tukey's rule. 12% of fare values are outliers. Visualising the distribution of continuous features for each target class can give us insights into their usefulness in discriminating target classes:

for feature in continuous:
    helpers.plot_continuous(train, feature, target)

Similar summaries and visualisations can be done for discrete values:

train.describe(exclude=['number'])

We can see the number of non-missing and unique values in the first two rows. In the last two rows, we can see the most frequent value and its frequency count. Now, let's visualise the discrete features. By imputing with 'missing', we can see if records with missing values are more or less likely to have survived relative to the other categories in the feature:

# Fill missing
for feature in discrete:
    if train[feature].isnull().sum() > 0:
        train[feature] = train[feature].cat\
            .add_categories('missing')
        train[feature] = train[feature].fillna('missing')

# Visualise
for feature in discrete:
    helpers.plot_discrete(train, feature, target)

In this section, we have looked at summary statistics side by side for features and visualised the features' relationship with the target to gain insights into their usefulness in discriminating target classes. I hope you will create your own template for basic exploration using some or all of the ideas mentioned, supplemented by your own favourite ways to explore data.
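The code for the helpers module is not shown until the end of the post, so purely as an illustration, a minimal summarise function in the spirit of the one used above might look like this (the actual implementation in the post's helpers module will differ):

```python
import pandas as pd

def summarise(df: pd.DataFrame) -> pd.DataFrame:
    """One row per column: dtype, share of missing values, unique count.
    Illustrative sketch only; not the post's actual helper."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_pct": df.isnull().mean().round(3),
        "n_unique": df.nunique(),
    })

# Toy frame with a mostly-missing column, akin to Titanic's 'deck'
toy = pd.DataFrame({"age": [22.0, None, 38.0, 35.0],
                    "deck": [None, None, None, "C"]})
print(summarise(toy))
```

Laying out dtype, missingness and cardinality in one table like this is what makes the side-by-side comparison across all variables easy.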
Once basic exploration is completed, it's important to carefully analyse the output and create a summary of key findings. Here are a few examples: There were substantially more third class passengers and their survival rate was much lower. There were substantially more male passengers and their survival rate was much lower. Understanding the context of the problem you are trying to solve with data is valuable and can help you sense-check the data or interpret the findings. Reading about the problem or talking to relevant stakeholders are good ways to gain that understanding. In Titanic's example, we could look up information to understand the disaster better. After a little bit of reading, we will soon find out that there was a 'women and children first' protocol during the emergency, which played a role in who was more likely to have survived. Contextualising data this way is helpful in making sense of the data exploration findings and understanding the problem better.
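The Tukey rule mentioned earlier flags values lying more than 1.5 times the interquartile range beyond the quartiles. A minimal sketch (the function name and the toy fare values are illustrative, not from the post):

```python
import pandas as pd

def tukey_outliers(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Boolean mask of values outside Tukey's fences:
    [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return (s < q1 - k * iqr) | (s > q3 + k * iqr)

# Toy fare-like values with one extreme entry
fares = pd.Series([7.25, 7.9, 8.05, 13.0, 26.0, 512.3])
print(tukey_outliers(fares))  # only the 512.3 entry is flagged
```

The k=1.5 multiplier is Tukey's conventional choice; widening it to 3 flags only "far out" values.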
https://towardsdatascience.com/exploratory-data-analysis-guide-4f9367ab05e5
['Zolzaya Luvsandorj']
2020-11-02 23:24:39.916000+00:00
['Data Analysis', 'Python', 'Getting Started', 'Visualisation', 'Data Sience']
Crochet Affirmation №8
The craft of crochet is a violent and indestructible obsession. — After George Sand People will tell you that crochet is a lesser skill hardly worthy of your time, but rest assured they fear your craft and understand that in being able to create and give form to your innermost thoughts, you are asserting power over all you survey.
https://medium.com/crochet-affirmations/crochet-affirmation-8-f8e585211c51
['Leslie Stahlhut']
2020-02-13 21:55:44.487000+00:00
['Makers', 'Life Lessons', 'Crochet', 'DIY', 'Creativity']
Integration Testing Resque with Cucumber
Integration Testing Resque with Cucumber Processing asynchronous jobs deterministically Written by Zach Brock and Matthew O’Connor. Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog Square takes integration testing seriously. We use Cucumber and RSpec to test our code during every step of development: on developer pairing machines, continuous integration servers, staging servers and production servers. The Problem Traditional Cucumber tests exercise the web stack from the web server through the database, but they don’t typically cover asynchronous tasks like background processing and scheduled jobs. Integration tests involving these asynchronous jobs are hard to write due to race conditions between the test process and the background workers. We faced this problem when we began using Resque for processing background jobs. We love Cucumber and we love Resque; we wanted to find a way to use them together. Sample test with a race condition When someone accepts a payment using Square, we queue up a notification email in Resque. A pool of Resque workers constantly monitors the queue. One of them immediately grabs the email job, renders the email, connects to the mail server, and sends it to the user. The following test can cause a race condition. The test will pass if the Resque worker has processed the email job by the time the Cucumber test looks for a new email… otherwise the test will fail.

Scenario: Capturing an authorization successfully results in an email notification
  Given my name is Jools
  And I have a valid API session
  And I use a new capturable card authorization
  When I POST to API 1.0 "payments capture"
  And I open my newest email
  Then I should see a link to "the payment page for the last payment"

An immediate solution would be to run all Resque jobs synchronously and skip the enqueueing/dequeueing parts of the stack.
We do this for RSpec unit tests, but we want integration tests to directly test the full stack. The diagram below shows the processes (boxes) involved in running a Cucumber test and the communication channels between the processes (black lines). To solve the race condition problem, the goal was to add another channel between the Cucumber process and the Resque worker (gray line). The Solution To avoid the race condition, we start up a Resque worker as a child of the Cucumber test and then use Resque’s signal handling to control when the worker processes jobs. Our solution for integration testing with Resque works like this:

1. Start a Cucumber test process.
2. Start a Resque worker by forking the Cucumber test process. In the child process: exec a Resque worker. In the parent process: store the Resque worker’s PID and continue on as normal.
3. Pause the Resque worker on startup.
4. Execute some Cucumber steps.
5. Invoke a special Cucumber step to:
   A. Un-pause the Resque worker.
   B. Wait for it to finish processing all jobs.
   C. Re-pause the Resque worker.
6. Make assertions about the result of the worker process.

View the code in the CucumberExternalResqueWorker gist. Starting the Resque Worker When the Cucumber test process starts, we immediately fork and start a Resque worker. It can take a little while for this worker to finish starting up, so we wait around for up to a minute. In order to pause and un-pause the worker with signals, the PID returned to the parent process needs to be the PID of the Resque worker. We capture this PID by using Ruby’s fork and exec commands. fork causes a child process to be spawned, and exec replaces the child process with the Resque worker. This fork-and-exec trick gives us the Resque worker’s PID as the return value of fork in the Cucumber test process, instead of having to manage external PID files. However, we learned the hard way that exec behaves differently if you call it with the array form or the string form.
If you call exec with the array form, the command has the same PID as the process it replaces. If you call exec with the string form, the string is passed to sh -c and sh gets the PID of the process being replaced. We orphaned a lot of workers before we figured this out. Pausing the Resque Worker Resque workers can be paused by sending them the USR2 signal, and un-paused by sending CONT. A Rails initializer adds a before_first_fork hook to the Resque worker and makes the worker send itself a USR2 signal before it can process any jobs. To run all the queued jobs, we use CucumberExternalResqueWorker.process_all, which un-pauses the worker, waits until it finishes processing jobs, and then re-pauses it. Putting it together We can now use asynchronous processing in a deterministic way. We added a new Cucumber step to clear the Resque email queue:

Given "the email queue is empty" do
  CucumberExternalResqueWorker.reset_counter
  Resque.remove_queue(Mailer.queue)
  reset_mailer
end

And another new Cucumber step to process all Resque jobs:

When "all queued jobs are processed" do
  CucumberExternalResqueWorker.process_all
end

The updated Cucumber test uses the new steps to avoid race conditions between the Cucumber test and the Resque jobs:

Scenario: Capturing an authorization successfully results in an email notification
  Given my name is Jools
  And I have a valid API session
  And the email queue is empty
  And I use a new capturable card authorization
  When I POST to API 1.0 "payments capture"
  And all queued jobs are processed
  And I open my newest email
  Then I should see a link to "the payment page for the last payment"

Resque is almost totally awesome How Resque helped us We were able to solve this problem because of how well architected Resque is. Its support of POSIX signal handling and the built-in extension hooks made it really easy to exercise control over our child worker. We didn’t have to monkey patch anything, and we used standard signals to control the workers.
The fact that Resque managed its own workers and provided a single PID to control them was also a big help. How Resque hurt us The one big problem we ran into was that the pending jobs counter in Resque isn’t atomic. When a job is processed, the worker decrements the number of pending jobs remaining, does a bunch of processing, and then increments the counter for jobs being worked. This turns out to be a problem when a Resque job spawns other jobs. We initially tried to use the Resque.info[:pending] and Resque.info[:working] counters to track when our child worker had finished all the jobs, but because they don’t update atomically, we would occasionally have child jobs that were never processed. We solved this by alias method chaining a counter into the enqueue and perform methods in our base Resque worker class. Future Work Being able to test our Resque workers in a full integration environment has been a huge improvement. We’re big fans of Resque and Redis; they’ve been a pleasure to work with. Our CucumberExternalResqueWorker has been a great solution for us so far. There are a few features we’d like to add when we have time:

- Patch Resque to have atomic counters so we don’t have to monkey patch our base worker.
- Add the ability to run all jobs in a specific queue.
- Add the ability to run exactly N jobs in a specific queue.
- Add timeouts for long-running jobs.
- Turn CucumberExternalResqueWorker into a gem.
- Add an ENV option to disable starting a Resque worker.

Hopefully our solution will help other people get up and running with full stack testing of Resque using Cucumber.
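The two mechanics at the heart of this post, getting a PID you can signal directly and pausing/resuming a worker with signals, have a direct analogue in Python's subprocess module, where the list form versus shell=True mirrors Ruby's array-form versus string-form exec. This is only a hedged, POSIX-only illustration of the idea, not the post's Ruby code:

```python
import os
import signal
import subprocess

# List form: the child process *is* the command (like Ruby's array-form
# exec), so the returned PID can be signalled directly -- which is what
# the pause/un-pause trick requires.
worker = subprocess.Popen(["sleep", "30"])

# Pause and resume the worker via signals. SIGSTOP/SIGCONT are used here
# for illustration; Resque implements its pause semantics with USR2/CONT.
os.kill(worker.pid, signal.SIGSTOP)
os.kill(worker.pid, signal.SIGCONT)
worker.terminate()
worker.wait()

# String form with shell=True is handed to "sh -c" (like Ruby's
# string-form exec): the PID you hold belongs to the shell, not
# necessarily to the command -- the source of the orphaned workers.
shelled = subprocess.Popen("sleep 30", shell=True)
shelled.terminate()
shelled.wait()
```

Holding the real worker PID, rather than a wrapping shell's, is exactly why the post's fork-and-exec trick matters.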
https://medium.com/square-corner-blog/integration-testing-resque-with-cucumber-5ff99edd3f4b
['Square Engineering']
2019-04-18 23:56:52.204000+00:00
['Nodejs', 'Programming', 'Engineering']
PORK: A Technology Resilience Framework
The story of technology today is complexity. Increasingly complicated software deployed in complex and unpredictable environments that seem to be slipping more and more out of our control. Daunting stuff, this complexity. A Prediction:

- The code you’re writing today won’t be the same code running a year from now.
- The machines hosting your services today won’t be the same machines in a year.
- Your services will fail occasionally.
- Your business partners will change their minds about features.
- Hardware will need security patches to address vulnerabilities.
- Network blips will disrupt and change the topology of your system.
- Your end users will interact with your applications in unexpected ways.
- Your business will scale significantly (hallelujah!), but load will be uneven.
- Intraday releases will mean services temporarily restart at inopportune times.

More examples of complexity, you ask? No problem! Distributed systems, consensus algorithms, machine learning. Team collaborations, human emotions and miscommunications. Feature enhancements, bug fixes, new competition, vendor integrations and organizational restructuring. All this, and we’re just scratching the surface of the complex factors that could impact our technology systems. Engineers are working within a truly unpredictable and complex environment from a technology, business and human perspective. We should acknowledge this head-on by building systems to be responsive to an uncertain future. Hungry For Resilient Software Given the new reality of complexity and constant change, our approach to building technology systems must adapt. At Rocket Mortgage Technology, we’re striving to transform the mortgage industry as well as the technology that runs it. In my role as a Software Architect, I work with a suite of amazing teams that build custom software solutions for our Capital Markets and Treasury business partners.
These business areas are critical to the success of our organization, so we need our systems to reflect that criticality. Due to the volatility of the financial markets and the speed with which the mortgage industry changes, this is an exciting and ever-present challenge. Our mission in architecting systems, therefore, is to build systems that are both resilient and flexible enough to turn on a dime. What Is Resilient Software? Let’s define resiliency as the measure of how a system withstands stress and responds to unforeseen errors and urgent feature requests. More broadly, resiliency is a measure of how a system embraces change. As the needs of our clients change, our systems must change accordingly. This article details my teams’ approach to building resilient systems through our PORK Resiliency Framework. More than 100 billion dollars of loan originations flow through our systems each year, so the stakes are high. Our framework is constructed of four principles designed to take the fear out of decision-making in this complex and business critical environment. For several years now, these four principles have been the cornerstone of the systems we build and the measuring stick to guide our technology decisions. Engineers, architects, product owners and leadership have rallied around a shared goal to deliver resilient software, and these principles have led the way. The Four Principles Of PORK Our teams strive for fast, frequent and confident releases of our software, but how do we balance the safety of our system with the rapid change that comes with frequent releases? Well, imagine if you could say to your engineers, “Don’t be afraid of making mistakes because we have the tools in place to quickly find and easily correct them!” This is the essence of our framework.
We strive for resiliency by observing errors as early as possible, then recovering quickly (ideally before impacting our clients in the first place). With this approach, we’re giving our engineers the runway to experiment, innovate and make mistakes. At the same time, we’re relieving the pressure from our engineering teams as they no longer need to aim to be perfect, bug free or predict the future. To liberate our team members from the fear of mistakes is to empower them to move at a higher velocity and deliver business value faster. So, what are the four principles of the PORK resiliency framework?

1. Predictability
2. Observability
3. Recoverability
4. Keep it simple

Predictability Just because we’re working in an unpredictable and complex environment doesn’t mean our code should be unpredictable, too. Our first core principle asserts that we value predictability over correctness. This may appear counter-intuitive at first glance, but an engineering team should design a system to behave the same way in all circumstances. If a system is wrong but wrong in the same way every time, then we’re well-positioned to quickly diagnose and fix the problem. Predictability also enables the team to better explain, support and evolve the system over time. We believe that the same inputs fed into a system in the same order should always result in the same output. Why is this deterministic behavior important? Replicating a bug is often the hardest part of an engineer’s job. We have great talent on our engineering teams, and they’re adept at writing code to fix a bug — that’s typically the easy part. Most of the battle can be the task of reproducing the problem in the first place. With a focus on predictability, we can take a notoriously tricky task and make it trivial. If something goes wrong in a predictable system, it’s simple to pass the same inputs through our code again and replicate the behavior so we can diagnose what went wrong.
From there, we can leverage our automated toolchain to fix it. This approach may sound simple, but there are a lot of forces at play here. To make your system predictable, you’ll need to delve deep into the underpinnings of your system design, likely pursuing advanced techniques, such as immutability, retry policies, idempotency, event-driven programming and other techniques. Challenge yourself and your teams to make engineering choices that reinforce predictability throughout your technology stack, from code to infrastructure and the space between. How do our teams achieve this? We invest in several tools and patterns, which I’ll discuss later in the article. Observability Broadly speaking, observability means we’re designing systems with visibility and diagnostic tooling in mind. Lack of a cohesive and reliable observability strategy complicates how we triage issues, impacts system stability and ultimately erodes trust between a technology team and its clients. If we’re confident we can identify when our system is behaving normally (healthy), then we should also be able to identify when something is behaving out of the norm (unhealthy). Upon observing unhealthy behavior, we can alert our support teams proactively and correct it before it impacts our clients. By contrast, if we don’t have the appropriate visibility into what our system is doing, then we don’t know when something is misbehaving, and we don’t know when it warrants our attention to fix it. This almost certainly leads to unfortunate client impact, and what’s worse, our beloved clients are the ones notifying us that the system is broken. Shouldn’t we already know? Observability is about quick and effective root cause analysis, not a few flashy visualizations to impress your leadership team. Rather than getting distracted by polished graphs and charts, I recommend you focus on a chain of diagnostic tools. 
We want three types of tools in this toolchain:

Proactive alerts that call out unhealthy trends
Tools to form a hypothesis of what has gone wrong (e.g. dashboards for logs, metrics, traces)
Tools to confirm or deny the hypothesis

Once you have confirmed your hypothesis, you can leverage our principle of Predictability to replicate the problem and test your bug fix. Voila! If we’re aware we have a problem (Observability), and we can quickly replicate and diagnose the problem (Predictability), then we are in good shape to correct the problem quickly.

Recoverability

When building a system, engineers have a tendency to try to make it bulletproof against failure. This seems reasonable at first glance, but ultimately proves to be unreasonably harsh and restrictive. Engineers should be thoughtful and defensive in their coding practices, but if they’re not careful, bulletproofing can lead teams down a path of fear, long-lived testing environments, days or weeks of extensive testing and planned maintenance windows. This approach will undoubtedly manifest itself in brittle systems and slow software delivery. As we shift our mindset toward embracing unpredictability, rather than avoiding failures at all costs, we should instead be thinking about how quickly we can respond to inevitable failures. You cannot avoid failure altogether, but you can very much control how quickly a team responds to it. We need not bulletproof our systems if we can quickly observe system issues and repair them before our business partners or clients feel the impact. Recoverability is particularly important when considering how teams manage the data flowing through their systems. Data, in many ways, is a system’s most precious resource and the backbone of our business. If your system encounters an outage, chances are you’ll lose some data and need to recover it. Have a recovery plan in place for this in advance! Have a plan to repopulate your data in caches, event streams and data stores.
For example, can you work with the system of record to provide a full “state of the world” snapshot to repopulate all your data on-demand? You should have layers of protection in place and, ideally, you should practice how you use them. Don’t wait for a production outage to hatch a recovery plan. Do it now. We leverage many additional techniques for recovery, as well, such as approaching system design with a disposable mindset. The disposable mindset encourages teams to avoid building precious, brittle systems. Instead, support teams can quickly tear down a misbehaving application and spin up a fresh new application in its place. During critical outages, we favor this approach for quick recovery so our clients can get back to work as quickly as possible. Later, once the urgency has diminished, our engineers will delve deep into the observability data we have collected to explain what led to the outage in the first place. A disposable mindset often requires stateless services, a careful plan around managing your data (as discussed above) as well as some modern tooling like Docker containers and container orchestration, which I’ll briefly discuss later. My recommendation is to shoot for the moon and build a reset button that will quickly restore an unhealthy system to health in an automated fashion. This will certainly take a lot of work, but it’s a noble goal. Remember, due to the unpredictable nature of our complex environments, you can’t avoid failure, but with principles such as these, we learn to build resilient systems from unreliable parts.

Keep It Simple

Simple, elegant solutions are less brittle, less likely to break and easier to fix when they do break. As we design and implement systems, we’re typically addressing multiple goals across multiple time horizons at once.
These include the initial delivery of a new system, but also the ease of maintaining the system for the years to come and the ease with which our system can evolve over time to meet our clients’ evolving needs. Given the time-shifting, shape-shifting nature of these goals, unless we have a healthy respect for complexity, the software project is almost certain to fail. In my experience, the single biggest indicator of the success of a software project is simplicity. It’s also the single biggest factor in reasoning about complex systems when you’re urgently alerted to a production issue at 3:00 a.m. Simplicity allows for a shared mental model across your team. Can every engineer on your team draw a reasonably accurate diagram of how your system is architected? I hope so. Our systems act as a record of all the hundreds and thousands of decisions we’ve made along the way. Build the simplest thing that will solve the problem. Be deliberate in your decision-making process and be able to justify your decisions. Because we must live with these decisions, maintain them, support them, debug them — it is in our collective best interest that we keep it simple.

Tools For The Job

As described, the principles of our PORK resiliency framework are conceptual in nature, thus not overtly opinionated with regard to programming languages, utilities or technology stacks. That said, I’ll share some tooling and best practices that my teams leverage today to achieve the goals of our framework. We expect these tools will change over time, but we also expect the four core principles will continue to apply, nevertheless.

Infrastructure As Code (IAC)

We use Terraform, which helps provision predictable infrastructure to avoid hard-to-debug environmental drift issues, which arise from manual configuration across multiple environments.

Continuous Integration/Continuous Delivery (CI/CD)

No surprise here.
Tooling in this space is widely available to introduce automation, which offers us predictable and immutable build artifacts (for example, Docker images) and predictable promotion across environments. We use CircleCI and custom tooling to achieve this across our systems.

Docker

In our world, Docker refers to the containers as well as the orchestration layer that manages those containers. We love Docker as it promotes a disposable mindset, which ensures all dependencies are documented in code, and prevents snowflake environments. If a Docker container dies, another will be spun up dynamically by the container orchestrator (Docker Swarm, Kubernetes, ECS) to replace it. Because containers are immutable, the new one is just as good as the prior. The orchestration platform also offers us readily available infrastructure (compute on demand), not to mention a variety of predictability-focused tooling such as software defined networking, declarative files to describe the desired state of your system (Docker Compose, Kubernetes manifests) and a number of other abstractions to help with engineering productivity and resilient system design.

F#

We use C# in many scenarios, and C# is used more broadly across our company, but we love F# in this specific arena. F# is a functional language that addresses many of our predictability concerns elegantly (through first-class constructs around immutability and side-effect-free processing). That said, a functional language is not mandatory. You can certainly achieve predictability through C#, Python or other languages, though it will require more deliberate decision making and the engineering discipline to stay the course when the predictability path is not the easiest way forward. A functional language makes sense to us because we can emphasize immutability, testability and other predictability concerns at the code level, just as we already do at the service level and the system level.
This allows for a cohesive predictability strategy up and down our technology stack.

Kafka

We love Kafka because, like the other tools mentioned here, it makes good engineering practices simpler to implement. It’s easy to integrate and deprecate services, so your system can evolve as needed over time. We believe immutable data is easier to manage, and Kafka is a durable log of immutable events kept in strict order (per partition). Your teams don’t need to reinvent the wheel and write custom code to achieve resiliency through buffering, redundancy, scalability and fault tolerance. These and other useful patterns and principles come out of the box with Kafka. Furthermore, Kafka allows us to process data as events happen in the real world, rather than wait for a user request. Thus, we’re processing earlier and can observe anomalies sooner. This flexibility allows us to fix issues even before the user invokes a request (thus, no client impact). When we do encounter a bug, Kafka allows us to rewind the consumer offset and replay the data inputs we received earlier. This functionality allows us to diagnose the problem easily and get back to health, as described in our predictability and recoverability principles.

Splunk

We use Splunk for several reasons across our organization, but in the context of PORK, Splunk helps us achieve our observability goals using logs. At a high level, Splunk helps us describe, explain and alert on the health of our systems. We use Splunk for building dashboards and alerts that show anomalies. We leverage the Splunk DSL to quickly drill down into request-level context to troubleshoot and diagnose production issues as they’re happening.

Grafana

Metrics help us describe and alert on the health of our system, but unlike request-level logs, they struggle to explain the state of our system because metrics intentionally strip away the context of individual requests.
We love Grafana, nonetheless, but we focus it where it brings the most value — quickly and efficiently graphing trends that don’t require drilling down into request-level data. We find that Grafana is great at visualizing resource utilization (CPU, RAM, disk) and HTTP endpoint status codes.

Summary

With the PORK resiliency framework, we’re attempting to address the increasing complexity of technology and our environment by turning the design of a system on its head, such that we no longer over-emphasize the initial delivery of a system. Instead, we ask ourselves:

How will our system respond to an urgent business need down the road?
How will we react when we have a bug?
Do we have the right tooling in place to respond quickly and confidently?

The answers to these questions will reveal how resilient a team’s systems truly are. Resiliency, after all, is a quality that applies not only to the systems but to the teams that run them. How do you ensure resiliency in your systems? Let me know in the comments!
https://medium.com/rocket-mortgage-technology-blog/pork-a-technology-resilience-framework-745207bd28d5
['Rocket Mortgage Technology']
2020-09-15 21:58:15.184000+00:00
['Technology', 'Developer Tools', 'Software Development', 'Software Engineering', 'Software Architecture']
[I Am Not an Accessory]: On What Unit Tests Should Look Like
Teacher Kuma's Software Engineering Classroom. Welcome to the Kingdom of Software Engineering
https://medium.com/kuma%E8%80%81%E5%B8%AB%E7%9A%84%E8%BB%9F%E9%AB%94%E5%B7%A5%E7%A8%8B%E6%95%99%E5%AE%A4/%E6%88%91%E4%B8%8D%E6%98%AF%E9%99%84%E5%B1%AC%E5%93%81-%E8%AB%87%E5%96%AE%E5%85%83%E6%B8%AC%E8%A9%A6%E8%A9%B2%E6%9C%89%E7%9A%84%E6%A8%A3%E8%B2%8C-360ddc87c4f4
['Yu-Song Syu']
2020-11-24 01:06:40.909000+00:00
['Software Engineering', 'Unit Testing', 'Software Development', 'Refactoring', 'Testing']
Regulating Facebook’s Public Spaces
The Social Life of Small Urban Spaces, William Whyte Jr, 1980

In architecture school I wrote a dissertation arguing that urban design was no longer happening within the offices of architects, planners, and engineers, but within the lines of code written by small clusters of software engineers in California. My 21-year-old self was both apprehensive and excited by this idea. I proposed that architects and urban designers start engaging with this new reality; that it was ludicrous that those now altering our experience of cities in ways more profound than any planner could have dreamed were in no way qualified, or responsible, for doing so. Whether deciding where we go, how we get there, how we organise, how we socialise, how we form relationships, these experiences now seemed to be exclusively shaped by emerging technologies and platforms. The urgency I felt came from a simple observation: that the qualities of public spaces we enjoy (the presence of others, serendipity, beauty, the unexpected) were being (poorly) replicated or replaced by online equivalents. The trajectory appeared to be taking us down a route where more and more public life was either substituted or mediated by technologies exclusively designed by software engineers. My answer to this was the “Cyber Architect”! A group of people skilled in the areas of architecture, urban design, planning and structural engineering, as well as sociology, software engineering and UX/interaction design. These people would harness the new possibilities to shape urban experience, so rapidly and at such a scale, for “public good”, so that software would not compete with or draw us away from quality “real-world” public spaces, but enhance them. Because, as I saw it, the comparisons between real-world public spaces and their online equivalents, most profoundly in the case of Facebook, were undeniable. Over the next several years I set about on a path of experimenting with the idea of the Cyber Architect.
The most obvious manifestation of this idea was a collaboration and company I formed with Canadian interaction designer Jonathan Chomko. Between us, we combined our knowledge and skills in architecture, design, interaction and software/hardware development, and produced real-world experiments. These included a street light that recorded and replayed the shadows of those who walked underneath, and an interactive visitor experience in the streets of central London that allowed visitors to locate and explore lost histories around them. Our philosophy was straightforward: use these new technologies to create real-world experiences that enhance the world around us and produce new opportunities for “genuine” social interaction. While these experiments unfolded, the social media platforms that had set me off on this journey unfolded too. The world within which I was now operating, comprising mainly artists working with technology, appeared to have a growing obsession: data. The work of friends and practitioners that I admired appeared to increasingly convey a sense of urgency and alarm at what was unfolding online, in particular on social media platforms. Yet in spite of this, I was uninterested. I was bored by the intangible, often highly technical, dilemmas that were being discussed within the conferences, communicated through works of art and critical design, and debated in pubs. Until the “real-world” began changing too.

Data Licence, exhibited as part of Big Bang Data in 2016, IF, 2015

During a recent interview with Ed Miliband, Yuval Noah Harari, author of Sapiens, was discussing the phenomenon of fake news, pointing out that fake news is nothing new; after all, we all know how effectively misleading propaganda in the form of posters, speeches and particularly film, a relatively new medium at the time, was used by the Nazi regime.
Harari went on to explain how in 1930s Germany, however, the message in the propaganda had to be simplified and broadened to an extent that would allow it to connect with as many German citizens as possible. What is unique today, he explained, is not fake news itself; it’s that social media targeting algorithms give us the ability to tailor propaganda to an individual, based on their own individual biases. In light of this technological shift, and the unpredictable global events that followed, what became clear is not that we are such different people, but that we inhabit different realities. When objective reality ceases to exist, the terms of a debate around issues such as how to deal with global warming, extremism, immigration or Brexit cease to exist also, and we are left only with our hardened positions. It’s within this state of alternative realities that the President of the United States was able to say “you had some very bad people in that group, but you also had people that were very fine people, on both sides” when describing neo-Nazi and anti-fascist demonstrators in the wake of the Charlottesville terror attack. With every extremist atrocity, every tragic teenage suicide, the incontrovertible evidence mounted of the role social media companies, in particular YouTube and Facebook, played in leading an individual down a tragic path. Anecdotally, in conversations and listening to radio phone-ins, I heard videos and stories from YouTube or Facebook cited as references for hardened political positions, whether fringe economists presenting alternative forecasts via YouTube, or a fabricated news story on Facebook about a man “being arrested for carrying a British flag”. In this state of public polarisation, accelerated and deliberately manipulated by content on social media, it would be impossible to form a consensus around the truly massive issues of our time, such as global warming, mass migration, inequality and automation.
The real work of societal change is complex, and the timescales daunting. However, to make a start on that work, we need an electorate that can start reaching a consensus around such issues, with politicians that reflect this consensus and can take a leading role in tackling them. My conclusion from the past few years is that this will require a majority of us to inhabit the same objective reality, so that consensus can be reached and informed decisions can be made. Regulating social media, a mechanism that holds the potential to individually target us with alternative forms of reality, is an essential step to achieve this. From day one, my interest in technology was an emotional one: I loved the experience of towns and cities, and felt dissatisfied with that of social media. I therefore felt that people like me must “seize control!” of the potential of these new technologies and help “steer the ship”. So it wasn’t until I started to see first-hand the real-world consequences of these technologies, in which I’d seen such potential, that I understood what those artists, designers and thinkers had been so concerned about. Until I had an emotional response, until there was something to save. I believe, now more than ever, that this is how the vast majority of humans make judgements, especially on topics we do not have the time to explore in intricate detail. Therefore, I also believe, and see increasingly evidenced, that we are entering a moment of profound choice and change when it comes to the trajectory of social media and the big internet companies. Where regulation, and a new form of relationship between ourselves and our data, becomes not just a vision, but an inevitability. It was only recently, while reading Mark Zuckerberg’s new manifesto, where he repeatedly used the phrase “town square Facebook”, that I realised I’d been a part of the joke all along.
Facebook, and along with it YouTube, Twitter and Instagram, have relied on us buying into a premise: that they are an extension of the public realm, and therefore are naturally granted the same privileges as our real-world public realm: our town squares. What the artists and thinkers were desperately trying to tell us is that this has never been so. That these companies are exactly that — companies! Companies who have a legal duty to extract as much value from their product, us, as possible. The advertising model that emerged from these companies relies on us being able to share and encounter content uninhibited. The metaphor of a town square, a place where our freedom of speech, our freedom to demonstrate and our freedom to encounter the unexpected have rightfully been enshrined in laws and constitutions around the world, became the default argument against regulation. Last year the UK Government released a White Paper suggesting a new regulatory body to regulate social media. However this remains within the context of the idea of Facebook, and platforms like it, being town squares, and therefore regulation having to be very carefully considered so as not to infringe on our rights (i.e. minimal). The UK Government’s own White Paper suggests that “privately-run platforms have become akin to public spaces”. For all the compelling comparisons between scrolling through a news feed and strolling through a town square, the ones that I found so alluring in architecture school, it is essential that regulation is approached in the same manner that we would approach any multi-billion dollar company. Let alone a company that profits exclusively off a product that we produce: data. It is now essential that this narrative, of Facebook and platforms like it being public spaces, being town squares, protected by our hard-fought democratic and human rights, is broken down.
These platforms could have once argued for such a reality, as the LSD-induced software engineers of the past century dreamt, however that time has passed. The reality in 2019 is that these platforms are publicly listed, multi-billion dollar companies. It is vital at this point in history, with our democracies straining, that we treat them as such.
https://mattrosier.medium.com/the-new-town-square-7802df6c890a
['Matthew Rosier']
2020-07-29 12:24:57.596000+00:00
['Architecture', 'Public Space', 'Civic Technology', 'Mark Zuckerberg', 'Facebook']
An American Dies Every 40 Seconds. Please Stay Safe.
Let me tell you what you never forget. You never forget how it feels when you watch them pull the sheet over your mother’s face. They strap her down and push the gurney out the door, down the hall slowly in measured steps. Towards the elevator. We trail behind, her ducklings, knowing it would be the last time we’d see her. Tears roll silent down my face. Behind me, my sister keens softly and I am nearly undone. The elevator dings and as the doors open, I run to kiss her one last time, through the sheet. She is so cold and I am so broken. As we walk through the hospital lobby for the last time, Christmas music plays softly in the background. Silver bells, silver bells. It’s Christmas time in the city. We drive half an hour, my child and I, to winter land. Giant snowflakes falling through the glow of Christmas lights on giant pine trees as we crunch through the snow to the haunting strains of The Dance of the Sugar Plum Fairy and tears freeze on my eyelashes. Somehow, it feels like she is there with us. She so loved the holidays. I remember when I was a child and she’d hang tiny silver stars from the ceiling so they’d twinkle and twirl above us. It’s almost dark when I get home and I see her Christmas stocking, stuffed to the brim with treats Santa-me would have snuck into her apartment and we’d laugh and say I guess she was good this year. I sink to the floor and cry. I lost my Mom the Christmas before last. Let me tell you what she loved. Life. Her children. The holiday season. And dancing until 2 am on New Year’s Eve. Somehow, it seemed so unfair that she had to leave us mere days before she could enjoy those one more time. No one wants to lose a loved one, but we do and the worst part is that we never know when the last grains of sand will trickle through the hourglass. Let me tell you what I read today… I read that there is a Covid death every 40 seconds in America. I won’t tell you to wear a mask. You already are. Or aren’t.
I won’t tell you to social distance, or stay home if you can, or wash your hands or follow the health guidelines. You already are. Or aren’t. Nothing I say is going to change that. I read that if 95% of Americans wore masks, it could save 56,000 lives by April, according to the University of Washington’s Institute for Health Metrics and Evaluation. But not everyone believes that. Nothing I say is going to change that, either. Belief is a funny thing, isn’t it? When I was about the age where children start to think maybe Santa isn’t real, Mama asked if I wrote a letter to Santa. I shook my head no. I’m too big, Mama, I told her. I don’t think I believe anymore. She tipped her head and smiled. “But what if you are wrong?” she asked. What if? What if Santa is real and you don’t write the letter? And what harm if he isn’t and you do? Believing doesn’t always make you right, she said. I wrote the letter. A silly childhood story, but it’s as good a question as any, don’t you think? We love to think we are right, of course. But what if? What if just one chair is not empty at Christmas dinner next year, because I put a silly piece of paper on my face in the grocery store? What did it hurt? What if that chair that isn’t empty is mine? Or maybe the elderly lady who walks her doggy past my house? What if? For the rest of my days, there will never be a Christmas untouched by the ache of losing my Mom during the holiday season. I don’t have to know you personally to know I wouldn’t wish that on anyone. Not all death is preventable. But what if some is? What if? Stay safe, my friends.
https://medium.com/the-partnered-pen/an-american-dies-every-40-seconds-please-stay-safe-my-friends-b4a0c1480ba0
['Linda Caroll']
2020-12-15 05:06:53.329000+00:00
['Health', 'Relationships', 'Loss', 'Family', 'Love']
The Curse of Dimensionality… minus the curse of jargon
The Curse of Dimensionality… minus the curse of jargon

In a nutshell, it’s all about loneliness

The curse of dimensionality! What on earth is that? Besides being a prime example of shock-and-awe names in machine learning jargon (which often sound far fancier than they are), it’s a reference to the effect that adding more features has on your dataset. In a nutshell, the curse of dimensionality is all about loneliness. Before I explain myself, let’s get some basic jargon out of the way. What’s a feature? It’s the machine learning word for what other disciplines might call a predictor / (independent) variable / attribute / signal. Information about each datapoint, in other words. Here’s a jargon intro if none of those words felt familiar.

Data social distancing is easy: just add a dimension. But for some algorithms, you may find that this is a curse…

When a machine learning algorithm is sensitive to the curse of dimensionality, it means the algorithm works best when your datapoints are surrounded in space by their friends. The fewer friends they have around them in space, the worse things get. Let’s take a look.

One dimension

Imagine you’re sitting in a large classroom, surrounded by your buddies. You’re a datapoint, naturally. Let’s put you in one dimension by making the room dark and shining a bright light from the back of the room at you. Your shadow is projected onto a line on the front wall. On that line, it’s not lonely at all. You and your crew are sardines in a can, all lumped together. It’s cozy in one dimension! Perhaps a little too cozy.

Two dimensions

To give you room to breathe, let’s add a dimension. We’re in 2D and the plane is the floor of the room. In this space, you and your friends are more spread out. Personal space is a thing again. Note: If you prefer to follow along in an imaginary spreadsheet, think of adding/removing a dimension as inserting/deleting a column of numbers.
Three dimensions

Let’s add a third dimension by randomly sending each of you to one of the floors of the 5-floor building you were in. All of a sudden, you’re not so densely surrounded by friends anymore. It’s lonely around you. If you enjoyed the feeling of a student in nearly every seat, chances are you’re now mournfully staring at quite a few empty chairs. You’re beginning to get misty eyed, but at least one of your buddies is probably still near you…

Four dimensions

Not for long! Let’s add another dimension. Time. The students are spread among 60min sections of this class (on various floors) at various times — let’s limit ourselves to 9 sessions because lecturers need sleep and, um, lives. So, if you were lucky enough to still have a companion for emotional support before, I’m fairly confident you’re socially distanced now. If you can’t be effective when you’re lonely, boom! We have our problem. The curse of dimensionality has struck!

MOAR dimensions

As we add dimensions, you get lonely very, very quickly. If we want to make sure that every student is just as surrounded by friends as they were in 2D, we’re going to need students. Lots of them. The most important idea here is that we have to recruit more friends exponentially, not linearly, to keep your blues at bay. If we add two dimensions, we can’t simply compensate with two more students… or even two more classrooms’ worth of students. If we started with 50 students in the room originally and we added 5 floors and 9 classes, we need 5x9=45 times more students to keep one another as much company as 50 could have done. So, we need 45x50=2,250 students to avoid loneliness. That’s a whole lot more than one extra student per dimension!

Data requirements go up quickly. When you add dimensions, minimum data requirements can grow rapidly.

We need to recruit many, many more students (datapoints) every time we go up a dimension. If data are expensive for you, this curse is really no joke!
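The back-of-the-envelope arithmetic above generalizes directly: every new dimension multiplies, rather than adds to, the number of datapoints needed to keep the same neighborly density. Here's a minimal sketch of that calculation (my own illustration, not from the original article), assuming each new dimension splits the space into a fixed number of regions, like the 5 floors and 9 class times:

```python
def required_points(base_points, regions_per_new_dim):
    """Datapoints needed to keep the original density after adding dimensions.

    base_points: how many points filled the space comfortably before.
    regions_per_new_dim: for each added dimension, how many regions it
    splits the space into. Each new dimension multiplies the requirement.
    """
    total = base_points
    for regions in regions_per_new_dim:
        total *= regions
    return total


# 50 students were cozy in 2D. Add a floor dimension (5 floors) and a
# time dimension (9 sessions): the requirement jumps 45x, not by 2.
print(required_points(50, [5, 9]))  # 2250, the figure worked out above
```

The exponential blow-up is the whole curse: linear additions to the feature count demand multiplicative additions to the dataset.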
Dimensional divas

Not all machine learning algorithms get so emotional when confronted with a bit of me-time. Methods like k-NN are complete divas, of course. It’s hardly a surprise for a method whose name abbreviation stands for k-Nearest Neighbors — it’s about computing things about neighboring datapoints, so it’s rather important that the datapoints are neighborly. Other methods are a lot more robust when it comes to dimensions. If you’ve taken a class on linear regression, for example, you’ll know that once you have a respectable number of datapoints, gaining or dropping a dimension isn’t going to make anything implode catastrophically. There’s still a price — it’s just more affordable.*

*Which doesn’t mean it is resilient to all abuse! If you’ve never known the chaos that including a single outlier or adding one near-duplicate feature can unleash on the least squares approach (the Napoleon of crime, Multicollinearity, strikes again!) then consider yourself warned. No method is perfect for every situation. And, yes, that includes neural networks.

What should you do about it?

What are you going to do about the curse of dimensionality in practice? If you’re a machine learning researcher, you’d better know if your algorithm has this problem… but I’m sure you already do. You’re probably not reading this article, so we’ll just talk about you behind your back, shall we? But yeah, you might like to think about whether it’s possible to design the algorithm you’re inventing to be less sensitive to dimension. Many of your customers like their matrices on the full-figured side**, especially if things are getting textual.

**Conventionally, we arrange data in a matrix so that the rows are examples and the columns are features. In that case, a tall and skinny matrix has lots of examples spread over few dimensions.
If you’re an applied data science enthusiast, you’ll do what you always do — get a benchmark of the algorithm’s performance using just one or a few promising features before attempting to throw the kitchen sink at it. (I’ll explain why you need that habit in another post; if you want a clue in the meantime, look up the term “overfitting”.)

Some methods only work well on tall, skinny datasets, so you might need to put your dataset on a diet if you’re feeling cursed.

If your method works decently on a limited number of features and then blows a raspberry at you when you increase the dimensions, that’s your cue to either stick to a few features you handpick (or even stepwise-select if you’re getting crafty) or first make a few superfeatures out of your original kitchen sink by running some cute feature engineering techniques (you could try anything from old school things like principal component analysis (PCA) — still relevant today, eigenvectors never go out of fashion — to more modern things like autoencoders and other neural network funtimes). You don’t really need to know the term curse of dimensionality to get your work done because your process — start small and build up the complexity — should take care of it for you, but if it was bothering you… now you can shrug off the worry.

To summarize: As you add more and more features (columns), you need an exponentially growing amount of examples (rows) to overcome how spread out your datapoints are in space. Some methods only work well on long skinny datasets, so you might need to put your dataset on a diet if you’re feeling cursed.

P.S. In case you interpreted “close in space” as having to do with scale, let me put that right. This isn’t about the effect of measuring in miles versus centimetres, so we won’t try to blame an expanding universe for our woes — and you can’t dodge the curse by simple multiplication.
Instead, maybe this picture will help you intuit it in 3D; it’s less a matter of how big this spherical cow, er, I mean, meow-emitter is… and more a matter of how many packing peanuts it is covered in. Image: SOURCE. Thanks for reading! Liked the author? If you’re keen to read more of my writing, most of the links in this article take you to my other musings. Can’t choose? Try this one:
https://towardsdatascience.com/the-curse-of-dimensionality-minus-the-curse-of-jargon-520da109fc87
['Cassie Kozyrkov']
2020-09-11 20:17:48.595000+00:00
['Data Science', 'Editors Pick', 'Machine Learning', 'Artificial Intelligence', 'Mathematics']
Hey There, My Name Is Delilah, And I’d Like To Make A Complaint
Hey There, My Name Is Delilah, And I’d Like To Make A Complaint I’ve never even been to New York City. Photo by Max Ilienerwise on Unsplash Yeah, so, whenever I meet a guy and I introduce myself, the guy inevitably gets this bright look in his eyes and says, “Hey there, Delilah.” And he’ll act like he’s the first person to make the connection between my name and that annoying song which even now, fifteen years later, seems to be playing on every radio every minute of every god damned day. Do I sound bitter? Well, I am bitter. I am tired. And I am sick. Sometimes the guy, after opening with that incredibly unwitty witticism, will follow it up with, “What’s it like in New York City?” Then he’ll look at me expectantly as if I’m supposed to give him a prize or something. “Heh heh,” I will say, without smiling. And I hope that will be the end of it. But sometimes, it goes even farther into the land of banality, and this guy who thinks he’s so clever will top it off with: “Oh, it’s what you do to me… oh, it’s what you do to me…” Um, this is basically sexual harassment, am I right? I meet a guy, and three seconds later he’s singing this, which is basically saying out loud that I’m giving him a hard-on. And this happens over, and over, and over again. Welcome to the hell that is my life. And while I’m at it — fuck you, too, Tom Jones When I meet old people, this is what they do, and in a way, it’s even worse. They sing, “My, my, my, Delilah.” That’s from this old song by this old guy called Tom Jones. Yeah, that’s pretty stupid. And yet they think they are being so clever. Um… old person, just sayin’, you’re like the thousandth person to do that, OK? Not the first! Some people sing, “Why, why, why, Delila?” which is another line from the song, and which I hate even more because I feel like I have to give them an answer. “Um… I don’t know. I’m sorry?” That’s what I say, sometimes. Other old people sing “Ay, ay, ay, Delila,” because they misremember the words. 
It’s not ay, ay ay, old people. I checked. It’s either my, my, my, or why, why, why. But I would prefer that it be nothing. I wish that you wouldn’t sing anything. Especially not the damn Bruce Springsteen lyric. That one really burns me up. But luckily, it’s kind of obscure, so I only get it when I meet a real genuine asshole. “Hi, my name is Delilah,” I’ll say. And the genuine asshole will look me right in the eye and say: “Cause when we kiss… fire.” That is worse than sexual harassment. It basically feels like being mauled. It makes me want to puke. First of all, it’s eliding about five lines from the verse. Google it. The song goes like this:

“Well Romeo and Juliet
Sampson and Delilah
Baby, you can bet
They were burnin’ with desire
If I say split
Then I know that I’d be lying
’Cause when we kiss
Ooh… Fire”

But see, the genuine asshole just skipped from my name right to “fire.” How dumb! Anyhow, I couldn’t take it anymore. I thought I would shorten my name. “Hi, my name is Lila.” I liked that — Lila. It worked fine for a few weeks. But what I didn’t know is there is a song by a band called Oasis who I didn’t really even know and it is called “Lyla.” One day I met this guy and I said, “I’m Lila.” He got this shining eye look that I recognized right away. My heart sank. Oh no, I thought. And he broke into song. “Hey, Lyla,” he said merrily. “The stars about to fall.” “Huh?” I said. “The world around us makes me feel so small, Lyla,” he said. “No, it can’t be,” I exclaimed. “No!” But he kept on. “If you can’t hear me call, then I can’t say Lyla, heaven help you catch me if I fall.” I turned and ran screaming like a lunatic. I went to YouTube and found the song. I didn’t like the song. I didn’t like the band. “Arghhhhhh!” Alright then, I thought. I’ll go with Del instead. “Hi, my name’s Del,” I said to the next guy I met. “Oh like the taco?” he replied, eyes shining, looking at me with that dumb smile. “Yes,” I promptly replied.
“Exactly like the taco.” My life is a nightmare. Speaking of which, last night I dreamed that somehow these four things combined into one hellish rejoinder. I told some guy in my dream that I was Delilah. “Hey there Del Taco,” he said to me. “What’s it like being a Del Taco in New York City? Are your stars about to set Romeo and Juliet on fire? Because I saw the light on the night that I passed by your window, Miss Del Taco. Why, why, why, Del Taco?” Anyhow, thanks, Mom. Thanks, Dad. Thanks for giving me such a great name. I really appreciate it… …NOT!
https://medium.com/slackjaw/hey-there-my-name-is-delilah-and-id-like-to-make-a-complaint-7485e28974e4
['Simon Black']
2020-07-06 13:01:01.199000+00:00
['Pop Culture', 'Satire', 'Humor', 'Pop', 'Music']
Be A Data Maestro, Part III
Be A Data Maestro, Part III

I must be a Pointer Sister, because I’m So Excited to show you my project results

Photo credit: SoundCloud

We have now arrived at the final movement (or is it?) of my Be A Data Maestro series here on Medium discussing my fun data analysis on music preferences. In this installment, I present the fruits of my research. For those late to the party — I had compiled a database with info on respondents’ demographic characteristics and favorite music artists, bands, singers, musicals, and composers. In Part II, I expounded on two novel metrics I came up with — namely, the Come Together Score (CTS) and the Bridge Over Troubled Water Score (BOTWS). I am now pleased to share with you the winners of 6 awards (half good, half Razzie-like in nature):

The ‘No Blank Spaces Here’ Award

This highly prestigious award for the artist who is favorited or liked by every single one of the five generations represented in my database (Baby Boomer, Gen X, Millennial, Gen Z, and Gen Alpha) — a feat accomplished by none of the other 189 artists represented in my database — goes to the one, the only, Miss TAYLOR SWIFT!! Congrats Tay-Tay!

The ‘One Love’ Award

A very culturally important category, the One Love Award goes to the artist(s) who is/are favorited by the most colors of people. Sadly, while no one achieved a perfect score, we DO have a five-way tie among the following artists (listed alphabetically) who are each favorited/liked by people of 3 different races: ABBA, BOB MARLEY, ELTON JOHN, NIRVANA, and U2. Accepting the award on behalf of U2 is Bono: “We’re one, but we’re not the same, we’ve got to carry each other, carry each other, ONE!”

The ‘Beyond The Barricade’ Award

In my data, I found that members of a household not surprisingly tend to have similar musical interests. So, I wanted to see which artists are most effective at striking a chord with listeners across multiple households, not just one’s own.
Without further ado, here they are: LES MISERABLES, NAT KING COLE, and SIMON AND GARFUNKEL. SPOILER ALERT!! Please skip to the next section if you haven’t seen the musical or read the book. Unfortunately, no one is able to accept the award on Les Miz’s behalf, because (1) fictional, and (2) almost everyone dies anyway. Now for the awards of dubious distinction…

The ‘Nothing Else Matters’ Award

If I was a musician, I actually wouldn’t mind winning this award. That’s because it is going to the artist who elicited the widest range of reactions from respondents. That means people care about you one way or another! There was only one artist who managed to have people either (1) say they’re their FAVORITE BAND, or (2) HATE or dislike them. And that band is…drumroll please Lars Ulrich: METALLICA! Accepting the award on behalf of Metallica is lead singer James Hetfield: “Life is ours, we live it our way. Never cared for what they do. Never cared for what they know.” Um…congrats?

The ‘Viva La Pizza' Award

Speaking of artists who are polarizing — I did collect data on a seemingly trivial and irrelevant preference other than music. I asked respondents “Do you like Hawaiian pizza?” I had noticed anecdotally in my Facebook feed that this was a rather hot topic, and both sides were equally forceful and passionate in their opinions. For example:

Pro Pineapple Pizza: “Of course I like Hawaiian pizza. I’m not a monster” and “It’s my favorite topping”

Hawaiian Haters: It is “nasty,” “gross,” and “an abomination”

Accordingly, we have two winners: COLDPLAY and MISS SAIGON! People who favorited/liked Chris Martin and company would toss that Hawaiian pizza straight into the trash, while fans of the Tony award-winning musical Miss Saigon would relish the opportunity to polish off a slice or two. Interestingly enough, Miss Saigon is an East Meets West story, not unlike Hawaiian pizza itself.
When I dive deeper into this, I can see that the Pineapple Pizza Preference is definitely split along racial/generational lines. A takeaway (see what I did there? Get it — pizza, takeaway? Oh nevermind) is that in the absence of self-reported demographic data, the PPP can be used as a reasonable proxy if available.

The ‘We Are The Champions’ Award

If the Be A Data Maestro Awards were a real awards show, then this would be the final statuette given before James Corden or Neil Patrick Harris ends the telecast (I love you both, in case either/both of you are reading). Technically there is one undisputed king, but I am making the executive producer decision to have a runner-up. And that would be…drumroll please Roger Taylor: QUEEN! Now at long last, the ultimate winner is…THE BEATLES!! You love them, yeah, yeah, yeah! I know what you guys are thinking — “You really needed to do a data analysis project to come to the conclusion that the Beatles are the best band ever in the history of the world and always will be, Einstein??” To that I say, yes I know it was a long and winding road. But I still think my analysis can serve as a breeding ground ripe for further exploration…

The ‘Last Night I Dreamt That Somebody Loved Me’ Award

Let’s say you are the manager or promoter of one of the artists who won this award for (1) only having one person who favorited them and (2) no one else liking/having heard of them. Who can name all five of these artists?* Ironically, The Smiths were not eligible for this award because they were favorited/liked by more than one respondent. *Answers at the end of this section. Someone obviously felt strongly enough about these artists to list them among their favorites, when given free rein to name as many artists as they wanted. Find out who these die-hard fans are, and learn all about them! Really get to know them, then go out and try to recruit more fans by marketing to people who are similar, no?
*TAME IMPALA, DERRICK MORGAN, WU TANG CLAN, BRENTON WOOD, and ALEXZ JOHNSON Listen/Ooh whaa ooh/Do You Want To Know A Secret? Curious about how I went about executing my project from start to finish? Happy to discuss with fellow data enthusiasts and Alexz Johnson fans (this artist was courtesy of moi. Anyone else out there used to religiously watch Instant Star?)! As a sampler, below is a screenshot of the 19 artists who won an award of some sort tonight. You’ll notice that the six artists who have an overall top 2 rank in terms of BOTWS form the acronym BANQET. Cool! Now let’s have the night’s big winners The Beatles bring a close to these awards:
https://medium.com/datadriveninvestor/be-a-data-maestro-part-iii-6288ffdd6f04
['Marmi Maramot Le']
2019-07-08 15:48:16.513000+00:00
['The Beatles', 'Data Science', 'Music', 'Pizza', 'Coldplay']
No, I Am Not Crowdfunding This Baby (an open letter to a worried fan)
So, dear Worried, you can see why your email stirred my darkest fears. I’m worried too. Probably more worried than you, because, I have to live with me all the time. And soon, I’ll have to live with this baby all the time. All while trying to not lose my art-self. And, honestly, if this baby really winds up acting as a crippling, muse-killing, inspiration-sucker who saps the life out of my music rendering it totally bland…well…just tiptoe away, and leave me in my balanced, bland and happy misery. As to your worry about whether or not this is a scam to crowdfund an infant: it can be confusing about where the lines of asking and taking should be drawn. Let me tell you a story, one that I was going to include in “The Art of Asking” book (it wound up on the cutting room floor with 100,000 other words.) Last year I stumbled across an open letter from Eisley, a female-fronted indie band from Texas, who’d tried to raise $100k on kickstarter so they could afford to accept a slot to support a far bigger band on tour. Some people were confused, but I understood those logistics: when my band, The Dresden Dolls, were offered the opening slot for Nine Inch Nails in the summer of 2005, we chose to go into a financial hole in order to say yes. Our nightly paycheck covered about a third of what it cost to hire a crew and keep up with their tour buses, and we lost thousands of dollars. It’s a financial decision I’ve never regretted; I still meet fans, years later, who found me on that tour and have stuck with me ever since. Those things pay off. Unfortunately Eisley didn’t reach their Kickstarter goal (so it went completely unfunded, as per the all-or-nothing Kickstarter model); but they went on the tour anyway and there was an angry backlash from their fans, who accused them of acting dishonestly. 
The fans asked: “if you didn’t need the money to begin with…why did you crowdfund??” Two of the members of the group were planning to bring their babies on tour, a fact that got dragged into the whole kerfuffle. Eisley defended themselves in an open letter, pointing out they’d managed to borrow the money from their families and their label, and they defended themselves specifically against people accusing them of tastelessly begging for money for baby formula, with the rebuttal that all their babies were breastfeeding…and thus weren’t planning to spend a dime of that crowdfunding money on baby formula. But honestly? Why shouldn’t they buy baby formula with that money? It’s just there on the list of stuff they need to survive on tour, up there with everything else like gas, food, and capital to print t-shirts. It would be like a diabetic singer promising her fans that she wasn’t going to spend her Kickstarter tour money on insulin. If you are a touring indie musician, your life is NOT compartmentalized into neat little financial sections. When you’re a crowdfunding artist, it shouldn’t matter what your choices are as long as you’re delivering your side of the bargain — the art, the music. It shouldn’t matter whether you’re spending money on guitar picks, rent, printer paper, diapers, college loans, or the special brand of organic absinthe you use to find your late-night muse…. as long as art is making it out the other side and making your patrons happy. We’re artists, not art factories. The money we need to live is often indistinguishable from the money we need to make art. We need all sorts of stuff to make art with. MAYBE I EVEN NEED THIS BABY TO MAKE ART. Who knows? As to your question about the timing of all this…no, it wasn’t schemed.
I’ve been intending to use patreon since it was founded two years ago, because I love the idea of giving my fans a way to just pay me whenever I actually release content, instead of relying on a tired, outdated system of making one big-old fashioned record every couple of years. It feels way more sane, actually, as the impending unpredictability of parenthood approaches, to be able to work whenever I’m inspired and can make the time, instead of working on the forced, binge-and-purge, feast-or-famine cycle that I was stuck on when I was on a major label who didn’t care much about my quality of life. I love the idea of getting paid for my work, when I work, by the people who want me to work. (Like you. Unless you stop wanting it. Which is fine. We’re in an open relationship. You can leave anytime. You can even come back. I’m fine with that.) And if you already think that my output is getting too weird, or too dull: at least you don’t have to worry that the baby will turn me into one of those obnoxious songwriters who picks up a ukulele and…let’s just admit that I clearly jumped the ukulele shark years ago, and it was REALLY liberating. Though, honestly, if what you’re waiting around for is “the really gritty, complex, emotional good stuff…”…I’ll be really surprised if pushing a SMALL HUMAN OUT OF MY VAGINA doesn’t also rip my heart open and provide some really, profound new artistic perspectives. It might take me a second to recover from you know, childbirth, before I start writing again, but just give me a second. Don’t strangle me if I decide to go into labor without a notebook in my hand, jotting down inspirational lyrics. In closing, dear Worried, if you really are worried about me, and you are with me in sensitive camaraderie, I humbly ask one thing: please don’t terrify and jinx me right now. Not when I’m just about to jump into this net that I’m praying will appear to catch me, my art, and this baby…all at the same time. I love you, Amanda
https://medium.com/we-are-the-media/no-i-am-not-crowdfunding-this-baby-an-open-letter-to-a-worried-fan-9ca75cb0f938
['Amanda Palmer']
2019-10-24 21:46:34.480000+00:00
['Pregnancy', 'Crowdfunding', 'Music']
Interview: Tolu A. Talks about Dreams, Talents, and Thinking Outside the Box
Tolu A.

Pianist-songwriter Tolu A. recently unveiled the music video for “My Talents.” Directed by Tyler Scheerschmidt, the visuals depict Tolu A. playing the piano, along with a contemporary version of the Parable of the Talents. Blending flavors of pop, gospel, R&B, and hints of jazz, Tolu A. delivers a soundscape at once vibrant and contagious, chock-full of bright brass inflections and sparkling piano hues. While the music unfolds, the video displays the story of three brothers, who, upon receiving large sums of money from their father, demonstrate the correct and incorrect ways to invest one’s finances. The allegory is easily applicable to personal gifts: failure to use the abilities bestowed by God is not only irresponsible but boorish. Tolu A. is anything but boorish. He’s utilizing his God-given talents to inspire listeners to employ their own abilities to full effect — in the end, changing the world in a miraculous way. Pop Off caught up with Tolu A. to chat about how his sister indirectly introduced him to the piano, his influences, and his Daniel-like songwriting process.

How did you get started in music? What’s the backstory there?

I grew up listening to classical music and I was truly intrigued by the dynamics of the music. My sister used to play classical piano, and hearing her play piqued my interest in the piano. I learnt how to play the piano by ear and when I was proficient enough I started playing for a local church in Temple Hills, Maryland (RCCG Christ Chapel) in 2004. I played in a church off and on for about 6 more years until I got the desire to make my own music. Fast forward to 2010, I released my first cover “He is Exalted” on iTunes; and I got good reviews from friends and family. By 2013, I had released 5 covers and gotten radio air time for my music, but this wasn’t enough for me. I was tired of doing covers and wanted to start writing my own music. In 2018, I released my first video “Magnificent” which did rather well.
Now we are here in 2020 and I just released my 4th composition, “My Talents,” which I am very proud of. What’s your favorite song to belt out in the car or the shower? This is a good question. I am a seasonal person; what this means is I have songs that I play for a time frame until I move on to the next. For now, my favorite song is “Jesus” by Eddie James. One of the best pieces of music I’ve ever heard. What singers/musicians influenced you the most? Since I do instrumental music, I was inspired by instrumentalists. The very first person that influenced me was Adlan Cruz who was invited to a big conference in my home country Nigeria. He is a famous pianist with lots of compositions and he is simply fantastic. My second biggest influence was Yanni. He showed me that what I am trying to accomplish is possible. Yanni in my opinion makes Heavenly music! What is your songwriting process? The funny thing is I truly believe I am inspired by the Almighty himself. For the most part I hear the tunes in my sleep/dreams first. As soon as I wake up, I hum what I remember on my phone recorder. After which I start the expansion process from there to make it into a full song. What was the inspiration for your new single/music video “My Talents?” The inspiration came to me while I was still filming my last video “The Wait.” I always wanted to do a bible story but wasn’t sure which one to do. It just dropped in my spirit, ‘The Parable of the Talents’ and I got to work. To be honest, I wasn’t sure how it would all be accomplished but with a lot of creative and critical thinking, we were able to pull it off. We did a modernized version of the bible story which I am very proud of and would love the world to see. Tolu A. What do you want people to take away from the video? I hear that the graveyard is truly the richest place on earth because you have all these talents and abilities that were never actualized or made use of. 
I want people to find their God-given purpose and bring it to reality or use their talents or skills to their fullest potential. This video is supposed to inspire people to think outside the box to attain greatness. Where was the video filmed and who directed it? The performance scene of the video was filmed in Hancock, Maryland. The House flip was filmed in Hyattsville, Maryland. The car sales were filmed at a dealership in Stafford, Virginia and the other scenes were filmed in Woodbridge, Virginia. The video was directed by Tyler Scheerschmidt and produced by High End Pro. Why do you make music? I make music because I believe it is my God-given purpose and destiny to inspire, bless, and motivate people. I am very passionate about music and I truly love what I am able to create. I want people to watch my videos and learn a lesson, or be encouraged. I want to spread the gospel with my music and videos. How are you handling the coronavirus situation? This pandemic has been rough on everyone. It has been rough on individuals, businesses, and all institutions globally. It is just in our best interest for us all to limit our movements as much as possible, socially distance oneself in large crowds, wear protective gear like face masks, and wash your hands frequently. I have adjusted as much as possible and I pray we all see the end of this very soon. Looking ahead, what’s next for Tolu A.? I’m most certainly going to continue making music. I am going to continue writing originals and shooting videos. All my videos would be inspirational and have some form of message people can take and use to enrich their lives. Thank you for your time, I appreciate you reaching out to me. Follow Tolu A. Instagram | Facebook | Spotify
https://medium.com/pop-off/interview-tolu-a-talks-about-dreams-talents-and-thinking-outside-the-box-61593588ab29
['Randall Radic']
2020-12-22 13:44:36.046000+00:00
['Tolu A', 'My Talents', 'Interview', 'Music', 'Gospel Of Matthew']
Why we REALLY need to stop using switch statements in JavaScript
For example, let’s create a function that returns the result of a binary operation between two numbers, according to an operation string given to the function.

doAction(4, 5, "+")  // returns 9
doAction(2, 3, "-")  // returns -1
doAction(-1, 5, "*") // returns -5
doAction(10, 2, "/") // returns 5
doAction(3, 0, "/")  // throws an exception due to division by zero

If we use an if-else structure, the function will be written as follows:

function doAction(a, b, operation) {
  let result;
  if (operation === "+") {
    result = a + b;
  } else if (operation === "-") {
    result = a - b;
  } else if (operation === "*") {
    result = a * b;
  } else if (operation === "/") {
    if (b === 0) {
      throw new Error("Can't divide a number by 0");
    }
    result = a / b;
  } else {
    result = undefined;
  }
  return result;
}

As one may notice, the code is cluttered with if-else statements that all focus on determining the value of the operation parameter. For such cases (which are surprisingly common), the switch statement was created to relieve us from this cluttered mess.

function doAction(a, b, operation) {
  let result;
  switch (operation) {
    case "+":
      result = a + b;
      break;
    case "-":
      result = a - b;
      break;
    case "*":
      result = a * b;
      break;
    case "/":
      if (b === 0) {
        throw new Error("Can't divide a number by 0");
      }
      result = a / b;
      break;
    default:
      result = undefined;
  }
  return result;
}

Today, thanks to JavaScript’s objects and functional programming concepts, we can transform most switch statements into an object lookup. The code above changes as follows:

const doAction = (function () {
  const functionsObject = {
    "+": (a, b) => a + b,
    "-": (a, b) => a - b,
    "*": (a, b) => a * b,
    "/": (a, b) => {
      if (b === 0) {
        throw new Error("Can't divide a number by 0");
      }
      return a / b;
    }
  };
  return function (a, b, operation) {
    const resultFunction = functionsObject[operation];
    if (resultFunction) {
      return resultFunction(a, b);
    }
  };
})();

Not only does the function still work the same, we now have a clear separation between the logic and the parsing of the operation value.
So why do we need to stop using the switch statement? In order to demonstrate this, we will focus on the following problem. Alice and Bob are enjoying a game of Rock, Paper and Scissors. Our task is to write a function that evaluates a single game between Alice and Bob and outputs the winner of the game. Let’s examine several solutions to this problem, and show why the switch statement isn’t as helpful as it was once considered.

Solution 1: Switch structure

In this solution, we will use the switch structure.

function Switch_RPS_Eval_Game(alice, bob) {
  // If Alice and Bob chose the same option, we will output a tie
  if (alice === bob) {
    console.log("Alice and Bob tie");
  } else {
    let isAliceWin; // variable to determine if Alice won
    // Switch on Alice's choice
    switch (alice) {
      case "Rock":
        switch (bob) {
          case "Scissors":
            isAliceWin = true;
            break;
          default:
            isAliceWin = false;
        }
        break;
      case "Scissors":
        switch (bob) {
          case "Paper":
            isAliceWin = true;
            break;
          default:
            isAliceWin = false;
        }
        break;
      case "Paper":
        switch (bob) {
          case "Rock":
            isAliceWin = true;
            break;
          default:
            isAliceWin = false;
        }
        break;
    }
    if (isAliceWin) {
      console.log("Alice Wins!");
    } else {
      console.log("Bob Wins!");
    }
  }
}

In order to know who the winner is, we first determine Alice’s selection with a switch statement, and then determine Bob’s selection with another switch statement. One can notice that the function (though short) still looks a bit cluttered.

Solution 2: Object structure

Let’s create an object, similar to the switch structure, with keys for the possible selections. The value of each key is another object that contains only the selections that will lead to a win.

const RPS_rules = {
  Rock: { Scissors: true },
  Scissors: { Paper: true },
  Paper: { Rock: true }
};

Now we will use the object to construct a function that, given an object in the structure described earlier, will return a new function.
The new function will receive Alice’s and Bob’s selections and output who the winner is.

function createRuleGame(rulesObj) {
  return function (alice, bob) {
    // If Alice and Bob chose the same option, we will output a tie
    if (alice === bob) {
      console.log("Alice and Bob tie");
    } else {
      // variable to determine if Alice won
      let isAliceWin = rulesObj[alice][bob];
      if (isAliceWin) {
        console.log("Alice Wins!");
      } else {
        console.log("Bob Wins!");
      }
    }
  }
}

Finally, we can build the desired function!

const Object_RPS_Eval_Game = createRuleGame(RPS_rules);

That’s it! This is the whole function! Notice how we don’t have the clutter of the case, default and break statements that was previously added to the code.

Switch_RPS_Eval_Game("Rock", "Paper"); // Bob Wins!
Object_RPS_Eval_Game("Rock", "Paper"); // Bob Wins!

So why do we REALLY need to stop using the switch statement?

After a long afternoon of playing Rock, Paper and Scissors, Alice and Bob wanted to play something more exciting and decided to play Rock, Paper, Scissors, Lizard, Spock. Rock, Paper, Scissors, Lizard, Spock is a game, invented by Sam Kass and Karen Bryla, that extends the Rock, Paper and Scissors game. The rules of the game are as follows:

Scissors cuts Paper
Paper covers Rock
Rock crushes Lizard
Lizard poisons Spock
Spock smashes Scissors
Scissors decapitates Lizard
Lizard eats Paper
Paper disproves Spock
Spock vaporizes Rock
(and as it always has) Rock crushes Scissors

Below is a graphical representation: A diagram of the game’s rules. (from The Big Bang Theory Wiki | Fandom)

Once again, our task is to write a function that evaluates a single game between Alice and Bob and outputs the winner of the game.

Solution 1: Switch structure (The REALLY messy solution)

Since we can’t “extend” the function we created earlier, we are left with no other choice but to create an entirely new function.
function Switch_RPSLS_Eval_Game(alice, bob) {
  // If Alice and Bob chose the same option, we will output a tie
  if (alice === bob) {
    console.log("Alice and Bob tie");
  } else {
    let isAliceWin; // variable to determine if Alice won
    // Switch on Alice's choice
    switch (alice) {
      case "Rock":
        switch (bob) {
          case "Lizard":
          case "Scissors":
            isAliceWin = true;
            break;
          default:
            isAliceWin = false;
        }
        break;
      case "Scissors":
        switch (bob) {
          case "Lizard":
          case "Paper":
            isAliceWin = true;
            break;
          default:
            isAliceWin = false;
        }
        break;
      case "Paper":
        switch (bob) {
          case "Spock":
          case "Rock":
            isAliceWin = true;
            break;
          default:
            isAliceWin = false;
        }
        break;
      case "Lizard":
        switch (bob) {
          case "Paper":
          case "Spock":
            isAliceWin = true;
            break;
          default:
            isAliceWin = false;
        }
        break;
      case "Spock":
        switch (bob) {
          case "Rock":
          case "Scissors":
            isAliceWin = true;
            break;
          default:
            isAliceWin = false;
        }
        break;
    }
    if (isAliceWin) {
      console.log("Alice Wins!");
    } else {
      console.log("Bob Wins!");
    }
  }
}

Not only did we have to duplicate the code from the previous function, this function is more cluttered, harder to read, and more difficult to maintain.

Solution 2: Object structure (The cleaner solution)

Thanks to the function we created earlier (createRuleGame), all we need to do is create a new object with the game’s “rules” and use it.

const RPSLS_rules = {
  Rock: { Scissors: true, Lizard: true },
  Scissors: { Paper: true, Lizard: true },
  Paper: { Rock: true, Spock: true },
  Lizard: { Paper: true, Spock: true },
  Spock: { Scissors: true, Rock: true }
};

And the desired function is created as follows:

const Object_RPSLS_Eval_Game = createRuleGame(RPSLS_rules);

Mind blowing!

Switch_RPSLS_Eval_Game("Lizard", "Paper"); // Alice Wins!
Object_RPSLS_Eval_Game("Lizard", "Paper"); // Alice Wins!

The bottom line

It will be a long time before the switch statement is deprecated, but that doesn’t mean we have to keep using it.
By leveraging the power of objects and functional programming concepts in JavaScript (and other programming languages), we can create a cleaner, more readable, and easier-to-maintain codebase. So, if you still have some “ugly” switch statements in your code, don’t be afraid to give them the makeover they deserve!
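The same dispatch-object idea carries over directly to the other programming languages mentioned above. As a hedged illustration (the names here are mine, mirroring the createRuleGame pattern rather than quoting the article's code), here is the rules-object approach in Python, where a dict of sets plays the role of the JavaScript object:

```python
def create_rule_game(rules):
    """Build a game evaluator from a 'who beats whom' mapping."""
    def play(alice, bob):
        if alice == bob:
            return "Alice and Bob tie"
        # Alice wins exactly when Bob's choice is in the set she beats
        return "Alice Wins!" if bob in rules[alice] else "Bob Wins!"
    return play

RPS_RULES = {
    "Rock": {"Scissors"},
    "Scissors": {"Paper"},
    "Paper": {"Rock"},
}

rps = create_rule_game(RPS_RULES)
print(rps("Rock", "Paper"))  # Bob Wins!
```

As in the JavaScript version, upgrading to Rock, Paper, Scissors, Lizard, Spock means writing a new rules dict and nothing else; no new control flow is needed.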
https://nisimdor.medium.com/why-we-really-need-to-stop-using-switch-statements-in-javascript-cdd0ab61ef5a
['Dor Nisim']
2020-11-25 20:28:18.341000+00:00
['Objects', 'JavaScript', 'Switch', 'Functional Programming', 'Design Patterns']
5 Easy Ways of Customizing Pandas Plots and Charts
5 Easy Ways of Customizing Pandas Plots and Charts

Pandas gives you a simple and attractive way of producing plots and charts from your data. But sometimes you want something a little different. Here are some suggestions.

Perhaps you are a data journalist putting a new story together, or a data scientist preparing a paper or presentation. You've got a nice set of charts that are looking good, but a bit of tweaking would be helpful. Maybe you want to give them all titles. Maybe some would be improved with a grid, or the ticks are in the wrong places or too small to easily read. You know how to produce line plots, bar charts, scatter diagrams, and so on but are not an expert in all of the ins and outs of the Pandas plot function (if not, see the link below).

You don't have to stick with what you are given. There are quite a lot of parameters that allow you to change various aspects of your diagrams. You can change labels, add grids, change colors and so on. Using the underlying matplotlib library you can change just about every aspect of what your plots look like, and it can get complicated. However, we are going to look at some of the easier things we can do just with Pandas.

Before we start you'll need to import the appropriate libraries and get some data. Let's start by importing all of the libraries that you'll need to run the examples:

# The first line is only required if you are using a Jupyter Notebook
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

This is all pretty standard stuff which should be familiar from my previous article. One thing to note, though, is the first line — if, like me, you are using a Jupyter Notebook then you may need to include it. If you are using a normal Python program, or the Python console, then you should not include it.

Getting some data

You need some data to work with, so we'll use the same data set as previously: weather data from London, UK, for 2018.
Load the data like this:

weather = pd.read_csv('https://raw.githubusercontent.com/alanjones2/dataviz/master/london2018.csv')
print(weather[:2])

   Year  Month  Tmax  Tmin  Rain   Sun
0  2018      1   9.7   3.8  58.0  46.5
1  2018      2   6.7   0.6  29.0  92.0

The print statement prints out the first couple of lines of the table, representing January and February. You can see that there are four pieces of data (apart from the year and month): Tmax is the maximum temperature for that month, Tmin is the minimum temperature, Rain is the rainfall in millimeters, and Sun is the total hours of sunshine for the month.

So let's just draw a simple plot of how the maximum temperature changed over the year:

weather.plot(x='Month', y='Tmax')
plt.show()

This is the default chart and it is quite acceptable. But we can change a few things to make it more so.

1. Change the size and color

The first thing that you might want to do is change the size. To do this we add the figsize parameter and give it the sizes of x and y (in inches). The values are given as a tuple, as below. To change the color we set the color parameter. The easiest way to do this is with a string that represents a valid web color such as 'Red', 'Black' or 'Teal'. (Note: you can find a list of web colors in Wikipedia.)

weather.plot(x='Month', y='Tmax', figsize=(8,5), color='Red')
plt.show()

2. Setting a title

It's very likely that for an article, paper or presentation, you will want to set a title for your chart. As you've probably gathered, much of this is knowing what the correct parameters are and setting them properly. The parameter to set a title is title. Of course! Here's the code:

weather.plot(x='Month', y='Tmax', title="Maximum temperatures")
plt.show()

3. Display a grid

While the default charts are fine, sometimes you want your audience to more easily see certain values in your chart. Drawing gridlines on your plot can help. To draw the grid, simply set the grid parameter to True. Pandas defaults to False.
weather.plot(x='Month', y='Tmax', grid=True)
plt.show()

4. Changing the legend

The legend is given the name of the column that represents the y axis. If this is not an acceptably descriptive name, you can change it. Or, indeed, you can eliminate it altogether! If we want to remove it we set the parameter legend to False. If we want to change the label we incorporate the label parameter and set it to the string that we want displayed.

weather.plot(x='Month', y='Tmax', legend=False)
plt.show()

weather.plot(x='Month', y='Tmax', label='Max Temp')
plt.show()

5. Customizing the ticks

Ticks are the divisions on the x and y axes. You can see that on our charts they are labelled from 10 to 25 on the y axis and 2 to 12 on the x axis. Given that the bottom set are supposed to represent the months, it would be better if they went from 1 to 12. We can set the tick labels with tuples. If we want to display all twelve months we would set the parameter xticks to (1,2,3,4,5,6,7,8,9,10,11,12). You can do a similar thing with the yticks parameter. Take a look at this code:

weather.plot(x='Month', y='Tmax', xticks=range(1,13), yticks=(0,10,20,30))
plt.show()

As you can see, I've set both sets of ticks explicitly. The y ticks now start at 0 and go up in tens to 30, and the x ticks show every month. But I've been a bit sneaky here: rather than use the tuple I showed you above, I've used the Python range function to generate the values from 1 to 12 (less typing!).

If you wanted to remove the ticks altogether, it would be a simple matter to set either parameter to an empty tuple, e.g. xticks=().

weather.plot(x='Month', y='Tmax', xticks=())
plt.show()

If you want to emphasize the ticks even more you can change the font size. In the example below, you can see how.

plot = weather.plot(x='Month', y='Tmax', xticks=range(1,13), fontsize=18)
plt.show()

It could get messy

I mean your code could get messy. What if you wanted to set the ticks, a title, labels, a grid and so on?
First, the line of code to do the plot would be very long and, second, if you have several plots to make, you'd find yourself repeating it. Here's a solution. Assume that you want all of your plots to look the same. What you do is define a dictionary of all of the parameters that you want to apply to all of your charts, like this:

plot_kwargs = {'xticks': range(1,13), 'yticks': (0,10,20,30), 'grid': True, 'fontsize': 12}

Then, instead of typing in all of the parameters explicitly, you can take advantage of Python's ** operator, which will expand a dictionary into a list of keyword arguments. I've called the variable plot_kwargs, as kwargs is the conventional name given to a variable containing keyword parameters (which is what these are). Use them like this:

weather.plot(y='Tmax', x='Month', **plot_kwargs)
plt.show()

Now you can use the same dictionary for other plots, too. For example:

weather.plot.scatter(y='Tmax', x='Month', legend=True, label='Max Temperature', **plot_kwargs)
plt.show()

Here, I've used the plot_kwargs dictionary to set the default parameters but explicitly set the ones for the individual plot.

That's it

Well, no. It isn't really. There is a lot more you can do to customize your plots, both with Pandas and matplotlib. You can also make changes when you save the plots to a file. There is just far too much to cover in a single article. Thanks for getting to the end of the article; I hope you have found it useful.
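The article closes by mentioning that you can also make changes when you save plots to a file. As a hedged sketch of one way to do that: a pandas plot returns a matplotlib Axes, and the Figure it belongs to can be written out with savefig. The filename and the monthly values after February are illustrative, not from the article:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for scripts/CI
import pandas as pd

# Only January and February match the article's sample output; the remaining
# Tmax values are made up for illustration.
weather = pd.DataFrame({
    "Month": list(range(1, 13)),
    "Tmax": [9.7, 6.7, 9.8, 15.5, 20.8, 24.2, 28.3, 27.4, 21.0, 16.0, 11.2, 10.7],
})

# Same keyword parameters used throughout the article.
ax = weather.plot(x="Month", y="Tmax", title="Maximum temperatures",
                  figsize=(8, 5), grid=True, xticks=range(1, 13))

# The output format is inferred from the file extension (.png, .pdf, .svg, ...).
ax.get_figure().savefig("tmax.png")
```

In a notebook you would usually keep plt.show(); in a script, savefig replaces it.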
https://towardsdatascience.com/5-easy-ways-of-customizing-pandas-plots-and-charts-7aefa73ff18b
['Alan Jones']
2020-01-26 12:53:42.261000+00:00
['Python', 'Data Science', 'Data Visualization', 'Pandas']
P-Hacking Recession Indicators
By Caitlin Whalen, David Gao, Tian Xie, Zhanni Gao, Pam Wu, Nick Hespe, Sina Chehrazi, Mark Solomon, Lee Gutman and Juliana Sullam

Every day in the media we read about an imminent economic downturn in the U.S. Depending on the article and the related data it references, the next recession sounds as though it could be mere months — if not minutes — away. Given this media focus, we went into Enigma's Hack Week determined to find out whether we could more accurately predict a recession by looking at public data. After all, there were public data signals for the 2008/2009 recession, e.g., the data around unemployment, mortgages, housing prices, and so on. We asked ourselves: What could we be looking for in public data now that might predict the next recession, recognizing that the causes of recessions are not often repeated?

Within minutes it became clear that out-predicting a leading economist or think tank would be an impossible feat, but we decided to see whether we could find any public data that might at least correlate with, if not act as a leading indicator of, GDP contraction. Our intention was to have a little fun, but also illustrate just how easily conclusions can be drawn from spurious leading correlations. We looked at the most commonly tracked indicators (e.g., the yield curve, U.S. stock market performance, housing prices, unemployment rates) and compared those findings with some more, shall we say, obscure public data sources (e.g., avocado prices, cereal production, number of lawyers) to see if we could find a link. Thus began our p-hacking quest.

The Approach

1. Gather any/all time-series public data over the past 2 recessions (~ last 20 years)
2. Clean data to a uniform format
3. Run Granger Causality Analysis across all datasets

Our approach was to p-hack our data to try and uncover any correlations between GDP (specifically GDP percentage change per quarter) and random public data sets that would offer any type of indicator.
While p-hacking is widely shunned amongst data scientists, it proved to be an ideal approach to uncovering spurious leading correlations between GDP and random public datasets. Moonlighting as p-hackers over Hack Week illuminated just how easy it is to manipulate data analysis to fit a certain narrative or thesis. In our case, looking for potential recession signals across a random assortment of both traditional and unconventional data sources yielded some interesting and some obvious findings. While we by no means hold our brief analysis on par with the many economists that spend decades predicting recession activity, our p-hacking revealed that housing sales, retail alcohol sales and lightweight truck sales can be seen as leading indicators for a recession. You can scroll down to view a few spotlight analyses below.

P-Hacking Highlights

Ratio of houses for sale versus number of houses sold ("monthly supply of houses")
Granger Causality (P value) = nearly 0 (lag of 2,3,4,5)

Our analysis revealed a negative correlation between trends in the ratio of houses for sale to houses sold and GDP, providing a leading indicator for economic growth. Instances of sharp increases in the listed-to-sold ratio correlate negatively with GDP for up to three subsequent quarters. With a p-value of approximately 0.0001, the relationship is reasonably firm, at least by p-hacking standards. The monthly supply of houses has been steadily increasing since November 2017, perhaps a sign that the U.S. economic outlook isn't great.

Lightweight truck sales (total per quarter)
Granger Causality (P value) = 0.0357 (lag of 3)

We observed a positive correlation between lightweight truck sales (e.g., Ford F-150s) and U.S. GDP. Our analysis indicates a strong association between truck sales and potential recessions, which makes sense, as higher truck sales would seem to indicate optimism regarding overall U.S.
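The Granger tests above were presumably run with a statistics package (statsmodels' grangercausalitytests is the usual choice in Python). As a rough sketch of what the test does under the hood, the idea is to compare a restricted autoregression of the target series on its own lags against an unrestricted one that also includes the candidate indicator's lags, then F-test the improvement. The function name and synthetic data below are mine, not from the post:

```python
import numpy as np

def granger_f_stat(y, x, lag=1):
    """F statistic for 'x Granger-causes y' at the given lag: does adding
    lagged x to an autoregression of y reduce the residual sum of squares?"""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    Y = y[lag:]
    # Lag matrices: column i holds the series shifted by (i + 1) steps.
    ylags = np.column_stack([y[lag - i - 1 : n - i - 1] for i in range(lag)])
    xlags = np.column_stack([x[lag - i - 1 : n - i - 1] for i in range(lag)])
    ones = np.ones((n - lag, 1))
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r = rss(np.hstack([ones, ylags]))         # restricted: y lags only
    rss_u = rss(np.hstack([ones, ylags, xlags]))  # unrestricted: plus x lags
    df_den = n - lag - (1 + 2 * lag)              # observations minus parameters
    return ((rss_r - rss_u) / lag) / (rss_u / df_den)

# Synthetic demonstration: x leads y by one step.
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.empty(300)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + rng.normal(scale=0.3, size=299)

print(granger_f_stat(y, x, lag=1))  # large F: lagged x predicts y
print(granger_f_stat(x, y, lag=1))  # small F: no effect the other way
```

As the article stresses, a significant F here shows predictive lead, not a meaningful economic connection.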
economic health, while slower truck sales might forecast concerns about near-term economic performance. Lightweight truck sales have been increasing annually since 2010, which may be a positive indicator for continued GDP growth.

Alcohol retail sales (beer, wine, liquor, seasonally adjusted total per quarter)
Granger Causality (P value) = 0.0001 (lag of 2)

We observed a significant positive correlation between the level of alcohol retail sales and GDP. Once again, the association seems intuitive, as alcohol is a luxury good for most people and consumption would seem to increase with stronger economic performance. (However, it would be perhaps equally if not more intuitive for alcohol sales to spike ahead of and during an economic downturn…)

Conclusion

Ultimately, our P-Hack Week experience taught us that as we hear predictions of a forthcoming recession, all of these analyses should be taken with a grain of salt. It's very easy to find correlations with GDP, but that doesn't signify a meaningful connection. In the meantime, we'll be closely monitoring things like alcohol and Ford F-150 sales :).

___________________________________

Project Notes
https://medium.com/enigma-engineering/p-hacking-recession-indicators-5ff86a5aae19
[]
2019-03-14 17:34:33.652000+00:00
['Public Data', 'Recession', 'Hack Week', 'Hacking', 'Data Visualization']
How to run localhost web apps on your remote device without USB connection
Alright, let's get started. Today's agenda is to learn how to open your web app (running at localhost) on your mobile. I am going to show you two ways to get this job done. The first one is a very common way to do it (through your IP address), while the second one is not so common, using ngrok.

But why two methods? Why not a single method? The first method (using the IP address) sometimes doesn't work; the issue might be because of a public network, or your firewall might be blocking it, so the second one makes it foolproof.

In this small tutorial, I will be trying to test a sample "Hello world" web app. Setting that up is very simple:

Open your project directory in VS Code
Add a file, index.html
Inside index.html, paste this simple code —

<html lang="en">
  <head>
    <title>Test</title>
  </head>
  <body>
    <h1>Hello World!</h1>
  </body>
</html>

After that, serve this file through Live Server, and then you will be able to see it on your localhost. Now you can try opening the same URL on your phone's browser and see that it won't work. Let's see how you can open this on your phone ( ͡~ ͜ʖ ͡°)
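Some background on why the IP-address method works at all: dev servers that bind to 127.0.0.1 are only reachable from your own machine, while servers bound to 0.0.0.0 also listen on your LAN IP, which is what your phone connects to. A minimal sketch with Python's built-in http.server (a stand-in for Live Server, not part of the article's setup; port 0 just asks the OS for any free port, in practice you'd pick a fixed one such as 8000):

```python
import http.server
import socketserver
import threading
import urllib.request

# Bind to 0.0.0.0 (all interfaces) rather than 127.0.0.1, so other devices
# on the same Wi-Fi can connect via your computer's LAN IP.
server = socketserver.TCPServer(
    ("0.0.0.0", 0), http.server.SimpleHTTPRequestHandler
)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Sanity check from this machine; on your phone you would browse to
# http://<your-computer's-LAN-IP>:<port>/ instead.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
server.shutdown()
print("server responded with status", status)
```

If the LAN route is blocked by the network or a firewall, that is exactly the case where ngrok's public tunnel comes in.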
https://medium.com/javascript-in-plain-english/how-to-run-localhost-web-apps-on-your-remote-device-without-usb-connection-5278b6296bcb
['Madhav Bahl']
2020-08-26 11:43:40.532000+00:00
['Deployment', 'JavaScript', 'Testing', 'Productivity', 'Web Development']
The 3 pillars of the OOP
A few days ago I had a Python developer job interview, and besides studying the syntax of the language I thought it was a pretty good idea to take another look at the "3 pillars of object-oriented programming". In other words, that just means what makes an OOP language an OOP language. In online resources you might find not just 3 but 4 or even 5, and they can be called different things, but the definition remains the same. That's okay, but in my opinion we can summarize them all in just the ones I will be explaining in this post.

I like to understand things really from the root, so let's take a quick look at what OOP is. From the Wikipedia definition:

"Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which can contain data, in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods)."

Object-oriented languages include Java, C++, C#, Python, PHP, JavaScript, Ruby, Perl and many more. Now, there are some specific features that a programming language must provide in order to be called an object-oriented language. As I said before, I will summarize them as these 3:

Encapsulation
Specialization
Polymorphism

From here on I will attach examples in Python, because it is the language I like the most so far. Don't worry: of course the concepts hold for every other object-oriented language, and based on the hacker.io most-popular-languages statistics, Python is not doing badly at all!

Encapsulation

Every class you define should be fully encapsulated; that means every class you define should contain the whole class data and the functions to manipulate that data. From the example on O'Reilly: "If you create an Employee object, that Employee object should fully define all there is to know, from the perspective of your program, about each Employee.
You do not, typically, want to have one class that defines the Employee's work information, and a second, unrelated class that defines the Employee's contact information. Instead, you want to encapsulate all this information inside the Employee class, perhaps by aggregating the contact information as a member of the Employee class.”

In our example we will use a "Person" class:

class Person():
    '''Person class'''

    def __init__(self, name='Noname', age=0, gender='None', sexuality='None', religion='None'):
        '''Initialize person'''
        self.__name = name
        self.__age = age
        self.__gender = gender
        self.__sexuality = sexuality
        self.__religion = religion

    def get_name(self):
        '''Return person name'''
        return self.__name

    def set_name(self, name):
        '''Set person name'''
        self.__name = name
        return self.__name

    def get_age(self):
        '''Return person age'''
        return self.__age

    def set_age(self, n):
        '''Set person age'''
        self.__age = n
        return self.__age

    def get_gender(self):
        '''Return person gender'''
        return self.__gender

    def set_gen(self, gen):
        '''Set person gender'''
        self.__gender = gen
        return self.__gender

    def get_sexuality(self):
        '''Return person sexuality'''
        return self.__sexuality

    def set_sex(self, sex):
        '''Set person sexuality'''
        self.__sexuality = sex
        return self.__sexuality

    def get_religion(self):
        '''Return person religion'''
        return self.__religion

    def set_rel(self, rel):
        '''Set person religion'''
        self.__religion = rel
        return self.__religion

Let's say that this is all the information and data required from a person, and assume that we will only ever need to pull out and update this information. We can note, then, that all the data and the functions (methods) to handle that data are inside the definition of the class itself. We can then say this class is fully encapsulated.

Specialization

Specialization allows you to establish hierarchical relationships among your classes. That means you can define a class to be derived from another existing class.
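Before moving on, a side note that is not from the article: the Java-style get_/set_ pairs above work, but idiomatic Python usually expresses the same encapsulation with the @property decorator, which keeps plain attribute syntax while still routing every read and write through a method. A minimal sketch (the validation rule is my own illustration):

```python
class Person:
    """Same encapsulation idea, using properties instead of get_/set_ methods."""

    def __init__(self, name="Noname", age=0):
        self._name = name
        self._age = age

    @property
    def age(self):
        """Read access: person.age"""
        return self._age

    @age.setter
    def age(self, value):
        """Write access: person.age = n, with a guard the setter can enforce."""
        if value < 0:
            raise ValueError("age cannot be negative")
        self._age = value

p = Person("sech", 34)
p.age = 35        # goes through the setter
print(p.age)      # -> 35
```

Callers use ordinary attribute syntax, yet the class still controls all access to its data, which is the whole point of encapsulation.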
By doing this, the "child" class will inherit the characteristics of the "parent" class. This is pretty useful for avoiding unnecessarily duplicated code. We are going to create a "Teacher" class to understand this better. Think about it: a teacher is a person, right? By saying this I want to highlight that there is no need to duplicate code in order to return or set a teacher's information like name, age, or any other attribute that already exists in the Person class. This is what inheritance is useful for!

class Teacher(Person):
    '''Teacher class inherits from Person class'''

    def __init__(self, knowledge, experience):
        '''Initialize teacher'''
        super().__init__()
        self.__knowledge = knowledge
        self.__experience = experience

    def get_knowledge(self):
        '''Return teacher knowledge'''
        return self.__knowledge

    def set_knowledge(self, know):
        '''Set teacher knowledge'''
        self.__knowledge = know
        return self.__knowledge

    def get_experience(self):
        '''Return teacher experience'''
        return self.__experience

    def set_experience(self, exp):
        '''Set teacher experience'''
        self.__experience = exp
        return self.__experience

You see? Using this principle, we don't have to redefine the same attributes and methods for the Teacher class, only the ones the Teacher class specially needs. To call the parent class's methods here we used the super() builtin. If you want to know more about it, click here.

Polymorphism

But... just a moment. What if the teacher doesn't want to share some private information, such as their religion or sexuality? We should totally respect this decision, right? The correct thing to do would be to ask the teacher first whether they want this information to be released, don't you think? That being said, we know now that the get_religion() method has to be different from the Person class one. And this is where polymorphism plays its role. From overig.com: “In literal sense, Polymorphism means the ability to take various forms.
In Python, Polymorphism allows us to define methods in the child class with the same name as defined in their parent class. As we know, a child class inherits all the methods from the parent class. However, you will encounter situations where the method inherited from the parent class doesn’t quite fit into the child class. In such cases, you will have to re-implement the method in the child class. This process is known as Method Overriding.”

Overriding the get_religion() method for the Teacher class:

    def get_religion(self):
        '''Return teacher religion if they want to share it'''
        print('Do you want to share this?')
        answer = input()
        if answer == 'yes':
            print('Thank you for your consent')
            return super().get_religion()
        if answer == 'no':
            print("Sorry, this person doesn't want to share their religion")
            return
        else:
            print('Answer needs to be yes or no')
            return

Makes sense, right? Let's run a little test:

man = Person('sech', 34, 'Male', 'Hetero', 'Christian')
print('Person religion is:', man.get_religion())

Teacher_1 = Teacher('Math', 4)
Teacher_1.set_rel('Catholic')
print('Teacher religion is:', Teacher_1.get_religion())

Output:

Person religion is: Christian
Do you want to share this?
yes
Thank you for your consent
Teacher religion is: Catholic

If you want the code to run your own tests, you are welcome here. I hope this has been helpful for you somehow.
https://medium.com/analytics-vidhya/the-3-pillars-of-the-oop-4308edeb6230
['Sech Rueda']
2020-04-22 17:06:58.824000+00:00
['Object Oriented', 'Oop Concepts', 'Python', 'Programming Languages']
Facebook’s News Feed: Why There’s No Silver Bullet to Reach
Last week the internet went into a collective panic over the proposition that Facebook could remove non-sponsored page posts from the organic News Feed. Whilst the story turned out to be much ado about nothing, it did lead to a lot of questions around the importance of News Feed as a channel. It’s no secret that Facebook is becoming increasingly more pay-to-play for brands, with organic engagements for pages declining around 20% since January. The easy answer to ‘reach decay’ is the sheer volume of content versus the available attention of users. The Facebook algorithm, a closely-guarded secret that is constantly tweaked to provide the best user experience, is usually a mystery to publishers and advertisers. Many ‘hacks’ exist in an attempt to game the algorithm, with a lot of urban legends over how best to share content — from posting links in comments through to only sharing a minimum amount per week, every social media guru has their own idea on how this works. Facebook themselves have this week weighed in, with a short video on key actionables to have your content rank better in the News Feed. A condensed version of the ‘Whats New with News Feed?’ presentation given by VP of News Feed Adam Mosseri earlier this year, the information given is valuable to any publisher looking at ways to tweak their social content to generate the best reach and engagement possible. While not at all a silver bullet in the quest to top the News Feed, there are some important lessons to take away here. The News Feed is different for each user Every person values and engages with content differently, thus individuals are served content they are more likely to care about first. 
Mosseri explains the detail that goes into displaying content in each user’s feed — the algorithm evaluates everything from your likelihood to engage to how much negative feedback the publisher has received before, to how much time you might spend reading a piece of content and even the frequency of posts from that particular user. All of this is calculated and weighted to determine what the post’s score is and whether you’re part of the audience who will see it. There is no ‘gaming’ the system Engaging content is engaging content, period. Even if you delete a particular post that didn’t get the engagement you wanted, that post will still contribute to your overall performance rating on Facebook. This counts for both organic posts as well as ads — meaning an advertising campaign with poor targeting could hurt your organic engagement too. Having said that, spending money on a post does not guarantee its reach. Many advertisers have experienced frustration when ads with too much text result in a disapproval or limited reach. The same goes for a wealth of other factors that aren’t considered, including your overall engagement levels. The best communication strategy includes a calculated content creation effort, as well as a carefully targeted advertising campaign. Which leads to the next point, which is; Consider engagement When creating content many event marketers will consider reactions (likes, comments, shares) as a hero metric of success. This can certainly be the case with social-proofing but isn’t always the best gauge. A dynamic carousel with no comments can convert purchasers at $0.82 per purchase, whilst a strong video with hundreds of comments can sell two tickets at $250 per purchase. Facebook will consider and weight both of these metrics in deciding what content will be shown. The official recommendation from Facebook is to ask yourself, “Would people share my story with their friends or recommend it to others?” We agree. 
Post Frequently

Many event marketers have their own rules on when to post. This is a good way to ensure you're engaging your community at the times they're most active. However, rules on how often you post are based on now-outdated data about how content was previously distributed in the feed. Facebook's own guidelines now advise publishers to post often, allowing them to increase the chances of the algorithm matching them to their potential audience. So long as your post is "new and high quality", the frequency of your posts won't be a penalising factor.

Post Quality

Every post has a range of factors that will affect how strongly it ranks in the feed. By raising the quality of your posts you'll improve your rank and reach. Consider some of these factors:

Website experience
a) How mobile-responsive is the page you've linked to? If it doesn't handle mobile well your reach will suffer.
b) Does the page load quickly? Many internet users will abandon a page that loads slowly. On both mobile and desktop the load time of the page is a crucial factor.

Media
c) Are your videos or photos high quality? Are they compelling? Most high-quality images will load quickly at today's internet speeds and can be buffered at a lower quality if need be.
d) Have you used closed-captioning in your videos? Should you? Accessibility can be a back-seat consideration but shouldn't be — not just for inclusion but for the fact that many videos are watched with no sound on.
e) Is your content appropriate? Does it use overly sexual, violent or malicious imagery? Facebook has cracked down on this content heavily in recent months and this should be a key consideration when creating your content.
f) Do you own the intellectual property of the content you're sharing? If not, you are potentially at risk of a copyright breach, not to mention that recycled content will generally rank lower.

Type of Post
g) Is your post timely? Are you capitalising on a trending topic that is no longer relevant?
For events, this could mean running key content at your peak publicity moment and not delaying unnecessarily.
h) Is your post clickbait? Do people drop off quickly when they land on your page or do they stay and engage? The faster people drop off or abandon your site, the more likely it is that your post will be considered low quality.
i) Is the content original, or rehashed and dated content? Sharing a post more than once can be good practice for getting your message across but can also be considered spam. Consider other ways to spread the message — perhaps by crossposting a video, or utilising another page.

All of these and more show just how broad the range of factors is that determine your post's quality, and therefore its reach. It may take you just a moment to publish a post, but skipping that extra moment of consideration and forward planning could cost you dearly in organic reach. The reality is that no one who claims to know how to rank you higher in the News Feed can guarantee you results. In the end, you are at the mercy of the audience you're engaging. With a carefully planned social strategy and a focus on engaging, timely and high-quality content, your chances of reaching your intended targets are much higher than posting simply for the sake of posting. To learn more about Facebook News Feeds, you can read the News Feed Publisher Guidelines.
https://medium.com/bolstered/facebooks-news-feed-why-there-s-no-silver-bullet-to-reach-3aa13d5bc7dc
['Sean Singh']
2018-02-22 04:56:48.874000+00:00
['Social Media', 'Digital', 'Facebook Marketing', 'Digital Marketing', 'Facebook']
Don Da Menace Drops Dope ‘Falling Toward Success, Vol. 1’
Don Da Menace Drops Dope ‘Falling Toward Success, Vol. 1’

Slow, low, and banging. New York City-based alternative hip-hop artist Don Da Menace releases a new collection of tracks, entitled Falling Toward Success, Vol. 1, reflecting his thoughts and feelings on the present COVID-19 crisis and the social upheaval occurring throughout the country. Originally from East Harlem, Don Da Menace grew up inspired by the music of his late father, Smoove Da Menace, whose rootsy, organic sound made a huge impact on Don's unique compositions. Featured in numerous online outlets for his potent singles “Ruby” and “Enough,” the latter with Robert Eberle & Versa the Band, on Falling Toward Success, Vol. 1, Don Da Menace imparts a mixture of his prior releases, along with new tracks offering a tasty smorgasbord of his innovative exploration of novel sonic expressions, infused with old school energy. Encompassing 13 tracks, the mixtape starts off with “Tomorrow,” riding a tight, low-slung rhythm topped by gleaming accents supported by a fat, popping bassline. Don's flow is smooth yet rife with smart rhymes. Entry points include “Cold War (Baba Child),” featuring a spoken-word intro rolling into exotic colors traveling on a slow, low, and banging rhythm. The track blends retro ’60s rock hints with big boom resonance, imbuing the tune with brawny allure. Distant echoing harmonies give Don's flow an enigmatic aspect. “Worth It” offers stridently luminous synths gliding over the Jovian thump of a kick-drum, while Don's intense, almost anguished flow infuses the rhymes with an inhibiting prospect. “Low Key,” featuring FT$ Teddy, exudes dark, heavy dynamics, as well as sparkling high tones, while Don brings the heat with skintight flow. “Fight or Flight” features eerie glistening notes floating like ghosts over a muscular, slow, rumbling rhythm, full of cavernous depth.
This track is totally dope because of its dual layers of sound, one flashing and shiny, the other throbbing with gravitational power. “Ruby” merges psychedelic-flavored synths with a viscous, deep trap beat. The ultimate track, “Rap Shit,” relies on an undulating rhythm, radiant harmonies, and Don’s scrummy flow. Cool R&B, old school soulful textures make this track a winner. Falling Toward Success, Vol. 1 stands out because of its amalgamation of daring fresh sounds with old school oomph. Follow Don Da Menace Facebook | Instagram | YouTube | Spotify
https://medium.com/pop-off/don-da-menace-drops-dope-falling-toward-success-vol-1-d4a18aa938ec
['Randall Radic']
2020-09-22 13:01:45.973000+00:00
['Don Da Menace', 'Hip Hop', 'Music', 'Rap', 'Falling Toward Success']
Death Threats Are Not Hyperbole
It shouldn't have taken that to persuade me. After all, trolls don't even bother to call what they do hyperbole. Only government officials, apparently, do that. Hyperbole is "extravagant exaggeration," according to Merriam-Webster. Dictionary.com defines it as "obvious and intentional exaggeration." Urban Dictionary defines it as "an exaggeration so big, it creates a black hole." Or, "Bullshit."

Recently, Christopher Krebs filed a defamation lawsuit against 45's lawyer Joseph diGenova and Newsmax. In the Newsmax interview, diGenova stated that Krebs should be "drawn and quartered," or "shot at dawn." Why? Christopher Krebs had stated that the 2020 election was the "most secure in American History." Krebs' defamation lawsuit contends that those death penalties are for convicted traitors, which he most decidedly is not. Hence the charge of defamation. Krebs and his lawyer are counting on the defamation of calling him a traitor being easier to prove than harm from the actual death threat.

Why would diGenova consider Krebs a traitor? Because Krebs had the audacity to proclaim, as head of the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, CISA, that the 2020 election was the most secure in American History. His agency found no evidence of any voting system deleting or losing votes, changing votes, or in any way compromising the election. By the way, Krebs is a life-long Republican. And yet, apparently, diGenova considers him a traitor. A traitor to what? Not to the United States. Not to the current president, as he did not threaten him with death, which can be considered treason. Rather, he is being called a traitor to 45's narrative that he won, in spite of Biden's decisive victories in the popular and Electoral College vote.

Worse, diGenova's defense against the defamation charge is that what he said was hyperbole. Hmmm. I'm liking the Urban Dictionary definition of "Bullshit." You either want someone dead, or you don't.
And even if you don’t personally want to kill them yourself, there are others, trolls and malcontents, who might take you at your word. Krebs has received death threats from just these types, following diGenova’s statements. As a psychotherapist, I can tell you that it’s difficult to know exactly who will carry out such threats and who will not. It’s never a good idea to gamble on a death threat. Death threats are a dangerous game. Politicians and their lawyers are some of the last people who should be making them, hyperbole or not. Not only because they should show more decorum and plain old common sense, but because their unstable followers don’t understand hyperbole. CNN’s John Avlon recently stated, “No less than six election officials around the country, many of them Republicans, have received serious death threats to date. Georgia Secretary of State Brad Raffensperger requires security, his wife is getting messages on her cellphone that read, ‘Your husband deserves to face a firing squad.’ His deputy, Gabriel Sterling, is getting threats while an election technician was accused of treason and sent a noose.” Maybe diGenova should have considered labeling his comments hyperbole, with the included definition, before signaling to the election “fraud” fanatics. Calling it hyperbole afterward is closing the barn door after the trolls have escaped. State criminal codes don’t consider death threats hyperbole. “Under state criminal codes, which vary by state, it is an offense to knowingly utter or convey a threat to cause death or bodily harm to any person.” The Supreme Court, overall, has protected political hyperbolic speeches that appeared to be threats. In most rulings, they found that only “true threats” are outside the protection of the first amendment. 
But what’s a “true threat?” The Ninth Circuit stated in one case that, “It is not necessary that the defendant intend to, or be able to carry out his threat; the only intent requirement for a true threat is that the defendant intentionally or knowingly communicate the threat.” And therein lies the issue. In today’s world of instantaneous sound bites, retweeting, and sharply divided media coverage, threats are immediately and widely communicated. And some of that communication goes out to people who are not stable. To those who use the psychological defense mechanism of all-or-nothing thinking, who see things as all good or all bad. To people who take the words of those they follow as gospel, not as the exaggerated speech of hyperbole. Whether it’s political death threats, the efforts by “leaders” to convince followers an election was stolen, or death threats to women, BIPOC, LGBTQ people, and anyone who disagrees with the world view of the one hurling death threats, the danger is real. There is always someone willing to act on those types of threats. When that happens, as when police were sent to Toni Crowe’s house where her unaware and unarmed son was sleeping, with false reports of gunfire being heard, the danger becomes real. What can we do if we receive death threats? Most advice is to ignore trolls online. They’re usually anonymous, and all you can do is block and report them. It’s good advice to ignore and block the individual anonymous ones, because psychologically, they thrive on your outrage and reactions. If there is a threat from someone identifiable, and you can afford a lawyer, suing for defamation, as Krebs is doing, is a useful work-around. For threats coming from someone you know, contact the police. They can’t do much with one harassing threat, but if the threats continue, they can arrest the person and issue a restraining order. Keep the texts, emails, and/or phone messages as proof.
Finally, if the pattern of harassment and threats is from several people, as is the case for writers and speakers who’ve touched a racist, misogynist, homophobic, or other nerve, notify your local police. They will have a record of your situation, so that if “prank” calls are made about gunfire or other disturbance at your address, they can contact you, or at the very least, proceed with caution in responding.
https://medium.com/an-injustice/death-threats-are-not-hyperbole-ded1227de2ee
['Carol Lennox']
2020-12-14 16:35:49.891000+00:00
['Justice', 'Election 2020', 'Society', 'This Happened To Me', 'Trolls']
The Old Man With The Saxophone
Thanks for the Inspiration, Rich. You always hear people talk about the things they’d do if they just had more time. Read more books, watch more movies, visit their family more often, write more. Things like that. Learning to play an instrument always seems to come up in those conversations. In December of last year, I saw someone who was making good on that impulse, and it was pretty inspiring. I’d ventured into Brooklyn to hang out with an old friend, who, like a lot of friends in this digital age, I speak to often but rarely see. Our nightly adventures somehow brought us to Park Slope for coffee at Tea Lounge, which I’d never been to. We were there for perhaps 30 minutes when one of the bartenders came over to tell us that the area we were sitting in was going to be cleared out in 15 minutes. They had to set up that section for open mic night. Fuck. I hadn’t been to an open mic in a fairly long time, and because I basically started my music career with a group of friends organizing a local open mic night, I’ve something of an aversion to them. Open mics remind me too much of things I’ve left behind. The prospect of staying for this one didn’t seem high on my list of things to do at that moment, but after talking to a dude who was going to be playing I felt like maybe I’d stick around. It was the holiday season and I really didn’t have much to do. Maybe I’d see something good? Little did I know the real highlight of the show wasn’t going to be this kid who was chatting me up. Instead, it was going to be Rich, the second performer of the evening. Rich was an older gentleman— he had to be in his late 60s — and he ambled on stage with a saxophone strapped around his neck. Usually at open mics you get guitar players and poets and comedians. Occasionally someone has a keyboard. Only in rare instances does someone have something outside of that repertoire. The sax made my eyes widen.
Played properly, like any instrument, it’s incredibly expressive and beautiful. I’ve had the great fortune to work with a few saxophone players and have flirted with buying a sax from time to time. My uncle Peter plays sax. When I was a kid and I tried to get into the school band, the instrument I told them I wanted to play was— you guessed it— the saxophone. Any time someone is playing sax on the street I have to stop for a few minutes. The first time I went to SXSW, back in 2010, I left a pretty exciting ‘event’— one of those things where people stand around drinking free alcohol and pretending to care about whoever is on stage— and stood by idly watching a guy playing sax on a corner for at least an hour. That wound up being the highlight of the trip. So, Rich gets on stage and he seems a little nervous. I thought it was just an age thing. But given his age I thought Rich might have been a seasoned sax player who was just here because that’s sometimes how life works out for musicians. You see guys in the train station all the time, seemingly anonymous, who have in some cases toured all over the world. The hard life of a musician is often thankless like that. Rich placed some sheet music on a music stand in front of him and nervously fiddled with the pages. He was jittery, jumbling words and stuttering a bit. He told the crowd that he wasn’t used to being on stage this early in the evening, that he would usually go on much later. He said this almost in a way where it seemed like he wasn’t in on the reality that the organizers of the open mic might stick him at the end because they think he’s either not good or because they don’t want some old guy messing up their vibe. Open mic night people are dicks like that. But anyway, Rich adjusted the music stand to a sensible height so that he could see it, then it promptly fell back down. He couldn’t fix it so he just left it down there. 
Then he told a story about how before Dick Clark’s presence became a staple at New Year’s Eve celebrations, Guy Lombardo held sway. That every New Year’s Eve, Lombardo would have his band the Royal Canadians play “Auld Lang Syne” after the ball dropped— that’s how it became a tradition— and so to mark the occasion he would play it. And then he started playing. I want to say that Rich blew me away with the way he played but honestly he didn’t. He wasn’t great. He wasn’t bad. He hit all the right notes. He read the sheet music and played it the way it was written. The fact that he was reading sheet music at all should have given it away to me. Rich wasn’t a seasoned saxophone player. Rich was just learning the sax. He was a novice. A rookie. Rich moved on to his next piece, the saxophone solo from Herbie Hancock’s “Watermelon Man.” Again he hit what sounded like all the right notes, but then again this was a jazz piece and in jazz improvisation is everything. Still, he was reading off the page and you could tell he was just trying to get through the transcription. Two songs down and his set was over. People clapped, he thanked them for their time, then he grabbed his music sheets and made a beeline to the back of the venue. It was short and sweet. The host for the evening got back on the microphone and said that it’s hard for people to get on stage and do what they do. He was right. It is. I couldn’t stop thinking about Rich, though. I looked to the back of the venue a few times to see if he was still there but it seemed he’d left just as quick as he came. Another performer was now on— a comedian, who was not extremely funny— and I was thinking about what it meant for someone like Rich to get up in front of all these people and play. He’s not a young kid with dreams of making it big. He’s not trying to be America’s next top whatever the fuck. 
The guy was just trying to take something that he might have spent his whole life wishing he could do, and put it out into the universe. Whoever heard it, heard it. There was something so pure about it. So honest. I looked back to the room one last time, and to my delight, there he was. I got up and walked over to him. “How long have you been playing?” I asked. “Two years,” he replied. “What made you start playing? Was the saxophone something you just always wanted to learn?” I asked. “Retirement,” he said. “Not wanting to be alone. This gives me something to do, makes me get out there. There’s more to life than spectating. I didn’t want to just sit and watch. I didn’t want to spectate anymore. I wanted to participate.” He told me about how the organizers usually put him on late, after the crowd has thinned and there aren’t many people left. But he plays anyway, just glad that someone is listening. He told me about other open mics he goes to, how he tries to make sure he always has a message when he performs— sometimes he’ll recite a poem or something before he plays— so that even if his playing isn’t awesome, people can get something out of it. He said he took a lesson every week, and I told him that I too take music lessons, because at the time, I did. In a weird way we were kindred spirits, standing there. Just then, a young twenty-something-year-old girl walked past us. “You were awesome!” she said. He smiled at her, then looked over at me. “That felt good,” he said. Then he had to go home. I told him I hoped to see him again there, and he said the same about me, and he hurried out the door. I looked at my watch. It was 9:58 PM. It was like he had a curfew or something. Amazing. I went back to watching the performers. None of them were spectacular. Even Rich wasn’t spectacular. But the idea of this guy getting up there and playing for us, it was really inspiring. 
Rich made me think of my own life, nights I’ve spent alone by myself, staring at a piece of sheet music into the early hours of the morning, just trying to nail a certain part. Of sitting in piano lessons while everyone else is out doing amazing things or whatever it is people do with themselves these days. Of the pure joy the act of playing music can bring to a person and how it’s never too late to learn how to do that. I thought about how rewarding it might have been for that girl to come up to Richard and give him that compliment and how maybe me talking to him was the icebreaker that allowed her to do that. Because really, nobody was saying anything to the guy before then. He was alone. By himself. Just standing there. He did what he came to do and was prepared to go home. And though I doubt he needed it, maybe our conversation gave him a little more courage to keep going. Maybe I needed our conversation so I could keep going. Thank you, Rich. Thank you, old man with the saxophone.
https://medium.com/this-happened-to-me/the-old-man-with-the-saxophone-c1df8a7792e9
['Paul Cantor']
2017-06-22 05:14:53.739000+00:00
['Music']
The Danger of Green
Are COVID-19 visualizations unintentionally putting readers at risk? Should we stop using green for all COVID related visualizations? As we collectively experience the COVID-19 pandemic, we are continuously inundated with more and more information. In response, millions of people have used visualizations and dashboards to identify patterns and differences. One of the first and most widely used COVID-19 dashboards was developed by Johns Hopkins (which I wrote about previously), but hundreds of dashboards are now available. There are country-specific dashboards, comparative dashboards, state and even county dashboards that are delivering greater insight and valuable information to the public. This accessibility to complicated scientific data is great. As one navigates these dashboards, they tend to have a similar look and feel. The ubiquity of the US map combined with the color schemes that are often standard palettes from within the developer’s tool of choice, be it Tableau, Qlik, or PowerBI, deliver familiarity. For many visualizations, the choice of colors is a preference with limited impact on the ultimate utility of the information. That isn’t the case for COVID-19 visualizations. For these dashboards, I wonder if the prevalence of the color green is delivering a false sense of safety and security? The Stoplight — A brief history In 1920, the first red, amber and green traffic light was installed on the corner of Woodward and Michigan avenues in Detroit. A police officer, William Potts, designed the signals as a way to address the traffic problems that accompanied the proliferation of the automobile. Earlier versions of traffic signals were binary (stop/go), but his design included a “warning” light. Mr. Potts borrowed the colors from train engineering, and the scientific wave-length-based reasons for red as the color for stop. For the issues addressed here, traffic signals provide a cognitive gateway to establishing a false sense of security.
At a very young age, we become aware of traffic signals and their meaning. By our mid-to-late teenage years, most of us are tested on our understanding of these signals in order to earn a driver’s license. The rules are simple, as laid out in the California Driver’s Handbook: A red traffic signal light means “STOP.” A yellow traffic signal light means “CAUTION.” A green traffic signal light means “GO.” It is that conditioned, automated response to green that concerns me. During a pandemic, with limited abilities to control the spread of the virus, do we really want to send readers a “GO” signal? Shouldn’t we always pause at “caution” levels until we know more? Visualizing Momentum The maturation of COVID visualizations has migrated from simple data elements (positive tests, deaths) to focus on a more complicated series of measurements that indicate momentum. The idea of momentum for COVID grew out of policy guidelines designed to determine if states are trending better or worse with regard to their ability to slow the spread, treat the sick, and limit the number of deaths. ICU bed utilization is an example of a metric that not many outside of the medical profession cared about six months ago. Now, it may literally mean life or death for a relative or loved one. That makes it awfully damn important. The visualizations are a user-defined combination of trend data. Fourteen-day positive count, rolling averages, testing availability, positivity rate trend, and R (the rate of virus reproduction). All of these are indicators used in the determination of momentum. Where are we getting better, where are we getting worse? For many visualizations of COVID-19, measures of momentum are encoded in color. As you can see in the pictures of websites in this article, the details of momentum are consolidated by state or county and encoded with a color. For a great many sites, those colors are red, yellow, and green.
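To make the article's point concrete, here is a small illustrative sketch (not from the article) showing that the momentum classification and the color signal are independent choices: the same classification can drive a stoplight palette or one that swaps green for a neutral blue. All thresholds and hex values below are hypothetical.

```python
# Two palettes for the same three momentum buckets. The "cautious" palette
# replaces green with blue to avoid sending a conditioned "GO" signal.
STOPLIGHT = {"worsening": "#d7191c", "flat": "#fdae61", "improving": "#1a9641"}
CAUTIOUS = {"worsening": "#d7191c", "flat": "#fdae61", "improving": "#2c7bb6"}

def momentum_bucket(pct_change_14d):
    """Classify a 14-day percent change in new cases (hypothetical thresholds)."""
    if pct_change_14d > 5:
        return "worsening"
    if pct_change_14d < -5:
        return "improving"
    return "flat"

def state_color(pct_change_14d, palette=CAUTIOUS):
    """Return the display color for a state's momentum under a given palette."""
    return palette[momentum_bucket(pct_change_14d)]
```

The data logic is identical either way; only the signal sent to the reader changes, which is exactly the design decision the article is questioning.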
https://medium.com/nightingale/the-danger-of-green-ffd43bb6d2fe
['Ron Giordano']
2020-07-16 14:56:57.049000+00:00
['Public Health', 'Design', 'Covid 19', 'Pandemic', 'Data Visualization']
Empowering Data Science with an Effective Team
In an organization looking to move toward data-driven decision making, data scientists must work in concert with other teams. Collaboration requires more than just understanding how to use the latest machine learning techniques to analyze data. It requires a complete understanding of the business problem to be solved. From a people-oriented standpoint, this includes developing a sense for how business users will best interact with machine learning tools. From a technical standpoint, it also requires high quality data and a way to securely deploy models in production. To support the data science effort, we recommend Data Scientists embed within teams comprised of the following: Human Centered Design practitioners responsible for ensuring the end user is at the center of solutions; DataOps practitioners responsible for maintaining high quality data; and DevSecOps practitioners responsible for pipelines, security, and delivery. Each group has a role to play in ensuring the integrity of the data pipeline from collection through communication of insights. These teams can offer speed to insight across many potential use cases, such as reports and dashboarding, advanced analytics, natural language processing (NLP), and computer vision (image recognition, classification, and detection). Human Centered Design Human-centered design (HCD) is an approach to problem solving, commonly used in design and management, that develops solutions to problems by involving the human perspective in all steps of the problem-solving process. As we’ve written about in other articles, this team aims to learn directly from the end user about what’s currently not working for them in order to prevent pitfalls that could arise from deploying a machine learning empowered tool into the existing workflow.
HCD facilitates communication across stakeholder groups to ensure the end product offers resolution and avoids deepening existing feedback loops. HCD is an important part of building a machine learning tool. This team ensures the needs of the end users are met, the handoff between machine and human is seamless, and negative feedback loops are prevented to the greatest extent possible.
https://medium.com/atlas-research/data-science-team-eae84b1af65d
['Nicole Janeway Bills']
2020-10-23 15:35:36.679000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Computer Science', 'Business']
React Native Vs Ionic 2: Comparison
Ionic 2 is part of the hybrid mobile development framework family. Ionic 2 is not only a rewrite of the previous Ionic framework, it embraces the structure and design of AngularJS 2, while also taking inspiration for its design language from Android, Material Design, and iOS. If you are from a background of AngularJS, you’ll be comfortable developing apps using Ionic. Also, it’s TypeScript ready, meaning you can use your current AngularJS 2 components too. Ionic also comes with pre-developed and styled components, making it easier for developers to create the UI of an app. The UI is not native, though it can give the appearance of a native UI. Ionic is a framework on top of Cordova, which is used to access the phone hardware functionality. In other words, Cordova spins up your system browser to render your app in what is known as a WebView. Keep in mind, Cordova isn’t the only framework for hybrid apps, and neither is Ionic the only framework using Cordova. A WebView is a web page loader without the surrounding browser UI, giving you access to mobile functionalities like camera, contacts, etc. It’ll be slower in comparison to React Native since here you have to write the HTML code in your Android activities. If your user’s smartphone has a slow processor, it can lead to performance or graphical issues. You need to download plugins to access native functionality. For instance, you need to download Cordova plugins if you want to use Google Maps. Ionic 2 supports Ionic Native, which does the job of accessing native functionalities of the device through JavaScript more smoothly than the older version. There’s a common interface for all plugins, and this makes it easier to use native functionalities. With Ionic 2, TypeScript components can make tasks slower compared to directly working with the native API. Nevertheless, Ionic 2 does overcome many performance issues with the help of its structure. Pros: Fast development-testing cycle. It cross-compiles to iOS and Android.
Ionic is easy to learn & work with. You can write code in TypeScript, which makes it easy if you’re coming from a background of AngularJS 2. You can use TypeScript to develop applications for all platforms. You can access the native functionalities of your user’s devices with the plugin system. Angular itself is easier to learn & work with than React for smaller projects. Cons: There could be performance issues if you need to use a lot of callbacks to the native code. If your users prefer the native UI look, the same UI look across all devices could put them off. Development of advanced graphics or interactive transitions will prove difficult. Previously, working in a WebView was slow and, in a way, guaranteed low app ratings because of performance. Now, it’s less of an issue, as newer smartphones come with better system browsers and better phone specs. However, there is still a noticeable performance difference compared to native. Ionic is easy to use and learn, but there are restrictions on using the Native features of the device. You bridge the gap using Cordova or PhoneGap, and both have plugins for almost all Native features you may need like GPS, file system, etc. Ionic is a good choice for prototyping, or projects which have fast development needs, or if you have many app requirements with lower budgets, and whose app performance ratings aren’t paramount. React Native
https://medium.com/swlh/react-native-vs-ionic-2-comparison-50aba900be6c
['Amit Ashwini']
2020-03-20 09:58:28.468000+00:00
['React Native', 'Mobile App Development', 'Hybrid App Development', 'Software Development', 'Ionic']
Earn Money From Your Software
Step 2. Processing Payments Now that we have the account_id, we can process payments and send funds directly to the connected accounts. In the context of a marketplace, this means if a customer were to purchase a product such as an iPhone case, we can handle the payment on our site, but the funds would be sent to the supplier. The process is as follows: The client makes a request to your server for a payment intent secret. Create a paymentIntent with the Stripe API, specifying details about the transaction such as the price; this will return a paymentIntent.client_secret. Respond with the client_secret to your client. Create a new card element on the client. When the user hits submit, confirm the payment by calling stripe.confirmCardPayment, passing both the paymentIntent.client_secret and a reference to the card element. Set up a webhook for Stripe to inform your server that the payment was successful. Creating a payment The integration guide provides all of the steps in detail for the client-side setup. Stripe Elements Once complete, you will have a full flow for processing payments. Setting up a webhook Once the payment has been made, you should not attempt order fulfillment via another client request to your server, because it’s possible for customers to leave the page after payment is complete but before the fulfillment process initiates. Instead, once the payment is confirmed, show a success screen but leave further processing for later. At some point in the very near future you’ll receive a webhook notification to inform you of the status of that payment. Here you can see a list of all possible events Stripe could send via webhooks: https://stripe.com/docs/api/events/types. But right now we are focusing on the payment_intent.succeeded and payment_intent.payment_failed events.
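A minimal sketch of dispatching on those two event types server-side, assuming the webhook payload has already been verified and parsed into a dict. The handler function and the returned action tuples are illustrative, not part of Stripe's API; only the event type strings come from Stripe.

```python
# Hypothetical dispatcher for the two webhook events discussed above.
# In a real integration the raw payload should first be verified
# (e.g. via Stripe's signature check) before trusting its contents.

def handle_stripe_event(event):
    """Route a parsed Stripe webhook event to a fulfillment action."""
    event_type = event.get("type")
    payment_intent = event.get("data", {}).get("object", {})
    if event_type == "payment_intent.succeeded":
        # The safe point to start order fulfillment (never on the client side).
        return ("fulfill", payment_intent.get("id"))
    if event_type == "payment_intent.payment_failed":
        return ("notify_failure", payment_intent.get("id"))
    # Ignore event types we have not subscribed to.
    return ("ignore", None)
```

This keeps fulfillment entirely server-side, which is exactly why the webhook exists: the customer can close the tab after paying and the order still gets processed.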
Here you will be guided through setting up a webhook: https://stripe.com/docs/payments/payment-intents/verifying-status#webhooks Testing webhooks locally Webhooks require an endpoint to be accessible over the internet. This can make local testing a pain, but there are a few solutions. NGROK: My personal preference is ngrok. This service provides a CLI to point a public URL to your local machine in a secure manner. You can also define static URLs to avoid having to constantly update your Stripe webhook settings. It also comes with an NPM package to programmatically set up tunnelling. I like ngrok as it provides a generic solution which can work for many use-cases rather than just Stripe. Stripe: Stripe also provides its own CLI for local webhook testing. With this, you can monitor all webhooks sent by Stripe as well as forward them to your own localhost. It is also possible to trigger webhooks for testing purposes.
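To recap the server side of Step 2, here is a sketch of assembling the parameters for a Connect destination charge. The helper function, the amount, and the account ID are illustrative; the `transfer_data["destination"]` field and `stripe.PaymentIntent.create` call are the pieces that come from Stripe's API.

```python
# Illustrative helper: build the kwargs for stripe.PaymentIntent.create.
# With Stripe Connect, transfer_data["destination"] routes the funds to the
# connected account (the supplier), while the charge happens on your account.

def build_payment_intent_params(amount_cents, currency, connected_account_id):
    return {
        "amount": amount_cents,
        "currency": currency,
        "transfer_data": {"destination": connected_account_id},
    }

params = build_payment_intent_params(1999, "usd", "acct_123")  # hypothetical IDs
# intent = stripe.PaymentIntent.create(**params)
# The resulting intent.client_secret is what you return to the client,
# which then passes it to stripe.confirmCardPayment.
```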
https://medium.com/better-programming/earn-money-from-your-software-58fe98c44ecf
['Warren Day']
2020-08-04 21:28:29.921000+00:00
['Software Engineering', 'Software Development', 'Stripe', 'Finance', 'Programming']
TensorFlow 2 Object Detection API using Custom Dataset
Everything you should know about using the TF2 Object Detection API on a custom dataset, including common issues faced and their solutions. Here you will go step by step to perform object detection on a custom dataset using the TF2 Object Detection API, along with some of the issues and resolutions. Object Detection using TF2 Object Detection API on Kangaroo dataset The custom dataset is available here. The TensorFlow 2 Object Detection model zoo is a collection of detection models pre-trained on the COCO 2017 dataset. The TensorFlow 2 Object Detection API in this article will identify all the kangaroos present in an image or video, along with their locations. Locations of kangaroos will be depicted by drawing bounding boxes. Steps for performing Object Detection on a Custom Dataset using TF2 Object Detection API Step 1: Setup for TensorFlow 2 Object Detection API Clone the TensorFlow Models repository: git clone https://github.com/tensorflow/models.git You should have the following directory structure. TF2 Object Detection API folder structure You now have the code for the TF2 Object Detection API; install the API using the protobuf compiler. Ensure you are using version 3 or higher of the protobuf compiler. Add the following paths to the PYTHONPATH environment variable for the TF2 Object Detection API to work: Path to …\models\research Path to …\models\research\object_detection Some common issues and resolutions for the protobuf compiler can be found here. Once the environment variable PYTHONPATH is set up, you can execute the following commands to convert all the .proto files in the \models\research\object_detection\protos folder into .py files and install the package:
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
Ideally, pycocotools gets installed when installing the TF2 Object Detection API, but if you have issues and it is not installed, then clone the cocoapi repository as shown below. Step 2: Set up the directory structure for the Custom dataset Directory setup for a custom dataset Explanation of the contents of the directories required for object detection training on a custom dataset. annotations: will store the TFRecord files for the train and test datasets along with label_map.pbtxt. You can also store the .csv files used for generating the TFRecord files. A TFRecord contains a sequence of records, including annotation details. TFRecord files are read sequentially. images: will store the image files and annotation XML files for both the train and the test datasets. models: will contain a sub-directory for each of the training jobs along with the pipeline.config file. If you are using different object detection models like SSD and EfficientDet, you can create one sub-folder for SSD and one for EfficientDet, and each sub-folder will have its own pipeline.config file. pretrained_models: will contain the downloaded pre-trained models and will be used as a starting point for our custom dataset training. exported_model: will store the exported versions of the trained model on the custom dataset. Step 3: Create a Label Map file The TF2 Object Detection API requires a label_map.pbtxt file containing a mapping of the objects you want to detect to integer values, as shown below. The Object Detection API uses the label_map.pbtxt file during training as well as during inference.
item {
  id: 1
  name: 'kangaroo'
}
Place this file in the annotations folder, as shown below. Step 4: Prepare the Custom Dataset In this article, we are using a kangaroo dataset that is already annotated. If you have images, then you can use tools like labelImg for annotating the images. We will read the annotation files and create a data frame that will be used for generating the TFRecord.
The images and the annotated XML files should be placed under the images folder within the respective train or test directories, as shown below. Images and XML files containing annotations

One of the issues I faced was having a corrupted image file, and I was getting the error “Invalid JPEG data or crop window”. I used the following code to find the corrupted image file. You can use either of the libraries, io or cv2:

## To check if the JPEGs are good
import glob
from skimage import io
import cv2

files = glob.glob(TRAIN_IMAGE_FILE + '\*.jpg')
for i in range(len(files)):
    try:
        _ = io.imread(files[i])
        img = cv2.imread(files[i])
    except Exception as e:
        print(e)
        print(files[i])

Creating TFRecord based on the annotated XML files

Create the TFRecord for the training and test annotated XML files using the following code:

# importing required libraries
import os
import glob
import pandas as pd
import io
import xml.etree.ElementTree as ET
import tensorflow.compat.v1 as tf
from PIL import Image
from object_detection.utils import dataset_util, label_map_util
from collections import namedtuple

# Set the folder names for the source annotated XML files and the folder
# to store the TFRecord files
LABEL_MAP_FILE = r'\Custom_OD\Workspace\Annotations\label_map.pbtxt'
TRAIN_XML_FILE = r'\Custom_OD\Workspace\images\Train'
TRAIN_TF_RECORD_DIR = r'\Custom_OD\Workspace\Annotations\train.record'
TEST_XML_FILE = r'\Custom_OD\Workspace\images\test'
TEST_TF_RECORD_DIR = r'\Custom_OD\Workspace\Annotations\test.record'

# Create a dictionary for the labels or objects
label_map = label_map_util.load_labelmap(LABEL_MAP_FILE)
label_map_dict = label_map_util.get_label_map_dict(label_map)

# Convert the object annotations from the XML files to a dataframe
def xml_to_df(path):
    """Iterates through all .xml files containing annotations in a given
    directory and combines them in a single Pandas dataframe.

    Parameters
    ----------
    path : str
        The path containing the .xml files

    Returns
    -------
    Pandas DataFrame
        The produced dataframe
    """
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     int(member[4][0].text),
                     int(member[4][1].text),
                     int(member[4][2].text),
                     int(member[4][3].text))
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df

# Pass the label and get its equivalent integer
def class_label_to_int(row_label):
    return label_map_dict[row_label]

# Split the filename and the annotation details for all the XML files
def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]

# Create the TFRecord examples
def create_tf_example(group, path):
    with tf.io.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size
    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []
    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_label_to_int(row['class']))
    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example

# Group the annotations by image for the train and test sets
grouped_train = split(xml_to_df(TRAIN_XML_FILE), 'filename')
grouped_test = split(xml_to_df(TEST_XML_FILE), 'filename')

# Generating the train TFRecord
writer = tf.python_io.TFRecordWriter(TRAIN_TF_RECORD_DIR)
for group in grouped_train:
    tf_example = create_tf_example(group, TRAIN_XML_FILE)
    writer.write(tf_example.SerializeToString())
writer.close()
print('Successfully created the TFRecord file: {}'.format(TRAIN_TF_RECORD_DIR))

# Generating the test TFRecord
writer = tf.python_io.TFRecordWriter(TEST_TF_RECORD_DIR)
for group in grouped_test:
    tf_example = create_tf_example(group, TEST_XML_FILE)
    writer.write(tf_example.SerializeToString())
writer.close()
print('Successfully created the TFRecord file: {}'.format(TEST_TF_RECORD_DIR))

Now the folder structure with files should be as shown below.

Step 5: Download the Pretrained Models

Download your choice of pre-trained object detection model from here. You will have a tar file like ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz. Extract the contents of the file in the pretrained_models folder. The extracted folder should have a checkpoint folder, a saved_model folder, and a pipeline.config file for the model. You can download multiple pre-trained models in the pretrained_models folder, as shown below.

Step 6: Configuring the Training Pipeline by updating the pipeline.config

Copy the pre-trained model's pipeline.config file to the model's directory and edit a few settings to prepare the object detection API for the custom dataset.
num_classes: update num_classes to match the classes/objects in your dataset, as mentioned in label_map.pbtxt

model {
  ssd {
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: 320
        width: 320
      }
    }
  }

batch_size: update batch_size based on the memory of the system you will be training on. A higher batch_size needs more memory.

train_config {
  batch_size: 6
  data_augmentation_options {
    random_horizontal_flip {
    }
  }

Guidance for batch_size as mentioned here:

Max batch size = available GPU memory (in bytes) / 4 / (size of tensors + trainable parameters)

fine_tune_checkpoint: mention the path to the checkpoint of the pre-trained model

num_steps: the number of training steps you want to train the object detection API for

fine_tune_checkpoint_type: specify “detection” when performing object detection and “classification” when performing image classification

use_bfloat16: set this to false if you are not training on a TPU

fine_tune_checkpoint: "\\Custom_OD\\Workspace\\pretrained_model\\SSD_640\\ckpt-0"
num_steps: 300000
startup_delay_steps: 0.0
replicas_to_aggregate: 8
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
fine_tune_checkpoint_type: "detection"
use_bfloat16: true
fine_tune_checkpoint_version: V2

Training parameters: update the label_map_path to point to the label_map.pbtxt file in the annotations folder and update the input_path to point to the training TFRecord file in the annotations folder

train_input_reader {
  label_map_path: "\\Custom_OD\\Workspace\\Annotations\\label_map.pbtxt"
  tf_record_input_reader {
    input_path: "\\Custom_OD\\Workspace\\Annotations\\train.record"
  }
}

Evaluation parameters: update the label_map_path to point to the label_map.pbtxt file in the annotations folder and update the input_path
to point to the test TFRecord file in the annotations folder

eval_input_reader {
  label_map_path: "\\Custom_OD\\Workspace\\Annotations\\label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "\\Custom_OD\\Workspace\\Annotations\\test.record"
  }
}

Another issue I encountered was the error “Invoked with: None, 524288”. The resolution is to check all of the above pipeline.config parameters and ensure a valid value for each of them.

Step 7: Training the Object detection model on the custom dataset

To start training the object detection API using any of the pre-trained models on a custom dataset, you need to copy the model_main_tf2.py file from the object detection API to the workspace folder as shown below. Execute the following command on your command prompt to start the training:

python model_main_tf2.py --model_dir=models\SSD_640 --pipeline_config_path=models\SSD_640\pipeline.config

model_dir: the path where the training process will store the checkpoint files.
pipeline_config_path: the path where the pipeline.config file is stored

The model will be trained for the num_steps specified in the pipeline.config file, and the time to train will depend on your host platform/machine's computation capability. As the model is trained, it will occasionally create checkpoint files and place them in the model_dir specified for training the model.
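The checkpoint files written to model_dir follow TF2's usual ckpt-N naming. As a small illustration, here is a sketch of locating the most recent checkpoint by filename (the directory layout in the demo is fabricated; in practice TensorFlow itself offers tf.train.latest_checkpoint(model_dir) for exactly this):

```python
import os
import re
import tempfile

def latest_checkpoint(model_dir):
    """Return the prefix (e.g. 'ckpt-6') of the newest checkpoint in model_dir."""
    best, best_step = None, -1
    for name in os.listdir(model_dir):
        # Each checkpoint N leaves a ckpt-N.index file next to its data shards
        m = re.fullmatch(r'(ckpt-(\d+))\.index', name)
        if m and int(m.group(2)) > best_step:
            best, best_step = m.group(1), int(m.group(2))
    return best

# Demo with a hypothetical model_dir layout
with tempfile.TemporaryDirectory() as d:
    for step in (1, 3, 6):
        open(os.path.join(d, 'ckpt-%d.index' % step), 'w').close()
        open(os.path.join(d, 'ckpt-%d.data-00000-of-00001' % step), 'w').close()
    print(latest_checkpoint(d))  # -> ckpt-6
```

This is the checkpoint prefix you will point fine_tune_checkpoint at in the evaluation step that follows.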
Evaluating the Object Detection API for the custom dataset

Update the pipeline.config file to point to the right checkpoint as shown below

fine_tune_checkpoint: "\\Custom_OD\\Workspace\\models\\SSD_640\\ckpt-6"

To evaluate the trained object detection model, execute the following command from the command prompt

python model_main_tf2.py --model_dir=models\SSD_640 --pipeline_config_path=models\SSD_640\pipeline.config --checkpoint_dir=models\SSD_640

model_dir: the path that will contain the events files generated as part of the evaluation
pipeline_config_path: the path where the pipeline.config file is stored
checkpoint_dir: the path where the training process has stored the checkpoint files.

After the evaluation is complete, you can see the events files in the path specified in model_dir. When the evaluation process runs, it periodically checks for a new checkpoint file in the checkpoint_dir. Once the evaluation process is complete, you can see the following output.

Evaluation

Monitoring the Object Detection Training

The Object Detection API training on a custom dataset can be monitored using TensorBoard. The events file generated as part of the evaluation will display the evaluation metrics on TensorBoard. To start TensorBoard, execute the following command on the command prompt

tensorboard --logdir=models\SSD_640

logdir: specify the path that contains the checkpoints and the evaluation events

Once the command is executed, you will see the URL as shown below.

Exporting the Trained Model

You will now need to export the trained model to be used for object detection on your custom dataset. First, copy the exporter_main_v2.py file from the object detection API code in the models/research/object_detection folder to your workspace, as shown below.
To export a trained object detection model, execute the following command from the command prompt

python exporter_main_v2.py --input_type=image_tensor --trained_checkpoint_dir=models\SSD_640 --pipeline_config_path=models\SSD_640\pipeline.config --output_directory=exported_model

input_type: the input node type of the inference graph. Options are image_tensor, encoded_image_string_tensor, and tf_example. image_tensor is a uint8 4-D tensor.
trained_checkpoint_dir: the path to the folder containing the trained checkpoint
pipeline_config_path: the path to the pipeline.config file
output_directory: the exported model will be available in the path specified in the output_directory

Making Inferences Using the Trained Object Detection Model

You now have a trained model, exported to a folder, that can be used to make inferences.

# Import the required libraries for object detection inference
import time
import os
import cv2
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
%matplotlib inline

# Setting the minimum confidence threshold
MIN_CONF_THRESH = .6

# Loading the exported model from the saved_model directory
PATH_TO_SAVED_MODEL = r'Custom_OD\Workspace\exported_model\saved_model'
print('Loading model...', end='')
start_time = time.time()

# LOAD SAVED MODEL AND BUILD DETECTION FUNCTION
detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)
end_time = time.time()
elapsed_time = end_time - start_time
print('Done! Took {} seconds'.format(elapsed_time))

# LOAD LABEL MAP DATA
PATH_TO_LABELS = r'\Custom_OD\Workspace\Annotations\label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

# Image file for inference
IMAGE_PATH = r'\Custom_OD\Workspace\images\test\00178.jpg'

def load_image_into_numpy_array(path):
    """Load an image from file into a numpy array.

    Puts the image into a numpy array of shape (height, width, channels),
    where channels=3 for RGB, to feed into the tensorflow graph.

    Args:
        path: the file path to the image

    Returns:
        uint8 numpy array with shape (img_height, img_width, 3)
    """
    return np.array(cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB))

image_np = load_image_into_numpy_array(IMAGE_PATH)

# Running the inference on the image specified in the image path
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image_np)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis, ...]
detections = detect_fn(input_tensor)

# All outputs are batched tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy() for key, value in detections.items()}
detections['num_detections'] = num_detections

# detection_classes should be ints.
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)

image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
    image_np_with_detections,
    detections['detection_boxes'],
    detections['detection_classes'],
    detections['detection_scores'],
    category_index,
    use_normalized_coordinates=True,
    max_boxes_to_draw=200,
    min_score_thresh=MIN_CONF_THRESH,
    agnostic_mode=False)

plt.figure()
plt.imshow(image_np_with_detections)
print('Done')
plt.show()

Conclusion:

Use the TensorFlow 2 Object Detection API for detecting objects in your custom dataset step by step.

Download the Object Detection API and the COCO API, and set up the folder structure as shown in the article.
For the custom dataset, create a customized label_map.pbtxt file in the annotations folder containing all the objects you wish to detect in your custom dataset, and assign an integer to each of the classes.
Prepare the custom dataset for training and testing, convert the XML annotations to TFRecord files for train and test, and store them in the annotations folder.
Download the pre-trained model of your choice from the model zoo.
Update the pipeline.config file to map to the custom dataset.
Once the model is trained, you can monitor the training and evaluation metrics using TensorBoard.
Export the trained model that will be used for making inferences on images or videos.

References: https://github.com/tensorflow/models
https://medium.com/analytics-vidhya/tensorflow-2-object-detection-api-using-custom-dataset-745f30278446
['Renu Khandelwal']
2020-12-17 14:24:10.863000+00:00
['Tensorboard', 'Python', 'Object Detection', 'Tensorflow2', 'Deep Learning']
Beyond Employee Engagement: Understanding Employee Lifetime Value
Quantifying the return on investment (ROI) of a software engineer is a science this lifetime is likely insufficient to solve. The number of factors that come into play keeps the measurement of overall ROI elusive. A gaping hole likely exists because companies in their nascent years have few resources to invest in people analytics. However, as the company grows, so do the factors affecting people operations; hence, the vicious cycle. Very few studies today are done with a high degree of coverage, and even fewer companies actually apply those results to their day-to-day people operations. This could be unfortunate.¹

In software engineering, monitoring is of high importance. The earlier a company detects anomalies, the fewer resources it takes to recover from failures. However, because monitoring is not easily translatable to the top line or bottom line, convincing the CFO to throw resources at it tends to fall to intuition (and experience) rather than rigorous evaluation. (And to be clear, putting in a lot of monitoring in the first few years of a company may also end up hamstringing the company of valuable resources that could have been invested elsewhere for higher leverage.)

People analytics comes under the same vein. Everyone understands the value of people analytics, but very few see it as a priority. This blog is not intended to argue for it. It is simply intended to present what we are missing and leave it to the reader to evaluate its primacy as the organization scales.

Employee Lifetime Value

Employee Lifetime Value (ELTV) is a recent concept that found its origins in Customer Lifetime Value (CLTV). At the risk of oversimplifying, CLTV is computed by subtracting the acquisition cost from the overall revenue generated by the customer.² Calculating ELTV is far more complex because many more factors come into play in people analytics.
In general, we can condense the analysis of ELTV into four data points:

Acquisition — the amount of resources spent to hire the employee.
Onboarding — the amount of time it takes for the employee to understand the system and work on it productively.
Performance — the value the employee generates.
Attrition — the end of employment.

The graph below gives a rough sense of what ELTV looks like post-acquisition:

Figure 1 — Employee Value to Cost

At first glance, a keen-eyed reader would surface a few questions.

How is employee value derived? The answer is that it is arbitrary. At this point, there is neither enough literature nor a people analytics technique to convert productivity into a unit of measurement. One could argue that this is easier in sales, where a salesperson can directly attribute their productivity to the dollar value of sales. The further the staff is from sales, however, the more factors must be considered. For example, how does one quantify the productivity of a Staff Software Engineer or an Engineering Manager? It becomes more convoluted the further away the role is. Hence, the arbitrary value in the graph must be seen as a relative measure against (a) staff productivity within the same cohort and (b) employee cost.³ However, do not be alarmed. As you read on, you'll understand why this arbitrary value remains a very good indicator for subsequent macro-level decision making within the company.

What is the staircase increment in employee cost? This is the annual salary increment with an estimated promotion after three years of tenure.

Why is there a productivity plateau for the employee? During the onboarding phase, the employee's productivity increases exponentially. By onboarding, I do not mean a typical one-week onboarding process. Onboarding here is defined as the amount of time it takes for the employee to understand the system and work on it productively.⁴

How is the delta between an average staff member and a high performer derived?
The average performer has an exponential decay in performance over a period of five years, whereas the high performer has linear performance growth.⁵ This is also arbitrary. In essence, certain employees tend to get comfortable and end up making a constant (if not decaying) contribution over time, whereas high performers tend to keep challenging themselves and hence are attributed a linear growth projection.

ROI vs. Employee Performance Over Time

Below is an overview of the relative value an employee generates for the business:

Figure 2 — ROI vs. Employee Performance Over Time

Notice that at the beginning, the ROI is negative for both the average and the high performer. This is because in the onboarding phase, the company is bleeding resources to get the staff up to speed. This must be seen in light of the cumulative ROI over time in the graph below:

Figure 3 — Cumulative ROI Over Time

In essence, it takes about a year for the company to break even.⁶ Moreover, consider Figure 2. Over time, retaining an average-performing employee does not yield progressive ROI to the company. In general, the value they provide stagnates. On the other hand, high performers continue to grow the delta. This must then be seen in light of several inferences around macro-level decision making within the company. In fact, here's another model showing that even if the high-performing employee gets double the salary increment per year compared to the average-performing one (in this scenario, an annual increment of 12% vs. 6%), the ROI they generate remains higher than an average performer's:

Figure 4 — ROI Impact with High Performer getting double percentage in salary increment per year

Maximizing ROI in light of the Employee Lifetime Value (ELTV)

To reiterate, the graphs above are indicative. While there's not enough science to generate a succinct perspective, the graphs are good enough signposts to inspire general inferences.
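The cumulative-ROI picture in Figures 3 and 4 can be reproduced with a toy model. All numbers below are made up purely for illustration, mirroring the scenario's assumptions (a flat-value average performer on 6% annual raises vs. a linearly growing high performer on 12% raises):

```python
def cumulative_roi(start_value, value_growth, start_cost, raise_pct, years):
    """Toy ELTV model: yearly ROI = value generated minus fully loaded cost."""
    total, value, cost = 0.0, start_value, start_cost
    history = []
    for _ in range(years):
        total += value - cost
        history.append(round(total, 1))
        value += value_growth          # flat if 0, linear growth otherwise
        cost *= 1 + raise_pct / 100    # annual salary increment
    return history

# Average performer: flat value, 6% raises; high performer: growing value, 12% raises
average = cumulative_roi(start_value=120, value_growth=0, start_cost=100, raise_pct=6, years=5)
high = cumulative_roi(start_value=120, value_growth=30, start_cost=100, raise_pct=12, years=5)
print(average)
print(high)
print(high[-1] > average[-1])  # the high performer still nets more cumulative ROI
```

Even with doubled raises, the high performer's cumulative ROI pulls away from the average performer's, which is the qualitative shape of Figure 4.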
Be Meticulous in Acquisition — Steve Jobs famously noted that A-players hire A-players and B-players hire C-players, and if you don't get your talent acquisition right, you'll end up with a lot of B and C players and drive away your A-players. The people that the company allows through the hiring door become a self-fulfilling prophecy. What this means is that you would rather hire three high performers than four average performers, because over time the top performers will create a halo effect for the company they're in.

Reward Your Top Performers — Retention of top performers is of vital importance to the company. While it is totally fine to have average performers go through the revolving door of attrition, a company must make every effort to retain its golden geese. These are the people who produce a disproportionate amount of value for the company, and every feasible effort must be made to retain them. Enough articles, videos, and books have already been written on identifying top performers that a repeat discussion in this blog is unnecessary.

Retention is a P1 Objective. Always — As shown in the graphs above, even retaining an average performer will yield ROI to the company. Hence, a company should continue to strive to extend employee tenure. How this is done is better discussed in books, videos, and articles elsewhere.

Shorten Onboarding Time — The shorter the onboarding time, the better it is for the company, especially at a time when the company is in hypergrowth. During mass hiring, ensuring that people get productive quickly generates exponential value for the company.

Conclusion

There you have it! Beyond simply an abstract perspective on employee engagement is a sense of how acquisition, onboarding, retention, and rewarding generate value for the company. The widening gap between employee cost and the ROI employees generate over time must not be ignored.
While further studies are still needed to reinforce the current breadth of knowledge, the graphs above should serve as viable guideposts towards maintaining a virtuous cycle of value for the company. If anyone is interested in the data points behind the graphs I plotted, they can be accessed here.

____

¹ I used "could" because applying people analytics too early may have an adverse effect on resource management. Applying it too late will likewise yield the same. Not enough study has been made to prescribe an ideal timing for when, and to what degree, people analytics must be done.

² Acquisition cost involves the amount of money spent on ads, marketing campaigns, etc. Some also consider the personnel cost, although of course as the company scales, the personnel cost becomes negligible. Different models apply depending on the nature of the business. Hence, the reader is encouraged to read through the link for a more robust take on CLTV.

³ In essence, salary + equipment + other benefits.

⁴ I am writing this with the context of a software engineer in mind.

⁵ Roberts, Pasha. “Employee Lifetime Value and Cost Modeling.” People Analytics in the Era of Big Data, 2016, 255–81. https://doi.org/10.1002/9781119083856.ch10.

⁶ Of course, a lot of factors come into play that could affect this. The model is simply to give a general sense of what the curve looks like and how people ops should be understood and executed in a company.
https://medium.com/leading-and-managing/beyond-employee-engagement-understanding-employee-lifetime-value-c306966b9af1
['Julius Uy']
2020-11-14 13:41:35.101000+00:00
['Employee Engagement', 'Leadership', 'Software Development', 'Human Resources', 'Startup']
Leveraging a New Partner for Brands
Back in August, Facebook opened its video streaming platform, Watch, to all users in the US. The new feature is said to be Facebook's response to Netflix and YouTube, and will stream both long and short-form content. The company tapped into YouTube creators, traditional television channels, and publications, committing to pay them up to $50,000 per episode of short-form content. Looking to the future, Facebook has committed to a $1 billion investment for long-form content in 2018. While the future of Watch is uncertain, its social DNA presents an enticing partnership opportunity for brands looking to expand their social reach through storytelling. Facebook realized that long-form content generates twice the engagement on mobile than short-form content. This is why Facebook has tweaked its algorithm to prioritize long-form video content on the News Feed rather than short-form. Taking this into account, Facebook is creating a social environment conducive to video advertising and therefore targeting those ad dollars traditionally spent on TV. Similar to traditional TV, there will be ad breaks between programming. Brands that partner with Facebook to create content have the option to enter into a revenue share agreement where they receive 55 percent of advertising revenues, and in return Facebook receives the right of first refusal. For those publishers who prefer to stay away from ads, Watch can also allow brands to gain revenue through product placement; however, they must tag the brands featured in the content. To launch in the US, Facebook partnered with over 30 influential publishers, ranging from YouTube creators and online publishers like Buzzfeed and Refinery 29, to traditional TV publishers like A&E and Univision, to everyday individuals. But for Facebook, partnering does not mean content needs to be exclusive. Publishers who debut content on Facebook are free to share that content on YouTube or their own channels after premiering it on Facebook.
This flexibility is meant to attract those influential and professional publishers whose main platform isn't Facebook. In order to jumpstart this initiative, Facebook has agreed to initially fund some of the content currently available. However, its ultimate goal is for anyone to be able to contribute content, making Watch Facebook's boldest attempt yet to compete with YouTube. A New TV and Video Screen In terms of content style, Facebook is going for quantity over quality. As Variety reported, “Facebook wants thousands of shows, and it’s OK with anything as long as it can connect with a highly engaged fan-base.” This is in line with Facebook's priority to increase visibility for videos. This is not to say users will be exposed to anything and everything on Watch. Content displayed in the Watch menu will be personalized and targeted to each user based on what's trending, their likes, and what their friends are watching. Shows on Watch also receive their own Watch pages, where users can go to see the latest episodes, comment and follow the shows, and get updates on their News Feed as they would with fan pages. The publishers would be able to track video performance and engagement, and operate as they would with a fan page. In terms of tracking, Watch is a great source of qualitative insights which can complement a brand's existing analytics and collections of data. Through Watch, brands have the opportunity to engage with viewers like never before, responding in real-time to questions and comments as they watch. They are able to track fan affinity for specific shows and products through a social lens, leveraging real-time engagement. Watch could also serve as an arena for piloting content. Potential new shows or existing ones, content themes, products, and even new characters could be tested through capsule content published exclusively on social (call it a “social focus group”).
Looking to The Future: Long-Format Original Content Production Facebook could still compete more directly with Netflix and HBO when it debuts its long-format content in 2018. Yet Facebook wants to be clear that its goal is not to buy content, as Dan Rose, VP of Partnerships at Facebook, told Reuters: “We are not focused on acquiring exclusive rights. The idea is to seed this with good content.” After all, Watch is about leveraging online video for advertising dollars. And while Facebook is committed to paying specific partners to produce content in the short term, in the long term VP of Media Partnerships Nick Grudin has said, “We’re funding these shows directly now, but over time we want to help lots of creators make videos funded through revenue sharing products like Ad Break.” It is important to remember that Watch, similar to other Facebook products, is a publishing tool. It is meant to open a fresh revenue stream for Facebook and provide publishers with an innovative avenue to engage with customers and attract new ones. Ana Gaby Viyella is an account executive in the Digital, Miami. Juliana Uribe is the head of the Digital, Miami.
https://medium.com/edelman/leveraging-a-new-partner-for-brands-256b16b00dda
[]
2018-02-05 22:09:13.705000+00:00
['Edelman', 'Facebook', 'Facebook Watch', 'Digital Marketing']
Face Mask Detection
Face Mask Detection is a project based on Artificial Intelligence, in which we detect people with or without masks. Making this project has two phases:

Train a model, using convolution layers or any pretrained model, that detects face masks in images. Then, detect faces in videos or images and get predictions from our trained model.

In this project I used transfer learning to train the model on the dataset, with the pretrained model MobileNetV2. Now the question is: what is MobileNetV2, and what is its structure?

MobileNetV2

MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings, and segmentation, similar to other popular large-scale models. MobileNetV2 is a significant improvement over MobileNetV1. The structure of MobileNetV1 and V2 is below.

MobileNets

Now, on to the code and data explanation.

Dataset

In the dataset we have two types of images: faces with a mask and faces without a mask. The training dataset consists of 4296 images, of which 2455 images are labeled with mask and 1841 images are of faces without masks, and the validation dataset consists of 300 images.

Import: First I import the required modules like tensorflow, keras, optimizers, layers, and the pretrained model (MobileNetV2).

Loading Data and Augmenting Images: Now I load all images for training and preprocess them using MobileNetV2's preprocess_input, through which the images are prepared as MobileNetV2 requires. We also augment our images, through which we get more data and more variety in the data. We add augmentation techniques like zoom, horizontal and vertical flips, and rotation. After this step we have training and validation batches of the shapes MobileNetV2 requires as input.

Loading the Pretrained Model: As I wrote previously, I used MobileNetV2. So, I download the weights of the model and create an object of MobileNetV2 type, after which we freeze the layers of our pretrained model so that the layer weights can't be modified while our model is in training.

Adding Some More Layers to the Model: I add some layers at the end of the model to achieve good accuracy and to keep the model from overfitting. At the end I add one fully connected layer which contains neurons equal to the number of labels we have (in this case we have 2 labels: mask or no mask), and after that define the model's input and output layers. Now our model structure is ready.

Compile and Train: Now it's time to train the model, but before training we have to define the loss function and optimizer, i.e., compile the model. After hyperparameter tuning I found good results with a learning rate of 0.00001. So, I used the Adam optimizer with learning rate 0.00001 and the binary_crossentropy loss function. Then I trained the model for 15 epochs and validated it on the validation data (which contains 300 images).

Testing Phase: Now we have to test the model on test data, through which we get a good idea of whether the model will work well on real-time data or not. We have 74 images of 2 classes as the test dataset, and after evaluating the model on it I got good results. Loss: 0.058

Confusion Matrix: A confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known. After plotting the confusion matrix, I see that my trained model predicts only 1 image wrong; all other images are correctly predicted. So the model will work well in real time.

As I got good results, it's time to implement it with a camera to work in real time. So, first we have to save the trained model. In the next part I will write about how to implement it with OpenCV.
https://medium.com/analytics-vidhya/face-mask-detection-6dca534f7879
[]
2020-11-19 17:23:07.216000+00:00
['Neural Networks', 'Computer Vision', 'Face Mask', 'Convolution Neural Net', 'Machine Learning']
Create Voronoi regions using Python
Overview One of the most common spatial problems is to find the nearest point of interest (POI) from our current location. Let's say a driver will run out of gas soon and needs to find the closest fuel station before it's too late; what is the best solution to this problem? The driver can check the map to find the nearest fuel station, but that can be problematic if there is more than one station in the area and he/she needs to decide quickly which station is the closest. The best solution is to represent each POI with a dot inside a polygon, so that within a polygon, the closest POI is definitely the dot inside it. These polygons are called Voronoi regions. Data Collection For this project I create Voronoi regions on the map based on POI data. All POI data were chosen randomly, while the street network data were downloaded from OpenStreetMap with the help of the OSMnx package. Create Voronoi Regions Currently the easiest way to build Voronoi regions using Python is with the geovoronoi package. Geovoronoi is a package to create and plot Voronoi regions inside geographic areas. For the map visualization, I chose the folium package. First I start by creating random points around the map. gdf = gpd.GeoDataFrame(geometry=[ Point(106.644085, -6.305286), Point(106.653261, -6.301309), Point(106.637751, -6.284774), Point(106.665062, -6.284598), Point(106.627582, -6.283521), Point(106.641365, -6.276593), Point(106.625972, -6.303643), ]) The next step is to determine the coverage area of the Voronoi regions and save it to a GeoDataFrame.
area_max_lon = 106.670929 area_min_lon = 106.619602 area_max_lat = -6.275227 area_min_lat = -6.309795 lat_point_list = [area_min_lat, area_max_lat, area_max_lat, area_min_lat] lon_point_list = [area_min_lon, area_min_lon, area_max_lon, area_max_lon] polygon_geom = Polygon(zip(lon_point_list, lat_point_list)) boundary = gpd.GeoDataFrame(geometry=[polygon_geom]) Don't forget to assign a coordinate reference system to both the gdf and boundary dataframes; here I use World Mercator (EPSG:3395). gdf.crs = {'init' :'epsg:3395'} boundary.crs = {'init' :'epsg:3395'} Convert the boundary geometry dataframe into a union of the polygon, and the POI dataframe into an array of coordinates. boundary_shape = cascaded_union(boundary.geometry) coords = points_to_coords(gdf.geometry) Calculate the Voronoi regions. poly_shapes, pts, poly_to_pt_assignments = voronoi_regions_from_coords(coords, boundary_shape) Create a graph from OpenStreetMap within the boundaries of the coverage area, use the graph to collect all street networks within those boundaries, and save them to a dataframe. G = ox.graph_from_polygon(boundary_shape, network_type='all_private') gdf_all_streets = ox.graph_to_gdfs(G, nodes=False, edges=True, node_geometry=False, fill_edge_geometry=True) Create a new dataframe to collect the street networks within each Voronoi region. gdf_streets_by_region = gpd.GeoDataFrame() for x in range(len(poly_shapes)): gdf_streets = gpd.GeoDataFrame() gdf_streets['geometry'] = gdf_all_streets.intersection(poly_shapes[x]) gdf_streets['voronoi_region'] = x gdf_streets = gdf_streets[gdf_streets['geometry'].astype(str) != 'LINESTRING EMPTY'] gdf_streets_by_region = gdf_streets_by_region.append(gdf_streets) Below is the visualization of the Voronoi regions on the map. Conclusion The map looks great! Unfortunately it is not yet ready to be used in a real-life application; the problem is that these Voronoi regions are created using Euclidean distance instead of network distance.
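As a lighter-weight way to see the same partition, the seven seed points above can be fed to SciPy's built-in Voronoi. Note this swaps the article's geovoronoi for scipy.spatial purely for illustration; it does not do the geographic clipping against a boundary polygon that geovoronoi handles:

```python
import numpy as np
from scipy.spatial import Voronoi

# The article's seven POI seed points (lon, lat).
points = np.array([
    (106.644085, -6.305286),
    (106.653261, -6.301309),
    (106.637751, -6.284774),
    (106.665062, -6.284598),
    (106.627582, -6.283521),
    (106.641365, -6.276593),
    (106.625972, -6.303643),
])
vor = Voronoi(points)

# Each input point maps to exactly one Voronoi region. A -1 vertex index
# in a region marks an unbounded edge, which is precisely why geovoronoi
# clips the regions against the coverage-area polygon above.
print(len(vor.point_region))  # 7 regions, one per seed point
```

Within each of those seven regions, every location is closer to that region's seed point than to any other seed, which is the nearest-POI property the article relies on.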
Hope you enjoyed reading this article. For more details of the code, you can just click this link.
https://medium.com/analytics-vidhya/create-voronoi-regions-with-python-28720b9c70d8
['Denny Asarias Palinggi']
2020-10-17 17:20:38.392000+00:00
['Python', 'Folium', 'Geospatial', 'Data Visualization', 'Voronoi']
Voice in Apps: YouTube
In this week's 'Voice in Apps', we are breaking down an app we are all familiar with: YouTube. For the uninitiated, Slang Labs has started a new series called 'Voice in Apps'. Every week we take an app which has integrated voice inside it and break it down. We are doing this because we think it's important to give recognition to the trendsetters and show how they are adding voice inside their apps and what the result of it is. Last week we broke down 'Gaana', a music streaming app by Times Internet. If you haven't read it, you can read it here. YouTube's voice search before the overhaul Old YouTube Voice Search Earlier versions of the YouTube app had a mic button in the search bar. On clicking this mic button, the user's voice query was transcribed to the search bar, verbatim. Whenever users gave free-form sentences like "I want to watch Naagin", the search ended with no results shown. Hindi voice search was a nightmare. YouTube's major voice search overhaul YouTube did a major overhaul of its voice feature in January of 2019. This update saw major UI and functional changes to improve voice search. Visual Changes: New User eXperience NUX for the new Voice Search The app starts by training users on the new voice search feature. It does this by showing a coach mark which says "New ways to search with your voice! Show me trending videos" in a blue dialogue box hovering over the mic button. UI Change With the new UI, on clicking the mic, a white overlay with a pulsating red mic takes over the whole screen. YouTube hasn't forgotten about dark mode and shows a black screen overlaid with the mic. Right above the mic, there are again hints present, like 'Play Charlie Puth', which are personalized to the user. When the user speaks, the utterance is transcribed clearly on the screen and is visible in a large font. We are seeing this trend of large font sizes in other apps made for the Next Billion Users by Google as well, e.g., Neighbourly.
Functional Change: Thought to Action: Earlier, a user had to click the mic button, then speak the utterance, which was transcribed to the search bar, which showed the listings, and then the user selected a video from the listing. It was time-consuming, to say the least. Now, the user just has to click the mic button and say the utterance, e.g. "Play A R Rahman", and YouTube directly plays the song, thereby reducing the thought-to-action latency. This removes the time spent browsing for videos. The more time a user spends watching a video, the more money they make. It is important to note that this happens only where the intent of the user is very clear, for example, 'play'. Other intents like 'show' still end up opening the list of videos, where the user can go and select a video. This is a pattern we are seeing in other apps like Gaana as well, where a user's voice search results directly in action. We broke down the specifics here. Navigation via Voice: One important feature that YouTube also enabled was the ability to navigate parts of the app through voice. You can tap the mic and say "Show me my history" and YouTube will take you there. This helps users navigate the treacherous hierarchies of the app with a single voice command, essentially rendering the entire app flat. Currently, not all menus and submenus are accessible by voice commands; there are various parts of the app that users still have to access visually. What's still missing? Multilingual support: The ability to do searches and navigation via voice in different Indian languages like Hindi and Tamil. With a 400% YoY increase in Hindi searches, it is necessary to make the search at least bilingual. Navigational support: Currently, voice navigation is hit or miss because users are not aware of the boundaries of the voice search. YouTube either needs to expand voice navigation support to all parts of the app or inform users of its boundaries.
Better NLP capabilities: The NLP capabilities of the voice search can be improved significantly, to let users speak free-form sentences and essentially ask for videos as naturally as they want. This major overhaul of the voice UI was a long time coming. Google is seeing a 270% YoY increase in voice search across all its properties in India. We will see a lot more functionality being added to voice search, and we will see this change replicated across a lot of different apps, not just by Google but by other brands as well. Slang allows you to add voice to your apps in the easiest and fastest way possible. Reach out to us at 42@slanglabs.in if you are interested in adding voice to your apps as well.
https://medium.com/slanglabs/voice-in-apps-youtube-25bcc288ac4c
['Vinayak Jhunjhunwala']
2019-09-21 07:28:17.832000+00:00
['Voice Search', 'Voice In Apps', 'Google', 'YouTube', 'Next Billion Users']
The Marketing of Bottled Water
Marketing If bottled water tastes virtually the same as tap water across the U.S., what exactly are we buying? That's where the ingenuity of the marketers comes into play. We're buying the beautiful story the water brands love to pitch about the location, purity, and history of each brand. Two clear examples of water brands positioning themselves at the high end of the pricing spectrum — charging up to three times the average price — through great marketing are Fiji Water and Evian. 1. Fiji Water Fiji Water came to life in 1996 when founder David Gilmour closed a deal with the Fijian authorities to exploit their natural resources. Ever since, the brand has leveraged the exotic location of the Fiji Islands to create a compelling narrative. Among the phrases they emphasize are: "High above Fiji over 1,600 miles from the nearest continent." "Bottled at the source, untouched by man, until you unscrew the cap." "Slowly filtered by volcanic rock, it gathers minerals & electrolytes that create Fiji water's soft, smooth taste." Though I've never heard of non-soft water, many Americans seem to be seduced by the idea of drinking water from the Fiji Islands. The brand also sponsors high-profile events like concerts, fashion weeks, and film festivals. Fiji water girl photobombing. Photo by Fijivillage on Twitter Among the most notable events was the 76th annual Golden Globes, where a fashion model holding Fiji Water on the red carpet became an internet sensation and was dubbed the "Fiji water girl" after photobombing numerous celebrity pictures. 2. Evian Photo by HONG FENG on Unsplash Evian water dates back to 1789, when the French aristocrat Marquis de Lessert discovered the natural mineral water from the French Alps. As he continued drinking the water regularly, his health began improving, with his kidney disease eventually healing. Word of the water's alleged medical benefits spread, to the extent that doctors were recommending the "healing" water to patients.
Ever since, Evian has used its history to make the brand synonymous with health and wellness. Using the "Live Young" slogan, it has often featured athletes, babies, and actors in its campaigns to inspire consumers to pave their own way in life. Through marketing efforts similar to Fiji Water's, Evian has established itself in the most glamorous hotels, restaurants, and events across the globe.
https://medium.com/better-marketing/the-marketing-of-bottled-water-351d2465b2f2
['Kenji Farré']
2020-11-02 15:12:44.071000+00:00
['Marketing', 'Branding', 'Humor', 'Business', 'Water']
Which Movie Character Would Mitch McConnell Be?
A 1997 cult film perfectly captured the true essence of Mitch. In over a century of film-making, which character has truly captured the essence of Mitch: his unique blend of detached cruelty, self-conscious vileness, and fully-acknowledged evil? There are a few suitable candidates, and if your first instinct was to answer Hannibal Lecter, Nurse Ratched, HAL, or the velociraptor from Jurassic Park, I don't blame you; they're good matches. However, there is a character whose story arc and personal values match those of Mitch so perfectly that the comparison is uncanny. I'm talking about Jean-Baptiste Emmanuel Zorg, of course. If you've never watched The Fifth Element, I invite you to do so right now. Drop this article, find the movie, and watch it; I'll wait for you. Should you choose to read the article first, be forewarned that it may contain spoilers. Jean-Baptiste Emmanuel Zorg is the villain of this 1997 French-American movie directed by Luc Besson (Léon: The Professional). Gary Oldman plays him brilliantly, if the adverb is even needed anymore for the man. Zorg is the CEO of Zorg Industries, and he embodies the worst aspects of humanity and capitalism. Already, he seems like a great representation of our dear Senate Majority Leader. But the comparison doesn't end here. Zorg secretly works for The Great Evil, a form of destructive darkness that threatens to end all life in the universe for no specific purpose. The Great Evil loves destruction and chaos. It enjoys hurting living things, just like its underling Zorg, and much like Mitch! Zorg also doesn't mind using enhanced interrogation techniques, as he demonstrated when he said: Torture who you have to, the President, I don't care. Just bring me the stones. You have one hour. To Zorg, the end justifies the means. It's interesting, then, that Mitch's website features an article with just that title!
In an interesting twist of irony, the article below talks about the Democrats, and their Machiavellian attitude in the Senate. Oh, those pesky, naughty Democrats always trying to prevent Congress from working properly, always not bringing bills to the floor. Don’t worry, Mitchie, you ain’t done nothing wrong. And when it comes to torture, Mitch’s grandstanding against Trump in 2017 (“torture is illegal”) doesn’t erase his response to the 2014 CIA report on interrogation techniques used in the war against terrorism: Zorg also shows a profound disdain for soldiers and veterans when he says: I don’t like warriors. Too narrow-minded, no subtlety. And worse, they fight for hopeless causes. Honour? Huh! Honour’s killed millions of people, it hasn’t saved a single one. Mitch may not talk much about the troops, but his actions when it comes to supporting the troops show as much consideration as Zorg’s words. The following dialogue between one of the movie’s good guys, priest Vito Cornelius, and Zorg demonstrates how perfectly he accepts his evilness: Priest Vito Cornelius: You’re a monster, Zorg. Zorg: I know. When Mitch is faced with the consequences of his actions or lack thereof, what does he do? When he’s told that he is a monster for all intents and purposes, how does Mitch defend himself? He doesn’t. He laughs. Mitch knows he is a monster. In one of his most famous quotes, Zorg tells Priest Vito Cornelius that: Life, which you so nobly serve, comes from destruction, disorder and chaos. He then pushes a glass off his table, causing several robots to come out and clean up in a ballet of beeps, toots, and other futuristic noises. He concludes that: You see, father, by causing a little destruction, I am in fact encouraging life. In reality, you and I are in the same business. Isn’t that exactly what Mitch is doing right now? 
He refused a $2,000 stimulus check for all Americans while bringing back to the table two issues his Great Evil overlord has been demanding for months: a commission to investigate voter fraud and a repeal of Section 230. Mitch has been playing with chaos for years, to the point that some argue his goal might be to destroy the very institution he sits in! Plus, we cannot ignore that Mitch is married to Chao. He has embraced the Chaos multiple times in his life and continues to do so to this day. Coincidence? I think not!
https://ncarteron.medium.com/if-mitch-mcconnell-were-a-movie-character-who-would-he-be-4882cfa8e4bd
['Nicolas Carteron']
2020-12-30 17:50:05.077000+00:00
['GOP', 'Politics', 'Society', 'Movies', 'Culture']
The Pursuit of Autonomy
Image Cred: Sergio Bellotto It's been two years since I left one of the biggest professional services firms in the world to create something of my own. I had a core concept of a consulting company that wholly invested in its staff's development and ultimately helped them to leave by launching their own ventures. For those people it would be a safe off-ramp from a structured career into a world of complexity. The slight issue as I resigned, though, was that it was just me, with no clients and no one crazy enough to jump ship with me. Recently I was discussing my experience over the last two years with a good friend of mine who still works at a Big4. The conversation centred around three main questions: "Why did I choose to leave?" "What is different now?" "What advice would I give to others?" I'm sharing my responses to these questions as a way of clarifying my own thoughts and providing insights for people who may be considering taking a similar risk, or who feel they need to find a job with greater 'purpose' (more on this shortly). Why did I choose to leave? The conversation started off on the differences between working in a large global consulting business (where you can be somewhat limited depending on your role/line of service) vs working in a smaller firm where you can invest every ounce of yourself into anything and receive nothing less than 100% merit (total failure is also a likely outcome). I had been at a Big4 for a number of years and towards the end of my time there felt completely lost. Our conversation reflected on our differing experiences over the last couple of years, comparing one environment to the other. This difference centred on the way I described having full autonomy and authority over the work we did, who we worked with, and the impact we could have. Two years previously I thought I hated everything about consulting; today it's the polar opposite.
Even though it's the same work, it's now my own business, my own clients, and most importantly the people I have always wanted to work with. In essence, that is why I left. I left in pursuit of total autonomy, but what is interesting is that that wasn't why I thought I was leaving at the time. Recently, I have spoken to a number of people who feel slightly lost in what they are doing. It's an undeniable millennial trait to constantly be pursuing something of 'impact' or 'purpose', and many people are quick to point that out. I am 100% that type of individual, born slap bang in the middle of the millennial cohort. I remember feeling immensely 'lost' for years in my career with no idea how to get out — but also zero idea of what 'getting out' meant or why it was important to get out. Interestingly, I think the desire to find impact- or purpose-driven work is actually more a desire to find ownership and autonomy in one's life. I know that I certainly feel more fulfilled now that I have found autonomy over purpose, because I can use my autonomy to drive the impact and purpose that I wish to have. How did this all manifest itself? As with any major personal or professional change, it was the result of something that builds tension over time until a catalyst ignites the change. The tension I found was very much focussed on the lack of autonomy and authority I had in delivering the work that I did. I fully understand from a risk perspective that you cannot give a junior with limited experience full autonomy; however, more often than not I felt restricted in what I could do and achieve. This compounded over years, evidenced through high performance grades but no promotion to the next level (it's amazing how nonsensical that feels to me nowadays). Eventually, the catalyst was another promotion rejection, and enough was enough. Why is it different now? What next?
I knew I wanted to start my own business; I actually wanted to start many businesses and invest in many others, either financially or through developing founders. But with no capital and no network, becoming the next venture builder was a 10-year pipe dream at least. I therefore hatched a plan and put it into action. I was very lucky, as during my time in the Big4 there was one client where I had full autonomy over the project and team. I loved that project, I loved that client, and the work that we did was not only rewarding for us all but of great benefit to the client. Reflecting on the project made me realise that actually, deep down, despite being slightly scarred by my previous experiences, I loved consulting. I knew that I wanted to continue to work with clients, delivering impactful work and doing so with amazing groups and individuals. From that acknowledgement, Upside was born, and everything since then has been driven by the desire to hire the best people, invest in those individuals, and then manifest that talent through impactful work for clients and the creation of impactful ventures. At the heart of Upside is a laser focus on investing in our people. Investing in what is best for them, not what is best for us. By doing this, the impact we can have is compounded by the individuals or teams we nurture and the onward impact they will have. One thing the above means is that we don't always make the most 'rational' investments of either our money or our time. Does it make sense to pay for an employee's ACA qualification when we could run out of money in 3 months? Probably not. However, an unfailing belief that fortune favours the brave makes me feel that if we continue in this vein then maybe we can just make it work. I don't want us to become the next hot-shot consultancy, or the next bleeding-edge digital agency, or a venture builder that churns out tens of concepts a year. Scale in this regard isn't a driving force.
Far more so, with Upside, balance is what we view as key to success. By focussing on scaling the 3 pillars of Upside simultaneously, we won't get distracted by one specific part and thereby lose sight of our ultimate aim. Balance then plays into the careers and opportunities we offer our people. Our belief is that if you focus 100% on the broader development of a phenomenal talent by giving them varied development experiences, then this will lead to incredible things. What advice would I give to others? My friend's and my conversation ended on this last point: what advice would I give to others, if I were fortunate enough to be allowed to offer up an opinion? My response would be: Unhappy? Pull the rip cord: If you are no longer learning, or unhappy to the point where it is affecting your broader life, get out. Your joy and talent are like a market: when they enter a bear market, the decline compounds the worse it gets. This also works on the upside. Therefore cut your losses at all costs and invest in something that drives an upside. Beware the hamster wheel: If you think the career path you are on isn't for you, trust your gut and do something about it. I know first hand how easy it can be to stay on the hamster wheel of a career, always thinking the next step is the solution when actually it is just a sticking plaster. Find the next stepping stone: Don't worry if you don't have a precise understanding of where you are going next. It likely takes a number of stepping stones to cross the river. Find a learning environment with great people: Regardless of what you do next, focus on finding an environment where you can learn from and be challenged by talented people. Don't burn bridges: Needless to say, your current colleagues value your individual talent and will therefore be upset to see you go and likely feel hurt by your departure.
Don't leave in a ball of flames; you never know when one of your previous counterparts will be a pivotal piece of your master plan. The key question at this point is whether the move has given me all the things I was searching for. I would say, to a certain extent, yes. It absolutely has, based on the points I raised earlier about autonomy leading to a self-actualisation of purpose. However, as any founder will attest, you live in constant fear that it all comes crumbling down through a perceived inability to sustain growth against the vision you aspire to. That fear is your worst enemy one day and your best friend the next. It's your worst enemy as it is all-consuming; as someone said to me just today, it's your monkey brain taking over, and once that monkey is in, it is very hard to shake. This affects you as an individual, making you closed and transactional (or certainly that is the impact I am acutely aware it has on me) as opposed to open, creative, and agile. To counter that, though, it is also your best friend, as the fear instils a perpetual drive in oneself to keep at it and to keep looking for the answers that lead to continued success. Without that fear one can ebb back into a state of lethargy or arrogance, which in actual terms is your greatest enemy and the kiss of death to any business owner regardless of stature. I hope this resonates with a few people who've made the jump and that it provides courage to anyone considering it. If you read this and feel that you need to talk things through with someone, get in touch with us. Our company is filled with people who will be able to give you some sage advice about what you are doing and hopefully offer some form of guiding hand. And if you think you may be interested in a role at Upside, we are always on the lookout for exceptional people who may be grappling with what their next challenge should be and are searching for a little more autonomy. wyndham@theupside.io
https://medium.com/swlh/the-pursuit-of-autonomy-98f60d90c5f0
['Wyndham Plumptre']
2019-08-14 20:30:33.808000+00:00
['Autonomy', 'Careers', 'Culture', 'Business', 'Startup']
Residual Learning: Understanding ResNet and Its Namesake Successors, ResNeXt and ResNeSt
Here I'll just add a brief discussion of some of the numbers in the ResNeSt paper. The ResNeSt paper opens with this claim: ResNeSt is faster than EfficientNet [10], the fastest contemporary model, while also setting new SOTA on other tasks. That looks remarkable, but there is a small secret in how the speed is computed. First, look back at the FLOPs of ResNet and EfficientNet reported in the EfficientNet paper: Compare them with the FLOPs of ResNeSt and ResNet in the ResNeSt paper: Both papers report roughly 4.14 GFLOPs for ResNet-50, while ResNeSt-50 comes in at 5.39 GFLOPs against 1.0 for EfficientNet-B2. Yet in the ResNeSt paper's latency figures, ResNeSt beats EfficientNet handily. This looks contradictory, but it is not an error. On how the speed was measured, the paper's authors say: "In this benchmark, all inference speeds are measured using a mini-batch of 16 using the implementation from the original author on a single NVIDIA V100 GPU."
https://medium.com/%E8%BB%9F%E9%AB%94%E4%B9%8B%E5%BF%83/deep-learning-residual-leaning-%E8%AA%8D%E8%AD%98resnet%E8%88%87%E4%BB%96%E7%9A%84%E5%86%A0%E5%90%8D%E5%BE%8C%E7%B9%BC%E8%80%85resnext-resnest-6bedf9389ce
['Jia-Yau Shiau']
2020-09-06 10:27:40.447000+00:00
['Research', 'Machine Learning', 'Computer Vision', 'Data Science', 'Deep Learning']
What Makes Me Unhappy as a Developer?
Part 2. How to Improve the Way We Interact With People There are two very popular books that teach us very well about this area. I will cover them in brief so that you don't have to go and read the books, but you are always welcome to do so. Book 1. Influence: The Psychology of Persuasion In Influence: The Psychology of Persuasion, the author Robert B. Cialdini lays out six weapons of influence. You can use these properties to your advantage, but use them with positive intent. 1. Reciprocity — If you do a small favour for someone which they didn't even ask for, it makes them feel obliged to reciprocate with a favour in return, which can be a much bigger one. 2. Consistency — People have a tendency to remain consistent with their promises and actions, which you can use to influence them. 3. Social proof — In times of uncertainty, people always feel more comfortable in a group. 4. Authority — Certain people can easily influence us through their title or because they are seen as experts. 5. Liking — Where there is liking, trust also comes in, so we tend to trust people we like. If we like somebody more, we will trust them more. 6. Scarcity — We value things more if there is a scarcity of them. That is why all ecommerce websites show "two left in stock." Book 2. How to Win Friends and Influence People How to Win Friends and Influence People by Dale Carnegie is divided into four parts which cover certain behaviours that you should cultivate. 1. How to make people like you Be genuinely interested in other people. Remember a person's name. Be a good listener. Talk in terms of other people's interests. Make the other person feel important. 2. How to handle people Don't criticise, condemn, or complain. Give honest and sincere appreciation. Arouse an eager want in other people. 3. How to win people to your way of thinking The only way to get the best of an argument is to avoid it.
Show respect for the other person's opinions. Never say "You're wrong." When you are wrong, admit it quickly and emphatically. Be friendly. Let the other person do a great deal of the talking. Let the other person feel that the idea is theirs. Try to put yourself in their shoes. Have sympathy for their ideas and desires. Throw down a challenge. 4. How to be a leader Begin with praise and appreciation. Call attention to people's mistakes indirectly. Talk about your own mistakes before criticising the other person. Ask questions instead of giving direct orders. Praise the slightest improvement. A short summary would be: act as if others are interesting and you will eventually find them so. From all of the above learnings, we can understand: it's not about them, it's about me. If I change the way I behave, others will change too. So far, we have been assuming that it's other people's fault and that we are in the right. But this was the hard question that was thrown at me: How do you know you're right? Man, that shook me! A lot of arguments and differences are not about the end goal but rather about how to achieve it. Do the people I work with have the intention to destroy the application? No! Then why am I fighting with them to do things a certain way? All right, since one thing is clear — they are our teammates, with the team's best interest at heart — let's make it about them and not us. Let's try to motivate them to take up initiative and lead things by themselves, so that we don't have to worry about them.
https://medium.com/better-programming/what-makes-me-unhappy-as-a-developer-1933bcca6c14
['Dhananjay Trivedi']
2020-11-20 16:44:37.724000+00:00
['Leadership', 'Programming', 'Teamwork', 'Software Development', 'Startup']
What Google’s and Salesforce’s respective acquisition of Looker and Tableau Software means for CIO’s
The BI analytics tool space is consolidating to compete against Microsoft's ensemble of Business Analytics (BA) products, which promises to solve for the entire workflow — data generation, data capture, data storage, data access stratified by persona, and data visualization. Why? Because data is the new currency. The organizations that are able to milk their data will remain the leaders in their respective industries. The faster the access to relevant data, the more quickly business decision makers can react in order to drive user engagement, reduce user churn, and build unmatched product features that continue to delight. With Google's announcement last week that it will buy Looker for $2.6b, and Salesforce's $15.3b deal to buy Tableau Software announced today, a new 'one-stop business analytics suite of tools' trend is emerging. Both of these tech giants are likely to create a one-stop cloud product that solves for the entire data workflow, from data generation to data access and visualization, to compete with Microsoft's Azure and Power BI workflow. And it won't stop there. I fully expect all three companies to continue developing AI- and NLP-powered "ask me any question and I will give you a chart/answer" features that will make data consumption easier for citizen analysts. However, building a culture of data, like that of Capital One or Airbnb, where every employee can "think data" and "act data", requires more than data maturity. Building a Data Culture requires all 4 D's presented below to line up. 1. Data Maturity: Microsoft's Azure-to-Power-BI workflow, as well as the Google and Salesforce acquisitions, is focused on delivering data maturity — easy, fast, scalable, user-level access to a single source of truth. 2. Data Literacy: Tableau, Power BI, and all the leading Business Analytics tools suffer from one ailment: a low adoption rate, defined as the percentage of seats/licenses that get meaningfully used.
It ranges between 20–30% for most of the top business analytics tools, i.e. 70–80% of licenses go unutilized or under-utilized. Can you believe it? And we are talking here about tools that are very easy to use, with intuitive drag-and-drop features as well as an "ask me a question" feature. Adoption is low even though almost every tool rollout is followed by tool training. Do you know why BA tool usage is so low? In my 7+ years of working with large, mature organizations to develop their data DNA, we repeatedly arrive at low data literacy as the key driver of low tool adoption. Data literacy is the ability to read and use data to draw meaningful conclusions that power decisions. Many leaders may be tempted to jump the gun here and say, "Oh, I can increase literacy, let's give people some training on charts and graphs." I have seen this approach fail: companies fail to move their data literacy index even after forming strategic tie-ups with top universities and rolling out mandatory classes for all (which everyone hates). I myself have been part of this bandwagon. Early on, we failed miserably in building data culture for our partner organizations, but thankfully we quickly turned those failures into learnings, and we are now able to drive success and transformation for our clients. I will share our learnings shortly. 3. Data-driven leadership: Like any other enterprise-wide strategic initiative, developing a culture of data starts with data-driven leaders. 
Granted, when we start with any organization, not all leaders are at the same level. However, by the time we are done, most leaders believe in the power of using data to drive decisions (because they have already seen the massive movement in key metrics powered by data literacy programs) and are willing to lead by example — holding their teams accountable, zero-based budgeting, following structured decision-making processes instead of making 'because I know'-based decisions, and so on. 4. Decision-making process: Lastly, a successful data culture needs a data-driven decision-making process to plan by numbers, act and execute, measure and course-correct, and look back to evaluate and decide the next set of actions. In conclusion, here are my top tips for CIOs driving data culture in their organizations. a. Data literacy needs a stratified solution — i.e., different job roles require different levels of data literacy. For example, a customer support agent might just need to be a 'data enthusiast', while a claims analyst would need to be a 'citizen analyst'. b. Data literacy goals differ between organizations, so even though the data literacy personas may be similar across organizations, the rollout is almost always customized to the client's needs and internal company culture. c. Developing a data culture is a change-management process and so must be treated like any other culture-change initiative, with full planning, phased rollout, strong communication, evangelism by leaders, and all the rest. d. Successful data culture transformation always moves top financial and customer metrics — in all of our successful data culture rollouts, we have focused on building data chops for the top teams while they solve top strategic projects for the company. So our data literacy programs show measurable movement in revenue, growth, retention, and profitability within 9–12 months, and so should yours. 
The coming months and years are going to be an interesting time for these tech giants. I predict that the business analytics platforms that go beyond data maturity into enabling data literacy and a culture of data will gain significant market share. CIOs and CTOs looking to invest in data infrastructure in 2019/2020: do pay attention!
https://towardsdatascience.com/what-googles-and-salesforce-s-respective-acquisition-of-looker-and-tableau-software-means-for-3f824c0b32f4
['Piyanka Jain']
2019-06-11 17:09:47.760000+00:00
['Analytics', 'Salesforce', 'Google', 'Data Science', 'Business Analytics']
How to Shield Yourself from the Chaos of the World Right Now
How to Shield Yourself from the Chaos of the World Right Now Especially when more is coming Photo by Kyndall Ramirez on Unsplash 2020 relentlessly serves up cause for cynicism. Between a pandemic, political unrest, and division unlike anything I've seen in my lifetime, I'm having more WTF moments than ever. Considering the fuel the coming election results will add to the societal wildfire, the chaos will likely peak. Despite that, I made a conscious decision to shield myself from it. Here's what I'm doing. Keeping my news intake to a minimum I've curtailed the time I spend watching the news to 10–20 minutes, if I watch it at all. I am not equipped to handle all the world's ills, and I don't need to continuously remind myself of them. When COVID-19 hit, I watched the news and read articles to learn as much as I could. I like to stay informed, generally. Now, it's a matter of having the information I need to protect myself. In the middle of all that, I became a minor news junkie. Compound that with the Trump administration's refusal to take any leadership role in slowing the spread, and I slowly leaned into watching one inept move after another. Then I found myself scratching my head, wondering why people still defended Trump and his troop of incompetent brown-nosers. The social unrest of the summer following George Floyd's murder took me deeper into the abyss. Between the blatant racism of some and the deliberate blind eye of others, my pessimism grew. I had to step back. I'm no good to myself or anybody if I stay focused on the negative. And where do you see a lot of the negative? The news. I like to stay informed, but not at the expense of my sanity. Staying connected to the people in my life Everyday life can keep us from staying in touch with people. Now social distancing adds an extra obstacle to keeping up communication. I'm not usually the greatest at reaching out, but I have stepped it up lately. I text and call people who make me feel good. 
I have one friend who I text multiple times a day. I have never done that in my life, but I enjoy maintaining our connection. Zoom calls still give me pause because I haven't figured out the angles that make me look decent yet. But I enjoy seeing people's faces when we're chatting. Finding joy as much as I can When I find myself getting pinched, I find the happy. I get hyper-focused when I'm in work mode. My dog has become an excellent accountability partner to keep me in check. The little monster stares at me with his amber eyes, gets on his back, bares his belly for me to rub, and grins at me until I do it. He steers me back to joy. The other animals in my neighborhood pull me out of serious mode with their quirky personalities too. A beautiful feral cat on my block who looks like a lynx sashays around like she owns it. I named her Cleopatra Jones because she's regal and sassy at the same time. Jiminy sounds like a cricket and is the loudest squirrel I have ever heard in my life. I never even realized squirrels made noise until I heard him. I thought it was a baby bird in a nest at first. What's so joyful about naming random animals? It makes me laugh. That's never a bad idea. Connecting to myself and nature I am a firm believer in filling my cup. If I'm not happy, I'm not useful to anyone. My self-care is my daily priority. I do some form of exercise and meditation daily. I meditate once in the morning and once in the evening. The evening session is especially important because it helps release all the thoughts I have accumulated during the day. Emptying my mind helps me sleep, which is a primary health foundation we all take for granted. No matter what the weather is like, I have to get outside every day. When I take my dog for a walk in the morning, I take in the sky and the trees and focus on the beauty that surrounds me. It calms me and keeps the stress at bay. Nature is a living example of resilience. After heavy rain, I took a walk through one of my favorite parks by a river. 
I saw downed trees along different pathways, and the sight reminded me of their steadfastness. They are present, standing in the same spot for hundreds of years. When one falls, they don’t fuss. The other animals don’t either. Everything keeps going. People complain about so many things all the time. Where does it get us? Nature is a constant stream of living examples of presence, the divine in action. Life takes many twists and turns, generally. In this bizarre time, the state of the world, this country, has been an unending tornado. While we can’t ignore it, we can protect ourselves from the storm. We can shield ourselves from the chaos.
https://medium.com/curious/how-to-shield-yourself-from-the-chaos-of-the-world-right-now-db6818c3a37a
['Sameena Mughal']
2020-10-30 19:56:38.136000+00:00
['Self Care', 'Personal Development', 'Society', 'Politics', 'Self Improvement']
Humanity’s Inevitable Future is in Game Design
Photo by Susan Yin on Unsplash Analysis of QAnon by a game designer. Everyone should read! A Game Designer's Analysis Of QAnon The QAnon method is like the movie Inception on a mass scale: planting seeds of misinformation so that its victims generate their understanding of an alternative reality for themselves. The author concludes that this isn't a movement that grew organically, but rather one that is orchestrated with big money. Many in the real world place too much faith in their opponents also participating in the same shared reality. Unfortunately, what happens if their opponents live in a fictional alternative reality? History is full of conflicts that are a consequence of clashing world views. Historically these world views were created by religions, that is, by narratives primarily based on fiction. It is alternative fictional narratives that have no reference to reality that lead to massive human suffering. What do the Armenian, Cambodian and Nazi genocides have in common? A fictional narrative with a final solution. Humans have always been storytellers. Religion has always been a kind of collective storytelling that creates a virtual reality in our world. It is what gives us meaning in this world. But meaning-making can be defined in a way that is simple and non-mysterious. Collectives are thus able to generate the games required to keep their believers in active participation. It is in participation that we derive meaning. The stories that have endured are stories that require participation by their listeners. The purpose of stories by Australian aborigines was for travelers to navigate their way in the outback. This eventually morphed into a unique kind of mysticism. Stories that involve participation thus become incorporated into our very being. The Japanese ikigai (the reason for being) in fact hinges on the interplay of belief and participation. Humans find meaning when they participate. Participation leads to the discovery of patterns. 
Pattern discovery leads to the feeling of understanding. Understanding leads to the feeling of wholeness. None of these feelings needs to be driven by deference to reality. They only need to be orchestrated by a good game designer. Perhaps we should not be just story makers, but rather game designers. The power of artists, specifically musicians, is that they capture our minds through our participation. Bach's popularity was a consequence of his melodies being sung in church. This primed the population to begin to resonate with his kind of music. Traditionally, we've treated mediums as static things that are consumed passively. But what do we make of mediums like games, where there is constant repetitive immersion? What do we make of media that create entire universes (i.e. the Marvel MCU, Star Wars, etc.)? Humans will continue to seek meaning in alternative realities that can be framed as a game. In fact, many spend their entire lives participating in a game known as sports. We find meaning by playing, or being a fan of, a game. The future of all human enterprise is in the creation of alternative realities that encourage participation and immersion. It is gross incompetence when studios chase novelty at the expense of preserving 'canon'. Good alternative realities still require a level of consistency and coherence. There must be some deference to logic that allows a fan to suspend disbelief and maintain a consistent set of beliefs. All too often we see studios kill the golden goose by insisting on story arcs that destroy the story's original consistency, closing off an evolutionary pathway to a much larger universe of stories. The Lucasfilm story group is responsible for the consistency of all Star Wars stories. Although it is all fiction, there is value in maintaining consistency across many stories. The Marvel MCU has executed this strategy to great effect. The next evolution of games is to make participation a means of livelihood. We see this already in cryptocurrencies. 
Bitcoin and its cousins create different narratives to draw participation or investment. It is a game that is worth half a trillion dollars. But not everyone derives meaning from making money. Fantasy sports and sports betting perhaps hit a sweeter spot in the interplay of meaning-making and money-making. And let's not forget the financial markets, also a game that is becoming more and more divorced from reality. If everything is just a game, then where can we find meaning that transcends all of these simulated realities? We can find it in the game of discovering reality. We have evolved to play games:
https://medium.com/intuitionmachine/humanitys-inevitable-future-is-in-game-design-2315515cd411
['Carlos E. Perez']
2020-12-03 04:20:50.102000+00:00
['Future Of Ai', 'AI']
5 Key Differences Between Python & GO
1. One of the first differences you'll encounter when it comes to Go vs Python is their type system While Go is a statically typed language, Python is a dynamically typed language. While both approaches have their own advantages and disadvantages, many people prefer static languages. Aside from early error detection and writing less code, there is a lot of support for static programming languages on the internet. 2. Go is a procedural, functional, and concurrent language, while Python is an object-oriented, imperative, functional, and procedural language With object-oriented programming (OOP), you write your code in classes and call it through objects derived from those classes. Another difference between Go and Python is object orientation. Python is object oriented throughout, while Go isn't exactly. Go is a strongly typed language, and its support for object orientation is moderate. 3. Ideally, Go is used in systems programming, while Python is used for solving data science problems Another big difference in the Go vs Python debate is the intended use of the languages. Python mainly focuses on web development and Linux-based application management. Go is mostly seen as a system language. System languages are languages used to create and develop operating systems rather than programs that run on the system. However, Go can also be used for web development. 4. Exception handling is different in both languages The Python exception class hierarchy consists of many different exceptions spread across a few important base class types. As with most programming languages, errors occur in a Python application when something unexpected goes wrong, and the type of the exception is shown as part of the traceback output. Go, by contrast, does not provide exceptions, and there is no try-except construct in Go. Instead, Go allows functions to return an error type in addition to a result. Therefore, when using a function, you should first check whether an error was returned. 5. 
Community and Readability Both Python and Go have great community support. However, Python is among the more popular programming languages, so if you want to learn Python, there are many sources of information on the internet. Python is seen as one of the easiest programming languages to master. While Go is simple and easy to use, it is not superior to Python in readability.
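To make the exception-handling contrast in point 4 concrete, here is a minimal Python sketch (my own illustration, not from the article); the equivalent Go convention of returning an error value is noted in the comments.

```python
# Python signals failure by raising an exception; callers opt in to
# handling it with try/except.
def divide(a, b):
    return a / b

try:
    result = divide(10, 0)
except ZeroDivisionError as exc:
    # If uncaught, the exception's type and message appear in the traceback.
    result = None
    print(f"caught {type(exc).__name__}: {exc}")

# The idiomatic Go version instead returns two values,
#   func divide(a, b float64) (float64, error)
# and every caller checks:  if err != nil { ... }
```

The practical difference is that Python lets errors propagate silently up the stack until something catches them, while Go forces each caller to acknowledge the error at the call site.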
https://medium.com/swlh/5-key-differences-between-python-go-ee52691f65ae
['Kurt F.']
2020-10-02 22:30:44.124000+00:00
['Coding', 'Programming', 'Python', 'Golang', 'Data Science']
by Martino Pietropoli
First thing in the morning: a glass of water and a cartoon by The Fluxus.
https://medium.com/the-fluxus/friday-love-you-jay-kay-5e9061628d67
['Martino Pietropoli']
2017-03-31 06:48:23.069000+00:00
['Humor', 'Comics', 'Friday', 'Cartoon', 'Music']
5 Alternatives to YouTube Monetization for Video Creators
I love creating videos on YouTube, but recently I've begun to search for alternatives. Why? Because YouTube monetization is hard to achieve, and even once you've achieved it, there's no guarantee that it will actually earn you much money at all. There are benefits to creating videos on YouTube. For one, it's easy: you can upload and go. I've made just under 100 videos, earned around $2,000, and gained 8,000 subscribers in the year and a half since I began uploading. Plus, you don't need to be a savvy marketer or salesperson, because YouTube monetization relies on advertisers rather than on you distributing and selling your own content. However, this means the actual value of your video to viewers and how much money you get from it are totally independent of each other. Instead of your viewers deciding how much your content is worth to them, advertisers decide how much your viewers are worth to them. Screenshot of the YouTube monetization for my video with ~1000 views. This means that alternatives to YouTube monetization are growing more and more interesting to video creators, especially as we begin seeing stronger creator-consumer relationships that don't rely on middlemen or advertisers, like paid newsletters. Let's get into five alternatives to YouTube monetization that don't rely on hours of viewed content, subscriber counts, or what advertisers think your viewers are worth. 1. Tap Into the Microconsulting Market With Wisio YouTube is good at letting you create a relationship with your viewers. I, at least, really enjoy answering my YouTube comments and having another way to give back to the people who consume my content. Wisio takes it one step further and lets your viewers actually reward you for answering their questions. This is a great alternative to YouTube monetization for anyone who teaches anything in their videos, or who has especially invested fans. 
The benefit of this platform is that it lets you continue building on your relationship with your viewers on a more one-on-one basis. The way it works is you set up different types of questions to be answered (for example, $20 for a 2-minute personalized video answering a personal question, $120 for a 10-minute video review of a submitted blog post draft) and then promote your account to your existing audience. There's a real personal touch when it comes to answering a question directly. Sometimes you can't really convey the depth or breadth of your thoughts and expertise on a subject in a written comment, so this method lets you sate your viewers' appetites for more of you and your knowledge. It also taps into the market for consulting. People pay a lot of money for an hour of an expert's time; by shrinking down to a micro-consultancy scale, you can increase the potential audience for your services. The benefit of this alternative to YouTube monetization is that you get to piggyback on your existing audience and allow them greater access to you. This strengthens your relationships while putting the financial control back in the hands of you and your viewers. 2. Set Up Your Patreon to Earn Money Per Video Posted Patreon is part of what's driving this sort of slow, inevitable realization that creators should get paid in proportion to the value they create for viewers. Patreon has two monetization plans creators can choose from: a monthly membership with tiered options, and a pay-per-content plan, where creators get paid when they post a new piece of content like a video. This second mechanism is a great alternative to YouTube monetization because it puts the control in your hands without any advertisers at all. I first did this when I subscribed to Ryan Dunleavy for his maps of Blades in the Dark, an RPG my husband and I played with a group of friends. 
I needed maps for our game; he was providing them at $5 a pop, plus access to any previous maps he had created. For me, it was a bargain. For him, he got value for the content he created. This method works really well for creators because the value is evident, and it makes it easy for consumers to decide whether it's worth it to them. Plus, the longer the creator keeps going, the more value there is for the consumer of that content, who gets access to the backlog. If you wanted an alternative to YouTube monetization, you could set up a Patreon where you release once-monthly videos at $10 per video. With just 10 subscribers, you'd earn $100 per month for a single video from viewers who really value your work. If ten people watched my video on YouTube, I'd earn perhaps a penny, no matter how many hours of work I put into it. The advantages of Patreon are these: the platform is easy to use, there are no advertisers involved at all, and consumers get to decide how much they value your content. Compare that to YouTube's monetization method, where you need thousands of views to earn much money at all. 3. Sell Your Videos at a Premium with MakerStreamer The downside to Patreon is that the price expectation is already set low. Patrons expect to pay between $3–20 on Patreon, whether on a monthly subscription or on a per-content basis. If you're a video creator who spends meticulous hours on your work and who wants to earn more than $100 per month with your videos, you need a huge volume of subscribers. That's what makes MakerStreamer such an enticing alternative both to YouTube monetization and to places like Patreon: the average transaction there is estimated to be $47, rather than Patreon's average of $6. Consumers come ready to pay more money for premium content. Take a look at Five Pencil Method as an example. Darrel earns over $11,000 per month by selling videos on MakerStreamer, priced slightly higher than Patreon's typical content. 
Again, there are two options for monetization: viewers can choose to purchase one-off videos, or they can subscribe for membership of ongoing videos. This dual-natured monetization plan means that, again, the choice is in the viewer's hands to decide how much they value your work. Again, unlike YouTube monetization, there are no advertisers who place a value on your viewers' worth to them; the focus is on your relationship with your viewers. 4. Sell Seminars on Your Own Website One of the strengths of YouTube monetization is the royalty scheme. You can create a video in January 2021, and in December 2021 it's still earning money, without any additional effort on your part. You can have that literal passive income without needing to rely on advertisers by selling webinars on your own website. For example, I could create a seminar on some of my best-earning blog post templates. I might price it at $40. The only difficulties would be finding a way to host this content, deliver it, and manage the incoming money. The good news is many websites, like my own Squarespace website, do this already on a few of their paid plans. This does require some work (for instance, you have to market your own videos, you have to refresh them if they get out of date, and you have to drive traffic to your site to increase your potential buyer audience), but unlike microconsulting, you don't need to continue creating content to get paid. You create a good video once and sell it for years to come. This is a great alternative to YouTube monetization because it allows you to play to your strengths, receive passive income, and get paid for the value you create by the people you're creating the actual value for, rather than advertisers. 5. Monetize Your Talents on Twitch Lately, a lot of gamers have been in the news for the high incomes they earn playing video games. 
Like sports stars, talented players stream their games, even earning millions per year just by playing video games for their fans to watch. What I didn't know is that it's also possible to stream other things and earn money, like your commentary on other people's games. For example, T90 is a caster who makes his living by commentating on other people's Age of Empires games. He played for a few years and streamed those games, but really found his calling and his fanbase when he began to commentate on the games of others. While he doesn't make millions, he is estimated to have a net worth in the hundreds of thousands — not by being monetized on YouTube or having many advertisers or even sponsors. The bulk of his income comes from streaming his commentary on Twitch, and from hundreds of thousands of devoted viewers rewarding him for his content. Twitch is definitely an up-and-coming alternative to YouTube monetization because there aren't any restrictions on when you can begin monetizing, and there's a much better-established viewer-to-creator financial relationship. Twitch viewers can purchase memberships, gifts, and other marks of their devotion to their favored streamer. And there's a much broader scope for what it's possible to stream. On YouTube, most streamers are gamers. On Twitch, you can watch streams of anything, from gaming, to cooking, to commentating, to blogging. Overall, it's a great alternative to YouTube monetization because of the low entry barrier, the wide variety of audience tastes, and because it's expected that a proportion of your income will come directly from the people you're creating for.
https://medium.com/the-post-grad-survival-guide/5-alternatives-to-youtube-monetization-for-video-creators-44289c20f4b7
['Zulie Rane']
2020-12-29 12:27:25.059000+00:00
['Freelancing', 'Video', 'Personal Finance', 'Digital Marketing', 'Creativity']
Running a .NET Project. C# From Scratch Part 2.2
We know that the application is working since it produces the default output, but what exactly is happening when we use the dotnet run command? Behind the Scenes When you use the dotnet run command, .NET automatically does several things to make your project ready to run. In this section, we'll look at the steps .NET takes behind the scenes to run your application. Restore The first thing that happens is .NET implicitly runs the command 'dotnet restore'. The dotnet restore command tells .NET to go out and get the correct version of any packages used by your application. Packages are pieces of code that other developers have written and that you can use in your application. In the world of .NET, the package system is called NuGet. All of the packages being used by your application are recorded in the .csproj file, along with the version of each package being used. When the dotnet restore command is executed, .NET goes out and gets each package listed in your .csproj file from the internet. The packages that are used in your application are called external dependencies. In our basic project, we don't have any external dependencies, so nothing happens when .NET runs this command. Build Next, .NET runs the 'dotnet build' command implicitly. This command tells .NET to compile your source code files into a binary file that can execute on your machine. The tool that does this is the C# compiler. The C# compiler takes all of the source code files in your project as an input and outputs a single binary representation of your source code. This binary format is faster to execute on your machine. The output file is a .dll file. DLL stands for Dynamic Link Library. In .NET, this is referred to as an assembly. Run Finally, the dotnet run command is executed. This command tells .NET to bring the application to life using the .NET Core Runtime and to execute the instructions in the .dll file. 
It’s important to note that if you want to run your application, you need to use the .NET Core Runtime. It’s not possible to run the application via the dll file directly. This is because it’s the .NET Core Runtime that knows how to launch your application, manage memory, convert the instructions in the dll file into instructions that your processor understands and tear down the application when it’s finished running. What’s Next? In this part of the series, we learned how to run a .NET application using the dotnet run command. We also learned what .NET is doing in the background to convert our project into an application that can be run on our machine. In the next part of the series, we’ll learn how to open the project in Visual Studio Code and look at the most important parts of our code editor. You can find that part of the series here.
https://kenbourke.medium.com/running-a-net-project-6d18b840b607
['Ken Bourke']
2020-12-16 15:23:07.453000+00:00
['Dotnet', 'Csharp', 'Software Development', 'Learn To Code', 'Programming']
Pain in the Data
I have been working on a data analytics project for around 3 weeks. The project aims to visualize and allow querying a database of employees based on their skills, industry, and specialty. It is a very interesting and challenging project: it sounds fairly simple, yet it is taking a surprising amount of time. This is not a bad thing, as I took the opportunity to verify a certain fact about data science. The challenge lies in the data itself. I received the data as a CSV (comma-separated values) file, with each row being a record of Name, Email, ID, Skills List, Skills Scores List, Region, Industry, and Specialty. I was working in Python and developing a web app running on IBM Cloud, so I went with the pandas library to handle the data for me. Just uploading the data to a database on the cloud was painful. Converting the CSV file to a JSON (JavaScript Object Notation) format was a challenge because the data was organized in such a way that each employee had one row for each region, skill, industry, or specialty. I essentially had to: Combine all rows for an employee into one row Clean the data types Convert to JSON It took me a week just to clean the data types, and this was just the first step in the project: uploading the data to a Cloudant NoSQL database. One might ask why I used JSON and NoSQL when I could have used a tabular format and a SQL database. There are two main reasons: primarily because I am more comfortable working with NoSQL, and second because I was doing an experiment. Then came the challenge of querying the data once I received a query identifying the requested combination of region, skills, industries, and specialty. Structuring the data right for a query was a challenge which took around 3 days to address; if it weren't for the pandas library, it would have taken me maybe a week or two. Funny enough, the total time I spent on building the structure of the web app, login, and user interface all in all took around 2 or 3 days. 
This little experiment of mine shows a very important fact about data science and analytics: 80% of the time is spent cleaning the data I spent around 10 days cleaning and preparing the data, and just 4 days querying it and building the web app. Luckily, I was doing everything in Python, which provides a set of great tools and libraries for data science. My choice of database was not the best for this application, but in a real-life situation, not everything is so sweet: you almost always have to restructure, reformat, and reorganize the data.
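The three preparation steps described above (combine rows per employee, clean the data types, convert to JSON) can be sketched with pandas. The column names and sample rows below are hypothetical stand-ins, since the article's actual file isn't shown; the output is a list of JSON-ready documents, one per employee.

```python
import io
import json

import pandas as pd

# Hypothetical sample mirroring the article's layout: one row per
# (employee, skill) combination rather than one row per employee.
csv_data = io.StringIO(
    "Name,Email,ID,Skill,Score,Region\n"
    "Alice,a@x.com,1,Python,5,EMEA\n"
    "Alice,a@x.com,1,SQL,4,EMEA\n"
    "Bob,b@x.com,2,Go,3,APAC\n"
)
df = pd.read_csv(csv_data)

docs = []
for (emp_id, name, email, region), grp in df.groupby(["ID", "Name", "Email", "Region"]):
    docs.append({
        # Clean the types: numpy ints -> plain Python ints for JSON.
        "ID": int(emp_id),
        "Name": name,
        "Email": email,
        "Region": region,
        # Combine all of the employee's skill rows into one nested mapping.
        "Skills": {sk: int(sc) for sk, sc in zip(grp["Skill"], grp["Score"])},
    })

json_text = json.dumps(docs)  # one document per employee, ready for a NoSQL store
```

Each dict in `docs` can be bulk-uploaded to a document database such as Cloudant as-is.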
https://medium.com/astrolabs/pain-in-the-data-4196845615d
['Aoun Lutfi']
2017-10-05 06:58:43.756000+00:00
['Data Analysis', 'Ibm Cloud', 'Python', 'Insights', 'Data Science']
Roulette: The House Always Wins (Python Simulation)
Photo by: Kay

The other day, I fell prey to the YouTube algorithm. I began watching videos about cars, then I ended up watching videos about roulette strategies to maximize profit. You win this round, YouTube. Anyway, I don't feel that there are any winning strategies; after all, the house always wins. However, I couldn't settle for just knowing that the house always wins. Today, we'll build a roulette game in Python, we'll perform a Monte Carlo simulation, and we will conduct an experiment to see how fast we can become millionaires (don't quit your job just yet).

Background

Before we start writing code, we need to understand how roulette works. Roulette is a game in which a ball is spun around a wheel that is spinning too. The ball has 38 different places to land. You can win in the following ways:

1. Guess the color of the number.
2. Guess the number.
3. Guess if the number is even or odd.
4. Guess which area the winning number is located in. For example, whether it's in the first 1/3 of the numbers, the second 1/3, etc.

How hard can it be, right? Extremely hard. Why? Because the odds will always be stacked against you. For example:

You can try to guess the color of the number. Black or red. However, there is a third color that the casino will not let you choose: green, which appears on 2 extra numbers, 0 and 00. No matter if you pick red or black, you will only win around 47% of the time.

You can try to guess if the number will be even or odd. 50/50, right? Nope. 0 and 00 don't count as even or odd, so you lose your wager.

What about guessing the zone? It should be easier, right? Nope, you still have 2/3 of the board against you. What if you bet on all thirds available? Congrats! You won, but you still haven't made money. Why? Each third pays 2 to 1. If you place a $10 bet on all thirds, the winning third pays you $20, but you lose the $20 you placed on the other two thirds. Unless it lands on 0 or 00, in which case you lose all $30.

Okay, but what if you bet on a single number? The payout should be substantial, right? You're right.
It pays 35 to 1, but you will only guess the number about 2.6% of the time. Now that I've discouraged you enough, let's build the game.

Building Roulette

Before we get started, I must disclose that this game doesn't replicate roulette in real life perfectly. For example, you cannot place bets on more than one space, and it uses a random algorithm. The random algorithm part is important because it means that it's not truly random. Someone designed the package to act random. However, if you crack the algorithm you can potentially predict correctly what number comes next.

Anyway, we will start with building the board. In this case, we will use a dictionary to list all spots with their assigned color.

```python
roulette = {'00':'g',0:'g',1:'r',2:'b',3:'r',4:'b',5:'r',6:'b',7:'r',
            8:'b',9:'r',10:'b',11:'b',12:'r',13:'b',14:'r',15:'b',16:'r',
            17:'b',18:'r',19:'r',20:'b',21:'r',22:'b',23:'r',24:'b',25:'r',
            26:'b',27:'r',28:'b',29:'b',30:'r',31:'b',32:'r',33:'b',34:'r',
            35:'b',36:'r'}
```

As you can see, we pass 00 as a string because Python does not support 00 as a number literal. This will not be an issue because when we use the get() method, it will not care whether the key is a number or a string.

Next, we will create a dictionary with possible bets as keys and what they mean.

```python
bets = {'first_dozen':range(1,13),'second_dozen':range(13,25),'third_dozen':range(25,37),
        'red':'r','black':'b','even':'even','odd':'odd',
        'first_column':[1,4,7,10,13,16,19,22,25,28,31,34],
        'second_column':[2,5,8,11,14,17,20,23,26,29,32,35],
        'third_column':[x*3 for x in range(1,13)],
        'high':range(19,37),'low':range(1,19),
        'single_num':list(roulette.keys())}
```

For example, if we use the first_dozen key, then we are betting on numbers 1–12. Remember, the range function creates a range up to the number before the one it was told to stop at. The numbers in the columns were assigned that way because the board consists of 36 numbers arranged in 3 columns by 12 rows. High and low just split the numbers 1–36 into halves.
Now, we need to get some payouts. We achieve this by using a dictionary. The payouts we used are the same ones a well-known casino uses.

```python
bets_payout = {'dozen':2,'color':1,'even_odd':1,'column':2,'high_low':1,'single_num':35}
```

Great! We have everything we need to start adding the logic to the game. We need three functions:

1. Spin the roulette.
2. Check if we have a winning bet.
3. Calculate our payout.

```python
from random import choice

def spin_roulette():
    number = choice(list(roulette.keys()))
    color = roulette.get(number)
    return number, color

def calculate_payout(bet_amount, payout):
    bet_payout = bets_payout.get(payout)
    return bet_amount * bet_payout

def check_win(bet_amount, bet, test, num, color):
    if bet in 'even odd':
        if (num == 0) or (num == '00'):
            return 0
        if ((num % 2) == 0) and (bet == 'even'):
            return calculate_payout(bet_amount, 'even_odd')
        elif ((num % 2) == 1) and (bet == 'odd'):
            return calculate_payout(bet_amount, 'even_odd')
        else:
            return 0
    elif bet in 'black red':
        if color == test:
            return calculate_payout(bet_amount, 'color')
        else:
            return 0
    elif 'column' in bet:
        if num in test:
            return calculate_payout(bet_amount, 'column')
        else:
            return 0
    elif 'dozen' in bet:
        if num in test:
            return calculate_payout(bet_amount, 'dozen')
        else:
            return 0
    elif bet in 'high low':
        if num in test:
            return calculate_payout(bet_amount, 'high_low')
        else:
            return 0
    else:
        if num == test:
            return calculate_payout(bet_amount, 'single_num')
        else:
            return 0
```

Bonus: how would you make this program better? Leave a comment. Everything is pretty straightforward with all three functions. The only thing I have to add is that I used the random package to "randomize" the number selected from the roulette.

Monte Carlo Simulation

We will look at the percentage of player wins, the player winning amount, and the house winning amount for three strategies:

1. Betting always on even. Expected to converge to 18/38 wins.
2. Betting always on 00. Expected to converge to 1/38 wins.
3. Betting on the opposite color that shows up.
(For example, if red shows up this round, then we will bet black next round.) Expected to converge to 18/38 wins.

Some of the rules we will follow: we will bet 15 units every time, and we will run 10,000 trials.

First, we'll create a function to spin our roulette and determine if we win.

```python
def custom_feeling_lucky(bet_amount, bet, test=False):
    if test == False:
        test = bets.get(bet)
    num, color = spin_roulette()
    return check_win(bet_amount, bet, test, num, color), color
```

Now, we'll build our simulations.

Bet always on even:

```python
import numpy as np

win_lose_even = []
for i in range(0, 10000):
    win, color = custom_feeling_lucky(15, 'even')
    win_lose_even.append(win)

even_wins = [x for x in win_lose_even if x > 0]
player_winning_amount_even = np.sum(win_lose_even)
house_winning_amount_even = np.sum([15 for x in win_lose_even if x == 0])

print("Win Percentage: {}".format(len(even_wins)/len(win_lose_even)))
print("Player Win Amount: {}".format(player_winning_amount_even))
print("House Win Amount: {}".format(house_winning_amount_even))
```

Results:

Win Percentage: 0.4723
Player Win Amount: 70845
House Win Amount: 79155

Bet always on 00:

```python
win_lose_00 = []
for i in range(0, 10000):
    win, color = custom_feeling_lucky(15, 'single_num', '00')
    win_lose_00.append(win)

wins_00 = [x for x in win_lose_00 if x > 0]
player_winning_amount_00 = np.sum(win_lose_00)
house_winning_amount_00 = np.sum([15 for x in win_lose_00 if x == 0])

print("Win Percentage: {}".format(len(wins_00)/len(win_lose_00)))
print("Player Win Amount: {}".format(player_winning_amount_00))
print("House Win Amount: {}".format(house_winning_amount_00))
```

Results:

Win Percentage: 0.0242
Player Win Amount: 127050
House Win Amount: 146370

Betting on the opposite color that shows up:
```python
win_lose_opposite = []
bet_color = 'red'
for i in range(0, 10000):
    win, color = custom_feeling_lucky(15, bet_color)
    win_lose_opposite.append(win)
    if color == 'r':
        bet_color = 'black'
    elif color == 'b':
        bet_color = 'red'
    else:
        bet_color = choice(['black', 'red'])

wins_opposite = [x for x in win_lose_opposite if x > 0]
player_winning_amount_opposite = np.sum(win_lose_opposite)
house_winning_amount_opposite = np.sum([15 for x in win_lose_opposite if x == 0])

print("Win Percentage: {}".format(len(wins_opposite)/len(win_lose_opposite)))
print("Player Win Amount: {}".format(player_winning_amount_opposite))
print("House Win Amount: {}".format(house_winning_amount_opposite))
```

Results:

Win Percentage: 0.4733
Player Win Amount: 70995
House Win Amount: 79005

As you can see, the winning percentage converged to what was expected. In every simulation we did win some money, but the house won even more. The house always wins. When I was doing my research for this article, I came across a popular casino that had the following words on their website: "Luck isn't with you. Luck isn't against you. Luck doesn't care." I guess that would be true if the game weren't stacked against you.

Becoming a Millionaire

I decided that we would run this experiment by betting only on red. Why? This is the bet that gives you the best chance of winning, though still not a better chance than the house, of course. We're interested in finding out how many trials it would take us to reach a million dollars and how much money we spent to reach it.

```python
trials = 0
money_spent = 0
money_made = 0
while money_made < 1000000:
    win, color = custom_feeling_lucky(15, 'red')
    trials += 1
    money_spent += 15
    money_made += win

print('Trials: {}'.format(trials))
print('Money Spent: {}'.format(money_spent))
print('Money Made: {}'.format(money_made))
```

Results:

Trials: 141076
Money Spent: 2116140
Money Made: 1000005

Looks like we need to spend $2 million to win $1 million, and we need to spin the roulette about 141,000 times.
It doesn't look like this is such a good deal for us.

The Wrap Up

It's obvious that the house always wins, but let's get past that. This project was a fun experiment that touched on statistics, programming, and analysis. If you're a veteran programmer, I think you should still practice easy projects like this. Going back to the basics is always a must. If you're a newbie programmer, projects like this will push you to get better.
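As a closing sanity check on the simulations: the even-money results match the analytic house edge of a double-zero wheel. This is standard roulette math, not part of the original code.

```python
# Expected value of a $1 even-money bet (red/black, even/odd, high/low):
# 18 winning pockets out of 38, 20 losing ones.
ev_per_dollar = (18 / 38) * 1 + (20 / 38) * (-1)   # about -0.0526, a 5.26% house edge

# Over 10,000 spins at 15 units each (150,000 units wagered), the expected
# net loss is about 7,895 units. The "always bet even" run above lost
# 79,155 - 70,845 = 8,310 units, well within normal variance of that figure.
expected_loss = -ev_per_dollar * 10000 * 15
```

The same 2/38 edge applies to every bet on the board, which is why no staking strategy changes the long-run outcome.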
https://medium.com/swlh/roulette-the-house-always-wins-python-simulation-134475508aed
['Alejandro Colocho']
2020-12-29 21:40:55.904000+00:00
['Monte Carlo Simulation', 'Python', 'Data Science', 'Data Analysis']
FedEx Gears Up for All-Out War With Amazon
FedEx CEO Smith was taking a week off with his family in February when emails suddenly started flooding his inbox. "It wasn't much of a vacation," he recalls. "I was on the phone day and night." The immediate crisis was that China and other Asian countries were grounding flights and quarantining pilots who landed there, throwing operations at FedEx — the world's largest cargo airline — into turmoil. Smith, a self-proclaimed "logistics network geek," oversaw a massive juggling of the company's 670 aircraft, 5,000-odd package sorting facilities, 180,000 ground vehicles, and half a million employees, in order to try to work around the restrictions. As the pandemic took hold in the U.S. in March, the juggling expanded to diverting employees and packages away from facilities in the hardest-hit areas, like New York City, and finding a way to move personal protective equipment (PPE) to health care organizations. The company directed more than 150 flights and 1,000 ocean cargo containers to move PPE into and around the U.S., and set up 28 new flight legs just to process Covid tests. "We can flex our networks better than anyone in the business," boasts Smith. Meanwhile, FedEx's customer help lines were being flooded with callers mostly looking for the answer to one question: Are you delivering packages? In spite of some initial delays, the packages were still getting through — even on Sunday, a new service FedEx happened to roll out just before the pandemic hit, and which turned out to be a special boon for shipments of perishable food items to consumers who were cut off from in-person shopping. It didn't hurt that FedEx employees were also declared essential workers. FedEx's reputation for fast, reliably on-time shipping services goes back to its founding by Fred Smith in 1971, famously inspired by a paper he wrote as a Yale undergraduate.
The paper described a delivery service for urgent or valuable items — think medicine, jewelry, or high-tech parts — that involved flying all packages to one central location every night for sorting, and then flying them back out for delivery the next day. In other words, Smith had dreamed up a service that was almost comically inefficient, but fast and dependable. (Legend has it that Smith received a “C” on the paper, but he himself has never made that claim.) After founding Federal Express to do just that, Smith ensured the company became one of the first in the world to exploit information technology strategically, pioneering then-radical innovations such as bar-code scanning and sharing tracking data online. That tech-forward sensibility continues today, including the company’s September announcement that it can now use Bluetooth sensor chips on packages to let customers track almost every inch of the journey in real time. FedEx pricing is complex — the company’s standard pricing list is 79 pages long — and further varies with an array of discounts and surcharges. But to use some rough, typical numbers, the increases and surcharges levied since the start of the pandemic have already added about $3 to the $14 cost of shipping a relatively small, light package via FedEx Ground, with more increases scheduled for the holidays and beyond. UPS and Postal Service rates, while up, remain a few dollars cheaper for small, light packages in most cases. (While the high-urgency FedEx Express is the heart of the company’s business, FedEx Ground is its cheapest and slowest service, which competes more directly with the Postal Service and UPS.) The percentage of consumers who do more than half their shopping online has nearly tripled from 16% to 45% since the start of the pandemic — and 73% of new online shoppers say they enjoy it more than they expected. 
A few dollars is a big difference on an e-commerce order of, say, $25, and FedEx's pricing disadvantage tends to grow with package size and weight. But while an $11 shipping fee might be cheaper than $14, it's still a lot to pay for shipping on a small order — especially when you're running a small business. Chandler Tang opened her San Francisco artsy gift shop PostScript in November last year, only to have to scramble to throw up an online store just four months later when the pandemic hit. After a self-taught crash course in shipping, Tang went with USPS for most in-state sales, and UPS for more distant customers. What about shipping via FedEx? Never, insists Tang. "It would cost three times as much as the Postal Service," she says, adding that FedEx makes it too difficult for small businesses to negotiate the corporate discounts that make FedEx's services more competitive. But while FedEx might have less small-business-friendly pricing than its competitors, the bigger problem is that shipping is just too expensive for an economy that's increasingly buoyed by e-commerce. Glenn Gooding, president of consulting firm iDrive Logistics, estimates that in recent years about half the packages shipped by FedEx and UPS have gone to homes rather than businesses. But during the pandemic, he says, that's jumped to as high as 80% with rocketing e-commerce sales. This trend is only likely to grow. A Pitney Bowes survey found that the percentage of consumers who do more than half their shopping online has nearly tripled from 16% to 45% since the start of the pandemic — and 73% of new online shoppers say they enjoy it more than they expected. FedEx itself has predicted this growth rate in e-commerce in past years' earnings reports; but the company estimated it would take three years longer to reach this level. In other words, the pandemic has been a powerful accelerant for an ongoing trend.
As shoppers move to buying more and more online, the added costs of shipping will become a bigger and bigger inhibitor. Sucharita Kodali, an e-business analyst at research firm Forrester, notes that the average e-commerce order is for about $50, making shipping costs of $8 to $20 a real burden. “Consumers are highly price sensitive, and delivery fees can make the difference,” she says. When Tang, the gift-shop owner, first started shipping early on in the pandemic, she offered it for free, but after a few weeks of losing money on the deal she began tacking on a $5 fee for local deliveries and $10 for all others. “We had a surge of orders from people who wanted to beat the shipping charges, and then we had a big dip when the fees kicked in,” she says. “I’d say we’re losing about a quarter or third of our customers because of the fees.” And that’s in spite of the fact that her fees don’t cover her own shipping costs. To try to recapture some of those customers, Tang is experimenting with dropping the fees on larger orders, while raising item prices to recover some of the costs. So consumers ultimately pay for the shipping one way or another. Or do they? That notion — along with everything about e-commerce shipping — becomes a little convoluted when Amazon enters the picture.
https://marker.medium.com/fedex-gears-up-for-all-out-war-with-amazon-e59caa31b8e3
['David H. Freedman']
2020-12-04 15:25:31.599000+00:00
['Fedex', 'Amazon', 'Business', 'Ecommerce', 'Pandemic']
When — If Ever — Can You Trust Google?
In Google, we trust. A few days ago, though, Google did something that has to call into question whether the search and advertising behemoth is trustworthy. Trust Google? That’s getting harder. Let me be clear that I’m talking about some very specific stuff here. I still think my decision to start using a Chromebook was a good one, and I feel the same way about my Nexus 7 tablet. And despite Google’s continued tweaking of the way they do search and what it means to your privacy, I’m OK letting the Google pipe stay open all the time in all my devices. But if you think you can have everything Google offers for free, or even that you can rely on Google to keep providing tomorrow what it was promoting yesterday, it’s time to wake up. I’m painfully aware that this story can sound whiny, or entitled, or negative in any number of other ways. I promise, it’s none of those things. I’ll always defend Google’s right to make a profit and I’m conflicted as to whether GOOG “is a monopoly” and should be legislated toward involuntary business change. It seems, by the way, that Google is changing the way they handle certain things to avoid exactly that. My concern is YOU. When you take advantage of all those free Googies, are you ready to deal with the changes that you’ll encounter whenever Google says so? Google’s latest series of changes has been announced. In a blog post innocuously titled “Winter Cleaning”, Google has discontinued a few products and services, and in most cases their decisions seem to be a simple matter of concentrating on things that they can sell or give away more of; big companies don’t like marginal sellers. Most cases. Not All. One of things Google is discontinuing is Google Sync. To be honest, I think Google Sync is a buggy, unreliable piece of code, and that people shouldn’t have ever trusted it for hedging their bets between Microsoft Outlook and Gmail. 
And, oh yeah: if you're a paid Google Apps user instead of a free user like most people, Google Sync isn't going away — yet. But yikes. The discontinuation of Google Sync means that if you're using that tool to bridge Outlook and Gmail, you have to either start paying for a Google Apps account so you can keep doing that, or accept that Google has forced your hand; in the next few weeks, you're going to have to make a choice. Again, I support Google's right to make money, and reiterate that the responsibility for managing your business lies with you. But whether you naïvely believed Google would keep giving you all that free stuff forever, or are more pragmatic about things, you're now getting just a few weeks' notice to enact a complex business change. At least that's better than when Google gave us all under a week to stop using Microsoft Office files in Google Drive. And Google wins either way, of course, with you paying for that victory. There are a couple of takeaways from this story. First, Google really does seem to be entering a business phase where they fear anti-trust regulation. And that's great, so long as the result isn't a series of fear-based maneuvers by Google; nothing's more dangerous than a terrified monster. What would be nice, and could make things a lot easier for everyone to navigate, would be for Google to finally reveal their secret sauce. But the big point is that you need to start managing all these "free" tools, and figuring out how relying on them can impact your business. You can contact me here to talk about it. Trust Google? In one way, sure: You can absolutely trust Google to look out for Google.
https://medium.com/business-change-and-business-process/when-if-ever-can-you-trust-google-64a9a76bb1fe
['Jeff Yablon']
2016-12-19 00:22:58.971000+00:00
['E Mail', 'Google', 'Business Process', 'Anti Trust']
Machine Learning-Powered Search Ranking of Airbnb Experiences
Since offline testing often has too many assumptions, e.g. in our case it was limited to re-ranking what users clicked and not the entire inventory, we conducted an online experiment, i.e. A/B test, as our next step. We compared the Stage 1 ML model to the rule-based random ranking in terms of number of bookings. The results were very encouraging as we were able to improve bookings by +13% with the Stage 1 ML ranking model. Implementation details: In this stage, our ML model was limited to using only Experience Features, and as a result, the ranking of Experiences was the same for all users. In addition, all query parameters (number of guests, dates, location, etc.) served only as filters for retrieval (e.g. fetch Paris Experiences available next week for 2 guests), and ranking of Experiences did not change based on those inputs. Given such a simple setup, the entire ranking pipeline, including training and scoring, was implemented offline and ran daily in Airflow. The output was just a complete ordering of all Experiences, i.e. an ordered list, which was uploaded to production machines and used every time a search was conducted to rank a subset of Experiences that satisfied the search criteria. Stage 2: Personalize The next step in our Search Ranking development was to add the Personalization capability to our ML ranking model. From the beginning, we knew that Personalization was going to play a big role in the ranking of Experiences because of the diversity of both the inventory and the guest interest. Unlike our Home business, where two Private Rooms in the same city at a similar price point are very similar, two randomly chosen Experiences are likely to be very different, e.g. a Cooking Class vs. a Surf Lesson. At the same time, guests may have different interests and ideas of what they want to do on their trip, and it is our goal to capture that interest fast and serve the right content higher in search results. 
We introduced two different types of personalization, mostly by doing feature engineering on the data collected about users.

1. Personalize based on booked Airbnb Homes

A large portion of Experience bookings come from guests who have already booked an Airbnb Home. Therefore, we have quite a bit of information we can use to build features for personalization:

- Booked Home location
- Trip dates
- Trip length
- Number of guests
- Trip price (Below/Above Market)
- Type of trip: Family, Business
- First trip or returning to location
- Domestic / International trip
- Lead days

To give examples of features that can be built to guide ranking, we demonstrate two important ones:

Distance between Booked Home and Experience. Knowing the Booked Home location (latitude and longitude) as well as the Experience meeting location, we can compute the distance between them in miles. Data shows that users like convenience, i.e. a large fraction of booked Airbnb Experiences are near the booked Airbnb Home.

Experience available during Booked Trip. Given the Home check-in and check-out dates, we have an idea of which dates the guest is looking to book Experiences for, and can mark Experiences as available or not during those dates.

These two features (in addition to others) were used when training the new ML ranking model. Below we show their partial dependency plots. The plots confirmed that the features' behavior matches what we intuitively expected the model to learn, i.e. Experiences that are closer to the Booked Home will rank higher (have higher scores), and Experiences that are available for the Booked Trip dates will rank higher (which is very convenient because even in dateless search we can leverage trip dates).

2.
Personalize based on the user's clicks

Given the user's short-term search history, we can infer useful information that can help us personalize future searches:

- Infer user interest in certain categories: for example, if the user is mostly clicking on Music Experiences, we can infer that the user's interest is in Music.
- Infer the user's time-of-day availability: for example, if the user is mostly clicking on Evening Experiences, we can infer that the user is available at that time of day.

At the time they are published to the platform, each Experience is manually tagged with a category (e.g. hiking, skiing, horseback riding, etc.). This structured data gives us the ability to differentiate between types of Experiences. It also gives us the ability to create user interest profiles in each category by aggregating their clicks on different categories. For this purpose, we compute two features derived from user clicks and the categories of clicked Experiences:

Category Intensity: weighted sum of user clicks on Experiences that have that particular category, where the sum is over the last 15 days (d_0 to d_now) and A is the number of actions (in this case clicks) on a certain category on day d.

Category Recency: number of days that have passed since the user last clicked on an Experience in that category.

Note that the user may have clicked on many different categories with different intensities and recencies, but when the feature is computed for a particular Experience that needs to be ranked, we use the intensity and recency of that Experience's category. To illustrate what our model learned for these two features, we show the partial dependency plots below. As can be observed on the left side, Experiences in categories for which the user had high intensities will rank higher.
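In code, the two click-derived features might be computed roughly like this. This is my own reconstruction, not Airbnb's implementation: the article specifies the 15-day window but not the exact weighting, so the per-day decay factor below is an illustrative assumption.

```python
from datetime import date

def category_intensity(clicks_by_day, today, window_days=15, decay=0.9):
    """Weighted sum of clicks on one category over the last `window_days`.

    clicks_by_day: {date: number of clicks on this category that day}.
    Recent days are weighted more heavily; the exponential decay factor
    is an illustrative assumption, not Airbnb's published weighting.
    """
    total = 0.0
    for d, clicks in clicks_by_day.items():
        age = (today - d).days
        if 0 <= age < window_days:
            total += (decay ** age) * clicks
    return total

def category_recency(clicks_by_day, today):
    """Days since the user last clicked this category (None if never)."""
    if not clicks_by_day:
        return None
    return min((today - d).days for d in clicks_by_day)

# Toy example: 3 clicks yesterday and 1 click five days ago on "Music"
today = date(2019, 1, 15)
clicks = {date(2019, 1, 14): 3, date(2019, 1, 10): 1}
intensity = category_intensity(clicks, today)   # 0.9**1 * 3 + 0.9**5 * 1
recency = category_recency(clicks, today)       # 1 day
```

At ranking time, each candidate Experience would be scored with the intensity and recency of its own category, as the article describes.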
At the same time, having the recency feature (on the right) allows the model to forget history as time goes by, and Experiences in categories that the user last clicked a long time ago will rank lower. We built the same types of features (intensity & recency) for several different user actions, including wishlisting and booking a certain category.

Time of Day Personalization: Different Experiences are held at different times of day (e.g. early morning, evening, etc.). Similar to how we track clicks on different categories, we can also track clicks on different times of day and compute a Time of Day Fit between the user's time-of-day percentages and the Experience's time of day, as described below. As can be observed, the model learned to use this feature in a way that ranks Experiences held at the user's preferred time of day higher.

Training the ranking model: To train the model with Personalization features, we first generated training data that contains those features by reconstructing the past based on search logs. By that time we already had a bigger inventory (4,000 Experiences) and were able to collect more training data (250K labeled examples) with close to 50 ranking features. When creating personalization features it is very important not to "leak the label," i.e. expose information that happened after the event used for creating the label. Therefore, we only used user clicks that happened before bookings. In addition, to reduce leakage further, when creating training data we computed the personalization features only if the user interacted with more than one Experience and category (to avoid cases where the user clicked only one Experience / category, e.g. Surfing, and ended up booking that category). Another important aspect to think about was that our search traffic contains searches by both logged-in and logged-out users.
Considering this, we found it more appropriate to train two models: one with personalization features for logged-in users, and one without personalization features to serve logged-out traffic. The main reason was that the logged-in model, trained with personalization features, relies too much on the presence of those features, and as such is not appropriate for use on logged-out traffic.

Testing the ranking model: We conducted an A/B test to compare the new setup of two models with Personalization features to the model from Stage 1. The results showed that Personalization matters, as we were able to improve bookings by +7.9% compared to the Stage 1 model.

Implementation details: To implement serving of the Stage 2 model in production, we chose a simple solution that required the least time to implement. We created a look-up table keyed off of user id that contained a personalized ranking of all Experiences for that user, and used key 0 for the logged-out ranking. This required daily offline computation of all these rankings in Airflow, by computing the latest features and scoring the two models to produce rankings. Because of the high cost involved in pre-computing personalized rankings for all users (O(NM), where N is the number of users and M is the number of Experiences), we limited N to only the 1 million most active users. The personalization features at this point were computed only daily, which means that we had up to one day of latency (also a factor that can be greatly improved with more investment in infrastructure). The Stage 2 implementation was a temporary solution used to validate personalization gains before investing more resources in building an Online Scoring Infrastructure in Stage 3, which was needed as both N and M were expected to grow much more.
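The Stage 2 serving path described above is essentially a precomputed lookup table. A minimal sketch of that idea follows; key 0 for logged-out users comes from the article, while the data shapes, ids, and function names are mine.

```python
# Daily offline job output: a ranked list of experience ids per user.
# Key 0 holds the logged-out (non-personalized) ranking.
ranking_table = {
    0: [17, 42, 3],        # logged-out fallback ranking
    101: [42, 17, 3],      # personalized ranking for user 101
}

def rank_for(user_id, candidate_ids):
    """Order retrieved candidates using the precomputed table.

    Users missing from the table (e.g. outside the 1M most active)
    fall back to the logged-out ranking under key 0.
    """
    order = ranking_table.get(user_id, ranking_table[0])
    pos = {exp_id: i for i, exp_id in enumerate(order)}
    # Unknown experiences sort last
    return sorted(candidate_ids, key=lambda e: pos.get(e, len(order)))

result = rank_for(101, [3, 17, 42])   # personalized order
anon = rank_for(999, [3, 17, 42])     # falls back to key 0
```

This makes the O(NM) cost concrete: the table stores one full ordering per user, which is exactly why N had to be capped and why Stage 3 moves to scoring online instead.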
Stage 3: Move to Online Scoring

After we demonstrated significant booking gains from iterating on our ML ranking model, and after inventory and training data grew to the extent where training a more complex model was possible, we were ready to invest more engineering resources to build an Online Scoring Infrastructure and target more booking gains. Moving to Online Scoring also unlocks a whole new set of features that can be used: Query Features (highlighted in the image below). This means that we would be able to use the entered location, number of guests, and dates to engineer more features. For example, we can use the entered location, such as a city, neighborhood, or place of interest, to compute the Distance between Experience and Entered Location. This feature helps us rank Experiences closer to the entered location higher. In addition, we can use the entered number of guests (singles, couple, large group) to calculate how it relates to the number of guests in an average booking of the Experience that needs to be ranked. This feature helps us rank better-fitting Experiences higher. In the online setting, we are also able to leverage the user's browser language setting to do language personalization on the fly. Some Experiences are offered and translated in multiple languages. If a translation into the browser's language is available, that one is displayed. With ranking in the online world we can take it a step further and rank language-matching Experiences higher by engineering a feature that determines if the Experience is offered in the Browser Language. In the image below we show an example of the Stage 3 ML model ranking Experiences offered in Russian higher when the browser language is Russian. Finally, in the online setting we also know the country the user is searching from. We can use the country information to personalize the Experience ranking based on categories preferred by users from that country.
For example, historical data tells us that when visiting Paris, Japanese travelers prefer Classes & Workshops (e.g. Perfume making), US travelers prefer Food & Drink Experiences, while French travelers prefer History & Volunteering. We used this information to engineer several personalization features at the Origin — Destination level. Training the ranking model: To train the model with Query Features we first added them to our historical training data. The inventory at that moment was 16,000 Experiences and we had more than 2 million labeled examples to be used in training, with a total of 90 ranking features. As mentioned before, we trained two GBDT models: a model for logged-in users, which uses Experience Features, Query Features, and User (Personalization) Features, and a model for logged-out traffic, which uses Experience & Query Features, trained using data (clicks & bookings) of logged-in users but not considering Personalization Features. The advantage of having an online scoring infrastructure is that we can use the logged-in model for far more users than before, because there is no need to pre-compute personalized rankings as we did in Stage 2. We used the logged-in model whenever personalization signals were available for a particular user id, else we fell back to using the logged-out model. Testing the ranking model: We conducted an A/B test to compare the Stage 3 models to the Stage 2 models. Once again, we were able to grow bookings, this time by +5.1%. Implementation details: To implement online scoring of thousands of listings in real time, we built our own ML infra in the context of our search service. There are mainly three parts of the infrastructure: 1) getting model input from various places in real time, 2) model deployment to production, and 3) model scoring. 
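A minimal sketch of how these parts could fit together at request time, assuming made-up store interfaces and treating the two GBDT models as plain scoring functions (this is illustrative only, not Airbnb's actual serving code):

```python
# Hypothetical sketch of the online scoring path: gather model inputs,
# pick the logged-in or logged-out model, score, and rank. The stores
# and models here are stand-ins, not real infrastructure.

def rank_experiences(user_id, query_features, user_store, experience_store,
                     logged_in_model, logged_out_model):
    # 1) Get model input in real time: user features from a key-value
    #    store, experience features from memory, query features as-is.
    user_features = user_store.get(user_id, [])
    scored = []
    for exp_id, exp_features in experience_store.items():
        vector = exp_features + query_features
        # Use the logged-in model when personalization signals exist
        # for this user id; otherwise fall back to the logged-out model.
        if user_features:
            score = logged_in_model(vector + user_features)
        else:
            score = logged_out_model(vector)
        scored.append((score, exp_id))
    # Rank on the page in descending order of model score.
    return [exp_id for _, exp_id in sorted(scored, reverse=True)]
```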
The model requires three types of signals to conduct scoring: Experience Features, Query Features, and User Features. Different signals were stored differently depending on their size, update frequency, etc. Specifically, due to their large size (hundreds of millions of user keys), the User Features were stored in an online key-value store and search server boxes can look them up when a user does the search. The Experience Features are on the other hand not that large (tens of thousands of Experiences), and as such can be stored in memory of the search server boxes and read directly from there. Finally, the Query Features are not stored at all, and they are just read as they come in from the front end. Experience and User Features are both updated daily as the Airflow pipeline feature generation job finishes. We are working on transitioning some of the features to the online world, by using a key-value store that has both read and write capabilities which would allow us to update the features instantly as more data comes in (e.g. new experience reviews, new user clicks, etc.). In the model deployment process, we transform the GBDT model file, which originally came from our training pipeline in JSON format, to an internal Java GBDT structure, and load it within the search service application when it starts. During the scoring stage, we first pull in all the features (User, Experience, and Query Features) from their respective locations and concatenate them in a vector used as input to the model. Next, depending on if User Features are empty or not we decide which model to use, i.e. logged-out or logged-in model, respectively. Finally, we return the model scores for all Experiences and rank them on the page in descending order of the scores. Stage 4: Handle Business Rules Up to this point our ranking model’s objective was to grow bookings. 
However, a marketplace such as Airbnb Experiences may have several other secondary objectives, as we call them Business Rules, that we can help achieve through machine learning. One such important Business Rule is to Promote Quality. From the beginning we believed that if guests have a really good experience they will come back and book Experiences again in the near future. For that reason, we started collecting feedback from users in terms of 1) star ratings, ranging from 1 to 5, and 2) additional structured multiple-response feedback on whether the Experience was unique, better than expected, engaging, etc. As more and more data became available to support our rebooking hypothesis, the trend became more clear. As it can be observed on the left in the figure below, guests who have had a great experience (leave a 5-star rating) are 1.5x more likely to rebook another Experience in the next 90 days compared to guests who have had a less good time (leave 4 star rating or lower). This motivated us to experiment with our objective function, where we changed our binary classification (+1 = booked, -1 = click & not booked) to introduce weights in training data for different quality tiers (e.g. highest for very high quality bookings, lowest for very low quality bookings). The quality tiers were defined by our Quality Team via data analysis. For example: very high quality Experience is one with >50 reviews, >4.95 review rating and >55% guests saying the Experience was unique and better than expected. very low quality Experience is one with >10 reviews, <4.7 review rating. When testing the model trained in such a way the A/B test results (on the right in the figure above) showed that we can leverage machine learning ranking to get more of very high quality bookings and less of very low quality bookings, while keeping the overall bookings neutral. 
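The quality-tier weighting idea above can be sketched as follows. The tier thresholds mirror the examples given in the text; the weight values themselves and the field names are made up for illustration — the weights would feed into the GBDT trainer as per-example sample weights.

```python
# Sketch of the weighted objective: keep binary labels
# (+1 = booked, -1 = click & not booked) but weight bookings by a
# quality tier derived from reviews. Thresholds follow the text;
# the weight values are illustrative assumptions.

def quality_tier(exp):
    if exp["n_reviews"] > 50 and exp["rating"] > 4.95 and exp["pct_unique"] > 0.55:
        return "very_high"
    if exp["n_reviews"] > 10 and exp["rating"] < 4.7:
        return "very_low"
    return "regular"

TIER_WEIGHTS = {"very_high": 2.0, "regular": 1.0, "very_low": 0.5}

def sample_weight(label, exp):
    """Upweight high-quality bookings; leave non-booking clicks at 1."""
    if label == 1:  # booked
        return TIER_WEIGHTS[quality_tier(exp)]
    return 1.0
```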
In a similar way, we successfully tackled several other secondary objectives: Discovering and promoting potential new hits early, using cold-start signals and promoting them in ranking (this led to a +14% booking gain for new hits and neutral overall bookings). Enforcing diversity in the top 8 results, such that we can show a diverse set of categories, which is especially important for low-intent traffic (this led to a +2.3% overall booking gain). Optimize Search without Location for Clickability For Low Intent users that land on our webpage but do not search with a specified location, we think a different objective should be used. Our first attempt was to choose the Top 18 from all locations based on our ranking model scores and then re-rank based on click-through rate (this led to a +2.2% overall booking gain compared to the scenario where we do not re-rank based on CTR). Monitoring and Explaining Rankings For any two-sided marketplace it is very important to be able to explain why certain items rank the way they do. In our case it is valuable because we can: Give hosts concrete feedback on what factors lead to improvement in the ranking and what factors lead to decline. Keep track of the general trends that the ranking algorithm is enforcing, to make sure it is the behavior we want to have in our marketplace. To build out this capability we used Apache Superset and Airflow to create two dashboards: A dashboard that tracks rankings of specific Experiences in their market over time, as well as values of features used by the ML model. A dashboard that shows overall ranking trends for different groups of Experiences (e.g. how 5-star Experiences rank in their market). 
To give an idea of why these types of dashboards can be useful we give several examples in the figures that follow. In the figure below we show an example of an Experience whose ranking (left panel) improved from position 30 to position 1 (top ranked). To explain why, we can look at the plots that track various statistics of that Experience (right panel), which are either directly used as features in the ML model or used to derive features.
https://medium.com/airbnb-engineering/machine-learning-powered-search-ranking-of-airbnb-experiences-110b4b1a0789
['Mihajlo Grbovic']
2019-02-19 01:35:25.156000+00:00
['Machine Learning', 'Travel', 'Search', 'AI', 'Data Science']
Submitting your first Google Analytics Reporting API Request
Your browser will automatically launch a window that looks something like this. Create a new notebook in the page that shows up in your browser, then we are ready to begin coding some Python! Get familiar with Jupyter Notebook and Import Necessary Packages Let’s begin by familiarizing ourselves with the interface of Jupyter Notebook. Most of the commands are quite simple and easy to understand (copy, paste, cut, etc.), but I have highlighted a few functions that are very frequently used and essential for our exercises to come. Now, let’s begin coding in one of the code blocks below by adding our import statement for all of the packages that we need to make our request happen. Importing packages in Python is surprisingly simple — just type “import <package_name>” as a line of code. Python packages are usually structured in a hierarchical way, meaning that a package, such as the google-auth package, may have multiple smaller packages, with multiple even smaller packages in those small packages, and so on. To only import the packages you need, you may use the syntax “from <package.sub_package> import <sub_sub_package>”. This is usually used to: Save loading time for large packages, and Make the syntax shorter and easier to read. You may also “import <package> as <your_name>” to make the packages you have imported more accessible, and avoid naming conflicts with variables in the code you write. I have intentionally displayed multiple different ways of importing in the code below to illustrate, and you can simply copy and paste the import statement into your first code block. After pasting the code into the code block, click the “Run” button illustrated above to run the code block. After you click it, nothing will happen (this is good news), as we did not request any outputs from the code. However, you will see the “Ln” number next to the code block increase, illustrating the number of code block executions in this run. 
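The three import styles described above, side by side. The packages used here (json, os, datetime) are standard-library examples rather than the google-auth imports from the tutorial:

```python
# Three ways to import in Python, as discussed above.
import json                   # plain import: call as json.loads(...)
from os import path           # import just a sub-module: path.join(...)
import datetime as dt         # alias to shorten references: dt.date.today()

print(json.loads('{"views": 42}')["views"])
print(path.join("reports", "2019"))
print(dt.date(2019, 9, 16).year)
```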
Set up the project in Google Developer Console and get client token and secret Welcome to the development world! Now that you are a developer, time to tell Google so it can grant you API access that is exclusive to developers! To register your Google Analytics project with Google, simply follow the setup tool in the link below to set up your project on Google Developer Console; it is as easy as 1, 2, 3, 4, 5. Step 1: If you have an existing project, you may have a different screen, but choose “create a new project” regardless, unless you have a good reason to integrate with a current application you own. Step 2: Just click the button, nothing much. Step 3: Fill this out as below (since a Jupyter/IPython notebook is technically a CLI tool). Update: If you are running into an error during the OAuth Playground stage, please try to create a “web application” (instead of “Other UI”) client token instead, and in the “Authorized Redirect URIs” field, enter “https://developers.google.com/oauthplayground” as one of the options. Use that token for the following steps. Step 4: There is another screen that asks you to name the app; nothing else there, so we are going to skip it. Then you will see this screen, and you can name it to your liking. Step 5: Click download credential and open the downloaded file with a text editor; you will find your client id and client secret in there — keep this file open as we will need both in our code. Following all of the steps above, you will have access to your client id and client secret, along with Google Analytics Reporting API permission for your project. Keep those two pieces of information handy, as we will need them to construct the authorization credential — now let’s go ahead and get a credential for our Google Analytics account. Getting authentication tokens and authenticating Alright, now let’s talk about OAuth2, the authentication method used by Google to make sure you are an authorized user for your Google Analytics account. 
OAuth2 is the most popular protocol right now for authorization for all sorts of applications/APIs, including but not limited to Facebook, Google, YouTube, and so on. Let’s explain why we need OAuth2 in plain language. When running our script, we would need a way to tell the Google API client that we are indeed an authorized user with access to a selected Google Analytics account. The most direct way of authorizing is via username and password, which can be accomplished by sending our Google username and password to the Google API service. However, this practice is a huge security risk for multiple reasons: First, if we are using the Google password directly, it would mean that we have to store the Google credential somewhere that is NOT Google’s database. If I am making an application that services 100 users, I would store the usernames and passwords of those 100 users in my database, and there is no way to tell whether I stored a user’s information securely, or if I am selling the user’s information to a third party. This is a huge security risk for user information, as a simple breach of any of the applications that use Google services would result in a massive loss of user credentials, essentially paralyzing all Google-related services. Secondly, sending usernames and passwords to the Google API service would mean that real passwords and usernames are transferred over the internet, where they can be intercepted by a hacker and decrypted to steal your personal information. OAuth solves the problems above by offering a slightly more convoluted, but secure, way for the user to authorize a third-party app or script for Google services. Essentially, instead of sending usernames or passwords to Google directly, the app/script requests a link from Google, where users can sign into their Google Account via this link (it is native to Google) — the third-party app/script will not touch the username or password of the user. 
Then, if authentication is completed, Google will send the app/script an access token and a refresh token, which will serve as the access code for authorized Google services. The access token and refresh token can only be used by that specific app, for access to restricted information only as agreed upon by the users when they sign in, and may expire or be revoked at the user’s leisure. The difference between the access token and the refresh token is that the access token is designed to be short-lived, usually lasting only a few hours. However, if the app wants continued access to the user’s information, it can refresh the access token by passing the refresh token to Google again, getting a new access token to use — this protects users from unauthorized, permanent access by apps they no longer wish to grant permission to. Here’s a quick flowchart explaining how that works: So first we get our access token from Google via the Google OAuth 2 Playground link below: We need to make sure the access token is authorized for our app specifically. You can do that by typing in your own application information (the client id and client secret I told you to keep) in the “Settings” section: Each access token is only granted for a restricted purpose called a “Scope” by Google, and we only need view permission for Google Analytics here, so please select that specific scope and click continue: Then, you will be directed to your usual Google login screen; enter the account you use for your Google Analytics account, and then follow the instructions below to get your refresh and access token. As explained in the graphic, keep your access token and refresh token in a separate notepad or word document, as we will use them in the next step. 
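To make the refresh step concrete, here is a small sketch of the request an app sends when its access token expires. We only build the payload here — actually POSTing it (e.g. with the requests library) needs real credentials. The endpoint URL is Google's standard OAuth2 token endpoint.

```python
# Sketch of the token refresh described above: the app posts its client
# id/secret and the refresh token to Google's token endpoint, and gets
# a fresh access token back in the JSON response.

GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def refresh_payload(client_id, client_secret, refresh_token):
    return {
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",  # tells Google this is a refresh, not a first grant
    }

# e.g. requests.post(GOOGLE_TOKEN_ENDPOINT, data=refresh_payload(...))
```

A successful response is JSON containing a new access_token and an expires_in field (the number of seconds until it expires).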
Now let’s go back to our Jupyter Notebook and start coding the authentication engine The title of this section might be misleading — we are not going to code our own authentication engine; we are merely going to use the existing one we imported from the google-auth library previously. Recall that in the import statement, we imported the “client” object from the google-auth library; now we just need to create a client per the library’s instructions and type in the required information.
https://medium.com/analytics-for-humans/submitting-your-first-google-analytics-reporting-api-request-cdda19969940
['Bill Su']
2019-09-16 17:39:48.352000+00:00
['API', 'Tutorial', 'Coding', 'Python', 'Google Analytics']
There’s Nothing Pro-Life about The Trump Administration, or Amy Coney Barrett
There’s Nothing Pro-Life about The Trump Administration, or Amy Coney Barrett It’s all about power and control. Here’s what someone with an agenda sounds like: “I can’t say… I’m going in with some agenda. I don’t have any agenda. I have no agenda… I [only] have an agenda to stick to the rule of law.” That’s what Donald Trump’s third appointment to the Supreme Court, Amy Coney Barrett, said during her confirmation hearings this week. She was answering questions about abortion rights. Republicans can’t even fool themselves anymore. There’s no point in pretending that Trump, or any of his Supreme Court nominees, care about life in any form. If they did, they wouldn’t be going through with half their plans. Protecting life has never been on their agenda. They’re up to something else. Everyone knows it by now, which is why we’re calling them out. Silence is approval. Trump treats humans like fodder. This is about more than the Supreme Court. The Trump administration has claimed the moral high ground for years now, while continually devaluing lives that don’t serve their agenda. The pandemic is a perfect example of how the GOP views the dynamic between life and control. Some human life is expendable, so Trump can keep bragging about the economy. The most recent reports confirm what we’ve been fearing for weeks now. Trump is ready to sacrifice us for a jobs report. The White House plans to officially endorse the herd immunity approach, even though barely 10 percent of the population has been exposed to the virus at this point. He doesn’t care. Our risk is his fail safe, in case vaccine and antibody trials continue to halt. As the GOP sees it, letting a few hundred thousand additional people die is worth the cost of economic progress. Trump is dancing while people die. Forget his Sunday golfing. 
In case you’re still wondering how much Trump values life, watch him dance at a maskless rally earlier this week: Trump clearly thinks the pandemic is over, as far as he’s concerned. He talks about it in the past tense now. He caught the coronavirus. He’s getting better (allegedly). Now he assumes it’s going to go that way for everyone else. Like he said, “You catch the virus, you recover, you’re immune.” That’s true for everyone except the 220,000 people who’ve died this year. Almost a thousand people have died in just the last 24 hours. It’s a daily reality right now. It doesn’t look like he’s in mourning. This is the dance of a man who clearly cares about one life: His own. And then there’s the matter of COVID-19 reinfection, which health experts still don’t completely understand yet. None of this has stopped the White House from yanking a CDC order requiring face masks in all transportation hubs. The message is blunt: They want everyone to get sick in order to speed the recovery, knowing it’s going to kill people. The Supreme Court plans to end Affordable Care. If Trump or his administration were pro-life, it wouldn’t make any sense to snatch healthcare from millions of Americans. That goes double if your brilliant plan to deal with a pandemic is to sit back and have a diet coke, while letting a deadly virus infect everyone. Amy Coney Barrett can dodge and deny all she wants. We know enough. She’s not qualified. She’s merely the handmaid to help Trump dismantle the Affordable Care Act. She knows this. She doesn’t care. Her nomination is about her, and her ambition. You can hear this in every calculated answer she gives. When asked about matters of life and death, she refuses to give any “pre-commitments.” The sad irony is that Trump supporters are going to feel it as fast as anyone, when they’re suddenly dropped from their insurance, and their president tries to blame everything on Obama. Some still won’t care. 
They’ve made their decision to support Trump based on a single issue, abortion. So let’s talk about that for a minute… Anti-abortion laws don’t save lives. Time and again, research shows that limiting women’s access to abortion is the least effective way to reduce unwanted pregnancy. Sex education and access to contraception work far better. Red states have the strictest abortion laws, but they offer the least amount of support for families in poverty, including single mothers. They offer the least help for victims of domestic abuse. They offer the poorest mental healthcare, and they have the highest number of uninsured mothers. If that weren’t enough, they also have the highest maternal mortality rates. Take South Carolina. The state has 14 different restrictions on abortion. As of 2017, their maternal mortality rate has tripled. It’s pretty clear. Banning abortion does nothing but kill expecting moms, while making women poor and miserable. Someone with Amy Coney Barrett’s privileged education is capable of grasping this logic, and the evidence. She simply doesn’t care. She’s one of the many white, affluent women who remain insulated from the impacts of their decisions. It’s not very compassionate. It’s not very pro-life. Pro-lifers don’t see their own hypocrisy. Most pro-life advocates oppose regulation of just about everything. Guns. Markets. The environment. You name it. Most of them won’t even wear a mask. “It’s a personal choice,” they say, parroting the president. These same people want to regulate a woman’s body. They want to make it a universal law that life begins the minute a woman’s done having sex. They want to deny her the right to end a pregnancy in the last trimester, no matter what medical reason she might have. They want her to die in agony with her unborn infant, ensuring that she’ll never have another chance to become a mother. Does that sound pro-life? Pro-lifers don’t understand life at all. 
Just listen to politicians like Ted Cruz try to describe the female reproductive system. Listen to Vito Barbieri, the Idaho representative who thinks doctors should conduct a gynecology exam by having women swallow a tiny camera. Sure. That’s where the uterus is located. Her stomach. It would be hilarious if these clowns didn’t hold any real power over women’s bodies. But they do. So you know, it’s terrifying. These people want to regulate and legislate women’s bodies, but they don’t know the first thing about those bodies. It’s shameful that our leaders don’t even have to know how life is created in order to have this much power over it. All they care about is power and privilege. Pro-life politicians don’t know how sex works. They don’t know how plants pollinate. They couldn’t explain how a cell divides, or tell you the difference between an egg and an embryo. Many of them have never even seen a baby delivered. They don’t care about life at all. They care about one thing. For them, it’s all about power and control. Here’s the worst part: In the end, it’ll be a woman who helps overturn Roe V. Wade, one who understands what’s at stake. She’s more than willing to sacrifice thousands of women’s lives over the coming years, all in order to achieve her own career ambitions, in line with her own personal religious views. So congratulations in advance to Amy Coney Barrett on her confirmation. We all know it’s going to happen. She’s in good company. I’ll say that.
https://medium.com/discourse/theres-nothing-pro-life-about-the-trump-administration-or-amy-coney-barrett-e4eb36932ac4
['Jessica Wildfire']
2020-10-14 13:47:02.784000+00:00
['Society', 'Politics', 'Women', 'Feminism', 'Equality']
3BG Supply, creating centralized data solutions
3BG Supply has raised $4M in total. We talk with its team. PetaCrunch: How would you describe 3BG Supply in a single tweet? 3BG: 3BG is a technology-enabled distribution company for Industrial MRO Products and focused on creating centralized data solutions and technology to improve the customer purchasing experience. We’re solving the problem of ‘decentralization’ of data making it difficult to purchase. PC: How did it all start and why? 3BG: Co-founders Shane and Alex created 3BG because they grew weary of using catalogs. While working for his grandfather’s distribution company, Shane identified a solution for the inefficiencies our industry has faced for decades. We lack the use of technology to service customers simpler, faster, and cheaper. The catalyst for the start of 3BG occurred upon a regular visit to a customer where Shane had to identify, source, and supply various products to solve the issue at hand. Shane wasted a considerable amount of time and money throughout this process, mainly because he had to go catalog by catalog just to interchange the products needed. In fact, Shane loves telling the story where he was led to the customer’s “Catalog Room” where he stood face to face with an overwhelming amount of paper literature. It was there that he said to himself, “There has to be a better way, one in which I can search a part number and have all the different manufacturers pop up and allow me to purchase right there on the site.” Thus, the idea of 3BG was conceived. Today the company is a rapidly growing, venture backed and industry changing business. PC: What have you achieved so far? 3BG: We have a team of 23 people (30 total forecasted for year-end 2019) and 175,000+ Products Online scaling rapidly as an eCommerce distributor within the Industry. 
In addition, a Minimum Viable Product for what will be the industry’s most comprehensive interchange application, revolutionizing the way in which people identify, source and purchase industrial MRO products, is slated for launch in 18–24 months. We have acquired and serviced over 5,500 customers of which 13% are active (or have ordered more than once within the past year). We’re acquiring over 350 new customers per month and growing. PC: How will you use your recent funding round? 3BG: Our spending is split as follows: Marketing & Customer Acquisition 15% Team Building (Hiring) 55% Software Development 10% Overheads 20% PC: What do you plan to achieve in the next 2–3 years? 3BG: We forecast 300%+ Sales Growth over 2–3 years and the team growing to 72+ people. We plan to acquire and service over 39,000 customers of which 13% are active (or have ordered more than once within the past year). In 18–24 months we will launch the industry’s most comprehensive interchange application, revolutionizing the way in which people identify, source and purchase industrial MRO products.
https://medium.com/petacrunch/3bg-supply-creating-centralized-data-solutions-1a7eb78995e0
['Kevin Hart']
2019-09-03 20:14:42.224000+00:00
['Database', 'Technology', 'Startup', 'Startup Life', 'Data']
Using NextJS In A MonoRepo
Using NextJS In A MonoRepo Share your code with ease Photo by Артемий Савинков on Unsplash When you work with a monorepo, you keep all of your isolated components and utilities inside of one repository. Instead of having to keep multiple repositories in sync, we keep everything in one place. With a monorepo, you could keep your component library, your Android app, and your website all in one repository. NextJS is a framework made by Zeit. They describe it as “Production grade React applications that scale”. By using NextJS, we can abstract away the difficult parts (build configurations, server-side rendering, etc.) and concentrate on building our apps. Unfortunately, using NextJS inside a monorepo takes a little bit of setup. Let’s dig into it! Creating the monorepo We want to create a new monorepo with the following structure: packages, components, website. Start by creating a new directory, and running npm init.

mkdir monorepo-nextjs
cd monorepo-nextjs
npm init

Then we use workspaces to turn our project into a monorepo. Make the following modification to the root package.json.

{
  "name": "@monorepo",
  "private": true,
  "version": "1.0.0",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "workspaces": [
    "packages/*"
  ],
  "license": "MIT"
}

Make sure that you don’t forget to set private to true, and that you add the workspaces field. Next, we create the directory where our packages will live.

mkdir packages

Creating our first package Now, let’s create our frontend application with create-next-app. This allows us to quickly start a new NextJS application.

npx create-next-app packages/website

Let’s run our NextJS application to make sure it works. Run the following from the root directory. Our app should compile and run on localhost:3000. Using --cwd tells yarn to use packages/website as its current working directory.

yarn --cwd packages/website dev

Open your browser to make sure it’s working. It should look like this screenshot. 
NextJS running in our browser Adding another package Now that we have created our website package, it’s time to add another. Let’s create our components package. Create a new directory called packages/components and run npm init to initialize the package.

mkdir packages/components
cd packages/components
npm init

Next, we need to install some dependencies. Our components package will be where we keep generic React components. So let’s install react and react-dom. Run yarn add react react-dom to install both dependencies. Now that we have created our components package, let’s add a component. Create a file called Button.js.

import React from "react";

export const Button = props => {
  return <button {...props}>{props.children}</button>;
};

Now that we have our first component, let’s use it inside of our website package. Start by adding @monorepo/components as a dependency. Then, modify pages/index.js in our NextJS application to use our new component.

import Head from "next/head";
import { Button } from "@monorepo/components/Button";

const Home = () => (
  <div className="container">
    <Head>
      <title>Create Next App</title>
      <link rel="icon" href="/favicon.ico" />
    </Head>
    <main>
      <h1 className="title">
        Welcome to <a href="https://nextjs.org">Next.js!</a>
      </h1>
      <Button>Hello World</Button>
    </main>
  </div>
);

export default Home;

Now when we run our NextJS application, we see an error.
https://medium.com/frontend-digest/using-nextjs-in-a-monorepo-e011ff1826f5
['Malcolm Laing']
2020-04-06 09:42:15.786000+00:00
['Nextjs', 'JavaScript', 'React', 'Frontend', 'Programming']
Inside the Horrific Guns N’ Roses ‘Hell House’
It started out as a rehearsal space. They had been getting by using a room in Silver Lake owned by Nicky Beat, a Strip-scene drummer who’d spent about ten minutes in LA Guns. “Nicky wasn’t necessarily seedy,” Slash recalled. “But he had a lot of seedy friends…” Guns N’ Roses connected with various of those — the “underbelly” as Slash called it — and some would follow them back to the Hell House. Their lives were chaotic and becoming more so, and yet the chaos fired them. In the Hell House they wrote and worked up most of the songs that would appear on Appetite for Destruction, plus a few that would hold over for Use Your Illusion, too. Izzy had the riffs for “Think About You” and “Out ta Get Me”; Slash had the opening chords and riff to “Welcome to the Jungle.” “That song, if anything,’ Slash explained, “was the first real tune that the band wrote together…” Duff and Steven spent many hours jamming along to rock and funk, forging their groove, and the rhythm of “Rocket Queen” came from one of those extended jams. And they wrote quickly. “Out ta Get Me” and “Welcome to the Jungle” took little more than an afternoon to assemble. When they got to the Hell House, the fierce work ethic continued. “We rehearsed a lot of hours,” Duff recalled. In the small concrete space with their amps turned up, “our shitty gear sounded magical, clear and huge.” G N’ R on the porch of the Hell House [photo: onthisveryspot.com] They had no PA and played so loud Axl would have to scream lyrics and vocal melodies into his bandmates’ ears in order to get his ideas across. Axl and Slash were the first to become permanent residents in the garage. Izzy, Duff and Steven had girlfriends that they were living with, but they still spent most of their waking hours there. As the band began to establish itself as one of the best new acts on the Strip, they dragged others towards the Hell House too. 
There was West Arkeen, a musician neighbour of Duff’s, cut from the same cloth as the band and ultimately close enough to Axl to co-write “Yesterdays,” “The Garden” and “Bad Obsession,” as well as “It’s So Easy”; Del James, a biker turned writer and a pal of West’s, who began to hang with Axl and wrote short stories that were adapted for various lyrics and ideas, most notably the video for “November Rain”; Todd Crew, who played bass in another Strip band called Jetboy; Robert John, a photographer and friend of Axl’s whose work would become synonymous with the band’s early years; Jack Lue, another photographer, closer to Slash; Slash’s friends Mark Manfield and Ron Schneider; Duff ’s Seattle pal Eddy, who quickly tapped into Izzy’s heroin supply and was exiled back to Washington State; Marc Canter, still a true Guns believer who was to have a key, if unsung, role in Guns’ development during the Hell House era; Vicky Hamilton, a promoter and would-be manager with an eye for talent — she had booked early shows for Mötley Crüe and Poison — and the key to those precious slots at the Troubadour that Guns had begun to covet while they schlepped their wares at Madam Wong's (a Chinese restaurant) and the Stardust Ballroom (miles from West Hollywood); plus a revolving cast of bands that got to know of Guns N’ Roses as the new noise on Sunset (literally — the rehearsals were audible from ten blocks away): musical misfits like Faster Pussycat, Redd Kross, London, the rest of Jetboy and a stack of others, followed of course by girls who liked guys in bands, and then guys who liked girls that liked guys in bands, an ever-growing scene that centred around the Hell House and a cheap, dark Mexican restaurant across Sunset called El Compadre, and the Seventh Veil strip club, where the band became friendly enough with the girls to start having them come and dance on stage with them. 
Live at the Troubadour, 1985 [photo: Marc Canter / Getty] The scene itself fuelled creativity, sparked songs: when the entire band went to visit Lizzie Grey, who lived on Palm Avenue, an infamous street that ran between Sunset and Santa Monica (Slash: “more than a few sleazy chicks lived there, a few junkie girls we knew lived there…”), Lizzie passed around a bottle of cheap fortified wine called Night Train, a formidably alcoholic brew known for its ability to get the very broke very blasted very quickly. They began screaming the words “I’m on the night train” as they walked up Palm Avenue, with Axl extemporising along. The next morning back at the Hell House, they nailed the entire thing, words and music. One of the regular visitors to the Hell House, Slash’s childhood buddy Marc Canter, recalled seeing the band work on that early material. “A lot of the songs would start with some idea from Izzy like ‘My Michelle’ — the spooky intro part of ‘Michelle’ was total Izzy but without Slash we wouldn’t have gotten the harder riff that followed it. Axl would hear these unfinished songs and just know exactly how to work within them. Duff and Steven would then make the songs truly swing and really flesh them out with their ideas. You could say as some have that Axl was the most important, [but] if you took any one of those guys out of the equation it would have drastically changed all of those songs. It was truly a democracy in the beginning, at that time in 1985 or 1986 they were all on the exact same page.” All of the lyrics came from real-life situations or people. “My Michelle” was named after Michelle Young, who went to school with Slash and Steven and was a friend of Slash’s first serious girlfriend, Melissa. 
Michelle had a brief fling with Axl, who then immortalised her early life in the brutal opening couplets: “Your daddy works in porno / Now your mummy’s not around / She used to love her heroin / But now she’s underground.” The idea, ironically, had come from Michelle herself, who’d once remarked to Axl how wonderful it would be to have someone write a song about her, after listening to “Your Song” by Elton John with him. “We were driving to a show I think it was,” she described in 2014, “and that song came on and I was like, ‘Oh, that’s such a beautiful song! I wish someone would write a song like that about me.’ And then, lo and behold, came ‘My Song,’” she laughed. It wasn’t so funny, though, she admitted, the first time she heard the lyrics. “I heard it when I was at my dad’s house. I was in my bedroom [when] Axl called. He would always call me and sing me new songs. He would play this drumbeat on his knee and sing and snap to me on the phone whenever he had a new song, he would call me and sing a little and ask my opinion of it.” This time, though, she didn’t know what to say. “I was so out of it at the time, I was always high back then so when I heard it and heard the lyrics I was like, ‘Oh, it’s fine, it’s cool… do whatever you want.’” She laughed again then added, “I didn’t really honestly think that the album was going to be that huge or even that that song was gonna be on their album for that matter.” According to Slash, writing in his memoir, “Michelle loved the attention it brought her. Back then it was the best thing that had happened to her. But like so many of our friends that were drawn into the dark circle of Guns N’ Roses, she came in one way and went out another. Most of them ended up going to jail or rehab or both (or worse).” According to Michelle, though, “when the song came out I can say it was never a blessing, it was always a curse, let’s just say.”
https://medium.com/cuepoint/inside-the-horrific-guns-n-roses-hell-house-971e20df749c
['Cuepoint Selections']
2017-02-03 08:02:01.857000+00:00
['Rock', 'Drugs', 'Sex', 'Featured Rock', 'Music']
Beach Week
Brett can I call you Brett? probably not, Your calendar says Beach Week. My calendar says Trauma Week. This is the difference between us. Black out drunken game nights lost in your hippocampus. You could always say to the priest “I did a few things in college.” While I carry the fragile scar of what happened. It opened up last week as your rage spit all over your ambition. I think if you had said sorry we could have forgiven you. Even for the fact of your forgetfulness. Perhaps if you tried to look in the folds of your memory for an old story lost in its pockets, for the boy before this cartoon manhood BEACH WEEK is preserved on the dates in 1982, in capital letters and written over twice in bold marker. While TRAUMA WEEK is never scheduled. There is never a season for it. I am wrecked by the week. Its props and dangers mean I have to wait parked on the side of my life Feel the dead weight of her story; my story. While the world of men can look away. LOOK AT ME. I have done my work but my jaw and face are stone again. I watch the fucked up face of the man on tv. He looks like a drunk seventeen year old boy. This time though I am not the only victim. I am in a stadium full of victims. We are in the cheap seats watching her. Holding her. Waiting. I dont like your lies. I understand them though. “I was not there. It did not happen.” I did not push her down. Biologically, lies wont hold. They give way. The brain wants its whole story It wants repair. Until you find it in the hippocampus —you are spoiled milk. I am wrecked by the week. Its props and dangers I have to wait. Feel the dead weight of it. My poor husband trying to learn everything he is supposed to know, waiting until he can touch me again. Waiting to see if we can make plans for something else or if we will go into overtime.
https://medium.com/poets-unlimited/beach-week-a77ff6a0a632
['Annie Fahy']
2019-08-02 14:14:40.098000+00:00
['Creativity', 'Sexual Assault', 'Feminism', 'Poetry', 'Politics']
Blockchain International Show Highlights Practical Startups
London, United Kingdom — On 6 and 7 June Genaro Network’s Luke Sheehan (of the platform’s EU Business Development team) attended the Blockchain International Show at the ExCeL London exhibition center in the city’s docklands. He met with advisors, entrepreneurs, and others from the crypto scene in the U.K. and farther afield. Startup companies filled out more than half of the exhibition space and shared their concepts over two days while the remaining space and time were set aside for wide-ranging keynotes and panel discussions on the development of blockchain. Two themes stood out among the projects and the speeches: First, the fast-changing currents in the regulation of cryptocurrencies and ICO fundraising, and second, the question of practical application — how to get useful tools into the hands of ordinary people to allow them to include crypto-transactions in their daily lives. One startup aiming to make Bitcoin easier to obtain was Cashin, founded in London by French entrepreneur Benoit Marzouk. The Cashin app lets retailers who want to operate in the crypto space work like miniature exchanges for their customers. You give cash to the registered retailer, and Cashin converts the amount to Bitcoin and sends it to your wallet. Despite relying on the Bitcoin blockchain for the model to work, Cashin does not currently utilize blockchain for its own architecture or data storage. Cashin’s Marzouk and other brands at the show expressed interest in Genaro’s proposed solutions for data storage and development of decentralized applications. Among speakers presenting their experiences and projects were several planning to make investing and using tokens and currencies more accessible. These included David Siegel of the Pillar Project (makers of a ‘universal smart wallet’) and Kendrick Nguyen of Republic (seeking to open up crypto to more legally compliant ICO investing for both accredited and unaccredited investors). 
Asked by Genaro how Republic would guarantee security and privacy of data, Nguyen replied that the bulk of the team were experienced software engineers and that security was a core concern — Republic retaining only a minimum of personal data, to begin with. Neither Republic nor a majority of the other new companies spoken with had a data or security model based specifically on blockchain. The majority used tokens as a funding model or sought to give better access to the token economy. Forbes columnist Naeem Aslam chaired a final panel discussion with speakers like token economy cheerleader and editor Ismail Malik (founder of the print magazine The ICO Crowd) and Hayden Jones, founder of the Blockchain Hub incubator. Both Jones and Malik were confident about the power of crypto to make great changes to how value will be shaped and transactions made in the future. Regulation, while it is ongoing and in many ways desirable, should be a means of helping this change. “The law shouldn’t tell people what to value,” asserted Malik. Following a question from Genaro about the integration of blockchain tech into everyday life, Hayden Jones answered for the panel, stating that crypto had the momentum to either change or sweep away many aspects of the economy we tend to see as deeply embedded in daily life, including even the concept of a ‘Point of Sale’ itself.
https://medium.com/genaro-network/blockchain-international-show-highlights-practical-startups-930094909354
['Genaro Network', 'Gnx']
2018-06-13 05:25:32.406000+00:00
['Storage', 'Startup', 'Blockchain', 'Data', 'Bitcoin']
How to Build a Reporting Dashboard using Dash and Plotly
A method to select either a condensed data table or the complete data table. One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present: Code Block 17: Radio Button in layouts.py The callback for this functionality takes input from the radio button and outputs the columns to render in the data table: Code Block 18: Callback for Radio Button in layouts.py File This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the dates selected, using the callback statement Output('datatable-paid-search', 'data'), this callback changes the columns presented in the data table based upon the radio button selection, using the callback statement Output('datatable-paid-search', 'columns'). Conditionally Color-Code Different Data Table Cells One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric’s value; red for negative numbers, for instance. However, conditional formatting of data table cells has three main issues: (1) there is a lack of formatting functionality in Dash Data Tables at this time; (2) if a number is formatted prior to inclusion in a Dash Data Table (in pandas, for instance), then data table functionality such as sorting and filtering does not work properly; and (3) there is a bug in the Dash data table code in which conditional formatting does not work properly. I ended up formatting the numbers in the data table in pandas despite the above limitations.
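Code Blocks 17 and 18 are not reproduced in the text, but the callback described above boils down to mapping the selected radio value to a list of column definitions. The sketch below shows that logic; the column names, the radio values ("condensed"/"complete"), and the component IDs are hypothetical, not the author's exact code:

```python
# Minimal sketch of the radio-button -> columns logic. Column names and the
# radio values are hypothetical stand-ins for the dashboard's real columns.

CONDENSED_COLUMNS = ["Placement", "Spend", "Revenue YoY (%)"]
COMPLETE_COLUMNS = CONDENSED_COLUMNS + ["Sessions", "Conversions", "CPC"]

def columns_for_table(radio_value):
    """Return the `columns` prop for the DataTable.

    In the real app this body would sit inside a function decorated with
    @app.callback(Output('datatable-paid-search', 'columns'),
                  [Input('radio-button-condensed', 'value')]).
    """
    names = CONDENSED_COLUMNS if radio_value == "condensed" else COMPLETE_COLUMNS
    # A Dash DataTable expects a list of {"name": ..., "id": ...} dicts.
    return [{"name": col, "id": col} for col in names]
```

Because the callback only swaps the `columns` prop, the underlying `data` prop is untouched, which is what lets the condensed and complete views share one table component.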
I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide: Code Block 19: Conditional Formatting — Highlighting Cells The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash. *This has since been corrected in the Dash Documentation. Conditional Formatting of Cells using Doppelganger Columns Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements: Code Block 20: Adding Doppelganger Columns Then, the conditional cell formatting can be implemented using the following syntax: Code Block 21: Conditional Cell Formatting Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . 
One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values. The complete statement for the data table is below (with conditional formatting for odd and even rows, as well as highlighting cells that are above a certain threshold using the doppelganger method): Code Block 22: Data Table with Conditional Formatting I describe the method to update the graphs using the selected rows in the data table below.
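Code Blocks 20–22 are not reproduced in the text, so here is a minimal sketch of the doppelganger approach. The column names, the ×100 scaling, and the style-dict keys are assumptions (the filter key and its syntax have varied across Dash versions), not the author's exact code:

```python
# Sketch of the "doppelganger column" workaround. Names are hypothetical;
# the dicts follow the shape of a Dash DataTable's style_data_conditional.

def add_doppelganger(rows, real_col, doppel_col, scale=100):
    """Copy a numeric column into a helper column, scaled by 100 so the buggy
    conditional filter (which ignored the decimal part) compares whole numbers.
    The helper column is later hidden from the rendered table."""
    for row in rows:
        row[doppel_col] = row[real_col] * scale
    return rows

def conditional_styles(real_col, doppel_col):
    """Stripe odd rows, and color the *visible* column red whenever the
    *hidden* doppelganger column is negative."""
    return [
        {"if": {"row_index": "odd"}, "backgroundColor": "#f9f9f9"},
        {
            # Filter on the doppelganger, but style the corresponding real column.
            "if": {"filter_query": "{%s} < 0" % doppel_col,
                   "column_id": real_col},
            "color": "red",
        },
    ]

rows = add_doppelganger(
    [{"Revenue YoY (%)": -0.12}, {"Revenue YoY (%)": 0.08}],
    "Revenue YoY (%)",
    "Revenue_YoY_percent_conditional",
)
styles = conditional_styles("Revenue YoY (%)", "Revenue_YoY_percent_conditional")
```

In the actual app these dicts would be passed to `dash_table.DataTable(style_data_conditional=...)`, with the doppelganger column kept out of the visible `columns` list so readers never see it.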
https://medium.com/p/4f4257c18a7f#319e
['David Comfort']
2019-03-13 14:21:44.055000+00:00
['Dashboard', 'Dash', 'Towards Data Science', 'Data Science', 'Data Visualization']
There’s Only One Way to Beat Donald Trump for Good
There’s Only One Way to Beat Donald Trump for Good Treat him like he’s boring. We still haven’t learned much when it comes to Donald Trump. We keep making all the same mistakes. He’s not going away unless we do the unthinkable: Treat him like he doesn’t matter. That would be hard, because the truth is nobody really wants Trump to go away — not even liberals. Americans are deeply angry right now. Half of us need someone to hate. The other half needs a hero who can hate on their behalf. Trump does both. Donald Trump knows how indispensable he is. He’s a political novelty. So he intends to keep fuming over the election results. He’ll remain a looming presence in our lives. He’ll spend the next four years threatening a 2024 presidential run, even if he doesn’t actually plan on it. He might or might not get someone to ghostwrite a comeback book for him. He’ll continue holding rallies and calling into TV shows. Here’s what you have to understand:
https://medium.com/the-apeiron-blog/theres-only-one-way-to-beat-donald-trump-for-good-68860064d20
['Jessica Wildfire']
2020-11-26 23:28:30.540000+00:00
['Social Media', 'News', 'Donald Trump', 'Society', 'Politics']
Are Running Watches Worth It For Casual Runners?
Are Running Watches Worth It For Casual Runners? I was on the fence about buying a running watch, but six months ago I took the plunge. Photo by Bogdan Glisik on Unsplash I was on the fence for quite a while about whether to buy a running watch: On one hand, I’m a casual runner, not going particularly quickly or entering races. On the other hand, I was having trouble with my pace, and using a phone to track my runs was becoming cumbersome. Also, I knew that if I was going to get a running watch, built-in GPS would be an essential feature for me, which meant I’d be spending at least $100 on the watch. Last December, I finally decided to get one after they went on sale for Christmas. After six months of running with it, my only regret is that I didn’t buy one sooner. Even as a casual runner, the benefits of a running watch have been huge for me.
https://benyaclark.medium.com/are-running-watches-worth-it-for-casual-runners-4346984473eb
['Benya Clark']
2019-06-07 02:54:57.947000+00:00
['Health', 'Running', 'Fitness', 'Lifestyle', 'Technology']
The polar explorer using Grime to break the ice
The polar explorer using Grime to break the ice It’s not often someone compares the voices of seals to the sounds of space set to a Grime beat. But when he’s not monitoring seals from space, PhD student Prem Gill is using ‘Seal Grime’ as one way to encourage people from a wide range of backgrounds to take up polar science. Prem Gill at the Scott Polar Research Institute I can be sitting in my office in Cambridge and witness a moment in time, the other side of the world, that no other human has ever seen before, like a newly born seal pup lying beside its mother on the sea ice. It’s surreal to have these rare glimpses into life in Antarctica. There’s so much crazy stuff going on in the natural world and I just want to know what’s happening. I’m particularly interested in using technology to explore hidden or remote parts of planet Earth which we wouldn’t ordinarily be able to see. My PhD research, which is a joint project with the Scott Polar Research Institute, British Antarctic Survey (BAS) and World Wildlife Fund, uses satellite images to study Antarctic seals. By monitoring the seals, we can gain a greater understanding of their habitat preferences and population trends. Through this analysis we can learn more about the health of the entire Antarctic ecosystem. Satellite images of seals on ice floes This is crucial because what happens in the polar regions affects the whole world. The Arctic and Antarctic act like a thermostat for the planet. If we can monitor what’s going on in these areas, we can get an idea of what’s going on globally, which has huge implications for assessing climate change. What comes to mind when you hear the words ‘polar scientist’? A sepia-tinted photograph of a Victorian explorer? A modern-day researcher in a brightly coloured padded jacket and sunglasses? You might not picture someone who looks like me, but I’m hoping to change that. 
I want to encourage more people from ethnic minority or working-class backgrounds to consider a career in polar science. I’d like to see more diversity in leading positions in research and policy. Diversity of experience is so important if we are to fully understand the global implications of climate change and how we respond. In the 200 years since Antarctica was first discovered, there have been great strides in terms of women in polar science. Several major polar research institutes and international organisations are now led by women. Their determination is a real inspiration and their success shows me that change is possible. Prem feeding orphaned seal pups I know from experience that a number of factors can stand in the way of young people like me pursuing a career in a subject like polar science — this could be cultural expectations, financial pressures or quite simply not having role models that look like you. Some research areas in science can be hard to break into as work experience may be costly. I remember that during my undergraduate degree in Marine Geography at Cardiff University, some of my cohort paid to spend their summers doing conservation internships abroad in tropical regions, whereas others worked locally earning money. At the time I wasn’t fully aware of how much household income can act as a barrier to pursuing certain careers; however, seeing the split between the students who pursued conservation and those who didn’t was a wake-up call. Given that this field has implications for the wellbeing and livelihoods of people from all social classes, this is a real concern. I believe that everyone who wants to should have the opportunity to gain relevant research experience, so I invited a group of students from ethnic minority or underprivileged backgrounds to join me at BAS for a week. Some of the group were mature part-time students doing online courses. They told me they struggled to feel like a ‘proper student’, let alone a future scientist. 
However, by the end of the week they were using satellite imagery to find and study ice seals and having coffee with some of the biggest names in polar science. Prem in the Arctic at Ny-Ålesund, Svalbard — the northernmost permanent community in the world When I began my PhD, I was surprised to discover that there weren’t any support groups for people from ethnic minority backgrounds in polar science. So, I set one up — Polar Impact — which provides a platform for people from ethnic minority backgrounds to speak with each other and share their stories. It’s been heartening to see how many people and organisations have reached out to connect and offer support. Setting up Polar Impact led to being asked by BAS, in collaboration with the Foreign and Commonwealth Office, to take part in their Diversity in Polar Research Initiative. I’m the Early Career Research Diversity Champion and sit on the steering committee for the initiative. This project presents a unique opportunity to reach so many more people than I could ever do on my own. As part of these initiatives I’m working on a new virtual reality art installation called ‘seals from space’ aimed at 12- to 19-year-olds. We’ll be producing a Grime track that will feature the voices of seals — which sound surprisingly like spacecraft (check out my website to see for yourself). This soundtrack will be paired with 3D visuals of seals and Antarctica and be shown to secondary school children. It will demonstrate the ways in which satellite technology and data science techniques can help in our understanding and prevention of environmental and climate change damage. When I listen to the seals making their eerie and beautiful outer-space vocalisations all I hear is a Grime beat. Grime was a big part of my childhood. Whether I was in school out in the village, or back home in the inner city, it felt like every other kid was a Grime artist. 
Looking back, I can see it was due to its accessibility — all you needed was confidence and a mobile phone to play a beat and record your rapping. Prem holding a Huskie puppy No expensive instruments or music lessons were required. In fact, to create a whole song from scratch you could go to a youth club after school and do it for free on a PC or even at home on a games console. As a result, it’s a genre that’s united people from so many different British communities and provides a great platform to interact with a diverse audience. To launch ‘seals from space’ I’ll be joining an expedition to Antarctica to get up close and personal with the ice seals. From there I’ll be conducting the very first spectral measurements of Antarctic seals to help make automated drone surveys possible. I’ll also be speaking to pupils from less privileged secondary schools live from Antarctica. This project will give the students a chance to experience Antarctica and ask questions about what it’s like to be a polar scientist. I hope that we will be able to inspire the next generation to believe that a career in conservation and polar science is possible, whatever your background. This profile is part of our This Cambridge Life series, which opens a window on to the people that make Cambridge University unique. Cooks, gardeners, students, archivists, professors, alumni: all have a story to share. Words by Charis Goodyear. Photography by Nick Saffell.
https://medium.com/this-cambridge-life/the-polar-explorer-using-grime-to-break-the-ice-cd5b826e31f
['University Of Cambridge']
2020-04-22 16:11:50.897000+00:00
['Polar', 'Antarctica', 'Climate Change', 'Science', 'Conservation']
12 Questions To Think About Before Starting School in the Fall
Even though it’s the middle of summer, it’s not unusual for me to begin thinking about the first day of school and wondering what my year will be like. I question what lessons I will start with, what books we will read, and special projects we will take on. This year is different. Yes, I am still wondering and questioning, but the focus is not on the specifics of the curriculum and lesson plans. My thoughts are centered on how we are ever going to make it work as the COVID-19 pandemic continues to rage on. Just a month ago, after receiving guidelines from our state’s Office of Public Instruction, our district made the decision to open up as “normal.” But the situation has changed and they are reconsidering. A One-sided Poll Our superintendent recently sent out a poll to district staff and parents asking about their preferences: all-day in-school classes, all virtual learning classes, or a hybrid with some days at school and some at home. Although teachers were included in the poll, the questions were definitely geared to the parents. The district office wanted to know if they felt safe sending their children to school, questions about childcare, class/school schedules, and concerns about children’s mental and physical health. There were many other questions. Too many to list here. It is obvious the school district cares deeply about the families they serve. Teachers have many questions. But what about the teachers and other staff members who may be putting themselves at risk by returning to the classroom? As a high school teacher of students with learning disabilities, I have many questions. My questions delve a little deeper. In the trenches, how is this going to work? kind of deeper. To some, these questions might seem trivial compared to the big problem of how to best educate our students. But if you have ever been responsible for instructing a large group of children, you will understand how much impact the seemingly minor things have. 
Photo by Julian Wan on Unsplash 1. How will I ensure all students will wear masks while at school? This question has been discussed. A lot. But there are several more questions attached to it that also need to be answered. For instance, who provides the masks? I know from experience, you cannot count on the parents or the student to bring their own masks. Not on a daily basis. They forget. It gets too costly. Or they assume the face masks will be provided by the school. I’m not trying to disparage parents. I am one. But it’s the reality. If the district supplies the face coverings, which they should if they want to guarantee students have access to them, it’s going to be expensive. An additional hit to the budget many schools didn’t plan on. So, more than likely, some other line item is going to have to be cut. It’s not unusual for teachers to purchase supplies for their classrooms and students. I spend hundreds of dollars a year on paper, pencils, markers, rulers, art supplies, and food for my kids. I can’t imagine the additional cost of supplying face masks to those students showing up without their masks or losing their face coverings throughout the day. Which brings me to my next related question. What do we do about students that continually misplace or remove their masks in class? What about the students that just flat out refuse to wear one? Do we interrupt our teaching to deal with the student? Send him to the office? Send him home? Photo by F Cary Snyder on Unsplash 2. Who monitors the handwashing? Frequent and thorough handwashing is one of the requirements of returning to the classroom whether it is full-time or part-time. Not every classroom has a sink. Mine doesn’t. How do I guarantee the handwashing is getting done? Do I need to take classroom time to send students out one by one to wash up? I think this may be one of the most difficult requirements to monitor. 3. How do I know if a student has taken his temperature? 
If a student must take his/her temperature before coming to school, how will I know if it has actually been done? If the student takes his temperature at school, who is responsible for overseeing that? Will it be done by the teacher as the child enters the classroom, or will their temperature be taken as they come in the front door? Will I be required to keep a record of student names and temperatures? And what do I do if a student does have a fever? 4. What happens if a student displays symptoms while at school? Imagine that a student arrives at school with a cough and a fever of 100 degrees. Do you send him home? Or does he just go to the sick room? Will he be required to be tested for COVID-19? What happens if you are unable to contact a parent? If a student (or a staff member, for that matter) does test positive for COVID-19, does the school close? Or do you just quarantine his classmates, the bus driver, the kids on the bus, and his teacher? 5. How does this affect my sick leave? This may seem like a selfish question, but it has been on my mind. If one of my students or colleagues comes down with the virus and I am asked to quarantine for two weeks, will it be taken out of my sick leave? Would it be possible for me to teach from home if I am not ill? Or will I have a substitute teacher? Just how will that work? 6. Who does all the cleaning in my classroom? For years, I have been buying my own disinfectant wipes. My school doesn’t supply them and the custodians do not generally wipe down desks and tables and cabinets. But I do. At the end of every day. The new guidelines suggest teachers clean the classroom several times a day. I’m assuming this responsibility will fall on the teacher. It makes sense. But who supplies the wipes? Will this be done during instructional time? During breaks? How often is “several times a day?” 7. How will we monitor social distancing? Most students are very social creatures. They want to be together. 
Keeping kids six feet apart all day is going to be one enormous task. Even if you have desks and tables set apart, students have a way of getting too close to each other. And they like to share: books, pencils, food, and ideas. I find students like to work together, collaborate on projects, help each other out on assignments. They benefit in so many ways when working in small groups. Is there a way to still allow that and social distance? Photo by Annie Spratt on Unsplash 8. Will we eat lunch in the classrooms? One suggestion to help with the social distancing issue is to have students eat their lunches in the classroom. This would eliminate the crowd in the cafeteria. But, oh boy, what a mess in the classroom. It also makes me wonder if the students will be sent one at a time to the cafeteria to get their lunch or will the lunch ladies be bringing it to our door? Our district has an open campus policy during lunchtime. Students can go downtown to get lunch rather than eat in the cafeteria. Will that end? Will students be required to take their temperature again if they leave and come back? How will we know if they maintained social distancing and wore their mask while they were off-campus? 9. When will teachers take breaks and have lunch? With all the monitoring of temperatures, handwashing, and social distancing, I wonder when I will be allowed a break. Will we hire extra staff to monitor the halls? Or will we have rotating breaks? If students are eating lunch in my classroom, I will need to be there to supervise. When will I get my lunch break? I realize I can eat with my students, but like most teachers, I use my thirty minutes for making copies, correcting student work, contacting parents, or collaborating with other teachers. 10. What do we do about “non-academic” classes? Think about a PE class of thirty hot, sweaty kids. It doesn’t sound very safe. Perhaps there are ways around that with smaller groups of kids and split PE activities. 
Photo by Nguyễn Hiệp on Unsplash It may not be so easy to figure out how to make art, shop, band, and music class work when students are sharing tools and materials or taking off their masks to play an instrument and sing. I am sure assemblies, field trips, clubs, and sports will continue to be on hold until the pandemic is over or a vaccine is created. And neither one is likely to happen before school is scheduled to begin. 11. How do I provide one-on-one instruction? There are times when a student needs a little private coaching. How will I maintain social distancing and still provide the necessary support the student needs? I can imagine many of my high school students getting upset if I stand six feet away to give them the extra explanation they need on an assignment that everyone else seems to get. 12. What about the relationship between emotional well-being and learning? Research shows, and teachers are well aware, that children do not learn well under stress. It is going to be a mighty big task for teachers to make students feel safe and comfortable while also taking their temperatures, reminding them to leave their masks on, wash their hands, and keep away from their friends. I’m not saying we cannot create a stable environment for learning, but it is going to take time and work. When we first return to school, the priority may not be book work, but heart and mind work. My Same, but Different, Role I understand my role as a teacher will look different. I will be a nurse, counselor, custodian, data collector, and lunchroom monitor. They are responsibilities already incorporated, but unwritten, into my job description. I’ll just be doing it to a much larger degree. That is not so much what I am worried about. The questions I have are related to the day-to-day workings of the classroom and my job, in particular. Questions that seem to raise more questions. They may seem like minor things. But they need to be addressed before school begins. 
If the proper protocols are not in place when school starts, it will be hard to enforce them later. Then the health and safety of students and staff will be at an even higher risk. My Personal Suggestions The more structure and clear routines we have in place in the beginning, the faster our students, our families, and our staff will get used to them. The practices will become just a part of our school day. Personally, I am very worried about school starting “as normal” in the fall. It is going to look and feel very different from previous years. We won’t be jumping right back into our textbooks. I think as teachers we will need to focus on three areas: teaching students to follow the health and safety protocols, developing relationships and working on students’ mental and emotional well-being, and providing instruction on how to successfully use and manage distance learning tools/systems. Because chances are high that we will be returning to some form of virtual learning before the pandemic ends. We need to address the possibility. We need to be prepared, to lessen the confusion and establish some kind of normalcy around either situation, class at school or class at home. This must happen before any learning can take place.
https://medium.com/ninja-writers/12-questions-to-think-about-before-starting-school-in-the-fall-40cb1cabf127
['Mikey Sackman']
2020-07-18 12:01:01.303000+00:00
['Covid 19', 'Culture', 'Health', 'Family', 'Education']
You’ll Catch a Break: Nobody Said It Would Be Easy
Failure is inevitable: embrace it. I used to avoid failure by any means. I despised everything about it. I despised anything that caused me a great deal of distress. But I have come to realize that change will always be uncomfortable. It is a law of life; change is a current that comes into one’s life to help him realize that he could be doing much better. Accepting the fact that you can and will suck at something can be a good thing. That means you are teachable, and through perseverance, you can become great at it. This principle is self-evident in practically all aspects of life; how many times did you fall before learning how to walk? How many times did you fall your first time learning how to ride a bike? You don’t become great at something overnight. That is a simple truth of life. Nonetheless, it should not stop you from rising to the occasion. To better your life, you must be willing to take risks that petrify you. “Fall forward”. — Denzel Washington You must find your routine When I found out that humans are creatures of habit, I understood that I had to create a routine of habits that I valued the most. That perceptual shift was what I needed to help me add meaning to my life. Understanding that I was not always going to be at my best warranted a structure to get me through my lows. What’s amazing about this way of life is that it can be anything as long as it lights your soul on fire. You must take the time to study what you value the most in your life. I realized that what helped me to attune to the ever-present stillness was finding time to meditate on the Word, exercising as much as I could, dancing to uplifting music, and going on hiking adventures. As the months passed by, I found myself adding other habits that helped me to attain a sense of clarity. Whether it was reading a book or writing in my journals, it was as if new layers of life were being unveiled. 
I found myself paying close attention to the subtle details in my life that I once took for granted. One vital lesson that helped me with this mindset was to reward myself for sticking to the habits I wanted to build. This is a principle that psychologists refer to as “reinforcement”. Each time I partook in a habit that I wanted to add to my life, I made sure to reward myself. After 21 days, it started getting easier. I found myself gaining more vigor to carry me through the day. “Be careful that your routine is pointing to where you’re trying to get to”. — Eric Thomas
https://medium.com/change-your-mind/youll-catch-a-break-nobody-said-it-would-be-easy-ad8343e28cec
['Kevin Ishimwe']
2020-12-28 11:02:40.279000+00:00
['Mindfulness', 'Life Lessons', 'Love', 'Creativity', 'Inspiration']
What 2020 has taught us, tips to tackle 2021
It’s hard to believe that 2020 is almost over. Flashback to a year ago and we were reviewing plans for 2020, our activities and expectations for the year. In March this year, we were all sitting around in the office’s main meeting room, drinking mate and discussing the next steps to shape a very special project. 2 weeks later, and all I remember is this uncertain feeling of limbo. You know that moment when you wake up with bed hair, your senses and thoughts still scrambled; and you sit there, in your bed, for a second still gathering your thoughts? Well, something along those lines. We were disoriented, trying to figure out how to juggle home office, family, kids with no school, anxious clients and a full-time remote team. It was challenging at the beginning, I’m not gonna lie, but we were 100% in it for the ride. And it has been a rewarding journey, thanks to the amazing people we work with. If teams have always been paramount to success, then this year they were even more so. Our team kept pushing to stay afloat each and every day in the wake of the COVID crisis. I am grateful for them, thriving through these challenging times and going the extra mile when necessary. I am grateful for our clients’ confidence and trust in walking this year together. And our families, above all, how to ever thank them enough for their support? I hope all your beloved ones are in good health to welcome this 2021. This has been a year full of changes, challenges and beginnings for Arion. As product manager it has definitely been a fun ride that has tested our teams’ adaptability and taught us a lot. Here are some of my main mantras and takeaways of the journey as PM: May Ideas be with you! Set the stage for Ideas… and keep pushing! Photo by James Pond This is key to creating value, and I’ve shared more thoughts and tips about this in a previous article. Make sure to check it out here. Magic golden eggs of success do not appear out of thin air or by some random act of magic. 
They bloom in properly cultivated environments, and can grow into fantastic enhancements, new features or even revenue streams if properly groomed and guarded. Of course not all ideas can be executed, but it is fundamental to ensure that your team feels part of a safe environment where they can let their minds go wild and risk voicing their thoughts. Hear them out, polish them, combine ideas, play with the possibilities. AND, always keep pushing for more. Provide inspiration and prompt discussions, framing the conversation through questions such as “How might we….”, “What if…”. Imagination is a muscle that many of us tend to exercise less and less as we enter adulthood. But all of us have it and it is just waiting for the right opportunity to hatch. Leverage diversity of professions, backgrounds, interests, hobbies, and personalities. Shuffle teams, get all departments together, give everyone a voice. The sum is always greater than the parts. With great expectations comes a great responsibility Motivation is key, but managing expectations is also essential. Photo by Michael Dziedzic This year has been hard on everyone. Keeping teams aligned and motivated is an everyday task in all scenarios, but effective communication during a crisis is just as essential as breathing. Knowing how each team member was coping with the situation, if they needed anything and how we might help seemed to be all we could do during the first few months. We are all humans, with families, and personal worries and fears which we can’t simply delete when we power up our computers. We all want our teams to get excited over the news of a potential new project, get the thrill of the possibilities it opens. Nothing is better than a motivated team! We kept the teams hooked with the projects, but decided to be responsible and transparent regarding possibilities. 
We wanted our teams to be updated on possible future scenarios, but when there was still uncertainty we would factor in that uncertainty, and explain the variables it depended on. Creating expectations is great, but creating false ideas or distorted images when every day is filled with uncertainty is no good to anyone. Keep the teams motivated and in the loop, responsibly. Build what builds you. Build change resilience. Photo by Andrik Langfield This year has shoveled as many plans into oblivion as it has forcefully modified. Things usually don’t go as planned, and this year certainly reminded everyone of that. Be it personal plans, company plans or product plans; everything had to be reassessed and re-planned for the new reality and new needs. We can plan for change by building models and environments to be resilient to unexpected events. This volatile, uncertain, complex and ambiguous (also referred to as VUCA) life stage will unfortunately continue in 2021. So it’s best to have systems and mental models set up to tackle any challenge and opportunity that we may face. Setting an environment and a proper space for everyone to voice their views, while fostering a growth mindset, will also help build resilience. During challenging times being a thoughtful leader will make the difference. According to the 2019 Change Lab Workplace Survey, it’s the quality of leadership during times of change, rather than the amount of change, that drains or sustains teams. Building change resilience applies to your teams, and your products. We don’t know what 2021 will bring us, but we can prepare ourselves, and equip our teams to be prepared to thrive. Read more:
https://medium.com/arionkoder/what-the-2020-has-taught-us-tips-to-tackle-2021-72aec2669468
['Victoria De Santiago']
2020-12-29 20:06:38.574000+00:00
['Teamwork', 'Tips', 'Ideation', 'Creativity', 'Product Management']
This Time Next Year I’ll Be A Different Person: How I Lost 145 Pounds in 365 Days
What I Ate As I lost weight, my tastes and desires for certain foods waned and increased dramatically. I’m a firm believer that no food is wholly “bad”, and no food is wholly “good”. Food just is. One thing I realized as I was on this journey is that every meal doesn’t have to be a life changing experience. Every forkful that I put in my mouth doesn’t have to be sumptuously orgasmic. Sometimes food is just fuel. And that’s ok. This was pretty sumptuously orgasmic, though. Making a shopping list and going grocery shopping revolutionized the way I approached food. Every Friday I take about 10 minutes and plan meals for the week. My fiancé and I have a shared calendar that we keep updated with when we’re working late, and when we won’t be around for dinner. I go to the grocery store on Saturday morning after my workout and pick up our groceries for the week. Not only does this contribute to my weight loss goals, but it also cuts down on spending money eating out (which we do 3–4 times a month), and unnecessary food waste. A typical day a few months into calorie tracking. Meal prep was very important for me when I was starting out. Streamlining two out of three meals a day made a big difference for me. I more or less ate the same thing for breakfast and lunch every day for several months. Did it get a little boring sometimes? Yeah. But removing the stress and decision making out of figuring out what to eat during the day made the other choices I had to make easier. A typical day, 145-ish pounds later. As I got closer to my goal weight, plateaus became more common and long-lasting. Plateaus for me usually lasted two weeks or longer, and I generally regarded them as a necessary evil; it meant my body had gotten used to what I was doing, and it was time to switch things up a bit. I realized that I was never hungry in the morning and was forcing myself to eat a banana or some oatmeal because I thought that’s what I should do. 
I began experimenting with light fasting during the majority of the day — I was always hungry in the evenings, so why not save my calories until then? I look at my calorie goals as a debit card. Every morning I get about 2000 “dollars” loaded on my debit card. I can spend them on whatever I want, whenever I want, but if I use them foolishly, I’ll be overextended. These days, I start my morning with several glasses of ice cold water. I have a 32 oz cup at home that I fill with water twice and chug. Throughout the day, every time I have to pee, I repeat the process. Around lunchtime I’ll have a Fage yogurt. I’m deeply passionate about Fage yogurt, specifically the honey variety. All other yogurt brands can suck it. Later in the afternoon I’ll have a snack of some pretzels, grilled chicken or popcorn. This gets me through the day until it’s time for the main event — dinner! Dinner is usually a protein paired with a vegetable and a carb. Think salmon, broccoli and mashed potatoes, or pork chops, roasted sweet potatoes and collard greens. One thing I’m always careful to make sure I’m not doing is doubling up on carbs — if I’m having spaghetti, I don’t need a hunk of garlic bread to go along with it, as there are already carbs in the pasta. Fruits and vegetables are important to me — I like to keep green grapes easily accessible, and I always have a gigantic bowl of cut watermelon chilling in the fridge. Broccoli is always welcome on my plate. Roasted cauliflower is a gift from God. I try to make sure I have something green every day. Pro tip: Grilled onions literally go with every meat, and add a fantastic depth of flavor and additional texture to boring meats like grilled chicken. Side note: stop buying chicken breasts. They are dry and expensive. Skinless, boneless chicken thighs have a few more calories but taste so. much. better. And as an added bonus, they are cheaper. 
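The “debit card” analogy above is really just daily subtraction, and it can be sketched in a few lines of code. This is a playful illustration, not dietary advice; the budget and meal numbers are made up, not the author’s actual figures.

```python
# A sketch of the "calorie debit card" analogy: start each day
# with a fixed budget and spend it meal by meal.
DAILY_BUDGET = 2000  # illustrative daily "balance"

def remaining(budget, spent):
    """Calories left after subtracting everything eaten so far."""
    return budget - sum(spent)

# Hypothetical day: yogurt at lunch, an afternoon snack, then dinner.
day = [150, 320, 900]
left = remaining(DAILY_BUDGET, day)  # what's left for dessert
```

Spend the balance early and there is nothing left for the evening, which is exactly why saving calories for dinner worked here.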
Another thing you’ll notice if you thumb through many of the entries in my food diary is that I almost always save room for dessert. Having a little something sweet at the end of the day is a must, so I always allocate some calories towards a mini ice cream cone, a handful of chocolate covered nuts or some kettle corn. With the exception of Skinny Cow products and the Halo Top line of ice cream, nothing I eat is packaged to be “low fat”. I don’t have anything against Lean Cuisines, but they aren’t food items that I plan on eating for the rest of my life.
https://medium.com/gethealthy/howilost145pounds-ec37044b0ef0
['Deandre Upshaw']
2016-10-27 14:32:34.292000+00:00
['Weight Loss', 'Body Image', 'Lose Weight', 'Body', 'Health']
Colouring data visualizations
Semantic factors This is how we interpret meaning from colour. The associations we have with colour depend on the cultural, environmental and personal contexts that we’ve been exposed to. These differences can be seen in “Colours in Culture” from Information is Beautiful (below). The colour green, for example, is associated with good luck in Arab, Japanese, and Western cultures — while in African, Chinese, and Eastern European cultures, good luck is associated with the colour red. By being aware of these perceptual and semantic factors, you can select colour combinations that avoid common constraints and help focus attention on the message you want to share. Questions to ask before picking colours Let’s imagine that you’re a part owner in the Udderly Delicious Ice Cream Company, a small fleet of ice cream carts in Ottawa, Canada. Here are a few questions you can ask yourself when picking colours for data visualizations. Who’s the audience? Are you making visualizations for a specific group of people? Are there cultural or industry-specific conventions they use for colour? Let’s say you thought it would be helpful to put together a few visualizations to share with your business partners to help inform budget, sales targets, and other areas to focus on this year. You know that, when it comes to cultural or industry-specific conventions in Canada, green means good and red means bad. You’ve also seen blue and red used together when referencing temperature. By taking a few moments to write down what you know about your audience and their goals & motivations, your story becomes clearer. What’s the story? Ok, now that we know who the audience is, it’s important to focus on the story we want to tell before thinking about colour. What are you trying to explain to your audience? For example, at the Udderly Delicious Ice Cream Company, what are the three most important questions that you and your business partners need answered to inform the business this year? 
Maybe you’re wondering what the annual sales for the top 5 ice cream flavours are because it will affect what you choose to order for your first batch this year. Or maybe you’re wanting to see sales volume by neighbourhood, so that you can strategically distribute the ice cream carts to meet customer demands. Or perhaps you want to visualize customer satisfaction per cart to see if any trends become obvious. By knowing the story and how it relates to your audience, you’ll know how to check if you’ve achieved what you set out to do. It means you know why sharing this information matters, what data you’ll be using — and often this will lead you to the best way to visualize this information. When to use colour in data visualizations Now that we know the audience and have figured out what information we want to share, we can focus on colouring the data. It’s important to remember that colour needs a purpose. If the information can be understood without it, then don’t add colour. As a general rule, if the visualization only has two dimensions of data, like gross profit over the years, then you don’t even need a palette, a single colour is perfect.
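The “green means good, red means bad” convention described above can even be encoded as a simple rule so that colour is always driven by the data rather than chosen ad hoc. The helper below is a hypothetical sketch (the function name, hex values, and flavour numbers are mine, not from the article), assuming the Canadian audience conventions the author describes.

```python
# Hypothetical helper: choose a colour for a KPI based on the
# "green = good, red = bad" convention for this audience.
GOOD, BAD, NEUTRAL = "#2e7d32", "#c62828", "#607d8b"

def kpi_colour(change_pct, higher_is_better=True):
    """Return a hex colour for a period-over-period change."""
    if change_pct == 0:
        return NEUTRAL
    improved = (change_pct > 0) == higher_is_better
    return GOOD if improved else BAD

# Made-up year-over-year sales change per ice cream flavour.
sales_change = {"vanilla": 12.5, "chocolate": -3.2, "maple": 0.0}
colours = {flavour: kpi_colour(pct) for flavour, pct in sales_change.items()}
```

Note the `higher_is_better` flag: for a metric like customer complaints, an increase should map to red, which is exactly the kind of semantic decision the article argues you should make before picking any palette.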
https://uxdesign.cc/colouring-data-visualizations-89c69e21ce20
['Cate Wilcox']
2019-12-05 01:10:28.246000+00:00
['Branding', 'Design', 'Data Visualization', 'Visual Design', 'UX']
Python for Logistic Regression - SereneField
1. Import packages

import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

2. Fit the logistic model

# assumes a DataFrame `df` with columns y, x1, x2, x3
model = smf.glm('y ~ x1 + x2 + x3', data=df, family=sm.families.Binomial()).fit()
model.summary()

maybe more info in the future …
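The statsmodels formula call does all the fitting in one line, which hides what the Binomial GLM is actually doing. As a dependency-free illustration (not the article's code; the toy data and function names are my own), the same model can be fit by hand with stochastic gradient descent on the negative log-likelihood:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic model by stochastic gradient descent
    on the negative log-likelihood."""
    w = [0.0] * len(X[0])  # one weight per feature
    b = 0.0                # intercept
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            err = p - yi                    # gradient of the loss w.r.t. z
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    """Probability that xi belongs to class 1."""
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))

# Toy, linearly separable data: label is 1 when x > 0.
X = [[-2.0], [-1.5], [-1.0], [-0.5], [0.5], [1.0], [1.5], [2.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(X, y)
```

After training, `predict` returns probabilities near 1 for clearly positive inputs and near 0 for clearly negative ones, mirroring what `model.summary()` summarizes via fitted coefficients.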
https://medium.com/adamedelwiess/linear-regression-16-python-for-logistic-regression-b92073d9084e
['Adam Edelweiss']
2020-12-16 16:40:44.348000+00:00
['Logistic Regression', 'Linear Regression', 'Mathematics', 'Python']
Project Dragonfly: Google’s Controversial Return to China After 8 Years
Here is a quick timeline of events: Aug. 1: The Intercept reports that Google is working on a new project, supposedly codenamed Dragonfly, that has been in development since last spring, citing internal documents leaked by a whistleblower. The secret project is said to involve an Android app that “will blacklist websites and search terms about human rights, democracy, religion, and peaceful protest.” Aug. 2: The Information (paywall) reports that Google is also developing a news app exclusive to China that will comply with local censorship laws. Aug. 3: Google is also looking at partnering with local cloud service providers (including Tencent Holdings) to bring Google Drive and Google Docs to China, Bloomberg reports, in a move much like Apple’s, which recently migrated all of its Chinese users’ iCloud data to a local operator named GCBD in order to comply with local regulations. Aug. 7: People’s Daily, China’s ruling Communist Party mouthpiece, responds to Google’s plans for a censored search engine, saying it’s welcome back to mainland China as long as it complies with the necessary laws. “Google is welcome to return to the mainland, but it’s a prerequisite that it must comply with the requirements of the law,” reads the commentary, in addition to stating that the tech giant’s decision to exit the country was a “huge blunder which resulted in the company missing golden chances in the mainland’s internet development.” Aug. 16: In the wake of internal backlash against Google’s supposed entry into China, CEO Sundar Pichai informs employees that the moves being undertaken are just “exploratory” and that the company isn’t close to launching a search product in the country. Sept. 
14: Google is also reportedly building a prototype system that would tie Chinese users’ Google searches to their personal phone numbers so as to meet local censorship requirements, reports The Intercept, drawing further opposition inside and outside the company over its controversial decision to re-enter the country. Sept. 26: At a Senate hearing on data privacy, Google’s chief privacy officer Keith Enright confirms the existence of Project Dragonfly, but dodges questions on what exactly the project entails, stating “I am not clear on the contours of what is in scope or out of scope for that project.” Ben Gomes, Google’s head of search, tells the BBC that “Right now all we’ve done is some exploration, but since we don’t have any plans to launch something there’s nothing much I can say about it.” Oct. 4: The U.S. government urges Google to halt work on Project Dragonfly, and in general any business practice that would abet Beijing’s oppression, reports The Wall Street Journal. Oct. 11: Google’s China-specific search engine may be launched in the next six to nine months, contradicting earlier reports that the company’s plans were still in the exploratory phase, according to a leaked transcript of an internal meeting that occurred on July 18, via The Intercept. An anonymous Google source calls Ben Gomes’ earlier comments (refer to update on Sept. 26) “bullshit,” while Gomes has the following response when reached via cellphone: “I can’t hear anything that you are saying, I can just hear that you are talking,” before hanging up. Oct. 15: Google CEO Sundar Pichai officially confirms company’s plans for a China-focussed search engine, adding internal tests have been very promising and that the country is important given “how important the market is and how many users there are”. 
“It turns out we’ll be able to serve well over 99 percent of the queries,” he said, stating that, “People don’t understand fully, but you’re always balancing a set of values (providing access to information, freedom of expression, and user privacy) … but we also follow the rule of law in every country.” Google’s Project Dragonfly and Project Maven (a Department of Defense project to build AI and facial recognition technology for drone warfare) have both been controversial, with employees and the larger tech community seeing the company’s actions as crossing an ethical line. Nov. 21: Alphabet chairman John Hennessy underscores the struggle with launching a search engine in China. “The question that I think comes to my mind then, that I struggle with, is are we better off giving Chinese citizens a decent search engine, a capable search engine even if it is restricted and censored in some cases, than a search engine that’s not very good? And does that improve the quality of their lives?” Hennessy says in an interview with Bloomberg, adding, “Anybody who does business in China compromises some of their core values.” Nov. 27: A growing group of Google employees (now more than 600) sign a letter (Google Employees Against Dragonfly) urging the search giant to end the “Dragonfly” project aimed at creating a censored version of its search engine in China. “Many of us accepted employment at Google with the company’s values in mind, including its previous position on Chinese censorship and surveillance, and an understanding that Google was a company willing to place its values above its profits. After a year of disappointments including Project Maven, Dragonfly, and Google’s support for abusers, we no longer believe this is the case. This is why we’re taking a stand,” reads the letter. Nov. 
29: Key executives involved in Project Dragonfly, including Google’s China head Scott Beaumont, were dismissive of surveillance and security concerns arising out of launching a search engine that could associate users’ phone numbers (and their location) with searches seeking “information banned by the government,” reports The Intercept, adding, not only was Sergey Brin kept in the dark, they “shut out members of the company’s security and privacy team from key meetings about the search engine, the four people said, and tried to sideline a privacy review of the plan that sought to address potential human rights abuses.” Liz Fong-Jones, a Google worker and employee advocate, responds on Twitter saying “I firmly suggest that my current fellow colleagues think about what they’d do if the red line were crossed and an executive overrode a S&P (Security and Privacy) launch bit, or members of the S&P team indicated that they were coerced into marking it green. Google’s S&P teams must have our backs.” Fellow Googlers, including herself, pledge more than US$ 200,000 towards a strike fund, calling for mass resignations if Project Dragonfly gets shipped without signoff from Security and Privacy teams. Nov. 30: New internet regulation rules go into effect in China, mandating any online internet service provider (like Tencent, Alibaba etc.) to start capturing activities of users posting in blogs, microblogs, chat rooms, video platforms and webcasts, including call logs, chat logs, times of activity and network addresses, citing a “need to safeguard national security and social order.” Dec. 1: Jack Poulson, research scientist at Google who quit the company after internal fights arguing for more clarification on Project Dragonfly, pens his side of the story on The Intercept, narrating the sequence of events that led to him coming public with his resignation. 
“I, for my part, would ask that Sundar Pichai honestly engage on what the chair of Google’s parent company has agreed is a compromise of some of Google’s ‘core values.’ Google’s AI principles have committed the company to not ‘design or deploy … technologies whose purpose contravenes widely accepted principles of … human rights.’” Dec. 10: Privacy advocate Edward Snowden, along with various human rights groups including Amnesty International, sign an open letter urging Google to drop its plans to re-enter China. Dec. 11: Google CEO Sundar Pichai, in a testimony to U.S. Congress, once again reiterates that Project Dragonfly is an internal effort and that “right now there are no plans for us to launch a search product in China,” while pointing out Google’s “mission” of making information digitally accessible. Dec. 17: The Intercept reports that Google has halted work on a data collection/analysis project that sourced data from 265.com, a Chinese web directory service that Google purchased in 2008, to gather “data about the kinds of things that people located in mainland China routinely search for in Mandarin,” in turn helping them build a prototype of Dragonfly. March 4: Fresh concerns mount after Google employees spot over 900 code changes concerning two smartphone search apps Maotai and Longfei in December and January that the search giant planned to roll out in China for users of Android and iOS mobile devices, suggesting that work on Google’s controversial Project Dragonfly is still ongoing despite claims to the contrary; Google responds, saying “This speculation is inaccurate. As we’ve said for many months, we have no plans to launch Search in China and there is no work being undertaken on such a project. Team members have moved to new projects.”
https://ravielakshmanan.medium.com/googles-return-to-china-after-8-years-a-timeline-9778ece99fc3
['Ravie Lakshmanan']
2019-03-16 13:02:56.506000+00:00
['Censorship', 'Google', 'Tech', 'Human Rights', 'Surveillance']
How INSIGHT drives Stranger Things’ success
Last time we looked at how data is at the heart of Netflix’s success in helping them to predict successful movies and TV shows. If you missed that post, you can read it here. As we move through the DATA > INSIGHT > ACTION loop, this blog post deep-dives further into the INSIGHT aspect of Netflix’s data. We have already established that Netflix collects A LOT of data. Storage of that data so that it is readily accessible for analysis is a challenge. In order to do this, Netflix has to use a Data Warehouse combined with a Data Analytics tool to create a Single Customer View, measure KPIs and visualise the data to generate insights. How does this insight maximise Netflix’s return on investment? For Netflix, ROI means achieving the maximum customer happiness and loyalty per dollar spent. To maximise user happiness, Netflix has to continually provide really relevant content to its subscribers’ interests. However, with a price tag of $6 million per episode, totalling $45 million for the Stranger Things series, getting it wrong, could be a very expensive mistake to make. But Netflix has ensured that their spend has been successful through using historical data to predict subscriber satisfaction. Their Machine Learning and Data Science approach to identify high potential content is so successful that, compared to the rest of the TV industry, where just 35% of shows are renewed past their first season, Netflix renews 93% of its original series. To measure users’ happiness, Netflix uses various and complicated algorithms. This enables Netflix to quickly get quantitative answers (insights) to specific questions and to predict viewing rates for Stranger Things. When Netflix’s decision-makers were reviewing the Stranger Things project, I assume they had concerns such as: Netflix will input those concerns along with some requirements into a smart algorithm. 
This algorithm will produce a list of indicators that helps Netflix’s decision-makers understand if the show would be a good investment. Going forward, using machine learning and data science, the system will also be able to share recommendations about how Netflix can improve the existing predictive score and maximise their investment. For instance, for Stranger Things it could be: Using powerful analytical tools such as Looker, Power BI and Tableau Netflix can investigate the recommendations that have been generated. Through manipulating the data and creating visualisations insights can be produced at a very granular level. The main goal from this activity is to find a way to minimise the risk as well as define the action plan before and after making the decision to buy into a TV series or movie. Predicting successful content is not the only area where machine learning and data science is being used to drive more outcomes as you can see below: In our next Stranger Things post we complete the DATA >> INSIGHT >> ACTION loop and look at how Netflix puts the initial two stages of the process into action.
https://medium.com/acrotrend-consultancy/how-insight-drives-stranger-things-success-1d72a5ff5c28
['Acrotrend Consultancy']
2019-07-30 13:57:30.202000+00:00
['Business Intelligence', 'Customer Insight', 'Data Science', 'Stranger Things', 'Data Visualization']
Story Cycle
Audible Sundays Story Cycle Photo by LB On the road we drove with the light Immersed in the planet’s revolution Merging with the cycles in the skies As our story took form after the rain
https://medium.com/chance-encounters/story-cycle-31d0a7ff991e
[]
2020-11-01 18:19:07.385000+00:00
['Poetry', 'Music', 'Freeform', 'Audible Sundays', 'Photography']
What I’ve learned from my 3 year health journey. Holiday reflections 100lbs later
~Distilled weight loss lessons near the end~ Part of my King Tut ornament collection So I’m sitting down to a wonderful Christmas eve meal at our house last week, surrounded by family, and I know that when I wake up tomorrow, chances are very high I will be 2–3 pounds heavier. As much as my stomach allows these days, I gorged myself on ham with honey-maple mustard sauce, macaroni and cheese, a mashed potato and cheese casserole, curry chicken, crispy apple salad, cranberry sauce, with Korean dumplings and pancakes for appetizers and pumpkin pie for dessert. It turned out that I gained a very happy 4 pounds — a record for me in one day, and after another wonderful meal on Christmas day, I was 6 pounds up. This was neither unexpected, nor a failure on my part, nor the beginning of a weight loss crisis or potential long term gain for me. It was just a really good way to spend Christmas. One Meal Really Does Make Me Fat! Christmas Eve — Yum! Even nearly a year and a half after losing 100 pounds in a healthy and sustained way, when I eat decent quantities of a salty-sweety-fatty meal (for which the above meal certainly qualifies) my body still soaks up water like a sponge. I don’t know if this is a remnant from previously being diabetic, but this dynamic has been constant for me since I started losing weight in earnest. My most “impressive” example of this was going from 203 to 218 in 5 days after the death of a relative. With this personal health issue, I am absolutely positive that if I still had craving and binging issues (largely caused by dieting — if your goal is to become morbidly obese over the course of a decade or so, this is the best way to accomplish that) or considered any real weight gain a failure as many do, I would have ballooned up back to where I started well before ever losing 100 pounds. This is just how my body works now and I’m OK with that. If you’ve read my previous articles, you’ll know I’ve chosen the gluttonous path towards weight loss. 
It's really important to me that I enjoy what I eat. I don’t do food constraints and don’t have, nor do I desire, supernatural willpower. I can eat anything I want, when I want, in whatever quantities I want. If I want something, I eat it. This includes hamburgers dripping in BBQ sauce, spaghetti with mushrooms, lots and lots of chocolate and other yummies — but the consumption now occurs on my timescale. My unhealthy eating is no longer opportunistic, as is the case when you blow your diet. For that reason, I’m probably not skinny as a rail — my goal is to be happy and healthy. For most of the past year, I’ve been sitting about 10 pounds heavier than my article weight (197) in the low 200s, usually 203–207, but ballooned up to 213 on the 26th. Now on December 30th, I’m already back down to 210 but may bounce up again tomorrow evening. Again, this has happened countless times, and is a regular part of my pattern of living. Everyone’s body is unique. I’ve spent a long time gaining awareness of how mine functions. As healthy as I am (my last BP was something like 104/72), I still see myself as a recovering 300 pound person. I am different from someone who never got that big. This has actually been a source of inspiration for me. It meant I could figure out how to get healthy by slowly gaining awareness of what’s happening internally — I don’t need to take crazy medicines or shitty health advice (calorie counting or weight loss goals, for instance) that I don’t understand how to integrate into my life and no longer believe is helpful. I see obesity as a systemic failing in society far more than a personal shortcoming. It turns out enduring long term problems may just take enduring, long term approaches, which is the antithesis of how our society approaches problems — “solve it now or you’re fired” is how we usually problem solve. Eliminating Chronic Pain How often do you hear someone with long term chronic back pain tell you it’s largely eliminated — who does that? 
Most of my ambient “health time” for the past year and a half has been about eliminating any lingering chronic pain in my ankles and lower back. At this point I can say it’s been largely gone, without the help of the medical establishment other than my chiropractor, who unfortunately just moved to Washington. The key in my case (I believe this is probably applicable to most) has been modifying my walking posture — the main way chronic pain forms over time — and my fascia, where chronic pain is stored. I am in the process of writing a series of articles and doing videos on both of these, but unfortunately these may be delayed due to job loss. The bottom line, though, is that I have yet to meet anyone in the medical establishment who cares about addressing chronic pain in the neck, back, hips, knees, ankles or feet with either experimenting with walking posture or massaging the fascia layer which is holding the pain in place (although others have, particularly some physical therapists working with fascia). In my experience, many in the medical field are doing the equivalent of “looking under the lamp post at night” for the solution. If you go to see the doctor for knee pain, s/he’s rarely examining the overall movement operations of the living system known as “you,” an output of which has resulted in increasing chronic pain in your knee, but instead just examines your knee. My overall strategy with eliminating chronic pain is more inclusive, and has been near identical to my approach with weight loss — gradual but deliberate experimentation on the living system known as “me” to find lots of small ways of improving health while reducing pain and increasing movement and flexibility. Walking Posture I’ve changed my walking posture three times over the past two years. I used to walk with my weight balanced on my joints — in my case the stress was in the upper neck leading to spiking pains, the lower back with numbness shooting down my legs, and my ankles. 
After losing the better part of 100 pounds (what everyone says you need to do), I experienced very little decrease in pain. I rarely if ever started a step with my heel touching first — this is now my norm. My normal way of walking for the past two decades was essentially to balance my excess weight on my joints and “fall forward” with my neck bent slightly forward, my back looping forward as well, with the bend at the lower back, which, over time, has created immense pressure, pain, and tightness on the balls of my feet and toes. Look around and you’ll notice this similar style of walking in roughly 30–40% of morbidly obese men, if not more. I’m willing to bet most have developed chronic pain in their joints over the course of a few decades. It took me about 2–3 months of constant practice to lock in a new way of walking. I’ve done this three times, in each case making further adjustments to my method of walking based on subjective feedback. The doctors tell you the chronic pain is due to the weight — they’re obscuring the truth if not outright lying. It’s due to your walking posture, which codifies and stores stress in your fascia in chronic pain locations, not unlike a river floodplain which overflows regularly with cast-off refuse. You might have had a fine walking posture before the weight gain (or not), but over time, your body adjusts as best it can to changes in weight or other health issues — often it does so poorly. Had I known then at 300 pounds what I know now, I could have started addressing my chronic pain issues at once. My ankles were in near constant pain and discomfort for most of the past decade, same with my lower back, but no longer are. I’ve had surgeries proposed to solve my issues, but I now know they would be an expensive and debilitating version of a band-aid. The problem with my ankles was not my ankles, it was where I put my weight when moving. 
Said another way, if you have chronic hip problems leading to a hip replacement surgery and don’t change your walking posture, be ready for more procedures in the future. If your knee has crumbled, the solution is to change whatever in your movement is causing the long term stress and pain — a new knee won’t solve that. It’s the end result of the way you are walking that causes stresses to form in certain parts of your body. Your muscle covering — your fascia — tightens in response to protect the ligaments and joints. Over time this becomes largely permanent, leading to loss of movement, calcification, arthritis and all the rest. If you ask me, an untrained health professional who’s done the equivalent of staying in a Holiday Inn Express while ending up one hundred pounds lighter, the solution to these life destroying, debilitating problems is largely in your hands. Yes, I’m sure there are people who can point you in the right direction for experimentation, but the hard work comes in the small things — daily practice, thinking and experimentation. Gradually and deliberately it is possible to reverse the effects of 30 years of chronic pain and weight gain. Stop looking for a miracle pill or fantasy cure. Nor is this about willpower. Just small, deliberate experimentations toward a healthier you. My current walking posture engages my core I don’t know how applicable my current walking posture is for others, but it’s been transformative for me. Now when I walk, I largely keep my core engaged, meaning my muscles are taut, with my legs slightly bent. In an earlier version (about the time of my health journey article), I ended up putting too much of the weight on my upper legs, which led to stress on my IT band over time — by keeping my core more fully engaged, I better distribute the weight, and almost feel like I’m floating when I walk. I take far less of a step forward, and now push off backward with the balls of my feet. 
I’ve practiced this daily walking my dog with long “power walks” up and down hills in our neighborhood on average 4–5 days a week (I actually enjoy this now — really!). If you see me walking down the street, chances are high (north of 30%) that I’m probably thinking about/experiencing walking. I love the feel of my muscles engaging from my feet up through my lats to my pectoral muscles, which happens when you push off in this way. Significant increases in core strength have led to one of the many simple but great joys I now experience every day — I can lift myself up from a sitting position without having to bend forward or push off with my arms! This probably sounds mundane and really basic (and is really just the back half of a squat exercise), but I’ve never been able to do that as an adult. I never imagined being this healthy, even though I rarely do any traditional gym exercises. A few weeks back, after doing an hour long power walk, I was able to rattle off 50 sit-ups on a bench, without touching the bench (I attempt as many sit-ups as possible on average once every 2 months or so, but never count — my wife did for me in this case). Because my daily method of walking also supports my back and strengthens my core with every step, I’ve literally integrated exercise into my normal movement. Aside from this, my main type of movement is meditative stretches and modified meditative plank exercises. I’ve gained a small amount of weight but haven’t really put on fat since my article — I still have lost 100 pounds or more of fat — the 10 pounds since the article are largely if not completely muscle weight. I still do have probably 10–20 pounds of fat left, and still intend to eliminate that, but most likely this will continue at a slow pace. I still do have looseness in my skin in both my stomach and legs, but this has slowly improved to the point that I think returning to normal is more a function of time, as long as I keep the skin flexible. 
I’ve used both straight warmed cocoa butter and this salt and coconut oil mixture, both to decent effect. If your feet are cold, massage your fascia! Fascia massage is an ongoing, daily habit for me. Very regularly, you’ll find me almost without thinking massaging the fascia around my thumbs, neck or even ankles in the presence of friends. This is a really good video for describing what this fascia thing is all about, although my massage approach is dramatically different from this therapist’s (videos coming at some point, I promise!). I barely use any pressure, but go back and forth quickly and repeatedly over an area, versus the slower, more muscle focused approach that massage therapist advocates. I am most often working on my knees through my feet to maintain improved blood flow (my body regularly breaks bad in this way, leading to cold feet), but also massage my face, head, hands, and neck. I trained my chiropractor to do this wonderfully on my back, but unfortunately he moved to Seattle (he’s amazing if you’re in the area). If you find tight skin on your body somewhere, chances are extremely high that the fascia below is tight as well, which means so too is the muscle the fascia is encasing. That muscle is part of an interconnected network of muscles, and tightness in one place leads to problems throughout the network known as “you”. Over time, these problems can lead to negative emergent cascading consequences in the same way that (in my case) weight gain led to diabetes, high blood pressure and sleep apnea. The terms are different but I’m sure you’ll recognize them. For instance, if you have a chronic condition like arthritis, as I started to develop in my ankles and feet, you’ve most likely had fascia issues for a really long time. 
Said another way, reflecting back three years later, I believe the main cause of my foot and lower back pain over the past decade has been my neck posture, which cascaded to pressure in my lower back, which led to a horrific decade-long, increasingly ossified yin-yang tension between my feet and lower back, along with all the second order side effects this dynamic created. Fixing this level of complex interaction is not solved with a single visit to anyone, nor is it solved by weight loss. Instead it takes gradual, long term and sustained effort, mostly on your part, to correct this. As an example of this, I still have little fascia balls — lots smaller than before — underneath the joints in my toes, and this is after a year of working on them. As I loosen them, flexibility returns, but tensions in my body inevitably tighten them again, and so on. Awareness Goals If you’ve read my health article you’ll know I see internal awareness as my key to doing the constant small experiments needed to get healthy. I shared my awareness of internal energy levels (anybody can do this), internal sugar levels in real-time (necessary for getting off of diabetes), and that I care about my metabolism level more than anything else, and can generally tell whether I’m gaining or losing weight. Over time, these have become more subconscious but are still there, especially when a sugar rush starts. My main focus now is gaining better awareness of my fascia. Becoming aware of changes in tensions in your fascia gives you very reliable signals that something in your body is breaking bad. As your nerve endings physically reside within your fascia, it actually is your body’s early warning signal — the more relaxed the fascia is, the more aware you are of changes to it, and the earlier you will notice problems occurring when walking or sitting. In my case it's usually a signal letting me know something is misaligning in my neck or back — or already has if I start feeling tightness in my feet. 
I’ve learned a variety of stretches and massages to address the structural issues with my body that always seem to emerge. I can usually re-align my neck if I catch it early by slightly nudging the vertebrae over a minute or two in the direction it needs to go — once it starts moving into its natural place, it’s just a matter of massaging the surrounding muscle fascia to stop the tightness that caused the misalignment. I also have learned an amazing way of aligning my lower back (this is needed nearly every morning — not practical for a chiro visit) by swaying back and forth in a natural turn with my feet planted slightly wider than my shoulders and my shoulders, back and neck bent slightly backward — I start the sway from each side pushing off from the inside of the balls of my feet before eventually turning my shoulders, forcing a fuller turn in the same direction (video “soon come” as they say in Jamaica). Regarding weight awareness issues, my steady state diet usually keeps me at or slightly below even weight, but I have these bursts of weight gain I mention above, most often initiated by living a normal life. In terms of awareness, I can actually feel when my body is ballooning, but haven’t come up with any remedies in the moment other than the glaringly obvious — significant energy exertion and directed breathing shortly after eating — not something I felt like doing on Christmas eve. Did I mention I had lots of alcohol? The next best thing for me is significant energy exertion and directed breathing before eating anything of significance the next day — this really does restart the metabolism while halting the ballooning, but again, was not something I felt like doing on Christmas day. I have been experimenting with various meditative stretches and breathing approaches, but really haven’t made progress there yet. I do, however, know how to get my body into the condition it needs to be in for fat cells to release the excess water gained. 
Through movement and breathing, I can prompt my fat cells to start releasing the water, and can feel when it is occurring. So from a weight standpoint, this is really just a steady state issue for me. Weight Loss: Boiling Down What I’ve Learned June 2013/August 2015 Given that the new diet season begins in January, many of you reading this will soon be inundated with numerous ads from Jenny Craig, Weight Watchers, Lifetime Fitness and all the rest. Hopefully you’ll decide to give these points a go instead. They’ll cost you nothing other than time and energy. Here is a distilled version of what I’ve learned about losing weight, or in diet industry terms, here are the 7 easy steps to sustained weight loss: Gradual Change through deliberate experimentation to positively affect health on food, movement and lifestyle issues works. Choose one — a food, movement or lifestyle issue — and experiment with locking it into your daily routine (if you’re not changing that, you’re not changing). Once it’s locked in, which might take a few weeks or more, then choose another. Internal Awareness is critical to improving health, and is knowable. The following things improve awareness: stretching, chewing food, relaxed fascia, and meditation. The following things reduce awareness: processed foods, chemicals, stress, and lack of sleep. Long Term Focus: A clear direction, but no clear, measurable end point in time, transforms your weight loss narrative from one of pass or fail (which in practice is always “fail” in the end) to one of a personal health journey — one which requires learning and growth in a low stress, inquisitive manner. Learning what works for you requires ditching personal success or failure as a mindset. Change Your Wants: Withholding and dieting do not work. Instead, eat what you want, whenever you want, in whatever quantities you want, but slowly, deliberately change your wants. 
Diets engage your body in an unstable short term state that might drive you to lose 200 pounds, but if it didn’t come off in a sustained way, keep your clothes, because the weight is coming back. Diets as a concept literally force short term gratification (through misery and withholding until you reach a goal), which destroys good food choices. A good food choice in my terms is one I enjoy before, during and after I eat it. If my stomach is going to feel bad an hour later, I really don’t want to eat it right now. Happy Food Replacement Only: Yes, you need to eat healthier. Don’t settle for healthy shit you hate, or can only tolerate. That is not a sustained change. Find things you like that are slightly healthier instead. Keep doing that and over time you’ll find your tastes change from Godiva to dark chocolate. Sometimes something healthy you like (cinnamon for blood sugar or cacao in my case) ends up being great for your health — if so, gorge on it. Gamify your Weight Loss: Weight loss goals involve withholding for a period of time. Losing weight in this way isn’t stable — dieting is not unlike training for a marathon in that you are doing a bunch of unnatural things that get your body into a short term, controlled state toward an endpoint — only you aren’t running a marathon, you’re hitting a goal. When the goal is reached, you revert to your daily steady state, and your weight gain follows accordingly. Gamify weight loss instead (details in my health journey article). Movement — Optimize your enjoyment, not your time: Set exercise routines don’t work for me, but I have heavily integrated movement into most facets of my life. Movement is absolutely necessary for your metabolism if nothing else, but in truth, it turns out to be great fun. Choose fun before function. Use what you have — there is no reason to buy anything until you know what you like on a sustained basis. Forget optimizing your time — optimize your enjoyment instead. 
My health outlook three years later I spent 30 years developing my failing health profile, so it made sense to give myself significant time — many years perhaps — to get healthy. I decided to dedicate three years to this path toward gradually becoming healthy. I didn’t imagine I’d be healthy or even normal weight by then, but thought it was long enough that I wouldn’t stress about meeting short term weight loss targets. Three years ago, when I started my health journey, I was very worried about fixating on progress goals. I thought anything shorter than three years would lead me to worry about short term results. Now, three years later, I think three years is far too short a time frame to get healthy. I’ve gotten my life back and have been transformed over the course of my journey, but in some ways I feel like I’ve just started. My health journey certainly hasn’t ended — it’s become a lifetime pursuit. I constantly find myself talking to people about weight loss and getting healthy. I’ve even dipped my foot into health life coaching, but have no idea how I would monetize that yet. I tend to give my knowledge away, be it pumpkin carving templates, open source efforts or, in this case, weight loss and health concerns. In any event, I believe I’ve happened upon some powerful but simple insights that have the potential to be transformative for a lot of people on this earth. I will definitely continue to look for ways to share this knowledge moving forward. And if these ideas have touched you, and you want to write about your journey, I’d love to add your article to the mix!
https://medium.com/gethealthy/holiday-reflections-on-my-health-journey-three-years-and-100-pounds-later-7cecc6f303c3
['Noel Dickover']
2017-01-27 14:41:34.522000+00:00
['Weight Loss', 'Health', 'Exercise', 'Diet', 'Chronic Pain']
Working with AWS Secret Manager
In my previous post Exploring AWS Secret Manager, we learned about some key benefits of using AWS Secrets Manager. In this post, we will explore how to use it with a practical example.

How it Works

You can use Secrets Manager to store, rotate, monitor, and control access to secrets such as database credentials, API keys, and OAuth tokens. I have discussed the benefits and workflow of Secrets Manager in my other post; check it out.

AWS Secrets Manager Workflow

The above slide describes the typical application workflow when working with AWS Secrets Manager:

1. The DBA or service admin creates a service account credential for a particular app to use the service. For example, the DBA creates a username and password for MyWebApp to access the database.
2. The administrator (DBA, service, or AWS) creates a record in AWS Secrets Manager as a secret for the app, for example MyWebAppDBCredentials.
3. When an application needs a credential, it queries AWS Secrets Manager through a secure HTTPS and TLS channel, and AWS returns the credentials to the app. For example, the client calls Secrets Manager to retrieve the entries for MyWebAppDBCredentials.
4. The app or client parses the credentials and uses them in the application as required, for example in a connection string or API call.

Example Overview

In this example, we will store the password of a MySQL RDS database and retrieve it in a Node.js function using the AWS SDK.

Creating a Secret

The first step is to create a secret. Open the AWS Console and go to AWS Secrets Manager:

1. Click on the Store a new secret button.
2. Select the secret type; in our case we are storing credentials for our MySQL RDS database.
3. Enter the username and password and select the RDS database the credentials belong to.
4. Enter the secret name and description, and optionally enter tags.
5. Configure the rotation function; I have chosen the default, which is Disable automatic rotation. 
Enabling rotation is a topic for another post. Review your selections and click Store Secret. Now that you have stored the secret, let’s see an example of working with it.

Using a Secret

The following Node.js code reads credentials from AWS Secrets Manager, then passes the credential info to the connection string in a Node.js function:

// Use this code snippet in your app.
// If you need more information about configurations or implementing the sample code, visit the AWS docs:
// https://aws.amazon.com/developers/getting-started/nodejs/

// Load the AWS SDK
var AWS = require('aws-sdk'),
    region = "us-east-1",                // replace with your region
    secretName = "dev/mysql/mywebappdb", // replace with your secret ID
    secret,
    decodedBinarySecret;

// Create a Secrets Manager client
var client = new AWS.SecretsManager({ region: region });

// In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
// See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
// We rethrow the exception by default.
client.getSecretValue({ SecretId: secretName }, function (err, data) {
  if (err) {
    if (err.code === 'DecryptionFailureException')
      // Secrets Manager can't decrypt the protected secret text using the provided KMS key.
      // Deal with the exception here, and/or rethrow at your discretion.
      throw err;
    else if (err.code === 'InternalServiceErrorException')
      // An error occurred on the server side.
      throw err;
    else if (err.code === 'InvalidParameterException')
      // You provided an invalid value for a parameter.
      throw err;
    else if (err.code === 'InvalidRequestException')
      // You provided a parameter value that is not valid for the current state of the resource.
      throw err;
    else if (err.code === 'ResourceNotFoundException')
      // We can't find the resource that you asked for.
      throw err;
  } else {
    // Decrypts secret using the associated KMS CMK.
    // Depending on whether the secret is a string or binary, one of these fields will be populated.
    if ('SecretString' in data) {
      secret = data.SecretString;
    } else {
      // Buffer.from replaces the deprecated new Buffer() constructor
      let buff = Buffer.from(data.SecretBinary, 'base64');
      decodedBinarySecret = buff.toString('ascii');
    }
  }

  // Parse the secret JSON object
  const secretJSON = JSON.parse(secret);

  // Read data from the MySQL database
  var mysql = require('mysql');

  // Pass the credential info to the connection
  var con = mysql.createConnection({
    host: secretJSON.host,
    user: secretJSON.username,
    password: secretJSON.password,
    database: secretJSON.dbname
  });

  // Query a MySQL table
  con.connect(function (err) {
    if (err) throw err;
    con.query("SELECT * FROM myproducts", function (err, result, fields) { // change to your table name
      if (err) throw err;
      console.log(result); // display data from the table
      process.exit();      // exit the Node.js server
    });
  });
});

Execute the above code by saving it in a file readsecret.js. The output of the above code is as follows:

$ node readsecret.js
[ RowDataPacket { product_id: 100, product_desc: 'PC' },
  RowDataPacket { product_id: 200, product_desc: 'MAC' },
  RowDataPacket { product_id: 300, product_desc: 'iPHONE' },
  RowDataPacket { product_id: 400, product_desc: 'iPAD' },
  RowDataPacket { product_id: 500, product_desc: 'PRINTER' } ]

Conclusion

In this post, we stored the credentials of our MySQL RDS database in AWS Secrets Manager, then retrieved them from Secrets Manager and used them in our application securely. I hope you like this post. @IamZeeshanBaig

About DataNext

DataNext Solutions is a US-based system integrator, specialized in Cloud, Security, and DevOps technologies. 
As a registered AWS partner, our services comprise Cloud Migration, Cost Optimization, Integration, Security, and Managed Services. Click here to book a free assessment call with our experts today, or visit our website www.datanextsolutions.com for more info.
https://medium.com/datanextsolutions/working-with-aws-secret-manager-4896a1cb05d7
['Zeeshan Baig']
2020-02-25 20:42:36.536000+00:00
['Amazon Web Services', 'AWS', 'Cloud Security', 'Security', 'Cloud']
Please Don’t Ask Me to Help You Put on a Sari
She asked me to help her put on a sari. So what? It’s the “so what?” that gets me. It has been part of the narrative of my experience growing up Brown in this country. Of a whole host of things people have asked me and continue to ask me, the list includes: “Do you know Dr. Raj? He’s the dentist we go to and he is Indian.” “Do you know our neighbor Anuradha? She’s Indian and makes great samosas.” “Do you know this woman named Kiran? She’s Indian and she’s also in marketing!” We are surrounded by labels. And so at work and in our communities, we want to check the box. Dr. Raj. Anuradha. Kiran. Apparently, I should also know every Indian person within a 50-mile radius. And sometimes the ones out of state, too. After having worked for many years in marketing, I know we marketers make our living putting labels on products. That’s how we sell lots of stuff. Gluten free. Contains SPF 50. Paraben free. All natural. Dairy free. Made with no preservatives. We are surrounded by labels. And so at work and in our communities, we want to check the box. We want to put people in boxes. We want to label each other. It’s an easier shortcut for our brains. To categorize, sort, and place people in the appropriate boxes. And stick a label on them. Let me go and ask the “Korean” person I know where I can get some good bibimbap. Let me go ask the “Hispanic” person I know how to translate these instructions into Spanish for me. Let me go ask the “German” person I know where the closest beer garden is. Let me go ask the “Japanese” person I know if they can help me put on a kimono. Ugh. When I walk into my workspaces and workplaces, my Brownness enters the room before I even have the chance to sit down and say hello. People put me into a box based on what they see as my identity. Female. Check. Asian. Check. Indian. Check. I am immediately labeled. Because that is the quickest and easiest way to understand my Brownness. Labels do not belong on people.
https://zora.medium.com/please-dont-ask-me-to-help-you-put-on-a-sari-583e3113eac4
['Mita Mallick']
2019-07-30 11:01:01.034000+00:00
['Society', 'Culture', 'Identity Self Care', 'Race', 'Identity']
Serverless-Flow: A CI/CD Branching Workflow Optimized for Speed and Quality
The ephemerality of Serverless architectural components, coupled with the pay-per-use pricing model, allows us to have rapidly deployable, cheap and disposable environments when working with Serverless-First architectures. No longer are we constrained to Dev, Staging, UAT and Prod environments. Instead, we can have any number of environments that are isomorphic to the real production infrastructure. This allows us to reimagine how our CI/CD can work, reducing the lead time to release to production while maintaining quality assurance. "Serverless-Flow" is similar to GitHub Flow, but takes advantage of the ephemeral pay-per-use environments Serverless offers. There are 4 steps to getting to Serverless-Flow on your CI/CD:

1. Isolating Environment Deployments 🏝
2. Integration Test Coverage 👷‍♀️👷‍♂️
3. Branching Workflow — Serverless Flow 🌳
4. Cleanup ♻️

1. Isolating Environment Deployments 🏝

The basic prerequisite to using the Serverless-Flow approach to CI/CD is to have service(s) that are deployable to isolated environments based on a single change to a deployment argument. Note: these examples will focus on the Serverless Framework. In the Serverless Framework, people often make use of the stage option for this:

serverless deploy -s [STAGE NAME] -r [REGION NAME] -v

The stage variable can be used by the Serverless Framework to change environment variables, and it can also be included in the naming of other resources to ensure resources don't clash between stages. The key is that the environments are isolated and don't clash. If you deploy a Dev1 environment, you don't want to have issues when deploying Dev2 because an S3 resource created as part of the service has a naming conflict. The simple solution to this is to include the variable you're using for environment declaration in the naming of any collateral resource. Alternatively, some teams stop naming additional resources at all and let AWS generate unique names.
Some projects avoid stages altogether to make this issue clearer to the team: API Gateway natively supports the concept of staging (generating per-stage URLs), whereas services like S3 do not. It's important that teams ensure collateral resources are made dynamic per environment, to prevent accidental impact to other environments and to allow deployments to proceed. One tactic to avoid stages is to add a custom option to your Serverless deploy:

serverless deploy --infraStackName staging

Just like a stage, this deployment argument can be included in resource naming to avoid conflicts:

service:
  name: my-service-${opt:infraStackName}
...
resources:
  Resources:
    StaticBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-bucket

2. Integration Test Coverage 👷‍♀️👷‍♂️

To have trust in CI/CD all the way to production, we need to verify that our changes have not broken existing functionality, and that they deliver the new functionality. Teams should no longer rely on manual regression testing; instead, automation should be the primary verification check. Serverless architectures are best tested via integration testing on the deployed infrastructure. Mocking is rarely appropriate; instead, real scenarios should run on the real underlying services. It's key that the integration test suite can be run on any environment, automatically conducting all required setup and teardown of data. For example, if a test runner like jest is used, arguments should be passed for the stack/stage name and the AWS profile to use for verification (e.g. asserting that the content of an S3 bucket is correct). Note: a future deep-dive article will demonstrate how integration testing of Serverless architectures can be achieved in Jest.

3. Branching Workflow — Serverless Flow 🌳

Serverless-Flow, as the name suggests, is based on many of the same principles as GitHub Flow. GitHub Flow, in short, proposes:

1. Create a branch from main with a descriptive name.
2. Add commits onto that branch locally, pushing regularly.
3. Open a Pull Request to main.
4. Code review.
5. Deployment to production before the final merge to main.

Serverless-Flow avoids the verify-in-production step by making use of the ephemeral environments available to us in Serverless:

1. Create a branch from main with a descriptive name.
2. Add commits onto that branch locally, pushing regularly.
3. Open a Pull Request to main.
4. Code review.
5. Automatic deployment to a new environment named after the Pull Request number, e.g. serverless deploy --infraStackName pr-42 (triggered by the PR opening or by a push to an existing PR).
6. The integration test suite runs against the pr-x environment to verify the changes.
7. Optional: functional review on the PR environment (the environment will have a subdomain for the PR number, allowing a preview of the feature).
8. Deployment to production (before or after the merge to main).

Note: GitHub Flow is actually not strict on where verification happens: "Different teams may have different deployment strategies. For some, it may be best to deploy to a specially provisioned testing environment." Serverless-Flow is therefore a form of GitHub Flow where we are strict about where verification happens.

4. Cleanup ♻️

Finally, it's not great to have many stacks building up, so we need to do some garbage stack collection. This can be achieved by a Lambda cron, automatic cleanup on merge to main, or a lightweight stack-management API that allows stacks to be claimed, released and even reused. Read more about stack reuse and management APIs with our open-source Table Lock project.

Conclusion

Serverless allows an infinite number of environments. If we structure our infrastructure well, we can create innovative CI/CD processes that allow teams to deploy faster, with better verification & QA processes.
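The garbage-collection step can be sketched as a small shell script. This is a minimal sketch, not the article's implementation: the stack-name prefix `my-service-pr-` is an assumed naming convention, and the AWS CLI calls shown in comments are the standard CloudFormation commands a CI job would use; the filter itself reads names from stdin so it can be tried without an AWS account.

```shell
#!/bin/sh
# Hypothetical cleanup sketch for per-PR stacks. The prefix below is an
# illustrative naming convention, not taken from the article.
PREFIX="my-service-pr-"

# In CI, candidate stack names would come from CloudFormation, e.g.:
#   aws cloudformation list-stacks \
#     --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
#     --query 'StackSummaries[].StackName' --output text | tr '\t' '\n'
# Here the filter reads names from stdin so it is testable offline.
stale_stacks() {
  grep "^${PREFIX}"
}

# Each matching stack would then be removed with:
#   aws cloudformation delete-stack --stack-name "$name"
printf 'my-service-pr-42\nmy-service-prod\nmy-service-pr-7\n' | stale_stacks
```

In a real pipeline this would run on merge to main (or on a Lambda cron), deleting only environments whose PRs are closed.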
https://medium.com/serverless-transformation/serverless-flow-a-ci-cd-branching-workflow-optimized-for-speed-and-quality-6b98c5a4e489
['Ben Ellerby']
2020-08-24 13:42:24.150000+00:00
['Ci', 'AWS', 'Pipeline', 'Serverless', 'Github']
Clairo’s ‘Immunity’ is a Queer Coming of Age Film
LGBTQIA+ Music

A Pride Month gem for LGBTQIA+ youth

Sofia, know that you and I shouldn't feel like a crime.

Claire Cottrill, known in the music world as Clairo, is a Massachusetts-raised singer-songwriter and luminary of the indie pop genre. After her 2017 viral bedroom hit "Pretty Girl", an original song whose accompanying impromptu music video of her lip-syncing (suitably) in her bedroom gained an overwhelming amount of attention, the Gen Z singer's fanbase flourished with young people who related to her and her message of fortitude. Her words "I'm alone now but it's better for me/I don't need all your negativity", accompanied by a simple lulling instrumental, were enough to obtain almost overnight success. Whether she is posting memes or advocating for social justice on social media, she stays engaged with her dedicated audience. Clairo first brought up the topic of her sexuality in a tweet in May 2018, saying "B.O.M.D. is also G.O.M.D. for ur information", referring to the song from her first EP diary 001 whose acronym stands for Boy Of My Dreams. She explained further in an interview with Out that "I'm still not really sure what my sexuality is, but I do know that it's not straight." Though currently labelless when it comes to sexuality, she explores love and relationships with women through the vivid language and picturesque instrumentals of Immunity. A coming of age film is one that follows a main character (usually a young person) through a transformative time of their life. It's a story of maturation where we experience the character overcoming hurdles to attain personal growth. Some recognizable examples of coming of age films include the '80s paradigm The Breakfast Club, the cult classic Clueless, and more recent hits like Moonlight and Love, Simon. Strains of loneliness, love, and revelation knit these stories together just as they do in Clairo's LP.
The songs that construct Immunity illustrate a cinematic queer coming of age story. Through dreamy and honest lyrics supported by fuses of glossy melodies and indie rhythms, we can see her script screened before us. Until recently, honest depictions of LGBTQ+ people (especially women) in film or music were few and far between, leaving queer youth with little media to relate to. Clairo joins the voices of similar artists such as girl in red, mxmtoon, and Hayley Kiyoko (deemed Lesbian Jesus by fans), giving women in the community an outlet to celebrate themselves by releasing unabashed gay bops in her own electric lo-fi style. The record places its listeners into its cinematic universe of lovely teenage chaos. She does not shy away from the truth of her emotions nor from the use of "she/her/hers" pronouns when singing about relationships, which only conjures an even clearer image of Clairo's dramatic narrative. We let Clairo lead us on her odyssey of icy isolation, anxiety-ridden self-discovery, and blossoming romance starting with the first track off the album, "Alewife". This song acts as our opening scene in a somber but powerful flashback exploring her eighth grade self and the mental hurdles she experienced. The lyrics are all at once self-involved, tragic, and at times even humorous, just like every tween's eighth grade experience. We can imagine a shot of a shadowy blue-washed room with fourteen-year-old Clairo centered on her bed. The shot closes in on her as she says "I lay in my room wondering why I've got this life." As "why me?" as these lines may read, this true story of Claire questioning being alive is dreadfully relatable and reassuring. As "Alewife" persists from timid to vigorous, we learn that it was her close friend who "saved her from doin' something to herself that night." This song is a dedication to Clairo's friend Alexa, who she claimed saved her life that night.
Our main character has an established friendship, and the album already passes the Bechdel Test. Immunity's gleaming first single "Bags" turns a mundane moment of watching TV on the couch into a fully fleshed will-they-or-won't-they scene between the ingenue, Clairo, and her love interest. She stated in an interview with Genius, "I think this song is definitely about one of my first experiences with a girl, but I think as a whole it's just about being comfortable or becoming comfortable in between spaces." Her hesitant voice singing "Can you see me using everything to hold back?/I guess this could be worse/Walking out the door with your bags," over a wavy synth shows us her conflicting emotions of earnestness and nervousness. She hopes for the best but expects the worst. We can see the half empty wine glasses, the close-up of their hands inching toward each other so slightly, Clairo's face in focus featuring eager side-glances at her couch companion. The anticipation is palpable. "Bags" leaves the listener with the feeling of a classic love story escalation and even features a reference to the Academy Award winning LGBTQIA+ film Call Me by Your Name. It's hard to be still while listening to "Sofia", the final single from Immunity. Clairo explains in a tweet that "Sofia" "is about my first ever crushes on women i saw in the media. people like sofia coppola, sofia vergara, etc…" This upbeat dance-tune features a dense guitar and polyphonic vocals, making it the most concert-ready track on the record. She sings "I think we could do it if we tried/If only to say, you're mine," in the chorus, offering a fresh queer rendition of a sentiment we have heard again and again from straight artists, much like the LGBTQ+ film genre, which has become more mainstream than niche over the last decade or so. "Sofia" is a confession of adoration and joy, color graded in soft rosy hues.
The lyrics can come across as trite at times, but who doesn’t love a corny romance to indulge in every once in a while? Through Immunity, Clairo gives her listeners in the community a space to be seen and revered. With the previously mentioned tracks and many others on the LP, we are seeing Claire Cottrill’s instrumentalized diary- a novel-to-film-like work that plays out the ripening of Clairo’s love life, artistry, and ultimately career.
https://medium.com/an-injustice/clairos-immunity-is-a-queer-coming-of-age-film-bfd07d0afec0
['Tori Ladd']
2020-07-02 17:32:14.969000+00:00
['Music', 'LGBTQ', 'Film', 'Indie', 'Pride']
5 Sites for Journalists to Learn How to Code
With the growing use of data journalism in Arab newsrooms, there is a need to tell news stories in an interactive way, so that the reader grasps the context of the story and engages more with the data. This requires journalists to work with embeddable code alongside text, images and video clips in order to build an interactive news story. A journalist may collaborate with a software developer at the news organization, working together to produce the story, or may learn to do it alone through a number of sites that offer ready-made code for the most common graphic formats. This code is easily adapted to suit the content of any news story, which cuts production time roughly in half by providing parts that can be built upon. The following are the top five sites offering code that allows quick preparation; you can easily explore them by embedding the code snippets and making some minor adjustments:

1. Google Developers

The site provides all the main chart types, such as line graphs, pie charts and column charts, as well as several kinds of world maps and tree maps. It does not require advanced programming skill from the journalist: the code is easy to copy and modify, and changing the data inside the snippet automatically updates the chosen chart. In less than a few minutes a journalist can produce an interactive story; one example is an interactive map of countries exporting fighters to the Islamic State group, prepared using the maps available on the site.

2. d3plus

The site depends on open-source code: programmers designed the initial version and published it for everyone to develop and improve. Here journalists can take ready-made snippets and add their own data to the code.
The site offers one of the largest collections of shapes and graphs, and they are characterised by high-quality design, making them more attractive to the reader.

3. d3.js

The site contains more than 250 quick-to-prepare code examples, divided into categories, the most notable being the basic chart formats, animation and graphics, in addition to innovative visual models for displaying data and comparing datasets with each other. You can use the site like the previous ones, copying the code to your website and then modifying the data in it; you could also work with a developer if you wish to adjust the spacing and colours. What distinguishes the site is that it includes a large number of models and visual ideas that process data and make the relationships between data points easy to see in graphic form. One interactive map, for example, was implemented using the code available on the site.

4. datavisualization.ch

The site does not work in the usual way of the previous sites, which display charts together with their code. Instead, it showcases journalistic projects that have been completed by software developers and explains how they were implemented. It also makes it possible to contact the people behind the projects to obtain a copy of their code, which can then be modified and developed to fit another news story.

5. OpenProcessing

This site comes last on the list in terms of the variety of forms and ease of use, as it does not offer many free models; it provides an updated monthly subscription service in return for a larger number of ready-made snippets.

Although these five sites were not the first to provide ready-made visualization code (earlier services such as Many Eyes preceded them), the earlier sites were unable to evolve and keep pace with the shift that has occurred in the data-processing industry.
https://medium.com/info-times/5-sites-for-journalists-to-learn-how-to-code-ea326bd1d30
['Amr Eleraqi']
2016-12-19 20:57:10.486000+00:00
['Infotimes Blog', 'Design', 'Data Visualization']
Music Collaboration Will Never Happen Online in Real Time
With the world moving towards high-speed networks, a New York-Tokyo jam session should be easy, right? It will never happen. Light speed is not fast enough for making music in cyberspace. In 2013, I started a school for traditional music in the Dominican Republic. We've been teaching kids to play bachata — which is the Dominican version of rock 'n' roll. Things are going well. We threw out the mainstream approach to music education, and focused on two areas: learning by ear, and playing in groups from day one. The group learning component has been critical. Just as with rock 'n' roll, bachata is polyphonic — meaning several instruments play together at the same time. Bachata is nearly impossible for a single person to play alone. The guitar on its own, the bongos on their own . . . are empty. The sounds these instruments make complement each other. Put simply, it's fun to play in a group; it's boring to play alone. Fortunately our school is full of bachata-crazy cadets, and no one lacks someone to play with.

Cesar, Adriel, Isaa, Juliana, and Emily, students of The iASO Bachata Academy @ DREAM, Cabarete Campus, Dominican Republic

Video: A class at the Bachata Academy @ DREAM

But what if a person living in Chicago or Tokyo wants to learn to play bachata? While he or she might find an instructional video on YouTube, it could be difficult to find someone to play with. Light speed is fast, but not fast enough. Playing music in a group is a two-way communication. For it to be possible, there must be very little delay, or latency, between the sounds each participant produces and the other participants' hearing them. Imagine if Chicago guy hits a drum with a steady beat — one beat per second. Imagine if Tokyo girl hears each beat 0.1 seconds after it is played. If Tokyo girl claps in time with what she hears, her claps will occur 0.1 seconds after Chicago guy claps, but the two claps will sound in time to her. So far, so good.
But when Tokyo girl's hand claps reach Chicago guy, there is a further 0.1 second delay, and so Chicago guy hears hand claps 0.2 seconds after each of his drum beats. The gentleman that he is, Chicago guy shifts his drum beats 0.2 seconds later, to adjust to Tokyo girl. Tokyo girl then hears Chicago guy's beat shift by 0.2 seconds, and so she politely shifts her claps to match. The result is that the tempo continuously slows as each adjusts their beat and clap to accommodate the other. Eventually the whole thing grinds to a halt or falls apart. There is a roughly 0.18 second delay between two people with good internet connections in Chicago and Tokyo. For it to be possible to have a Tokyo-Chicago jam session, the network would have to be more than 20x faster [less than 0.009 seconds of latency]. This would require a speed faster than light.

Hearing and latency

Humans can accurately perceive audio intervals as small as 4-5 ms [milliseconds]. Because of the time it takes even a staccato sound like a drum hit to evolve and decay, two sounds less than 15 ms apart are generally perceived as continuous rather than separate. But intervals between 5 ms and 15 ms are an important part of the feel of music — the pushing or dragging against the tempo.

Latency of physical sound

Performers spaced 2 meters apart will experience a natural 6 ms latency from the time it takes sound to travel through air. At 3 meters, that latency is 9 ms, and at 4 meters (13 feet), it is about 12 ms. Less than 9 ms of latency is ideal, and greater than 12 ms becomes problematic for timing. Such transmission speed is not likely to ever be possible between continents.

Latency of fiber-optic communication

Signals travel on fiber-optic line at about 1 km per 5 microseconds (about 2/3 the speed of light in vacuum). To reach the furthest part of the world on the most direct route (20,000 km) would therefore take about 100 ms.
Put differently, every 1 meter of natural sonic latency (about 3 ms) is equal to nearly 588 kilometers of theoretical fiber-optic latency. The theoretical limit to online real-time collaboration is therefore about 2000 km. That won't cross the oceans, but seems at least like a good start. In practice, however, our network latency is much higher. The signal between two users is not a straight fiber-optic line, but rather zig-zags and passes through a multitude of networking devices, each introducing more latency along the way.

Online real-time music collaboration is not possible

As of 2013, it is difficult to have online real-time musical collaboration even within the same city. To do so requires setting up a class of specialized high-speed network similar to what is used by the financial world's high-speed trading outfits. The latency between the laptop on which I am typing in a New York City apartment and a local New York City domain name server is currently 12 ms — low, but not good enough for music. Real-time online musical collaboration has been a dream among musicians since the advent of the internet. But it is constrained by the same physical barrier as interstellar travel: the speed of light. We will colonize the stars sooner than play music together across continents.

2020 update: There are a number of apps in the works that aim to solve the problem of real-time jamming. These apps use a centralized metronome to keep participants in sync. Latency is less of an issue with melodic content. If player A hears player B's melody a 1/8th or 1/16th note late, it's still close enough to react melodically. The bigger issue is with tempo. A centralized metronome can solve for this — much as a conductor keeps the wings of an orchestra in time, but now imagine that the wings of the orchestra are 40 meters apart — roughly equivalent to 120 ms of network latency.
But the adherence to a metronome removes an important aspect of the music making — the 'feel' and control over tempo.

Network Latency Test

Below is a list of the latency I measured between a high-speed cable home connection in New York and name servers in various global locations. I tested with both a wifi-connected laptop and a hard-wired PC. The results were the same.

New York: 12 ms
Boston: 16 ms
San Francisco: 85 ms
Santo Domingo, Dominican Republic: 63 ms
Paris: 93 ms
Tokyo: 189 ms
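The article's latency arithmetic can be re-derived with a one-line awk computation. This is a sketch under stated assumptions: sound travelling at roughly 340 m/s in air (which reproduces the "nearly 588 kilometers" figure) and the stated 5 microseconds per kilometer of fibre.

```shell
# Re-derive the article's figures (assumes sound at ~340 m/s in air and the
# stated 5 microseconds per km of fibre; both are round-number assumptions).
awk 'BEGIN {
  sonic_ms_per_m  = 1000 / 340   # ~2.9 ms of latency per metre of air
  fibre_ms_per_km = 5 / 1000     # 5 microseconds of latency per km of fibre

  # metres of air expressed as km of fibre (article: "nearly 588 kilometers")
  printf "1 m of air  ~= %.0f km of fibre\n", sonic_ms_per_m / fibre_ms_per_km
  # one-way time to the antipode (article: "about 100 ms")
  printf "20,000 km   ~= %.0f ms\n", 20000 * fibre_ms_per_km
  # distance budget at the 9 ms comfort limit (article: "about 2000 km")
  printf "9 ms budget ~= %.0f km\n", 9 / fibre_ms_per_km
}'
```

The 9 ms budget works out to about 1800 km of straight fibre, which is why the article's "about 2000 km" ceiling holds even before real-world routing and equipment add their own delay.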
https://medium.com/thsppl/music-collaboration-will-never-happen-online-in-real-time-e1c6448fc3d4
['Benjamin De Menil']
2020-06-30 17:10:48.669000+00:00
['Music Education', 'Music Technology', 'Music App Development', 'Music Application', 'Music']
Paul McDonald
Paul and I chatted about his music and his plans after Idol. He even sang part of one of his new songs!
https://medium.com/a-teen-view/paul-mcdonald-62f323475e16
['Arin Segal']
2016-11-04 00:42:06.114000+00:00
['Mcdonald', 'American', 'Music', 'Paul', 'Idol']