Is serverless cheaper for your use case? Find out with this calculator.
At the end of the month, when you receive your serverless bill, you mainly see one number. If you want to compare this number with non-serverless app costs, you shouldn’t simply compare it to infrastructure costs; you should broaden your comparison to include: a chunk of Ops salaries; SaaS bills, such as a monitoring tool; development efforts for generic features like authentication; and efforts related to technology maturity and skills build-up. This high-level perspective is key when assessing the economic value of serverless, as serverless encompasses TCO’s three variables in one bill. To give two concrete examples: a service such as Lambda is managed, monitored and auto-scalable by design, which reduces the operational burden of handling classic performance issues and system maintenance. Another service, Cognito, pre-packages an always up-to-date, industry-standard way of handling authentication, users and authorisation, seamlessly auto-wired to other AWS services, which means you won’t have to spend developer days rebuilding or rewiring something similar in your architecture. Yan Cui wrote a good case explaining how serverless indeed lowers costs when looking through the TCO lens. Deloitte published a white paper on the topic in which they compare serverless with more traditional approaches on each TCO component using real examples.

Serverless cost control: with the right set of skills, the technology enables feature-level cost optimisation

There are many examples showing how cloud costs can get out of control (like this fresh one). With AWS, you cannot define cost ceilings; you can only create cost alarms, which is not as reassuring as it should be: Denial-of-Wallet, I see you. The possibility of being hit by overspending is real, just like the possibility of slow, or worse, crashed systems in a classical architecture because of unplanned traffic. This is a direct consequence of the big paradigm shift that serverless represents. It is also an amazing opportunity: cloud providers are giving us the power to scale instantly, which reduces the effort of hiring and managing a dedicated team. To counter this overspending fear, some Ops and Architect tasks are moving towards FinOps. As a FinOps practitioner, you optimise your system “slightly” less for performance and maintenance reasons, and a lot more for financial ones. A nice illustration of this is Alex Casalboni’s analysis on optimising Lambda execution time and cost by defining the right resource allocation. Serverless gives you the tools for complete and granular cost control over each feature and each piece of your architecture. You can see where you spend more than you should. You can assess the very ROI of a feature to better understand its impact on the business, which is what a company always ends up looking at. Cloud providers are “merely” abstracting generic complexity away and creating a shortcut toward more financial control.

Some fixed architectural opinions help estimate a serverless project cost

To help people estimate the cost of a serverless project and to share best FinOps practices, we decided to build an AWS serverless cost calculator. This calculator is meant to be easy to use while including each component of a complete architecture. Why is it different from what you can already find online? Because it relies on an opinion of what a serverless architecture should look like.
That’s where we used our FinOps skills, arbitrating which services to use and how, to simplify the process of estimating an AWS serverless app cost. I agree that all architectures are different, and costs can change many-fold because of some differences. But at the same time, most web applications share a lot of common elements: authentication, a database with some models, an API to get and update those models, a few files on a system, some asynchronous tasks, and a couple of advanced workflows (think e-commerce checkout funnel or multi-step identity validation). Standardising what a typical serverless architecture can be makes it easier to estimate costs for typical use cases. It also creates a place for thorough discussions and welcome challenges within the community, aiming for consensus toward ever more optimised systems. These opinionated principles can be found in my previous article: What a typical 100% Serverless Architecture looks like in AWS! I invite you to read it to fully understand the calculator. With the calculator, you have two ways to estimate: either be guided by pre-defined scenarios, or define one with your own inputs.
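To make the kind of arithmetic behind such an estimate concrete, here is a minimal sketch of a monthly Lambda cost calculation. The workload figures are hypothetical, and it uses the public us-east-1 Lambda prices ($0.20 per million requests, $0.0000166667 per GB-second), ignoring the free tier:

# hypothetical workload: 10M requests/month, 120 ms average duration, 512 MB memory
requests_per_month = 10_000_000
avg_duration_s = 0.120
memory_gb = 0.5

price_per_request = 0.20 / 1_000_000  # $0.20 per 1M requests
price_per_gb_second = 0.0000166667    # duration (compute) price

gb_seconds = requests_per_month * avg_duration_s * memory_gb
monthly_cost = requests_per_month * price_per_request + gb_seconds * price_per_gb_second
print(f"Estimated Lambda bill: ${monthly_cost:,.2f}/month")  # ~ $12.00

Even this toy version shows why resource allocation matters: halving the memory or the duration halves the compute component of the bill, which is exactly the trade-off the Lambda tuning analysis mentioned above explores.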
https://medium.com/serverless-transformation/is-serverless-cheaper-for-your-use-case-find-out-with-this-calculator-2f8a52fc6a68
['Xavier Lefèvre']
2020-09-20 16:09:52.244000+00:00
['Finops', 'Serverless', 'Infrastructure', 'AWS', 'Lambda']
What I’m Excited About From GitHub Universe 2020
Automerge for PRs

Most of the time, you need a review or two and some checks to pass before a pull request can be merged. This can take some time, so GitHub is adding the ability to click a button to enable automerging on a PR. What this means is you’ll be able to finish up a pull request and enable automerging, and then it’ll be automatically merged when all the right conditions are met. This makes it so you don’t need to check back on your PR’s status every five minutes because it’ll automatically merge when it needs to.
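Beyond the button in the web UI, auto-merge can also be enabled programmatically. Here is a minimal sketch using GitHub's GraphQL enablePullRequestAutoMerge mutation from Python; the token, the pull request node ID, and the choice of squash merging are placeholders and assumptions, not part of the announcement:

import requests

MUTATION = """
mutation($prId: ID!) {
  enablePullRequestAutoMerge(input: {pullRequestId: $prId, mergeMethod: SQUASH}) {
    pullRequest { number autoMergeRequest { enabledAt } }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": MUTATION, "variables": {"prId": "PR_NODE_ID"}},  # placeholder node ID
    headers={"Authorization": "bearer GITHUB_TOKEN"},               # placeholder token
)
resp.raise_for_status()
print(resp.json())  # the PR will now merge itself once reviews and checks pass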
https://medium.com/better-programming/what-im-excited-about-from-github-universe-2020-da57a928c9e2
['Ben Soyka']
2020-12-14 17:40:02.617000+00:00
['JavaScript', 'Github', 'Startup', 'Github Universe', 'Programming']
The Ivy Leagues Have Released 67 Free Online Business Courses For You To Take Right Now
Well, Brown, Harvard, Cornell, Princeton, Dartmouth, Yale, Columbia and Penn are currently offering free online courses across multiple topics on Class Central. Below are 67 business courses you can have a crack at from the comfort of your own couch. Whether you find yourself needing some stimulation, or you’ve gained some more time up your sleeve during lockdown, this is a great way to learn some new skills and knowledge. This list was kindly curated by Dhawal Shah, the founder of Class Central. I have pulled this from the list of 450 curated courses he lists out here. If “Business” isn’t your topic of choice, you will also find courses on Programming, Humanities, Science, Health and Personal Development, amongst many others. Below are the business courses still available right now:
https://medium.com/the-post-grad-survival-guide/the-ivy-leagues-have-released-67-free-online-business-courses-for-you-to-take-right-now-d2fec2ea4ca7
['Maddie Rosier']
2020-04-23 11:25:05.217000+00:00
['Life Lessons', 'Learning', 'Business', 'Marketing', 'Business Strategy']
Your Mom Was Right
What is intermittent fasting?

You’re already intermittently fasting at night when you sleep. The idea behind the new fasting fad is to extend that period of not-eating for a couple of hours or longer. Maybe you postpone breakfast or skip it altogether. Maybe you skip dinner instead. Instead of being considered starvation, skipping meals is now considered healthy. “In the past century, a shift has occurred away from disease caused by insufficient nutrient supply towards over-nutrition, leading to obesity and diabetes, atherosclerosis, and cardiometabolic disease,” explained a 2018 report from the Washington University School of Medicine. Intermittent fasting is an outgrowth of research into caloric restriction. First theorized in the early 1900s, caloric restriction was tested in animal experiments throughout the 20th century, which demonstrated that reducing caloric intake by a third extended lifespan. The National Institute on Aging began a clinical trial on humans in 2002 called CALERIE, which studied human participants over several years. The trial results supported previous research, connecting “sustained human calorie restriction (for at least two years) and the favorable effects on predictors of longevity and cardiometabolic risk factors.”

What’s the difference between intermittent fasting and caloric restriction?

Caloric restriction means you eat as often as you like, but substantially fewer calories. Intermittent fasting features two or three meals compressed into a short period of time, with more rest and digestion time before dinner and breakfast. During that increased digestion time, your body shifts over to burning fat. “After about 8 hours of fasting, the liver will use the last of its glucose reserves. At this point, the body enters into a state called gluconeogenesis, marking the body’s transition into fasting mode. Studies have shown that gluconeogenesis increases the number of calories the body burns. With no carbohydrates coming in, the body creates its own glucose using mainly fat,” Medical News Today reported. In no way is either caloric restriction or intermittent fasting intended to create starvation or anorexia. The key here is that you have to eat a regular meal at some point, or your body will go into starvation mode and begin eating muscle when it runs out of fat. But burning fat while you sleep sounds pretty good, right?
https://marthahimes.medium.com/your-mom-was-right-2402ffd7856
['Martha Himes']
2020-01-29 13:31:01.331000+00:00
['Health', 'Diet', 'Fasting', 'Weight Loss']
How to Create an Image Recommendation System
A recommender system, or a recommendation system (sometimes replacing “system” with a synonym such as platform or engine), is a subclass of information filtering system that seeks to predict the “rating” or “preference” a user would give to an item.[1][2] They are primarily used in commercial applications. Recommender systems are utilized in a variety of areas and are most commonly recognized as playlist generators for video and music services, product recommenders for online stores, content recommenders for social media platforms, and open web content recommenders.[3][4] These systems can operate using a single input, like music, or multiple inputs within and across platforms like news, books, and search queries. There are also popular recommender systems for specific topics like restaurants and online dating. Recommender systems have also been developed to explore research articles and experts,[5] collaborators,[6] and financial services. (https://en.wikipedia.org/wiki/Recommender_system)

Collaborative filtering

A key advantage of the collaborative filtering approach is that it does not rely on machine-analyzable content and is therefore capable of accurately recommending complex items such as movies without requiring an “understanding” of the item itself. Many algorithms have been used to measure user similarity or item similarity in recommender systems, for example the k-nearest neighbor (k-NN) approach[38] and the Pearson correlation as first implemented by Allen.[39] When building a model from a user’s behavior, a distinction is often made between explicit and implicit forms of data collection. Examples of explicit data collection include the following: asking a user to rate an item on a sliding scale; asking a user to search; asking a user to rank a collection of items from favorite to least favorite; presenting two items to a user and asking him/her to choose the better one of them; asking a user to create a list of items that he/she likes (see Rocchio classification or other similar techniques).

Cosine similarity

Using Keras and the VGG16 CNN, we are going to develop an algorithm to recommend similar products:

# imports
from keras.applications import vgg16
from keras.preprocessing.image import load_img, img_to_array
from keras.models import Model
from keras.applications.imagenet_utils import preprocess_input
from PIL import Image
import os
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
import pandas as pd

# parameters setup
imgs_path = "../input/style/"
imgs_model_width, imgs_model_height = 224, 224
nb_closest_images = 5  # number of most similar images to retrieve

1. Load the VGG pre-trained model from Keras

The Keras module contains several pre-trained models that can be loaded very easily. For our recommender system based on visual similarity, we need to load a Convolutional Neural Network (CNN) that will be able to interpret the image contents. In this example we will load the VGG16 model trained on ImageNet, a big labeled image database. If we take the whole model, we will get an output containing the probabilities of belonging to certain classes, but that is not what we want. We want to retrieve all the information that the model was able to extract from the images. In order to do so, we have to remove the last layers of the CNN, which are only used for class predictions.
files = [imgs_path + x for x in os.listdir(imgs_path) if "jpg" in x]
print("number of images:", len(files))

(Example image paths used in the project.)

# load the model
vgg_model = vgg16.VGG16(weights='imagenet')

# remove the last layers in order to get features instead of predictions
feat_extractor = Model(inputs=vgg_model.input, outputs=vgg_model.get_layer("fc2").output)

# print the layers of the CNN
feat_extractor.summary()

# compute cosine similarities between images
cosSimilarities = cosine_similarity(imgs_features)

# store the results into a pandas dataframe
cos_similarities_df = pd.DataFrame(cosSimilarities, columns=files, index=files)
cos_similarities_df.head()

def retrieve_most_similar_products(given_img):
    print("--------------------------------------------------------")
    print("original product:")
    original = load_img(given_img, target_size=(imgs_model_width, imgs_model_height))
    plt.imshow(original)
    plt.show()
    print("-------------------------------------------------------")
    print("most similar products:")
    closest_imgs = cos_similarities_df[given_img].sort_values(ascending=False)[1:nb_closest_images+1].index
    closest_imgs_scores = cos_similarities_df[given_img].sort_values(ascending=False)[1:nb_closest_images+1]
    for i in range(0, len(closest_imgs)):
        original = load_img(closest_imgs[i], target_size=(imgs_model_width, imgs_model_height))
        plt.imshow(original)
        plt.show()
        print("similarity score : ", closest_imgs_scores[i])
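Note that the listing above uses imgs_features before it is defined; the original excerpt omits the feature-extraction step between feat_extractor.summary() and the cosine-similarity computation. Here is a minimal sketch of that missing step, reusing the files list and feat_extractor defined above; the batch-building details are my own, not the article's:

# sketch of the omitted feature-extraction step (assumed, not from the original excerpt)
batch = []
for f in files:
    img = load_img(f, target_size=(imgs_model_width, imgs_model_height))
    batch.append(np.expand_dims(img_to_array(img), axis=0))  # shape (1, 224, 224, 3)

processed_imgs = preprocess_input(np.vstack(batch))     # VGG16-style channel preprocessing
imgs_features = feat_extractor.predict(processed_imgs)  # one 4096-d fc2 vector per image

With imgs_features in place, the cosine-similarity matrix can be computed, and retrieve_most_similar_products(files[0]) will display the five visually closest items.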
https://medium.com/analytics-vidhya/how-create-image-recomendation-system-3dcc5edf1597
['Bernardo Caldas']
2020-12-07 15:07:47.860000+00:00
['Recommendation System', 'Keras', 'Cosine Similarity', 'Computer Vision', 'Recommender Systems']
The Process to Deliver a Software or Web Application Into Production
I have worked for different companies, which allowed me to experience different strategies when it comes to taking an idea and releasing it to the public as a final product. Whether you are part of a small or big project, I believe any developer would benefit a lot from understanding the entire process of what it takes to deliver software or an application.

Analysis

The first thing that triggers a product or software development is the idea or a solution for a specific problem. After that, you normally look at the market for existing solutions, if any. If you find one, you take a closer look at it to see if it lacks something you can contribute, or if you can just go ahead and build something better to compete in the market. I call this part “researching the idea”; in a software development cycle it is called analysis, and it is the step where you define the scope and the project itself. You measure the risks, define a timeline that will take you to the final result, define or anticipate issues or opportunities, and plan for things, as well as come up with the requirements for the project. This step may even determine if the project should go forward or not, as well as how.

Documentation

This phase is about documenting the project solution and requirements. The documentation must contain everything needed during development and provide a few checks: economic, legal, operations, technical, and schedule, more or less. You must define the costs related to the development of the project and go over copyrights and patents to avoid any legal conflict around the idea and product. The delivery schedule is a big one, especially if you have sales, marketing, and social media teams that need to create content and be ready for launch to promote the product.

Design

The design phase is not just about designing the interface of the product itself, but anything related to it. It can be the overall system architecture, what the data looks like, where it will be stored, and how the data flows in the system. You also define the functionality of each module and all related logic, as well as how these modules talk to each other, and yes, you also design the interface of the software.

Coding

After the design phase is the coding phase, where you analyze the idea, the documentation, the requirements, and the specifications, and start coding by following a coding plan, the product schedule, timeline, and roadmap. Anything that turns out to be more complex and deviates from the original plan should be communicated. Things may change as a result. I often see a plan B applied, where you find an MVP version of the feature or the delivery, implement that, and come back to it later to further improve the feature after more detailed research. The show must go on, and it is hard to remove a wagon after a train is in motion. The coding is perhaps done by following an agile development model, where features are delivered in sprints, planned in sprint plannings, with daily engineering updates in stand-ups. The development team keeps a backlog of features and bugs to distribute among themselves and address per sprint, which usually takes two weeks.

Testing

When the code is done comes the testing phase. I am not talking about unit tests, as those should happen during the coding phase whether you use a Test-Driven Development technique or not. The test phase is for QA and, sometimes, E2E tests. These tests do not happen after 100% of the things are coded. They happen as different parts are completed.
Anything that is found to be faulty or deserving improvement is sent back to be fixed by the engineers. The goal is not to introduce new features but to check that what was coded follows the requirements and does what it is supposed to do. The E2E suite is created to automate the user flow in a step-by-step pattern to mimic how the user would use the product. (A minimal sketch of such a scripted flow appears at the end of this article.)

Deployment

If everything is coded, tested, and seems to be right, it then gets deployed, but that does not mean the developers’ and testers’ jobs are complete. QA then tests things in production, as the production and development environments are different, and again, anything found to be broken is sent back to be fixed by the developers. At this point, the user will start interacting with the product, and sometimes things come up; this is when customer support comes in. These people understand how the product works because they got trained as things were being built, or at the end. They will guide the users through the product in case of any problem, or if the user is stuck on some issue that is preventing them from using the product. Anything perceived to be a problem is turned into issues that are sent to the developers’ backlog to be checked by the engineering team and fixed if necessary. The customer support may even be the developers themselves. Some companies use the concept of having developers “on call” for any user-related issues. Normally small companies do that, and these engineers stay on call even during non-business hours.

Maintenance

After launch, there is the maintenance phase, the final phase of the cycle. This phase includes bug fixes, like those reported after launch, software upgrades, and any new feature enhancements. The development cycle is circular, so if any new thing, version, or complex update needs to be done, it goes through phase 1 again until it is delivered.

Observation

One thing to notice is that the coding phase is often small. There is a lot of planning and support time dedicated to delivering a product. I worked at companies where we took two and a half years to deliver a product, as well as others that took 3, 6, or 9 months, depending on the product type. No matter the time it takes to deliver software, they all follow, or try to follow, a software development cycle.

Red Flag

A red flag would be a place where the coding time is the largest phase; normally these are startups that experiment, test, and come up with requirements as things are being coded and designed. These environments tend to be very stressful to work at, as things may change while you are coding them, meaning you start a sprint with a set of requirements, and by the end of the sprint the design and requirements may have changed, which may mean that the developers need to allocate extra time to address these changes.

Conclusion

The size of the project should not matter, whether it is a side project or a freelance project. You should always try to follow a plan and get good at it. The steps will narrow your focus and allow you to deliver a product in chunks, which will keep you on track and satisfied as you go. I implement these steps, fully or partially, in my deliveries, which allows me to finish side projects, give a detailed plan, pricing, and timeline to a freelance client, and communicate well with VPs, managers, and project owners at work. Watch me code and create things by visiting beforesemicolon.com or subscribing to my YouTube channel.
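As a concrete illustration of the E2E testing step described above, here is a minimal sketch of a scripted user flow. The article does not name a tool, so the choice of Playwright, the URL, and the selectors are all assumptions for illustration:

# pip install playwright && playwright install
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://app.example.com/login")  # hypothetical app URL
    page.fill("#email", "user@example.com")     # hypothetical selectors
    page.fill("#password", "correct-horse")
    page.click("text=Log in")
    # verify the flow ends where a real user would expect to land
    assert page.is_visible("text=Welcome")
    browser.close()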
https://beforesemicolon.medium.com/how-to-deliver-a-software-or-web-application-into-production-4bf309be4493
['Before Semicolon']
2020-12-27 18:28:23.396000+00:00
['JavaScript', 'Software Architecture', 'Technology', 'Software Engineering', 'Programming']
Z-Ro and Shaq Spread the Wealth on “Stop the Rain”
On the back half of Z-Ro’s latest album, Rohammad Ali, listeners are met with a familiar, if a bit peculiar, voice. Following features from rapper Juicy J and singer Brendalynn, Z-Ro enlists the talents of Shaquille O’Neal on “Stop the Rain.” You might think that Shaq, whose most recent rap exploits have included beef with Portland Trail Blazers all-star Damian “Dame Dolla” Lillard, is an odd choice for a Z-Ro song. I might be inclined to agree. Shaq and Z-Ro are polar opposites. The former is the crown prince of the spotlight. He has twice as many nicknames as teams he played for in the NBA. No one batted an eye when he became a police officer, nor when he earned his Ph.D. He’s been 7-foot-2 for most of his life, making it entirely impossible for him to escape the limelight even if he wanted to. Z-Ro, on the other hand, shirks the spotlight at every given chance. His sunglasses are a near-permanent fixture on his face, shielding him from public view in concert with his usually all-black attire. Ro’s label, 1 Deep Entertainment, is a manifestation of everything he’s espoused over the course of his 20-plus-year career. He said it best on A.B.N.’s “No Help”: “I don’t need no help my N — -a, I can do bad on my own.” So, I would concede that thinking Z-Ro and Shaq don’t mix is a fair estimation. “Stop the Rain,” however, isn’t beholden to the laws of popularity. Over the course of the track’s four-minute, thirty-second run time, the duo delivers a pair of contrasting verses, drawing out each other’s strengths in a collaboration that no one could have seen coming. “Stop the Rain” is such a success because Z-Ro doesn’t change for anyone. Since splitting off from his two-man group with A.B.N. partner and cousin Trae tha Truth, Z-Ro has been rolling solo. His discography is a series of musings about trustworthiness and personal morals interspersed with a handful of non-committal features. Z-Ro albums don’t include a band of his fellow Houstonians like Pop Smoke would do with his Brooklynites. Ro’s projects are tailor-made for him and him alone. And still, Z-Ro is quick to show his support for his Texas roots. DJ Screw gets a shout-out on nearly every track. The Mo City Don clearly recognizes he’s part of something bigger — the Houston rap scene has fought for the characterization of the “third coast” in hip-hop — but he hypes Houston cautiously, knowing that any relationship, rap or otherwise, is fleeting. Shaquille O’Neal is the complete opposite. The “Big Fella” is welcomed everywhere he goes, as is the case on “Stop the Rain.” Where Z-Ro is hesitant to relinquish his solitary style, Shaq litters Houston rap references throughout his verse. “I feel like Bun B, damn I miss Pimp C/ Fat Pat I miss you too cuz, RIP,” he raps, taking ownership of the culture of which Z-Ro is wary. All of this comes together atop a historic sample of Loose Ends’ “You Can’t Stop the Rain.” In 1996, Shaq deployed the UK funk band’s song as the title track for his third studio album, swapping the word “rain” for its regal homonym. The video for the song included larger-than-life fare, consisting of a Mission Impossible-like storyline and utilizing CGI effects and character title cards, all of which indulged Shaq’s grandeur. He had recently signed a seven-year, $120 million contract with the Los Angeles Lakers, and the track was an expression of his newfound wealth. Z-Ro, however, takes the song to wax poetic about building success from the ground up.
“Why you hate to see me walk on marble floors, homie/ I ain’t greedy, I could show you how to marble yours, homie,” Z-Ro raps near the song’s intro. Just as Shaq spreads the wealth by shining a light on an oft-overlooked segment of hip-hop, Z-Ro proliferates the mindset that success isn’t resigned to a chosen few. “Stop the Rain” melds Z-Ro and Shaq’s belief systems seamlessly, remixing a classic 80s tune to find a bigger purpose.
https://abrandbox.medium.com/z-ro-and-shaq-spread-the-wealth-on-stop-the-rain-efe0e5fe409
['Brandon Johnson']
2020-07-01 01:47:07.321000+00:00
['Hip Hop', 'Review', 'Music', 'Houston', 'Rap']
7 Steps for a Faster Lightroom Photo Editing Workflow
If you just got off a portrait session, shot an event, finished covering sports, or got back from days and days of vacation, you probably have anywhere from a few hundred to a few thousand photos to look through. You can spend days and days editing and sorting through photos in Lightroom, and that’s why today I’m going to give you tips on saving time in the photo editing workflow. Besides things like learning to use shortcuts, here are 7 steps for a faster and more productive Lightroom editing experience.

#1. Get it Right in Camera First

I understand this is a really big no-brainer, but seriously, it’s worth taking the time to consider. Getting things right in camera is essential to speeding up your workflow later. You’ll spend less time having to tweak small mistakes and therefore finish a lot earlier than normal. Here are two examples of things you need to get right in-camera that are going to boost your workflow when you import your photos into Lightroom. First, get your exposure correct as often as possible. Use your exposure meter to know if you’re getting it right. Check your photos every once in a while to make sure you aren’t getting too far off the mark. It’s totally fine to chimp every couple of bursts just to make sure you’re hitting the exposure right, just don’t do it so often that you miss a shot! If you’re shooting on mirrorless with an EVF, this will be even more convenient, as you can actually see what your true exposure is going to be. No chimping needed. I know that in Lightroom, the exposure slider is just one of many, but if you can spend less time fiddling around with it by getting the photo right in camera, the more time you can save. Next, get the leveling of your image right. There’s nothing more tedious than having to rotate images in Lightroom. Today, most cameras have gyroscopic sensors that can be activated and shown on the back of the camera screen or in an EVF, telling you if you are level or not. Even when not using this tool, it’s important to check things like horizons and other horizontal lines to see if they’re aligning correctly across the frame. Like with exposure, if you can spend less time trying to rotate your images, the quicker your workflow becomes. The bottom line for this step as a whole is just to be a bit more mindful, and maybe a little less spray-and-pray if possible.

#2. Use the Quick Collection and Other Organizing Tools

Lightroom is not just an editing tool but also a cataloging tool. I don’t get too in-depth with my cataloging, but what I will always do is use the quick collection by pressing B on a photo. Before making edits, or maybe after just a few quick edits to know how your photos will generally look, go through your entire set of photos, picking out the best ones. These should be the preliminary photos you plan to use or send to clients. Don’t spend too much time dwelling on a photo. If you like it, add it to the collection. You can always remove the photo later if you don’t like it or find it redundant. The quick collection also puts all your best photos into one convenient folder for you and will allow you to quickly batch edit and export all of them when you need to. Most importantly, the quick collection tool will help you refrain from having to edit every single photo and just focus on the ones that matter. Usually, I take anywhere from 500–600 photos on a portrait shoot. The first time I go through them, I put around 100 in the quick collection.
I probably remove a few and end up with around 70, but the selection process allows me to spend less time editing unneeded, poorly shot, or redundant photos. There are definitely tools you can also use to put desired photos in their own specific folder per shoot, but I’ve found that the quick collection is the best for my workflow. Use whatever works for you at the end of the day, whether it’s the quick collection or a star-rating system. (Putting all the best photos into a single folder.)

#3. Create and Use Presets

Utilizing presets has many, many advantages. I used to be naive enough to think that every photo deserved its own special love and care when it came to all the settings. This might be important when shooting landscapes or when you come across a very, very special image you took, but for the most part, creating and applying presets is more than worthwhile with hundreds or thousands of photos. Presets also allow shoots to have a similar feel and vibe with each photo. It’s important in the final album that all photos have a sense of commonality to them. Presets allow you to have a consistent style, and a consistent style is the forefront of good personal branding as a photographer. On to the process of applying them: after importing my images, the very first thing I do is make a preset for the shoot. You should be making your own presets, but why you should be doing that is an article for another time. Back to the point, even if it takes a while, creating the preset for a set of photos is worth the time investment, since it will be applied over and over again. Usually, I already have presets made for specific types of photos, like sports, performances, or portraits. I will take a preset that I made previously and tailor it to the new set of photos. I make sure to save the settings as their own separate preset when I’m done making changes, or save them into some “burner” presets I always allow to be changed. Here are a few good rules of thumb. If certain photos all have similar lighting conditions, they should all get the same preset. If, for some reason, lighting changes drastically throughout the shoot, the preset should be modified to fit this change, or a new preset should be made. If I went from shooting an event outdoors to indoors, each location will typically get its own preset. But if I shot the entire time outdoors for a portrait shoot and the sun got a little lower in the sky, it’s worth it to keep the same preset and modify it. Usually, in this scenario, where the sun gradually changes, all I really find myself changing is the white balance. You’ll be surprised just how useful adjusting the white balance can be in getting photos to match. Generally, presets should be applied after putting items into some special category like the quick collection. However, it might also make sense to apply presets before you make specific selections, as a preset can be the difference between a so-so photo and a great one. You might not know how good a photo can be until you’ve seen it edited. If you want to see how each photo looks with a preset, though, you should proceed to the next step: batch editing. (Utilizing pre-made presets I have and adjusting them to a shoot.)

#4. Batch Edit

To make presets the most effective, you should be batch editing. Batch editing allows you to refrain from having to apply your preset to each and every photo. In Lightroom, this can be done by going into the Library section and making sure you’re in grid mode (G).
Highlight (holding down shift and dragging with the mouse) all the photos a preset will be applicable to. On the right-hand side, you can choose a saved preset, pick the one you want, and apply it to the highlighted items. Your computer is going to need time to process heavy presets on a lot of photos, so this might take a bit of waiting. But afterward, all your selected photos are going to be modified to the preset. A few reminders. After batch editing, skim through your modified photos just to make sure everything looks consistent. Remember, I mentioned earlier that presets need modifying every once in a while to ensure that the final album looks consistent. If a string of photos is looking off, apply the re-modified preset with a batch edit once more to that specific grouping of photos. (Applying a tennis-based preset to a group of photos.)

#5. Only Adjust Certain Things at a Time

At this point in the workflow, you’ve already applied presets to your quick collection or selected images, but of course, your photos are not yet perfect. Now is the time you can go through each photo one by one after narrowing down the good ones. With portraits, for example, I need to do a lot of retouching: clearing some face blemishes, whitening teeth, and brightening eyes. Sometimes I also still need to crop and realign photos that aren’t perfect. But don’t make all these adjustments to one photo and then do all the same adjustments to the next. Make only certain adjustments at a time. For example, on my first pass through, I might need to dodge certain parts of a face with my adjustment brush. I will do this to every photo that needs it before doing anything else. I proceed to repeat the process, but maybe this time I only brighten teeth, and then I go through again and maybe only crop and rotate. Lightroom will keep whatever module and settings you used active as you progress through your collection of photos, so it makes sense to keep applying whatever that module is doing. By not having to switch between modules and module settings, you’ll be a lot faster in making the small, minute changes that need to be done to every photo in specific areas. Sometimes I need to soften certain areas of someone’s face. I apply the brush to a photo and then continue to apply the same brush to subsequent photos, since the tool will stay active.

#6. Clear Any Distractions and Listen to Something Passive

Now for general tips that will be helpful for the previous five steps. It stands to reason that if you aren’t looking at your phone every five minutes to see your notifications, you will be more productive and more efficient in getting work done. Clear yourself of distractions while you edit. Find time for yourself when you know that no one is going to bother you, or at least not bother you too much. By allowing yourself to just work, you’ll eventually get into a flow state where your actions just feel fluid and quick. Of course, editing photos in silence can be quite boring. Make sure you listen to something passive like a podcast, someone’s previous live stream, or music (or maybe you like dead silence; that’s fine too). You might not actually remember too much from these things, but they act as nice white noise or maybe a great motivator. I really stress the word passive because, again, you don’t want these things to distract you too much. These should be relatively easy things to listen to, where you don’t have to spend too much mental capacity on understanding what they mean.
Editing your last shoot is not the time to be learning differential calculus, but it may be a good time to listen to an hour-long podcast of someone talking about how they had a life-changing trip to Thailand. Music or long-form content like uploaded live streams and podcasts are great to listen to!

#7. Take a Break!

This is one of the most important things. Even with all these steps, if you’ve shot hundreds or thousands of photos, it can still take hours to get through every photo. Take a break for half an hour. Read a book. Watch a YouTube video. You’re far more productive when your brain is in the right place than when trying to drill through everything with a slight headache. Breaks are incredibly important and can actually boost your productivity even though you’re not working during them. Sometimes I find that I don’t quite know what a photo needs to make it really pop. I can struggle and struggle to try to figure it out, but it doesn’t always work. Taking a step back, taking a break, and looking at it with fresh eyes later is a much more pleasurable experience. Whenever I come back from my break, I can figure out a solution almost instantly. My choir teacher always told me, “You can find what you’re looking for with a clear mind.” Clear your mind when you get tired! Of course, these steps aren’t perfect. This list is certainly not complete. But I guarantee that if you take the ideas from this article, they will be useful in cutting down your editing time and making your workflow much more productive. They might turn a four-hour-long editing session into a two-hour one, which means you have more time to do other things that matter in your life, or to get back out shooting again. The key takeaways from this are really to get things as right in-camera as you can before you edit, automate your process as much as possible, and employ positive work habits like taking breaks!
https://medium.com/photo-paradox/7-steps-for-a-faster-lightroom-workflow-2fe46f2eb256
['Paulo Makalinao']
2020-09-23 20:55:56.552000+00:00
['Work', 'Editing', 'Photography', 'Creativity', 'Art']
iOS Integration Testing
A framework for integration testing iOS by simulating user interaction. Written by Eric Firestone.

We love iOS for the fast and versatile experience it offers. But what we think is really cool is that it brings these attributes to the development process as well. The App Store makes short release cycles easy, allowing developers to address feedback quickly and add new features regularly. This improvement over the traditional boxed-software model makes the agile development model a great choice, except for one thing: agile development requires good automated testing. We looked around and none of the existing iOS testing frameworks fit our needs, so we innovated. Today we’re happy to announce KIF, the “Keep It Functional” framework. KIF allows for realistic iOS integration testing through simulated user interaction.

We developed KIF to meet a few goals: KIF requires minimal setup to run a test suite; KIF lets you develop your tests in the same language as the rest of your code to minimize learning and adaptation layers; KIF can be easily extended to fit your needs; and KIF works in continuous integration (CI) setups. The resulting framework links completely into your app and allows you to run tests either in the simulator or on the device. After configuring your Xcode project, running your KIF suite can be done easily because it has no external dependencies.

To cover the majority of testing needs, KIF comes with a number of factory test steps built in, such as “tap this view,” “turn on this switch,” or “type this text.” These steps traverse the view stack using the built-in accessibility capabilities of iOS. Making your app accessible is easy, and has the added advantage of making it navigable for your users with visual impairments. If your testing requires more complex steps, KIF is ready for you. Because KIF links into your app and is written in Objective-C, you can easily add new steps, such as “simulate a memory warning,” “receive a push notification,” or even “fake the key combination up, up, down, down, left, right… from the plugged-in game controller.” If the user can do it with your app, then KIF can do it, too.

The demo video below shows KIF running a few sample payments with the Square app: https://s3.amazonaws.com/square-production/video/kif-tests.mp4

To give you an idea of what a scenario in KIF looks like in code, here’s an example:

+ (id)scenarioToLogIn;
{
    KIFTestScenario *scenario = [KIFTestScenario scenarioWithDescription:@"Test that a user can successfully log in."];
    [scenario addStep:[KIFTestStep stepToEnterText:@"user@example.com" intoViewWithAccessibilityLabel:@"Login User Name"]];
    [scenario addStep:[KIFTestStep stepToEnterText:@"thisismypassword" intoViewWithAccessibilityLabel:@"Login Password"]];
    [scenario addStep:[KIFTestStep stepToTapViewWithAccessibilityLabel:@"Log In"]];

    // Verify that the login succeeded
    [scenario addStep:[KIFTestStep stepToWaitForTappableViewWithAccessibilityLabel:@"Welcome"]];

    return scenario;
}

Finally, with the help of WaxSim, KIF runs in continuous integration, so you can get constant feedback about the health of your codebase. We’re making KIF available as open source on GitHub, and more information is available in the embedded README. We’ve found KIF really useful, and we hope you do too. Check it out, and let us know what you think.
There is a discussion group available, or use the tag “kif-framework” on Stack Overflow.
https://medium.com/square-corner-blog/ios-integration-testing-8b97e5af2670
['Square Engineering']
2019-04-18 23:54:42.711000+00:00
['Continuous Integration', 'Testing', 'Engineering']
Self-Driving Cars Are Out. Micromobility Is In.
Waymo, a division of Alphabet, has long been a leader in autonomous vehicle technology. Based on the limited data released on the company, its vehicles have driven the most miles in self-driving mode and have the lowest rate of disengagement (moments when humans have to take over).

(Image: Waymo CEO John Krafcik. Source: Waymo)

But Waymo’s CEO, John Krafcik, has admitted that a self-driving car that can drive in any condition, on any road, without ever needing a human to take control—usually called a “level five” autonomous vehicle—will basically never exist. At the Wall Street Journal’s D.Live conference, Krafcik said that “autonomy will always have constraints.” It will take decades for self-driving cars to become common on roads. Even then, they will not be able to drive at certain times of the year or in all weather conditions. In short, sensors on autonomous vehicles don’t work well in snow or rain—and that may never change. Such a statement from someone leading a self-driving vehicle company seems surprising. But given what’s happened throughout 2018, it shouldn’t be. A number of negative stories about self-driving cars permeated the year’s coverage, including the deaths of those using Tesla’s Autopilot technology. The effect of an Uber self-driving car killing a woman in Tempe, Arizona, cannot be overstated. That singular event broke through the largely uncritical mainstream coverage of autonomous vehicles; it showed us how far the technology really had to go before it could be safe. The initial event was bad enough: a self-driving car failed to slow down to avoid hitting a person, and a safety driver was too distracted to notice. But as the National Transportation Safety Board investigated the incident, we learned that the autonomous driving system was unable to determine that the object in front of it was a person at all. When it finally did correctly determine that it had to stop—just 1.3 seconds before impact—it couldn’t, because emergency braking had been disabled, and there was no way to alert the safety driver. Leaked information showed that Uber safety drivers had to intervene in their self-driving vehicles every 13 miles (21 km), compared to every 5,600 miles (9,000 km) on average for Waymo’s vehicles, and the team was putting its test vehicles in unsafe situations to try to hit impossible deadlines. It was a complete mess, and it eventually blew up future plans among ride-sharing apps that depended, in part, on autonomous vehicles to reduce labor costs.

(Chart source: Navigant Research)

Uber had to completely halt its autonomous vehicle testing, and it was already far behind its competitors. It pulled out of Arizona completely, laid off most of its safety drivers, and only reapplied to resume testing in Pittsburgh near the end of 2018—almost eight months after the fatal crash. But between March and November, everything changed. No longer does anyone credibly claim that self-driving cars are the future of transportation, and Uber has even shifted its focus to scooters, e-bikes, and turning its app into the “Amazon for transportation.” At the beginning of 2018, it would have been unimaginable for the CEO of Waymo to publicly acknowledge that self-driving cars will never work in all conditions. Now, it’s a statement of fact that anyone familiar with the industry already knows.
But while the hype about self-driving cars is over, there’s a new vision for urban transportation that’s much more inspiring—and everyone seems to want in on it.
https://medium.com/s/story/self-driving-cars-will-always-be-limited-even-the-industry-leader-admits-it-c5fe5aa01699
['Paris Marx']
2019-01-15 03:06:59.329000+00:00
['Technology', 'Self Driving Cars', 'Cities', 'Future', 'Transportation']
Interview: 3LetterzNuk Talks about Bringing the Heat on ‘I’m Back’
Atlanta-based rapper 3LetterzNuk dropped a new mixtape, I’m Back, not long ago via ZENtertainment/Sony Music. 3LetterzNuk first made an impact in 2015, when he signed with Flo Rida’s IMG imprint. Now with ZENtertainment/Sony Music, he’s definitely back and ready to go, which explains the mixtape’s title — I’m Back. 3LetterzNuk’s track record attests to his talent: over 9 million streams with Aaron Carter on “Fool’s Gold” and almost 2 million streams on “I Wonder,” featuring Marco Foster. 3’s dope fusion of pop, hip-hop, and cool textures of R&B generates tasty, banging sonic concoctions. Pop Off sat down with 3LetterzNuk to delve into his influences, what he’s listening to right now, and how he keeps his sound unique.

What three things can’t you live without?
My family, music, and running water.

What’s your favorite song to belt out in the car or the shower?
The song “Move Your Feet” by Junior Senior.

How did you get started in music? What’s the backstory there?
When I was in high school, my friends invited me to the studio. I found that I was impatient with their creative process, so I got up and went in the booth and started freestyling. They were impressed with my rhythm, cadence, and my voice. When I saw their reactions, it was the first time I felt gifted with entertaining. I’ve always wanted to be in entertainment but mostly as an actor. But when I got the reaction, and it was different than when I acted, danced, or played basketball, I was hooked.

What musicians influenced you the most?
Jay-Z, Drake, and Chris Brown are three of my most prominent influences.

Which artists are you listening to right now?
DaBaby, Lil Baby, Young Thug, Rich Homie Quan, KCamp, and Tory Lanez are some of my top listens of the moment.

What was the inspiration for your new I’m Back mixtape?
Honestly, the youth inspired the sound. I tapped in with a lot of what the younger people were listening to and consuming, which influenced the mixtape.

How do you keep your sound fresh and avoid coming across as derivative?
I am constantly working on my own music, and it evolves into something unique based on my creativity at that time.

What can you share about your songwriting process?
It’s completely organic. Sometimes I have to be riding in the car. Sometimes I have to be in an open space in the studio. Sometimes I have to be alone with my headphones and just sit down and study the beat over and over.

How have you been handling the coronavirus situation?
I just dropped new music for the first time in two years. I’m pretty excited about it and I’ve been working super hard on that. The quarantine has been a blessing in disguise, as it made me concentrate on the music. I wrote the 9 new singles on the mixtape while in quarantine.

Why do you make music?
Because it’s on the inside of me and there’s nothing I love more than spending my time creating.

Follow 3LetterzNuk: Facebook | Twitter | Instagram | Spotify | Website
https://medium.com/pop-off/interview-3letterznuk-talks-about-bringing-the-heat-on-im-back-182e1b0355cb
['Randall Radic']
2020-12-21 14:19:17.832000+00:00
['Hip Hop', '3letterznuk', 'Im Back', 'Interview', 'Music']
Kyle Meagher (Anne With An E) Unveils “Know Better”
Stylish pop-rock with emotional immediacy. Pop artist and actor Kyle Meagher launches the music video for “Know Better,” a song about the beguiling nostalgia of love, expectancy, and the pain of realization. Speaking to the song’s genesis, Kyle shares, “I sent a message to an old girlfriend to see if she wanted to get together. She read the message but she never replied. I realized that she hadn’t changed, and it sparked this crazy lyric of ‘I should know better, that she’s no better’ and that was my inspiration!” Originally from Ottawa, Ontario, Kyle was a bit of a musical prodigy, participating in music camps when he was only three years old, as well as taking piano, guitar, and saxophone lessons, which naturally led to him singing and playing in rock bands in middle school and high school. Kyle skyrocketed to international attention with his continuing role on Anne With An E, CBC/Netflix’s popular series, followed by his debut EP, Beats in a Bagel Shop, which accrued millions of streams on Spotify. Ever creative, Kyle then developed, wrote, and produced the EP as a visual album. The visual album attracted a number of awards, including selection by the Orlando Film Festival and the Global Music Awards. If not for a fusion of luck, the sharp eye of an agent, and what seems tantamount to divine intervention, instead of acting and singing, Kyle might be playing in the NHL. It happened like this: Kyle and his mother went to an audition, along with many other young wannabe actors. The line of participants wound on forever. Reluctant to miss hockey practice, Kyle and his mother were about to take off when an agent noticed them getting ready to bolt and escorted Kyle to the head of the line. The agent’s gift for spotting talent resulted in Kyle surfacing in commercials, catalogs, television, and films. “Know Better” opens on shimmering colors accented by a hefty, snarling guitar riding an alt-rock melody flavored with coruscating pop tangs. A potent bassline and tight percussion infuse the rhythm with infectious flow, while Kyle’s deluxe tenor imbues the lyrics with searing timbres acknowledging his mistake in hoping his former lover had matured. “She’s playin’ a game that I just can’t win / So why am I tryin’ / I’m gettin’ so tired of keeping score / Drunk dial, get no answer / Text you get no reply / But I’ve said I’m done with you before.” Chock-full of surging energy, “Know Better” parades Kyle Meagher’s dazzling talent as a songwriter and vocalist.

Follow Kyle Meagher: Spotify | Website | Instagram | Twitter | YouTube
https://medium.com/pop-off/kyle-meagher-anne-with-an-e-unveils-know-better-ade57dcc6471
['Randall Radic']
2020-07-10 16:10:24.124000+00:00
['Music Video', 'Anne With An E', 'Kyle Meagher', 'Music', 'Pop Rock']
How to Teach Yourself Data Science in 2020
This is close to what we encounter at work as an analyst — we use different techniques that we’ve learnt to extract information from the same database. The following is the entity-relationship diagram of the SQLZoo question ‘Help Desk’. Given this, you’re asked to show the manager and number of calls received for each hour of the day on 2017–08–12. (Try it yourself here! A sketch of a query along these lines appears at the end of this article.) Other resources that I used include Zachary Thomas’ SQL Questions and Leetcode.

2. Data Manipulation with R and Python

To start learning about the programming and the tools needed for data science, one cannot run away from R and/or Python. They are very popular programming languages used for data manipulation, visualization and wrangling. The question of R vs Python is an age-old one that deserves a post of its own. My take? It doesn’t matter whether you pick R or Python — once you master one, you can easily pick up the other. My journey with coding in Python and R started with code-along-with-me sites like Codecademy, Datacamp, Dataquest, SoloLearn and Udemy. These sites provide self-paced classes organized by language or package. Each breaks concepts down into digestible parts and provides the user with starter code to fill in the blanks. These sites typically walk you through a simple demonstration, and you get a chance to practice the concept immediately afterwards through exercises. Some offer project-based exercises afterwards. Today, I will focus on two of my favourites, Datacamp and Dataquest. Please keep in mind — down below you’ll find an affiliate link to the courses. That doesn’t change anything for you, as the price is identical, but I’ll get a small commission if you decide to make a purchase.

DataCamp

DataCamp offers video lectures taught by professionals in the field and fill-in-the-blank exercises. The video lectures are mostly succinct and efficient. One part I love about DataCamp is the up-to-date courses that are organized into career paths in SQL, R and Python. This takes away the pain of planning your curriculum — now you only need to follow your path of interest. Some of the paths include: Data Science in Python/R; Data Analyst in Python/R/SQL; Statistician in R; Machine Learning Scientist in Python/R; Python/R Programmer. Personally, I started my R education with Data Science in R, which provided a rather detailed introduction to the tidyverse in R, a collection of incredibly useful data packages for organizing, manipulating and visualizing data, most notably ggplot2 (for data visualization), dplyr (for data manipulation) and stringr (for string manipulation) — my favorite packages in R. However, I do have one complaint about DataCamp, and that is the poor retention of information after completing its courses. With the fill-in-the-blank format, it is easy to guess what is needed in the blank without really understanding the concept. When I was a student on the platform, I tried completing as many courses as I could in the shortest possible time. I skimmed through the code and filled in the blanks without understanding the bigger picture. If I could restart my learning on DataCamp again, I would take my time digesting and understanding the code as a whole, not just the parts that I was asked to fill in.

DataQuest

Dataquest is very similar to DataCamp. It focuses on using code-along exercises to illuminate programming concepts.
Like DataCamp, it offers a wide variety of courses in R, Python and SQL, though its catalogue is somewhat less extensive than DataCamp’s. However, unlike DataCamp, Dataquest does not offer video lectures. Some of the tracks offered by Dataquest include: Data Analyst in R/Python; Data Science in Python; Data Engineering. DataQuest’s content is generally more difficult than DataCamp’s. There were also fewer fill-in-the-blank exercises. Though it took longer, my knowledge retention on DataQuest was better. Another great feature of DataQuest is the monthly call with a mentor, who will review your resume and provide technical guidance. While I did not personally get in touch with a mentor, in hindsight I would have, since it would definitely have helped me progress much faster.

3. Data Visualization

Data visualization is the key to presenting the insights you draw from your data. After learning the technical skills of creating charts using Python and R, I learnt the principles of data visualization from a book, Storytelling with Data by Cole Knaflic. This book is platform-agnostic. In other words, it does not focus on any particular software but teaches the general principles of data visualization with enlightening examples. Some of the key pointers you can expect to learn from this book are: understand the context; choose an effective visual; eliminate clutter; draw attention where you want it; think like a designer; tell a story. I thought I knew data visualization, until I read this book. After digesting the book, I was able to create a (somewhat) visually pleasing chart that addresses police brutality against Black Americans. One of the main learning points from the book applied here was to draw attention where you want it. This was done by highlighting the African American line with a bright yellow — reminiscent of the BLM color — while ensuring that the rest of the chart remained in the background with duller shades like white and grey. (Chart: data visualization techniques applied to a chart that highlights police brutality.)

Next Steps

In this post, I covered the steps I’ve taken in learning programming from scratch. With these courses, you now have the necessary skills to manipulate data! However, there is still a pretty long way to go. In the next posts, I will cover Part 2 — Mathematics, Probability and Statistics; Part 3 — Computer Science Fundamentals; Part 4 — Machine Learning (read it here). If you have any questions, feel free to connect with me on LinkedIn. All the best, and good luck!

Other Readings

If you enjoyed this blog post, feel free to read my other articles on Machine Learning.

Translation

This article has been translated to Russian thanks to Denis Iurchenko.
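To make the SQLZoo ‘Help Desk’ exercise from the start of this article concrete, here is the promised sketch, wrapped in Python for consistency with this post’s other tooling. The table and column names (Issue, Shift, Taken_by, and the join key) follow my reading of the ER diagram and are assumptions, not the official SQLZoo solution:

import sqlite3

# assumes a local SQLite copy of the Help Desk schema
conn = sqlite3.connect("helpdesk.db")

query = """
SELECT sh.Manager,
       strftime('%H', i.Call_date) AS hour_of_day,
       COUNT(*)                    AS calls_received
FROM Issue AS i
JOIN Shift AS sh
  ON sh.Operator = i.Taken_by                  -- assumed join key
 AND date(sh.Shift_date) = date(i.Call_date)
WHERE date(i.Call_date) = '2017-08-12'
GROUP BY sh.Manager, hour_of_day
ORDER BY hour_of_day;
"""

for manager, hour, calls in conn.execute(query):
    print(manager, hour, calls)

The shape of the answer is the point: filter to the target date, truncate the timestamp to the hour, join each call to the shift that handled it, and GROUP BY manager and hour.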
https://towardsdatascience.com/how-to-teach-yourself-data-science-in-2020-f674ec036965
['Travis Tang', 'Voon Hao']
2020-11-24 16:06:11.271000+00:00
['Sql', 'Data Science', 'Data Visualization', 'Python', 'R']
Welcoming Our First Engineer in Asia — Shahul Hameed
We are excited to announce that Shahul Hameed, our first engineer in Asia, has joined the Origin Protocol family! Shahul is based in India, near Chennai, and has already contributed to every aspect of our platform, from the front-end and back-end to our smart contracts. He is currently working on improving the user experience of our DApp with gasless meta-transactions. Shahul started programming at age 10 and began his career as a freelancer. He comes to Origin most recently from Zoho, where he was a front-end developer. He has also published several of his own applications on the Windows Store and Windows Phone Store. Shahul’s path to joining Origin, or his “Origin story”, is a great example of a little bit of everything that makes Origin special. Our Head of Partnerships, Coleman, reached out to Shahul on Reddit because Shahul had been trying to buy Amazon gift cards for ETH on /r/giftcardexchange. Coleman was conducting research on how and why people trade crypto for gift cards. It turns out that Shahul is a huge Pokémon fan and was looking to buy a Nintendo Switch so he could play Pokémon Let’s Go. Unfortunately, the Nintendo Switch was not yet available in India, so Shahul needed American Amazon gift cards to purchase it. Cryptocurrencies helped him solve this problem. Coleman told Shahul about the Origin Marketplace DApp and Shahul started poking around our GitHub. He picked up some “good first issues” on our project board and immediately began solving problems for us, without pay or any official relationship with us. Shahul’s rate of work increased and our engineering team started to take notice of this new open-source contributor. We reached out to get him more involved and soon hired him as a contractor. Shahul’s code was exceptional and he developed an immediate rapport with our team. Shortly after, we asked Shahul to join the Origin team as a full-time engineer. To our delight, Shahul had been looking to work on an open-source blockchain project for quite a while and accepted our offer! In addition to his passion for coding and Pokémon, Shahul is an excellent Rubik’s cube speedsolver, averaging under 15 seconds. He can also solve a Rubik’s cube blindfolded. Shahul is also an avid blogger and YouTuber. We are lucky to have such an exceptional and multi-talented person on our team! Learn more about Origin:
https://medium.com/originprotocol/welcoming-our-first-engineer-in-asia-shahul-hameed-abada3e49308
['Nick Poulden']
2020-01-17 19:18:51.199000+00:00
['Team', 'Ethereum', 'Blockchain', 'Cryptocurrency', 'Engineering']
Here’s the Real Reason Why You Should Vote Democrat in 2020
Here’s the Real Reason Why You Should Vote Democrat in 2020 It’s deeper than just Trump Photo by Element5 Digital on Unsplash The 2020 election is sure to be one of the closest and most critical elections in history. For one, our country is more divided than ever before. The two-party system has brought us more unrest than we ever could have anticipated. We’re struggling with checks and balances. Even though he ran as a Republican, our President doesn’t even have a majority of the party’s support, and his conservatism strays beyond ordinary party lines. On the other hand, our legislative branch is also split in two — the House of Representatives is majority Democrat, while the Senate swings problematically Republican. But here’s the one we don’t think about: the Supreme Court. The Supreme Court is exactly the reason why we should be voting Democrat in November. Because here’s the thing: the Supreme Court is already leaning conservative. Already in his presidency, Donald Trump has taken advantage of the ability to appoint two more conservative justices to the Supreme Court, which now leans dangerously in the conservative direction. Generally, the whole idea of having nine justices on the court is to have at least one swing vote in order to even out the assessment and interpretation of the law. There’s one more kicker, though: as many of you may know, Justice Ruth Bader Ginsburg is currently 87 years old. She has served on the Court since 1993 as an anchor of its liberal wing. If we lose her, we forfeit any sort of balance on the Supreme Court, as Donald Trump will inevitably appoint another conservative. In fact, he’s already been drafting a list of possible appointees, one of which is Ted Cruz (previously a 2016 presidential candidate). Cruz has previously spoken out against abortion and women’s healthcare specifically, arguing that even in cases of rape or incest, women should be forced to carry a fetus to term. So why does that mean we need to vote Democrat? Well, Ginsburg, especially with her frequent and difficult battles with cancer, is becoming frighteningly close to retirement (which in her case is, essentially, death — dare I say it). She is completely set on serving for as long as she is physically able, so it’s unlikely that anything other than death will cause her to resign from the Court. However, we’re on thin ice. Even in 2016 we were doubtful that Ginsburg would make it through the four years of Trump’s presidency. Now, it’s nearly impossible that she could make it through the next four, which brings us here: Voting for Biden will enable us to restore some of the balance in SCOTUS, and thus, in the rest of the government. Even if we aren’t thrilled about voting for Biden, maybe we don’t view it as enthusiastically voting for him. Maybe we view it as enthusiastically voting to keep Ginsburg. And to appoint a well-rounded, balanced, Democratic justice when she does eventually have to stop serving. Basically, a vote for Biden isn’t just a vote for the Democratic party. It’s also a vote for who will replace Ginsburg. So why should we care about who replaces Ginsburg? Well, for one thing, having a conservative-leaning court does, as I previously mentioned, affect the balance of our government. Without at least one swing vote, we hardly have a fair shot at assessing the law and evaluating it holistically. If the Court leans one way or another, we’re doomed to a Court where the majority always wins, which, in this case, is not the desired outcome.
Additionally, one of the most important principles of our government is the separation of church and state. With a conservative-leaning court, we come frighteningly close to violating that principle. For instance, in hotly debated topics like whether LGBTQ people have the right to marry, conservatism often cites religion as a reason for opposing such rights. The same often goes for abortion and the right to receive contraceptives and/or comprehensive sex education. At the end of the day, there is purely no reason why LGBTQ people should not have the right to marry, besides the fact that the Bible arguably prohibits it. Given the separation of church and state, though, it isn’t ethical of us to use the Bible as any form of justification for the law. From a broader perspective, people’s rights, in general, are at stake. Women’s rights are at stake when they are threatened with the possibility of abortion bans, especially those that do not make exceptions for rape or incest. Rights are at stake for sexually active individuals when they are refused adequate or affordable contraceptive access. The rights and future of young people are at stake when it is deemed unconstitutional to provide comprehensive, preventative sex ed in American schools. The rights of LGBTQ people are at stake when states try to revoke their right to marry, or insist that businesses and corporations have a right to discriminate against them for their sexuality or gender. BIPOC people’s rights are at stake when their right to be hired without discriminatory practices is critiqued and overturned. One of the biggest dividers in politics is people’s perception of governmental function. Does government exist to help protect people, even if individual rights and opinions are sometimes sacrificed? Or does government exist primarily to protect those rights, even if discrimination and oppression ensue? Traditionally, the Democratic party identifies with the former, and it’s hard to bring us all together to one side and agree. The bottom line, though, is that we need balance. Regardless of party, we all deserve a government in which we can feel fairly represented; a government we can trust not to be corrupt, to legitimately assess and interpret the law, and to protect citizens as best it can. With as much corruption as we have already seen, we can’t afford any more. It’s time for us to vote Democrat. Not for the sake of Biden. But for the sake of our rights, our sanity. And for the possibility that we may one day unite again.
https://medium.com/datadriveninvestor/heres-the-real-reason-why-you-should-vote-democrat-in-2020-f28218aef944
['Brooklyn Reece']
2020-09-14 17:36:20.181000+00:00
['Politics', 'Society', 'Elections', 'Trump', 'Government']
The Unwritten Rules Of Software Development
The Unwritten Rules Of Software Development To misquote Churchill: why so many bugs have been created in so many applications by so few Image credit: Christina at wocintechchat.com on Unsplash I have a shameful confession: in one way or another I’ve spent the last 30+ years of my life involved with software development, from complex enterprise-scale applications handling hundreds of transactions per second to smartphone apps developed for multi-million-user market segments. I’ve helped major corporations remediate poor implementations of well-known ERP systems and I’ve consulted for organizations to help them create the basic ground rules essential for reliable, scalable, and robust application development. During all this time I’ve discovered there are some eternal verities when it comes to the wild and wonderful world of software engineering, and with your indulgence I’ll now share a few of these truths with you. Verity Number One: senior executives and users alike think a brief demo is the same as a fully-functioning complex system, and can’t understand why the development team needs more than a week to implement what is obviously trivial and simple to deliver. Now that we’ve all supposedly adopted some flavor of Agile, it’s obvious that all executives have to do is specify the required features, when these will be delivered, and how many hours of development time they will take. Because Agile means Miracle in any language spoken by people who have large annual bonuses and their own reserved parking spaces, so it must be true. Verity Number Two: no software engineer alive will ever believe they should document their code with comments, no matter how complex their code may be. This is because their code is elegant and transparent, and absolutely anyone could glance at it and immediately understand its flow end-to-end, even if between the first statement and the last there are tens of thousands of lines calling all manner of obscure undocumented sub-modules. Verity Number Three: no software engineer alive will ever look at someone else’s code without asking in a loud and annoyed voice why that lazy good-for-nothing developer didn’t comment their code sufficiently to make its intent clear. Verity Number Four: project sponsors and the business folk who will use the system are not unlike small children: they want sweeties and sparkly things but have no interest in how those sweeties and sparkly things are made. Additionally, they aren’t entirely sure what it is that they want, but they’ll know it when they see it. Or, to be more accurate, when they see it they’ll know it’s not what they wanted, even though they asked for it mere weeks ago. Verity Number Five: no one wants to test anything until just before delivery, at which point some reluctant testing will be done and this will reveal so much is defective that delivery will have to be postponed. Even though the project manager will have been shouting about the need for testing since the very beginning and will have been consistently over-ruled by the business sponsor who “doesn’t want to waste time” on testing, this disastrous situation will of course now be entirely the project manager’s fault.
Responsibility without authority is the project manager’s job description, which is why so many project managers either cease to care about their work or, conversely, commit suicide in the corporate broom closet on Thursday evening before the big go-live meeting on Friday morning, when all the senior executives will come in expecting their sweeties and sparkly toys to be ready for them on the table. Verity Number Six: no one learns from previous generations. Regardless of how new and shiny the tools may be, and how clever and sparkly the applications are, there are some basic rules of software development that, when ignored, always result in unnecessary problems. Fortunately for people with a wry sense of humor, these basic rules are automatically forgotten with each new generation of programmers. Even simple things like verifying a transaction get lost in the mists of time — until the consequences of failing to verify transactions become painfully apparent, at which point the idea of two-phase commit is rediscovered like some lost wisdom from the Library of Alexandria. Verity Number Seven: security is for losers. No developer needs to worry about security these days and besides, everyone wants more and more features, and security isn’t a feature, so screw it. This approach works wonderfully until 140 million social security numbers or bank account records suddenly appear for sale on the dark web, at which point it turns out (to everyone’s amazement) that maybe security is sometimes important after all. Verity Number Eight: by the time a person has amassed half a lifetime’s experience and learned from uncountable mistakes both witnessed and personally committed, it’s pretty much time to retire, thus ensuring that all this accumulated knowledge will vanish from the community, leaving whoever’s left to learn everything all over again.
https://allanmlees59.medium.com/the-unwritten-rules-of-software-development-b8b711a9425f
['Allan Milne Lees']
2020-08-10 08:23:32.853000+00:00
['Humor', 'Software Development', 'Startup', 'Technology', 'Life Lessons']
How I Leveraged My Unpaid Online Writing to Land a Paid Gig
About two months ago I saw a paid writing gig on one of those websites people like me frequent. The ones where some random small business offers to pay you the “generous” sum of $20 for 2000 words. Yet if one is willing to be patient, or forced to be because of the sudden unemployment caused by a pandemic, there are good gigs to be found. This is why, after a few days of checking the site religiously with different keywords, I stumbled upon a lifestyle blog in need of content. And they wanted to pay for it. I gathered 4 well-liked pieces (social proof that I could both connect with readers and formulate a sentence) and sent them in with a quick cover letter. I knew I couldn’t be the only one out there seeking the same job, so I won’t lie, I didn’t expect to hear back. To my surprise, they wrote back 3 days later. In the email, they said they were impressed and wanted me to get started with 3 posts a week for a set cost. I was ecstatic. I was also lucky. There are millions of writers right now scouring the internet for those well-paying opportunities, the luxury of a guaranteed paycheck. There are some things to keep in mind if you are trying to transfer from personal blog work to paid work. Write the kinds of articles you want to be paid for No matter what job you apply for, they are going to ask for samples. This means you need to have some on hand. It proves you’re serious about the topic and can research or write in the tone they need for the position. A food blog doesn’t want to see your breakup account, but they will be interested in a personal post about a cake you made for your Grandma’s last birthday. Produce work that shows you know how to write Many businesses are unwilling to pay writers a healthy wage. And it shows. Poorly paid writers often produce poor work because they aren’t writers so much as content creators. They may not know the first thing about how to form a sentence or a compelling story. So serious employers who do want good writers have to sort through the noise to find the diamonds who just get it. That’s why quality will always trump quantity when you are attempting to get a company to hire you. Those that are willing to pay well want to make sure the writer they hire can write. Show them you can with any online writing you do for free on your blog. Don’t sell yourself short One way that companies will try to save money is to claim you “don’t have enough experience” for higher pay. Prove them wrong. Show them the analytics on your stories. I chose my most popular stories for this reason, and so should you. At the end of the day, you are expected to write the same content as a professional, so you should be paid for that service, not a discounted version of it. If they argue, you can tentatively agree, but let them know you are fielding other offers that will pay your asking rate, so if they really want your skills, they need to show it. Financially. Writing isn’t easy, but it’s in demand. Content is needed. If online writers could agree not to work for pennies for companies who demand specific posts, then we might see a turning of the tide. A world in which creatives of all calibers were paid well for their labor, not for perceived value based on popularity. Until then, keep writing on your own blog for free, but keep the end goal in mind. If you want to get paid for your words, you have to write words that people want to pay to read.
https://medium.com/the-partnered-pen/how-i-leveraged-my-unpaid-online-writing-to-land-a-paid-gig-adeced844265
['Amber Radcliffe']
2020-10-20 04:26:23.214000+00:00
['Life Lessons', 'Business', 'Jobs', 'Self', 'Writing']
6 interesting facts about the Yakuza you probably didn’t know
2. The Japanese public recognizes them through their tattoos While tattoos are widespread across the globe, they are frowned upon in Japan due to their association with the Yakuza. This is to the extent that public facilities such as gyms, pools, and onsens (bathhouses) have banned anyone with tattoos in order to keep Yakuza members out. For the Yakuza, tattoos are used to recognize members, demonstrate commitment, and boast about their wealth. They are created using an extremely painful process called irezumi, in which the tattoos are hand-poked. The pain comes not only from the traditional needle, but also from the pigment injected into the wounds, which often causes fevers and liver problems later on. Through this “art” — often lasting several days — Yakuza members send a strong message of commitment to the lifestyle and a permanent rejection of mainstream society. Given the considerable cost of irezumi (hand-poked tattoos), members boasting full-body tattoos signal that they’re successful in their business practices, and therefore contributing to the syndicate’s profitability. Unfortunately for them, in modern society the rise of machine tattooing has undermined both the display of pain tolerance and the flamboyance. The tattoo designs themselves are based on Japanese mythology. Typical features include dragons, samurai, and koi fish, representing wealth, honor, and prosperity, respectively.
https://kjf67.medium.com/6-interesting-facts-about-the-yakuza-you-probably-didnt-know-46cd744397b7
['Kenji Farré']
2020-05-26 08:08:41.643000+00:00
['Tokyo', 'Japan', 'Society', 'Culture', 'Yakuza']
Here’s Something Simple You Can Do With the Photos on Your Computer
I bought my first digital camera in preparation for a cross-country road trip some friends and I took one summer in college. It was about 4 years before smartphones existed, so having a dedicated camera was the only way to dabble in photography. I immediately fell in love with it. I loved the photos and the ability to capture memories, but more than anything I loved the way it changed how I viewed the world. Whenever my camera was in my hands, I’d scour the landscape looking for interesting objects, unique angles, or distinct interplay between light and shadow. I recall lying in the grass to get a closeup of a purple violet or trying to capture the soft glow of a wispy reed backlit by the sun. The photos weren’t as significant as the feeling I got after crawling around on my knees paying attention to the folds of a flower, the arrays of light, and the brilliant combinations of colors. To this day, I feel like my camera is a sort of dowsing rod leading me to magical little things and moments that I’d otherwise not pay attention to. The photo is merely a souvenir of my experience with that magic. After we had kids, photography became a way to concretize sacred moments in time. We wanted to preserve and remember those silly faces they made when they were 9 months old and capture their first steps. I don’t think I’m an obsessive photo-taker. I make a concerted effort to enjoy life’s fleeting moments without the need to take a photo, but there are still occasions that feel too precious not to capture in some manner. Of course, as the years go by these photos accumulate, my hard drive slowly fills, and it becomes more difficult to go back through and appreciate these once-lived moments in time. And despite regularly backing up my computer, I often worry about our memories existing in digital form alone. The massive photo albums of printed 4x6s that were commonplace when I was a kid aren’t really any better. They are cumbersome and just seem to end up in basement bins somewhere, hardly looked at any more than their digital counterparts. We don’t post photos of our kids on social media, so it felt like we needed some sort of creative outlet for all these photos to get some of them off our computer. Then, my wife and I decided we should create something like the annual yearbooks we had in school.
https://medium.com/assemblage/heres-something-simple-you-can-do-with-the-photos-on-your-computer-38617a56e1f
['Lance Baker']
2020-03-09 13:01:19.900000+00:00
['Life Lessons', 'Memories', 'Family', 'Creativity', 'Photography']
One Man Has the Power to End the World
We were saved by legitimate doubt and cold reason; by people doing what was right, and not what was ordered of them. Since Trinity, decades of research have done nothing but increase the destructive power of atomic weapons. While our capacity to inflict damage increased exponentially, our dread of nuclear doom has receded in a similar fashion. We no longer live in constant fear; perhaps we have too much on our plates to think about it; perhaps we’re simply inured to it. We shouldn’t forget, however, that since the 1950s there have been 14 nuclear close calls. Fourteen times in 70 years we have flirted with the apocalypse, once every five years on average. What prevented it from occurring was the good faith, courage, and cool-headedness of a few people like Stanislav Petrov. Indeed, in a mutually-assured-destruction standoff, retaliation must be immediate and conducted without doubts or second thoughts. The whole process is designed to exclude human emotions, because emotions tend to make one not want to kill billions of people. More importantly, the vacuity of a second strike would compel most people not to order it. All a second strike does is guarantee the end of the world; it doesn’t prevent the damage from the first strike. This is a lose-lose scenario built on the notion that “if I die, everybody should die too.” The only thing that has saved the world until now is the prevailing of emotions and common sense in situations of emergency. We were saved by legitimate doubt and cold reason; by people doing what was right, and not what was ordered. A few people have saved us. A few can doom us. One has his finger on the button for 24 more days. And this person is Donald Trump. The press, the experts, the intellectuals, his political opponents, we, citizens of America and the world, have all discussed at great length the changes in US governance that need to happen following Trump’s presidency. A government system based on good faith and “being a gentleman”, and founded on the premise that all politicians are honest and well-meaning, has proven its limitations. The Founding Fathers never envisioned half of Washington would become what the GOP is today: a personality cult whose only aim is to reduce tax on their wealthy donors’ estates and keep power at all costs. The political system of the US needs a critical update. The Founding Fathers never considered the nuclear bomb either. That the President would one day possess a power so immense it could be considered divine cannot have possibly touched their minds. That, through firepower, the President would act as an absolute ruler over the lives of all citizens of the world is something they simply couldn’t conceive. Yet, here we are, subjects of a man’s desires, vassals of our nuclear overlords. For Humanity’s sake, the US must also update its nuclear strategy and rethink the powers given to the President. We all live under the threat of nuclear Armageddon. We all live a few inches away from destruction, a mere step separating us from the precipice. And, for 24 more days, the man who gets to decide whether we take that step and throw ourselves into the abyss is Donald Trump.
https://ncarteron.medium.com/one-man-has-the-power-to-end-the-world-48a80fe994b0
['Nicolas Carteron']
2020-12-27 11:55:55.278000+00:00
['Politics', 'Society', 'Nuclear Weapons', 'Trump', 'USA']
3P — People, Product and Process
For the last few days, I was busy clearing out many of the boxes that I had brought back from my office. It was a walk down memory lane, going over various folders and documents before depositing them in the recycling. One thing I noticed was how long 3P — People, Product, Process — has been part of my work. I found meeting agendas starting in 2001 that included 3P. Managing around these dimensions became an integral and central part of the work, although the concepts got molded over time. I don’t recollect how I got exposed to 3P initially — but I recall that managing to these dimensions was reinforced after I read “Execution” by Larry Bossidy and Ram Charan sometime in 2002–03. In “Execution”, these three dimensions were referred to as “Strategy” — focusing on why/what; “Operations” — focusing on how; and “People” — focusing on who. Coming from a product development organization, “Strategy” seemed to me like something that results in a “Product”, and “Operations” was all about the “Processes” needed to build an efficient organization. So what surfaced in my staff agenda in 2001 still remains an integral part of the agenda today. Also, one thing about opening old boxes: sometimes I think it is better to keep them closed. It saves time and grief.
https://medium.com/aloktyagi/3p-people-product-and-process-42687b4b9b63
['Alok Tyagi']
2017-03-08 21:02:35.817000+00:00
['Development', 'Organization Development', 'Personal', 'Software Development']
Climate change is also a problem of racial justice
Climate change is also a problem of racial justice Tackling the climate crisis means tackling racial injustice — why every climate activist should be backing Black Lives Matter Photo credit: Unsplash. On the surface, climate change and racism may seem like significant, but very separate, issues. But you don’t have to dig very deep to start seeing the connections between the two. This has always been the case, so I’m wary of suggesting that now is a good time to ‘start’ talking about these connections. However, there has certainly been increased attention on and awareness of racism following George Floyd’s tragic death at the hands of police in Minneapolis, and the Black Lives Matter protests which erupted worldwide following his death. And it’s important that we extend that conversation to all corners — including environmental justice and climate activism. At the very heart of it, climate change and racism are caused by the same broken system. A system that centres on exploitation to drive the economy. With climate change, that means exploiting the natural world. Extracting natural resources, the oil and natural gas trapped in the earth’s layers, and using them to generate profit by producing fuel and energy with little care for what this means for the ecosystem or for the future of humanity. It’s the same with animal agriculture and factory farming, exploiting the lives of the living creatures who share this planet with us in the cruellest of ways in order to further our own power — and adding to greenhouse gas emissions whilst we do it, without a care in the world or any intention to stop. With racism, it’s about exploiting people. Africa was seen as a resource for colonial expansion, with the slave trade providing free labour in the form of people who were viewed as expendable. The slave trade may have (officially) come to an end, but powerful people still don’t want to stop exploiting people of colour for their own benefit. And racism perpetuates and protects that system, encouraging us to view people of colour (or non-white, non-wealthy people in general) as lesser citizens and to feel that exploiting them is ok. It’s a system that was never designed to be fair, just, or equitable. It was designed for powerful, rich people to exploit other people and the world’s resources in order to add to their existing wealth and deepen society’s divides for their own benefit. As long as this system exists, we’ll continue to see the exploitation of people and the planet for the material gain of certain groups of people. It’s why we see police violence against black people. It’s why COVID-19 has hit black and working-class communities the hardest. It’s why the direct impacts of climate change will take a higher toll on these communities too, and not on the wealthy minority who lie at the root cause of the problem but can afford to protect themselves. This isn’t just conjecture, as I know many would claim. There’s a growing base of research which shows that people of colour are already the most affected by the impacts of climate change. Particularly pertinent is the impact of air pollution, which studies show is affecting low-income, non-white households the most because they live in predominantly urban areas, in older and less efficient housing, and are already vulnerable to health issues. This air pollution is now impacting pregnant women in these groups, causing premature births, underweight babies, and stillbirths.
Not only is this a result of the system which underpins our society and economy, but it’s adding to that system of inequality, effectively making it more difficult for black, low-income families to reproduce successfully. Another example is the direct impact of extreme weather events, made more prevalent by climate change. Hurricane Katrina hit New Orleans in 2005. As ever, those already vulnerable were hit the hardest, in this case the elderly, black people, and low-income families. They were less able to evacuate in time because many did not own a car. They were most likely to die. They were left homeless with nowhere to go, no second home or family in another state to rely on. They were the faces shown on television as criminals looting local stores or stealing abandoned cars as they fought to survive. They didn’t have the money to rebuild their lives and didn’t have any support from the state to do so, and so their poverty deepened. It’s a dangerous cycle. And this is why all climate activists must also be race activists. “Racial justice is climate justice. That means police reform is climate policy.” - Emily Atkin, climate journalist To truly expel racism from our society and to tackle the climate crisis, we need to see the transition to a fair and clean economy. Models for this exist. Kate Raworth’s ‘doughnut economics’ model, for instance, which has clear boundaries in terms of maintaining the health of the planet and the social foundations which protect all people. So what can we do? Ultimately it all comes down to whether we can effect change at the level of the system. It’s been good to see companies coming out with statements of support for Black Lives Matter during this time of protests, but this doesn’t really do anything in the long term, especially if it’s just a one-off statement with no further action. What we need is for these companies, and individuals, to insist that racist and unequal laws and institutions are left behind. What we need is to change the system to one which is not based on exploitation. And we need climate solutions which mirror this, which are community-led and aim to benefit people as well as the environment. Solutions like community energy, for instance. It can’t just be about rich people installing solar panels on the roof of their house(s) and being able to profit from selling their solar energy. We need renewable energy installations which are developed and owned by the community, with profit coming directly back to that community — a local and fair energy system. On an individual level, you can help by supporting those who are working on this agenda. Use your vote. Write to your local councillor and MP, and encourage others to do so too. Support your local community energy organisations. Support charities and organisations working to change the system, like Climate Justice Alliance, a coalition of organisations addressing racial and economic inequities together with climate change. Donate, volunteer your time, or just raise awareness by sharing on your social media or bringing it up in conversation. It may be large, widespread change that we need to see, but we all have a part to play.
https://medium.com/age-of-awareness/climate-change-is-a-racial-problem-too-c1d732e3dc4f
['Tabitha Whiting']
2020-08-17 14:27:12.798000+00:00
['Social Justice', 'Environment', 'Environmental Issues', 'Climate Change', 'Racism']
How to automate run your python script in your Raspberry Pi
How to automate run your python script in your Raspberry Pi Hello everyone, today we are going to talk about automation tasks. Many of you might own a Python script that helps you do something. I have one. My Python script is a daily COVID-19 bot. It simply pushes a notification message with the daily case numbers to the LINE application and to my Twitter account. Just better to know the situation, isn’t it? Well, I don’t want to run it manually every day, because it would be far from the word “bot” if I did so. I decided to put it on my single-board computer — I own a Raspberry Pi. It also runs 24/7, so it is a good home for the script. I assume all of you have Linux installed on your single-board computer. Linux has a tool for task scheduling called “crontab”. Today, we are going to run our Python script using crontab. Introduction It is better to know about crontab first. We can access crontab by opening the terminal and passing the command below: crontab -e (If you run it for the first time, it might ask you to select an editor. I would recommend Nano or Vim. They are easy to use.) After you have your editor open, it shows you how the command works. It can be summarised as below:
* * * * * command to be executed
- - - - -
| | | | |
| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)
The 1st * is the minute, ranging between 0–59. The 2nd * is the hour, ranging between 0–23. The 3rd * is the day of the month, ranging between 1–31. The 4th * is the month, ranging between 1–12. The 5th * is the weekday, ranging between 0–6 (0 is Sunday and 6 is Saturday). The rest of the line is the command that runs the script. For example, if you want to do something at 10.45 every day, you can write the command as: 45 10 * * * yourcommand For anyone who finds it difficult to get the command’s syntax right, you can use the website below to get the general idea. Or, if you want to work in reverse by picking the date and time and having the command generated for you, you can visit the site below: Python script task schedule Now we all know how crontab works. Let’s integrate it with our Python script. First, I am going to create a simple ‘hello world’ Python script and save it on the desktop. Go to the terminal and cd Desktop . Then, I use Nano to create the Python script with nano pytest.py . In the Nano editor, I code the classic one. print('hello world') Press ‘Ctrl+x’, then press ‘y’ to save and exit the editor. Let’s try to run the code: open the terminal and type python3 pytest.py . The terminal should display something like this: hello world It works fine! Now we can add the Python script to crontab. However, before doing that, we need to know where our Python interpreter is. In Linux, we can use the command which python3 (I am using python3) to find the path of the interpreter. Normally, it is located at the path below: /usr/bin/python3 We are ready to automate the Python script. There is one last thing you should know about crontab: it doesn’t show any output in the terminal. To check whether your scheduled tasks are still working or not, you can create a log file to keep the results.
This can be done by simply using the >> syntax with a log file, as in the example below: >> /home/pi/Desktop/log.txt The command above will create a text file named ‘log’ on your desktop. It will collect any output from your Python script; in this case, it is going to append ‘hello world’ every time the script runs. You can refer to the full command below: * * * * * /usr/bin/python3 /home/pi/Desktop/pytest.py >> /home/pi/Desktop/log.txt Once you have made sure that your crontab is working, you might not need the log.txt anymore. You can keep just * * * * * /usr/bin/python3 /home/pi/Desktop/pytest.py for your task scheduler. That’s all! Simple and useful. See you next time. You can find me on LinkedIn at the link below:
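To tie this back to the COVID-19 bot from the introduction, here is a sketch of what its schedule could look like. The script path and the time are illustrative choices, not the real ones:

# Run the daily COVID-19 bot at 08:00 every morning.
# '2>&1' also sends error output into the same log, so crashes are captured too.
0 8 * * * /usr/bin/python3 /home/pi/covid_bot.py >> /home/pi/covid_bot.log 2>&1

Redirecting errors with 2>&1 is worth the extra characters: because crontab prints nothing to the terminal, the log file is often the only place a failing script will ever tell you what went wrong.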
https://medium.com/analytics-vidhya/how-to-automate-run-your-python-script-in-your-raspberry-pi-b6fe652443db
['Sasiwut Chaiyadecha']
2020-07-13 01:06:04.009000+00:00
['Crontab', 'Linux', 'Raspberry Pi', 'Python', 'Task Scheduler']
Building a Metallica-detecting neural network with TensorFlow: Part 1 - Cutting Code
Neural networks have always fascinated me. Star Trek TNG’s Data had a positronic neural net which allowed Data to progress towards his goal of becoming more human over time. The Terminator’s neural net processor was capable of reacting and learning from the environment to better serve and/or terminate Sarah & John Connor. There is something that is just inherently cool (and terrifying) about a machine which is capable of directing its own learning and becoming more than the sum of its parts. These capabilities are no longer the stuff of sci-fi; neural networks are probably serving some of your needs right now in ways you wouldn’t necessarily expect. As my first stab at entering the world of machine learning, neural networks and artificial intelligence, I decided to document the trials and tribulations of throwing a neural network at a very simple recognition / categorisation problem: is a particular piece of music by Metallica or R.E.M.? A neural network graph generated by TensorBoard What is TensorFlow? TensorFlow is an open-source framework released by Google for the purposes of ‘data-flow programming’ and is primarily used for machine learning tasks. It is, at heart, an extremely powerful maths library driven by a combination of mathematical approaches:
graph theory: the study of relationships between mathematical objects and structures.
linear algebra: the mathematics of linear equations and their solutions.
tensors: multi-dimensional vectors which are analogous to matrices / arrays but are more generalised and usually of higher dimensions.
What is a neural network? A neural network is a digital analogue of the human brain. The human brain consists of billions of cells called neurons which are connected by weighted connections called ‘synapses’. Neurons in the brain pass signals from ‘action potentials’ through to a receiving neuron. If enough inputs from a given set of neurons are received at a neuron’s synapse, then that neuron will also fire. The firing of neurons against synaptic weights is called forward propagation. The process can also run backwards, which is called ‘back propagation’. In this process the weights are recalculated, and the neural network is ‘learning’. A neural network complete with an ‘input layer’, ‘hidden layer’ and ‘output layer’. Installing pre-requisites TensorFlow has GPU support (for NVIDIA GPUs only, currently) and CPU support. There are certain performance optimisations you can take advantage of by letting the GPU do the processing on behalf of TensorFlow: GPUs are perfectly suited to churning out masses of vector calculations every single millisecond. Since my current machine doesn’t have an NVIDIA GPU, I will make do with the CPU distribution. Installing TensorFlow is a simple case of using the pip package manager that usually comes bundled with Python: pip install tensorflow Afterwards, you can start using TensorFlow in Python immediately: import tensorflow as tf Note: of course you need Python and pip installed as pre-requisites too. Getting Data into the Neural Network The first thing we need for any neural network is a set of training data, so we can feed it into the base neural network and train a model. For our specific machine learning problem, we need music samples; some examples of Metallica and some examples of R.E.M.
I’ve chosen the following because to me they are pretty classic examples of both groups: Metallica - Fade To Black.mp3 Metallica - Master Of Puppets.mp3 R.E.M. - Everybody Hurts.mp3 R.E.M. - Losing My Religion.mp3 Our first problem to tackle is the audio format. MP3 is a compressed format, which means most Python libraries (and our neural network) will struggle to work with it. As the files are compressed, they’re also conceptually only giving us a portion of the picture - we need the original waveform (a WAV file) so we have something that Python and TensorFlow can work with. Our next problem is the size of the individual audio files and the resultant size of the array representing the audio that we will feed to the network. For the Python API of TensorFlow, our input training and test datasets need to be a list of numpy arrays. If we passed 120 seconds of audio in 2-channel WAV form to our neural network at a sample rate of 44.1kHz, the resultant numpy array would have dimensionality of 5292000 x 2. This is pretty large, and the array will also contain a lot of noise (e.g. periods of silence and audio that isn’t particularly characteristic of a Metallica song). To tackle both of these problems we chop our music into smaller chunks, and then convert the audio to a lower sample rate after the chunks have been created (a sketch of one way to do both appears at the end of this post). Finally, we need to take the audio we’ve chopped up into chunks and categorise it somehow. For this example I’ve chosen to use the name of the file, some hardcoded categories, and to append each entry into a list inside a container object: the snippet takes an array of categories and generates a one-hot encoding array which represents whether a given sample matches one of the categories. This step is required because categorical data (such as whether a sample matches one label or another) needs to be converted into a form that fits better into a (logistic) regression model. Composing the Neural Network There are other parts of the code required to stitch some of the above together, but for now let’s continue on to creating the actual neural network, which is pretty simple! YAY! For our particular task we’re pulling in tflearn — a higher-level API for TensorFlow, so we don’t have to concern ourselves with too much bootstrapping of our neural network. network = tfl.input_data([None, AUDIO_SHAPE, AUDIO_CHANNELS]) On the first line in the method, we create the first layer of our network — the input layer. Here we define the shape of the numpy array which will be fed into the network. For our purposes, the variable AUDIO_SHAPE is the sample rate (Hz) multiplied by the length of the audio segment (in seconds). For a numpy array of 3000 x 2, you would declare your input layer like this: network = tfl.input_data([None, 3000, 2]) Onto the other layers: network = tfl.lstm(network, 30, dropout=0.8) network = tfl.fully_connected(network, 2, activation='softmax') network = tfl.regression(network, optimizer='adam', learning_rate=learning_rate, loss='categorical_crossentropy') We next define another layer in our neural network, an LSTM cell, which allows our neurons to hold some form of state — LSTM cells can ‘remember’ or forget information every iteration and use it in future iterations. This is a very useful property which we take advantage of in an RNN (recurrent neural network) for processing data in a sequence.
Audio files happen to be data in a ‘sequence’ form, so LSTM cells are well suited to this particular problem. Next we define another layer, stating that we want our network to be ‘fully connected’, so that each neuron in our hidden layer is connected to every other neuron. An alternative is a ‘convolutional’ network, in which only specific neurons make connections to neighbours — but this is for specialised cases and we’re not ready to make that optimisation yet. We also state our activation function, a function applied to the weighted sum of the inputs to the neuron. There are several different types of activation function — softmax is quite commonly used in this problem domain, so I’ll carry on using it for now. In this layer we also define our number of outputs: 2, Metallica or R.E.M. Finally, we define a regression layer which we use to define and minimise a loss function for training. A loss function calculates the difference between the desired output and the actual output of the neural network. The regression layer’s purpose is backpropagation and optimisation — it uses this loss function and performs stochastic gradient descent to optimise the coefficients that exist within the neural network. Putting it all together After some refactoring and tweaks we’re ready to run the model and train it against our data. I define a method called batch_reader to pool our audio data into a container, and then write my main method to train our model. I also define a learning-rate ‘hyper-parameter’ to determine how fast our model ‘learns’. Summing up That’s our progress so far: we’ve created a base model, a means of chopping up and categorising data, and a way to feed it into the neural network for training. Next up I will cover training the model, the bumps in the road I experienced, and the solutions I found. For now, here’s the whole code: https://github.com/AndyMacDroo/metallica-tensorflow Thanks for reading! :)
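Since the embedded gists are not reproduced in this version of the post, here is a minimal sketch of the chunking and downsampling steps described above. It assumes the pydub library (plus ffmpeg for MP3 decoding); the chunk length, function name and file names are illustrative choices rather than the original code:

# Sketch of the chunk-and-downsample step, assuming pydub is installed
# (pip install pydub) and ffmpeg is available for MP3 decoding.
from pydub import AudioSegment

CHUNK_MS = 10 * 1000  # 10-second chunks; an illustrative choice

def chop_and_downsample(mp3_path, out_prefix, frame_rate=8000):
    """Decode an MP3, downsample it, and export fixed-length WAV chunks."""
    audio = AudioSegment.from_mp3(mp3_path).set_frame_rate(frame_rate)
    for i, start in enumerate(range(0, len(audio), CHUNK_MS)):
        chunk = audio[start:start + CHUNK_MS]
        if len(chunk) == CHUNK_MS:  # drop the short remainder at the end
            chunk.export(f"{out_prefix}_{i:03d}.wav", format="wav")

chop_and_downsample("Metallica - Fade To Black.mp3", "metallica_fade_to_black")

Lowering the frame rate shrinks the numpy array each chunk produces, which keeps the network’s input layer to a manageable size.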
https://medium.com/clusterfk/building-a-metallica-detecting-neural-network-with-tensorflow-part-1-cutting-code-a8358092911c
['Andy Macdonald']
2018-12-28 15:45:25.425000+00:00
['Machine Learning', 'Python', 'TensorFlow', 'Neural Networks', 'Artificial Neural Network']
The Art of Deconstruction — How to Reverse Engineer Success
1. Identify Your Goal What do you want to do? It’s very important to identify your goal before you start anything. (I talk all about this in another article.) This will help you hyper-focus, save time, and give you a clear filter to help you evaluate your efforts. Here are a few examples of goals: Take great portrait photos Paint like an Impressionist Remodel my home office Give a presentation 2. Research Find the top performers or best examples on a subject and a particular platform. Who else has done this before? What do you like? What does everyone else like? What metrics are valuable to measure? Once you’ve done a deep dive and can pick up on trends and patterns, select the cream of the crop. Pick the top 3–5 examples and include at least one anomaly — the example that performs as well as the rest, but breaks the common threads that link the others together. Anomalies will challenge the conclusions you might draw from your findings and help you form a better hypothesis moving forward. 3. Deconstruct, Analyze, & Understand If you’ve gone to art school, you might be familiar with the “masterwork copy” assignment. You take a popular piece of art and simply copy it. Stroke by stroke. Color by color. The purpose is to help students develop their skills quickly by understanding how the masters made such great pieces of art. Through this process, they train their eyes to see and their hands to make. Students add new tools to their toolbox as they solve the challenges of their own work. Whenever I seek to understand the success of something, I ask myself a few questions: What are the main components and ingredients that make this up? What specific attributes make it effective? How did the person make it? How would you go about making that? Self-Portrait, September 1889. Musée d’Orsay, Paris via Wikipedia. If we were to apply this line of questioning to a Van Gogh painting, you might analyze it this way: The main components that make this up: the subject, Van Gogh, slightly off-center of the frame. He’s facing us 3/4 and is lit with a single key light. What makes it effective (beautiful): its organized rhythmic brush strokes, subtle muted split-complementary color palette, and high contrast at its focal point: his eyes and expression. How did he make it: using oil paints, Van Gogh created his self-portrait using repeating patterns of brushstrokes that follow the contours of the shapes of the subject. Become a Reverse Engineer Being able to reverse engineer something is one of the most effective ways to understand how great things are made and why they work. This is The Art of Deconstruction, and it can be applied to anything: A dish from a Michelin-starred restaurant can be broken down into raw ingredients, seasoning, and cooking techniques. When studying a successful business, you can research its industry, the need it fills for its consumers, and its perceived brand in the marketplace. With a beautiful website, you can take note of the colors, typography, and hierarchy. You can even open the page source and see every line of code it took to build it. When watching a film, you can break down the story arc, its characters, themes, cinematography, visual effects, and editing style.
For comedy, you can analyze a standup artist’s jokes: their stories, cadence, and method of delivery. 4. Emulate & Apply When you can decode the ingredients that make up something great, you demystify its genius into elements you can understand. From there, you can begin to emulate and adopt these pieces into your own work. To fully understand how something works, try to copy it. Just like the masterwork assignment I mentioned, use each ingredient and follow each step until you’ve recreated one part or all of the work you’re studying. Based on the learning retention pyramid, this method of active learning will help you cement the information for later use. “Isn’t that stealing?” — Every Creative Person At this point, some of you might feel a little apprehensive about this. Watch “Everything is a Remix”, then I’ll meet you back here. 5. Improve Upon Don’t just take from culture; contribute to it. The last step of this process is to apply your newfound knowledge towards the goal you identified in step 1. Take the most relevant elements of the best things you’ve consumed and improve upon them. Combine them together. Make something new, built off of what came before you. Here are a couple of ways to do that:
https://medium.com/thefuturishere/the-art-of-deconstruction-563591bd2bcc
['Matthew Encina']
2019-09-18 02:37:03.082000+00:00
['Art', 'Design', 'Content Creation', 'Design Process', 'YouTube']
6 Tricks to Take Better iPhone Photos
6 Tricks to Take Better iPhone Photos They say the best camera is the one you have with you As the adage goes, the best camera is the one you have with you. While you may not be convinced that the iPhone is capable of greatness, allow me to change your mind. Note: While this post is targeted at iPhone users, many of these features are also available on Android. Here are some sample iPhone photos I’ve taken: Photo by Delaney Jaye Photo by Delaney Jaye In fact, I don’t think I’ve taken a single photo on my professional camera that I ended up using. These cameras are powerful, so here’s how to make the most of them: Tips for iPhone Camera Photos: 1. Live Mode Taking photos in “Live” mode is a little-known way to make sure you get the right photo every time. By taking a photo in this mode, you create a mini video — about 2 seconds of pre/post photo time. Why is this beneficial? You can change the final image into any freeze frame within that video. So if your subject moved or the final result wasn’t quite right, you can select another image. It gives you the effect of having shot photos in rapid succession, without the duplicate files. You can also choose to turn them into boomerangs or long-exposure shots from the edit window. Photo created by the author. 2. Gridlines In your iPhone Settings, click “Camera.” Tick the box for “Grid.” Why is this beneficial? The grid helps you line up your photos and ensure they are symmetrical and straight. It also helps you follow the “Rule of Thirds.” The Rule of Thirds In photography, the rule of thirds says that splitting your image into a 9-square grid and using the intersections of the gridlines to line up your subject will create much more aesthetically pleasing and intriguing images than if your subject were centered. Example of a “Rule of Thirds” Photo by cottonbro from Pexels 3. Panoramic Photos We all know that turning on panoramic mode allows you to take beautiful (albeit difficult to execute) extra-long landscape photos. What many people don’t know is that if you flip your phone horizontally, you can take beautiful (albeit difficult to execute) extra-long vertical photos. Start low and slowly tilt your camera up toward the sky in panoramic mode for some fun shots (like getting a person and a tall building in a photo together). Note: It takes some practice to get it right. 4. Flat Lay Photos If you’re taking a flat lay photo (an aerial image of a scene below you — see below image), your iPhone will help you line it up. Often when holding your phone over a scene, you may unintentionally skew the image by holding the phone at a slight angle. When you hover with a flat iPhone, the camera will reveal a “+” shape in the middle of the screen to help you. Line up the yellow and the white “+” for a perfectly flat photo. Example of a Flat Lay Photo by THE 5TH on Unsplash 5. Adjust Exposure Lighting is the most important factor when it comes to taking a good photo. The best lighting is soft natural light (like an overcast day). To adjust lighting artificially, the iPhone can help. When your phone is open to the camera, tap once on the focal point of your photo. A small box with a sun will appear (orange arrow in the photo below). Slide the sun “up” with your finger to increase exposure, or “down” with your finger to decrease exposure. Photo created by Delaney Jaye. 6. Lock Focus When your phone is open to the camera, tap and hold the focal point of your photo. The camera will lock its focus.
This will allow you to move the camera without losing your focal point and take clearer photos. Closing Thoughts I hope this has shown you what a powerful tool your iPhone can be and has empowered you to give iPhone photography a go. While this post was (obviously) targeted to iPhone users, many of these features are also available on Android. Do you have any tips for iPhone photography?
https://medium.com/content-cafe/6-tricks-to-take-better-iphone-photos-1be2b7d1b498
['Delaney Jaye']
2020-03-18 09:56:27.374000+00:00
['Photography', 'Business', 'Marketing', 'How To', 'Instagram']
4 Things I Learned from Reading the React Hooks API
1. Cleaning up a useEffect Hook

Cleaning up a useEffect hook provides the same functionality as calling componentWillUnmount in a class-based component. This function will be called before a component is removed from the DOM. Some examples of when we may use this are removing event listeners, invalidating timers, canceling network requests, or cleaning up any subscriptions that were created in componentDidMount, or in our case when useEffect was run. To clean up a useEffect hook, we need to return a function. This function will be run when it is time to clean up. Because useEffect runs for every render, the cleanup function will also clean up effects from previous renders before running the effects the next time.

useEffect(() => {
  // do something
  return () => {
    // clean up
  };
}, []);

As an additional note, the useEffect function can take a second argument, which is an empty array as seen above. Whatever values you pass into the array will determine when the hook is run. If you pass an empty array as the second argument, the hook will only run once, upon initial render.

2. When to Use useReducer

useReducer is an alternative to useState. useReducer is usually used when there is complex state logic or when your next state depends on your current state. The logic and setup for useReducer are similar to how Redux works, using actions, reducers, dispatch, and state. Learning more about useReducer led me to think about whether Hooks' useContext and useReducer could replace Redux in React apps. The article below has helped me understand more on this topic. TL;DR: Use useState for basic and simple/small-size applications. Use useState + useReducer + useContext for advanced/medium-size applications. Use useState/useReducer + Redux for complex/large-size applications.

3. What Is Memoization?

I was not sure what memoization was, so coming across the useMemo and useCallback hooks led me to learn more about it. For those of you who are not familiar with it, memoization is a performance optimization technique used to help speed up computer programs. It does this by storing the results of function calls and returning the cached result. Keep in mind, the useMemo and useCallback functions should only be used for expensive calculations. For simple tasks, they can actually be worse for performance and have other side effects. Here is an interesting article I read about it if you would like to learn more.

4. Managing Focus with the useRef Hook

The useRef hook has two main uses: accessing DOM nodes and keeping track of mutable variables. If you are familiar with class-based components, this is similar to createRef. One example of when this comes in handy is managing focus, text selection, or media playback. This can be done with the useRef hook. In the example below, we are creating a reference to the search input and setting the focus on the button click.
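The snippet itself didn't survive in this extract, so here is a minimal sketch consistent with that description (the component and element names are my own illustration, not necessarily the author's):

import React, { useRef } from "react";

// Minimal sketch: a search box whose input gains focus when the button is clicked
function SearchBar() {
  // Holds a reference to the underlying <input> DOM node
  const searchInput = useRef(null);

  const handleClick = () => {
    // .current points to the mounted input element
    searchInput.current.focus();
  };

  return (
    <div>
      <input ref={searchInput} type="text" placeholder="Search..." />
      <button onClick={handleClick}>Focus the search</button>
    </div>
  );
}

export default SearchBar;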
https://medium.com/javascript-in-plain-english/4-things-i-learned-from-reading-the-react-hooks-api-ad0d48374901
['Chad Murobayashi']
2020-11-29 09:12:01.813000+00:00
['React', 'Hooks', 'JavaScript', 'Web Development', 'Programming']
Age doesn’t prevent you from being physically fit
Age doesn't prevent you from being physically fit

Recent research shows elderly and very elderly individuals can successfully build muscle with proper exercise

Photo by Mladen Zivkovic on Canva

Building muscle is for more than preparing for beach season or making a medium-size t-shirt look like a smedium. Building muscle provides a foundation for strength.

Why is strength important?

Strength is the amount of force we can generate at a given moment. There are varying degrees of strength. Recently, "Game of Thrones" star Hafthor Bjornsson — known as "The Mountain" — deadlifted 1,104 pounds to set a world record. But strength does a lot more than demonstrate your ability to lift a barbell weighing as much as a polar bear off the ground. Strength dictates our ability to stand up from a seated position, whether that be from the ground, an office chair, or a low toilet. We can easily take these tasks for granted, but many people struggle with the simple act of standing up without assistance. To improve our strength, we have one of two options.

How do we get stronger?

Photo by LOGAN WEAVER on Unsplash

First, we can improve the efficiency of our nervous system through exercise. Strength represents how effectively we can use the muscle we have. Essentially, through repeated exercise at a challenging intensity, we improve the efficiency of the movement. This is why physical tasks gradually become easier the more you perform them; not within an exercise session — fatigue will ultimately win the battle and force us to stop — but across sessions. The gains from efficiency are limited, however. The Mountain didn't perform a bunch of bodyweight squats to earn his nickname. The amount of muscle we have sets a limit.

The second option to improve strength is by building muscle. A racecar driver can use strategy and skill to outmaneuver someone in a comparable car, but a Toyota Prius will never beat a Ferrari (unless you cheat). At some point, you need a bigger engine. The same holds true for our bodies. Inactivity and age can quickly weaken our bodies. Like a GPA in school, it is much easier to lose muscle than to gain it, but through consistent effort, it can be improved. According to recent research, it can be achieved regardless of age.

Age is just a number

Photo by Michal Bednarek on Canva

As people age, strength becomes a greater concern as muscle tends to diminish. This makes functional activity, such as standing from a chair, more challenging. The common belief is that once someone qualifies for the senior citizen discount, their opportunity to build muscle is gone and they are resigned to a future of walkers and wheelchairs. Turns out, this is not the case.

A recent systematic review — meaning a study that pulled the results from many studies on the same topic — examined the effects of resistance training on muscle size and strength in very elderly individuals (greater than 75 years old). The study found the very elderly can increase their muscle strength and size by participating in resistance training programs. These effects were observed with resistance training interventions that generally included low weekly training volumes and frequencies. For strength, the exercise programs lasted from 8 to 18 weeks with a training frequency of 1 to 3 days per week. For hypertrophy (a fancy term for building muscle), the interventions lasted 10 to 18 weeks, with a training frequency of 2–3 days per week.

When looking exclusively at the oldest subgroup of participants (80+ years of age), there was a significant effect of resistance training on muscle strength as well. To quell any concerns about safety, there were minimal reports of adverse events associated with the training programs.

You don't have to do it on your own

This study helps us understand the potential for improvement following exercise in elderly individuals. Whether someone is exercising for general health, beach season, or as part of a rehabilitation program, age will not prevent them from building muscle. As a physical therapist, I have the opportunity to work with patients across the age spectrum, and all demonstrate remarkable adaptations to exercise. The same holds true for personal trainers and strength coaches. If you have been holding off on exercise believing it is a waste of time, I encourage you to give it a chance. If you are unsure where to begin or have concerns about safety, reach out to a local physical therapist, personal trainer, or strength coach. Age is not a barrier to building muscle and strength.
https://medium.com/live-your-life-on-purpose/age-doesnt-prevent-you-from-being-physically-fit-2375efcaa6ba
['Zachary Walston']
2020-09-19 02:02:44.562000+00:00
['Health', 'Exercise', 'Fitness', 'Personal Development', 'Personal Growth']
Young Mikhail Lomonosov
Quick Intro

Within this already highly-selective echelon of polymaths, there exists an even more exclusive community: those that momentarily embody the soul of their country. Cemented & celebrated through erected statues, printed currencies & building eponyms, they're defining personas typically thrust into a changing landscape. For the United States, it was Benjamin Franklin during its founding; for Russia, it was Mikhail Lomonosov during its enlightenment. Our fifteenth protagonist in this Masters of Many series, Lomonosov led both a cultural & scientific revolution — helping position Russia as a center of 18th-century Enlightenment. A man of legendary mystique, he's well-deserving of his place among this group of polymaths. Preserving our niche focus, we explore his earlier years. In order to uncover his defining habits, choices & experiences, we ask again: what was he like in his twenties?

Note-Worthy Accomplishments

— Influential statesman that established Moscow University & served his country as appointed Secretary of State
— Famed chemist that established the Russian Academy's lab, proved the law of conservation of mass in reactions & opened the first mosaic factory
— Greatest linguistic enlightener of the 18th century, published widely adopted grammar rules & popular poems, plays & historical works
— Accomplished geologist that catalogued over 3,000 minerals, demonstrated the origin of soil & explained the formation of icebergs

20s To 30s (1731–1741)

Mikhail Vasilyevich Lomonosov was born into a freezing existence. Spending his childhood in a remote northern village (Mishaninskaya, later named Lomonosovo in his honor), his personal life mirrored his surroundings. The son of an illiterate peasant fisherman, he nevertheless experienced a worse turmoil in the matriarchy department: his mother died when he was nine. Exacerbating the situation, his first stepmother passed a few short years afterward as well. The proverbial cherry on top, his second stepmother made his home a living hell; a voracious reader from an early age, he'd be callously berated by her for reading. He'd later remark that he:

Was obliged to read & study, when possible, in lonely & desolate places, & to endure cold & hunger

While his endearing father helped him understand commerce, Lomonosov sought a more academic influence & mentor, which brought him to the village deacon, S.N. Sabelnikov. The deacon gifted him two books ("Grammar" by Smotritsky & "Arithmetics" by Magnitsky) that Lomonosov fondly remembered as the gates of his erudition.

The legend of Lomonosov begins in 1730. Once it was clear that his educational curiosity far exceeded the resources of his surroundings, he committed to heading out in search of a more academic ambiance. Set on the sprawling metropolis of Moscow, he courageously made the treacherous, weeks-long ~1,200-kilometer journey of isolated travel by dog sled.

Slavo-Greco-Latin Academy / The Academy — Moscow, Russia

Once settled in Moscow, the twenty-year-old Lomonosov sought out a classical education at none other than the first higher-education establishment in the city: the Slavo-Greco-Latin Academy [hereafter referred to as 'the Academy']. Already too far into his endeavors, he refuses to let bureaucracy deter his admission & therefore applies with false credentials, claiming to be the son of a Kholmogory nobleman. He spent the next two years, twenty-two & twenty-three, perpetually working in a flurry. The average student started much earlier age-wise, so in order to catch up with his generation, he had to compress three years' worth of studies into approximately eighteen months. This need to catch up triggered a competitive motivation. Of note, he was markedly financially desolate:

I lived on three kopecks a day: half a kopeck for bread and half a kopeck for kvas; the rest was for paper, shoes, and other necessities

The following year, 1734, twenty-three-year-old Lomonosov was tapped to attend the sister Kievo-Mogilyanskaya Academy in an exchange program. During this active trip abroad, he publishes his very first artistic composition, a short poem called Verses to a Cup. Additionally, he mentally commits to the pious path with the goal of becoming a priest & joining the traditional religious expedition to Orenburg; curiously, he begins telling his academic colleagues that his father was in fact a priest. Dissatisfied with the STEM offerings at this academy, he returns to the Slavo-Greco-Latin Academy early.

In 1735, his changing, contradictory background stories finally caught up with him & twenty-four-year-old Lomonosov found himself in deep academic trouble — he faced imminent expulsion. Equally demoralizing, the Orenburg expedition rejected him. Thankfully, a last-minute miracle occurred & his academic record was spared: the government Academy of Sciences formally requested three students that had proven excellence in Latin, math & science for a special assignment in the near future. Out of options & grateful for the lifeline, he fully embraces his academic side. The next year, twenty-five-year-old Lomonosov finally graduates from The Academy, completing the study program that usually takes eleven or twelve years in a record five years. Per annual tradition, The Academy sent their top 12 students to St. Petersburg, scholarships in hand. A few months into his arrival, a request came in from St. Petersburg & the Academy of Sciences; they sought three young undergrads to study abroad in Germany to bring back the latest academic advances in chemistry & mining sciences.

St. Petersburg State University

In January of 1737, therefore, the twenty-six-year-old Lomonosov set out to study at Phillips-University Marburg, in central Germany, under the eminent scientist Christian Wolff. He remained under his tutelage for two years, which proved greatly influential both philosophically & as an access point for networking in increasingly-higher scientific circles. Lomonosov boarded with one Catharina Zilch, a brewer's widow; & though he actively "was trying to escape love," he soon found himself infatuated with the modest & beautiful daughter of his hostess, Elizabeth Christine Zilch. This same year, tragically, his alma mater, The Academy, burns down while he's abroad.

Phillips-University Marburg

Finally finished with his academic education & armed with a stellar letter of recommendation from the renowned Wolff, he opted to sharpen his applied education for the next two years. Curious to expand his mineralogy, metallurgy & mining repertoire, twenty-eight-year-old Lomonosov landed at Bergrat Johann Friedrich Henckel's laboratory in Freiberg, Saxony. Fully embracing his inner generalist, he self-studies German literature & publishes two additional works: Ode & Letter Concerning the Rules of Russian Versification. Ending a whirlwind of a year on a positive note, he & Elizabeth welcome their daughter, Catharina-Elizabeth, in late November. Shortly afterward, Lomonosov & his beloved officially tied the knot.

In 1741, the year he turned thirty, Lomonosov finally headed home. Royally armed with cutting-edge knowledge & renowned referrals, he was warmly welcomed back by Russia, which appointed him Adjunct of the Russian Academy of Science in the physics department. Privately, he neared completion of his first major scientific publication: First Principles of Metallurgy or Mining.

An Original Lomonosov Mosaic

Quirks, Rumors & Controversies

It's an unhealthy & counterproductive bias to assign infallibility to our heroes, regardless of generational stature. Suspending subjective labels & judgement is necessary for a healthy, comprehensive analysis; therefore, here, we'll actively look behind the veil & address the quirks, rumors & controversies behind Mikhail Lomonosov.

First, it's worth surfacing his distinctive irascibility. Often isolated & exasperated, there are multiple recorded events of an irritated Lomonosov aggressively disparaging colleagues. Most famously, he was accused, tried & placed under house arrest in 1743 after verbally lashing out at multiple people in the Academy of Science; he was released after eight months, only after apologizing to everyone involved. His emotionally trying & physically callous childhood reasonably fueled his need to prove himself for the remainder of his life; it logically follows that he perhaps never lost this ferocity & at times responded with overwhelming & disproportionate anger. Explosive behavior from intense individuals is unpleasant, but not entirely uncommon — an understandable flaw.

Perhaps more worrisome as a lasting potential blemish to his reputation, however, is his pattern of outright submitting falsehoods. It's particularly unfortunate because it re-casts one of his most courageous moments as one of foreshadowing: his big lie that he's of nobility to attain admission at The Academy. Incontrovertibly, an intrepid twenty-year-old crossing the frozen tundra in search of better education & submitting a necessary lie is admirable. His follow-up lie, claiming that his father was a priest, & nearly getting himself expelled? That highlights an alarming lack of judgement & hints at a sense of arrogance.

In Closing

Who Was Mikhail Lomonosov In His 20s? A brilliant, curious, generalist student of life with a unique & courageous sense of perseverance.

Was He Accomplished In His 20s? Outside of his academic record, not entirely. Akin to Thomas Jefferson or Jagadish Chandra Bose, his formative period was marked by a decade of prestigious schooling, hands-on apprenticeships & elite networking. While he had no deep or revolutionary discoveries during this decade, his remarkable range as a polymath was inarguably visible. A traditional polymath that genuinely straddled the line between artist & scientist, he'd shortly lead a cultural & scientific movement, that of the Russian Enlightenment. More than worthy of placement among Da Vinci, Franklin & co., Lomonosov would rightfully come to personify the ethos of social mobility through heroic perseverance & immeasurable creative intelligence.
https://medium.com/young-polymaths/young-mikhail-lomonosov-2f4ba50b8c79
['Jesus Najera']
2020-07-18 21:34:36.351000+00:00
['Life Lessons', 'Russia', 'History', 'Science', 'Art']
Three Different Things: October 29, 2018
https://medium.com/early-hours/three-different-things-october-29-2018-64f8c0ebdcbb
["Sean O'Brien"]
2018-10-29 12:25:48.033000+00:00
['IBM', 'Addiction', 'Linux', 'Creativity', 'Digital Transformation']
Debt and Financial Crisis: Are We Near the End of the Long-Term Debt Cycle?
Almost 10 years have passed since the big crash of 2008, the worst economic disaster since the Great Depression. After several bailouts and significant measures to stimulate growth, it seems that the economy recovered in many ways and we are back in new times of prosperity.

Is That the Case, or Is It Only One Part of the Story?

There is an ideal concept in economics called equilibrium, a condition in which economic forces are balanced: supply, demand and prices are stable, and economic variables remain unchanged without external influences. The fact is that there is no such thing in economics, and history speaks clearly about that. Since the beginning of capitalism, we have been able to see a sequence of ups and downs, periods of growth followed by times of recession. The closest thing to equilibrium that we can think of today is a steady growth of the economy with a low and stable inflation rate; that is what central banks are pursuing. If you stop for a moment and try to see things from a long-term investing perspective, that is, a "cyclical" perspective, you can easily see that each capitalist economy shows periods (roughly 5–7 years) of economic growth followed by times of recession.

Recessions Are the Rule in the Economy

Today, after 8 years of economic growth, we are going to approach times in which the economy consolidates and growth gradually slows down. What are the sources of the last economic expansion? Is the economy really doing that well? Are the fundamentals better than in the past, or is it just a temporary situation?

Debt and economic growth

When talking about the sources of economic growth, there is a very wide range of variables that could be considered. Factors like capital, technologies, labor, investments, interest rates and their relationships all play a crucial role in the economy. For the purpose of this article, I'd like to focus your attention on these three drivers of economic growth:

Productivity
Short-term debt cycle
Long-term debt cycle

The chart below offers an intuitive representation:

Productivity, Short-Term Debt Cycle, Long-Term Debt Cycle

Productivity is the factor that ultimately drives long-term economic growth; it is represented here as a straight line that grows over time. The larger wave represents the long-term debt cycle, and the smaller waves reflect the short-term debt cycle. Credit plays a huge role in driving economic growth, with serious implications for the economy and our society. Over short periods of time (5–7 years), the increasing amount of credit makes it possible to drive consumption levels higher than income levels, fostering the expansion of the economy until it reaches its maximum level. The credit expansion generates a huge amount of debt in the system, which is the result of buying on credit and increased business investment. At a certain point the cycle inverts: lower levels of credit lead to lower consumption, and the economy slows down. There is nothing surprising about that; this process is part of how the economy works and will continue to occur. What is more surprising is our incredible ability to forget that fact during economic good times. Credit is a really important factor for the economy to grow and prosper; it is a crucial variable of the current system. To put it extremely simply:

Higher credit → Higher consumption → The economy will do better

We can see that today; it is exactly what happened after the 2008 crash: an unprecedented monetary expansion that fuelled what is now considered healthy economic growth (according to the news).

Currency Creation by Central Banks — Source: FRED

Monetary policy played a big role in supporting the recovery after the crash, and that was possible because of the role of credit and debt in the system. Surely monetary policy has a real effect in driving short-term movements of the economy, but this system only works until the economy can't accept higher levels of debt. While credit is a powerful tool that helps the economy to grow, at some point people and businesses will reach their maximum. Higher interest rates will increase debt payments and default rates, the aggregate demand for products and services will drop, and the economy will slow down. Given how the system works, a recession is inevitable; the question is only when.

Where Are We Today?

Short-Term Debt Cycle

Consumer credit represents the debt that a person incurs when purchasing a good or service using credit cards, lines of credit or some loan. Take a look at this chart, showing the total amount of consumer credit in the US over recent years:

Today, consumer credit is up more than 45% in comparison to 2008, before the market crash and the following recession.

How Long and How Much Do You Think Consumer Credit Levels Can Go Up?

If you consider this from a long-term perspective, you clearly see that this trend is unsustainable. I am not making predictions about when things may happen, but when the markets start to price in the risk associated with the probability of a recession, it is really likely that we are going to see substantial declines in financial markets and stock valuations, and this could be one of the triggers for a future recession.

Long-Term Debt Cycle

The fluctuations of the overall debt in the economy over the years can be represented by the long-term debt cycle. These cycles are much longer and typically last around 50 years; they begin at low levels of debt in the economy and end at very high levels of debt. Suppose that you were born in 1960: except for some small recessions along the way and two big market crashes, you would have seen only economic growth and prosperity, a huge growth in overall production. You can get the idea by looking at this chart of the GDP over the years:

When you live in those circumstances, you don't seriously consider a dramatically different scenario. After a short period of crisis, you reasonably expect a fresh wave of economic growth. The point is that:

There Are Limits to Economic Growth Financed by a Combination of Debt and Monetary Expansion!

When these limits are reached, the upward phase of the long-term debt cycle comes to its end, and the overall amount of debt in the system comes back to lower levels in a process called deleveraging. Deleveraging means an overall reduction of debt levels by all the economic operators (banks, consumers, businesses…), a scenario that has a huge negative impact on economic growth. The question is: how much of this economic growth measured by GDP has solid fundamentals? Here is the chart of the total amount of consumer credit in the US over a longer period of time:

As you can see when you start to consider a historical perspective, the level of consumer credit has increased alarmingly; it is roughly 10 times higher than it was 45 years ago. One of the main drivers (if not THE driver) of the huge economic growth of the last 20 years is high consumer credit, made possible by low interest rates. And that trend shows a strong acceleration following the two major crises of 2000 and 2008. Stop here and try to think:

What Happens When This Trend Reverses?

The good scenario is the one that sees a normal recession where stocks drop and bad businesses go bankrupt, but there is enough room to further stimulate growth, probably with another monetary expansion. In this soft-landing scenario, long-term investors are presented with really good buying opportunities for good businesses that are temporarily undervalued by the market. The bad scenario is the one that is going to wipe out the huge pile of debt accumulated in the last 45 years, correcting the excesses in the economy. If the long-term debt cycle is over, we may well see 10–20 years of a great-depression scenario. In this situation everyone has reached their debt ceiling, and there is no room to further stimulate the economy through easy money. Higher interest rates coupled with the high level of debt make the debt unsustainable, putting a huge burden on the government and the people: default rates increase, confidence in the economy goes away, uncertainty rises, aggregate demand drops, and the cycle strengthens itself, paving the way for a big recession. Nobody expects a great-depression scenario, but if you look at the overall level of debt in the system held by governments, consumers and businesses, we meet the requirements for a full-scale economic collapse.

Conclusions

We tend to focus on short-term news, like the Fed's announcements on interest rates, the number of new jobs created, international trade, the level of GDP, etc., forgetting to consider a long-term view based on how the economy works and how people behave. The massive monetary expansion that came after the 2007–2008 crash fostered an enormous assumption of debt by consumers and especially governments (deficits) that led to economic growth. This increased total amount of debt in the system is putting a burden on the entire economy: it makes the system more fragile, and the debt has to be repaid in the future (IF and HOW it will be paid back deserves a dedicated article). It is exactly what the old-fashioned theory of the economic cycle says: we are now in times of economic expansion stimulated by monetary policy.

What Comes After Huge Expansion Times?

Every time there is a scenario that could be a bubble, we say that "this time is different," providing all sorts of explanations to justify why history no longer applies because of changes in technology, the economy and society. It has been said so many times over the course of history, and it has almost never been the case. Maybe this time really could be different; indeed, it could be much worse than ever before.
https://medium.com/swlh/debt-and-financial-crisis-are-we-near-the-end-of-the-long-term-debt-cycle-615d2ec2976a
['Duino S.']
2019-10-29 16:53:12.253000+00:00
['Investing', 'Society', 'Financial Crisis', 'Debt Crisis', 'Economics']
4 Advanced Tricks With Python Functions You Might Not Know
2. Using * and ** for Function Argument Unpacking

Some functions require a long list of arguments. Although this should be avoided altogether (e.g. by using data classes), it's not always up to you. In such cases, the second-best option is to create a dictionary with all the named arguments and pass that to the function instead. It will generally make your code more readable. You can unpack a dictionary for use with named keywords by using the ** prefix:
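The article's code sample did not survive extraction; here is a minimal sketch of the idea, with an invented example function:

def describe_pet(name, species, age):
    # An ordinary function with several named parameters
    print(f"{name} is a {age}-year-old {species}.")

# Collect the named arguments in a dictionary...
kwargs = {"name": "Milo", "species": "cat", "age": 4}

# ...and unpack it into keyword arguments with the ** prefix
describe_pet(**kwargs)

# The single * prefix does the same for positional arguments
args = ("Luna", "dog", 7)
describe_pet(*args)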
https://medium.com/better-programming/4-advanced-tricks-with-python-functions-you-might-not-know-d1214d751741
['Erik Van Baaren']
2020-11-21 08:18:49.767000+00:00
['Programming', 'Data Science', 'Python3', 'Python', 'Software Development']
API Gateway and Cognito Auth Without v4 Signing
Why Would This Situation Occur?

First, why would you have this situation? This is fairly niche, but the particular case I'm solving is an API that is composed of an API Gateway endpoint directly injecting the data into DynamoDB — no Lambdas. That said, this can apply to any API Gateway endpoint where you want authentication. This API is authenticated, to secure it in general, as well as to identify which user the data belongs to. Note that I'm also assuming the use of Cognito as your user system. If you use something else, then you may need to go the Lambda authorizer route (which I didn't want to do, per the above). I'll have a future article on doing these API Gateway to DynamoDB APIs.

For most situations, you control both the front end (the code making the API call) and the backend (providing the API). However, in my case I wanted to use a particular HTTP library that handles making the API calls for you, because it provides some nice functionality that I didn't want to implement. Specifically, this is the Background Geolocation library from Transistor Software. This is a great library that handles monitoring geolocation and sending location points in batches to a backend. It has other nice features, such as handling periods when you have no network connectivity (it queues/stores locations until it can send them), and it allows various configuration. Further, it's available for native iOS and Android, Flutter, React Native, and Cordova mobile platforms. This is not likely a common need, but this article may also provide you ideas on different ways you can do auth in general or for similar problems.

When I initially started, I handled location tracking myself, and thus was able to use the AWS v4 signing process. There are decent AWS libraries for nearly any platform, and thus it was relatively simple to create signed URLs for my API. But then I switched to using the Transistor Background Geolocation library, for which I can't control the full URL. It does allow adding headers, and you can of course set the base URL, but you don't control the entire URL, and thus can't v4-sign it. First I'll cover the specific solution for this, but after that I'll cover one alternative as well. Also, as said, another potential route is a Lambda authorizer, which would work, but I felt it meant another Lambda to maintain (and pay for) and gave up the "free" authentication mechanism you can use via API Gateway.

JWT Tokens and the Authorization Header

Tl;dr: One can authenticate to an API Gateway endpoint using the Authorization header and a JWT token.

This sounds easy, right? Well, the process is very easy: you take the JWT token for your authenticated user and pass it in the Authorization header with your API call, and API Gateway uses that to authenticate. Nice! However, those JWT tokens expire (in 1 hour), so you can't just auth your user and be done; you need to refresh that token. This means you are now writing code to monitor and maintain that token. Because the token can change when you refresh, it also means that depending on how your HTTP library/code works, you will need to set the updated token into the configuration every time it's refreshed. Furthermore, you need to actually get that JWT in the first place. This is fairly easy, but I'd claim the AWS docs are not very clear on which token this is. They usually talk about the "identity token," but they have both an ID Token and an Access Token. Those docs indicate either can be used (they have different data in the JWT for each, so it depends on your needs). I wound up using the ID Token.
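To make the shape of this flow concrete, here is a minimal Python sketch (not from the original article); the client ID, credentials, and endpoint URL are placeholders, and it assumes a Cognito app client with the USER_PASSWORD_AUTH flow enabled:

import boto3
import requests

# Authenticate against Cognito (client ID and credentials are placeholders)
cognito = boto3.client("cognito-idp", region_name="us-east-1")
resp = cognito.initiate_auth(
    ClientId="YOUR_APP_CLIENT_ID",
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "user@example.com", "PASSWORD": "correct-horse"},
)

# The ID token is the JWT that the API Gateway Cognito authorizer validates
id_token = resp["AuthenticationResult"]["IdToken"]

# Pass it in the Authorization header -- no v4 signing involved
requests.post(
    "https://your-api-id.execute-api.us-east-1.amazonaws.com/prod/locations",
    headers={"Authorization": id_token},
    json={"lat": 45.52, "lon": -122.68},
)

Remember that the token expires after an hour, so real client code needs the refresh loop described above.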
At least in the Flutter AWS library, you need to authenticate your user, get a session, then get the ID token from that, and finally a JWT from that. To achieve this in my application, I set up a timer that is active any time a user is logged in, and that timer triggers periodically to refresh the token. The refresh is essentially the same process, where you get a session, the ID token, and then the JWT. That's only a couple lines of code — but there wasn't much documentation around this process in general, so hopefully this helps someone :). Every time I ensure the freshness of the token (it may or may not have expired), I update the Background Geolocation configuration with the [new] token. Further, if the user logs out, I need to remove that token from the config, and of course stop sending location events to my API.

Finally, how do you get the actual Cognito user ID into your data when using this, given API Gateway is doing the authentication from a JWT? This is provided in the data API Gateway makes available. In my case, because it's going directly to DynamoDB, I'm using a VTL template to transform the data into a suitable form for DynamoDB. As part of this, it includes the user, which I can obtain in the VTL via $context.authorizer.claims.sub.

Another Approach

Another mechanism to consider, depending on your particular needs, is to not have your API do specific user authentication. Instead, you could leverage API Keys with API Gateway to authorize the API call in general. The user ID would be provided in the data sent by the client. This technique is the likely approach when your API is simply collecting data, but you don't control the users (e.g. your client/customer has the users, but they are using your API for whatever it provides). This approach is much simpler, as you would just configure the HTTP library with a header for the API key (a sketch follows below). I use this with some other APIs I have, and it works great. It is really nice in terms of getting rate limiting for free, as provided by AWS usage plans — all of which is simply configured in my Serverless Framework (or CloudFormation, etc.) config. No code required. Your client application still needs to be authenticating the user (and obtaining a user ID if that is different than what they use to log in — i.e. the Cognito user ID vs. their email or username), but you obviously have to do that in all these cases.

What's the downside? You are placing trust in the client that they are sending the proper user ID with the API call. This is likely fine, in that you've issued an API key to the client, who is presumably trusted, and presumably they wouldn't have any reason to send incorrect or bad user IDs. Of course, if this is a public API with self-serve signup, someone could simply sign up and start spamming your API with bogus data. Or, if the API key leaked or an employee of the client put in nefarious code, or whatever, there is an opening. You have a small protection in that API Keys can be assigned to rate limits/plans, but that doesn't really protect you from anything but high-volume attacks (it doesn't protect against sending bogus data points for a user, which could have serious ramifications, etc.). You may want to combine this with other components that will help reduce the likelihood someone could maliciously use the API if they obtained (either legitimately or not) an API key. And that is further complicated if your API is public and fully documented.
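A minimal sketch of the API-key variant, again with placeholder values; API Gateway expects the key in the x-api-key header:

import requests

# The API key authorizes the call in general; the user ID travels in the payload
requests.post(
    "https://your-api-id.execute-api.us-east-1.amazonaws.com/prod/locations",
    headers={"x-api-key": "YOUR_API_KEY"},
    json={"user_id": "cognito-sub-of-the-user", "lat": 45.52, "lon": -122.68},
)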
Security is Hard

API security is a challenge. There are so many factors involved, as well as the balance of not making the API overly difficult for your clients to use. AWS' v4 signing works great if/when you can use it, and there are similar techniques employed by many public APIs that make them very secure. But there are also these times when we don't have that ability and we have to use alternate methods. Ensuring those remain as secure as possible is tough! This particular article covered a very narrow and specific use case. Using your own auth, or Auth0, or other such authentication systems will likely have their own issues. I'd love to hear other ideas, downsides, and problems with these approaches.
https://medium.com/swlh/api-gateway-and-cognito-auth-without-v4-signing-180320bb2a61
['Chris Bailey']
2020-10-26 22:27:35.190000+00:00
['Cognito', 'Api Gateway', 'Serverless', 'AWS', 'Authentication']
dataviz.cafe
dataviz.cafe

Find the right tool to visualize your data

This article was co-authored by Andrea B. and George S.

Dataviz.cafe is a public resource curated by IQT Labs for anyone interested in open-source software for data visualization. With over 700 software packages — summarized and tagged by data type, programming language, and other keywords — dataviz.cafe is designed to help people find free visualization tools for a wide variety of use-cases.

Above: the dataviz.cafe interface

In recent years, the open-source community has developed hundreds of highly performant visualization authoring tools. As a result, the quality and variety of data visualization tools in circulation today is staggering. Even as the paid software segment enjoys continued growth and accelerated commercial consolidation, free and open-source software (FOSS) is tackling important visualization problems. These range from general-purpose charting, mapmaking, and dashboarding to more specialized use-cases like displaying brain structures, tracking satellite trajectories, and visualizing load patterns on electrical grids.

While visually captivating and technologically exciting, the abundance of tool options and the rapid pace of change in the field have a downside: information overload. As a consequence, it can be challenging to keep track of capabilities and to make thoughtful, informed decisions about what visualization software to use. Enter dataviz.cafe.

Built with data scientists, visualization designers, web developers, and creative coders in mind, dataviz.cafe is a curated catalog of over 700 open-source visualization tools and software packages that can support a multiplicity of individual needs and use-cases. In assembling dataviz.cafe, we sought to reflect the diverse array of visualization programming languages in use today, from JavaScript to Python (the two most well-represented languages in the dataviz.cafe tool catalog), to R-based visualization tools, to iOS, Ruby, PHP, and C++. To be as inclusive as possible, we also included low-code DIY visualization tools, like Chart Tool, which cater to less technically-oriented newcomers working in a web browser with non-sensitive data.

The dataviz.cafe website organizes tools and software packages into five categories, based on the type of data they visualize: geospatial, network, quantitative/numerical, text, and miscellaneous. Tools that span multiple categories are cross-listed under all appropriate data types. Each entry has a summary and tags by data type, programming language, and topical keywords. Users who visit dataviz.cafe can explore the entire collection, use the data type buttons to filter results, or use the search bar to find tools for a specific programming language ("JavaScript," "Python," "R-project," etc.), chart type (e.g. "Sankey" or "treemap"), or keyword (e.g. "bio" or "machine learning"). Given the popularity of D3.js visualizations, when users type "D3" into the keyword search box, D3.js will appear, along with D3-derived and D3-compatible libraries and modules. (The same is true for jQuery-, Angular-, and React-based visualizations in JavaScript, as well as for PyTorch, Keras, and TensorFlow, which underpin several Python-based visualization tools included in the collection.)
https://medium.com/high-stakes-design/dataviz-cafe-initial-release-f7f6d6c52776
['George S.']
2020-08-21 21:14:54.464000+00:00
['Open Source', 'Data Visualization', 'Open Data', 'Creative Coding', 'Design']
A Guide to Classification Algorithms
The Most Popular Classification Algorithms

Image source: Author

Scikit-learn is one of the top ML libraries for Python, so if you want to build your own model, check it out. It provides access to widely-used classifiers.

Logistic regression

Logistic regression is used for binary classification. This algorithm employs a logistic function to model the probability of an outcome happening. It is most useful when you want to understand how several independent variables affect a single outcome variable.

Example question: Will the precipitation levels and the soil composition lead to a tomato's prosperity or its untimely death?

Logistic regression has limitations: all predictors should be independent, and there should be no missing values. This algorithm will fail when there is no linear separation of values.

Naive Bayes

The Naive Bayes algorithm is based on Bayes' theorem. Using Bayes' theorem, it is possible to tell how the occurrence of an event impacts the probability of another event. You can apply this algorithm to binary and multiclass classification and classify data based on historical results.

Example task: I need to separate rotten tomatoes from the fresh ones based on their look.

The advantage of Naive Bayes is that these algorithms are fast to build: they do not require an extensive training set and are also fast compared to other methods. However, since the performance of Bayesian algorithms depends on the accuracy of their strong assumptions, the results can potentially turn out very poor.

k-nearest neighbors

kNN stands for "k-nearest neighbors" and is one of the simplest classification algorithms. The algorithm assigns an object to the class that most of its nearest neighbors in the multidimensional feature space belong to. The number k is the number of neighboring objects in the feature space that are compared with the classified object.

Example: I want to predict the species of a tomato from the species of the tomatoes most similar to it.

To classify an input using k-nearest neighbors, you need to perform a set of actions:

Calculate the distance to each of the objects in the training sample.
Select the k objects of the training sample with the minimal distance.
Assign the class that occurs most frequently among those k nearest neighbors.

Decision tree

Decision trees are probably the most intuitive way to visualize a decision-making process. To predict a class label for an input, we start from the root of the tree. You need to divide the possibility space into smaller subsets based on a decision rule that you have for each node. Here is an example:

Image source: Author

You keep breaking up the possibility space until you reach the bottom of the tree. Every decision node has two or more branches. The leaves in the model above contain the decision about whether a person is or isn't fit.

Example: You have a basket of different tomatoes and want to choose the correct one to enhance your dish.

Types of decision trees

There are two types of trees, based on the nature of the target variable:

Categorical variable decision tree
Continuous variable decision tree

Therefore, decision trees work quite well with both numerical and categorical data. Another plus of using decision trees is that they require little data preparation. However, decision trees can become too complicated, which leads to overfitting. A significant disadvantage of these algorithms is that small variations in the training data make them unstable and lead to entirely new trees.
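To ground the ideas so far, here is a minimal scikit-learn sketch (not from the original article) that fits a decision tree on invented toy data:

from sklearn.tree import DecisionTreeClassifier

# Toy features: [weight_g, redness]; labels: ripe vs. unripe tomatoes
X = [[120, 0.9], [95, 0.8], [60, 0.2], [70, 0.3], [110, 0.7], [65, 0.4]]
y = ["ripe", "ripe", "unripe", "unripe", "ripe", "unripe"]

# Limiting the depth keeps the tree small, one simple guard against overfitting
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

print(clf.predict([[100, 0.85]]))  # -> ['ripe']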
Random forest

Random forest classifiers use several different decision trees on various sub-samples of the dataset. The average result is taken as the model's prediction, which improves the predictive accuracy of the model in general and combats overfitting. Consequently, random forests can be used to solve complex machine learning problems without compromising the accuracy of the results. Nonetheless, they demand more time to form a prediction and are more challenging to implement. Read more about how random forests work in the Towards Data Science blog.

Support vector machine

Support vector machines use a hyperplane in an N-dimensional space to classify the data points. N here is the number of features. It can be basically any number, but the bigger it is, the harder it becomes to build a model. One can imagine the hyperplane as a line (for a two-dimensional space). Once you pass three-dimensional space, it becomes hard for us to visualize the model. Data points that fall on different sides of the hyperplane are attributed to different classes.

Example: An automatic system that sorts tomatoes based on their shape, weight, and color.

The hyperplane we choose directly affects the accuracy of the results, so we search for the plane that has the maximum distance between data points of both classes. SVMs show accurate results with minimal computation power when you have a lot of features.
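In the same spirit, a hedged sketch (again not from the original article) of a random forest and a linear SVM on the same invented toy data:

from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X = [[120, 0.9], [95, 0.8], [60, 0.2], [70, 0.3], [110, 0.7], [65, 0.4]]
y = ["ripe", "ripe", "unripe", "unripe", "ripe", "unripe"]

# Many trees on sub-samples, averaged together to combat overfitting
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A maximum-margin hyperplane separating the two classes
svm = SVC(kernel="linear").fit(X, y)

print(forest.predict([[100, 0.85]]), svm.predict([[100, 0.85]]))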
https://medium.com/better-programming/a-guide-to-classification-algorithms-fdaabb538b26
[]
2020-08-13 22:40:10.610000+00:00
['Machine Learning', 'Algorithms', 'Artificial Intelligence', 'Data Science', 'Programming']
Three Bold and Optimistic Predictions for the 2020s
Photo by Aaron Burden on Unsplash

The closing out of a decade always prompts "best of" and "worst of" listicles, as well as plenty of crystal gazing about what's to come. And while predictions can be fun, I've learned to be careful about the ones I take in. Because when the mind accepts an idea to focus on, it unleashes a torrent of power that leads us to the next things we see and do, and ultimately experience. Cognitive framing is real, and we're smart to stay attentive to how it shapes both our perceptions and our experience. So, instead of relying on what "experts," pundits, or psychics predict, I'd rather be the one wielding that power, wouldn't you?

Thinking about how our collective future will unfold can be a tragicomic exercise these days. But I played a game with myself when composing this post. I decided to explore broad topics portrayed in the news as a $%&!show: healthcare, the shrinking middle class, and the state of leadership. I challenged myself to turn conventional thinking on its head, focusing instead on three bold, optimistic predictions for the coming decade.

1. New discoveries will underscore and amplify our ability to heal ourselves from disease.

We are so used to viewing "healthcare" as an expensive, barely-functioning system in desperate need of reform. And that it is — if we only examine it through a familiar lens. But if we pull focus on a different set of evidence, we might see a trend that helps us shift our view away from this broken system, toward our own personal power. Plenty is changing for the better:

Via the internet and mobile apps, people have access to deep information that can help us tune into our bodies and our health. It can feed our curiosity and guide us to shift behavior before serious health issues develop. (This assumes we are discerning when sifting through content, because self-diagnosing can have dangerous side effects!) The fact that we can access powerful information at our fingertips is part of a growing trend to stop relying solely on doctors and big pharma for answers.

New cutting-edge research is pointing to our own cells and DNA for better treatments and cures. Whether it's tumor DNA sequencing to determine the best course of treatment, stem cell therapy to treat various diseases and conditions, or new research about our own gut bacteria holding the key to health — we are learning that there is tremendous power in each individual's own physical body to achieve optimal health.

The mindfulness movement is perhaps the most exciting thing that puts us on a path to more personal power in health, by fueling the known mind-body connection we all have. Our enormous potential to de-stress and reduce inflammation in the body is only just beginning to be understood. And there's plenty more on the verge of discovery about the power of placebo. I believe this area alone has the potential to shift us into an extraordinary new paradigm around healing.

Bottom line: we are tapping into the power of our own mental and physical "healthcare system," and the more we do it, the more power we'll discover. This doesn't mean doctors or drugs will disappear, nor should they. But the relationship between doctor and patient will continue to morph from the transactional "Doctor, tell me what's wrong and give me a pill" to a more balanced one of cooperation. Coupled with new medical and metaphysical discoveries, I can see a future where we source groundbreaking ideas for much-needed reform toward a high-functioning healthcare system in the US.

2. The value of expensive "stuff" will decrease.

And I don't mean that plummeting costs will make it easy for everyone to don diamonds and drive Porsches! I mean exactly the opposite. Sure, my Portland purview means I'm immersed in a land of fleece and Priuses, but remember that "Westward Ho" has often defined grand movements of change and new ways of being. Besides, these noted predictive trends know no borders:

Emboldened youth are leading the way toward much-needed change regarding global warming, gun sense, and the blatant human inequities they see. And the drum they're beating is getting louder, waking us ALL up to what is truly important. The attraction of brand names and status symbols is giving way to a social currency that expresses "save the planet."

We're becoming more aware of the origins of our products and the impact of our purchases. Whether it's palm oil leading to deforestation and killing animals, or the child labor behind inexpensive clothing, more people are saying "enough."

Income inequality will continue to mean that the vast majority of people are unable to purchase expensive status symbols. The top 10% of the population owns over 70% of the wealth, leaving the bottom 90% of the population often barely able to scrape by. This widening gap means rich folk will increasingly be on an island unto themselves. Hard to feel good about your designer clothes when you walk among the homeless.

A family's "surplus" (for those lucky enough to have one) will get socked away to educate their children and/or buy experiences rather than "stuff." After meeting basic needs (food, clothing, shelter, etc.), people will prioritize their legacy and how they want to live life, rather than what they want to own in their lifetimes.

Photo by Markus Spiske on Unsplash

With these trends, I anticipate "rich people stuff" losing much of its allure over the coming decade. Due to peer pressure, an empty purse, or other factors, the stuff we have won't define our power. Instead, our growing awareness will push us in a different direction from what previous generations expected (for example, with fashion). One might argue this is not exactly an optimistic view, but I stand firm: moving away from "shiny object status symbols" frees us up to become better people who find value in better "things."

3. Integrity in leadership will become the new normal in ways never before seen.

Yeah, I know this is especially bold and might seem naive, given that every week (or every day!) there's a new revelation of political corruption, criminality, or just plain cruelty. (These links are recent examples, but countless others prove the point.) Consider the surfeit of such horror stories as a great purging. And it's not over yet. In fact, a sub-prediction for next year: the revelation of some incredibly scary, ugly things will test us to the limit. But I hold these views in the spirit of "it's darkest before the dawn." I expect this big purge will clear our mental landscape for something new and better.

In terms of leadership, the world has taken a bizarre, paradoxical turn. Many heads of state around the globe outwardly or covertly promote an anti-democracy agenda. But at the same time, this has flipped on a switch in the masses, many of whom are standing up, whether that's in Hong Kong, Chile, or here in the US (where early this year record numbers of women and minorities were sworn into Congress). The more that autocratic "leaders" try to squash those without power, the more it inspires people to rise.

Youth are coming of voting age, and if my kids and their friends are any indication, they will not stand for the hypocrisy and inaction they're seeing from current leaders (per my #2 prediction). As the oldest voting bloc exits the scene, the incoming one is determined to end the environmental and societal degradation that their elders put up with for way too long.

My vision of "integrity in leadership" starting to dominate in the 2020s isn't one where those in power suddenly see the light and change. It's one where *we* the people step into our power, and this will happen as more of us become inspired to speak up and step up. It will also come from a sense of sheer survival, in order to save the planet that sustains us.
https://medium.com/swlh/three-bold-and-optimistic-predictions-for-the-2020s-399adafcbc46
['Jacqueline Jannotta']
2020-01-16 19:17:37.840000+00:00
['Predictions', '2020', 'Optimism', 'Future', 'Humanity']
The Jungle of Koalas, Pandas, Optimus and Spark
If you are as excited about data science as me, you probably know that the Spark+AI latest summit started yesterday (April 24th 2019). And there are great things to talk about. But I will do it with a spin-off. If you’ve been following me you now that I co-created a framework called Optimus. If you want to see more about that check these articles: Where I’m explaining a whole data science environment with Optimus and other open source tools. This could have been part 3, but I’m more interested in showing you other stuff. Optimus APIs In the beginning Optimus started as a project for cleaning big data, but suddenly we realized that there was a lot of more opportunities for the framework. Right now we have created a lot of interesting things and if you are a data scientist using pandas, or spark you should check it out. Right now we have these APIs: Improved versions of Spark Dataframes (much better for cleaning and munging data). Easier Machine Learning with Spark. Easier Deep Learning with Spark and Spark Deep Learning. Plots directly from Spark Dataframes. Profiling Spark Dataframes. Database connections (like Amazon Redshift) easier. Enrich data connecting with external APIs. And more. You can even read data directly from the internet to Spark. So you can see we have been trying a lot to improve the world of the data scientist. One of the things we care was creating a simple and usable API, and we didn’t love Pandas API or Spark API by itself, but a combination of those with a little touch of awesomeness created what you can call today our framework. Koalas vs Optimus vs Spark vs Pandas Today Databricks announced the project Koala as a more productive way when interacting with big data, by augmenting Apache Spark’s Python DataFrame API to be compatible with Pandas. If you want to try it check this MatrixDS project: And this GitHub repo: So instead of boring you with copy-pasting the documentation of Koalas that you can read right now, I created a simple example of the connection between Koalas, Optimus and Spark. You’ll need to install Optimus pip install --user optimuspyspark and Koalas pip install --user koalas I’ll be using this dataset for testing: https://raw.githubusercontent.com/databricks/koalas/master/data/sample_stocks.csv Let’s first read data with Spark vanilla: from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() df = spark.read.csv("sample_stocks.csv", header=True) For that I needed to upload the dataset before. Let’s see that in Optimus: op = Optimus() df = op.load.url(" from optimus import Optimusop = Optimus()df = op.load.url(" https://raw.githubusercontent.com/databricks/koalas/master/data/sample_stocks.csv ") That was one step simpler because with Optimus you can read data directly from the web. What about Koalas? import databricks.koalas as ks df = ks.read_csv(" https://raw.githubusercontent.com/databricks/koalas/master/data/sample_stocks.csv ") This code will fail, would that happen in Pandas? import pandas as pd df = pd.read_csv(" https://raw.githubusercontent.com/databricks/koalas/master/data/sample_stocks.csv ") Well no. That would work. That’s because you can read data with Pandas from the web directly. Ok so let’s make the Koalas code work: import databricks.koalas as ks df_ks = ks.read_csv("sample_stocks.csv") Well that looks simple enough. 
By the way, if you want to read the data from local storage with Optimus it's almost the same: from optimus import Optimus; op = Optimus(); df_op_local = op.load.csv("sample_stocks.csv"). But let's take a look at what happens next. What are the types of these DataFrames? print(type(df_sp)); print(type(df_op)); print(type(df_pd)); print(type(df_ks)). And the result is: <class 'pyspark.sql.dataframe.DataFrame'>, <class 'pyspark.sql.dataframe.DataFrame'>, <class 'pandas.core.frame.DataFrame'>, <class 'databricks.koalas.frame.DataFrame'>. So the only framework that created a Spark DF, apart from Spark itself, was Optimus. What does this mean? Let's see what happens when we want to show the data. For showing data in Spark we normally use the .show() method, and for Pandas the .head() method. df_sp.show(1) will work as expected. df_op.show(1) will work too. df_pd.head(1) will work as well. But what about our Koalas DF? Well, you need to use the Pandas API, because that's one of the goals of the library: to make the transition from Pandas easier. So df_ks.show(1) will fail, but df_ks.head(1) will work. If you are running this code along with me and you call show on the Spark DataFrame, this is what you see:
+----------+------+------+------+------+----------+----------+----------+-------+-------+------+--------+----------+------+
| Date| Open| High| Low| Close| Volume|ExDividend|SplitRatio|AdjOpen|AdjHigh|AdjLow|AdjClose| AdjVolume|Symbol|
+----------+------+------+------+------+----------+----------+----------+-------+-------+------+--------+----------+------+
|2018-03-27|173.68|175.15|166.92|168.34|38962839.0| 0.0| 1.0| 173.68| 175.15|166.92| 168.34|38962839.0| AAPL|
+----------+------+------+------+------+----------+----------+----------+-------+-------+------+--------+----------+------+
only showing top 1 row
Which is kinda awful. Everyone prefers those pretty HTML tables to look at their data, and Pandas has them, so Koalas inherits them from Pandas. But remember, these are not Spark DFs. If you really want to see a prettier version of a Spark DF with Optimus you can use the .table() method: df_op.table(1). This shows you the data in a nicer layout, plus information about it such as the column types, the number of rows in the DF, the number of columns and the number of partitions.
Selecting data
Let's do more with our data, like slicing it. I'll choose the columns Date, Open, High and Volume with each framework. There may be more ways of selecting data; I'm just using the common ones. With Spark: df_sp["Date","Open","High","Volume"].show(1), or df_sp.select("Date","Open","High","Volume").show(1). With Optimus: df_op["Date","Open","High","Volume"].table(1), or df_op.select("Date","Open","High","Volume").table(1), or with indices :) df_op.cols.select([0,1,2,5]).table(1). With Pandas: df_pd[["Date","Open","High","Volume"]].head(1), or df_pd.iloc[:, [0,1,2,5]].head(1). With Koalas: df_ks[["Date","Open","High","Volume"]].head(1) will work, df_ks.iloc[:, [0,1,2,5]].head(1) will fail, and df_ks.select("Date","Open","High","Volume") will fail. So as you can see, right now we have good support for different styles with Optimus, and if you love the [[]] style from Pandas you can use it with Koalas too, but you can't select by indices, at least not yet. The difference is that with Koalas and Optimus you are running Spark code underneath, so you don't have to worry about performance. At least not right now.
More advanced stuff: let's get the frequencies for a column. Pandas: df_pd["Symbol"].value_counts(). Koalas: df_ks["Symbol"].value_counts(). They're the same, which is very cool. Spark (some of the bad parts): df_sp.groupBy('Symbol').count().show(). Optimus (you can do the same as in Spark): df_op.groupBy('Symbol').count().show(), or you can use the .cols attribute to get more functions: df_op.cols.frequency("Symbol"). Let's transform our data with one-hot encoding. Pandas (this is crazy easy): pd.get_dummies(data=df_pd, columns=["Symbol"]).head(1). Koalas (this is crazy easy too): ks.get_dummies(data=df_ks, columns=["Symbol"]).head(1). Spark (a similar enough result, but horrible to do; I hate this): from pyspark.ml.feature import StringIndexer, OneHotEncoderEstimator; indexer = StringIndexer(inputCol="Symbol", outputCol="SymbolIndex"); df_sp_indexed = indexer.fit(df_sp).transform(df_sp); encoder = OneHotEncoderEstimator(inputCols=["SymbolIndex"], outputCols=["SymbolVec"]); model = encoder.fit(df_sp_indexed); df_sp_encoded = model.transform(df_sp_indexed); df_sp_encoded.show(1). Optimus (a little better, but I still prefer Koalas for this): from optimus.ml.feature import string_to_index, one_hot_encoder; df_sp_indexed = string_to_index(df_sp, "Symbol"); df_sp_encoded = one_hot_encoder(df_sp_indexed, "Symbol_index"); df_sp_encoded.show(). So in this case the easiest way was with Pandas, and luckily it's implemented in Koalas. These types of functions will increase in the future, but right now this is almost all we have. And they run on Spark, so it rocks. Plots: plotting is an important part of data analysis. With Pandas we are used to plotting whatever we want very easily, but with Spark it's not that easy. We are happy to announce that in the latest version of Optimus (2.2.2) we have created a way of producing plots directly from your Spark DataFrames, no subsetting needed. Pandas: df_pd.plot.scatter("Open","Volume"); df_pd.boxplot("High"); df_pd.hist("Low"). Koalas: df_ks.hist("Low") will not work. Spark: nope. Optimus: df_op.plot.boxplot("High"); df_op.plot.hist("Low"); df_op.plot.scatterplot(["Open","Volume"]). Apache Spark 3.x:
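In Apache Spark 3.2 and later, Koalas was folded into PySpark itself as the pandas API on Spark (pyspark.pandas), so the Koalas calls above have a direct modern equivalent. Here is a minimal sketch, assuming Spark 3.2+ and the same local CSV; it is an updated illustration rather than code from the original write-up.

# Minimal sketch assuming Spark 3.2+, where Koalas lives inside PySpark as
# the pandas API on Spark; the calls mirror the Koalas examples above.
import pyspark.pandas as ps

psdf = ps.read_csv("sample_stocks.csv")                         # pandas-style API, Spark underneath
print(psdf["Symbol"].value_counts())                            # same call as Pandas/Koalas
print(ps.get_dummies(data=psdf, columns=["Symbol"]).head(1))    # one-hot encoding, same call as Pandas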
https://towardsdatascience.com/the-jungle-of-koalas-pandas-optimus-and-spark-dd486f873aa4
['Favio Vázquez']
2019-04-25 12:34:33.660000+00:00
['Technology', 'Data Science', 'Business', 'Artificial Intelligence', 'Machine Learning']
Pink Ribbons & Purple Hearts
Breast Cancer Awareness Month Photo by Gabrielle Henderson on Unsplash June 2017: “Courage: noun- the quality of mind or spirit that enables a person to face difficulty, danger, pain, etc., without fear; bravery.” Per Dictionary.com. But they’re wrong. There is plenty of fear involved in courage. Courage is facing difficulty, danger, pain, etc despite the fear. I work with some of the most courageous Humans on the planet. Women who have breast cancer. And trust me — there is plenty of fear to go around. Breast cancer used to kill more of them, but it still kills plenty. Just because we’re finding it earlier doesn’t mean it always ends well or that we’ve got it on the run with better chemo, more focused radiation, and hereditary gene detection. Fuck No. I could tell you about some of these women and you would cry. You would cry buckets of big fat tears over their stories. New mothers who found their cancers when they were nursing, pregnant women putting off chemo and radiation until after they can deliver their babies, young women who are gene-positive having their breasts removed because their grandmothers, mothers, aunts, and now their sisters have died from breast cancer, tough women with tattoos who talk to me about their bikes as they go under anesthesia, softer women who hold mala beads and accept The Universe’s plan. Maidens, Mothers, and Crones. My friends. My co-workers. Every religion, every ethnicity, every language. They are all there. They come to my OR to fight the alien that is living in their breast and destroying their life, their future. All they want is to live. They know life will never be the same again. If they get through this — forever after they will have earned the right to be called a Survivor. It’s a club none of them asked to join. For Survivors the cloud of a recurrence will never recede totally from their existence; it clings like a fog. Even after the magical five years of remission anniversary, none of them will completely exhale again. Ever. All of us who stand with them in their fight are changed by their courage. These women never stop being who they are even in the heat of their battle. They remain mothers, wives, sisters, daughters to the Humans they love. They do not pull into themselves and let the disease define them. I see it in the short time I spend with them pre-operatively — they are still taking care of their shell shocked family on the very day of their surgery. They are trying to make sure their kids got to school, that their mom has enough tissues and that their husband (who looks like he is facing a firing squad) is in the most comfortable chair in the room. They know they can’t die, because who would take care of the Humans in their life if not for them? When you see the matriarch in a family unit wounded it touches something deep in your soul. Humans respond to that on an instinctual level. We have to save this Human because in doing so, we also save this family. And we must save this family. To all the Survivors in My Life, I am so thankful you kicked cancer’s ass. To my fellow nurses and surgeons who stand with me in this space, I know of no better, kinder, humane Humans to do this work with. I am so very grateful to the courageous women who allow me to share their path in this journey. They inspire me daily with their love and strength. I am in awe of you. You are my heroes. Namaste. Addendum: October is Breast Cancer Awareness Month so I decided to recycle this story about when I was a surgical oncology OR nurse. 
When asked what I ‘did’ back in those days, my reply was simple:
https://medium.com/recycled/pink-ribbons-purple-hearts-c286a08b4a1c
['Ann Litts']
2020-10-08 02:31:34.451000+00:00
['Health', 'Life', 'Courage', 'Breast Cancer', 'Nursing']
How Paintings Depict Time
On the surface, Thomas Cole’s painting The Oxbow shows a natural wonder: the winding course of a river across a low-lying valley. It has the dramatic addition of changing weather conditions, giving a sense of the artist having captured a fleeting, single moment. Yet there is more to this picture than first impressions. Painted in 1836, the American artist produced a vision of a landscape in a state of transformation. Not only is there a storm, but in the distance the land is being converted into farmland by human settlers. In fact, the painting supplies three overlaid time-frames: the rapid onset of a storm, which arrives and departs in a matter of minutes or hours; the clearing of trees and wilderness to be replaced by agriculture and towns, a process that occurs over years and decades; and the far slower geological process of a river flowing over flatlands and slowly silting up, so creating curves that eventually turn into oxbows, the great horseshoe meander that gives the painting its subject matter. Time having long passed The Long Engagement (1859) by Arthur Hughes. Oil on canvas. Birmingham Museum and Art Gallery, UK. Image source Birmingham Museums Trust (open access) Arthur Hughes’ painting, titled The Long Engagement, made in 1859, has within it a wonderful and somewhat poignant detail that visually describes the onset of time. On the trunk of the tree, the name “Amy” has been scored into the bark, only to be half-covered by growing ivy leaves. Detail from ‘The Long Engagement’ (1859) by Arthur Hughes. Oil on canvas. Birmingham Museum and Art Gallery, UK. Image source Birmingham Museums Trust (open access) This delicate symbol hints at the wider subject of the work: How time can overtake the first flush of romantic love. It is unlikely that the name “Amy” was chosen by chance. The name comes from the Old French, Amée, meaning “beloved”, a vernacular form of the Latin Amata. The wider the image shows two engaged sweethearts meeting beneath the boughs of a tree. Some time before, the man scored her name into the tree trunk. Yet, from the growth of the ivy leaves, we know that their betrothal is a long one. From his clothing, the man can be identified as a member of the clergy — a curate, who assists the work of the parish priest. It is a low-paid position and provides the reason for the extended engagement, since the parents of the girl have not allowed her to marry until he has secured a better paying position in the church. What the artist has depicted is an aspect of English Victorian society (one that is perhaps not so foreign to us today) that reflected the importance of financial stability as a basis for marriage. Will the curate eventually achieve a salary to match the expectations of his sweetheart’s parents? Only time will tell. Past, present and future An Allegory of Prudence (between c. 1550 and c. 1565) by Titian. Oil on canvas. Image source Wikimedia Commons Titian’s Allegory of Prudence offers a picture of time based on the three stages of life: youth, maturity and old age. In art history, this is a popular subject and is often depicted in using the same motif of a youth, a middle-aged man and an old man together. In this painting, the oldest face (looking left) is in fact a self-portrait of Titian himself. His son Orazio, is thought to be the middle-aged face looking forward. 
For the third generation — youth — we might expect to find Titian’s grandson, yet since the painter had no grandson at the time of the work, it is thought that a cousin by the name of Marco Vecellio was depicted instead. Titian’s painting has long engaged art historians because of the intriguing connection between the three human faces and the triple-headed creature below them. The wolf, lion and dog combined offers a symbol of prudence, as confirmed by the inscription at the top of the painting, which reads (from Latin) “From the experience of the past, the present acts prudently, lest it spoil future actions”. And so, we may read the painting as a visual description of the facets of prudence: looking back, which remembers and learns; the intelligence of the present, which judges and acts; and foresight, which anticipates future events. Time and death Et in Arcadia ego (1628) by Nicolas Poussin. Oil on canvas. The Lourve, Paris. Image source Wikimedia Commons This painting, by the French artist Nicolas Poussin, shows four figures gathered around a tomb. They are pointing to an inscription carved in the stone. The inscription reads “Et in Arcadia Ego”, which can be translated as “Even in Arcadia, there am I”. The four figures are in a place called Arcadia, a region of Greece that has long been celebrated in art and poetry. As an idealised setting, Arcadia is a place of classical, unsophisticated existence. Shepherds roam the landscape singing love songs; nymphs wander through the woods and rivers. Poussin’s painting gives us this romantic landscape but it also poses a question: what about death in Arcadia? In this regard, the inscription may refer to the occupant of the tomb, understood as “I too once lived in Arcadia.” Or it may have a wider message, the poignant thought that even in the idyllic setting of Arcadia, the present moment will inevitably pass and death will come to us all. My name is Christopher P Jones and I’m an art historian and critic. I’m also the author of How to Read Paintings. Read more about my art writing here. Would you like to get… A free guide to the Essential Styles in Western Art History, plus updates and exclusive news about me and my writing? Download for free here.
https://medium.com/thinksheet/how-paintings-depict-time-33850ff344f4
['Christopher P Jones']
2020-11-12 15:14:50.779000+00:00
['Painting', 'Creativity', 'Art', 'Time', 'Art History']
Money, Greed and The Meaning of Life
People do astounding things for money; they subject themselves to heinous conditions, participate in the most degrading circumstances and waste their most precious resource, time, in the pursuit of it. The adage that money makes the world go round is the saddest reality of life. That, on a planet where the poor could be lifted from the depths of despair through simple monetary investment, they aren't, speaks of the greed intrinsic to the human race. We see the richest among us giving back when they become octogenarians as a means to alleviate their guilt over the opulence and wealth they have selfishly hoarded, and they expect adulation for it. They are given it, through knighthoods and recognition that whitewash the child labour and squalid conditions their factory workers in Bangladesh operate under. How can the wealthiest elite watch the lifeblood of clean water flow from their gold-plated taps while cognisant of the millions of children dying from water-borne diseases in another country? If you are giving back, you have already taken too much. Money is greed. Psychologically it shackles us and forces us to make irrational decisions. It is more easily available than heroin but far more addictive. Why isn't there a rehab for those who are incapable of controlling themselves and who continually spectate as their lives spiral out of control? Isn't that what school is for? Instead, further education saddles us with debt that lasts a lifetime and impedes us from flourishing economically. School isn't the cure; it's symptomatic of the disease. And that won't ever change: the bankers are making too much money. They have found their mechanism for personal enrichment and are exploiting the future of this country. We are rewarded for our selfishness and greed. Which other species would get a bonus for recklessly gambling its population's future? For us, this inept handling and total lack of corporate responsibility are praised and encouraged. Instead of caring about and being invested in the long-term benefits of the country having highly educated people, the country's future is being pillaged for every penny that is legally allowed. Money is also the prerequisite to life; it affords us the opportunity of shelter, but it too saddles us with a lifetime of debt. The older generation has seen property prices rise to unprecedented levels, pricing the young out of the market. Can you see the emerging pattern? The things we require to avoid the debilitating pitfalls most pertinent to life are the things that most regularly set us on the path of despair. How do you feed your family if you have no money? You do astounding things for it; you subject yourself to heinous conditions, you participate in the most degrading circumstances and you waste your most precious resource, time, in the pursuit of it. It is the vicious cycle of life that 99.9% can't escape. But we are still sold the hope of the 'American Dream', which imprisons us by preventing us from taking a stand. We think we can all make it, which makes us resist burning down the house. We accept it instead of questioning; we hope instead of striving for change. But is it starting to change? We see Bernie and Donald growing in popularity, but the establishment doesn't understand. You can only take so much from those who don't have it until they decide to fight back, no matter how ill-informed their political choices are. If there are no chances and people don't have money, they become desperate.
Marry that to the fact that people have the propensity to lust after the most frivolous of things. They don't appreciate the value of what they possess as they desperately search for what's next. We miss the most valuable things in our life searching for something that may never come. But money is also hope. It is that mother working three jobs to provide her children with the opportunities she never had. Without the opportunity for hope, we would just give up. So money keeps us going. We have been conditioned to believe that if we work harder we can achieve more, but that is disingenuous. HR departments have worked out to the penny exactly how little they can pay you for maximum production while they maximise profits. It's why we are poorer than our parents. Management has fucked us and our future is bleak. Money makes us do irrational things: we go to work to earn money to pay for the gas and the car that get us there. We work to pay for the food which gives us the energy to be a productive worker. We work to earn money for vacations which give us a break from working, and the cycle goes on. What percentage of the money you earn is used in earning that money in the first place? Think of your expenditure on work clothes, etc. Our whole life is focused on making money for other people. Even if you are an entrepreneur, think about where your taxation goes. Whether you pay corporation or personal tax, your money still goes to the man, sorry. And we can't escape this vicious cycle. Generation after generation follows and we don't solve the problem. It flows through to the meaning of life: money and economic gain. As long as we work to earn a living, to enrich our lives and to fund our happiness, the meaning of life will always be one governed by economics. It's why a utopia has never arisen throughout human history. Because a utopia alters the paradigm by eradicating the necessity for fiscal responsibility. It anatomises the economy and births the advent of universal incomes which truly support life. It is the next step after capitalism, but we believe there can be no improvement on the capitalist model. We have been programmed to believe democratic stability is supported by the mantra that greed is good. Somehow we have been convinced that greed is the mechanism which ensures honesty by endowing a select few with power and wealth. But robots are coming and they will take our jobs. I realise that is the theory which has reigned supreme for generations, but technological progress has ensured it is an impending reality. Look at the proliferation of autonomous vehicles. Where do the Uber drivers go, and what happens when the autonomy transitions to trucks? Four million workers in the USA alone would be displaced or disrupted by this new technology, and that is just the beginning. Money is the medium which has grown to define us. It defines our social standing, the institutions we are able to attend, the restaurants we eat at and the clothes we wear. It is ingrained so deeply within our being that imagining a world without it is inconceivable. But what if? The development of blockchain and cryptocurrencies threatens the financial model. They are scrambling to get a piece. You might not see it yet, but what happens when banks are not a necessity and trust is ensured through alternative means? Trust has been the limitation which has prevented the advent of certain systems; it is the currency that enables transactions to occur at higher frequency. So perhaps technology is the key which frees us from this precipitous world.
Our dependence on credit will be eradicated. Or more likely the current elite will acquire the means for autonomy, replace the workers earning a pittance with even cheaper machines and try to profit even more. Because there are no depths we wouldn’t stoop to in order to acquire more money. Capital is King. ‘Get rich or die trying’ is the string which defines us as a race.
https://chrisherd.medium.com/money-greed-and-the-meaning-of-life-7a041e924926
['Chris Herd']
2018-07-10 16:20:56.837000+00:00
['Politics', 'Life Lessons', 'Future', 'Money', 'Economics']
Do You Want To Predict Your Future?
Action Take responsibility for your daily decisions and moments. They will define your past, present, and future. You only can live in the moment. We can learn from our past. Our future is in no small measure defined by what we decide to do and act on today. Spend time reflecting on your life. What has it been? What can you learn from your past? What will you do with today? How will today create the future you want? We are thinking about people. And, our actions, more than any words, will define what we are and what we are becoming. Invest in your future daily by being your authentic self. This does take commitment, learning, and bold action. However, if you do it well, you get to create the exciting future you want to have. Never underestimate the power of daily decisions and actions to create the future you always dreamed of.
https://medium.com/illumination-curated/do-you-want-to-predict-your-future-43a3b5f99789
['Randy Wolken']
2020-12-27 15:12:22.954000+00:00
['Business', 'Self-awareness', 'Leadership', 'Self Improvement', 'Life Lessons']
How To Convert Your Google Sheets Spreadsheet Into A Web App
How To Convert Your Google Sheets Spreadsheet Into A Web App For Receiving Form Responses With Just One Line Of Code. We are in an era of technology and advancement. Every day we fill out many forms or receive responses from many forms; whether we are placing an order for a product or asking for a refund, everything revolves around forms. Many of you might know that to receive a response from an HTML form we have to use a backend language like PHP, Flask, Django, or Node.js. Choosing a backend language for the form is not enough; we also need a database to store all the responses. And even that's not enough: to make sure that anyone around the globe can reach your form, you also need a server. I have worked on many projects and I know it's not easy to handle forms. In this blog, I will introduce you to a simple way of receiving responses using Google Sheets with just one line of code. Let's start. The first thing you need is a Google account to use Google Sheets. You might already have one, but if you don't, or you don't want to use your personal Gmail account, go and create one. 1. Visit Google Sheets and log in with the account you want to use. 2. Create a new sheet by pressing the Blank tile. 3. Give your columns appropriate names and save the file. 4. Click on Tools and then Script editor. 5. Copy the code file from here and paste it into the browser. 6. Now click on Run ➡ Run function ➡ initial setup. 7. Next, review permissions, then choose an account. 8. Click on Advanced, then go to _FILE_NAME_ and allow it. We have successfully created a connection; now let's publish the sheet as a web app. Click on Publish ➡ Deploy as web app, set Project version ➡ Version 1.0 and Who has access to the app ➡ Anyone, even anonymous, then Deploy. Copy the generated URL and paste it somewhere safe. Now, to test the sheet, let's create a simple HTML form. You can download the code for the HTML form here. Paste the generated URL into the HTML code where indicated. Demo. Now let's see how we can send each Google Sheets entry as an email automatically.
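Before wiring up the email step, it can help to confirm that the published web app URL actually accepts submissions. Any HTTP client will do; below is a minimal Python sketch, assuming the deployed Apps Script endpoint accepts POSTed fields named after your sheet's column headers. The URL and the field names are placeholders, not values from this article.

# Minimal sketch: POST one test response to the published Apps Script web app.
# The URL and field names are placeholders; use your own deployment URL and
# the column names you gave the sheet.
import requests

WEB_APP_URL = "https://script.google.com/macros/s/EXAMPLE_DEPLOYMENT_ID/exec"  # placeholder

payload = {
    "name": "Test User",          # assumed column header
    "email": "test@example.com",  # assumed column header
}

response = requests.post(WEB_APP_URL, data=payload)
print(response.status_code, response.text)  # a new row should appear in the sheet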
https://medium.com/python-in-plain-english/use-google-sheets-for-receiving-html-form-responses-and-sending-email-8a869a685dcf
['Abhay Parashar']
2020-12-15 16:14:37.857000+00:00
['Google Sheets', 'Technology', 'Programming', 'Python', 'Software Development']
Why It’s Important That Women Like FKA twigs Speak on Abusive Relationships
Why It’s Important That Women Like FKA twigs Speak on Abusive Relationships Abuse doesn’t recognize status. Everyone’s experience matters. Many of us know the statistic. Over 43 million women and 38 million men have experienced psychological aggression by an intimate partner in their lifetime. You may have been one of those 81 million people. I have. But who am I? I’m a college-educated woman in psychology, no less. Someone raised in a staunch Catholic family where divorce wasn’t an option. I make a decent living working my 9 to 5, meaning I would have been able to survive on my own. So why did I stay, you may ask? There are many many reasons. In the end, I am no one. I am just one of the 81 million people who have lived this experience. Do you know who else is one of 81 million ? FKA twigs [born Tahliah Debrett Barnett]. Her experience doesn’t matter more because she’s a celebrity. Her experience isn’t more traumatic because it’s in the public light. But her decision to share her experience publicly is so incredibly important because it supports the fact that abuse can happen to anyone.
https://medium.com/fearless-she-wrote/why-its-important-that-women-like-fka-twigs-speak-on-abusive-relationships-85664a758de9
['Estrella Ramirez']
2020-12-16 15:26:07.124000+00:00
['Relationships', 'Self', 'Abuse', 'Mental Health', 'Healing']
Trump provides the answer
Trump provides the answer Not the one he thought Trump stumbling on reality By Mike Meyer It appears that Trump and the, barely, eukariotic creature named McConnell (no spine or intestines, but you can’t have everything) have gone with Donald’s National Emergency for Wall and Fence Contractors. This is an excellent and sorely needed move that could well be our salvation. As many of us have written about since the collapse of the American political system in 2016, there is very little that can be done to prevent the future disasters now being totally ignored by the government of the US. The reasons for this are multitude but the archaic and corrupted Federal structure cannot be changed while under the control of low order criminals who have locked in their rule by vast voter disenfranchisement. But the means for full structural change are being given to us by Trump and the very people who have brought on these disasters. It should be no problem, now, to remove Trump. If he is not relocated to a guarded and locked facility for high crimes and misdemeanors he will be devastated in the next election. People are prone to stupidity but they do learn. The second time around requires the piper to be paid. I’m prone to confidence that a suitable, i.e. rational individual with some solid scientific education will be elected president in 2020. Upon taking office she can declare a National Emergency due to Climate Change, Resurgent Racism, National Greed, and Massive Human Rights abuses. Unlike Trump’s wall fiasco these are all very real and, not only national, but planetary emergencies. To avoid the disaster that is befalling Trump and the Republicans in claiming they don’t need support for a plan with no reality and no purpose, the broad principles of the new Emergency should be presented to the population for a direct, bio-metric confirmed online vote. A majority of the country will support these principles as they have been waiting years for someone to ask. Of that I have no doubt. People actually know real emergencies, particularly those that have been created by incompetent and illegitimate officials dedicated to greed and oppression. If done correctly, the principles of the new Emergency will be fully mandated. To be sure that this is done properly a second Emergency will need to be declared temporarily disenfranchising populations in states confirmed to have implemented racist voter suppression by any means that remains uncorrected. They may regain their citizens’ full franchise by correcting the problems. Turn about is fair play. Once this first National Citizen Vote is completed the process can move to specific policies and actions to address the stated Emergencies. Unlike Trump and his Republican cohort who have no goal other than filling their pockets, I suggest a national majority would be needed for approval of these policies with immediate implementation. No need to bother the corrupt Congress controlled by foreign funded criminals of various kinds. This would allow the world to watch while the embarrassment that had been America returns to taking a positive planetary leadership. Several current Republican officials have made excellent suggestions (who knew?) that should facilitate correction of these Emergencies. These should be at the top of the list for the New Emergency: Establishment of a ten year conversion eliminating all fossils fuels with nationalization of all existing fossil fuel corporations. These could then be liquidated to fund further Emergency policies as listed below. 
(Thank you, Senator Rubio (R-FL) for the excellent suggestion.) Immediate implementation of International Human Rights including Emergency national law providing full rights for all gender or non gendered peoples and sentient beings (a future need). This could also fund the addition of genderless facilities in all schools and public places. (And, thank you, Matt Gaetz (R-FL), for your suggestion.) Nationalization of all financial corporations with conversion to state owned banks for state government funding. Community credit unions owned by their members would be the only federally insured primary financial service organizations. This, along with liquidation of the fossil fuel industry, would fund all national services for citizens and refugees for the foreseeable future. Removal of all physical borders to be replaced by non intrusive sensors and adequate facilities to process anyone seeking refugee status for any reason with expedited processes specifically for climate change caused political instability. This would begin to correct a century of American theft and political oppression in Central and South America. This will prepare for the inevitable steady increase in climate change driven population movements. Implementation of Universal Basic Income replacing all government welfare and unemployment services. This will be funded, initially, by the nationalization and liquidation of the fossil fuel and the finance industries and then by maximum income limits. Above this national income limit, to be determined by another National Citizen Vote, taxes will be 100%. Members of Congress would be a special case with any personal wealth above, the very generous amount of $500,000 required to be donated to charity annually. This should quickly produce mass resignation from Congress by existing party members. Replacement of, the now nearly vacant, Congress by an automated version of the National Citizen Vote system. This would be used primarily to approve annual funding proposals by federal department. Begin research and conversion to Artificial Intelligence based systems to replace the Executive Office. On implementation of this the White House to be converted into a museum dedicated to human stupidity and ignorance with a full wing dedicated to the late US party system. Trump would be honored by a small, gold note pad including all his truthful statements as president. This would be of limited interest given the repetition of; “I need to go to the bathroom”, “I’m hungry”, “Time to play golf”, and “Where’s my phone?”, but you need to work with what you have. This would begin to return America to planetary leadership status while simply bypassing old and failed systems. Thank you Donald Trump for stumbling into the future. In honor of this inadvertent success ten years could be taken off your life sentence.
https://mike-meyer.medium.com/trump-provides-the-answer-320a3962ba07
['Mike Meyer']
2019-02-20 08:31:00.758000+00:00
['Politics', 'Future', 'America', 'Humor', 'Trump']
Honoring the 50th Anniversary of 2001: The Monolith and Hope for the Human Species
Fifty years after its debut, 2001 is more relevant than ever. Released at the height of the space age in 1968, Stanley Kubrick’s 2001: A Space Odyssey was one of the cinematic and cultural blockbusters of the 1960s and early 1970s. Fifty years later, 2001 stands as the greatest and most thought-provoking science-fiction film of all time, perhaps rivaled by Christopher Nolan’s Interstellar (2014). As most science-fiction fans know, 2001 also introduced the sleek black monolith, one of the most striking icons in film and art history. Although it initially garnered mixed reviews, 2001 became a big box office hit and generated much discussion about its meaning. For example, what the hell was the black monolith anyway, and where did it come from? Where did astronaut Dave Bowman end up in his cosmic journey? The 35 mm and 70 mm versions of the film played in theaters well into the early 1970s, but because the thrill of the Apollo moon landings had worn off by that point, 2001 faded from popular consciousness, even as its prestige eventually skyrocketed. Now we’re in the year 2017 and the 50th anniversary of 2001 is soon to arrive. It’s time to consider the meaning and vision of hope in 2001, which is centered around the sleek black monolith. The Year 2000 It is probably hard for contemporary readers to grasp that the year 2000 once stood for “the future,” a world of tomorrow filled with optimism for art, science, technology, planetary ecology, social equality, and universal progress. With the 2 replacing the 1, followed by three 0s, the future beyond 2000 just had to be better, wiser, cooler, and overall more awesome. This hope for a better future after 2000 is why Kubrick and coauthor Arthur C. Clarke set their space-age odyssey in 2001 — the first year of the new millennium. 2001 depicts a past and future in which humans have evolved from apes to astronauts through science and technology, along with an assist from the mysterious monolith. Importantly, 2001 taps into the awe and wonder of the cosmos along with the marvels of science and technology. At the same moment in human history, Kubrick and NASA directed space odysseys that expressed the highest trajectories of the space age, when humanity first ventured into the vast universe beyond planet Earth — yet neither Kubrick nor NASA provided the philosophical meaning for these discoveries and achievements. Kubrick wanted 2001 to show a human space narrative but not explain it or detail it. Meaning 1: Desert Monolith In a moment of the sublime, the “desert” monolith inspired pre-human simians to discover technology and look to the stars. The monolith first appears in the desert scene with the apes, as small tribes battle over scarce resources. During one scene, we see the close-up of an ape’s face as the ape glances right, then left, then skyward. We see a sunset of deep orange with a dark sky above. The next morning the monolith appears, standing perfectly upright in the desert as if it had been planted there with intent. There is no explanation for its origins. As the apes gather around, we hear Ligeti’s Requiem, the music rising in a crescendo. The monolith is then shown against the morning sky, with the sun eclipsed as it rises above the top edge. A crescent moon is positioned above the sun. This linear and symmetrical arrangement suggests the monolith is pointing toward the sun and moon and thus directing the apes to look toward the stars. This observation is confirmed in the famed jump cut from the bone to the spacecraft. 
The technological apes are motivated by the sublime. When the apes first see the monolith, they are experiencing an all-too-human moment of the sublime — the simultaneous feeling of awe, wonder, and terror when gazing upon something majestic and mysterious that seems to overwhelm their reason and sensory perceptions. We can see the awe and terror of the apes as they approach the monolith, first staring at it and then quickly touching it and removing their hands in fear. Their reason overwhelmed, the apes remain curious. Eventually the sleek monolith seduces their reason and senses, leading them to caress it as an object of mystery and desire, even if they have no idea where it came from or what it means. Inspired by the monolith, the apes are soon using bones as technology and evolve to become the space farers we now are. The scene where the ape transforms the bone into technology is accompanied by Richard Strauss’s Also Sprach Zarathustra (1896), as kettledrums and symphonic sounds rise to a triumphant crescendo. (Also Sprach Zarathustra is also heard in later appearances of the monolith.) Strauss’s composition was inspired by Nietzsche’s Thus Spoke Zarathustra. Surely Kubrick and Clarke knew the inspiration for the music, suggesting that the extraterrestrials in 2001 are the space-faring gods who inspired the apes to become humans, not the human-created anthropomorphic God who is dead to Nietzsche. In fact, this is the idea Kubrick intended, as he states in his 1968 Playboy interview: “I don’t believe in any of Earth’s monotheistic religions, but I do believe that one can construct an intriguing scientific definition of God, once you accept the fact that there are approximately 100 billion stars in our galaxy alone, that each star is a life-giving sun and that there are approximately 100 billion galaxies in just the visible universe. Given a planet in a stable orbit, not too hot and not too cold, and given a few billion years of chance chemical reactions created by the interaction of a sun’s energy on the planet’s chemicals, it’s fairly certain that life in one form or another will eventually emerge. It’s reasonable to assume that there must be, in fact, countless billions of such planets where biological life has arisen, and the odds of some proportion of such life developing intelligence are high. Now, the sun is by no means an old star, and its planets are mere children in cosmic age, so it seems likely that there are billions of planets in the universe not only where intelligent life is on a lower scale than man but other billions where it is approximately equal and others still where it is hundreds of thousands of millions of years in advance of us. When you think of the giant technological strides that man has made in a few millennia — less than a microsecond in the chronology of the universe — can you imagine the evolutionary development that much older life forms have taken?” Kubrick is clearly offering a cosmic, secular, extraterrestrial notion of a “God.” This is not a God of our creation and narcissistic insecurities, but the idea that a sufficiently evolved and advanced extraterrestrial species — with the science and technologies to traverse the light years — would be like gods to us or any less-advanced species. Would these extraterrestrials be like Nietzschean supermen or more like Hollywood superheroes? Or more like scientists and philosophers contemplating microbes in a petri dish? 
Perhaps that explains the monolith left behind for the apes on the petri dish of planet Earth. Since the extraterrestrials would likely have no need to conquer our planet for its resources, why would they want to destroy us? Kubrick thinks it’s possible (though not certain) they would be benevolent, as suggested by the desert monolith in the film. In the spirit of Nietzsche’s Ubermensch, perhaps the monolith-bearing extraterrestrials have evolved beyond their early evolutionary stages, becoming peaceful and benevolent space farers on a quest for discovery, beauty, and cosmic meaning. Perhaps they have already developed and embraced a cosmic narrative that unites their species as they explore the universe. Meaning 2: Moon Monolith In the same interview, Kubrick points toward the second meaning of the monolith: “But at a time [1968] when man is preparing to set foot on the Moon, I think it’s necessary to open up our Earthbound minds to such speculation. No one knows what’s waiting for us in the universe. I think it was a prominent astronomer who wrote recently, ‘Sometimes I think we are alone, sometimes I think we’re not. In either case, the idea is quite staggering.’” The “moon” monolith proves to the simians-turned-humans that they are not alone in the vast universe, no longer are they cosmically central, nor are they the top species in the cosmos. Discovered by scientists on the moon in the year 2001, the second monolith was buried four million years previous. Like the desert monolith, the moon monolith stands perfectly vertical. The difference is the moon monolith is beaming a radio frequency toward Jupiter. The scientists realize this is a monumental discovery, and it prompts the visit to the Clavius moon base by Dr. Heywood Floyd (William Sylvester). However, the humans on Earth have not yet been told of the discovery. As Floyd says to a gathering of scientists at Clavius: Congratulations on your discovery, which may well prove to be amongst the most significant in the history of science. . . . Now I’m sure you’re all aware of the extremely grave potential for cultural shock and social disorientation contained in this present situation if the facts were prematurely and suddenly made public without adequate preparation and conditioning. After making this statement, Floyd informs the scientists that “security oaths” will be required of them until it is decided when and how to inform the public. With this plotline, Kubrick and Clarke also provide artistic motivation for the world’s space conspiracy theorists. Among the most popular and absurd conspiracies are that NASA faked the moon landings, NASA or the military has found extraterrestrial life but refuse to tell us (Steven Spielberg’s Close Encounters of the Third Kind [1977]), and mainstream scientists and archaeologists have conspired to deny evidence that proves “ancient astronauts” have visited Earth (Ancient Aliens [2010-]). Meaning 3: Jupiter Monolith The “Jupiter” monolith is a symbol of the cosmic sublime and the infiniteness and mysteriousness of the universe. The monolith provides the apes with an experience of the sublime, thus inspiring them to develop technologies that eventually send them into the stars when they become humans. Such vastness and mystery will seduce the ever-curious human species into leaving Earth and exploring space. 
On the Discovery One spacecraft, Bowman accidentally uncovers the existence of the moon monolith when he comes across a video announcement in the spacecraft’s computer system (as he is turning off the HAL 9000 computer). An unnamed spokesman states: “Eighteen months ago, the first evidence of intelligent life off the Earth was discovered. It was buried 40 feet below the lunar surface, near the crater Tycho. Except for a single, very powerful, radio emission aimed at Jupiter, the four-million-year-old black monolith has remained completely inert. Its origin and purpose still a total mystery.” [Italics mine.] The idea of the monolith as a symbol for the sublime and cosmic mystery is confirmed in the next scene, when we see the Jupiter monolith (black with a cobalt blue tint) floating and slowly tumbling against the blackness of the starry skies surrounding Jupiter and its moons. Discovery One is shown stationary near Jupiter. Soon, Bowman exits Discovery One in the space pod, flies toward us in space, and then enters the Star-Gate sequence. Seduced by the monolith, Bowman ventures untold numbers of light years through the immensity of the universe and its array of cosmic and intergalactic forms. Meaning 4: Hotel Monolith The “hotel suite” monolith symbolizes the void in human meaning, yet in pursuit of that meaning or meaninglessness lies human destiny, a fate from which there is no exit. Of course, the other monoliths symbolize this indirectly. The monolith makes a final appearance in the hotel-like suite where Bowman has arrived upon conclusion of his journey through the Star-Gate. After rapidly aging to become a very old man lying in bed, Bowman’s last act is to point toward the monolith standing at the foot of his bed. Bowman dies or is transformed while thinking of the monolith. Kubrick zooms into the blackness of the monolith, which envelopes the screen. As Also Sprach Zarathustra rises to a final crescendo, we are instantly returned to our area of the universe. We see the moon, followed by Earth and then the Star-Child, who is either Bowman reborn, the infant heir to his space-faring legacy, or the first of a new cosmic Ubermensch that evolved from current space-faring humans. Perhaps Bowman’s rebirth as a Star-Child symbolizes the Nietzschean recurrence, though now the Star-Child represents the superhumans who explore the universe with a new cosmic narrative. The Star-Child gazes down at Earth and then straight at us, eyes wide open as the film ends. With the Star-Child, the trajectory seems complete, from ape to astronaut to astral species. Through Bowman the space spore, we have returned to “The Dawn of Man” — Star-Child, stargazer, space voyager, and seeker of beauty, meaning, and purpose. 2001 at 50: The Specter of the Monolith Ultimately, the black monolith and 2001 pose the question of what we humans will become as we venture into a magnificent universe in which we are not central, not significant, and maybe not alone. The monolith signifies the complexity of mysteries and meanings, if any, in Kubrick’s “indifferent” universe — the marvelous cosmos that allows us to exist on a tiny planet yet seems eternally unconcerned with the fate of the many species that populate Earth. Like the towering star factories in the Hubble images, the monolith is a pillar of beautiful indifference yet also a beacon for wonder and curiosity in a gigantic universe. The Hubble images give a moment of the sublime, the simultaneous sense of awesomeness and meaninglessness. 
When we scan the Hubble’s cosmic images and those from the world’s many telescopes, our aesthetic sense grasps their beauty while our reason affirms their scale and splendor, yet our minds are blown and we end up dizzy or intellectually paralyzed by what it means for our species, the tiny species with big brains and a yearning for significance. We gaze into the sublime wonders of the cosmos, yet sensing nihilism and meaninglessness we retreat from any new possibilities offered by the cosmic blank slate. Astride the abyss between now and what’s possible, we get vertigo and step back into the comfort of the traditional narratives that order life on Earth, even if those narratives are completely false. We humans apparently can’t handle the paradoxical meaning of our greatest scientific achievement and most important philosophical discovery: The universe is vast and majestic, and our species is insignificant and might be utterly meaningless. With the phrase “specter of the monolith” (also the title of my book), I am naming a complex existential moment — the simultaneous experience of the sublime and nihilism. As a new space age ramps up in the 21st century, future astronauts and the human species itself will inevitably face the specter of the monolith and the challenge of nihilism and meaninglessness. We have yet to embrace the sublime as a potential counter to nihilism in developing a universal human narrative for a space-faring species and a peaceful planetary civilization. In Thus Spoke Zarathustra (the book that inspired Richard Strauss to write the symphony Also Sprach Zarathustra [1896], which was later used by Kubrick in 2001), Friedrich Nietzsche speculated that since humans are the superior species that evolved from apes, there might be an equally greater species that would evolve from humans — what he termed the “Ubermensch” or “Superman.” Nietzsche wrote how “man is a rope stretched between the animal and the Superman — a rope over an abyss.” So what comes next? What will emerge in the next stage of human evolution? That’s the question Kubrick poses at the end of 2001, with the Star-Child appearing against the blackness of the cosmos, Earth literally rising in his gaze. As a space-faring species, what will humans make of themselves in an awe-inspiring universe with unlimited possibility? That’s where the monolith has profound metaphorical meaning. Tall, sleek, and black, the monolith is an icon of awe and the cosmic void, yet it’s also a towering blank slate for us to write a new philosophy for the future of the human species. 2001: What Can We Hope For? The very opening of 2001 begins with three minutes of a completely black screen, accompanied by Ligeti’s Atmospheres. The monotonal sound textures suggest a monochromatic existential void, the beginning of the intellectual journey for the human species. The origins and purpose of the monoliths are never explained, though they trigger and inspire events. Bowman’s journey and the hotel suite are never explained. In the end, the black monolith that seduced the apes seems to have given birth to a Star-Child and space-faring species in search of its cosmic meaning and existential purpose. 2001 has several meanings that indicate what we can hope for now and in a future cosmic narrative: 1) We are an evolutionary species capable of great things. We evolved from apes to artists to astronauts, from simians to scientists and space voyagers. 
Inspired and seduced by the monolith, we created a technological civilization capable of exploring the stars and seeking to understand its origins and destiny via art, science, and philosophy. This is an incredible achievement for our species of which we should be proud. 2) We are not alone. The existence of the monolith with the apes is a hopeful message, in that the extraterrestrials were benevolent and sought to inspire the most advanced species on Earth at the time. 3) We need to be careful with our advancing digital technologies. Via the flawed artificial intelligence of the HAL 9000, 2001 provides a warning to humans about the seductive power of our technology. We might find ourselves serving the technology rather than the technology serving us. More radically, perhaps HAL symbolizes the next leap in evolution, the technological Ubermensch to succeed humans. 4) We face the challenge of cosmic nihilism. As symbolized by the final two monoliths, we face a philosophical void, for there is no intrinsic or self-evident meaning for human existence in a vast and wondrous universe. 5) There is no exit from the cosmic sublime. Though our science and technology are accelerating into the universe as if on autopilot, 2001 suggests our discoveries will disrupt our traditional narratives and that our journey into space is the next step in the continuing existential quest. In the search lies our destiny, a species seeking meaning and purpose amid the awe-inspiring galaxies and voids. 6) Our first space spores have been launched. As symbolized in 2001 (and by technologies such as Apollo and Voyager), we are launching our first spores into the cosmos. In effect, 2001 is one of those spores, though it is artistic and philosophical. With our knowledge of the cosmos, we have the opportunity to become Star-Children and philosophical Ubermensches, to be the artists, scientists, philosophers, voyagers, and tourists of the cosmos — seeking not merely to survive but peacefully pursue our existential quest in a beautiful and sublime universe. All of this is why 2001 is the greatest space film and a towering work of art and philosophy, offering us a vision of hope and meaning in a majestic and awe-inspiring universe. Fifty years later, it’s time to begin developing a new space and humanist philosophy for the human species, a philosophy and cosmic narrative that situates us in the sublime universe from which we evolved. ____________________ In this book, I use the sublime as the starting point for developing a new space narrative and space philosophy. The above is based on passages from my new book, Specter of the Monolith (2017). For more information or to purchase the book in Amazon, click here. To view my my 12-minute video homage to 2001 and the monolith in Vimeo, see below.
https://medium.com/explosion-of-awareness/honoring-the-50th-anniversary-of-2001-the-monolith-and-hope-for-the-human-species-1704c93501a0
['Barry Vacker']
2020-08-22 21:44:28.433000+00:00
['Space', 'Philosophy', 'Art', 'Atheism', 'Science']
AWS Glue vs EMR
Amazon Web Services provides two service options capable of performing ETL: Glue and Elastic MapReduce (EMR). If they both do a similar job, why would you choose one over the other? This article details some fundamental differences between the two. AWS Glue is a pay-as-you-go, serverless ETL tool with very little infrastructure set-up required. It automates much of the effort involved in writing, executing and monitoring ETL jobs. If your data is structured you can take advantage of Crawlers, which can infer the schema, identify file formats and populate metadata in Glue's Data Catalogue. Based on your specified ETL criteria, Glue can automatically generate Python or Scala code for you, and it provides a nice UI for job monitoring and scheduling. In comparison, EMR is a big data platform designed to reduce the cost of processing and analysing huge amounts of data. It is a managed service where you configure your own cluster of EC2 instances. You have complete control over the configuration and can install Hadoop ecosystem components, which makes EMR an incredibly flexible and complex service. Its use cases are vast. Data scientists can use EMR to run machine learning jobs utilising the TensorFlow library, analysts can run SQL queries on Presto, engineers can utilise EMR's integration with streaming applications such as Kinesis or Spark… the list goes on! You could replace Glue with EMR but not vice versa; EMR has far more capabilities than its serverless counterpart. Another thing to consider when choosing between these tools is cost. Glue is more expensive than EMR when comparing similar cluster configurations, probably because you're paying for the serverless privilege and ease of set-up. Drop's Data Lake solution found a reduction in cold start time and an 80% reduction in cost when migrating from Glue to EMR. There are currently only 3 Glue worker types available for configuration, providing a maximum of 32GB of executor memory. This restriction may become problematic if you're writing complex joins in your business logic. If a join isn't optimised for performance then executor memory can quickly be consumed and the job may fail. The same can occur if you have to unpack a very large zip/gzip file, since all of the data will be held on one node (such are the workings of Spark!). In contrast to this, EMR has a plethora of supported instance types to choose from! (Although you'd still want to optimise joins to improve performance 😃 and ideally avoid zip and gzip formats!) One advantage of using AWS Glue is that it automatically sends logs to CloudWatch, which is very handy if your architecture uses multiple AWS services, providing you with one centralised location for monitoring and alerting. EMR, on the other hand, sends logs to S3 by default, although you can install the CloudWatch agent via EMR's bootstrap configuration. In conclusion, if your workforce is new to AWS configuration and you only want to execute simple ETL, Glue might be a sensible option. However, if you wish to leverage Hadoop technologies and perform more complex transformations, EMR is the more viable solution. Thank you for reading! 😊
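As a footnote to the comparison above, the operational difference is easy to see from Python with boto3: a Glue run is a single API call against a pre-defined job, while EMR asks you to describe the cluster yourself. This is a minimal sketch; the job name, script paths, roles and instance settings are placeholders, not values from this article.

# Minimal sketch using boto3; all names, paths and roles below are placeholders.
import boto3

# Glue: start a run of an already-defined job (serverless, nothing to size).
glue = boto3.client("glue")
glue.start_job_run(
    JobName="example-etl-job",                                   # placeholder job name
    Arguments={"--source_path": "s3://example-bucket/raw/"},     # placeholder argument
)

# EMR: you describe the cluster yourself (instance types, counts, applications).
emr = boto3.client("emr")
emr.run_job_flow(
    Name="example-etl-cluster",                                  # placeholder cluster name
    ReleaseLabel="emr-6.5.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "example-spark-step",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/scripts/etl.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)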
https://medium.com/swlh/aws-glue-vs-emr-433b53872b30
['Leah Tarbuck']
2020-09-04 14:13:09.882000+00:00
['Emr', 'Etl', 'AWS', 'Glue']
Plotting business locations on maps using multiple plotting libraries in Python
I was browsing through Kaggle and came across a dataset which included locations in latitudes and longitudes. I hadn’t worked with plotting on maps before, so I decided to take this dataset and explore the various options available for working with them. This is a basic guide about what I did and the inferences I drew about those libraries. The aim was to look for a library that was very easy to use and worked seamlessly out of the box for plotting on maps. Another aim was to find a library that could print all the points in the dataset at once (190,000+ points). Here, I explored four libraries: gmplot, geopandas, plotly and bokeh. I’ll import the libraries as and when needed rather than importing them all in the beginning. The complete code is available as a GitHub repo. Let’s begin!!

Dataset

I took the dataset from Kaggle and saved it inside the data folder as dataset.csv. It includes a list of businesses complete with their address, state, location and more. I extracted the latitude, longitude, state, unique_states and name in separate arrays. I also extracted the minimum and maximum latitude and longitude values, which would help me zoom into the specific area on a world map, as we’ll see below. For each library, I’ll plot the first 1000 locations and then try to plot all the points.

gmplot

gmplot is a library that generates the plot by creating an HTML file which we can load in our browser. It is one of the easiest and quickest ways of getting started with plotting data on a map. It plots the information on Google Maps and hence looks really nice.
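Following that description, here is a minimal sketch of the gmplot workflow. The file path and column names are assumptions based on the text, not taken from the author’s repo, and newer gmplot releases may also expect a Google Maps API key:

```python
import pandas as pd
import gmplot

# Load the Kaggle dataset (path and column names are assumed for illustration)
df = pd.read_csv("data/dataset.csv")
lats = df["latitude"].values[:1000]   # plot the first 1000 locations first
lons = df["longitude"].values[:1000]

# Centre the map on the data, then write the plot out as an HTML file
gmap = gmplot.GoogleMapPlotter(lats.mean(), lons.mean(), zoom=4)
gmap.scatter(lats, lons, color="#FF0000", size=40, marker=False)
gmap.draw("businesses.html")  # open this file in a browser to see the map
```

Opening businesses.html then shows the points rendered on Google Maps tiles, which is what makes gmplot feel so effortless for a first pass.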
https://towardsdatascience.com/plotting-business-locations-on-maps-using-multiple-plotting-libraries-in-python-45a00ea770af
['Karan Bhanot']
2019-04-30 22:10:44.845000+00:00
['Data Visualization', 'Technology', 'Productivity', 'Data', 'Programming']
AirPods Max: A Showcase of Apple’s Extravagance
Image Credit: Apple

Diversification is something that should be applauded. We have all met that person who is incredibly one-dimensional. No one enjoys talking to a person who has a limited range of conversation. But we all know a few people like that, and their existence should make us appreciate the friends we can talk to about almost anything. When it comes to companies that make the products we buy, we sometimes glorify the one-dimensional company. The company that only makes one kind of product can sometimes be glorified as being “laser-focused” and “true to its core audience”. Some PC gaming computer manufacturers come to mind here. A company like MSI is beloved by gamers because all it produces is hardware catered to the gaming community. It is when a company makes various products for various types of people that a disdain seems to emerge. This was the first thing I thought about when Apple unveiled the AirPods Max this week: that this is a further indication of the varied nature of Apple, a company that appeals to all sorts of people.

So, What Are They?

Image Credit: Apple

To put it plainly, the AirPods Max are high-end over-ear wireless headphones. They offer exciting features such as active noise cancellation, a comfortable design, a digital crown for precise controls, and spatial audio for Apple devices. They are objectively a great pair of headphones. What has many people up in arms is how much they cost. The AirPods Max cost $550. They are expensive and positioned as a premium product because they are a premium product. The instant comparison by many was to two headphones that tech reviewers have mentioned a lot this year: Sony’s WH-1000XM4 and Bose’s NC 700, which cost $350 and $400 respectively. The reason for the comparison is appropriate on the surface. The offerings from Sony and Bose are excellent wireless listening solutions with active noise cancellation and a comfortable fit. As a result, the opinion formed has been that these headphones cost $200 more than they should.

My counter to that point is that Apple has always positioned itself a notch above its comparable competition in terms of price. When AirPods were first introduced, there were other wireless earbud solutions available at a lower price; there have always been Windows laptops equally specced to the MacBook Pro and Air that are often hundreds of dollars cheaper; and the iPhone is frequently priced higher than many of its Android competitors. Apple has never been afraid to charge more than everyone else in the room. In some ways, this is what makes the company so polarizing. To some, this is an indication of craftsmanship and quality, while to others it is a manifestation of corporate greed.

What the AirPods Max truly represent is the high-end wireless audio solution for the Apple user. As these headphones feature the vaunted H1 chip that allows for additional features with Apple hardware, they are designed to be utilized by someone who is fully invested in Apple products and wants a premium listening experience for their music. This can be seen through the fine details that the company is choosing to highlight with this product. A focus on the aesthetic design and the comfort level provided by the mesh head strap is indicative of this.
Apple views its current wireless headphone strategy like this: the standard AirPods for $159 are the communication headphones, the $250 AirPods Pro are the hybrid of communication and a decent music listening experience, while the AirPods Max at $550 represent the full audio-focused experience for its users. When looked at in this regard, what they are trying to accomplish makes a little more sense. The elephant in the room, however, continues to be the fact that Apple manages another audio brand: Beats.

The Beats Conundrum

Image Credit: James Yarema via Unsplash

I remember when the first Beats by Dre headphones were released in 2008. As a longtime fan of hip-hop music, there was a certain excitement about a producer who was a household name having his signature headphones. Over the years I’ve used various Beats products and enjoyed the experience, since they typically complement the type of music I listen to, which features heavy bass and drum patterns. In 2012, when the partnership between Beats and Monster was evaporating, there seemed to be a feeling that the Beats brand was dead. That was until 2014, when Apple bought the rights to the Beats brand and infused it into its product portfolio. The move at the time made a lot of sense to me, since Apple has always had an interest in audio with its iPod and iTunes brands, so having a headphone company in-house was a natural progression. The concern was whether they would be able to give the Beats brand an Apple flair, or whether it would languish as the proverbial red-headed stepchild in the Apple product line.

In many ways, this is precisely what has happened. The Beats brand continued to be one that didn’t exactly conform to the clean minimalist feel of other Apple products. The Beats image still felt like something very different from the products that defined Apple, like the iPhone and iPad. Beats were loud and colorful, a product designed for a younger generation, whereas the rest of Apple’s product line took a more mass-appeal approach, appealing to all age ranges with understated minimalism. When the first AirPods launched, the writing was on the wall that the Beats brand might soon be a thing of the past, or at the very least much less of a focus in Cupertino. And sure enough, as multiple generations of AirPods have been released, the focus from Apple on the Beats brand has faded into the background. The company has quietly updated the entire lineup with more accurate sound profiles, taking away the headphones’ previous reputation of only being good for bass-heavy music.

For years, Beats has attempted to remain relevant in the high-end consumer headphone space by competing directly with Sony and Bose. The AirPods Max could not interfere with this, so they aim to be a notch above: a more approachable entry into high-fidelity wireless audio, competing with niche solutions from companies like Sennheiser and Bang & Olufsen. In the context of the entire Apple audio lineup, they make sense in that regard.

Segmentation Outrage

Image Credit: Zhang Kaiyv via Unsplash

As was mentioned earlier, a lot of people instantly scoffed at the price tag of the AirPods Max. These are the same people that have scoffed at the price of the Mac Pro, the wheels for the Mac Pro, and the Pro Display XDR. Apple is capable of making some very expensive hardware. But it is also capable of making reasonably priced hardware that is objectively very good.
A prime example of this would be the 2020 iPhone SE and the 8th-generation iPad, both of which can be had for under $400 (or $300 less than the cost of wheels for the Mac Pro). There is a diversity to Apple whereby it can release a smartwatch that undercuts the price of Samsung watches (the Apple Watch SE) while still crafting a pair of $550 wireless headphones. While people are angry at Apple for having the audacity to release a pair of headphones this expensive, the reality is that the company offers wireless audio solutions at a variety of price points, from the $50 Beats Flex all the way to the AirPods Max. Apple, more than most of the companies it competes with, is able to create solutions at all ends of the pricing spectrum, and as a result is noticed more for these efforts. The company is not afraid to make a niche product that will be criticized for its price despite how small an audience it may have.

The genius of the AirPods Max, and what differentiates them from other solutions in the $300–400 range, is the replaceable ear cups. They are attached by magnets, making them easy to remove for cleaning or replacement. This ensures a longer life for the headphones. Granted, $550 is still a lot for almost everyone to spend on headphones. But Apple is not attempting to make this a mass-market set of headphones. They are the best wireless audio that the company has to offer, and quite often the best is not the one that people buy. In the car industry, the bulk of people end up buying the midsize sedan and not the vaunted luxury car. The same dynamic is at play here.

What the AirPods Max do for Apple is reinforce its commitment to music while showing that it can be extravagant when it needs to be. While some may see this product release as tone-deaf to the economic realities of the “average consumer”, I see it differently. Apple has made something for someone who wants the best possible audio quality on their Apple devices. And if the asking price is too high, the company will gladly sell you a pair of Beats Solo for half the price.
https://medium.com/the-shadow/airpods-max-a-showcase-of-apples-extravagance-2b5960da6e09
['Omar Zahran']
2020-12-13 18:04:15.582000+00:00
['Apple', 'Technology', 'Headphones', 'Innovation', 'Audio']
Escape from Animal Crossing. Though Nintendo’s breezy island…
Escape from Animal Crossing

Though Nintendo’s breezy island simulator has become increasingly politicized, the community around its soundtrack has formed a musical sanctuary.

Nintendo

For the last few months, Animal Crossing: New Horizons (ACNH) has been a safe space. Released on March 20, it arrived just in time for COVID-19 quarantines. The game quickly transitioned from real-life simulator to real life. With public gatherings and activities banned for much of the spring season, ACNH became an outlet for expression and camaraderie during dark times. I’ve used it as a coping method, darting around my island Palumpolum, named for the Final Fantasy XIII city, as I would be doing right now had my job not gone entirely digital. Equipped with wireless communication options — New Horizons allows players to vacation on others’ islands via Nintendo Online — the game has successfully replaced the need to hang out in person. School closures opened the door for virtual celebrations through the game, just as some folks have cobbled together a late-night talk show, live-streamed over Twitch with guests and an audience, filling a void left by highly produced network shows like Jimmy Kimmel Live.

Now, as stores begin to reopen and public facilities remove their shutters, ACNH is serving as a diversion from the 24-hour news cycle, which has been squarely focused on public outcry against the murder of George Floyd by the Minneapolis police. Peaceful protests, marching, looting and police retaliation with rubber bullets and tear gas have raged throughout the United States, and the world, for more than three weeks, creating a necessary, unified voice in support of the Black Lives Matter movement.

As much as Animal Crossing has served as a distraction from an increasingly bleak reality, it has also engulfed the protesting spirit. In-game islands have morphed from breezy getaways to hyper-aware rally hubs. Players have used the game’s design tools to craft “Black Lives Matter” and “Say their names” signs, in concert with the real-life movements. And the protesting spirit reaches beyond BLM to a variety of causes. PETA took to Animal Crossing to, ironically, protest the game and its treatment of its furry residents. Earlier this year, residents of Hong Kong protested China’s oppression of the region’s rights to expression, leading to the game’s removal from Chinese storefronts. Like it or not, Animal Crossing is being weaponized, nudging ever so slightly away from its developer-minded intentions of collecting shells and catching bugs.

But as New Horizons becomes increasingly politicized and fans denounce the game’s inherently capitalistic systems, it continues to offer a sanctuary: its music. Composed by Yasuaki Iwata, Yumi Takahashi, Shinobu Nagata, Sayako Doi, and Masato Ohashi, the ACNH soundtrack provides the escapism that the core game has trouble maintaining. It’s a lightweight feature, not meant to serve as anything more than background music, but it is arguably the game’s strongest proponent of living a simple island life. The first thing you hear when you load up Animal Crossing: New Horizons is a strolling flugelhorn melody. Lacking the audacity of Miles Davis or the vivacity of Dizzy Gillespie, the melody is a warm hug, eventually intertwining with ukulele, double bass, percussion, acoustic guitar and a sneaky accordion.
The intro track lasts a mere 1 minute and 16 seconds, longer than most will spend on the title screen, but its soothing nature is an invitation to stick around for a couple of loops. Lately, I’ve even been booting the song up on YouTube, playing it as I start my day. What’s special about the soundtrack, which features songs for various times of day, locations like the museum and the stores, and more, is the music’s simplicity and thematic cohesion. Most tracks feature just a handful of instruments, allowing the shush of the wind or the washing of waves to build the atmosphere.

Adding to the soundtrack’s beauty is the presence of in-game musician Totakeke. Performing under the alias K.K. Slider, he’s a Jack Russell Terrier wielding nothing but his acoustic guitar (unlike most animals, who wear some sort of clothing, K.K. is completely nude). He’s a digital representation of original series composer Kazumi Totaka and has been a game staple since the inaugural title Doubutsu no Mori (Animal Forest) released in Japan in 2001. Slider’s main purpose, beyond playing a concert for the islanders and signifying the “end” of the main game, is to create a host of songs, all of which are playable on in-game stereos and record players. He’s a multifaceted maestro, composing tracks inspired by world genres ranging from jazz (the album cover of which is inspired by the monochrome artwork of Blue Note Records) to soul (a cover of Bill Withers’ “Lean on Me”) to metal. K.K. even crafts his own original melodies, strumming along to game-specific songs for boating and city life. A guitarist by training, K.K. is also a vocalist, singing his tunes in the same digitized warble as the rest of the game’s inhabitants. It’s a refreshing use of music in a game attached to reality through the newswire — there aren’t any lyrics to misconstrue or phrases to interpret. K.K. just offers grooves to fit any mood you might think of.

The soundtrack’s good-natured spirit has spread to the larger community. Cover artists have entrenched themselves in the task of recreating all of ACNH’s music, whether in a group or solo. Few covers are more prolific than Nintendo’s official track, posted to YouTube on May 15, stitching together performances by the game’s musicians to recreate the theme song. ACNH’s music community is as soft and loveable as many of the game’s characters. Some of the songs, like Bubblegum K.K., have garnered fan-written lyrics, dressing up an upbeat anthem that also went viral on TikTok in April.

In the case of Danielle Minch, lead singer for alternative pop-punk band Behind the Facade (BtF), the music of Animal Crossing has been comforting during months of uncertainty. “I don’t typically cover game themes but I fell in love with Animal Crossing: New Horizons while incessantly playing it during quarantine,” Minch said. “That song brought me comfort and happiness.” Scroll through BtF’s discography, and it’s hard not to hear shades of Paramore. Hailing from Long Island, Minch and her bandmates, Danny Briones, Griffin Backer, and Eliran Malakov, exist in a state of hyper-awareness, their music drawing inspiration from the moods that stumble across their path. Minch’s voice cuts through the group’s driving instrumentals as she fights pessimism, anger and vulnerability, themes that run throughout 2013’s We Are The Fighters. All of that is to say, BtF’s Animal Crossing cover doesn’t really sound anything like the rest of their catalog.
What’s more, Minch wasn’t a fan of 2012’s Animal Crossing: New Leaf, admitting at the time she “didn’t catch the bug.” This time around, she said, “It probably helps that all my friends are playing too. We visit each other’s islands pretty regularly.” The positivity the New Horizons theme song brought her, combined with a melody that is easy to learn but complex enough to arrange, led her and her band to break from their norm.

The ACNH community of cover artists is all-inclusive, drawing interest from musicians from all walks of life. Take Izan Rubio, a classically trained guitarist who has performed solo and in quartets throughout Europe. Prior to the pandemic, he was booked everywhere from the National Library of Catalonia to concert halls in Trento, Italy. On one of his YouTube channels, however, he dives into musical expression through one of his favorite mediums: gaming. “The first impulse before I arrange a music piece for guitar has to be ‘wow, that is a very good musical piece,’” Rubio told me. “The first time I listened to the main theme from Animal Crossing: New Horizons I had that feeling.” Rubio’s cover creativity struck two months before the game’s release. After hearing its theme, he started arranging his guitar cover to fit a particular tuning style that would allow him to play the melody and bassline while preserving the song’s jazzy chords. In a way, his passion runs deeper for the music than for the game itself. Despite having covered some of the game’s soundtrack, he’s never played New Horizons. “Unfortunately, my life lately has not allowed me to have any time to play video games and live through such amazing stories. Still, I love the Nintendo Switch concept and can’t wait to see the implementation of new sound technologies in the PlayStation 5 and Xbox Series X,” he said.

Each of the artists I talked to noted that the simplicity of the New Horizons soundtrack was the gateway to arranging a cover. Central Washington University student Steven Higbee has been playing clarinet since fifth grade, but as a multi-instrumentalist with a fondness for the ocarina, he works especially hard to incorporate the sound of an ancient and limited instrument. “You have to account for the limited range with a lot of ocarinas, which definitely affects what keys you may make the arrangement in, or how you want to do your part writing,” he said. “Not only that, but you have to take into account that ocarinas can’t play dynamics — lower notes are softer and higher notes are louder.” The ocarina’s intricacies lead Higbee’s cover videos to be split-screen affairs, stitching together multiple performances structured around the piano, clarinet, ocarina and others.

Other artists, like Nico Mendoza, stumbled into Animal Crossing covers by chance. Despite being a lifelong Nintendo fan, Mendoza didn’t set out to record New Horizons’ music. “I started covering the ACNH soundtrack when I realized I had all the instruments in my arsenal to get insanely close to the original sound,” Mendoza said. “I’ve always been a huge Nintendo fan, but I had the urge to cover ACNH music once I saw how big this game had become.”

Gaming communities as a whole are polarizing. Lewd fan art often intersects with otherwise innocent games, just as streamers can come together to raise money for charities tackling cancer. Games are an art form, and will inevitably mingle with the news and pop culture of their time. Animal Crossing is here to preserve some of the world’s purity. And when the game fails, its music forms a sanctuary.
https://medium.com/the-riff/escaping-from-animal-crossing-15d9ab74cb91
['Brandon Johnson']
2020-06-28 05:22:25.768000+00:00
['Gaming', 'Music', 'Protest', 'Politics', 'Animal Crossing']
The Last Musician
Onophrian Fountain, Dubrovnik | Tien (2018)

He rests the rebec on his knee. It is that moment — the moment when the crowd has thinned and the next group of tourists has not arrived yet. The brief window of break when his time belongs to him and his thoughts begin to wander.

He had a dream. Once. It is difficult to believe but once upon a time, he was young. I am young, he tells himself. But once upon a time, when he was younger, he had a dream. He wanted to bring the music of his village into the world. The gentle song of the river, of the grassland and of the trees; the world is more interested in the harsh sounds of the metal, of the rock and of the electric.

He had a dream. And once upon a time, he naively thought the world owed it to him to listen to his music. The simple songs of his village’s river, of his village’s grassland and of his village’s trees. He felt he was entitled to the success. But the baffling sounds of the metal, of the rock and of the electric prevail. He prevailed too — not in a manner he is proud of, but he prevailed.

Forty years, he has been singing the meaningful songs of his village to tourists who listen. Forty years, he has been singing the meaningful songs of his village to tourists who do not listen. And forty years, he has been singing the meaningful songs of his village meaninglessly. As his rebec techniques improve, his rebec playing becomes hollow.

“Look Ma!” a child points excitedly. “What a funny violin!”

“Shh, don’t point. That’s rude,” the mother pulls her child away and they hurry off to rejoin their tour group.

I am still young, he reminds himself. Both the village and the world are far away, with him straddling in between. He is the last musician of his village to the world, the only one who still remembers the songs of his village.

He resumes his playing.
https://medium.com/a-cornered-gurl/the-last-musician-fa30465ac319
['Tien Skye']
2020-10-05 10:03:06.300000+00:00
['Life', 'Fiction', 'Dreams', 'A Cornered Gurl', 'Music']
Three Methods of Achieving Language Fluency in 3 Months.
Don’t be discouraged. If it were easy, everyone would be conversing in a wide range of languages on a daily basis. I’ve taught many students over the years and most of them ask: how can I learn English fast? It has taken some time and careful consideration, but as each year rolled over, I feel I have created an essential basis for anyone who wishes to learn a language quickly. There is no get-it-quick scheme or magic button to be pressed; this journey is going to take hard work, dedication and lots of coffee. But with tenacity and optimism, and the right set of goals, learning a language should and can be fun and exciting. Remember your goals and focus on what is important. If you make mistakes, that’s good, but if you lose motivation, you need to spice things up a notch. So here we go: three factors to help you get fluent, faster.

Routine is the Key

I kid you not, this is the most vital. If you stick to a core routine and allow your body and emotions to focus on a routine, then surely your ability to learn a language should also follow suit, no? Of course. When you wake up in the morning, if the first thing you do is check Facebook, you’re automatically creating a habit that will lock in, and you will continuously do that for the following three months. Take Facebook out of the equation and set an alarm to do 30 minutes of language grammar study, and your mind and energy will become attuned to waking up at a certain hour. It will start to feel less like a chore and more like a habit. Creating healthy habits and sticking to them is essential with any task, especially with language acquisition.

TIP: So many people forget to have a notebook and pen beside the bed. Also, replace your mobile with a proper alarm clock. That way you are not obliged to pick up your phone. Use the pen and paper to write down as many phrases from the previous day as you can, to help trigger memory recall the moment the white analog clock goes off.

Sticky Notes are Your Best Friend

This is a little something that I picked up on my travels in South Korea. I had been a teacher for nearly three years at the time and the simple premise had never once crossed my mind. All you need is a decent couple of hours and a few pads of sticky notes. A pen will also be required. Go around your house and label everything, I mean everything, with sticky notes written in your desired language. When you go to the cupboard to get a can of beans, you’ll not only have a visual representation of what the word is but the spelling and correct article use as well.

TIP: Instead of writing the singular word, take the extra time to write plural forms and the correct articles, prepositions or phrases that go with that word. For example, if you are learning Portuguese, when you open the fridge you should have written uma garrafa de leite (a bottle of milk) instead of just leite. We instinctively know what our household items and spaces are called, yet in other languages, they can be so diverse. Instead of learning them in an app or in a classroom, simply apply them to your new routine. After a few months, you’ll have learnt everything in your house in two languages.

Get Going on Grammar

I cannot stress this enough. Grammar is crucial when learning a language. I’ve taught many students in my time who are excellent speakers, but their grammar is atrocious. Although they were confident in what they were saying, to people who are not familiar with listening to grammatical errors, the sounds and sentence structure sounded off and strange.
Your first few weeks should be spent studying grammar, every day for at least an hour, to really get the key concepts down. Particularly where the language has reverse subject-verb agreement or masculine and feminine words, these are the little things you need to become familiar with quickly to ensure you get your head around the language within the three months.

TIP: Write down each grammatical tense on an individual page in a notebook. The top of the page should have the tense you are learning, followed by what it is in your native language. The second part should be the language you are trying to learn, and the third part needs to be at least five examples.

Case Study

A student of mine travelled to Australia with very little English. While it may seem irrelevant to you, like so many other travellers to Australia, English has been regarded as one of the hardest languages to learn, so this study should offer some insight. After the first class, he told me what his intentions were and I gave him clear instructions, much like the notes in this article. He went home and took on some more of his own ideas as well as mine. Each day that passed, I noticed he was using grammar and vocabulary that was not being taught in the unit material. Rather, he was conducting his own studies and implementing them in the class. After three months of lessons, we sat down and had a conversation.

Me: What have you been doing outside of the classroom to assist with your studies?

Student: Many things. But I guess the most important was surrounding myself with English speakers rather than relying on native Portuguese speakers.

He flew back to Brazil with an advanced level of English.

In Summary

I know most readers, much like myself at times, like to scroll to the end to get the main points of the story, rather than reading line for line. I cannot stress enough how important reading and re-reading will be when learning a language. It is a great way to visually see how grammar is constructed and to spot any words you may be unfamiliar with, as I am sure there will be many, which you can highlight or underline. However, for those of you who don’t like to read and merely want the quick notes, here they are:

Change up your routine and morning habits. If you start the day strong, then the rest of the learning process will be a breeze.

Use sticky notes everywhere. They’re inexpensive and a great way to visualise vocabulary.

Don’t skip the grammar. This is the foundation of the language and will provide great support in building your language knowledge.

Learning a language won’t just improve your conversational skills in distant countries, but will also help you develop healthier learning habits as well.
https://medium.com/curious/three-methods-ofachieving-language-fluency-in-3-months-7cd22bc769ce
['Sam Taylor']
2020-11-21 10:44:17.314000+00:00
['Learning', 'Self Improvement', 'Education', 'Language', 'Creativity']
You can be anyone you want online; that apparently includes me.
A little more than fifteen years ago, my ex-husband pulled the car to a stop on a pitch-dark stretch of gravel road in the middle of a snowstorm. He proceeded to beat me as I sat in the front seat. When he stopped to catch his breath, I crawled to the back seat, unbuckled my infant daughter from her car seat, and stepped outside of the vehicle. There was a light on at a house a quarter mile away and I thought I could get her and myself to safety in fairly short order. I started screaming for help as I ran toward the house and he fled in the car. When I got to the house and knocked, there was nobody home. The doors were locked and there wasn’t another house around for at least five miles. I carried the baby to the shelter of the barn and sat with a friendly gray horse for about an hour, hoping someone would arrive. They didn’t, but the snow let up. Fearing we’d freeze, or that he would come back, I took my coat off, wrapped it around my daughter, and walked five miles to town. It took hours. Fortunately, I knew a couple that lived right on the edge of the small town I arrived at. I knocked on their door and they called the police.

This was not the first or the last time I’d be attacked by him physically. I have stab scars in my legs. I have 2 collapsed vertebrae. I have a burn scar under my hairline at the back of my neck. That’s just the start. But, by far, the worst of what he’s done has been the psychological abuse.

Eventually, based on the physical abuse, I was able to get the court’s permission to take my daughter out of state. We moved to Tennessee, where I have family, to get away from him. Shortly after arriving in Tennessee, though, I started having issues with cyberstalking. My picture and pictures of my daughter were being pulled off the now-defunct social media platform Hi5 and winding up in ads on Craigslist claiming I’d kidnapped her, that I was abusing her, or that I was a prostitute. An unfortunate legal requirement was that I had to provide him my phone number at that time so he could call and check on his daughter, the one he’d never paid any attention to. I started having midnight callers from all over the country wanting to talk dirty to me. When I’d hang up, they’d call back angry. Every time I’d change my number, I was required to give him the new one, and every time the calls would pick up again. Eventually, he lost his parental rights and I was no longer required to provide him my contact information. I moved from the house with the address he had into a different home.

Things settled down for a while, but when my first book was published, things started up again. Every time my book would be featured on a website, the comments would fill with his usual lies about me in an attempt to assassinate my character. As I stopped advertising the book and doing interviews, that round of terror fizzled out.

In 2015, I posted a Facebook comment on a popular page’s post. There was an online discussion going on about children who don’t know their fathers for varied reasons. I explained that I have always answered my daughter’s questions in an age-appropriate way, but with as much honesty as that allowed. The post went viral because the advice resounded with some other mothers. Then, as it became popular and a screenshot of the comment started surfacing on various other pages, the comments again filled with his hatefulness: lies about me, personal information, and vile, vulgar stories about our former relationship.
My dignity will not let me repeat them, but you can imagine the things a hateful man can think up to say about a woman he has previously been intimate with and now seeks to ruin.

At the age of 13, our daughter came out as a lesbian. Part of her coming out was launching her own social media campaign to bring awareness to the anti-LGBT bullying she had endured. Almost as soon as the campaign page opened, she received a lengthy message from an “anonymous friend” who told her they were concerned that she had been tarnished by her mother’s “neo-liberal” beliefs. They told her that if she’d been allowed a better relationship with her father, she’d be “normal” and that they were “so very sorry” her head had been “warped.”

Now, let me explain something here: I am very much a liberal on many subjects. However, there are certain policies I do find myself leaning more to the right on. I am married to a man who considers himself a conservative. I am the daughter of a man who considers himself a conservative. Many of my friends are conservatives. I have never had an issue with opposing beliefs. I only ask that opposition is met with kindness and intellect.

Recently, a false post was created and circulated on the internet. It was created to look like I had posted it at 12:50 a.m. on March 21st (when I was fast asleep, I assure you). The post is written in a syntax that is not mine, claiming that I was proud to have broken the law to ruin the business of a conservative white man. Not something I’d do. Not something I’d brag about. And the business described in the post does not exist where I live. The photo accompanying the post does not exist where I live. It was not me. But in the online storm that followed, the post was shared to multiple forums where someone discovered the name of my former employer. Not only did I suffer dozens of emails and comments on my articles and posts found across the internet, but a very good and respectable business was caught in that crossfire. They have had to remove their social media because of the aggressive way in which people responded to a fake post. It is not fair.

Unfortunately, though, after speaking with officers, there isn’t a whole lot that can be done about it. At worst, it’s a misdemeanor here, and to charge that, there needs to be a significant reason to believe that someone wishes to do me bodily harm. In short, abusers have a new way to abuse without repercussion. When your job is to be online, you can’t hide, either. All you can do is this:
https://medium.com/thrive-global/you-can-be-anyone-you-want-online-that-apparently-includes-me-ba6afa94cf8b
['Jenni M']
2019-03-29 14:12:58.462000+00:00
['Wisdom', 'Wellness', 'Abuse', 'Cyber Bullying', 'Advice']
Analytics for Small and Medium-Sized Firms
Analytics for Small and Medium-Sized Firms

by Michael Watson

A lot of press around analytics is focused on large and exciting projects. But analytics can be applied to firms of any size. Northwestern’s Kellogg School of Management’s recent magazine published an article entitled “Think you’re too small for big data?”. The gist is that few businesses are too small to benefit from data-driven methods. At Opex Analytics, we provide a variety of advanced solutions for small and medium-sized businesses. For example, in the area of transportation analytics, you can do quite a lot with relatively simple tools. In addition, map portals can be easily deployed. They are hosted in the cloud and are ready for mobile devices. You can gain a lot of new insight into your business by combining your current data on a map. And, since the application is mobile-ready, you can deploy it to your people in the field.

___________________________________________________________________

If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.
https://medium.com/opex-analytics/analytics-for-small-and-medium-sized-firms-5dfae25e015b
['Opex Analytics']
2019-04-25 17:12:00.991000+00:00
['Big Data', 'Analytics', 'Data Science', 'Business']
No, liberals don’t want to take your freedom and conservatives don’t hate poor people*
I want to capture how I feel at this time in history. “This time in history” is a funny phrase. Every time is a time in history. Every thing is either experience or memory. But I needed to get this out prior to next Tuesday’s election — the most important one in my lifetime. I think I’ve said that before. This time it’s truer than ever. We need to vote. And we need to vote for Joe Biden. This isn’t because he is amazing or anything. We all know he ain’t. Put simply, America may not be able to handle another four years like the ones we’ve just had.

This isn’t going to be a post about all the things wrong with Trump and the tone he has set for the country. That would take too long and others have done it better. Plus, he shouldn’t get all the “credit”. He simply tapped into something that was already there. No, I’d like to take this post down a more personal path, in an old-school weblog sort of way. I’d like to tell you about what has made me very sad and very angry in 2020.

I’ve always thought Americans, when it really came down to it, would stand together, would rise up as one. (Some) Wars have done it. I figured huge events like an alien invasion or global pandemic would have the same sort of effect. Unfortunately, the Covid-19 pandemic has not brought Americans together, except maybe in hospital beds near each other. We’ve had riots where some are angry and sad and some are out to sow chaos for their own gain. There are people proudly not wearing masks, disregarding the other people in our society for — you know — their freedom. And you have people getting together in large numbers, going to restaurants for — you know — boredom.

I really don’t like how my fellow Americans are behaving. More than anything, I dislike how I feel during these times. What I am feeling borders on hate for those who would, in my opinion, put their party and “winning” over the good of the country, and even their neighbors. I don’t like hating. I want to try to understand people. I am not interested in how they may see things, but in how it is that they came to see things the way they do. I would like to tear down the idea that we are our beliefs, versus just believing what we believe. We should believe what we believe but be who we are. We should hold our beliefs as less sacred, and be willing to change them when new information is presented. But today, we vote based on how we identify and which candidates best reflect that identity back at us. This is why saying someone is voting against their best interests is simply not understanding how that person sees the world and their place within it.

I won’t ask someone why they are seemingly voting against their interests ever again. Instead, I’ll be curious about the person. I’ll want to know: What motivates them? What is important to them? How do they see themselves, their community, their country? We need to understand each other, and for that we need to talk to each other. And here we get to the thing that has really got me down, especially in this time where we are (or should be) spending time more or less isolated with our families and small circles of friends. It is harder than ever to talk to those “on the other side”. We need to move away from politics as team sports where one side “wins”, one side loses, and the sum is zero. It isn’t winning to simply make the other side angry, or sad. We should be looking for ways to have the highest number of Americans improve their lives.
https://medium.com/alttext/no-liberals-dont-want-to-take-your-freedom-and-conservatives-don-t-hate-poor-people-c936994043f4
['Ben Edwards']
2020-11-17 19:03:41.213000+00:00
['Politics', 'Society', 'Sociology', 'Trump', 'Election 2020']
Creating an Ambitious New Web Platform for The Lancet Countdown
A Bespoke Website and Data Platform for The Lancet Countdown

Our team developed and designed a new bespoke website for The Lancet Countdown with a flexible, modular-layout CMS, giving administrators the ability to easily create new pages and amend content, including all text, images, resources and video. We also introduced a bespoke data platform into the new website, integrating with the Flourish visualisation platform. This new data platform features 41 data visualisations, allowing users to observe and navigate through the key indicators from the report in a more engaging way. We also designed and implemented five illustrations to graphically represent the five main categories into which the various indicators were grouped within the 2019 report.

Bespoke data platform integrated with Flourish visualisation.

Video Production

In addition to the new data platform, our production team at Univers Labs produced an animated video, interactively presenting the key findings from the 2019 Lancet Countdown report. Our team oversaw numerous critical aspects of this video, including the storyboarding, voice-over, design, animation and musical composition.

Lancet Countdown’s video, interactively presenting the findings from their 2019 report

A Redesigned Resources Page

The Lancet Countdown produces a range of highly regarded research materials which are extensively used by governments, policymakers, health professionals and the public. We redesigned the resources page, allowing users to explore the data, resources and reports, and ensuring that users can quickly and easily access pertinent information in the format that’s most relevant to their work and needs. We also implemented search filters on the resources page, allowing users to search by year and country.

Illustrations

In addition to the newly developed resources page, our team at Univers Labs redesigned a range of resources for The Lancet Countdown, refreshing policy briefing templates and developing a range of illustrations to be used within their marketing efforts.

Redesigned Policy Briefing Templates

Illustrations and Infographics for The Lancet Countdown Marketing Activities

“Our partnership with The Lancet Countdown has enabled Univers Labs to work on an ambitious web platform to coincide with the launch of their annual report. This project required content across a range of digital media types, calling on many different skills from our talented team, including video production, musical composition, and illustration to communicate the work of The Lancet Countdown in a number of ways that speaks to a wider audience”. — Max von Seibold, Head of Production.

This project launched on Wednesday 13th of November, in time for The Lancet Countdown’s annual report on climate change and human health, which has received excellent worldwide media coverage and recognition. Full case study coming to our website soon.
https://medium.com/universlabs/creating-an-ambitious-new-web-platform-for-the-lancet-countdown-b9d83065eba2
['Univers Labs']
2020-01-17 14:08:06.941000+00:00
['Climate Change', 'The Lancet', 'UX Design', 'UI Design', 'Development']
Top 10 Python Libraries for Data Science
“The goal is to turn data into information and information into insight.” Carly Fiorina — Former chief executive officer, Hewlett Packard. There are so many wonderful libraries being added to the Python ecosystem every single day, but these are the ones I’ve found the most intuitive and useful (and, unsurprisingly, the most popular)!
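In that spirit, here is a tiny sketch of the “data into insight” idea using two staples of the ecosystem, pandas and scikit-learn; the dataset and column names are invented purely for illustration:

```python
# A minimal sketch: raw numbers (data) become a fitted model (information)
# and a slope we can act on (insight). The dataset is synthetic.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({"ad_spend": [10, 20, 30, 40],
                   "sales":    [25, 45, 65, 85]})

model = LinearRegression().fit(df[["ad_spend"]], df["sales"])
print(f"each extra unit of ad spend adds ~{model.coef_[0]:.1f} units of sales")
```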
https://medium.com/analytics-vidhya/top-10-python-libraries-for-data-science-78a6a2c3871f
['Akhil Sonthi']
2020-09-24 13:00:02.142000+00:00
['Data Science', 'Python', 'Libraries', 'Data Analysis', 'Scikit Learn']
Myanmar still using landmines, global casualties ‘exceptionally high’: report
The report — an accounting of casualties and global stockpiles, as well as on progress towards mine removal and victim assistance — is released annually by the International Campaign to Ban Landmines. The coalition of NGOs spearheaded the anti-mine movement, leading to the 1997 treaty that banned the weapon’s use. The coalition says 164 countries have signed on to the treaty. But 33 others have not, including some of the world’s largest stockpilers of landmines: the United States, Russia, China, Pakistan, and India. From mid-2018 to October 2019, government security forces deployed mines in only one country, Myanmar, underscoring the ongoing conflicts raging on multiple fronts in the Southeast Asian nation. Accused of widespread rights abuses, Myanmar’s army largely operates without civilian oversight. Conflict between the military and the Arakan Army, a militant group drawn from the country’s ethnic Rakhine minority, has displaced about 40,000 people this year, the UN says. The Landmine Monitor report says there’s evidence of landmine casualties in previously uncontaminated areas. Ongoing conflicts in parts of Kachin and northern Shan States also continue to trap civilians, and a surge in landmine use is also fuelling a migrant exodus in some areas. While Myanmar’s army was the only government security force to deploy landmines in the last year, anti-government groups in Afghanistan, India, Myanmar, Nigeria, Pakistan, and Yemen also used the weapons. Researchers said they were unable to confirm allegations of landmine use in Cameroon, Colombia, Libya, Mali, the Philippines, Somalia, and Tunisia. Afghanistan, where the civilian toll continues to hover near record highs as its conflict intensifies, topped the list of landmine casualties in 2018. Even though most countries have signed on to the landmine ban, there’s still a large global stockpile among treaty signatories, which are allowed to retain mines “for training and research”. Still, these totals are just a fraction of the stockpiles held by countries who haven’t signed on: Russia’s cache of anti-personnel mines is estimated to be at least 26.5 million.
https://newhumanitarian.medium.com/myanmar-still-using-landmines-global-casualties-exceptionally-high-report-5c62e487e4c0
['The New Humanitarian']
2019-11-28 08:04:07.946000+00:00
['Conflict', 'Data Visualization', 'World', 'Landmines', 'Myanmar']
Square’s Open Approach to Code
Square’s Open Approach to Code

A quarter of a million lines of code later.

Written by Lindsay Wiese.

Nearly 3 years ago we open sourced our first project at Square: Retrofit. Since then we’ve released more than a quarter of a million lines of code in more than 60 open source projects — a project, on average, every 2.5 weeks. As a member of the open source community — and a company that’s benefited from many open source libraries — we have a responsibility to pay it forward. Our open source efforts also benefit our customers: making internal work suitable for public consumption requires an extra layer of polish that helps to improve our products across the board. So thank you to everyone who has submitted patches to our open source projects. We have much more to contribute. If you’re interested in learning more about open source at Square, you can read about our efforts here. For more information about the broad technical challenges we face at Square, visit squareup.com/careers.
https://medium.com/square-corner-blog/squares-open-approach-to-code-c94c04a9feae
['Square Engineering']
2019-04-18 21:51:57.026000+00:00
['Engineering', 'Open Source']
Staring into the Dark — An Introvert’s View of Quarantine
What I Miss in the Office

Zoom calls, messaging, screen sharing, emails, phone calls, and texts are great. This technology offers us the ability to work through this quarantine in as normal a state as possible. But as a software developer and data scientist, this technology does not offer me all I need to do my job.

Whiteboarding Sessions

The thing I have missed the most is whiteboarding sessions. When in the office, it was common for team members to migrate to one person’s desk or a conference room with a large whiteboard and begin to talk. These discussions could range over mathematics, visuals, application diagrams, and more. It was a way for us to work out a problem on paper and throw ideas back and forth. It allowed everyone in the small gathering to think through a problem out loud and get feedback. If you haven’t noticed yet, these sessions are not the same on Zoom. We have handled it the best we can, but now they tend to be someone screen sharing code or a PPT while talking through their ideas. Other developers or data scientists will chime in with their thoughts or approaches, but it can be hard to translate those thoughts effectively. After these meetings, you begin to wonder: did they understand what I was trying to say? Did I get my message across effectively to help them with their problem?

Hallway Conversations

The other aspect of working in an office that has changed is hallway conversations. Unless I am working directly with someone on a project, I often don’t see or talk to some people, whereas in the office we would chat in passing, see one another in meetings, and have lunch together. We no longer see these people, and they can begin to fade into the background as we spend our Zoom calls with those we work closely with and our free time working on our projects. Missing these hallway conversations and daily interactions has led me to begin reaching out more to people I do not often see. Checking in with people you do not often see will allow you to keep in touch with them and understand what they are working on. It may surprise you what work they are doing, and maybe you can work with them on a solution. I have found it beneficial to have 1:1s with individuals I don’t often see, as I can chat with them to catch up on home life, work life, and more. These conversations give us both a chance to talk things out and understand what the other has been doing. Consider reaching out to more individuals to talk and catch up.

Free Time

For myself at least, with the transition to remote work and more meetings, I have found it harder to find free time to work on my projects. Everything now requires a meeting. Where once you could walk up to someone’s desk to quickly whiteboard a concept, or ask a quick question in passing, now it commonly requires a Zoom. These calls are used to:

Get the person’s attention.

Share with them what you are working on.

Explain the issues you are facing or the decision you need to make.

Come to a call to action or conclusion from the meeting.

These meetings tend to last 30 minutes to 1 hour, and if you have many of them on your calendar, they can add up. These, combined with regular team meetings, sprint meetings, and more, can leave you with little time to focus on your work. How do you tackle this problem? For myself, I have begun blocking time off on the calendar. I typically block at least one day off on my calendar to deter people from scheduling meetings.
This allows me a good 8 to 9 hours to do focused development work on my projects, get through some project planning, and develop any presentations that I need for the week. I have found this helpful, as I can get done what I need to without interruption for at least one day every week.

Team Gatherings

The last area I have seen change the most from in-person to remote has been team gatherings. Before working in my current role, I worked remotely for another company, but the dynamics were different. I was able to go to work at least once a week, I saw my team regularly, and I had opportunities to attend team gatherings. When I began working in the office at my current job, we often would do team gatherings or attend events together, which was great. You got to know people and spend time with them outside of work. Once quarantine hit, this was no longer the case. You can tell me to try virtual happy hours or team meetings, but that does not work for every person. I have attended many of those, but it is not the same atmosphere as sitting in a room with people and chatting. I joined the team with enough time to get to know most people before going remote. But some members joined and then quickly went remote, giving them very little time to get to know everyone and bond as a team before being thrown into uncharted waters.
https://medium.com/the-innovation/staring-into-the-dark-an-introverts-view-of-quarantine-fa69571277f2
['Rose Day']
2020-11-14 17:01:03.876000+00:00
['Work From Home', 'Software Development', 'Remote Working', 'Productivity', 'Data Science']
3 Reasons Why You Should Start 2021 With a Vision Board
3 Reasons Why You Should Start 2021 With a Vision Board

A magazine contacted me just days after I made a vision board

Photo by Polina Zimmerman from Pexels

You would not believe the number of amazing opportunities that came my way a mere few weeks after making my vision board. Whether you believe in the theories behind manifestation or not, a visual reminder each morning to get up and work on your goals will inherently make your goals more tangible. Here’s what happens when you make a vision board (in my experience):

1. You’re forced to seriously think about what you want in life…

…which you don’t always get the opportunity to sit down and do. You can approach your vision board in one of two ways: you can make a short-term (yearly) vision board or a long-term one. Stop and ponder what you would like to do for a living in 5 years, what your dream house looks like, what car you’ll drive, what your day will look like in 10 years, what you want your net worth to be, etc. As exciting as this is, it can also be quite daunting. To do this, one approach I used was to make a mind map of everything I could think of.

Image by Author: The mind map that helped me make my vision board

This mind map is just as valuable as the vision board. If your mind map lays your dream life out before you, you’re doing it right. Not all of these things made it onto the vision board, but it definitely made my life easier when it came to choosing the images.

Bonus tip: Don’t pick up your nearest magazine to shred up and make your vision board with. This may lead you to put things on your vision board that you never intended to be there, just because they looked cool and exciting. For example, if you’ve picked up your nearest copy of Vogue, you will end up with a largely materialistic vision board, which may not have been your intention. Writing a mind map or list and then printing images off Google should make for a far more accurate vision board.

2. You are making a promise to yourself to work towards your dreams

By printing, cutting, and sticking your images down onto your board, you are making a commitment to each thing you stick down. Not everyone gives themselves the privilege of honouring their aspirations, but I imagine most people who make a vision board do. Depending on how you approach your board, you are telling yourself: these are things I will achieve; this is a visual representation of what my life will look like. Some also choose to put overly ambitious things on their vision board with the idea that even if they fall short, their life will look pretty amazing. “Shoot for the moon, and even if you fail, you’ll land among the stars.”

3. The universe will reward you in the strangest of ways

Not everyone will believe this, or they may believe that this would have happened without the vision board, and well, who knows? All I know is that I put on my vision board: “Become an F1 presenter”. Then, one week later, the top F1 team followed me on Twitter and I was picked up by the UK’s best motorsport magazine. Therefore I believe it is a combination of hard work and something a little extra. The truth is, I wake up and I see my vision board, either on my phone (as it is my phone wallpaper) or on my desk. This will kick you up the bum and remind you of what truly matters. Ask yourself whether what you are doing aligns with the goals you have set out; if the answer is no, stop. So the Twitter follow and magazine gig were a result of my hard work, but I like to think the universe had a say too.
https://medium.com/age-of-awareness/3-reasons-why-you-should-start-2021-with-a-vision-board-6f2077689392
['Lu Mar']
2020-12-15 03:47:45.602000+00:00
['Education', 'Self Improvement', 'Life', 'Creativity', 'Inspiration']
Sodomy and Me; A Tale of Activism
It’s 1997. I’m 28 years old and living in the beautiful island state of Tasmania, the last bastion of Australian anti-gay sentiment, dubbed by international media as “Bigots Island” due to some of the harshest anti-homosexual laws in the world. In those days I was a Mormon, a seriously square peg trying to fit into an even more serious round hole. An emerging feminist, I’d been struggling with the faith I was raised in since my early teens, and now, as a young woman, a wife, and a young mother, it was becoming ever more apparent that the religious community I existed in was not a place for individual expression, much less freedom of thought, and certainly not an ideal place for anyone who wasn’t white, straight, and male.

It was at a church function that I was approached by a woman asking that I sign a petition. Being the social activist type, I was happy to meet another politically engaged Mormon, so I sat down to listen to her. But as she passionately revealed the reason for her angst, I, in turn, became passionately aghast. Unbeknownst to me, this quaint little island community I called home had managed to retain a certain law that was suddenly being challenged, and by some foreigners too, if you please! How dare!

In 1997 the United Nations Human Rights Committee was putting pressure on Tasmania to repeal its archaic sodomy laws. The law, inherited from Britain after colonisation in 1788, criminalised anal intercourse, whether consensual or not. This law had stood unchallenged until the 1970s, when the first gay rights organisation, the Campaign Against Moral Persecution (CAMP), was founded and a shift in Australian attitudes towards homosexuality began. South Australia had led the charge in 1975, when distinctions were made between sodomy during rape and sodomy between two consenting adults; providing, of course, it was entered into behind closed doors. Other states soon followed, and by 1990 the whole country had abolished a morally biased religious law in favour of a more secular and liberal view; all states, that is, except Tasmania.

By keeping sodomy firmly entrenched in the criminal code, Tasmania had hoped to keep its pristine shores free of an unwanted homosexual contingent. Perhaps they could all just head north to a more liberal society, was the thinking of many Taswegians. Melbourne and Sydney — Australia’s own Sodom and Gomorrah — would be a much more appropriate gathering place, and surely criminalising anal sex would achieve this aim. After all, an appreciation for all things anal is what defines a homosexual, is it not?

As my fellow Mormon clutched her pearls, I found myself wondering: does the proliferation of homosexuality really come down to just one word? And what about gay women? How is their sexuality viewed legally? Little did I know that as I pondered this question, a young Tasmanian, Hannah Gadsby, was rotting “quietly in self-hatred” and, despite today being the darling of comedy, still struggles with the low self-esteem fostered during that period of intense hate in our state’s history.

So what is sodomy? Depending on whether you speak the Queen’s English or the President’s, you may have a different view of sodomy. According to the Oxford English Dictionary, it is, to put it bluntly, anal sex. However, according to the dictionary favoured by our American friends (Merriam-Webster), it also includes, to put it bluntly, oral sex.
If we were to search even further afield, the broadest definitions go so far as to include masturbation, which would put possibly 80% of the human population at serious risk of incarceration, sexually segregated incarceration at that — a rather ironic thought. As such, sodomy laws were used to criminalise love between two people of the same sex, regardless of gender.

The research told a different story

In 2011, Joshua Rosenberger, Professor of Global and Human Health at George Mason University, conducted the largest-ever study into the sexual behaviour of the gay and bisexual male community. “Of all sexual behaviours that men reported occurring during their last sexual event, those involving the anus were the least common,” Rosenberger said. “There is certainly a misguided belief that ‘gay sex equals anal sex,’ which is simply untrue much of the time.” The study, which was published in the Journal of Sexual Medicine, revealed that 75% of the 25,000 homosexual men participating in the research stated that oral sex was their preferred style of sexual intimacy.

And what about straight men? If Pornhub is anything to go by, a lot of straight men are sodomites, no more and no less than gay men, considering that half of them admit to it and the other half aspire to it. The only thing stopping them, it seems, is finding a willing female partner.

However, I didn’t need research to tell me what I already knew, though it sounds cliché: love is love. As for promiscuity (another archaic concept), that wanton act of having sex just for the simple joy of it: who was I to question other humans for doing something I rather enjoyed myself?

As I sat there listening to the outrage of a woman — who really felt that homosexuality was not only a crime that God should judge but one that she and my entire religious community needed to ensure they could continue to judge (and punish) too — a different jury had come in for me, and it was clear that I needed to speak up about this: in my church community, in my local community, and in my home, where I was just starting to raise a socially aware and ethical child.
https://medium.com/an-injustice/sodomy-and-me-a-tale-of-activism-492dcff8680c
['Sarah J. Baker']
2020-12-14 23:24:46.645000+00:00
['Equality', 'LGBT Rights', 'Society', 'Memoir', 'History']
Convolutional Neural Network Champions — Part 1: LeNet-5 (TensorFlow 2.x)
Applying convolutional layers to images and extracting their key features for analysis is the main premise of convolutional neural networks (aka “ConvNets”). Each conv-layer applied to an image divides it into small slices known as receptive fields, extracting important features from the image and neglecting less important ones. The kernel convolves with the image using a specific set of weights, multiplying its elements with the corresponding elements of the receptive field. It is common to use “pooling layers” in conjunction with conv-layers to downsample the convolved features and to reduce the sensitivity of the model to the locations of features in the input. Finally, by adding dense blocks to the model and formulating the problem (classification and/or regression), one can train such models using typical gradient descent algorithms such as SGD, ADAM, and RMSprop. Prior to 1998, the use of convolutional neural networks was limited, and the support vector machine was typically the method of choice in the field of image classification. This narrative changed when LeCun et al. [98] published their work on the use of gradient-based learning for handwritten digit recognition.

Data

LeNet models are developed on the MNIST data set. This data set consists of the handwritten digits 0–9; sixty thousand images are used for training/validation of the model, and ten thousand images are used to test it. The images in this data set are 28×28 pixels. An example can be seen in the following figure. The challenge of the MNIST data set is that the digits often show slight variations in shape and appearance (for example, the number 7 is written in different ways).

Examples of MNIST data sample

Looking at the labels in the MNIST data set, we can see that the label counts are balanced, meaning there is not too much disparity.

Label counts in the MNIST data set

Network Structure

LeCun et al. [98], The proposed structure of the LeNet-5 network

The proposed structure of LeNet-5 has 7 layers, excluding the input layer. As described in the Data section, the images used in this model are MNIST handwritten images. The proposed structure can be seen in the image above, taken from the LeCun et al. [98] paper. The details of each layer are as follows:

Layer C1 is the first conv-layer, with 6 feature maps and strides of 1. Using the formula given in the appendix, one can calculate the output dimension of this layer as 28×28, with 156 trainable parameters (refer to appendix 1 for details). The activation function of this layer is tanh (refer to appendix 2 for more details).

Layer S2 is an average pooling layer. This layer maps average values from the previous conv-layer to the next conv-layer. The pooling layer is used to reduce the dependence of the model on the location of features rather than their shape. The pooling layers in the LeNet model have a size of 2 and strides of 2.

Layer C3 is the second convolutional layer, with 16 feature maps. The output dimension of this layer is 10×10, with 2,416 parameters. The activation function of this layer is tanh.

Layer S4 is another average pooling layer with a size of 2 and strides of 2. The next layer is responsible for flattening the output of the previous layer into a one-dimensional array; the output dimension of this flattening layer is 400 (5×5×16).

Layer C5 is a dense block (fully connected layer) with 120 units and 48,120 parameters (400×120+120). The activation function of this layer is tanh.
Layer F6 is another dense block with 84 units and 10,164 parameters (120×84+84). The activation function of this layer is tanh.

The output layer has a dimension of 10 (equal to the number of classes in the data set) and 850 parameters (84×10+10). The activation function of the output layer is sigmoid (refer to appendix 2 for more details).

The following code snippet demonstrates how to build a LeNet model in Python using the Tensorflow/Keras library. A Keras sequential model is a linear stack of layers, so we define each layer in order, as seen below. Finally, the model needs to be compiled, and the choices of optimizer, loss function, and metrics need to be explicitly defined. The optimizer used in this work is sgd, or Stochastic Gradient Descent. The loss function is what training optimizes; the one used here is cross-entropy, or log loss, which measures the performance of a classification model whose output is a probability value between 0 and 1. An accuracy metric is used to evaluate the performance of training. The loss function is a continuous probability function, while accuracy is a discrete function: the number of correctly predicted labels divided by the total number of predictions (refer to appendix 3).

LeNet-5 Layers Structure
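The original snippet is embedded in the article and does not survive in this text, so the following is a minimal reconstruction, kept consistent with the layer dimensions and parameter counts listed above. The padding='same' argument on C1 is inferred from its 28×28 output size; everything else follows the description directly.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # C1: 6 feature maps, 5x5 kernel, stride 1; 'same' padding keeps 28x28 (156 params)
    layers.Conv2D(6, kernel_size=5, strides=1, activation='tanh',
                  padding='same', input_shape=(28, 28, 1)),
    # S2: 2x2 average pooling with stride 2 -> 14x14x6
    layers.AveragePooling2D(pool_size=2, strides=2),
    # C3: 16 feature maps, 5x5 kernel -> 10x10x16 (2,416 params)
    layers.Conv2D(16, kernel_size=5, strides=1, activation='tanh'),
    # S4: 2x2 average pooling with stride 2 -> 5x5x16
    layers.AveragePooling2D(pool_size=2, strides=2),
    # Flatten to a 400-dimensional vector (5x5x16)
    layers.Flatten(),
    # C5: fully connected, 120 units (48,120 params)
    layers.Dense(120, activation='tanh'),
    # F6: fully connected, 84 units (10,164 params)
    layers.Dense(84, activation='tanh'),
    # Output: one unit per digit class (850 params)
    layers.Dense(10, activation='sigmoid'),
])

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Calling model.summary() on this stack reproduces the layer shapes and the 61,706-parameter total shown below.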
Note that in the above code snippet, we have not specified anything about how the weights of the neural network are initialized. By default, Keras uses the glorot_uniform initializer. Weight values are chosen randomly in a way that ensures the information passed through the network can be processed and extracted: if the weights are too small, the signal shrinks as it passes through; if they are too large, it grows until it becomes too big to process. The Glorot uniform algorithm (also known as the Xavier algorithm) draws random weight values from a uniform distribution scaled by the number of input and output units of each layer [refer to Glorot 2010].

The summary of the LeNet-5 network constructed with Tensorflow is given below (using model.summary()):

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 28, 28, 6)         156
_________________________________________________________________
average_pooling2d (AveragePo (None, 14, 14, 6)         0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 10, 10, 16)        2416
_________________________________________________________________
average_pooling2d_1 (Average (None, 5, 5, 16)          0
_________________________________________________________________
flatten (Flatten)            (None, 400)               0
_________________________________________________________________
dense (Dense)                (None, 120)               48120
_________________________________________________________________
dense_1 (Dense)              (None, 84)                10164
_________________________________________________________________
dense_2 (Dense)              (None, 10)                850
=================================================================
Total params: 61,706
Trainable params: 61,706
Non-trainable params: 0
_________________________________________________________________

Now that we have constructed the LeNet model using Tensorflow and Keras, we need to train it. Using model.fit() and feeding in the training and validation sets, the model is trained. Additional parameters needed for training are the number of epochs, the batch size, and the verbosity:

An epoch is one complete presentation of the training data. Samples from the training data set are selected randomly and presented to the model to learn; an epoch therefore represents one cycle through the full training data set.

The number of samples in each randomly selected chunk of training data is called the batch size. Smaller batch sizes are noisier than larger ones, but they may generalize better. Larger batch sizes help avoid memory limitation problems, especially when using Graphics Processing Units (GPUs).

Verbose specifies the frequency of output logging. If set to 1, the model loss is printed at each iteration.

Training code snippet. Note that the to_categorical command is used to convert a class vector to a binary class matrix.

Once the model is trained, we can use the testing set that we set aside to evaluate the performance of model training, using the model.evaluate() command:

Testing code snippet
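Neither the training nor the testing snippet survives in this text, so here is a minimal combined sketch that continues from the model built above. The batch size of 32 and the 80/20 train/validation split are assumptions, though the split is consistent with the 48,000 training images mentioned below.

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Scale pixel values to [0, 1] and add the channel dimension Conv2D expects
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# Convert integer class vectors to binary class matrices (one-hot encoding)
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Train for 10 epochs, holding out 20% of the training data for validation
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_split=0.2,
                    verbose=1)

# Evaluate on the held-out testing set
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(f'Test accuracy: {test_acc:.4f}')
```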
The result of training the model for 10 epochs can be seen in the following figure. Initially, the weights of the neural network are chosen randomly, but after 2 epochs of presenting 48,000 pictures to the model, the model loss dropped from 0.38 to 0.1. After 10 epochs of training, the model accuracy surpassed 95% on the testing set. This was a substantial improvement over previous models at the time (mainly support vector machines), and as a result LeNet-5 cemented its legacy as one of the earliest champions of computer vision.

LeNet-5 Training results (10 epochs)

Using model.optimizer.get_config() we can interrogate the optimizer parameters. Note that we only specified the type of optimizer, the loss function, and the accuracy metric. As can be seen from the following snippet, the optimizer used to train the LeNet model is a Stochastic Gradient Descent (SGD) optimizer, with the learning rate set to 0.01 by default. The learning rate controls the change in the model parameters in response to the error measured by the loss function. Imagine the model training process as traversing from a hill to a valley: the learning rate defines the step size. Larger steps move toward the solution faster but might jump over it; smaller steps take too long to converge. In more complex problems, step-size decay is commonly used; in this problem, however, no decay is needed to get good results. Another SGD parameter is momentum. Instead of using only the gradient of the current step to guide the search, momentum also accumulates the gradients of past steps to determine the direction to go, and can therefore improve the convergence speed of SGD. Another parameter is nesterov. If set to true (boolean), SGD enables the Nesterov accelerated gradient (NAG) algorithm, which is closely related to momentum but modifies the step using the gradient evaluated at the look-ahead position determined by the accumulated velocity (refer to Nesterov [1983]).

model.optimizer.get_config(): {'name': 'SGD', 'learning_rate': 0.01, 'decay': 0.0, 'momentum': 0.0, 'nesterov': False}

Model Interrogation

There are multiple ways to assess the performance of a classifier. Accuracy on a test data set held back from all model development tasks is the obvious choice. However, a confusion matrix provides a more detailed report of the classifier and a better assessment of classification performance. Furthermore, classification accuracy can be misleading if an unequal number of observations is present in each class. A confusion matrix is a detailed report of the number of samples classified correctly and incorrectly. The samples along the diagonal of the confusion matrix are correctly predicted; all other samples are misclassified. The higher the number of samples on the diagonal, the higher the model accuracy. As can be seen from the confusion matrix of LeNet-5 on the MNIST data set, most of the classes are classified correctly. However, there are a few labels the classifier had trouble classifying correctly, such as 5, 4, and 8. For example, there were 16 cases in which the classifier incorrectly classified the number 2 as the number 7. Those cases are depicted in the following image.

LeNet-5 Confusion Matrix

Example of misclassified labels

Choice of Optimizer

In the previous section, we mentioned that the SGD optimizer was used to train this neural network. However, because of the slow convergence of SGD and its tendency to get stuck in local minima, this method is less popular today. Since its introduction, Adaptive Moment Estimation, aka Adam (refer to Kingma et al. [2014] for more details), has enjoyed significant popularity in the field of deep learning. The Adam optimization algorithm is an extension of stochastic gradient descent in which momentum is applied to the gradient calculation by default and a separate learning rate is maintained for each parameter. Using the Adam optimizer and retraining LeNet-5 from scratch, model accuracy can be increased to 98%, as seen in the following learning curves:

LeNet-5 Training results (10 epochs) using Adam optimizer.

Effect of Batch Size

The batch size is one of the most important hyper-parameters in neural network training. As discussed in the previous section, during each training epoch the optimizer randomly selects data and feeds it to the model; the size of the selected data is called the batch size. Setting the batch size to the entire training data set may cause the model to generalize poorly on data it hasn’t seen before (refer to Takase et al. [2018]). On the other hand, setting the batch size to 1 results in longer computational training time. The proper choice of batch size is particularly important, as it affects model stability and accuracy. The following two bar charts show the testing accuracy and training time for batch sizes from 4 to 2,048. The testing accuracy of the model for batch sizes 4 to 512 is above 98%. However, the training time for batch size 4 is more than four times that for batch size 512. This effect can be more severe on more complex problems with a large number of classes and a large number of training samples.

Effect of batch size on model accuracy (upper chart) and training time (lower chart)

Effect of Pooling Layer

As discussed before, the pooling layer is used to down-sample the detected features in feature maps. The two most commonly used pooling operators are average pooling and max pooling. The average pooling layer operates by calculating the average value of the selected patch of the feature map, whereas the max pooling layer takes the maximum value. The max pooling operation, as can be seen in the following figure, works by selecting the maximum feature value from the feature map.
Max pooling layers discriminate against features with less dominant activations and select only the highest values; this way, only the most important features are passed through the pooling layer. The major drawback of max pooling is that in regions where several features have high magnitude, only the highest value is selected and the rest are ignored; those discriminating features disappear after the max pooling operation, resulting in a loss of information (the purple region in the following figure).

Max Pooling operation

Average pooling, on the other hand, works by computing the average value of the features in the selected region of the feature map. All parts of the selected region are fed through with average pooling. If the magnitudes of all the activations are low, the computed mean will also be low, reducing contrast. The situation is worse when most of the activations in the pooling area are zero; in that case, the feature map characteristics are diluted substantially.

Average Pooling Operation

As indicated earlier, the original LeNet-5 model uses the average pooling strategy. Changing average pooling to max pooling resulted in approximately the same testing accuracy on the MNIST data set. One can argue the point of different pooling layers either way; however, it should be noted that the MNIST data set is rather simple compared to more complex data sets such as CIFAR-10 or ImageNet, where the performance benefits of max pooling can be far greater.

Comparison between average pooling and max pooling strategies

Effect of Feature Rotation and Flipping

Thus far we have explored different aspects of the LeNet-5 model, including the choice of optimizer, the effect of batch size, and the choice of pooling layer. The LeNet-5 model is designed around the MNIST data, and as we have seen so far, the digits are centered in each image. More often than not, however, digits in real-life images are shifted, rotated, and sometimes flipped. In the following few sections we will explore the effects of image augmentation and the sensitivity of the LeNet-5 model to image flipping, rotation, and shifting. Image augmentation is done with the help of the Tensorflow image processing module, tf.keras.preprocessing.

Effect of Flipping

In this exercise, images are flipped along the horizontal axis using ImageDataGenerator(horizontal_flip=True). Applying ImageDataGenerator to the test images results in a new data set with the images horizontally flipped, as seen in the following image. As might be expected, the model has low accuracy on the flipped images: as the testing accuracy table shows, the accuracy of the LeNet-5 model dropped from 98% to 70%.

Test accuracy on flipped images

A closer look at the confusion matrix of the flipped-image data set reveals a few interesting takeaways. The highest-accuracy labels are 0, 1, 8, and 4. The first three (0, 1, 8) are symmetrical, so the model retains good prediction accuracy on those classes; it is interesting, though, that the LeNet-5 model also classifies label 4 well. Another interesting aspect of this test is how the model identifies digits. For example, one of the labels the model struggles with in the flipped data set is 3, which the model misclassifies as the number 8 almost half the time. It is very useful to understand how the model identifies each digit in order to build a robust classifier.
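A minimal sketch of this flipped-image evaluation, reusing the model and the preprocessed x_test/y_test arrays from the training sketch above. Note that ImageDataGenerator's horizontal_flip flips each image with a probability of 0.5, so a deterministic flip via np.flip is shown as well.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Deterministic mirror of every test image along the horizontal axis
x_flipped = np.flip(x_test, axis=2)
loss, acc = model.evaluate(x_flipped, y_test, verbose=0)
print(f'Accuracy on flipped images: {acc:.4f}')

# Generator-based variant, as described in the text; each image in the
# batch is flipped randomly with a probability of 0.5
datagen = ImageDataGenerator(horizontal_flip=True)
x_aug, y_aug = next(datagen.flow(x_test, y_test,
                                 batch_size=len(x_test), shuffle=False))
print('Accuracy on a randomly flipped batch:',
      model.evaluate(x_aug, y_aug, verbose=0)[1])
```

The same pattern applies to the rotation and shifting experiments below: swap in ImageDataGenerator(rotation_range=angle) or ImageDataGenerator(width_shift_range=shift) and re-evaluate.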
Packages like SHAP can provide a means of understanding the input-to-output mapping of any deep neural network model (look for the DeepExplainer module in the SHAP library).

Confusion matrix of flipped images prediction

Image Rotation

Image rotation is another possible scenario in real life: digits can be written at an angle with respect to the image boundaries. With the Tensorflow image augmentation module, one can produce randomly rotated images using the following line of code: ImageDataGenerator(rotation_range=angle). The testing results of the LeNet-5 model on images rotated by various random amounts can be seen in the following figure. The more rotation, the worse the model's predictions. It is interesting to note that the predictions remain rather satisfactory for up to 20 degrees of rotation, after which they degrade rapidly.

Prediction of LeNet-5 on randomly rotated images

Effect of Shifting

One final image augmentation effect is shifting digits along the horizontal or vertical axis within the image. This effect can easily be applied to the MNIST test data set using ImageDataGenerator(width_shift_range=shift). Note that in this section I am demonstrating the result of the width_shift generator. The sensitivity of the LeNet-5 network to width shift is much higher than to image flipping or image rotation. As can be seen from the following figure, accuracy degrades much faster than with the other image augmentation processes discussed: a width shift of only 10% drops the accuracy from over 95% to about 48%. This effect might be attributed to the filter size and kernel dimensions of the model.

Prediction of LeNet-5 on randomly shifted images

Performance on CIFAR-10 Data set

As we have seen in all the previous sections, the LeNet-5 model achieved a significant milestone in handwritten digit recognition. Due to its superior performance on classification problems, the LeNet-5 model was used in banks and ATMs for automatic digit classification in the mid-1990s. However, the next frontier for this model was the image recognition problem: identifying various objects in an image. In this final section, we train LeNet-5 on the CIFAR-10 data set. CIFAR-10 (Canadian Institute For Advanced Research) is an established computer vision data set of 60,000 32×32 color images in 10 object classes, as can be seen in the following picture. The 10 classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. As the images below show, their complexity is much higher than that of MNIST.

CIFAR-10 Data Set

Applying the LeNet-5 structure to this data set and training the model for 10 epochs results in a training accuracy of 73%; the testing accuracy is 66%. Considering that human-level accuracy on this data set is about 94% (according to Ho-Phuoc [2018]), a LeNet-5-type structure is not powerful enough to achieve high recognition capability.
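A minimal sketch of this final experiment, assuming the same layer stack with only the input shape changed to CIFAR-10's 32×32 RGB images; the optimizer, loss, and training parameters are carried over from the assumptions above.

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Same LeNet-5 stack as before; only the input shape changes to 32x32x3
cifar_model = keras.Sequential([
    layers.Conv2D(6, kernel_size=5, activation='tanh',
                  padding='same', input_shape=(32, 32, 3)),
    layers.AveragePooling2D(pool_size=2, strides=2),
    layers.Conv2D(16, kernel_size=5, activation='tanh'),
    layers.AveragePooling2D(pool_size=2, strides=2),
    layers.Flatten(),
    layers.Dense(120, activation='tanh'),
    layers.Dense(84, activation='tanh'),
    layers.Dense(10, activation='sigmoid'),
])

cifar_model.compile(optimizer='sgd',
                    loss='categorical_crossentropy',
                    metrics=['accuracy'])
cifar_model.fit(x_train, y_train, epochs=10, batch_size=32,
                validation_split=0.2, verbose=1)
print('CIFAR-10 test accuracy:',
      cifar_model.evaluate(x_test, y_test, verbose=0)[1])
```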
https://towardsdatascience.com/convolutional-neural-network-champions-part-1-lenet-5-7a8d6eb98df6
['Amir Nejad']
2020-11-06 15:30:13.193000+00:00
['Python', 'TensorFlow', 'Deep Learning', 'Convolutional Network', 'Data Science']
A Content Moderation scheme for Facebook and other Social Media
In recent years, Facebook has come under fire for its handling of harmful content posted on its platform. The content in question includes fake news, credible threats of violence, terrorism, and even genocide. According to the New York Times, Facebook's methods for dealing with the onslaught of harmful posts have been sub-optimal. They consist of thousands of contractors sifting through posts using a large set of guidelines spread across PowerPoint slides and Excel spreadsheets. These documents aim to reduce removing a post to a yes/no question. This system has produced numerous errors, sometimes accidentally enabling the very practices it seeks to prohibit. Among other factors, this has come about because of the difficulty of categorizing hate speech, especially considering that many of the contractors do not understand the language, politics, and culture of the citizens whose posts they aim to filter. Given these complexities, the best way for Facebook and other social media companies to handle content moderation is to step away from controlling speech and focus on preventing violence.

The basic premise for scaling down content moderation is that no matter what policies a company implements, it is inevitable that employees will make errors. The most egregious of these errors are those that cause actual harm to people (i.e., allowing an extremist group in Myanmar to organize and execute genocidal campaigns). These errors are a product of sheer volume. The content moderators that Facebook and other companies contract often have seconds to decide whether or not a post should be taken down, because there are simply too many posts to consider each one carefully. Accuracy is sacrificed for speed. Moreover, having to recall complex rules in such a short time-span is extremely difficult, not to mention that sometimes the rules themselves are incorrect. The only way to allow more thoughtful consideration of each post is to reduce the volume of content that reviewers must sift through.

The most effective method of accomplishing this might look like a tiered system, with each layer discarding posts from consideration until the ones left are those which directly call for or create violence. The first tier of the system would necessarily be algorithmic. Sentiment analysis techniques have proved quite capable of detecting negative sentiments; even non-deep-learning models have been able to achieve near 80% accuracy on smaller datasets. Although 80% accuracy is a dismal figure at the scale at which Facebook operates, it is important to consider that with Facebook's state-of-the-art deep-learning techniques and enormous access to data, the 80% figure can easily be raised to acceptable rates. In any case, the purpose of the algorithm is only to pass posts expressing negative sentiments to the next stage of review. Facebook should only care about the negative sentiments because any post organizing or advocating violence will by nature use words with negative connotations. It is important to recognize that this model would not flag posts praising terrorist organizations or violent individuals. While this is a valid concern, it is not Facebook's job to be the global arbiter of which groups people support.
In fact, Facebook's current attempts to control discourse about the groups it has deemed "hate groups" have only resulted in inadvertent political meddling and suppression of legitimate speech, as evidenced by its censorship of political parties during the Pakistani election cycle.

After the posts expressing negative sentiments have been collected, they can be put through human review. However, when reviewing these posts, humans should be looking for criteria specifically regarding violence, not the complex and nebulous ruleset which Facebook currently has its content moderators following. It can be a difficult problem to discern whether or not a post expresses admiration or support for fringe organizations. While it may be easy for short posts written in the content moderator's language, for longer posts or those written in a foreign language, mistakes are bound to be made. The downside of these mistakes is that speech gets censored which is mainstream in its country of origin, but not in Western thought. Accordingly, the simplest solution is not to attempt to censor these types of posts: they cause no immediate harm or violence. Instead, the focus should be on removing posts of a threatening and discriminatory nature. These posts cause psychological and sometimes physical harm, and it is easier for humans to distinguish threats than it is to categorize generic "hate speech."

Of course, every system needs fail-safes, and for the tiered structure, the fail-safe is to cascade posts through the system. For example, seemingly benign posts which are filtered out by the machine learning model should be put through other models or even a cursory human review. Likewise, if there are three stages of human review, posts allowed to stay on the site at each stage should be reviewed by a different team at least once for verification. As a whole, this approach minimizes the number of posts looked at because it narrows the search criteria down to threats of violence and discrimination, disregarding those which fall into the nebulous category of hate speech. It also makes room for multiple people to review posts for potential "false positives" (i.e., posts which are deemed non-threats but actually are threats). In this way, Facebook's content moderators can filter out the posts that cause the most harm while reducing the amount of accidental censorship.
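To make the tiered scheme concrete, here is a minimal sketch in Python. The keyword scorer is a deliberately crude stand-in for the deep-learning sentiment models discussed above, and every name and threshold here is hypothetical.

```python
NEGATIVE_WORDS = {"kill", "attack", "destroy", "hate", "die"}

def negativity_score(post: str) -> float:
    """Crude stand-in for a trained sentiment model: fraction of negative words."""
    words = post.lower().split()
    return sum(w in NEGATIVE_WORDS for w in words) / len(words) if words else 0.0

def tier_one(posts, threshold=0.05):
    """Algorithmic tier: only posts expressing negative sentiment move on."""
    flagged = [p for p in posts if negativity_score(p) > threshold]
    passed = [p for p in posts if negativity_score(p) <= threshold]
    return flagged, passed

def tier_two(flagged, is_violent_threat):
    """Human tier: reviewers answer one narrow question -- does the post
    threaten, organize, or call for violence? -- not a complex ruleset."""
    removed = [p for p in flagged if is_violent_threat(p)]
    kept = [p for p in flagged if not is_violent_threat(p)]
    return removed, kept

def cascade(passed, spot_check):
    """Fail-safe: seemingly benign posts get a cursory second look."""
    return [p for p in passed if spot_check(p)]
```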
https://medium.com/mdblog/a-content-moderation-scheme-for-facebook-and-other-social-media-6be968eedde8
['Anmol Parande']
2019-08-22 05:31:56.837000+00:00
['Free Speech', 'Technology', 'Facebook', 'Content Moderation']
Let’s Stop With the Fatphobic Talk
It's not your place to speak about someone else's body

In the before times, when swimming and water exercise were encouraged in indoor swimming pools, I'd go to my local YMCA for water aerobics classes. I loved going because I was good at them, and I could do things in the water that I could no longer do on dry land, such as jumping. The water exercise classes were my happy place until one woman ruined it with her fatphobic comments.

One day, after class, Linda stopped me as I was about to get out of the pool. I thought she was going to compliment me on my impressively high jumps, but rather than praising me, she started talking to me about a woman who was her father's nurse. I couldn't figure out what a nurse had to do with water exercise or me until she described the nurse. "She was huge," Linda said, "two or three of you put together. Super fat, unbelievably, huge — 600 lbs, maybe. If they put the two of you side by side, you'd be skinny compared to her."

While this comment may have appeared random, I knew what her motivation was. She wanted to make sure that I knew I was fat, but also that I wasn't the fattest person she'd ever encountered in her life. Okay, good to know. She didn't seem to care that her comments were inappropriate, or even unkind. Perhaps she didn't know she was fatphobic. Many people lie to themselves that when they talk about someone's weight, even in a negative way, they're being helpful. Let me tell you, giving advice about someone else's body isn't valuable, friendly, or appreciated in any way.

What does fatphobic mean?

Fatphobia is the intense fear and dislike of fat people and fat bodies, and the fear of becoming fat. Someone who has fatphobia is fatphobic, and it can manifest in their actions, thoughts, and how they talk. Fatphobic phrases may appear innocent. People say fatphobic things all the time without being aware of, or caring, how their words hurt others. It's as if there's something inside them that needs to let a fat person know they're fat (which, by the way, we know), or how defective they are in the eyes of the fatphobic person because of their weight. Saying things like "You've got such a pretty face" and "You carry yourself well" isn't complimentary, and their negative messages are heard loud and clear. "You've got such a pretty face" means your face is acceptable by society's beauty standards, but your body is not: your body is wrong, unruly, and if you only had a little discipline, you could fix it. "You carry yourself well" means hooray for you for being able to walk about with all your extra weight.

Building awareness of fatphobic talk.
The Mount Holyoke College diversity office recently shared a guide called "Phrases you didn't know were fatphobic" on social media, which included "You're so brave," "That's so flattering on you," and "I'm so bad for eating this." In an article on Campus Reform, Mount Holyoke student Evelyn Bushway said, "Talking about weight, in general, is really hard to do without offending anyone but questions and statements like that [I'm so bad for eating this] said to the right (or wrong) person could be detrimental to their mental and physical health."

Fatphobic phrases and fat-shaming are aggressive acts, and they can be avoided with an awareness of how those words affect others. Would Linda of the pool have resisted describing the nurse's body if she had known it was fatphobic? The answer is maybe.

There's no excuse for fatphobic talk.

Some people feel justified in saying mean things because they're just trying to help or are being honest. Some don't believe in self-censorship when it comes to other people's bodies or behaviors; they need to say what's on their minds, no matter the cost to the other person. I can't tell you how many times, when riding my bike, someone has felt compelled to yell a fatphobic comment at me from their car, or how often, in a restaurant, I've been questioned by strangers about my order. People need to learn that you can't tell what someone else's problems are by looking at them, and it's not their business anyway. It's not constructive to give criticism to a stranger or to someone who hasn't explicitly asked for it.

The truth is, no matter how much you try to sugarcoat your fatphobic comment — it's still not okay. I hope in the future, people will be conscious of their words and the effect that they have on others. It's not difficult to stop saying fatphobic phrases: make a choice not to remark on anyone's body, even if you think it's helpful.
https://medium.com/fattitude/lets-stop-with-the-fatphobic-talk-867d38147ab7
['Christine Schoenwald']
2020-08-30 22:23:15.098000+00:00
['Health', 'Culture', 'Body Positive', 'Awareness', 'Women']
10 Project-Based Tutorials for Learning JavaScript
You don't have to spend any money. You just have to be resourceful.

"I am still learning." — Michelangelo

After taking these free courses, don't forget to leave a thumbs-up and a token of gratitude for the teachers, instructors, and mentors who provide these awesome courses for free. They gave so much time and effort just so they could share their knowledge, hoping to help fellow developers; a thumbs-up and a thank-you would mean a lot to them, that's for sure. Let's be the best developers we can be by simply helping each other. Always remember that in this developer world, the farthest stranger could be your greatest supporter, so learn the value of giving and you will receive abundance in unexpected ways. Kindness is free.
https://medium.com/better-programming/10-project-based-tutorials-for-learning-javascript-b53c6bb00a47
['Ann Adaya']
2020-11-19 16:28:57.146000+00:00
['JavaScript', 'Software Development', 'Software Engineering', 'Learning To Code', 'Programming']
Compendium of Kubernetes Application Deployment Tools
Deploying applications to Kubernetes can be as simple as writing a few resource definitions in yaml or json and applying them with kubectl, but it can also be a whole lot more automated (and complicated). A popular meme in application deployment is the combination of Continuous Deployment and GitOps: the automatic deployment of resources after each change to the source code. In order for you to use GitOps to deploy applications to Kubernetes, you need several things:

- Container Image Building, to build your source code and local dependencies into container images.
- Resource Templating, to customize deployment resources for your environment(s).
- Package Management, to bundle multiple resources into versioned releases and manage package dependencies.
- Continuous Deployment, to roll out new changes to your environment(s), often using a pipeline of steps and stages.
- Imperative Deployment, to manage complex service lifecycles programmatically and reduce manual or fragile scripted steps.
- Autoscaling, to manage the replication and resource allocation of your application over time, based on usage and consumption.

In this article, I have listed many tools (both popular and obscure) for each of these stages in application lifecycle management. Since it's hard to judge popularity or success objectively, I've tried to annotate these tools in a way that makes it easy to see which big corporate backers have invested in these projects. Keep in mind, a large cloud backer may have multiple competing investments, so just because a project has a known investor doesn't mean it will survive and thrive in the long term. Hopefully this list will give you a place to start when searching for solutions to your application deployment problems.

Container Image Building

- Moby / buildkit (Docker) — A toolkit for converting source code to build artifacts.
- kaniko (Google) — A tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.
- img (Jess Frazelle) — A standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder.
- buildah (IBM/Red Hat) — A tool that facilitates building Open Container Initiative (OCI) container images.
- Source-To-Image (IBM/Red Hat) — A tool for building artifacts from source and injecting them into container images.
- Tanzu Build Service / kpack / pack (VMware/Pivotal) — A CLI and service for building apps using Cloud Native Buildpacks.
- Carvel / kbld (VMware/Pivotal) — A service for building and pushing images into development and deployment workflows.
- Google Cloud Buildpacks (Google) — Builders and buildpacks designed to run on Google Cloud's container platforms.
- Makisu (Uber) — A fast and flexible Docker image building tool that works in unprivileged containerized environments like Mesos and Kubernetes.

Resource Templating

- Helm (Microsoft, Google) — A Kubernetes package manager.
- Kustomize (Google, Apple) — A CLI to customize raw, template-free YAML files, leaving the original YAML untouched and usable as-is.
- Carvel / ytt (VMware/Pivotal) — A YAML templating tool that works on YAML structure instead of text.
- jsonnet / go-jsonnet (Google) — A JSON templating language.
- gomplate (Dave Henderson) — A CLI for golang template rendering, supporting local and remote datasources.
- Mustache (Github) — A framework-agnostic JSON templating engine.

Package Management

- Helm (Microsoft, Google) — A Kubernetes package manager.
- Cloud Native Application Bundles (CNAB) / Porter / Duffle (Microsoft/Deis, Docker) — A package format specification, bundler, and installer for managing cloud-agnostic distributed applications.

Continuous Deployment

Imperative Deployment

- Kubebuilder (CNCF, Google, Apple, IBM/Red Hat) — An SDK for building Kubernetes APIs (and controllers and operators) using CRDs.
- Operator Framework / Operator SDK (IBM/Red Hat/CoreOS) — An SDK for building Kubernetes application operators.
- KUDO (D2IQ) — A framework for building production-grade Kubernetes Operators using a declarative approach.
- Pulumi (Pulumi) — An Infrastructure as Code SDK for creating and deploying cloud software that uses containers, serverless functions, hosted services, and infrastructure, on any cloud.
- Carvel / kapp / kapp-controller (VMware/Pivotal) — A CLI and Kubernetes controller for installing configuration (helm charts, ytt templates, plain yaml) as described by an App CRD.
- Isopod (Cruise) — An expressive DSL and framework for Kubernetes resource configuration without YAML.

Autoscaling

- Horizontal Pod Autoscaler (built-in) — A Kubernetes controller that automatically scales the number of pods in a replication controller, deployment, replica set, or stateful set based on a configured metric.
- Vertical Pod Autoscaler (Google) — A set of Kubernetes components that automatically adjusts the amount of CPU and memory requested by pods running in the Kubernetes cluster.
- Addon Resizer (Google) — A simplified version of the vertical pod autoscaler that modifies the resource requests of a deployment based on the number of nodes in the Kubernetes cluster.
- KEDA (Microsoft) — A Kubernetes-based Event Driven Autoscaling component.
- Watermark Pod Autoscaler Controller (DataDog) — A custom controller that extends the Horizontal Pod Autoscaler (HPA).
- Pangolin (Damian Peckett) — An enhanced Horizontal Pod Autoscaler for Kubernetes that scales deployments based on their Prometheus metrics, using a variety of highly configurable control strategies.
- Predictive Horizontal Pod Autoscaler (IBM) — A custom pod autoscaler, similar to the Horizontal Pod Autoscaler, but with added predictive elements.
- Horizontal Pod Autoscaler Operator (Banzai Cloud) — A Kubernetes controller that watches Deployments or StatefulSets and automatically creates HorizontalPodAutoscaler resources, based on autoscale annotations.

In The End…

As any DevOps advocate will tell you, it's not about the tools but about the mindset. No one tool will give you an end-to-end application lifecycle management experience that delights you, because everyone uses their own permutation of tools, glued together with scripts and integration code. You can look for tools that do one thing well, being easily replaceable and extensible, or tools that provide the most value, with less to manage, cheaper integration, and the best end-to-end user experience. There's not really a wrong answer. Because of those tradeoffs, it pays to look at who is behind each project, how many companies are investing, and how popular the tool is. Popular tools with large, diverse investors are more likely to keep growing as you use them, rather than stagnate and become abandoned, requiring you to replace the tool or replace the investment with your own time and energy. Hopefully, this taxonomy will be useful and provide you with a starting place as you consider your options. Good luck! Did I forget your favorite tool? Leave a comment or let me know on Twitter!
https://karlkfi.medium.com/compendium-of-kubernetes-application-deployment-tools-80a828c91e8f
['Karl Isenberg']
2020-09-13 21:20:56.588000+00:00
['DevOps', 'Continuous Deployment', 'Software Development', 'Docker', 'Kubernetes']
Real-time Forecasting Using R and Watson Studio
Use an R model to generate forecasts from live data. After training and tweaking any model, using it to score streaming data is a great way to extend its value. When you score a model on static data, you gain hindsight into past events. However, when you score a model on streaming data, you can detect and respond to patterns and trends as they occur. The video below shows how you can integrate existing R models into a streaming application created with Watson Studio Streams Flows, including how you can change the model without having to restart the application.

Background

Streams Flows is a lightweight tool in IBM Watson Studio for creating streaming analytics applications called flows. Using the simple drag-and-drop interface, you can create a flow to ingest and analyze data from IoT devices, Apache Kafka, and more. The flows run on the Streaming Analytics service, a platform for real-time analytics in the IBM Cloud. If you're completely new to Streams flows, check out this introductory video on YouTube.

A Streams flow running in Watson Studio, showing the live data

The problem — forecasting using live data

Imagine you have developed a model in R to predict how many users will connect to a network in the near future (next hour, half hour, etc.), based on how many are currently connected. You already have a Streams flow that is receiving data from Event Streams or Apache Kafka and computing current usage metrics. All that's needed is a way to feed those statistics to the model, and you'll have your predictions.

Creating a forecasting microservice

To score R models from a Streams flow, a simple solution is to create a forecasting microservice. The microservice also runs on the Streaming Analytics service and is written in Streams Processing Language (SPL). It loads the R model and scores the stream of metrics it receives from the flow. Finally, it publishes the forecasts back to the flow.

Diagram showing the interaction between the flow and the microservice

See it in action

The video below is a short demo of how to extend an existing Streams flow to add forecasting with R.

Video: Using R to score streaming data in Streams flows

Try the sample

All the code shown in the video is available on GitHub. The repo includes the Streams flow, the microservice source and binary, and step-by-step instructions for running the sample.
https://medium.com/ibm-watson/real-time-forecasting-using-r-and-watson-studio-513c45abd1a9
["Natasha D'Silva"]
2020-09-01 16:43:52.045000+00:00
['Big Data', 'Forecasting', 'Data', 'Tutorial', 'Streaming']
What is PropTypes in React
Recently, while working my way through this tutorial, I came across a React library called prop-types. I'd never used this library before, so I did some quick research on why we need prop-types and how to use it. JavaScript is a so-called "untyped" language. That means JavaScript will figure out what type of data you have and make a function work without any required adjustments. For example, it can convert a number into a string if that is what a function expects. Sounds great, right? It can be a blessing until it turns into a curse: as your app grows, this JavaScript feature can lead to bugs and errors when the data type we expect is not the one we've passed. So checking the data type seems like a smart decision. That's where React's built-in type-checking ability comes into play. PropTypes is a library that helps check whether the props you pass have the types you expect.

Anyway, our first step is to install the library:

npm install --save prop-types

The second step is to import the library:

import PropTypes from 'prop-types';

Our third and last step is to assign the special propTypes property. This library offers a great range of validators that can be used to make sure the data you receive is valid. All we need to do is call the propTypes property on our component and specify the data type. Here is what I have for my recent project: I have a Question component whose content prop is supposed to be a string. Now, if I pass a value of a different type, I'll get a warning. Also, I added .isRequired to make sure a warning is shown if the prop isn't provided. Hope you find this short blog helpful!
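The gist referenced above does not survive in this text; the following is a hypothetical reconstruction (in JavaScript, the article's own language) of a component like the described Question component, with a required string prop.

```jsx
import React from 'react';
import PropTypes from 'prop-types';

// Renders a single question; `content` is expected to be a string
function Question({ content }) {
  return <h2>{content}</h2>;
}

// Warns in development if `content` is missing or is not a string
Question.propTypes = {
  content: PropTypes.string.isRequired,
};

export default Question;
```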
https://medium.com/dev-genius/what-is-proptypes-in-react-593147a7254a
['Anastasia Orlova']
2020-12-28 08:34:46.793000+00:00
['Reactjs', 'React', 'JavaScript', 'Libraries', 'Prop Types']
Driving Alignment Around Goals
by Isabelle Berner, Director of Product

I was recently a speaker on a Product Stack webinar on collaboration with my former colleague, Jay Badenhope. The organizer, Ronan Dunlop, sent around some questions to registrants ahead of time, one of which was "What is your biggest challenge vis-a-vis team collaboration?". I was surprised to learn that 39% of respondents, nearly all of them product professionals, selected "misalignment around goals and expectations" as their main challenge. Upon reflection, I also noticed a pattern in my own history as a product manager — in the times when I have been "in house" (aka an employee of the company), this has been a more prevalent issue than in the times when I've worked with a company as a consultant.

Why is misalignment around goals and expectations such a common problem? How come product management consultants are more effective at addressing this issue than in-house product managers (even when they are the same person!)? Below, I'll share some thoughts on why I think consultants are able to do this better than in-house product managers, and I'll share a technique that I've used on many client projects to ensure alignment around goals and expectations.

One of the first things we talk about when we join a client team on an engagement is goals. What does success look like at the end of this engagement? What things have to be true for us to consider this project a success? What are some future goals that are out of scope for this engagement? Our success as consultants hinges on our ability to bring clarity to these questions. If we don't have alignment on this, then there is a 0% chance that everyone involved in the project considers it a success. So we push and we dig and we ask dozens of clarifying questions to get to a shared understanding.

Consultants have two advantages here. First, we are new to the party and have a different type of skin in the game. This means that we can ask the hard questions and don't have to adhere to the power dynamics and politics of the organization. Second, we cannot afford to get this wrong.

Leading a Goals & Anti-Goals Exercise

A tool that I use to achieve this alignment is a goals and anti-goals exercise that I picked up during my time at Pivotal Labs. To get the most out of this exercise, you have to bring all the stakeholders and core team members into one room (physical or virtual) and give each person an opportunity to speak about what success means to them. Equally important is to discuss anti-goals: things that are important to accomplish eventually, but not in scope for our immediate set of goals. Here's an example of the agenda for this session:

Set up the exercise by giving the meeting an objective and discussing the categories with some examples of well-framed goals and anti-goals. (5 min) Note: You might also want to do some housekeeping here, reminding people that this is a safe space, that their participation and engagement is needed for a successful workshop, and that we will put a pin in rabbit-hole conversations using a "parking lot" technique.

Silently generate goals and anti-goals on stickies, one per sticky. (5 min) If you notice that, across your group, there are more than 30 stickies, prompt each person to pick the 5 stickies in their batch that are most important to discuss, since you probably won't get to all of them.

Using a round-robin facilitation technique, have each person read 1 sticky in turn and stick it onto the whiteboard.
Invite folks with similar stickies to put theirs next to this one and write a summary of the goal theme above the grouping. Go around the group as many times as you can in 35 minutes. Sometimes, goals will creep in that are actually anti-goals. Be sure to clarify: “Are we trying to accomplish this by the end of the project, or is this a future goal that we have?” Give room for discussion, but also be mindful of keeping talking time even across participants and moving the conversation along when needed. Prioritize the groups of goals together. You can do this by stack ranking them based on an agreed-upon definition of importance. Or you can have the group dot vote on the 3 goals that are most important. (5–15 min) Recap and review next steps (5 min) Here’s an example of what a whiteboard might look like at the end of this 60-minute session: I’ve actually used this exercise when I was “in house” at Pattern Brands on several occasions. Early in my tenure there, it was easier to ask difficult questions (why is that really important? What would the business look like if that did not happen?) and dig into stakeholder reasoning. As my tenure there progressed, I found myself making more assumptions about what people meant and their underlying intentions. I actually became worse at driving these types of conversations as my time in the organization progressed. Tips if you are an “in-house” product manager If you are trying to apply this technique in the organization you work for, there are a few things you can think about to avoid that trap. First, just be aware of it and ask the questions, even if you think you know the answer. You might be surprised. Second, try to find someone outside of your team to facilitate these conversations so that you can check your bias at the door and participate. This could be a consultant or just someone from another team who is outside the political sphere of your project. Def Method Product consultants can help you facilitate these conversations for free via our Product Consults. Avoid stale goals My last piece of advice on this: do not let your goals get stale. For goals to properly serve the purpose of driving alignment and scaffolding collaboration for a team, everyone has to stay connected to them and motivated by them. You should be talking about your goals every week or every other week, having a conversation with your team about which goals are off track or at risk, and what can be done to get things back on track. It is OK for some goals to shift, as long as this shift is a function of something that you have learned, and as long as the same group that created those initial goals is bought into the shift.
https://medium.com/defmethod-works/driving-alignment-around-goals-a5ac6dd06599
['Def Method']
2020-11-20 17:55:19.210000+00:00
['Product', 'Product Management', 'Product Development', 'Software Development', 'Software Engineering']
The Things That I Saw Senior Software Engineers Do
The Things That I Saw Senior Software Engineers Do How I learned and progressed from them I was fortunate enough to start working as a junior developer with a senior as my mentor. It means I had the opportunity to learn from someone who walked the path I’m going to walk and has more experience than me on this journey. Having a senior engineer in your team who is willing to teach and guide you is such a blessing. If your career goal is to become a senior developer, it can save you an enormous amount of time. In this post, I’d like to share the things I saw my mentor and other senior developers do, and the lessons I’ve learned from them.
https://medium.com/better-programming/the-things-that-i-saw-senior-software-engineers-do-6a9f49b9e54f
['Michael Chi']
2020-11-12 14:03:01.665000+00:00
['Software Development', 'Startup', 'Self Improvement', 'Work', 'Programming']
A Brief Guide to OTP in Elixir
A Brief Guide to OTP in Elixir Learn how to unlock the power of Erlang within Elixir Photo by Mathyas Kurmann on Unsplash. One of the main advantages of Elixir is that it is awesome for server-side systems. Forget using a million different technologies for things like data persistence, background jobs, and service crash recovery — OTP can supply you with everything. So what exactly is this magical thing? In this article, I will introduce you to OTP, look at basic process loops, the GenServer and Supervisor behaviours, and see how they can be used to implement an elementary process that stores funds. (This article assumes that you are already familiar with the basics of Elixir. If you’re not, you can check out the Getting Started guide on Elixir’s website or use one of the other resources listed in our Elixir guide.) What is OTP? OTP is an awesome set of tools and libraries that Elixir inherits from Erlang, a programming language on whose VM it runs. OTP contains a lot of things, such as the Erlang compiler, databases, a test framework, a profiler, and debugging tools. But when we talk about OTP in the context of Elixir, we usually mean the Erlang actor model that is based on lightweight processes and is the basis of what makes Elixir so efficient. Processes At the foundation of OTP, there are tiny things called processes. Unlike OS processes, they are really, really lightweight. Creating them takes microseconds, and a single machine can easily run many thousands of them simultaneously. Processes follow the actor model. Every process is basically a mailbox that can receive messages, and in response to those messages it can: Create new actors. Send messages to other actors. Modify its private state. Spawning processes The most basic way to spawn a process is with the spawn command. Let’s open IEx and launch one. iex(1)> process = spawn(fn -> IO.puts("hey there!") end) The above will print and return: hey there! #PID<0.104.0> The first line is the output printed by the spawned function; the second is the return value of spawn — a PID, a unique process identification number. Meanwhile, we have a problem with our process. While it did the task we asked it to do, it seems like it is now… dead? 😱 Let’s use its PID (stored in the variable process) to query for life signs. iex(2)> Process.alive?(process) false If you think about it, it makes sense. The process did what we asked it to do, fulfilled its reason for existence, and closed itself. But there is a way to extend the life of the process to make it more worthwhile for us. Receive-do loop Turns out, we can extend the process function to a loop that can hold state and modify it. For example, let’s imagine that we need to create a process that mimics the funds in a palace treasury. We’ll create a simple process to which you can store or withdraw funds, and ask for the current balance. We’ll do that by creating a loop function that responds to certain messages while keeping the state in its argument. In the body of the function, we put the receive statement and pattern match all the messages we want our process to respond to. Every time the loop runs, it will check the mailbox for messages that match what we need and process them. If the process sees any messages with the atoms store, withdraw, or balance, those will trigger certain actions. To make it a bit nicer, we can add an open function and also discard any messages we don’t need so they don’t pollute the mailbox.
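The embedded code for this loop isn’t shown in the article, so here is a minimal sketch of what such a receive-do loop might look like (the module and function names, Treasury, open, and loop, are illustrative assumptions, not the author’s original code):

defmodule Treasury do
  # Spawn the loop with a starting balance of 0.
  def open do
    spawn(fn -> loop(0) end)
  end

  defp loop(balance) do
    receive do
      {:store, amount} ->
        loop(balance + amount)

      {:withdraw, amount} ->
        loop(balance - amount)

      {:balance, caller} ->
        send(caller, {:balance, balance})
        loop(balance)

      _anything_else ->
        # Discard messages we don't need so they don't pollute the mailbox.
        loop(balance)
    end
  end
end

You would then interact with the process via send/2, for example: pid = Treasury.open() followed by send(pid, {:store, 400}).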
While this seems quite concise, there’s already some boilerplate lurking, and we haven’t even covered corner cases, tracing, and reporting that would be necessary for production-level code. In real life, we don’t need to write code with receive do loops. Instead, we use one of the behaviours created by people much smarter than us. Behaviours Many processes follow certain similar patterns. To abstract over these patterns, we use behaviours. Behaviours have two parts: abstract code that we don’t have to implement and a callback module that is implementation-specific. In this article, I will introduce you to GenServer, short for generic server, and Supervisor. Those are not the only behaviours out there, but they are certainly among the most common ones. GenServer To start off, let’s create a module called Treasury and add the GenServer behaviour to it. defmodule Palace.Treasury do use GenServer end This will pull in the necessary boilerplate for the behaviour. After that, we need to implement the callbacks for our specific use case. Here are the callbacks we will be using for our process: init(args) initializes the server and usually returns {:ok, state}; handle_cast(message, state) handles an async call that doesn’t demand an answer from the server and usually returns {:noreply, state}; handle_call(message, from, state) handles a synchronous call that demands an answer from the server and usually returns {:reply, reply, state}. Let’s start with the easy one — init. It takes a state and starts a process with that state. def init(balance) do {:ok, balance} end Now, if you look at the simple code we wrote with receive, there are two types of triggers. The first one (store and withdraw) just asks for the treasury to update its state asynchronously, while the second one (get_balance) waits for an answer. handle_cast can handle the async ones, while handle_call can handle the synchronous one. To handle adding and subtracting, we will need two casts. These take a message with the command and the transaction amount and update the state. def handle_cast({:store, amount}, balance) do {:noreply, balance + amount} end def handle_cast({:withdraw, amount}, balance) do {:noreply, balance - amount} end Finally, handle_call takes the balance call, the caller, and state, and uses all that to reply to the caller and return the same state. def handle_call(:balance, _from, balance) do {:reply, balance, balance} end These are all the callbacks we have. To hide the implementation details, we can add client commands in the same module. Since this will be the only treasury of the palace, let’s also give a name to the process equal to its module name when spawning it with start_link. This will make it easier to refer to it. Let’s try it out: iex(1)> Palace.Treasury.open() {:ok, #PID<0.138.0>} iex(2)> Palace.Treasury.store(400) :ok iex(3)> Palace.Treasury.withdraw(100) :ok iex(4)> Palace.Treasury.get_balance() 300 It works. 🥳 Here’s a cheatsheet on GenServer to help you remember where to put what. Supervisor However, just letting a treasury run without supervision is a bit irresponsible, and a good way to lose your funds or your head. 😅 Thankfully, OTP provides us with the supervisor behaviour. Supervisors can: start and shut down applications, provide fault tolerance by restarting crashed processes, be used to make a hierarchical supervision structure, called a supervision tree.
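Since the article’s embedded gists aren’t shown, here is the Treasury GenServer assembled in one place, callbacks plus client API, as a sketch. The client function names match the IEx session above; the rest is reconstructed and not necessarily the author’s original code:

defmodule Palace.Treasury do
  use GenServer

  # Client API: names match the IEx session above.
  def open(initial_balance \\ 0) do
    # Name the process after the module so clients can refer to it easily.
    GenServer.start_link(__MODULE__, initial_balance, name: __MODULE__)
  end

  def store(amount), do: GenServer.cast(__MODULE__, {:store, amount})
  def withdraw(amount), do: GenServer.cast(__MODULE__, {:withdraw, amount})
  def get_balance, do: GenServer.call(__MODULE__, :balance)

  # Server callbacks
  def init(balance), do: {:ok, balance}

  def handle_cast({:store, amount}, balance), do: {:noreply, balance + amount}
  def handle_cast({:withdraw, amount}, balance), do: {:noreply, balance - amount}

  def handle_call(:balance, _from, balance), do: {:reply, balance, balance}
end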
Let’s equip our treasury with a simple supervisor. In its most basic form, a supervisor has two functions: start_link(), which runs the supervisor as a process, and init, which provides the arguments necessary for the supervisor to initialize. Things we need to pay attention to are: The list of children. Here, we list all the processes that we want the supervisor to start, together with their init functions and starting arguments. Each of the processes is a map, with at least the id and start keys in it. Supervisor’s init function. To it, we supply the list of children processes and a supervision strategy. Here, we use :one_for_one - if a child process crashes, only that process will be restarted. There are a few more strategies. Running the Palace.Treasury.Supervisor.start_link() function will open a treasury, which will be supervised by the supervisor process. If the treasury crashes, it will get restarted with the initial state - 0. If we wanted, we could add several other processes to this supervisor that are relevant to the treasury function, such as a process that can exchange looted items for their monetary value. Additionally, we could also duplicate or persist the state of the treasury process to make sure that our funds are not lost when the treasury process crashes. Since this is a basic guide, I will let you investigate the possibilities by yourself. Further reading This introduction has been quite basic to help you understand the concepts behind OTP quickly. If you want to learn more, there are a lot of nice resources out there, starting with the official Elixir guides on GenServer and Supervisor.
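For reference, the simple supervisor described above might look like the following sketch (the child spec mirrors the Treasury module from earlier; the details are assumptions, since the embedded code isn’t shown in the article):

defmodule Palace.Treasury.Supervisor do
  use Supervisor

  def start_link do
    Supervisor.start_link(__MODULE__, :ok)
  end

  def init(:ok) do
    children = [
      %{
        id: Palace.Treasury,
        # Restart the treasury with an initial balance of 0 if it crashes.
        start: {Palace.Treasury, :open, [0]}
      }
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end
end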
https://medium.com/dev-genius/a-brief-guide-to-otp-in-elixir-c466dee41118
[]
2020-10-12 01:53:32.172000+00:00
['Software Development', 'Elixir', 'Software Engineering', 'Erlang', 'Programming']
Building an Application: Pre-Work
1. Outline Features List of desired feature set (created using Trello). When you have your app concept determined and are ready to start laying the groundwork, one very important first step is to decide what you actually want your app to do and how you want it to function. Writing out a list of features is a good way to compile and condense your thoughts. There are endless possibilities, so narrowing your focus from the start will eliminate wasted time down the road. Once you have a list of features, thinking through your MVP (or minimum viable product) is crucial. The MVP is basically the state in which your app is fully functional and able to be tested in the market for user feedback, but it leaves out some of the “stretch” features that can be added once you know your app is on track. Reaching MVP and testing in the market is useful to ensure your resources are not wasted on something that is either not necessary or is not working properly. A great tool I have used to start outlining features and deciding on MVP is Trello. If this step is skipped, you run the risk of trying to execute too many ideas or trying to build a feature that doesn’t relate to your user story. This is not only a waste of everyone’s time but also adds to a lack of clarity in the process and will only lead to frustration. Questions to ask yourself
https://medium.com/better-programming/building-an-application-pre-work-eafb660fa55e
['Hannah Kofkin']
2020-07-16 15:50:04.752000+00:00
['Programming', 'Best Practices', 'Software Development', 'Web Development', 'Startup']
10 Tips for Taking Better Portraits on Any Camera
You’re starting to dive into the realm of portrait photography, or maybe you just find yourself taking more pictures of your friends with your smartphone when you go out. Regardless, here are 10 tips for taking better portraits, whether you use your phone, a point and shoot, a mirrorless camera, or a DSLR. #1. Optimize Your Gear Make sure you are using the best settings and best available tools in order to capture your portrait. On a smartphone, this might mean simply putting your camera into portrait mode in order to get a blurred background. It’s an artificial blur but still produces pretty nice results and better ones than the default camera. However, if you do want that wide-angle portrait look, you can leave your camera in its default mode instead since portrait modes are tighter and more telephoto. On a DSLR, mirrorless, or point and shoot camera, this can be different depending on what you have. If you don’t have a whole lot of gear yet (maybe just the kit zoom or a permanently attached lens), zoom in that lens a little bit into the more telephoto range (for a more traditional portrait look) and crank the aperture as wide as it goes to get the most blur in the background. As with your phone, you can leave the lens on a wider focal length if that’s the look you’re going for. Regardless, I recommend getting used to the manual modes of your camera to achieve this rather than relying on something more automatic. The ideal mode for those still learning their camera is aperture priority. If you have the pleasure of using a prime lens like a 35mm, 50mm, or 85mm, slap that right on your camera body, and again, widen that aperture either as wide as it goes or one stop darker (usually for better image quality, less vignetting, and better focusing). With primes, you can usually open your aperture all the way to f/1.8 or f/1.4, giving you an incredibly blurred and bokeh-filled background for an absolutely creamy portrait. #2. Find and/or Make a Clean Background No one likes a cluttered and distracting background. A bright orange cone, for example, can really take away from your subject and instead direct the viewer’s attention away from what you want them to see. Make sure your background doesn’t have anything that sticks out too much or would be an eyesore to see, like a trash can. In addition, make sure that it doesn’t look like anything is popping out of your subject’s head either (a pole is a good example of this). But you don’t have to get super involved with cleaning up the background, and it doesn’t need to be overdone. A few cars or a few trees is totally fine, but taking the time to remove the water bottle behind your subject is definitely worth the 30 seconds it takes, especially if you’re a perfectionist like myself. #3. Find the Right Lighting The most important aspect of photography is lighting. After all, the words photo and photon both derive from the Greek word for light. The key to getting good lighting when you’re beginning to take portraits is evenness. That’s why shooting in shaded areas or during a slightly overcast day is always highly recommended. But shooting around half an hour before sunset or after sunrise, the period right around golden hour, is the most ideal as the light is usually very even in many different places. I try to plan all my shoots during this time period. But don’t be discouraged from trying other things in finding the right lighting. Shooting at golden hour or in shade is a good rule of thumb, but don’t limit yourself.
Getting photos in direct sunlight can also yield amazing results as long as the light from the sun falls evenly again on your subject’s face. Usually, this means having them almost directly face the sun as long as they can avoid squinting for a second or two. You can also shoot at night! Just make sure to find a nice bright street lamp to nicely illuminate your subject. But whether using a street lamp or in direct sunlight, the keyword here is still evenness, making sure your subject is void of harsh shadows. And sometimes, lighting doesn’t always have to be even either. You can intentionally have your subject turn 90 degrees from the light source, giving a dramatic half-lit, half-shadow type of look. But the keyword here with having non-even lighting is intention. Non-traditional lighting is something you really have to commit to the idea of in order to get it right.
https://medium.com/photo-paradox/10-tips-for-taking-better-portraits-on-any-camera-818c97032524
['Paulo Makalinao']
2020-09-15 19:01:01.205000+00:00
['Portraits', 'Tips', 'Photography', 'Art', 'Creativity']
Branding for Solopreneurs — How To Stand Out!
Four steps to branding success: Bear in mind the following four principles of successful branding: Photo by Slidebean on Unsplash 1) Brand identity This is your brand personality. How do you want to be perceived by (potential) clients and peers? It’s important to realize that you can create your brand around your personality. As freelancers, there’s no need to try and become someone else. Stay true to yourself and be who you are. This will also make your brand more credible. 2) Brand consistency As already mentioned, brand consistency is essential. Take care that your website, brochures, business cards, Facebook page, personal appearance, the way you present yourself, etc. are all homogeneous and consistent to avoid confusing and driving away potential clients. 3) Brand recognition Simply put: get your brand seen, and be everywhere your target audience is. Once you’ve identified your ideal client and have researched them, start getting your name out there using a targeted approach. Your market will think you are everywhere, which will again boost your credibility. 4) Brand engagement Unlike just a few years ago, today it’s all about brand engagement. Any small business owner should be active on social media, participate in forum discussions, write a blog or at least comment on other blogs, have a video blog, and so forth. This type of online presence is good for SEO purposes and will drive traffic to your website. It is also crucial to engage with your followers or ‘fans’ to make them feel part of your business and enhance their brand experience.
https://medium.com/be-unique/branding-for-solopreneurs-how-to-stand-out-72d192528cd6
['Kahli Bree Adams']
2020-06-19 03:47:33.596000+00:00
['Branding', 'Business', 'Entrepreneurship', 'Solopreneur', 'Freelancing']
Building an App to Make Browser-based Calls to Congress with Flask and Twilio.js on Heroku
Your leaders should be accessible to the public In 2015, I wanted to build an app to provide a way for administrators of public networks (schools, libraries, etc.) to provide a look-up and dial tool for members of Congress and have it deployable on any target (comparatively low-power machines, or on a personal laptop, or wherever phone access or this information is inaccessible for whatever reason), as well as a platform application, which we built using these concepts. Twilio seemed like a natural solution for this. I recently re-architected the application, mostly to bring it into compliance with the latest Twilio JavaScript tool, and to refresh some of the clunkier parts of the original application. I elected to use Flask for this, and ultimately deployed it to Heroku. To see the live product, you can visit: https://dial.public.engineering More information about the project can be found on our Twitter, @publiceng. If you’re ready to check out how we went about building this tool… Setup This application has a few external dependencies, configured via environment variables. So when you deploy your application (in our case on Heroku), whatever facility (a PaaS like Heroku, or via a provisioning tool like Terraform, or on a flat Linux system) may exist for this should be used to set the following variables: export twilio_sid=${twilio_sid} export twilio_token=${twilio_token} export twilio_twiml_sid=${twiml_sid} export numbers_outbound="+12345678900" export GOOGLE_API_KEY=${google_civic_api_key} In your project root, you’ll need a requirements.txt : Flask==1.1.2 gunicorn==20.0.4 # Only if you plan to deploy to Heroku requests==2.24.0 twilio==6.47.0 jsonify==0.5 In your app.py , import the following, and we’ll make use of the above variables, before proceeding: from flask import Flask, render_template, request, jsonify import os import requests from twilio.rest import Client from twilio.jwt.client import ClientCapabilityToken from twilio.twiml.voice_response import VoiceResponse, Dial import urllib import base64 import random, string TWILIO_SID = os.environ['twilio_sid'] TWILIO_TOKEN = os.environ['twilio_token'] TWILIO_TWIML_SID = os.environ['twilio_twiml_sid'] NUMBERS_OUTBOUND = os.environ['numbers_outbound'] GOOGLE_API_KEY = os.environ['GOOGLE_API_KEY'] app = Flask(__name__) Building the application: Functions The app relies heavily on the passing and receiving of dictionaries as a messaging format, so most functions will send or receive one such dictionary, and these will eventually be used to populate the templates for the web UI itself. First, a function to take a zip code, retrieve representative contact info, and build a response containing formatted phone numbers and other data I might use from that datasource. Then, I proceed to get some aesthetic data for the UI, like the name of the locality this area covers (for the House of Representatives, for example): From there, we go into the actual work of using this data, and making some calls.
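The gists for these helper functions aren’t shown inline. As a sketch of what the representative lookup might look like against the Google Civic Information API (the function name get_reps and the returned dictionary keys are assumptions, inferred from how later code consumes them):

def get_reps(zip_code):
    # Sketch: query the Google Civic Information API for representatives by
    # address (a zip code works), then flatten the officials into the shape
    # the rest of the app consumes. Field handling here is illustrative.
    params = {
        "address": zip_code,
        "key": GOOGLE_API_KEY,
    }
    resp = requests.get(
        "https://www.googleapis.com/civicinfo/v2/representatives",
        params=params,
    )
    data = resp.json()

    reps = []
    for official in data.get("officials", []):
        phone = official.get("phones", [""])[0]
        reps.append({
            "name": official.get("name"),
            "phone": phone,
            # Strip punctuation so numbers can be compared and dialed.
            "unformatted_phone": "".join(c for c in phone if c.isdigit()),
            "photo": official.get("photoUrl", ""),
        })
    return reps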
A small function to generate, and then set, a default_client, which will be important for the callback from your TwiML application, which is a requirement to be able to make the outgoing calls: def randomword(length): letters = string.ascii_lowercase return ''.join(random.choice(letters) for i in range(length)) default_client = "call-your-representatives-%s" % (randomword(8)) then a function to validate a phone number to ensure it comes from this datasource: def numberVerify(zipCode, unformatted_number): reps = get_reps(zipCode) nums_found = [] for r in reps: if unformatted_number in r['unformatted_phone']: nums_found.append(r['name']) photoUrl = r['photo'] if len(nums_found) != 0: return { 'status': 'OK', 'zipCode': zipCode, 'name': nums_found[0], 'photo': photoUrl } else: return { 'status': 'FAILED' } The Flask Application and URL Routes With the helper functions completed, you’ll see how they are consumed in the decorated functions for Flask that run when a route is hit using a designated HTTP method, for example, for / : the following template is returned: So, once you submit your zip code, it is POSTed to the /reps URI: which, you’ll see, consumes the helper functions we wrote above: from the form in the template above, it retrieves your zip code, hands it to location_name to get your locality name, to representatives to build a dict of your representatives and their info, and we use the default_client we specified above which the Twilio.js tool (which I’ll demonstrate in a moment) will connect to in order to make the call from your browser. We use all of that data in the template, to populate a page like: You’ll see at the top, your default_client will have a status indicator, and when it is ready, you can click Start Call on whichever representative to initiate a phone call from the browser. In the template file, in this case call.html , anywhere in the <head> section, you’ll use the Twilio JS script: <script src="https://media.twiliocdn.com/sdk/js/client/v1.3/twilio.min.js"></script> and then use the following function inside of another script block to call your token endpoint: function httpGet(Url) { var xmlHttp = new XMLHttpRequest(); xmlHttp.open( "GET", Url, false ); // false for synchronous request xmlHttp.send( null ); return xmlHttp.responseText; } which looks like this, back in app.py : This uses your Twilio token and SID to create a capability token, and then you can add capabilities using the TwiML SID, and for example, allow incoming callbacks using your default client to allow Twilio to connect a call from your browser back to the application. So when you start the call, in the template, by clicking the button: The onclick action will connect your Twilio.Device to the phone number from that iteration of the representatives dictionary. This will hand off the new token, the client ID, and the number you wish to call to the above Twilio device, which once received, will use the TwiML application’s callback URL, in this case, /voice to connect the browser to the call.
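The token endpoint itself isn’t shown inline; here is a sketch of what it might look like using twilio-python’s ClientCapabilityToken (the /token route name and the JSON response shape are assumptions, not necessarily the author’s originals):

@app.route('/token', methods=['GET'])
def token():
    # Build a capability token from the account SID and auth token.
    capability = ClientCapabilityToken(TWILIO_SID, TWILIO_TOKEN)
    # Allow the browser client to receive the connection Twilio makes back
    # to this application, and to place outgoing calls via the TwiML app.
    capability.allow_client_incoming(default_client)
    capability.allow_client_outgoing(TWILIO_TWIML_SID)
    return jsonify(token=capability.to_jwt().decode('utf-8'),
                   client=default_client)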
The /voice function is somewhat involved and was probably one of the more complicated pieces to figure out, as some of this diverged quite a bit from the documentation: The purpose of a TwiML app is to provide a response to a call made to a Twilio API or phone number, and in this case, we’re providing a VoiceResponse() , so we need to extract from the incoming request the phone number to send that voice response to, which we’re splitting out of the request form as number:<whatever> , and in the absence of a number, the default_client. NUMBERS_OUTBOUND is your Twilio programmable voice number you acquired at the beginning, which will appear on the caller ID, and the Dial class will facilitate the rest.
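As a sketch of the /voice handler just described (the parsing of the number:<whatever> form value is an assumption based on the article’s description, not the author’s original code):

@app.route('/voice', methods=['POST'])
def voice():
    response = VoiceResponse()
    raw = request.form.get('number', '')
    # NUMBERS_OUTBOUND is the Twilio number that appears on caller ID.
    dial = Dial(caller_id=NUMBERS_OUTBOUND)
    if raw.startswith('number:'):
        # Dial out to the representative's phone number.
        dial.number(raw.split('number:', 1)[1])
    else:
        # No number supplied: connect back to the browser client.
        dial.client(default_client)
    response.append(dial)
    return str(response)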
The buildpack line just refers to the Heroku environment to use, in our case, their default Python stack: buildpacks = ["https://github.com/heroku/heroku-buildpack-python.git"] Now, our application has a lot of environment variables, and because they’re credentials, we want them handled properly, so we are going to specify the following blocks for our Heroku application above: resource "heroku_config" "common" { vars = { LOG_LEVEL = "info" } sensitive_vars = { twilio_sid = var.twilio_sid twilio_token = var.twilio_token twilio_twiml_sid = var.twilio_twiml_sid numbers_outbound = var.numbers_outbound release = var.release GOOGLE_API_KEY = var.google_api_key } } resource "heroku_app_config_association" "dialer_config" { app_id = "${heroku_app.dialer.id}" vars = "${heroku_config.common.vars}" sensitive_vars = "${heroku_config.common.sensitive_vars}" } You’ll specify all of these values in your Terraform variables, or in your terraform.tfvars file: release = "20201108-706aa6be-e5de" release_archive = "https://git.cool.info/tools/call-your-representatives/archive/master.tar.gz" heroku_app_name = "dialer" twilio_sid = "" twilio_token = "" twilio_twiml_sid = "" numbers_outbound = "+" google_api_key = "" There are other optional items (a Heroku formation, domain name stuff, and output), but this covers the deployment aspect from the above application layout, so you can proceed to set your Heroku API key: HEROKU_API_KEY=${your_key} HEROKU_EMAIL=${your_email} in order to initialize the Heroku Terraform provider: terraform init then you can check your deployment before you fire it off: terraform plan terraform apply -auto-approve and then head to http://${heroku_app_name}.herokuapp.com to see the deployed state. More Resources Follow public.engineering on Twitter Call Your Representatives app source Call Your Representatives deployment scripts Single-use VPN Deployer app source Single-use VPN Deployer deployment scripts (also includes DigitalOcean and Terraform deployment plans) If you’d like to support the platform in keeping up with the fees for the calls and for hosting, or would just like to enable ongoing development for these types of projects, and to keep them free for the public’s use, please consider donating!
https://jmarhee.medium.com/building-an-app-to-make-browser-based-calls-to-congress-with-flask-and-twilio-js-on-heroku-b4d85886206e
['Joseph D. Marhee']
2020-11-19 00:46:39.551000+00:00
['Python', 'Twilio', 'Heroku', 'JavaScript']
How We Reduced DynamoDB Costs by Using DynamoDB Streams and Scans More Efficiently
Many of our users implement operational reporting and analytics on DynamoDB using Rockset as a SQL intelligence layer to serve live dashboards and applications. As an engineering team, we are constantly searching for opportunities to improve their SQL-on-DynamoDB experience. For the past few weeks, we have been hard at work tuning the performance of our DynamoDB ingestion process. The first step in this process was diving into DynamoDB’s documentation and doing some experimentation to ensure that we were using DynamoDB’s read APIs in a way that maximizes both the stability and performance of our system. Background on DynamoDB APIs AWS offers a Scan API and a Streams API for reading data from DynamoDB. The Scan API allows us to linearly scan an entire DynamoDB table. This is expensive, but sometimes unavoidable. We use the Scan API the first time we load data from a DynamoDB table to a Rockset collection, as we have no means of gathering all the data other than scanning through it. After this initial load, we only need to monitor for updates, so using the Scan API would be quite wasteful. Instead, we use the Streams API, which gives us a time-ordered queue of updates applied to the DynamoDB table. We read these updates and apply them into Rockset, giving users realtime access to their DynamoDB data in Rockset! The challenge we’ve been undertaking is to make ingesting data from DynamoDB into Rockset as seamless and cost-efficient as possible given the constraints presented by data sources like DynamoDB. Below, I’ll discuss a few of the issues we ran into in tuning and stabilizing both phases of our DynamoDB ingestion process while keeping costs low for our users. Scans How we measure scan performance During the scanning phase, we aim to consistently maximize our read throughput from DynamoDB without consuming more than a user-specified number of RCUs per table. We want ingesting data into Rockset to be efficient without interfering with existing workloads running on users’ live DynamoDB tables. Understanding how to set scan parameters From very preliminary testing, we noticed that our scanning phase took quite a long time to complete, so we did some digging to figure out why. We ingested a DynamoDB table into Rockset and observed what happened during the scanning phase. We expected to consistently consume all of the provisioned throughput. Initially, our RCU consumption looked like the following: We saw an inexplicable level of fluctuation in the RCU consumption over time, particularly in the first half of the scan. These fluctuations are bad because each time there’s a major drop in the throughput, we end up lengthening the ingestion process and increasing our users’ DynamoDB costs. The problem was clear, but the underlying cause was not obvious. At the time, there were a few variables that we were controlling quite naively. DynamoDB exposes two important variables: page size and segment count, both of which we had set to fixed values. We also had our own rate limiter which throttled the number of DynamoDB Scan API calls we made. We had also set the limit this rate limiter was enforcing to a fixed value. We suspected that one of these variables being sub-optimally configured was the likely cause of the massive fluctuations we were observing. Some investigation revealed that the cause of the fluctuation was primarily the rate limiter. It turned out the fixed limit we had set on our rate limiter was too low, so we were getting throttled too aggressively by our own rate limiter.
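The fix, described next, was to pace the scan based on the table’s RCUs. As a rough illustration, here is a sketch of RCU-aware pacing in Python with boto3 (this is not Rockset’s actual implementation, which the article does not show):

import time
import boto3

def rate_limited_scan(table_name, segment, total_segments, rcu_per_sec):
    # Scan one segment, using ReturnConsumedCapacity to learn how many RCUs
    # each page actually consumed, then sleep just long enough to stay under
    # the target rate.
    client = boto3.client('dynamodb')
    kwargs = {
        'TableName': table_name,
        'Segment': segment,
        'TotalSegments': total_segments,
        'ReturnConsumedCapacity': 'TOTAL',
    }
    while True:
        page = client.scan(**kwargs)
        yield from page['Items']

        consumed = page['ConsumedCapacity']['CapacityUnits']
        # Simple pacing: a page that consumed N RCUs "costs" N / rcu_per_sec
        # seconds of our throughput budget.
        time.sleep(consumed / rcu_per_sec)

        if 'LastEvaluatedKey' not in page:
            break
        kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']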
We decided to fix this problem by configuring our limiter based on the amount of RCU allocated to the table. We can easily (and do plan to) transition to using a user-specified number of RCU for each table, which will allow us to limit Rockset’s RCU consumption even when users have RCU autoscaling enabled. For each segment, we perform a scan, consuming capacity on our rate limiter as we consume DynamoDB RCUs. The result of our new Scan configuration was the following: We were happy to see that, with our new configuration, we were able to reliably control the amount of throughput we consumed. The problem we discovered with our rate limiter brought to light our underlying need for more dynamic DynamoDB Scan configurations. We’re continuing to run experiments to determine how to dynamically set the page size and segment count based on table-specific data, but we also moved on to dealing with some of the challenges we were facing with DynamoDB Streams. Streams How we measure streaming performance Our goal during the streaming phase of ingestion is to minimize the amount of time it takes for an update to enter Rockset after it is applied in DynamoDB while keeping the cost of using Rockset as low as possible for our users. The primary cost factor for DynamoDB Streams is the number of API calls we make. DynamoDB’s pricing allows users 2.5 million free API calls and charges $0.02 per 100,000 requests beyond that. We want to try to stay as close to the free tier as possible. Previously we were querying DynamoDB at a rate of ~300 requests/second because we encountered a lot of empty shards in the streams we were reading. We believed that we’d need to iterate through all of these empty shards regardless of the rate we were querying at. To mitigate the load we put on users’ Dynamo tables (and in turn their wallets), we set a timer on these reads and then stopped reading for 5 minutes if we didn’t find any new records. Given that this mechanism ended up charging users who didn’t even have much data in DynamoDB and still had a worst-case latency of 5 minutes, we started investigating how we could do better. Reducing the frequency of streaming calls We ran several experiments to clarify our understanding of the DynamoDB Streams API and determine whether we could reduce the frequency of the DynamoDB Streams API calls our users were being charged for. For each experiment, we varied the amount of time we waited between API calls and measured the average amount of time it took for an update to a DynamoDB table to be reflected in Rockset. Inserting records into the DynamoDB table at a constant rate of 2 records/second, the results were as follows: Inserting records into the DynamoDB table in a bursty pattern, the results were as follows: The results above showed that making 1 API call every second is plenty to ensure that we maintain sub-second latencies. Our initial assumptions were wrong, but these results illuminated a clear path forward. We promptly modified our ingestion process to query DynamoDB Streams for new data only once per second in order to give us the performance we’re looking for at a much reduced cost to our users. Calculating our cost reduction Since with DynamoDB Streams we are directly responsible for our users’ costs, we decided that we needed to precisely calculate the cost our users incur due to the way we use DynamoDB Streams.
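In code, the once-per-second polling loop is simple. A sketch in Python with boto3 (shard discovery, iterator expiry, and the downstream apply_update handler are simplified assumptions, not Rockset’s actual implementation):

import time
import boto3

def poll_stream(stream_arn, shard_id, poll_interval=1.0):
    streams = boto3.client('dynamodbstreams')
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard_id,
        ShardIteratorType='LATEST',
    )['ShardIterator']

    while iterator:
        result = streams.get_records(ShardIterator=iterator)
        for record in result['Records']:
            apply_update(record)  # hypothetical downstream handler
        iterator = result.get('NextShardIterator')
        # One API call per second keeps latency sub-second at minimal cost.
        time.sleep(poll_interval)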
There are two factors which wholly determine the amount that users will be charged for DynamoDB Streams: the number of Streams API calls made and the amount of data transferred. The amount of data transferred is largely beyond our control. Each API call response unavoidably transfers a small amount (768 bytes) of data. The rest is all user data, which is only read into Rockset once. We focused on controlling the number of DynamoDB Streams API calls we make to users’ tables as this was previously the driver of our users’ DynamoDB costs. Following is a breakdown of the cost we estimate with our newly remodeled ingestion process: We were happy to see that, with our optimizations, our users should incur virtually no additional cost on their DynamoDB tables due to Rockset! Conclusion We’re really excited that the work we’ve been doing has successfully driven DynamoDB costs down for our users while allowing them to interact with their DynamoDB data in Rockset in realtime! This is just a sneak peek into some of the challenges and tradeoffs we’ve faced while working to make ingesting data from DynamoDB into Rockset as seamless as possible. If you’re interested in learning more about how to operationalize your DynamoDB data using Rockset, check out some of our recent material and stay tuned for updates as we continue to build Rockset out! Other DynamoDB resources:
https://medium.com/rocksetcloud/how-we-reduced-dynamodb-costs-by-using-dynamodb-streams-and-scans-more-efficiently-ad7610758591
['Aditi Srinivasan']
2019-09-18 21:54:20.278000+00:00
['Cost', 'Dynamodb', 'Real Time Analytics', 'Performance', 'AWS']
Could AI ever be an artist?
Life and Death, Gustav Klimt Could artificial intelligence ever be an artist? Could a computer act as a curator? These are questions that have been bubbling in my mind for months now. But the more I dissect, the more emerge — because, what is art? Who is an artist? And when are we able to classify ourselves as one? And who decides what work is worthy of entering the notoriously old and mostly white canon? AI’s tendrils are slowly creeping into every aspect of our lives. While most of us are okay with it claiming factory jobs and menial roles, there is something about art that remains untouchable. I have long believed that art isn’t about what you create, it’s about what others receive. Art should elicit an emotional response. It should stir something. Conjure some fire in your belly: be it yearning, anger or sorrow. Art must fundamentally convey a feeling. Otherwise, it’s just empty. Anyone can draw, but not everyone is an artist. An artist requires heart. “It is the function of art to renew our perception. What we are familiar with we cease to see. The writer shakes up the familiar scene, and, as if by magic, we see a new meaning in it.” — Anais Nin But this is the small, scared part of me speaking. The corner that is quietly terrified and desperate to be seen, recognised and loved. I romanticise my calling to eliminate the competition. That rabid dog in me makes me overzealous and protective: my teeth clenching over scraps. ‘If I have to prove myself’ it says, ‘so must you.’ How can AI be an artist, if I struggle to call myself one? As a young artist, the title feels clumsy on my tongue. Even on my kinder days, I still fumble as I speak it. There’s an anxiety that slips through as I push past my imposter syndrome and hope the sentinels don’t spot me. A piece of advice I give others and myself most mornings is that you are already an artist. If you make and feel and convert your achings into some kind of artefact: then you are an artist. You are an artist. You are an artist. Creative coders But if art is about provoking a response, pushing the viewer to question, rethink and feel, then AI is certainly one. The mere fact I’m mulling over it now says as much. I look to creative coders like Manoloidee or Shinohara who use AI to create surreal, otherworldly images. Or the algorithm that ate up all Rembrandt’s art and then 3D printed a new piece; exact, layered with printed paint, even including his signature brushstrokes and use of colour. But who here is the artist? Where does creator end and tool begin? To whom — or what — do we give credit? And is it enough to just be a novelty? There is a tension that exists between intuition and the practicalities of application when it comes to art. There cannot be one without the other — a kind of creative chicken-and-egg conundrum — because, as the American historian, Melvin Kranzberg, so aptly noted, without the human element an instrument has no use. Without the instrument, the human cannot make music. But again, it was still made by human hands and shaped by our knowledge, so it is us that remains the creator. Both god and disciple. I guess my deeper rumination is that impossibly simple question: what is art? What makes a masterpiece? There are some pieces of art that stay with you. Some strike like lightning straight through your consciousness and feel heavy in your chest. Other times, what captivates isn’t so clear. I remember the first time I saw a Frida Kahlo painting: it was like a magnet.
I felt her acrylic eyes following me around the gallery space. I tried to pay attention to the other paintings and give them due care, but somehow those eyes were always burning in the back of my head. Somehow my feet were always wandering back. I stood there with my ears ringing and the light just a little bit too bright in front of this tiny, painted piece of wood and glass. It was disorientating. I imagined that it was holy. That this was a pilgrimage and I a weary traveller that had finally arrived in front of what Frida had touched, what she held in her hands and moulded. Frida Kahlo, El Marco Some pieces pierce you. That is what Frida was for me — achy, vulnerable and speaking in a voice so unmistakably human. Then others, like those of the Baroque masters, feel alive and leaking. Like the pools of light might slip off the canvas and onto the floor. Like you could reach in and touch its fleshiness, feel its pulse, taste the scar. There is an empathy imbued in the paint that transcends time and language. You as an existent are annihilated. Consumed. I think of one of my favourite paintings, La Belle Dame Sans Merci by Frank Dicksee, held in Bristol’s Museum and Art Gallery. It is not a remarkable painting by most standards. I’ve always had a soft spot for the Pre-Raphaelites and fairy stories, but my love is less about the work itself and more about all the people I have been while stood in front of it. I think of my first interaction with the painting. The first in our history. It began with reading the eponymous Keats poem aloud in class when I was thirteen. It was one of the first poems that made my heart swell for literature and even now, the memory of the knight alone and palely loitering remains etched in my mind (and him on the hillside). Enter the painting. Now, I live close to the gallery and I have found myself wandering its halls often; on rainy days, for respite, for solitude, for entertainment. I frequently come to see this painting: looking, regarding, or while eating a sandwich. I think of my twenty-one-year-old self, barely out of university, sat in front of this painting trying and failing to make a life for herself. A painting is a story I took friends, dates, partners here. We laughed in the hallways. I remember the heavy anniversaries when I came here for a distraction. I think of my twenty-fourth birthday, when the world felt too loud and too near, and I retreated into the gallery for shelter, hoping it could be my still point in a turning world. I sat and cried in front of it (and then retreated into the dimly lit video room for more privacy). Now when I see this painting it comes through a haze of smoke. It feels a lot like falling in love: at first the face of our beloved is like many others, then suddenly, without knowing, it’s divine. Our hearts ache at the sight of them; there has never existed a thing so exquisite, so beautiful — it becomes impossible to see them for the first time, to crack through the lens of love and separate the feeling from what’s in front of you. This is how this painting feels for me. It is a canvas on which I can construct and measure my change. La Belle Dame Sans Merci, Frank Dicksee My experiences are those that I have projected on to them. While it is certainly true that I could carry out this process without knowing whether the piece was created by a human or machine, there is a metaphysical quality in artwork. Something higher. The materiality is matched with something more, something spiritual — like I said, something holy.
So, could AI ever create such a transcendent experience? Could an algorithm ever fashion something so human, so true? The artist — whatever their craft or tool of choice — transmits and translates onto the page a feeling without ever knowing why. I think of Bukowski’s words: “Unless it comes out of your soul like a rocket, unless being still would drive you to madness or suicide or murder, don’t do it. unless the sun inside you is burning your gut, don’t do it.” Can a machine know that drive? Can an engine understand that absolute necessity, that all-or-nothing ethos, that little voice that says: ‘I’ll die if I can’t do this, I’m nothing if I can’t do this’? Maybe we need a new kind of art Then, logic reappears and pushes me to question: could our long-held and much-loved ideals of a ‘muse’, or even the idea of creation guided by a higher power, be compared to an algorithm? Are we not all already conditioned and programmed in our own ways — by language, signs, culture and moral frameworks? I say art requires freedom, but are any of us free? Besides, not a shred of our selves or creations is original: nanos gigantum humeris insidentes. We stand on the shoulders of giants. We are just the latest version in a millennium of experimentation. Humans 2.0. Software update incoming. The cynic in me leans to this reading, but my heart isn’t in it. Maybe the artist in me rejects anything that strips away my strangeness or makes my idiosyncrasies obsolete. But I believe in my stomach that creativity requires agency. It is not about honing a craft or creating the perfect replica. It must communicate some deeply rooted human truth. Art gives me the same vulnerability you see in hands pressed against bus windows saying goodbye, or someone tripping in the street or being reunited with their loved one. It is human, it is real, it is impossibly beautiful and tragic and paradoxical. Like us, it makes no sense. It must be made by a mind that encompasses our contradictions, our sharp edges and loneliness. Our need for a tribe and our absolute hunger for independence. It must be made by hands that grapple with death. That feels loss and hate and love and the sublime. It must be made by a being that exists with the absurd knowledge of its impending mortality. A machine can imitate, but there is something within humans we cannot quantify or key into code. The artist takes the unknowable and makes it real. They steal away emotion and turn it into something tangible. Real art — as opposed to motel art — debates and detangles the human condition. It speaks with a raw, shaking voice. It is a hand reaching out from the void — and until a machine knows what it is to exist in all its poignant beauty, the only thing AI can create are glorified screensavers.
https://lauren-ellis.medium.com/could-ai-ever-be-an-artist-16fe4f2bd4c7
['Lauren Ellis']
2018-11-28 19:27:21.588000+00:00
['Philosophy', 'Artificial Intelligence', 'Art', 'Technology', 'Lit']
EHTAERB
EHTAERB As in, “ … T’NAC I” — an acrostic Senator Mitt Romney marched and tweeted his support for Black Lives Matter. Predictably, Donald Trump mocked his participation, while other GOP lawmakers cowardly avoided Trump’s shade. (Image Source) Every recipe for solving problems Has one necessary ingredient — The problem must be acknowledged. Addressing systemic racism in America, Especially inequality in justice, Requires whites to acknowledge Black Lives Matter! ©2020 HHThorpe. All rights reserved. Inspired by Owen Banner’s delicious poem, “Ritual and Ruin”, and prompt, “recipe”. Thanks to our hosts, Kathy Jacobs and me (🙃), and also to the rest of the talented Chalkboard team!
https://medium.com/chalkboard/ehtaerb-cc076b48aa60
['Harper Thorpe']
2020-06-10 20:14:09.703000+00:00
['Poetry', 'BlackLivesMatter', 'One Line', 'Racism', 'Music']
The Gist of Big-O Notation Explained via Bubble Sort
Regardless of where you are in your software career, or what role you fill, you’ve surely heard of the idea of Big-O notation and its relation to algorithmic performance. Much of the analysis behind Big-O can appear quite daunting, and the high-level idea which applies can be lost. Having a solid grasp on Big-O can be a game-changer if you need to answer technical interview questions, grow a keener eye for the implementations in your codebase, or design new products while cognizant of performance characteristics. While you may never find yourself at a whiteboard showing off your analyses, if you come to understand the asymptotic analysis behind Big-O, you will surely find yourself asking “do I really need to nest that many loops together?” the next time you churn out a snippet. The Gist of Big-O Big-O notation is simply a framework for performing asymptotic analysis. It comes with a simple vocabulary and serves as a sweet spot for high-level reasoning about algorithms. It is also coarse enough to suppress any dependencies related to the architecture, language, compiler, and other system elements. Additionally, it is sharp enough to make useful comparisons between different algorithms on very large inputs. This enables it to be a useful tool for quickly making decisions and conclusions about a whole universe of algorithms. Motivating Example (Bubble Sort): We’re going to see how we can apply asymptotic analysis to Bubble Sort — a simple (but also inefficient) sorting algorithm. The way it works is by simply reading an array of integers in steps, and comparing the current value to the next value to determine which is greater (or less) than the other. The final output yields a sorted list in either ascending or descending order — depending on how you implement it. Let’s say we’ve got an array of integers [1, 4, 2, 5, 8, 6, 3, 9, 7] . This array of integers contains n = 9 elements (note that n is simply the number of elements in this array — and is typically the defining variable which denotes the count of elements in an input for virtually all problems in algorithm analysis). To sort our array you would implement the following algorithm (we won’t go into the details of implementation since the scope of this article is the asymptotic analysis): isSwapped = true while isSwapped isSwapped = false for j from 0 to N - 1 if a[j] > a[j + 1] swap(a[j], a[j+1]) isSwapped = true This algorithm requires n iterations to perform the sort. In each iteration we perform a comparison, and additionally a swap if required. Given an array of size n (or 9 in our example), we need to perform (n — 1) = 8 comparisons on the first iteration. On the second iteration we will need to perform (n — 2) = 7 comparisons. The third iteration will require (n — 3) = 6, and so on… until you exhaust all of the inputs. This yields the following formula: (n − 1) + (n − 2) + … + 2 + 1 = n(n − 1)/2, which simplifies to O(n^2). If you are wondering why we have simplified the time complexity of the bubble sort to O(n^2), it is simply because the constant factors and lower order terms have little impact on the conclusions we can draw about its running time performance as the input size of the array increases. Note in the chart below, bubble sort aligns with the quadratic running time in the “Horrible” category. As we increase the number of elements in our input array, Bubble Sort will simply perform worse and worse as n grows in size. The motivation to take from this example is that we should constantly question if we can achieve better performance as input sizes grow.
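For readers who want to run the algorithm, here is a direct Python translation of the pseudocode above (sorting in ascending order):

def bubble_sort(a):
    """Repeatedly sweep the array, swapping adjacent out-of-order pairs,
    until a full pass makes no swaps."""
    n = len(a)
    is_swapped = True
    while is_swapped:
        is_swapped = False
        for j in range(n - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                is_swapped = True
    return a

print(bubble_sort([1, 4, 2, 5, 8, 6, 3, 9, 7]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]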
Formal Definition Big-O defines the running-time performance of any algorithm in general terms with a concern for increasing input size (i.e. can the algorithm still perform favorably if we go from 9 integers up to 1,000,000 integers?). Put quite plainly, Big-O is concerned with functions that are defined on all positive integers n = 1, 2, 3, 4, 5, .... (we won’t ever have an input with a negative count of values … ). Additionally, Big-O denotes the worst-case running time of an algorithm. The definition provided relates well to the chart below (which signifies that no matter what constant factors are applied, we can always apply a bound above the running time of our function). Therefore, we can always ignore constant factors and lower-order terms (in the case of algorithms with polynomial complexity) to infer a succinct running time complexity. The takeaway should always be that the running time of an algorithm increases according to Big-O as input sizes grow large. The magnitude or severity of this increase comes down to the unique algorithm in question. T(n) upper bounded by f(n) with arbitrary constant c When To Care About Constant Factors and Lower Order Terms Generally speaking, writing loops and snippets which operate on collections or data structures in your codebase won’t require profound analysis. You can get away with the general Big-O running time characteristics for your use case. However, it’s entirely possible that you’re in a startup, or in a highly specialized environment where your domain knowledge can come in handy. In very specific scenarios, where your product or service relies on an algorithm you or your team have implemented, it can behoove you to uncover your constants and lower order terms so that you can further optimize your algorithm — especially if it’s not feasible for you to vastly modify your design to conform to a totally different implementation. Typically, constant factors and lower order terms are generated as a result of basic setting operations, and other primitive operations (like initializing a variable). The cost of such operations is very low, hence it’s mostly ignored by Big-O. But, you can take the reins and utilize Big-O while retaining these factors and terms, and develop ways to reduce those costs to maximally optimize your algorithm (perhaps you can lower the number of variables, and reduce the times you reference addresses in your memory). The little gains made in this area could allow your product to hit the market stronger than your competition — even if marginally. What To Take Away Consider what input you have, and what you need to output Come up with an algorithm, by brute force, or via domain knowledge (use best practices when possible) Analyze your implementation, uncover how many times you are operating on your input (is it quadratic, cubic, linear, etc.?) Ask if your implementation can be better — can you reduce your operations on the input from, say, cubic to quadratic, by using one less loop? Consult a Big-O chart when performing analysis, and decide if your implementation is critical to your system Look up a Big-O chart and uncover the general running times of an array of algorithm types and primitive operations — you don’t need to whiteboard everything on the job What Else To Learn
https://medium.com/swlh/big-o-notation-explained-like-im-5-1828183ffcac
['Andre Unsal']
2020-12-15 02:04:17.165000+00:00
['Algorithms', 'Software Development', 'Software Engineering', 'Data Structures', 'Big O']
(R)Evolution of Redshift in 2019
After another great keynote from Andy Jassy, I decided to recap and provide my opinions on the recent Redshift announcements in 2019, including the latest from the keynote this morning. RA3 nodes with managed storage (link) RA3 nodes are high-performing compute nodes with relatively small local SSD storage used for local caching. The remainder of your data is stored on S3. RA3 nodes enable you to scale and pay for compute and storage independently, allowing you to size your cluster based only on your compute needs. Opinion — While this is a step in the right direction (i.e., separation of compute and storage), there are still several drawbacks compared to its competition. A single RA3 compute node has 48 vCPUs, 384 GiB of RAM and 64TB of storage. This is a significant node, and you need a minimum of two to run an RA3 cluster. Looking at the Australian market (where I work) there aren't many (if any) companies that would have anywhere near 128TB of structured data. Technically, what AWS is offering here is more storage per compute unit (vCPU). They're not offering a complete solution that enables infinite storage without scaling compute, as each RA3 node can reference a maximum of 64TB of storage. Scaling a cluster or "pausing" a cluster has not been addressed either. Therefore you're still limited to a slow and tedious classic resize or a quick but limited elastic resize. Federated Query (link) Utilising psql, customers could already query Redshift from their PostgreSQL databases. However useful that was, AWS decided to flip it around and finally offer the opposing option. Federated Query enables Redshift to query data directly from Amazon RDS PostgreSQL and Amazon Aurora PostgreSQL. Opinion — This is a compelling and useful feature that will only improve as more database engines are added. It reduces the number of ETL workloads organisations need to build to extract data from their operational databases. Data warehouses are generally accompanied by control frameworks — a set of tables that control and audit the loading of a data warehouse. Having the ability to join the two together enables DevOps teams to quickly and easily match data warehousing records with their respective audit records from the control framework. Data Lake Export (link) Unloading Redshift data into S3 has been around forever. However, your only option was CSV. Today that changed, as Parquet (a data-lake-friendly columnar file format) became available. Opinion — It's a great addition but not one worth a shout-out at a re:Invent keynote. Pre re:Invent There were several Redshift releases leading up to re:Invent that are worth looking into, with plenty of quality-of-life changes. Materialized Views (link) Hallelujah, we have materialized views. Materialized views store pre-computed queries, decreasing query times for ETL and BI workloads. Materialized views are also self-updating when underlying data sources change, thus providing you with the benefits of both a view (always up-to-date data) and a table (physical data source). Opinion — There's very little to fault here. Unlike Snowflake, Redshift's materialized views are capable of table joins (i.e., querying multiple tables). Materialized views with table joins enable organisations to build out virtual presentation layers (think dimensions and facts) straight off their integration layer. No need to build, orchestrate, and run ETL workflows to load your presentation layer. Plus, as the underlying data changes, your presentation layer is automatically updated.
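As a rough sketch of what a joined materialized view might look like in practice (the table, column, and connection details below are hypothetical placeholders, and the exact DDL options may differ from the current Redshift documentation):

import psycopg2

# Hypothetical cluster endpoint and credentials; swap in your own.
conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="dev", user="admin", password="...")
with conn.cursor() as cur:
    # A materialized view joining a fact table to a dimension table,
    # acting as a pre-computed presentation layer.
    cur.execute("""
        CREATE MATERIALIZED VIEW mv_sales_by_region AS
        SELECT d.region, SUM(f.amount) AS total_sales
        FROM fact_sales f
        JOIN dim_store d ON f.store_id = d.store_id
        GROUP BY d.region;
    """)
    # Bring the view up to date after new data lands.
    cur.execute("REFRESH MATERIALIZED VIEW mv_sales_by_region;")
conn.commit()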
Automatic Table Sort (link) Sort keys on a table enhance the speed at which Redshift can filter your data set before returning the final result to you. A general rule of thumb is to sort on the column(s) that your customers filter on (think where clause). Opinion — Knowing the column or combination of columns to sort on is harder than you think. Having this taken over by Redshift and updated for me automatically is a win in my books. Console UI Refresh (link) The new console user interface is a breath of fresh air. It's clean, it's modern, and best of all it's easy to use. You can see everything you need to about your Redshift cluster(s), and with the Query Editor you can execute queries directly against your Redshift cluster. Opinion — I don't have much to say here. AWS has done a great job modernising the user interface, and they should be applauded for that. AZ64 Compression (link) Compression is critically important to the performance of any data store, be it a data lake, database or data warehouse. AWS has developed a proprietary column compression algorithm called AZ64. According to AWS, it's miles ahead of the competition: Compared to RAW encoding, AZ64 consumed 60–70% less storage, and was 25–30% faster. Compared to LZO encoding, AZ64 consumed 35% less storage, and was 40% faster. Compared to ZSTD encoding, AZ64 consumed 5–10% less storage, and was 70% faster. Opinion — I love it. The better the compression, the less space my data consumes and the faster my queries execute. I only wish Redshift would update my column compression for me when a better choice is available, instead of just informing me of a better option. Automatic Workload Management Workload management is key to a well-functioning, contention-free data warehouse. Set it up incorrectly, and every user is running against the same queue while a significant portion of your cluster goes unutilised. Automatic workload management (WLM) uses machine learning to dynamically manage memory and concurrency, helping maximize query throughput. Opinion — What's not to like? This feature has been a long time coming, and it is finally here. Organisations no longer need database administrators to monitor and optimise Redshift workload queues to achieve optimal performance. Distribution Key Recommendation (link) If you've ever worked on a distributed data system before (think Hadoop, Spark, any MPP database) you would know that the key to achieving excellent performance is picking the right key to distribute your data on. Redshift is no different. Creating a table requires a user to select a distribution strategy (be it round-robin, attribute-based, or broadcast). Identifying the right distribution strategy can sometimes be harder than it looks. Amazon Redshift Advisor now recommends the most appropriate distribution key for frequently queried tables to improve query performance. The Advisor generates tailored recommendations by analyzing the cluster's performance and query patterns. Opinion — AWS has done the hard yards here but stopped short. They've built a system to analyse and recommend optimal distribution keys, yet they decided to place the onus of altering the distribution key on the DBAs (a sketch of what acting on that looks like follows below). There has been no slowing down for AWS's Redshift team. New features have been rolling out all year long, and the platform has only been improving. I look forward to the changes the team has in store for 2020. If you're interested in all Amazon Redshift feature releases in 2019, see the below link
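For instance, acting on an Advisor recommendation yourself might look something like this (table and column names are hypothetical; I believe Redshift accepts ALTER TABLE ... ALTER DISTSTYLE KEY DISTKEY and the AZ64 encodings shown, but verify against the current docs):

import psycopg2

# Hypothetical endpoint, as in the earlier sketch.
conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="dev", user="admin", password="...")
with conn.cursor() as cur:
    # Re-distribute an existing table on the recommended join column.
    cur.execute("ALTER TABLE fact_sales ALTER DISTSTYLE KEY DISTKEY store_id;")
    # Opt into AZ64 for numeric/date columns on a new table
    # (AZ64 does not apply to character types, so the varchar uses ZSTD).
    cur.execute("""
        CREATE TABLE event_log (
            event_id   BIGINT       ENCODE AZ64,
            event_time TIMESTAMP    ENCODE AZ64,
            payload    VARCHAR(256) ENCODE ZSTD
        );
    """)
conn.commit()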
https://medium.com/weareservian/r-evolution-of-redshift-in-2019-7702e9a177
['Marat Levit']
2019-12-04 08:04:01.188000+00:00
['AWS', 'Data', 'Cloud', 'Data Warehouse', 'Redshift']
10 Must-Know Statistical Concepts for Data Scientists
5. Covariance and correlation Covariance is a quantitative measure that represents how much the variations of two variables match each other. To be more specific, covariance compares two variables in terms of the deviations from their mean (or expected) value. The figure below shows some values of the random variables X and Y. The orange dot represents the mean of these variables. The values change similarly with respect to the mean value of the variables. Thus, there is positive covariance between X and Y. (image by author) The formula for the covariance of two random variables is Cov(X, Y) = E[(X − µ_X)(Y − µ_Y)], where E is the expected value and µ is the mean. Note: The covariance of a variable with itself is the variance of that variable. Correlation is a normalization of covariance by the standard deviation of each variable: Corr(X, Y) = Cov(X, Y) / (σ_X σ_Y), where σ is the standard deviation. This normalization cancels out the units, and the absolute value of the correlation is always between 0 and 1. In the case of a negative correlation between two variables, the correlation lies between 0 and −1, so the full range is −1 to 1. If we are comparing the relationships among three or more variables, it is better to use correlation than covariance, because differing value ranges or units may lead to false conclusions.
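As a quick numeric check of these formulas (the data values below are made up purely for illustration):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 4.1, 4.8])

# Covariance: average product of deviations from the means.
cov_manual = np.mean((x - x.mean()) * (y - y.mean()))

# Correlation: covariance normalized by both standard deviations.
corr_manual = cov_manual / (x.std() * y.std())

print(cov_manual, corr_manual)
# np.corrcoef normalizes internally, so it should agree with the
# manual calculation regardless of the variables' scales or units.
print(np.corrcoef(x, y)[0, 1])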
https://towardsdatascience.com/10-must-know-statistical-concepts-for-data-scientists-645619783c08
['Soner Yıldırım']
2020-12-28 12:01:05.964000+00:00
['Data Science', 'Machine Learning', 'Statistics', 'Artificial Intelligence', 'Data Analysis']
What Characteristics My Services Should Possess
Low coupling You've seen the examples of wrong monolith splitting. What is the problem that these examples share? It is tight coupling, exactly what I try to get rid of: if you need to modify one service, it's likely that you'll have to modify a couple more of them. Along with low coupling usually comes high cohesion. When the coupling is tight, the cohesion is low. Interestingly, the opposite is true as well, but with a little remark: with the right service granularity, high cohesion results in loose coupling. It's not a generally recognized fact or rule — it's just my observation. And it is the one I use for finding service boundaries, with services being loosely coupled. For me personally this is simpler, as the notion of "loose coupling" seems too ephemeral to be used as a beacon for identifying service boundaries. High cohesion In a monolith the cohesion is low because one monolith, one piece of code, does the whole thing. If you use the approach I described in the "Wrong reuse" chapter to split your monolith, the cohesion is low as well, although all those services were created with cohesion in mind. For example, the usual mindset goes like "Well, the Ticket service contains all the logic related to tickets. Why isn't it cohesive?". The problem is that "all the logic" is a quite blurred notion. It's just a set of functionality related somehow to tickets, used by the other services across the ticket's whole life cycle. This Ticket service inherently cannot be cohesive. It reminds me of splitting a project into modules around design patterns: singletons, factories, strategies, etc. Another way could be to split the system around program constructs: classes, interfaces, objects (in case the language supports it), etc. Whereas the principle one should follow is semantics. Classes belonging to a module are used together, forming a coherent piece of functionality, telling about the domain. This is what modules are used for, this is what they are needed for. Talking about service cohesion, I keep in mind the same approach — "a coherent piece of functionality". Probably I should've started with it, but, nonetheless, just in case, let's dispel all doubts and clarify the word "cohesion". Here is what Wikipedia has to say about that: (…) cohesion measures the strength of relationship between pieces of functionality within a given module. For example, in highly cohesive systems functionality is strongly related. That's what I'm striving for. Correct granularity The approach I use is based on the notion of bounded context, taken from the book Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans. If some concept that I use in the code of my service is getting ambiguous (is the customer someone who browses the page or someone who has made a purchase?), if the same concept is used in semantically very different places that are interested in different data and behavior ("order" while making a purchase and "order" while delivering it), or the same entity reflects some domain concept in every stage of its life cycle (both examples are valid), then the chances are that the service is too coarse-grained and its cohesion is low. I highly recommend splitting it further to get more cohesive parts. As soon as we get services with monosemantic bounded contexts, where every concept is unambiguous — you're done. No further splitting is required. High autonomy Low coupling results in full conceptual service autonomy.
By autonomy I mean that a service's ability to do its job doesn't depend on the availability of other services. A service, in order to do its job, needs neither the functionality nor the data of other services. Moreover, a service might not even know about other services (and in most cases should not). Service autonomy manifests itself in the way services store their data and the way they communicate with each other. But of course it doesn't come by itself. You should pursue autonomy deliberately by identifying correct service boundaries, but you won't regret the time spent, as with high autonomy comes high business agility, the key business property. Autonomy manifests itself in communication via events and decentralized data storage. Services communicate via events Services don't live in a vacuum, so they communicate with each other. How to implement that? I advocate the use of behavior-centric, business-driven event message types, as opposed to synchronous requests and command messages. Such an architecture is called event-driven architecture. Published events should reflect business concepts, real things happening in the domain: order completed, transaction processed, invoice paid. Usually business policies don't require an immediate and transactional reaction. But nevertheless, if you think that an event should be processed inside a database transaction, and consequently the result of it should be "all or nothing, and immediately!" — think again. For me, personally, it helps to imagine how the business worked (or would have worked) a hundred years ago, when there were no transactions, and not even computers that made data exchange, data processing and communication extremely quick. By the way, it is partly because of this that identifying service boundaries is such a difficult task. But even now, with all these technologies at hand, our life still remains not so transactional and rarely synchronous, mostly message-driven. Take a look at something that seems transactional at first sight: buying a house. The realtor prepares the documents. You make the first deposit (it already gets less transactional, little by little, doesn't it?). But suddenly your realtor cancels the deal. Probably he's found a more profitable proposition. Or it turns out that the house is in an emergency condition and can't be sold. Probably he even wrote you an email after failing to reach you by phone, but unfortunately you saw it after the deposit had been put down. In this case the realtor, I hope, will give you your money back, and you'll be off to look for a new house. An email here is a command-based message, whose semantics is fire and forget. The realtor wrote you an email, and it doesn't matter what the response would be. The deal is canceled anyway, and you should do all you can to get your money back and find a new house. The realtor is not much involved in your activities. Talking about transactionality: firstly, you haven't paid the full price, only the first deposit. Secondly, the deal is canceled anyway, but for now you don't have your money. This can hardly be called a transactional interaction. Or take the classic example of a money transfer. When the cards are issued by different banks, the transfer can take up to several days, while the sender can be debited at once. Is this transfer period spanned by a database transaction? No.
Very roughly, this process can go like this (it can go another way though, depending on the concrete bank): first the sender's account is debited, then the sender's card-issuing bank starts the clearing process — a physical process of money movement between banks. After this the receiver's account is credited. As you see, it is far from being transactional. In other words, quite a few things in our life that seem to be transactional and synchronous are not such. And it's fine. But sometimes you really need a command There are some cases when a service is inherently request/reply-like. For example, it is a valid case for a service doing some analyzing job. In this case it requires some input data and reports a result at the output. I'm talking about situations when this is really a separate service, representing some business value at the same abstraction level as the others, so it cannot be put inside any of the existing services. In this case command messages or asynchronous request/reply come in handy. But such services should not be blocking, so that the client service invoking the considered service wouldn't wait for it to complete its job. Decentralized data Besides communication via events, service autonomy implies that there cannot (and should not!) be a shared database. Well, when the service boundaries are specified correctly, when each service is highly cohesive, services simply don't need other services' data. But if some service needs another service's data (which is a synchronous operation by nature), it's likely that they should be a single service. I like to compare it with the feature envy smell, which is a clear violation of the Information Expert principle from the GRASP guidelines. It's just another manifestation of the unity of concepts on different levels — be it an object or a service. When data is decentralized, in the case of EDA it is modified in a qualitatively different way. Now no service can directly invoke another one and modify its data. This can be done only through events. Why is it a qualitatively different way? Firstly, in most cases it means that a publisher should understand what happened from the business perspective. Otherwise it sometimes seems even unnatural to publish an event: it is not the simple CRUD operation that we all got used to; it's something different, requiring a different approach. And this different approach takes us closer to Domain-driven design, which is itself a huge benefit. Secondly, the subscriber decides how to react to an event, what data to modify and how. The publisher doesn't even know about its subscribers. It reminds me of the Dependency Inversion principle. Consider the following code:

class Server {
}

class Client {
    public function do(Server $server) {
        // ...
    }
}

In synchronous request-reply, the client depends directly on the concrete service it requests, on its API and availability. In event-driven architecture based on publish-subscribe, the subscriber doesn't care about the publisher. The subscriber doesn't know who's publishing an event. The subscriber takes the position of an experienced meditator who accepts reality as it is, just watching it. It doesn't care about the publisher's availability — never mind, messages can be delivered a bit later. Put otherwise, the publisher is totally abstracted from the subscriber, hiding behind the set of events that the publisher can emit, i.e., its contract. So the previous piece of code transforms into the following:

interface IEvent {
}

class Subscriber {
    public function subscribe(IEvent $event) {
        // ...
    }
}

Decentralized data makes high scalability possible. Very often it is the database that becomes a performance bottleneck, because usually you have to lock some data during request processing. When all the data is located in a single database, the probability rises that some request needs data that is already locked in a parallel request's transaction. So each request waits for the previous one to finish. And when data is decentralized, locks are decentralized as well: transactions span less data because request processing is split across different services, and there are fewer "accidental" locks of the kind I wrote about in the "Centralized data" chapter of my previous post. Service choreography Service choreography is a natural consequence of rejecting synchronous communication, using business events, and rejecting centralized data storage. A governing authority in EDA looks like an archaism from a synchronous past. Benefits So with event-driven architecture we don't have the disadvantages of synchronous communication, command-based communication, distributed transactions and orchestration. It is a win-win situation: with this fire-and-forget approach we decouple our services both logically and technologically, as messaging infrastructure promotes non-blocking communication. Concerning reuse, the unit of reuse in EDA is an event, not a service. If you need new functionality upon some event, then all you need is to add a new event subscriber (see the sketch below). Besides agility, reliability and availability, almost infinite scalability perspectives arise with this approach: services don't need to cope with peak loads. When we're flooded with messages, they simply reside in our ESB or broker until they get processed. This approach is nothing but common sense based on the experience of failed approaches to system architecture design. It is not a trend. It is not tied to any concrete technology. And surely it's not new. You can call it SOA, Microservice architecture, Reactive programming or Self-contained systems. For me it's like a bunch of people wanting to take (financial) advantage of a solid set of principles, coming up with new catchy labels in an attempt to write their names into history.
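To make the contrast concrete, here is a minimal in-process sketch of publish-subscribe in Python (the bus, event, and handler names are mine; a real system would sit on a broker or an ESB rather than on in-memory calls):

from collections import defaultdict

class EventBus:
    # A toy broker: publishers emit events without knowing who listens.
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Fire and forget: the publisher gets no reply from handlers.
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
# The billing service reacts to a business event; the order service
# that publishes it knows nothing about billing.
bus.subscribe("order_completed", lambda order: print("invoice for", order["id"]))
bus.publish("order_completed", {"id": 42})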
https://medium.com/hackernoon/what-characteristics-my-services-should-possess-ca22294bbea6
['Vadim Samokhin']
2018-04-10 17:33:32.685000+00:00
['Service Design', 'Soa', 'Microservices', 'Software Development', 'Software Architecture']
Communication Technology and Revolution
1436 — Johannes Gutenberg began work on his printing press 1450s — Gutenberg prints the "Gutenberg Bible" 1480s — 110 cities across Europe had printers 1500 — over 20 million books published 1517 — Martin Luther posted the "95 Theses" to the door of All Saints Church, which were then printed and distributed throughout Germany, starting the Reformation The creation of the printing press resulted in the democratization of knowledge. It is no coincidence that after the invention of the printing press by Gutenberg, the Reformation, the Renaissance, the Scientific Revolution, and the seeds of capitalism emerged in short order. There were, of course, critics — the Catholic Church being a prominent one — who thought that the printing press would spread heresies and misinformation. If anyone could rapidly get published without the priestly scribes as gatekeepers, anything could be published — even pornography. Well, of course, all of those things did happen. But so, too, did the various social and epistemological revolutions that resulted in the widespread wealth and knowledge we now enjoy. 1876 — Alexander Graham Bell received the patent for the telephone 1880s — Motion picture camera invented 1890s — Guglielmo Marconi invented the radio 1920s — Television invented The advent of a new set of communication technologies ushered in another revolutionary period. While we saw the unmitigated evils of fascism/National Socialism and Communism arise as well, this was also a period of revolutionary artistic innovation, a further democratization of knowledge, and an even more global perspective as a result of people being able to see news and films from other countries. There were of course people who were concerned that these technologies would corrupt the youth and spread dangerous ideas and even misinformation. Which, of course, did happen. But so, too, did the various social and epistemological revolutions that resulted in the even more widespread, globalized wealth and knowledge we now enjoy. 1960s — Internet developed 1980s — Internet connects U.S. to Europe and Asia 1989 — MCI Mail and CompuServe provide email and public access services to the half million internet users at the time Early 1990s — World Wide Web invented 1995 — NSFNet decommissioned by the U.S. government, making the internet commercially available to the public 1998 — Google founded 2001 — Wikipedia launched 2006 — Twitter becomes available to the general public 2006 — Facebook becomes available to the general public We are in the middle of the latest communications revolution. Like the others, it has resulted in the democratization of communication. With blogs and Twitter and Facebook and personal websites and so on, anyone can write or record anything and post it online for anyone to find. People have been concerned that it's corrupting the youth and is responsible for spreading dangerous ideas, heresies, pornography, fake news, misinformation, and so on — and it most certainly has. Just like every other new communications technology did (including writing itself, as Plato famously complained — though he did it in writing). What we are currently waiting for is the new renaissance. We are seeing the seeds of it around the world, in the Arab Spring, in Iran and Hong Kong and Bolivia right now, and potentially — I see the ducks being lined up — here in the United States as well.
Each revolution in communications technology has resulted in complete social and epistemological transformation, in political revolutions and massive paradigm shifts in the arts and sciences. Religions are overthrown, new religions established. Morals are revised and all values are revalued. This is the period through which we are living. There are those who viciously fight against it — the fascists, the alt-right — and there are those who see opportunities for utopian transformation to create their own vicious regimes — the communists, the social justice warriors — but the true revolution will not be realized by the vices of vicious ideologues. It never is. The true revolution will be realized by the radical realists who move forward with new ways of creating, new ways of seeing, new ways of understanding, new ways of doing. The true revolution will come with the artists, the scientists, the business people, the inventors who create the new paradigms and work within it to transform society for the better. This, I think, will be even more true now, because the paradigm in place to replace the old one is that of self-organized network processes and emergent properties. The utopians, like the reactionaries, believe in top-down organization. But this is the old paradigm, and it’s been thoroughly refuted both in the real world and theoretically. Which hardly means there aren’t people who continue to desperately cling to it. Of course there are. Marxism and other forms of socialism — solidly in the old paradigm — are on the rise as reactionary responses to the emergence of a new world paradigm. In that sense, there really is no difference between the alt-right and AntiFa, between the fascists and the socialists, between the conservatives and the progressives — all of these are old paradigms, fighting in their own ways against the emergence of the new one. Many of these battles are playing themselves out in our universities because our university system is a throwback to the Gutenberg paradigm shift, and nothing much has changed about it except the creation of increasingly massive amounts of debt for decreasing returns. When a vast majority of high school students go to college, it doesn’t even provide a signal anymore. It has barely provided anyone with an education for too long now. Worse, one can get completely educated in practically anything one wants online. Right now, the only value of universities is as a hotbed for left- and right-wing utopian reactionary thought. The universities are bound to become decentralized, and new ways of signalling that you have certain kinds of knowledge will emerge. This doesn’t mean the universities — and untold other kinds of institutions — won’t still be around. There are still Catholic Churches after the Reformation, after all, though the Catholic Church of today is hardly the same thing it was during the Medieval, Renaissance, or even Reformation periods. The good news is that, if history is any guide, though we will go through a dark revolutionary period, we will emerge on the other side much wealthier and with a new way of understanding and being in the world. And while I’m optimistic about freedom in general, the 20th century should have certainly taught us that people can be enslaved in utopian versions of whatever the new paradigm may be. 
We are now too many generations away from the memories of the evils of Marxism and the corruption and ineffectiveness of socialism, and it seems we may end up taking a reactionary step back to trying such corrupt and corrupting systems — but we can hope that they will be more short-lived this time as the world evolves past belief in such utopian fantasies. We can hope, too, that we will not be tempted by whatever utopian versions of the new paradigm may be — perhaps it is not too much to hope that the new paradigm will itself not be able to be made utopian, that that is one of its main features.
https://troycamplin.medium.com/communication-technology-and-revolution-8f3d6e29bba5
['Troy Camplin']
2019-11-19 18:27:11.966000+00:00
['Society', 'Revolution', 'Social Change', 'Culture', 'Communication']
Understanding Backpropagation Algorithm
The backpropagation algorithm is probably the most fundamental building block in a neural network. It was first introduced in the 1960s and, almost 30 years later (1986), popularized by Rumelhart, Hinton and Williams in a paper called "Learning representations by back-propagating errors". The algorithm is used to train a neural network effectively by applying the chain rule. In simple terms, after each forward pass through a network, backpropagation performs a backward pass while adjusting the model's parameters (weights and biases). In this article, I would like to go over the mathematical process of training and optimizing a simple 4-layer neural network. I believe this will help the reader understand how backpropagation works as well as realize its importance. Define the neural network model The 4-layer neural network consists of 4 neurons for the input layer, 4 neurons for the hidden layers and 1 neuron for the output layer. Simple 4-layer neural network illustration Input layer The neurons, colored in purple, represent the input data. These can be as simple as scalars or more complex like vectors or multidimensional matrices. The first set of activations equals the input values: a¹ = x. NB: "activation" is the neuron's value after applying an activation function. See below. Hidden layers The final values at the hidden neurons, colored in green, are computed using z^l — the weighted inputs in layer l, and a^l — the activations in layer l. For layers 2 and 3 the equations are: z² = W¹x + b¹ and a² = f(z²) for l = 2, and z³ = W²a² + b² and a³ = f(z³) for l = 3. W¹ and W² are the weight matrices used to compute layers 2 and 3, while b¹ and b² are the biases for those layers. Activations a² and a³ are computed using an activation function f. Typically, this function f is non-linear (e.g. sigmoid, ReLU, tanh) and allows the network to learn complex patterns in data. We won't go over the details of how activation functions work, but, if interested, I strongly recommend reading this great article. Looking carefully, you can see that all of x, z², a², z³, a³, W¹, W², b¹ and b² are missing the subscripts presented in the 4-layer network illustration above. The reason is that we have combined all parameter values in matrices, grouped by layers. This is the standard way of working with neural networks, and one should be comfortable with the calculations. However, I will go over the equations to clear out any confusion. Let's pick layer 2 and its parameters as an example. The same operations can be applied to any layer in the network. W¹ is a weight matrix of shape (n, m), where n is the number of output neurons (neurons in the next layer) and m is the number of input neurons (neurons in the previous layer). For us, n = 2 and m = 4, so W¹ is the 2 × 4 matrix with entries (W_ij)¹. NB: The first number in any weight's subscript matches the index of the neuron in the next layer (in our case this is the Hidden_1 layer) and the second number matches the index of the neuron in the previous layer (in our case this is the Input layer). x is the input vector of shape (m, 1), where m is the number of input neurons. For us, m = 4, so x = (x_1, x_2, x_3, x_4)ᵀ. b¹ is a bias vector of shape (n, 1), where n is the number of neurons in the current layer. For us, n = 2, so b¹ = ((b_1)¹, (b_2)¹)ᵀ. Following the equation for z², we can use the above definitions of W¹, x and b¹ to derive z² = W¹x + b¹. Now carefully observe the neural network illustration from above.
Input and Hidden_1 layers You will see that z² can be expressed using (z_1)² and (z_2)², where (z_1)² and (z_2)² are the sums of the multiplications of every input x_i with the corresponding weight (W_ij)¹. This leads to the same equation for z² and proves that the matrix representations for z², a², z³ and a³ are correct. Output layer The final part of a neural network is the output layer, which produces the predicted value. In our simple example, it is presented as a single neuron, colored in blue, and evaluated as follows: Equation for output s. Again, we are using the matrix representation to simplify the equation. One can use the above techniques to understand the underlying logic. Please leave any comments below if you find yourself lost in the equations — I would love to help! Forward propagation and evaluation The equations above form the network's forward propagation. Here is a short overview: Overview of forward propagation equations colored by layer The final step in a forward pass is to evaluate the predicted output s against an expected output y. The output y is part of the training dataset (x, y), where x is the input (as we saw in the previous section). Evaluation between s and y happens through a cost function. This can be as simple as MSE (mean squared error) or more complex like cross-entropy. We name this cost function C and denote it as C = cost(s, y), where cost can be MSE, cross-entropy or any other cost function. Based on C's value, the model "knows" how much to adjust its parameters in order to get closer to the expected output y. This happens using the backpropagation algorithm. Backpropagation and computing gradients According to the paper from 1986, backpropagation: repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. and the ability to create useful new features distinguishes back-propagation from earlier, simpler methods… In other words, backpropagation aims to minimize the cost function by adjusting the network's weights and biases. The level of adjustment is determined by the gradients of the cost function with respect to those parameters. One question may arise — why compute gradients? To answer this, we first need to revisit some calculus terminology: The gradient of a function C(x_1, x_2, …, x_m) at a point x is the vector of the partial derivatives of C at x: ∇C = (∂C/∂x_1, …, ∂C/∂x_m). The derivative of a function C measures the sensitivity to change of the function value (output value) with respect to a change in its argument x (input value). In other words, the derivative tells us the direction C is going. The gradient shows how much the parameter x needs to change (in the positive or negative direction) to minimize C. Computing those gradients happens using a technique called the chain rule. For a single weight (w_jk)^l, the gradient is ∂C/∂(w_jk)^l = (∂C/∂(z_j)^l) · (∂(z_j)^l/∂(w_jk)^l). A similar equation applies to a single bias (b_j)^l: ∂C/∂(b_j)^l = (∂C/∂(z_j)^l) · (∂(z_j)^l/∂(b_j)^l). The common part in both equations, ∂C/∂(z_j)^l, is often called the "local gradient". The "local gradient" can easily be determined using the chain rule. I won't go over the process now, but if you have any questions, please comment below.
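For readers who prefer code to notation, here is a minimal NumPy sketch of one forward pass and the chain-rule gradients for a single output layer (the shapes, the sigmoid activation, and the squared-error cost are my assumptions for illustration, not the exact network from the article):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))   # input vector (m = 4)
y = np.array([[1.0]])         # expected output
W = rng.normal(size=(1, 4))   # weights for a single output neuron
b = np.zeros((1, 1))          # bias

# Forward pass: z = Wx + b, prediction s = f(z), cost C = (s - y)^2.
z = W @ x + b
s = sigmoid(z)
C = (s - y) ** 2

# Backward pass via the chain rule:
# dC/ds, then the "local gradient" dC/dz, then dC/dW and dC/db.
dC_ds = 2 * (s - y)
dC_dz = dC_ds * s * (1 - s)   # local gradient
dC_dW = dC_dz @ x.T           # dz/dW = x (transposed to match shapes)
dC_db = dC_dz

# One gradient-descent step on the parameters.
lr = 0.1
W -= lr * dC_dW
b -= lr * dC_db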
https://towardsdatascience.com/understanding-backpropagation-algorithm-7bb3aa2f95fd
['Simeon Kostadinov']
2019-08-12 23:47:35.648000+00:00
['Algorithms', 'Artificial Intelligence', 'Deep Learning', 'Mathematics', 'Data Science']
A Transcendental Journey of the Soul on a Saturday
Approaching a crossroads of diverging paths, the weary boy ruminated on which path to embark on. Zealously uttering a plea for serendipity, the universe graciously surfaced from the weary boy's haven. The weary boy, entranced with newfound thrill, continued on his journey towards his treasure. You ever wake up in a mesmerizing state of awe? Astounded by the fact that you made it another day above ground? Life truly does take a turn when one adopts this mindset. As the end of 2020 is upon us, I can't help but be thankful for what this year had to offer. It was not altogether exuberant; quite the opposite, if I may add. Nonetheless, life's blows unveiled numerous aspects of life that I may have taken for granted. Life allowed me to be more appreciative of the simple things, to brightly wonder about the marvels of this abundant universe. The need to better one's life is a fundamental necessity. But, as a whole, I think we have placed a higher reverence on the realization of our dreams than on the journey. You must find admiration for the process as well, just as a lion exudes the same vigor in the hunt of its prey. In a way, Paulo Coelho's vibrant fable was an omen of mine. A sign to continue forward, that I indeed may be approaching my oasis. Change is undoubtedly a rugged path; for this reason, only a few bring to life their own Personal Legend, as Paulo would put it.
https://medium.com/change-your-mind/a-transcendental-journey-of-the-soul-on-a-saturday-c2270fb53b38
['Kevin Ishimwe']
2020-12-16 11:36:31.641000+00:00
['Life Lessons', 'Self Improvement', 'Happiness', 'Love', 'Creativity']
Odds Ratio Does What Risk Ratio Fails to Do — an Intuitive Example
Odds Ratio Does What Risk Ratio Fails to Do — an Intuitive Example Understand the concept from scratch Photo by Sydney Sims on Unsplash Let's begin by dropping the ultimate takeaway of this blog: "The odds ratio is a consistent measure across both the population and sample statistics (case-control studies with a significant effect), where the risk ratio shows inconsistency." Now the question that comes to our mind is — why is that? I will prove this using a relatable example so that we can build an intuitive sense around it as well. But before jumping to the example, let's see how risk and odds behave: (Image by author) Notice how the odds value gets closer and closer to the risk value as the risk gets lower. It is for this reason that odds and risk are interpreted similarly in cases where the event rate (Positive here) is relatively low. In cases where the event rate is relatively high (above 0.3, or 30%), there is a significant difference between the two measures. But the point we are trying to understand here is why the odds ratio stays consistent in case-control studies, whereas the risk ratio becomes inconsistent. Time to explain this with the help of an example. The scenario we are considering is related to the sensitive issue of depression and its dominance in corporates: (Image by author) The visual you see above is a representation of the population; however, for case-control studies, only samples are taken from both the affected population and the unaffected population. Let's first calculate Risk(Corporate), Risk(Non-Corporate), Odds(Corporate), Odds(Non-Corporate), the risk ratio, and the odds ratio, taking approximate population figures just for the sake of understanding the concepts: (Image by author) Risk(Corporate) = 0.8/2 = 0.40 Risk(Non-Corporate) = 2/6 = 0.33 Odds(Corporate) = 0.8/1.2 = 0.67 Odds(Non-Corporate) = 2/4 = 0.50 Risk Ratio = 0.40/0.33 = 1.21 Odds ratio = 0.67/0.50 = 1.34 You will agree that the risk ratio (also known as "relative risk") is a more intuitive measure than the odds ratio, because it conveys that a person working in a corporate has 1.21 times the risk of having depression compared with a non-corporate one. Making intuitive sense of the odds ratio is not easy, but it is a consistent measure when it comes to case-control studies (sample statistics). Let's see how: (Image by author) From both the depressed population (X+Z) and the depression-free population (Y+N), 1000 samples each are taken to conduct the case-control study. The values look like this: (Image by author) Now we shall observe the inconsistency in the risk ratio that we have been talking about since the start of this blog, and the sole reason for it is the under-representation of the depression-free population in the case-control study. Risk(Corporate) = 285/515 = 0.55 Risk(Non-Corporate) = 715/1485 = 0.48 Odds(Corporate) = 285/230 = 1.23 Odds(Non-Corporate) = 715/770 = 0.92 Risk Ratio = 0.55/0.48 = 1.14 Odds ratio = 1.23/0.92 = 1.34 Appreciate how the risk ratio has changed in the case-control study, while the odds ratio remained the same (consistent).
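The whole argument is easy to verify in code, using the figures from the example above (population counts in millions, followed by the two 1,000-person case-control samples):

def risk_ratio(exp_pos, exp_neg, unexp_pos, unexp_neg):
    # Risk = positives / group total, for the exposed and unexposed groups.
    risk_exposed = exp_pos / (exp_pos + exp_neg)
    risk_unexposed = unexp_pos / (unexp_pos + unexp_neg)
    return risk_exposed / risk_unexposed

def odds_ratio(exp_pos, exp_neg, unexp_pos, unexp_neg):
    # Odds = positives / negatives within each group.
    return (exp_pos / exp_neg) / (unexp_pos / unexp_neg)

# Whole population (millions): corporate 0.8 depressed vs 1.2 not,
# non-corporate 2 depressed vs 4 not.
print(risk_ratio(0.8, 1.2, 2, 4))  # ~1.20 (the article rounds to 1.21)
print(odds_ratio(0.8, 1.2, 2, 4))  # ~1.33 (rounded to 1.34)

# Case-control samples: corporate 285 vs 230, non-corporate 715 vs 770.
print(risk_ratio(285, 230, 715, 770))  # ~1.15: the risk ratio drifts
print(odds_ratio(285, 230, 715, 770))  # ~1.33: the odds ratio holds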
To find out why the odds ratio stays consistent, let us look at its formula: Odds ratio = (x/y)/(z/n), where x and z are the corporate and non-corporate counts in the depressed set, and y and n are the corresponding counts in the depression-free set. A little rearrangement of the formula totally takes care of the issue of under-representation: Odds ratio = (x/z)/(y/n). Now the numerator reflects the split of values (corporate vs. non-corporate) in the depressed set, and the denominator reflects the split of values (corporate vs. non-corporate) in the depression-free set, thereby computing the same value in the case-control study as in the whole population study. It is for this reason that the odds ratio measure is preferred over the risk ratio when making an estimate with confidence intervals in case-control studies. Conclusion: Only when the event rate is relatively low and there is no significant effect between categories do the risk ratio and the odds ratio behave in a similar manner. Moreover, risk involves a forward-looking scenario, i.e., after a subject is exposed to something, the likelihood of him/her developing the outcome is computed, whereas the odds measure works the opposite way, i.e., for a subject who has already developed the outcome, the odds of him/her having been exposed to something in the past are the quantity of interest. I hope I was able to establish the distinction between the risk ratio and the odds ratio with a relevant example. Make sure to visit my profile for similar intuitive and simplified blogs. Many more to come…. Thanks!!!
https://medium.com/towards-artificial-intelligence/odds-ratio-does-what-risk-ratio-fails-to-do-an-intuitive-example-312ff89a08b4
['Atul Sharma']
2020-12-04 13:02:59.762000+00:00
['Machine Learning', 'Artificial Intelligence', 'Mathematics', 'Data Science', 'Data']
Retrieval Augmented Generation. A new sheriff in town? 🤔🤔🤔
Hey guys, back to my blog with my second paper analysis. This time we are diving deep into the paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" by Facebook AI. This blog is kind of a long read and references several other sources, so bear with me. We will go through it step by step. Have fun reading. Pre-trained neural language models Pretrained models have been around for quite a while now. Over the years they have been shown to learn substantial knowledge from textual datasets. They can store knowledge without any access to external resources or databases. While this is exciting, such models have severe downsides. They are rigid in design and cannot easily be expanded (fixed memory size), and many times they produce very vague output in various text generation tasks. This makes them poor at both open- and closed-book question answering. They cannot easily expand or revise their memory, can't straightforwardly provide insight into their predictions, and may produce "hallucinations". — Retrieval-Augmented Generation To answer these issues, a new concept of hybrid models was introduced, which combine parametric memory and non-parametric (i.e., retrieval-based) memory. Knowledge can now be directly revised and expanded, and its access can be inspected and interpreted (this is due to the presence of non-parametric memory; REALM and ORQA were initial models based on this approach). Well, the above paragraph might just bounce over your head, so let me give you a brief explanation of parametric and non-parametric memory. Parametric model: A learning model that summarizes data with a set of parameters of fixed size (independent of the number of training examples) is called a parametric model. No matter how much data you throw at a parametric model, it won't change its mind about how many parameters it needs. — Artificial Intelligence: A Modern Approach Nonparametric model: Nonparametric methods are good when you have a lot of data and no prior knowledge, and when you don't want to worry too much about choosing just the right features. — Artificial Intelligence: A Modern Approach RAG, i.e. Retrieval-Augmented Generation, is an example of this. To give you an analogy, I would say language models try to mimic the human brain's textual knowledge. What RAG does is try to mimic the human brain combined with access to books (or any external information source; the internet would be an exaggeration!). We cannot learn and store everything, so we look up answers in books whenever we need to find something new or verify something. The RAG language model has two memory parts, as mentioned above. The parametric memory is a pre-trained generative seq2seq transformer (BART) and the non-parametric memory is a dense vector index of Wikipedia accessed with DPR (a pretrained neural retriever). RAG is a text-in, text-out model and can thus be used for any seq2seq task (just like BART and T5). This lets RAG combine the strengths of open-book and closed-book question answering. RAG models use the input sequence x to retrieve text passages z and use these passages as additional context when generating the target sequence y. Here is a pictorial representation of the RAG model combining a pre-trained retriever with a pre-trained encoder-decoder, fine-tuned end-to-end.
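If you want to try RAG yourself, the Hugging Face transformers library ships the pretrained Facebook checkpoints; a minimal sketch follows (the dummy-index flag keeps the download small, and the exact API may have changed since this was written):

from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever
)

inputs = tokenizer("who wrote the paper on retrieval augmented generation?",
                   return_tensors="pt")
# The model retrieves passages for the query (non-parametric memory)
# and conditions its seq2seq generation on them (parametric memory).
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])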
https://towardsdatascience.com/retrieval-augmented-generation-a-new-sheriff-in-town-571dc5999b1b
['Parth Chokhra']
2020-11-05 02:55:13.722000+00:00
['Machine Learning', 'NLP', 'Deep', 'AI', 'Data Science']
Why Night Running is a Training Opportunity Hidden in Plain Sight
There are so many opportunities for night running through the week. Photo by Zac Ong on Unsplash Birds are hiding something from you. It's the little drab ones that have the biggest surprise, too. You may think that tropical birds are the only ones with brilliant colours and striking patterns, but what you may not realize is that birds can see in the Ultraviolet (UV) spectrum. Because birds can see in UV, they also have plumage and patterns that only show up in the UV spectrum. Since human eyesight works differently, there's a lot of vibrant detail you won't catch. So that ho-hum duck you saw at the pond recently? In the UV spectrum, it truly has vivid stripes and patterns. But to us, they are hidden in plain sight. When something is hidden in plain sight, you might think of it as a blind spot. When it comes to running, many of us may have a blind spot to training at night. It might be something we didn't consider because of safety reasons. Nevertheless, night running provides a lot of excellent training advantages. Many birds have wild patterns visible only in UV. Picture credit: Jamie Dunning at the University of Nottingham What we're going to cover in this article is: Seasonal availability Scheduling week vs weekend Crowding The first advantage we're going to talk about is seasonal availability. Not everyone lives near the equator If you live somewhere near the equator, daylight is relatively consistent. This makes it pretty easy to keep a consistent running schedule throughout the year. But if you live farther away from the equator, there can be significant seasonal shifts in the amount of daylight. Running in daylight is great However, daylight becomes a scarce resource, especially in the winter. If you restrict yourself to running in daylight, the opportunities to go out for a run get fewer and fewer. There are many weeks in the fall, winter, and spring that open up as opportunities to run once you're on board with night running. But no matter the season, the sun goes down eventually. This brings us to the second point on advantages: scheduling week vs weekend runs. The weekend time is a precious commodity Everyone has plans and priorities. For me, a lot of time on the weekend is busy with family. It can be tough to carve out an hour to be away on a run. This is where the scheduling advantage of night running really shines. There's time before bed during the week If you have kids, there's a window of time after they go to bed, but before you hit the sack, too. But this window of time is later in the evening when there is typically not much sunlight left. However, it's an excellent opportunity for night running, especially if it means you can offset a half-hour or hour from the valuable weekend time you need for other things. Seasonal availability and a flexible schedule are both advantages for when you can train. But night running can also make training less awkward. This brings us to the final point: crowding. Crowding may not always seem like a big problem For some, it may be a measure of comfort to have a crowd around for safety reasons. There's no disputing there is safety in numbers, but crowds can also be a disadvantage. When sidewalks and trails have a lot of people on them, it's a disadvantage to running. Is there someone else on the sidewalk or the path? Getting around them can be awkward if you're on a narrow sidewalk on a road bustling with car traffic. If you're on a shared-use trail with bicycles, this can also be awkward and distracting.
Car and pedestrian intersections tend to be much busier during the day and can interrupt the momentum you have going on your run. When you’re night running, there’s just less crowding There are not many people on sidewalks or trails. Often, traffic intersections are so quiet they are set to trigger immediately to allow pedestrian crossings, so you don’t have to wait. These are all little things that add up to a less awkward, smooth, and consistent run. But what about safety? There are many facets to safety when it comes to night running. Because there are so many concerns, night running as a whole concept is easy to dismiss. Indeed, there are safety issues regarding seeing, being seen, planning your route, and communications. Because there are so many details on safety, we’re going to cover these in separate articles. Ultimately, the decision about safety is for each individual to make, and if you’re in doubt — don’t go out! In summary Once you consider the concept of night running, you realize there is a huge opportunity to train throughout the year. It can free up valuable weekend time, too, which might be better spent on other projects or with family. You can also train in a smooth and consistent way without awkwardly dodging people on sidewalks and bicycles on shared paths. It’s easy to overlook humdrum-looking birds, even though they really do have a lot of amazing beauty, hidden in plain sight. Similarly, you might be overlooking the advantages of night running. There are viable concerns with safety that need to be addressed, but the advantages are real. Next Step: Read more on birds that glow in UV here.
https://medium.com/runners-life/why-night-running-is-a-training-opportunity-hidden-in-plain-sight-8ea9b32806b3
['Ryan S Nicoll']
2020-12-16 02:30:52.303000+00:00
['Fitness', 'Sports', 'Running', 'Health', 'Exercise']
Linear Regression to analyze the relationship between points and goal difference in Premier League standings
Abstract This paper explains the application of linear regression to analyze the relationship between goals scored, goals allowed, and goal difference with points in the final standings of the English Premier League. It shows that there is a strong linear relationship between goal difference and points, as well as a relatively strong linear relationship between goals scored and goals allowed with points. The purpose is to gain insight into the utility of mathematical applications in analyzing the Premier League, which can be applied to analyze other soccer leagues as well. This paper also suggests that linear regression is a useful tool to measure the relative value of attackers and defenders for English Premier League teams under certain assumptions. Given the same capability, a defender should be purchased over an attacker to achieve a higher standing. 1 Introduction In a world full of an enormous amount of data, it is essential for us to process and analyze these data. Linear regression, a mathematical approach that models the relationship between a dependent variable and one or more independent variables [1, 4], plays a crucial role in analyzing our daily lives, since the method can be employed extensively in practical applications such as the evaluation of house prices, the identification of differentials between housing areas, and the analysis of personal health conditions and insurance. Linear regression is also significant in the field of machine learning. The linear regression algorithm is a fundamental machine-learning algorithm in view of its relatively simple and widely-known properties [2]. This article mainly focuses on linear regression with one independent variable and one dependent variable, which is called simple linear regression. It concerns two-dimensional sample points in a Cartesian coordinate system and predicts the linear relationship between the dependent variable and the independent variable by finding a linear function that best fits the data points [3]. There are several assumptions behind linear regression models, such as constant variance and independence of errors, which mean that there are no outliers in the data set and that the errors of the dependent variables are uncorrelated with each other [1]. A real-life example of the English Premier League standings will also be presented to explore the linear relationship between points and goal difference, since a club with a larger goal difference often ends the season with higher points, followed by an extension that addresses the question of whether a team should buy a better attacker or a better defender. The hypothesis is that points and goal difference have a strong linear relationship, and that teams in the English Premier League should prioritize the purchase of defenders over attackers, since it is widely known that defense wins championships. 2 Mathematical Methods 2.1 Vectors The first math concept is the vector, a column of n numbers. If x is a vector, then x is of the form x = (x_1, x_2, …, x_n)ᵀ, where x_1, …, x_n are real numbers. n is called the number of components of x, and correspondingly, x is an n-vector. The length (norm) of an n-vector x is ‖x‖ = √(x_1² + x_2² + ⋯ + x_n²). The distance between two n-vectors x and y is the norm of their difference: dist(x, y) = ‖y − x‖. 2.2 Linear Regression Each given point can be written in the form (x_i, y_i). Then we can construct two n-vectors, x and y respectively, to represent the collections of x_1, …, x_n and y_1, …, y_n.
The best fit line can be expressed in the form y = αx + β; accordingly, we can construct an n-vector whose values are the y values of this line corresponding to each x. Since the line best fits the data, α and β should be the numbers that make the distance between y and the vector of the function y = αx + β as small as possible. Assume that the function J is defined as the average of the sum of the squared distances from y to αx + β, so that J(α, β) = (1/n) Σ_{i=1}^{n} (αx_i + β − y_i)². The reason why the differences need to be squared is that a positive accumulated difference would otherwise be compensated by negative differences, and vice versa. When the differences are squared, the accumulated difference can only increase, which is convenient for finding the best fit line. From the function J, there are two observations. The first is that the α and β that best fit the data are the α and β that minimize J, because the function represents the total distance between the best fit line and the original points; a shorter distance means the line fits the data points better. The second is that the function is quadratic in each variable. Every quadratic function can be written in the form ax² + bx + c, and when the leading coefficient a is larger than 0 — as it is here, since the coefficient of α² is Σ x_i²/n, which is positive — the function attains its minimum value at x = −b/(2a). If β is regarded as a constant and α is the variable, expanding the function J and applying the second observation shows that J attains its minimum when α Σ x_i² + β Σ x_i = Σ x_i y_i. Likewise, if α is regarded as a constant and β is the variable, J reaches its lowest point when α Σ x_i + nβ = Σ y_i. Therefore, J is minimized by the α and β satisfying the system of equations α Σ x_i² + β Σ x_i = Σ x_i y_i and α Σ x_i + nβ = Σ y_i. In order to solve this system of linear equations in two unknowns, α and β in this case, the solution formula for equations with two variables can be recalled. Suppose the system of equations is a_1 x + b_1 y = c_1 and a_2 x + b_2 y = c_2. The system has exactly one solution if and only if a_1 b_2 − a_2 b_1 ≠ 0, and then the solution is x = (c_1 b_2 − c_2 b_1)/(a_1 b_2 − a_2 b_1) and y = (a_1 c_2 − a_2 c_1)/(a_1 b_2 − a_2 b_1). Since this solution concludes the mathematical methods part, an example of the English Premier League will be presented in the next section. 3 Premier League The Premier League is the top level of the English soccer pyramid, in which twenty teams compete against each other, playing each other team twice for a total of 38 games. In general, the team with the larger goal difference, calculated as the number of goals scored minus the number of goals conceded, will often win more games. Winning more games will lead to higher points earned, because a won game is worth 3 points whereas a drawn game is worth 1 point and a lost game is worth 0 points. Therefore, there should be a linear relationship between goal difference and points earned by a team at the end of the season. In order to explore the relationship, the mathematical methods in section 2 are applied. The standing of the Premier League in the 2018–2019 season is randomly chosen to be the example in this section. A table below is created to record the points, goal difference, goals scored and goals allowed, which correspond to the second to the fifth column respectively.
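Before turning to the data: the closed-form solution derived in Section 2 translates directly into code. A minimal sketch follows (the sample numbers here are invented purely to exercise the formulas, not taken from the actual league table):

import numpy as np

def fit_line(x, y):
    # Solve the two-equation system for the best-fit y = alpha * x + beta.
    n = len(x)
    a1, b1, c1 = np.sum(x ** 2), np.sum(x), np.sum(x * y)
    a2, b2, c2 = np.sum(x), n, np.sum(y)
    det = a1 * b2 - a2 * b1          # unique solution iff det != 0
    alpha = (c1 * b2 - c2 * b1) / det
    beta = (a1 * c2 - a2 * c1) / det
    return alpha, beta

x = np.array([-30.0, -10.0, 0.0, 20.0, 50.0])  # goal differences (illustrative)
y = np.array([35.0, 45.0, 52.0, 66.0, 85.0])   # points (illustrative)
print(fit_line(x, y))  # alpha ~ 0.64, beta ~ 52.8 for this toy data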
In this section, since we explore the impact of goal difference on points in the final standings, the independent variable (the x-axis) is goal difference and the dependent variable (the y-axis) is points. Each data point is plotted as shown in Figure 1, along with the best fit line. It is obvious that there is a linear relationship between these two variables. Using the equations discussed in Section 2, α and β are calculated to be 0.64 and 53.45 respectively. Noticeably, the best fit line drawn in the plot has gradient equal to α and y-intercept equal to β. Since the x-axis is the goal difference and the y-axis is the points of a club at the end of the season, the gradient indicates that if a club's goal difference is larger by one, then on average it will end the season with 0.64 more points. The y-intercept indicates that if a club scores as many goals as it concedes, it will end the season with 53.45 points. This revelation of the linear relationship between goal difference and points inevitably leads to another question for clubs: whether they should spend money on an attacker or spend the same amount of money on a defender, assuming that the two players are identical, with the same capability and efficiency, but playing in different positions. To delve into this question, we can again use linear regression to analyze it quantitatively. 4 Attackers or Defenders? Similar to the procedure in Section 3, the only change required is the x-axis variable. Instead of goal difference, it is replaced by goals scored and then by goals allowed. Admittedly, some teams that end the season with a higher ranking do not necessarily score more goals than teams with fewer points, since they may simply allow fewer goals. Conversely, some teams with more points may allow more goals because they score plenty of goals to compensate. Nevertheless, after plotting each point, there is still a linear relationship, though with a weaker correlation than in Section 3. Figure 2 is the plot of goals scored vs. points and Figure 3 is the plot of goals allowed vs. points. The slope of the best fit line in Figure 2 is positive, because more goals scored should lead to more points, whereas the slope of the best fit line in Figure 3 is negative, because fewer goals allowed leads to a higher ranking. In Figures 2 and 3, the fitted α values are 1.129 and -1.230 respectively, which means that one more goal scored is associated with finishing the season 1.129 points higher, and one fewer goal allowed with finishing 1.230 points higher. Intuitively, scoring a goal and conceding a goal should cancel out, so the sum of these two effects might be expected to be 0 rather than about -0.1: in a real game, after scoring one goal and conceding one goal, two teams are back at the same starting line, rather than each having lost about 0.1 points. Returning to the question proposed at the end of Section 3 on whether a club should buy an attacker or a defender: quantitatively speaking, in terms of the results of the linear regression, a defender should be purchased, since a goal prevented is worth more points than a goal scored. Note that this case holds specifically for the Premier League; in other leagues the result can be different. The result is, to some extent, reasonable in that there are many clubs in the Premier League capable of winning the tournament.
For example, the top tier contains six teams that dominate the Premier League. In contrast, in other European leagues such as the Italian Serie A and the Bundesliga, a single team, Juventus and Bayern Munich respectively, has been champion for more than five years, and Real Madrid, Barcelona, and Atletico Madrid dominate La Liga. As a result, when the teams in the Premier League have similar capabilities, the goal difference in each game will be small. For the top six teams, a better defense will ensure that the team allows fewer goals, since their offense is already relatively strong, which brings more points considering the fact that a draw is worth 1 point and a win is worth 3 points. Other teams, which are relatively weak, can earn a point through strong defense by making the stronger teams unable to score, which is exactly what those teams have been doing in recent seasons. It is not uncommon for a team in the last five positions of the standings to earn a point in a game against one of the top six teams. However, in leagues in which there is a huge gap between strong teams and weak teams, when a weaker team allows plenty of goals, it should consider reinforcing its offense to score as many goals as possible to compensate for the goals conceded. It is the same case for strong teams whose defense is already good enough: they should consider scoring more goals to ensure the win. Buying a defensive player is then not as efficient as buying an attacking player. Again, it is a matter of the power difference between the strong teams and the weak teams in a league. 5 Conclusion After the plots were generated and analyzed, the results support the hypotheses: there exists a strong linear relationship between goal difference and points, and a defender should be bought to earn a higher standing compared with an attacker in the English Premier League. The saying "defense wins championships" is certainly reasonable. The implication of this research is a better understanding of the significance of mathematics in real life. Linear regression, an indispensable part of mathematics, is a vital and practical tool that has already been widely applied in fields such as artificial intelligence and business. The paper dives deeper into the principles and operations of linear regression, which can further provide readers with insights to comprehend linear relationships between other variables beyond sports such as soccer. Many real-life examples can be quantitatively analyzed by applying the method of linear regression, enabling people to find some particular order in our complex world. References [1] David A. Freedman. Statistical Models: Theory and Practice. Cambridge University Press, 2009. [2] Tarunpreet Kaur. Factors Affecting Health Insurance Premiums: Explorative and Predictive Analysis. 2018. [3] A. Mehra. Statistical Sampling and Regression: Simple Linear Regression. PreMBA Analytical Methods. Columbia Business School and Columbia University, 2003. [4] Sanford Weisberg. Applied Linear Regression, volume 528. John Wiley & Sons, 2005.
https://canem292.medium.com/linear-regression-to-analyze-the-relationship-between-points-and-goal-difference-in-premier-league-6f907b5d0ae9
['Zian Chen']
2020-10-27 02:20:44.802000+00:00
['Mathematics', 'Premier League', 'Soccer', 'Linear Regression', 'Data Science']
Out of the Mess, Goop, and Pain, Something Holy is Born
If Advent is a journey, Christmas Day seems to be one of arrival. After all, Mary gave birth to the Holy One in a long-ago stable, right? Today is the big day with the birth of One, who many consider the — or at least their — Lord and Savior. That concept has come to mean some very specific, carefully — and no doubt — politically codified things in official churches’ doctrines and dogma. That’s not the page I live or write on. I am all about exploring the mystical and metaphorical in these teachings and stories. Often that takes me down some exciting side trails. Today I want to focus on birth. The Bible doesn’t discuss Mary’s actual labor and delivery process. It skips right to the Blessed Baby, implying that whatever labor pains and suffering Mary might have endured, she did so nobly given her mission, sweetly and quietly. Or at least off-camera. I have never given birth, but I’m told it’s a messy, noisy, often painful process. For what’s inside to come out, especially through a relatively narrow opening, a lot needs to happen. The uterine muscles contract, pushing the baby into position and down the birth canal. The baby gets into position via a turning process known as transition. I’m told that can be hell on a delivering mom’s lower back in particular. The baby’s head dilates the cervix as s/he leaves the uterus for the birth canal. The vaginal opening is also stretched, sometimes tearing, sometimes given a pre-emptive cut. There’s amniotic fluid and the goo the baby emerges covered with. What about Jesus’ afterbirth? I assume they buried it outside that stable, ensuring nearby crops were extra bountiful come harvest season. This was well before modern medicine, epidurals, spinal blocks, Lamaze techniques, and the ability to deliver via Cesarean Section should need arise. Mary had it rough. Deaths of mothers and babies were common back in the day before sterile techniques. Add the whole barn thing with straw that animals have slept, stepped, and peed on, and the dangers magnify. All this to say, if Mary needed to shriek, yell, scream and cry out, even curse, it would have been totally understandable. If her face contorted in pain, and she did not look her usual beatific self, we’d get it. In fact, many of us would be relieved to know she’s one of us! And yet, in keeping with the story, it’s a holy pain. It’s a sacred birth. The concepts aren’t mutually exclusive. It’s all part of Divine Creative Expression. Not just the pretty after pictures, but the before and during as well. Today it’s about the during part. The messy, goopy pain of giving birth. Of most forms of creation. All through Advent, I’ve been asking the question in several ways: What longs to be born in you? What qualities of God are you bringing forth into the world? What light cracks out of your soul in the darkness to illuminate your path? If we are all giving birth to the Christ Consciousness or God Light within us, we are all Marys. Even men. This is where the birth metaphor morphs into the creative metaphor. So if the gestating/birthing images don’t work for you, perhaps creativity is a better fit. So let me say it another way. We are all creators. We are all artists — of our lives if nothing else. In her Advent collection of collages and reflections, Night Visions, Jan L. Richardson writes: The celebration of Christ’s birth beckons us to consider what has lain dormant in our own lives, and what new life lies waiting beneath the surface.
As women, and as men, during this season, we share with Mary and Joseph in giving birth to the holy. Bringing forth the sacred depends not solely on the physical ability to give birth. Although that is one way to share in creating with God, we give birth, too, when we create with our hands, offer hospitality, work for justice, or teach a child. We share in giving birth whenever we freely offer ourselves for healing, for delight, for transformation, for peace. And we become, as German mystic Meister Eckhart wrote in the Middle Ages, ‘mothers of God, for God is always needing to be born.’” The question becomes — what are we creating today? With our lives. With our hands. With our hearts. With our words and deeds. Like birth, creativity is messy. Artists’ studios testify to that. But out of that cacophony of paint, brushes, clay, glazes, various oils, rags, funny smells, piles of canvases, stacking drawers of paper, come masterpieces. Even though I am typing on my laptop, this page is messy. Below these words are quotes and passages I have cut and pasted here from other sources. They inspire me, and some may actually end up in this story. You don’t get to see the contortions my face makes, especially when I check my word count. None of this negates the sacred nature of creativity. Are you one of those folks who downplays the significance of your work? Especially when someone compliments it? Oh, this? I just threw it together at the last minute. You may think so, but then again, you might have been gestating it subconsciously for nine months. Or your whole life. All our life experiences crystallize into how we express ourselves moment by moment. Even if we’re not aware of it. Sometimes, like when we yell something our parents yelled at us, we’re painfully aware of it. So value yourself and your work. If you do, others will as well. Picture that holy beam of light illuminating Madonna and Child lighting up you and your work. Bless it. Care for it. Give it the honor it deserves, whether that’s a show, a reading, a publication (that’s why we’re here on this platform, right?) or a production. Take pride in it and feel the joy of creation in your process. Not just the joy of having the darn thing finally done. But the creative process itself. The brainstorming and initial sketching. The research. The early drafts. The edits. Oh, the endless edits. The well-worth it edits. The ones that shape, sharpen, and give sparkle. If the nature of the Sacred is to continually create, that’s our nature too. In one of his works on Meister Eckhart, Matthew Fox says, Joy is intrinsic to the creative process. This Christmas, let it fill our hearts with joy. Let’s not judge the products of our creation. That is not ours to do; ours is to bring it forth. Martha Graham to Agnes De Mille: This reminds me of what modern dancer Martha Graham said to choreographer Agnes De Mille over sodas one day back in the 1940s. De Mille was frustrated that her work for the musical Oklahoma!, which she considered “only fairly good,” was touted as a “flamboyant success.” As shared on BrainPickings.com, De Mille confessed to Graham a burning desire to be excellent, but no faith that she could be. Here’s what happened next in her words: Martha said to me, very quietly: “There is a vitality, a life force, an energy, a quickening that is translated through you into action, and because there is only one of you in all of time, this expression is unique. And if you block it, it will never exist through any other medium and it will be lost.
The world will not have it. It is not your business to determine how good it is nor how valuable nor how it compares with other expressions. It is your business to keep it yours clearly and directly, to keep the channel open. You do not even have to believe in yourself or your work. You have to keep yourself open and aware to the urges that motivate you. Keep the channel open. As for you, Agnes, you have so far used about one-third of your talent.” “But,” I said, “when I see my work I take for granted what other people value in it. I see only its ineptitude, inorganic flaws, and crudities. I am not pleased or satisfied.” “No artist is pleased.” (Graham) “But then there is no satisfaction?” (De Mille) “No satisfaction whatever at any time,” she cried out passionately. “There is only a queer divine dissatisfaction, a blessed unrest that keeps us marching and makes us more alive than the others.” Let’s harness that divine dissatisfaction to fuel the fullest exploration of our talents and passions. But let’s also enjoy the process itself, however painful, however “ugly” we think it or we look. Let it be our Holy Creative Joy. Let it be our gift to the world. Namaste.
https://medium.com/change-your-mind/out-of-the-mess-goop-and-pain-something-holy-is-born-821082ead4f
['Marilyn Flower']
2020-12-26 15:08:10.895000+00:00
['Spirituality', 'Creativity', 'Birth', 'Christmas', 'Pain']
MIT 6.00.1x/2x : A Course to Close Your Computer Science Gap as a Data Scientist
Overview You can find the courses here and here. The courses’ full names are “Introduction to Computer Science and Programming Using Python” and “Introduction to Computational Thinking and Data Science”, and there is an XSeries bundling the two. They belong to the “Computational Thinking using Python” program on edX. It used to be one course (YouTube playlist here) for MIT on-campus students. They split it into two parts, so it’s more digestible for online education. We can still treat them as one course since the lessons are closely related to each other. Having taken both courses, here are my two cents if you don’t have the time to read through the rest of this article: Starts from ‘level 0’, with a well-designed learning curve, but still challenging. Problem sets are challenging and fun, worth putting effort into. It covers most of the key concepts in Python, OOP, data structures, and algorithms (sorting, dynamic programming, etc.). The last part, on machine learning, falls somewhat short if you come from a Data Science/AI background. Some creative and fun ways of teaching (don’t miss the ‘archery in class’ one, it really cracks me up). Overall, it’s an excellent course to invest your time in if you find yourself lacking basic programming training, coming from the math side of data science. Let me elaborate a bit more if you are still interested.
https://medium.com/analytics-vidhya/mit-6-00-1x-2x-review-a-data-scientists-point-of-view-205b8aec65f1
['Michael Li']
2020-12-24 04:25:43.988000+00:00
['Data Science', 'Python', 'Education', 'Data Structures', 'Algorithms']
Decimation - The Cruelest Punishment in the Roman Army
Decimation: the Cruelest Punishment in the Roman Army Removal of a tenth. Roman legionaries (Image: historyanswers.co.uk) The Roman army was an effective war machine that created one of the largest empires in history. Roman legionaries obeyed their commanders and kept formation during battle. They were famous for their discipline. But what happened when the legionaries deserted the battle or disobeyed their commander? They would be subjected to one of the most brutal punishments in military history. Decimation meant the execution of every tenth legionary A legionary being beaten to death by his comrades (Image: spartacus.fandom.com) The biggest advantage of the Roman army was its ability to fight in formation. The Roman legionaries received enough food, standardized equipment, and good training. Discipline and blind obedience were of the utmost importance. If the legionaries fled the battlefield, there was no way back in the eyes of the Roman senior leadership. All of the accused men, no matter their army rank or social status, were divided into groups of ten. Each man was subjected to a lottery, either by grabbing a stone from a sack or by drawing a straw. The man who grabbed the white stone or drew the shortest straw was killed. In the Roman army, the phrase ‘to draw the shortest straw’ took on a much more sinister meaning. The Roman legionaries often faced death, so the death sentence itself was not terrifying to them. It was the way the execution was carried out that made decimation a cruel punishment. Every tenth legionary was clubbed down by his remaining nine comrades. Imagine that you have to kill a friend with whom you’ve marched for years, shared meals, and fought side by side. The lucky ones that survived were expelled from the legion for a couple of days. They had to camp outside the fort and survive on raw barley. Decimation was used only on rare occasions Roman legionaries waiting for the decimation process to begin (Image: historycollection.com) The Romans rarely used decimation, since it meant the loss of experienced soldiers. Thus, we have only a few recorded cases of decimation in the Roman army. The Roman general Crassus ordered decimation after the defeat by Spartacus in 71 BC. Julius Caesar threatened to decimate his ninth legion during the Roman Civil War (49–45 BC). Mark Antony, after being defeated by the Parthians in 36 BC, ordered decimation too. The most famous case of decimation is the Theban Legion. In 286 AD, this Christian legion refused to persecute fellow Christians. Emperor Maximian ordered decimation, but they still refused to obey the emperor. The angry emperor ordered another round of decimation. The legion still resisted. Therefore, the decimations were repeated until the entire legion of 6,600 men was killed. A town in Switzerland was renamed Saint Maurice to honor their commander, Mauritius. Conclusion Decimation didn’t disappear with the fall of the Roman Empire. Actually, there are reports of decimation throughout history. Some examples of decimation after the collapse of the Roman Empire include:
https://medium.com/history-of-yesterday/decimation-2ca074812341
['Peter Preskar']
2020-12-25 18:03:15.201000+00:00
['World', 'Roman', 'People', 'War', 'History']
Things That May Hold You Back From Unlocking Your Full Potential as a Developer
Impostor Syndrome Became Your Best Friend Impostor syndrome is a collection of feelings of inadequacy that persist despite your evident success and the progress you have made. Let’s break down this collection of feelings. Firstly, let me remind you of something you’ve probably experienced and felt more than once along your coding voyage. This especially happens when you start something new. When you started a new project, in the first meetings, did you feel completely lost? Your mind was struggling with what the other developers, clients, and managers were talking about. Your brain played against you and made you feel useless. So, after going through that tough moment, you most likely thought you were a fraud. This brings a mix of feelings that, far from reflecting your personal talents, confuse you. Your head spins with silly comments such as feeling you are not qualified for that project, the recent job, or the promotion you got. Have you ever worried that everyone will find out your flaws because you constantly check Stack Overflow even for the simplest things you forgot? Come on! The most experienced developers forget things as well. Another way this syndrome blocks you is that you admire people who have achieved what you aspire to, but at the same time, you think you are not good enough to achieve it and belong to that elite. You’re making a lot of assumptions here. Don’t worry! All of us suffer this feeling. So bear in mind that you are not the only one; even highly successful people experience this. The behavior is not related to self-esteem or a lack of confidence. To close this point: this behavior is sometimes caused by our own habits. Enduring tough pressures may lead you to become a perfectionist. Please, avoid it!
https://medium.com/better-programming/things-that-may-hold-you-back-from-unlocking-your-full-potential-as-a-developer-476f3b6b3c8d
['Roberto Hernandez']
2019-12-01 01:57:32.225000+00:00
['Careers', 'Software Development', 'Programming', 'Startup', 'Advice']
Testing Named Scopes
Testing Named Scopes Test your named scopes with Rails code Written by Zach Brock and Cameron Walters. Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog Last year Zach wrote a blog post about an easy way to test ActiveRecord’s named_scope. It was one of the first snippets he brought with him to the code base here at Square. We’ve been using it extensively ever since. Recently, we made some improvements to the test helper for clarity and ease of use, so we decided it was time to share them with you. The basic idea is to make it easy to test that your named_scope returns the correct, expected subset of your items. We declare our test condition in Ruby such that it is true for everything in our subset and false for everything outside it. This lets us express the intent of our named_scope in succinct Ruby code, allowing us to easily test drive our SQL. Example Let’s take a Payment class that has an amount_cents field and the normal Rails created_at timestamp. We’ll write a test using our helper for a named_scope called large_payments_in_last_12_hours that — you guessed it — returns all payments from the last 12 hours over $100. The diagram below expresses the intent visually: our test will have a condition expressed in Ruby that is true for everything in the blue area and false for everything in the green area. The Test The should_be_a_subset helper integrates with RSpec, so the example below uses RSpec syntax and depends on rspec-rails. The condition that describes large_payments_in_last_12_hours asserts that each payment in the subset was created within the last 12 hours and that it has an amount_cents greater than 100_00: The Implementation Now that we have a test to drive it, we can write the implementation: This example shows how to test a simple SQL-based named_scope with a short Ruby expression. The helper really proves its worth with more complicated named scopes, especially those with multiple joins or sub-selects. The Helper Let’s take a closer look at how should_be_a_subset works. Here’s the helper’s code so you can follow along: One of the first things it does is make sure that you’ve actually created objects that will fall both inside and outside your subset. This double-check ensures that you’re really testing the intended scoping behavior by helping to prevent false passes due to insufficient data. We make sure to flunk the test with nice error messages when you haven’t set up your pre-conditions correctly, so you can fix it and move along instead of chasing down hard-to-diagnose errors due to fixture data or faulty assumptions. The assertions in the helper then test that our constrained set (here the named_scope you’re testing) matches the objects for which your Ruby condition returns true. It also checks that the excluded set (superset minus constrained set) returns false for the condition you provided. We then map over the ids of the objects for our test condition instead of the objects themselves so error messages are easier to read. The code is pretty simple, but everyone we’ve shown it to has loved it. It’s made expressing the intention of named_scopes really simple and helped us to design and test them more thoroughly. We hope it will help you, too.
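The post originally embedded the test, the implementation, and the helper as code snippets that are not reproduced in this text. Below is a sketched reconstruction from the prose, so the exact argument order, messages, and internals of should_be_a_subset are assumptions; it follows the Rails 2.x named_scope style and the classic RSpec should syntax the post describes.

# The implementation (app/models/payment.rb). The lambda form keeps
# 12.hours.ago from being evaluated only once at class-load time.
class Payment < ActiveRecord::Base
  named_scope :large_payments_in_last_12_hours, lambda {
    { :conditions => ['created_at > ? AND amount_cents > ?', 12.hours.ago, 100_00] }
  }
end

# The test (spec/models/payment_spec.rb).
describe Payment, 'large_payments_in_last_12_hours' do
  it 'returns only payments over $100 from the last 12 hours' do
    # ... create payments both inside and outside the subset here ...
    should_be_a_subset(Payment.all, Payment.large_payments_in_last_12_hours) do |payment|
      payment.created_at > 12.hours.ago && payment.amount_cents > 100_00
    end
  end
end

# The helper (spec/support/subset_helper.rb).
def should_be_a_subset(superset, constrained)
  included = superset.select { |record| yield(record) }
  excluded = superset - included

  # Flunk with readable messages when the pre-conditions are wrong,
  # to prevent false passes from insufficient data.
  fail 'create at least one object inside the subset'  if included.empty?
  fail 'create at least one object outside the subset' if excluded.empty?

  # Compare ids rather than whole objects so failure messages stay readable.
  constrained.map(&:id).sort.should == included.map(&:id).sort
  (superset - constrained).each { |record| yield(record).should == false }
end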
https://medium.com/square-corner-blog/testing-named-scopes-30faf3b61584
['Square Engineering']
2019-04-18 23:56:32.630000+00:00
['Ruby on Rails', 'Ruby', 'Engineering']
Dear Businesses, Stop Treating Freelancers Like Crap
Dear Businesses, Stop Treating Freelancers Like Crap A word about freelancing today Photo by Headway on Unsplash About a year ago, I got a phone call. A phone call that surprised me. One of my clients was calling me right after another team member said the business would be shutting down shop for a month or so for vacation. But as we chatted, I was told there would be some changes. A new project administrator would be taking over, and I should be expecting some work over the next week. “Great,” I said. Another Surprise Then a few days passed. I shrugged it off for the first two days, but by Wednesday, I was a little nervous. Thursday, I was even more nervous. Friday, I thought I’d better take action. I logged into my Slack account and no dice. I figured I forgot the password. I tried again and did the “forgot password” option and opened up this email. Whaaat? I didn’t delete my account. What? That makes no sense! And then it hit me. This client just flat out deleted my account and didn’t tell me. No email, no telling me they changed their mind, nothing. I suppose you could call this “ghosting.” This kind of thing has actually happened to me three times now during my 6-year writing career. So please allow me to step up on my soapbox for just a moment. I totally understand both sides. As a business, you have work that needs to be done. As a freelancer, you need clients. However, all too often businesses treat freelancers like someone who comes by to pick up after the dog or mow the yard. It’s not a real relationship; rather, it’s completely based on convenience or simply not having enough time. Many businesses might even say, “I could get this done cheaper elsewhere.” Maybe so. But as a business, you decide if you want to work with the cheapest option or the best option. The truth is this: you can have good, fast, or cheap. Pick one. Businesses, it’s time to treat human beings like human beings. It is never okay to fire a freelancer by ghosting. Ever. Empathy goes a long way. Pay your invoices in a timely manner. Be professional. Don’t whine or complain to the freelancer who is providing you with a valuable service. It’s a two-way street, and the truth is businesses are saving a lot of money by hiring freelancers. While a professional freelancer could appear to be expensive at first glance, most freelancers are actually a great deal. A Quick Example Let’s say a freelancer charges $200 for a blog post or an article. That might sound like a lot at first. But you’re not considering what goes into a great blog post. Research. SEO. Writing. Images. Editing. Uploading into WordPress. A great blog post can easily take an entire day’s work. Sure, a company might be able to hire someone at a full-time job for $15–30 an hour. What about benefits? Training? Taxes? The employer pays for those. Take that $15–30 an hour and you can likely double that amount to get the real bottom line for the company. Make no mistake, in most cases, a professional freelancer often has more training and better skills than someone at a full-time job. A freelancer has to be hungry and motivated. Many full-time employees, well, not so much. By the nature of the gig, you’re going to settle into a groove of sorts and then you’ll soon find the status quo. And, sadly, status quo work brings status quo results. The Takeaway Treat freelancers right because they are likely saving your business a lot of time and frustration. Don’t let a few flaky freelancers make you think that all freelancers are that way.
Remote working is now more common than ever. Whatever you’re doing, you’re likely going to be working with more freelancers. Make the most of it. Embrace it. Then you’ll be able to do better work and serve your customers better.
https://medium.com/the-partnered-pen/dear-businesses-its-time-to-start-working-better-with-freelancers-465130a51da1
['Jim Woods']
2020-11-05 15:19:55.638000+00:00
['Freelance', 'Entrepreneurship', 'Business', 'Entrepreneur', 'Freelancing']
What If Trump Tells You, 1 Month From Now, It’s Safe To Go To Vegas?
What If Trump Tells You, 1 Month From Now, It’s Safe To Go To Vegas? First of all, would you believe him? Then, wouldn’t you want to know that everyone you encounter there: at the hotel, at a restaurant, at the casino, at the pool, at a sports book, at a club, had either tested negative or had recovered? How’s that going to happen? Or that hospitals in Las Vegas had enough ventilators should there be a sudden pocket outbreak somewhere? That could happen. In fact, probably will from time to time in various places. But hospitals don’t like to keep a lot of equipment around that they have to maintain and that they don’t make a lot of money off of. Would/should the government force them to? Or would the Vegas hospitals use it as a selling point: putting the number of ventilators available up on huge billboards along the freeway cutting through the middle of town (as they’ve done for years, advertising current waiting times at the emergency room)? Even then, wouldn’t you want to be sure you weren’t potentially bringing something back to your community with you when you came home? What about your employer? Would they want you right back at the office if you’d just traveled? Or would anyone back from vacation be mandated to self-quarantine and work from home for a period of time (if they had a job that would make that a possibility)? If so, would the government have to have a role in making sure that happened? We remember when we first started traveling internationally, we had to carry a vaccination card along with our passport to gain entry into some countries. That became largely unnecessary in recent years. But it seems like that kind of thing might definitely be making a comeback. And we don’t just mean for international travel, which we assume will be very difficult or virtually impossible until there’s a vaccine. (And even then, countries are going to have to agree on an international standard and how it will be verified.) No, we mean right here in the U.S., where people might be compelled to prove their COVID-19 status in order to work in certain jobs, or maybe even to enter certain buildings. (Although it’d be challenged on the basis of civil liberties for sure.) And again: who does this? And how? Who sets the standard for what type of test results are acceptable? And who monitors for people who may be cheating? Or worse, infectious? If there is a pharmaceutical-based treatment for the coronavirus, is everyone going to be able to get access to it at the snap of a finger? When there is eventually a vaccine, how long is it going to take for everyone to get access? One of the biggest problems right now is there still aren’t enough tests. “Everybody who wants a test” isn’t even close to being able to get a test. That has a lot to do with the fact that the federal government and testing companies are playing catch-up. If you can even call it that, because they’re so far behind. And we’re going to trust those same people to develop a program to ensure there are enough treatments or eventually vaccines and they get distributed in a timely and efficient way? We’ve seen years lately when there’s been a shortage of flu vaccine for a time. And it was really hard to get the shingles vaccine when it first came out and demand was sky high. And demand for a COVID-19 vaccine is going to be even greater than that by a multiple of millions and millions. Or billions if you factor in everybody in the world is going to want this vaccine posthaste too! So this can’t entirely be left to the private sector.
Making sure those things — in the future — go right, will require strong will, skill, and coordination at the highest levels of government. Also lots of money from the government. And a firm, but deft hand. Except for throwing a lot of money at it (the bulk of which will go to private companies), can we expect any of that from the present administration?
https://ericjscholl.medium.com/what-if-trump-tells-you-1-month-from-now-its-safe-to-go-to-vegas-a8cd435593c1
['Eric J Scholl']
2020-04-06 12:01:01.008000+00:00
['Donald Trump', 'Travel', 'Health', 'Vaccines', 'Government']
Plotnine: ggplot2 in Python
Plotnine is Python’s answer to ggplot2 in R. R users will feel right at home with this data visualization package with a highly similar syntax with minor syntactic differences. R has been the ruler of data visualization and statistical modelling between the two while Python was the best for productionizing / monetizing data science. The ggplot2 package from R is a child of Hadley Wickham’s ingenuity. It undoubtedly allows anyone who can code in R to use a declarative approach to creating stunning visuals for their work. There are also a number of extensions that can extends ggplot2 even further. Python has Matplotlib and Seaborn for its external plotting library. Seaborn is built on top of Matplotlib and is a dependency. Seaborn helps the user accomplish what would be done in Matplotlib with much less code and allows the user to use Matplotlib commands to manipulate the figure as well. Undoubtedly we are spoiled for choice once we have a good grasp of both languages. However lone Python users have not been able to experience the beauty and simplicity of ggplot2 in its native Python until now (to a certain extent). Thanks to Hassan Kibirige, the creator of plotnine. Plotnine is the competitor of R in Python. Its syntax is probably 95% similar or more to ggplot2. It has a table release and is still active on Github. It already does a lot of what users love most about ggplot2 such as access to various geoms, declarative syntax and faceting for example. There are many R users that realize the importance of Python in their skillset and vice versa. Plotnine is another way that makes a comfortable transition between the languages. Plotnine is still an infant in comparison to its counterpart ggplot2 but the potential is huge if active development continues. Plotnine As Python’s designated ggplot2, we shall now see how the syntax and visuals match up with each other. One does not replace the other but if different jobs can be done on different languages, certain people can get an idea on what language to conduct their data visualization tasks. A tip about plotnine installation that some users may experience (not all): When installing plotnine make sure you are using the updated version of your package manager (i.e. pip or conda). If you are using Jupyter, for some reason it may not work but it will work on other IDE’s like Spyder. This was the case for me and I haven’t figured out the reason yet. Below is a standard example of using plotnine. R users will notice that the syntax and output are strikingly similar. ( ggplot(mtcars, aes(‘wt’, ‘mpg’, color=’factor(cyl)’)) + geom_point() + labs(title=’Miles per gallon vs Weight’, x=’Weight’, y=’Miles per gallon’) + guides(color=guide_legend(title=’Number of Cylinders’)) ) There are really only two noticeable differences in the syntax: ggplot(mtcars, aes(‘wt’, ‘mpg’, color=’factor(cyl)’)) — we can see that ‘factor(cyl)’ is a striking resemblance to the method of converting data into a category in R. There are other ways to convert a numeric column into a category but this seems a nice nod to R users. The whole syntax is actually enclosed in parenthesis (brackets). This is due to indentation issues in Python. Because Python relies so much on indentation for functions, brackets become necessary to execute the code. 
To take it to the next step, faceting the plot is also simple and easy with a minor tweak in plotnine: (ggplot(mtcars, aes('wt', 'mpg', color='factor(cyl)')) + geom_point() + labs(title='Miles per gallon vs Weight', x='Weight', y='Miles per gallon') + guides(color=guide_legend(title='Cylinders')) + facet_wrap('~gear') ) The last line of code has (as R users would know) facet_wrap, and the only difference is that the argument needs to be in quotes. A scatterplot is ideal for adding many dimensions of data. Now we shall see how to change the sizes of the points to represent horsepower: (ggplot(mtcars, aes('wt', 'mpg', color='factor(cyl)', size='hp')) + geom_point() + labs(title='Miles per gallon vs Weight', x='Weight', y='Miles per gallon') + guides(color=guide_legend(title='Cylinders')) + facet_wrap('~gear') ) Once again, just a simple command and a barely noticeable difference. We have added the size='hp' argument to the aesthetics so that each point will represent a different level of horsepower. We can see that the heavier the car, the more horsepower it tends to have. Any plot needs to look its best. Themes are a great way to add some default formatting in order to get started. Plotnine appears to have some of the same themes as ggplot2 as well: (ggplot(mtcars, aes('wt', 'mpg', color='factor(cyl)', size='hp')) + geom_point() + theme_bw() + labs(title='Miles per gallon vs Weight', x='Weight', y='Miles per gallon') + guides(color=guide_legend(title='Cylinders')) + facet_wrap('~gear') ) Just by adding theme_bw() we can get a cleaner and more aesthetically pleasing plot. Conclusion We can see that plotnine and ggplot2 have much in common. Transitioners to Python from R will definitely feel more at home when conducting data visualization in Python with plotnine. One of the best things about plotnine is that it is still active on GitHub as of July 2019, and hopefully that is a good sign that it will continue to be updated to become the ggplot2 of Python. I say that because ggplot2 has so many extensions and complementary packages that make it powerful and sought after. Plotnine is undoubtedly doing a great job and I would definitely use it myself.
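For convenience, here is a consolidated, runnable version of the snippets above, with the imports the inline examples omit. It assumes a recent plotnine release, which ships the mtcars dataset in plotnine.data; the output file name is arbitrary.

from plotnine import (ggplot, aes, geom_point, labs, guides,
                      guide_legend, facet_wrap, theme_bw)
from plotnine.data import mtcars

plot = (
    ggplot(mtcars, aes('wt', 'mpg', color='factor(cyl)', size='hp'))
    + geom_point()
    + theme_bw()
    + labs(title='Miles per gallon vs Weight', x='Weight', y='Miles per gallon')
    + guides(color=guide_legend(title='Cylinders'))
    + facet_wrap('~gear')
)
plot.save('mpg_vs_weight.png', width=8, height=6, dpi=150)  # or print(plot) to display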
https://medium.com/bitgrit-data-science-publication/plotnine-ggplot2-in-python-2b4b1fe8d4c5
['Asel Mendis']
2019-07-16 06:20:20.865000+00:00
['Data Science', 'Data Visualization', 'Python', 'R', 'Technology']
Hot or Not
Kelly is an illustrator, designer and comic artist living in Minneapolis. See more of her work here: kellyabeln.com
https://medium.com/spiralbound/hot-or-not-c603f51a2cc9
['Kelly Abeln']
2020-01-06 13:59:06.562000+00:00
['Comics', 'Music', 'High School', 'Dating', 'Identity']
From Data Science to Knowledge Science
Data Science productivity today vs. a 10x predicted growth in Knowledge Scientist productivity I believe that within five years there will be dramatic growth in a new field called Knowledge Science. Knowledge scientists will be ten times more productive than today's data scientists because they will be able to make a new set of assumptions about the inputs to their models and they will be able to quickly store their insights in a knowledge graph for others to use. Knowledge scientists will be able to assume that their input features: have higher quality, are harmonized for consistency, are normalized to be within well-defined ranges, and remain highly connected to other relevant data such as provenance and lineage metadata. Anyone with the title "Data Scientist" can tell you that the job is often not as glamorous as you might think. An article in the New York Times pointed out: Data scientists…spend from 50 percent to 80 percent of their time mired in this more mundane labor of collecting and preparing unruly digital data, before it can be explored for useful nuggets. The process of collecting and preparing data for analysis is frequently called "Data Janitorial" work, since it is associated with the process of cleaning up data. In many companies, the problem could be mitigated if data scientists shared their data cleanup code. Unfortunately, many companies use different tools to do analysis. Some might use Microsoft Excel, Excel plugins, SAS, R or Python, or any number of other statistics software packages. Just within the Python community there are thousands of different libraries for statistical analysis and machine learning. Deep learning itself, although mostly done in Python, has dozens of libraries to choose from. So with all these options, the chances of two groups sharing code or the results of their data cleanup are pretty small. There have been some notable efforts to help data scientists find and reuse data artifacts. The emergence of products in the Feature Store space is a good example of attempting to build reusable artifacts so that data scientists become more productive. Both Google and Uber have discussed their efforts to build tools to reuse features and standardize the feature engineering processes. My big concern is that many of these efforts are focused on building flat files of disconnected data. Once the features have been generated, they can easily become disconnected from reality. They quickly start to lose their relationships to the real world. An alternative approach is to build a set of tools for analysts to connect directly to a well-formed enterprise-scale knowledge graph to get a subset of data and transform it quickly into structures that are immediately useful for analysis. The results of this analysis can then be used to immediately enrich a knowledge graph. These pure machine learning approaches can complement the rich library of turn-key graph algorithms that are accessible to developers. What is at stake here is huge gains in productivity. If 80% of your time is currently spent doing janitorial work, the knowledge scientist could become two to four times more productive with access to a knowledge graph that has high-quality, connected, and normalized data. Let's define what we mean by high-quality, connected, and normalized data. First let's talk about data quality within a knowledge graph. I worked as a Principal Consultant for MarkLogic for several years.
For those of you that have not heard about MarkLogic, it is a document store where all data is natively stored in either JSON or XML documents. There are many exceptional features of MarkLogic that promote the productivity of developers doing data analysis. Two of the most important are the document-level data quality score and the concept of implicit query-language-level validation of both simple and complex data structures. In MarkLogic there is a built-in metadata element called the data quality score. The data quality score is usually an integer between 1 and 100 that is set as new data enters the system. A low score, say below 50, indicates that there are quality problems with the document. It might be missing data elements, have fields out of acceptable ranges, or contain corrupt or inconsistent data. A score of 90 could indicate that the document is very good and could be used for many downstream processes. For example, you might set up a rule that you only want documents with a score above 70 to be used in search or analysis. There are two clever things about how MarkLogic works: the concept of a validation schema is built into MarkLogic, and the concept of valid data is even built into the W3C document query language, XQuery. Each document can be associated with a root element (within a namespace) and bound to an implicit set of rules about that document. You can use a GUI editor like the oXygen XML Schema editor to allow non-programmers to create and audit the data quality rules. XML Schema validation can generate both a true/false Boolean value and a count of the number of errors in the document. Together with tools like Schematron and external data checks, each data steward can determine how they set the data quality score for various documents. What is also key about document-level data quality scores is that they are a natural fit with business events. Business events can be thought of as complete transactions that happen in your business workflows. Examples of business events include things like an inbound call to your call center, a new customer purchase, a subscription renewal, an enrollment in a new healthcare plan, a new claim being filed, or an offer being sent as part of a promotional campaign. All of these events can be captured as complete documents, stored in streaming systems like Kafka, and loaded directly into your knowledge graph. You only need to subscribe to a Kafka topic and ingest the business event data into your knowledge graph. The data quality scores can also be included in both the business event documents and your knowledge graph. These scores should then be used whenever you are doing analysis. We should contrast complete and consistent business event documents with another format of data exchange: the full dumps of table-by-table data from a relational database into a set of CSV files. These files are full of numeric codes that may not have clear meaning. This is the type of data many data scientists find in a Data Lake today. Turning this low-level data back into meaningful connected knowledge is something that slows the time to insight for many organizations. Storing flattened CSV-level data and numeric codes is where feature stores fall down. Once the features are extracted and stored in your data lake or object store, they become disconnected from how they were created. A new process might run on the knowledge graph that raises or lowers the score associated with a data item. However, that feature can't easily be updated to reflect the new score.
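As a rough illustration of the idea, and not MarkLogic's actual API, here is a minimal Python sketch that assigns a 1-100 quality score to an incoming business-event document before it is written to a knowledge graph; the field names and scoring rules are hypothetical.

import json

REQUIRED_FIELDS = ['event_type', 'customer_id', 'amount_cents', 'created_at']

def quality_score(event):
    """Score a business-event document from 1 (poor) to 100 (complete)."""
    score = 100
    for field in REQUIRED_FIELDS:
        if not event.get(field):
            score -= 20                      # missing or empty element
    amount = event.get('amount_cents')
    if isinstance(amount, int) and amount < 0:
        score -= 15                          # field out of acceptable range
    return max(score, 1)

# Attach the score as metadata before loading the document into the graph,
# so downstream search and analysis can filter on it (e.g. score > 70).
raw = '{"event_type": "purchase", "customer_id": "c-42", "amount_cents": 1999}'
event = json.loads(raw)
event['_meta'] = {'data_quality_score': quality_score(event)}  # 80: created_at is missing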
Feature stores can add latency that will prevent new data quality scores from reflecting the current reality. One of the things I learned at MarkLogic is that document modeling and data quality scoring did not come easily to developers that have been using tables to store data. Relational data architects tend not to think of the value of document models and the concept of associating a data quality score with business event documents. I like to refer to business event document modeling as "on-the-wire" thinking. On-the-wire thinking should be contrasted with "in-the-can" thinking, where data architects stress over how to minimize joins to keep query times within required service levels. There are two final points before we wrap up. The first is that quality in a graph is different from quality in a document. This topic has been well studied by my colleagues at the World Wide Web Consortium (W3C) and documented in the RDF-based SHapes Constraint Language (SHACL). In summary, the connectedness of a vertex in a graph will also determine quality, and this connectedness is not expressed in a document schema. The W3C has also spent considerable time building PROV: a data model that describes provenance. Eventually I hope that knowledge graph products each have a robust metadata layer that has complete information about how data was collected and transformed as it journeyed from the source system into the knowledge graph for analysis. The final point is to emphasize that the labeled property graph (LPG) ecosystem really does not have mature standards the way RDF does. RDF had a 10-year lead over LPG, and this was enough for the semantic web stack to mature and blossom. In the LPG space, we don't really have mature machine-learning integration tools yet to enable knowledge science. If you are an entrepreneur looking to start a new company, this is a green field without much competition! Let me know if I can help you get started! I am indebted to my friend and UHC colleague, Steven J. Peterson, for his patient evangelism of the importance of on-the-wire business event document modeling. Steve is a wonderful teacher, a consistent preacher of good design, a hands-on practitioner, and an insightful scholar. I cherish both his friendship and his willingness to share his knowledge.
https://dmccreary.medium.com/from-data-science-to-knowledge-science-7f6707727489
['Dan Mccreary']
2019-04-20 16:14:17.048000+00:00
['Feature Engineering', 'Graph Databases', 'AI', 'Knowledge Graph', 'Data Science']
Some details about the Top Level Program in C# 9
After launching .NET 5 and C# 9, I became utterly obsessed with the Top Level Program (a.k.a. TLP) feature. TLP enables quick and fast prototyping for .NET applications and gives me the most immediate entry point for my work. But TLP is just syntax sugar, not a complete solution, so there are a few details you should know before choosing TLP. Thread Apartment Model This detail is a rare case but critical for Windows application developers. Usually, the Main method does not require particular attributes, except in the Windows desktop application area. The exception is the thread apartment model, which impacts Windows Forms and WPF applications. If you make a TLP your entry point, it will be hard to specify the threading model. In my experience, configuring the thread apartment model with the Thread class usually does not work well. If you do not configure the threading model as single-threaded, a Windows Forms application will not behave correctly, especially when using WebView2. So if you want to use Windows Forms in your application, you may have to revert your entry point to the Main method again, as sketched below. Tidy up your code for others TLP helps you code faster, but your code can get messy, because TLP allows you to mix or shuffle the declaration order of logical code and member function declarations. For example, the code below is valid and compilable. using System; int x, y; void TestFuncA() => x = 1; void TestFuncB() => y = 2; void Calculate() => Console.WriteLine(x + y); TestFuncA(); TestFuncB(); Calculate(); Also, this code works. using System; int x, y; void TestFuncA() => x = 1; TestFuncA(); void TestFuncB() => y = 2; TestFuncB(); void Calculate() => Console.WriteLine(x + y); Calculate(); If you don't tidy up your code, it becomes difficult for others to read. This is one reason you'd better avoid large and complex code when you use TLP. The args, hidden variable You can write a simple application that takes argument input. TLP hides the args variable from your sight, but it still exists, so you can reference the local variable args as before. For example, you can write your code like the sample below. // This sample code requires the Bullseye package. using System; using System.Threading.Tasks; using static Bullseye.Targets; int x = default, y = default; async Task Banner() { await Console.Out.WriteLineAsync("Hello, World!"); } async Task ReadX() { int.TryParse(await Console.In.ReadLineAsync(), out x); } async Task ReadY() { int.TryParse(await Console.In.ReadLineAsync(), out y); } async Task WriteResult() { await Console.Out.WriteLineAsync($"{x + y}"); } Target("banner", Banner); Target("read-x", ReadX); Target("read-y", ReadY); Target("calc-result", DependsOn("read-x", "read-y"), WriteResult); Target("default", DependsOn("banner", "calc-result")); await RunTargetsAndExitAsync(args); Limited features Contrary to my expectations, TLP does not support some features. For example, you can declare a static local function but not a static local variable. So if you write a static local function, your code should contain isolated logic (such as an algorithm). But you can still call functions from your entry-point code. Another missing feature: TLP does not emit your Main method code as a static class. In my opinion, the local function is a non-static class member, so there is no need to preserve non-static members when we use the TLP feature. It's a shame that extension methods cannot exist in TLP code.
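For reference, here is a minimal sketch of that fallback: an explicit Main entry point carrying the [STAThread] attribute, which a top-level program gives you no natural place to put. The form contents and DPI settings are only illustrative.

// Fallback from TLP to a classic entry point when a single-threaded
// apartment is required (e.g. Windows Forms hosting WebView2).
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread] // the attribute a top-level program cannot express
    static void Main()
    {
        Application.SetHighDpiMode(HighDpiMode.SystemAware); // .NET Core 3+/.NET 5 WinForms
        Application.EnableVisualStyles();
        Application.Run(new Form { Text = "Hello from an STA thread" });
    }
}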
https://medium.com/dataseries/some-details-about-the-top-level-program-in-c-9-d3abe0da7ef1
['Jung-Hyun Nam']
2020-12-28 11:54:25.149000+00:00
['Csharp', 'Dotnet', 'Programming', 'Coding']