| title | text | url | authors | timestamp | tags |
|---|---|---|---|---|---|
How Time-Wasting is Holding You Back | During the pandemic lockdown, there was a queue system put in place at the supermarket near us to avoid over-crowding and stockpiling. But I had been trying to do all my shopping online mostly to avoid the hassle and to stay as safe as possible.
There was one time, however, we just couldn’t avoid it. It was essential. My husband came with me to help carry everything. It was hot and muggy. When he saw the length of the queue as we drove into the car park he sighed and immediately asked if we could just go home. He did not want to be waiting in line for goodness knows how long in the terrible humid heat.
I dragged him to the queue anyway. I reassured him it would be a short wait, that it only looked deceptively long because of the distance everyone had to keep between each other.
After a few minutes went by and we didn’t move forward at all, he asked again if we could leave or he would call a taxi home and let me do the shopping alone.
Through gritted teeth, I argued that we were already there and should just wait it out.
He was clearly irritated, so I said “whatever”, he could call a taxi. He told me he was literally losing money because he would normally be paid a certain amount hourly, and instead his time was being spent waiting in line. I was becoming increasingly angry too, thinking to myself, “just wait! You decided to come here!”
I felt humiliated too because I was sure the people queueing behind us could hear our disagreement. It was just unpleasant when it needn’t be.
Thirty awkward and silent minutes later we were in the supermarket and the rest of the trip was fine. All was forgotten, and along with all the shopping we needed, my son gained a ball-pit for his bedroom. Happiness was restored all round.
Waiting is a Societally Accepted Life Injustice
Now, the way my husband handled things is questionable, but I appreciate that he was trying to save time. He meant that he was literally losing money and while he could have been more tactful about it since it wasn’t my fault, he was thinking objectively — as he does when he’s faced with problems.
The queue was long, the heat was heavy, and the time spent waiting in line could have arguably been more costly in time and money than what we gained by doing the shop then, vs going back another time when the queue was shorter.
His annoyance was a reaction to the violation of being made to wait by the outside world, and then, by his wife who insisted on accepting this violation.
What do I always talk about though? Silver linings.
Tim Denning inspired the idea for this article in his story, Do These Things That Require Zero Talent to Improve Every Area of Your Life. He brought up the subject of saving time and how important it is to manage it well.
Waiting for anything is one of the most annoying things in the world. To me, to my husband, to everyone.
But very few people think they can do anything to change their behaviour or actions to push back against it or avoid it altogether.
5 Unnecessary but Normal Time-Wasting Occurrences: | https://medium.com/the-ascent/how-time-wasting-is-holding-you-back-9cc13d9c9392 | ['Sylvia Emokpae'] | 2020-10-01 13:31:03.848000+00:00 | ['Efficiency', 'Self Improvement', 'Success', 'Society', 'Time Management'] |
US Voters Fire Underperforming Employee | Let me start by saying that we appreciate how you’ve heightened the civic engagement within this country. Your involvement spurred unparalleled levels of voting in this year’s election. Not only that, but it brought to light a lot of harsh truths about this country. Clearly, we have a lot of work to do.
That being said, we simply cannot allow you to continue in this position any longer. The future of our country remains paramount. And it needs a leader that can build trust, connection, and better position us for the future.
The key word here is trust. We need to have someone in this position who we can trust to act in the long-term best interest of this country.
We know that a lot of people are upset over the number of broken promises, but in all honesty, that’s not why we’re here. Yes, you promised to build a border wall and have Mexico pay for it. In actuality, neither of those events occurred. Yes, you promised to slow down vaccination schedules, despite the advice of the medical community, and get rid of common core education. Again, you fell short in both areas.
I know you’ve said that you accomplished all of your promises. But the facts simply do not support these claims.
While many are disappointed by these outcomes, they didn’t drive this decision. In truth, your inability to focus and execute a plan was somewhat of a relief. Given the abhorrent nature of many of your ideas and positions, we think it’s for the best that they never materialized.
The biggest problem is that while you never hesitated to criticize and promote chaos, you failed to develop solutions and bring people together. Our nation needs strategy and compromise, not poorly thought out reactions and angry tweets. We simply cannot have someone in this office that is so committed to dividing the country instead of uniting it.
While you frequently criticized the Affordable Care Act, we never once saw a credible plan to replace it. We’re looking for someone who can bring solutions to the problems facing our nation, not merely criticize with impunity.
In addition, your anti-science agenda is jeopardizing the long-term health of both our nation and the world at large. If we’re to secure our position as a world leader, it needs to be through technology and innovation. We’re looking for someone who can embrace these tools of the future, not overcommit to an obsolete past. We need to compete through science, robotics, and alternative energy, not dogma, tariffs, and coal.
Continuing to invest in your anti-science agenda would be putting this nation on a doomed strategy — not only for the current generation, but for our children and generations to come.
Another primary concern is with the people you chose for your staff. Candid feedback and constructive arguments are critical to a well-run organization. Surrounding yourself with sycophants creates a serious liability for our future. As a nation, we simply cannot attain excellence with the second- and third-rate staffers that you’ve placed in critical positions.
I could go on, but there’s little reason to continue focusing on past mistakes.
I suppose the final straw was in how you managed the pandemic over the past year. There’s no doubt that this was a difficult situation. But your unwillingness to recognize and acknowledge this issue is the antithesis of the leader that we need. In the vacuum of strong leadership, the nation floundered, with individual states making shortsighted decisions that prolonged the nation’s suffering — a price that we’re still paying today.
We need someone who’s willing to face facts with clarity and put the good of our nation before their own personal gains. We need someone who can lead through crises and adversity to build a stronger future. And simply put, this expectation seems beyond your capacity.
In truth, you didn’t actually get the majority vote in 2016. I suppose that should have been a good indicator of things to come.
For all these reasons and more, we need to let you go. Perhaps if you’d spent less time on the golf course and more time trying to lead, this would have ended up differently. But I suppose there’s no sense in second-guessing past decisions at this point.
We’d appreciate your maturity and professionalism in accepting this decision so that we can all move forward. Feel free to take some time and clean out your office. We’ll need you to be out by January 20th. | https://jswilder16.medium.com/us-voters-fire-underperforming-employee-1b112e794fec | ['Jake Wilder'] | 2020-11-07 18:47:55.458000+00:00 | ['Politics', 'Society', 'Donald Trump', 'Leadership', 'Election 2020'] |
This fascinating photo shows photos of missing children in the first column. | This fascinating photo shows photos of missing children in the first column. The second column shows a computer composite of what the children might look like as time progressed. The final photo shows what the children looked like when they were finally found. | https://medium.com/random-awesome/this-fascinating-photo-shows-two-photos-of-missing-children-in-the-first-column-6e8f3aa8d02d | ['Toni Tails'] | 2020-12-28 22:58:14.734000+00:00 | ['Creativity', 'Art', 'Family', 'Technology', 'Parenting'] |
How to Get the Most Out of Your Paid Facebook Video Ads | Times have continued to rapidly change in the way video is consumed, and it’s important for marketers to stay up to date on new viewing trends in order to effectively reach customers wherever they are. In this case, I’m referring to Facebook video ads.
You have most likely noticed the increase in video content all over your Facebook News Feed. It seems marketers are leaning into this ad format because they’re recognizing that it’s what consumers want.
According to Facebook’s research, people’s eyes cannot resist new immersive and moving formats, with participants in their experiment gazing five times longer at video than static content. For obvious reasons, this is a huge opportunity that allows you to get in front of Facebook’s over two billion users. But while video has the potential to boost your organization to the top, there are a few things you need to keep in mind to get the most bang for your buck.
Here’s how to use Facebook video ads to drive the best results possible.
Step 1: Defining Your Audience
The first step in successfully running a Facebook campaign is understanding who you’re trying to reach and what you want them to do. Speaking Facebook’s lingo, this means determining your target audience and identifying the campaign objective.
For prospecting campaigns, I recommend testing multiple audiences. For example, maybe your core target audience is adults age 35–64 who are interested in outdoor activities, but you’re also interested in nurturing millennials. Between first-party and third-party data, you could have an interest-based audience and a lookalike audience for both your core audience demo & millennials, leaving you with four different audiences to test.
Step 2: Video Consumption & Channel Alignment
Once you’ve determined your audiences, it’s important to understand their video consumption habits so you can narrow in on what placements might work best. Our creative team often refers to Facebook’s three buckets for video consumption — “on-the-go, lean forward, and lean back.”
“On-the-Go” Videos
If you think about your daily routine, this will most likely resonate with your own viewing habits. Most people are in the habit of watching short videos as they quickly skim their feeds throughout the day during any time they have while they’re on the go. According to Facebook, viewers consume content 41% faster on their mobile News Feeds than on their desktop News Feeds, which makes this “on-the-go” segment even harder to reach. It’s important to focus on capturing scrollers’ attention quickly, with only seconds available to get your message across.
“Lean Forward” Videos
Viewers seem to “lean forward” when they find a little more free time in the day, maybe while taking a break, and they have the time to watch stories or longer in-feed videos. With stories, you have a few more seconds to get your brand’s message across, but it still needs to get the job done in 15 seconds or less.
“Lean Back” Videos
Lastly, you might notice that you “lean back” and watch longer-form mobile videos, like a TED Talk, while lying in bed before going to sleep. Obviously as a viewer you have a very different mindset and attention span for ads depending on your intent at the time, which is why it’s so important for marketers to tailor the message depending on the placement.
Picking the Right Video Format
There are some key things marketers should keep in mind to ensure effectiveness of mobile video strategies.
First, always start with the audience. You need to determine the audience you’re trying to reach, and what channels will be most effective at getting in front of them. Is that channel “on-the-go,” “lean forward,” or “lean back,” or all of the above?
Second, think about optimizing video content for the audience and the channel. People treat their content consumption on each channel differently, so the creative content needs to be made for the attention they anticipate from that specific channel. Plus, we know people pay more attention to video on a smartphone (vs. a computer), so the creative needs to be mobile-optimized. Regardless of the placement, the focus should be on capturing viewers’ attention quickly and on highlighting the messaging and branding up front.
Lastly, try using videos that don’t require sound, or make sure to use captions so that viewers can engage freely wherever they may be.
Step 3: Make Certain That Channel Delivery Informs Creative Concept
There are many ways you can slice how video is consumed, and it will only continue evolving with advancements in technology. With video assets, it’s important to make sure you’re not only targeting the right audience in the right placement, but also that different types of video are created for the different goals across channels.
Aligning the creative and the paid media strategy is key, because the more relevant your ads are the more likely you are to succeed in the Facebook auction (which is necessary to see the results you want from paid media campaigns). It’s also important to keep in mind that content on mobile is consumed differently, so not all creative assets are created equal.
Step 4: Care About (and Track) the Right Metrics
Once you’ve successfully launched video ads, it’s time to experiment! The digital landscape allows you to test and iterate new ad formats and placements in order to develop the most effective ways video can be leveraged within your overall marketing strategy, so make sure to take advantage. You can still set up your campaign with a main objective of reach, conversions, or engagement, but with video there are other specific metrics that are important to understand. Here’s a list of Facebook’s video-related metrics that you should keep an eye on:
Facebook Video-Related Metric Glossary
Reach: The number of people who saw your video ad.
2-second continuous video views: The number of times your video was played for 2 continuous seconds or more.
Cost per 2-second continuous video view: The average cost for each 2-second continuous video view. This metric is calculated as total amount spent, divided by 2-second continuous video views.
3-second video views: The number of times your video played for at least 3 seconds, or for nearly its total length if it's shorter than 3 seconds. For each impression of a video, we'll count video views separately and exclude any time spent replaying the video.
Cost per 3-second video view: The average cost for each 3-second video view. This metric is calculated as total amount spent, divided by 3-second video views.
10-second video views: The number of times your video played for at least 10 seconds, or for nearly its total length if it's shorter than 10 seconds. For each impression of a video, we'll count video views separately and exclude any time spent replaying the video.
Cost per 10-second video view: The average cost per 10-second video view. This metric is calculated as total amount spent, divided by 10-second video views.
Video plays: The number of times your video starts to play. This is counted for each impression of a video, and excludes replays.
Video watches at 25%: The number of times your video was played at 25% of its length, including plays that skipped to this point.
Video watches at 50%: The number of times your video was played at 50% of its length, including plays that skipped to this point.
Video watches at 75%: The number of times your video was played at 75% of its length, including plays that skipped to this point.
Video watches at 95%: The number of times your video was played at 95% of its length, including plays that skipped to this point.
Video watches at 100%: The number of times your video was played at 100% of its length, including plays that skipped to this point.
Video average watch time: The average time a video was played, including any time spent replaying the video for a single impression.
Facebook just recently announced that they’re changing the way 3-second and 10-second video view counts are calculated. Originally these view counts included any time spent rewinding & rewatching, so advertisers felt like they couldn’t accurately measure consumption. Now they’re updating these view metrics to only include unrepeated seconds of watch time. Facebook works to stay on top of evolving trends, like mobile video viewing, and does a good job of incorporating feedback from both marketers and consumers.
And Finally, Give It a Try!
The explosion in online video viewing is only going to continue. You only have a few seconds to get your brand’s message across, so make that time count! Video ads can tell an extremely engaging and meaningful story, so make sure it’s attention-grabbing and designed with the mobile screen in mind. It’s important to stay up-to-date on new ad placement trends, because they are only going to continue evolving. Get ahead of voice ads, mixed reality, augmented reality, and virtual reality, and master your video advertising strategy!
If you found this article helpful, check out 11 Facebook Ad Optimization Tactics You Should Be Implementing for more tips and tricks.
Perspective from Ashley Stark, Paid Media Manager at Element Three | https://medium.com/element-three/how-to-get-the-most-out-of-your-paid-facebook-video-ads-765f2aee50a2 | ['Element Three'] | 2018-08-30 20:41:53.097000+00:00 | ['Paid Media', 'Facebook', 'Paid Advertising', 'Digital Marketing', 'Advertising'] |
Logging and debugging in JavaScript. A few methods I use on a daily basis. | I wrote this article to share a few debugging and logging methods I tend to use on a daily basis. I hope that at least some of them aren’t as popular as console.log, at least among the ones that just begin their journey with JavaScript.
1. console.log()
I think the console.log method is pretty straightforward. However, if you’re new to web development, here is a quick description. The console.log method is probably the most-used utility for debugging JavaScript. You can pass an object, string, number, array etc. as an argument and it will print its value to the console available in most modern browsers.
💡 Tip #1: When logging arrays, we can use the spread operator, so we don’t need to click the array in console.
Code example
const elements = ["theDevelobear", { is: "a" }, ["great", "blog"]];

console.log(elements);
console.log(...elements);
Above code example would print the following output into the console.
The console.log trap
console.log can sometimes be deceiving. What I mean is that what you see in the console can be something you wouldn’t expect it to be. Consider the following code:
const bear = { name: "Teddy" };
console.log(bear);

bear.name = "Bearnard";
console.log(bear);
What you would probably expect to see in the console is the first object containing the name “Teddy” and the second with the name “Bearnard”. Let’s see:
It seems right, but wait. Let’s expand the objects:
What?! 🤨
Told you — console.log can sometimes be deceiving. What happened here is quite easy to understand and also quite easy to forget about. The console.log method uses the value at the time of logging in the collapsed version and a reference to the object when expanded.
Stylish console.logs
Printing values to console is really useful, but the .log() method can do much more cool stuff. It takes a formatting string as a first argument, so you can place a %c operator and pass a string with CSS styles as a second argument. This way you can make the values more visible, or display a message to users who are accessing the console on your production site (just like Facebook does).
💡Tip #2: You can use console.clear() in code to be sure that the message is printed at the top of the console.
Code example
const styles = [
  "border: 1px solid #3E0E02",
  "color: white",
  "padding: 20px",
  "background: -webkit-linear-gradient(#ee0979, #ff6a00)",
  "font-size: 1.5rem",
  "text-shadow: 0 1px 0 rgba(0, 0, 0, 0.3)",
  "box-shadow: 0 1px 0 rgba(255, 255, 255, 0.4) inset, 0 5px 3px -5px rgba(0, 0, 0, 0.5), 0 -13px 5px -10px rgba(255, 255, 255, 0.4) inset",
  "line-height: 40px",
  "text-align: center",
  "font-weight: bold"
].join(";");

console.clear();
console.log(
  "%c Are you sure you need to be here? Please visit https://example.com/faq to read more about security issues",
  styles
);
Following code will generate:
Facebook console example:
2. Use debugger and feel like an embedded developer
What if you don’t want to put console.logs everywhere? You can use the debugger.
It is a great tool that you can use to stop the execution of the script on certain lines (just like with breakpoints). You can place the debugger keyword inside of your code, for example inside of a function, and the context of the method will be available in the JavaScript console in your browser. You’ll also be able to see the values of variables available in the scope, step over to another function call, resume the execution and so on. Having some experience with debugging STM32 C and Assembly code, I must admit that using the debugger feels similar (ten thousand times faster with all the hot reloading magic, but still similar).
It needs some getting used to, though.
Example — debugger basics
Let’s say you have a function in your code which looks like this:
const dummyFunc = () => {
  const a = 7;
  debugger;
  const b = a * 2;
  debugger;
  const c = b * 2;
  debugger;
};
After running your code, Chrome will stop the execution of the code on the first debugger. It will act like a breakpoint. It will also automatically open the “Sources” tab for you. There are a few useful things hidden inside.
The window on the top obviously shows the code that is running. We need it to keep track of where we are at the moment.
Below the code, we can see a Scope/Watch window. It is used to observe the values of our local variables as well as the ones from the global object (Window in my case).
After clicking Resume Script Execution button the code will start to run and pause on the second debugger.
Now we can see that the variables did update.
The Watch window allows us to watch specific variables during the execution of the script.
3. console.assert()
You can use console.assert() to make assertions inside your code. It is really useful when you need to check whether some condition occurs and you don’t want to go through a list of true/false values or put additional “ifs” only to debug the app. Using console.assert you can just throw a message when the assertion method returns false.
Caution: Keep in mind that a falsy assertion in Node.js < v10.0.0 will throw an AssertionError and halt the execution of your app.
Code example
console.assert(true, "This message will NOT be outputted");
console.assert(false, "This message WILL be outputted");
4. console.count()
If you don’t know this one and you develop React apps, you definitely should try it. console.count() method logs the number of times it was called. This function takes an optional string type argument which will be its title. In case you don’t provide the argument, it will display a message like “default — ”.
Code example
console.count(); // returns "default: 1"
console.count("Sidebar component render count: "); // returns "Sidebar component render count: 1"
console.countReset(); // default: 0
console.countReset("Sidebar component render count: "); // Sidebar component render count: 0
This leads us to why I think you should know this method. It enables you to easily count the number of renders of each component you use. Of course, you can use a package for that, but if you’re a #BundleSizeFreak then Tip #3 is for you.
💡Tip #3: You can simply put a console.count() inside of a component’s render method and it will let you know whether your component re-renders unnecessarily.
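For example, a tiny (hypothetical) class component could do something like this:

class Sidebar extends React.Component {
  render() {
    // Logs "Sidebar render: 1", "Sidebar render: 2", ... on every render
    console.count("Sidebar render");
    return <nav>{this.props.items}</nav>;
  }
}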
5. Turbo Console Log
Unfortunately, this is not one of the methods from chrome’s console (it would look really cool though — this.turboConsoleLog) but it is a great extension available to download from the VSCode Marketplace. If you’re using VSCode like I do, you should really check it out. It enables you to log a variable by selecting it and pressing ctrl+alt+L. You can also configure it with a few settings:
Log message prefix — this one is great! You can provide an emoji (like the one on the picture below) and the log will instantly become more visible in the console.
Wrap log message — instead of providing an emoji at the beginning of the message, you can automatically wrap it like on the gif below.
There are also a few keyboard shortcuts we can use to be more productive:
ctrl + alt + L (L like log) — main shortcut that puts a console.log below,
alt + shift + C (C like comment) — by pressing this combination you can comment out all console.logs inside a file.
alt + shift + U (U like uncomment) — uncommenting all console.logs.
alt + shift + D (D like delete) — deleting all console.logs from the file.
6. Be creative, add your own console methods
You can easily add your own methods and use them in your projects, just get creative and remember that it is meant to help you be more productive. You can create a console.yolo method which will for example… see for yourself! | https://thedevelobear.medium.com/logging-and-debugging-in-react-a-few-methods-i-use-on-a-daily-basis-e0b60420cc54 | [] | 2019-02-14 08:57:43.544000+00:00 | ['JavaScript', 'Logging', 'React', 'Debugging', 'Console Log'] |
Why B2B CRM is Entrepreneur’s Choice? | Supporting the processes and execution of Customer Relationship Management, B2B CRM solutions are now days basking a prestigious treatment in businesses industry. Their ever increasing scope and advanced customizable features have earned these softwares a valuable space in entrepreneur’s budgets. According to a research, more than 91% of companies (with less than 10n employees) use CRM solution. This depicts the gratification level of users in the industry.
Other than having innumerable fascinating features and being a widely followed trend, there are many other factors which draw entrepreneurs’ attention to B2B CRM solutions and make B2B CRM a priority pick for them, such as:
Provides Cloud Storage: Cloud storage offered by a B2B CRM solution is nothing less than a blessing for businesses. Through cloud storage, all important data is organized and stored on one centralized platform, where it is made accessible to all related personnel. The availability of data at one central point saves time which is usually wasted in searching for information.
Streamlines Business Processes: With the effective use of CRM, business processes get streamlined and automated. This particular feature of B2B CRM wins many benefits for the business in multiple aspects, i.e. it makes the operational system transparent, multiplies profit and cuts down operational cost. Referring to recent Forrester Research: “Companies that excel at lead nurturing generate 50% more sales ready leads at 33% lower cost.”
Forecasts Customer Behavior: Another important functionality that contributes to the value of B2B CRM is forecasting of customer behavior on the basis of multiple attributes which can be buying behavior, purchasing history, spent money etc. Through forecasted results, reports are generated in CRM software that help management in making strong business decisions.
Paves Way to SUCCESS: Paving way to success is the ultimate goal and main identity of every CRM solution. With the positive outcomes drawn from above mentioned functionalities, B2B CRM paves a smooth and short way for entrepreneurs to get to the peaks of success.
All these amazing features collectively make B2B CRM a preferred choice for entrepreneurs. With the effective use of this remarkably smart business software, one can bring magical results to one’s business.
Looking to adopt a CRM, You can check out this Infographic of 13 popular CRM Systems
How do you use your CRM? Is your CRM capable of performing any additional functions, let me know in the comments section below.
Get your FREE Consultation session with our experts on the perfect CRM for your business | https://medium.com/business-startup-development-and-more/why-b2b-crm-is-entrepreneurs-choice-a84371af9338 | ['Phillips Campbell'] | 2017-05-02 11:47:19.800000+00:00 | ['CRM', 'Tech', 'Salesforce', 'Business', 'Entrepreneurship'] |
Restify.JS: Your Production Ready REST API At Scale With The Microservice Architecture. | Get up and running with the Restify.js REST API based Framework at scale in the Microservice way by building and deploying a User Microservice.
Today, microservices are taking the lead in many tech companies. However, some still build their applications in a monolithic way. Their systems seem to be well protected, but this may represent a security issue, where hackers can find a breach to corrupt the whole system. Therefore, services like the Users Service, Payment Service, Email Service and other sensitive services residing in the same network will also be exposed.
This scenario wouldn’t have been possible if the application was split into little independent services that interact with one another through a secured API. In this case, even if the main service was corrupted, other services would still be intact and away from danger. This is an advantage of the microservice architecture.
What is a Microservice?
Microservice is compared to the Linux philosophy of small tools each doing something well, which we mix, match and combine into a larger tool. So it is a service or a tool built apart to only perform a specific task. This service can then be easily integrated into other services.
The microservice architecture adds another layer of security to your application, making it more secured and barricaded. So, in this article, we will consider a very sensitive service that almost all applications have: The User Service.
Why a User Microservice/Service?
Instead of adding a user model and a few routes and views to an existing application, this wouldn’t be practical for a real-world production application although it is totally feasible. But we care much about the high value of user identity information and the super strong need for robust and reliable user authentication. Based on this, we will develop our User Service in a microservice way. This will help build a user authentication system with the required level of security, one that is probably safe against all kind of intruders. However, a talented miscreant can still break in because, in every software program there are always some bugs here and there that the intruder could still take advantage of.
Introducing the User Service
The user service is in my humble opinion the most important service in any production application. It contains every single information about the user. Don’t just think in terms of Name, Password and Date of Birth, but consider also the user’s Bank Account Credentials, his Followers, the things he views or does daily, his Favorites Channels etc… Those information are a gold mine for hackers and other companies in need of advertising their products by targeting the right kind of user.
To limit the risk that users’ data will fall into the wrong hands, we will tightly secure and barricade the user authentication microservice. It can be used by any of our application through a well protected REST API. This will be an independent service deployed on a standalone Heroku server. Any other service/server that needs to access our User Service, must be authenticated first. It means the remote service must have the valid credentials to be authorized access to our User Service. Without any ado, let’s do this service together.
Kicking off our User Service
To make this simpler and spare you the pain of setting up and configuring a database server, we will use SQLite as our main database along with the Sequelize ORM. As our Node.js framework, we will use a REST framework named Restify.js. As the main website suggests, Restify is a Node.js web service framework optimized for building semantically correct RESTful web services ready for production use at scale. This makes it perfect to build a scalable user service. We will finally deploy it on a Heroku server. To start, our user service will have the below structure within a directory called users. So create a new project folder named "superapp" and inside it, create a directory entitled "users", then create the below sub-directories and files within the users directory.
superapp/users/
server.js
routes/
- index.js
controllers/
- index.js
models/
- users-sequelize.js
database/
- default-sequelize.sqlite3
config/
- default.json
scripts/
- create-user.js
- read-user.js
- delete-user.js
- list-users.js
sequelize-sqlite.yaml
.gitignore
.babelrc
server.js will contain the logic to create our user server and protect it with some credentials.
config/ directory will contain a json configuration file that represents the single source of truth for all the service routes.
routes/ and controllers/ as the titles imply are the routes and controllers of the user service.
models/ directory will contain the logic to create, read, delete and list users from the database. To do this, we will use the Sequelize ORM to interact with our database. This directory will also contain the logic to authenticate users.
database/ directory will contain our SQLite database file holding users’ data.
scripts/ directory will contain logic to test the creation, reading, deletion and listing of users.
sequelize-sqlite.yaml will be the configuration file to configure the type of database we are to use. As said earlier, we are using SQLite, but you could also use MySQL if you prefer. See Database section at the end of the article.
.gitignore is the configuration file we will use to ignore all npm modules so that they don’t move to production or a git repository.
.babelrc is the configuration file where we configure babel. Since we will be heavily using ES6+, we use babel to translate ES6 code to plain JavaScript.
package.json is the json file where we install all our dependencies and inject environment variables when the server starts. It will automatically be created when you initialize the project with npm.
To kick off the project, at the root of the project in the users directory, let’s initialize with the following commands:
npm init -y
git init
The first command will initialize the package.json file and the second will create a new git repository. That being said, let’s start installing our dependencies.
Pre-requisites:
Node@v10 or higher
npm@v6 or higher.
Dependencies
npm i bcryptjs config debug fs-extra gravatar js-yaml jsonwebtoken md5 restify restify-clients restify-errors sequelize sqlite3 uuid --save
Dev Dependencies
npm i babel-cli babel-preset-env babel-preset-stage-2 cross-env nodemon --save-dev
Before we go any further, let’s configure .babelrc and .gitignore. These files are to be created at the project root as shown in the project structure above.
.babelrc
{
"presets": [
"env",
"stage-2"
]
}
.gitignore
/node_modules
/database/users-sequelize.sqlite3
We can now start creating our server, but first, we will start setting up a better way to debug our app using the debug library. The code is the following.
server.js
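A minimal sketch of that setup could look like this (the debug namespaces are my own choice, they just need to match the DEBUG=superapp-users:* pattern we will configure in package.json later):

import restify from "restify";
import DBG from "debug";

// Two separate namespaces: one for regular messages, one for errors
const debug = DBG("superapp-users:server-debug");
const debugError = DBG("superapp-users:server-error");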
Next, we create authentication credentials that the user server will be expecting from any other remote service that needs access to it. So, we will create an array holding an object. We do this to ease the checking later by just iterating through the array. Therefore, let’s add the following code.
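For example, something along these lines (the user/key pair below is the same one the test scripts use later, and you should definitely replace the key with your own):

const apiCredentials = [
  {
    user: "super-admin",
    key: "FLDBY482-KUD5-X74P-7PDZ-6W0MB46DLMN5"
  }
];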
These credentials are hard-coded for now, but to add more complexity, we will need to encrypt the key later. However, feel free to change the key (highly recommended) for your personal use. The next thing to do is to write the logic that will check for the required credentials.
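A sketch of that check() middleware could look like this; the status codes and messages are assumptions that match what we will see later when testing with Postman:

const check = (req, res, next) => {
  // 1. Bail out early if no Basic Auth header was sent at all
  if (!req.authorization || !req.authorization.basic) {
    res.send(500, { message: "No Authorization Key Found" });
    return next(false);
  }

  // 2. Compare the presented credentials with the ones the user server holds
  let BASIC_FOUND = false;
  for (const credentials of apiCredentials) {
    if (
      credentials.user === req.authorization.basic.username &&
      credentials.key === req.authorization.basic.password
    ) {
      BASIC_FOUND = true;
      break;
    }
  }

  if (BASIC_FOUND) return next();

  debug("Access Denied");
  res.send(401, { message: "Server Authentication Failed" });
  return next(false);
};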
We wrote a check function that will be passed later to Restify as a middleware. As we said earlier, any other remote server or service that needs to access the user service will first need to authenticate itself. It means that it must exactly hold the earlier credentials(user and key). This function is in charge of checking the authentication credentials before granting any access.
The first condition checks if the authorization header and its basic Auth exist. The req object actually has an authorization header which holds the username and password in its basic Auth. This first check will save us time and resources by not bothering to look for any credentials if those objects do not even exist. But if they exist, we will then iterate through the array and check that the username and password presented by the remote server are the same as the credentials on the user server. If they are the same, we set BASIC_FOUND to true and grant access with the next() function in the chain, otherwise we deny access to the server and return a friendly debugging message "Access Denied".
Note: Unlike Express.js and other Node.js frameworks, Restify expects you to always pass the next() function. As you may already know, it is a middleware that determines whether to go to the next function in the chain. In case you don't want to go to the next function, just pass false to it as in: next(false). Nevertheless, there are also other ways to stop Restify going to the next function in the chain; so you can visit Restify's official website for more information, under the 5th paragraph entitled: Using next().
Now we can create a Restify server that represents the user server. So add the following code to the next line.
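For instance (the name is the one we will see in the console output later, the version is arbitrary):

const server = restify.createServer({
  name: "User-Auth-Service",
  version: "1.0.0"
});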
The createServer() is a restify method that creates a restify server. Now that it is created, we will need to configure the server with the following code.
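A sketch of that configuration, matching the description just below:

server.use(restify.plugins.authorizationParser());
server.use(check);
server.use(restify.plugins.queryParser());
server.use(restify.plugins.bodyParser({ mapParams: true }));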
On the first line, we enable authorization on the server using a restify plugin named authorizationParser(). The next line is passing the check function we wrote earlier as a middleware to the server we just created. This means that every single http request made to the user server will first pass through the check function for authorization.
On the next line, we enabled query string parameters with the queryParser() plugin api to be able to retrieve data in a get request through req.params. We also enabled the body object with the bodyParser() plugin api to be able to retrieve forms data sent through POST or PUT requests. This time, note that we will not be retrieving forms data though req.body but rather through req.params still. Why? Simply because, we pass to the bodyParser() a mapParams attribute set to true.
Note: In Restify, to be able to retrieve data from a get, post or any other http request, you must configure the server with plugins api to do so. There are many other plugins api available for configuring a restify server, but this is all we need for our service. However, you can visit restify official website under the plugins api section for more information.
As we would in every Node.js app, let’s handle errors with the following code.
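For example, using the debugError logger defined at the top of the file:

process.on("uncaughtException", err => {
  debugError("Uncaught exception:", err);
});

process.on("unhandledRejection", (reason, promise) => {
  debugError("Unhandled rejection at:", promise, "reason:", reason);
});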
Taking advantage of the debug logger we set up at the top of the server.js file, we are listening to “uncaughtException” and “unhandledRejection” events to print error messages in case of an Error or a Promise Rejection. What’s next? We will now create the various routes we need to create, read, delete and list users.
User Routes
Before we create the different routes and their related controller functions, we will first configure a single source of truth for all our routes.
config/default.json
This file represents our single source of truth. The code is the following.
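A minimal sketch of config/default.json could look like this; the exact route paths are assumptions on my part, and the rest of the sketches in this article reuse them:

{
  "routes": {
    "entry": "/users",
    "register": "/users/register",
    "find": "/users/find/:email",
    "findById": "/users/findById/:id",
    "checkPassword": "/users/checkPassword",
    "destroy": "/users/destroy/:email",
    "list": "/users/list"
  }
}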
We can then write the logic for our very first route.
routes/index.js
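Here is a minimal sketch of what routes/index.js could look like at this stage (the route key and debug namespace are assumptions, matching the config sketch above and the console output we will see later):

import config from "config";
import DBG from "debug";
import { entry } from "../controllers";

const debug = DBG("superapp-users:routes-debug");

const usersRoutes = server => {
  // Entry-point of the User Service API, e.g. GET /users
  server.get(config.get("routes.entry"), entry);

  // The routes module is also responsible for starting the service
  server.listen(process.env.PORT || 4000, () => {
    debug(`User Authentication Server ${server.name} running at ${server.url}`);
  });
};

export default usersRoutes;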
Now that our first route has been put in place, let’s create its related controller in order to start the server to make sure that everything is working fine so far.
User Controllers
Let’s write our very first controller. It goes the following way.
controller/index.js
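A possible minimal version of controllers/index.js at this point; the shape of the response body is arbitrary, it only confirms the service is alive:

const entry = (req, res, next) => {
  res.send(200, {
    service: "superapp-users",
    status: "up and running",
    production: false
  });
  return next(false);
};

export { entry };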
When we start our server, this route will not be of any particular use. But don’t worry, we just want to test that our user service is up and running on the appropriate port, be it in development or production. Before we start it though, let’s add to our server.js file the route we just created. We do this because we used the server object within the routes module to listen and start the service. So in server.js, import the routes module and pass to it the server the following way.
...
import usersRoutes from "./routes/";
...
usersRoutes(server);
The usersRoutes(server) function is to be called at the end of the server.js file, right after error handling functions.
To start the server, we will need to touch our package.json file by configuring the server command.
package.json
{
...
"server": "cross-env DEBUG=superapp-users:* PORT=4000 nodemon --exec babel-node server.js"
...
}
Now, still being at the project root in our console, we launch our server.
npm run server
If it all goes well, we must see the message:
superapp-users:routes-debug User Authentication Server User-Auth-Service running at http://127.0.0.1:4000
Our API has started taking shape. Our main API in development will then be http://127.0.0.1:4000/users. This will be the entry-point for any of our User Service functionalities. We chose to keep it simple and stupid.
Now, using Postman, let’s test the User Service by calling the api http://127.0.0.1:4000/users. If you do not have Postman, it is totally fine, just follow along. But if you do, open it. In the url/request bar, select the GET request type, then provide the username and password authentication credentials under the Authorization tab, then send the request. Without the credentials, our request would be rejected with a 500 Error Status and a friendly response message “No Authorization Key Found”. In case, the credentials are incorrect, our request would be rejected with a 401 Error Status and a friendly response message “Server Authentication Failed”. The following screenshot well demonstrates what we mean with a successful access to the User Service.
Note: Be careful, http://127.0.0.1:4000/users IS NOT the same as http://127.0.0.1:4000/users/. Throughout this article, the three dots (…) indicate the previous or next piece of code.
So, are we good? Perfect. You can now shut down the server if you want. Now let’s add our actual routes and controllers to get into the real deal.
User Routes — More Routes
Starting from here, we will be added all the required routes to create, read, delete and list users. Going back to routes/index.js, we then add the following routes.
routes/index.js
Next, we add each of the related controllers.
controllers/index.js
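All the handlers follow the same pattern; here is a sketch of what register() could look like (the error shape and status codes are my assumptions):

import * as users from "../models/users-sequelize";

const register = async (req, res, next) => {
  try {
    // Form data arrives on req.params thanks to bodyParser({ mapParams: true })
    const result = await users.create(req.params);
    res.send(200, result);
    return next(false);
  } catch (err) {
    res.send(500, { error: err.message });
    return next(false);
  }
};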
The register() callback function creates a new user using the create() function of the user’s model. In the latter function, you can notice we are retrieving post data through req.params instead of req.body as we said we would earlier. Since this is an asynchronous request, we use the ES7 async/await feature to make sure the user is successfully created before we send back the response. As explained earlier, we also used next(false) at each step to tell Restify that it should not continue to the next call in the chain but to stop right there. In case you forget to call next() or next(false), even after completing the first task, Restify will go to the next function, which will result in an error. The same logic is used throughout the rest of the controller.
The next thing to do is to add a function that will find the user by email. So add the following code after the register function.
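Following the same pattern, find() could look like this:

const find = async (req, res, next) => {
  try {
    const result = await users.find(req.params.email);
    res.send(200, result);
    return next(false);
  } catch (err) {
    res.send(500, { error: err.message });
    return next(false);
  }
};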
The primary role of the find() function is to look for an existing user. You are free to rename it to findByEmail() if your prefer. So before we create a user, we will first need to check if they exist. In case they do exist we return a friendly message to the user, otherwise we go on creating the user. The response sent back is an object that will determine whether to create the user or not. You will see the real use of this function when you call this api endpoint on your main platform/service. However we will be testing this soon.
Next is to find the user by its id. So here is the following code, to be added after the above function.
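Again a sketch along the same lines:

const findById = async (req, res, next) => {
  try {
    const result = await users.findById(req.params.id);
    res.send(200, result);
    return next(false);
  } catch (err) {
    res.send(500, { error: err.message });
    return next(false);
  }
};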
The findById() is just like the find() function, except that it looks for the user by its id. The function is handy when we want to delete the user, check the user’s post, profile etc..
Let’s now authenticate the user by checking its password. The logic then goes this way.
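A sketch of checkPassword(), which only needs the email and password from the request:

const checkPassword = async (req, res, next) => {
  try {
    const result = await users.checkPassword(req.params.email, req.params.password);
    res.send(200, result);
    return next(false);
  } catch (err) {
    res.send(500, { error: err.message });
    return next(false);
  }
};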
The checkPassword() checks the user’s password when they are trying to log in. Using just the email and password, the response sent back from the user’s model is an object that determines whether the user’s credentials are correct or wrong.
It is now time to write the logic to delete a user from the database.
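A sketch of destroy(), deleting by email for easier testing later:

const destroy = async (req, res, next) => {
  try {
    const result = await users.destroy(req.params.email);
    res.send(200, result);
    return next(false);
  } catch (err) {
    res.send(500, { error: err.message });
    return next(false);
  }
};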
The destroy() function deletes the user from the database. To do this, it is necessary to know the user’s id. However, here we are deleting the user by its email. We chose to use emails because we wanted to be able to easily test later. You will see clearly in to this further.
Finally, we can write the logic to list out all registered users.
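A sketch of list(), with a reminder to export every handler at the end of the file:

const list = async (req, res, next) => {
  try {
    const result = await users.list();
    res.send(200, result || []);
    return next(false);
  } catch (err) {
    res.send(500, { error: err.message });
    return next(false);
  }
};

// At the end of controllers/index.js:
export { entry, register, find, findById, checkPassword, destroy, list };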
The list() function lists all users. The result sent back from the user’s model will either be empty or an array of length 1 or more. So in case there is no user registered, we simply return an empty array, otherwise we return the list. Note that in every function, we implemented a try/catch. This helps us handle asynchronous request errors when they arise.
So far so good, right? The last step to getting all the functionalities of our User Service up and running is to write the logic that will actually create and save users’ data in the database, as well as perform different actions on the database such as reading, authenticating, deleting and listing.
User Model
Before we start creating the user model, let’s start configuring the database we will be using in a YAML file.
sequelize-sqlite.yaml
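A sketch of what this file could contain; the database name, credentials and storage path are placeholders you can adapt, and the operator aliases setting may be richer in your own version:

dbname: users
username: admin
password: superSecretDbPassword
params:
  dialect: sqlite
  storage: database/users-sequelize.sqlite3
  logging: false
  operatorsAliases: false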
In this configuration, we start by giving a name to our database. We then set up a password and a username to access the database. However, the most important parameters are the dialect, which specifies the kind of database to use, and the storage, which specifies where to store the database file.
The operatorAliases parameter configures SQLite operators like equal, greater than, lesser than etc.. This simple configuration file actually helps our service to be more scalable. At anytime, we can switch to any database, be it MySQL or Postgres. In case we want to switch, we will just need to change the dialect and storage properties, rename for instance the yaml file to sequelize-mysql.yaml or sequelize-postgres.yaml and download the appropriate Node.js modules.
Now that this is set, we can start with the user model.
models/users-sequelize.js
First, we start by importing Sequelize. We could have used only SQLite to write queries to interact with the database directly, but we use the Sequelize ORM to ease our development and make the user server more scalable. We are importing bcryptjs mainly to encrypt users’ passwords. We could have used Node.js’ default file system (fs) to read a file, but it does not support promises, so we imported the fs-extra module that has all the features of the default file system plus some additional features along with promises support. Finally, we imported the js-yaml module to read the YAML configuration file we created earlier.
Then we defined the sqlize and SQUser variables. They will respectively configure Sequelize to use a database and define the User document or table if you prefer. The Op variable is defined to use Sequelize operators.
The connectDB() function contains logic to create the User table and connect to the database. The first condition checks if there is an existing database file. If there isn't, we then create a new one by reading the YAML configuration file whose parameters are supplied to Sequelize to create the database. The second condition checks if there is already an existing User Table. If there isn't, we create a new table, otherwise, we just return the existing table for actions to be performed upon.
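Putting the pieces just described together, the top of models/users-sequelize.js could look roughly like this; the table fields and the fallback to sequelize-sqlite.yaml are assumptions:

import Sequelize from "sequelize";
import jsyaml from "js-yaml";
import fs from "fs-extra";
import bcrypt from "bcryptjs";

const Op = Sequelize.Op; // Sequelize operators

let sqlize; // the Sequelize connection
let SQUser; // the User table definition

async function connectDB() {
  // If we already have a connection and a User table, just reuse them
  if (sqlize && SQUser) return SQUser.sync();

  // Otherwise read the YAML configuration and let Sequelize create/open the database file
  const yamltext = await fs.readFile(
    process.env.SEQUELIZE_CONNECT || "sequelize-sqlite.yaml",
    "utf8"
  );
  const params = jsyaml.safeLoad(yamltext);

  if (!sqlize) {
    sqlize = new Sequelize(params.dbname, params.username, params.password, params.params);
  }

  if (!SQUser) {
    SQUser = sqlize.define("User", {
      id: { type: Sequelize.STRING, primaryKey: true, unique: true },
      name: { type: Sequelize.STRING },
      email: { type: Sequelize.STRING, unique: true },
      password: { type: Sequelize.STRING }
    });
  }

  return SQUser.sync();
}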
Next, we can write functions that will interact with our database. First of all, we will start saving a new user to the database. Add the following code right after the connectDB() function.
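A sketch of create(), which simply waits for the connection and then saves the fields it receives:

async function create(userData) {
  await connectDB();
  return SQUser.create({
    id: userData.id,
    name: userData.name,
    email: userData.email,
    password: userData.password
  });
}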
The create() function creates a user in the SQLite database. Sequelize has an inbuilt function called create(). As you can see, it is called upon the User table to save a new user. But before we have any interaction with the database, we must first ensure that the connection is established. For this reason we use promises to wait for the connectDB() function.
The find() function looks for a particular user in the database. This is possible through the Sequelize findOne() function that searches for a unique user by a particular property, in our case: email. The if condition helps decide whether to create a new user. In case the user exists, we return an object specifying that the found property is true, which lets the view return a friendly message to the user. However, if the user does not exist in the database, we instead return an object specifying that the found property is false. Such information will determine whether to create the user or not.
The findById() is just the same as the above find() function, except that this time, it searches the user by its id within the database.
The checkPassword() is where we authenticate users to log them into their account. Of course, the whole mechanism is not done here. Nevertheless, we authenticate users by first checking if they actually exist in the database. If they do, it means that they have previously been registered. So the next block of code checks the email and password. Since we used bcryptjs to hash the password upon registration, we use it again to compare the hashes upon authentication. If the password does not match we return a friendly "Invalid Credentials" message to the user in the view, otherwise we send an object that will be used to log the user in.
The destroy() function permanently deletes a user from the database. It first checks if the user exists then removes them with the Sequelize destroy() function.
The list() function as you may already know, list out all existing users in the database. The Sequelize findAll() is in charge of listing them all. We then iterate through the array returned by the Sequelize function to filter out user’s password. You may be thinking that there is no need since all passwords are encrypted. I totally understand your concern, but thinking this way would be a terrible mistake. Remember, we care much about users’ data that our main concern is to make it highly secured.
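For reference, sketches of the remaining model functions, together with the sanitizer() helper explained just below, could look like this; the exact shape of the returned objects is an assumption, as long as they carry the found / check information the controllers rely on:

async function find(email) {
  await connectDB();
  const user = await SQUser.findOne({ where: { email: email } });
  if (!user) return { found: false, message: "User does not exist" };
  return { found: true, user: sanitizer(user) };
}

async function findById(id) {
  await connectDB();
  const user = await SQUser.findOne({ where: { id: id } });
  if (!user) return { found: false, message: "User does not exist" };
  return { found: true, user: sanitizer(user) };
}

async function checkPassword(email, password) {
  await connectDB();
  const user = await SQUser.findOne({ where: { email: email } });
  if (!user) return { check: false, message: "User does not exist" };
  // Passwords were hashed with bcryptjs at registration time
  if (!bcrypt.compareSync(password, user.password)) {
    return { check: false, message: "Invalid Credentials" };
  }
  return { check: true, user: sanitizer(user) };
}

async function destroy(email) {
  await connectDB();
  const user = await SQUser.findOne({ where: { email: email } });
  if (!user) return { found: false, message: "User does not exist" };
  await user.destroy();
  return { deleted: true, email: email };
}

async function list() {
  await connectDB();
  const userlist = await SQUser.findAll({});
  // Filter the password out of every record before it leaves the model
  return userlist.map(user => sanitizer(user));
}

function sanitizer(user) {
  return { id: user.id, name: user.name, email: user.email };
}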
Therefore, the sanitizer() function will be responsible for filtering out the user’s password as we may notice. To finish with our model, we lastly need to export all our functions, otherwise we won’t be able to interact with the database from the controller file.
export { create, find, findById, checkPassword, destroy, list }
This last line of code ends the models/users-sequelize.js file. The final and next stage before deploying our User Service will be of course, to test it. So let’s go.
Testing the User Service
In every software development lifecycle, tests are written to ensure the application behaves the way we expect and spot bugs at a very early stage. This is what we will be doing here. Note that at this stage, we are to create another server which will be a client server that handles the logic to create a new user. Create a file called create-user.js within the scripts directory as shown under the Kicking off our User Service section.
create-user.js
Using the "use strict" mode, we first make sure that our code is safe and less error-prone. Then we import all the modules we will need to create this test. The "restify-clients" module is the library we will use to create the client server that simulates HTTP requests against our User Service. This same client server will be used in all other tests. As you may already know, the "uuid" module will give us a unique id at the creation of every single user. Now, let’s create the client server.
The createJsonClient() is the "restify-clients" method that will create our client server. You need to make sure that it is listening on the same port as the User Service, because it is against this service that we will be making HTTP requests. Next we will need to provide the credentials needed to have access to the User Service.
…
client.basicAuth(“super-admin”, “FLDBY482-KUD5-X74P-7PDZ-6W0MB46DLMN5”);
So, after creating our client server, we have access to the basicAuth() method, which takes as parameters a username and a password, which are the credentials. To finish writing this test, we will write the logic that will actually create the user the following way.
To make the code cleaner and more readable, we created variables that represent some of the user’s data and a function named encryption that is responsible for encrypting the user’s password using the “bcryptjs” module. Next we create an IIFE function that will immediately get called when the client server starts. This is where the post http request will be made against the User Service to finally create our very first user. This block of code ends the create-user.js file.
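Pulling the pieces together, create-user.js could look something like this; the example user data is obviously made up, and the /users/register path follows the config sketch from earlier:

"use strict";

import restifyClients from "restify-clients";
import uuidv4 from "uuid/v4";
import bcrypt from "bcryptjs";

// Client server pointing at the locally running User Service
const client = restifyClients.createJsonClient({
  url: "http://localhost:4000",
  version: "*"
});

// The same credentials the User Service expects in its check() middleware
client.basicAuth("super-admin", "FLDBY482-KUD5-X74P-7PDZ-6W0MB46DLMN5");

const name = "Teddy Bear";
const email = "teddy@example.com";
const password = "superSecret!";

// Hash the password before it ever leaves the client
const encryption = plain => bcrypt.hashSync(plain, bcrypt.genSaltSync(10));

(async () => {
  client.post(
    "/users/register",
    { id: uuidv4(), name, email, password: encryption(password) },
    (err, req, res, obj) => {
      if (err) return console.error(err);
      console.log("Created user:", obj);
    }
  );
})();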
You and I are both tempted by now to run those two servers to see what actually happens and make sure that our service is smoothly doing what it has been made for. But relax, we will get to this shortly. Let’s just finish with all other tests, then we can rest on our laurels.
The next code logic is similar to the create-user.js file, except that here we will be reading the information of the user we created above. If you haven’t yet, create a file named read-user.js that contains the following code.
read-user.js
You see that the logic is quite the same and pretty straightforward using the same createJsonClient() method. However, in our get request, you may notice that we use process.argv[2]. Remember that we are using Node.js after all. In Node.js, the process.argv[] is an inbuilt API of the process module which is used to access command line arguments. Here, we are looking for a particular argument within the command of the current running process. Therefore, the 2 within the array specifies that it is the third argument we are looking for. So, in package.json, we will provide the user's email within the "read-user" script command to read the user's info. That is how we will look for a particular user in the database.
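A sketch of read-user.js along those lines, reusing the /users/find path assumed earlier:

import restifyClients from "restify-clients";

const client = restifyClients.createJsonClient({
  url: "http://localhost:4000",
  version: "*"
});

client.basicAuth("super-admin", "FLDBY482-KUD5-X74P-7PDZ-6W0MB46DLMN5");

// The email comes in as the third command line argument
client.get(`/users/find/${process.argv[2]}`, (err, req, res, obj) => {
  if (err) return console.error(err);
  console.log("Found user:", obj);
});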
Now, here is the rest of our code for listing all users and deleting a particular user from the database. If you haven't yet, create files named list-users.js and delete-user.js containing the following code respectively.
list-users.js
delete-user.js
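If you want a starting point, here is an illustrative sketch of the two scripts. The /list and /destroy/:email routes are assumptions; match them to the routes defined in your own controllers. The client setup is the same as in the previous scripts.

'use strict';

const restifyClients = require('restify-clients');

const client = restifyClients.createJsonClient({ url: 'http://localhost:3333' });
client.basicAuth('super-admin', 'FLDBY482-KUD5-X74P-7PDZ-6W0MB46DLMN5');

// list-users.js: fetch every user
client.get('/list', (err, req, res, obj) => {
  if (err) return console.error(err.stack);
  console.log('Users:', obj);
});

// delete-user.js: the email comes from process.argv[2], as in read-user.js
client.del(`/destroy/${process.argv[2]}`, (err, req, res) => {
  if (err) return console.error(err.stack);
  console.log('User deleted, status:', res.statusCode);
});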
This is the end of our tests. Now, before we start the User Service and the client server for testing, we need to make some amendments to our package.json file. Open it to amend the following scripts.
package.json
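As a rough guide, a possible scripts section could look like the snippet below. The script names follow the npm commands used in the rest of this section, the dev "server" script mirrors the production start command shown later, and the email arguments are placeholders you should change to the user you want to read or delete.

"scripts": {
  "server": "SEQUELIZE_CONNECT=sequelize-sqlite.yaml babel-node server.js",
  "create-user": "babel-node scripts/create-user.js",
  "read-user": "babel-node scripts/read-user.js johndoe@example.com",
  "list-users": "babel-node scripts/list-users.js",
  "delete-user": "babel-node scripts/delete-user.js johndoe@example.com"
}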
Next, open two different terminals and run the following command in the first.
npm run server
This command kicks off our User Service. Keep this running. Then, run the following command in the second terminal.
npm run create-user
This command creates a new user and saves it to our SQLite database. The logs shown in both consoles should indicate whether the creation was successful or not. You can then verify that the user was indeed created by running the "read-user" command.
npm run read-user
If successful, the logs in the console should show the information of the user whose email you provided in package.json. Do the same for the rest of the commands.
npm run list-users npm run delete-user
Now you can create as many users as you want. Just make sure that you always provide a new email in the create-user.js file. Do the same for reading and deleting the users you want by updating their email in the respective scripts configured in package.json.
Deploying to production
Now we are good to go. What remains is to host our User Service on Heroku. The first step is to uninstall Babel from Dev Dependencies and re-install it in Dependencies.
- Dev Dependencies
npm uninstall babel-cli babel-preset-env babel-preset-stage-2 --save-dev
- Dependencies
npm i babel-cli babel-preset-env babel-preset-stage-2 --save
Since Node.js, and even less so Heroku, does not fully support ES6 code yet, doing this will allow us to use Babel to run ES6+ code with ease in production. In package.json, you may have already noticed the main command that will start our User Service in production:
"start": "SEQUELIZE_CONNECT=sequelize-sqlite.yaml babel-node server.js"
You can plainly see the use of babel-node . But be careful, this is not recommended. I used Babel in production for a quick fix just to have our Scalable User Service Up and Running in production. Babel is supposed to be used only in development and not in production.
However, starting from version 10+ of Node.js, ES6/ES7+ features are now supported, but not with the same file extension. CommonJS modules use the .js file extension and do not support ES6 module syntax yet. But with the advent of ES6 Modules, the community introduced an experimental module system that understands ES6+ code in a .mjs file extension. Therefore, I highly suggest that instead of using .js extensions, you use .mjs extensions and write ES6+ code without the need for Babel. Just make sure that you add the --experimental-modules flag in the package.json scripts where you are executing .mjs files. So if you decide to use this extension, your main start command could look like:
"start": "SEQUELIZE_CONNECT=sequelize-sqlite.yaml node --experimental-modules server.mjs"
This is one way of solving the problem. That being said, I leave it up to you to find another, better way of running ES6+ code in production. You can share it in a comment below.
Next, rename the config/default.json file to config/production.json. You should also set the "production" value of the API's entry-point, which is in the index controller, to true.
Now, what you need is a Heroku account. If you have none yet, then create one. Afterward, you will need the Heroku Toolbelt installed on your machine. If you do not have it on your machine, then follow this link for instructions. Once the Heroku CLI is installed on your machine, open a terminal and log in with your Heroku credentials, providing your username and password.
heroku login -i
Once logged in, you can push the project to a git repository. First create the project on your GitHub or GitLab account if you haven't yet. Then add it as a remote on your machine with the very first command:
git remote add origin https://github.com/your-username/project-name
git add -A
git commit -m "Created the User Service"
git push origin master
Now let's create a new Heroku app.
heroku create superapp
This command creates our project on the heroku platform. Here, we named it “superapp”, but feel free to name it whatever you like. Then check if the heroku project repository was successfully added on your machine with:
git remote -v
Finally, we can push to production.
git push heroku master
This command automatically deploys the app to production. You can view the progress and activity on your Heroku account dashboard. However, being at the root of the project, you can also view it from the terminal with the following command.
heroku logs --tail
We are all set. We have successfully created a scalable User Service and shipped it to production. Now, our API's entry-point in production will be https://superapp.herokuapp.com/users. Test this API in Postman as we did earlier.
Now, do you have another application that needs to create, read and authenticate users? It could be a Photo Sharing or Booking Platform. That platform is then your first and main service. Now you have another fully functional service: the User Service. All you will need is to make both services communicate through the api we just shipped. You can also adapt this api to the need of your platform to perform whatever action you need on the User Service. Nothing is impossible, the only limit is your imagination. However, there are things we can still improve: Tests, Database and Security.
Tests
So far, we have tested our application using the restify-clients module to simulate HTTP requests against the User Service. We've also tested our API's entry-point with Postman. However, what we haven't tested are the other remaining endpoints. So, taking advantage of Postman, try to cover the rest of the endpoints. Why not also try to write some unit tests ;)
Database
After deploying to Heroku, you will realize that your sqlite3 database file is being cleaned up after a while, every time the dyno reboots. This is because the Heroku filesystem is ephemeral, causing you to lose all previously stored data. This is normal because SQLite is not really a production-grade database. So consider switching to MongoDB, MySQL or Postgres.
Switching to MySQL may even be easier than what you think. As we said earlier, you just need to adapt the parameters within a new YAML configuration file called sequelize-mysql.yaml, then download the required mysql module for Node.js. But note that you will have to configure a MySQL database server on heroku, which will eventually require a credit card.
Security & Beyond
We have secured our User Service as we could for now. But there is still work to be done. I will soon be releasing my next article entitled Docker: Building Scalable Applications In The Microservice Way Made Easy. With the architecture already set in place, we just developed a subnet segmentation without realizing it. This is a good design decision. Our User Service now represents a Network that we can name AuthNet, because it is an authentication system. In the Docker article, we will develop an additional subnet called FrontNet. This will be the main service that our “superapp project” will be offering. So that service could either be a Photo Sharing or Booking Platform as we mentioned a while ago. This service will then communicate with the User Service through the API we just deployed with the required level of security to have access to it. We will create a protective security wall between both subnets by dockerizing and making them communicate through a Docker Bridge Network. You can imagine each Service, independently sitting in its network performing its task. So, we will continue from there and explore together how to fully protect our superapp project at scale with this Containerization Technology, but this time on Digital Ocean. Stay safe till then.
Happy Coding! | https://medium.com/swlh/restify-js-your-production-ready-rest-api-at-scale-with-the-microservice-architecture-e8ba77e7551f | ['Steven Daye'] | 2020-06-15 16:49:22.269000+00:00 | ['Heroku', 'Nodejs', 'Restify', 'Microservices', 'Rest Api'] |
What To Expect When You Look At The Future | “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man,” said the Greek philosopher Heraclitus.
If there is a constant in the Universe, it is the fact that everything changes.
And we are in the midst of massive changes, whether we like it or not. Even the prospect of small personal changes creates discomfort because we don’t like change.
People prefer to stay comfortable in a predictable way. But life is unpredictable. Who could have predicted the current circumstances? No one. Oh, sure, we all knew that it was a possibility that sooner or later, something like this could happen. Yet, we face unpredictable circumstances.
Even now, we cannot predict how things will turn out. We don’t know what life will be like when all is said and done. All we can do is hope for the best and mentally prepare ourselves to face the worst.
I would like to share some perspectives put forth by some brilliant and knowledgeable people on what we can do now, and in times to come knowing what we know, and given the unpredictable nature of life.
First, Ryan Frawley talks about the futility of worry, and how “Coulda, Shoulda, Woulda” never make any sense. We don’t know how or where the roads we didn’t take would’ve ended. So why not acknowledge where we are and look at what’s ahead and decide which way to go from here.
Next, Rosennab shows us why we don’t need to worry because the Universe always has our back. There is an intelligence that can see and know what is up ahead and can mold the circumstances to help us tread through any challenges we may face, regardless of what choices we make in the present times.
Finally, Bill Abbate shows us how we can expand our world and continue to grow. He says,
There is a direct link between your ability to broaden your perspectives and your Intellectual growth.
Through intellectual growth, we can see more solutions to our challenges and have the ability to choose the ones that suit us best from where we find ourselves.
I love Medium. I can find many brilliant and thought-provoking stories that expand my perspective, and then I share those with others, enabling me to be of service. It’s a win-win-win situation.
As always, thank you for reading and responding.
More about me: | https://medium.com/top-3/what-to-expect-when-you-look-at-the-future-75910e379a76 | ['Rasheed Hooda'] | 2020-05-15 23:33:50.256000+00:00 | ['Future', 'Pandemic', 'Self', 'Life', 'Wisdom'] |
Hey, Barracuda! Queer TV in Australia, with Composer Bryony Marks | Hey, Barracuda! Queer TV in Australia, with Composer Bryony Marks
The mind behind the music of “Please Like Me” and “Barracuda”
Image courtesy of Norman Parkhill, inSync Music
AUSTRALIA, BOTH AS COUNTRY AND CONCEPT, is something of a paradox. An enormous sunburnt landmass dumped at the bottom of the planet, a million miles from anything else, one would be forgiven for presuming the disconnected nature of its inhabitants. Yet this big red rock has housed some of the finest writers, musicians and artists in the English-speaking world, many of them well-regarded for their collaborations and joint ventures.
The big lonely landmass, then, understands the meaning of that old adage ‘no man is an island’. Composer Bryony Marks seems to share that belief, having worked on projects spearheaded by Australia’s best-loved and most highly acclaimed authors. With 2011’s Cloudstreet — adapted from the applauded novel by Tim Winton, an icon of Australian letters — and the ABC miniseries Barracuda listed among her composer credits, Bryony is a Komponistin with flawless taste in picking projects.
“I am drawn to projects which move me,” she says, “which speak to the human condition in all its crazy glory. There’s no one subject, genre, sub or dominant culture that particularly resonates for me. Rather I would say my favourite projects have shared an element of authenticity, of honesty, expressed in manifold ways.”
She is also an artist whose work holds a very special place in my own heart and head.
I was 20 years old when I first saw the small-screen adaptation of Christos Tsiolkas’s novel Barracuda, broadcast on UK television while in my second year of uni. Much of Bryony’s score is driven by propulsive guitar strumming and some more experimental ambience, offering a musical embodiment of our young protagonist’s triumphs, failures and incendiary inner turmoil.
Charting the powerful determination of gay swimmer Danny Kelly, the working-class child of Greek immigrants, as he navigates the latent homophobia and racism of an elitist sports academy in late-90s Melbourne, Barracuda deals with complex themes that the atmosphere of Marks’s music allows the audience to better understand. The small cue that leapt out at me in the second episode, “In the Pool,” really set the tone for just how prominently her music would serve as an emotional aid to this series: a minute-long sparkle of piano and guitar, it patters across a stunning lateral tracking shot of Danny flying over the length of a swimming pool.
Marks had read Tsiolkas’s 2013 novel before she was approached to score its adaptation. “What a visceral, frightening, complex, brilliant read it was,” she recalls. “In part a coming-out and coming-of-age story, with Danny’s richly drawn Greek/Irish family life in contrast to the frigid Anglo restraint of the Eastern Suburbs boys, it was mesmerising.”
Straying from the narrative of the book, neglecting Danny’s adulthood for a four-year period of his adolescence in the run-up to the Sydney 2000 Olympics, the series handles Danny’s troubling homosexuality with more suggestive deft than the shocking slap of Tsiolkas’s original prose. An astounding array of feelings pass unspoken, with many of the series’ standout moments presented in silent closeups or the pointed use of slow motion. The image of Danny playfully wrestling his aloof, wealthy new classmate into a swimming pool — and holding him in an underwater embrace as they slowly sink to the bottom — has forever been impressed into my memory.
The programme’s approach to societal divides of wealth, class, and sexuality at the dawn of the millennium was remarkably poignant, and remains one of my most influential viewing experiences as a young adult.
Unsurprisingly, Marks “jumped at the chance to work on this material with such a talented team — exceptional producers Tony Ayres and Amanda Higgs; and director Robert Connolly, who is rigorous, passionate, highly creative and communicative.”
A textbook example of Marks’s affinity for collaboration, she reflects fondly on how Connolly “extended me and challenged me, particularly in the long musical sequences composed for the swimming races. Everyone I worked with on Barracuda stridently wanted to honour the book, much the same way the cast and crew felt on the earlier adaptation of Tim Winton’s much-loved Cloudstreet. It’s hugely exciting,” she says, “and a little terrifying for all involved.”
Another major credit in Bryony’s body of work is long-running ‘dramedy’ Please Like Me, created, written by, and starring gay Australian comedian Josh Thomas.
Following a fictionalised version of himself, the show delves into everything from suicide attempts, homophobia, family relationships, and the funny, messy process of finding one’s place in the world.
As well as the creative force behind the show’s script, what ultimately drew Marks to the recording studio was the prospect of working with one of her favourite, long-standing collaborators — and indeed her husband — the director Matthew Saville. The husband-and-wife duo have worked on numerous projects together, and Bryony’s eagerness to work on Please Like Me was further heightened by her admiration for the show’s creator.
“Josh is one of the most honest people I know. Please Like Me’s hilarious and devastating take on coming out, learning how to live as an openly gay man — as well as its portrayal of struggles with mental health — as juxtaposed with the banalities and oddities of its characters’ domestic lives, was quite extraordinary to score.”
Such was the strength of their creative partnership that Bryony and Josh united once again for his new American show Everything’s Gonna Be Okay, “which has been fantastic to work on as well. Many people want safe scores that deliver a literal reinforcement of the action or emotion of each scene, but because Josh trusts me, I’m able to experiment.” This is an arrangement that has allowed her, as she puts it, “to play like a kid in a sandpit — which, as you can imagine, is quite joyous.”
In many ways, Marks is a composer driven more by emotion than any formal approach to making music.
Indeed, much like one of her contemporaries, Australian film composer Antony Partos, she is a musician in tune with the reactions drawn out of her by the material. “I think my strength as a screen composer is not the actual sound or style of the music I write, but more the response I have to the pictures.”
It was while enrolled at the University of Melbourne’s Conservatorium of Music that Marks says she refined her critical thinking, “which has helped me in the way I view the content of the stories on which I work.” Music, above all else, “is the primary way I express myself. As a very private person, I can hide in its abstractions and say everything I want to say. It’s an emotional outlet.”
Marks is one of the most purely emotive composers I have ever encountered through a television set. Her music has a beguiling hushed quality, creating a gentle sonic cradle within which the audience can house their personal reaction to the drama unfolding before their eyes, leaving those feelings to simmer quietly or sparkle sharply.
A chance encounter with Australian-German independent production Berlin Syndrome, broadcast on TV the night before I had an interview for a job in Germany, was another major bridge to Marks’s music. Set in my favourite city in the world, the film not only tugged at my heartstrings with its location shots of Berlin, but was an eerily prescient viewing experience on the eve of a Skype call that allowed me to live and work in one of my favourite countries. Few other moments in film have brought as many tears to my eyes as the closing minutes of Berlin Syndrome, as scored by Bryony’s achingly beautiful piece for string quartet “Out”.
The point at which Marks is now, as a commercial composer, is one she considers to have been “a dream run,” particularly given her minority status as a woman in a predominantly male-driven industry. Marks is keen to underline how clearly she recognises her own privilege as a “tertiary-educated, able-bodied woman,” and, unlike many of her female colleagues, does not “feel as if my career has been negatively impacted in any way by dint of my gender. Not only have I had plenty of work so far, but I’ve also been fortunate enough to enjoy creatively fulfilling work alongside exceptional collaborators.”
Above all else, Marks recognises her privilege and feels she is lucky to be where she is.
As a citizen of that big red rock a million miles from the rest of the world, Marks recognises the need for partnership and exposure, and is a keen advocate of platforming other experiences. “I’m so heartened,” she says, “by the initiatives in Australia aimed at diversifying and expanding the very limited pool from which talent has been traditionally sourced.”
She is proud to be a part of this change to the status quo, as a component in the presentation of media that portrays experiences beyond that of the “white, hetero, middle-class” majority. Having created the soundscape of Australia’s two most high-profile queer television shows of the past decade, she is a keen supporter of minority voices, “working with great people, who happen to be gay, on fantastic, meaningful stories.”
As a gay viewer myself, this is certainly the basis of my connection with Bryony’s music. I believe she is the sound of stories that are not always told, and need to be heard. At the end of the day, she feels simply grateful to have been given such exceptional opportunities within an especially fickle industry. “It wasn’t until I was 33, and pregnant with my first child, that I started working professionally, and I haven’t stopped since. Music allows me to contribute to the world in a way I find meaningful, and this seems the greatest privilege of all.”
It is a craft she is still learning, and she doesn’t believe she will stop learning. Between her own identity and the music she makes, “there is no separation, and that’s the way I like it: it’s all just living a life.”
And what a generous, evocative, sensational life it is. May I be forever thankful for all it has given my open ears. | https://medium.com/prismnpen/hey-barracuda-queer-tv-in-australia-with-composer-bryony-marks-1f175a896f20 | ['Liam Heitmann-Ryce'] | 2020-11-21 16:47:09.755000+00:00 | ['Television', 'Creative Non Fiction', 'LGBTQ', 'Interview', 'Music'] |
Boosting Algorithms Explained | Boosting Algorithms Explained
Theory, Implementation, and Visualization
Unlike many ML models which focus on high quality prediction done by a single model, boosting algorithms seek to improve the prediction power by training a sequence of weak models, each compensating the weaknesses of its predecessors.
One is weak, together is strong, learning from past is the best
To understand Boosting, it is crucial to recognize that boosting is a generic algorithm rather than a specific model. Boosting needs you to specify a weak model (e.g. regression, shallow decision trees, etc) and then improves it.
With that sorted out, it is time to explore different definitions of weakness and their corresponding algorithms. I’ll introduce two major algorithms: Adaptive Boosting (AdaBoost) and Gradient Boosting.
1.AdaBoost
1.1 Definition of Weakness
AdaBoost is a specific Boosting algorithm developed for classification problems (also called discrete AdaBoost). The weakness is identified by the weak estimator’s error rate:
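Using the notation of the SAMME paper referenced below, for the m-th weak classifier T^(m) trained with sample weights w_i, the weighted error rate is

err^{(m)} = \frac{\sum_{i=1}^{n} w_i \, \mathbb{I}\big(c_i \neq T^{(m)}(x_i)\big)}{\sum_{i=1}^{n} w_i}

where c_i is the true class of sample x_i and \mathbb{I} is the indicator function.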
In each iteration, AdaBoost identifies misclassified data points, increasing their weights (and decreasing the weights of correct points, in a sense) so that the next classifier will pay extra attention to get them right. The following figure illustrates how weights impact the performance of a simple decision stump (tree with depth 1).
How sample weights affect the decision boundary
Now with weakness defined, the next step is to figure out how to combine the sequence of models to make the ensemble stronger overtime.
1.2 Pseudocode
There are several different algorithms proposed by researchers. Here I’ll introduce the most popular method called SAMME, a specific method that deals with multi-classification problems. (Zhu, H. Zou, S. Rosset, T. Hastie, “Multi-class AdaBoost”, 2009).
AdaBoost trains a sequence of models with augmented sample weights, generating 'confidence' coefficients Alpha for individual classifiers based on their errors. A low error leads to a large Alpha, which means higher importance in the voting.
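Concretely, with K classes, SAMME sets the confidence coefficient and updates the sample weights as

\alpha^{(m)} = \log\frac{1 - err^{(m)}}{err^{(m)}} + \log(K - 1), \qquad w_i \leftarrow w_i \cdot \exp\big(\alpha^{(m)} \, \mathbb{I}(c_i \neq T^{(m)}(x_i))\big)

and the final prediction is the class that receives the largest alpha-weighted vote across all the classifiers.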
the size of dots indicates their weights
1.3 Implementation in Python
Scikit-Learn offers a nice implementation of AdaBoost with SAMME (a specific algorithm for Multi classification).
Parameters:

base_estimator : object, optional (default=None)
  The base estimator from which the boosted ensemble is built. If None, then the base estimator is DecisionTreeClassifier(max_depth=1).

n_estimators : integer, optional (default=50)
  The maximum number of estimators at which boosting is terminated. In case of perfect fit, the learning procedure is stopped early.

learning_rate : float, optional (default=1.)
  Learning rate shrinks the contribution of each classifier by learning_rate.

algorithm : {'SAMME', 'SAMME.R'}, optional (default='SAMME.R')
  If 'SAMME.R' then use the SAMME.R real boosting algorithm; base_estimator must support calculation of class probabilities. If 'SAMME' then use the SAMME discrete boosting algorithm.

random_state : int, RandomState instance or None, optional (default=None)
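A minimal usage sketch with synthetic data and illustrative hyper-parameters:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Toy multi-class data set
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Decision stumps boosted with the discrete SAMME algorithm
clf = AdaBoostClassifier(
    base_estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=200,
    learning_rate=0.5,
    algorithm="SAMME",
    random_state=0,
)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))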
2. Gradient Boosting
2.1 Definition of Weakness
Gradient boosting approaches the problem a bit differently. Instead of adjusting weights of data points, Gradient boosting focuses on the difference between the prediction and the ground truth.
weakness is defined by gradients
2.2 Pseudocode
Gradient boosting requires a differentiable loss function and works for both regression and classification. I'll use a simple Least Squares loss function (for regression). The algorithm for classification shares the same idea, but the math is slightly more complicated. (J. Friedman, Greedy Function Approximation: A Gradient Boosting Machine)
Gradient Boosting with Least Square
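To make the least-squares version tangible, here is a bare-bones sketch of the loop (not the exact pseudocode from the paper): fit a tree to the residuals, which are the negative gradient of the squared loss, and add its scaled prediction to the running model.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_ls(X, y, n_estimators=100, learning_rate=0.1, max_depth=3):
    # F_0: for least squares, the best constant prediction is the mean of y
    f0 = np.mean(y)
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_estimators):
        residuals = y - pred                 # negative gradient of 0.5*(y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)               # weak estimator H fitted to the gradient
        pred = pred + learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def gb_predict(X, f0, trees, learning_rate=0.1):
    return f0 + learning_rate * sum(tree.predict(X) for tree in trees)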
Following is a visualization of how weak estimators H are built over time. Each time we fit a new estimator (regression tree with max_depth =3 in this case) to the gradient of loss(LS in this case).
gradient is scaled down for visualization purpose
2.3 Implementation in Python
Again, you can find Gradient Boosting function in Scikit-Learn’s library.
Regression:
  loss : {'ls', 'lad', 'huber', 'quantile'}, optional (default='ls')

Classification:
  loss : {'deviance', 'exponential'}, optional (default='deviance')

The rest are the same:

learning_rate : float, optional (default=0.1)

n_estimators : int (default=100)
  Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance.

subsample : float, optional (default=1.0)
  The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0 this results in Stochastic Gradient Boosting. subsample interacts with the parameter n_estimators. Choosing subsample < 1.0 leads to a reduction of variance and an increase in bias.

criterion : string, optional (default="friedman_mse")
  The function to measure the quality of a split.
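A quick usage sketch for the regression flavour (the data set and parameter values are illustrative):

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = GradientBoostingRegressor(
    loss="ls",            # least squares, as discussed above
    learning_rate=0.1,
    n_estimators=300,
    subsample=0.8,        # < 1.0 turns this into Stochastic Gradient Boosting
    max_depth=3,
    random_state=0,
)
reg.fit(X_train, y_train)
print("R^2 on test data:", reg.score(X_test, y_test))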
Strength and Weakness
* Easy to interpret: boosting is essentially an ensemble model, hence it is easy to interpret its predictions
* Strong prediction power: usually boosting > bagging (random forest) > decision tree
* Resilient to overfitting: see this article
* Sensitive to outliers: since each weak classifier is dedicated to fixing its predecessors' shortcomings, the model may pay too much attention to outliers
* Hard to scale up: since each estimator is built on its predecessors, the process is hard to parallelize
Summary
Boosting algorithms represent a different machine learning perspective: turning a weak model to a stronger one to fix its weaknesses. Now you understand how boosting works, it is time to try it in real projects! | https://towardsdatascience.com/boosting-algorithms-explained-d38f56ef3f30 | ['Zixuan Zhang'] | 2019-08-07 22:05:06.901000+00:00 | ['Machine Learning', 'Data Science', 'Visualization', 'Mathematics', 'Data'] |
Towards evaluation for a sustainable and just future | Timber transport in the Amazon Tapajos River (photo by author, January 2018)
Over the past eight months, the novel coronavirus pandemic has infected some 20 million people and killed more than 700,000, sparing virtually no country. The economic and social consequences have been devastating. The virus SARS-CoV-2 that caused COVID-19 crossed over from its non-human host, probably a bat, directly or more likely through an intermediate host like a pangolin, to a human in or around the city of Wuhan in China in late 2019. The exact transmission mechanism is still not known but the root causes are clear. The spill-over of zoonotic viruses like SARS-CoV-2 is becoming more common as we come into ever closer contacts with animals, both domesticated and wild. As human activities extend deeper into undisturbed ecosystems, undiscovered pathogens are released. The destruction is driven by the expansion of agriculture and cattle ranching, logging and deforestation, road construction, mining, new settlements and urban sprawl, making space for the growing human population and its ever increasing demands for raw materials, food stuffs and consumer goods.
Although COVID-19 in itself was not known, the coming pandemic was widely predicted by scientists and there were even government taskforces set out to prepare us for its eventuality. There were precedents-SARS, MERS, H1N1, Zika, Ebola and others-although their impacts were much more modest. COVID-19 spread like a wildfire in a globalized world-there were 3 billion airline trips taken in 2019-due to its characteristics of being airborne and contagious before infected persons become symptomatic.
What does any of this have to do with evaluation, you might ask. In my view, everything. And if not, what is the relevance of evaluation to the real problems of the world? The pandemic is an illustration of the kind of challenges we face today, how interconnected the world is, and how events in one place have global consequences. It also shows how economic development and environmental degradation are intimately intertwined. As we cut down trees, not only do we come into contact with lethal pathogens, but we also undermine the forest’s ability to sequester carbon thereby speeding up global warming. As people get richer, their diets tend to become more meat-based. There are now half a billion cows and 23 billion chicken on the planet. There is a patch the size of Denmark in the Amazon, which has been cleared to grow soy beans to feed pigs in Denmark. Another consequence of the increased meat consumption is higher rates of obesity, diabetes and cardiovascular diseases even in countries that previously didn’t experience them. A recent study by Harvard University provided strongest evidence yet linking air pollution directly to higher mortality. Human health and ecosystem health are inseparable.
The pandemic has affected different groups and communities differently. In the USA, Black, Indigenous and other People of Colour (BIPOC) have been disproportionately hit because they are more likely to be employed in essential jobs that cannot be done remotely, and their living conditions are more cramped. They may also have more pre-existing medical conditions rendering them more vulnerable to the virus. Climate change affects the poor and vulnerable communities hardest, whether it is those living on the low-lying coast of Bangladesh pummelled by more frequent cyclones and sea-level rise or small farmers in African drylands suffering during prolonged droughts.
Many evaluators write about these global challenges, using terms like ‘complex’ and ‘wicked,’ but I am not sure that the practice of evaluation has kept up with the theory. Evaluation as a profession has its roots in social inquiry, where we test the effectiveness of interventions on a well-defined treatment population against a control group. We may use experimental or quasi-experimental tools, or we may lean more towards more participatory and qualitative approaches, but either way the focus is on a single intervention and its effects. Our evaluations test the effectiveness of the intervention in terms of its pre-determined objectives. The desire is to be able to attribute any changes in the outcome to the intervention-or, recognizing the complexity and presence of multiple actors, at least the specific contributions of the intervention.
Apart from being narrowly project-focused, evaluations are still driven by donor concerns for accountability and ‘value for money.’ This treats the central question as a matter of simple accounting instead of a choice between types of intervention or organization that can, say, lift the largest number of people out of poverty with the least amount of money. To make things worse, the accounting in development cooperation is for the purposes of the donors and their priorities, not for the benefit of the claim-holders that the project is intended to benefit. This accounting mentality in evaluation tends to miss the big picture and may end up doing more harm than good.
Seldom do evaluations look at the big picture: Are we actually doing the right thing? Is the intervention that we are promoting meaningful in the larger whole? Is it something that the intended beneficiaries want and need? Is it fixing one part of the problem but creating others elsewhere? Is it having unintended consequences for the environment, for disadvantaged groups, for indigenous peoples, for power relations, etc.?
We must incorporate the environment into our evaluations. Sustainable development lies on social, economic and environmental foundations, yet evaluation-like national accounting-is almost exclusively concerned with the economic and, to a lesser degree, social capital, while natural capital and its depreciation are considered external to the system. According to the World Bank, low-income countries get 47% of their wealth from natural capital. This figure certainly underestimates the value of ecosystem services, in terms of clean water and air, health benefits, recreation, protection against natural hazards, etc. Evaluators must learn how to operate at the nexus of environment and development, which means understanding the interplay between human and natural systems.
Some of these lessons come out clearly in the evaluation of the GEF Global Wildlife Program, which is directly relevant to warding off pandemics such as the current one. The evaluation revealed the need to address the root causes of illegal wildlife trade on multiple fronts while also protecting endangered species in situ. Working with local communities to provide sustainable livelihoods is important, but not sufficient. It is essential to address political will, corruption and demand for wildlife products in the market countries of Asia, Europe and North America. Such interventions-and evaluating them-require holistic perspectives and a broad understanding of the dynamic systems.
For evaluation to remain relevant, it must rise above its project mentality and start looking beyond the internal logic of the interventions that are evaluated. It must systematically search for unintended consequences that may lie outside of the immediate scope of the evaluation. It must expand its vision to encompass the coupled human and natural systems and how they interact. And it must resist focusing on accountability for donors and instead make sure that it contributes to learning, for the wellbeing of the beneficiaries and nature in an equitable manner. If we achieve this, evaluation will be better positioned to contribute to more sustainable and just development in an interconnected world.
Disclaimer: The opinions expressed in this article are the author’s own and do not necessarily reflect those of the European Evaluation Society. | https://juhauitto.medium.com/towards-evaluation-for-a-sustainable-and-just-future-44898a1eb517 | ['Juha Uitto'] | 2020-08-17 14:20:39.197000+00:00 | ['Evaluation', 'Environment', 'Pandemic', 'Covid 19', 'Climate Change'] |
Working with the .NET CLI. C# From Scratch Part 2.0 | Welcome to another part of the series C# From Scratch, a course dedicated to teaching you everything you need to know to be productive with the C# programming language.
In the previous part of the series, we learned what .NET does for our applications and why we need it to run applications we write in C#. That part of the series is available here.
In this part of the series, we’ll get hands on with the .NET Command Line Interface, a tool that is installed as part of the .NET SDK to make developing .NET applications easier.
Accessing the .NET CLI
The .NET Command Line Interface is installed on your machine as part of the .NET SDK installation.
Once the .NET CLI is installed on your machine, you can access it through a Command Prompt window (also known as a terminal or shell depending on your operating system).
On a Windows machine, you can launch the command prompt by going to the Start menu and typing ‘cmd’. Press enter to launch the Command Prompt app. | https://kenbourke.medium.com/working-with-the-net-cli-d1accf086606 | ['Ken Bourke'] | 2020-12-09 10:07:34.027000+00:00 | ['Software Development', 'Csharp', 'Dotnet', 'Learning To Code', 'Programming'] |
Religious Leaders Call For Global Ban on LGBTQ+ Conversion Therapy | Hundreds of religious leaders around the world are taking a stand against LGBTQ+ conversion therapy.
More than 370 international faith leaders from 35 countries signed a declaration on Wednesday calling for a global ban on the harmful practice. Archbishop Desmond Tutu, the Archbishop Emeritus of South Africa, was among the signatories of the declaration.
Led by the Global Interfaith Commission on LGBT+ Lives, the declaration marked the launch of the commission, which aims to “provide a strong and authoritative voice from religious leaders across the global faith community who wish to affirm and celebrate the dignity of all.”
The declaration acknowledges and affirms all sexual orientations, gender identities, and gender expressions and calls for an end to violence against LGBTQ+ people, expressing regret for any and all religious teachings that have helped to contribute to the marginalization, rejection, and alienation of queer and trans people around the world. The declaration also calls on all nations to help bring an end to the criminalization of LGBTQ+ people.
“We recognize that certain religious teachings have, throughout the ages, been misused to cause deep pain and offense to those who are lesbian, gay, bisexual, transgender, queer, and intersex,” the commission said in a statement. “This must change.”
Conversion therapy is a harmful and ineffective practice that aims to change a person’s sexual orientation or gender identity. Not only has conversion therapy been widely debunked and discredited, but it can also cause severe physical and psychological harm and places many LGBTQ+ people at an increased risk of depression and suicide.
Some “treatments” of conversion therapy include hypnosis, prayer, electric shock treatments, and physically painful stimuli. While the practice has been condemned by a number of major medical associations in the US and abroad, it is still legal in many countries and states.
Only five countries — Germany, Malta, Ecuador, Brazil, and Taiwan — have banned conversion therapy, thus far. In the US, 20 states and a number of cities have outlawed the practice for minors.
“It is incredible to see hundreds of religious leaders from all different backgrounds and denominations join together and come out against the discredited practice of conversion therapy. We hope this bold effort will send a message to LGBTQ youth in diverse communities of faith that they are deserving of love and respect and should be proud of exactly who they are,” Sam Brinton, the Vice President of Advocacy and Government Affairs at The Trevor Project, said in a statement.
“We must all come together to protect LGBTQ youth in every city, state, and country around the globe from these dangerous conversion efforts and all forms of anti-LGBTQ violence.”
While bans on conversion therapy have gained momentum in recent years, most state and citywide bans in the US only target licensed therapists who engage in the harmful practice. Religious leaders remain exempt. So although these laws are necessary, they still fail to protect LGBTQ+ youth from religious leaders and faith-based institutions that seek to change them and subject them to what can only be classified as torture.
Out of 40,000 respondents, 14% of LGBTQ+ youth reported that a religious leader tried to persuade them to change their sexual orientation or gender identity, according to The Trevor Project's 2020 National Survey on LGBTQ Youth Mental Health.
“There are many LGBT+ people who suffer emotional hurt and physical violence to the point of death in countries across the world,” Reverend Canon Mpho Tutu van Furth, daughter of Desmond Tutu, said in the commission’s press release. “For this reason, we are joining forces as faith leaders to say that we are all beloved children of God.” | https://medium.com/an-injustice/religious-leaders-call-for-global-ban-on-lgbtq-conversion-therapy-6b95740d60da | ['Catherine Caruso'] | 2020-12-18 17:43:32.185000+00:00 | ['LGBTQ', 'Equality', 'Society', 'Politics', 'Religion'] |
Beginner's Guide to Exploratory Data Analysis and Feature Engineering | Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task.
When I started my journey in Data Science field I always had difficulty with the starting point of any problem but after reading few exemplary Kernels in Kaggle I have realized the power of Exploratory Data Analysis and its impact on Data Modeling and Predictions
I am trying to explain how we can do EDA and Feature Engineering in the simplest way to get some insight into the Titanic Disaster. I have put only the specific code snippet before each visualization and analysis; if anyone is interested in the full code, refer to the link provided at the end.
The sinking of the Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the widely considered “unsinkable” RMS Titanic sank after colliding with an iceberg. Unfortunately, there weren’t enough lifeboats for everyone on board, resulting in the death of 1502 out of 2224 passengers and crew.
I have used data set which is provided by Kaggle for Titanic: Machine Learning from Disaster Competition
Features Analysis
Let’s import required libraries for EDA
#Importing required libraries
import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import plotly.express as px
from sklearn.ensemble import RandomForestClassifier

#Importing train data set
ds_train=pd.read_csv("/<InputDirectory>/train.csv")

#Checking features in train data set
ds_train.head()
Now , analyze these features/variables one by one
Survived is a target variable where survival of a passenger is predicted in binary format i.e. 0 for Not Survived and 1 for Survived
PassengerId and Ticket variables can be assumed to be random unique identifiers of passengers and they don't have any impact on survival, hence we can ignore them
Pclass is an ordinal datatype for the ticket class; it can be considered a proxy for the passenger's socio-economic status and it may impact the passenger's survival chances, so we will keep this in our analysis. Its unique values are 1 = Upper Class, 2 = Middle Class and 3 = Lower Class
Name is self explanatory , we will skip this variable from our analysis
Sex or Gender could have played an important role in survival because, during any evacuation from a disaster, preference is usually given to the female gender; to test this notion we will consider gender in our analysis
SibSp and Parch represent total number of the passenger's siblings/spouse and parents/children on board respectively, it could be used to create a new variable called 'Family Size' (Creating new feature/variable is an example of Feature Engineering)
Age could have also played a role in survival, so we will keep this in our feature list
Fare is also an indicator of Socio-Economic Status of passenger, let's keep this in our feature list
Cabin is the cabin number of the passenger; it could be used in feature engineering to get an approximate position of the passenger when the accident happened, and from the deck level we could also deduce socio-economic status. However, after looking at the data, there are many null values, so we can drop this column from our feature list.
Embarked is a port of embarkation of passenger and this may have an impact on target variable so we will keep this variable for now. It has 3 unique values , C = Cherbourg ,Q = Queenstown and S = Southampton
Visualization
Now , we will try to see relation between selected features by creating Seaborn and Plotly visualization
First start with passenger’s Age
#Converting Age into series and visualizing the age distribution
age_series=pd.Series(ds_train['Age'].value_counts())
fig=px.scatter(age_series,y=age_series.values,x=age_series.index)
fig.update_layout(
title="Age Distribution",
xaxis_title="Age in Years",
yaxis_title="Count of People",
font=dict(
family="Courier New, monospace",
size=18,
)
)
fig.show()
We can deduce a few points from the above graph:
* The majority of passengers are aged more than 20 years and less than 50 years
* 30 passengers share the same age, i.e. 24 years
* There are 164 passengers who are less than 20 years old
Let’s check how Gender is distributed among passengers
print("Number of Passengers Gender Wise
{}".format(ds_train['Sex'].value_counts()))
#Gender wise distribution
fig = go.Figure(data=[go.Pie(labels=ds_train['Sex'],hole=.4)])
fig.update_layout(
title="Sex Distribution",
font=dict(
family="Courier New, monospace",
size=18
))
fig.show()
It's quite evident that the number of male passengers is almost double the number of female passengers.
Let’s see how many female and male survived across different age group.
#Create categorical variable graph for Age,Sex and Survived variables
sns.catplot(x="Survived", y="Age", hue="Sex", kind="swarm", data=ds_train,height=10,aspect=1.5)
plt.title('Passengers Survival Distribution: Age and Sex',size=25)
plt.show()
Well, it's pretty evident from the above graph that the majority of female passengers survived
* The majority of male passengers aged between 20 and 50 years did not survive. It means most of the young men did not survive this disaster
* The oldest male passenger, aged 80 years, survived
* Age and Sex were major factors in deciding a passenger's fate
Now, let’s see Pclass variable relation with survival
#Visualize relation between Pclass and Survival
fig = go.Figure(data=[go.Pie(labels=ds_train['Pclass'],hole=.4)])
fig.update_layout(
title="PClass Distribution",
font=dict(
family="Courier New, monospace",
size=18
))
fig.show()
More than half of the passengers were travelling in Lower Class. Let’s see how survival is linked with Pclass
#Visualize PClass and Survival
#Create categorical variable graph for Age,Pclass and Survived variables
sns.catplot(x="Survived", y="Age", hue="Pclass", kind="swarm", data=ds_train,height=10,aspect=1.5)
plt.title('Passengers Survival Distribution: Age and Pclass',size=25)
plt.show()
* Again, the majority of young male passengers aged between 20 and 50 years and travelling in lower class did not survive
* The oldest male passenger who survived the disaster was travelling in upper class
* Young men who survived the disaster were travelling in upper class
If the passenger was a man aged between 20 and 50 years and not so rich at the time of travel, then his chances of survival were very low
To support our Socio Economic Status theory let’s focus on one more variable Fare
#Visualize Fare and Survival
#Create categorical variable graph for Sex,Fare and Survived variables
sns.catplot(x="Survived", y="Fare", hue="Sex", kind="swarm", data=ds_train,height=10,aspect=1.5)
plt.title('Passengers Survival Distribution: Fare and Sex',size=25)
plt.show()
In the above graph, for the feature 'Sex' consider 1 for female and 0 for male. It's evident that female passengers with lower ticket fares survived the disaster, and a few male passengers with the highest fares also survived.
It means that when it comes to gender, females got preference across all classes; otherwise, socio-economic status played an important role in survival.
Now , we will see Embarked variable’s impact on survival
#Visualize relation between Embarked and Survival
fig = go.Figure(data=[go.Pie(labels=ds_train['Embarked'],hole=.4)])
fig.update_layout(
title="Embarked Distribution",
font=dict(
family="Courier New, monospace",
size=18
))
fig.show()
The majority of passengers embarked from Southampton; let's visualize its survival distribution.
#Visualize Embarked and Survival
#Create categorical variable graph for Embarked,Age and Survived variables
sns.catplot(x="Survived", y="Age", hue="Embarked", kind="swarm", data=ds_train,height=10,aspect=1.5)
plt.title('Passengers Survival Distribution: Embarked and Age',size=25)
plt.show()
We can not deduce any direct relation between Embarked and Survival.
Let’s check correlation coefficient between these features
# Training set high correlations
ds_train.corr()
We can see a direct correlation between the 'Survived' and 'Fare' variables; the other variables are indirectly related to Survival
* Age is correlated with Fare and Fare is correlated with Survived, and our analysis also shows how Age played a role in survival; by this we can say that Age is related to Survival
* SibSp and Parch are related to each other and both are related to Fare, which makes sense because a greater number of people means more fare; by virtue of this, both can be related to Survived
Feature Engineering
Feature engineering is the process of using domain knowledge to extract features from raw data via data mining techniques. These features can be used to improve the performance of machine learning algorithms. Having and engineering good features will allow you to most accurately represent the underlying structure of the data and therefore create the best model. Features can be engineered by decomposing or splitting features, from external data sources, or aggregating or combining features to create new features
Let’s start Feature Engineering with creating new variable Family Size by adding SibSp , Parch and One(Current Passenger)
#Add new column 'Family Size' in training model set
ds_train['Family_Size'] = ds_train['SibSp'] + ds_train['Parch'] + 1
print("Family Size column created successfully")
ds_train.head()
Now we will see how Family size will is related with Survived variable
#Visualize Family size and Survival
sns.barplot(x="Family_Size", y="Age", hue="Survived", data=ds_train,palette = 'rainbow')
plt.title('Family Size - Age Survival Distribution',size=20)
plt.show()
sns.catplot(y="Family_Size", x="Survived", hue='Sex',kind="swarm", data=ds_train,height=8,aspect=1.5)
plt.title('Family Size - Gender Survival Distribution',size=25)
plt.show()
* Chances of survival are lower for large families (>5 members)
* If the family size is small, then the main passenger's gender decides survival; this supports the previous deduction about gender's role in survival
Note: Survival data is marked for main passengers and not for the whole family, whereas family members' names must be there in the list and they may or may not have survived. In other words, just by looking at the survival column we cannot deduce that the fate of all family members was the same
Last Word
We can see that just by visualizing the relation between a few variables we gained many insights, and we can further use this newly gained knowledge about a feature when training data models by adding new features and removing unnecessary ones.
Refer Kaggle Kernel or Juypter Notebook for whole analysis and data modeling | https://medium.com/analytics-vidhya/beginners-guide-to-exploratory-data-analysis-and-feature-engineering-ec0ded88cff6 | ['Kush Bhatnagar'] | 2020-04-25 05:50:53.517000+00:00 | ['Exploratory Data Analysis', 'Beginners Guide', 'Data Science', 'Data Visualization', 'Data Scientist'] |
The Venture | I have a love/hate relationship with writing. Love writing and hate that I don’t have time to do more of it. Wife, mom, grandma(!), accountant … and writer.
Follow | https://medium.com/haiku-hub/the-venture-22a69d58bf43 | ['Valori Maresco'] | 2020-05-14 03:26:32.524000+00:00 | ['Contemporary Haiku', 'Dreams', 'Haiku', 'Creativity', 'Poetry'] |
FastFormers: 233x Faster Transformers inference on CPU | Since the birth of BERT followed by that of Transformers have dominated NLP in nearly every language-related tasks whether it is Question-Answering, Sentiment Analysis, Text classification or Text Generation. Transformers enjoys much better accuracy on all these tasks unlike RNN and LSTM the problem of vanishing gradients, which hampers learning of long data sequences. Also unlike Transformers; RNN and LSTM are not scalable as they have to take into account the output of the previous neuron.
Now the main problem with Transformers is they are highly computed intensive in both training and inference. While the training part can be solved by using pretrained language models (Open-Sourced by Large Corporates like Google, Facebook and OpenAI 😏) and fine-tuning them on our dataset. Now the latter problem is addressed by FastFormers, a set of recipes to achieve efficient inference-time performance for transformer-based models on various NLU tasks.
“Applying these proposed recipes to the SuperGLUE benchmark, authors were able to achieve from 9.8x up to 233.9x speed-up compared to out-of-the-box models on CPU. On GPU, we also achieve up to 12.4x speed-up with the presented methods.” - FastFormers
The paper FastFormers: Highly Efficient Transformer Models for Natural Language Understanding mainly focuses on providing highly efficient inference for Transformer models which enables deployment in large scale production scenarios. The authors specifically focus on inference time efficiency since it mostly dominates the cost of production deployment. In this blog, we gonna walk through all the problems and challenges this paper addresses.
So how did they address the problem of the highly inefficient inference time of Transformers?
They mainly utilize three methods i.e. Knowledge Distillation, Structured Pruning and Model Quantization.
The first step is Knowledge Distillation which deals with reducing the size of model wrt depth and hidden states without compromising with accuracy.
which deals with reducing the size of model wrt depth and hidden states without compromising with accuracy. Second, Structured Pruning that reduces the size of the models by reducing the number of self-attention heads while trying to preserve the accuracy as well.
that reduces the size of the models by reducing the number of self-attention heads while trying to preserve the accuracy as well. Finally, Model Quantization which enables faster model executions by optimally utilizing hardware acceleration capabilities. On CPU, 8-bit integer quantization method is applied while on GPU, all the model parameters are converted into 16-bit floating-point data type to maximally utilize efficient Tensor Cores.
In-depth Walkthrough
Knowledge Distillation: Knowledge distillation refers to the idea of model compression by teaching a smaller network, step by step, exactly what to do using a bigger already-trained network. While large models have higher knowledge capacity than small models, this capacity might not be fully utilized. It can be computationally just as expensive to evaluate a model even if it utilizes little of its knowledge capacity. Knowledge distillation transfers knowledge from a large model to a smaller model without loss of validity. As smaller models are less expensive to evaluate, they can be deployed on less powerful hardware like a smartphone.
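As a rough illustration (this is the generic soft-target loss, not the full TinyBERT procedure used in the paper, which also matches hidden states and attention maps), the student can be trained to match the teacher's softened output distribution:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2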
Knowledge distillation methods: Two approaches are used, namely task-specific and task-agnostic distillation.
In task-specific distillation, the authors distill fine-tuned teacher models into smaller student architectures following the procedure proposed by TinyBERT. In the task-agnostic approach, they directly fine-tune general distilled models for a specific task.
Summary of the workflow for knowledge distillation. Courtesy: Floydhub
Knowledge distillation results: In the experiments, authors have observed that distilled models do not work well when distilled to a different model type. Therefore, authors restricted our setup to avoid distilling Roberta model to BERT or vice versa. The results of knowledge distillation on the tasks are summarized with the teacher models on validation dataset in the below table. (Student referred to Distilled Model)
Accuracy of teacher and student models on the validation data set for each task of SuperGLUE benchmark with knowledge distillation. Courtesy: FastFormers
Neural Network Pruning: Neural network pruning is a method of compression that involves removing weights from a trained model. In agriculture, pruning is cutting off unnecessary branches or stems of a plant. In machine learning, pruning is removing unnecessary neurons or weights. Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving the computational performance of inference without compromising accuracy. This helps in decreasing the size or energy consumption of the trained neural network and helps to make inference more efficient. Pruning makes the network more efficient and lighter.
Synapses and neurons before and after pruning. Courtesy: Link
Structured pruning methods: The first step of the structured pruning method is to identify the least important heads in the Multi-Head Attention and the least important hidden states in the feed-forward layers.
The authors use a first-order method for computing the importance score, which utilizes first-order gradient information instead of magnitude-based pruning.
Before doing the importance score computation, the authors add a mask variable to each attention head so that gradients can be computed for the heads. They then run forward and backward passes of the model on the entire validation data set and accumulate the absolute values of the gradients. These accumulated values are used as importance scores to sort the heads and the intermediate hidden states by importance.
Based on the target model size, the authors select a given number of top heads and top hidden states from the network. Once the sorting and selection steps are done, the authors re-group and reconnect the remaining heads and hidden states, which results in a smaller model. When heads and hidden states are pruned, the authors use the same pruning ratio across different layers. This enables further optimizations to work seamlessly with the pruned models.
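As a rough illustration of that first-order importance score, here is a hedged sketch that leans on the head_mask argument of Hugging Face models; the validation_loader is assumed to exist, and this is not the authors' exact implementation:

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
n_layers = model.config.num_hidden_layers
n_heads = model.config.num_attention_heads

head_mask = torch.ones(n_layers, n_heads, requires_grad=True)
head_importance = torch.zeros(n_layers, n_heads)

for batch in validation_loader:  # assumed DataLoader over the validation set
    outputs = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    head_mask=head_mask,
                    labels=batch["labels"])
    outputs.loss.backward()
    head_importance += head_mask.grad.abs().detach()  # accumulate |gradient| as the score
    head_mask.grad.zero_()
    model.zero_grad()

# Keep only the highest-scoring heads, using the same ratio in every layer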
In the experiments, the authors found that the pruned model can reach better accuracy when it goes through another round of knowledge distillation, so knowledge distillation is applied to the model again.
Model Quantization: Quantization refers to techniques for performing computations and storing tensors at lower bit widths than floating-point precision. A quantized model executes some or all of the operations on tensors with integers rather than floating-point values. This allows for a more compact model representation and the use of high performance vectorized operations on many hardware platforms.
8-bit quantized matrix multiplications on the CPU: 8-bit quantized matrix multiplication brings a significant speed-up compared to 32-bit floating-point arithmetic, thanks to the reduced number of CPU instructions.
16-bit model conversion for the GPU: The V100 GPU supports full 16-bit operations for the Transformer architecture. Also, 16-bit floating-point operations do not require special handling of inputs and outputs except for having smaller value ranges. This 16-bit model conversion brings quite a significant speed gain since Transformer models are a memory-bandwidth-bound workload. A speed-up of about 3.53x was observed, depending on the model settings.
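For readers who want to try the same trick on their own models, here is a hedged sketch using PyTorch's built-in utilities; it illustrates the general technique, not the authors' exact pipeline (model stands for your trained Transformer):

import torch

# CPU: dynamic 8-bit integer quantization of the linear layers
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

# GPU: cast the parameters to 16-bit floats to use the Tensor Cores
fp16_model = model.half().to("cuda")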
On top of the structural and numerical optimizations applied, the authors also utilize various ways to further optimize the computations, especially multi-processing optimization and computational graph optimizations.
Combined results
The table below shows how effective the combined optimizations are. | https://medium.com/ai-in-plain-english/fastformers-233x-faster-transformers-inference-on-cpu-4c0b7a720e1 | ['Parth Chokhra'] | 2020-11-04 17:34:08.505000+00:00 | ['Data Science', 'Technology', 'AI', 'Machine Learning', 'Deep Learning'] |
Intro to @ngrx/component | Intro to @ngrx/component
A comprehensive guide to improving your NGRX projects (part 2)
Image provided by the author.
New features are coming to NgRx, one of which is @ngrx/component. It brings plenty of opportunities to make our programming easier and faster. On the surface, it arrives to help us get rid of the async pipe in templates, but under the hood it also takes maximum advantage of Angular's lifecycles.
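As a quick, hedged taste of what that looks like, here is a minimal sketch; it assumes the ReactiveComponentModule from the NgRx 10 era is imported in your module, and the component and selector names are made up:

import { Component } from '@angular/core';
import { Store } from '@ngrx/store';
import { Observable } from 'rxjs';

@Component({
  selector: 'app-counter',
  // ngrxPush is a drop-in replacement for the async pipe
  template: `<span>{{ count$ | ngrxPush }}</span>`,
})
export class CounterComponent {
  count$: Observable<number>;
  constructor(store: Store<{ count: number }>) {
    this.count$ = store.select('count');
  }
}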
We will explore it briefly, and along the way you will see the basic concepts needed to understand why we should use it. Finally, we are going to apply it to a simple project built with NgRx and get the benefits of this feature. | https://medium.com/better-programming/intro-to-ngrx-component-15c0dfd9b44a | [] | 2020-07-30 21:28:35.446000+00:00 | ['Angular', 'JavaScript', 'Typescript', 'Ngrx', 'Programming'] |
Is my data safe in Cloud? | Security requires deep expertise and plentiful dedicated resources to achieve, mainly because it is a multidimensional issue comprising physical (data center) security, platform and network security, proactive threat detection, audits and compliance with industry-specific certifications such as HIPAA and PCI. But the first and most important step in any security conversation is trust.
The first and most important step in any security conversation is trust
We know that trust is created through transparency. For this reason, Google Cloud has created trust principles which clarify its commitment to protecting the privacy of customers' data.
Google Cloud trust principle:
1. Your data belongs to you and no one else
Your data is processed according to your instructions.
You can access it or take it out at any time.
You are notified if a breach is detected that compromises your data.
You have access controls to safeguard who has access to the data within and outside your organization.
You have access to audit reports that keep track of all changes made and who touched what in your projects.
You have access transparency logs expand visibility and control over your cloud provider with near real-time logs and approval controls.
2. Google Cloud does not sell customer data to third parties, nor is it used in advertising.
3. Your data is encrypted in transit and at rest at all times, automatically. You do not have to ask for it or enable it; this happens by default.
And, if you want, you can apply additional encryption by bringing your own encryption keys. These are the two ways:
Use Customer-Managed Encryption Keys (CMEK), where you use the Google Cloud Key Management Service to manage the keys in the cloud. Use Customer-Supplied Encryption Keys (CSEK), where you manage your keys on-premises. When using CSEK, just be aware that if the key is lost, Google won't be able to help you recover the data, because no copy of the key exists with Google.
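To make the difference concrete, here is a hedged sketch using the Python client library for Cloud Storage; the bucket, project, and key names are made up, so treat it as an illustration rather than a recipe:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

# CMEK: point the bucket at a key you manage in Cloud KMS
bucket.default_kms_key_name = (
    "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
)
bucket.patch()

# CSEK: supply your own AES-256 key for a single object
my_key = b"0" * 32  # replace with your real 32-byte key, kept on-premises
blob = bucket.blob("report.csv", encryption_key=my_key)
blob.upload_from_filename("report.csv")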
4. Know where your data is stored and rely on it being available when you need it.
The locations of Google data centers are published, and they are highly available, resilient, and secure. You can rely on your data being available when you request it. You also have control over which locations you would like your data to be stored in, depending on the service you use. You can choose to store data closer to your users, your apps, or both.
5. There are explicit rules to guard against insider access to your data and no “backdoor” to Google.
Invalid government requests are rejected, and a transparency report is published for those requests.
6. The privacy practices are audited against international standards.
This means you can choose to store your data within Google Cloud anywhere in the world without having to worry about the standards met for that specific location.
Protection and Control
Google Cloud provides you with the right tools to control access to the data and choose who has access to what parts of your data.
Dedicated Privacy Team
The privacy team is equally involved in the launch of each product and in its documentation, to make sure all privacy requirements and standards are met.
Resources
To learn more about privacy on Google Cloud, check out this link.
Want more GCP Comics? Visit gcpcomics.com & follow me on Medium, and on Twitter to not miss the next issue! | https://medium.com/google-cloud/is-my-data-safe-in-cloud-41608c1d1f89 | ['Priyanka Vergadia'] | 2020-12-10 05:39:55.654000+00:00 | ['Cloud Computing', 'Security', 'Cybersecurity', 'Cloud', 'Data'] |
I Was Doing Fine Until | OK — Your Turn
THEME: But — that little 3-letter word that changes the direction in which the sentence seemed to be headed.
Now, share your own one-line poem on the theme as a response to this post or write a stand-alone piece if you prefer. Tag your piece “One Line” or “Chalkboard Espresso” if 15 words or less. If you have a stand-alone poem, be sure to leave a link in the response section below.
Want to get notified about the weekly prompts? Complete this form.
Have a question about the One Line project? Read our submission rules! Thanks to Kathy Jacobs and the entire Chalkboard team. | https://medium.com/chalkboard/i-was-doing-fine-until-34afaf6c554f | ['Harper Thorpe'] | 2020-07-01 19:29:30.466000+00:00 | ['Humor', 'Poetry', 'Words', 'One Line Poetry Prompt', 'Music'] |
Leveling up: why developers need to be able to identify technologies with staying power (and how to do it) | JavaScript fatigue has become a common phrase in the world of today’s front-end developers. It can seem like there’s a new hyped framework, architecture, command line tool, or SaaS developer service every day. The constant churn of new things can end up leaving developers more jaded than excited.
To avoid this, it’s important to build up a solid instinct for separating the technologies and products worth spending time on from the ones that will fade into obscurity after their 15 minutes of fame is over, their featured article on TechCrunch has faded to the archives, or the last passive aggressive comment on their “Show HN” is long forgotten.
My journey as a programmer started almost 30 years ago when I got my first computer: a used Commodore 64 that would greet me with a blinking cursor as an entry into “Basic V2”.
Since then, the only constant in the world of development has been change, and the need to always be learning and discovering. Here are some thoughts on how I’ve been able to keep up along the way without drowning in the constant flow of novelties.
Learn your history
This might be a surprising bit of advice in an article about getting ahead of the pace of change, but to understand and evaluate contemporary technologies, you have to learn about the history of your field.
In a space that changes this much and this often, it's easy to take for granted that the stream of releases is truly new. But technology tends to be surprisingly cyclical; what might seem new on the surface tends to have deep historical roots below.
When Ruby on Rails came out in 2004 it had an almost-meteoric rise and an immense influence on the industry. At the same time, most of the ideas underlying the Model View Controller (MVC) pattern it was based on, as well as the foundational object orientation patterns from Ruby, went all the way back to the Small Talk programming environment from the late 70’s.
For developers who were fluent with the major web platforms at the time (PHP, Java, ASP), Ruby on Rails introduced not just a whole new language with a new syntax, but new concepts and a major new paradigm for meta programming. However, for developers that had followed the rise (and fall) of SmallTalk and the languages and platforms inspired by it, Ruby on Rails was full of familiar concepts (with a bit of new syntax and some adaptation from the world of Small Talk applications crafted unto the web). All they needed to learn was the (important, but not huge) differences between Ruby and Small Talk, and the conceptual differences between MVC for the web and MVC for a Small Talk application.
In a similar way, when React came out it seemed to instantly sweep aside a whole generation of JavaScript frameworks. Most of these had tried to transfer a Rails-inspired MVC model to the browser. To many developers it seemed to be a drastic departure from both the single page app frameworks relying on templates with two-way data bindings, and from the simpler libraries like jQuery. But at its core, React was inspired by ideas from functional programming languages (especially OCAML) that went all the way back to the early days of computing.
The creator of React, Jordan Walke, recently described how his own journey back in history gave him the background needed to build out React:
For the longest time I just assumed “welp, I guess I’m just a weird programmer”. Then I finally took a course on programming language fundamentals (which used ML (SML) for much of the coursework) and I finally had some basic terminology to be able describe how I wanted to build applications. I also learned that the style of programming that I gravitated towards was neither weird, nor new and actually closer to some of the oldest philosophies of programming languages — philosophies which hadn’t ever become mainstream — philosophies that the industry had spent upwards of twenty years undermining (in my opinion, to their disadvantage).
https://www.reactiflux.com/transcripts/jordan-walke/
For many front-end developers the journey into the more fully-fledged world of full-on state management in React with some form of “Flux” architecture like Redux, maybe combined with Immutable.js, can feel overwhelming. But for developers with a solid historical foundation who had been following the re-emergence of functional programming — and the concepts around it going back to the creation of LISP in 1958 — React reflects familiar concepts and ideas.
Even when actively trying to learn a new technology, history can be a helpful teacher. When Rails was first released, it was tough to come by material about it aside from a few online docs, tutorials, and the source code itself (more about source code later). However, a lot was written about the evolution of MVC through Small Talk to Objective C, and lots of lessons learned from working with meta programming and OOP based on message passing in the Small Talk world.
This can be a great tool for learning new technologies much faster: instead of reading the latest tutorials and the emerging documentation, figure out what they’re inspired by, what previous knowledge they draw on and build upon. Most likely the material about those older technologies, ideas and methodologies will be much more mature and you’ll find lots of lessons learned that most likely apply to the new take on the field.
A solid historical awareness gives you a really good toolset to ask the question: what is different this time? The answer (or lack of one!) to that question will very often determine the success or failure of a new technology.
People, culture, and community matter
It’s easy to think that tools and technologies are simply evolving on their own. For example, Object Oriented Programming became Functional Programming, text editors developed into full fledged IDEs, and dynamic languages transitioned into statically typed languages. However, new technologies and frameworks don’t just follow an evolutionary path on their own. They’re invented, built, and disseminated by humans, organizations, and communities.
When a new tool or technology emerges, it’s important to question both the technical underpinnings (How is it different? What underlying patterns does it build on?) and motivation (Why did someone choose to build this now? Who are the people that feel passionate about this? What problems does this technology solve for organizations?).
One of my favorite essays on why some tools win while others fade away is Richard P. Gabriel’s “The Rise of Worse is Better” from 1989. It describes a possible reason why Unix and C overtook the LISP-based technologies — a reason that had nothing to do with the inherent qualities of the two solutions.
In the essay Gabriel describes a "worse-is-better" approach, the New Jersey school of design in contrast to the MIT/Stanford school, that weighs the simplicity of the implementation higher than the simplicity or correctness of the end-user interface. This focus allowed C and Unix to beat LISP in the market. C compilers were easier to implement, port and optimize than LISP compilers and this made it much faster for the Unix implementers to get software into the hands of the users. This led to faster adoption and eventually meant that far more people (and companies) were invested in growing and improving the C/Unix ecosystem.
When viewing new technologies, understand not just what they aim to do, and how they are technically implemented, but also how they are going to spread and how they will grow a community. Often the technologies that become important to the mainstream programming community are those that have the best answers to those later questions, even in cases where they can seem like a step back on pure technology grounds.
But here’s the real trick: sometimes tools that are technologically way ahead of the curve are doomed to never get widespread adoption (I’m willing to bet a lot of marbles that we’ll not all be writing web-apps in the Idris language anytime soon). LISP never became mainstream, but so many of todays mainstream frameworks, languages, libraries and techniques owe a huge debt to the ideas it invented and explored, and even today learning LISP can bring lots of insight into future technologies.
If you can spot the tools that live in this intersection, then learning those might bring you your next developer super-power.
Always Understand the “Why”
Back when I started developing, the closest thing to StackOverflow was computer magazines with source code you could manually type into your terminal to get the programs running.
I’m a sloppy typer, and I could never manage to type in a complete program without errors along the way. This is actually one of the (admittedly very few!) advantages of computer program printouts versus copy and paste-able Stack Overflow snippets: to get it to work, you need to actually understand the code.
As developers we’re always working with looming deadlines and with a pressure to get new functionality, features, and bug fixes out in the hands of our users as fast as possible. I’ve seen developers that get so focused on getting something out there, they throw libraries and code snippets together without taking the time to understanding why it works. Or, they’ll see that something is broken and simply try different potential solutions without first taking the time to understand why the system broke in the first place.
Don’t be that developer. Make it a rule for yourself to never use a solution from Stack Overflow or elsewhere before you take the time to understand why that solution could work. Challenge yourself to go one step further and figure out what it would have taken for you to come up with that solution yourself.
Sometimes you’ll find an issue where a small change (maybe changing one library for another, calling a variation of the function you were using, etc.) solves a bug, but you don’t actually know why. Don’t settle at this point. Dig in and build up a mental model that lets you understand exactly why one solution failed and another worked. Very often this will lead to deeper learnings and you’ll discover patterns that might reveal undetected bugs lurking in other parts of your system.
Also approach new technologies in this way. Don’t focus on learning on the surface. Learning the syntax for a few different frameworks or languages won’t teach you much, but learning the decision making process below the surface of those technologies will fundamentally make you a better developer.
When all is said and done, the most important thing is not what you learn (which framework, which tool, which language), but what you learn from it.
Putting these lessons to work
Choosing the right tools isn’t always easy or obvious — even for the most prolific of programmers. There’s a constant trade-off between sticking to well known, trusted and reliable tools with few surprises, and adopting brand new technologies that can help solve problems in new and better ways. But, a little up front work can make successfully choosing and implementing new tools part of your development practice. Indeed, it is a practice, one that’s always evolving. Here are a few ways to apply the suggestions from this post.
Learn your history
Historical awareness provides a solid toolset to ask, “What is different this time?” The answer (or lack of one) often determines the success or failure of a new technology. New stuff is cool. New stuff is fun. But if you feel overwhelmed at the speed of it all and the occasional burst of JavaScript fatigue is kicking in, then slow down and remember that it’s a long game and that following the large trends is more important than constantly rushing to rewrite all your apps in the newest framework. Peter Norvig puts it great in his essay “Teach Yourself Programming in Ten Years”.
People, culture and community matter
Thanks to the meteoric rise of GitHub, Stack Overflow and NPM, it's a lot easier to get early insight into how a community will scale and how developers are responding to its ambitions. While contributors and stars can tell you a lot about projects that are already successful, they aren't always great early indicators of success. However, you can use the same logic to help determine whether a project is likely to be embraced by the community as you might already use to build your own software or choose which company you want to work for:
Is there a clearly-defined vision?
Is there a clear user need?
Are the right people, resources, and documentation in place for this to scale?
Is it extensible? I.e., can it scale or adapt to serve emerging technologies or user types?
And perhaps, who is behind it?
Always understand the “why” behind a technology
Don’t focus on the surface, but on the currents underneath. Learning the syntax for a few different frameworks or languages will get you by, but learning the decision-making process of those technologies will fundamentally make you a better developer.
Michael Feathers has a great list of “10 Papers Every Developer Should Read”. All are about foundational ideas on languages, architectures and culture and set a great baseline for understanding the ideas beneath so many of the trends that are still making waves in programming today.
Go forth and dive into all the new things! But do it at a pace that makes sense. A pace that gives you time to build the right kind of foundation. This eventually lets you adopt new technologies faster, understand them more deeply, and evaluate their staying power more thoroughly. | https://medium.com/netlify/leveling-up-why-developers-need-to-be-able-to-identify-technologies-with-staying-power-and-how-to-9aa74878fc08 | ['Mathias Biilmann'] | 2018-05-30 23:23:13.442000+00:00 | ['JavaScript', 'Learning', 'Development', 'Programming', 'Jamstack'] |
Three Different Things: November 15, 2019 | Three Different Things: November 15, 2019
Speaker Verification To Speech Synthesis, Google Putting Medical Data at Risk, and Your Data
Photo by jesse orrico on Unsplash
Click on the reference links and then compare to the generated speech. Pretty astonishing stuff from Google.
2. I’m the Google whistleblower. The medical data of millions of Americans is at risk
Above all: why was the information being handed over in a form that had not been “de-identified” — the term the industry uses for removing all personal details so that a patient’s medical record could not be directly linked back to them? And why had no patients and doctors been told what was happening? I was worried too about the security aspect of placing vast amounts of medical data in the digital cloud. Think about the recent hacks on banks or the 2013 data breach suffered by the retail giant Target — now imagine a similar event was inflicted on the healthcare data of millions.
I think EMR data ultimately has to live in the cloud to drive more effective outcomes… and as a consequence this data breach is likely going to happen. When it does, it'll change the game — for the worse — for insurance companies and providers alike… not just the citizens whose privacy is violated.
3. Andrew Yang wants you to make money off your data by making it your personal property
“By implementing measures to increase transparency in the data collection and monetization process, individuals can begin to reclaim ownership of what’s theirs,” Yang said in the plan.
In the example above, if we simply owned our own health data it would likely foster a much more competitive provider market. Providers would have better information to provide better care. Maybe Apple will figure this one out with Apple Health as a beachhead. | https://medium.com/early-hours/three-different-things-november-15-2019-a2e18d109d39 | ["Sean O'Brien"] | 2019-11-15 12:44:38.406000+00:00 | ['AI', 'Policy', 'Healthcare', 'Data Science'] |
A Simple React Hook to Prompt iOS Users to Install Your Wonderful PWA. | The Hook
The whole point of this hook is to load a notification only for Safari on iOS, so we’re going to create a new file called useIsIOS.js . In this code we’re going to do a few things:
Check if someone is viewing our app on an iOS device.
Are they using mobile Safari?
Have they not been prompted to install the app before?
If all of the above is true , then we’ll send them a prompt to install the app and, in the background, store a hasBeenPrompted item in browser localStorage with a timestamp.
We’ll start with creating a state for isIOS with useState .
This is pretty simple:
const [isIOS, setIsIOS] = useState({}); initializes a piece of state — similar to setState in a class component. On init, the state is an empty object, set by useState({}).
Then we use useEffect to do our checks from above.
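The embedded gist isn't reproduced in this text, but at this stage the hook probably looks something like this minimal sketch (the empty checkForIOS stub is my assumption; we fill it in below):

import { useState, useEffect } from "react";

export default function useIsIOS() {
  const [isIOS, setIsIOS] = useState({});

  const checkForIOS = () => {
    // filled in below
  };

  useEffect(() => {
    checkForIOS();
  }, []);

  return isIOS;
}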
At the moment, this does nothing except try to run the checkForIOS() function. So let's update our file to make it work!
We'll install moment.js (npm install --save moment), then import moment into our hook. We're using moment to set & check timestamps to see if and when a user has been invited to install our app.
Getting into our checkForIOS() function, we’ll start by setting some variables, using moment, including a timestamp for today , lastPrompt (if it exists), and finally, the number of days since we last prompted our visitor to install the app.
A quick sketch of Astro turned logo, turned splash screen.
Looking at lastPrompt , you’ll see we’re trying to get an “installPrompt” key from the browser localStorage. If it exists, we use moment() to convert it into a Date object, otherwise it returns undefined . Perfect!
Next, have a look at days . This is where we’re checking the time difference, in days, of today compared to the last time the user was prompted to install our app. If they haven’t been prompted before, then we prompt them now!
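The original gist isn't shown here either, but based on the description those variables look roughly like this (the localStorage key matches the "installPrompt" name mentioned above):

const today = moment().toDate();
const lastPrompt = moment(localStorage.getItem("installPrompt"));
const days = moment(today).diff(lastPrompt, "days"); // NaN when we have never prompted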
Check for devices, browser, and OS
I tried a few different options, but this seems to be the most reliable method for identifying Safari and device types on iOS. We’re updating our checkForIOS() function with some old-school, pedantic JavaScript.
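The gist itself isn't included in this text, but the classic user-agent check it describes looks something like this sketch:

const ua = window.navigator.userAgent;
const isIPad = !!ua.match(/iPad/i);
const isIPhone = !!ua.match(/iPhone/i);
const isIOSDevice = isIPad || isIPhone;
const webkit = !!ua.match(/WebKit/i);
// iOS only allows installs from Safari, so rule out Chrome on iOS (CriOS) and friends
const isSafari = isIOSDevice && webkit && !ua.match(/CriOS/i);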
Typical JS.
It’s important to ensure that our iOS visitor is using Safari because iOS doesn’t permit other browsers to install our awesome PWA’s!
Do we prompt?
Now that we have our timestamp and device info, we can check if we should send our website visitors a notification to install our PWA.
Looking at our prompt variable, we're saying:
"if our visitor has no stored timestamp — isNaN(days) — or we haven't notified them to install our PWA in over 30 days,
"and they're viewing our website on an iOS device,
"and they're using the Safari browser,
then set prompt to true."
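Put together in code, that condition probably looks something like this sketch (the variable names follow the ones used above):

const prompt = (isNaN(days) || days > 30) && isIOSDevice && isSafari;

if (prompt && "localStorage" in window) {
  localStorage.setItem("installPrompt", moment().toISOString()); // remember when we asked
}

setIsIOS({ isIOSDevice, isSafari, prompt });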
Depending on whether our visitor meets all the criteria, prompt will return true or false. | https://medium.com/swlh/a-simple-react-hook-to-prompt-ios-users-to-install-your-wonderful-pwa-4cc06e7f31fa | ['Michael Lisboa'] | 2020-01-13 03:55:12.504000+00:00 | ['JavaScript', 'Web Development', 'React', 'iOS', 'UX'] |
An Effective Long-Term Weight Loss Solution: Meditation | Many millions of trees sacrificed their lives providing paper for all the books and articles about diets, dieting and weight loss. Low fat. Low carb. High protein. Paleo. Atkins. Mediterranean. It’s never ending.
I don’t have any qualms about any of these. I’m sure they work for the people who do them…for as long as they do them. But how many people actually stay on a diet for the rest of their lives? Not many. Which leads to the proverbial yo-yo, up and down, lose fifteen, gain eighteen, lose twenty, gain twenty path that so many weight loss aspirants travel.
The annoying Jennifers
I don’t mean to be flippant about this. Weight loss, dieting, body image…it’s an area that dominates the emotional landscape of many millions of people, especially women in America who are tormented every time they wait in a supermarket checkout line and are bombarded by the perfect bodies of Jennifer Aniston, Jennifer Garner and Jennifer Lawrence on the cover of Vapid Magazine (aka Glamour, Cosmo, Elle, Vogue…take your pick).
Fine. So people go on and off diets and their weights fluctuate wildly. Nothing new there.
There’s also nothing new about what has come next in the evolution of the weight loss/dieting debate. The smart people in this area say we don’t need to diet; we need to change our eating habits.
The three food culprits
What does that mean? Eating less refined sugar is definitely numero uno, followed by avoiding refined carbs and trans fats.
Which begs the question: Why do people eat badly, i.e., consume lots of sweets, carbs and fried foods? For most people the answer can be summed up in one word: stress.
Why does stress make you eat more, bad food? Because when we get stressed, our levels of a hormone called cortisol rise. Cortisol makes us crave sugary, salty and fatty foods, because our brains think they need fast fuel to fight whatever threat is causing the stress.
Ben & Jerry’s strikes again
Making matters worse, these unhealthy sugary, fatty foods actually turn off the mechanism in our brains that send out the signal that we are full. That’s why we tend to eat the entire pint (or two) of Ben & Jerry’s Chunky Monkey after a stressful day at work! (The Active Times, July 15, 2015, Katie Rosenbrock)
This, then, is where the focus of the weight-loss world needs to be — on reducing stress. I’ve relied on exercise to slay my stress for going on forty years. And of course there are myriad other salutary ways to reduce stress.
But in terms of stress reduction to facilitate weight loss, nothing is more effective than meditation. Meditation contributes in two main ways.
Meditation slays cortisol
First, it reduces cortisol in our system. A 2013 University of California-Davis (Saron et al) study found that meditation cut cortisol levels by more than half. (EOC Institute). This shouldn’t be the least bit surprising as the chief result of meditation is the calming of the mind which calms the entire being.
But it’s in the second area that meditation offers weight-loss seekers something that exercise and the other stress reduction techniques don’t. And that is the ability to do the inner work necessary to change our eating habits.
What inner work? Specifically, meditation trains us to be able to observe ourselves from a place of objectivity and nonjudgment.
See cookie, eat cookie
What the heck does that mean? When we eat from a place of emotion and stress, we ARE that emotion and stress. So we see that bag of chocolate chip cookies and there is no entity there to regulate our actions. It’s just see bag, grab bag, open bag, eat cookies until they’re gone.
What meditation does is separate what I call our conscious, true self from the egoic, all powerful, out of control self that dominates in most humans. By separating this conscious self from its dominant big brother, meditation allows that self to observe what big brother is doing.
Meditation strengthens the regulator
The more we meditate and the stronger this conscious self becomes, the better able we are to stop and say, “Okay. I see that bag of cookies. I feel the strong urge to go open it up and devour every cookie in the bag. But if I do that I’ll probably feel terrible in about fifteen minutes. Let’s hold off.”
Bottom line: Meditation strengthens our ability to observe and consequently self-regulate our behaviors. And in the world of dieting and weight-loss I can’t think of anything more important. Because self-regulating our eating really is most of the ball game when it comes to weight-loss.
Not to mention that regular meditation reduces anxiety, depression and chronic pain, strengthens our immune system and improves our focus, among many benefits. More important, meditation makes us calmer, more compassionate human beings.
So if you want to lose weight for the long term and garner all those other profound benefits, get meditating! If you’re looking for a place to start, check out my free program for regular people at davidgerken.net. | https://medium.com/change-your-mind/an-effective-long-term-weight-loss-solution-meditation-8c3cac96949d | ['David Gerken'] | 2020-12-24 13:45:41.110000+00:00 | ['Meditation', 'Self', 'Weight Loss', 'Health', 'Mindfulness'] |
“Hold On, I’m Coming” — Sam and Dave | I caught a TV documentary about classic Memphis record label Stax the other evening, which was, of course, a great excuse for the documentary makers to weave in a host of classic Stax tracks while they told their story.
In amongst the soundtrack of this documentary, jam-packed with some of the most memorable songs of the 1960s, was the unmistakable horn refrain from one of my all-time favourite soul tracks, “Hold On, I’m Coming” by Sam and Dave.
Throughout the 1960s and 70s, the name “Stax” was synonymous with “soul”. They even branded themselves “Soulsville USA”, riffing on Motown’s “Hitsville USA” strapline.
The Stax sound remains one of my favourites to this day. And Stax got that sound, in large part, due to the Stax house band, an incredibly tight and skilful unit which was easily at least the equal of other great studio bands of that era — the Funk Brothers at Motown, the Swampers in Muscle Shoals and a handful of others.
To perfect their funky, soulful sound, Stax stuck with two main components right through the studio’s golden era.
The label’s “house band” was the fusing together of two great bands in their own right. First, the Mar-Keys, Stax’s original house band with wonderful musicians like Steve Cropper on guitar, Duck Dunn on bass and trumpet player Wayne Jackson among others.
A kid called Booker T Jones started working at Stax in the early 1960s and the Mar-Keys largely morphed into Booker T and the MGs who had a US Top 3 hit of their own in 1962 with the instrumental “Green Onions”… (here if you need a reminder).
Unusually for the time, the studio band who made the recording also went on the road with the Stax artists. If you watch the video below, you’ll see Booker T and the MGs getting a shout-out.
Also unusually for the time, if you watch the “Green Onions” video above, you’ll note that Booker T and the MGs was a fully-integrated band who demonstrated that black people and white people could work perfectly well together, contrary to some views at the time, while creating beautiful music to make the world a better place along the way.
You might think that a combination of the Mar-Keys and Booker T and the MGs was enough to be getting on with, but that's not all… Isaac Hayes, who would go on to enjoy tremendous success of his own with "The Theme From 'Shaft'" (here) in the early 1970s, was also a prominent feature of the Stax studio team and a key part of the songwriting team at Stax.
Less publicly-recognised than, say Holland-Dozier-Holland at Motown, the no-less-talented songwriting team at Stax wrote many of the 1960s’ classic hits.
So it probably won’t surprise you to discover that Isaac Hayes, who co-wrote “Hold On, I’m Coming” for Sam and Dave along with label boss David Porter, and around 200 other songs for Stax, were later inducted into the Songwriters Hall of Fame.
Perhaps because the fevered imaginations of radio station programmers read too much into the title of “Hold On, I’m Coming” without troubling to read the rest of the lyrics, radio airplay was hard to come by. Sam and Dave would only make it to number 21 in the Billboard Hot 100 and “Hold On, I’m Coming” didn’t trouble the UK charts at all.
Despite that, “Hold On, I’m Coming” has come to be recognised as a classic…and in fairness, Sam and Dave would go on to take “Soul Man”, a very similar song in many ways, as far as number 2 in the Hot 100 a year or so later.
The quality of production from that little record label in Memphis, Tennessee was recognised in the end and, in addition to their chart performance, Sam and Dave picked up a Grammy for “Soul Man”.
“Soul Man” is a great song too, of course, but “Hold On, I’m Coming” marginally gets my vote, albeit on a photo finish.
The subject matter of the song, despite what a generation of radio station programmers might have thought, is relatively innocent. It’s about a guy telling his girlfriend that if she’s got any problems, he’ll be there for her in an instant…
Don’t you ever be sad
Lean on me when times are bad
When the day comes and you’re down
In a river of trouble and about to drown
Just hold on, I’m coming
Hold on, I’m coming
That seems innocent enough…laudable, even…
Admittedly, in the eyes of 1960s radio station programmers at least, “Hold On, I’m Coming” takes a turn for the worse later in the song…
Reach out to me for satisfaction
(Look-y here, all you gotta do)
Call my name now for quick reaction
That section is probably open to a wider variety of interpretations than the innocent-enough sounding first verse, although, of course, perfectly innocent interpretations of those lyrics are quite possible too.
But…and I can hardly believe I’m saying this in my lyric-obsessed little corner of the world…the lyrical interpretation of “Hold On, I’m Coming” hardly matter in the face of one of the classic soul performances.
Sam and Dave are rocking this one out…with such a thick vein of soul running through them that it positively flows out your speakers at you whether you’re a soul fan or not.
And the band behind them…the band…wow…
No less a musician than Booker T Jones himself on the keyboards…Steve Cropper (co-writer of “(Sittin’ On) The Dock Of The Bay”, amongst many other classic songs) on guitar…Wayne Jackson on horns…along with the cream of the Memphis music scene to fill out the sound.
And what a sound it was. A soul classic that’s meant to be danced to…meant to be embraced…meant to move you in a way that all songs aspire to, but few can manage…
Here’s Sam and Dave with a roof-raising, foot-stomping, barn-storming live performance of their Stax classic…written by Isaac Hayes and David Porter… “Hold On, I’m Coming”…
If you’ve read this far, thank you for your time and attention. I know you could have spent your time doing something else, so I’m very grateful that you’ve spent it in the company of one of my favourite songs.
The video is below, but if you prefer to listen to your music on Spotify, you can find today’s track here…https://open.spotify.com/track/6PgVDY8GTkxF3GmhVGPzoB | https://nowordsnosong.medium.com/hold-on-im-coming-sam-and-dave-d82e7d394736 | ['No Words', 'No Song'] | 2019-04-20 16:37:36.165000+00:00 | ['Music'] |
Try & catch finally JavaScript & Typescript | Try/Catch In TypeScript
Escape any error exception in Typescript 4.0 with unknown
Did they do it on purpose?
TypeScript covers many concepts from JavaScript. Also, the questionable ones. Some cases of error handling can be quite interesting, but this article will guide you through the jungle!
Microsoft describes TypeScript as a superset of JavaScript. What do I mean by telling you this? Simply that every piece of TypeScript code is also valid JavaScript. TypeScript can do more than its parent, but not less. If it couldn't, migrating projects wouldn't be as easy as it is, and acceptance among developers would shrink.
Because of that, it also took the bad parts, like exception handling. If you break it down, the error handling is pretty similar to the one in C#. Throw is used to trigger an exception, and as you guessed this one right, you will also guess the next: try & catch are used to handle them. It is irrelevant whether the exception is a default or a user-defined one.
Finally also made it into TypeScript. You can even use try-finally, with no need for a catch there. Finally is used to make sure we clean up unneeded memory allocations. But this is nothing new; it has been implemented for years now.
It does not even have to be the Error type?!
Although JavaScript does not have a classical type system, it provides several error types. Error — the general error type used for any occasion. SyntaxError — specified as thrown when a syntax error occurs. ReferenceError — thrown when a value is assigned to an undeclared variable, when an assignment is made without the var keyword, or when a variable is not in our current scope [1].
The language differentiates those three types and throws one of them when the circumstances are right, or in our case, when the code has an error. TypeScript inherits all these error types, and that means the following code is simply valid.
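The snippet isn't embedded in this text, but it is presumably something along these lines (a minimal sketch of my own):

try {
  JSON.parse("{ definitely not JSON");
} catch (exception) {
  // `exception` is implicitly typed as `any`, so the compiler will not complain
  console.log(exception.message);
}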
There is one question floating around in my head. What type does the parameter exception have?
I think it is obvious this has to be the type Error or at least one type that has a common ground with all other named error classes. Some kind of Error — Interface.
The type of the parameter is any. This lets us do whatever we want, because the compiler won't complain whatever we do with our parameter exception. We get access to any defined properties. TypeScript does not apply its otherwise strictly enforced type safety.
Two questions here:
Why is that so? Why could this be meaningful?
I can answer the first question in a matter of seconds: JavaScript does not force us to use an Error as the exception. You do not even have to derive from it; you can throw anything you want, a ball, a nut, a baseball bat… It lies in your hands.
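The example the next paragraph refers to isn't reproduced here, but it presumably boils down to something like this:

try {
  throw 42;
} catch (exception) {
  console.log(exception); // prints 42
}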
This throw is pointless but syntactically correct. When this code executes, the program will catch the value 42 for us. The answer to everything in the universe.
And that's the answer to the first question. TypeScript infers any because it cannot guess or even know what code in other parts of our program could throw.
What comes next is the answer to the question of whether this is even good.
Error error error
I actually don’t think that this is good. Why? Because instead of supporting an unsafe error type, Typescript should put a stop to it!
But how could you do that? The backward compatibility is the main promise Typescript gave us developers. It could simply use another type than any. For example unknown. It wouldn’t be much of a difference, because of unknown limits the accessibility to properties by even giving no opportunity to do so. And that’s for a reason, we don’t know what the unknown is, so it is therefore just logical not to access a thing, we don’t even know.
Unfortunately, this brings not more type safety but a bit of general safety, because of the fact not letting us accessing properties. And since Typescript 4.0 it is possible to declare a type inside the catch statement. We are forcing the program to explicitly declare any or unknown. We want the unknown type to reach our goal.
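Here is a hedged sketch of what that looks like from TypeScript 4.0 onwards (riskyOperation is just a stand-in for anything that might throw):

const riskyOperation = (): void => {
  throw new Error("boom");
};

try {
  riskyOperation();
} catch (exception: unknown) {
  if (exception instanceof Error) {
    console.log(exception.message); // safe: narrowed to Error
  } else {
    console.log("Something that is not an Error was thrown:", exception);
  }
}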
Since linters are a blessing for every developer and every team, there is also a rule for that. The ESLint plugin [2] has the rule no-implicit-any-catch [3]. This will help you force everybody to write try & catch blocks that are actually helpful. You can even force the use of the type unknown, which is very helpful in terms of type safety, and that's why you use TypeScript.
Type-Guard
Because you will already have a lot of any-typed exceptions in your code, you don't want to change them all manually, but to work well with exceptions you had better convert them. A type guard is a useful way to do so. It has to check whether a value has the right type and, if possible, narrow that value.
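A minimal sketch of such a guard (my own illustration, not tied to any particular library):

function isError(value: unknown): value is Error {
  return value instanceof Error;
}

try {
  throw new Error("boom");
} catch (exception: unknown) {
  if (isError(exception)) {
    console.log(exception.message); // exception is now typed as Error
  }
}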
And of course, there is an NPM module for that. And hell yes, why not? The name is defekt [4]. It lets you make your own error classes.
Conclusion
I love TypeScript & JavaScript as well. They are great languages for turning simple ideas into reality pretty quickly, and they are accessible to everyone. They are also great for making really cool and extraordinary web applications. But one thing that TypeScript has to get rid of is things like the topic covered by this article. On the other hand, what would TS be without backward compatibility with JS? I really can't blame Microsoft for keeping it. TS wouldn't be so big if there were no backward compatibility.
And for that, it is even more important to know your language and to understand what’s happening under the hood and where your language comes from. JavaScript is the basement of modern web.
Further reading:
Links & References
[1] Exception Handling
https://basarat.gitbook.io/typescript/type-system/exceptions
[2] eslint-plugin
https://eslint.org/docs/developer-guide/working-with-plugins
[3] no implicit any catch
https://tinyurl.com/y8vckrmx
[4] NPM modul Defekt
https://www.npmjs.com/package/defekt | https://medium.com/javascript-in-plain-english/the-dark-side-of-typescript-try-catch-deeded18ba0d | ['Arnold Abraham'] | 2020-12-17 14:53:45.885000+00:00 | ['Programming', 'JavaScript', 'Software Development', 'Typescript', 'Web Development'] |
How Olafur Eliasson uses art to drive conversations on climate change | As creatives, it’s undeniable that we propagate ourselves into our work. Our backgrounds, experiences, and beliefs help shape our decisions regarding the projects we take on and the messages we amplify. There’s no doubt that with the presence of an audience comes great responsibility — especially in leveraging mediums to stir conversations we find important.
In the case of Olafur Eliasson, a Danish-Icelandic artist known for his installation pieces, art is seen as a platform to share ideas and inspire change. His creations, centered around a theme of transience and elemental materials, motivate viewers to focus on the imminent future — one dependent on the actions and choices we make today.
Image credit: Olafur Eliasson
The Weather Project (2003)
Olafur Eliasson’s The Weather Project is arguably the most iconic installation built for the Turbine Hall in London. Over the course of six months in 2003, this hazy mirage attracted over 2 million visitors — labeling it as one of the most visited contemporary art pieces in history.
With its vivid imagery, this project influenced collective awe and commotion for its powerful imprint on both the eyes and mind. By mimicking the realistic appearance of a sun and sky with illusory elements, Eliasson created a new world inside of the Tate Modern to remind viewers we only have one. | https://uxdesign.cc/how-olafur-eliasson-uses-art-to-drive-conversations-on-climate-change-5d10fe60bd52 | ['Michelle Chiu'] | 2020-10-20 20:31:36.092000+00:00 | ['Culture', 'UX', 'News', 'Design', 'Art'] |
Trick-or-Treating… Who Needs It? | If the kids must trick-or-treat, they can; just do it at home.
While trick-or-treating has a few different murky possibilities of its actual origin, the modern-day version is basically all of them melted together.
We get some of it from the old Gaelic tradition of Samhain, an end-of-harvest festival that would take place between October 31-November 1. During that time, people believed the peripheral line between our world and the spirit world was lessened and the souls of the dead were easily able to cross over. Households would leave offerings outside their doors to please and pacify the dearly departed. Folks would also leave a place setting at the dining table for the unseen guests and disguise themselves as spirits or ghouls in order to blend in with the ‘visitors’.
Modern Halloween doorbell ringers also take inspiration from 'mumming', or 'guising'. Mumming has been documented since the thirteenth century, but it has been traced back to ancient Egypt. Mummers, or 'guisers', would disguise themselves, dressing up in costume or masks pretending to be the dead spirits and accepting the offerings themselves. If that didn't work, they would perform music and skits in the streets, sometimes going door to door begging for treats in return for their performance. There are still Mummers' festivals and parades to this day, paying homage to the ways of the old.
‘Mumming’ back in ye olde days — Image from WikiCommons / public domain
Lastly, part of the trick-or-treating tradition nowadays stems from remnants of the old school practice of ‘souling’ on All Souls Day (which has been sort of intertwined with All Saints Day.) Souling dates back to at least the Medieval times, and maybe even before. People would bake ‘soul cakes’ with a blend of autumn spices like nutmeg and cinnamon and would sometimes fold currants into the mix. They’d mark the top of the small round cake with an ‘X’ to brand it as an offering for the dead. Children (and the poor) would go from house to house begging for the soul cakes in exchange for songs and prayers for the cake-givers’ souls, and also for the souls of givers’ deceased loved ones.
Usually, Halloween’s fairly easy. You can just walk the kids around the neighborhood and look at the cool decorations while loading up on free sugary goodness. Even if it’s still allowed in your town and some families still want to try it out, they may find lots of dark porches and locked gates because there are just not as many households this year willing to participate. Instead of doing the ‘normal’ Halloween night, shake it up a bit.
Tell them the story of the soul cakes, and make your own version of the baked treat. To make it extra special for them; instead of the traditional ‘X’ marked on top of the cake, let the kiddos carve in their initial. That’ll be sure to get them excited for when it comes out of the oven. You can find great recipes for soul cakes online.
Image from wiki commons by Malikhpur
If you’re not into the whole soul cake-making idea, you can always just get everyone involved in baking some regular cookies or cupcakes and decorate them for Halloween.
Image and treats by author
If the kids insist on going trick-or-treating, they still can. Just have them do it at home. Place a candy-filled bowl (or a person, if you have enough people) in each bedroom or bathroom and shut the doors. Each door can be a different ‘household’ with a fake address, and a lil’ drawing of a mailbox or doorknocker/bell on the outside of each door. That way, they can still do ‘door-to-door’ trick-or-treating for their candy fix. Just make sure to buy bags of the good shit or giant-sized candy bars, you know, so they don’t feel totally gypped.
Safe kids. Safe parents. Safe everyone else. Ta-da! | https://medium.com/swlh/l-a-has-banned-trick-or-treating-for-2020-but-who-needs-it-30e57a7e2204 | [] | 2020-10-07 21:14:12.127000+00:00 | ['Self', 'Kids', 'Family', 'Creativity', 'Self Improvement'] |
Macroeconomic predictions.. Can we create an AI model to predict… | Can we create AI models to predict macro-economic trends?
Not too long ago, the prediction of the weather was a hit or a miss. The joke was that the TV weatherman was always carrying an umbrella on what was predicted to be supposedly a nice day. Today, we can model the weather patterns very accurately for several days into the future.
Similarly, given enough relevant data, including market indicators, human behavioral patterns, and current events, including the “black swan” events, we should be able to model the macro-economics.
In the words of someone very famous, the idea is to be “less wrong”. Being consistently better than an average, even by a tiny percent in predicting the future would have a big benefit.
For years, I have been discussing with my friends the concept of AI modeling based on thousands of macroeconomic data streams, and the opportunity it could give to the architect of such a system. Now, I feel that the data, the software, and the hardware are within reach of that goal.
It is very common to hear the business or the political pundits speak of what amounts to mono-thematic bias based on the very limited ability of a human mind to grasp only a few dimensions of a given topic. Here, I am trying to have AI combine thousands of complex data streams (think of each as a different spreadsheet) into a consistent and verifiable oracle.
At the start, I employed easily accessible market indications, data that influences the immediate macroeconomic changes such as manufacturing, unemployment, new housing permits, inventories, etc.
I made heavy use of averages as I realize that certain events build up to a change over periods of time, sometimes days, sometimes weeks, months, or even a year.
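As a flavour of what that looks like in Julia, here is a naive sketch with made-up placeholder data rather than my real indicator streams:

using Statistics

# Trailing moving average over a vector of daily readings
function moving_average(series, window)
    [mean(series[max(1, t - window + 1):t]) for t in eachindex(series)]
end

unemployment = rand(365)                      # placeholder for a real indicator stream
weekly    = moving_average(unemployment, 7)   # week-long build-ups
quarterly = moving_average(unemployment, 90)  # slower, quarter-long trends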
Of course, I realize that historical data is not enough.
I am interested in human behavior as well. Pundit and influencer opinions, even if wrong, definitely affect human behavior. We know for a fact that markets, or rather large groups of people, behave illogically. However, there are patterns in how an average Joe listens to the presidential speech, watches a few pundits argue, reads an article or two, talks to friends at work, watches a few videos titled "Imminent crash in [insert any year]", hears that someone got laid off, and finally starts selling their stocks a few days later, making a colossal investment mistake.
Such modeling is difficult, but it is achievable. We are already regularly deducing human sentiment from Google searches, YouTube videos on the newest Apple or Tesla innovations, Twitter blasts, and blog posts. It is all known and proven technology.
This project is not meant to have a quick win, I realize that every hedge fund is pouring millions in similar research. Rather, I want it to be a gradual improvement learning process, a labor of love, a means of recurrent income towards my retirement, I still have about 20 years to solve this complex problem.
The plan is that every few days I add new data sources and continue building the models until I get good at predicting something. Yes, I have used the word “something” on purpose. I am not sure if I will be able to predict the particular price of the S&P 500, but I am sure I will find patterns in some niche, and having a wizard-level insight might open doors, a path less taken.
This article, and the accompanying open-source code, is not meant to teach you about the market, nor to teach you how to code, but rather to share my progress and to receive positive feedback and cooperation.
Project Objectives
to gather and analyze as many market indicators as possible
to learn patterns and interactions in the data
to predict market trends 5, 30, or 90 days ahead
to predict a particular stock price
to produce a re-usable code and document the process
What are Market Indicators?
Market Indicators are collections of data points (think spreadsheets) reflecting the historical performance of a particular area of interest, for example:
“S&P 500” index shows how major 500 stocks as a whole are performing
shows how major 500 stocks as a whole are performing “ISM Manufacturing” index shows how well the manufacturing industry is doing
shows how well the manufacturing industry is doing “GDP” (Gross Domestic Product) index shows how the country is doing
shows how the country is doing etc., etc., etc.
There are hundreds, if not thousands, of such indicators. Each country, state, county, and community have sets of data that reflect some trend.
Why not using the spreadsheets?
The spreadsheets and their graphing capabilities are the bread and butter of an individual person's market analysis. However, spreadsheets are good when comparing only a few indicators; when the data starts multiplying, the inputs become overwhelming. Please remember that an average person can remember about 7 numbers. Now try to imagine daily data for the last few decades coming from a few hundred indicators, multiplied by a couple of hundred regions, and add to it sentiments from hundreds of sources. We can easily be talking about millions, if not trillions, of inputs.
Today’s desktop computers can process many “tera” operations per second. Tera is this much = 1,000,000,000,000. I think the point is clear.
Machine Learning approach
The overwhelming advantage of machine learning, or Artificial Intelligence (AI), is apparent when we try to find subtle patterns in thousands of indicators.
The human brain (using spreadsheets) fails to grasp the wealth of the information presented. Machine learning, on the other hand, can easily detect the patterns in massive datasets and derive a conclusion.
Why Julia?
Julia is modern, fast, elegant, multitasking, and does the math extremely well. Of course, we are talking about the Julia programming language.
unlike C/C++, it is a pleasure to read and write Julia
it is designed in MIT specifically for scientific computing and machine learning
similarly to C, it is extremely fast
similar to Python, it is very easy to learn
it is designed for parallelism
it is designed for distributed computing (in case ~10,000,000,000,000 operations per second is not enough)
Read the source, Luke
I am making changes to the project’s code on a daily basis.
The approach and the code would surely diverge from any examples I would be able to provide here, hence the pun on Obi-Wan Kenobi’s “Remember, Luke, use the source.”, in other words, study the actual code.
The rest of the project is documented at:
Gain Access to Expert View — Subscribe to DDI Intel | https://medium.com/datadriveninvestor/market-indicators-a-machine-learning-project-with-julia-language-be1a452213f8 | ['Uki D. Lucas'] | 2020-12-13 12:41:48.533000+00:00 | ['Machine Learning', 'Julialang', 'AI', 'Macroeconomics', 'Stock Market'] |
Grid-based game with Unity: dev-log day 2 | Sorting out the order of operations
Alright, so I made my character sprite into a game object and then into a prefab in my resources folder. I used the same technique that I used in my grid_manager to load my prefab and then give it a position on the screen using its transform.
But this is where I hit my first hurdle. I wagered that the easiest way to position my character on the grid was to label one of the floor tiles as the “start_tile” and then find the start tile and position my character equivalently.
But both my character instantiation and placement and my grid generation are called in the built-in start method.
This creates a problem because my character placement is dependent on my grid, and I want my grid to be generated explicitly before my character Is placed.
My solution to this might not be ideal, but I found a way that is pleasing for me by reading through a bit of the unity scripting docs.
My solution was to create a master_manager that would act similar to how the kernel acts for your operating system. It’s a master game object that will call methods from other scripts in such a fashion that it can manage the generation flow of the game.
In a similar fashion, I have a game object that is the manager for any given process, so the grid has a managing script and the character has a managing script so on and so forth.
Here is what that looks like inside my master_manager:
I’ve elected to put this generation process inside of the built-in Awake function because the unity docs tell me Awake fires before the first frame of the game which seems to be an appropriate place to load up the initial state.
I’m absolutely convinced that I will find a more elegant way to achieve this, but for now, his approach is general and effective and has the capacity to solve many future problems I may have with the development. | https://medium.com/dev-genius/grid-based-game-with-unity-dev-log-day-2-1b8d10baa7d9 | ['Taylor Coon'] | 2020-12-28 08:33:56.292000+00:00 | ['Game Development', 'Unity', 'Coding', 'Programming', 'Development'] |
Renewable Energy Can Save Lives in Hurricane-Prone Regions | By Dr. Susan Pacheco
Most people might be surprised to learn that the leading cause of deaths from Hurricane Laura was not drowning, or fallen trees, but something much more preventable: carbon monoxide poisoning.
Tragically, eight people died from the unsafe operation of gas generators when the power went out, including four members of one family. Casualties from Hurricane Delta were also energy-related, one from a generator fire and one from a natural gas leak. It is a sad and stark reminder of how extreme weather overwhelms our power system, sometimes with deadly consequences.
Dr. Susan Pacheco with a young patient. (Photo courtesy of Dr. Pacheco)
Extreme weather events, which are becoming more intense with climate change, are the leading cause of power outages in the U.S. and Americans experience more outages than any other developed nation. A recent analysis found that since 2000, the U.S. has seen a 67% increase in major outages from weather-related events. From heatwave-induced blackouts in California, to this summer’s destructive ‘inland hurricane’ in the Midwest, to my own experiences in Texas, it is clear that our nation’s aging power system is not prepared.
For the sake of public health, we must invest in more local renewable energy and other solutions that can keep the power on during hurricanes.
As the deaths from recent hurricanes make clear, these outages are not just inconveniences, they have direct consequences for public health. I have witnessed them first-hand. When Houston flooded and hospitals in the Houston Medical Center lost power during tropical storm Allyson in 2001, hundreds of patients including children were evacuated through poorly ventilated, hot and dark stairwells, many of them manually ventilated during the evacuation process, and waiting for ambulances or relocation in the streets of the Medical Center. Sadly, 20 years later, not enough hospitals are in a better position to handle outages which are only becoming more frequent.
This is not a future problem, it’s a right now problem.
Beyond unsafe operation of home generators, power outages have myriad other health impacts. Prolonged power outages are a threat for the safe storage of refrigerated medications and the survival of individuals that require use of electricity-dependent durable medical equipment. Being left without air conditioning in the extreme heat, or without heating in the extreme cold, can also lead to deadly consequences.
Instead of dangerous gas generators, hurricane-prone regions should be making it easier to provide on-site, non-polluting solar and battery systems that can keep the power on when the central electricity grid goes down. In Texas, Austin is working with a $5 million grant from the Department of Energy to pilot ways to reduce blackouts from storms using solar and micro-grids. We should invest in more programs like this. Business and environmental groups alike have said this type of clean distributed energy not only improves grid resilience but also saves money. States could also invest in programs similar to the one in California that uses government funds to provide free solar and battery storage to low-income residents.
Opponents will argue that renewable energy is unreliable and too expensive, but that is simply not true. It’s a myth that fossil fuels fare better in extreme weather — in fact, they do worse — and they are contributing to the carbon pollution that’s making extreme weather worse.
We’ve seen for ourselves here in Texas. During Harvey, rain-soaked coal stockpiles were rendered useless, meanwhile, wind power was able to bounce back within a few days and some plants operated continuously throughout the storm. The storm also shut down around a quarter of the country’s refineries in the Houston area, leading to gas price spikes across the country.
Renewable energy keeps the lights on during emergencies, and is critical to mitigating climate change, not to mention it’s affordable. Research from the University of California, Berkeley shows that with the right policies, we can reach 90 percent clean electricity by 2035 at no added cost for consumers.
For the sake of our health, economy, and climate — let’s invest in ensuring our power system is able to handle the increasing number of storms we know are coming.
Dr. Susan Pacheco is a Houston-area pediatrician and a professor at the University of Texas. | https://medium.com/i-heart-climate-voices/renewable-energy-can-save-lives-in-hurricane-prone-regions-ebe8abcbae2f | ['I', 'Climate Voices'] | 2020-10-29 15:41:36.104000+00:00 | ['Climate Change', 'Renewable Energy', 'Public Health', 'Hurricane', 'Power'] |
10 Cool Python Project Ideas for Python Developers | Python Project Ideas for Python Developers
If you have made up your mind about the platform you’re going to use, let’s jump straight into the projects. Mentioned below are some fun projects addressed towards developers of all skill levels that will play a crucial role in taking their skills and confidence with Python to the next level.
1. Content Aggregator
Photo by Obi Onyeador on Unsplash
The internet is a prime source of information for millions of people who are always looking for something online. For those looking for bulk information about a specific topic can save time using a content aggregator.
A content aggregator is a tool that gathers and provides information about a topic from a bulk of websites in one place. To make one, you can take the help of the requests library for handling the HTTP requests and BeautifulSoup for parsing and scraping the required information, along with a database to save the collected information.
Examples of Content Aggregators:
2. URL Shortener
URLs are the primary source of navigation to any resource on the internet, be it a webpage or a file, and, sometimes, some of these URLs can be quite large with weird characters. URL shorteners play an important role in reducing the characters in these URLs and making them easier to remember and work with.
The idea behind making a URL shortener is to use the random and string modules for generating a new short URL from the entered long URL. Once you’ve done that, you would need to map the long URLs and short URLs and store them in a database to allow users to use them in the future.
Examples of URL Shortener —
Here is the link to join the course for FREE: —
3. File Renaming Tool
Photo by Brett Sayles from Pexels
If your job requires you to manage a large number of files frequently, then using a file renaming tool can save you a major chunk of your time. What it essentially does is that it renames hundreds of files using a defined initial identifier, which could be defined in the code or asked from the user.
To make this happen, you could use the libraries such as sys, shutil, and os in Python to rename the files instantaneously. To implement the option to add a custom initial identifier to the files, you can use the regex library to match the naming patterns of the files.
Examples of Bulk File Rename Tools —
4. Directory Tree Generator
A directory tree generator is a tool that you would use in conditions where you’d like to visualize all the directories in your system and identify the relationship between them. What a directory tree essentially indicates is which directory is the parent directory and which ones are its sub-directories. A tool like this would be helpful if you work with a lot of directories, and you want to analyze their positioning. To build this, you can use the os library to list the files and directories along with the docopt framework.
Examples of Directory Tree Generators —
5. MP3 Player
Photo by Mildly Useful on Unsplash
If you love listening to music, you’d be surprised to know that you can build a music player with Python. You can build an mp3 player with the graphical interface with a basic set of controls for playback, and even display the integrated media information such as artist, media length, album name, and more.
You can also have the option to navigate to folders and search for mp3 files for your music player. To make working with media files in Python easier, you can use the simpleaudio, pymedia, and pygame libraries.
Examples of MP3 Players—
6. Tic Tac Toe
Tic Tac Toe is a classic game we’re sure each of you is familiar with. It’s a simple and fun game and requires only two players. The goal is to create an uninterrupted horizontal, vertical, or diagonal line of either three Xs or Os on a 3x3 grid, and whoever does it first is the winner of the game. A project like this can use Python’s pygame library, which comes with all the required graphics and the audio to get you started with building something like this.
Image by OpenClipart-Vectors from Pixabay
Here are a few tutorials you can try:
More Fun Python projects for game dev:
7. Quiz Application
Another popular and fun project you can build using Python is a quiz application. A popular example of this is Kahoot, which is famous for making learning a fun activity among the students. The application presents a series of questions with multiple options and asks the user to select an option and later on, the application reveals the correct options.
As the developer, you can also create the functionality to add any desired question with the answers to be used in the quiz. To make a quiz application, you would need to use a database to store all the questions, options, the correct answers, and the user scores.
Examples of Quiz Applications—
Read about the Best Python IDEs and Code Editors —
8. Calculator
Photo by Eduardo Rosas from Pexels
Of course, no one should miss the age-old idea of developing a calculator while learning a new programming language, even if it is just for fun. We’re sure all of you know what a calculator is, and if you have already given it a shot, you can try to enhance it with a better GUI that brings it closer to the modern versions that come with operating systems today. To make that happen, you can use the tkinter package to add GUI elements to your project.
9. Build a Virtual Assistant
Photo by BENCE BOROS on Unsplash
Almost every smartphone nowadays comes with its own variant of a smart assistant that takes commands from you either via voice or by text and manages your calls, notes, books a cab, and much more. Some examples of this are Google Assistant, Alexa, Cortana, and Siri. If you’re wondering what goes into making something like this, you can use packages such as pyaudio, SpeechRecognition, gTTS, and Wikipedia. The goal here is to record the audio, convert the audio to text, process the command, and make the program act according to the command.
Here is the link to join the course for FREE —
10. Currency Converter
As the name suggests, this project includes building a currency converter that allows you to input the desired value in the base currency and returns the converted value in the target currency. A good practice is to code the ability to get updated conversion rates from the internet for more accurate conversions. For this too, you can use the tkinter package to build the GUI. | https://towardsdatascience.com/10-cool-python-project-ideas-for-python-developers-7953047e203 | ['Claire D. Costa'] | 2020-09-08 19:41:49.515000+00:00 | ['Python', 'Software Development', 'Technology', 'Data Science', 'Programming'] |
Is Deno a Threat to Node? | Is Deno a Threat to Node?
Deno 1.0 was launched on May 13, 2020, by Ryan Dahl — the creator of Node
Image copyrights Deno team — deno.land
It’s been around for two years now. We’re hearing the term Deno, and the developer community, especially the JavaScript community, is quite excited since it’s coming from the author of Node, Ryan Dahl. In this article, we’ll discuss a brief history of Deno and Node along with their salient features and popularity.
Deno was announced at JSConf EU 2018 by Ryan Dahl in his talk “10 Things I Regret About Node.js.” In his talk, Ryan mentioned his regrets about the initial design decisions with Node.
JSConf EU 2018 — YouTube
In his JSConf presentation, he explained his regrets while developing Node, like not sticking with promises, security, the build system (GYP), package.json and node_modules , etc. But in the same presentation, after explaining all the regret, he launched his new work named Deno. It was in the process of development then.
But on 13th May 2020, around two years later, Deno 1.0 was launched by Ryan and the team (Ryan Dahl, Bert Belder, and Bartek Iwańczuk). So let’s talk about some features of Deno. | https://medium.com/better-programming/is-deno-a-threat-to-node-1ec3f177b73c | ['Kapil Raghuwanshi'] | 2020-07-14 08:29:17.382000+00:00 | ['JavaScript', 'Startup', 'Technology', 'Nodejs', 'Programming'] |
Learning Programming Fundamentals using Python | Background
I am a self taught developer that started learning to code in March of 2019. I just recently graduated with my M.S. in Chemistry and while job hunting I came across programming and didn’t look back. I’ve tried many different methods to learning to program and I’m here to share some of the ways that I think are most effective. Regardless of what you’re aspirations are, game developer, iOS developer, android developer, everyone needs to learn the basics of progamming.
The Fundamentals of Programming
I recommend that whatever language you pick, stick with it for a while, or at least until you have a firm grasp of programming concepts. Learn one thing at time, programming isn’t going anywhere.
What are these fundamentals?
Data types Variables Basic data structures (Lists/Dictionaries) Loops Conditionals Functions Classes and objects (more important in object oriented languages) Problem Solving
This list isn’t exhaustive, and this is coming from someone who created their own syllabus for learning. It is highly possible I’m leaving a concept out that might be relevant, the internet and documentation are your friend!
I’m pretty sure you’re wondering why Problem Solving is on there but that’s essentially what 99% of programming is, problem solving. I’ll show you some real examples of problem solving as we make our way through the fundamentals and methods to approach debugging an application.
Installing Python
We’re going to be working with the latest version of Python and to keep this article short and sweet, I’ll link a video to installing it below. We’ll be working with the Python shell for now so no need to worry about any text editor yet.
Installing Python: https://www.youtube.com/watch?v=h4fhdhNWDKk
Don’t worry about Visual Studio Code for now, just follow until the installation finishes.
Launch a Python Shell
Disclaimer — I am on windows, most of these commands will be windows specific. However a quick google search can quickly get the commands for a mac!
The great thing about programming is that almost anyone can get started. If you have a computer you can start coding in your command prompt (terminal for mac), and even on your phone you can download apps to code programs in Python. To launch a Python shell we need to open our command prompt, in Windows 10 you can do that by typing “cmd” in the bottom left search bar, and for the mac users you’ll look for the Terminal. Once it’s open you should see the window that makes you feel like a hacker.
Command Prompt in Windows 10
Launching a Python shell from here is very simple, just type in py (windows) and hit enter . Your command prompt should change slightly to:
Python Shell
From here we can type in any valid Python code! Let’s start by exploring data types and what they are.
Data Types
All programs essentially do one thing, they take in some data, manipulate that data, then output some data. How the program manipulates that data depends on that data’s type. The data types below are found in Python with many more, and may be found in some form in other languages.
Strings
Integers
Float
Boolean values (True or False)
Strings
Strings are a way to represent any sequence of text. The syntax for a string is wrapping the text in either a pair of double quotes , "" or single quotes '' . You cannot mix and match single and double quotes or Python will throw the following syntax error:
Syntax error when not properly closing the double quotes.
The proper way to write a string in Python is:
We can even write numbers inside of the double quotes:
String containing a number.
This is where the difference between data types because very important. Based on the data type Python will know how to handle certain operations. You’ll notice that the 2 is wrapped in quotes '2' , this tells Python that this is a string. If we try to add "2+2" something very weird happens.
When we try to add 2+2 inside of a string, it doesn’t return 4, it returns the string "2+2" . Why is that? Well it’s because Python isn’t evaluating the expression 2+2, because we wrapped the addition inside of the quotes Python is treating it as a string data type. We CAN add strings together which is called concatenation, but the result is not something you’d expect and we’ll cover that when we talk about variables.
However, when we write the expression 2+2, Python treats them as integers and knows to add them together, which introduces our next data type, Integers.
Integers and Floats
Python represents numerical data in two types, integers and floats. We can perform calculations using them with arithmetic operators because Python follows PEMDAS.
The integer data type is pretty simple, it is any positive or negative WHOLE number. If the number has any decimals, for instance 1.0 , Python with not treat them as an integer! Instead any number with decimals are part of the Float data type. The difference is subtle and more important when dealing with arithmetic operations. Python supports the typical arithmetic operations shown below:
Arithmetic operators
If you add, subtract, divide, or multiply an integer by a float, the result will ALWAYS be a float. This may not seem huge but if you’re performing any conditionals or checks that rely on an integer data type, your code may break!
Boolean data type
The last data type that I mentioned is a boolean, which can only be two values, True or False , the capitalization matters! These will become important when we talk about conditionals because we’ll use these boolean values to execute different blocks of code depending on the situation.
Is there a way to check for these different data types?
I’m glad you asked! Python provides us with an built-in function called type(argument) that will return the data type of the argument.
Built-in function type() returning one of every data type.
Wrapping up
I want to keep these articles as short as possible because I don’t want to flood anyone with information.
Recap:
Data can exist in different types in Python (and other languages) , and Python can perform specific operations depending on the type of data. The four types covered today were Strings, Integers, Floats and Booleans.
In the next part we’ll talk about Variables, which is a place to store data and we’ll see how we can manipulate these data types.
I encourage you to experiment with these data types, can you add a string and integer together? What about a string and a boolean? Does True + False evaluate to anything? Can you divide by 0? Is there infinity?
If you encounter any errors in the Python shell, try copying and pasting the error into google to understand why it’s happening. | https://medium.com/analytics-vidhya/learning-programming-fundamentals-using-python-de42d505bb77 | ['Kristian Roopnarine'] | 2020-03-26 09:19:53.178000+00:00 | ['Python', 'Coding', 'Developer', 'Computer Science', 'Programming'] |
React File Uploads to Rails | Going forward, your database is pretty set up to take in file uploads from users, but there’s one more step that you need to take and I’ll touch on that later. On your frontend you will find a few headaches here, there are many different ways that you can send files to the backend, and I’m only going to show the easiest way (in my opinion.)
To start off, you will need to install ‘axios’ (which is as simple as ‘npm install axios — save’ in your terminal) and then on the top of your React file you’ll want to import axios like so:
import axios from 'axios'
Let’s say you have a User signup page where a user has to fill out a bunch of information. In React.js you will just have a state that holds all of this information:
state = { username: '', password: '', fname: '', lname: '', age: '', email: '', bio: '', image: '', errors: [] }
As the user fills out the information the state gets updated to hold the users input. The input fields in your form will stay the same for your cookie cutter information, but when it comes to the HTML tag for file uploads it will look something like this:
<label>Profile Image</label> <input type="file" accept="image/jpeg" /// for images onChange={this.handleFileUpload} /> accept=".mp3,audio/*" /// for sound files \\\\\\\\ File upload function will look like this \\\\\\\\\\\ handleFileUpload = (e) => { this.setState({ image: e.target.files[0] }) }
When uploading files, it will be stored in an array. Since we are only accepting one file upload, it will always be at the zeroth position.
Your ‘handleSubmit’ function will do all of the heavy lifting of packaging up your state and sending it off to your backend. To send files over to the backend you need to use FormData. Soo, your function will have something that look like this in it:
handleSubmit = (e) => { e.preventDefault() const formData = new FormData() for (const property in this.state) { formData.append( property, this.state[property] ) }
Let’s break this down, since there’s a lot going on.
We are preventing default (so that the page doesn’t refresh after sending a form.) We are creating new FormData, which pretty much looks like an empty object. To take a deeper dive checkout this link. This is my gift to you. This for … in loop neatly packages up everything in your state with the key value pairs of your state. ( username: ‘Iggs’) including the file the user uploaded. FormData is quite difficult to work with since, from my experience there is no way to peer inside of it. Console.log will return an empty object, debugger will only view inside an empty object (when it very clearly holds all of the information from the state.) So in other words it’s a lot of guessing and checking if it’s your first time working with FormData.
Now the only thing that’s left to do is, sending this formData to the backend:
axios.post("http://localhost:3000/users", formData)
Boom, ezpz. Now going back to the backend you controller needs to be able to accept this file upload. But it’s no sweat because Rails is the best. In your controller that you are trying to attach this file to, you’ll need to permit them in your params:
params.permit(:username, :password, :fname, :lname, :age, :email, :bio, :image)
That’s about it.
“But Ignas, how do I send this information from the backend to the frontend?”
Thanks for asking, that isn’t to hard either if you’ve added the serializer gem mentioned before. Your serializer file will look something like this:
class UserSerializer < ActiveModel::Serializer attributes :id, :email, :fname, :lname, :bio, :age, :username, :image
include Rails.application.routes.url_helpers def image rails_blob_path(object.image, only_path: true) if object.image.attached? end end
Now when you fetch information about users from the backend it will send everything shown above that ‘attributes’ points to. The ‘image’ that will be sent down will be a super long string that you need to plop onto a url. Like so:
const imageUrl = "/rails/active_storage/blobs/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBBc3dCIiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--4b10997f040ab8eab876424dadcae0c4cd8caa14/20181224_113217.jpg" const link = `http://localhost:3000${imageUrl}` <img src={link}>
That’s about it. Thank you for taking the time to read this, I really do hope this helped out. | https://medium.com/swlh/react-file-uploads-to-rails-cc9c62e95a9d | ['Ignas Butautas'] | 2020-11-20 23:50:03.446000+00:00 | ['Files', 'Coding', 'React', 'Upload', 'Rails'] |
5 Scikit-Learn Must-Know Hidden Gems | Dataset Generators
Scikit-learn has plenty of dataset generators, which can be used to create artificial datasets with varying complexities and shapes.
For example, the make_blobs function creates “blobs”, or clusters of data, with any amount of samples, centers/clusters, and features/dimensions.
The values of the X and the y are:
When X is graphed and colored according to the labels y , the shape of the data can be visualized:
Scikit-learn has many other dataset creation functions:
make_moons(n_samples=100, noise=0.1)
make_circles(n_samples=100, noise=0.05)
make_regression(n_samples=100, n_features=1, noise=15)
make_classification(n_samples=100)
Pipelines
Pipelines allows for various methods to be combined into one singular model. This is especially the case in natural language processing (NLP) applications which require vectorizers or data that needs to be standardized or normalized. A pipeline can be created by combining several models together, in which data flows sequentially through the aggregate model. It has standard fitting and predicting capabilities, making the training process much more organized.
Various objects can fit into a pipeline:
Imputers. Do you have missing data? Try a Simple Imputer or a KNN Imputer.
Encoders. If your data is non-binary categorical, you may need to use a Label Encoder or a One-Hot Encoder.
NLP Vectorizers. If you are dealing with NLP data, use Count Vectorizers, TD-IDF Vectorizers, or Hash Vectorizers.
Numerical Transformations. Try standardizers, normalizers, and min-max scalers.
Grid-Search
A common task in machine learning is finding the right set of parameters in a model. Usually, one can either guess based on their knowledge of the task and the model or programmatically find the best set. sklearn has a built-in function — GridSearchCV — that automatically finds the best set of parameters for you to optimize model performance.
The GridSearchCV object takes in two parameters: firstly, the model object to be trained (in this case a Support Vector Machine classifier), and secondly, a dictionary describing the parameters of the model. Each key in the dictionary is one parameter from the model, where each value is a corresponding list or tuple of values in which the parameter may take.
After the grid search object is fitted, the best_params_ attribute can be used to output the best-performing parameter values for each of the model’s parameters. Other model parameters include tree depth in decision trees and number of voters in a random forest ensemble.
Validation Curves
To visualize a parameter’s effect on the model’s performance, use sklearn ’s validation_curve . It takes in a few parameters — the model, the parameter to be adjusted, a range of values for the parameter, and the number of folds. It is similar to a Grid-Search for one variable, and can help better visualize the results of a parameter shift.
The output of the validation_curve object is a tuple — one for the scores during training, and another for the testing scores. The number of rows in each represents the value of arrays for each of the parameter values, whereas each element in that array represents the value for each of the k folds.
When the results are plotted, the relationship between the parameter and the accuracy is clear.
This allows us to visualize the impact of tree depths on the accuracy. For instance. note that a tree depth of 5 or at tree depth of 6 performs reasonably well. Specifying a tree depth at anything further would cause overfitting, but one would need to evaluate this on testing accuracy to be sure.
K-Fold Cross Validation
Cross-validation is a method that gives much more accurate results than standard train_test_split methods (and actually requires less code!). With traditional train-test-splits, the data is randomly split into a training set and a testing set (usually a 7:3–8:2 ratio), the model being trained on the training set and evaluated on the testing set to truly measure the model’s ability to measure and not just to memorize. However, since each split is random, splitting the data ten times will yield ten different accuracies.
To address this issue, cross validation using k folds splits the data into k categories, training a model on k-1 folds and testing on the remaining 1 fold. After repeating this process, where each testing fold eventually covers the entire dataset, one arrives at a more complete and honest view of the accuracy. What’s better, there is no need to keep track of x-train, x-test, y-train, and y-test variables. The only downside to cross-validation is that it takes more time — but better results always have higher costs. | https://towardsdatascience.com/5-scikit-learn-must-know-hidden-gems-8249e5214a73 | ['Andre Ye'] | 2020-06-19 20:05:07.589000+00:00 | ['Machine Learning', 'Towards Data Science', 'Data Science', 'AI', 'Data Analysis'] |
The Trinity of AI — CrowdSource, OpenSource, & BlockChain | The fourth industrial revolution has started and while AI is at the forefront of that revolution, we must note that there are some fundamental blocks which are necessary for this revolution to thrive, sustain and drive the world economy and businesses.
We must look at these fundamental blocks closely to realise and comprehend what lies ahead and align our professions and businesses to ensure relevance. These fundamental blocks are Crowdsource, OpenSource and BlockChain and these are the very fabric of the 4th Industrial Revolution.
I don’t think that we would have seen the massive AI and Tech development we have seen over the last decade had OpenSource not existed and had companies not embraced it to further advance their knowhow and development.
OpenSource opened the gates for Crowdsourcing and somehow these two are very deeply linked, however they are different developments. With the help of Crowdsourcing, it became possible to harness the expertise of the crowd, from any remote location and from any time zone
Then came Blockchain which essentially is another form of Crowdsource or OpenSource because BlockChain is DLT (Distributed Ledger Technology) which is based on a decentralised consensus based system that does not allow a central authority to dictate. I see a lot of elements of OpenSource and CrowdSource in BlockChain.
It is now possible to harness the power of individual expertise, create powerful teams without hiring them, deliver results much faster though efficient selection of human resources, enable sharing and on demand work and delivery,
Benefits derived from OpenSource (Open source is freedom to use, study, modify and distribute Software, IP or any other Object or knowledge for any purpose, provided it comes under the OpenSource licence.)
Knowledge Sharing has become increasingly possible and rewarding. No need to code or learn from scratch when you can get a jump start from OpenSource libraries.
Debugging has become very easy.
Contributions are recognised and rewarded.
Fosters innovation and reduces the cost of innovation.
Creates a level playing field and helps technological progress.
Creates competition and therefore leads to advancement.
Benefits derived from CrowdSource (Crowdsourcing leverages the power of the crowd or teams of people either in their free time or for specific assignments to complete tasks in a much more economical way.)
Leverage the power of teams from anywhere and anytime.
Massive economic benefits both to the people and the organisations involved.
Tapping into crowd intelligence can solve problems faster and drive better results.
Diverse experience delivers better and more sustainable results / solutions.
Likelihood of better robustness for products / solutions delivered through crowdsourcing.
Creativity gets a massive boost.
Gives 100s of options and choices and usually results in the best solutions.
Benefits derived from BlockChain (A consensus driven, append only, distributed or decentralised ledger technology that is further secured through cryptographic hash functions and is just the right platform for exchange of money, ideas, digital goods etc in the gig economy.)
Most secure transactional platform.
Decentralised and therefore trust-less system.
Ensures anonymity and traceability.
Reduces the cost of transactions and eliminates payment middlemen.
Auto Executable contracts ensure better efficacy of business transactions.
Best payment platform for the gig economy.
Unification of global trade and payments is possible though blockchain.
All three — Crowdsource, OpenSource and BlockChain are the fundamental fabric of the 4th revolution. They are the means through which humanity will progress towards a better degree of equality, hopefully. | https://medium.com/dalla/the-trinity-of-ai-crowdsource-opensource-blockchain-ad1d747faacf | [] | 2018-09-28 06:54:12.774000+00:00 | ['Open Source', 'Blockchain', 'Revolution', 'Crowdsourcing', 'AI'] |
Are Programmers Headed Toward Another Bursting Bubble? | A friend of mine recently posed a question that I’ve heard many times in varying forms and forums:
“Do you think IT and some lower-level programming jobs are going to go the way of the dodo? Seems a bit like a massive job bubble that’s gonna burst. It’s my opinion that one of the only things keeping tech and lower-level computer science-related jobs “prestigious” and well-paid is ridiculous industry jargon and public ignorance about computers, which are both going to go away in the next 10 years. […]”
This question is simultaneously on point about the future of technology jobs and exemplary of some pervasive misunderstandings regarding the field of software engineering. While it’s true that there is a great deal of “ridiculous industry jargon” there are equally many genuinely difficult problems waiting to be solved by those with the right skill-set. Some software jobs are definitely going away but programmers with the right experience and knowledge will continue to be prestigious and well remunerated for many years to come; as an example look at the recent explosion of AI researcher salaries and the corresponding dearth of available talent.
Staying relevant in the ever changing technology landscape can be a challenge. By looking at the technologies that are replacing programmers in the status quo we should be able to predict what jobs might disappear from the market. Additionally, to predict how salaries and demand for specific skills might change we should consider the growing body of people learning to program. As Hannah pointed out “public ignorance” about computers is keeping wages high for those who can program and the public is becoming more computer savvy each year.
The Continuing Drive Towards Commodification
The fear of automation replacing jobs is neither new nor unfounded. In any field, and especially in technology, market forces drive corporations toward automation and commodification. Gartner’s Hype Cycles are one way of contextualizing this phenomenon.
Gartner’s 2017 Hype Cycle
As time goes on, specific ideas and technologies push towards the “plateau of productivity” where they are eventually automated. Looking at history one must conclude that automation has the power to destroy specific job markets. In diverse industries ranging from crop harvesting to automobile assembly technology advances have consistently replaced and augmented human labor to reduce costs. A professor once put it this way in his compilers course, “take historical note of textile and steel industries: do you want to build machines and tools, or do you want to operate those machines?”
In this metaphor the “machine” is a computer programming language. This professor was really asking: Do you want to build websites using JavaScript, or do you want to build the V8 engine that powers JavaScript?
The creation of websites is being automated by WordPress (and others) today. V8 on the other hand has a growing body of competitors some of whom are solving open research questions. Languages will come and go (how many Fortran job openings are there?) but there will always be someone building the next language. Lucky for us, programming language implementations are written with programming languages themselves. Being a “machine operator” in software puts you on the path to being a “machine creator” in a way which was not true of the steel mill workers of the past.
The growing number of languages, interpreters, and compilers shows us that every job-destroying machine also brings with it new opportunities to improve those machines, maintain those machines, and so forth. Despite the growing body of jobs which no longer exist, there has yet to be a moment in history where humanity has collectively said, “I guess there isn’t any work left for us to do.”
Pinsetters
Commodification is coming for us all, not just software engineers. Throughout history, human labor has consistently been replaced with non-humans or augmented to require fewer and less skilled humans. Self-driving cars and trucks are the flavor of the week in this grand human tradition. If the cycle of creation and automation are a fact of life, the natural question to answer next is: which jobs and industries are at risk, and which are not?
Who’s Automating Who?
AWS, Heroku, and other similar hosting platforms have forever changed the role of the System Administrator/DevOps engineer. Internet businesses used to absolutely need their own server master. Someone who was well versed in Linux; someone who could configure a server with Apache or NGINX; someone who could not only physically wire up the server, the routers, and all the other physical components, but who could also configure the routing tables and all the software required to make that server accessible on the public web. While there are definitely still people applying this skill-set professionally, AWS is making some of those skills obsolete — especially at the lower experience levels and on the physical side of things. There are very lucrative roles within Amazon (and Netflix, and Google…) for people with deep expertise in networking infrastructure, but there is much less demand at the small-to-medium business scale.
“Business Intelligence” tools such as SalesForce, Tableau and SpotFire are also beginning to occupy spaces historically held by software engineers. These systems have reduced the demand for in-house Database Administrators, but they have also increased the demand for SQL as a general-purpose skill. They have decreased demand for in-house reporting technology, but increased demand for “integration engineers” who automate the flow of data from the business to the third-party software platform(s). A field that was previously dominated by Excel and Spreadsheets is increasingly being pushed towards scripting languages like Python or R, and towards SQL for data management. Some jobs have disappeared, but demand for people who can write software has seen an increase overall.
Data Science is a fascinating example of commodification at a level closer to software. Scikit.learn, Tensorflow, and PyTorch are all software libraries that make it easier for people to build machine learning applications without building the algorithms from scratch. In fact, it’s possible to run a dataset through many different machine learning algorithms, with many different parameter sets for those algorithms, with little to no understanding of how those algorithms are actually implemented (it’s not necessarily wise to do this, just possible). You can bet that business intelligence companies will be trying to integrate these kinds of algorithms into their own tools over the next few years as well.
In many ways data science looks like web development did 5–8 years ago — a booming field where a little bit of knowledge can get you in the door due to a “skills gap”. As web development bootcamps are closing and consolidating, data science bootcamps are popping up in their place. Kaplan, who bought the original web development bootcamp (Dev Bootcamp) and started a data science bootcamp (Metis) has decided to close DevBootcamp and keep Metis running.
Content management systems are among the most visible of the tools automating away the need for a software engineer. SquareSpace and WordPress are among the most popular CMS systems today. These platforms are significantly reducing the value of people with a just a little bit of front end web development skill. In fact the barriers for making a website and getting it online have come down so dramatically that people with zero programming experience are successfully launching websites every day. Those same people aren’t making deeply interactive websites that serve billions of people, but they absolutely do make websites for their own businesses that give customers the information they need. A lovely landing page with information such as how to find the establishment and how to contact them is more than enough for a local restaurant, bar, or retail store.
If your business is not primarily an “internet business” it has never been easier to get a working site on the public web. As a result, the once thriving industry of web contractors who can quickly set up a simple website and get it online is becoming less lucrative.
Finally, it would border on hubris to ignore the physical aspect of computers in this context. In the words of Mike Acton: “software is not the platform, hardware is the platform”. Software people would be wise to study at least a little computer architecture and electrical engineering. A big shake up in hardware, such as the arrival of consumer grade quantum computers would (will) change everything about professional software engineering.
Quantum computers are still a ways off, but the growing interest in GPUs and the drive toward parallelization is an imminent shift. CPU speeds have been stagnant for several years now and in that time a seemingly unquenchable thirst for machine learning and “big data” has emerged. With more desire than ever to process large data-sets OpenMP, OpenCL, Go, CUDA, and other parallel processing languages and frameworks will continue to become mainstream. To be competitively fast in the near-term future, significant parallelization will be a requirement across the board, not just in high-performance niches like operating systems, infrastructure and video games.
Everybody Is Learning To Code
Websites are ubiquitous. The 2017 Stack Overflow Survey reports that about 15% of professional software engineers are working in an “Internet/Web Services” company. The Bureau of Labor Statistics expects growth in Web Development to continue much faster than average (24% between 2014 and 2024). Due to its visibility, there has been a massive focus on “solving the skills gap” in this industry. Coding bootcamps teach Web Development almost exclusively and Web Development online courses have flooded Udemy, Udacity, Coursera and similar marketplaces.
The combination of increasing automation throughout the Web Development technology stack and the influx of new entry level programmers with an explicit focus on Web Development has led some to predict a slide towards a “blue collar” market for software developers. Some have gone further, suggesting that the push towards a blue collar market is a strategy architected by big tech firms. Others, of course, say we’re headed for another bursting bubble.
Change in demand for specific technologies is not news. Languages and frameworks are always rising and falling in technology. Web Development in its current incarnation (“JS Is King”) will eventually go the way of Web Development of the early 2000’s (remember Flash?). What is new, is that a lot of people are receiving an education explicitly (and solely) in the current trendy web development frameworks. Before you decide to label yourself a “React developer” remember there were people who once identified themselves as “Flash developers”. Banking your career on a specific language, framework, or technology is a game of roulette. Of course it’s quite difficult to predict what technologies will remain relevant, but if you’re going to go all in on something, I suggest relying on The Lindy Effect and picking something like C that has already withstood the test of time.
The next generation will have a level of de facto tech literacy that Generation X and even Millennials do not have. One outcome of this will be that using the next generation of CMS tools will be a given. These tools will get better and young workers will be better at using them. This combination will definitely will bring down the value of low-level IT and web development skills as eager and skilled youngsters enter the job market. High schools are catching on as well, offering computer science and programming classes — some well educated high school students will likely be entering the workforce as programming interns immediately upon graduation.
Another big group of newcomers to programming are MBAs and data analysts. Job listings which were once dominated by Excel are starting to list SQL as a “nice to have” and even “requirement”. Tools such as Tableau, SpotFire, SalesForce, and other web-based metrics systems continue to replace the spreadsheet as the primary tool for report generation. If this continues more data analysts will learn to use SQL directly simply because it is easier than exporting the data into a spreadsheet.
People looking to climb the ranks and out-perform their peers in these roles are taking online courses to learn about databases and statistical programming languages. With these new skills they can begin to position themselves as data scientists by learning a combination of machine learning and statistical libraries. Look at Metis’ curriculum as a prime example of this path.
Finally, the number of people earning Computer Science and Software Engineering degrees continues to climb. Purdue, for example, reports that applications to their CS program have doubled over five years. Cornell reports a similar explosion of CS graduates. This trend isn’t surprising given the growth and ubiquity of software. It’s hard for young people to imagine that computers will play a smaller role in our futures, so why not study something that’s going to give you job security.
Rarity and Expectation
A common argument in the industry nowadays is around the idea that the education you receive in a four-year Computer Science program is mostly unnecessary cruft. I have heard this argument repeatedly in the halls of bootcamps, web development shops, and online from big names in the field such as this piece by Eric Elliott. The opposition view is popular as well, with some going so far as saying “all programmers should earn a master’s degree”.
Like Eric Elliott, I think it’s good that there are more options than ever to break into programming, and a 4 year degree might not be the best option for many. Simultaneously, I agree with William Bain that the foundational skills which apply across programming disciplines are crucial for career longevity, and that it is still hard to find that information outside of university courses. I’ve written previously about what skills I think aspiring engineers should learn as a foundation of a long career, and joined Bradfield in order to help share this knowledge.
Coding schools of many shapes and sizes are becoming ubiquitous, and for good reasons. There is quite a lot you can learn about programming without getting into the minutia of Big O notation, obscure data structures, and algorithmic trivia. However, while it’s true that fresh graduates from Stanford are competing for some jobs with fresh graduates from Hack Reactor, it’s only true in one or two sub-industries. Code school and bootcamp graduates are not yet applying to work on embedded systems, cryptography/security, robotics, network infrastructure, or AI research and development. Yet these fields, like web development, are growing quickly.
Some programming-related skills have already started their transition from “rare skill” to “baseline expectation”. Conversely, the engineering that goes into creating beastly engines like AWS is anything but common. The big companies driving technology forward — Amazon, Google, Facebook, Nvidia, Space-X, and so on — are typically not looking for people with a ‘basic understanding of JavaScript’. AWS serves billions of users per day. To support that kind of load an AWS infrastructure engineer needs a deep knowledge of network protocols, computer architecture, and several years of relevant experience. As with any discipline there are amateurs and artisans.
These prestigious firms are solving research problems and building systems that are truly pushing against the boundaries of what is possible. Yet they still struggle to fill open roles even while basic programming skills are increasingly common. People who can write algorithms to predict changes in genetic sequences that will yield a desired result are going to be highly valuable in the future. People who can program satellites, spacecraft, and automate machinery will continue to be highly valued. These are not fields that lend themselves as readily to a “3 month intensive program” as front end web development, at least not without significant prior experience.
Because computer science starts with the word “computer” it is assumed that young people will all have an innate understanding of it by 2025. Unfortunately, the ubiquity of computers has not created a new generation of people who de facto understand mathematics, computer science, network infrastructure, electrical engineering and so on. Computer literacy is not the same as the study of computation. Despite mathematics having existed since the dawn of time there is still a relatively small portion of the population with strong statistical literacy, and computer science is similarly old. Euclid invented several algorithms, one of which is used every time you make an HTTPS request; the fact that we use HTTPS every time we login to a website does not automatically imbue anyone with a knowledge of how those protocols work.
Bimodal Wage Distributions
More established professional fields often have a bimodal wage distribution: a relatively small number of practitioners make quite a lot of money, and the majority of them earn a good wage but do not find themselves in the top 1% of earners. The National Association for Law Placement collects data that can be used to visualize this phenomenon in stark clarity. A huge share of law graduates make between $45,00 and $65,000 — a good wage, but hardly the salary we associate with a “top professional”.
Distribution of salaries for people with a law degree, from NALP.
We tend to think that all law graduates are on track to becoming partners at a law firm when really there are many paths: paralegal, clerk, public defender, judge, legal services for businesses, contract writing, and so on. Computer science graduates also have many options for their professional practice, from web development to embedded systems. As a basic level of programming literacy continues to become an expectation, rather than a “nice to have”, I suspect a similar distribution will emerge in programming jobs.
While there will always be a cohort of programmers making a lot of money to push on the edges of technology, there will be a growing body of middle-class programmers powering the new computer-centric economy. The average salary for web developers will surely decrease over time. That said, I suspect that the number of jobs for “programmers” in general will only continue to grow. As worker supply begins to meet demand, hopefully we will see a healthy boom in a variety of middle-class programming jobs. There will also continue to be a top-professional salary available for those programmers who are redefining what is possible.
Regardless of which cohort of programmers you’re in, a career in technology means continuing your education throughout your life. If you want to stay in the second cohort of programmers you may want to invest in learning how to create the machines, rather than simply operate them. | https://medium.com/predict/are-programmers-headed-toward-another-bursting-bubble-528e30c59a0e | ['Tyler Elliot Bettilyon'] | 2018-10-11 21:47:41.285000+00:00 | ['Jobs', 'Web Development', 'Programming', 'AI', 'Silicon Valley'] |
3: Remember the bushfires to remember the virus | The floods had immense cost, for years subsequently — I note the 35 lives lost at the time, and $30 billion-plus of damage in more detail in articles about the concreting over of Brisbane — but they can leave no trace of the act of forgetting, which was almost a shocking as the flood itself.
So, as with the floods, there is every chance that these largest bushfires in the continent’s contemporary collective memory will not divert Australia from the path that created them. There is the same possibility with this virus, despite its much larger apparent impact. So now is the time to observe, to be fully in the moment, and to start thinking about, and projecting, what comes next.
It seems political leaders have little idea what next with an Australia not predicated on unsustainable growth, whether coal, metals, agriculture, aviation, tourism, property development — all of which, in the Australian mode at least, tend to be hugely carbon intensive and degenerative of natural systems. There is no coherent image of Australia put forward otherwise. The prime minister Scott Morrison is, after all, the man who held a lump of coal in his hand in parliament and said it was nothing to be afraid of. Was the almost unanimous rejection of Morrison’s hand, when offered as solace to bushfire victims, an implicit connection of his politics with those outcomes?
“By consistently choosing short-term economic growth, successive governments ignored climate science, prepared inadequately and left Australia’s natural environment and its people at the mercy of worsening climate events. They allowed a localised, often-manageable seasonal weather event to become a continuous catastrophe, limited neither by season nor by geography.” — Imogen Champagne, The Correspondent (2019)
A lump of destructive carbon, and some coal.
The Australian prime minister’s woeful response during the bushfires is reminiscent of Queen Elizabeth II’s to the Aberfan coal tip disaster, which many will have encountered for the first time in The Crown, recently — another carbon-related disaster, of course. This kind of deep lack of care for people and place is negating perhaps the most fundamental element of the raison d’être for government itself. It’s a clear abnegation of duty regarding the climate crisis, as the successful peoples’ suit in the Netherlands indicates. It’s not hard to imagine that Australian governments could be found similarly liable — this is a country ranked the worst of 57 countries on climate crisis policy, after all, and which actively blocked the climate negotiations in Madrid — and not ‘simply’ ethically, as current and future generations will surely find them, but also legally.
A passage in Ben Lerner’s novel ‘The Topeka School’ made me think of Morrison and his ilk recently. A character points out that Wile E. Coyote, chasing his nemesis the Road Runner, frequently ends up running off a high cliff. He remains suspended, however, impossibly suspended in mid-air. It’s only when Wile E. Coyote looks down, admitting his predicament, that he begins to slowly, but meaningfully, fall.
Australian politicians refuse to look down, as the country burns around them. They have nowhere else to go but down, but think they can somehow remain aloft, if they don’t look, if they don’t admit they are suspended in disbelief as well as mid-air, with no firm ground beneath their feet.
You can see Morrison, and his men, doing everything to avoid admitting their culpability. For decades it’s been clear. I was fresh off the plane in 2007 and it was immediately clear. The Australian governing elite, from Morrison to Murdoch, has no idea what form a new ground looks like, of another green world for the country, if it is different to the Australia of the past. There is no sense of a future, only a continued glorious past leading to the sunlit uplands of the present. Yet that present is on fire, under water, locked in its house or wearing a mask, due to air pollution—or now, the coronavirus. These fires, these floods, these viruses, they should all be whacking great punctuation points in that story from the past. Yet still they will not look down, hoping behind hope that they can remain suspended by their oblivion.
But these fires, the floods, and the viruses, have caught up with them. That scorched earth is rushing up to meet them and increasingly, it is likely that most Australians will begin to see that their leaders have not been defying gravity all along.
We will see the same scenario with Trump, and others like him, lobbying to restart the engines on the economy as soon as possible. They have no other ideas, after all. All Morrison has up his sleeve is pointedly going to the cricket during a bushfire, or to the rugby during a pandemic. These are attempts to brush near-existential crises under the carpet, clinging onto what he considers normalcy — really, stasis — rather than facing the systemic challenges affecting country and city, and learning how to rebuild a resilient Australia. They’re wearing blinkers, as well as masks.
We must write these moments to memory, and repeatedly return to them, before looking forward to something else.
Despite both being somewhat temporary, perhaps there is a difference in emotional response to bushfires and floods versus invisible viruses. I grew up on the edge of Sheffield, a short drive from the plague village of Eyam in Derbyshire. The memory of being tramped around it as a sullen teenager has never dimmed, due to the stories latent within the dark peak granite the village is constructed from. The village quarantined itself during the Black Death of 1665, preventing the spread of the plague from London to the north of England. It was an extraordinary act of self-sacrifice. At least a quarter of the village’s population died, yet they saved many thousands more in the surrounding towns. Walled grave plots still exist on the edge of the village, spaced apart to prevent contamination during funerals, as do the boundary stones in the woods around, hollowed out with holes for coins, which would be washed in water or vinegar, and exchanged for food and medicine. In the Riley graves, named after the farmer who owned the field, Elizabeth Hancock buried her husband and six children, all of whom died over the course of eight days. An early form of social distancing was practiced, with church services carried out in the woods, so people could stand apart from each other. As the plague arrived in infected fleas in a box of linen from London, it traced a path along trade routes, just as with COVID-19. | https://medium.com/slowdown-papers/3-remember-the-bushfires-to-remember-the-virus-885da9d90ff8 | ['Dan Hill'] | 2020-04-09 20:31:38.556000+00:00 | ['Politics', 'Coronavirus', 'Australia', 'Climate Change', 'Covid-19'] |
Why We Need To Give Respect To Aretha Franklin And The Dying Craft Of Artistry | Why We Need To Give Respect To Aretha Franklin And The Dying Craft Of Artistry
And to those who dare to be creators in this time of disrespect
When it was announced that the “Queen of Soul,” Aretha Franklin was dying, my initial reaction was sorrow at the imminent loss of a gem, who represented a time when being an artist was only possible if you were truly artful. And that could encompass whatever method of artistry that permitted that level of dedication and sacrifice — that is woefully missing these days.
Thanks to the climate of superficiality mixed in with heaps of self-adulation and pompousness, not to mention high doses of delusion, we can only afford to recognize over-night sensations, that are popular for being popular, and posses a background story that goes viral on the basis that it sticks to the “rags-to-riches phenomenon.”
Actual talent isn’t necessary these days, because it’s all about the race to spew out energetic tracks that are re-worked constantly to keep up with erratic change in temperature.
And what’s even more fascinating is how very little respect those of us have for the artists who did do the work, and produced the stuff that doesn’t just change the landscape of an industry, but also influences the mental trajectory for anyone who is lucky enough to have more than just a taste.
Lauryn Hill comes to mind when I think about the greats of my generation, who have given so much as a token of their generosity, which happens to be the most selfless and invaluable gesture of goodwill from originators.
Hill’s phenomenal masterpiece, The Miseducation of Lauryn Hill is currently taking up residence in the illustrious vault of the Library of Congress, and while that honor is well-deserved, there’s also the magnificence of the young Black woman who made me feel included in the narrative of delightful complexities — that can only be depicted with soulfully lyric banter.
The girl who displayed the language of hip hop with the Fugees and then elevated the universal appeal with her personalized affection for a genre she uncannily perfected, has spent the years since her Grammy winning days, staying relevant with controversy and growing disrespect from the public.
She’s now embroiled in a battle of wills against a nemesis who is determined to discredit her success at whatever cost. Whether he’s within his rights to do so, doesn’t really compare to the horror of existing in a world where a talentless Cardi B is considered worthier than a woman who spent her impressionable years making enough of an impression — to warrant an instinctual level of reverence — no matter what.
But the days of unfiltered artistry is dying out, and that’s what makes the passing of Aretha Franklin very hard to take.
As a Generation Xer, my exposure to the Queen and all the others who embodied that realm of never-ending hits that have been spiritually packaged as classics — was through the musical library of relatives. They were lucky enough to be present during an era that birthed the kind of shit that will never be replicated.
We will never again witness the startling audacity of hard-earned labor, that breeds the beginnings of fandom, that settles into fascination for what most of us can’t render — before becoming an endearing act of profound respect for the creator and what has been supremely created.
When I first heard the infectious “Giving Him Something He Can Feel” from the nineties girl group, En Vogue, I was enthralled and unaware that this was merely a sample from Franklin’s treasure chest. And then years later, I treated myself to a CD that contained the soundtrack from the movie Sparkle. That was my introduction to the breathtakingly alluring voice from an artist who used her art to generously remind us of how the greats make it look so easy.
Her career mimicked the dignity of her station and like her counterparts, she seamlessly evolved with the times and collaborated with the up and comers by lending her vocals to one of my faves — “I Knew You Were Waiting (For Me)” with the late George Michael. There was also the anthem of the eighties “Freeway of Love” that made the global trek, and managed to reach me in Lagos, Nigeria.
And all through her stunning and incomparable trajectory, there was always a feeling of practiced security whenever she made appearances.
From the performance at the inauguration of the very first Black president of the United States to being the first woman inducted into the Rock and Roll Hall of Fame to receiving the Presidential Medal of Freedom — and all the plethora of honors that formulate the foundation of a national treasure — Franklin evoked exactly what Barack Obama so aptly surmised.
“American history wells up when Aretha sings.” “Nobody embodies more fully the connection between the African-American spiritual, the blues, R&B, rock and roll — the way that hardship and sorrow were transformed into something full of beauty and vitality and hope.”
While that sounds spectacularly accurate, for me it always comes back to the painful demise of pure artistry, and how the precious few who did it for the glory of love and the commitment to laborious requirements, are rightfully leaving us with evidential capacity of what once was, and will never be again.
It’s sensationally gratifying that the Queen of Soul was once paired with woman who also electrified us with the symbolic translation of melodic jewels that will never fade. “A Rose Is Still a Rose” is the surviving magic between Franklin and Lauryn Hill that happened a decade ago, and resulted in a gold-plated album for the older royalty.
As we bid adieu to another orchestrator of what we fondly rely on when we need to be rescued from the dullness of the present, there’s the wonderment of how we assumed that the good times would continue to roll — with or without the blessing from above.
The future of creators in these hostile times signals a forecast that’s not so favorable because despite the palette for excellence, we’ve succumbed to the theory of instant gratification, and the branding of mediocre entries that have enough plastered hearts to secure misplaced endorsements.
The passing of legends serves as the emptying of vessels of religion, that united us once before, when all we had was the rhythm of our hearts that followed the beat to manuscripts of memorable feats — tracing our familial and selfied scrapbook.
How will it all be measured decades from now, when the dust blows away the layers to reveal pebbles that aren’t strong enough to resist the windfall of the majestic yesteryears — that will still stand firm with fastened nostalgia?
Thankfully, the host of angels that are assembling don’t have to worry because they did what needed to be done without asking anything in return.
And that’s why we give complete respect to Aretha Franklin, and the artistry of her craft that is slowing leaving us. We respect those who are quietly paying homage to the wealth of fortitude and divine adherence to the product — and it’s polished finish.
The sparkle will never stop flashing for attention, even when the Queen takes her final bow. | https://nilegirl.medium.com/why-we-need-to-give-respect-to-aretha-franklin-and-the-dying-craft-of-artistry-d4329509ff95 | ['Ezinne Ukoha'] | 2018-08-16 18:08:53.483000+00:00 | ['Death', 'Culture', 'Icons', 'Life Lessons', 'Music'] |
They All Hate Mr. Beltracchi Now | They All Hate Mr. Beltracchi Now
When a fake is sold for millions, should you envy the conman or pity the buyer?
Self-portrait of Gustav Klimt in Wolfgang Beltracchi’s interpretation. On the top right, Beltracchi as a saint watches over the scene. (Painting by Wolfgang Beltracchi, 2017)
The truth
When Don sat in his favorite chair that day, he couldn’t wipe the smugness off his face. In less than five hours his guests would be sitting at the dinner table, sipping champagne, peering through the glass wall at his fabulous winter garden. They would be chirping and singing like little prairie birds animated by the fireplace, until Susan, expertly sat at the top of the table would point to the wall on her left and say ‘Well, Don. Please don’t tell me you got a Matisse! Aren’t those insanely expensive?’.
Don would then cock his head a little and tell her an anecdote he heard in the art world about how Marthe Bibesco, the famous Romanian socialite, held Matisse all night long at her bosom before he painted ‘La Blouse Roumaine’.
Little did Don know, as he was fantasizing about entertaining his guests with imaginary tales of artists he knew shit about, that the true author of the Matisse he just purchased was already flat ironing a Campendonk in his Kölner studio.
The horror
Owners of art pieces worldwide looked over their shoulders in horror at the paintings adorning their walls when the news of a German serial forger spread internationally in 2010. It is believed that up to three hundred Beltracchi counterfeits had passed from one gallery to another, with some being auctioned at Christie’s for millions of euros.
Wolfgang Beltracchi and his wife, Helene, scammed art collectors in Europe for decades. Their method was failproof: they would identify a presumably missing painting from some second-tier’s painter collection and pretend to have magically discovered it in Helene’s grandfather’s heritage.
Beltracchi’s scam was like a puzzle with a few missing components: the pieces of information the collectors held in their hands were all real. The rest… well, was made up. To prove the authenticity of his merch, Beltracchi came up with various tricks — the standard counterfeit of documents, usage of frames from the 1900s bought for a few bucks in flee markets, and, the most impressive of all: he dressed up his wife to pose as her grandmother.
Yup. Helene sat in front of the camera, hair pulled up in a vintage hairdo, a rigid smile on her face, and pretended to be Josephine Jägers, the owner of a legit art collection.
Helene Beltracchi posing as her grandmother Josephine Jaegers, in front of artwork forged by her husband.
Collectors worldwide bought it and craved for more. So Beltracchi provided: Max Ernst, Andre Derain, Max Pechstein, Kees van Dongen, Heinrich Campendonk, and Fernand Leger.
That’s a mouthful worth millions.
The masterpiece(s)
His career as a forger came to an end in 2010, when he and his wife were arrested and sent to prison. The culprit (isn’t it always?) was a mislabeled white pigment that contained titanium dioxide. The forensic analyst must have thought it was Christmas when Beltracchi’s painting started looking like a crime scene.
‘Rotes Bild mit Pferden’ by Heinrich Campendonk, painted by Beltracchi. The painting that led to Beltracchi’s arrest.
I won’t spoil you the pleasure of discovering how Beltracchi was constructing his counterfeits. The efforts that went into producing a painting were truly unconventional! His genius stemmed not just from his undeniable talent, nor his elaborate techniques, but rather from his dazzling process of ‘filling in’ missing pieces of history.
In 2013 a documentary called ‘Beltracchi — The Art of Forgery’ was released in Germany, displaying a fun, easygoing Beltracchi educating the public about his philosophy, the methods of forgery he used in the past, and his new life after his sentence in jail.
You can watch the trailer with English subtitles below.
Outro
When Beltracchi’s scam was discovered, the art world was fuming. The bomb under their behinds was detonated with a va-va-voom, swiping their sexy credibility out the door. While the Susans and Dons of the world were weeping because of financial loss (understandably), the galleries were claiming Beltracchi committed a crime against humanity.
His sin? He planted realistic fabrications with no historical value.
The buyers are never looking for just a pretty picture, they want to own a feeling, be it anguish, terror, or hope. For example, the feeling that poured in Matisse’s veins when he painted ‘La Blouse Roumaine’? No Beltracchi in the world will ever be able to evoke such a thing.
I bid you farewell with my favorite quote from Mr. Beltracchi himself:
“Max Ernst’s widow said my forgeries were her husband’s most beautiful works.”
Oh, and don’t forget: Next time you order something at Christie’s remember that titanium dioxide pigments were only invented in the 1920s. | https://medium.com/literally-literary/they-all-hate-mr-beltracchi-now-22b750d5bf89 | ['Elise Bona'] | 2020-10-08 05:42:42.430000+00:00 | ['Essay', 'Nonfiction', 'History', 'Art', 'Artist'] |
How To Use GraphQL APIs in Vue.js Apps | GraphQL is a query language made by Facebook for sending requests over the internet. It uses its own query but still sends data over HTTP. It uses one endpoint only for sending data.
The benefits of using GraphQL include being able to specify data types for the data fields you are sending and being able to specify the types of data fields that are returned.
The syntax is easy to understand, and it is simple. The data are still returned in JSON for easy access and manipulation. This is why GraphQL has been gaining traction in recent years.
GraphQL requests are still HTTP requests. However, you are always sending and getting data over one endpoint. Usually, this is the graphql endpoint. All requests are POST requests, no matter if you are getting, manipulating, or deleting data.
To distinguish between getting and manipulating data, GraphQL requests can be classified as queries and mutations. Below is one example of a GraphQL request:
{
getPhotos(page: 1) {
photos {
id
fileLocation
description
tags
}
page
totalPhotos
}
}
In this story, we will build a Vue.js app that uses the GraphQL Jobs API located at https://graphql.jobs /to display jobs data. To start building the app, we first install the Vue CLI by running npm i @vue/cli . We need the latest version of Node.js LTS installed. After that, we run vue create jobs-app to create new Vue.js project files for our app.
Then, we install some libraries we need for our app, which include a GraphQL client, Vue Material, and VeeValidate for form validation. We run:
npm i vue-apollo vue-material vee-validate@2.2.14 graphql-tag
This installs the packages. Vue Apollo is the GraphQL client, and graphQL-tag converts GraphQL query strings into queries that are usable by Vue Apollo.
Next, we are ready to write some code. First, we write some helper code for our components. We add a mixin for making the GraphQL queries to the Jobs API. Create a new folder called mixins , and add a file called jobMixins.js to it. Then in the file, we add:
import { gql } from "apollo-boost"; export const jobsMixin = {
methods: {
getJobs(type) {
const getJobs = gql`
query jobs(
$input: JobsInput,
){
jobs(
input: $input
) {
id,
title,
slug,
commitment {
id,
title,
slug
},
cities {
name
},
countries {
name
},
remotes {
name
},
description,
applyUrl,
company {
name
}
}
}
`;
return this.$apollo.query({
query: getJobs,
variables: {
type
}
});
}, getCompanies() {
const getCompanies = gql`
query companies{
companies {
id,
name,
slug,
websiteUrl,
logoUrl,
twitter,
jobs {
id,
title
}
}
}
`;
return this.$apollo.query({
query: getCompanies
});
}
}
}
These functions will get the data we require from the GraphQL Jobs API. The gql in front of the string is a tag. A tag is an expression, which is usually a function that is run to map a string into something else.
In this case, it will map the GraphQL query string into a query object that can be used by the Apollo client.
this.$apollo is provided by the Vue Apollo library. It is available since we will include it in main.js .
Next, in the view folder, we create a file called Companies.vue , and we add:
<template>
<div class="home">
<div class="center">
<h1>Companies</h1>
</div>
<md-card md-with-hover v-for="c in companies" :key="c.id">
<md-card-header>
<div class="md-title">
<img :src="c.logoUrl" class="logo" />
{{c.name}}
</div>
<div class="md-subhead">
<a :href="c.websiteUrl">Link</a>
</div>
<div class="md-subhead">Twitter: {{c.twitter}}</div>
</md-card-header> <md-card-content>
<md-list>
<md-list-item>
<h2>Jobs</h2>
</md-list-item>
<md-list-item v-for="j in c.jobs" :key="j.id">{{j.title}}</md-list-item>
</md-list>
</md-card-content>
</md-card>
</div>
</template> <script>
import { jobsMixin } from "../mixins/jobsMixin";
import { photosUrl } from "../helpers/exports"; export default {
name: "home",
mixins: [jobsMixin],
computed: {
isFormDirty() {
return Object.keys(this.fields).some(key => this.fields[key].dirty);
}
},
async beforeMount() {
const response = await this.getCompanies();
this.companies = response.data.companies;
},
data() {
return {
companies: []
};
},
methods: {}
};
</script> <style lang="scss" scoped>
.logo {
width: 20px;
} .md-card-header {
padding: 5px 34px;
}
</style>
It uses the mixin function that we created to get the companies’ data and displays it to the user.
In Home.vue , we replace the existing code with the following:
<div class="home">
<div class="center">
<h1>Home</h1>
</div>
<form
<md-field :class="{ 'md-invalid': errors.has('term') }">
<label for="term">Search</label>
<md-input type="text" name="term" v-model="searchData.type" v-validate="'required'"></md-input>
<span class="md-error" v-if="errors.has('term')">{{errors.first('term')}}</span>
</md-field> Home @submit ="search" novalidate> Search {{errors.first('term')}} <md-button class="md-raised" type="submit">Search</md-button>
</form>
<br />
<md-card md-with-hover v-for="j in jobs" :key="j.id">
<md-card-header>
<div class="md-title">{{j.title}}</div>
<div class="md-subhead">{{j.company.name}}</div>
<div class="md-subhead">{{j.commitment.title}}</div>
<div class="md-subhead">Cities: {{j.cities.map(c=>c.name).join(', ')}}</div>
</md-card-header> <md-card-content>
<p>{{j.description}}</p>
</md-card-content> <md-card-actions>
<md-button v-on:click.stop.prevent="goTo(j.applyUrl)">Apply</md-button>
</md-card-actions>
</md-card>
</div>
</template> <script>
import { jobsMixin } from "../mixins/jobsMixin";
import { photosUrl } from "../helpers/exports"; export default {
name: "home",
mixins: [jobsMixin],
computed: {
isFormDirty() {
return Object.keys(this.fields).some(key => this.fields[key].dirty);
}
},
beforeMount() {},
data() {
return {
searchData: {
type: ""
},
jobs: []
};
},
methods: {
async search(evt) {
evt.preventDefault();
if (!this.isFormDirty || this.errors.items.length > 0) {
return;
}
const { type } = this.searchData;
const response = await this.getJobs(this.searchData.type);
this.jobs = response.data.jobs;
}, goTo(url) {
window.open(url, "_blank");
}
}
};
</script> <style lang="scss">
.md-card-header {
.md-title {
color: black !important;
}
} .md-card {
width: 95vw;
margin: 0 auto;
}
</style>
In the code above, we have a search form to let users search for jobs with the keyword they entered. The results are displayed in the card.
In App.vue , we replace the existing code with the following:
<div id="app">
<md-toolbar>
<md-button class="md-icon-button"
<md-icon>menu</md-icon>
</md-button>
<h3 class="md-title">GraphQL Jobs App</h3>
</md-toolbar>
<md-drawer :md-active.sync="showNavigation" md-swipeable>
<md-toolbar class="md-transparent" md-elevation="0">
<span class="md-title">GraphQL Jobs App</span>
</md-toolbar> @click ="showNavigation = true"> menu GraphQL Jobs App GraphQL Jobs App <md-list>
<md-list-item>
<router-link to="/">
<span class="md-list-item-text">Home</span>
</router-link>
</md-list-item> <md-list-item>
<router-link to="/companies">
<span class="md-list-item-text">Companies</span>
</router-link>
</md-list-item>
</md-list>
</md-drawer> <router-view />
</div>
</template> <script>
export default {
name: "app",
data: () => {
return {
showNavigation: false
};
}
};
</script> <style lang="scss">
.center {
text-align: center;
} form {
width: 95vw;
margin: 0 auto;
} .md-toolbar.md-theme-default {
background: #009688 !important;
height: 60px;
} .md-title,
.md-toolbar.md-theme-default .md-icon {
color: #fff !important;
}
</style>
This adds a top bar and left menu to our app and allows us to toggle the menu. It also allows us to display the pages we created in the router-view element.
In main.js , we put:
import Vue from 'vue'
import App from './App.vue'
import router from './router'
import store from './store'
import VueMaterial from 'vue-material';
import VeeValidate from 'vee-validate';
import 'vue-material/dist/vue-material.min.css'
import 'vue-material/dist/theme/default.css'
import VueApollo from 'vue-apollo';
import ApolloClient from 'apollo-boost'; Vue.config.productionTip = false
Vue.use(VeeValidate);
Vue.use(VueMaterial);
Vue.use(VueApollo);
uri: '
request: operation => {
operation.setContext({
headers: {
authorization: ''
},
});
}
}); const client = new ApolloClient({uri: ' https://api.graphql.jobs' request: operation => {operation.setContext({headers: {authorization: ''},});}); const apolloProvider = new VueApollo({
defaultClient: client,
}) new Vue({
router,
store,
apolloProvider,
render: h => h(App)
}).$mount('#app')
This adds the libraries we use in the app (such as Vue Material) and adds the Apollo Client to our app so we can use them in our app.
The this.$apollo object is available in our components and mixins because we inserted apolloProvider in the object we use in the argument of new Vue .
In router.js , we put:
import Vue from 'vue'
import Router from 'vue-router'
import Home from './views/Home.vue'
import Companies from './views/Companies.vue' Vue.use(Router) export default new Router({
mode: 'history',
base: process.env.BASE_URL,
routes: [
{
path: '/',
name: 'home',
component: Home
},
{
path: '/companies',
name: 'companies',
component: Companies
}
]
})
Now we can see the pages we created when we navigate to them. | https://medium.com/better-programming/how-to-use-graphql-apis-in-vue-js-apps-58414878867b | ['John Au-Yeung'] | 2019-09-30 23:49:30.368000+00:00 | ['GraphQL', 'Programming', 'JavaScript', 'Apollo', 'Vuejs'] |
What Google Wants: Two Crucial SEO Ranking Factors | One Big SEO Mystery Demystified
The Google Search Engine Ranking Algorithm is one of life’s great mysteries. Or is it?
They’ll never reveal exactly how the algo works but we can make an educated guess as to the most important ranking factors.
Legend has it there are 200+ data points that the Google algo takes into account when they decide who’s going to rank where.
This legend came into being circa 2009. Matt Cutts (the former head of Google’s Webspam Team) let it slip at a digital marketing conference that there were over 200 variables baked into the Google algorithm.
Two hundred different factors? Sounds complicated.
It is…and it isn’t. Simple common sense can help us distill that list down to what really moves the needle.
While Google may crawl a website and take note of hundreds of different items, two of them stand above all. Or more accurately, the 200+ factors can be grouped under two major headings for the most part.
Relevancy and Authority.
Combined, both factors take into account just about every important data point Google would ever consider.
Factor One: Website Relevancy
Common sense dictates that in order for your website to rank for a specific search phrase, your site must be relevant to said phrase.
If someone were to Google the phrase blue widgets, then your site/page had better be relevant to blue widgets if you intend on claiming one of those coveted page one rankings positions.
Relevancy is THE number one factor that determines whether you’re even eligible for a particular search term ranking. I always tell my clients, that relevancy is what gets you into the game and you have to be in it to win it.
On-Page Relevancy Factors
For the most part, relevancy is going to be determined by your on-page ranking factors. You’re able to control your level of relevancy via the content on your website or webpage.
Be sure to pay attention to the following when optimizing your website for on-page relevancy:
Domain Name: A keyword-friendly domain name is a huge relevancy signal. An EMD (exact match domain) or PMD (partial match domain) can provide a huge rankings boost provided all your other ranking signals are dialed in properly.
A keyword-friendly domain name is a huge relevancy signal. An EMD (exact match domain) or PMD (partial match domain) can provide a huge rankings boost provided all your other ranking signals are dialed in properly. URL Structure: Including your keywords in the URL’s of your website’s subpages will help optimize your inner pages for valuable long tail keywords. For example, your product and/or category pages are naturally going to include keyword-friendly product names and descriptions.
Including your keywords in the URL’s of your website’s subpages will help optimize your inner pages for valuable long tail keywords. For example, your product and/or category pages are naturally going to include keyword-friendly product names and descriptions. Meta Title: A well-crafted meta title is a necessity to entice search engine users to click on your link and visit your website. Your meta title should be properly descriptive and include your primary keyword while still being grammatically correct.
A well-crafted meta title is a necessity to entice search engine users to click on your link and visit your website. Your meta title should be properly descriptive and include your primary keyword while still being grammatically correct. Meta Description: While not as strong a ranking factor as the meta title, your meta description is the perfect opportunity to expand upon your primary keyword description while working in other variations or related keywords. Again, being grammatically correct above all is a necessity.
While not as strong a ranking factor as the meta title, your meta description is the perfect opportunity to expand upon your primary keyword description while working in other variations or related keywords. Again, being grammatically correct above all is a necessity. Title Tags: Your H1, H2, and other paragraph heading tags are perfect for mixing in other long-tail keyword variations or related keywords. The more context you’re able to provide Google by including a wide variety of related terms, the better off you’ll be.
Your H1, H2, and other paragraph heading tags are perfect for mixing in other long-tail keyword variations or related keywords. The more context you’re able to provide Google by including a wide variety of related terms, the better off you’ll be. Text Content: Of course, your primary on-page content will allow you to fully expand upon your topic/product and work in various related long-tail keywords and synonyms. Be careful not to go overboard however and run the risk of over-optimizing your content.
Of course, your primary on-page content will allow you to fully expand upon your topic/product and work in various related long-tail keywords and synonyms. Be careful not to go overboard however and run the risk of over-optimizing your content. Images: Image file names are a simple way to increase page relevancy to your target keyword using related keywords. The Alt Text and Alt Description fields can help as well if you complete them with some keyword friendly info.
Image file names are a simple way to increase page relevancy to your target keyword using related keywords. The Alt Text and Alt Description fields can help as well if you complete them with some keyword friendly info. Schema Markup: Schema Markup or Structured Data enables you to add code to your website that search engines can use to better understand and classify your website content. It’s a great opportunity to tell Google exactly what your site or page is about.
The above list is a great start to help make sure all on-page ranking factors are dialed in for maximum keyword relevancy.
Relevancy Is Primarily Determined By Your On-Page Ranking Factors
Off-Page Relevancy Factors
Google doesn’t just stop at your own website when checking for relevancy signals.
Your own social media profiles are likely to include content themed around your industry vertical and should also link back to your website. Google expects this and your schema markup code allows you to point out your most important business social media profiles to solidify that connection.
Mentions of your website on other relevant blogs/sites can be a very strong relevancy signal in the eyes of Google. It just makes sense that other websites catering to your target audience may end up sharing your content on their websites. We’re always linking out to supporting sources on 3rd party blogs from within our own blog posts.
Other relevancy signals may include industry related directory listings, niche related press releases, resource page listings, etc.
Geographic Relevancy
Relevancy isn’t just about keyword or topical relevancy. We also need to be aware of and optimize for geographic relevancy. Optimizing your website to help reach a local audience is the easiest way to help your business grow.
Greater local web visibility will ultimately mean more local traffic, more local clients and more sales. You can set up your website to blanket a single neighborhood, an entire town, a county, a state or an even larger geographic region.
Your goal is to make your website as relevant as possible to a specific location so that local consumers are able to find your content, products or services with a simple search engine query.
You can do this by making sure the following steps are an integral part of your SEO strategy:
Dedicated Website Contact Us Page
Business Address in Website Footer
Google My Business Listing
Dedicated Location Pages for Multi-Location Business
Business Citations
Geo-Relevant Content
Just as you’d make your webpages keyword relevant, you’re able to make them geographically relevant as well.
Factor Two: Domain Authority
As mentioned above, relevancy gets you in the game. Simply being relevant to a particular keyword or topic doesn’t guarantee top rankings — it takes more than that to convince Google that your website is worthy of a page one ranking.
There are 200+ factors that the Google Algorithm considers when deciding which sites rank on page one and which ones get buried on page two and beyond.
Make no mistake about it though — backlinks are the #1 off-page ingredient for ranking at the top of the heap for those highly competitive keywords that drive tons of traffic to your website.
Step one is always dialing in the on-page ranking factors of your website to make it as SEO friendly as possible. From there, the key is accumulating high domain authority backlinks from other 3rd party sites.
Each backlink is almost considered a vote in favor of your site over the competition. The catch to this is that these backlinks need to be coming from other relevant and authoritative sources. Links from irrelevant or “spammy” websites won’t necessarily help, and may in fact hurt your ranking efforts so be careful.
Building backlinks is almost akin to running an online PR campaign. By getting your name out there on the web, you’re increasing your site’s credibility and authority. Google can’t rank what they aren’t aware of, hence the PR comparison.
SEO isn’t just “what you know” (relevant content), it’s also “who you know” (authoritative backlinks).
SEO Is “What You Know” And “Who You Know”
The more powerful your backlink profile, the more your domain authority increases and the greater your chance of moving up the search engine rankings.
While relevancy may be enough for some low competition keywords, you’ll also need to work on establishing a long term link building strategy to improve domain authority and help with the more competitive/stubborn keywords. You should sprinkle in citation building, social sharing, press releases and other off-page techniques to help reinforce that all-important keyword relevancy.
One other thing to keep in mind is link velocity — the speed at which you earn inbound links. You’ll want more of a slow and steady progression so as to appear much more natural in the eyes of Google. Dumping thousands of links on your website in a very short period of time is a recipe for disaster. This is the very reason that SEO is a long-term strategy and not something that usually pays immediate dividends.
That’s the bare bones of what Google is looking for in terms of optimized content — Relevancy and Authority. It really is that simple. Kind of. With a twist. Maybe you are better off leaving this stuff to the pros like MAXPlaces … | https://maxplaces.medium.com/what-google-wants-two-crucial-seo-ranking-factors-dd90b0dd967e | ['Maxplaces Marketing'] | 2019-07-08 19:47:23.328000+00:00 | ['Seo Agency', 'Google', 'SEO'] |
Optimization: The Intuitive Process at the Core of AI | Artistic Representation of a Neural Network
Multi-dimensional optimization problems are at the core of many computer programs set to revolutionize several industries. One important use for this type of problem is with neural networks, a prominent type of AI. In this case, the current elevation of the explorer is how well the AI performs a task, and each input is how active a given neuron or neuron connection path is in the neural network. The structure of the network is complex, but the principle is still to find the best possible combination of inputs. The activations of each neuron and connection changes slightly every time the program runs, and the network performs its task better and better. This is just like how the explorer got progressively closer to the summit, but the summit here is a well performing neural network. AIs trained in this way will do tasks all the way from recognizing handwritten text to being the brains of self-driving cars.
Airbus Titanium Seat Bracket Made with Generative Design
Another important use is in generative design AI. Generative design software takes in several requirements from an engineer such as how strong a part needs to be and what material it will be made out of. For example, the part above is a bracket to hold an airplane seat to the floor, and will be made out of titanium. The computer is then tasked with designing the part to meet the requirements, but be as lightweight as possible. Again, from a starting geometry, the part becomes progressively lighter with each small change. Once no small change could possibly make the part better, then it is considered optimized. The parts made with this method take on strange alien-like shapes like the seat bracket, but are often 30% lighter, or more, than conventional parts which is a big deal in a lot of applications. This will mean lighter, faster cars, bridges that need less material to build, airplanes that can carry more weight, and mostly likely enable some new inventions that were not possible without it.
In addition to the two types discussed above, AI has many other forms, almost all of which rely on optimization. Even with all of the impact that AI is poised to have on our lives, it is almost comforting to know that, fundamentally, it is just a computer solving the problem of the blind explorer. | https://medium.com/swlh/optimization-the-intuitive-process-at-the-core-of-ai-10b15df14949 | ['Aidan Gould'] | 2020-12-04 18:53:51.921000+00:00 | ['AI', 'Optimization', 'Machine Learning', 'Generative Design', 'Gradient Descent'] |
Three Different Things: April 7, 2020 | Three Different Things: April 7, 2020
Social Media in Isolation, Tech Bubbles, and Digital-Only Business
(FTA)
Now — and I mean this in a careful way — I’m thinking synthetic opioids, narcotics, have relevance here. Opioids in a medical context are incredibly important for pain management after surgery. You would not want a world that did not have opioid-based pain killers. They can help your grandmother with her hip replacement. They can also destroy your life in another context. These technologies are life-giving and powerful, and we wouldn’t want to not have them. At the same time, if you’re spending your day on Twitter right now, it’s shredding your psychological health. It’s the physical equivalent of sitting here with drain cleaner, taking shots every hour. But if you’re on a Zoom call with your parents or cousins or something, it could be giving you the exact opposite effect!
I’d like to see a PSA/Health campaign that shares how certain online activity (Twitter, TikTok) can affect psychological health. It can be like inviting yourself to be gaslit. I think one-way, one-to-many communication (any service that allows anonymous posting with likes) is ultimately the culprit.
As the article mentions, I’m reminded that some tech is truly social. I used Facebook Messenger for the first time in years to have a video chat with extended family. Facebook is, well… Facebook… but video chat over Messenger was a great experience. I’m thankful Messenger exists at the moment.
2. So This Is How the Tech Bubble Finally Ends: Fully Charged
And just as in the aftermath of the last crash, the seeds of that next boom may already be being planted. Zoom Video Communications Inc., the video conferencing service, revealed this week that it had gone from 10 million daily active users to 200 million thanks to the crisis. Grocery delivery has become an essential service. Amazon.com Inc., is facing plenty of criticism for its treatment of workers, but the company is the backbone of the American way of life.
2020, the year digital was proven essential. Digital-only is now truly a way of life. It’ll be a way of business too.
With all this extra online activity, ad inventories must be overflowing. And with marketing budgets going way down, I expect CPMs to follow.
3. All Microsoft events will be digital-only until July 2021
As a company, Microsoft has made the decision to transition all external and internal events to a digital-first experience through July 2021.
Not surprised to see Microsoft taking the lead here and proclaiming events digital-only over a year out. Large conferences and events will never be the same. | https://medium.com/early-hours/three-different-things-april-7-2020-2cbd308f6a0d | ["Sean O'Brien"] | 2020-04-07 12:11:19.501000+00:00 | ['Social Media', 'Advertising', 'Digital Transformation', 'Facebook'] |
Synapse Team Spotlight: Getting to know Matt Freeland, Product Engineer | What is your role at Synapse and what are some of your responsibilities?
Officially a Product Engineer. One of my main responsibilities is assisting our implementation and account management teams in onboarding platforms. I’m their direct SE while they’re in the process of using the sandbox to learn the products and start developing their applications to interact with it. I also do some reporting stuff, working in our databases to pull reports for our compliance & engineering teams and do some general support engineering stuff. Additionally, I contribute to our Backend codebase.
Did you always know you wanted to become an engineer?
Kind of. Even before I was in an actual engineering role, it might seem ridiculous but I’ve been using Excel and doing a couple of art projects in various graphics programs like Photoshop. I had been introduced to the world of scripting so I would script automated tasks in Excel, I’d make macro-enabled spreadsheets and stuff like that so I did a little bit of lightweight coding and I’ve been doing that for basically my entire working life. Anything I could create shell scripts for or make macro-enabled spreadsheets for, I would do it and when I started learning Python it just all clicked for me and I realized yep, Engineer is where I want to be.
Once you made that decision, how did you know what the next step would be from there?
I got laid off from an industry that is very cyclical and is prone to bulk layoffs and decided that I wanted better stability and I had plenty of time and runway to make a change. I decided the best way would be to go to a programming boot camp. So I did with the express idea of becoming a web developer not yet sure that I wanted to be a back-end engineer at that point but as I went through the courses I kind of gravitated towards Python and Flask specifically and that is where my focus is right now.
What drew you to joining Synapse originally?
Initially and before I knew anything about the product, it was a stack that I was comfortable with. I looked at the technical specifications in a listing and was like ‘Hey Flask! I can work in Flask’. From there when I was doing some research on the company and the product it was just super cool and I was pretty invested in getting the job. I did a lot of research on the public API, but it really cemented in when I came in for the interview and talk to you and Sankaet and Hillary, sitting there and watching Sankaet’s overview of how the product worked, I was completely hooked at that point.
Want to read more, please visit our blog. | https://medium.com/synapsefi/synapse-team-spotlight-getting-to-know-matt-freeland-product-engineer-89c0f3fab171 | ['Carla Mcmorris'] | 2020-12-17 01:42:34.525000+00:00 | ['Engineer', 'Engineering'] |
A (Quick) Guide to Neural Network Optimizers with Applications in Keras | SGD (Stochastic Gradient Descent)
Stochastic Gradient Descent, in contrast to batch gradient descent or vanilla gradient descent, updates the parameters for each training example x and y. SGD performs frequent updates with a high variance, causing the objective function to fluctuate heavily.
SGD fluctuation. Source
SGD’s fluctuation enables it to jump from a local minima to a potentially better local minima, but complicates convergence to an exact minimum.
Momentum is a parameter of SGD that can be added to assist SGD in ravines — areas where the surface curves more steeply in one dimension than in another, common around optima. In these scenarios, SGD oscillates around the slopes of the ravine, making hesitant progress along the bottom of the local optimum.
Momentum helps accelerate SGD in the correct direction, therefore dampening the redundant oscillations as seen in image 2.
Nesterov momentum is an improvement over standard momentum — a ball that blindly follows the slope is unsatisfactory. Ideally, the ball would know where it is going so it can slow down before the hill slopes up again. Nesterov accelerated gradient (NAG) can give momentum a prescience by slowing SGD down before it reaches an upsloping area, helping reduce unnecessary redundancy in convergence. | https://towardsdatascience.com/a-quick-guide-to-neural-network-optimizers-with-applications-in-keras-e4635dd1cca4 | ['Andre Ye'] | 2020-03-05 16:00:23.470000+00:00 | ['Optimizer', 'Neural Networks', 'AI', 'Gradient Descent'] |
They all got crowns | They all got crowns
Debunking the dynamic of pitting pop divas against each other, with an exploratory data analysis on their strengths
If you, like me, are the kind of person that takes pop music seriously, chances are you have already been involved in heated discussions about pop divas. More often than not, arguments regarding this matter end up pitting women against each other and reinforcing the musical industry’s deeply rooted misogyny.
One of the reasons it may not be fruitful to engage in this discussion of trying to find the ‘best’ pop diva is that, just like everybody else, they all have their strengths and weaknesses. Using one’s strength to compare all others not only misses the point, but doesn’t give them enough credit for going through everything they went about to bring such an amazing body of work to us, the fans.
Thus, we’re not going to focus on such discussion.
We can’t argue with the fact that music revolves around our personal preferences and experiences. In so, what you’ll read below is not only a testament to each of these artists work, but a moment to reflect on what it is about it that resonates with you so much you're willing to fight to defend it.
By doing so, and being really open and curious, we’re not only gaining a more granular knowledge on our musical taste, but also rewriting this false narrative of comparison to a more respectful one, enhancing and lifting all these amazing women at the same time.
To do so, we collected data from Spotify and Genius, and we will dive into the full discography of nine pop divas (Beyoncé, Taylor Swift, Ariana Grande, Rihanna, Lady Gaga, Madonna, Mariah Carey, Britney Spears and Katy Perry), using twelve variables to analyze their songs (danceability, song duration, speechiness, energy, lyrics emotion, sound emotinal, among others). To this analysis, we just considered these artists solo projects.
Let’s go? | https://towardsdatascience.com/they-all-got-crowns-fbe4c29641c8 | ['Adauto Braz'] | 2020-07-30 20:50:53.392000+00:00 | ['Music', 'Pop Culture', 'Data Science'] |
How to Suck Less at Colors as a Developer | Color Formats
There are a lot of formats for colors. The most popular ones are Hex, RGB, HSB, and HSL. Let me explain the differences between them and which one you should use.
Hex and RGB
RGB and Hex are the most common formats for representing colors.
RGB gives us the intensity of red, green, and blue of one color. Intensity is a number that can go from 0 to 255. As an example, you could define a light purple color with 164 of red, 43 of green, and 217 of blue:
.color {
color: rgb(164, 43, 217);
}
Hex stands for hexadecimal. It’s similar to RGB; only the format changes: #RRGGBB .
As with RGB, R is for the intensity of red, G for green, and B for blue. These values can go from 00 to FF (which means 0 to 255 if you convert it to decimal). For example, #FF0000 represents a pure red. Indeed, #FF means red is at its highest intensity (255) and the others are at their lowest.
I’ll say it upfront, those two formats aren’t the best when you have to pick colors. Indeed, let’s say you want your app to have a blue primary color. You won’t have only one blue, right? You’ll have to make a lighter blue or a darker blue. Well, good luck doing that with RGB and Hex.
Note: RGB and Hex are not suitable for picking colors, but you can still use them for representing colors.
That being said, let me introduce you to HSB, HSL, and their differences. Let’s begin with HSB.
HSB
HSB means hue, saturation, and brightness.
H stands for hue. It’s a number measured in degrees that is between 0 and 360. The following drawing speaks for itself:
If you have a visual memory and you want to pick a color, this drawing will be your best friend.
What about saturation and brightness? For that, let’s take a look at the following drawing:
These four circles all have the same hue (210°), but still, they look different. Well, that’s because of saturation and brightness.
S stands for saturation. It’s a percentage representing how vibrant color is. Zero percent means you have no saturation, while 100% means you have the maximum. If you choose to add a blue to your UI with a hue of 210° and saturation of 0%, you’ll get a gray color. High saturation, vibrant color. Low saturation, dull color.
To better visualize the saturation for color, here is the blue we saw above, with different variations:
However, you can get two different results for brightness at 100% depending on the saturation:
If the saturation is at 0%, you’ll get pure white. If you have a saturation different than 0%, you’ll just get a highly bright color that is more or less vibrant.
The most perceptive of you will think, OK, HSB is great, but you can’t work with HSB in CSS. Instead, CSS supports HSL. Let’s see it right away.
HSL
HSL means hue, saturation, and lightness. Thus, the only difference compared to HSB here is the lightness.
Lightness represents how close a color is to black or white. Zero percent lightness is equal to black (like the brightness). But 100% lightness is equivalent to white no matter the hue or the saturation. This is the big difference between brightness and lightness.
Let’s take a value example with HSB and HSL. We’ll take a hue of 250° (darker blue) and a saturation of 75% (very vibrant).
For HSB, you’ll get:
H=250, S=75%, B=0% → black
H=250, S=75%, B=100% → very bright blue
For HSL, you’ll get:
H=250, S=75%, L=0% → black
H=250, S=75%, L=50% → very bright blue (the same as with HSB)
H=250, S=75%, L=100% → white
There’s something I find weird with HSL while designing. When I move the lightness up or down and convert it to HSB, the saturation isn’t the same. For that reason, I prefer to work with HSB, which seems easier to me.
In the end, it doesn’t matter if HSB is not available in CSS. What you can do is pick your colors upfront with HSB and convert them to hexadecimal values.
You know about color formats. Great. Let’s deep dive into the exciting stuff. | https://medium.com/better-programming/how-to-suck-less-at-colors-as-a-developer-69e35f3196dc | ['Thomas Lombart'] | 2020-08-27 15:47:11.478000+00:00 | ['Software Development', 'Colors', 'Design', 'UX', 'Programming'] |
You’re Not a Child Anymore, So Why Are You Still Eating Like One? | I can’t take credit for the title of this piece. In the height of my chiropractic practice many years ago, one of my patients said those words.
She was telling me about a birthday party at work. You know the type of party she was talking about: A conference room with stale coffee and a huge grocery store sheet cake.
Jennifer had been working hard to lose 40 pounds, but that big sheet cake brought back delightful childhood memories of frosting so sweet it makes your teeth hurt. Damn, that cake looked good and she was reaching for the biggest slice.
Suddenly a voice inside her said:
“You’re not a child anymore, so why are you still eating like one?”
Snapped out of her trance, she sampled one bite, then threw the rest in the trash.
I stared at the pie as I contemplated breakfast today.
I would say my diet is 80% wholesome, homemade, healthy food. But, I still love to snack, and I do love desserts. Pie for breakfast can sometimes seem reasonable 😉
Then I heard my dear patient’s words in my mind.
“You’re not a child anymore, so why are you still eating like one?”
Many of our associations with food are rooted in childhood. Foods can make us feel cozy and nurtured, like a plate from the Thanksgiving table. Others make us feel youthful and happy, like an ice cream cone or a slice of birthday cake.
It starts in your mind, not in your stomach.
Be aware of what’s motivating you as you make your food choices. Chances are you don’t even like that sugary sheet cake, so why do you reach for it?
Your inner child gets excited and jumps in to make your food choices for you. That little kid knows what it likes, and you’ve been letting it call the shots.
fast food
boxed macaroni and cheese
aversion to vegetables
sugary drinks
white bread
hot dogs
bags of chips or candy
breakfast cereals named after candy
sheet cake
But you don’t think whole grains or vegetables taste good?
Get over it. You’re not a child anymore.
Lots of things in life change when you outgrow childhood. You wear pants to work. You don’t leave your toys lying all over the flow. Temper tantrums are rejected.
Once you’re aware of what’s driving your food choices, you take back control.
Empower yourself. Remind yourself who you are. You’re a healthy, smart, reasonable adult. You love yourself enough to take care of yourself. You’re the image and likeness of your creator, after all, so live your life like it.
Learn to use the power of your thought to think yourself thin with my online course at Udemy.
You deserve genuine and lifelong happiness, the type of happiness that can’t be taken away from you no matter what sort of craziness is happening in the world. Read my book, Happy Ever After. We can all use that right now.
I made a free 5-day Mastering Happiness email course, and I want to share it with you! Visit me at christinebradstreet.com where you can get your course for free.
All images open source from pixabay.com | https://medium.com/change-your-mind/youre-not-a-child-anymore-so-why-are-you-still-eating-like-one-70cb4d9fe5f1 | ['Dr. Christine Bradstreet'] | 2020-11-14 13:47:35.824000+00:00 | ['Diet', 'Nutrition', 'Mindset', 'Weight Loss', 'Health'] |
Sharing graphs has never been easier | We are excited to announce the new sharing experience on Graph Commons.
Graph Commons is a collaborative platform for mapping, analyzing and publishing data networks. It empowers people and organizations to transform their data into interactive maps and untangle complex relations that impact them and their communities. Whether you use data mapping for investigative journalism, archival exploration, or content curating you’d want to have your audience engage with your published data. The goal of Graph Commons is to support quality data publishing, in addition to intuitive mapping and analysis of data networks, so we hope that this new sharing feature flourishes meaningful discussions around your work.
As of today, you can add visual annotations to any public graph, deep-link to your findings from rich social media posts, and make high resolution prints.
Comment with a view
Select an area on the graph and add your thoughts. Give feedback, share links, and generate new discussions.
Tip: In a comment, use # to reference nodes and @ to mention members. | https://medium.com/graph-commons/sharing-graphs-has-never-been-easier-434f1502c7fa | ['Burak Arikan'] | 2019-05-03 20:11:27.672000+00:00 | ['Social Media', 'Data Science', 'Data Visualization', 'Social Network', 'Journalism'] |
Where should we store the JWT for SPA? Memory, Cookie, or LocalStorage? | 3. Double tokens policy: HttpOnly Cookie + CSRF token
The HttpOnly flag on a cookie is one of the solutions to defend against XSS. It prevents JavaScript from reading or manipulating the cookie.
It's the reason people recommend saving the JWT in an HttpOnly cookie instead of localStorage.
Send response with JWT in the Cookie for Django/DRF
# Edit settings.py
JWT_AUTH = {
    ...
    'JWT_EXPIRATION_DELTA': datetime.timedelta(seconds=300),
    'JWT_ALLOW_REFRESH': True,
    'JWT_REFRESH_EXPIRATION_DELTA': datetime.timedelta(days=7),
    'JWT_AUTH_HEADER_PREFIX': 'JWT',
    'JWT_AUTH_COOKIE': 'token'
}

# Edit views.py
...
response = Response(serializer.data)
response.set_cookie('token', serializer.data['token'], httponly=True)
return response
And we won't have to do anything else on the client side.
However, the HttpOnly cookie opens the door to another threat: Cross-Site Request Forgery (CSRF). Attackers cannot read the cookie, but they can trick the victim's browser into sending it along with forged requests to the website.
Can we use a CORS whitelist to protect against Cross-site request forgery (CSRF)?
Can we use a CORS whitelist to protect against CSRF? Let's review the Same-origin policy.
URLs have the same origin when they share the same scheme, domain, and port
The cross-domain requests from JavaScript are usually limited, like XMLHttpRequest or the Fetch API
The cross-domain requests from HTML are usually unlimited, like <script>, <img>, <link>, <iframe>, <video>
Copyright @ A Layman
CORS does help prevent certain types of CSRF attacks from external sources. But it can’t prevent internal CSRF attacks.
It can restrict calls to endpoints from XMLHttpRequest (AJAX) and Fetch, but requests can still be triggered from HTML tags (img, script, etc).
Set Cross-Origin Resource Sharing (CORS) in Django/DRF
# Install packages
pip install django-cors-headers

# Edit settings.py
INSTALLED_APPS = (
    ...
    'corsheaders',
    ...
)

CORS_ORIGIN_WHITELIST = [
    ...
]

CORS_ORIGIN_REGEX_WHITELIST = [
    ...
]

MIDDLEWARE = [
    ...
    # CorsMiddleware should be placed before CommonMiddleware
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.common.CommonMiddleware',
]
Use CSRF token in Django/DRF
To defend against CSRF, we can use a CSRF token together with the JWT. The server can respond with the CSRF token in a cookie.
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
...
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
'rest_framework.authentication.SessionAuthentication'
),
}
The requests without the CSRF token will be rejected. | https://medium.com/a-layman/where-should-we-store-the-jwt-for-spa-memory-cookie-or-localstorage-2491912d8e79 | ['Sean Hs'] | 2020-08-06 12:20:08.491000+00:00 | ['Single Page Applications', 'Python', 'Django', 'Software Development', 'Jwt'] |
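On the client side, the SPA then has to echo the CSRF token back in a header on unsafe requests. A minimal sketch, assuming Django's default csrftoken cookie name, DRF's X-CSRFToken header, and a hypothetical /api/orders/ endpoint, could look like this:

// Read a cookie value by name (the JWT cookie itself is HttpOnly and stays invisible here).
function getCookie(name) {
  const match = document.cookie.match(new RegExp('(^| )' + name + '=([^;]+)'));
  return match ? decodeURIComponent(match[2]) : null;
}

// On an unsafe request the browser attaches the HttpOnly JWT cookie automatically,
// and we attach the CSRF token manually so the CSRF check passes.
fetch('/api/orders/', {
  method: 'POST',
  credentials: 'include',
  headers: {
    'Content-Type': 'application/json',
    'X-CSRFToken': getCookie('csrftoken'),
  },
  body: JSON.stringify({ item: 42 }),
});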
Gold Plating Software Products | Exit psychic abilities and enter research
Simply adding a washing machine won’t cut it, so it seems. You might have just opened pandora’s box of requests and complaints. More customer satisfaction they said, it will cost almost nothing they said.
An argument can be made here that some people would have been turned off by the lack of a washing machine and would rather rent somewhere else. Exit psychic abilities and enter research. The decision to buy a washing machine should have been inspired by solid market analysis. How many people actually asked for one? How many of them decided to rent regardless? How many free condos do you have on average? Can you increase the price for condos with a washing machine? What’s the recommended percentage of condos to put a washing machine in? How long do washing-machine-loving customers rent on average and how does that compare to others?
Additionally, if your decision was based on thorough analysis, it would necessitate its own project or sprint. It won’t be part of your MVP or an add-on to a completely different scope. Just as a thought exercise, try to think of reasons why or why not to buy a microwave and share your thoughts in a comment.
Software Analogy
I’m sure most readers have already picked up on what I’m trying to say with the washing machine example but, after all, this is a software-focused article. So how exactly does this apply to software projects?
A project manager or a software engineer thinks it's a super-awesome idea to add feature X. They go on to develop this feature and, after the deadline gets pushed back a week or two, they finally finish, thinking it was worth it.
So here are a few different things that can come out of pandora’s box, that is, if the client notices the feature at all!
It’s broken! Looks like you didn’t cover all the corner cases and urgent unasked-for maintenance is now due.
Ok that’s a decent feature but can you do Y on top of/instead of X? Y will take you a sprint to do. Do you say no or ask for extra payment and upset the client? Maybe just swallow the cost? Either way, you lose. Looks like you forgot to propose your idea first.
Actually, this feature is confusing my team and they might misuse it, can you disable it? They are the smartest I can afford to hire, sorry about subverting your expectations, I guess…
Upgrading the application, exciting times huh? Except remember that useless feature you added a few months earlier? It’s proving a bit problematic. So you can feel free to omit it and upset the 5% of your users utilizing it (or the client, by proxy) or spend a week or two trying to get it to work in the new version. Such a fun decision to make!
Oh btw, a few useless features down the line and it looks like bugs have been getting a longer and longer lifespan and we have a bunch of angry users. We need urgent maintenance. Do we pause other projects and potentially miss on opportunities? Or maybe we can hire more developers, increase our maintenance costs, and introduce a bit of newcomer’s chaos into the team! Or perhaps we can just ask everyone to commit to deadlines and work unpaid overtime and risk them suddenly disappearing to work somewhere else that is more organized and less stressful. What can go wrong with constant recruitment costs and team chaos?
What do we do about it?
It’s your job to have a proper conversation with the client/users to help them understand and convey to you what they really want.
Make a plan and stick to it! You might think that this is easier said than done but, let’s think about it. You do want to impress the client, maybe you are worried they will go for another provider if they don’t like the product and you don’t want to scare them off with too much commitment. Maybe you think you will lure in more users with nice features. You want to impress them by giving them “extra value” or “special features”. So far so good but the moment you decide you know what they want, things go south.
The best way to design a wonderful product is to do your homework beforehand. Do research, write user stories, have conversations, and most importantly don’t assume you know what the user wants! In fact, do you think they even fully know what they want? It’s your job to have a proper conversation with the client/users to help them understand and convey to you what they really want.
Take a look at this popular GIF, with a little twist
You should also realize that everything has a cost. The immediate cost might seem plausible but how about future costs? Software is not too different from a washing machine. You will need to maintain it and upgrade it. Every time you upgrade the frameworks you are using or move to new technologies, that feature is going to be a task “make sure it still works”. Every time a user finds a scenario that breaks it or an OS/Platform update renders the way it’s implemented obsolete, you will have to maintain it. So, think ahead!
So, before adding a new feature, think about the following points: | https://heshammeneisi.medium.com/gold-plating-software-products-7bffe427b215 | ['Hesham Meneisi'] | 2020-12-26 10:40:17.109000+00:00 | ['Software', 'Project Management', 'Software Development', 'Software Architecture', 'Engineering'] |
Setup MERN (MongoDB, Express JS, React JS, and Node JS) environment and create your first MERN stack application. | Setup MERN (MongoDB, Express JS, React JS, and Node JS) environment and create your first MERN stack application. Manish Mandal Follow Dec 10 · 8 min read
So are you also confused, like most MERN beginners, about how to create your first MERN project and how to set up the environment for it? Even I was confused when I created my first MERN project. I wanted to set up everything locally on my computer, but there was hardly any tutorial for setting up MongoDB locally; everyone was using MongoDB Atlas. But in this tutorial, I will cover how to use MongoDB locally and also how to structure our project.
We will first start with installing Node js on our machine. Visit node js official website here and download the latest version of node and install it on your machine. It’s available for your Linux, Mac, and Windows machine. I am using a Windows environment so all setup and configuration will be as per this environment. I have downloaded the Windows Installer (.msi) 64-bit version for my computer. After completion of download open the software and then it will ask for accepting the agreement and blah blah just click on next and after that, it will prompt you with this screen.
There is one checkbox to install Chocolatey on your machine. This is optional but still, I’ll recommend you to check this box as Chocolatey will help you to update your node in your machine easily in the future. After that click on install. After the installation has been completed you will be prompted with another screen to install python and visual studio build tools required for node native modules. Press any key to install all those required things.
We have successfully installed node in our machine now it’s time to install MongoDB in our machine. Visit MongoDB's official website to download the community version of MongoDB. The current version of MongoDB while writing this tutorial is 4.4.2. It can vary in the future but I guess the step would be the same. So I have downloaded MongoDB v4.4.2 Platform windows and Package MSI. After downloading follow the below animated steps to install MongoDB.
Note: I have also checked to install the MongoDB compass. MongoDB Compass will show your Database and its structure through an intuitive GUI.
After successfully installing MongoDB your MongoDB services will be running automatically in your windows services and the database data will be stored in Data Directory we have selected while installing the MongoDB software.
Press Ctrl + Shift + Esc to open Task Manager and then go to the Services tab
But by any chance, your MongoDB service is not listed you have to manually configure mongod for the command line and create a data directory for the database.
Optional if MongoDB service is not listed: If you type mongod to start your MongoDB connection you will be receiving an error like ‘mongod’ is not recognized as an internal or external command , this may be because our environment is not recognizing the command mongod . To fix this we need to add our MongoDB bin folder to the Environment variable. On your machine search for environment variable and open the Environment Variable dialog box and then edit the path variable and add your MongoDB installation bin folder to the box and click Ok.
Note: My current version is 4.4.2 that’s why the path contains a folder of 4.4. This may vary for your installation
Optional if MongoDB service is not listed: Now you will be able to run mongod command to start the database. But it will stop because there is one more thing we have to do is to create a data folder in our C drive. So go to your C drive and create a directory with the name data and inside that directory create another directory with the name db. This directory will hold all our database and its setting. Now you can run mongod command in your terminal to start MongoDB.
Note: The data directory we selected while installing MongoDB and the data directory we created in C drive manually hold different database so do not get confused that why your database is not listed or your collections are not showing. If mongodb services is running in your windows services do not run mongod command in terminal.
Now we can open MongoDB compass which we have installed simultaneously with MongoDB to create our Database and collections.
So we have successfully installed Node js and MongoDB on our Environment. Now it’s time to structure the boilerplate for the project.
Create and enter the project directory, then inside that directory create a directory named backend and also one file named server.js. This server.js is the main file that you can use to call your database configuration and all your APIs routes. After that open terminal inside the root directory and run
npm init -y
This will create a package.json file in the project directory. Alternatively, you can use npm init for setting up the name or keywords for the project but that’s up to you.
Note: You can use require to import modules but I prefer using import to load node modules in files. To allow the node to use import add the below line to your package.json file
"type": "module"
2. Now we will install the required modules for our backend setup. Run the below command inside the terminal to install the required modules.
npm install express mongoose --save
3. After that we will configure mongoose to connect to our MongoDB. Create a config directory inside the backend directory and then inside the config directory create a db.js file. This file will contain our database connection configurations. Paste the below code in the db.js file.
Note: Replace databasName with the name of your database.
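If you want a starting point, a minimal db.js along these lines (the local connection string and database name are placeholders you should change) will do the job:

// backend/config/db.js
import mongoose from 'mongoose'

// Connect on import; server.js only has to import this file.
mongoose
  .connect('mongodb://127.0.0.1:27017/databaseName', {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then((conn) => console.log(`Database connected : ${conn.connection.host}`))
  .catch((error) => {
    console.error(`Error: ${error.message}`)
    process.exit(1)
  })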
4. Now import this db.js file to your server.js file and run node server.js in your terminal. It will log Database connected : 127.0.0.1
5. Now we will create a users collection in our database and its schema using mongoose. Create a models directory inside the backend directory and then inside the model create usersModel.js file and paste the below code.
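As a rough sketch, assuming we only need a name and an email for each user, usersModel.js could look like this:

// backend/models/usersModel.js
import mongoose from 'mongoose'

const userSchema = mongoose.Schema(
  {
    name: { type: String, required: true },
    email: { type: String, required: true, unique: true },
  },
  { timestamps: true }
)

// Registering the model creates the "users" collection once the first document is written.
const User = mongoose.model('User', userSchema)

export default User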
6. Now just import the users model to the db.js file and it will create the users collection in your database with the schema. Open MongoDB compass to view the collection.
7. Now we will create some dummy users detail in our users collection. Open MongoDB compass and import below dummy data.
8. Now it’s time to create the controller which will be responsible for returning the response of the request. Create a controllers directory inside the backend directory and inside controllers create userController.js file.
9. Now in this file we will create two methods to retrieve all users and users by id, but before that, install a module to act as middleware for handling exceptions inside async express routes.
npm i express-async-handler --save
10. Now paste the below code into the userController.js file.
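A minimal version of userController.js with the two handlers described above, wrapped in express-async-handler, could look like this:

// backend/controllers/userController.js
import asyncHandler from 'express-async-handler'
import User from '../models/usersModel.js'

// Fetch all users: GET /api/users
const getUsers = asyncHandler(async (req, res) => {
  const users = await User.find({})
  res.json(users)
})

// Fetch a single user: GET /api/users/:id
const getUserById = asyncHandler(async (req, res) => {
  const user = await User.findById(req.params.id)
  if (user) {
    res.json(user)
  } else {
    res.status(404)
    throw new Error('User not found')
  }
})

export { getUsers, getUserById }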
11. Now we will create a route for the user. Create a routes directory inside backend directory and inside routes create userRoute.js file.
12. Paste the below code into the userRoute.js file.
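A minimal userRoute.js wiring those two handlers up could look like this:

// backend/routes/userRoute.js
import express from 'express'
import { getUsers, getUserById } from '../controllers/userController.js'

const router = express.Router()

router.route('/').get(getUsers)
router.route('/:id').get(getUserById)

export default router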
13. Now it’s time to create our API but before that, I will install dotenv module to create a .env file in our project and then calling that into our server.js file. This .env file will help us in declaring our environment variable. We can save our credentials or keys here.
Install
npm i dotenv --save
Import
import dotenv from 'dotenv' dotenv.config()
14. Create a .env file in our project root directory and paste the below text to declare some variables.
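Something as small as this is enough for the rest of the tutorial (pick whatever variables you need):

NODE_ENV=development
PORT=5000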
15. Now we will create the user API using the express use method. We will also declare our PORT, so now paste the below code into your server.js file.
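Putting it together, a minimal server.js could look like this (the route path and port simply follow the steps above):

// backend/server.js
import express from 'express'
import dotenv from 'dotenv'
import './config/db.js'
import userRoute from './routes/userRoute.js'

dotenv.config()

const app = express()
app.use(express.json())

// Mount the user routes under /api/users
app.use('/api/users', userRoute)

const PORT = process.env.PORT || 5000
app.listen(PORT, () => console.log(`Server running on port ${PORT}`))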
16. Now run node server.js in your terminal to start the server on port 5000 and then open localhost:5000/api/users on your browser to get the list of all users.
We have successfully created our API using express js, node js, and MongoDB. Before moving into the React part I would like you to install the nodemon module and also make some changes to the package.json file.
npm i nodemon --save-dev
Now add this line under the scripts object inside the package.json file.
"start": "nodemon backend/server"
Now re-run your server using npm start in your terminal and this time, nodemon will monitor each change in your project and restart the server automatically.
Note: if you have used npm init -y to generate the package.json file please change "main": "index.js", to "main": "server.js" in your package.json file else nodemon will throw an error.
In most of my tutorials, I have covered how to use Axios to fetch data from the API to React. You can read my previous tutorial on the Simplest way to use axios to fetch data from an api in ReactJS. I won’t go much into detail like the above-mentioned tutorial but I will cover the basic part. So now go to the root directory of the project and follow the below-mentioned steps.
17. Create a react project with the name frontend.
npx create-react-app frontend
18. Install Axios into your react application.
npm install axios --save
19. Now replace all the code from the app.js file with the below-mentioned code.
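A bare-bones App.js that fetches the users with Axios and lists them could look like this (the name and _id fields assume the example schema from earlier):

// frontend/src/App.js
import React, { useState, useEffect } from 'react'
import axios from 'axios'

function App() {
  const [users, setUsers] = useState([])

  // Load the users once when the component mounts.
  useEffect(() => {
    axios.get('/api/users').then((res) => setUsers(res.data))
  }, [])

  return (
    <div>
      <h1>Users</h1>
      <ul>
        {users.map((user) => (
          <li key={user._id}>{user.name}</li>
        ))}
      </ul>
    </div>
  )
}

export default App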
Before starting our react application add the below line inside the package.json file of the react project or else you will receive a CORS error in your project.
"proxy": "http://127.0.0.1:5000",
20. Now start the application npm start and refresh the browser to see changes.
So now we have successfully built our first MERN project.
Tip: Instead of running node and react separately, we can install a module to run them concurrently. Just install the concurrently module in your root project and add the lines mentioned below to your root package.json file.
npm i concurrently --save-dev
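One possible scripts section for the root package.json would then be:

"scripts": {
  "server": "nodemon backend/server",
  "client": "npm start --prefix frontend",
  "dev": "concurrently \"npm run server\" \"npm run client\""
}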
Now all you need is to run npm run dev from the terminal and both will start concurrently.
Below I have shared the GitHub repository for reference. | https://medium.com/how-to-react/setup-mern-mongodb-express-js-react-js-and-node-js-environment-and-create-your-first-mern-7774df0fff19 | ['Manish Mandal'] | 2020-12-10 19:47:00.522000+00:00 | ['React', 'Mongodb', 'Expressjs', 'Reactjs', 'Nodejs'] |
Starting a business isn’t hard | So when I agreed to speak with Tim Enwall, Misty Robotics’ CEO, it really was as an advisor to what they were attempting to start. But as I learned of Misty’s products, mission, and team, the more I discovered overlapping interests and experiences that I could bring to bear to an exciting opportunity. In fact, it seemed like the role was tailored specifically to those interests and experiences.
But still, the idea of rejoining a startup wasn’t wholly appealing. I had a lot of questions. They were mostly about myself. Could I have a sense of ownership in an organization I hadn't started? Had my abilities atrophied from several years of disuse? Would I have sufficient motivation to participate fully in this type of endeavor again? Could I maintain a sense of balance between the life I now felt protective of and the demands an early stage company would place on my time and attention?
Now a year into this new adventure, I have answers to these questions. Yes. No. Yes. No.
Well, three out of four ain’t bad, as they say. But I’ll need to make adjustments on that last one if I’d like to enjoy any sort of longevity at Misty (or any other effort I undertake). My life with Jena is too important to slip into old patterns — to prioritize work ahead of us. I’ll need to take time off. Not just vacation, but evenings and weekends. That can be hard at a startup. It will just take awareness, discipline, and good habits. It will take growth—growth of a different kind. | https://medium.com/alttext/starting-a-business-isnt-hard-c4603fb60a70 | ['Ben Edwards'] | 2018-12-15 20:18:05.520000+00:00 | ['Personal Development', 'Personal Growth', 'Business', 'Startup', 'Improvement'] |
5 things every successful business website needs to get right | You might think you know what glitzy features will serve your target customers best and make your website stand out from the pack, but have you even got the basics right?
Here are five of those basics that will account for the bulk of your website’s success or failure. You might want to talk them over with our web consultants in the UK.
1. A pleasing appearance
The right first impression is all-important.
A site that is attractive to browse doesn’t need to be complicated, but it should at least look like something from 2018.
That means ensuring the design of your site is responsive, so that it looks just as amazing across all devices — from the smallest smartphone to the largest desktop computer.
You should also use easy-to-read fonts, incorporate eye-catching images and make your site’s video and interactive elements useful and tasteful.
As with all the very best in design aesthetics, less is more.
2. The utmost clarity
Can your customers quickly and easily locate what they need from the moment they land on your homepage?
The various navigational elements of your site are vital to improving clarity.
It’s both large and small touches that make the difference.
Drop-down menus can certainly make your vital pages easier to find, but even just ensuring there’s always a link available back to whatever page of your site your visitor was previously browsing can do a lot to enhance their experience.
3. Quick loading times
In an age in which so many people browse on their smartphones ‘on the go’, it just won’t do if your site’s pages load sluggishly on a mobile connection.
Frustrated customers will just close their browser and look somewhere else.
Slow loading times will hurt your site’s search engine rankings, too: Google will hit you hard if your site is slow.
You need to do all of those little things that will help to speed up how quickly your site loads.
Proven methods range from the optimisation of image sizes to the removal of auto-play from any on-site videos, which can be notorious for depleting data on mobile phones.
Reducing the number of elements on the page, keeping Javascript to a minimum and ensuring the site is professionally optimised for speed makes all the difference.
4. A professional vibe
This isn’t quite the same as ‘appearance’. It’s about all the things that help to make your business website seem trustworthy.
If your site doesn’t give the impression of a serious and reputable business, your visitors won’t convert into paying customers.
Does your site feature positive testimonials from past customers or a page that outlines your company culture and values?
Even simply using real photos of your staff team (rather than stock photos that so many of your site visitors have already seen elsewhere) can do a lot to make your website look like that of a real, established business.
5. A high conversion rate
Above all else, your company website needs to convert visitors into customers.
Are you using clearly highlighted calls to action on your pages?
Follow the philosophy of ‘keep it simple, stupid’: your visitors are unlikely to respond positively to busy backgrounds or complicated navigation. Instead, aim for simple, clean designs that all your visitors can get to grips with straight away.
Show your company’s human face. Remember that ultimately, we don’t like to deal with faceless companies; we like to deal with human beings. That’s another reason to use authentic images of your staff team and you.
Just imagine the difference you could make to your existing site if you did all the above!
Consider how much more our web consultants in the UK could do for you. It couldn’t be simpler to call upon the web design professionals of PENNInk Productions: just complete our online contact form or give us a call on 020 8144 7931 today. | https://medium.com/pennink-productions-blog/5-things-every-successful-business-website-needs-to-get-right-f5a641d10261 | ['Pennink Productions'] | 2018-02-08 16:38:26.577000+00:00 | ['Lead Conversion', 'Optimisation', 'Business', 'Design', 'SEO'] |
Live Dashboards with Redash and Rockset | Redash is a powerful open source query and visualization tool that helps you make sense of your data. It connects to variety of data sources and also includes a native connector for Rockset. In this post we will demonstrate how to use Redash to build live dashboards on Rockset data sets.
Configure
If you’ve never used Redash before, you need to set it up first. You have two options: either run it yourself or use the hosted version provided by Redash.
Once you have set up Redash, you need to create a new Rockset data source. Read about Redash adding Rockset as a supported data source in Redash v6.
Look for “Rockset” in the New Data Source view.
To configure the data source, you should give it a name and an API key that you can create in the Rockset console. In most cases you can leave API Server as is, using https://api.rs2.usw2.rockset.com .
Query
Create the data source and you are ready to query Rockset! On the first load you might need to give some time for Redash to preload the schema of all Rockset tables. Once that’s done, you should see a list of all Rockset collections in your account and you can start executing SQL statements. Click Create in the nav bar and then Query to start your SQL query.
Live Dashboards and Visualizations
By default, Redash shows you a table with results of the query. However, you can also build rich dashboards and visualizations based on your results. Click on New Visualization, and from there you can select many different visualization options, such as Bar charts, Pie charts, Scatter plots and others. You can also add the visualization to a dashboard that will automatically refresh as new data comes in.
Find more info on the Rockset-Redash integration in our docs. Give it a spin and let us know what you think! | https://medium.com/rocksetcloud/visualize-data-in-rockset-with-redash-66b6ba7da646 | ['Igor Canadi'] | 2019-08-09 20:02:09.618000+00:00 | ['Analytics', 'Data Visualization', 'Sql', 'Data', 'Business Intelligence'] |
Convergence | We live in a time of convergence where examples are littered show casing various technologies or products converging to enhance end user experience. One case in point digital media and home entertainment serving to improve everyday experience. Just pick one category of your liking and you will soon find someone is pushing the envelope either bringing adjacency services closer or building one if it doesn’t already exist.
Same is true for the business software community. It remains an emerging trend to converge and deliver enhanced experience for business users.
I usually put the convergence into three categories that over time get delivered as one unified solution. These are:
1. Improve a person’s productivity in the workplace.
2. Improve the ability to make decision easier based on historical or projection based heuristics
3. Improve an individual's social standing in the peer community
There are multiple ways this convergence is getting accomplished. People who observe business software see how the dots are getting connected — be it through an acquisition in the marketplace or the kind of releases introduced in the marketplace.
One big plus is the maturity (and availability) of technology infrastructure, particularly around the web-based computing model, which makes it feasible to converge and deliver a combined user experience.
It is an exciting time being in this space and next decade is going to be much better than the past decade — as far as the overall business solutions are concerned. | https://medium.com/aloktyagi/convergence-b1963b37ff17 | ['Alok Tyagi'] | 2017-03-08 21:02:15.397000+00:00 | ['Accounting Solutions', 'Software Development', 'Erp', 'Agile', 'Business Intelligence'] |
Object-Oriented Programming — The Trillion Dollar Disaster | The Problems of State
Photo by Mika Baumeister on Unsplash
What is state? Simply put, state is any temporary data stored in memory. Think variables or fields/properties in OOP. Imperative programming (including OOP) describes computation in terms of the program state and changes to that state. Declarative (functional) programming describes the desired results instead, and doesn't specify changes to the state explicitly.
Mutable State — the act of mental juggling
I think that large objected-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be. — Rich Hickey, creator of Clojure
State by itself is quite harmless. However, mutable state is the big offender. Especially if it is shared. What exactly is mutable state? Any state that can change. Think variables or fields in OOP.
Real-world example, please!
You have a blank piece of paper, you write a note on it, and you end up with the same piece of paper in a different state (text). You, effectively, have mutated the state of that piece of paper.
That is completely fine in the real world since nobody else probably cares about that piece of paper. Unless this piece of paper is the original Mona Lisa painting.
Limitations of the Human Brain
Why is mutable state such a big problem? The human brain is the most powerful machine in the known universe. However, our brains are really bad at working with state since we can only hold about 5 items at a time in our working memory. It is much easier to reason about a piece of code if you only think about what the code does, not what variables it changes around the codebase.
Programming with mutable state is an act of mental juggling️. I don’t know about you, but I could probably juggle two balls. Give me three or more balls and I will certainly drop all of them. Why are we then trying to perform this act of mental juggling every single day at work?
Unfortunately, the mental juggling of mutable state is at the very core of OOP. The sole purpose for the existence of methods on an object is to mutate that same object.
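To make the contrast concrete, here is a small illustrative snippet (not from any particular codebase):

// A method exists to mutate the object it lives on...
class Account {
  constructor(balance) { this.balance = balance; }
  deposit(amount) { this.balance += amount; } // state changes in place
}

// ...whereas a pure function leaves its input untouched and returns a new value.
const deposit = (account, amount) => ({ ...account, balance: account.balance + amount });

const before = { balance: 100 };
const after = deposit(before, 50);
console.log(before.balance, after.balance); // 100 150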
Scattered state
Photo by Markus Spiske on Unsplash
OOP makes the problem of code organization even worse by scattering state all over the program. The scattered state is then shared promiscuously between various objects.
Real-world example, please!
Let’s forget for a second that we’re all grown-ups, and pretend we’re trying to assemble a cool lego truck.
However, there’s a catch — all the truck parts are randomly mixed with parts from your other lego toys. And they have been put in 50 different boxes, randomly again. And you’re not allowed to group your truck parts together — you have to keep in your head where the various truck parts are, and can only take them out one by one.
Yes, you will eventually assemble that truck, but how long will it take you?
How does this relate to programming?
In Functional Programming, state typically is isolated. You always know where some state is coming from. State is never scattered across your different functions. In OOP, every object has its own state, and when building a program, you have to keep in mind the state of all of the objects that you are currently working with.
To make our lives easier, it is best to have only a very small portion of the codebase deal with state. Let the core parts of your application be stateless and pure. This actually is the main reason for the huge success of the flux pattern on the frontend (aka Redux).
Promiscuously shared state
As if our lives aren’t already hard enough because of having scattered mutable state, OOP goes one step further!
Real-world Example, Please!
Mutable state in the real world is almost never a problem, since things are kept private and never shared. This is “proper encapsulation” at work. Imagine a painter who is working on the next Mona Lisa painting. He is working on the painting alone, finishes up, and then sells his masterpiece for millions.
Now, he’s bored with all that money and decides to do things a little bit differently. He thinks that it would be a good idea to have a painting party. He invites his friends elf, Gandalf, policeman, and a zombie to help him out. Teamwork! They all start painting on the same canvas at the same time. Of course, nothing good comes out of it — the painting is a complete disaster!
Shared mutable state makes no sense in the real world. Yet this is exactly what happens in OOP programs — state is promiscuously shared between various objects, and they mutate it in any way they see fit. This, in turn, makes reasoning about the program harder and harder as the codebase keeps growing.
Concurrency issues
The promiscuous sharing of mutable state in OOP code makes parallelizing such code almost impossible. Complex mechanisms have been invented in order to address this problem. Thread locking, mutex, and many other mechanisms have been invented. Of course, such complex approaches have their own drawbacks — deadlocks, lack of composability, debugging multi-threaded code is very hard and time-consuming. I’m not even talking about the increased complexity caused by making use of such concurrency mechanisms.
Not all state is evil
Is all state evil? No, Alan Kay state probably is not evil! State mutation probably is fine if it is truly isolated (not the “OOP-way” isolated).
It is also completely fine to have immutable data-transfer-objects. The key here is “immutable”. Such objects are then used to pass data between functions.
However, such objects would also make OOP methods and properties completely redundant. What’s the use in having methods and properties on an object if it cannot be mutated?
Mutability is Inherent to OOP
Some might argue that mutable state is a design choice in OOP, not an obligation. There is a problem with that statement. It is not a design choice, but pretty much the only option. Yes, one can pass immutable objects to methods in Java/C#, but this is rarely done since most of the developers default to data mutation. Even if developers attempt to make proper use of immutability in their OOP programs, the languages provide no built-in mechanisms for immutability, and for working effectively with immutable data (i.e. persistent data structures).
Yes, we can ensure that objects communicate only by passing immutable messages and never pass any references (which is rarely done). Such programs would be more reliable than mainstream OOP. However, the objects still have to mutate their own state once a message has been received. A message is a side effect, and its single purpose is to cause changes. Messages would be useless if they couldn’t mutate the state of other objects.
It is impossible to make use of OOP without causing state mutations. | https://medium.com/better-programming/object-oriented-programming-the-trillion-dollar-disaster-92a4b666c7c7 | ['Ilya Suzdalnitski'] | 2019-08-04 12:13:05.338000+00:00 | ['Object Oriented', 'Functional Programming', 'JavaScript', 'Design Patterns', 'Programming'] |
Java Tips — Creation of a new object | Java Tips — Creation of a new object
Different ways to declare and initialize a new object in Java
Photo by Samuel Zeller on Unsplash
To use an object in Java it is essential to declare it and, when it is needed, to execute the new command provided by the syntax.
With the new command the Java Virtual Machine is invoked: it allocates the position in memory, obtains the pointer associated with it, and then calls the specified constructor that initializes that memory area. Now the object is usable.
There are several ways to declare and make an object usable, and this article tries to describe them.
Constructor method in a class object
Before describing the ways to initialize an object, it is useful to recall the steps necessary to create an instance of an object.
Every object is described in a Java class file: the file must have the same name as the class it defines. To initialize the object, it is necessary to define one or more constructor methods that perform the initialization.
This is the definition taken from the Oracle Java documentation:
Declaration: The code set in bold are all variable declarations that associate a variable name with an object type. Instantiation: The new keyword is a Java operator that creates the object. Initialization: The new operator is followed by a call to a constructor, which initializes the new object.
This is an example of default constructor and constructor with parameters:
Constructor examples
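A minimal sketch of such a class (the Person class and its fields are just an illustration) is:

public class Person {

    private String name;
    private int age;

    // Default constructor: attributes keep their default values (null, 0).
    public Person() {
    }

    // Constructor with parameters: the caller supplies the initial values.
    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }
}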
With the first constructor, the new operation initializes the attributes with their default values once the object has been instantiated; the second constructor, unlike the first, lets the caller provide a set of input values that are used to initialize the object's attributes.
Initialization in-line
The most common approach is to create objects when they are needed.
Often objects are declared, instantiated and initialized in a single line: this style requires a type and a variable name. The type must match the object that is being created.
After the declaration, the constructor method is invoked immediately:
Declaration and initialization in-line of object
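For illustration, with the Person class from above:

public void greetGuest() {
    // Declared, instantiated and initialized on a single line:
    Person guest = new Person("Ada", 36);
    System.out.println(guest);
} // 'guest' is only visible inside these braces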
With in-line initialization the instantiated objects are visible only within the method, or the portion of the method, enclosed in braces { }. These objects are not usable outside the braces.
In-line initialization is used when a developer needs an instance of an object that is accessible only in a specific portion of the class.
Separate declaration and initialization
In this case, the declaration and initialization of the object occur at different times.
The declaration of the object is placed on one line, and the instantiation and initialization happen on a different line. With this approach, where the declaration sits drives the visibility of the object during the execution of the program.
Separate declaration and initialization of objects
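Again with the illustrative Person class:

public class Registration {

    // Declaration as a class variable: visible to every method of the class.
    private Person attendee;

    public void register() {
        // Instantiation and initialization happen later, on a different line.
        attendee = new Person("Ada", 36);
    }
}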
If the declaration is defined as a class variable, the object, once instantiated, is visible in every portion of the class; otherwise the rules of the in-line declaration apply.
Initialization by reflection
Another way to initialize objects is reflection: using reflection it is possible to identify the right constructor method from a class definition and invoke it through the reflection library included in Java SE.
Initialization by reflection can be applied in in-line mode or in separate mode.
Initialization in-line of object by reflection
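For illustration, the same Person constructor invoked through the reflection API:

import java.lang.reflect.Constructor;

public class ReflectiveFactory {

    public static Person create(String name, int age) throws Exception {
        // Find the matching constructor from the class definition...
        Constructor<Person> constructor =
                Person.class.getConstructor(String.class, int.class);
        // ...and invoke it to obtain an initialized instance.
        return constructor.newInstance(name, age);
    }
}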
This approach is useful in contexts that require a high degree of dynamism, for example when developing a library or a framework: in these cases it is difficult (and not performant) to define all the objects you will need, so initialization by reflection makes it possible to prepare the software to work with generic information.
Conclusion
We have retraced several ways to declare, instantiate and initialize objects in Java: from classical in-line initialization to the use of reflection.
The topic of object creation has given rise to many design patterns (ref. definition) that provide reusable solutions for creating objects; in particular, there is a whole category dedicated to them, the Creational Patterns.
Git: repository | https://medium.com/quick-code/java-tips-creation-of-a-new-object-55408a410507 | ['Marco Domenico Marino'] | 2019-10-24 08:48:44.003000+00:00 | ['Programming Tips', 'Java', 'Programming', 'Programming Languages'] |
A Guide to Pandas and Matplotlib for Data Exploration | Photo by Clint McKoy on Unsplash
After recently using Pandas and Matplotlib to produce the graphs / analysis for this article on China's property bubble, and creating a random forest regression model to find undervalued used cars (more on this soon), I decided to put together this practical guide, which should hopefully be enough to get you up and running with your own data exploration using Pandas and MPL! This article is broken up into the following Sections:
The Basic Requirements
Reading Data From CSV
Formatting, cleaning and filtering Data Frames
Group-by and Merge
Visualising Your Data
The Plot Function Basics
Seaborn violin and lm-plots
Pair plots and Heat maps
Figure Aesthetics
Plotting with multiple axis
Making your charts look less scientific
The Basic Requirements
Reading CSV / Required Imports for Matplotlib & Pandas
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline

car_data = pd.read_csv('inbox/CarData-E-class-Tue Jul 03 2018.csv')
Inline indicates to present graphs as cell output, read_csv returns a DataFrame, the file path is relative to that of your notebook.
Formatting, Cleaning and Filtering DataFrames
Often when dealing with a large number of features it is nice to see the first row, or the names of all the columns, using the columns property and head(nRows) function. However if we are interested in the types of values for a categorical such as the modelLine, we can access the column using the square bracket syntax and use .unique() to inspect the options.
print(car_data.columns)
car_data.head(2)
car_data['modelLine'].unique()
There are clearly multiple versions of the same model line entered under different variations of ‘Special Equipment’ so we will use a regex to replace anything containing SE with Special equipment. Similarly there are some columns with Nans (Not a Number) so we will just drop these with dropna(subset=[‘modelLine’]).
car_data = car_data.dropna(subset=['modelLine'])
car_data['modelLine'] = car_data['modelLine'].replace(to_replace={'.*SE.*': 'Standard equipment'}, regex=True)
We can also filter out unwanted values such as ‘undefined’ by comparing the rows of modelLine against some boolean question, this returns a boolean array of the same dimensions as the DataFrame rows which can be used to filter with the square bracket syntax again.
car_data = car_data[(car_data['modelLine'] != 'undefined')]
car_data['modelLine'].unique()
This is looking much better!
Note above how pandas never mutates any existing data, hence we have to overwrite our old data manually when we perform any mutations / filters. Whilst this may seem redundant, its extremely effective method of reducing unwanted side effects and bugs in your code.
Moving on, we also need to change the firstRegistration field typically this should be treated as a python date format, but instead we will treat it as a numeric field for convenience in performing regressions on the data in a future article.
Considering this data is associated with car registration, the year is really the important component we need to keep. Thus treating this as a numeric field means we can apply numerical rounding, multiplication / division to create a Registration Year feature column as below.
car_data['firstRegistration'].head(5)

car_data['firstRegistrationYear'] = round((car_data['firstRegistration'] / 10000), 0)
car_data['firstRegistrationYear'].head(5)
Looks like the output we were looking for.
Using Group-by’s and Merges
Group-by’s can be used to build groups of rows based off a specific feature in your dataset eg. the ‘modelLine’ categorical column. We can then perform an operation such as mean, min, max, std on the individual groups to help describe the sample data.
group_by_modelLine = car_data.groupby(by=['modelLine'])
car_data_avg = group_by_modelLine.mean()
car_data_count = group_by_modelLine.count()
Averages Data
Count Data: Note that this is simply a count of the records for each model Line
As you can see the mean value for each numeric feature has been calculated for each model Line. Group by’s are highly versatile and also accept lambda functions for more complex row / group labelling.
Next we will assemble a DataFrame of only the relevant features to plot a graph of availability (or car count) and average equipment per car. This DataFrame can be created by passing in a dictionary of keys which represent the columns and values which are single columns or Series from our existing data. This works here because both Data Frames have the same number of rows. Alternatively we can merge the two Data Frames by their indexes (modelLine) and rename the suffixes of repeated columns appropriately.
We will then plot these two variables sorting by equipment then availability as a horizontal bar graph.
# Since all the columns in car_data_count are the same, we will use just the first column
# as the rest yield the same result. iloc allows us to take all the rows and the zeroth column.
car_data_count_series = car_data_count.iloc[:, 0]

features_of_interest = pd.DataFrame({'equipment': car_data_avg['equipment'],
                                     'availability': car_data_count_series})

alternative_method = car_data_avg.merge(car_data_count, left_index=True, right_index=True,
                                        suffixes=['_avg', '_count'])

alternative_method[['equipment_avg', 'firstRegistration_count']].sort_values(
    by=['equipment_avg', 'firstRegistration_count'], ascending=True).plot(kind='barh')
Visualising Your Data
The Pandas Plot Function
Pandas has a built in .plot() function as part of the DataFrame class. It has several key parameters:
kind — ‘bar’,’barh’,’pie’,’scatter’,’kde’ etc which can be found in the docs.
color — Which accepts and array of hex codes corresponding sequential to each data series / column.
linestyle — ‘solid’, ‘dotted’, ‘dashed’ (applies to line graphs only)
xlim, ylim — specify a tuple (lower limit, upper limit) for which the plot will be drawn
legend— a boolean value to display or hide the legend
labels — a list corresponding to the number of columns in the dataframe, a descriptive name can be provided here for the legend
title — The string title of the plot
These are fairly straightforward to use and we’ll do some examples using .plot() later in the post.
Seaborn lmplots
Seaborn builds on top of matplotlib to provide a richer out of the box environment. It includes a neat lmplot plot function for rapid exploration of multiple variables. Using our car data example, we would like to understand the association between the equipment kit-out of a car and the sale price. Obviously we would also like this data segmented by model line to compare like with like.
import seaborn as sns
Passing in our column labels for equipment and price (x and y axis) followed by the actual DataFrame source. Use the col keyword to generate a separate plot for each model line and set the col_wrap 2 to make a nice grid.
filtered_class = car_data[car_data['modelLine'] != 'AVANTGARDE']

sns.lmplot("equipment", "price", data=filtered_class, hue="gears", fit_reg=False,
           col='modelLine', col_wrap=2)
As you can see putting a hue onto the chart for the number of gears was particularly informative, as these types of car tend to be no better equipped but more expensive. As you can see we could perform significant exploration of our dataset in 3 lines of code.
Seaborn Violin Plots
These plots are excellent for dealing with large continuous datasets, and can similarly be segmented by an index. Using our car dataset we can gain a greater understanding about the price distribution of used cars. Since the age of a car dramatically affects the price we will plot the first regsitration year as our x axis variable and price as our y. We can then set our hue to sepearate out the various model variants.
from matplotlib.ticker import AutoMinorLocator

fig = plt.figure(figsize=(18, 6))

LOOKBACK_YEARS = 3
REGISTRATION_YEAR = 2017
filtered_years = car_data[car_data['firstRegistrationYear'] > REGISTRATION_YEAR - LOOKBACK_YEARS]

ax1 = sns.violinplot('firstRegistrationYear', "price", data=filtered_years, hue='modelLine')
ax1.minorticks_on()
ax1.xaxis.set_minor_locator(AutoMinorLocator(2))
ax1.grid(which='minor', axis='x', linewidth=1)
Notice that the violin plot function returns the axis on which the plot is displayed. This allows us to edit property of the axis. In this case we have set minor ticks on and used the AutoMinorLocator to place 1 minor tick between each major interval. I then made the minor grid visible with line width of 1. This was neat hack to put a box around each registration year.
Pairplots & Correlation Heatmaps
In datasets with a small number of features (10–15) Seaborn Pairplots can quickly enable a visual inspection of any relationships between variables. Graphs along the left diagonal represent the distribution of each feature, whilst graphs on off diagonals show the relationship between variables.
sns.pairplot(car_data.loc[:,car_data.dtypes == 'float64'])
(This is only a section, I couldn’t fit all the variables in, but you get the concept.)
Similarly we can utilise the pandas Corr() to find the correlation between each variable in the matrix and plot this using Seaborn’s Heatmap function, specifying the labels and the Heatmap colour range.
corr = car_data.loc[:, car_data.dtypes == 'float64'].corr()

sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns,
            cmap=sns.diverging_palette(220, 10, as_cmap=True))
These two tools combined can be quite useful for identifying important features to a model quickly. Using the Heatmap for example we can see from the top row, that the number of gears and the first registration are positively correlated with price, where as milage is likely to be negatively correlated. Its by far a perfect tool for analysis, but useful at a basic level.
Figure Aesthetics
Plotting With Multiple Axis
Below is some data from my previous post on China’s Property Bubble. I wanted to show construction data for all cities and then provide a subsequent breakdown by city tier in a single figure.
Lets breakdown how we might create such a figure:
First we define the size of the figure to provide adequate graphing space. When plotting with multiple axis we define a grid on which axis may be place on. We then use the subplot2grid function to return an axis at the desired location (specified from top left corner) with the correct span of rows / columns.
fig = plt.figure(figsize = (15,12))
grid_size = (3,2)
hosts_to_fmt = []

# Place A Title On The Figure
fig.text(x=0.8, y=0.95, s='Sources: China National Bureau of Statistics',
         fontproperties=subtitle_font, horizontalalignment='left', color='#524939')

# Overlay multiple plots onto the same axis, which spans 1 entire column of the figure
large_left_ax = plt.subplot2grid(grid_size, (0, 0), colspan=1, rowspan=3)
We can then subsequently plot onto this axis by specifying the ax property of the plot function. Note that the despite plotting onto a specific axis, the use of the secondary_y parameter means a new axis instance will be created. This will be important to store for formatting later.
# Aggregating two series into a single data frame for ease of plotting
construction_statistics = pd.DataFrame({
    'Constructed Floorspace (sq.m, City Avg)': china_constructed_units_total,
    'Purchased Units (sq.m, City Avg)': china_under_construction_units_total,
})

construction_statistics.plot(ax=large_left_ax, legend=True,
                             color=['b', 'r'], title='All Tiers')

# Second graph overlayed on the secondary y axis
large_left_ax_secondary = china_years_to_construct_existing_pipeline.plot(
    ax=large_left_ax, label='Years of Backlog', linestyle='dotted',
    legend=True, secondary_y=True, color='g')

# Adds the axes for formatting later
hosts_to_fmt.extend([large_left_ax, large_left_ax_secondary])
To produce the breakdowns by city tier, we again utilise the subplot2grid but this time change the index on every loop, such that the 3 tier charts plot one below the other.
# For each City Tier overlay a series of graphs on an axis in the right hand column,
# its row position determined by its index
for index, tier in enumerate(draw_tiers[0:3]):
    tier_axis = plt.subplot2grid(grid_size, (index, 1))

    china_constructed_units_tiered[tier].plot(ax=tier_axis, title=tier,
                                              color='b', legend=False)

    ax1 = china_under_construction_units_tiered[tier].plot(
        ax=tier_axis, linestyle='dashed', label='Purchased Units (sq.m, City Avg)',
        title=tier, legend=True, color='r')

    ax2 = china_property_price_sqmetre_cities_tiered[tier].plot(
        ax=tier_axis, linestyle='dotted', label='Yuan / sq.m',
        secondary_y=True, legend=True, color='black')
    ax2.set_ylim(0, 30000)

    hosts_to_fmt.extend([ax1, ax2])
Ok so now we have generated the correct layout and plotted data:
Make Your Charts Look Less Scientific
In the case of the above chart, I went for a styling similar to the ft.com. First up we need to import our fonts via Matplotlib font manager, and create a font properties objects for each respective category.
import matplotlib.font_manager as fm

# Font Imports
heading_font = fm.FontProperties(
    fname='/Users/hugo/Desktop/Playfair_Display/PlayfairDisplay-Regular.ttf', size=22)
subtitle_font = fm.FontProperties(
    fname='/Users/hugo/Library/Fonts/Roboto-Regular.ttf', size=12)

# Color Themes
color_bg = '#FEF1E5'
lighter_highlight = '#FAE6E1'
darker_highlight = '#FBEADC'
Next we will define a function which will:
Set the figure background (using set_facecolor)
Apply a title to the figure using the specified title font.
Call the tight layout function which utilises the plot space more compactly.
Next we will iterate over each axes within the figure and call a function to:
Disable all except the bottom spines (axes borders)
Set the background colour of the axis to be slightly darker.
Disable the white box around the legend if a legend exists.
Set the title of each axis to use the subtitle font.
Finally we just need to call the formatter function we created and pass in our figure and the axes we collected earlier.
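A rough sketch of such a formatter, reusing the fonts and colours defined above (the figure title string here is just an example, not the one used in the original charts), could be:

def format_axis(ax):
    # Keep only the bottom spine and darken the plotting area slightly
    for side in ['top', 'right', 'left']:
        ax.spines[side].set_visible(False)
    ax.set_facecolor(darker_highlight)
    # Remove the white box around the legend, if there is one
    legend = ax.get_legend()
    if legend is not None:
        legend.set_frame_on(False)
    # Use the subtitle font for the axis title
    ax.set_title(ax.get_title(), fontproperties=subtitle_font)

def format_figure(fig, axes, title):
    fig.set_facecolor(color_bg)
    fig.suptitle(title, fontproperties=heading_font)
    for ax in axes:
        format_axis(ax)
    fig.tight_layout()

format_figure(fig, hosts_to_fmt, 'China Construction Statistics')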
Conclusion
Thanks for reading this tutorial, hopefully this helps get you up and running with Pandas and Matplotlib. | https://towardsdatascience.com/a-guide-to-pandas-and-matplotlib-for-data-exploration-56fad95f951c | ['Hugo Dolan'] | 2019-03-17 08:25:21.207000+00:00 | ['Matplotlib', 'Programming', 'Pandas', 'Data Science', 'Data Visualization'] |
Book Review: Boy Parts // Eliza Clark | Trigger warning: rape, assault
Let's play a word association game, shall we? If I say 'model', what's the first thing that comes to your mind? Perhaps you think of a tall, leggy Victoria's Secret model. Maybe you think of transgender model Munroe Bergdorf and her racism row with L'Oreal. Or maybe your mind goes to Canadian fashion model Winnie Harlow, whose vitiligo gives her a particularly memorable face. In any case, I'm guessing the image that came to mind was of an attractive woman.
And you could hardly be blamed for having such a response; the modelling industry has been associated predominantly with women since its inception in 1853, when Charles Frederick Worth, the "father of haute couture", asked his wife, Marie Vernet Worth, to model clothes he'd designed. And, typically, the person on the other side of the camera to the model has been a man; we know from the Hollywood #MeToo scandal how that power dynamic has often worked out.
Indeed, in September 2020, the model and actress Emily Ratajkowski published a harrowing essay titled: 'Buying Myself Back: When does a model own her own image?'. In it, she details the night she was assaulted by the photographer Jonathan Leder, who profited from "the most revealing and vulgar Polaroids" he had taken the night of her assault. In Eliza Clark's fictional Boy Parts (2020), however, it is a female character profiting from the abuse of her male subjects.
Clark takes the ‘typical’ relationship dynamic between female model and male photographer and flips it on its head. In doing so, she provokes the reader to reflect on issues of contemporary sexuality and gender that have grown louder and louder in recent months, particularly with recent backlash to Harry Styles’ November 2020 cover shoot, where he is pictured wearing a dress .
In Boy Parts, Clark's female protagonist Irina photographs average-looking men scouted from the streets of Newcastle in an incredibly clever black-comedy that, as the narrative develops like a polaroid picture, reveals itself to be an acerbic portrait of female trauma and toxic masculinity. Clark's novel puts the class stratifications and power structures of the art world under the microscope, as Irina zooms into moments in her life where she has endured unspeakable things, flicking back through her old projects after she is asked to submit work for an upcoming gallery installation.
Clearly reeling from childhood sexual trauma at the hands of both men and women, Irina has — like so many before her — become an abuser herself. Her main victims are the innocent and physically non-threatening “Eddie from Tesco”, who stands at just 5"5 and drives his Mum’s old Micra, as well as her female friend and former lover, Flo, who Irina blows hot and cold with over the course of the book, in a typical pattern of narcissist abuse. And of course, there’s the homeless man who should think himself lucky to be photographed by someone as talented as Irina, too.
Consent has become part of mainstream discourse, with universities leading consent workshops and Prime Minister Boris Johnson taking part in training on sexual harassment earlier this year. So perhaps it's unsurprising that we are seeing more and more authors exploring the "grey" areas where one or more characters are left with a sinking feeling in the bottom of their stomach that something wasn't OK about a sexual encounter; from a non-consensual blowjob in Holly Bourne's How Do You Like Me Now? (2019) to Sally Rooney's Normal People (2018), where Marianne is studying abroad in Sweden and has a BDSM relationship with Lukas, who forces Marianne to pose for him naked and tied-up, the camera lingering on the bruises on her wrists. When Marianne begs him to stop, Lukas ignores her. "You asked for this," he says. But while these novels look at females harmed by men not respecting their consent, in Boy Parts it is Irina who ignores Eddie from Tesco's evident discomfort as she violates his boundaries in the way that others have repeatedly violated her.
Abuse in the fashion and art industry is rife, with countless stories of predatory photographers luring young men and women into their 'studios' where they are asked to undress and then the unthinkable happens. Eliza Clark's Boy Parts is an electrifying look at the relationship between photographer and subject, which turns the more typical gender and power dynamic on its head and in doing so asks some fundamentally feminist questions about sex, gender and power.
Clark's hypnotically dislikeable protagonist Irina has the repulsiveness of Ottessa Moshfegh's unnamed narrator in My Year of Rest and Relaxation (2018) combined with the sex appeal and psychological instability of Bret Easton Ellis' Patrick Bateman in American Psycho (1991). The debt to Easton Ellis is most obvious in the scene where Irina has dinner with sugar-daddy character "Uncle Stephen" and glasses him, with the clientele of the high-end establishment they are in seeming more concerned about whether they'll get comped for the inconvenience caused to their evening than the man bleeding in the corner. But obvious literary debts aside, Boy Parts is still an incredibly thrilling read — certainly one for the Christmas wish list, as that time of year rapidly approaches.
Words by Beth Kirkbride | https://medium.com/the-indiependent/book-review-boy-parts-eliza-clark-25d8378be57c | ['Beth Kirkbride'] | 2020-11-26 15:19:14.873000+00:00 | ['Literature', 'Culture', 'Fashion', 'Books', 'Art'] |
The Sound of Silence: Phish in 2015 | Part 1.
It was after Night 1 in Atlanta that the firework — bright vermillion burst of heat and light — skipped right by me, missing my ankle by inches. Fireworks were quickly becoming a theme for the year. Remember the fireworks over Soldier Field? And Magna? And Mexico, so I heard…or saw on Periscope.
The Lakewood lot was one of the rowdier scenes of the Summer, at least of the shows I caught. Fireworks, bourbon, shirtless Southern boys, and girls. It was my second show of tour and the previous 48 hours had been a predictable adventure. Walking through the TSA Scanner the morning after Grand Prairie, I was vibrating so hard from the night before that I was absolutely sure the machine would explode while a cadre of Texas jackboots locked me up and threw away the key.
Luckily, that didn’t happen, I successfully boarded and awoke hours later in a Buckhead hotel room that wasn’t even mine, with only a foggy sense of how I had arrived there. Apparently I saw some friends on the train into town. Apparently I ordered a club sandwich and didn’t pay for it.
In the end, I got in and out of Dallas in less than 24 hours which is more than enough time to spend in Texas. Still, between helping to crack in a slick new venue, a first-set “Steam” with some extended jamming and a ranging “Chalkdust,” plus good company, there were worse places to hop on than Dallas.
Just a couple of weeks prior Phish had kicked off their Summer Tour with two highly anticipated shows in Bend, OR. Pre-tour energy had naturally been running high; Phish's return to the stage after their usual Winter hibernation coupled with Trey's participation in the Fare Thee Well shows. The fires were well stoked by then, the fans ready to rage right out of the gate. We'd hear glimmers of Trey's experience with The Dead throughout the Summer, some favored effects and an occasional referential lick or two would keep the memory alive on tour.
What's the best way to cleanse the palate after all that Dead music and media hubbub? New material of course. The biggest news out of Bend was the debuts, seven songs in total: "Blaze On, Shade, No Men, How Many People Are You? Heavy Rotation, Scabbard," and "Mercury." The exciting new material papered over what would otherwise be a mundane complaint: the band's initial stabs at improv failed to connect. Save for that gripping "Simple" on the second night of tour, which was a good omen. Listen back if you haven't.
Then down the coast, in California, Phish just sort of clicked, pounding out a righteous, coherent show at Mountain View. Shoreline’s stellar second set is one of those long interwoven pieces. Featuring a monster “Twist,” and a terrific first jam for “Blaze On,” not the last we would hear from these vehicles. Shoreline set an early high bar for tour and gave the faithful back East something to look forward to.
All Signs Point to Yes.
Next night at the Forum was an energy explosion, the frenetic Saturday Night counterpoint to Friday’s delicacy in the Bay Area.
Martian Monster opened The Forum gig as it would several other quality shows this year. Phish put on a thrilling show for the often overlooked SoCal fans and any music industry bigwigs and heavyweights in attendance got a good strong dose of the band and an answer to why Phish continues kicking ass. One could almost hear Bob Lefsetz asking his readers if Phish was relevant again.
Austin and Dallas followed, two rather humdrum shows in terms of re-listening. But, as is often missed (and this case proves no different), there's plenty to appreciate when Phish gets on a roll. And the band was snowballing early. There's not an "off" Phish show to be found. Listen to it all. There were signs of increased care and creativity all Summer long.
Digging into some new material seems to have reinvigorated the band, as it tends to do, forcing them to quickly focus on themselves. Plus a bit of practice never hurts, and what better reason to make the time than some new material? Early last year I often wondered how Mike, Page and Fish felt about the Fare Thee Well spectacle. Would they feel overshadowed by it? Would it strain any relationships? At this point, doubtful. There is no drama, just gratitude. We're all in Bonus Time with Phish aren't we? Aren't we always winning no matter what, as long as these four guys get on stage every once in a while?
Fishman was noticeably absent in Chicago, I hardly think that means anything. Page and Mike were getting down all weekend long and that was great to see. Chicago man. Maybe we’ll be back for Wrigley? I’d go see that. | https://medium.com/the-phish-from-vermont/the-sound-of-silence-phish-in-2015-6cc40135fa3c | ["The Baby'S Mouth"] | 2016-01-25 18:38:25.654000+00:00 | ['Autobiography', 'Music', 'Phish'] |
Managing Multiple Environments in Terraform | Managing Multiple Environments in Terraform
How to use Terraform workspaces to manage multiple states
Photo by John O'Nolan on Unsplash.
Terraform has revolutionised the way we look at infrastructure, and with Cloud, it is a stepping stone toward “everything as code.” That is a quantum leap in the history of computing where everything — including hardware and operating systems — is virtualised and can be defined as code.
Terraform has simplified the lives of infrastructure architects, admins, and organisations alike, and it helps in building a computing infrastructure that is ever-changing as well as more scalable and elastic than ever before.
Terraform uses a declarative, high-level, and immutable set of code to define infrastructure. Infrastructure admins would typically just declare what infrastructure they would like to have without worrying about the internal API calls.
Terraform is cloud-agnostic and can help you manage multiple cloud configurations using a single setup. You can also declare dependencies between components spread across the cloud.
One challenge with IaC is reusability. Terraform recommends you keep your code DRY (Don’t Repeat Yourself). That is especially true if you are managing multiple environments. | https://medium.com/better-programming/managing-multiple-environments-in-terraform-5b389da3a2ef | ['Gaurav Agarwal'] | 2020-07-01 15:02:35.826000+00:00 | ['Terraform', 'AWS', 'Technology', 'DevOps', 'Programming'] |
Data Visualization Using Seaborn | Visualizing the distribution of the dataset.
1. Univariate distribution
histogram
kdeplot
distplot
2. Bivariate distribution
joint plot
pairplot
Univariate distribution
1. Histogram
A histogram is used for visualizing the distribution of a single variable(univariate distribution). A histogram is a bar plot where the axis representing the data variable is divided into a set of bins, and the count of observations falling under each bin is shown on the other axis.
Data variable vs count
Importing libraries and dataset
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns

df = pd.read_csv("Results.csv")
df.head(3)
Now we will create histogram plots.
Creating histogram for “Marks” variable. Let’s see the distribution of marks in this dataset.
sns.histplot(x="Marks",data=df)
Inference
From the plot, we can see the range of marks (5 to 100). This plot also clearly shows that more students get marks of more than 80.
hue
In seaborn, the hue parameter determines which column in the data frame should be used for color encoding.
We can include the “Grade” variable as a hue parameter.
sns.histplot(x="Marks",data=df,bins=10,hue="Grade")
Inference.
Now, after adding the hue parameter, we get more information like which range of marks belongs to which grade.
2. KDE plot
A kernel density estimate (KDE) plot is a method for visualizing the distribution of observations in a dataset, similar to a histogram. KDE represents the data using a continuous probability density curve in one or more dimensions.
KDE →Kernel density estimation is the way to determine the probability density function of a continuous variable.
Data variable vs density
sns.kdeplot(x="Marks",data=df)
Inference
By using the KDE plot, we can infer the probability density function of the continuous variable.
hue parameter in KDE plot
sns.kdeplot(x="Marks",data=df,hue="Grade")
3. Distplot
Distplot is a combination of a histogram with a line (density plot) on it. Distplot is also used for visualizing the distribution of a single variable(univariate distribution).
In distplot, the y-axis represents density. So the histogram height shows a density rather than a count. This is implied if a KDE or fitted density is plotted.
sns.distplot(df["Marks"])
To visualize only a density plot, we can give hist=False .
sns.distplot(df["Marks"],hist=False)
To visualize only the histogram, we can give kde=False .
sns.distplot(df["Marks"],kde=False)
Bivariate distribution
1. jointplot
A Jointplot displays the relationship between two numeric variables. It is a combination of scatterplot and histogram.
sns.jointplot(x="Marks",y="Study_hours",data=df)
The joint plot also draws a regression line if we mention kind=” reg”.
sns.jointplot(x="Marks",y="Study_hours",data=df,kind="reg")
Using hue as a parameter
sns.jointplot(x="Marks",y="Study_hours",data=df,hue="Grade")
Pairplot
Pairplot is used to describe pairwise relationships in a dataset. It visualizes the univariate distribution of all variables in a dataset along with all of their pairwise relationships. For n variables, it produces an n*n grid of plots.
The diagonal plots are histograms and all the other plots are scatter plots.
sns.pairplot(df)
Inference
Data distribution should show some trends. In this example, Marks vs Study_hours gives a linear relationship(positive correlation).
Student_Id column is not showing any relationship with the “Marks” and also the “Study_hours” variable.
Student_Id column can be dropped from the dataset.
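For instance, one small illustrative snippet for dropping it (assuming the column is named exactly Student_Id, as above):

df = df.drop(columns=["Student_Id"])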
Using hue parameter in pairplot. | https://medium.com/towards-artificial-intelligence/data-visualization-using-seaborn-369ec156f03e | ['Indhumathy Chelliah'] | 2020-12-09 13:44:35.998000+00:00 | ['Machine Learning', 'Python3', 'Artificial Intelligence', 'Data Science', 'Data Visualization'] |
Understanding React | Let’s Get Started with the Fundamentals
Snippet
For VS Code users, here are some useful snippets:
Extension Name : ES7 React/Redux/GraphQL/React-Native snippets
rce -> Creates a Class based component
rfce -> Creates a Functional component
rconst -> Creates the constructor for the class
There are a lot of other extensions out there, but these are the ones used most frequently.
Lifecycle Methods
Mounting
When an instance of component is created or inserted into the DOM
constructor
static getDerivedStateFromProps
render
componentDidMount
Updating
When a component is being re-rendered as a result of changes to either its props or state.
static getDerivedStateFromProps
shouldComponentUpdate
render
getSnapshotBeforeUpdate
componentDidUpdate
Unmounting
When the component is being removed from the DOM
componentWillUnmount
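As a rough illustration of where these hooks fire, here is a minimal class component; the Clock component and its timer logic are my own placeholder example, not from the original article.

class Clock extends React.Component {
  constructor(props) {
    super(props);
    this.state = { time: new Date() };
  }

  componentDidMount() {
    // Runs once, after the first render: start timers, fetch data, etc.
    this.timerId = setInterval(() => this.setState({ time: new Date() }), 1000);
  }

  componentDidUpdate(prevProps, prevState) {
    // Runs after every re-render caused by a change in props or state
  }

  componentWillUnmount() {
    // Clean up just before the component is removed from the DOM
    clearInterval(this.timerId);
  }

  render() {
    return <p>{this.state.time.toLocaleTimeString()}</p>;
  }
}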
Error handling
When there is an error during rendering, in a lifecycle method, or in a constructor of a child component
static getDerivedStateFromError
componentDidCatch
Fragments
A common pattern in React is for a component to return multiple elements. Fragments let you group a list of children without adding extra nodes to the DOM.
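A minimal sketch (the Columns component is illustrative and is meant to be rendered inside a table row):

function Columns() {
  return (
    <React.Fragment>
      <td>Hello</td>
      <td>World</td>
    </React.Fragment>
  );
}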
Pure Component
A pure component re-renders the class component only when a shallow comparison of props and state detects a change. This results in a performance improvement. It works only with class based components.
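A hedged sketch of the idea (component name and markup are illustrative):

class Greeting extends React.PureComponent {
  render() {
    // Re-renders only when the shallow comparison of props/state finds a change
    return <h1>Hello, {this.props.name}</h1>;
  }
}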
Memo
It is a higher order component. What pure component is to class based components, memo is to functional components.
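The functional equivalent of the sketch above (again, the Greeting component is just an illustration):

const Greeting = React.memo(function Greeting({ name }) {
  // Skips re-rendering when the new props are shallowly equal to the previous ones
  return <h1>Hello, {name}</h1>;
});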
Refs
Refs make it possible to access DOM nodes in React. The two valid approaches are:
React.createRef() method
Callback method.
Refs can be used with both functional and class component. Refs can also be passed from a parent component to the child component.
Forwarding Refs: Refs can also be forwarded from the parent component down to a native input element using the React.forwardRef API. Basically the child component receives the ref from the parent component and attaches it to the native input element.
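A small sketch combining createRef with ref forwarding (the TextInput and Form component names are my own illustration):

// The child forwards the ref it receives to the native input element
const TextInput = React.forwardRef((props, ref) => (
  <input type="text" ref={ref} {...props} />
));

class Form extends React.Component {
  constructor(props) {
    super(props);
    this.inputRef = React.createRef();
  }

  componentDidMount() {
    // Direct access to the underlying DOM node
    this.inputRef.current.focus();
  }

  render() {
    return <TextInput ref={this.inputRef} />;
  }
}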
Portals
React portals provide a way to render children into a DOM node that exists outside the DOM hierarchy of the parent component. They provide the ability to break out of the DOM tree. A portal is created with the function ReactDOM.createPortal(child, container), where container is an existing DOM node. A portal otherwise behaves like a normal React child. We need Portals to deal with parent-child CSS constraints.
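A hedged sketch, assuming a sibling DOM node with id "modal-root" exists next to the app's root element (both the Modal component and the id are illustrative):

function Modal({ children }) {
  // Renders the children into the #modal-root node, outside the parent tree
  return ReactDOM.createPortal(
    children,
    document.getElementById('modal-root')
  );
}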
Error Boundary
Error Boundaries are React components that catch JavaScript errors in the child component tree, log these errors, and display a fallback UI. A class component that implements either one or both of the lifecycle methods getDerivedStateFromError and componentDidCatch becomes an error boundary.
getDerivedStateFromError: This is a static method that is used to render the fallback UI after an error is thrown.
componentDidCatch: This is a method which is used to log the error message.
Error boundaries catch errors during rendering, in lifecycle methods, and in the constructors of the whole tree below them; however, they do not catch errors inside event handlers.
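Putting the two methods together in one minimal sketch (the fallback message is illustrative):

class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error) {
    // Switch to the fallback UI on the next render
    return { hasError: true };
  }

  componentDidCatch(error, info) {
    // Log the error message and component stack
    console.error(error, info);
  }

  render() {
    if (this.state.hasError) {
      return <h1>Something went wrong.</h1>;
    }
    return this.props.children;
  }
}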
Higher Order Component — HOC
A pattern where a function takes a component as an argument and returns a new enhanced component. It shares common properties within the components without having to repeat the code.
const newComponent = higherOrderComponent( originalComponent )
Non-technical example: const Ironman = withSuit( TonyStark )
It is a nice little pattern that can be used to share common functionality between React Components.
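Continuing the article's non-technical example, a hedged sketch of what withSuit might look like; the suit prop, its value, and the assumption that a TonyStark component already exists are all purely illustrative.

function withSuit(WrappedComponent) {
  // Returns a new, enhanced component that injects a shared "suit" prop
  return function Enhanced(props) {
    return <WrappedComponent suit="Mark 42" {...props} />;
  };
}

const IronMan = withSuit(TonyStark);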
Render Props
The term "render prop" refers to a technique for sharing code between React components using a prop whose value is a function.
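A common sketch of the pattern; the Mouse example mirrors the one in the React docs, and the component and prop names are illustrative.

class Mouse extends React.Component {
  state = { x: 0, y: 0 };

  handleMove = (event) => {
    this.setState({ x: event.clientX, y: event.clientY });
  };

  render() {
    // What gets rendered is decided by whoever supplies the render prop
    return (
      <div onMouseMove={this.handleMove}>
        {this.props.render(this.state)}
      </div>
    );
  }
}

// Usage: share Mouse's tracking logic with any UI via the render prop
function App() {
  return <Mouse render={({ x, y }) => <p>The mouse is at ({x}, {y})</p>} />;
}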
React Context
Context provides a way to pass data through the component tree without having to pass props down manually at each level.
It mainly includes three steps: | https://medium.com/dev-genius/understanding-react-469e1ac3127d | ['Abhishek Srivastava'] | 2020-09-04 20:56:19.703000+00:00 | ['JavaScript', 'Web Development', 'React', 'App Development', 'React Hook'] |
36 Content Marketing Tools The Essential Toolbox | 36 Content Marketing Tools The Essential Toolbox
The right content marketing tools turn any content marketing strategy into a highly successful plan. The problem? There are dozens of tools to choose from. How do you know which tools to use and which ones to leave behind? In this article, you’ll see the top essential content marketing tools.
I’ve researched the best content marketing tools that make a difference when your goal is curating, creating, and distributing your content across the Internet. Avoid all the confusion. Simply select a few of the following 36 tools. Let’s start it…
Content Creation Tools
Success with content marketing starts with outstanding content to deliver to your audience. Use these content creation tools to research and create your content.
1. Use Site:search
Do you know that you’ve previously written something quotable on your website but can’t remember which post it’s in?
Or, do you need to search a third-party website for research material? Site: search is a simple hack that lets you limit a search to a particular site’s content. This is the formula: site:samplewebsite.com [keyword]
Writing a post about book editing, and you remember reading something about that on becomeawritertoday.com? Enter the formula into Google this way: site:becomeawritertoday.com book editing. so, your search brings up every post on becomeawritertoday.com that deals with book editing.
2. Search Inside Giphy
Use the Giphy Chrome extension to find the perfect GIF. Take these four steps:
Firstly, open the extension inside Chrome
Secondly, conduct a search
Select a GIF
Finally, drag & drop the GIF into your document
Giphy provides support for its extension, and it works inside applications such as:
Facebook
Twitter
Gmail
3. Google’s Explore Tool For Content Marketing Tools
Did you know that you can conduct Google searches without leaving your Google document? It’s possible when using Google’s “Explore” tool.
Highlight a keyword inside your document, right-click, and select “Explore.” You’ll notice a search results icon open in the right-side column of the document with three research options:
Web
Images
Drive
4. Google Alerts
Improve your content marketing results by writing about news inside your industry immediately after it happens. Use Google Alerts to send notifications to your account the moment something interesting comes up. It’s simple to set up an alert:
Firstly, click here to open up Google Alerts
Secondly, type a keyword into the search box
Click "Create Alert"
Finally, get real-time alerts by clicking on "show options." Select "As it happens" as your "How often" setting
Content Curation Tools
Sometimes you need to collect other people’s content instead of creating everything on your own. So, it’s a process called content curation, and there are quality tools available to help make the job easier.
5. Curata Content Marketing Tools
Curata lets you use authors, news sites, keywords, and blog posts as content sources. Moreover, use the intuitive dashboard to add commentary, images, and even share or schedule the content you’ve collected.
6. Post Planner
Post Planner’s homepage cites Buffer and Buzzsumo’s research that using their app generates 510% more engagement. This tool uses past performance to curate shareable content for you. After creating social media posts around this curated content, Post Planner allows you to drag and drop your posts directly into a content calendar.
7. Pinterest
Use Pinterest to curate visual content for later use. Set up boards to collect images and infographics. You can include tags and links to help you search and find content when creating blog posts or social media posts.
8. Pocket
With Pocket, you’ll use a browser bookmarklet to quickly save content. Use tags to label each piece of content and make it easy to find your favorite topics at any time.
9. Scoop.it
Scoop.it automates the content curation process. Inside your account, use an area of interest to build out a topic page. Next, tell the app to use RSS feeds, websites, searches, and blogs to pull in related content. Use the curated content when it’s time to publish your next article or social media post.
Social Media Tools
Social media is an important part of any content marketing plan. It can also overtake your schedule if you’re not careful. Use these tools to help you:
Analyze campaign effectiveness
Automate your posting schedule on social media
Track & assess which new content ideas to use next
10. BuzzSumo Content Marketing Tools
Use Buzzsumo to view which content gets shared the most across a variety of social media platforms.
It helps you discover influencers and what types of content they’re sharing on their social sites. This is effective when finding partnership possibilities where influencers share your content for you. You should also use this tool to track trending content for new ideas about creating your next blog post around.
11. Followerwonk
Is Twitter a big part of your content plan? Use Followerwonk to gain insight into your Twitter audience. You’ll find more success growing a Twitter following after using this tool’s analytics that reveals who’s following you, who they’re connected with, and which influencers connect with for maximum content distribution.
12. Tailwind
Leverage your Instagram and Pinterest efforts with this simple, yet powerful, content scheduling tool. Tailwind tracks when your audience is online and optimizes when your posts on Instagram or Pinterest should get released. The result is more social mention and engagement because your content gets posted when your audience is most active.
Save time with the following Tailwind features:
Multi-board pinning
Bulk image upload
Hashtag lists
Pin looping
Drag & drop calendar
There’s also robust analytics and reporting area where you’ll uncover which posts and pins work well and which content isn’t resonating with your audience.
13. Buffer Content Marketing Tools
Supercharge your social media marketing campaigns using Buffer. It lets you build out various social media updates over several different channels based on a schedule that’s set up ahead of time. Track how well your content performs and make decisions about adjustments after viewing the analytics dashboard.
Beginner Blogging Tools
You may be wondering: What’s the best way to create and market content? Here’s the answer in one word: Blogging.
If you’re starting on your content marketing journey and haven’t yet started a blog, then it’s time to do it. The reason is that you don’t own your social media platforms. Social media is like building your business on sand because it can get taken away if you violate a platform’s service terms.
Learn how to start a successful blog so that you’re building a business on bedrock. Your “home base” online is your website. You can write and say what you want. No one can take your business away as long as you pay your hosting fees. Here are tools to easily get started with blogging.
14. Blog Tyrant Content Marketing Tools
Blog Tyrant offers many blogging tools all in one place.
Learn about the following areas:
SEO
Email marketing
Writing tips
WordPress plugins to start with
15. Welcome.ly
It doesn’t matter if you’re a beginner or a seasoned blogger; you need a homepage that converts traffic into subscribers. So, Welcome.ly makes it easy to set up a homepage in such a way as to maximize visitor’s focus on signing up for a lead magnet. Moreover, this tool forces you to keep distractions to a minimum by limiting homepage elements to:
Content upgrade opportunities
Social proof
Pilot story
16. Nameboy
You need to come up with the perfect domain name for your blog. Nameboy makes it easy to do via its domain name generator tool. So, just enter a word or two into the search box and discover available domain names that fit your business.
Essential Content Marketing Tools
Plenty of content management system (CMS) tools exists that help speed up the content creation process. Choose one of the following and keep your content organized.
17. HubSpot
Are you looking for an all-in-one CRM, sales, and inbound marketing suite of tools? Look no further than HubSpot. This tool gives you everything you need when it comes to creating actionable content. Many of HubSpot’s content marketing tools are free to try:
Pop-up tools
WordPress plugin for content management & lead capture
Form builder
Live chat & chatbots
Its CRM allows you to centralize all your content marketing efforts in one place so that you create, optimize, and distribute content out to the correct audience. HubSpot continuously analyzes your overall marketing plan so that lead generation and revenue numbers grow over time.
18. WordPress
One of the most popular content management systems used worldwide is WordPress. In simplest terms, WordPress is best defined as an open-source CMS that allows both beginners and seasoned veterans to create and manage their websites.
Use WordPress to create content, upload content, track website visitors, and distribute content to your audience. Its plugin architecture provides the ability to uniquely customize your content management efforts. Moreover, you can use WordPress to set up:
Blogs
Social networks
Static sites
Memberships
E-commerce stores
Online courses
Portfolios
19. Contentools
Contentools is an interesting content management system that keeps your content brainstorming, creation, and publishing needs all in one place. So, it includes an "ideas pipeline," workflows, the marketing calendar, social media scheduling, and analytics and tracking tools.
Use the in-app editor to build out your content and check for search engine optimization with its SEO indicator. Contentools uses artificial intelligence to power its content insights tool. In addition, social media marketing tools provide auto-publishing functionality. Need to integrate with other tools?
No problem. Because it integrates with other third-party tools like Dropbox, Evernote, and Salesforce via Zapier.
Content Tracking Tools
What do you do after creating and distributing content? So, Track it! Analyzing what your audience responds to helps with understanding the type of content you need to create. Ever feel like you don’t have enough time? So, analyze the content your audience engages with the most and invest valuable time creating more of it.
Do you want to know the best places to post content or how to run profitable paid ad campaigns? Track referral sources so that you know which platforms send the most visitors back to your website. Below you’ll discover tools for tracking how well content performs and gain insight into audience interaction.
20. Link Explorer Essential Content Marketing Tools
Moz puts out a free tool called Link Explorer, and you should use it to investigate how well your website stacks up against competitors. Moreover, Link Explorer reveals the domain authority and backlink profile for any website. In addition, track whether your content generates backlink activity. Check competitor backlink profiles to uncover new partner possibilities that push your content up in the search engines.
21. Feedly Essential Content Marketing Tools
Feeling overwhelmed when it comes to following important topics and websites? Look no further than Feedly. It’s a one-stop-shop for keeping up with great content to share, news, and fresh topics to add to your content marketing calendar.
22. MonsterInsights
Always pay attention to your content marketing analytics. So, it’s the only way to know whether your strategy is working. MonsterInsights makes this tracking process simple because it sends the KPI reports and metrics. So you need it to make quality decisions straight into your WordPress back office. Connect Google Analytics to MonsterInsights and get more details added in about the following areas:
Firstly, time on page
secondly, pageviews
Demographics
Sessions
Top-performing posts
Bounce rates
Finally, referred traffic
Use MonsterInsights to track other important metrics like optimum publication times and SEO scores. In addition, integrate other platforms such as WPForms, WooCommerce, and Adsense.
Headline Optimization Tools
I think you’ll agree with me when I say: Your content marketing plan suffers if you can’t get readers interested in your headlines and titles. Excellent titles improve search engine results page click-through-rates and result in improved rankings.
Great headlines entice your audience to dig deeper into your main content. As a result, our subscribers and sales. The next set of content marketing tools will help you write the titles in addition to headlines that drive your entire content marketing plan.
23. Emotional Marketing Value Headline Analyzer
The Advanced Marketing Institute’s free tool. The Emotional Marketing Value Headline Analyzer gives each headline a score. In addition, this headline analyzer checks how well your headline affects certain emotions in your readers. More specifically:
Intellectual: Words that produce effectiveness when decisions about a purchase require careful evaluation or reasoning.
Empathetic: Words that create impactful, positive emotions and resonate in an empathetic way.
Spiritual: Words that hit people with deep emotional impact.
24. Headline Analyzer From IsItWP Essential Content Marketing Tools
Nothing works better when it comes to creating headlines than adding a second analyzer to your bag of tricks. Moreover, IsItWP offers a Headline Analyzer that rates your headlines based on structure, readability, and word usage.
This tool provides a score after looking at the following factors:
Uncommon words
Word balance
Emotional words
Word count
Common words
Headline length
Power words
IsItWP’s analyzer also shows you how each headline shows up in your email service provider or Google’s search results so that you can make adjustments “on the fly.”
25. Answer The Public Content Marketing Tools
Did you know that questions make powerful headlines? So, use Answer The Public to research questions your target market is asking.
This tool generates several questions based on your keyword that you enter into the search bar. Moreover, answering the questions your headlines and titles ask serves your audience in a meaningful manner. It also helps your content get found when searchers enter those questions into the search engines.
26. Sharethrough Headline Analyzer
Another headline analyzer? You bet!
The Sharethrough headline analyzer provides an impression and engagement score based on neuroscience. So, you and your team can't use too many headline analyzer tools, considering how much click-through rates affect your Google rankings.
27. Blog Ideas Generator From HubSpot
How would you like to generate a group of headline ideas all at the same time? Enter up to five nouns into HubSpot’s Blog Ideas Generator, and your wish gets granted. Repeat the search as often as you’d like for more ideas.
Content Upgrade Tools
Think about it: There’s no point in creating content and distributing it if you’re not focused on generating leads.
The following tools help you create content upgrades that build your email marketing list.
28. Attract.io Essential Content Marketing Tools
Attract.io makes creating lead magnets a breeze. In addition, the tool helps you decide which type of lead magnet is best for your current project:
Firstly, a case study
Secondly, a checklist
How-to guide
Finally, a resource guide
It then provides a selection of templates based on a list curated from real businesses. Select a template, add your images and color scheme, and you’re left with a stunning content upgrade. Attract.io then provides a usable link so that your lead magnet is instantly shareable.
29. Color Hunt
I mentioned picking your color scheme when explaining Attract.io above. If you feel challenged in this area because you aren’t a designer, use Color Hunt. Moreover, this tool provides a variety of possibilities that help make selecting your content upgrade’s color scheme a breeze. Add their Chrome Extension for the greatest ease of use.
30. Canva Essential Content Marketing Tools
Use Canva to create lead magnets for your content marketing needs. So, you can create the following resources that make excellent content upgrades:
eBooks
Infographics
Charts
Checklists
Workbooks
Worksheets
Cheatsheets
Planners
In addition, this tool comes with a simple drag & drop editor, so you shouldn’t have any problems even if you’re in the design skills area.
31. Pixabay
A great content upgrade often requires interesting images. Pixabay offers quality royalty-free images that don’t require a budget. So, simply type your keyword into the search box and download as many images as needed at no cost.
32. Audacity Essential Content Marketing Tools
Do you know what’s crazy? How often, the audio gets forgotten when it’s time to create a content upgrade.
We all know about the popularity of podcasts. So, consider using Audacity and creating your next content upgrade by recording an audio file your subscribers can take on the run. One simple idea: Read your latest blog post and save it as an audio. You might find this method so popular that you start creating audio versions of every blog post and increasing email opt-ins each time.
Content promotion Tools
Producing outstanding content isn’t enough. Your content marketing strategy must include a promotion plan. The following tools help spread your content across various Internet channels for maximum exposure.
33. Viral Content Bee
Formerly called Viral Content Buzz, Viral Content Bee offers a platform for getting your content shared by Pinterest. Twitter, StumbleUpon, and Facebook influencers. Moreover, their team keeps a watchful eye on both content quality and the social media influencers’ effectiveness that share your content.
The focus is on keeping the process free from automated social media interactions. So, expect authentic social media sharing as your content gets distributed and improves your lead generation and sales results.
34. Paid Ad Platforms
Do you need to get a jumpstart on your search engine optimization? So, use Google Ads and leap to the top of the search. Don’t forget about selecting a social media site or two and paying for advertising. In addition, almost every social platform lets you pay for the right to expose new audiences to your content. Moreover, Facebook, YouTube, Instagram, and LinkedIn offer some of the best-paid content distribution opportunities.
35. Rafflecopter Essential Content Marketing Tools
Use Rafflecopter to create a giveaway promotion.
A giveaway allows you to share content, create social engagement, and create link building opportunities all at once.
Rafflecopter integrates into your blog, Facebook page, and other social media accounts. So that you can share content as your participants perform any of the following methods to gain their entry into the contest:
Firstly, leave a blog comment
Secondly, tweet the giveaway offer
Like a Facebook post
Twitter followers
Follow on Instagram
Follow on Pinterest
Link to a blog post
Finally, subscribe on Youtube
36. GoViral
GoViral helps you set up Thank You pages that “go viral.” So, you’ll create a thank you page that all new subscribers see immediately upon joining your list.
The GoViral thank-you page encourages your new subscriber to share your lead magnet, as well as blog posts, videos, or any other landing page of your choice, with their social media audience. As a result, you get a wave of new traffic, social shares, and new subscribers as your content spreads virally and your audience grows on social media platforms.
Extra 37. DesignCap
DesignCap is an easy-to-use online tool to make graphic designs for business, event, and social media. It owns all the powerful features you could expect from a graphic design tool.
You can create a wonderful design within just 3 steps.
Here are some key features:
1.Easy to use, a friendly UI interface, no graphic design skill needed.
2. Abundant templates for presentation, chart, report, social media, infographic, etc.
3. A large library of stock photos, charts, preset text styles, modules & backgrounds.
4. Ability to save your design to your device, share on social media, and print it directly.
It’s Your Turn Using the Essential Content Marketing Tools
Are you ready to start dominating by putting out the best content in your niche? Listen in. Firstly, most of your competitors won’t take the time to create content systematically and consistently. However, you’re different, right? Let’s recap what you now have in your content marketing toolbox. So, you have tools to win the five stages of content:
Research
Curate
Create
Promote
Analyze
Let me leave you with one final tip:
In conclusion, don’t try to use all 36 of these content marketing tools at once. So, pick one or two from each category. Use them, get used to them, and master them. Focus on useful and engaging content. Finally, analyze how your audience reacts. Give them more of what they like.
As a result, you’ll generate all the leads and sales needed to build a thriving online business. | https://medium.com/visualmodo/36-content-marketing-tools-the-essential-toolbox-c271c880d5f5 | [] | 2020-12-02 02:20:50.333000+00:00 | ['Content', 'Creation', 'Content Strategy', 'Content Writing', 'Creativity'] |
Beginner’s Python Financial Analysis Walk-through — Part 5 | Image adapted from https://www.dnaindia.com/personal-finance/report-india-s-personal-wealth-to-grow-13-by-2022-2673182
“Predicting” Future Stock Movements
Boy am I glad you made it here! This section covers what I find to be the most exciting part of the entire project! At this point, if you’ve read through parts 1–4 of the project, you understand my steps to evaluate the historical performance of stocks, but the question we are all asking is “How do I predict if a stock will go up and make me rich?” This is probably why you’re here in the first place; You want to know how to choose a stock that has a higher likelihood of yielding greater returns. Let’s see how we can do this using simple moving averages and Bollinger band plots. Let’s make money!
Simple Moving Average
We begin by understanding a simple moving average (SMA). An SMA is a constantly updated average price for a certain period of time. For example, a 10 day moving average would average the first 10 closing prices for the first data point. The next data point would add the 11th closing price and drop the first day’s price, and take the new average. This process continues on a rolling basis. In effect, a moving average smooths out the day-to-day volatility and better displays the underlying trends in a stock price. A shorter time frame is less smooth, but better represents the source data.
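The same rolling calculation can be sketched directly in pandas (illustrative only; the ticker and the each_df / 'Adj Close' names follow the DataFrames used in the code later in this section):

# 10-day simple moving average of the adjusted close, updated on a rolling basis
sma_10 = each_df["NFLX"]["Adj Close"].rolling(window=10).mean()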
There are many ways to plot moving averages, but for simplicity, here I use the cufflinks package to do it for me.
# The cufflinks package has useful technical analysis functionality,
# and we can use .ta_plot(study='sma') to create a Simple Moving Averages plot

# User input ticker of interest
ticker = "NFLX"

each_df[ticker]['Adj Close'].ta_plot(study='sma', periods=[10])
Figure 1: A 10-day simple moving average overlaid on NFLX’s closing prices
Figure 1 above illustrates a 10-day moving average for Netflix’s (NFLX) closing prices. As evident, the SMA reduces the jagged peaks and valleys of the closing prices and gives better visibility to the underlying trends. In the past 2 years, we can clearly see the stock price trending upwards.
Death Cross and Golden Cross
Building on this concept, you can compare multiple SMA’s with different time frames. When a shorter-term SMA consistently lies above a longer-term SMA, we can expect the stock price to trend upwards. There are two popular trading patterns that utilize this concept: the death cross and the golden cross. I’ll turn to Investopedia for a definition: “A death cross occurs when the 50-day SMA crosses below the 200-day SMA. This is considered a bearish signal, that further losses are in store. The golden cross occurs when a short-term SMA breaks above a long-term SMA. This can signal further gains are in store.” Source
Let’s take a look at overlaying two SMA’s onto Netflix.
# User input ticker of interest
ticker = "NFLX"
start_date = '2018-06-01'
end_date = '2020-08-01'

each_df[ticker]['Adj Close'].loc[start_date:end_date].ta_plot(study='sma', periods=[50, 200])
Figure 2: Example of a death cross followed by a golden cross
As we can see from Figure 2, Netflix is an interesting case study. Around August 2019 we see a death cross. I was not following Netflix’s stock price at that time, but it seems the stock had been trending downwards. I’ll leave it as an exercise for you to explain the dip. Soon after, the death cross is followed by a golden cross near February 2020. Ever since the golden cross in February, Netflix’s stock has been on the rise.
Now we know Coronavirus explains a lot of price movements around that time. As more people stayed at home, more and more people turned to Netflix as the sole source of entertainment in the house. Netflix’s paid user subscription base grew significantly in the months following. That is no surprise.
However, what’s interesting to me is that during the major economic earthquake in Feb-Mar. 2020, when the rest of the stock market was plummeting, we don’t see a death cross. Quite opposite, we actually see a golden cross! If we focus on the orange line, we see that Netflix’s stock prices also took a big hit in March 2020, so it’s not that Netflix didn’t feel the impacts of COVID. The takeaway is that the stock dip was due to a one-off event and NOT a trend, as shown by the SMA’s. If in March we had traded solely based on a comparison of 50-day and 200-day SMA’s, we would have reaped all the future gains.
Next, I’ll talk about another tool used to make predictions on stock movements.
Bollinger Band Plots
Bollinger band plots are a technical analysis tool composed of three lines, a simple moving average (SMA) and two bounding lines above and below the average. Most commonly, the bounding bands are +/- 2 standard deviations from a 20-day SMA.
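For intuition, the three lines can be computed by hand in pandas. This is only a sketch mirroring the 20-day / 2-standard-deviation convention described here; the ticker and column names match the plotting code further below.

close = each_df["SPY"]["Close"]
middle = close.rolling(window=20).mean()          # 20-day SMA (middle band)
band_width = 2 * close.rolling(window=20).std()   # 2 standard deviations
upper = middle + band_width                       # upper band
lower = middle - band_width                       # lower band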
One major use case for Bollinger band plots is to help understand overbought vs. oversold stocks. As a stock's market price moves closer to the upper band, the stock is perceived to be overbought, and as the price moves closer to the lower band, the stock is more oversold. Although not recommended as the sole basis for buy/sell, price movements near the upper and lower bands can signal uncharacteristically high/low prices for stocks. The latter is what I generally look for, as I hope to buy oversold stocks for cheap and let their values rise back towards the moving average.
Bollinger bands also allow traders to monitor and take advantage of volatility shifts. As we learned previously in part 4 of this project, the standard deviation of stock prices is a measure of volatility. Therefore, the upper and lower bands expand as the stock price becomes volatile. Conversely, the bands contract as the market calms down; This is called a squeeze. Traders may take squeezes as a potential sign of trading opportunities since squeezes are often followed by increased volatility, although the direction of price movement is unknown.
Let’s plot the Bollinger bands before we go deeper so we can take a look at what I’m talking about. The cufflinks package again makes it very simple to plot Bollinger Bands as shown in Figure 3.
# User input ticker of interest
ticker = "SPY"
each_df[ticker]['Close'].ta_plot(study='boll', periods=20,boll_std=2)
Figure 3. Bollinger Band Plot for SPY from 2018–2020
During an uptrend, the prices will bounce between the upper band and the moving average. While in this uptrend, the price crossing below the moving average can be a sign of slowing growth or trend reversal. You can see this crossing in Figure 4 below. Here I plotted the Bollinger bands for the SPY for the first 4 months of 2020. See how the orange closing price line dips below the 20-day SMA around February 18. Before this, there was a steady uptrend for the SPY. Afterwards, there was a hefty downtrend.
Figure 4. Unusual SPY market prices Feb-Mar 2020
As you can see above, the period of Feb-March 2020 showed a major downtrend in the SPY. It’s unusual to see the SMA breakout below the lower band 4 times within a month. With standard deviations, 95% of the values should lie within the +/- 2 standard deviations, so we’re seeing very unusual activity. Of course, this is understandable as peak Coronavirus fears struck America in this time frame. This is a good example of what a strong downtrend looks like. In the following months, we see a strong uptrend.
Conclusion
In this section, we’ve learned how to use moving averages and Bollinger band plots to take a step back from all the noise in the erratic trading data and focus on the trends. It’s hard to profit day-trading the daily ups and downs, but it’s much simpler to profit if you can identify strong growth trends . Using these techniques, you can also identify trend reversals and buy stocks early in a trend. On a shorter time horizon, you can use Bollinger bands to find undervalued or overvalued stocks. With this analysis, you can hopefully be more informed and make data driven decisions to buy stocks. In the next and last section, we’ll wrap up everything we’ve learned. | https://medium.com/analytics-vidhya/beginners-python-financial-analysis-walk-through-part-5-3777eb708d01 | ['Keith Chan'] | 2020-08-31 04:57:59.027000+00:00 | ['Coding', 'Beginners Guide', 'Financial Analysis', 'Stocks', 'Python'] |
The Psychology of Pair Programming | Dr. Sallyann Freudenberg is a software engineer and psychologist who has spent some serious time observing the behaviours of high performing software teams. Her blogs contain a wealth of information for any software professional looking to sharpen their skills.
As an engineer with no psychology background whatsoever, I couldn’t help but think — this is some really, really good stuff. It’s accessible, easy to understand and clear. I thought… perhaps I’ve picked up the wrong career? Maybe I’m a psychologist! So I read a little Freud and oh my god the horror, so I’m definitely still an engineer and have no interest in psychology any more. Also I can’t look my mother in the eye.
Anyway, after many hours of repetitive strain injury-inducing labour, here is what I have produced. A collection of tips, based on the work of Dr. Freudenberg. My hope is that you’ll use this as a primer, an appetiser if you will, before diving into Dr. Freudenberg’s work and developing a more full understanding. Let’s get started.
Take Regular Breaks
Easy one to open up with. I know I sound like your mother when you were young. She warned you against square eyes. Well she was wrong about the square eyes thing but she was right about taking a damn break once in a while.
Regular breaks are well understood in psychology. In the context of pair programming, they:
Prevent context from building up too much. Breaks break things up.
Lower the cognitive load and enable longer overall pair programming sessions.
Make you happier!
Of all the basic sins I see, this is the most common. Five hour meetings with no time budgeted for breaks. “Powering through” isn’t always the answer. Sometimes, going for a coffee or a beer (depending on how stuck you are) can provide the answer too. If you’re interested, there is a brilliant book about slack time that gives a much fuller answer to this topic.
Push the keyboard, don’t take it
The keyboard, in Dr. Freudenberg’s research, was the physical component of control. Engineers would pass the mouse around freely in Canada, but the keyboard was held sacred. In high performing pairs, the driver would volunteer control of the keyboard. It would never be taken. From this, we can glean a simple rule for both the driver and the navigator.
The Driver — learn when to pull over.
The example from Dr. Freudenberg’s research illustrates this.
Example 1 (Anna is navigating, Ben is driving): Anna: “If you…..go to….” Ben: (sliding the keyboard over to her) “(You) drive….it’s easier”.
The driver is aware of the most effective way to communicate and is happy to relinquish control when needed. Don’t let your ego cling to the keys. Hand it over when necessary.
The Navigator — don’t disempower.
When you’re watching, watch. Drop hints, be friendly, but don’t force control out of the driver’s hands. It creates resentment. Treat that keyboard carefully.
The driver is often seen as the position of power in the pair, but the navigator has the ability to derail the whole exercise by hijacking the keyboard when things get tricky. With great power comes great responsibility.
Have somewhere to draw.
A whiteboard is the gold standard here. A nice, big, clear space to draw all over. Words are great and all but nothing beats a bunch of wonky boxes and not so straight lines.
In Dr. Freudenberg’s research, a key finding was the use of scribbled on diagrams in high performing pairs. The creative act of writing the diagram sparks thoughts and jogs memories. They were far more powerful than pre-existing diagrams, which seemed to be more useful when seeking a holistic view of multiple teams and systems.
Sometimes things are difficult to convey without a hastily drawn picture. If you don’t have a whiteboard, use a notepad. In one example, Dr. Freudenberg observed someone tracing a diagram with their finger. Drawing sparks joy people!
Invite spontaneous outside help
So there you are, working through a problem. An API that needs consuming and saving into a database. You debate, discuss and start writing. Nothing is working and you don’t know why, but if you can just focus… just a little longer. A few more uninterrupted hours.
Then… Bill from the payments team leans over and asks you about something you just said.
Oh my GOD Bill
But Bill has got a good point.
“Hey, that endpoint actually uses a different model to the other APIs”, Bill mutters, acutely aware of the thunderous look in your eyes. You could be mad at Bill, but the truth is, he was right to stop you. You were about to write some broken-ass code. Is Bill the enemy? No. Your way of working is.
Interruptions are going to happen, especially when you’re building the wrong stuff. The truth is, Bill just saved your bacon. In Dr. Freudenberg’s research, this kind of open communication was welcomed by high performing pairs. It increased knowledge transfer and made for a much more efficient pairing.
So go apologise to Bill. He just saved you a bunch of time and fresh grey hairs.
Preserve Context
The problem with the previous scenario was the context. Maybe Bill could use some personal boundaries training, but if it wasn’t Bill, you know it’d be Sandra from the Ops department. Bloody Sandra…
That complex clockwork of context in your mind's eye is only available in your living room. In the workplace, you're gonna need to talk and not always on your own terms. Dr. Freudenberg explains that software engineers need to operate at multiple levels of abstraction at the same time, constantly flitting in and out of levels of detail. This creates problems when building up large, complex images in your mind. How do the high performing pairs deal with this?
Lists, you silly goose
The key is to keep focused on what you’re doing. When you find new things, put that down in writing. The items in the list form little flags, reminders of questions and facts you’ve unearthed. This means you can remain focused on the problem at hand and not lose track of the discoveries you’ve made along the way.
The next time James from HR comes in to tell you to stop high fiving strangers in the lobby, your context won’t be obliterated. You might be slowed down for a moment, but glance down at those sweet lists. You’re back in the matrix.
And one more thing about lists
Before you go all sharpie-pro on your checklist, Dr. Freudenberg has one more finding. More often than not, lists are not revisited. They’re written and discarded. It’s essentially just a bit of ephemeral storage for your brain. Don’t worry about neatness. Worry about getting your ideas on to something more permanent than the jelly in your head and protecting the current object of your focus.
Share the burden with tag team pairing.
Coffee number seven. This damn test won’t go green, no matter what you try. You decided to swap when it did, so you’ve been driving for well over an hour. You’re tired, your partner is bored and progress has all but stopped. Soon, you’ll go home and drink whiskey like any 90s American action hero.
We’ve seen this a thousand times. Individuals investing too heavily in the problem and refusing to review their goals. Often it’s pride, sometimes it’s sheer belligerence. To combat this, Dr. Freudenberg found that as well as specified goal points, high performing pairs would “tag” one another in. | https://medium.com/free-code-camp/the-psychology-of-pair-programming-86cb31f9abca | ['Chris Cooney'] | 2019-05-16 15:31:54.690000+00:00 | ['Software Development', 'Psychology', 'Pair Programming', 'Tech', 'Programming'] |
Wikipedia Is Not a Source for Your Writing | Wikipedia Is Not a Source for Your Writing
Don’t label this publicly-edited directory as an authority
Photo by andreyphoto63 / Logo by Wikimedia Foundation, CC BY-SA 3.0
As modern writers struggle to follow through with evidence-based concepts, it’s easy to get lazy and rely on less than ideal sources. It’s the perfect way to tarnish one’s reputation and put writing careers at risk. Citing sources is about more than attaching a name to a quote or paraphrased statement. Writers must use due diligence to ensure the accuracy of credited sources.
One of the most common websites credited is Wikipedia. Yet, it’s not a source. Wikipedia merely collects information from various places — sometimes 50 or more — and restructures it in digestible chunks. However, there’s no initial vetting of volunteer editors or the details they submit. That means, at any given moment, entries can contain fake or fraudulent information.
An ever-changing content hub
Look at Wikipedia like a living document. Entries are continually revised.
For example, the entry for notable author William Faulkner has been altered thousands of times since first created in October 2001. The page contains scores of footnotes, citations, and external links.
We want to think everyone who modifies Wikipedia content is honest and fair, but that’s not always the case. And Wikipedia admits it.
While some articles are of the highest quality of scholarship, others are admittedly complete rubbish. — Under: “We do not expect you to trust us” in Ten things you may not know about Wikipedia (locked entry)
Wikipedia’s general disclaimer reaffirms it can’t guarantee information is accurate.
The content of any given article may recently have been changed, vandalized or altered by someone whose opinion does not correspond with the state of knowledge in the relevant fields.
There are two significant issues here:
1. Unless someone notices a problem, nothing listed on Wikipedia is verified. While editors are supposed to cite every claim, this doesn't always happen. People can and do use the system's openness to push agendas. Brands do, too. In 2019, The North Face edited vacation destination entries to include branded photos to climb search engine ranks.
2. Information cited today may be absent tomorrow. While this can happen to any internet resource — websites come and go — changes in Wikipedia entries often go unnoticed.
In other words, linking to a Wikipedia entry can lead readers to a page void of the content presented at the time of writing. It goes beyond text; images and other media may also disappear.
Use Wikipedia to locate authentic sources
Wikipedia is a good starting point when typical online searches prove difficult. Footnotes and external links can help guide you to reliable sources of information, including exact quotes.
Of course, the sources used to update Wikipedia entries should also be under scrutiny. Biases can affect which sources editors use and omit when presenting findings.
Don’t be the source of misinformation or disinformation
The internet abounds with false details, and intent determines a writer's liability.
Misinformation consists of false details that are produced and spread without the intent to mislead. The most common form of misinformation is presenting or deducing an assumption or opinion as fact. In 2017, media outlets announced Tom Petty’s death prior to his passing. The LAPD later tweeted “information was inadvertently provided to some media sources.”
Disinformation is designed to mislead people into thinking something unfactual is true. An example would be accusing a government agency of sabotage acts when there is no evidence to support the claim.
The onus of identifying legitimate sources of information lies on the writer, always. Recognizing the most reliable outlets for factual information and proper citation is critical in building and maintaining trust and authority. | https://medium.com/write-i-must/wikipedia-is-not-a-source-for-your-writing-d1c35aa12509 | ['Pamela Hazelton'] | 2020-11-24 15:05:40.457000+00:00 | ['Writing', 'Facts', 'Research', 'Wikipedia', 'Data'] |
An intro to Redux and how state is updated in a Redux application | Photo by Fabian Grohs on Unsplash
I started learning Redux a few days back and it was an overwhelming concept for me at the start. After polishing my skills in ReactJS by making a personal book reading application, I headed towards Redux to learn more about it.
Today, I’m going to share a few core Redux concepts without using any view library (React or Angular). This is a kind of a personal note for future reference but it can help others as well.
Let’s dig in together!
What is Redux?
Redux is an open-source library for making application state more predictable in JavaScript applications. It is an independent library, commonly used with view libraries like React and Angular for better state management. Redux was created by Dan Abramov in 2015 to handle complex state management in an efficient way.
When an application grows larger, it becomes harder to manage the state and debug issues. It becomes a challenge to track when and where the state is changed and where the changes need to be reflected. Sometimes a user input triggers an API call which updates some model. That model in turn updates some state, or maybe another model, and so on.
In such a situation it becomes a grind to track the state changes. This happens mainly because there is no defined rule for updating the state, and the state can be changed from anywhere inside the application.
Redux tries to solve this issue by providing a few simple rules to update the state to keep it predictable. Those rules are the building blocks of Redux.
Redux Store:
As we discussed earlier, the main purpose of Redux is to provide predictable state management in our applications. Redux achieves this by having a single source of truth, that is a single state tree. The state tree is a simple JavaScript object which holds the whole state of our application. There are only a few ways to interact with the state. And this makes it easy for us to debug or track our state.
We now have one main state tree that holds the whole state of the application in a single location. Any changes made to the state tree are reflected across the whole application because this is the only source of data for the app. And this is the first fundamental principle of Redux.
Rule #1 — Single source of truth
The state of your whole application is stored in an object tree within a single store. — Official docs
The ways you can interact with a state tree are:
Getting the state
Listening to the changes in the state
Updating the state
A store is a single unit that holds the state tree and the methods to interact with the state tree. There is no other way to interact with a state inside the store except through these given methods.
Let’s talk about the methods a store gives us to interact with the state.
getState() — Returns the current state of the application.
dispatch(action) — The only way to update a state is by dispatching an action and dispatch(action) serves the purpose. We will talk more in detail in a bit.
subscribe(listener) — Listens for state changes. Every time the state changes, the registered listener is called, and you can then read the updated state with getState().
replaceReducer(nextReducer) — Replaces the reducer currently used by the store to calculate the state.
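Put together, a minimal sketch of how these methods fit together might look like this (the bookReducer used here is a placeholder, not from the original post; reducers are covered below):

import { createStore } from "redux";

// Placeholder reducer so the store can be created.
function bookReducer(state = { cart: [] }, action) {
  return state;
}

const store = createStore(bookReducer);

// Read the current state.
console.log(store.getState()); // { cart: [] }

// subscribe() registers a listener and returns an unsubscribe function.
const unsubscribe = store.subscribe(() => {
  console.log("state changed:", store.getState());
});

// The only way to trigger an update is to dispatch an action.
store.dispatch({ type: "ADD_BOOK_TO_THE_CART" });

unsubscribe();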
Now that we have a store which contains a state tree and a few ways to interact with the state, how can we update the application state?
Updating state in the application:
The only way to update a state is to dispatch an action. This is the 2nd rule.
An action is a plain JavaScript object that describes a specific event taking place in the application. What makes it special is the 'type' property, which is a required part of it.
{
  type: "ADD_BOOK_TO_THE_CART"
}
The main purpose of this property is to let Redux know about the event taking place. This type should be descriptive about the action. Along with the ‘type’ property, it can have other information about the event taking place.
Actions can carry as much information as you want, but it is good practice to keep them small and include only the necessary information — preferably an id or another unique identifier wherever possible.
Here we have an action to add a book to the cart.
An action
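In practice, such an action usually carries a small payload alongside its type. A sketch (the bookId field and the action creator are illustrative assumptions, not part of the original snippet):

// An action creator simply returns the action object.
const addBookToTheCart = (bookId) => ({
  type: "ADD_BOOK_TO_THE_CART",
  bookId
});

// addBookToTheCart(42) returns { type: "ADD_BOOK_TO_THE_CART", bookId: 42 }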
Once we define our action, we pass it to the dispatcher. store.dispatch() is a function provided by the library which accepts an action and applies it against the state. Redux restricts updating the state to this method only.
This strict way of updating the state ensures that the state can not be changed directly either by view or any network callback. The only way to update a state is by defining the action and then dispatching it. Remember that actions are plain JavaScript objects. Actions can be logged, serialized, and replayed for debugging purposes.
We now have a store, a state, and an action in our app to perform some tasks against the state. Now we need a way to use these actions to actually do the update. This can be done by using a pure function and this is rule #3.
Magic happens here. We need a simple pure function that takes, as parameters, the current state of the application and an action to perform on the state, and then returns the updated state. These functions are called reducers.
A simple reducer function
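A minimal cart reducer along those lines might look like the following (the state shape and field names are assumptions for illustration):

const initialState = { cart: [] };

function cartReducer(state = initialState, action) {
  switch (action.type) {
    case "ADD_BOOK_TO_THE_CART":
      // Return a new object rather than mutating the existing state.
      return { ...state, cart: [...state.cart, action.bookId] };
    default:
      // Unknown actions leave the state untouched.
      return state;
  }
}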
They are called reducers because they behave like the callback you would pass to Array.reduce(): they take an accumulated state and a value (the action) and reduce them to an updated state, which they return. Since reducers are pure functions they do not mutate the original state. Instead, they return the updated state in a new object. Our application can have one or more reducers, and each reducer can manage the slice of state relevant to its specific tasks.
Since reducers are pure functions, they should have the following attributes:
Given the same input, they should return the same output every time — no mutation is allowed.
No side effects — no API calls or data changes from an external source.
The process.
If we connect the dots, Redux is a library which has a store that contains a state tree and a few methods to interact with the state. The only way to update a state inside a store is to dispatch an action and define a reducer function to perform tasks based on the given actions. Once dispatched, the action goes inside the reducer functions which performs the tasks and return the updated state to the store. This is what Redux is all about. | https://medium.com/free-code-camp/an-intro-to-redux-and-how-state-is-updated-in-a-redux-application-839c8334d1b1 | ['Syeda Aimen Batool'] | 2019-05-09 21:35:07.467000+00:00 | ['React', 'Coding', 'Tech', 'Programming', 'Redux'] |
Religious Fatalism and the Politics of Change | Oppressor versus Oppressed
Jesus Christ Superstar follows two main characters: Jesus Christ and Judas Iscariot. Both characters are Jews living in Roman-occupied Palestine in what we now call the 1st Century AD. Depending on how you were raised, you may view the Roman Empire as a force of progress or a force of terror. It is true, Rome was able to connect far-reaching parts of Europe and the Mediterranean to one another, stretching from London to Alexandria. They built elaborate governmental buildings, bathhouses, and stadiums. Useful technologies were disseminated much easier, and medical advances too. But on the flip side came oppression, slaughter, war, rape, and pillaging… in the name of power and wealth and sometimes even in the name of peace.
The Jews under the Roman empire had special privileges. They didn’t have to pray to the Roman state gods: Jupiter, Juno, Venus, Saturn, and the like. They were allowed to worship in their own Temple and celebrate Jewish festivities.
At the time of Jesus’ life, however, tensions were increasing. Rome was anxious that the Jews of Palestine would try to revolt, cause trouble, try to get away from their grip. Rome sent large groups of soldiers to Jerusalem around Jewish holidays, such as Passover, to ensure no one ‘got smart’ and try to inspire a Jewish revolt. It would be inconvenient for Rome to deal with, but at the end of the day, they would have no problem razing their Holy city and enslaving the lot of Jews. So long as they knew their place, Rome would let them be.
Jesus and the Disciples at the Last Supper. Telegraph.co.uk
It is along this tension that Jesus Christ Superstar thrives. The musical opens with a song from the perspective of Judas, called Heaven on their Minds. You can watch the clip from the film here and read the lyrics here.
In many ways, this song indicates the beginning of the end for Judas. It is at this point that he now realizes that Jesus has become a superstar, a larger than life celebrity that surely will catch the eyes of Rome. He is afraid of all this could mean.
Perhaps some will not see him as a wise, charismatic sage but instead, confuse him for being God himself. Perhaps the man Judas loves will get killed at the hands of the state. Perhaps the world will forget all the good things Jesus had done in his life, and remember him instead as a mere political insurgent — the man who destroyed Jerusalem and brought the Jews back into foreign bondage. Judas is torn between wanting to keep the peace, caring for the least privileged, and his deep love and admiration for Jesus.
The relationship between Jesus and Judas provides us with two alternate histories. In the first, Judas turns Jesus in, Jesus falls into the hands of the state, is crucified, and becomes seen as a deity himself. In another, Judas does not assist, Jesus lives on, preaches, and dies. Is he remembered?
This is not a simple, oppressor versus oppressed relationship. There are nuances within both groups, revealing that all the characters in this world are fully human, not merely caricatures.
Fully Human
“Why I should die?
Would I be more noticed than I ever was before?
Would the things I’ve said and done matter any more?” — Jesus, Gethsemane
After watching the film adaptation of Jesus Christ Superstar, I realized that Joseph and the Amazing Technicolor Dreamcoat is built on caricature versions of each Biblical figure. Joseph is an innocent kid. Joseph’s brothers are scheming and unaware. Jacob is gullible. Potiphar is a chump. Potiphar’s wife is the embodiment of lust and evil. For the kind of play that Joseph is, it is perfect. It’s fun loving, whimsical, full of bright colors, and containing songs like Joseph’s Coat, which straight out make me laugh today.
I expected the exact same thing out of Jesus Christ Superstar. Instead, I found a really pointed exploration of how each of these Biblical figures would act, think, and move as the events of 33AD went down. For example…
PETER
“I had to do it, don’t you see?
Or else they’d go for me”
Peter denies Jesus because he fears the movement will die if he turns himself in. Also, he’s just afraid for his life in a way he’s never been before. It’s the first time the threat of persecution becomes real to him. Peter’s Denial.
MARY MAGDALENE
“Yet, If he said he loved me
I’d be lost; I’d be frightened
I couldn’t cope”
Mary Magdalene wrestles with the male-female relationship she has with Jesus, unsure how to love him correctly. She craves acceptance from someone as perfect as she images Jesus to be, but at the same time has a deep fear of commitment. I Don’t Know How to Love Him.
JESUS
The character of Jesus, at times, revels in his love and support. He unapologetically accepts the attention of the women who follow him, touching him, rubbing his skin (Everything’s Alright). Jesus also has a genuine desire to help people. He believes he can heal those who believe in him. But no matter how much he tries to spread himself, he understands that his time is running out, leaving less of himself to give in the first place.
The Temple/The Lepers brings the audience into the existential horror of this concept, where toward the end the infirmed envelope Christ. He feels he cannot escape and anxiously cries out, “There’s too little of me; don’t crowd me! Heal yourselves!”
The masterpiece of the musical, Gethsemane, dedicates over five minutes to the humanity of Jesus. Jesus cries out, unsure who even began the whole ministry: himself or God. If he chooses to believe that he caused all of this to happen, he is left in bitter despair and hopelessness.
Though if he chooses to believe that all these acts are orchestrated, he can find the courage to drink ‘His cup of poison’ and suffer the death he knows will come soon. The question of free-will will be discussed again later on.
JUDAS
Judas is not the one-dimensional betrayer of Jesus. He feels an overwhelming need to act, to challenge the frivolous actions of those around him (such as Mary using oil for Jesus instead of selling it and giving to the poor), to care for the least privileged, even if it meant betraying his great love: Jesus Christ. No song in the musical better illustrates the complexity of Judas than Damned For All Time.
Judas has approached the Pharisees to try to do something about the movement, which he sees will inevitably end in the destruction of his home. He feels Jesus’ message is going in the wrong direction, based on the events of the first half of the musical. He repetitively tells us, “I have no thought at all about my own reward, I really didn’t come here of my own accord,” when he is negotiating with the Pharisees, Jesus’ sworn enemies.
The listener is left to wonder why Judas would go through this hassle if he’s not interested in any award. Even stranger, Judas admits his powerlessness in this situation, as if it was not his decision to betray Jesus.
To the point of tears, on the ground, we transition to Blood Money, where Judas is given his iconic silver shekels, a symbol of betrayal. Profit over people, perhaps the greatest person there ever was.
But… Judas hates wealth. He must be coerced into taking the silver only by convincing himself he can give it to the poor. He doesn’t align with the Pharisees or their mission at all. He loves Jesus, so much so that he spends his last sane thoughts speaking of how much he loves him.
“When he’s cold and dead, will he let me be? Does he love me too? Does he care for me?” — Judas, Judas’ Death
Like Mary, he wants to be accepted in spite of what appears to be his choices out of free will. If all the characters in the play are fully human, then indeed, Judas would have betrayed Jesus in a very transactional, worldly sense. But it is when the narrative is spiritualized, when it becomes Jesus’ destiny to die upon the cross, that it becomes troubling to assign blame to Judas. Didn’t someone have to turn him in? Didn’t someone have to arrest him, try him, kill him?
“Our ideals die around us, all because of you And now the saddest cut of all: Someone has to turn you in Like a common criminal, like a wounded animal” — Judas, The Last Supper
Free-Will
Pontius Pilate, like Judas, is a crucial link in the arrest, trial, and execution of Jesus in Jesus Christ Superstar. But beyond revealing Pilate as man, perhaps one that even tries to set Jesus free, Andrew Lloyd Webber goes one step further and uses Pilate as an unlikely prophetic vessel.
In one of my favorite songs from the musical, Pilate’s Dream, we meet Pilate far before he ever becomes a critical player in the narrative. Here, Pilate has a dream that reveals the rest of Jesus’s life, death, and the unfolding history of Christianity until modern times.
“Then I saw thousands of millions Crying for this man And then I heard them mentioning my name And leaving me the blame”
This very short, eerie song serves three functions. First, it affirms that this musical will play out as we expect. Jesus will be betrayed. He will be killed. His legacy will then go on to found the Christian movement which will become a major world religion.
Second, the song encourages the viewer to ask themselves whether or not any of these characters have free-will in the context of Jesus’ story, if he is, indeed, the Messiah. No matter what Pilate does in his life, he will always have to try Jesus. No matter what, he will be hated merely for existing.
Finally, if we are to believe that Pilate exists within a spiritual framework that pre-ordained the events of Jesus’ life, the viewer must accept that Pilate was animated as much by God as even Jesus was. Pilate receives visions of the future, being affirmatively told that the events of his dream must happen directly by God. There is no other way.
Just as all the actors in the film play a part, Pilate must play the part of the Roman, ruthless establishment that love must overthrow. There cannot be an oppressor without an oppressed, no hope without fear.
Unanswerable Questions
“Why’d you choose such a backward time and such a strange land? If you’d come today, you would have reached a whole nation Israel in 4 BC had no mass communication Don’t you get me wrong — I only wanna know” — Judas, Superstar
Though viewers, like me, left Jesus Christ Superstar with fresh perspectives on the relationships between Rome and the Jews, Jesus and Judas, and the like, the rock opera ends by leaving many questions unanswered.
In a surprise reappearance of Judas (perhaps now an angel, rewarded for carrying out his end of God’s deal or a haunting memory in Jesus’ mind), Judas begs these questions before Jesus’ carries his cross:
“Tell me what you think about your friends at the top Who’d you think, besides yourself, was the pick of the crop? Buddha, was he where it’s at? Is he where you are? Could Mohamed move a mountain, or was that just PR? Did you mean to die like that? Was that a mistake, or Did you know your messy death would be a record-breaker? Don’t you get me wrong — I only wanna know”
Jesus provides no answers to Judas, just as he remained silent in the face of Pilate's earlier interrogation in the musical. Jesus accepts the life he has lived as he lived it.
Upon the cross, the final words of the musical are uttered by Jesus:
“God forgive them — they don’t know what they’re doing Who is my mother? Where is my mother? My God, my God, why have you forgotten me? I’m thirsty It is finished Father, into your hands, I commend my spirit”
As someone with a pronounced interest in the historical criticism of the Bible, this choice by Andrew Lloyd Webber strangely excited me. Instead of trying to select which version of Jesus’ last words he wanted to use, he actually used all of them. Yes, each of the Gospels of the New Testaments provides different ‘last words’ from Jesus, revealing different aspects of his nature in earthly death.
We cannot know what the last words of Jesus actually were. It's impossible, and even the versions of events many consider infallible tell us very different accounts of the moments leading up to Jesus' death. Perhaps, in combining each of Jesus' last words, Webber reminds the viewer that Jesus, in principle, is a construction of versions of him we identify with, versions we co-create daily. Much like celebrities who are consumed by a fan-lore that paints an alternate reality of who one is, Jesus becomes the sum of those who tell his stories later on.
Jesus Christ Superstar ends without a resurrection. We are left with a melody titled John Nineteen Forty-One, of which the verse reads:
“At the place where Jesus was crucified, there was a garden, and in the garden a new tomb, in which no one had ever been laid.” John 19:41 NIV
From this point on, we are responsible for determining Jesus’ story because, well, Jesus is no longer in it. He has died. What will the viewer do with his metaphorical body? Will we move his body to the tomb? Will we believe that his body rose up? Will we flee Jerusalem, trying to protect ourselves, as his own apostles did? Will we consider Israel safe from insurgence? Will we write about him? Will we create films about it? Sing songs of him? | https://medium.com/interfaith-now/religious-fatalism-and-the-politics-of-change-25a4c79083f8 | ['Allison J. Van Tilborgh'] | 2020-06-26 16:42:49.654000+00:00 | ['Politics', 'Spirituality', 'Film', 'Music', 'Religion'] |
Bring Machine Learning to the Browser With TensorFlow.js — Part I | Edited 2019 Mar 11 to include changes introduced in TensorFlow.js 1.0. Additional information about some of these TensorFlow.js 1.0 updates can be found here.
TensorFlow.js brings machine learning and its possibilities to JavaScript. It is an open source library built to create, train, and run machine learning models in the browser (and Node.js).
Training and building complex models can take a considerable amount of resources and time. Some models require massive amounts of data to provide acceptable accuracy. And, if computationally intensive, may require hours or days of training to complete. Thus, you may not find the browser to be the ideal environment for building such models.
A more appealing use case is importing and running existing models. You train or get models trained in powerful, specialized environments then you import and run the models in the browser for impressive user experiences.
Converting the model
Before you can use a pre-trained model in TensorFlow.js, the model needs to be in a web friendly format. For this, TensorFlow.js provides the tensorflowjs_converter tool. The tool converts TensorFlow and Keras models to the required web friendly format. The converter is available after you install the tensorflowjs Python package.
install tensorflowjs using pip
The tensorflowjs_converter expects the model and the output directory as inputs. You can also pass optional parameters to further customize the conversion process.
running tensorflowjs_converter
The output of tensorflowjs_converter is a set of files:
model.json — the dataflow graph
— the dataflow graph A group of binary weight files called shards. Each shard file is small in size for easier browser caching. And the number of shards depends on the initial model.
tensorflowjs_converter 1.0 output files
NOTE: If using a tensorflowjs_converter version before 1.0, the output produced includes the graph (tensorflowjs_model.pb), the weights manifest (weights_manifest.json), and the binary shard files.
Run model run
Once converted, the model is ready to load into TensorFlow.js for predictions.
Using TensorFlow.js version 0.x.x:
loading a model with TensorFlow.js 0.15.1
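For a converted TensorFlow graph model, the 0.x API looked roughly like this (URLs are placeholders, tf is the imported @tensorflow/tfjs namespace, and the call belongs inside an async function):

const MODEL_URL = "https://example.com/model/tensorflowjs_model.pb";
const WEIGHTS_URL = "https://example.com/model/weights_manifest.json";

const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL);
// Converted Keras models were loaded with tf.loadModel() instead.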
Using TensorFlow.js version 1.x.x:
loading a model with TensorFlow.js 1.0.0
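In 1.x the equivalent call is tf.loadGraphModel, pointed at the model.json produced by the converter (the URL here is a placeholder; converted Keras models use tf.loadLayersModel instead):

const model = await tf.loadGraphModel("https://example.com/model/model.json");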
The imported model can then be used just like models trained and created directly with TensorFlow.js.
Convert all models?
You may find it tempting to grab any and all models, convert them to the web friendly format, and run them in the browser. But this is not always possible or recommended. There are several factors for you to keep in mind.
The tensorflowjs_converter command can only convert Keras and TensorFlow models. Some supported model formats include SavedModel, Frozen Model, and HDF5.
TensorFlow.js does not support all TensorFlow operations. It currently has a limited set of supported operations. As a result, the converter will fail if the model contains operations not supported.
Thinking of and treating the model as a black box is not always enough. Just because you can get the model converted and produce a web friendly version does not mean all is well.
Depending on a model’s size or architecture, its performance could be less than desirable. Further optimization of the model is often required. In most cases, you will have to pre-process the input(s) to the model, as well as, process the model output(s). So, needing some understanding or inner workings of the model is almost a given.
Getting to know your model
Presumably you have a model available to you. If not, resources exist with an ever-growing collection of pre-trained models. A couple of them include:
TensorFlow Models — a set of official and research models implemented in TensorFlow
Model Asset Exchange — a set of deep learning models covering different frameworks
These resources provide the model for you to download. They also can include information about the model, useful assets, and links to learn more.
You can review a model with tools such as TensorBoard. Its graph visualization can help you better understand the model.
Another option is Netron, a visualizer for deep learning and machine learning models. It provides an overview of the graph and you can inspect the model’s operations.
visualizing a model with Netron
To be continued…
Stay tuned for the follow up to this article to learn how to pull this all together. You will step through this process in greater detail with an actual model and you will take a pre-trained model into web friendly format and end up with a web application. | https://medium.com/codait/bring-machine-learning-to-the-browser-with-tensorflow-js-part-i-16924457291c | [] | 2019-03-11 20:43:18.126000+00:00 | ['Open Source', 'JavaScript', 'Python', 'TensorFlow', 'Machine Learning'] |
Machine Learning in Production using Apache Airflow | 2. Data Validation
Data Validation is the process of ensuring that data is present, correct, and meaningful. Ensuring the quality of your data through automated validation checks is a critical step in building data pipelines at any organization.
The data validation step is required before model training to decide whether you could train the model or stop the execution of the pipeline. This decision is automatically made if the following was identified by the pipeline [2]:
Data schema skews : these skews are considered anomalies in the input data, which means that the downstream pipeline steps, including data processing and model training, receive data that doesn’t comply with the expected schema. Schema skews include receiving unexpected features, not receiving all the expected features, or receiving features with unexpected values.
: these skews are considered anomalies in the input data, which means that the downstream pipeline steps, including data processing and model training, receive data that doesn’t comply with the expected schema. Schema skews include receiving unexpected features, not receiving all the expected features, or receiving features with unexpected values. Data values skew: these skews are significant changes in the statistical properties of data, which means that data patterns are significantly changed, and you need to check the nature of these changes.
Importance of the data validation:
The quality of the model depends on the quality of the data.
Increasing confidence in data quality by quantifying data quality.
Correcting the trained ML model can be expensive — prevent is better than cure.
Stopping training a new ML model if the data is invalid.
Airflow provides a group of check operators, that allows us easy verify data quality. Let’s look at how to use such Operators on practical examples.
2.1 CheckOperator
The CheckOperator expects a SQL query that will return a single row. Each value on that first row is evaluated using python bool casting. If any of the values return False the check is failed and errors out. So, simply added one task to the pipeline we can check if data exists for instance for a specific date.
2.2 IntervalCheckOperator
For more complex checks Airflow has IntervalCheckOperator. The Checks that the values of metrics given as SQL expressions are within a certain tolerance of the ones from “days_back” before.
One of the key-point for this check is ratio formula — which formula to use to compute the ratio between the two metrics. You can choose from two possible:
max_over_min: max(cur, ref) / min(cur, ref)
relative_diff : computes abs(cur-ref) / ref
IntervalCheckOperator allows you to check different metrics at the same time with the different ratios for some specific table.
2.3 ValueCheckOperator
The ValueCheckOperator — a simple and powerful operator. It performs a pure value check using SQL code. You can use a SQL query of any complexity to get any value. The check will pass if the computed value is equal to the passed value with some tolerance.
Using the check operators from Airflow, it is easier to understand where is a problem when you see that a check is failed, instead of dealing with error in the code.
The list of data checks at the beginning of the pipeline
In the current example, I use all checks to test input data. But you can also check any output values. Also, it will be useful to add the checks to ETL pipelines. | https://towardsdatascience.com/machine-learning-in-production-using-apache-airflow-91d25a4d8152 | ['Danylo Baibak'] | 2020-07-14 14:10:51.283000+00:00 | ['Data Science', 'Data Engineering', 'Data Pipeline', 'Machine Learning', 'Airflow'] |
The Legacy of Sir Elton John and the Art of Paying Tribute to a Living Legend | Lesson #2: Highlight the impact of the artist(s) on numerous generations of musicians with a tribute album, but don’t overemphasize hot young artists at the expense of seasoned contemporaries.
The most ambitious part of the recent Elton tributes are definitely the two tribute albums that were released on April 6th. Each album utilizes an impressive line-up of well-known musicians to cover 13 songs that Sir Elton John composed alongside his longtime lyricist Bernie Taupin (who has written the vast majority of Elton’s songs over his nearly half-century career). But aside from the concept, the two albums differ markedly.
Revamp, the pop oriented collection, boasts a great deal more star power and features better-known songs from Elton’s catalogue. The album gets off to a rocky start with a cover of “Bennie and the Jets” that makes great use of P!nk’s incredible vocals, but becomes a hot mess when Logic’s rap is introduced. Unfortunately, the lead off song isn’t the only disappointment. Q-Tip and Demi Lovato’s bizarre cover of “Don’t Go Breaking My Heart” is equally jarring, although it is arguably the classic hit of Elton’s that is most in need of updating. The most egregious sin on the album occurs in the form of Ed Sheeran’s cover of “Candle in the Wind.” His voice sounds nice, but the rearrangement is wildly inappropriate for the subject matter. (The song is about the tragic life and untimely death of Marilyn Monroe but he plays it like he’s at a bouncy jam session trying to flirt with a bunch of girls.) Then there’s Coldplay’s utterly listless cover of “We All Fall in Love Sometimes” and the utterly bizarre choice of Miley Cyrus to cover one of his most powerful songs, “Don’t Let the Sun Go Down on Me” (most famously recorded as a live duet with the late, great George Michael).
But thankfully the other 8 songs work quite well. Relative newcomer Alessia Cara adds grit to “I Guess That’s Why They Call it the Blues.” Mumford and Sons does an affecting rearrangement of “Someone Saved My Life Tonight.” Mary J. Blige wrings every last painful drop out of “Sorry Seems to be the Hardest Word.” Sam Smith soulfully evokes the deep sadness inherent in “Daniel.” The Killers do an excellent rendition of “Mona Lisas and Mad Hatters,” one of Elton’s best-loved album cuts. Queens of the Stone Age cap the album with a powerful take on “Goodbye Yellow Brick Road.” And then there’s the two best tracks on the album, Florence + the Machine’s cover of “Tiny Dancer” and Lady Gaga’s cover of “Your Song.” It is no easy feat making a cover of two of the most beloved songs in Elton’s catalogue work, but Florence and Gaga knock it out of the park, further establishing themselves as two of the greatest artists of their generation.
Restoration, the country oriented collection, uses somewhat lesser known stars and covers more obscure songs from Elton’s catalogue, but ultimately works better as a whole. In fact maybe it’s because the artists and the songs are less recognizable that it works better. But it’s not without it’s missteps. As good as Maren Morris sounds, we didn’t need a second cover of “Mona Lisas and Mad Hatters” (interestingly the only song duplicated on both albums). And we most definitely didn’t need a second appearance of Miley Cyrus, this time covering “The Bitch is Back.” But most of the rest of the album works exceedingly well. Little Big Town’s “Rocket Man,” Don Henley & Vince Gill’s “Sacrifice,” and Dierk Bentley’s “Sad Songs (Say So Much)” are stellar covers of some of his most well-loved songs. Miranda Lambert’s “My Father’s Gun,” Brothers Osborne’s “Take Me to the Pilot,” Kacey Musgrave’s “Roy Rogers,” and Rhonda Vincent & Dolly Parton’s “Please” give deserved second lives to lesser known album cuts. Although they aren’t the musical highlights of the album, Lee Ann Womack’s “Honky Cat” and Willie Nelson’s “Border Song” emphasize the songs’ impressive lyrical arrangements. And, Chris Stapleton’s “I Want Love” and Rosanne Cash & Emmylou Harris’s “This Train Don’t Stop Here Anymore” are superb covers of two of the most under-appreciated songs of the later stage of Elton’s career.
The albums include an impressive slate of famous musicians and classic songs from his catalogue, but they are not without notable omissions and missed opportunities. I, for one, would have loved to have heard covers of three of my favorite songs — the heavy and haunting “Believe,” the exceedingly romantic “Something About the Way You Look Tonight,” and the utterly masterful “The One.” And although it is moving to see how much the younger generation has been influenced by Elton, I would have loved to see more of his contemporaries make an appearance. Throughout his career he has had so many storied collaborations and personal relationships with music legends that it seems an utter shame so few true legends are featured on the album. Where’s Paul McCartney, Aretha Franklin, Stevie Wonder, Eric Clapton, and Dionne Warwick? Not to mention Madonna, who his love-hate relationship is a tabloid dream. And for better or worse (probably better), there’s no cuts from The Lion King soundtrack (Elton’s best selling album) to be found (although this is clearly explained by the fact that Elton created those songs with lyricist Tim Rice and not Taupin).
Lesson #3: Celebrate the legacy of the artist(s) with an all-star TV special, but don’t just make the tribute concert a promotion of the tribute albums.
On April 10th, CBS aired a two hour special called Elton John: I’m Still Stranding — A GRAMMY Salute. Even moreso than the tribute albums the special was somewhat shamelessly promoting the show was quite uneven. There were some strong performances, no doubt. Lady Gaga, Sam Smith, Maren Morris, Miranda Lambert, Little Big Town, and Alessia Cara did superb live renditions of their cuts from the tribute album. In a wonderful twist, John Legend took over “Don’t Let the Sun Go Down on Me” from Miley Cyrus and gave the brilliant song its just due; Kesha provided a typically raw and powerful take on “Goodbye, Yellow Brick Road” that matched Queen of the Stone Age’s take of it on the album; and Shawn Mendes and SZA did a cover of “Don’t Go Breaking My Heart” that blew the album version by Q-Tip and Demi Lovato out of the water. And then Sir Elton himself capped the show with a three song performance. He was in fine form, although I can think of several better songs than “Philadelphia Freedom,” “I’m Still Standing,” and “Bennie and the Jets” to highlight his incredible knack for performing live.
Interspersed with these rousing moments were a bunch of head-scratching and eye-rolling moments. Miley Cyrus’s tacky rendition of “The Bitch is Back” was a tone-deaf way to star the show. Coldplay and Ed Sheeran’s live renditions of their tribute album cuts worked no better this time around. Two attempts at turning Bernie Taupin’s lyrics into spoken word performance art were cringe-inducing. The tacky set design was cheap and uninspired (the worst being Maren Morris being forced to perform “Mona Lisa and Mad Hatters” alongside a replica of the table from Alice in Wonderland — get it?!?). And, most notably, other than a touching shout out to his AIDS activism the show said almost absolutely nothing about Sir Elton John and Bernie Taupin as people. I would have gladly excised a few of those performances for some words by people who actually know them or one of the countless classic clips that the Grammys surely has in its archive.
Elton John’s latest greatest hits collection (Copyright Virgin EMI)
Lesson #4: Release/rerelease classic music in a new, remastered package, but don't ignore the diehard fans when crafting it.
Diamonds, the 51-track collection of Sir Elton’s greatest hits that was released last November in preparation for the farewell tour announcement, is great for the more casual fan looking to get a lot of his best known hits in one great-sounding collection. However, as the slew of negative reviews on the internet will tell you, it fell far short for his diehard fans. Many of them asked why a seventeenth (!) greatest hits collection was needed, when so many B-sides, live performances, soundtrack and compilation cuts, and other rarities remain so difficult to find. And they aren’t wrong.
But then again, collections that dive deeper into Sir Elton’s catalogue may still be forthcoming. After all, Sir Elton has made it quite clear that while he is done touring in 2021, he is not done with his love affair with music. And the world is better off for that.
Click here to read another article about a legendary music legacy: https://medium.com/rants-and-raves/counting-down-mariahs-48-best-songs-to-celebrate-her-anniversary-128535300326
Follow me on Twitter: https://twitter.com/RichardReflects | https://medium.com/rants-and-raves/the-art-of-paying-tribute-to-a-living-legend-ceebe74b5816 | ['Richard Lebeau'] | 2020-09-29 20:51:44.559000+00:00 | ['Music', 'LGBTQ', 'Grammys', 'Pop Culture', 'Media'] |
Thoughts on building software models | Photo by Uwe Hensel on Unsplash
I recently taught a course on how to build software models for beginner to intermediate coders. As part of that I tried to distil the general ideas that underpin good modelling practice.
So beyond the basics of syntax and how to write a function, what is it that really matters when we’re designing and writing software? What makes the difference between a good model and a bad model, and whether we deliver comfortably on schedule or if it’s a hectic last week with 16-hour days?
Everyone does things differently and there are few absolutes; this is what works for me: | https://medium.com/dev-genius/thoughts-on-building-software-models-a1321bfbc4d1 | [] | 2020-06-18 09:36:26.761000+00:00 | ['Software Engineering', 'Project Management', 'Coding Style', 'Software Development'] |
Video: On a Highway to Scale — Machine Learning as a Platform (Hebrew) | On a Highway to Scale: Machine Learning as a Platform — Ben Amir (Hebrew)
When you’re handling lots of data, you come across many issues and problems, including ones associated with model training. At Riskified, we also handle lots of data and face problems, which forced us to scale up our ML platform.
In this talk, Ben Amir covers how we speed up to the process of handling massive amounts of data and shorten the way to production using our new machine learning platform orchestration. Ben shares our insights from the process of building it up from scratch and how we leveraged Apache Spark and Kubernetes as key infrastructural players.
Ben is the ML Platform team lead at Riskified. After serving 12 years in the 8200 unit, he joined Riskified and fell in love with fighting fraud. He is a real fan of new technologies, trying new stuff, innovation and hard work. When Ben is not near his laptop, you can find him watching (or playing) basketball, walking around the city with his dog and probably grabbing some good food.
*Talk is in Hebrew | https://medium.com/riskified-technology/video-on-a-highway-to-scale-machine-learning-as-a-platform-hebrew-e2d9e167e43d | ['Riskified Technology'] | 2020-09-23 09:16:06.591000+00:00 | ['Machine Learning', 'Videos', 'Spark', 'Apache Spark', 'Engineering'] |
ICE Chief Tony Pham Is Leaving Out a Crucial Part Out of His Family Story, Cousin Says | My Cousin Runs ICE. He’s Killing the Same American Dream Granted to His Own Parents.
The lie at the heart of the Pham family ‘pass to freedom’
In August, my mother forwarded me an email. “Trump administration taps Vietnam refugee as new ICE chief,” it said. I opened it, and learned that my cousin, Tony Pham, had just been appointed to lead U.S. Immigration and Customs Enforcement (ICE). Tony’s ascent to this position instilled great pride in my family, especially among the older members who skew politically conservative. I, however, was appalled that my cousin allowed his identity as a refugee to be used as cover for the enforcement of increasingly cruel and dehumanizing immigration policies. And I questioned my cousin’s claim that he had followed the “lawful path to citizenship,” which doesn’t give a full picture of what really happened.
My cousin came to this country in 1975, one of 125,000 Vietnamese refugees who were resettled in the United States as a result of the Vietnam War, despite strong public opposition. “To ignore the refugees in their hour of need would be to repudiate the values we cherish as a nation of immigrants,” said President Ford, who’d fought to bring them here. “I was not about to let Congress do that.” This year, President Trump capped the number of refugees our country would accept at 18,000 worldwide. Had my cousin needed refuge in the United States today, the chances he would be permitted to enter would be slim.
When the communists took over Vietnam in 1975, millions of people were desperate to flee the country. This was certainly true for my uncles, who had served in the South Vietnamese military alongside U.S. troops. They would have faced certain torture and possible death had they stayed. Like today’s refugees, they would undertake any means possible to avoid persecution and to protect their families.
This is precisely what Tony’s father did. Tony was two years old at the time his parents emigrated. Had his family remained in Vietnam, he would likely have watched his father ripped away and possibly killed, seen his family’s livelihood destroyed, and grown up in poverty. But Tony was lucky. My mother had a devoted friend named Jerry Edwards, who us kids always called Mr. Edwards. He was an official in the U.S. Embassy in Vietnam with the Defense Attaché’s Office, according to correspondence with my family.
On April 19, 1975, 11 days before the fall of South Vietnam, Mr. Edwards wrote a letter addressed “To Whom It May Concern,” vouching for the integrity of Tony’s father. It begins, “This letter is to introduce my brother-in-law Captain Pham…” and it goes on to say, “He is a very sincere, loyal, and dedicated individual. Whatever aid and comfort you can provide him and his family would be greatly appreciated.”
Mr. Edwards understood very well the significance of his actions. Acting on his deeply held beliefs to save our family, he lied in this letter.
Mr. Edwards gave my uncle the letter to carry as a backup in case anyone questioned the validity of the hastily secured documents that facilitated his journey to America, documents my mother had courageously obtained for him through her connections. Over the years I’ve heard different family stories about the role Mr. Edwards’ letter played in Tony’s father’s escape, including that he never needed to use the letter at all, but was prepared to, in the event my uncle ran into trouble during the journey. What’s not in question is that over the years it has become part of the family’s public lore, and to this day, Tony says he carries that tattered letter in his wallet. He calls it his family’s “document to freedom.” In a recent interview with Fox, Tony asserted that the letter played a crucial role in securing his mother’s escape, stating that “based on that letter she was able to get a seat on the flight out of Saigon that day on April 19.” And in a 2014 Facebook post, he wrote, “I hope Mr. Edwards understood the significance of his action of signing our pass to freedom. Without it, our lives would have been dramatically different.”
Mr. Edwards understood very well the significance of his actions. Acting on his deeply held beliefs to save our family, he lied in this letter. He was not Capt. Pham’s legal brother-in-law at the time he wrote the letter for Tony’s father. His divorce from his first wife was not final until 1978 — three years after he wrote the letter. And though he would go on to live with my mother, brother, and me after we moved to Virginia, Mr. Edwards and my mother never married.
My cousin claimed in an email to ICE attorneys that he had followed the “lawful path to citizenship” while also touting this “document to freedom,” a document based on a lie. This misrepresentation of his family history underscores the hypocrisy of his claim and the shortcomings of a deeply flawed immigration system.
Mr. Edwards passed away in 1988, so he never got to see Tony’s rise to power in the Trump administration. He stepped into the void left by my parents’ divorce and was a father figure to me. He took me to church on Sundays, read fairy tales to me every night before I went to bed, and picked me up from school whenever I stayed late for extracurricular activities. He helped me study for my elementary school’s spelling bee championship, which I won, and he helped me with a school project called Olympics of the Mind when my teammates failed to show up. Most importantly, he taught me to be kind, generous, and compassionate. I haven’t always lived up to his expectations, but I have tried.
I am grateful to him for risking his job and his professional reputation on behalf of our family. I admire him for putting people above laws. I understand why he stretched the truth when he helped save my relatives. His courage exposes the hollow protestations of many conservative immigrants like my cousin, who say they "did it the right way" when they came to America, as if those seeking refuge here today are shameful and inferior.
I know to the core of my being that Mr. Edwards would not approve of my cousin’s hypocrisy and certainly would not have approved of his aggressive deportation tactics. Mr. Edwards embodied empathy and grace in everything he did. He never could have supported policies that veer toward infringing on human rights. | https://gen.medium.com/my-cousin-runs-ice-hes-killing-the-same-american-dream-granted-his-own-parents-3b6d0fc8e70 | ['Philippa Pb Hughes'] | 2020-11-03 17:00:38.862000+00:00 | ['Gen Longreads', 'Immigration', 'Society', 'Politics', 'Ice'] |
Best Kanye West Samples: 20 Tracks That Revolutionised Hip-Hop | Paul Bowler
Photo courtesy of Def Jam
Few artists have mined the vaults of musical history quite as deeply or as ingeniously as Kanye West. From his early work as a go-to producer for the likes of Jay Z, Ludacris and Alicia Keys, to his much-celebrated career as the most compelling artist in hip-hop, Kanye has redefined how rap’s building blocks can be used. From re-twisting well-known classics into compelling new iterations, to altering perceptions of what kind of music can be sampled in hip hop, and introducing new generations of listeners to soul, funk, psych, house and gospel classics, his innovation continues to astound. Want proof? Here are the 20 best Kanye West samples. If you think we’ve missed one of yours, let us know in the comments section, below.
Listen to the best of Kanye West on Apple Music and Spotify, and scroll down for our 20 best Kanye West samples.
Best Kanye West Samples: 20 Tracks That Revolutionised Hip-Hop
20: The Royal Jesters: ‘Take Me For A Little While’
On ‘Ghost Town’, the penultimate track of Kanye’s 2018 album Ye, he returns to the soulful grooves with which he made his name, powering his verses with The Royal Jesters’ sterling version of the Vanilla Fudge classic.
Hear it on: ‘Ghost Town’
19: James Cleveland And The Southern Community Choir: ‘God Is’
Gospel is at the heart of West’s 2019 album, Jesus Is King, with the form both inspiring new music and lending the album several of its samples. Lifted by West for the track of the same name, this song by “The King Of Gospel”, James Cleveland, is a brilliant example of his celebrated fusion of pop, jazz, soul and gospel.
Hear it on: ‘God Is’
18: Tears For Fears: ‘Memories Fade’
West's 808s & Heartbreak album found him dialling back on samples to make room for a self-produced melding of hip-hop and R&B. One of his few concessions was 'Coldest Winter', which appropriated Tears For Fears' mournful synth-pop original as a base for a powerful meditation on loss.
Hear it on: ‘Coldest Winter’
17: The 24 Carat Black: ‘I Want To Make Up’
The lone producer on Pusha T’s much-celebrated 2018 album, Daytona, West fashioned a masterclass in sampling, channeling the mournful soul of this lost classic by Cincinnati collective The 24 Carat Black to brilliantly atmospheric effect on the track ‘Infrared’.
Hear it on: ‘Infrared’
16: Whole Truth: ‘How Can You Lose By Following God’
Looking to the funkier side of gospel for his inspiration on the Jesus Is King track ‘Follow God’, West builds his sequence of pounding hip-hop rhythms on this little-known, soulfully devout 1974 number.
Hear it on: ‘Follow God’
15: Pastor TL Barrett: 'Father I Stretch My Hands'
A relative unknown until the 2010s, the Chicagoan pastor’s classic 1976 gospel track ‘Father Stretch My Hands’, a richly soulful number with a Stevie Wonder-esque warmth, was sampled heavily for West’s two-part track of the same name.
Hear it on: ‘Father I Stretch My Hands, Pt.1’
14: Mr Fingers: ‘Mystery Of Love’
Kanye ran hip-hop through a deep house blender on The Life Of Pablo’s track ‘Fade’, taking elements from two classics, Hard Drive’s ‘Deep Inside’ and this legendary number by Larry Heard’s Mr Fingers project, creating something shimmering and new in the process.
Hear it on: ‘Fade’
13: Shirley Bassey: ‘Diamonds Are Forever’
Never one to shy away from a well-known sample, West breathed new life into Bassey’s classic Bond theme, repurposing it on his hit single ‘Diamonds From Sierra Leone (Remix)’, which probed the ethics of the diamond trade. Winning Best Rap Song at the Grammys, the track received the ultimate accolade in the form of praise from the grand dame herself.
Hear it on: ‘Diamonds From Sierra Leone (Remix)’
12: Arthur Russell: ‘Answers Me’
West often throws leftfield samples into the mix, but few could have predicted that Arthur Russell’s avant-garde mid-80s track would find a new home in a hip-hop song. On The Life Of Pablo’s ’30 Hours’, West transforms the eerily echoey vocal and cellos of the original into a driving hip-hop meditation on his early career.
Hear it on: ’30 Hours’
11: Aretha Franklin: ‘Spirit In The Dark’
Among the range of sped-up soul samples that defined the sound of Kanye's debut album, The College Dropout, West demonstrated his genius for manipulation as he turned Aretha Franklin's slow, bluesy vocals and mournful piano into a humorous and upbeat "chipmunk soul" backing on standout track 'School Spirit'.
Hear it on: ‘School Spirit’
10: Steely Dan: ‘Kid Charlemagne’
On the Graduation stand-out ‘Champion’, West took an ebullient and catchy hook from Steely Dan’s fusion-inclined 1976 album track and ran with it. Adding punchy beats, a reggae shuffle and 80s synths turned it into a modern hip-hop classic.
Hear it on: ‘Champion’
9: Curtis Mayfield: ‘Move On Up’
On Late Registration’s fourth and final single, ‘Touch The Sky’, a pitched-down version of Mayfield’s 1970 soul classic provided an irresistible hook for West and fellow-rapper Lupe Fiasco’s positivist lyrics of self-fulfilment.
Hear it on: ‘Touch The Sky’
8: King Crimson: ‘21st Century Schizoid Man’
A masterclass in sampling prowess, ‘POWER’ cleverly interpolates from a number of sources, among them Cold Grit’s ‘It’s Your Thing’ and Continent Number 6’s ‘Afromerica’, but it’s the unhinged vocals and pounding drums of King Crimson’s 1969 psych-rock classic that truly drive the track, giving it a crucial bombastic sense of menace.
Hear it on: ‘POWER’
7: Bon Iver: ‘Woods’
A stunning example of West’s ability to repurpose seemingly disparate elements, My Beautiful Dark Twisted Fantasy track ‘Lost In The World’ transforms Bon Iver’s delicate a cappella number into a booming hip-hop anthem, throwing in a stirring sample from Gil Scott-Heron along the way.
Hear it on: ‘Lost In The World’
6: Ponderosa Twins Plus One: ‘Bound’
Amid the experimental and abrasive industrial landscapes of Yeezus, West also found time to dig deep in the soul crates. Used to explosive effect on the album’s final track, ‘Bound 2’, this brilliant 1971 obscurity has since had a new lease of life, also being picked up by Tyler, The Creator for ‘Boy Is A Gun’.
Hear it on: ‘Bound 2’
5: Ray Charles: ‘I Got A Woman’
Cheekily reconfiguring a snippet of Ray Charles' R&B number into something entirely different, West's mammoth dancefloor hit 'Gold Digger' featured a re-worked vocal turn by Jamie Foxx, who'd recently bagged an Oscar for his portrayal of the singer in the 2004 biopic Ray.
Hear it on: ‘Gold Digger’
4: Jackson 5: ‘I Want’
Snapped-up by Jay Z as a young producer, West’s work on the rapper’s The Blueprint album — which typically featured sped-up samples of Motown classics backed with stark, clipped beats — helped re-popularise sampling as one of hip-hop’s key building blocks. On single ‘Izzo (HOVA)’, West chops and reworks this evergreen Jackson 5 number while retaining its ebullient and life-affirming spirit.
Hear it on: ‘Izzo (HOVA)’
3: Daft Punk: ‘Harder, Better, Faster, Stronger’
West helped bring an increased electronic music presence into hip-hop with his inspired sampling of the French duo’s 2001 dancefloor bomb. A huge international hit when released as a single, ‘Stronger’’s ingenious melding of electronic dance and hip-hop styles brought him a new audience.
Hear it on: ‘Stronger’
2: Nina Simone: ‘Strange Fruit’
On key Yeezus album track ‘Blood On The Leaves’, West was bold and uncompromising enough to pair a sample from Nina Simone’s beloved civil-rights song with a bitter tale of failed relationships. The effect remains mesmerising.
Hear it on: 'Blood On The Leaves'
1: Chaka Khan: ‘Through The Fire’
Kanye's debut solo single, 'Through The Wire', was famously written and recorded with his jaw wired shut after a near-fatal car crash. Driven by an irresistible, pitched-up sample of Chaka Khan's 'Through The Fire', his self-reflective, heartfelt moment of carpe diem remains one of the defining moments — and samples — of his career.
Hear it on: ‘Through The Fire’
Looking for more? Discover the best Kanye West songs of all time.
Join us on Facebook and follow us on Twitter: @uDiscoverMusic | https://medium.com/udiscover-music/best-kanye-west-samples-20-tracks-that-revolutionised-hip-hop-d08d1d0ad0e6 | ['Udiscover Music'] | 2019-11-29 16:29:27.052000+00:00 | ['Hip Hop', 'Music', 'Culture', 'Lists', 'Pop Culture'] |
Jacques Roubaud on Math and the Art of Literary Invention | Jacques Roubaud on Math and the Art of Literary Invention
Adventurous heroines, abstract algebra, a poetics of grief, OULIPO, and the ongoing memoir of a grand project
If you’ve never heard the name “Jacques Roubaud,” just use any of these substitutions:
“Emeritus professor of mathematics at the University of Paris X-Nanterre”
“Composer of the innovative, recombinatory poetry collection ∈”
“Author of a fantasy about talking animals and abstract algebra”
And that’s only a start. Since publishing his first volume of poetry in 1967, Roubaud has produced a stream of innovative literary works in almost every form — including a trilogy of comic novels about the postmodern heroine Hortense, and a series of intricately interwoven memoirs that explore not only his personal experience but the mysteries of memory and the structures that organize our individual realities. His work in every form is suffused with mathematics, whether explicitly or subtly.
Roubaud has been a leading member of the experimental writing collective known as Oulipo (short for Ouvroir de littérature potentielle, or “workshop of potential literature“). Oulipo writers often employ “constraints” like leaving one letter of the alphabet out of a novel, or making word substitutions according to a predetermined algorithm. And they frequently organize their works according to numerical patterns, as exemplified in two of Roubaud’s poetry collections: ∈ follows a scheme derived from the ancient Japanese game of Go, and Trente et un au cube [31 cubed] contains thirty-one poems of thirty-one verses of thirty-one syllables.
While some Oulipo texts are merely curiosities, Roubaud’s novels, plays, and poems are really very good. And fortunately — in case your French isn’t as good as your math — his best works have been translated into English.
Perhaps the most delightfully original — and the most mathematical — of Roubaud’s works is La princesse Hoppy, ou le conte du Labrador (1990; The Princess Hoppy, or The Tale of Labrador, 1993). This inventive fantasy cleverly integrates talking animals (whose languages include Posterior Duck, which is silently conveyed by the motion of duck feet, and Anterior Duck, which is vocalized), with abstract algebra, encoded autobiographical references, and intertextual allusions that range from Alice’s Adventures in Wonderland to Chrétien de Troyes’ medieval romance, the Conte du Graal.
Fortunately again, Elvira Laskowski-Caujolle’s “Jacques Roubaud: Literature, Mathematics, and the Quest for Truth” outlines many of the math elements in The Princess Hoppy — and Jstor.org offers free access to the article.
Here’s a sample:
“To read Roubaud’s novel means not only to interpret and to analyze, but [also] to decipher the given puzzles and riddles. Like a mathematician who reads a mathematical text with pen and paper to verify theorems or to do the inevitable exercises at the end of a chapter, the reader of La Princesse Hoppy has to answer 79 questions to verify whether or not s/he has properly ‘understood’ the tale.”
In addition, complex mathematical and logical problems are woven through the whole narrative, as the princess tries to unravel a conspiracy in which three of her uncles are plotting against a fourth. She doesn’t know which uncle is the target of the plot — and every time any two of the uncles meet, they conspire in secret, creating a very large array of conspiratorial configurations. One key to the solution derives from a mathematical interpretation of a medieval monastic precept: the Rule of St. Benedict.
Laskowski-Caujolle’s commentary on the princess’s problem offers a glimpse of the complexity Roubaud has built into the story:
“If we translate the literary text into algebraic formulas, we will get the axioms for a mathematical group of order 4, when we consider a set of four kings [the uncles]. There exist only two groups of order 4, the cyclic group C4 and the so called Klein group KA, represented by the following multiplication tables (in general the elements are given by a, b, c, e; the element e is called the unity element):
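The two tables in question are the standard Cayley tables for the only two groups of order 4. Writing the elements as e, a, b, c with unity element e (shown here in LaTeX notation, since the original tables are not reproduced in this excerpt):

\begin{array}{c|cccc}
C_4 & e & a & b & c \\ \hline
e & e & a & b & c \\
a & a & b & c & e \\
b & b & c & e & a \\
c & c & e & a & b
\end{array}
\qquad
\begin{array}{c|cccc}
\text{Klein} & e & a & b & c \\ \hline
e & e & a & b & c \\
a & a & e & c & b \\
b & b & c & e & a \\
c & c & b & a & e
\end{array}

In the cyclic group C4 a single element generates everything (b = a·a, c = a·a·a), while in the Klein group (the "KA" of the quote, more commonly written K4 or V) every element is its own inverse. Which of the two structures the kings' conspiracies obey, and which king plays the unity element, is precisely what the princess has to work out.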
So: “If we can determine the unity element for the set of the four kings and the type of group, we will be able to answer the princess’s question concerning the royal conspiracy.”
There’s much more of course, with clues and puzzles scattered throughout the text. As Roubaud explains elsewhere, everything in his work — down to each letter of the alphabet — has more than one meaning.
The Princess Hoppy represents one side of Roubaud's writing: the playful, intellectually entertaining, and eccentric side. An opposite, complementary aspect is deeply personal, balancing emotional intensity with mathematical abstraction. In Quelque chose noir (1986; Some Thing Black, 1991), Roubaud organizes the experience of grief into nine groups of nine poems — beginning with the starkly physical account of finding a body, and ending with one last poetic expression of grief: “Nothing.”
By now I hope you’d like to know more about this unusual writer, so here’s a quick get-acquainted guide.
Born in 1932, Roubaud was a literary prodigy, who began publishing poetry at the age of twelve. But he disliked the competitive system of French education, and dropped in and out of school, studying English, linguistics, and other subjects at various points. Then, in 1954, he happened into contact with the innovative mathematical collective known as Bourbaki, and became interested in set theory. Although twenty-two is a very late age to begin the serious study of mathematics, Roubaud made rapid progress, and by the early 1960s had obtained a teaching position and completed his doctorate.
The decision to immerse himself in math was the first turning point in Roubaud’s life — but he couldn’t abandon literature altogether. During a dream on his birthday in 1961, he envisioned a vast web of works that would contain both literary and mathematical texts. He planned to provide a map of its thematic structure in the form of a novel, to be called Le grand incendie de Londres [The Great Fire of London].
Roubaud embarked on this “grand project” with a disciplined plan of writing, and extensive research into a traditional poetic form, the sonnet —a fourteen-line poem with ten syllables in each line and a thematic turn that usually divides the poem into an eight-line octet and a six-line sestet. Since there are several possible rhyme schemes, and several different ways of organizing the sonnet’s internal elements, the basic form has many variations.
Combining his sonnet research with a mathematical structure based on the game of Go, Roubaud produced his first major work, the 1967 collection titled simply ∈ (the symbol for an element belonging to a set). The poems and their internal components can be rearranged by the reader in a variety of configurations — and if that sounds intriguing, you can explore an English translation of the collection online, for free.
During the 1970s, Roubaud continued teaching mathematics and writing mathematically inspired poetry. He also collaborated on a play cycle and published Graal Fiction, a text that combines the retelling of several medieval stories about the Grail quest with interconnected theoretical commentaries.
By now, however, Roubaud had realized that his original vision of the “grand project” was impossibly ambitious — and at the end of 1978, he tore up the outline and threw it away.
The following year he met and married 28-year-old photographer Alix Cléo, with whom he formed a deep emotional and creative bond. Her sudden death from a pulmonary embolism, just three years later, became the second major turning point in Roubaud’s life, and for many years, his work reflected — often very subtly — a slow, complex process of grief and recovery.
The first expression of that process was Some Thing Black, a somber, often wrenching, volume of poems published in 1986. But in a characteristic alternation of emphasis, Roubaud surrounded the publication of Some Thing Black with two comic, carnivalesque novels: La belle Hortense: roman (1985; Our Beautiful Heroine, 1987) and L’enlèvement d’Hortense: roman (1987; Hortense Is Abducted, 1989). Three years later he concluded the trilogy with L’exil d’Hortense: roman (1990; Hortense in Exile, 1992).
A cross between off-kilter fairy tale and crazy-quilt detective story, the Hortense novels are not just irresistibly entertaining but also intellectually mischievous, as Roubaud parodies accepted ideas about the literary novel, and lampoons some exaggerated aspects of literary theory.
In 1989, Roubaud returned to his “grand project,” but from a radically different perspective. The prose work he published as Le grand incendie de Londres was both a description of his original vision and an account of why and how he abandoned it.
You can get a good idea of the book in English translation from the “Look Inside” feature on Amazon. And in case you’d like to read about the work before/during/after exploring the work itself, Dalkey Archive has kindly provided the text of several commentaries for free download.
Le grand incendie de Londres began a series of works that Roubaud calls the “minimal project” — exploring the dimensions of memory and the fragmented remains of what he had once imagined. A second volume, La Boucle, travels back to Roubaud’s memories of family and childhood. It was published in 1993 and translated as The Loop in 2002. The third volume, Mathematique: recit (1997; Mathematics, 2012), focuses on the period of his life during which Roubaud committed to the serious study of math.
A fourth (not yet translated) volume traces Roubaud’s early explorations in the practice of poetry — and taken together, Mathematique and Poesie form a “branch” of the minimal project, reflecting Roubaud’s practice of linking various texts in a tree-like structure.
The minimal project, which has grown to seven volumes, comprises a complete cycle of autobiographical reflection, but does not present a linear narrative. And in each text, memories are recounted in the order they came back into recollection, not in the order that events actually took place.
The elementary unit of these works is the “prose moment,” Roubaud’s term for the amount of writing that happens between the time he habitually begins (in the early morning darkness) and the time he stops (when sunlight reaches his desk). In constructing the memoir cycle, his practice was to write every day, but never re-read or revise what he had already written, preserving the spontaneity of his recollections.
In Mathematics, Roubaud offered this observation:
“I sought out arithmetic to protect myself. But from what? At the time, I would probably have replied: from vagueness, from a lack of rigor, from ‘literature.’”
Yet he went on to discover that the two realms — “arithmetic” and “literature” — could not be completely separated, at least in his own work. The result has been a lifetime of writing that not only breaks new ground intellectually, but also reaches into the deepest recesses of the human heart.
Which seems quite remarkable. | https://medium.com/literally-literary/poetry-prose-math-and-jacques-roubaud-998fa18071f9 | ['Cynthia Giles'] | 2020-05-15 20:16:04.093000+00:00 | ['Literature', 'Philosophy', 'Mathematics', 'Essay', 'Writing'] |
Warrior Rising | In his mother’s footsteps, the unlikely sheltered chosen
A disallowed controversial path forged, spuriously blessed
Tortured, perhaps, his crystal eyes haunted, subdued, saddened
Smoke and mirrors, a shattered facade of illusions
Fabricated realities altered, upheld in false truths, secrets
Publicly resigned, obedient, diligence befittingly executed
Behind fractured doors, slivers, splinters, ancient echos
Though the hand he holds may be human and flawed, it is loved
To draw first sword, first blood, critics take heed, away
Humble strides beneath the world’s judging gaze
Unrelenting, a desire for more, for better, simpler, quiet
The jewels reduced to dust in his wake
A new dawn, a revolution of change unfolds
Twenty-three years bereaved, growing, flourishing
A man, husband, father, captain, veteran, warrior
Oceans crossed, a great divide, symbolism does not escape
Loyalty reinforced, examined, proclaimed
Stepping back to stand forward, bravely
Strong and free in this colonial land, a new genesis
Emergence of evolved sovereignty, a warrior rising | https://medium.com/imperfect-words/warrior-rising-1985ad08dbfd | ['Edie Tuck'] | 2020-02-13 03:46:49.162000+00:00 | ['Poetry', 'Future', 'Warriors', 'Prince', 'Change'] |
Unleashing the real power of data: an interview with Máximo Gurméndez
Big Data, Machine Learning, Data Analytics. All of these are concepts that have been around for a while and, due to recent events, have captured more interest at all levels: from governments to the private sector and common people, we all have heard of them.
We, at Arion, thought it was relevant to shed some light on all of these concepts and understand what we can expect in the near future. That's why Martín Bouza sat down for a chat with someone we believe is one of the most capable professionals in the field, Máximo Gurméndez, Founder and Chief Engineer at Montevideo Labs and Academic Director of the Bachelor's Degree in Data Science for Business at Universidad de Montevideo.
The start of it all
It all started with Amazon. The accurate product recommendations they offered were what lit the fire for Máximo. He left Uruguay and settled in the US to study through the Fulbright Scholarship, which enabled him to connect with top-notch professionals and professors. As he explains: “I had the opportunity to meet people who were very relevant in the area of Big Data, in particular a professor who later hired me as an assistant researcher. He was one of the inventors of the map reduce, a framework that Google began to use towards the end of the nineties resulting in a paradigm shift in the way results were found on the Internet. Before, Altavista and all those pages were quite bad. Through this technology of map reduce and the use of algorithms such as page rank, the quality of searches was greatly improved. This professor that I worked with had a relevant part in developing this framework.”
Máximo then set out to turn the theory into practice and became an intern in Dataxu. At the time, this digital marketing startup was composed of 8 engineers with the goal of changing the way ads were bought in different internet media. “We believed that instead of being based on manual arrangements between companies, ads should be bought through auctions where everyone was going to win — because the advertisers who paid exactly the right price for the right ad at the right time would win. Publishers could potentially receive more revenue because if their sites were more relevant, then the auctions would naturally lead to those balances where publishers, media owners, web page owners, those with available space were going to have bigger chances, and all of this was done in the context of real-time shots, which happened at a rate of 3 million times per second, which is crazy”.
Dataxu progressed into using Machine Learning to decide how much to bet on each advertising space, at what moment and for whom, based on certain patterns. In 2019 Dataxu (which had grown from a startup to a company with over 300 people and offices in 12 countries) was absorbed by Roku, the streaming giant.
Ten years ago, Big Data and Machine Learning were very new concepts, and not many understood what they meant or had worked with them, especially in Latin America. But that didn’t stop Máximo from coming back to Uruguay and founding Montevideo Labs. “At the beginning we had to convince people — the first 2 engineers worked from the attic of my house and we grew from there. Today we are almost 40 people. We’re very proud of the process and the personal growth we’ve had”.
What is big data, in a practical sense
What is Big Data? What does it mean for non academic players? How does it relate to concepts such as Machine Learning? For Máximo, “There are many points of intersection, from academic definitions to a whole lot of things, lots of different literature on Big Data and Artificial Intelligence. Big Data is especially mentioned in a business context, mostly in connection to building products based on data. The conventional systems with which we process the data are not suitable for these volumes of data because there are so many, they come so fast, they are very varied. Big Data is all of that. But I think that, above all, Big Data is about how to create business value from these data opportunities, which in many cases are unexplored data, that people do not wonder whether it could be used to improve other aspects of the business”. Máximo believes that this discipline will evolve even faster once it’s fully embraced by business people, shifting management paradigms, from experience driven to data driven. Of course, at the end, it’s human managers who will have to make decisions, but they will probably do so with a better understanding of data in lieu of intuition.
For Máximo, all of these are technologies with an immense potential, but we are still fine tuning our abilities. For example, the first time he used Shazam, his first impression was of surprise and amazement. “But then I tried singing the same song, and Shazam wouldn’t recognize it. If you sing to it or alter the song even a little bit, the song is no longer recognized. Shazam does what we call overfitting in Machine Learning: it’s programmed to capture that song as it is, but if you vary it a little bit it no longer works so well. And that kind of reflects the current state of Machine Learning: there are things for which Machine Learning is very useful, and in many problems we are far from creating a complete solution, which we can somehow depend on, understanding why we can depend on these decisions made by these systems. But yes, without a doubt shazaming, when one begins to apply it not only to music but to images, videos, different media, information flows that involve the time dimension, means that what we do today with text searches is going to change. We are on our way to becoming a world where we do not look for relevant information from texts but from anywhere else.”
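To make the overfitting idea concrete, here is a small illustrative sketch in Python (mine, not Máximo's): a degree-9 polynomial can memorise ten noisy training points almost perfectly, yet on slightly different inputs from the same underlying curve its error is typically far worse, which is the same failure mode as recognising only the exact recording it memorised.

import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

# Enough free parameters to pass (almost) exactly through all ten points.
coeffs = np.polyfit(x_train, y_train, deg=9)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

# New inputs from the same curve: the memorised fit usually does much worse.
x_test = np.linspace(0.03, 0.97, 50)
y_test = np.sin(2 * np.pi * x_test)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_mse:.5f}   test MSE: {test_mse:.5f}")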
Coronavirus: a very real motivation
The current pandemic is very interesting in terms of big data. As Máximo explains, all of the predictive models that forecasted what would eventually happen were wrong, because they were modelled on other diseases and behaviours. Behaviour changed very quickly almost worldwide, and not all companies and systems have models that are adaptable to abrupt changes in behavior. And he also finds the role of data scientists interesting, because “when you only look at the statistics you don’t know why you’re making the decisions you’re making, when you look at knowledge engineering the systems are not so accurate because we don’t have enough data to feed these systems and also not generate inconsistencies. And I think that probably the best outcome is a mix of the two. So we cannot say that analysts who do not know anything about epidemiology are right, nor we can say that only with cause-effect analysis we’ll be able to predict what is going to happen. We have to look at data and we have to understand data. They take these algorithms that try to classify things that they have never seen, like the zero shots or those algorithms, which are related to this problem. We have certain notions of things that happened in the past that we can apply to situations in the future, which is called transfer learning”.
For Máximo, all of this will enrich the discipline, because a lot will be learned about how to reconcile these two worlds. Big Data and Machine Learning are positioned uniquely to combat new virus outbreaks and to gather data. He believes people’s perspectives on data gathering will also change. “If I had asked this question a year ago, the first thing that would have come up would have been privacy issues. And today people are saying ‘well, privacy, yes, but my data is also contributing to a common good, which is that fewer people die’. When people are told ‘look, by providing your data fewer people will die’, the perspective changes”. This all means that we are moving towards discussions that are not so related to technology, but rather about ethics and the new limits are now, because technology will continue to advance and algorithms will continue to evolve.
For all of this to happen, the current limitations that the discipline is facing need to be addressed. It might be a case of data availability (not in terms of its existence, but of the collection), or it could be related to a still small pool of talent. Or it could be a matter of incentives. As Máximo sees it, “I think it’s connected to all of these. Some things don’t bloom because there is room for improvement in the incentives given for that. And although there is a lot of investment in academia, in order to solve certain problems, the really important funding comes from the industry, the governments, which are also motivated by current problems, not so much projected into the future but current problems. I also think that human beings are naturally curious, both the academic and the engineer are very curious and they will always dedicate some time to explore these borders, maybe not in the intensity and speed that is needed, but there will always be a quota of it”. He isn’t worried about the talent, because the way he sees it is that “not only industries but ordinary people as you said are learning more about data science. When you look at the coronavirus contagion curve, they are into it, they have learnt to read the double axis, things that ordinary people did not do before. They learn unintuitive topics such as exponential growth. The human mind is not prepared for exponential growth, so there are certain concepts that are incorporated, such as the R value. People became interested in these concepts”.
The role of experience, or how an experienced data scientist can assess the next step
Montevideo Labs is often faced with different customers asking the same question. What to do, when, and how. And because the world of possibilities that exists in Big Data is enormous, Máximo explains, one can assign 10 engineers to work on a certain problem, have a proof of concept and arrive at a model that makes very precise decisions, or one can put one person working for a month and arrive at a decent model. And it is not always easy to assess the return on that investment. And many times, ultimately people end up interacting with these models, for example if it is a forecasting system for a campaign. People go to a user interface and try to figure out, “if I put more budget and point more to this profile or this other profile, what will happen, how much will I spend, how many people will I reach, what kind of people will it go to?”. To get there, they want to answer those what if questions as quickly as possible. And sometimes the models give results that are not 100% intuitive, since the data says that raising this or that value actually lowers the cost. “A model that we could even call too exact, too polished, is perhaps not the best for the user experience.” There are a lot of other factors that are not only numerical but are user experience related. Sometimes an average model that gives generally correct results in the user experience and overall product is better than the most accurate model, and perhaps too much time was spent on trying to reach the perfect model. And there’s also the decision of which model to use: today there are millions of types, different approximations, from networks to decision trees, to averages, to dividing one number by another. All of these often work, sometimes not, and the real value of experience lies in deciding under what conditions to use more and less complex models. Infinite resources mean that you can do whatever you want, but with limited resources and with the ignorance of what is going to happen next in a much larger ecosystem, because it is an ecosystem that has millions of users interacting with your models, it is not so easy.
This is the logic behind the consultative work Máximo does at Montevideo Labs as a data scientist. Máximo describes it as “being the translator between technical-minded and business-minded actors. That is the role of the data scientist, to try to explain to the person who makes corporate decisions why we have to go with this approach or this other approach for the problem they are facing, which are technical approaches that for the business person really don’t make any sense. We also add value in that sense, working in different industries from agritech to programmatic marketing or even data streaming, so we get to bring together different profiles of people who understand each other and develop something that makes sense”. For this to happen, Montevideo Labs tries to expose their people to as many actors as possible so that they have an intuition, at least, of what things product managers, CEOs, scientists, hardware engineers are thinking about. That gives them insight on what is going on inside the head of each of these actors and how to bring them together to develop something that has value. They also have formative activities.
The appeal of Big Data for new generations
For Máximo, Big Data allowed him to connect his credentials with the way he understood the world around him. And he believes this might happen to others as well. Here are his three pieces of advice for young people that might be interested in becoming a part of this discipline.
#1 — Invest in long term wins
“I think the first thing is to think about investing in the long term. In the world of Big Data, let’s not think about short wins when it comes to preparing people. Let us be patient to overcome certain barriers that exist initially, analytical, mathematical barriers, that in the long term end up paying. Going through these long-term processes pays”. As convenient as it may seem to take a course or a quick training and become a part of the industry, Máximo believes that the contribution people who have focused on quick wins can make is substantially different from the contribution made by people with a deeper understanding.
#2 — Abstraction is everything
Big Data is a way to understand patterns that can be traced to decision making. But because the volume of data is so large, and so many variables can be considered, scientists need to approach reality with a high level of abstraction. “We are reaching the point where machines create abstractions that humans are not capable of understanding. In the latest Machine Learning algorithms we cannot understand why they came to that decision. If we try to look into it, they are black boxes, we cannot know why they make the decisions they make, which are generally correct and even more precise than those that humans could reach through logical reasoning. So we have to be prepared to think in an abstract way and understand those abstractions”.
#3 — Programming is not something to be scared of
For decades, a lot of emphasis in education was placed on mathematics. And that happened at the beginning, in primary education, secondary education, university. But programming was considerably less emphasized. “Today’s programming is what math was in the past. So it might be a little challenging at the beginning, but I believe that we are all capable of having that ability and we should not be afraid of it, and we have to be trained, because that’s the way to create simulations, it is the way to go testing, so don’t be afraid of programming”.
#4 — Be curious | https://medium.com/arionkoder/unleashing-the-real-power-of-data-an-interview-with-m%C3%A1ximo-gurm%C3%A9ndez-8bd754f2a49 | [] | 2020-09-07 16:07:54.547000+00:00 | ['Machine Learning', 'Analytics', 'Data', 'Data Science', 'Big Data'] |
Evolving MySQL Compression — Part 2 | This post follows a previous one, Evolving MySQL Compression.
By William Tom | Pinterest engineer, SRE
Pinterest’s main data source–Pin data–is stored as medium-sized (~1.2kb) JSON blobs in our MySQL cluster. These blobs are very compressible, but the existing compression system in MySQL was less than optimal and only resulted in 2:1 compression. In a previous post, we discussed why column compression is a more ideal compression system. In order to use column compression and have significant savings, we need to use a compression and optional predefined compression dictionary (i.e. lookback window). Here we’ll cover how we increased the compression ratio of Pin data from around 3:1 to 3.47:1.
Background
First, let’s take a look at the underlying compression library used by InnoDB, Zlib, a compression library initially used for the PNG file format. Zlib’s compression follows the DEFLATE compression algorithm which consists of two main stages. First, LZ77 replaces recurring strings with a pointer to a previous instance of that string as well as the length of the string to repeat. Then the resulting data is Huffman encoded. We optimize the LZ77 stage by using a predefined dictionary that preloads the lookback window with common substrings. This works especially well since each data object compressed is much smaller than the sliding window, allowing the entirety of each Pin object to look back into the majority of the predefined window. We were inspired by similar work done by Cloudflare.
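As a minimal sketch of that mechanism, here is how a blob can be compressed against a predefined dictionary with Python's zlib bindings; the dictionary contents below are an illustrative stand-in, not Pinterest's actual Pin dictionary.

import zlib

# Hypothetical predefined dictionary: common Pin substrings, most valuable last.
PIN_DICT = b'"is_video": false, "description": "", "link": "http://'

def compress_pin(pin_json: bytes) -> bytes:
    c = zlib.compressobj(level=6, zdict=PIN_DICT)
    return c.compress(pin_json) + c.flush()

def decompress_pin(blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=PIN_DICT)   # must be the same dictionary
    return d.decompress(blob) + d.flush()

pin = b'{"link": "http://example.com/a", "is_video": false}'
assert decompress_pin(compress_pin(pin)) == pin

Because a ~1.2KB Pin is far smaller than the 32KB sliding window, every byte of the object can reference substrings anywhere in the preloaded dictionary.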
Initial tests
Before starting this project, the predefined dictionary was populated with a single Pin object. The compression ratio was roughly 3:1, which is good given all Pin objects share roughly the same schema, allowing keys to be reused.
It's common for Pins to share certain JSON fields or substrings of those fields. By using only a single Pin, we weren't taking advantage of additional savings accessible through those JSON values. Examples of such fields are links which often start with “http://” or fields expecting boolean values. I ran a few quick compressions to ensure there were savings to be had. I took roughly 10,000 Pins, concatenated them and compressed them using the largest window size (32KB) and highest compression level (Z_BEST_COMPRESSION, 9). This allowed each Pin to look back into the previous 25 or so Pins for common strings while compressing the data as much as possible. I then did the same thing, but shuffled the bytes of each Pin before concatenating. Because the relative frequency of each byte is still the same (disregarding strings that are replaced with the LZ77 lookback tuple), the Huffman encoding portion of DEFLATE yielded roughly the same savings. The difference in compression ratios would provide an approximation of expected savings. Another option was to populate the predefined dictionary with a handful of Pins using deflateSetDictionary, but the first method allowed roughly the same outcome without having to write code. Regardless, the results showed there were savings to be had, so I continued on with the project.
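A rough sketch of that comparison in Python (the Pins below are tiny synthetic stand-ins; the point is that shuffling preserves byte frequencies, so any drop in the ratio isolates the LZ77 string-matching contribution):

import json, random, zlib

def compression_ratio(blobs):
    data = b"".join(blobs)
    return len(data) / len(zlib.compress(data, 9))   # level 9, default 32KB window

def shuffled(blob):
    b = bytearray(blob)
    random.shuffle(b)        # same byte frequencies, repeated substrings destroyed
    return bytes(b)

pins = [json.dumps({"id": i, "link": f"http://example.com/{i}", "is_video": False}).encode()
        for i in range(10000)]
print("as stored          :", compression_ratio(pins))
print("bytes shuffled/Pin :", compression_ratio([shuffled(p) for p in pins]))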
The process
Initially, I tried modifying the Zlib source to determine which common strings the LZ77 portion compressed for use in the pre-defined dictionary. Programming in C isn't my fastest coding language, so after a week or so I decided it'd be less painful to write my own utilities to generate a decent pre-defined dictionary for Pin objects than to spend my internship trying to become proficient with highly optimized K&R style C.
I took a large number of Pins (~200k) from two different shard DBs and two different generation eras. We shard based on userID, so earlier shard DBs contain a lot of older Pins with somewhat different schemas than more recently added Pins.
In order to generate a more general pre-defined dictionary for all generations of Pins, I shuffled the input data in three different ways:
1. Random Pin pairs from the older shard.
2. Random Pin pairs from the newer shard.
3. Random Pins from the older shard paired with random Pins from the newer shard.
I then batched the Pin pairs and concurrently looked for common substrings (within the lookback window and lookahead buffer, all larger than 3 bytes) between each Pin pair in the batch.
Because this is a long running process, I maintained persistent state at the end of each completed batch which allowed the script to start from the last completed batch in the event of failure. The substring frequencies from each batch were then aggregated and passed onto the next step, dictionary generation.
Dictionary generation
After batching, the substrings are scored based on their length and frequency of occurrence. In order to avoid repeating substrings in the output dictionary, we swallow fully contained substrings and incorporate their score into the swallowing substring's score. The highest n scoring strings that fit in the defined dictionary size are concatenated such that higher scoring strings are at the end of the dictionary (as per the zlib manual). Then, any overhang is truncated and the result is handed to the user as the pre-defined dictionary.
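A simplified sketch of that scoring-and-assembly step (illustrative code, not the open-sourced utility itself):

def build_dictionary(substring_counts, max_size=32506):
    # substring_counts: dict mapping a byte substring to its observed frequency.
    # Score by length * frequency and walk from best to worst.
    ranked = sorted(substring_counts.items(),
                    key=lambda kv: len(kv[0]) * kv[1], reverse=True)
    kept = []                                  # [substring, accumulated score]
    for sub, count in ranked:
        container = next((k for k in kept if sub in k[0]), None)
        if container is not None:
            container[1] += len(sub) * count   # swallow fully contained substrings
        else:
            kept.append([sub, len(sub) * count])
    # Per the zlib manual the most valuable strings belong at the END of the
    # dictionary, so sort ascending (best last) and truncate overhang from the front.
    kept.sort(key=lambda k: k[1])
    dictionary = b"".join(s for s, _ in kept)
    return dictionary[-max_size:]

# e.g. build_dictionary({b'"is_video": false': 9000, b'http://': 15000})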
Although the implementation was slow, we found the frequencies of common substrings between 100,000 pairs of Pins in slightly more than 400 CPU days. We didn’t devote time to improving performance, because generating the dictionary itself isn’t a frequently run task. Furthermore, as much as I would’ve liked to come up with an efficient shortest common supersequence solution to fit more in the pre-defined dictionary, it’s NP-complete.
These tools have been added to our open source tools repository in the hope they might be useful to others.
Benchmarks
Benchmarks using this predefined dictionary looked promising. In terms of space savings, the computed dictionary saves more than 10 percent versus using a single Pin as the predefined dictionary, in addition to 40 percent savings over the existing InnoDB Page Compression. We considered using an 8kb or 16KB dictionary over a ~32KB (32506B) dictionary as a tradeoff to increase performance, but since it wasn’t a requirement, we didn’t increase the zlib compression level to maximize the compression savings.
Next steps
When the first blog post on column compression was written, we hadn't deployed this change to production. Since then, we've pushed column compression out without incident.
In the future, we’re planning to test a Pin-curated static Huffman tree as a potential pathway to additional savings. Currently, zlib doesn’t support this functionality and will either use a general static code or a dynamic code, depending on size.
Acknowledgements: Thanks to Rob Wultsch for his guidance and mentorship. Thanks to Pinterest for making my internship a memorable experience. | https://medium.com/pinterest-engineering/evolving-mysql-compression-part-2-2c3eb0101205 | ['Pinterest Engineering'] | 2017-04-02 17:25:22.651000+00:00 | ['Database', 'Compression', 'Engineering', 'Sre', 'MySQL'] |
An Approach to Application Modernization: Planning & Design Phase | An Approach to Application Modernization: Planning & Design Phase
Modernization is a team sport where end-users, lines of business, and IT systems need to come together with a single goal in mind: “ease of use and improving user experience.”
By Ernese Norelus, Eduardo Patrocinio, Oliver Senti
This installment is part two of a multi-part blog series; we go deeper into applying the application modernization approach, leveraging IBM Enterprise Design Thinking, Event Storming, Domain-Driven Design, and Event-Driven Architecture. We draw on the Garage Method as an exemplary approach to enable application modernization projects at scale, applying it to a compelling application modernization journey to the Cloud.
Why modernize?
Without a doubt, this should be your first question: “Why do I need to modernize my applications? What do I gain from this?” Modernization is a costly and risky endeavor. Frankly, what do companies gain from modernizing their applications aside from a big fat bill, a negative financial impact, and the risk that the effort might not deliver the business value expected?
An actual modernization project is a transformative force that keeps your company competitive and relevant rather than a burden. All applications go through a lifecycle; modernization is a journey, not a destination. Your success is bound directly to your ability to transform and disrupt the industry and the competition. Application modernization is a necessary evil that organizations must undertake at some point. There can be a myriad of reasons for you to modernize your application: the application no longer meets the organization's needs, the application is going out of support, or a policy requires deprecating end-of-life products. You could choose not to update, ride the wave, cash in, and call it quits once you gain no more profit. Okay, I make it look rather drastic with a bit of sarcasm. Like all things, there is a price to pay to remain relevant and competitive in a volatile market, and you must be prepared to pivot when needed. You don't have the luxury of not modernizing; otherwise it's like declaring bankruptcy, because you cannot compete with the agility of newer, leaner organizations.
In my experience, there are many wrong reasons for application modernization. Don't fall into the trap of reasoning like: “I need to be on the latest technology, as everyone is going to the Cloud.”
Application modernization can be a technology-driven initiative, but such initiatives often fail if not supported by business objectives. Both business and IT must arrive at an agreed target architecture for this to be sustainable. Application modernization has to be part of the organization’s culture to gain the benefits that come with it. It must provide advice, guidance, standards, and controls for it to be successful. Here are my top three reasons for any business to go down the path of application modernization:
Create new business opportunities — “user satisfaction.”
Deliver applications and features faster with business agility and speed — “reduce onboarding time.”
Reduce the high cost of new features — “cost-effectiveness.”
By now, you will have realized that modernization is not an endeavor to take lightly, as it affects end-users, lines of business, IT systems, and the IT personnel who come to depend on the solutions to deliver business value. A poorly planned application modernization may not translate into improved business value; hence the good reasons for beginning the journey with planning and design. The rest of the blog will do just that by walking you through our Garage Method approach to Application Modernization.
Modernization journey
As more applications are migrating to the Cloud, the IBM Garage provides a prescriptive approach to Application Modernization, starting with a business framing to review the business objectives:
Application Modernization focuses on delivering business value and addressing at-risk technologies. It’s a good idea to start with Business Framing; business framing is divided into five major parts:
Business drivers:
Initiative exploration:
Initiative prioritization:
Set up for success:
Follow up:
Application Modernization with Event Storming
The goal of application modernization is to refactor or rewrite a critical legacy system for improvement and optimization, through a brownfield exploration of the business services or business processes. Here we bring to your attention an approach that has yielded much success with application modernization, leveraging Event Storming and Domain-Driven Design. Both have been extremely successful throughout the industry with the adoption of microservices.
Event Storming is a rapid “outside-in” design technique delivered in a workshop format to explore complex business domains interactively. It uses a visual approach to communicate business processes as a domain model in ubiquitous language, a synthesis of business modeling principles drawn from Domain-Driven Design, where the domain is defined as a sphere of activity or knowledge. The workshop focuses on domain events generated in the context of a business process or a business application. A domain event is something meaningful that happened in the domain. The workshop also emphasizes communication between product owners, domain experts, and developers. The process starts from the context of events happening in a domain and looks at events as fundamental elements in a model. The domain model captures concepts and processes for a specific business domain and requires a deep understanding of the field in question. The best way to meet these requirements is through Event Storming.
Many of the concepts used during an Event Storming workshop come from Domain-Driven Design; Event Storming focuses on an interactive, collaborative whiteboard exercise that engages all domain experts. In comparison, Domain-Driven Design is best at strategic design and requires extensive training and practice to master the terminology. Domain-Driven Design is best described as a software development approach to solving complex domain models; the solution revolves around the business model by connecting the implementation to the core business concepts. The common terminology between the business/domain experts and the development team includes domain logic, subdomains, bounded contexts, context maps, domain models, and ubiquitous language, used as a way to collaborate, improve the application model, and resolve any emerging domain-related issues.
Event Storming intends to uncover the underlying domains and the services, and is best used for:
Understanding greenfield applications
Understanding brownfield applications
Verifying new business process ideas
Assisting to find bloat or inefficiencies in processes
Understanding complex ideas or processes
How the Event Storming Process Really Works
An Event Storming session is a collaborative, practical, hands-on scoping session that everyone can understand. The session is only successful if the right people are involved; this is arguably the most critical aspect of any project, not only a modernization project. Select the team carefully and bring the business domain experts, customer executives, stakeholders, business analysts, software developers, architects, testers, and folks who support the production product. These should be the right resources: enthusiastic, willing to embrace change, and able to work well in a team environment. It's a great deal of investment that will yield a great return. The session can vary from 2 hours to 5 days; it all depends on how complex the system is and how much work is required. Another one of its unique features is that it results in actual output. And this physical output can be used as the basis to scope your microservices and start the incremental design process for your software architecture.
During Event Storming sessions, you will find there's a lot of clarity gained on the end solution, because a lot of questioning happens and assumptions get resolved between people from different parts of the business. And with the development team involved, different if-then scenarios will be presented that the domain experts haven't even thought of. To keep the discussions in an Event Storming session scoped to the problem we are trying to solve, it's always useful to get the key stakeholders or the product owner to introduce the actual problem at the start of the session; this introduction only needs to be brief. And here is a cheat sheet, which will help you carry out your Event Storming session in real life. This first cheat sheet lists all the key things you need to prepare for your Event Storming session.
We will be using a tool from miro.com, which provides a virtual whiteboard and virtual stickies. These are virtual stickies that never run out of stickiness. Another requirement for the Event Storming session is the role of a facilitator. This individual or team will be responsible for organizing the Event Storming session and ensuring that the Event Storming session conventions are followed during the scoping session.
Cheat Sheet:
The cheat sheet will help you carry out your Event Storming session in real life; it lists all the key things you need to prepare for your Event Storming session.
Domain Events
The first step of Event Storming is to plot events, and we need our facilitator to introduce this step. The whole concept around events is pretty straightforward. These are things that need to happen within our end solution. And these events can be anything that needs to happen within our existing system or our new proposed system. The convention has always been to use orange sticky notes for domain events, which we’ll do. We also need our group to plot the events left to right across a timeline to show the event’s order.
Events are activities in the domain
Events are immutable
They represent an action that happened in the past (past tense)
Because the action has already completed, they can't be rejected
Often broadcast to many destinations
Record a change to the state of the domain. Often the result of a command
Commands / decisions
Commands usually are paired up with events and typically represent an action, interaction, or decision that leads to the event they are paired with. The convention for plotting commands is to use a blue sticky note. A command represents an action or a decision, normally taken either by a user or by the software itself. The idea behind plotting commands is to represent actions and decisions that led to an event, and to allow the team to question and debate which user or piece of software carried out the command that led to that event.
Commands are a type of activity that occurs in the domain.
Represents a request to perform an action.
The action has not yet happened, and it can be rejected.
Usually delivered to a specific destination.
Causes a change to the state of the domain.
External Systems
Within Event Storming sessions, the convention is to use a pink sticky note to show an external system. When we say external system, this can be anything that we don't have any control over. For example, it could be an external third-party system, or an internal system that's not currently part of what we're modeling in the Event Storming session.
Separate system
Third-party or internal system
You can show commands received
You can show events emitted
Can also depict as a black box:
- Triggered by commands in your system
- In return triggers commands or events in your system
Policies / Business Rules
Commands are often preceded by something called a policy, depicted as a lilac sticky note. Policies usually sit between an event and a command. With policies, we're forcing the team to think about all the events that need to happen before carrying out a command or a set of commands. One thing to note about the logic that policies represent: sometimes the logic is explicitly known, as in the organization knows all the events that lead to a certain command. Sometimes the logic is implicit, as in it has organically grown over time, and people are not aware that these events need to happen before that specific command is carried out. The idea behind plotting these in Event Storming sessions is to bring all this implicit logic out into the open and identify any additional events and commands (a small code sketch after the list below shows the “whenever event, then command” shape).
How is our SYSTEM supposed to reach the given EVENTS?
Whenever EVENT then COMMAND
We need a LILAC between the ORANGE and the BLUE
Implicit policies: without an explicit agreement
Explicit policies: Assuming everyone is following them
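A minimal illustration of that “whenever EVENT then COMMAND” shape in code (the names are invented for the account-opening example, not taken from the IBM material):

from dataclasses import dataclass

@dataclass
class KycReviewFailed:                  # orange sticky: a fact that already happened
    customer_id: str

@dataclass
class NotifyCustomerOfNegativeKyc:      # blue sticky: a request that may still be rejected
    customer_id: str

def kyc_policy(event, dispatch):
    # Lilac sticky: whenever KycReviewFailed, then NotifyCustomerOfNegativeKyc.
    if isinstance(event, KycReviewFailed):
        dispatch(NotifyCustomerOfNegativeKyc(customer_id=event.customer_id))

kyc_policy(KycReviewFailed("c-42"), dispatch=print)   # example wiring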
Read-Models
Read models allow you to plot any important data requirements. This lets you emphasize the specific data needed to make a decision or carry out an action. It could be a specific part of a UI or a specific type of record that is required in order for the user to make a decision or carry out an action. We plot these data requirements as read models in Event Storming, and we use a green sticky note to plot them. The read model terminology isn't very intuitive; it might be worth introducing read models within your Event Storming sessions by presenting examples. A read model can include entire screens within your application, specific parts of a screen within your application, the output of a report generated by your system, the output of a SQL query, or notifications to a mobile app: basically anything readable that will help a user carry out an action or make a decision.
Data to help the user carry out a command
Aid user actions and decisions
Required data that could be anything.
- Data on screen
- Data from a SQL query
- Data from any data source
- Specific part of the UI
Can use scribbling of a UI wireframe
Use to highlight specific required data
Hot Spots
A hot spot is a place that needs more exploration or where proper knowledge is lacking.
Things that are not working as planned
Exceptions
Irregularities
Failures in the system
Aggregates
Plotting aggregates is a crucial step towards scoping and identifying our microservices. The reason for plotting aggregates is that we have something to represent the data that the commands act on, and something to represent the data related to the events that happen within our system. Aggregate sticky notes usually sit between command and event pairing sticky notes. In reality, the aggregate should really be seen as a data holder that holds many instances of that specific type of data, and it also has a state based on that dataset. Another key reason for plotting aggregates is that they bring together command and event pairings with other command and event pairings that act on the same data (see the code sketch after the list below for how this maps to an implementation).
Data holder
Commands act upon the data
Emits events in response
State is consistent with the data
Pale yellow sticky
A noun for the name
Way of grouping commands and events
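To ground the sticky-note vocabulary in code, here is a minimal, illustrative Python sketch of an aggregate for the account-creation example (class and field names are assumptions made for illustration, not part of the IBM material):

from dataclasses import dataclass

@dataclass
class CreateAccount:            # command: a request that can still be rejected
    customer_id: str

@dataclass
class AccountCreated:           # event: an immutable fact emitted in response
    account_id: str
    customer_id: str

class Account:
    # Aggregate: the data holder commands act upon; it keeps its own state consistent.
    def __init__(self, account_id: str):
        self.account_id = account_id
        self.created = False

    def handle(self, cmd: CreateAccount):
        if self.created:
            raise ValueError("account already created")   # command rejected
        self.created = True                                # state stays consistent
        return [AccountCreated(self.account_id, cmd.customer_id)]

events = Account("acct-1").handle(CreateAccount(customer_id="c-42"))

Grouping aggregates that share concepts and language, as described next, is then what yields the candidate microservice boundaries.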
Drawing Boundaries — Context is King
An exciting stage of the Event Storming session is identifying bounded contexts, which consolidates and decomposes aggregates and draws boundaries around them. These boundaries will help identify the initial scope of our microservices. Before we start this process, the first thing we need to do is remove the concept of a timeline from our workspace, because the timeline has served its purpose in terms of helping us identify all the events that need to happen in our system end to end. When it comes to grouping our aggregates, we look at the concepts and the language that the aggregates portray and where there's a clear relationship. We use that relationship to group aggregates together. The idea of using related concepts and related language to group things is sometimes known as identifying the bounded context. Using this approach, we identify a clear relationship between route and routing audit, and we group these together.
Timeline no longer required
Group related aggregates
Related by concepts
Related by language
Use boundary for microservice scope
Review boundaries before finalizing scope
- Decompose for performance
- Consolidate for transaction boundaries
Consolidating and Decomposing Microservices
The output that we've obtained from our Event Storming session gives us the initial scopes for our microservices. Other techniques and factors should be taken into account to further consolidate or decompose the microservices' scope.
Let's look at a banking process (understanding a brownfield application); we will take on a monolithic application and refactor it into a microservice architecture. Along the way, we might figure out which set of services might need to be retired. The Event Storming will have already helped identify which applications are critical to your business. Applications that support a particular line of business are one logical grouping for decision making.
Grouping applications based on a particular line of business
Particular architectures or technologies
The starting point of any Event Storming workshop is an initial brainstorming where everyone writes as many events as possible at the same time.
Putting it all together
The first pass was to create all the events (Orange stickies) from the Account Creation system.
Pre-created accounted
Account Creation Processed
Created Account
KYC_R Reviewed Info
Lead Requested
Stored document (reference in MongoDB)
Notified the customer via (Email)
Notified Customer on Negative Status of (KYC Checks)
Issued Debit Card
Customer activated DC
Send pin to customer
Request Received through Branch (BPEL)
Received and Verified
Retrieved (Pre-printed Cheque Book)
Generated Cheque (printer)
Sent Cheque Book to customer
Printed customer Name on Cheque Book
Validated Credit Details
Modified Credit Details
All the commands (blue stickies) plotted from the Account Creation
Account Creation Process
Create Account
Deliver Account Kit to the Customer
KYC_R Review Info
Lead Requests
Activate Debit Card
Issue Debit Card
Received and Verified
Retrieve (Pre-printed Cheque Book)
Generate Cheque (printer)
Validate Credit Details
Modify Credit Details
All the External System (red stickies) part of the Account Creation system
KYC Checks
Reference in MongoDB
Notified the customer via (Email)
Generated Cheque (printer)
All the policies (lilac stickies) part of the Account Creation system
IC Verification
Notified Customer on Negative Status of (KYC Checks)
Block Customer in the System
Modifying and Resubmit the Application
All the read-models (green stickies) part of the Account Creation system
Account Creation Process
Lead Requests
All the aggregates (black stickies) part of the Account Creation system can also translate into microservices. | https://medium.com/ibm-garage/an-approach-to-application-modernization-planning-design-phase-ab4ec4454914 | ['Ernese Norelus'] | 2020-12-02 16:41:06.527000+00:00 | ['Design Process', 'Architecture', 'Microservices', 'DevOps', 'Software Development'] |
“Canon” is something we made up | “Canon” is something we made up
Whether we're talking Star Wars or Twilight, Star Trek or Batman, there is no real difference between “canon” and “fan fiction”.
The entire argument has already been made in the headline and subheadline. The rest of this article will simply be a footnote on this point, which I believe to be more or less irrefutable.
Given this, you should recognize that this article is basically fluff, and consider that your next six minutes or so might be better spent doing something else.
Oh? You’re on your break at work and need something to pass the time? You’re in the restroom and need something to read? Or, you’re merely finding something else to occupy the agonizing, seemingly neverending hours of a sustained lockdown while counting down the days until you finally die of boredom? — or maybe of stress, while you watch the world descend into a politically-polarized, social media-driven collective hysteria?
Well, fine then. Here are my bogus opinions on a bogus argument I’ve seen play out on internet forums over and over again, because we all have strong opinions on irrelevant nonsense. Maybe this new angle can help you avoid some stress during your perusal of internet comments sections — even if the odds of such a thing approach pretty damn close to zero, since people who peruse internet comments sections aren’t exactly looking for peace of mind.
The argument is rather simple. I’m going to take a Socratic approach, where I pose questions, and then assume what your answers will be. This may seem unfair to you. But I object to this accusation of yours, because what I’m doing is no different than what Plato did. Only, I will actually say what it is I’m doing, rather than putting words in the mouth of someone who disagrees with my ideas. This makes me a more decent person than Plato, and thus you can trust me to present a fair and down-to-earth assessment of what you think, and how you’d respond. Fair? Fair. (See how that works?)
What are the characters in a fictional universe? Where do they exist? How do they exist?
Of course you have no trouble answering these questions. The characters, settings and events in a fictional universe are imaginary. They exist in our minds. They’re created by our imagination.
Let’s discuss what it really means to relegate something to the imagination. We might consider the Nominalists (yes, we have to include some substance or the article will be boring, just go with me here). John Stuart Mill explains here who they are:
In the later middle ages there grew up… a school of metaphysicians, termed Nominalists, who, repudiating Universal Substances, held that there is nothing general except names.
Nominalism is the viewpoint that universals, and general categories, don’t really “exist”. This is opposed to philosophers like Plato. Plato was a realist, who believed that the Forms constituted a sort of “universal” category, and that things in a given category therefore possessed a sort of “essence”. In Plato’s view, things are beautiful to the extent that they approach the universal “form of beauty”. What is “good” exists not simply in the minds of men, but in objective reality itself. The names we give things, then, would not be simply arbitrary, but actually designating some general, abstract category that actually exists.
Nominalists, on the other hand, say that these general categories don’t exist. They’re just abstractions. While the Platonic realist holds that an abstraction can have some degree of reality to it, the Nominalist denies this. Basically, it’s a negative position.
I assert that people who concern themselves over issues of “canon” in a fictional universe are acting like Platonic Realists, and those who reject such concerns are acting like Nominalists. I support the Nominalist position.
This is because a fictional character — like an abstraction, a form or a general category — exists solely within the mind. Granted, he or she can exist in a lot of minds. But even unlike issues of morality or religion, there is no real controversy as to whether James Bond really exists or whether Star Wars really did happen a long time ago in a galaxy far, far away. We all agree that these things don’t exist.
So, what is “canon”?
Well, obviously, you say: The stories told concerning a certain set of fictional characters and settings that are owned by the person, or people, who first made up the fictional character and setting.
But many of the people who made up these characters and settings are long dead. One could even say that most iconic cultural masterpieces of our society were made by people who are now dead or not involved with their production in an endless barrage of sequels and remakes. There’s been far more Star Trek made after Gene Roddenberry died than while he was alive.
Well, sure, you would naturally say. But Rick Berman and Majel Barrett and Michael Okuda and all these other people managed the franchise over the decades after Gene’s death. And they brought in new people and those people became just as important to the canon.
But isn’t that an example of the Ship of Theseus?
Of course, you would reply that you do know of the Ship of Theseus, and that it is a thought experiment about a ship whose boards and parts rotted over the years and had to be replaced, until after a time the entire ship had been replaced in this fashion and none of the original parts remain. (You can thank me for giving you so much credit.)
So we are likening the ownership over a given “canon” to the Ship of Theseus. But wouldn’t this mean that, at best, one can gain ownership of a canon by inheritance? That it can be passed down? How far shall we take this notion?
What if there’s a completely new set of people making new content under the same brand name? What if none of the original people are involved with the show anymore? And what of George Lucas selling Star Wars to Disney? Someone who created something can sell the rights to make “canon”?
Enough. Listen. This is all made up. The only hard realities we can identify in all of this are money, and intellectual property. The extent to which any of this is real is only the extent to which someone’s ownership of something is backed up by law. That means a lot in material terms. But that doesn’t mean a damn thing in artistic terms.
And if we’re going to say that it does, all we’ll be doing is allowing market forces to determine what we’re allowed to imagine.
Stop thinking like such a Platonic Realist, and come over to the Nominalist camp. Someone reboots a beloved franchise and makes new content under the same brand, and it’s absolute garbage? Fine. It’s not real, it didn’t “really happen”, and it has absolutely nothing to do with the creative vision of the people who originally made it. Whether a firm worth billions of dollars acquired the rights to reproduce certain characters or imagery from another firm worth billions really should have no bearing on whether that reproduction is taken seriously.
Put another way — no, you don’t have to accept that Star Trek: Picard is “canon”, or that The Rise of Skywalker is “canon”, or whatever else.
But wait, you say. We have a term for this. It’s called “headcanon”.
No. No, no, no, no, no. You’re still being a shameless Platonic Realist, still trying to draw a division between different ways we perceive the characters and stories that exist nowhere else other than in our imagination. The division between “canon” and “fan fiction” isn’t real. Therefore, anyone who spends any amount of time arguing as to what is canon or isn’t, or telling people they have to accept something as canon because some studio owns a bit of intellectual property, or shitting on people who like something that doesn’t fit in with your viewpoint of a given fictional property — is a damn pitiful rube. (I’m obviously exempt from this principle because of my own self-awareness of it, or something).
The concept of “canon” is as made up as the fictional universes we apply it to, for just the reason that it is an attempt at creating general category among things that aren’t real. That means that if you’re both a Nominalist in regard to fictional franchises, and a philosophical Nominalist, the idea of “canon” is like, doubly untrue, dude. Not a good look, bro.
That’s it. That’s all the distraction I can currently muster. You’ll have to scroll over to something else now.
But maybe you’ll now look at the creative and intellectual productions of our culture as something we all share. The art of our cultures exists in all of us, and cannot be “owned” by some company. It is yours. It’s all yours — to do with as you please, in your own mental kingdom. | https://medium.com/the-shadow/canon-is-something-we-made-up-be67a792deec | ['K. J. L. Kjeldsen'] | 2020-12-09 21:06:07.262000+00:00 | ['Art', 'Fiction', 'Ideas', 'Culture', 'Writing'] |
Data-Driven Attribution and How it Differs across Google Products | Data-Driven Attribution and How it Differs across Google Products
Google offers several products with data-driven attribution, but what are the differences between them? How can you select the best service for your business? In this article, we review and compare the most popular products that offer Data-Driven Attribution.
According to Gartner, about 74% of CMOs expect to spend more on digital advertising in 2021 than they did in 2020. But how can you assess your channels to know exactly where to invest more? Which ads make potential customers move to the next step of the funnel?
The solution is hidden in attribution — how the value of a conversion is distributed across channels that move the user through the funnel. However, some attribution models show you only part of the picture. And these gaps in data might be critical. After all, according to the rule of seven touches, the actual purchase frequently happens only at a customer’s eighth interaction with a brand. However, all steps affect one another and eventually lead to the conversion. So how can we objectively assess the conversion path?
As an ad giant, Google offers multiple attribution solutions, from standard attribution models to advanced options with the possibility to track multiple channels. In particular, several products allow you to set up a Data-Driven Attribution model that will help you dive deep and accurately credit marketing channels.
But how can you decide which service will best fit your business? What’s the difference between Google Ads and Search Ads 360? In this article, we review and compare the most popular products that offer Data-Driven Attribution.
What’s Data-Driven Attribution?
Data-Driven Attribution (DDA) by Google focuses on your advertising account’s data as a unique starting point for analysis. Unlike standard models with predefined formulas, DDA uses algorithms to analyze every case differently and assess the mutual influence of channels in the funnel, even if it is complicated, inconsistent, and multi-step.
To satisfy various business needs, Google offers DDA in a range of services. The differences between these services are in the data analyzed, the algorithms applied, and the level of customization. Some of them are designed only to track ad clicks and optimize keywords and paid campaigns, whereas others provide a full analysis of a customer’s online journey.
Before selecting a particular product, consider the following:
What’s your advertising budget?
What are your business goals?
How many conversions do you have on average every month?
Now, let’s take a closer look at what products Google offers that include Data-Driven Attribution.
Data-Driven Attribution with Google Analytics 360
With Google Analytics 360, you can use Multi-Channel Funnels (MCF) Data-Driven Attribution based on the Shapley Value method. This algorithm analyzes the path of your users through existing touchpoints, then creates an alternative variant where one of the touchpoints is missing. This shows you exactly how a specific channel influences the probability of a conversion. Data-Driven Attribution assesses data from organic search, direct traffic, and referral traffic along with all the data that you’ve imported to Google Analytics, including data from other Google products (e.g. Google Ads, Campaign Manager 360). With DDA in Google Analytics 360, you get an overview of all users’ online actions in your funnel and how each channel influences conversions. This option is most suitable for large websites with a high volume of conversions.
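To make the counterfactual idea concrete, here is a deliberately tiny toy sketch — not Google’s actual Shapley-value computation, and the data is invented — that compares conversion rates of paths with and without a given channel:

const paths = [
  { channels: ['display', 'search'], converted: true },
  { channels: ['search'], converted: true },
  { channels: ['display'], converted: false }
]

// Share of paths in a list that ended in a conversion
const conversionRate = list =>
  list.length ? list.filter(p => p.converted).length / list.length : 0

// Estimate a channel's contribution by "removing" it and comparing
const contribution = channel =>
  conversionRate(paths.filter(p => p.channels.includes(channel))) -
  conversionRate(paths.filter(p => !p.channels.includes(channel)))

console.log(contribution('search')) // 1.0: every path containing search converted
console.log(contribution('display')) // -0.5: paths with display converted less often

Real data-driven attribution works on far richer data and a more sophisticated model, but the intuition is the same: a channel earns credit in proportion to how much the conversion probability drops when it is taken out of the journey.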
Let’s check Google Analytics 360’s minimum requirements for using DDA along with the pros and cons of DDA in this tool.
Minimum requirements:
A Google Ads account with 15,000 clicks and 600 conversions during the past 30 days
Ecommerce Tracking or Goals must be set up
If you meet these requirements, you can start using DDA in Google Analytics 360. To keep using it, you have to meet the following minimum conversion threshold for the past 28 days:
400 conversions of each type with a path length of at least two interactions
10,000 interaction paths in a specific view
Pros of DDA in Google Analytics 360:
Get a full analysis of a customer’s online journey
See which ads, keywords, and campaigns have the biggest impact on conversions
Distribute credit for revenue based on past data for a conversion
The amount of credit assigned to each touchpoint depends on the order of the touchpoints
Data analysis starts immediately, and the report on your first model becomes available within 7 days
Cons of DDA in Google Analytics 360:
High cost of an account: starts at $150,000/year
Hidden calculation logic: no explanation in the report
Requires a consistently high number of clicks and conversions
Doesn’t include offline data (phone calls, transactions in CRM)
Requires a Google Ads account
Data-Driven Attribution with Google Ads
The default attribution model in Google Ads is last click, but if you meet the minimum requirements you can configure Data-Driven Attribution. By default, data-driven attribution analyzes all clicks on your ads but not the entire customer journey. Based on these clicks, the model compares users who purchase to those who don’t and identifies patterns among those ad interactions that lead to conversions. To increase the number of conversions, you can use an automated bidding strategy that’s optimized based on information from the DDA model.
In contrast to Search Ads 360, Google Ads doesn’t allow you to run marketing campaigns across multiple engines and provides less detailed reports.
This product is suitable for medium-sized and bigger businesses that need to optimize marketing campaigns and keywords.
Now, let’s get to the minimum requirements and compare the advantages and disadvantages of using DDA in Google Ads.
Minimum requirements:
3,000 ad interactions in supported networks in the past 30 days
300 conversions in the past 30 days
To continue using this model, you have to meet the following minimum conversion threshold for the past 30 days:
2,000 ad interactions
200 conversions
Pros of the DDA model in Google Ads:
Helps you optimize keywords and paid campaigns
Helps you optimize bidding
Shows which ads play the most important role in reaching your business goals
Cons of the DDA model in Google Ads:
Don’t get the entire overview of the online user journey
Need to maintain the necessary level of conversions and clicks for 30 consecutive days before you can see data in Google Ads
If your data drops below the required minimum, the attribution model will automatically be switched to Linear
Data-Driven Attribution with Search Ads 360
Search Ads 360 helps you manage marketing campaigns across multiple engines (Google Ads, Microsoft Advertising, Yahoo! Japan Sponsored Products, Baidu, and Yahoo! Gemini) due to native integration with the Google Marketing Platform.
By default, Search Ads 360 uses the last click attribution model, but you can also configure DDA if you meet the minimum click and conversion requirements. Unlike Google Analytics 360 and Google Ads, Data-Driven Attribution in Search Ads 360 analyzes activities in Floodlight, the conversion tracking system for the Google Marketing Platform. The attribution focuses on paid marketing campaigns and shows you how clicks on keywords influence conversions. You can also adjust or create a new bid strategy that will automatically optimize bids based on the model’s data.
The Search Ads 360 service is suitable for websites with a high number of conversions who need to optimize their paid campaigns.
Let’s see the minimum requirements for and the pros and cons of using data-driven attribution with Search Ads 360.
Minimum requirements:
15,000 clicks in the last 30 days
600 conversions in the last 30 days
Pros of using DDA in Search Ads 360:
Get reporting data in near real time
Optimize bids automatically using Smart Bidding technology together with DDA
Create up to five DDA models to compare data with different channel groupings
Possible to upload offline conversions
Accounts for cross-environment conversions
Cons of using DDA in Search Ads 360:
Ignores search and display impressions
Might not be fully accurate: Search Ads 360 uses machine learning and historical data to model the number of conversions if it’s not possible to measure all conversions
Only tracks the number of conversions attributed to paid search
Additional setup required to realize all advantages: Campaign Manager, a set of Floodlight activities, and Search Ads 360 Natural Search reporting
Impossible to analyze conversions tracked by Google Ads, Google Analytics, or other conversion tracking systems
Attribution with OWOX BI
Google’s Data-Driven Attribution model is one algorithmic model that can ensure a granular approach to analyzing your data. Just like data-driven attribution by Google, OWOX BI ML Funnel Based Attribution assesses the effectiveness of your advertising campaigns and channels on the customer’s way through the funnel. It also provides you with real-time reports and allows you to import calculations to optimize bids. Unlike the Google model, however, OWOX BI attribution is based on Markov chains — a sequence of events in which each subsequent event depends on the previous. Using this algorithm, OWOX BI attribution shows how difficult it is to move from one step to another: the higher the difficulty of moving on from a step, the greater the value a channel receives.
On top of that, due to transparent calculations, you get a solid understanding of the figures behind each report so you can safely reallocate your budget. Finally, in comparison with Google products, attribution by OWOX BI provides meaningful results with smaller amounts of data required for analysis.
Let’s take a look at what you get with OWOX BI attribution.
Minimum requirements:
The minimum number of conversions depends on the number of sessions. For objective results, we recommend the following correlation between sessions and conversions:
Pros of OWOX BI ML Funnel Based Attribution:
Track a user’s offline actions
Control purchases and returns in your CRM
Assess the effectiveness of each advertising channel
Customize your funnel according to your business needs
Exclude unmanaged channels from your assessment
Compare funnel stages and evaluate their effectiveness
Analyze data based on thousands of projects with machine learning
Figure out a specific approach for each user cohort
Get ready-made reports in OWOX BI Smart Data
Use gathered data to manage bids and audiences
Conclusions
Google products that offer Data-Driven Attribution allow you to track different channels, determine which online ad is the most and least effective in Google Search, and analyze users’ online journeys in detail. Even though Data-Driven Attribution by Google is generally considered as one model, its implementation differs across products. To effectively measure data, you need to choose a service that fits your data type. Here are the primary focuses of each product:
Google Analytics 360 tracks all user actions, clicks, and displays based on multiple channels and their interrelations in the funnel.
tracks all user actions, clicks, and displays based on multiple channels and their interrelations in the funnel. Google Ads tracks ad clicks in Google Search.
tracks ad clicks in Google Search. Search Ads 360 tracks Floodlight activities and paid campaigns.
With OWOX BI, you don’t have to select among several services. You can get the benefits of Data-Driven Attribution by Google with transparent calculations on top and fewer minimum requirements all in one product. | https://medium.com/digital-diplomacy/data-driven-attribution-and-how-it-differs-across-google-products-22c644e57193 | ['Maryna Sharapa'] | 2020-12-08 14:45:25.165000+00:00 | ['Attribution', 'Data Driven', 'Technology', 'Google', 'Google Product'] |
How I automatically deployed Typescript to Elastic Beanstalk to speed up server-side development | How I automatically deployed Typescript to Elastic Beanstalk to speed up server-side development
Speed up your development process by auto-deploying Typescript code to EB!
A demo of Booktogether, an online publishing platform that I created the back-end for.
For my most recent team project Booktogether — an online publishing platform in Korean for book curations and reviews — I was responsible for server-side development using Typescript and Express, as well as deployment using AWS Elastic Beanstalk.
As I soon found out, manual deployment to EB was a tedious process. I was constantly compiling Typescript code to Javascript, compressing all the files into a .zip file, and uploading the compressed file to AWS.
That’s when I thought to myself, “is there any way I can speed up deployment, so I can just focus on writing working code?” After doing some digging online, I found this walkthrough by Aaron White that helped me set up AWS CodePipeline to send my code straight from Github to EB.
Side Note: A Common Error
Although I won’t go through configuration details of CodePipeline too deeply in this article, I would like to specify a problem that I encountered while setting up this auto-deployment system. When I originally pushed the server-side code (along with a build spec file) from Github to AWS, the EB console responded with the following error:
[Instance: i-12345] Command failed on instance. Return code: 1 Output: (TRUNCATED)…/opt/elasticbeanstalk/containerfiles/ebnode.py”, line 180, in npm_install raise e subprocess.CalledProcessError (TRUNCATED)
As I later realized, this error happened because of the npm install command that EB runs when code is deployed to the EC2 instance. Specifically, the installation of some packages trigger the node-gyp process, which is run by the default EC2 user.
The default user does not have permission to access some of the packages in the node_modules directory that npm install creates, which is why one needs to create a file called .npmrc in the project root and add the following:
# Forces node-gyp process to be run as root user, not ec2 default
unsafe-perm=true
Auto-deploying Typescript code to EB
Now back to the heart of the matter. When code is deployed to a Node.js environment in Elastic Beanstalk, node app is automatically called in order to initialize the server. However, because Node.js is a Javascript runtime environment, the Typescript code that I had written for my server would not run by default. I needed to compile my Typescript code first.
In order to accomplish this without having to manually upload my code as a .zip file, I had two options. The first was to simply compile Typescript code on my local machine using a command like npm run build , and then upload both Javascript and Typescript to my Github repository. This option, however, had two main drawbacks:
An extra step (executing npm run build) would be added to the deployment process
I would have to manage changes in both .js and .ts files in Github
Given these issues, I resorted to another option — specifically, the use of the .ebextensions directory. Files in this directory are used for advanced custom configuration of an Elastic Beanstalk environment. .ebextensions files are written in YAML or JSON format, with a .config file extension.
The config file I used in this case configured EB in such a way that the instance would compile all Typescript code to Javascript right after running npm install . Referencing the following article on Typescript compilation using .ebextensions, I created a file called source_compile.config in the .ebextensions directory and included the following script:
# source_compile.config
container_commands:
  compile:
    command: "./node_modules/.bin/tsc -p tsconfig.json"
    env:
      PATH: /opt/elasticbeanstalk/node-install/node-v10.15.3-linux-x64/bin/
The command tsc -p tsconfig.json tells EB to compile the Typescript project to Javascript given the configuration file tsconfig.json. If there is no outDir key specified in tsconfig.json, the compiled .js files will be located in the same locations as the corresponding .ts files.
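For reference, a minimal tsconfig.json along those lines might look like the sketch below — the exact compiler options are assumptions, not the project’s actual configuration:

{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"]
}

Because there is no outDir here, each compiled .js file ends up next to its .ts source, which is what the default node app entry point on the instance will then run.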
Additionally, I needed to ensure the version of node specified in the PATH key was compatible with the node version used by my EB environment. If needed, one should change the node version (currently written as v10.15.3) and OS name used in the PATH key of the source_compile.config script above.
Having gone through all these steps, I once again merged changes into my main Github branch — and voilà! The server was up and running. | https://medium.com/quick-code/how-i-automatically-deployed-typescript-to-elastic-beanstalk-to-speed-up-server-development-22b89870e159 | ['Andrew Chung'] | 2019-12-21 16:17:11.979000+00:00 | ['Typescript', 'Deployment', 'Elastic Beanstalk', 'Nodejs', 'Servers'] |
How to Prepare for an Abdominal Hysterectomy | How to Prepare for an Abdominal Hysterectomy
An Obgyn explains this surgical procedure.
Our Preparing for series allows a patient to prepare themselves for a procedure properly. We answer questions about how long the procedure will last, what’s involved, what to expect, and even advice on packing your bag. While your surgeon preps, we’ll make sure you’re ready.
What is a hysterectomy?
A hysterectomy is a surgery to remove the uterus. Gynecologists perform hysterectomies for a variety of gynecologic conditions such as uterine fibroids, heavy periods, endometriosis, chronic pelvic pain, uterine prolapse, and gynecologic cancer.
During a hysterectomy, a surgeon removes the uterus. Gynecologists often recommend removing the fallopian tubes (bilateral salpingectomy) to reduce the risk of ovarian cancer. Some women will also need the removal of the ovaries (oophorectomy). Removal of the ovaries triggers hormonal changes. After a hysterectomy, a woman can no longer get pregnant.
Gynecologists perform hysterectomies through a variety of techniques. The patient’s uterus size, body type, and prior surgical history help determine the surgical approach. Techniques include:
Vaginal hysterectomy
Abdominal hysterectomy
Laparoscopic hysterectomy
Laparoscopic-assisted vaginal hysterectomy
Robotic hysterectomy
What are the advantages of abdominal hysterectomy?
In an abdominal hysterectomy, the uterus is removed through an incision in the lower abdomen. The abdominal incision gives a large clear view of the pelvis and allows us to work through adhesions from prior surgeries or endometriosis most carefully. It can be performed even if the uterus is huge.
However, abdominal hysterectomy is associated with a greater risk of complications than a vaginal hysterectomy or laparoscopic hysterectomy.
Wound infections, bleeding, blood clots, and nerve and tissue damage are more common. Abdominal hysterectomy also requires a more extended hospital stay and a longer recovery time.
Some patients may not be candidates for minimally invasive approaches because of uterine size or prior surgical history. Your doctor will determine which surgical approach is most suitable for you.
Is hysterectomy safe?
Hysterectomy is a very safe surgical procedure, and complications are rare. However, as with any surgery, problems can occur, such as:
Fever and infection
Heavy bleeding during or after surgery
Injury to the urinary tract or nearby organs
Blood clots in the leg that can travel to the lungs
Breathing or heart problems related to anesthesia
Death
Some problems are discovered immediately, and some may not show until days, weeks, or even years after surgery. These problems include the formation of a blood clot, infection, or bowel blockage. Complications are generally more common after an abdominal hysterectomy and in women with certain underlying medical conditions.
How long will I be in the hospital?
Most women will need to stay 1–2 nights after an abdominal hysterectomy. Various factors, such as the patient’s underlying health status, surgical complexity, and physician preference, help determine the surgical plan.
Can my family visit me?
A trusted family member should drive you to and from the hospital. Families are welcome to stay with you before and after surgery. Hospital visitor policies for overnight stays vary with the ongoing COVID-19 pandemic.
Does my procedure require an anesthetic?
An abdominal hysterectomy requires general anesthesia, meaning patients will temporarily be put to sleep. The surgeon may also inject a local anesthetic into the incisions to decrease postoperative pain.
Why do I need a preoperative clinic visit?
Most surgeries will involve a preoperative visit with your surgeon to review the procedure’s risks and benefits and answer your questions regarding the upcoming surgery. Because hysterectomies will eliminate the possibility of child-bearing, your doctor will confirm that you do not want children in the future.
It is essential to provide your doctor with an updated list of all medications, vitamins, and dietary supplements before surgery. The surgical team will review your medications. Together we can plan when to take the last dose and when to resume medications. Medication management is particularly important for patients taking aspirin, blood pressure medicines, and diabetes medicines. Your doctor should review all medication and food allergies. We remind patients to avoid alcohol 24 hours before the surgery.
If any blood work or preoperative testing is required, it will be scheduled and confirmed. If appropriate, share any lab work, radiologic procedures, or other medical tests done by other healthcare providers with your surgeon before your surgery. Some patients may need to supply a surgical clearance letter from their primary care physician.
Finally, the doctor will give instructions regarding your diet before the surgery.
Try to avoid wearing jewelry, make-up, nail polish/acrylic nails on the day of surgery. If you wear contacts, glasses or dentures, please bring a case.
You should also confirm the date, time, and location of the surgery.
What happens after I check-in at the hospital?
After arrival at the hospital, the staff will guide you to the pre-operative holding area to change into a surgical gown and store your belongings. You will meet the nursing team who will provide care during your surgery. They will review your medical history. The surgical consent form is reviewed, signed, or updated with any changes. An IV will be placed at this time. You may be given special stockings to help prevent a blood clot.
The anesthesia team will also interview you and answer questions. Typically your surgeon will review any last-minute questions.
What happens in the operating room?
After the preoperative evaluation, the team will guide you to the operating room. You will move from the mobile bed to the operating table. Monitors will be attached to various parts of your body to measure your pulse, oxygen level, and blood pressure. Then the anesthesiologist will give medication through your IV to help you go to sleep.
The OR nursing team will cover your body with sterile drapes and apply an antibacterial fluid to your abdomen and vagina. After you are asleep, a tube called a catheter will be placed in your bladder to drain urine. The team then performs a “surgical time-out.” A surgical safety check-list is read aloud, requiring all surgical team members to be present and attentive.
The gynecologist begins by making an incision in the lower abdomen. It is typically horizontal, but sometimes a vertical incision is needed if there is a large uterus or large mass.
Once the uterus and ovaries are visualized, we place a metal retractor to maintain a clear view of the pelvis. This step helps us safely operate and avoid injury to surrounding tissue such as the bladder, rectum, intestines, and ureter.
The surgeon works carefully from the outer edges inward. First, we dissect the broad ligament, the thin layer of connective tissue covering the female organs. If the plan is to remove the ovaries, we start with this step. Otherwise, we begin by separating the tubes from the surrounding tissues until the uterus is reached.
The surgeon then separates the uterus from the surrounding connective tissue by moving downward toward the cervix. At this point, the surgeons detach the bladder from the uterus. After the bladder is safely out of the way, the surgeon will focus on the uterine arteries.
These two blood vessels are the main blood supply to the uterus and travel over the ureters, the tubes which connect the kidney to the bladder. Once the uterine arteries are controlled, the surgeon then gradually and safely separates the uterus from the body. Depending on the anatomy, bleeding, or scar tissue, the surgeon may decide not to remove the cervix.
The uterus and tubes (and sometimes ovaries) are sent to the pathology lab for microscopic analysis. The surgeon examines all of the surgical sites for bleeding.
The surgeon then sews the edges of the vagina closed to form the vaginal cuff. If the cervix has not been removed, it is carefully inspected for bleeding.
Afterward, the abdomen and pelvis are washed in a warm salt water (saline) solution. Then, the layers of the abdominal wall and skin are carefully closed.
Once the procedure is complete, the surgical team completes a post-procedure review. All instruments and equipment are counted and verified. When finished, the anesthesiologist will begin to wake up the patient and then transfer her to the recovery room.
What happens in the RECOVERY ROOM?
Once the operation is over, you will be moved into the recovery area. This area is equipped to monitor patients after surgery.
Many patients feel groggy, confused, and chilly when they wake up after an operation. You may have muscle aches or a sore throat shortly after surgery. These problems should not last long. You can ask for medicine to relieve them. You will remain in the recovery room until you are stable. Afterward, you will be moved to a hospital room for the rest of your stay.
As soon as possible, your nurses will have you move around as much as you can. You may be encouraged to get out of bed and walk around more quickly after your operation. Walking helps reduce the risk of blood clots. You may feel tired and weak at first. The sooner you resume activity, the sooner your body’s functions can get back to normal.
What preparations should I make for aftercare at home?
You should speak with your physician regarding the resumption of exercise and sexual activity. Your doctor will also review wound care instructions. Sexual activity is typically restricted for 6–8 weeks to allow the vagina to heal. Do not insert anything into your vagina — no sex, tampons, or douching — until cleared by your doctor.
Most women can return to basic activities in one to two weeks. Generally, we recommend patients stick to light activity only for the first 4–6 weeks. Light exercise helps your body heal and prevents some postoperative complications. Be sure to get plenty of rest, but you also need to move around as often as you can. Take short walks and gradually increase the distance you walk every day. Avoid strenuous exercise and heavy lifting.
You may resume a regular diet on the day of surgery. It may help prepare some meals and do your grocery store shopping and laundry before surgery.
You will be given instructions to help control postoperative pain during healing. Some pain is expected for the first few weeks after the surgery. You may also have light bleeding and vaginal discharge for a few weeks. Sanitary pads can be used after the surgery. Constipation is common after hysterectomies. Try a stool softener and fiber supplement. Some women have temporary problems with emptying the bladder after a hysterectomy. Some women have an emotional response to hysterectomy. You may feel depressed that you are no longer able to carry a pregnancy, or you may be relieved that your former symptoms are gone.
Your doctor will schedule a postoperative examination 4–6 weeks after the procedure.
After recovery, we recommend continuing your annual routine gynecologic exams. Depending on your age and reason for the hysterectomy, you may still need pelvic exams and pap tests.
DANGER SIGNALS
Call your doctor or report to the ER if you experience:
Pain not controlled with prescribed medication
Fever > 101°F (38.3°C)
Severe nausea and vomiting
Calf or leg pain
Shortness of breath
Heavy vaginal bleeding
Foul-smelling vaginal discharge
Abdominal pain not controlled by pain medication
Inability to pass gas or have a bowel movement
This article was contributed by MacArthur Medical Center’s Dr. Reshma Patel | https://medium.com/beingwell/how-to-prepare-for-an-abdominal-hysterectomy-9a78cf0fb2c5 | ['Macarthur Medical Center'] | 2020-11-05 12:33:30.666000+00:00 | ['Women', 'Womens Health', 'Hysterectomy', 'Surgery', 'Health'] |
React 2020 — P8: Class Props Destructuring | If you’ve looked at React code before, there’s almost a 100% chance that you’ve seen destructuring. If you look at most import statements for class-based components, you’ve more than likely seen something like this:
import React, { Component } from 'react'; class ClassName extends Component { ... }
The { Component } is an example of destructuring. If we didn’t use destructuring, we would have had to use the following syntax in our class declaration:
import React from 'react'; class ClassName extends React.Component { ... }
What exactly is destructuring? It’s just a way to extract multiple keys from an object or an array and assign them to a variable. You might have seen the following syntax at one point in your career:
The variable a is assigned 10 and the variable b is assigned 20. This code just took the elements out of an array and placed them into variables.
In this example, we have an object user. The user object contains two properties: id and name. We can destructure and extract those properties into individual variables, or constants, as is done on line 6. Fun Fact: everything that we looked at so far is just JavaScript.
Let’s look at how we would destructure the props object inside of a class component. If you’re not familiar with the props object, take a look at my article on Class Component Props.
Create a new file inside of src/components and name it DinosFavoriteCar.js.
Create the class component and name it DinosFavoriteCar, but this time destructure the Component when importing React. Let the render() lifecycle method return a JSX element that contains the string Year Make Model. Import it into the App component and render it.
Now that we’ve verified that there are no errors, let’s pass some props to the DinosFavoriteCar component and use them like we’ve done before. Pass the 2020 Nissan GT-R by using the year, make, and model props.
<DinosFavoriteCar year="2020" make="Nissan" model="GT-R" />
You can use the props object inside of the class component DinosFavoriteCar to display the year, make, and model.
There it is. Nothing new so far. If you look at the DinosFavoriteCar component, you’ll notice that we use this.props for each attribute. Wouldn’t it be nice to extract the year, make, and model from the props object? Yes it would. So let’s do it.
Inside of your render() method, type the following:
const { year, make, model } = this.props;
We already saw an almost identical example with our user object above. The props object contains year, make, and model. We’ll extract those properties and assign them to the year, make, and model constants. When we do that, we can get rid of this.props.
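Putting it together, the render() method might now read:

render() {
  const { year, make, model } = this.props;

  return (
    <div>
      {year} {make} {model}
    </div>
  );
}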
In the next article, we’ll look at destructuring props inside of functional components. | https://medium.com/dev-genius/react-2020-p8-class-props-destructuring-739d592fca4 | ['Dino Cajic'] | 2020-09-09 20:41:34.827000+00:00 | ['JavaScript', 'Web Development', 'React', 'Reactjs', 'Programming'] |
An introduction to test-driven development with Vue.js | Test-driven development (TDD) is a process where you write tests before you write the associated code. You first write a test that describes an expected behavior, and you run it, ensuring it fails. Then, you write the dumbest, most straightforward code you can to make the test pass. Finally, you refactor the code to make it right. And you repeat all the steps for each test until you’re done.
This approach has many advantages. First, it forces you to think before you code. It’s commonplace to rush into writing code before establishing what it should do. This practice leads to wasting time and writing complicated code. With TDD, any new piece of code requires a test first, so you have no choice but take the time to define what this code should do before you write it.
Secondly, it ensures you write unit tests. Starting with the code often leads to writing incomplete tests, or even no tests at all. Such a practice usually happens as a result of not having precise and exhaustive specs, which leads to spending more time coding than you should. Writing tests becomes a costly effort, which is easy to undermine once the production code is ready.
Unit tests are critical to building robust code. Overlooking or rushing them increases chances of your code breaking in production at some point.
Why do TDD for components?
Testing a component can be counter-intuitive. As we saw in Unit Test Your First Vue.js Component, it requires a mental shift to wrap your head around testing components versus testing plain scripts, knowing what to test, and understanding the line between unit tests and end-to-end.
TDD makes all this easier. Instead of writing tests by examining all bits and pieces of a finished project and trying to guess what you should cover, you’re doing the opposite. You’re starting from actual specs, a list of things that the component should do, without caring about how it does it. This way, you’re ensuring that all you test is the public API, but you’re also guaranteeing you don’t forget anything.
In this tutorial, we’ll build a color picker. For every swatch, users can access the matching color code, either in hexadecimal, RGB, or HSL.
Despite its apparent simplicity, there are a bunch of small pieces of logic to test. They require some thinking before jumping into code.
In this article, we’ll deep dive into TDD. We’ll put some specs together before we write a single line of code. Then, we’ll test every public feature in a test-driven fashion. Finally, we’ll reflect on what we did and see what we can learn from it.
Before we start
This tutorial assumes you’ve already built something with Vue.js before, and written unit tests for it using Vue Test Utils and Jest (or a similar test runner). It won’t go deeper into the fundamentals, so make sure you get up to speed first. If you’re not there yet, I recommend you go over Build Your First Vue.js Component and Unit Test Your First Vue.js Component.
TL;DR: this post goes in-depth in the how and why. It’s designed to help you understand every decision behind testing a real-world Vue.js component with TDD and teach you how to make design decisions for your future projects. If you want to understand the whole thought process, read on. Otherwise, you can go directly to the afterthoughts at the end, or look at the final code on GitHub.
Write down your specs
Before you even write your first test, you should write down an overview of what the component should do. Having specs makes testing much more straightforward since you’re mostly rewriting each spec in the form of tests.
Let’s think about the different parts that compose our component, and what they should do.
First, we have a collection of color swatches. We want to be able to pass a list of custom colors and display as swatches in the component. The first one should be selected by default, and the end user can select a new one by clicking it.
Secondly, we have the color mode toggler. The end user should be able to switch between three modes: hexadecimal (default), RGB and HSL.
Finally, we have the color code output, where the end user can get the code for the currently selected color swatch. This code is a combination of the selected swatch and color mode. Thus, by default, it should display the first swatch as a hexadecimal value. When changing any of these, the code should update accordingly.
As you can see, we don’t go too deep into details; we don’t specify what the color mode labels should be, or what the active state looks like for the color swatches. We can make most of the small decisions on the fly, even when doing TDD. Yet, we’ve come from a simple definition of what the component should be, to a comprehensive set of specs to start from.
Write test-driven code
First, you need to create a new Vue project with Vue CLI. You can check Build Your First Vue.js Component if you need a step by step guide.
During the scaffolding process, manually select features and make sure you check Unit testing. Pick Jest as your testing solution, and proceed until the project is created, dependencies are installed, and you’re ready to go.
We’ll need to use SVG files as components, so you also need to install the right loader for them. Install vue-svg-loader as a dev dependency, and add a rule for it in your vue.config.js file.
// vue.config.js
module.exports = {
chainWebpack: config => {
const svgRule = config.module.rule('svg')
svgRule.uses.clear()
svgRule.use('vue-svg-loader').loader('vue-svg-loader')
}
}
This loader doesn’t play well with Jest by default, which causes tests to throw. To fix it, create a svgTransform.js file as documented on the website, and edit your jest.config.js as follows:
// svgTransform.js
const vueJest = require('vue-jest/lib/template-compiler')

module.exports = {
process(content) {
const { render } = vueJest({
content,
attrs: {
functional: false
}
})

return `module.exports = { render: ${render} }`
}
}

// jest.config.js
module.exports = {
// ...
transform: {
// ...
'.+\\.(css|styl|less|sass|scss|png|jpg|ttf|woff|woff2)$': 'jest-transform-stub',
'^.+\\.svg$': '<rootDir>/svgTransform.js'
},
// ...
}
Note that we’ve removed “svg” from the first regular expression (the one that gets transformed with jest-transform-stub ). This way, we ensure SVGs get picked up by svgTransform.js .
Additionally, you need to install color-convert as a dependency. We’ll need it both in our code and in our tests later on.
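If you’re following along, installing color-convert (and the vue-svg-loader mentioned earlier) looks something like this:

npm install color-convert
npm install --save-dev vue-svg-loader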
Don’t serve the project yet. We’re going to write tests and rely on them passing or not to move on. We don’t want to control whether what we build works by testing it visually in the browser, nor being distracted by how it looks.
Instead, open your project and create a new ColorPicker.vue single-file component in the src/components/ directory. In tests/unit/ , create its associated spec file.
<!-- ColorPicker.vue -->
<template>
<div></div>
</template>

<script>
export default {}
</script>

<style>
</style>

// ColorPicker.spec.js
import { shallowMount } from '@vue/test-utils'
import ColorPicker from '@/components/ColorPicker'

describe('ColorPicker', () => {
// let's do this!
})
In your terminal, execute the following command to run tests:
npm run test:unit --watchAll
For now, you should get an error because you don’t yet have tests. Don’t worry though; we’ll fix this shortly 🙂 Note the usage of the --watchAll flag in the command: Jest is now watching your files. This way, you won’t have to re-run test by hand.
TDD goes in 3 stages:
Red: you write a test that describes an expected behavior, then you run it, ensuring it fails.
Green: you write the dumbest, most straightforward code you can to make the test pass.
Refactor: you refactor the code to make it right.
Step 1: Red
Time to write our first test! We’ll start with the color swatches. For clarity, we’ll wrap all tests for each distinct element in their own suite, using a describe block.
First, we want to make sure that the component displays each color that we provide as an individual swatch. We would pass those as props, in the form of an array of hexadecimal strings. In the component, we would display the list as an unordered list, and assign the background color via a style attribute.
import { shallowMount } from '@vue/test-utils'
import ColorPicker from '@/components/ColorPicker'
import convert from 'color-convert'

let wrapper = null

const propsData = {
swatches: ['e3342f', '3490dc', 'f6993f', '38c172', 'fff']
}

beforeEach(() => (wrapper = shallowMount(ColorPicker, { propsData })))
afterEach(() => wrapper.destroy())

describe('ColorPicker', () => {
describe('Swatches', () => {
test('displays each color as an individual swatch', () => {
const swatches = wrapper.findAll('.swatch')
propsData.swatches.forEach((swatch, index) => {
expect(swatches.at(index).attributes().style).toBe(
`background: rgb(${convert.hex.rgb(swatch).join(', ')})`
)
})
})
})
})
We mounted our ColorPicker component and wrote a test that expects to find items with a background color matching the colors passed as props. This test is bound to fail: we currently have nothing in ColorPicker.vue . If you look at your terminal, you should have an error saying that no item exists at 0. This is great! We just passed the first step of TDD with flying colors.
Step 2: Green
Our test is failing; we’re on the right track. Now, time to make it pass. We’re not much interested in writing working or smart code at this point, all we want is to make Jest happy. Right now, Vue Test Utils complains about the fact that we don’t event have no item at index 0.
[vue-test-utils]: no item exists at 0
The simplest thing we can do to make that error go away is to add an unordered list with a swatch class on the list item.
<template>
<div class="color-picker">
<ul class="swatches">
<li class="swatch"></li>
</ul>
</div>
</template>
Jest still complains but the error has changed:
Expected value to equal:
"background: rgb(227, 52, 47);"
Received:
undefined
This makes sense; the list item doesn’t have a style attribute. The simplest thing we can do about it is to hardcode the style attribute. This isn’t what we want in the end, but, we aren’t concerned about it yet. What we want is for our test to go green.
We can therefore hardcode five list items with the expected style attributes:
<ul class="swatches">
<li class="swatch" style="background: rgb(227, 52, 47);"></li>
<li class="swatch" style="background: rgb(52, 144, 220);"></li>
<li class="swatch" style="background: rgb(246, 153, 63);"></li>
<li class="swatch" style="background: rgb(56, 193, 114);"></li>
<li class="swatch" style="background: rgb(255, 255, 255);"></li>
</ul>
The test should now pass.
Step 3: Refactor
At this stage, we want to rearrange our code to make it right, without breaking tests. In our case, we don’t want to keep the list items and their style attributes hardcoded. Instead, it would be better to receive swatches as a prop, iterate over them to generate the list items, and assign the colors as their background.
<template>
<div class="color-picker">
<ul class="swatches">
<li
:key="index"
v-for="(swatch, index) in swatches"
:style="{ background: `#${swatch}` }"
class="swatch"
></li>
</ul>
</div>
</template>

<script>
export default {
props: {
swatches: {
type: Array,
default() {
return []
}
}
}
}
</script>
When tests re-run, they should still pass 🥳 This means we’ve successfully refactored the code without affecting the output. Congratulations, you’ve just completed your first TDD cycle!
Now, before we go to the next test, let’s reflect a bit. You may be wondering:
“Isn’t this a bit dumb? I knew the test would fail. Am I not wasting time by running it anyway, then hardcoding the right value, see the test pass, then make the code right? Can’t I go to the refactor step directly?”
It’s understandable that you’re feeling confused by the process. Yet, try to look at things from a different angle: the point here isn’t to prove that the test doesn’t pass. We know it won’t. What we want to look at is what our test expects, make them happy in the simplest possible way, and finally write smarter code without breaking anything.
That’s the whole idea of test-driven development: we don’t write code to make things work, we write code to make tests pass. By reversing the relationship, we’re ensuring robust tests with a focus on the outcome.
What are we testing?
Another question that may come to mind is how we’re deciding what to test. In Unit Test Your First Vue.js Component, we saw that we should only be testing the public API of our component, not the internal implementation. Strictly speaking, this means we should cover user interactions and props changes.
But is that all? For example, is it okay for the output HTML to break? Or for CSS class names to change? Are we sure nobody is relying on them? That you aren’t yourself?
Tests should give you confidence that you aren’t shipping broken software. What people can do with your program shouldn’t stop working the way they expect it to work. It can mean different things depending on the project and use case.
For example, if you’re building this color panel as an open source component, your users are other developers who use it in their own projects. They’re likely relying on the class names you provide to style the component to their liking. The class names become a part of your public API because your users rely on them.
In our case, we may not necessarily be making an open source component, but we have view logic that depends on specific class names. For instance, it’s important for active swatches to have an active class name, because we’ll rely on it to display a checkmark, in CSS. If someone changes this by accident, we want to know about it.
Testing scenarios for UI components highly depend on the use case and expectations. Whichever the case, what you need to ask yourself is do I care about this if it changes?
Next tests
Testing the swatches
Let’s move on to the next test. We expect the first swatch of the list to be the one that’s selected by default. From the outside, this is something that we want to ensure keeps on working the same way. Users could, for instance, rely on the active class name to style the component.
test('sets the first swatch as the selected one by default', () => {
const firstSwatch = wrapper.find('.swatch')
expect(firstSwatch.classes()).toContain('active')
})
This test, too, should fail, as list items currently don’t have any classes. We can easily make this pass by adding the class on the first list item.
<li
:key="index"
v-for="(swatch, index) in swatches"
:style="{ background: `#${swatch}` }"
class="swatch"
:class="{ 'active': index === 0 }"
></li>
The test now passes; however, we’ve hardcoded the logic into the template. We can refactor that by externalizing the index onto which the class applies. This way, we can change it later.
<template>
<!-- ... -->
<li
:key="index"
v-for="(swatch, index) in swatches"
:style="{ background: `#${swatch}` }"
class="swatch"
:class="{ active: index === activeSwatch }"
></li>
<!-- ... -->
</template>

export default {
// ...
data() {
return {
activeSwatch: 0
}
}
}
This naturally leads us to our third test. We want to change the active swatch whenever the end user clicks it.
test('makes the swatch active when clicked', () => {
const targetSwatch = wrapper.findAll('.swatch').at(2)
targetSwatch.trigger('click')
expect(targetSwatch.classes()).toContain('active')
})
For now, nothing happens when we click a swatch. However, thanks to our previous refactor, we can make this test go green and even skip the refactor step.
<li
:key="index"
v-for="(swatch, index) in swatches"
:style="{ background: `#${swatch}` }"
class="swatch"
:class="{ active: index === activeSwatch }"
@click="activeSwatch = index"
></li>
This code makes the test pass and doesn’t even need a refactor. This is a fortunate side-effect of doing TDD: sometimes, the process leads to either writing new tests that either don’t need refactors, or even that pass right away.
Active swatches should show a checkmark. We’ll add it now without writing a test: instead, we’ll control their visibility via CSS later. This is alright since we’ve already tested how the active class applies.
First, create a checkmark.svg file in src/assets/ .
<svg viewBox="0 0 448.8 448.8">
<polygon points="142.8 323.9 35.7 216.8 0 252.5 142.8 395.3 448.8 89.3 413.1 53.6"/>
</svg>
Then, import it in the component.
import CheckIcon from '@/assets/check.svg'

export default {
// ...
components: { CheckIcon }
}
Finally, add it inside the list items.
<li ... >
<check-icon />
</li>
Good! We can now move on to the next element of our component: the color mode.
Testing the color mode
Let’s now implement the color mode toggler. The end user should be able to switch between hexadecimal, RGB and HSL. We’re defining these modes internally, but we want to ensure they render correctly.
Instead of testing button labels, we’ll rely on class names. It makes our test more robust, as we can easily define a class name as part of our component’s contract. However, button labels should be able to change.
Now you may be tempted to check for these three specific modes, but that would make the test brittle. What if we change them? What if we add one, or remove one? That would still be the same logic, yet the test would fail, forcing us to go and edit it.
One solution could be to access the component’s data to iterate on the modes dynamically. Vue Test Utils lets us do that through the vm property, but again, this tightly couples our test with the internal implementation of the modes. If tomorrow, we decided to change the way we define modes, the test would break.
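For instance, something along these lines would pass, but it reaches into the component’s internals (a sketch of the approach we’re rejecting):

// Works, but couples the test to the component's private data — we'd rather not
const buttons = wrapper.findAll('.color-mode')
wrapper.vm.colorModes.forEach((mode, index) => {
  expect(buttons.at(index).classes()).toContain(`color-mode-${mode}`)
})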
Another solution is to keep going with black box testing and only expect the class name to match a given pattern. We don’t care that it’s color-mode-hex , color-mode-hsl or color-mode-xyz , as long as it looks like what we expect from the outside. Jest lets us do that with regular expression matchers.
// ...
describe('Color model', () => {
test('displays each mode as an individual button', () => {
const buttons = wrapper.findAll('.color-mode')
buttons.wrappers.forEach(button => {
expect(button.classes()).toEqual(
expect.arrayContaining([expect.stringMatching(/color-mode-\w{1,}/)])
)
})
})
})
Here, we’re expecting elements with a class that follows the pattern “color-mode-“ + any word character (in ECMAScript, any character within [a-zA-Z_0-9] ). We could add or remove any mode we want, and the test would still be valid.
Naturally, right now, the test should fail, as there are no buttons with class color-mode yet. We can make it pass by hardcoding them in the component.
<div class="color-modes">
<button class="color-mode color-mode-hex"></button>
<button class="color-mode color-mode-rgb"></button>
<button class="color-mode color-mode-hsl"></button>
</div>
We can now refactor this code by adding the modes as private data in our component and iterate over them.
<template>
<!-- ... -->
<div class="color-modes">
<button
:key="index"
v-for="(mode, index) in colorModes"
class="color-mode"
:class="`color-mode-${mode}`"
>{{ mode }}</button>
</div>
<!-- ... -->
</template>

export default {
// ...
data() {
return {
activeSwatch: 0,
colorModes: ['hex', 'rgb', 'hsl']
}
}
}
Good! Let’s move on.
As with the swatches, we want the first mode to be set as active. We can copy the test we wrote and adapt it to this new use case.
test('sets the first mode as the selected one by default', () => {
const firstButton = wrapper.find('.color-mode')
expect(firstButton.classes()).toContain('active')
})
We can make this test pass by manually adding the class on the first list item.
<button
:key="index"
v-for="(mode, index) in colorModes"
class="color-mode"
:class="[{ active: index === 0 }, `color-mode-${mode}`]"
>{{ mode }}</button>
Finally, we can refactor by externalizing the index onto which the class applies.
<template>
<!-- ... -->
<button
:key="index"
v-for="(mode, index) in colorModes"
class="color-mode"
:class="[{ active: index === activeMode }, `color-mode-${mode}`]"
>{{ mode }}</button>
<!-- ... -->
</template>

export default {
// ...
data() {
return {
activeSwatch: 0,
activeMode: 0,
colorModes: ['hex', 'rgb', 'hsl']
}
}
}
We need to change the active mode whenever the end user clicks the associated button, as with the swatches.
test('sets the color mode button as active when clicked', () => {
const targetButton = wrapper.findAll('.color-mode').at(2)
targetButton.trigger('click')
expect(targetButton.classes()).toContain('active')
})
We can now add a @click directive as we did with the swatches, and make the test go green without having to refactor.
<button
:key="index"
v-for="(mode, index) in colorModes"
class="color-mode"
:class="[{ active: index === activeMode }, `color-mode-${mode}`]"
@click="activeMode = index"
>{{ mode }}</button>
Testing the color code
Now that we’re done testing the swatches and the color mode, we can move on to the third and final element of our color picker: the color code. What we display in there is a combination of the other two: the selected swatch defines the color we should display, and the selected mode determines how to display it.
First, we want to make sure we initially display the default swatch in the default mode. We have the information to build this since we’ve implemented the swatches and the color mode.
Let’s start with a (failing) test.
describe('Color code', () => {
test('displays the default swatch in the default mode', () => {
expect(wrapper.find('.color-code').text()).toEqual('#e3342f')
})
})
Now, let’s make this pass by hardcoding the expected result in the component.
<div class="color-code">#e3342f</div>
Good! Time to refactor. We have a raw color in hexadecimal mode, and we’re willing to output it in hexadecimal format. The only difference between our input and output values is that we want to prepend the latter with a hash character. The easiest way of doing so with Vue is via a computed property.
<template>
<!-- ... -->
<div class="color-code">{{ activeCode }}</div>
<!-- ... -->
</template>

export default {
// ...
computed: {
activeCode() {
return `#${this.swatches[this.activeSwatch]}`
}
}
}
This should keep the test green. However, there’s an issue with this computed property: it only works for hexadecimal values. It should keep on working when we change the color, but not when we change the mode. We can verify this with another test.
test('displays the code in the right mode when changing mode', () => {
wrapper.find('.color-mode-hsl').trigger('click')
expect(wrapper.find('.color-code').text()).toEqual('2°, 76%, 54%')
})
Here, we’ve changed to HSL mode, but we’re still getting the hexadecimal output. We need to refactor our code so that our activeCode computed property is not only aware of the current color, but also the current color mode. One way we can achieve this is to create computed properties for each mode and proxy them through activeCode based on the selected mode.
First, we should simplify access to the current color and mode. Right now, we need to do an array lookup, which is repetitive and makes the code hard to read. We can use computed properties to wrap that logic.
export default {
// ...
computed: {
// ...
activeColorValue() {
return this.swatches[this.activeSwatch]
},
activeModeValue() {
return this.colorModes[this.activeMode]
}
}
}
As you can see, we’re not writing tests for these computed properties, as they aren’t part of our public API. We’ll use them later in our dedicated color mode computed properties, which themselves will be proxied in activeCode , which we’re testing in our “Color code” suite. All we care about is that the color code renders as expected so that the user can rely on them. How we get there are implementation details that we need to be able to change if need be.
We can now write our dedicated computed properties for each mode. We’ll map their name onto the ones in colorModes , so we can do an array lookup later in activeCode to return the right one.
For the hexadecimal output, we can externalize what we currently have in activeCode and refactor it using activeColorValue .
export default {
// ...
computed: {
// ...
hex() {
return `#${this.activeColorValue}`
}
}
}
Now, let’s modify activeCode so it proxies the right computed property depending on the active mode.
export default {
// ...
computed: {
// ...
activeCode() {
return this[this.activeModeValue]
}
}
}
This still shouldn’t make our latest test pass, since we haven’t written a computed property for it. However, our test that checks if the default mode renders correctly is still passing, which is a good sign we’re on the right track.
We now want to write a computed property that returns the color output in HSL mode. For this, we’ll use color-convert , an npm package that lets us convert colors in many different modes. We’ve already been using it in our tests, so we don’t have to reinstall it.
import convert from 'color-convert'

export default {
// ...
computed: {
// ...
hsl() {
const hslColor = convert.hex.hsl(this.activeColorValue)
return `${hslColor[0]}°, ${hslColor[1]}%, ${hslColor[2]}%`
}
}
}
Great, our test passes! We can now finish this up adding the missing RGB mode.
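For completeness, here is a minimal sketch of what that missing rgb computed property could look like at this point, reusing the color-convert import and the activeColorValue helper already present in the component. Treat it as illustrative only, since the very next step moves this logic out of the component.
export default {
  // ...
  computed: {
    // ...
    rgb() {
      // convert.hex.rgb('e3342f') returns an array like [227, 52, 47]
      return convert.hex.rgb(this.activeColorValue).join(', ')
    }
  }
}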
Yet, as you can see, we’re currently not testing the output of our color computed properties in isolation, but through other tests. To make things cleaner, we could decouple that logic from the component, import it as a dependency, and test it separately. This has several benefits:
it keeps the component from growing every time we want to add a color mode,
it keeps domains separated: the component focuses on its own view logic, and the color modes utility takes care of testing each mode exhaustively.
First, create a new color.js file in the src/utils/ directory, and a matching spec file in tests/unit/ .
// color.spec.js
import { rgb, hex, hsl } from '@/utils/color'

// color.js
import convert from 'color-convert'

export const rgb = () => {}
export const hex = () => {}
export const hsl = () => {}
We can use TDD to test those three functions and make sure they always return the expected value. We can extract the logic we had in our Vue component for the last two, and write the RGB function from scratch.
For the sake of brevity, we’ll cover all three tests at once, but the process remains the same.
import { rgb, hex, hsl } from '@/utils/color'

const color = 'e3342f'

describe('color', () => {
test('returns the color into RGB notation', () => {
expect(rgb(color)).toBe('227, 52, 47')
})
test('returns the color into hexadecimal notation', () => {
expect(hex(color)).toBe('#e3342f')
})
test('returns the color into HSL notation', () => {
expect(hsl(color)).toBe('2°, 76%, 54%')
})
})
We now have three failing tests. The first thing we can do is to return hardcoded values to go green.
export const rgb = () => '227, 52, 47'
export const hex = () => '#e3342f'
export const hsl = () => '2°, 76%, 54%'
Now, we can start refactoring by migrating the code from our Vue component.
export const hex = color => `#${color}`

export const hsl = color => {
const hslColor = convert.hex.hsl(color)
return `${hslColor[0]}°, ${hslColor[1]}%, ${hslColor[2]}%`
}
Finally, we can implement our rgb function.
export const rgb = color => convert.hex.rgb(color).join(', ')
All tests should stay green!
We can now use the color utilities in our Vue component and refactor it a bit. We no longer need to import color-convert in the component, nor do we need dedicated computed properties for each mode, or even for getting the active color and mode values. All we need to keep is activeCode , where we can store all the necessary logic.
This is a good example where doing black box testing helps us: we’ve been focusing on testing the public API; thus we can refactor the internals of our component without breaking the tests. Removing properties like activeColorValue or hex doesn’t matter, because we were never testing them directly.
// ...
import { rgb, hex, hsl } from '@/utils/color'

const modes = { rgb, hex, hsl }

export default {
// ...
computed: {
activeCode() {
const activeColor = this.swatches[this.activeSwatch]
const activeMode = this.colorModes[this.activeMode]
return modes[activeMode](activeColor)
}
}
}
We now have much terser code in our component, and better domain separation, while still respecting the component’s contract.
Finally, we can implement a missing test: the one that ensures the color code changes whenever we click a new swatch. This should already go green, but it’s still essential for us to write it, so we can know about it if it breaks.
test('displays the code in the right color when changing color', () => {
wrapper
.findAll('.swatch')
.at(2)
.trigger('click')
expect(wrapper.find('.color-code').text()).toEqual('#f6993f')
})
And we’re done! We just built a fully functional Vue component using TDD, without relying on browser output, and our tests are ready.
Visual control
Now that our component is ready, we can see how it looks and play with it in the browser. This allows us to add the CSS and ensure we didn’t miss out on anything.
First, mount the component into the main App.vue file.
<!-- App.vue -->
<template>
<div id="app">
<color-picker :swatches="['e3342f', '3490dc', 'f6993f', '38c172', 'fff']"/>
</div>
</template>

<script>
import ColorPicker from '@/components/ColorPicker'

export default {
name: 'app',
components: {
ColorPicker
}
}
</script>
Then, run the app by executing the following script, and open it in your browser at http://localhost:8080/ .
npm run serve
You should see your color picker! It doesn’t look like much for now, but it works. Try clicking colors and change the color mode; you should see the color code change.
To see the component with proper styling, add the following CSS between the style tags:
.color-picker {
background-color: #fff;
border: 1px solid #dae4e9;
border-radius: 0.125rem;
box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.1);
color: #596a73;
font-family: BlinkMacSystemFont, Helvetica Neue, sans-serif;
padding: 1rem;
} .swatches {
color: #fff;
display: flex;
flex-wrap: wrap;
list-style: none;
margin: -0.25rem -0.25rem 0.75rem;
padding: 0;
} .swatch {
border-radius: 0.125rem;
cursor: pointer;
height: 2rem;
margin: 0.25rem;
position: relative;
width: 2rem;
} .swatch::after {
border-radius: 0.125rem;
bottom: 0;
box-shadow: inset 0 0 0 1px #dae4e9;
content: '';
display: block;
left: 0;
mix-blend-mode: multiply;
position: absolute;
right: 0;
top: 0;
} .swatch svg {
display: none;
color: #fff;
fill: currentColor;
margin: 0.5rem;
} .swatch.active svg {
display: block;
} .color-modes {
display: flex;
font-size: 1rem;
letter-spacing: 0.05rem;
margin: 0 -0.25rem 0.75rem;
} .color-mode {
background: none;
border: none;
color: #9babb4;
cursor: pointer;
display: block;
font-weight: 700;
margin: 0 0.25rem;
padding: 0;
text-transform: uppercase;
} .color-mode.active {
color: #364349;
} .color-code {
border: 1px solid #dae4e9;
border-radius: 0.125rem;
color: #364349;
text-transform: uppercase;
padding: 0.75rem;
}
You should see something like this:
And we’re done!
Afterthoughts
How can we improve this?
For now, we have a robust test suite. Even though we don’t have 100% coverage, we can feel confident with our component going out in the wild, and evolving over time. There are still a couple of things we could improve though, depending on the use case.
First, you may notice that when clicking the white swatch, the checkmark doesn’t show up. That’s not a bug, rather a visual issue: the checkmark is there, but we can’t see it because it’s white on white. You could add a bit of logic to fix this: when a color is lighter than a certain threshold (let’s say 90%), you could add a light class on the swatch. This would then let you apply some specific CSS and make the checkmark dark.
Fortunately, you already have all you need: the color-convert package can help you determine whether a color is light (with the HSL utilities), and you already have a color utility module to store that logic and test it in isolation. To see what the finished code could look like, check out the project’s repository on GitHub.
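As a rough idea of what that could look like, here is a minimal sketch of such a helper living in the color utility module. The isLight name and the 90% threshold are assumptions for illustration, not part of the original code.
// src/utils/color.js (sketch)
import convert from 'color-convert'

// convert.hex.hsl returns [hue, saturation, lightness]
export const isLight = (color, threshold = 90) =>
  convert.hex.hsl(color)[2] >= threshold
The component could then bind a light class on each swatch with isLight(swatch), and the CSS could darken the checkmark accordingly.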
We could also reinforce the suite by adding a few tests to make sure some expected classes are there. This doesn’t test actual logic, but would still be particularly useful if someone was relying on those class names to style the component from the outside. Again, everything depends on your use case: test what shouldn’t change without you knowing, don’t only add tests for the sake of it.
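Such a test could be as small as the following sketch, using class names the component already exposes (the root .color-picker hook is an assumption based on the stylesheet above):
test('exposes the expected styling hooks', () => {
  expect(wrapper.find('.color-picker').exists()).toBe(true)
  expect(wrapper.find('.swatches').exists()).toBe(true)
  expect(wrapper.find('.color-modes').exists()).toBe(true)
})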
What did we learn?
There are several lessons to learn from this TDD experiment. It brings a lot to the table but also highlights a few challenges that we should be aware of.
First, TDD is a fantastic way to write robust tests, not too many and not too few. Have you ever finished a component, moved on to tests and thought “where do I even start?”? Looking at finished code and figuring out what to test is hard. It’s tempting to get it done quickly, overlook some critical parts and end up with an incomplete test suite. Or you can adopt a defensive approach and test everything, risking to focus on implementation details and writing brittle tests.
Adopting TDD for developing UI components helps us focus on exactly what to test by defining, before writing any line of code, if this is part of the contract or not.
Secondly, TDD encourages refactors, leading to better software design. When you’re writing tests after coding, you’re usually no longer in a refactoring dynamic. You can fix your code if you find issues while testing, but at this stage, you’re most likely done with the implementation. This separation between writing code and writing tests is where the issue lies.
With TDD, you’re creating a deeper connection between code and tests, with a strong focus on making the public API reliable. Implementation comes right after you’ve guaranteed the outcome. This is why the green step is critical: you first need your test to pass, then ensure it never breaks. Instead of implementing your way to a working solution, you’re reversing the relationship, focusing on the contract first, and allowing the implementation to remain disposable. Because refactoring comes last, and you’ve established the contract, you now have mental space to make things right, clean some code, adopt a better design, or focus on performance.
It’s worth noting that TDD is much easier to follow with specs. When you already have a clear overview of everything the component should do, you can translate those specifications into tests. Some teams use frameworks like ATDD (acceptance test–driven development), where the involved parties develop specifications from a business perspective. The final specs, or acceptance tests, are a perfect base to write tests following TDD.
On the other hand, going with TDD to test UI components can be difficult at first, and require some prior knowledge before diving into it. For starters, you need to have good knowledge of your testing libraries so that you can write reliable assertions. Look at the test we wrote with a regular expression: the syntax is not the most straightforward. If you don’t know the library well, it’s easy to write a test that fails for the wrong reasons, which would end up hindering the whole TDD process.
Similarly, you need to be aware of some details regarding the values you expect; otherwise, you could end up battling with your tests and doing some annoying back-and-forths. On that matter, UI components are more challenging than renderless libraries, because of the various ways the DOM specifications can be implemented.
Take the first test of our suite for example: we’re testing background colors. However, even though we’re passing hexadecimal colors, we’re expecting RGB return values. That’s because Jest uses jsdom, a Node.js implementation of the DOM and HTML standards. If we were running our tests in a specific browser, we might have a different return value. This can be tricky when you’re testing different engines. You may have to seek some more advanced conversion utilities or use environment variables to handle the various implementations.
Is it worth it?
If you made it this far, you’ve probably realized that TDD demands time. This article itself is over 6,000 words! This can be a bit scary if you’re used to faster development cycles, and probably looks impossible if you’re often working under pressure. However, it’s important to bust the myth that TDD would somehow double development time for little return on investment, because this is entirely false.
TDD requires some practice, and you’ll get faster over time. What feels clumsy today can become a second nature tomorrow, if you do it regularly. I encourage you not to discard something because it’s new and feels awkward: give it some time to assess it fairly, then take a decision.
Secondly, time spent on writing test-driven code is time you won’t spend fixing bugs.
Fixing bugs is far more costly than preventing them. If you’ve ever had to fix critical production bugs, you know this feels close to holding an open wound on a surgical patient with one hand, while trying to operate with the other one. In the desert. At night. With a Swiss Army knife. It’s messy, stressful, suboptimal, and bears high chances of screwing up something else in the process. If you want to preserve your sanity and the trust your end users have in your software, you want to avoid those situations at all costs.
Tests help you catch bugs before they make it to production, and TDD helps you write better tests. If you think you should test your software, then you should care about making these tests useful in the first place. Otherwise, the whole thing is only a waste of time.
As with anything, I encourage you to try TDD before discarding the idea. If you’re consistently encountering production issues, or you think you could improve your development process, then it’s worth giving it a shot. Try it for a limited amount of time, measure the impact, and compare the results. You may discover a method that helps you ship better software, and feel more confident about hitting the “Deploy” button. | https://medium.com/free-code-camp/an-introduction-to-tdd-with-vue-js-66544710b50c | ['Sarah Dayan'] | 2019-05-17 15:45:21.856000+00:00 | ['JavaScript', 'API', 'Tdd', 'Vuejs', 'Tech'] |
Rookie CEO Leadership Styles Drive Company Culture | Rookie CEO Leadership Styles Drive Company Culture
The “L” in the PPLC Framework
Image by Gerd Altmann from Pixabay
When you schedule a last-minute meeting with your team today, do they shake in their boots or are they enthusiastic? Consider asking your team 3 words that would describe you. Do it anonymously if they do not want to be identified. Think about what the answers to these would mean.
What type of CEO will you be, Visionary, Authoritative or Collaborative? What leadership superpowers will you bring to the CEO table? Will you be a combination of styles?
My main purpose for writing The Rookie CEO book, writing these blog posts, and developing the “PPLC” framework is to help aspiring CEOs and leaders learn how they will lead and create an action plan to become the best and most successful CEO and leader one can be! Now that your mind is wandering and thinking about what your team would honestly say about you, let’s dive into the next level.
In a previous post, I described more specifics on the PPLC framework, which I introduced in the Rookie CEO book. The acronym represents:
P=Path to CEO
P=Philosophies the new CEO brings to the position and the company
L=Leadership Styles of the new CEO
C= Culture that the new CEO will create or emerge, and is built on the foundation of their “PPL”
This post will focus on the “L”, Leadership Styles and how they impact the culture of the company!
Leadership Styles — Make or Break the Culture!
Many of us have led teams, managed people and have a track record of proven results. This means we already have leadership styles cemented in our brains and because we know ourselves better than anyone, we have chosen our methods to lead. As a new rookie CEO, you will now move from leading a smaller business function or team to leading a complete company consisting of all business functions. Many of the most successful CEOs in history accumulated many failures on the way to success. All CEOs are rookie CEOs once! As we kick off the discussions, let’s look at a few of the basic leadership styles then we will evaluate how to determine your style and what action steps to take to grow and enhance your personal style to become more agile for today’s world!
Basic Types of Leadership Styles
There is an abundance of books and articles about leadership styles and many experts out there today. Readers of business books and web sites will have a snapshot of this complex topic in their minds and perhaps may have even mapped their existing styles into their personal leadership style elevator pitch in preparation for interviewing and networking. In “The Rookie CEO, You Can’t Make This Stuff Up!”, each of the 9 features rookie CEOs that I worked directly for have had to develop their personal styles to take on their 1st CEO role. By way of real-life stories, I share the results and impact of their styles and offer advice and takeaways for future CEOs to learn from in order to fuel their new CEO career.
Elements of Leadership Styles:
Humanity — is the rookie a visionary wanting to change the world as fast as possible influencing the world today?
Inspiration and motivation — is the rookie dynamic and energetic?
Initiative and aggressiveness– what is the rookie’s career and experience foundation for taking on the new role? Will there be a new mission to change or disrupt the market? Will there be a new team built or an inheritance of an existing team needing to be vetted by the rookie?
Is the rookie a manager or leader? This will determine if they are good listeners, do they gather input from all team players and stakeholders and make decisions based on these discovery platforms or will they just make decisions and drive forward in an autocratic way?
Will the rookie micromanage or delegate assignments and let the team execute?
Will there be goals and objectives set, measured and reported?
Does the rookie bring their own strategy, or does he/she meet with the team, ask questions to get team input, then set the strategy?
Will each team member be held accountable?
How much fun does the rookie like to build into day-to-day operation and execution?
Once the roadmap is set, will there be what look like daily changes as new articles appear, the rookie has chats with his or her friends, or they read a new book?
What type of tenacity and “get it done” type of attitude does the rookie bring?
The above will help the rookie understand the types of disruptions they will tolerate and the structure of the operation vs. seat of the pants methods they will employ, and from there, how the company will operate. Can you see how these have a huge impact on the company culture? Remember, the rookie has philosophies that they bring and coupled with their leadership styles will define the culture.
The experts and books I mentioned earlier define leadership styles as coach, visionary, servant, autocratic, authoritative, laissez-faire, democratic, pacesetter, and bureaucratic. Sometimes the classifications are less in number, but the bullets all fall into these categories. One of the largest challenges for the aspiring CEO and leader is to understand their own strengths and weaknesses, create a learning and growth plan and roadmap and go! One of my favorite things to do as well as a major personal strength is to coach and educate aspiring CEO and leadership talent and guide them along their path to success! There is only one thing I can think of that I can’t coach: true grit!
Summary
If you are on your way to becoming a CEO or senior leader, now that you have insight into how your philosophies and leadership styles can drive the company north or south, you can take inventory of your personal views and arsenal of skills. How do you want your company to be led? How do you want to build out your team?
Learn, understand, and journal everything you can. As in sports, a good foundation of basics and knowledge of rules pays off with success!
Now that you have a basic understanding of my PPLC framework and how you can utilize it for your career enhancement, you can begin your action plan to achieve greatness! Coming in the next post will be the final PPLC chapter: Culture!
If you would like to read more about Rookie CEOs and their sometimes bizarre stories with many key take-aways, grab a copy of my book “The Rookie CEO, You Can’t Make This Stuff Up!” in eBook, paperback or hardcover formats from Amazon.
Connect with me on Twitter or LinkedIn. Stop by my web site to learn more. | https://medium.com/the-innovation/rookie-ceo-leadership-styles-drive-company-culture-1cff53a5c6dc | ['Bill Miller'] | 2020-12-17 19:33:44.670000+00:00 | ['Leadership', 'Planning', 'Motivation', 'Strategy', 'Business'] |
How to make a movie recommender: creating a recommender engine using Keras and TensorFlow | The type of recommendation engine we are going to create is a collaborative filter. The data we are going to use to feed our model is the MovieLens Dataset, this is a public dataset that has information of viewers and movies. The code for this model is based on this tutorial from Keras. The code for this tutorial can be found here and for the whole project here.
How does collaborative filtering works
The idea behind a collaborative filtering model is to use data from two sources, in this case user reviews and user watch history, to find users with similar taste. This relies on the assumption that people who watch the same movies have the same taste.
To achieve this result, we must create the embeddings that represent the relationship between user and movie. The result for a 1 dimensional example is a matrix, where the users are the rows and the columns are the movies. So looking at the example below, should the user in the last row like the movie Shrek?
We could say that she might not like Shrek, based on her past movie history (The Dark Knight and Memento). If we look at users who have watched those same movies as our current user, we find two examples. The user who has also watched The Dark Knight has watched Shrek, so that is a vote for recommending Shrek; but the user who has watched Memento has not watched Shrek, so that is a vote against recommending it. This leaves us in a “tie”, so we can look at the users who have watched Shrek to see if we can find some similarity with our user. Since we cannot easily find a correlation, we should not recommend Shrek to our user.
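To make that reasoning concrete, here is a tiny, self-contained sketch (separate from the model we build below) that scores how close the new user is to the others using cosine similarity on a toy watch matrix. The numbers are made up purely for illustration.
import numpy as np

# Rows are users, columns are movies, 1 means "watched".
# The last row is the user we want to recommend for.
watched = np.array([
    [1, 0, 1, 1],  # user A
    [0, 1, 1, 0],  # user B
    [0, 1, 1, 0],  # our user: same history as user B
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

similarities = [cosine(watched[-1], row) for row in watched[:-1]]
print(similarities)  # user B scores highest, so their taste is the best guide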
This is more or less what our Machine Learning model has to learn, to find similarities and differences among users and movies. For a deeper understanding of this model, here are some sources:
How to make a collaborative filtering with TensorFlow and Keras
TensorFlow is an open-source library for computational mathematics and Machine Learning. Keras is a Deep Learning API that ships as part of TensorFlow and makes it easier to define and write Neural Networks. These libraries will help us define, train and save our recommender model.
We will also use libraries like Pandas, Numpy and Matplotlib for data transformation and data visualization.
Lets start with creating a virtual environment using virtualenv (check this tutorial on what is and how to use virtual environments in python). Here are the dependencies for this script:
tensorflow
pandas
matplotlib
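With the virtual environment activated, they can all be installed in one go (package versions are left to pip here):
pip install tensorflow pandas matplotlib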
With our dependencies installed, let’s load the data into our system. Since Keras can fetch remote datasets for us, we can download it directly using the following commands.
import pandas as pd
import numpy as np
from zipfile import ZipFile
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from pathlib import Path
import matplotlib.pyplot as plt
import os
import tempfile
LOCAL_DIR = os.getcwd()
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
movielens_data_file_url = (
    "http://files.grouplens.org/datasets/movielens/ml-latest-small.zip"
)
movielens_zipped_file = keras.utils.get_file(
"ml-latest-small.zip", movielens_data_file_url, extract=False
)
keras_datasets_path = Path(movielens_zipped_file).parents[0]
movielens_dir = keras_datasets_path / "ml-latest-small"
# Only extract the data the first time the script is run.
if not movielens_dir.exists():
with ZipFile(movielens_zipped_file, "r") as zip:
# Extract files
print("Extracting all the files now...")
zip.extractall(path=keras_datasets_path)
print("Done!")
This code will download the dataset to your computer and extract the files into a directory. Now we have to load the data and make some changes to generate datasets to train the model.
ratings_file = movielens_dir / "ratings.csv"
df = pd.read_csv(ratings_file)

user_ids = df["userId"].unique().tolist()
user2user_encoded = {x: i for i, x in enumerate(user_ids)}
userencoded2user = {i: x for i, x in enumerate(user_ids)}
movie_ids = df["movieId"].unique().tolist()
movie2movie_encoded = {x: i for i, x in enumerate(movie_ids)}
movie_encoded2movie = {i: x for i, x in enumerate(movie_ids)}
df["user"] = df["userId"].map(user2user_encoded)
df["movie"] = df["movieId"].map(movie2movie_encoded)

num_users = len(user2user_encoded)
num_movies = len(movie_encoded2movie)
df["rating"] = df["rating"].values.astype(np.float32)
# min and max ratings will be used to normalize the ratings later
min_rating = min(df["rating"])
max_rating = max(df["rating"])

print(
"Number of users: {}, Number of Movies: {}, Min rating: {}, Max rating: {}".format(
num_users, num_movies, min_rating, max_rating
)
)

df = df.sample(frac=1, random_state=42)
We use Pandas to load the rating data as a DataFrame. Here we must find all unique userId values and give each one an encoded value. This value tells us which row of our recommendation matrix corresponds to each user. Then rinse and repeat for the movieId . Finally, we take the highest and lowest ratings so we can normalize the targets later, and shuffle our data.
Now, let’s create our training and evaluation sets. We will use 90% of the available data to train and 10% to evaluate our model.
x = df[["user", "movie"]].values
# Normalize the targets between 0 and 1. Makes it easy to train.
y = df["rating"].apply(lambda x: (x - min_rating) / (max_rating - min_rating)).values
# Assuming training on 90% of the data and validating on 10%.
train_indices = int(0.9 * df.shape[0])
x_train, x_val, y_train, y_val = (
x[:train_indices],
x[train_indices:],
y[:train_indices],
y[train_indices:],
)
With our data processed, we are ready to create our model with Keras.
EMBEDDING_SIZE = 32

class RecommenderNet(keras.Model):
def __init__(self, num_users, num_movies, embedding_size, **kwargs):
super(RecommenderNet, self).__init__(**kwargs)
self.num_users = num_users
self.num_movies = num_movies
self.embedding_size = embedding_size
self.user_embedding = layers.Embedding(
num_users,
embedding_size,
embeddings_initializer="he_normal",
embeddings_regularizer=keras.regularizers.l2(1e-6),
mask_zero=True
)
self.user_bias = layers.Embedding(num_users, 1)
self.movie_embedding = layers.Embedding(
num_movies,
embedding_size,
embeddings_initializer="he_normal",
embeddings_regularizer=keras.regularizers.l2(1e-6),
mask_zero=True
)
self.movie_bias = layers.Embedding(num_movies, 1)

def call(self, inputs):
user_vector = self.user_embedding(inputs[:, 0])
user_bias = self.user_bias(inputs[:, 0])
movie_vector = self.movie_embedding(inputs[:, 1])
movie_bias = self.movie_bias(inputs[:, 1])
dot_user_movie = tf.tensordot(user_vector, movie_vector, 2)
# Add all the components (including bias)
x = dot_user_movie + user_bias + movie_bias
# The sigmoid activation forces the rating to between 0 and 1
return tf.nn.sigmoid(x)
The model is defined by two embedding layers, one for the users and one for the movies. We take the dot product between the user embedding and the movie embedding, then add a user bias and a movie bias to the result. Finally, we run a sigmoid function on the result to squash the predicted rating between 0 and 1.
Now, let’s train and test our model.
model = RecommenderNet(num_users, num_movies, EMBEDDING_SIZE)
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(), optimizer=keras.optimizers.Adam(lr=0.001)
)

history = model.fit(
x=x_train,
y=y_train,
batch_size=64,
epochs=5,
verbose=1,
validation_data=(x_val, y_val),
)

model.summary()
test_loss = model.evaluate(x_val, y_val)
print('\nTest Loss: {}'.format(test_loss))

print("Testing Model with 1 user")
movie_df = pd.read_csv(movielens_dir / "movies.csv")
user_id = "new_user"
movies_watched_by_user = df.sample(5)
movies_not_watched = movie_df[
~movie_df["movieId"].isin(movies_watched_by_user.movieId.values)
]["movieId"]
movies_not_watched = list(
set(movies_not_watched).intersection(set(movie2movie_encoded.keys()))
)
movies_not_watched = [[movie2movie_encoded.get(x)] for x in movies_not_watched]

user_movie_array = np.hstack(
([[0]] * len(movies_not_watched), movies_not_watched)
)
ratings = model.predict(user_movie_array).flatten()
top_ratings_indices = ratings.argsort()[-10:][::-1]
recommended_movie_ids = [
movie_encoded2movie.get(movies_not_watched[x][0]) for x in top_ratings_indices
]

print("Showing recommendations for user: {}".format(user_id))
print("====" * 9)
print("Movies with high ratings from user")
print("----" * 8)
top_movies_user = (
movies_watched_by_user.sort_values(by="rating", ascending=False)
.head(5)
.movieId.values
)
movie_df_rows = movie_df[movie_df["movieId"].isin(top_movies_user)]
for row in movie_df_rows.itertuples():
    print(row.title, ":", row.genres)

print("----" * 8)
print("Top 10 movie recommendations")
print("----" * 8)
recommended_movies = movie_df[movie_df["movieId"].isin(recommended_movie_ids)]
for row in recommended_movies.itertuples():
    print(row.title, ":", row.genres)

print("===" * 9)
print("Saving Model")
print("==="* 9)
If we are happy with our model, we can save it so we can use it for our web application.
How to save your TensorFlow model
Since we are using Keras to describe our model, we only need to save it to a folder on our computer. There is one thing to keep in mind: as more data becomes available we will need to retrain our model, and we may also want to experiment with new parameters. To keep track of all of this, we will use versioning. Since we will be calling our model through TensorFlow Serving, it will automatically pick up the latest version.
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(LOCAL_DIR, f"ai-model/model/{version}")

print('export_path = {}\n'.format(export_path))

tf.keras.models.save_model(
model,
export_path,
overwrite=True,
include_optimizer=True,
save_format=None,
signatures=None,
options=None
)
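After a training run or two, the export folder should end up looking roughly like the sketch below; the exact files come from the SavedModel format, and TensorFlow Serving will load the highest version number it finds.
ai-model/model/
├── 1/
│   ├── saved_model.pb
│   ├── assets/
│   └── variables/
└── 2/        # created by a later run with version = 2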
How to make your TensorFlow model work for a web application
To make recommendations from our application we need to serve the model. To do this we will use TensorFlow Serving, an extension of TensorFlow that lets us query our model over HTTP requests. This is done using the Docker image for TensorFlow Serving; we will go over this in the Docker part of the tutorial. | https://medium.com/analytics-vidhya/how-to-make-a-movie-recommender-creating-a-recommender-engine-using-keras-and-tensorflow-a8e34c9ce48e | ['Juan Domingo Ortuzar'] | 2020-12-16 17:29:28.403000+00:00 | ['Keras', 'Python', 'Machine Learning', 'Recommendation System', 'TensorFlow'] |
Being Gay in a Straight Workplace | Mailboxes. Photo by @tinamosquito
I was a young, confused, naive 19 year old when I started Royal Mail as a Postman in 2004 (Mail Man to my American friends). I remember my dad dropping me off at my first day and saying to me ‘Kev, you’ve got to keep your head up, you keep looking down at the ground’. My brother, Mark also worked at the post office so I already had an in and this allowed me to make friends easily. My colleagues at Royal Mail are really welcoming with any new starter and I settled in very easily.
“At the office you would hear gay sexual references used for humour, with this and my concerns about people’s reaction I kept my sexuality quiet…”
Between 2004 and 2008 I was still trying to come to terms with my sexuality. Working at a place where everyone is straight didn’t help. I had no exposure to anything “gay” growing up and any reference that I did have was camp and flamboyant and that wasn’t me. Some of my colleagues were young and ‘on the pull’ for girls and I was pretending to them and myself that it was what I wanted as well.
I came out to my friends and family in 2008 but I was still worried about my colleagues reaction. At the office you would hear gay sexual references used for humour, with this and my concerns about people’s reaction I kept my sexuality quiet, my brother was the only colleague who knew.
I left Royal Mail at the beginning of 2009 to go travelling with my then boyfriend, now husband. Even though I was leaving I still felt the need to tell my colleagues that I was travelling with a ‘friend’. My brother came out for me at work whilst I was away on my travels and he informed me they were shocked but ultimately no one actually cared. I returned to Royal Mail just over a year after I left, however I was returning as an openly gay man. I wasn’t nervous though, I trusted my brothers assessment of their reaction to my sexuality. It helped that one of my friends who I have known for a long time also worked there and I knew he would have my back so I already had two allies if I needed them.
Friends in a workplace. Photo by rawpixel.com
“The phobia part of homophobia gives the notion that a part homophobia is a fear and some fear stems from the unknown, so my colleagues trying to find out and trying to understand my sexuality as far as I was concerned was great.”
I didn’t need them, what I needed instead was answers to a lot of questions! Not questions about why I never told them before about my sexuality but instead I was being asked about being gay. The questions included comparisons between straight and gay relationships, I was being asked how I knew I was gay, what my type was, do I fancy anyone in the office? (I would never answer this question and still won’t). I absolutely loved being asked these questions, it meant people were trying to understand my sexuality, trying to understand me and my relationship. The phobia part of homophobia gives the notion that a part homophobia is a fear and some fear stems from the unknown, so my colleagues trying to find out and trying to understand my sexuality as far as I was concerned was great.
I am so lucky to work with the people I do, I am not known as ‘the gay one’, I am just simply ‘Kevin’, or ‘Kev’, or ‘Mini Kev’ (I’m short), or ‘Mini Kiev’, or ‘Kevlar’ and my favourite name ‘the littlest homo’ (I should be offended by that last one but I love it). My relationship is treated on a par with everyone else’s and some of my colleagues came to my wedding. Every now and again I hear the old gay sexual innuendos that I used to hear in an attempt at humour. I try not to let this bother me, it is never meant with any malice and I don’t want people to worry about what is being said when I’m around. My concern with this is that there may be someone else who wants to come out but is intimidated by this kind of humour like I was before I came out.
Some colleagues and I celebrating after my wedding.
“However, like me you could have a great bunch of colleagues who just see you for you and couldn’t actually give a damn about your sexuality.”
If I ever leave Royal Mail what I will find hardest is leaving the people behind. It’s like a family there, some of us have grown up together and others have seen the young ones grow up. I am no longer a young and naive 19 year old, I am 32 (almost 33) year old out and proud married gay man. I listened to my Dad’s advice and when I walk with my head down I try and pick it up again. When he died suddenly in 2010 my colleagues were there for me again and I know that if anything ever happens I will have their support. I am still slightly reserved like I was when I first started but that’s just me, gone are the days where I am reserved because I have a massive secret to hide.
My advice to anyone worried about coming out at work or in any walk of life is to try and tell one person who you trust first and then take it from there. I totally understand that it is different for everyone and unfortunately my experience will not be the same as others and there will be people out there who suffer homophobic abuse at work. However, like me you could have a great bunch of colleagues who just see you for you and couldn’t actually give a damn about your sexuality. | https://medium.com/lgbtgaze/being-gay-in-a-straight-workplace-a93c9018d37d | ['Kevin Laurie'] | 2018-10-04 19:09:48.773000+00:00 | ['LGBTQ', 'Society', 'Diversity', 'Coming Out', 'Workplace'] |
It’s Time for Apple to Make a Search Engine | Image Credit: Markus Winkler via Unsplash
Have you ever thought about something that a company should make but they never do? I’m sure back in the 90s, someone thought to themselves “Why does Porsche only make sports cars? I bet they could engineer an awesome sedan”. This fictional person would have been validated when the Porsche Cayenne came to market. I find myself bearing this level of validation with the rumors that Apple is close to finally ditching Google and Microsoft and just saying screw it, let’s make our own search engine. On the surface, this might seem like a silly idea. After all, Google and Bing do a great job of working with the iPhone and other Apple products. But upon further thought, Apple making its own solution makes a ton of sense.
An Ever-Expanding Portfolio
Image Credit: Michal Kubalczyk via Unsplash
It is no secret that Apple has ambitions of becoming more of a software-driven company. This much is clear with the successful launch of services like iCloud, Apple Music, Apple Arcade, and Apple News+ over the years. What is also clear is that the hardware side of Apple, it’s legacy wing if you will, has become completely independent of outside partners. With the news coming this week that new Macs are ditching Intel processors in favor of the company’s in-house laptop processors it calls the M1 chip, this move to self-reliance is a clear priority to Apple.
There has always been a sense of exclusivity with Apple, hence the origin of the term “walled garden” to describe how Apple attracts customers and keeps them using the companies products. For years, this has been applied to hardware products only. Where it was hard to leave Apple once you had invested in the iPhone, Apple Watch, and iPad. The Apple Watch, for instance, is utterly useless without an iPhone or iPad to connect it to. Apple’s argument has always been that this closed-off nature is for the benefit of the end-user, that this tight integration between its products creates a more user-friendly experience. This same strategy is now being employed with the companies services as well.
Outside of Apple Music, Apple services are designed to run on Apple hardware. While Apple TV+ is available on some smart TV solutions, it is designed to be utilized on Apple hardware like the Apple TV box. Apple Arcade will only run on Apple hardware, and the same goes for Fitness+ and News+. These services are now a part of the overall Apple experience. The modern Apple computing solution involves not only committing to a hardware ecosystem but also a software one that is designed to fit in all aspects of your life. The idea that the company is putting forward is to eliminate other services from third party companies and instead have Apple handle that for you, it only makes sense then that a search engine for the iPhone, iPad, and Mac is the next evolution of that idea.
What Would an Apple Search Engine Look Like
Image Credit: Diego PH via Unsplash
It is no mystery that Google Search is by far the dominant internet search platform. In the United States, Google holds a commanding 88.4% of the internet search market while Microsoft Bing, Yahoo, and DuckDuckGo hold the bulk of the remaining web query market. There are a few reasons why this number is so lopsided. The first and foremost is that having a competent web searching platform relies heavily on artificial intelligence to be able to know what users will want to click on and how to correctly place advertisements in search to make the service more revenue and make partners continue to want to advertise on the platform. Secondly, Google search is the default search engine on all Android phones and Chromebooks which immediately positions it well. There is also the idea that Google was the first to market and focus on search creating something of a Kleenex effect when it comes to web searches, to the point that no one calls it web searches anymore.
In this market of sheer dominance from Google, how does Apple differentiate itself? Microsoft has bundled Bing in Windows devices and even gives users rewards points for using its service and cannot get to 10% market share. DuckDuckGo has gone all-in on being a search engine focused on privacy with no ads or trackers like Google and Bing, yet they too are largely irrelevant. What then, would be the appeal of an Apple-based search engine?
While DuckDuckGo hasn’t struck mass appeal with its privacy-focused approach, this is an approach that Apple could take with its efforts. Apple has created an entire marketing effort centered around the iPhone being a secure phone. The phone that doesn’t get malware as Android phones do. What Apple has that DuckDuckGo does not have is a massive marketing budget. Apple is very efficient at selling privacy as a feature, and would likely have billboards made to highlight this fact in a search engine.
An Apple search engine would be expected to keep all data on-device and have full integration into Apple services and throughout the operating system. Apple will also need to show itself as the anti-Google and avoid the temptation to serve targeted ads, especially on mobile. Apple could then easily craft a message to iPhone users that the Apple search will be a privacy-focused experience that does not track every activity and offers an ad-free experience.
While Apple making this sort of a direct challenge to a dominant Google app seems rather ridiculous, this would not be the first time that the company decided to replace a Google solution in favor of an in-house solution. Back in 2012, Apple released Apple Maps to replace Google Maps as the default navigation and maps application on the iPhone. This transition was not an easy one as early versions of Apple Maps occasionally offered incorrect information that prompted CEO Tim Cook to issue an apology. Since then, however, Apple has slowly been making the app better to the point that many iPhone users have abandoned Google Maps in favor of Apple Maps especially as the Google solution has become more bloated with unnecessary features over time. The blueprint for Apple to make a search engine competitor to Google is there, and Apple has shown that it can build a competent alternative to Google apps.
The Walled Garden Grows Larger
Image Credit: Omid Armin via Unsplash
In many tech circles, fans often joke about Apple’s “walled garden”. Once someone starts using Apple products, they stay locked in because Apple has a way of creating a reliance on its services. In the past, this has meant creating lock-in through services like iMessage, FaceTime, and AirDrop. Now it seems that the company is starting to double down on this strategy and create more of a sense of exclusivity. One of the few remaining obstacles to cross seems to be the search engine, in terms of something that is interacted with daily by the user.
When you go down the list of first-party software solutions that Apple has made the standard on iPhone, the progression towards this seems natural:
Apple Music : the default music streaming service that gets new features on iOS first
: the default music streaming service that gets new features on iOS first iCloud : Apple’s in-house cloud storage system only available on Apple hardware
: Apple’s in-house cloud storage system only available on Apple hardware iMessage : Apple’s proprietary instant messaging service that only works with Apple hardware
: Apple’s proprietary instant messaging service that only works with Apple hardware Apple News+ : the new way to get curated news on Apple devices
: the new way to get curated news on Apple devices Apple TV+ : Apple’s way of competing with Netflix, Hulu, and Google TV
: Apple’s way of competing with Netflix, Hulu, and Google TV Apple Calendar: the default way of scheduling events on the iPhone
Apple’s messaging with all of these software solutions is that the company will create the best experience for Apple users through its apps as opposed to having to rely upon third-party solutions. Yet with the most simple of internet-related smartphone tasks, the web search, the company still does not have a solution and relies heavily on Google and Bing services to cater to its users. This is the missing piece of the Apple mobile computing puzzle that the company will look to solve sooner rather than later.
By having its search engine for iPhone users, Apple will create an argument of justification for the high price of the iPhone. Apple has never been a phone maker that makes low-end or “cheap” devices. iPhones always have and always will demand a price premium. Where a company like OnePlus is trying to diversify its portfolio with $200 Android phones, Apple remains adamant that its phones are worth paying extra for. Part of this is the Apple experience of tightly integrated software complemented by excellent hardware.
Moving forward, these software solutions are what will drive the differentiation of the iPhone versus a sea of Android competitors. The only way that Apple knows how to differentiate is through exclusivity. Having a dedicated search engine that is exclusive to Apple hardware creates this sense of exclusivity where the company can actively market a search engine without data tracking and ads that are ready out of the box on every new iPhone. By being able to mention all of these exclusive software features that cannot be accessed on anything but an iPhone demand is created beyond the simple aesthetics of the hardware. Apple has realized that the future of its smartphone business revolves around software more than hardware, and the company building its search engine will be a step in this direction. | https://medium.com/swlh/its-time-for-apple-to-make-a-search-engine-8fca5a680fd9 | ['Omar Zahran'] | 2020-11-16 17:43:11.408000+00:00 | ['Apple', 'Innovation', 'Technology', 'Search Engines', 'Business'] |
How to Build and Deploy a Jamstack Website Fast With Next.js | How to Build a Website With Next.js
1. Installation
You should have Node.js and Git installed on your computer. If you don’t have Node, downloading the installer is the easiest way.
2. Create a project
It’s as simple as running this command in your terminal:
npx create-next-app
First, it will ask you this question: “What is your project named?” Type in the name of your project and it will generate all the needed files.
In your terminal, go to the directory of your project. The script will show you the folder when it has installed all the dependencies.
When you open this folder via your favorite editor, it should look something like this:
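(The original post shows a screenshot at this point. A freshly generated project typically looks something like the sketch below, although the exact files depend on your create-next-app version.)
my-project/
├── pages/
│   ├── api/
│   ├── _app.js
│   └── index.js
├── public/
├── styles/
├── package.json
└── README.md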
3. Add content and styling
If you check the pages folder, you will see two JavaScript files and one folder.
The index.js is your homepage. The _app.js is the wrapper for all page components. Here, you can add all kinds of global styling.
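As an illustration, a minimal _app.js usually looks like the sketch below; the globals.css path is an assumption based on the default styles/ folder.
// pages/_app.js
import '../styles/globals.css'

export default function MyApp({ Component, pageProps }) {
  // Component is the page being rendered, pageProps are its props
  return <Component {...pageProps} />
}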
Run npm run dev and open your browser on localhost:3000 . Now you can see your new Next.js website.
If you want to get content from Markdown files, an API, or a CMS, I recommend checking out all the starter projects from Next.
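For a taste of what those starters do, here is a hypothetical sketch that reads Markdown files at build time with getStaticProps. The posts/ folder and the gray-matter dependency are assumptions used only for this example.
// pages/index.js (sketch)
import fs from 'fs'
import path from 'path'
import matter from 'gray-matter'

export async function getStaticProps() {
  const dir = path.join(process.cwd(), 'posts')
  const posts = fs.readdirSync(dir).map((file) => {
    const source = fs.readFileSync(path.join(dir, file), 'utf8')
    return { slug: file.replace(/\.md$/, ''), ...matter(source).data }
  })
  return { props: { posts } }
}

export default function Home({ posts }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.slug}>{post.title}</li>
      ))}
    </ul>
  )
}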
Running it locally is cool, but eventually, you want to show it to the world.
4. Create a GitHub project
Before we can deploy it, create a project and host your code safely there.
We want to host it for free on Netlify. For Netlify, you should add a config file to get your site running in no time.
Create a netlify.toml file and copy this code in it:
[build]
command = "npm run build"
publish = "out"
With this code, you tell Netlify what your build command is and in which folder it needs to serve that build version.
5. Deploy on Netlify for free
Log into Netlify and create a new project based on your GitHub account.
Select the repo your website is in and click next. The next step should be configured for you because of the netlify.toml file.
When everything goes as planned, you should see that there is a deployment running.
When that build and deployment process is finished, you can visit your site by clicking on “Preview deploy.” Now your website is alive and you can share it with the world.
Of course, I recommend spending a reasonable amount of time adding content and styling so it is very pleasing to your visitors’ eyes. I wish you good luck! | https://medium.com/better-programming/how-to-build-and-deploy-a-jamstack-website-fast-with-next-js-a61df3c822f | ['Dev Rayray'] | 2020-12-22 12:35:36.129000+00:00 | ['JavaScript', 'Jamstack', 'Netlify', 'Programming', 'React'] |
In Defense of Period Sex | Back in my days of cubicles and office work, a co-worker once complained to me that she was on her period, but her boyfriend was staying over for the weekend. She knew he'd be satisfied sexually by her own efforts, while she'd wind up feeling frustrated and have to take plenty of painkillers for her menstrual cramps.
"The worst thing about it," she whispered, "is that I'm so freaking horny on my period. Ugh. Men will never get it."
My frustrated coworker ranted on for a few more minutes, complaining that men expect BJs and handys during that time of the month while we feel like shit. I couldn't disagree since my personal experience involved much of the same, but I thought it was sad. My co-worker was unwilling to broach the subject with her then-boyfriend and eventual husband. She didn't want him to think that she was crazy.
How many other women refuse to bring up their desires in the bedroom only because men might think they're gross? And who decided it was so damn gross in the first place?
Were menstrual cycles reversed and only experienced by men, I have no doubt that period sex would be about as normal as men choosing to grow beards. As far too many women already know from experience, just as many men are no strangers to making sex happen... come hell or high water. So I suspect that most men would continue to have sex if they had a menstrual cycle--bleeding be damned.
Like my former coworker, I tend to feel pretty frisky whenever my period rolls around. Especially when I'm more "in-tune" with my body. When I feel like my best self, my period inspires cravings for bananas, top quality dark chocolate... and p in v sex.
For a long time, I thought it was something I shouldn't talk about because it was somehow abnormal. In short, I thought I was a freak to even want to have sex during my period.
The idea of period sex being some depraved or otherwise disgusting act likely stems from society's overall attitude about women on their periods. Most of us have been taught that it's dirty or unclean. Many women will go to great lengths just to avoid being seen buying menstrual products like pads or tampons. In some households, it's mortifying to even imagine a brother, father, or husband might see some evidence of our Aunt Flo.
"It’s sad that the most glorious of sexual experiences can make us feel guilty, ashamed, embarrassed, and abnormal." -Sue Johanson
In reality, it's silly and sad. As much as some of us girls feel like we're hemorrhaging, our blood and menstrual fluid are counted in milliliters. And not much more than 100 at the most. We’re needlessly self-conscious about this ongoing bodily process that won't deter us from school or work--unless perhaps, we're doubling over in pain due to cramps and inflammation.
And while period sex carries a weird stigma, what about other types of sex? Like sex when you have allergies or a cold. Plenty of couples have sex and risk getting their partner sick with a sore throat, a runny nose, or worse. We all risk some exposure to STIs. Yet it's practically the norm for women to worry more about period sex. Unexpected periods can ruin honeymoons, long-awaited holidays, and any number of impromptu sex sessions--usually because so many women think they're not supposed to even want sex then.
For the record, if you are a woman who hates being touched during her TOM and/or wants nothing to do with period sex, I don't think you're a prude, and I'm not trying to change your mind. But for women who do want to have sex that also feels pleasurable to them during their periods, that conversation shouldn't be taboo.
Another monthly gift?
There are definite health benefits to period sex, just like there are health benefits to giving your male partner head. For one thing, period sex feels good. As in otherworldly good. Period sex is extra slippery due to the extra lubricant, of course. And reaching orgasm delivers pain-relieving hormones throughout your body--just what you need while on the rag.
But honestly, if you suffer from excessively painful menstrual cramps and inflammation, there's just something about penetrative sex. Like scratching a deep itch that nothing else can soothe. Just the mere sensation of p in v intercourse during a period is enough to work out your kinks, like a much-needed massage.
Personally, penetrative sex isn't my favorite sexual activity during any other time of the month, but once I'm feeling bogged down by TOM, I actually crave it alone. And sure, I know how weird that sounds to a lot of people, because I've talked to them about it.
Most women I speak with either a.) don't want to tell their partners about their desires for period sex, or b.) say they can't imagine it feeling good since their monthly gift makes them feel so bad. But I think plenty of women would be surprised to discover just how pleasurable it can be. If there wasn't so much yucky stigma.
Some men I've dated won't try it. I might broach the topic and gently mention if my period comes early, and some will say it weirds them out. But to be honest, none of those men have been too weirded out to have a woman on her period service them orally or manually--even knowing she's not feeling great. They just don't want to see or smell her blood.
Maybe if a woman's reproductive system and menstrual cycle were less taboo, we'd not only have an easier time bringing up period sex, but we could talk more openly about tips to make it more pleasurable for all parties.
I credit sex educator Sue Johanson for going a very long way to help reshape my formerly rigid or even fearful views about sex, and I always appreciated how she made sex so... matter of fact. I don't think period sex needs to be any different.
The truth is that a freshly showered woman who's already a couple of days into her period could very likely have period sex with her partner being none the wiser. But as soon as a woman tells a man that she's got her period, his mind is in danger of running away with him and picturing a Jackson Pollock set of bedsheets splattered red.
(Please don't lie to your sexual partner. I'm not advocating that women hide their periods.)
Keep it simple, silly.
If you're lucky enough to find a partner who's up for period sex, invest in a dark-colored bath sheet or dark bed sheets you're willing to wash promptly. And take a shower before intercourse, or opt for a shower-only rendezvous.
Keep in mind that women bleed vaginally (which means the clit does not bleed, ahem, hint hint), and even the heaviest of flows won't kill you. Guys, wear a condom. Pregnancy is less likely during that time of the month, but not impossible. Plus, condoms help prevent STIs.
One innovative solution for mess-free period sex is to have the menstruating woman wear a FLEX Disc, a unique and (also disposable) alternative to tampons, pads, and menstrual cups. Because FLEX sits just past the vaginal canal in the same place as a diaphragm, messy period sex becomes a non-issue.
Look, I'm never out to force people into sexual activities that don't interest them, but I do think it's time to end the stigma of enjoying period sex. It's not weird or gross. Throughout history, women have been encouraged to stifle their own pleasure for the sake of men, children, and society. Besides, it's okay to find pleasure in something which may not be convenient.
It's not too late to change the dialogue we have about sex and menstruation, but let's face it--the change must begin with us. Women have to change the way we speak to and about our own bodies before expecting change from our men.
If a woman wants period sex, she should be able to speak openly to her partner without shame or fear of censure. And men, I'm not suggesting you engage in some disgusting sexual practice that makes you gag. I do ask that you challenge yourselves to question why a woman's period tweaks you out if you see no problem with (or even get turned on by) a woman handling your messy jizz.
Just in case you're still on the fence about period sex, here are a couple of reputable resources from Healthline and VeryWell. And if you're looking for a laugh? It's like Sue says, "If you can’t laugh about sex, you shouldn’t be doing it." | https://medium.com/awkwardly-honest/in-defense-of-period-sex-f16ba5558581 | ['Shannon Ashley'] | 2019-01-27 03:06:18.797000+00:00 | ['Health', 'Relationships', 'Culture', 'Sex', 'Women'] |
Build Not Hotdog from HBO’s Silicon Valley using OpenCV | By: Ben Zhang
Of the questions innovators ask themselves, there’s one that nags and frustrates: “How do I frame problems such that my solutions are creative, intelligent, and elegant?”
HBO’s Silicon Valley chronicles the (often futile) exploits a group of startup founders undertake as they try to adequately address the above issue. A popular example of such a project is Not Hotdog — an app that determines whether objects are hotdogs or not. Since its creation, the app affected a profound change in perspectives regarding food, creating a rigid dichotomy between foods that are hotdogs and foods that aren’t.
You might wonder — what’s the driving force behind Not Hotdog’s success? Just as cannonballs must have physical weight to be effective, applications must have technological heft to be impactful. Not Hotdog’s heftiness lies in its use of machine learning, a mysterious subfield of computer science and statistics that has matured rapidly in recent years. Despite the recent abundance in machine learning applications, solutions that incorporate neural networks are often heavy-handed, lacking in the aforementioned elegance of execution. I demonstrate how to integrate neural networks into a computer vision app. I hope that, through this example, you’ll develop a better understanding of when and how to apply machine learning.
Before we begin, here’s a link to my own Not Hotdog codebase.
Setup
Python and OpenCV are necessary for this project. Refer to this guide for Windows and this guide for Unix-based systems.
Loading ML Models into OpenCV
Since OpenCV allows developers to load deep neural networks from popular frameworks (like Caffe2, Tensorflow, and Torch) through its dnn library, we can load a pre-trained image classification model from a framework of choice. Tensorflow’s Inception model is particularly refined as it accurately classifies roughly 1,000 classes, and it is quite fast.
To download the version of Inception that we’re using, click here. Unzip the “inception5h.zip” file and put its contents directly into your project in the directory of your choice.
Here’s the code for loading a neural net in OpenCV:
import os
import cv2

inception_path = 'inception5h'  # folder where you unzipped the Inception files

def initialize_dnn():
    # Paths to the class labels file and the frozen TensorFlow graph
    class_names_path = os.path.join(inception_path, 'imagenet_comp_graph_label_strings.txt')
    model_path = os.path.join(inception_path, 'tensorflow_inception_graph.pb')
    # Read the ~1,000 class names, one per line
    with open(class_names_path, 'r') as class_names_descriptor:
        class_names = class_names_descriptor.read().strip().split('\n')
    # Load the pre-trained Inception graph into OpenCV's dnn module
    inception_net = cv2.dnn.readNetFromTensorflow(model_path)
    return inception_net, class_names
Preprocessing For Inception
Because Tensorflow’s Inception only takes images formatted in a very specific manner, we need to pre-process our images using more traditional OpenCV functionalities.
1. Resize the Image: Many modern neural networks consume images of exactly 224x224 pixels, a characteristic shared by Inception. We need to resize the image as close as possible to 224x224, then pad any remaining space with white. For example, an image may end up looking something like this:
Note the white padding.
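The article does not include the resize-and-pad code itself, so here is a rough sketch of how that step could be written. The white border color follows the text above; the centering and default interpolation are my own assumptions:
def resize_with_padding(image, size=224):
    h, w = image.shape[:2]
    scale = size / max(h, w)  # fit the longest side into 224 px
    resized = cv2.resize(image, (int(w * scale), int(h * scale)))
    top = (size - resized.shape[0]) // 2
    bottom = size - resized.shape[0] - top
    left = (size - resized.shape[1]) // 2
    right = size - resized.shape[1] - left
    # Pad the remaining space with white so the result is exactly 224x224
    return cv2.copyMakeBorder(resized, top, bottom, left, right,
                              cv2.BORDER_CONSTANT, value=(255, 255, 255))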
2. Blob It: BLObs, or Binary Large Objects, are the serialized inputs that Inception takes as input. We can convert an image to a BLOb using the following OpenCV command, where “resized” is the image after processing:
blob = cv2.dnn.blobFromImage(resized, 1, (224, 224), (0,0,0))
net.setInput(blob)
Usually, image classifiers operate only on images whose pixel values are normalized in a particular manner. To fulfill this constraint, most people that use classifiers have to normalize the colors in whatever dataset they’re classifying. Since Inception normalizes its input, we won’t be modifying any images this way.
Time to Classify
Since we’ve set up Inception and formatted its input correctly, it’s time to classify! Normally, we’d have to train the model ourselves. However, Inception is fully trained, and one forward pass through the network produces accurate confidence intervals to work with. Filtering through these confidence intervals, it’s possible to determine the confidence interval in which the image is or is not a hot dog. The histogram below demonstrates this principle: the neural network produces a probability distribution in which the biggest bar on the graph corresponds to which kind of object the image belongs to.
Bells and Whistles
Finally, we have to produce an output image. Using OpenCV provisions for drawing, it’s possible to produce an image that looks like this:
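The overlay itself can be drawn with standard OpenCV primitives. The banner layout, colors, and font below are my own choices for illustration, not something prescribed by the article:
def annotate(image, is_hotdog):
    text = 'Hotdog' if is_hotdog else 'Not Hotdog'
    color = (0, 180, 0) if is_hotdog else (0, 0, 255)  # green vs red, in BGR
    cv2.rectangle(image, (0, 0), (image.shape[1], 40), color, -1)
    cv2.putText(image, text, (10, 28), cv2.FONT_HERSHEY_SIMPLEX,
                0.9, (255, 255, 255), 2)
    return image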
I hope that in the process of building this classifier, you’ve developed a better understanding of how to design machine learning applications. If you have questions or comments, feel free to leave a comment here. You can also find me on Linkedin here. | https://medium.com/tribalscale/build-not-hotdog-from-hbos-silicon-valley-using-opencv-34592aa0c4cf | ['Tribalscale Inc.'] | 2019-04-25 15:46:33.796000+00:00 | ['Software Development', 'Development', 'Silicon Valley', 'TensorFlow', 'Machine Learning'] |
How the Trump Campaign Built an Identity Database and Used Facebook Ads to Win the Election | There may be some fake news on Facebook, but the power of the Facebook advertising platform to influence voters is very real. This is the story of how the Trump campaign used data to target African Americans and young women with $150 million of Facebook and Instagram advertisements in the final weeks of the election, quietly launching the most successful digital voter suppression operation in American history.
Throughout the campaign, President-Elect Donald J. Trump shrewdly invested in Facebook advertisements to reach his supporters and raise campaign donations. Facing a short-fall of momentum and voter support in the polls, the Trump campaign deployed its custom database, named Project Alamo, containing detailed identity profiles on 220 million people in America.
With Project Alamo as ammunition, the Trump digital operations team covertly executed a massive digital last-stand strategy using targeted Facebook ads to ‘discourage’ Hillary Clinton supporters from voting. The Trump campaign poured money and resources into political advertisements on Facebook, Instagram, the Facebook Audience Network, and Facebook data-broker partners.
Depress The Vote
“We have three major voter suppression operations under way,” a senior Trump official explained to reporters from BusinessWeek. “They’re aimed at three groups Clinton needs to win overwhelmingly: idealistic white liberals, young women, and African Americans.”
The goal was to depress Hillary Clinton’s vote total. “We know because we’ve modeled this,” the senior Trump official said. “It will dramatically affect her ability to turn these people out.”
For example, Trump’s digital team created a South Park-style animation of Hillary Clinton delivering the “super predator” line (using audio from her original 1996 sound bite), as cartoon text popped up around her: “Hillary Thinks African Americans are Super Predators.” Then, Trump’s animated “super predator” political advertisement was delivered to certain African American voters via Facebook “dark posts” — nonpublic paid posts shown only to the Facebook users that Trump chose.
Facebook is refusing to release a copy of the animated “Hillary Thinks African Americans are Super Predators” advertisement, or any other ‘negative’ presidential political ad that it ran. Facebook is also refusing to release details about the gender, ethnic, or location targeting parameters of these ads. Until further review, it’s uncertain if these targeted political advertisements are fully compliant with federal law.
Facebook’s advertising platform has recently come under fire from Congress for allowing advertisers to target African American, Asian American, Hispanic, and other “ethnic affinities”. Facing a wave of criticism, Facebook announced last week that it would build an automated system that would let it better spot ads that discriminate illegally. Facebook anticipates that its new system will be available by early 2017.
After the election, Facebook CEO Mark Zuckerberg said at a conference, “I think the idea that fake news on Facebook influenced the election in any way is a pretty crazy idea.” But he avoided the elephant in the room — President-Elect Trump’s election victory proves the power of Facebook advertising to influence the election.
Obviously, Zuckerberg would never say it’s a “pretty crazy idea” that Facebook’s advertising platform is supremely effective at persuading Facebook users to click, buy, or vote. In 2015, Facebook’s revenue from advertising was $17.9 billion. According to its annual report, Facebook generates “substantially all of our revenue from advertising. The loss of marketers, or reduction in spending by marketers, could seriously harm our business.”
Trump’s Huge Digital
The engine of the Trump campaign was its digital operations division. Headquartered in San Antonio, the Trump digital team consisted of 100 staffers, including a mix of programmers, web developers, network engineers, data scientists, graphic artists, ad copywriters, and media buyers. The chief executive of Trump’s digital operation was Brad Parscale, a successful entrepreneur and founder of the marketing agency Giles-Parscale Inc.
Parscale worked closely with President-Elect Trump and was one of a select few members of Trump’s inner circle entrusted to tweet from his personal Twitter account, @realDonaldTrump. Parscale’s lack of prior campaign experience was actually one of his greatest assets.
“I always wonder why people in politics act like this stuff is so mystical,” Parscale says. “It’s the same shit we use in commercial, just has fancier names.” On the strength of Parscale’s ability to generate campaign donations using Facebook and e-mail, the digital operations division was the Trump campaign’s largest source of cash.
In the Bloomberg BusinessWeek piece, “Inside the Trump Bunker, With Days to Go”, reporters Sasha Issenberg and Joshua Green detail how deeply President-Elect Trump was interested in his campaign’s digital strategy and fundraising operations. “Trump himself was an avid pupil. Parscale would sit with him on the plane to share the latest data on his mushrooming audience and the $230 million they’ve funneled into his campaign coffers.”
100,000 Trump Campaign Websites
In the early days of Trump’s campaign, Parscale was given a small budget and the goal of expanding Trump’s base of supporters. Parscale made a calculated decision to invest all the money on Facebook advertising. Using his laptop to buy $2 million dollars in Facebook ads, Parscale unceremoniously launched Trump’s first digital ad campaign.
To start, Parscale uploaded the names, email addresses, and phone numbers of known Trump supporters into the Facebook advertising platform. Next, Parscale used Facebook’s “Custom Audiences from Customer Lists” to match these real people with their virtual Facebook profiles. With Facebook’s “Audience Targeting Options” feature, ads can be targeted to people based on their Facebook activity, ethic affinity, or “location and demographics like age, gender and interests. You can even target your ad to people based on what they do off of Facebook.”
Parscale then expanded Trump’s pool of targeted Facebook users using “Lookalike Audiences”, a powerful data tool that automatically found other people on Facebook with “common qualities” that “look like” known Trump supporters. Finally, Parscale used Facebook’s “Brand Lift” survey capabilities to measure the success of the ads.
Parscale also deployed software to optimize the design and messaging of Trump’s Facebook ads. Describing one such test, the Wall Street Journal reporter Christopher Mims writes that “one day in August, his campaign sprayed ads at Facebook users that led to 100,000 different webpages, each micro-targeted at a different segment of voters.” In total, Trump’s digital team built or generated more than 100,000 distinct pieces of creative content.
The Data Hombre
Following Trump’s official nomination as the Republican Party presidential candidate in July 2016, Parscale was tasked with building and scaling the campaign’s digital targeting capabilities. One main supplier of Trump’s data was the Republican National Committee. (RNC Chairman Reince Preibus famously invested more than $100 million dollars into the party’s data and infrastructure capabilities since Mitt Romney’s 2012 loss.)
Preibus and his team the RNC flew down to San Antonio to meet Parscale and discuss what party officials began describing as “the merger.” Over dinner at Parscale’s favorite Mexican restaurant, Preibus and Parscale negotiated a partnership agreement between the RNC and Trump campaign. The RNC granted Trump access to its list of 6 million Republicans, but Trump could only keep 20% of any cash he raised from the list. The other 80% of campaign donations belonged to the RNC.
In retrospect, it seems like the Trump campaign was out-negotiated by the RNC. However, at the time, the Trump campaign had virtually no digital infrastructure and hadn’t actively raised any money during the primaries. In fact, when the Trump campaign sent out its first e-mail solicitation in late June, about 60% of Trump’s emails were blocked by spam filters.
Constructing the Project Alamo Database
Under the guidance of Jared Kushner, a senior campaign advisor and son-in-law of President-Elect Trump, Parscale quietly began building his own list of Trump supporters. Trump’s revolutionary database, named Project Alamo, contains the identities of 220 million people in the United States, and approximately 4,000 to 5,000 individual data points about the online and offline life of each person. Funded entirely by the Trump campaign, this database is owned by Trump and continues to exist.
Trump’s Project Alamo database was also fed vast quantities of external data, including voter registration records, gun ownership records, credit card purchase histories, and internet account identities. The Trump campaign purchased this data from certified Facebook marketing partners Experian PLC, Datalogix, Epsilon, and Acxiom Corporation. (Read here for instructions on how to remove your information from the databases of these consumer data brokers.)
Another critical supplier of data for the Trump campaign and Project Alamo was Cambridge Analytica, LLC, a data-science firm known for its psychological profiles of voters. As described by BusinessWeek, “Cambridge Analytica’s statistical models isolated likely supporters whom Parscale bombarded with ads on Facebook, while the campaign bought up e-mail lists from the likes of Gingrich and Tea Party groups to prospect for others.”
Statistical models from Cambridge Analytica also dictated Trump’s travel itinerary. The locations of Trump’s campaign rallies, the centerpiece of his media-centric candidacy, were chosen by a Cambridge Analytica algorithm that ranked places in a state with the largest clusters of persuadable voters.
“I wouldn’t have come aboard, even for Trump, if I hadn’t known they were building this massive Facebook and data engine,” says the Trump campaign Chairman Steve Bannon. (Bannon is also a Board Member of Cambridge Analytica.) “Facebook is what propelled Breitbart to a massive audience. We know its power.”
Facebook Dark-Posting Super Predators
Powered by Project Alamo and data supplied by the RNC and Cambridge Analytica, Trump was spending $70 million a month on digital operations, much of it to cultivate a universe of millions of fervent Trump supporters, many of them reached through Facebook. Mostly, Trump harnessed his digital operation for good — to identify his supporters and to raise money, ultimately collecting a dominant $275 million in donations through Facebook. However, as Trump’s momentum and public support were eroding in the final weeks of the campaign, his digital team plotted a last-ditch effort to use Facebook ads against supporters of Hillary Clinton.
As reported by BusinessWeek, “Trump’s campaign has devised another strategy, which, not surprisingly, is negative. Instead of expanding the electorate, Bannon and his team are trying to shrink it. “We have three major voter suppression operations under way,” said a senior Trump official. They’re aimed at three groups Clinton needs to win overwhelmingly: idealistic white liberals, young women, and African Americans.”
On October 24, two weeks before Election Day, Trump’s team began placing paid political advertising on select African American radio stations. In addition, Trump’s digital team created a South Park-style animation of Hillary Clinton delivering the “super predator” line (using audio from her original 1996 sound bite), as cartoon text popped up around her: “Hillary Thinks African Americans are Super Predators.”
Using the Facebook advertising platform, Trump’s animated “super predator” political advertisement was targeted to certain African American voters via Facebook “dark posts” — nonpublic posts whose viewership the campaign controls so that, as Parscale puts it, “only the people we want to see it, see it.” (So far, Facebook has refused to publicly release Trump’s “Hillary Thinks African Americans are Super Predators” political ad and its audience targeting parameters).
The goal was to depress Hillary Clinton’s vote total. “We know because we’ve modeled this,” the senior Trump official told BusinessWeek. “It will dramatically affect her ability to turn these people out.”
Digital Discouragement Wins
Campaigns typically spend millions on data science to understand their own potential supporters — to whom they’re likely already credible messengers — but Trump was willing to take a risk and speak to his opponent’s supporters. In the end, Trump’s risky bet on micro-targeted Facebook ads to discourage African Americans and young women from voting was handsomely rewarded with a presidential campaign victory.
On Election Day, Democratic turnout in battleground states was surprisingly weak, especially among sporadic and first-time voters. David Plouffe, manager of President Obama’s 2008 campaign, noted that, “in Detroit, Mrs. Clinton received roughly 70,000 votes fewer than Mr. Obama did in 2012; she lost Michigan by just 12,000 votes. In Milwaukee County in Wisconsin, she received roughly 40,000 votes fewer than Mr. Obama did, and she lost the state by just 27,000. In Cuyahoga County, Ohio, turnout in majority African-American precincts was down 11 percent from four years ago.”
Trump’s presidential election victory is the most successful digital voter suppression operation in American history. The secret weapons in Trump’s digital arsenal were Project Alamo, his database of 220 million people in the United States, and the Facebook Advertising Platform. By leveraging Facebook’s sophisticated advertising tools, including Facebook Dark Posts, Facebook Audience-Targeting, and Facebook Custom Audiences from Customer Lists, the Trump campaign was able to secretly target Hillary Clinton’s supporters and covertly discourage them from going to the polls to vote.
Stay Updated:
Related Stories: | https://medium.com/startup-grind/how-the-trump-campaign-built-an-identity-database-and-used-facebook-ads-to-win-the-election-4ff7d24269ac | ['Joel Winston'] | 2017-07-20 07:31:19.421000+00:00 | ['Facebook', 'Hillary Clinton', '2016 Election', 'Donald Trump', 'Law'] |
3 Reasons Why a Creative Lifestyle Leads to Greater Personal Fulfillment | The modern world is one in which people are increasingly beginning to feel confused when it comes to what to do with their lives.
Traditional career paths are dying out, age-old religious beliefs are losing their cultural binding power, and despite the increasing prosperity of developed nations, depression and other forms of mental illness are on the rise.
At the root of much of today’s anxiety is the question of whether or not personal fulfillment will ever be found in life.
If you, dear reader, are anything like me, then you have worried about this exact same thing.
There are a lot of different perspectives on how we should go about finding fulfillment.
Some will say to turn to religion. Others to work. And still, others might say to focus all of one’s attention on friends and family.
There is value in that advice, but what’s resonated with me the most is the idea that to meet the challenge of a seemingly meaningless world, it’s best to cultivate our own creativity, which allows us to create our meaning — and life satisfaction — as we go along.
The excitement of the artist at the easel or the scientist in the lab comes close to the ideal fulfillment we all hope to get from life, and so rarely do. Perhaps only sex, sports, music, and religious ecstasy — even when these experiences remain fleeting and leave no trace — provide as profound a sense of being a part of an entity greater than ourselves. But creativity also leaves an outcome that adds to the richness and complexity of the future. — Mihaly Csikszentmihalyi
Whether it is through the discipline of writing every day or taking up a hobby like painting or dance, I believe that getting in touch with our creativity is the best remedy to some of the more existentially-flavored anxieties besetting modern humans.
There are many good reasons to believe that this is so. For the purposes of this piece, I’ve decided to focus on just three.
Creativity is an active, rather than passive, process. Cultivating creativity requires that we adopt better habits. Participating in the creation of culture is more fun than only ever consuming it.
Hopefully, by the end of this piece, it’s easier to see why the act of creating things is one of the best antidotes for a modern world in which meaning and satisfaction are increasingly harder to come by.
Creativity Is an Active Process, Consumption Is a Passive One
One of the main reasons more people don’t make an effort to create their own work is that any creative discipline requires the cultivation of challenging skills.
It’s far easier to passively consume the results of other people’s creative work than it is to strike out and try to produce our own.
Skills demand a time investment in which one feels awkward, clunky, and totally inept before they start to return joy and satisfaction, whereas quick indulgences offer us immediate pleasure with little to no time investment.
But rather than seeing the challenge associated with growing one’s creativity as a downside, it should be looked at as an upside and embraced.
Why so?
Because anyone who has ever worked on a skill for a substantial period of time will tell you that the joy that comes from mastering challenges and discovering new ones to strive towards far surpasses life’s quick and easy pleasures.
Even the attainment of something seemingly simple — such as a single push up — is felt, from the perspective of the person whom it’s challenging for, as a deeply satisfying experience.
Pleasures such as eating, sex, or binge-watching TV may be extremely satisfying in the short-term (and largely due to biological reasons), but they are ephemeral, which is to say the pleasure wears off almost as soon as we have felt it, and so, of course, we will eventually crave more.
Skills, on the other hand, require our active, continued commitment in order to keep on growing in complexity, which leads us to reason number two.
Increasing Creative Skills Requires Forming Better Habits
To write a good blog post, paint a decent picture, or be able to perform even a simple song on a musical instrument requires that we question our current habits and adopt better ones if necessary.
Out of all our habits, how we choose to wield our focus is perhaps the most important one.
Another way to think of focus is in terms of priorities.
The decision to use one’s free time to passively consume content on social media — rather than producing it ourselves — is firmly habitual in nature.
There’s no reason we can’t replace — via slow, gradual effort — habits of passive-consumption (such as spending all our free time watching Netflix) with ones of active-creation, which contribute to the development of what Hungarian psychologist Mihaly Csikszentmihalyi refers to as a more “complex self.”
Put another way, we aren’t destined to only ever gaze at the better work of others. We can learn to both enjoy other people’s creativity as well as nurture our own.
We may not be able to produce works of such quality as the various historical and contemporary notables, but we can certainly break past our current limits and discover just how far our potential can carry us in any given direction.
And isn’t discovering that much better than never knowing at all?
To relate back to reason number one, however, part of the difficulty in doing this pertains to our habitual tendency to put off challenging tasks in favor of easier ones, even though we know that the greater the challenge, the greater the reward.
Once we adopt creative habits, however — such as sitting down to write every day, even if the finished product isn’t “perfect”— it becomes very difficult to feel content with the old routines that used to run our lives.
To give you a concrete example, once I finally started writing every day and got back into drawing and painting with my girlfriend, we’ve found it increasingly difficult to find something enjoyable to watch on Netflix.
After being slightly frustrated that nothing seemed to catch our interest (as one of our primary activities was to simply watch things together), we came to the conclusion that this is because we’re now more interested in pursuing our own creativity rather than passively sifting through the works of others.
This doesn’t mean that we have ditched consumption entirely. For instance, we really love The Mandalorian on Disney+.
What’s really changed is that we’ve become more discriminatory regarding what consumes our attention.
We’ve balanced out the joy we get from creating our own work with that we get from consuming the work of others, and this has been hugely beneficial in terms of feeling more satisfied in life.
And so, we eventually stopped feeling frustrated that less and less content seems to interest us.
We now realize that it’s okay to not let our attention be spent so easily, to have better standards for what we do consume on social media/television, and to prioritize the development of skills that create our own lives as we go along.
Now that we’ve been at it for a while, it’s become more obvious that there’s no going back to the way things were.
It’s much more enjoyable to grow one’s creative skills than it is to forever miss out on the opportunity, which leads us to reason number three.
Creativity Allows Us to Participate in Culture, Passive Consumption Relegates Us to the Sidelines
Most of us will never become the next Einstein, create a popular film franchise like Star Wars, or write a bestselling book series that becomes a hit TV show like Game of Thrones.
But this doesn’t mean that our creative potential is worthless or that we can’t participate in the evolution of culture.
The truth is quite the opposite: by nurturing our own creativity and bringing happiness to ourselves, we do the same for others.
After all, every individual is a constituent part of their culture. They are related to it via friends, family, coworkers, etc. And the internet has made it so that each of us can be “related” to virtually anyone else who is using social media at any given instant.
So how we choose to evolve ourselves determines how effectively we find our own groove in the changes currently taking place.
Put another way, we are each a role model to those around us. The habits and activities we adopt and put into practice reveal to ourselves — and the rest of the world — who we are.
This is why it behooves each of us to try and get in touch with our more creative side. It can help us steer the direction of our own lives and thus help steer the direction of humanity at large as well.
And fostering creativity is a step in the right direction for everyone.
One way to characterize much of the modern world’s predicament is that humanity currently consumes more from nature than it gives back.
Creativity — when adopted with the right mindset — can get us thinking about how to find satisfaction without the need to “take it” from something else.
The act of creation can reinforce our sense of responsibility to ourselves and others, as well as to the planet that we live on.
Furthermore, as the culture changes and more and more people wake up to the fact that fame, status, and fortune don’t necessarily bring lasting happiness, they will increasingly turn to methods of self-expression to find personal fulfillment.
If we don’t work on any creative skills of self-expression, then we remain unchanging and passive. This is a lot like opting to sit on the sidelines of our unfolding story as observers, rather than participants.
We have to do some observation, to be sure. But our lives miss out on an essential ingredient of happiness if we don’t participate as well.
As difficult as the question may be, we should all ask ourselves, “If I spend my whole life passively consuming the results of other people’s creative work instead of producing my own, will I approach death happily and without regrets?”
I reflect on that question often.
I think it’s safe to say that it’s the primary question that motivates me to keep writing, regardless of how large of an impact my blog has in the grand scheme of things.
Every time I ponder that question, I’m reminded of how essential it is that I continue to express myself via writing and I feel rejuvenated enough to keep at it another day.
Without the creative ability to express oneself, life becomes a more miserable affair.
And I don’t know about you, but I think that life should be an exhilarating choose-your-own-adventure experience, in which each individual feels empowered to go on a journey of self-discovery, sharing with others what they learn along the way.
Becoming more creative is — to my mind — the most effective way to come closer to feeling personally fulfilled in a somewhat crazy, constantly shifting modern world. | https://medium.com/the-innovation/3-reasons-why-a-creative-lifestyle-leads-to-greater-personal-fulfillment-8fb2fa1901f4 | ['Colton Tanner Casados-Medve'] | 2020-12-09 00:02:28.081000+00:00 | ['Life Lessons', 'Growth', 'Self Improvement', 'Creativity', 'Lifestyle'] |
Project Estimation: Practical Tips | Despite the fact that responsibility for the numbers provided in the estimate rests with your dev team or software development partner, both you and your vendor are in charge of its accuracy.
To get an estimation goes beyond just having an idea and money to support it. Project owners must be able to express the project idea to the estimators, cast their vision, and plot the vector of the project evolvement. There are several considerations to make so as to avoid margin of errors and other forms of consequences that are generally not palatable.
Practical Tip for Project Stakeholders
This article is shedding light on some practical tips that can be used when you are putting together a software development project estimation. These tips will guide you to a reasonable estimation, help to avoid reviewing the budget repeatedly, and overall, save your time.
Below are some tips to follow.
Highlight your Goals
Starting a software project begins with stating your goals and commitments in writing. Keep that in mind immediately after conceiving the idea for the project.
Your objectives, in the form of goals and commitments, must align with the pressing needs of your business. You should also set a time frame for when you want your goal to be achieved. This will help you assess whether the business aim of the project is attainable given the goals and the time available.
Have a Thought-Through Vision and Set of Requirements
A detailed requirements specification should be a document that contains requirements analysis, interface prototypes, competitor analysis, use cases, target users, etc. In other words, it reflects the exact vision of the expected product. The detailed specification makes it possible to provide a relatively fixed price and to anticipate any risks related to the project.
The absence of a documented project requirements specification leaves you two options:
1. The first is creating a specification with the help of your development partner. You will need to provide them all the available materials and have a set of discovery meetings and consultations. Review and discuss the delivered specification to make sure you are on the same page.
2. The second option is managing the project with an agile development approach on a time and materials basis. Be aware that it can be a wild goose chase, especially if you are not working with a trusted IT vendor. Without initial documentation to guide the project, an inexperienced team can spend a lot of time and budget making changes in the course of the project.
Consider the Non-Functional Requirements
Non-functional specifications serve a different purpose than the functional requirements, which usually describe ‘what’ a system needs to perform. It is vital to consider the non-functional requirements, which describe ‘how’ the system should work.
Depending on the nature of the project or business, non-functional requirements may differ in importance and relevance. Whether to adopt a given non-functional requirement may also depend on the complexity of its implementation.
A typical list of non-functional requirements may include such aspects as scalability, performance, high availability, security, usability, interoperability, and maintainability. Be ready to cover all these issues when discussing the project with your software development partner. They are crucial for project planning and can significantly affect the estimation.
Collect and Compare Estimates from Various Sources
It’s of interest to get bids for your project from different sources. There will always be different prices and timelines in the estimates you receive. However, the shortest timeline or most expensive doesn’t translate to the best.
Your final choice of vendor based on estimation must be well-considered and justified. You should make a comparison of factors that have led to the estimated total such as:
Human resources: How many programmers will work on the project and what is their skill level.
Technology: What is the sophistication of technology employed for project development.
Work/Time frame: How long it will take to complete the project and what is expected to be done.
Discuss the Assessment of the Project
The estimation resulting from the discovery sessions should be discussed with the team that provides it.
A software development project estimation can be revisited to fit your budget or meet your deadlines. Decide with the estimation team what you can alter. You can both reach an agreement based on which project tasks are priorities and necessities. A popular method for doing this is MoSCoW, developed by Dai Clegg of Oracle UK in 1994.
MoSCoW simply means:
M — Must have this requirement to meet the business needs
S — Should have this requirement if possible, but project success does not rely on it
C — Could have this requirement if it does not affect anything else on the project
W — Would like to have this requirement, but delivery won’t be this time
Opt for a Team with the Relevant Experience
To avoid estimates based on trial and error, engage the services of a vendor experienced with projects like yours. This will give you a more realistic estimation, as all the necessary assessments and considerations will be included.
The experience of a good estimation team will be of great value if you have no requirements at all. In this case, the team will guide you through the requirements definition process and finally come up with a fitting estimation.
Man-Hours vs Project Time Frame
This is a ratio that has a direct effect on the total cost of an estimation. You have to consider how soon you want the project to be ready. If you want the project to be delivered fast, then there should be more hands on the project, which will increase man-hours and result in more cost.
A competent vendor with a skillful team might not necessarily need more man-hours, as competence regulates working speed. Competence keeps the estimate lean, since both man-hours and project time will decrease.
Don’t be Ignorant of the Project Risks
All software projects have risks involved, so they should be analyzed carefully. Most software vendors include expenses to cover these risks in the estimation. Be sure to discuss possible risks with your vendor.
For example, the choice of third-party integrations can be a risk factor. Another form of risk is not adhering to regulatory policies relating to your project. All risks must be considered in the estimation to make sure the project plan and delivery eventually work out fine.
A Model Approach
Discovery Phase
The initial process is the discovery phase. It is a meeting between the expert team and the project stakeholders. The main objective of this session is to get a high-level understanding of the project. This can take more than one session depending on how big the project is or how many details you can provide during the first session.
The vendor gets insight into the owners’ business, its goal, strategy, and operations process. This is achieved through specific target questions. In the course of the discovery phase, the vendor collects all your project requirements and details. This will help them provide an accurate outlook of the cost and timeline of your project, and create a competitive estimation.
The discovery session takes place regardless of whether project requirements documentation is available. To establish trust, the session is preceded by signing a non-disclosure agreement (NDA) with the customer.
During this session, the following are considered:
The high-level vision of the project Identification of your business goals for the project Discussion of project documentation Elicitation of main project features Detailing project scope
Presentation of Estimation
This is the last phase, preceded by internal deliberations. The vendor then meets with the client and presents the estimate. It is usually a ballpark estimation or a range. All the details of the breakdown behind the estimation total are explained and justified.
Are you a new business startup, an established business owner, or a product manager? Do you want to create, tweak, or manage an app? iTwis is willing to see you through this process. Our well-skilled and experienced mobile development team can successfully facilitate your app production process from scratch and bring it to a significant product release. Contact us today for a consultation. | https://medium.com/itwis/project-estimation-practical-tips-66c2b9b59bc4 | ['Ayo Oladele'] | 2020-11-05 08:00:33.661000+00:00 | ['Mobile App Development', 'Product Management', 'Project Management', 'Software Development', 'Web Development'] |
The Differences Between a Junior, Mid-Level, and Senior Developer | Coding
Despite what most people think, coding is not about communication with a computer. Coding is about communicating with humans and instructing computers. Eventually, code gets compiled and translated to zeroes and ones.
Code has to make sense for other developers who have to work with it in the future. A new team that has never seen the code before should be able to open the code and start working on new features or bug fixes. This is where the big difference is between junior and senior developers.
I will leave out the mid-level developer in this comparison because the mid-level developer is kind of a gray area when it comes to coding skills. Obviously, it is somewhere in between the junior and senior. It probably leans more towards the senior side. This mainly has to do with experience, since mid-level developers have probably been through the whole development cycle at least once. They have made a lot of the most simple mistakes and learned from them.
How to sniff out the junior developer?
Junior developers are inexperienced. Some just graduated and are starting their first full-time job. The mindset of a junior developer often is to just make the code work. Working software and good software are considered the same.
Programming straightforward code is hard. And it’s something that junior developers don’t do. Junior developers write fancy code. You can recognize the junior developer by quirky one-liners and overly complex abstractions. This is the junior developer's way of showing off and letting the other developers know how good they can code. And it’s wrong.
Junior developers focus on the computer side of the code at the expense of the human side.
And what about the senior developer?
When looking at the code of a senior developer, you might think: is this all there is? Where’s the rest of the code? A senior developer writes simple, straightforward, and maybe even dumb code. This is one of the biggest qualities that a developer can have when it comes to programming. A senior developer follows the KISS principle: Keep it simple, stupid.
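As a made-up illustration of that difference (the data shape and function names here are purely hypothetical), compare a show-off one-liner with the plain version a maintainer will thank you for:
# The "look how clever I am" version a junior might write
get_names = lambda users: ','.join(map(str.upper, filter(None, (u.get('name') for u in users))))

# The boring, readable version a senior is more likely to write
def get_uppercase_names(users):
    names = []
    for user in users:
        name = user.get('name')
        if name:
            names.append(name.upper())
    return ','.join(names)
Both do the same job; only the second one is obvious at a glance to the next person who opens the file.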
A senior developer thinks about their code in a different way than the junior developer. Code written by a senior developer will be made with maintainability and scalability in mind. This is a totally different mindset than the junior developer has—the senior is thinking about the people who have to work with the code, while the junior is just thinking about making it work for the computer. | https://medium.com/better-programming/the-differences-between-a-junior-mid-level-and-senior-developer-bb2cb2eb000d | [] | 2019-08-02 00:18:13.191000+00:00 | ['Programming', 'Software Development', 'Software Engineering', 'Tech', 'Technology'] |