React 2020 — P7: JSX

I contemplated even doing a JSX tutorial, since it resembles HTML syntax so much. There are a few differences we do need to cover, though. JSX stands for JavaScript XML. You don’t have to use JSX in React, but it is a heck of a lot easier to use than the alternative.
Let’s create a new component and ruffle some feathers at the same time. Under src/components create a new functional component called WhyMenShouldntWearSkinnyJeans.
Import the component into App.js and render it.
If you take a look at the result in your browser, you’ll see your rendered component (npm start).
Great. Standard stuff so far. But, how would we convert the example we just looked at to not use JSX? The simple syntax that we’re used to gets converted into JavaScript with the aid of React.createElement(). The createElement() method accepts a few parameters. The first parameter is the type of element that you want to create. In our case, we want to create a <div> tag.
React.createElement('div', ...)
The next parameter lists the element attributes. We’re not going to need any attributes for this element, so we’ll just enter null.
React.createElement('div', null, ...)
The third parameter is the content that’s going inside of the element.
React.createElement('div', null, 'Because they look terrible');
If you save your file, you’ll notice that you get the same result.
This already looks messy. What if we had nested tags? Imagine that we had an <h1> tag surrounding the string Because they look terrible. It’s pretty simple with JSX.
It’s not as neat with React.createElement(). You can’t just surround the third parameter string with the <h1> tag.
// Won't work
React.createElement('div', null, '<h1>Because...</h1>');
So how do you get around this? You have to place another React.createElement() as the third parameter and then pass all of the content.
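To make the shape concrete, here is a sketch of that nesting. React itself isn’t loaded here; the React.createElement below is a toy stand-in that just returns plain objects, but the call structure mirrors the real API:

```javascript
// Toy stand-in for React.createElement, only to illustrate the nested
// call shape. Real React returns element objects with a similar structure.
const React = {
  createElement: (type, props, ...children) => ({ type, props, children }),
};

// <div><h1>Because they look terrible</h1></div> expressed without JSX:
const element = React.createElement(
  'div',
  null,
  React.createElement('h1', null, 'Because they look terrible')
);
```

Every extra level of markup adds another nested createElement call.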
As you can see, this starts looking like a complete mess, just like skinny jeans on men. I hope that you’re seeing the benefit of JSX already.
I’m not satisfied with just one reason though. I want to add a slew of reasons. JSX makes my life simple. We’ll create an unordered list and list a few reasons in there.
Should we create it for fun without JSX? Sure, why not?
I suggest that you just forget about this and use JSX like normal people.
What are some other differences between JSX and HTML? JavaScript has some reserved keywords, like class, so you can’t just plug in the class attribute to obtain a CSS class like you would with HTML. In JSX, the class attribute referring to a CSS class is actually className.
<div className="bad-decisions"><ul>...</ul></div>
You’ll notice that className is written with the camel case notation. You can have functions called with the onClick attribute; again, notice the camel case syntax. If we add a function burnPants to the onClick attribute, we’ll have a Failed to Compile error because we haven’t created the burnPants function (which we definitely should create).
Finally, you will have to add the self-closing tag syntax. For example, the <br> tag in HTML does not have a closing tag. Since it doesn’t, we call that tag a self-closing tag. You may be familiar with self-closing tag syntax in HTML as well, even though it’s not pushed any more. In JSX, it’s required: <br />. The same concept applies to other tags, such as the <img /> tag. That’s really all you have to know for JSX. You’re always more than welcome to look up the documentation on React’s website.

Source: https://medium.com/dev-genius/react-2020-p7-jsx-c7066a0a7d98 — Dino Cajic, 2020-09-04
The Forgotten Problem After Spinal Cord Injury
How a doctor can help you regain control after injury
Severe pain? No. Impaired sexual function? Nope. Weakness? Not correct. Inability to walk? Also no. While each of these significantly impacts the quality of life, they are not the answer we are looking for. So what is it then?
Photo by Meta Zahren on Unsplash
Pooping and peeing yourself!
As a doctor specializing in rehabilitation medicine, many of my patients have suffered spinal cord injuries. I oversee their medical care after they leave the main hospital and are admitted to the rehabilitation hospital. Depending upon the location and severity of the spinal cord injury, the patients often experience loss of control of bladder and bowel function (a.k.a. incontinence).
Why is incontinence such a big deal?
Incontinence can be embarrassing
Even the most self-confident individual may be embarrassed if another adult has to undress them, wipe off their stool or urine, and help don a fresh diaper and pants. Imagine you are at the mall with your significant other and you look down to realize your shorts have been saturated in urine. Your face immediately flushes red and you scurry off to the restroom to change. Individuals often live in fear of incontinence so they choose to stay home, missing out on events and severely decreasing their quality of life.
Imagine you are out at the mall with your significant other and you look down to realize your shorts have been saturated in urine.
Skin breakdown and infection
Individuals with spinal cord injuries are at high risk of skin breakdown, particularly over bony areas such as the tailbone and heels. Depending on the location of the injury, they may be unable to sense the need to shift their body weight to offload pressure. Constantly increased pressure leads to skin breakdown, and moisture from stool or urine breaks the skin down even further. Stool is home to bacteria which, when introduced to a fresh skin wound, may cause an infection requiring oral or even intravenous antibiotics.
Kidney damage and urinary tract infections
Elevated bladder volumes increase pressure in the bladder, pushing urine from the bladder back up to the kidney causing potentially permanent kidney damage. Alternatively, if urine stays in the bladder for too long, bacteria may colonize resulting in the development of a urinary tract infection.
Risk of bowel obstruction
Development of ileus, a condition where the intestine has difficulty moving stool through the gut, can be easily managed in these patients if recognized early. The patient is not allowed to eat or drink and a tube is placed down the nose into the stomach. This tube removes air and any material in the stomach that the patient could vomit. Ileus will typically resolve in a few days without further treatment. If ileus is not recognized, it could progress to a serious bowel obstruction that may require surgery.
Decreased participation in therapies
While admitted to the rehabilitation hospital, patients receive around three hours of therapy a day (physical, occupational, and speech depending on their needs). An incontinent accident during therapy requires leaving the therapy gym, getting cleaned up, and hopefully returning to therapy before the session time runs out. If incontinence occurs on a daily basis, missed therapy time adds up quickly.
Photo by Joyce McCown on Unsplash
How do we prevent incontinence?
Bowel movements are managed with oral medications to soften stool and help the gut move it along the intestines and a rectal suppository to empty the stool. Using a rectal suppository at the same time every day retrains the bowels to empty at the specified time rather than randomly throughout the day. Eventually, this leads to a reduced number of bowel accidents.
Using a rectal suppository at the same time every day retrains the bowels to empty at the specified time once per day
The bladder is initially managed with a catheter to drain the urine. If the catheter is removed and the patient is unable to urinate on their own, or if they can urinate but a significant amount of urine remains, a temporary catheter is inserted to drain the bladder and then removed. This process repeats every few hours. If the patient never regains control of urination, temporary catheterization may be continued indefinitely. If the patient is unable to catheterize themself and no one else can perform it for them, a permanent catheter is placed.
Bladder and bowel function is an often overlooked but essential topic after spinal cord injury. Controlling incontinence will greatly decrease medical complications and improve quality of life.

Source: https://medium.com/beingwell/the-forgotten-problem-after-spinal-cord-injury-80aa93edb181 — Gary Stover, 2020-06-29
Top Ten Reading-Friendly Records Of 2016
by Aubrie Cox & Jim Warner of Citizen Lit
1. David Bowie “Blackstar”
Three days. It was barely enough time to know this album before David Bowie left to blaze another path before us. While violently yoked to Bowie’s death, the true testament to Blackstar’s beauty is how it refuses to be seen only as a eulogy. Omnipresent and atmospheric, it has served as a backdrop to multiple books for us this year.
Blackstar pairs well with: poetry and creative nonfiction.
2. Danny Brown “Atrocity Exhibition”
Vinyl Me Please called it “The best post punk album of the year,” and for good reason. The album is a blender-spun fever dream of hip-hop, punk, and proto-industrial landscapes. There is a tense, confessional quality to the chaos in Brown’s delivery; the flow is simultaneously disjointed and cathartic. Danny Brown is fearless in his layering of broad influences, reaching out to the corpse of Ian Curtis and the smoldering husk of his hometown of Detroit with equal and brutal aplomb.
Atrocity Exhibition pairs well with: hybrid genre work like the Rose Metal Press library.
3. Car Seat Headrest “Teens of Denial”
Slacker rawk of the first order with a millennial twist on Richard Hell’s “Blank Generation” (which was a twist on folkie Bob McFaden’s song “Beat Generation,” which was… oh well, whatever, nevermind). The album emerges from the bedroom bandcamp symphonies category to a fully-formed artistic thesis, equal parts confident and confused.
Teens of Denial pairs well with: first poetry collections.
4. Nicolas Jaar “Sirens”
The vinyl copy of Sirens was packed with a quarter wedged between the plastic sleeve and the album cover. As the quarter bounced around the sleeve, it would scratch off the film on the cover to reveal its artwork. The album’s onion skin-layers are a slightly transparent brooding hustle, throbbing with nervous energy — like that gap between the afterparty and the comedown with nowhere to go.
Sirens pairs well with: dystopian fiction, novels like Matt Bell’s Scrapper.
5. Kyle Dixon & Michael Stein “Stranger Things OST (Vol. 1 and 2)”
We were late to the binge-watching phenom of the Summer (something about moving to Philadelphia and whatnot), but we were hooked by a sci-fi retro-glory that was the theme song. The two volumes (pressed on smokey color vinyl) are a John Carpenter-grade synth dream.
Stranger Things OST pairs well with: graphic novels, sci-fi, and the Two Dollar Radio catalog.
6. Mono “Requiem for Hell”
The soundtrack for a Trump America? Maybe in title alone. The epic sprawl of the titular track (all 17:48 of it) takes more cues from black metal than you would care to know and replaces the cavernous, church burning tendencies of the genre with an icy studio sheen.
Requiem for Hell pairs well with: adult fantasy, suspense, and lyrical prose such as Our Hearts Will Burn Us Down by Anne Valente.
7. Mitski “Puberty 2”
Young and dreamy, the aptly-titled Puberty 2 has hope and enthusiasm which can barely be contained by the scope of emotions which color this sophomore release. Her voice asserts the narrative which commiserates with the lonely, but challenges the lovers to be more than the sum of their parts.
Puberty 2 pairs well with: Young Adult (duh) and non-formula-dependent romantic novels, or anything written by Leesa Cross-Smith.
8. BadBadNotGood “IV”
If A Certain Ratio happened thirty years later and weren’t born in Manchester but in Toronto — well you get the picture (if you know ACR, if not we won’t hold it against you, but you should probably listen to Factory Records more). Jazz and Krautrock with a side of back bacon.
IV pairs well with: short story collections.
9. Solange “A Seat at the Table”
Unpopular opinion — we like this album better than Lemonade. This is not to say we are hipstering out on Bey; rather, it’s a testament to how fucking good A Seat at the Table truly is. Good craft is just undeniable. There are few art forms which more quickly (and astutely) assess the state of the world around us than poetry and music. Transcendent of turmoil and cultural conflict, Solange asserts her identity while at the same time inheriting the weight of unheard voices, carrying both into octaves and ranges which bend light into dark alleys.
A Seat at the Table pairs well with: identity poetry and hybrid work such as Wendy C. Ortiz’s Bruja.
10. Radiohead “A Moon Shaped Pool”
Jim’s a Gen-Xer, so it pains him to call Radiohead elder statesmen of rock, but the truth hurts sometimes. No, it’s not Wilco-level dadrock, but there is a demographic who call OK Computer their Sgt. Pepper’s Lonely Hearts Club Band. Album #9 sees Radiohead draw on all their phases to distill a record which is long on atmosphere and vulnerability.
A Moon Shaped Pool pairs well with: everything, like a good table red.
Honorable Mention: Beach Slang “A Loud Bash of Teenage Feelings”
While it’s not really a reading album per se, we think this is one record everyone should have. Sincerity oozes from its chords with some of the best Replacements lyrics Westerberg never wrote.
A Loud Bash of Teenage Feelings pairs well with: your high school diary and rediscovering your youthful optimism.

Source: https://medium.com/little-fiction-big-truths/top-ten-reading-friendly-records-of-2016-f240180456fc — Little Fiction, 2016-12-23
It Takes Luck To Succeed, Even When You’re Super-Talented

Luck. It’s that annoying success component the professional elite loathe talking about.
Love them or despise them, most superstars and billionaires worked hard to reach their status. But if they’re honest, they’ll acknowledge the role random chance played in their rise to the top.
If you dig into the backstory of most successful people, you’ll find instances of luck contributing to their big break.
You can be a great talent and produce groundbreaking work, but without a lucky break the world may never know. That’s what nearly happened when the song, Music Box Dancer by Frank Mills was released in 1974.
When Mills originally released Music Box Dancer, it bombed — not surprising for an unknown Canadian pianist without a commercial track record.
The song would have faded into the netherworld of forgotten music were it not for a string of improbable events four years after its release.
Mills signed with a new record company who put the song on the “B side” of a new single and sent it to easy listening radio stations across Canada.
By some quirk of luck, the record accidentally landed at a pop radio station. The program director dismissed the “A-side” of the record as unfitting for a pop station, but then the second stroke of fortune occurred. Something compelled the program director to play the “B-side,” Music Box Dancer. He fell in love with the tune and put it on heavy rotation.
Five years after its initial release, the record surged in popularity, going gold in Canada and prompting a re-release in the U.S., where it reached #3 on the Billboard Hot 100. The song is now considered one of the iconic instrumentals of the 20th century, appearing in various movies and television shows.
That success would never have happened if the record hadn’t accidentally ended up at a pop radio station, and if the program manager hadn’t played the “B-side.” Mills wrote and performed a beautiful tune, but it took a few happy accidents before it became a hit.
Even the super-talented need a little help from the universe before hitting it big. But where does that leave the rest of us? Can we get lucky? Of course. It happens all the time. You can’t control it, plan it, or predict it, but you can attract it.

Source: https://medium.com/curious/it-takes-luck-to-succeed-even-when-youre-super-talented-8db2a874c146 — Barry Davret, 2020-12-28
Building a Scalable API in Node

This article covers the topic of creating a scalable and configurable REST API, served with the help of express. The code is written in TypeScript. The API is database-agnostic, but in the final example, I will use MongoDB.
TL;DR: Here is the repository with the complete template.
Building a greenfield project is exciting. We think about the problem we face, technologies that can simplify the solution, and many other factors we believe may be beneficial for the design. Whatever API we are building, though, we want it scalable and configurable. During my career, I have had several situations where I was responsible for building the core of an API, and I believe I can share some hints that I found useful. It doesn’t matter if you are starting a brand new project in Node or you just want to learn something about building an API; hopefully, you can find this article relevant in both cases.
Scalability
Scalability begins with defining the proper architectural boundaries. I have seen many startups that started with the ‘Yolo’ approach, building a monolithic backend for the whole app without giving it a second thought. As time passes, such a backend becomes harder and harder to maintain. And if (hopefully) the startup gains popularity, the only way to scale it is by setting up the whole application multiple times, on multiple nodes. It’s costly and not as effective as it could be. Such a solution often needs to be re-written.
Think about your product first. Write down requirements. Discover the architectural boundaries — the places where the system can be divided. DDD (Domain Driven Design) is a great way to learn how to do it. What for? To let every part of the system work separately. If there is a need to scale the system up, you don’t have to do it as a whole; you can scale only the most overloaded parts. It also gives you much better maintainability of the system.
Building the system in such a way is preparing it for becoming microservice-based one day. But we never start with microservices. We start with the monolith that has well-defined architectural boundaries and then, someday, we can easily extract microservices from it. Listen to what Martin Fowler says about it:
Don’t even consider microservices unless you have a system that’s too complex to manage as a monolith Martin Fowler
Configurability
How sure are you that you’ll always use this one particular database? How sure are you that the tool that is good for now will be good in the future as well? You can’t be sure at all. Architectural drivers can change as time passes, and your tools will as well. So firstly, postpone the important architectural decisions as long as you can, until you know a lot about the system. And secondly, write the system in a way that allows you to change the decisions later on.
But how do you do it? Configurability is key. And that’s where the Dependency Inversion Principle comes into play (the D from the well-established SOLID rules). Invert the flow of dependencies. Don’t let your core logic rely on concrete classes/modules. Always create a layer of abstraction. It’s easier than you think. For instance, talking about the database: try not to call the DB directly; always use interfaces. You can implement such an interface any way you please, using any database you want. That will allow you to change your decision about the database in the future.
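To make that concrete, here is a minimal sketch of constructor injection against an interface. The names (Mailer, WelcomeService) are made up for the illustration and are not part of the template we build below:

```typescript
// The core logic depends on this abstraction, not on any concrete mailer.
interface Mailer {
    send(to: string, subject: string): string;
}

// One concrete implementation; it could be swapped for SMTP, SES, etc.
class ConsoleMailer implements Mailer {
    send(to: string, subject: string): string {
        return `mail to ${to}: ${subject}`;
    }
}

// The service receives its dependency through the constructor,
// so it never knows which implementation it is talking to.
class WelcomeService {
    constructor(private readonly mailer: Mailer) {}

    greet(email: string): string {
        return this.mailer.send(email, "Welcome!");
    }
}

const welcome = new WelcomeService(new ConsoleMailer());
const result = welcome.greet("jane@example.com");
```

Swapping the mailer means writing a new class that implements Mailer; the service itself never changes.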
Let’s write some code!
In the example below I will show you my idea for an API template. We will write some logic responsible for user creation. We can start with one of two approaches. Either we create a true monolith, where architectural boundaries are kept by separating the code into the modules, or we can create a template for multiple services in the single git repository and keep boundaries more strict. Let’s start with the second approach and see where we can get.
mkdir api-template && cd api-template
mkdir user-service && cd user-service && yarn init
yarn add express
yarn add typescript @types/express ts-node --dev
Then please add tsconfig.json, tslint.json and .gitignore files to finalize the project configuration. Next, create the following directory structure:
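If you need a starting point, a minimal tsconfig.json along these lines works for this kind of setup (treat it as a sketch; your compiler options may differ):

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true,
    "outDir": "dist"
  },
  "include": ["src/**/*.ts", "index.ts"]
}
```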
api keeps the implementation of all our API endpoints
keeps the implementation of all our API endpoints application is responsible for Application Services. We want our api to know as little as possible, and delegate all the job to application services
is responsible for Application Services. We want our api to know as little as possible, and delegate all the job to application services common keeps the implementation of Value Objects — objects that don’t change in time and represent the same arbitrary value
keeps the implementation of Value Objects — objects that don’t change in time and represent the same arbitrary value db handles all database-related implementations
handles all database-related implementations events are responsible for notifying the app about some changes that happened recently
are responsible for notifying the app about some changes that happened recently exceptions keeps the custom exceptions we can throw to be more informative
keeps the custom exceptions we can throw to be more informative middlewares handles all the behavior that we want to inject before the call reaches the API
handles all the behavior that we want to inject before the call reaches the API models is responsible for all the domain models our application work with
User Model
Let’s say we want to know the user’s first and last names, email, and password. Create an appropriate model inside the models directory. Let’s start with the naive implementation including simple email and password validation:
export class UserModel {
    firstName: string;
    lastName: string;
    email: string;
    password: string;

    constructor(firstName: string, lastName: string, email: string, password: string) {
        const re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;

        if (!re.test(email.toLowerCase())) {
            // todo throw some exception
        } else if (password.length < 6) {
            // todo throw some exception
        } else {
            this.firstName = firstName;
            this.lastName = lastName;
            this.email = email;
            this.password = password;
        }
    }
}
It doesn’t look pretty, does it? Email and passwords are a perfect candidates for a value objects. So let’s create the proper exceptions first:
export class PasswordNotValidException extends Error {
    constructor() {
        super("Password not valid");
    }
}

export class EmailNotValidException extends Error {
    constructor() {
        super("Email is not valid");
    }
}
Then it’s time to create the value objects:
import {EmailNotValidException} from "../exceptions/email-not-valid.exception";

export class Email {
    private readonly value: string;

    public static of(email: string): Email {
        if (Email.isValidEmail(email)) {
            return new Email(email);
        }

        throw new EmailNotValidException();
    }

    private static isValidEmail(email: string) {
        const re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
        return re.test(String(email).toLowerCase());
    }

    private constructor(email: string) {
        this.value = email;
    }

    public getValue() {
        return this.value;
    }
}
For a password, it may be a little bit more tricky, as we probably don’t want to keep and store plain passwords in our system. Add bcrypt and @types/bcrypt libraries using yarn, and then implement the Password class:
import {PasswordNotValidException} from "../exceptions/password-not-valid.exception";
import {hash} from "bcrypt";

export class Password {
    // todo move them to env
    private static SALT_ROUNDS: number = 10;
    private static MIN_PASS_LENGTH: number = 6;

    private readonly value: string;

    public static async of(pass: string): Promise<Password> {
        if (Password.isValidPassword(pass)) {
            return await Password.create(pass);
        }

        throw new PasswordNotValidException();
    }

    public static ofHash(hash: string): Password {
        return new Password(hash);
    }

    private static isValidPassword(pass: string): boolean {
        return pass.length >= Password.MIN_PASS_LENGTH;
    }

    private static async create(pass: string) {
        const passHash = await hash(pass, Password.SALT_ROUNDS);
        return new Password(passHash);
    }

    private constructor(hash: string = '') {
        this.value = hash;
    }

    public getHash() {
        return this.value;
    }
}
And now our user model (renamed simply to User) simplifies drastically:
import {Email} from "../common/email";
import {Password} from "../common/password";

export class User {
    firstName: string;
    lastName: string;
    email: Email;
    password: Password;

    constructor(firstName: string, lastName: string, email: Email, password: Password) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
        this.password = password;
    }
}
We don’t have to validate anything here, as we know email and password have to be valid to be created in the first place.
User Database Abstraction
As I mentioned earlier, we want to postpone important decisions as long as we can. So, for now, let’s use a simple abstraction for the database. Create the db/user/interfaces directory and then define an interface:
import {Email} from "../../../common/email";
import {User} from "../../../models/user";
export interface UserDatabase {
isEmailUnique(email: Email): Promise<boolean>
create(companyUser: User): Promise<User>
}
As you see, I’ve added the method isEmailUnique because we need to check it before creating the user. Of course, we could set the constraint on the database, but if we want to keep control of the application and not rely on specific features of external tools, let’s stick to my proposition.
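A side benefit of the interface worth noting: services become trivial to unit-test with a fake database. The sketch below uses plain strings instead of the Email value object so it stays self-contained; the fake and the helper function are made up for the example:

```typescript
// Simplified stand-in for the template's interface, so the example
// is self-contained (a plain string instead of the Email value object).
interface UserDatabase {
    isEmailUnique(email: string): Promise<boolean>;
}

// A fake that always reports the email as taken: useful for exercising
// the "duplicate email" branch of a service without any real database.
class AlwaysTakenDb implements UserDatabase {
    isEmailUnique(_email: string): Promise<boolean> {
        return Promise.resolve(false);
    }
}

// Any logic written against the interface works with the fake unchanged.
async function canRegister(db: UserDatabase, email: string): Promise<boolean> {
    return db.isEmailUnique(email);
}
```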
It’s time to store the data somewhere. As we have an interface, we can go with any database or any implementation we want. Here is an example of the implementation of MongoDB. Let’s create an in-memory database for our case.
import {UserDatabase} from "./interfaces/user.database";
import {User} from "../../models/user";
import {Email} from "../../common/email";
export class UserInMemoryDb implements UserDatabase {
private users: User[] = [];
create(user: User): Promise<User> {
this.users.push(user);
return Promise.resolve(user);
}
isEmailUnique(email: Email): Promise<boolean> {
return Promise.resolve(!this.users.find(x => x.email.getValue() === email.getValue()));
}
}
UserService
We don’t want our api to act directly on the database. We want separate logic responsible for any action that touches the database. That’s where Application Services help. Create a new class inside the application directory:
The (empty, marker) ApplicationService interface lives in application/interfaces/application.service.ts:

export interface ApplicationService {
}

And the UserService itself:

import {UserDatabase} from "../db/user/interfaces/user.database";
import {User} from "../models/user";
import {EmailNotUniqueException} from "../exceptions/email-not-unique.exception";
import {ApplicationService} from "./interfaces/application.service";

export class UserService implements ApplicationService {
    private readonly userDatabase: UserDatabase;

    public constructor(userDatabase: UserDatabase) {
        this.userDatabase = userDatabase;
    }

    public async create(user: User): Promise<User> {
        if (!await this.userDatabase.isEmailUnique(user.email)) {
            throw new EmailNotUniqueException();
        }

        return await this.userDatabase.create(user);
    }

    static getType(): string {
        return "UserService";
    }
}
As you see we are using a new exception here. Please add it to the exceptions directory by yourself. The other thing worth mentioning is that we are passing the database interface to the service constructor. It enables us to easily inject there any DB we want.
getType method is also added for a reason — we will be able to determine the service using its type.
Notice that in this step we only create a user. In such a situation some other actions should happen — possibly an email should be sent, etc. We don’t want to wait for sending an email; it’s an action that can run asynchronously. And it can be one of many actions that we want to execute afterward. We will get back to this topic soon.
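One common way to handle those follow-up actions is to publish a domain event (this is what the events directory is for) and let subscribers react to it. Below is a minimal sketch using Node’s built-in EventEmitter; the event name and payload are assumptions for the example. Note that emit invokes listeners synchronously, so a real subscriber would hand the work off to a job queue (or setImmediate) rather than doing it inline:

```typescript
import { EventEmitter } from "events";

// A single emitter instance the application can share.
const domainEvents = new EventEmitter();

// A subscriber registers interest in the event. In a real app this
// listener would only enqueue the work (emit runs listeners
// synchronously), keeping the request/response cycle fast.
const queued: string[] = [];
domainEvents.on("user.created", (email: string) => {
    queued.push(`welcome mail queued for ${email}`);
});

// The service would emit this right after persisting the user.
domainEvents.emit("user.created", "jane@example.com");
```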
User Controller
Now, install the express-validator library, open the api directory, and create an index.ts file:
import {Router} from 'express'
import {check} from "express-validator";
// adjust this path to wherever you create the UserController below
import {UserController} from "./user.controller";

const router: Router = Router()
const userController = new UserController();

router.post('/',
    [
        check('firstName', 'Field required').exists(),
        check('lastName', 'Field required').exists(),
        check('email', 'Field required').exists(),
        check('password', 'Field required').exists()
    ],
    userController.create)

export default router;
Here we validate the existence of the required request parameters and we pass the request for the UserController to handle. Let’s create the UserController then:
import {Request, Response} from 'express';
import {User} from "../models/user";
import {UserService} from "../application/user.service";
import {ResponseBuilder} from "../response/response.builder";

export class UserController {
    public create = async (req: Request, res: Response): Promise<any> => {
        const user: User = await User.of(req)
        const result: User = await this.getService().create(user)
        res.status(201).send(new ResponseBuilder(result).setMessage('User created'))
    }

    private getService(): UserService {
        // todo implement it!
    }
}
You can notice a few errors here. First of all, we didn’t pass UserService yet, as we want to configure it later on. The second thing is we don’t have the of method on the User object. Let’s fix it right now by adding the following method to the User’s object:
static async of(req: Request) {
const body = req.body;
return new User(body.firstName, body.lastName, Email.of(body.email), await Password.of(body.password));
}
The third problem is the ResponseBuilder class. You can send the response in any way it pleases you, but you are free to use some ResponseBuilder sample I did here.
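The linked sample isn’t reproduced in this article, so here is a sketch of a minimal builder that supports both calls used in the code — new ResponseBuilder(result).setMessage(...) and new ResponseBuilder().err(...). The field names are my assumption, not necessarily the author’s exact class:

```typescript
class ResponseBuilder {
    private message = "";
    private errors: unknown[] = [];

    // The payload to return; optional so error responses can omit it.
    constructor(private readonly data: unknown = null) {}

    setMessage(message: string): ResponseBuilder {
        this.message = message;
        return this; // fluent interface, so calls can be chained
    }

    err(message: string, errors: unknown[]): ResponseBuilder {
        this.message = message;
        this.errors = errors;
        return this;
    }

    // express serializes the object via toJSON when it is sent.
    toJSON() {
        return { data: this.data, message: this.message, errors: this.errors };
    }
}

const body = new ResponseBuilder({ id: 1 }).setMessage("User created").toJSON();
```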
Service config
And finally, we need to configure our service. Install cors, helmet, morgan, and body-parser (plus their typings), then create the service class inside the src directory:
import express from 'express'
import apiV1 from './api/index'
import bodyParser from 'body-parser'
import cors from 'cors'
import helmet from 'helmet'
import morgan from 'morgan'
import {UserService} from "./application/user.service";
import {UserInMemoryDb} from "./db/user/user.in-memory-db";
import {ApplicationService} from "./application/interfaces/application.service";
class Service {
private readonly _express: express.Application
private readonly _appServices: Map<string, ApplicationService>
get express(): express.Application {
return this._express;
}
get appServices(): Map<string, ApplicationService> {
return this._appServices;
}
constructor() {
this._express = express();
this._appServices = new Map<string, ApplicationService>();
this.setUp();
}
public setUp(): void {
this.setApplicationServices()
this.setMiddlewares()
this.setRoutes();
}
protected setApplicationServices() {
this.appServices.set(UserService.getType(), new UserService(new UserInMemoryDb()))
}
public setRoutes(): void {
this._express.use('/api/v1', apiV1)
}
private setMiddlewares(): void {
this._express.use(cors())
this._express.use(morgan('dev'))
this._express.use(bodyParser.json())
this._express.use(bodyParser.urlencoded({extended: false}))
this._express.use(helmet())
}
}
export default new Service();
A few important things are going on. We:
create the express app
register the application service by its type, configuring the database it uses.
set middlewares
set routes we’ve already created
Now, as we already have UserService registered, we need to get back to the UserController and implement the getService method:
private getService(): UserService {
return service.appServices.get(UserService.getType()) as UserService;
}
Then the last step is to create an index.ts file in the user-service directory:
import service from './src/service'
const PORT = 8080
service.express.listen(PORT, () => {
console.log(`Server is listening on ${PORT}`)
})
Then add a new script to package.json
"scripts": {
"dev": "nodemon --watch src --exec ts-node ./index.ts"
},
Voilà! We have the initial logic to run the service. You can run it and send some data to the endpoint http://localhost:8080/api/v1/. Nevertheless, there are still some steps we need to take to ensure it's working properly.
Response data
After sending a valid request to the endpoint, you might have noticed that we get the whole user model as a response, including the password hash. That's unacceptable. To fix it, add the following method to UserModel:
/* method called while sending the model as API response */
public toJSON(): any {
return {
email: this.email.getValue(),
firstName: this.firstName,
lastName: this.lastName
}
}
Error Handling
You’ve probably already noticed that sending a request with the wrong input throws an exception and puts the request on hold. That’s because we don’t have a proper error handling middleware yet. Let’s create one:
import {Request, Response} from 'express'
import {validationResult} from 'express-validator'
import {ResponseBuilder} from "../response/response.builder";
export const route = (func: any) => {
return (req: Request, res: Response, next: () => void) => {
const errors: any = validationResult(req)
/* validate all generic errors */
if (!errors.isEmpty()) {
return res
.status(422)
.send(
new ResponseBuilder().err('Validation failed', errors.array())
)
}
/* process function and catch internal server errors */
func(req, res, next).catch((err: any) => {
res
.status(err.ERROR_CODE ? err.ERROR_CODE : 500)
.send(new ResponseBuilder().err(err.toString()))
})
}
}
As you can see, it catches all the errors our system can generate and returns them in an elegant response. From now on, adding an ERROR_CODE property to an exception class can change the status code sent back to the client, and I strongly advise you to change it to 422 for all the custom errors we've created.
Executing actions on success
Sometimes we want to react when an action completes. We may need to send a welcome email, log something asynchronously, or do anything else after the request is completed. On the other hand, we don't want to make the client wait for the response. One elegant solution to this problem is an event handler. Please take a look at the code below:
export interface DomainEvent {
}
export interface DomainEventSubscriber {
handle(event: DomainEvent): void
canHandle(event: DomainEvent): boolean
}
export interface DomainEventPublisher {
publish(event: DomainEvent): void
subscribe(subscriber: DomainEventSubscriber): void
}
export class ForwardDomainEventPublisher implements DomainEventPublisher {
private subscribers: DomainEventSubscriber[] = [];
subscribe(subscriber: DomainEventSubscriber) {
this.subscribers.push(subscriber);
}
publish(event: DomainEvent): void {
this.subscribers
.filter((subscriber) => subscriber.canHandle(event))
.forEach((subscriber) => subscriber.handle(event))
}
}
Here we create an abstraction for Event, EventPublisher and EventSubscriber. There is also a sample implementation of EventPublisher. Let’s move on with our implementation:
export class UserCreated implements DomainEvent {
email: Email
constructor(email: Email) {
this.email = email;
}
}
export class WelcomeEmailSubscriber implements DomainEventSubscriber {
handle(event: UserCreated): void {
// todo: send a welcome email to the user
}
canHandle(event: DomainEvent): boolean {
return event instanceof UserCreated;
}
}
And register this subscriber in service.ts:
class Service {
...
private readonly _publisher: DomainEventPublisher
...
constructor() {
...
this._publisher = new ForwardDomainEventPublisher();
...
}
public setUp(): void {
...
this.registerEventsSubscribers()
...
}
protected registerEventsSubscribers() {
this._publisher.subscribe(new WelcomeEmailSubscriber());
}
protected setApplicationServices() {
this.appServices.set(UserService.getType(), new UserService(new UserInMemoryDb(), this._publisher))
}
}
As you may notice, we’ve passed the publisher to the application service. Now let’s fix the application service:
export class UserService implements ApplicationService {
private readonly userDatabase: UserDatabase;
private readonly publisher: DomainEventPublisher;
public constructor(userDatabase: UserDatabase, publisher: DomainEventPublisher) {
this.userDatabase = userDatabase;
this.publisher = publisher;
}
public async create(user: User): Promise<User> {
if (!await this.userDatabase.isEmailUnique(user.email)) {
throw new EmailNotUniqueException();
}
return await this.createNewUser(user);
}
private async createNewUser(user: User) {
const newUser: User = await this.userDatabase.create(user);
this.publisher.publish(new UserCreated(newUser.email));
return newUser;
}
static getType(): string {
return "UserService";
}
}
That’s it! From now on, we will be publishing the event UserCreated, then all the subscribers that can handle that type of event will react.
The template on GitHub
If you want to see the full code, feel free to download it here. Please note that I made some improvements in this repo, such as:
I’ve added the swagger configuration. From now on you can see the endpoint’s documentation opening the URL: http://localhost:8080/api/v1/docs
I’ve added the MongoDB and I’ve changed the default DB to Mongo (from in-memory database)
I’ve used yarn-workspaces to keep the code more structured and reusable
I’ve added the docker-compose for setting up the MongoDB easily
I’ve added the authentication mechanism (logging in / out)
I’ve moved some of the configuration logic to the .env file
I’ve added tests to make sure our API works properly
Wrapping it up
It took some time, but we did it! We've implemented an elegant template for a scalable and configurable node-based API. From now on, you can use it to create new services in your application and to keep control of the complexity of your codebase.
| https://medium.com/swlh/building-a-scalable-api-in-node-41c65f84d9c1 | ['Maciej Kocik'] | 2020-11-25 07:18:07.022000+00:00 | ['Nodejs', 'Expressjs', 'Typescript', 'Mongodb', 'Rest Api'] |
Ten Machine Learning Algorithms You Should Know to Become a Data Scientist
Machine Learning practitioners have different personalities. While some of them say "I am an expert in X, and X can train on any type of data" (where X is some algorithm), others are "right tool for the right job" people. A lot of them also subscribe to the "jack of all trades, master of one" strategy, where they have one area of deep expertise and know a little about different fields of Machine Learning. That said, no one can deny that as practicing Data Scientists, we have to know the basics of some common machine learning algorithms, which will help us engage with new-domain problems we come across. This is a whirlwind tour of common machine learning algorithms, with quick resources about them that can help you get started.
1. Principal Component Analysis(PCA)/SVD
PCA is an unsupervised method for understanding the global properties of a dataset consisting of vectors. The covariance matrix of the data points is analyzed here to understand which dimensions (mostly) or data points (sometimes) are more important (i.e., they have high variance amongst themselves but low covariance with the others). One way to think of the top PCs of a matrix is as its eigenvectors with the highest eigenvalues. SVD is essentially a way to calculate ordered components too, but you don't need the covariance matrix of the points to get it.
This algorithm helps one fight the curse of dimensionality by producing data points with reduced dimensions.
Libraries:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svd.html
http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
Introductory Tutorial:
https://arxiv.org/pdf/1404.1100.pdf
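To make the idea concrete, here is a stdlib-only toy sketch (an illustration, not a replacement for the libraries above) that recovers the first principal component of 2-D data by building the covariance matrix and running power iteration, which converges to the eigenvector with the largest eigenvalue:

```python
# Minimal PCA sketch: first principal component of 2-D points via
# covariance matrix + power iteration. Illustrative only; in practice
# use sklearn.decomposition.PCA or scipy's SVD.

def first_principal_component(points, iters=200):
    n = len(points)
    # Center the data
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # 2x2 covariance matrix entries
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    # Power iteration: repeatedly apply C to a vector and renormalize;
    # it converges to the top eigenvector, i.e. the first PC.
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        nx = cxx * vx + cxy * vy
        ny = cxy * vx + cyy * vy
        norm = (nx * nx + ny * ny) ** 0.5
        vx, vy = nx / norm, ny / norm
    return vx, vy

# Points scattered along the line y = x: the top PC should be ~(0.707, 0.707)
data = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9), (5, 5.1)]
pc = first_principal_component(data)
```

Projecting each centered point onto this direction is exactly the "reduced dimension" PCA gives you.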
2a. Least Squares and Polynomial Fitting
Remember your Numerical Analysis course in college, where you fit lines and curves to points to get an equation? You can use the same approach to fit curves in Machine Learning for very small datasets with low dimensions. (For large data or datasets with many dimensions, you might just end up overfitting terribly, so don't bother.) OLS has a closed-form solution, so you don't need to use complex optimization techniques.
As is obvious, use this algorithm to fit simple curves / regression
Libraries:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html
https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.polyfit.html
Introductory Tutorial:
https://lagunita.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/linear_regression.pdf
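The closed-form solution mentioned above fits in a few lines for a straight line. Here is a stdlib-only sketch (in practice you would call numpy.polyfit or numpy.linalg.lstsq):

```python
# Closed-form least squares for a line y = a*x + b:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

# Noise-free points on y = 2x + 1 are recovered exactly.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```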
2b. Constrained Linear Regression
Least Squares can get confused by outliers, spurious fields, and noise in the data. We thus need constraints to decrease the variance of the line we fit to a dataset. The right way to do this is to fit a linear regression model that ensures the weights do not misbehave. Models can have an L1 norm penalty (LASSO), an L2 penalty (Ridge Regression), or both (Elastic Net). Mean squared loss is optimized.
Use these algorithms to fit regression lines with constraints, avoiding overfitting and masking noise dimensions from model.
Libraries:
http://scikit-learn.org/stable/modules/linear_model.html
Introductory Tutorial(s):
https://www.youtube.com/watch?v=5asL5Eq2x0A
https://www.youtube.com/watch?v=jbwSCwoT51M
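To see the "constraint" in action, here is a minimal stdlib-only sketch of ridge regression in one dimension with no intercept: minimizing sum (y - w*x)^2 + lam * w^2 has the closed form w = sum(x*y) / (sum(x^2) + lam), so a larger penalty shrinks the weight toward zero (use sklearn's Ridge / Lasso / ElasticNet in practice):

```python
# 1-D ridge regression, closed form. The lam term is the L2 penalty
# that keeps the weight from "misbehaving".

def ridge_slope(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1, 2, 3]
ys = [2, 4, 6]                            # exactly y = 2x
w_ols   = ridge_slope(xs, ys, lam=0.0)    # no penalty: recovers 2.0
w_ridge = ridge_slope(xs, ys, lam=5.0)    # penalized: shrunk below 2.0
```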
3. K means Clustering
Everyone’s favorite unsupervised clustering algorithm. Given a set of data points in form of vectors, we can make clusters of points based on distances between them. It’s an Expectation Maximization algorithm that iteratively moves the centers of clusters and then clubs points with each cluster centers. The input the algorithm has taken is the number of clusters which are to be generated and the number of iterations in which it will try to converge clusters.
As is obvious from the name, you can use this algorithm to create K clusters in dataset
Library:
http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
Introductory Tutorial(s):
https://www.youtube.com/watch?v=hDmNF9JG3lo
https://www.datascience.com/blog/k-means-clustering
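The assign-then-move loop described above can be sketched in a few lines on 1-D toy data. Centers are seeded deterministically here for clarity (illustrative only; use sklearn's KMeans for real work):

```python
# Bare-bones k-means on 1-D points.

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # E-step: assign each point to its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # M-step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious blobs around 1 and 10
data = [0.9, 1.0, 1.2, 9.8, 10.0, 10.3]
centers = kmeans_1d(data, centers=[0.0, 5.0])
```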
4. Logistic Regression
Logistic Regression is constrained linear regression with a nonlinearity (usually the sigmoid function, though you can use tanh too) applied after the weights, restricting the outputs to be close to the +/- classes (1 and 0 in the case of sigmoid). The cross-entropy loss function is optimized using Gradient Descent. A note to beginners: Logistic Regression is used for classification, not regression. You can also think of logistic regression as a one-layer neural network. Logistic Regression is trained using optimization methods like Gradient Descent or L-BFGS. NLP people often use it under the name Maximum Entropy Classifier.
This is what a Sigmoid looks like:
Use LR to train simple, but very robust classifiers.
Library:
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
Introductory Tutorial(s):
https://www.youtube.com/watch?v=-la3q9d7AKQ
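The training loop described above, sketched with the standard library only: full-batch gradient descent on the cross-entropy loss for 1-D toy data (sklearn's LogisticRegression in practice):

```python
# Logistic regression trained by plain gradient descent.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(xs, ys, lr=0.5, epochs=500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # gradient of cross-entropy wrt z
            gw += err * x / n
            gb += err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# 1-D toy data: negatives below 0, positives above
xs = [-3, -2, -1, 1, 2, 3]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(xs, ys)
```

After training, sigmoid(w*x + b) > 0.5 marks the positive class.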
5. SVM (Support Vector Machines)
SVMs are linear models like Linear/Logistic Regression; the difference is that they have a different, margin-based loss function (the derivation of support vectors is one of the most beautiful mathematical results I have seen, along with eigenvalue calculation). You can optimize the loss function using optimization methods like L-BFGS or even SGD.
Another innovation in SVMs is the usage of kernels on data to feature engineer. If you have good domain insight, you can replace the good-old RBF kernel with smarter ones and profit.
One unique thing that SVMs can do is learn one class classifiers.
SVMs can be used to train a classifier (or even regressors)
Library:
http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
Introductory Tutorial(s):
https://www.youtube.com/watch?v=eHsErlPJWUU
Note: SGD-based training of both Logistic Regression and SVMs can be found in sklearn's http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html , which I often use, as it lets me check both LR and SVM with a common interface. You can also train it on larger-than-RAM datasets using mini-batches.
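Since the note above mentions gradient-based training, here is a toy linear SVM trained with full-batch subgradient descent on the hinge loss plus L2 regularization (the primal objective). The data, learning rate, and regularization strength are made up for illustration; use SVC or SGDClassifier for real work:

```python
# Toy linear SVM: minimize (lam/2)*||w||^2 + sum max(0, 1 - y*(w.x + b)).

def train_linear_svm(data, lr=0.1, lam=0.01, epochs=200):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        gw = [lam * w[0], lam * w[1]]  # gradient of the L2 term
        gb = 0.0
        for (x1, x2), y in data:
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:  # hinge loss is active: subgradient is -y*x
                gw[0] -= y * x1
                gw[1] -= y * x2
                gb -= y
        w = [w[0] - lr * gw[0], w[1] - lr * gw[1]]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Two well-separated classes with labels +1 / -1
train = [((2, 2), 1), ((3, 3), 1), ((-2, -2), -1), ((-3, -3), -1)]
w, b = train_linear_svm(train)
```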
6. Feedforward Neural Networks
These are basically multilayered Logistic Regression classifiers: many layers of weights separated by nonlinearities (sigmoid, tanh, relu + softmax, and the cool new selu). Another popular name for them is Multi-Layer Perceptrons. FFNNs can be used for classification and for unsupervised feature learning as autoencoders.
Multi-Layered perceptron
FFNN as an autoencoder
FFNNs can be used to train a classifier or extract features as autoencoders
Libraries:
http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier
http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html
https://github.com/keras-team/keras/blob/master/examples/reuters_mlp_relu_vs_selu.py
Introductory Tutorial(s):
http://www.deeplearningbook.org/contents/mlp.html
http://www.deeplearningbook.org/contents/autoencoders.html
http://www.deeplearningbook.org/contents/representation.html
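As a minimal illustration of "many layers of weights separated by nonlinearities", here is a forward pass through a tiny fixed-weight network that computes XOR, the classic function a single-layer model cannot represent. The weights are hand-picked for clarity; a real MLP would learn them by backpropagation:

```python
# Fixed-weight 2-layer network computing XOR.

def step(z):  # a hard-threshold nonlinearity, for illustration only
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # hidden unit 1: fires if x1 OR x2
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: fires if x1 AND x2
    return step(h_or - h_and - 0.5)  # output: OR but not AND = XOR

outputs = [xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```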
7. Convolutional Neural Networks (Convnets)
Almost any state-of-the-art vision-based machine learning result in the world today has been achieved using Convolutional Neural Networks. They can be used for image classification, object detection, or even segmentation of images. Invented by Yann LeCun in the late 80s / early 90s, convnets feature convolutional layers that act as hierarchical feature extractors. You can use them on text too (and even graphs).
Use convnets for state of the art image and text classification, object detection, image segmentation.
Libraries:
https://developer.nvidia.com/digits
https://github.com/kuangliu/torchcv
https://github.com/chainer/chainercv
https://keras.io/applications/
Introductory Tutorial(s):
http://cs231n.github.io/
https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/
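The core operation inside a convolutional layer can be sketched from scratch: slide a small kernel over an image and take dot products. Below, a hand-picked vertical-edge kernel responds strongly where a tiny binary image jumps from 0 to 1 (illustrative only; frameworks do this in optimized, batched form):

```python
# Valid-mode 2-D convolution (technically cross-correlation, as in most
# deep learning frameworks).

def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]       # vertical-edge detector
feature_map = conv2d(image, kernel)
```

The feature map lights up (value 2) exactly along the 0-to-1 edge, which is the "hierarchical feature extractor" idea in miniature.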
8. Recurrent Neural Networks (RNNs):
RNNs model sequences by applying the same set of weights recursively to the aggregator state at time t and the input at time t (given a sequence with inputs at times 0..t..T, and a hidden state at each time t which is the output of step t-1 of the RNN). Pure RNNs are rarely used now, but their counterparts like LSTMs and GRUs are state of the art in most sequence modeling tasks.
RNN (f here is a densely connected unit and a nonlinearity; nowadays f is generally an LSTM or GRU). An LSTM unit is used instead of a plain dense layer in a pure RNN.
Use RNNs for any sequence modelling task, especially text classification, machine translation, and language modelling.
Library:
https://github.com/tensorflow/models (Many cool NLP research papers from Google are here)
https://github.com/wabyking/TextClassificationBenchmark
http://opennmt.net/
Introductory Tutorial(s):
http://cs224d.stanford.edu/
http://www.wildml.com/category/neural-networks/recurrent-neural-networks/
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
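The weight-sharing idea can be sketched as a single-unit vanilla RNN forward pass: the same two weights are applied at every timestep, and the hidden state carries information forward, so the order of inputs matters. The weight values below are arbitrary illustrative constants (use LSTM/GRU layers from a framework for real tasks):

```python
# Single-unit vanilla RNN: h_t = tanh(W_IN * x_t + W_REC * h_{t-1}).
import math

W_IN, W_REC = 0.5, 0.8  # shared across all timesteps

def rnn_final_state(sequence):
    h = 0.0
    for x in sequence:
        h = math.tanh(W_IN * x + W_REC * h)
    return h

h1 = rnn_final_state([1, 0])
h2 = rnn_final_state([0, 1])   # same inputs, different order
```

Because the state is fed back through the recurrent weight, the two orderings produce different final states, which is exactly what makes RNNs sequence models rather than bag-of-inputs models.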
9. Conditional Random Fields (CRFs)
CRFs are probably the most frequently used models from the family of Probabilistic Graphical Models (PGMs). They are used for sequence modeling like RNNs and can be used in combination with RNNs too. Before Neural Machine Translation systems came along, CRFs were the state of the art, and in many sequence tagging tasks with small datasets they will still learn better than RNNs, which require a larger amount of data to generalize. They can also be used in other structured prediction tasks like image segmentation. CRFs model each element of the sequence (say a sentence) such that neighbors affect the label of a component in the sequence, instead of all labels being independent of each other.
Use CRFs to tag sequences (in Text, Image, Time Series, DNA etc.)
Library:
https://sklearn-crfsuite.readthedocs.io/en/latest/
Introductory Tutorial(s):
http://blog.echen.me/2012/01/03/introduction-to-conditional-random-fields/
7 part lecture series by Hugo Larochelle on Youtube: https://www.youtube.com/watch?v=GF3iSJkgPbA
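Decoding in a linear-chain model is usually done with the Viterbi dynamic program: pick the label sequence maximizing emission + transition scores, so neighboring labels influence each other. The scores below are made-up illustrative numbers (higher is better):

```python
# Viterbi decoding over a tiny 2-state tag set.

def viterbi(emissions, transition, states):
    # emissions: list over timesteps of {state: score}
    best = {s: emissions[0][s] for s in states}
    backpointers = []
    for em in emissions[1:]:
        new_best, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: best[p] + transition[(p, s)])
            new_best[s] = best[prev] + transition[(prev, s)] + em[s]
            ptr[s] = prev
        backpointers.append(ptr)
        best = new_best
    last = max(states, key=lambda s: best[s])
    path = [last]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    return list(reversed(path))

states = ["NOUN", "VERB"]
transition = {("NOUN", "NOUN"): -1.0, ("NOUN", "VERB"): 0.0,
              ("VERB", "NOUN"): 0.0, ("VERB", "VERB"): -1.0}
emissions = [{"NOUN": 2.0, "VERB": 0.0},   # e.g. "dogs"
             {"NOUN": 0.0, "VERB": 2.0},   # e.g. "chase"
             {"NOUN": 2.0, "VERB": 0.0}]   # e.g. "cats"
path = viterbi(emissions, transition, states)
```

The transition scores penalize repeating a tag, so the decoder alternates NOUN-VERB-NOUN: a toy version of "neighbors affect a label".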
10. Decision Trees
Let’s say I am given an Excel sheet with data about various fruits and I have to tell which look like Apples. What I will do is ask a question “Which fruits are red and round ?” and divide all fruits which answer yes and no to the question. Now, All Red and Round fruits might not be apples and all apples won’t be red and round. So I will ask a question “Which fruits have red or yellow color hints on them? ” on red and round fruits and will ask “Which fruits are green and round ?” on not red and round fruits. Based on these questions I can tell with considerable accuracy which are apples. This cascade of questions is what a decision tree is. However, this is a decision tree based on my intuition. Intuition cannot work on high dimensional and complex data. We have to come up with the cascade of questions automatically by looking at tagged data. That is what Machine Learning based decision trees do. Earlier versions like CART trees were once used for simple data, but with bigger and larger dataset, the bias-variance tradeoff needs to solved with better algorithms. The two common decision trees algorithms used nowadays are Random Forests (which build different classifiers on a random subset of attributes and combine them for output) and Boosting Trees (which train a cascade of trees one on top of others, correcting the mistakes of ones below them).
Decision Trees can be used to classify datapoints (and even regression)
Libraries
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html
http://xgboost.readthedocs.io/en/latest/
https://catboost.yandex/
Introductory Tutorial:
http://xgboost.readthedocs.io/en/latest/model.html
https://arxiv.org/abs/1511.05741
https://arxiv.org/abs/1407.7502
http://education.parrotprediction.teachable.com/p/practical-xgboost-in-python
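The "come up with questions automatically" step can be sketched as a single Gini-scored split (a decision stump) on a toy 1-D feature. The fruit weights below are made up; real libraries grow whole cascades of such splits:

```python
# Find the single best "is x < threshold?" question by Gini impurity.

def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n  # fraction of class 1
    return 1.0 - p * p - (1.0 - p) ** 2

def best_split(xs, ys):
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        # weighted impurity after asking "x < t ?"
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# "weight" of fruits: light ones are cherries (0), heavy ones are apples (1)
xs = [50, 60, 70, 150, 160, 170]
ys = [0, 0, 0, 1, 1, 1]
threshold, impurity = best_split(xs, ys)
```

The stump discovers the 150-gram threshold on its own, the same way a full tree learner would at each node.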
TD Algorithms (Good To Have)
If you are still wondering how any of the above methods can solve tasks like defeating the Go world champion, as DeepMind did, they cannot. All 10 types of algorithms we talked about above do pattern recognition, not strategy learning. To learn a strategy for solving a multi-step problem, like winning a game of chess or playing an Atari console, we need to let an agent loose in the world and let it learn from the rewards/penalties it faces. This type of Machine Learning is called Reinforcement Learning. A lot (not all) of the recent successes in the field are the result of combining the perception abilities of a convnet or an LSTM with a set of algorithms called Temporal Difference Learning. These include Q-Learning, SARSA, and some other variants. These algorithms are a smart play on Bellman's equations to get a loss function that can be trained with the rewards an agent gets from the environment.
These algorithms are mostly used to automatically play games :D, though they also have applications in language generation and object detection.
Libraries:
https://github.com/keras-rl/keras-rl
https://github.com/tensorflow/minigo
Introductory Tutorial(s):
Grab the free Sutton and Barto book: https://web2.qatar.cmu.edu/~gdicaro/15381/additional/SuttonBarto-RL-5Nov17.pdf
Watch David Silver course: https://www.youtube.com/watch?v=2pWv7GOvuf0
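A toy sketch of the TD core that deep RL pairs with a convnet or LSTM: tabular Q-learning on a tiny made-up corridor MDP (states 0..3, reward 1 for reaching state 3), where the agent should learn that moving right leads to the reward. The learning rate, discount, and sweep count are illustrative:

```python
# Tabular Q-learning on a 4-state corridor.

N_STATES, GOAL = 4, 3
ACTIONS = [-1, +1]          # left, right
ALPHA, GAMMA = 0.5, 0.9

def q_learning(sweeps=100):
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(sweeps):
        for s in range(N_STATES - 1):        # goal state is terminal
            for a in ACTIONS:
                s2 = min(max(s + a, 0), N_STATES - 1)
                r = 1.0 if s2 == GOAL else 0.0
                # TD target: reward plus discounted best next value
                target = r if s2 == GOAL else r + GAMMA * max(
                    Q[(s2, b)] for b in ACTIONS)
                Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    return Q

Q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

The greedy policy extracted from Q moves right in every non-terminal state, and the Q-values decay by gamma with distance from the goal, exactly as the Bellman equation predicts.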
These are the 10 machine learning algorithms which you can learn to become a data scientist.
You can also read about machine learning libraries here.
We hope you liked the article. Please sign up for a free ParallelDots account to start your AI journey. You can also check out demos of our APIs here.
Read the original article here.
| https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e | ['Shashank Gupta'] | 2018-05-07 12:58:31.423000+00:00 | ['Algorithms', 'Artificial Intelligence', 'Towards Data Science', 'Machine Learning'] |
Passwordless Authentication in React Native Using Facebook Account Kit (Part 2)
In this article we will create the services that we described in the introductory Part 1, to be ultimately used in Part 3 of this series.
Introduction
We will create a simple API with two services:
/auth: this will allow us to verify the code from Facebook Account Kit obtained from the application and identify or create the user on our database for future login attempts. This service will also return a JWT (JSON Web Token) to identify the user on the next requests that require security.
/me: based on the user credentials provided in the JWT, this endpoint will check with the database and return the required user data.
Project Setup
For this example we will use NodeJS’ framework Hapijs, but you can use other languages/frameworks that you feel comfortable with.
We will begin by creating a new project and installing the following dependencies:
yarn init
yarn add hapi hapi-auth-jwt2 jsonwebtoken lowdb node-fetch qs
yarn add -D nodemon
Then, open the package.json file and add the following line in the scripts section. This will allow us to refresh the server on the fly as changes are introduced:
"scripts": {
...
"start": "nodemon ./index.js -e js",
...
}
Finally, we create an index.js file and start coding.
You can find the complete code for this example on this Github Repository.
Configuring the server
In this step we'll add the necessary libraries and code a simple route to verify that everything is working properly.
It’s worth noting two important aspects of the code we include below:
The creation and initialization of a database to store users as they register. For simplicity, in this example we chose lowdb, which is a simple database that stores data on disk.
The configuration of a Hapijs strategy using hapi-auth-jwt2 that will allow us to intercept requests with a JWT on the Authorization header, verify that said JWT is valid, include the data as part of the request and access them easily by calling request.credentials from any route that we secure, as we will see further on.
The constant JWT_SECRET is included as part of this example to keep things as simple as possible, but in a real scenario we strongly suggest you obtain it from an environment variable. Moreover, we suggest you use a randomly auto-generated key (for example, using a tool such as openssl).
In order to verify this, we run the command yarn start and from a browser we request the page http://localhost:3000, which should return the text OK.
Programming the authentication service /auth
First of all, we create the following environment variables and constants.
If you wish to use the Facebook application that we created in Part 1 of this series of articles, you will have to replace the values.
The constant FACEBOOK_APP_SECRET is included as part of this example to keep things as simple as possible, but in a real scenario we suggest you obtain it from an environment variable.
We then replace the route we created above on this post with something that makes a little more sense:
1. Change the method GET to POST and configure the strategy using { auth: false } to avoid the interception of the request in search of a JWT.
2. Create the function getFacebookToken, which will be in charge of verifying the code provided by the user in the application, along with the id and secret of our Facebook application. Besides, we will use the access_token to check for more information on the Facebook user. In this case, the phone number.
3. Create the function getFacebookMe to check the user data using the access_token.
4. Last, we code the method handleAuth with the logic for the route in question. We save the user in the database if they didn't already exist, and generate and return the signed JWT.
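To make the "generate and return the signed JWT" step concrete, here is the idea in miniature, sketched in Python with only the standard library: sign a payload with an HMAC so the server can later verify it wasn't tampered with. This is a conceptual illustration of what the jsonwebtoken library does for us (it is not a compliant JWT implementation), and the payload fields are hypothetical:

```python
# Sign-and-verify token sketch, the concept behind a JWT.
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # in real code, load this from an environment variable

def sign(payload: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify(token: str):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: reject the request
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"id": 1, "phoneNumber": "+15550100"})
claims = verify(token)
```

Any modification to the token invalidates the signature, which is why the strategy can trust the credentials it extracts from a valid JWT.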
Programming the secure service /me
For the previous step to make sense, we will now create a route that will require the user to be registered. In this case, the route will return the user’s profile from the database:
We add the route and, in the configuration, include the parameter { auth: 'jwt' } so that the strategy we previously defined will return the client's credentials in the request.auth variable.
We create the method handleMe with the necessary logic to find a user by their identifier in the database and return the full profile.
And…you’re all set! All that’s left is to integrate the application with the services as described on Part 3.
Passwordless Authentication Using Facebook Account Kit
| https://medium.com/react-native-training/passwordless-authentication-in-react-native-using-facebook-account-kit-part-2-367a820f269c | ['Juan Pablo Garcia'] | 2018-11-09 14:32:45.306000+00:00 | ['Node', 'React', 'Authentication', 'React Native', 'JavaScript'] |
22 Things You Should Give Up if You Want to Be a Successful Developer
Discover what could be holding you back
Photo by ian dooley on Unsplash
When you become good at something, you can hit a wall in your development. No matter how hard you try, you feel like you can’t break through it. Pushing harder doesn’t pay off as much as before.
In this case, the solution might be not to add something but actually to remove something.
“It’s only by saying NO that you can concentrate on the things that are really important.” — Steve Jobs
Our habits and what we believe in determine 90% of our actions. To be a successful developer, we must become successful first in thoughts and then in actions.
By giving up certain habits and beliefs, you create space and time for the better.
| https://medium.com/better-programming/22-things-you-should-give-up-if-you-want-to-be-a-successful-developer-aaee8699185c | ['Dmitry Shvetsov'] | 2020-06-24 21:20:54.813000+00:00 | ['Software Development', 'Motivation', 'Self Improvement', 'Career Development', 'Programming'] |
Predicting Heart Failure Using Machine Learning, Part 2
The easy way to XGBoost parameter optimization
Photo by Robina Weermeijer on Unsplash
I predicted heart failure using Random Forest, XGBoost, Neural Network, and an ensemble of models in my previous article. In this post, I would like to go over XGBoost parameter optimization to increase the model’s accuracy.
According to the official XGBoost website, XGBoost is defined as an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable. It implements machine learning algorithms under the gradient boosting framework. XGBoost provides a parallel tree boosting (also known as GBDT, GBM) that solve many data science problems in a fast and accurate way.
XGBoost is very popular with participants of Kaggle competitions because it can achieve a very high model accuracy. The only problem with it is the number of parameters one has to optimize to get good results.
XGBoost has three types of parameters: general parameters, booster parameters, and task parameters. General parameters select which booster you are using to do boosting, commonly tree or linear model; booster parameters depend on which booster you have chosen; learning task parameters specify the learning task and the corresponding learning objective. A detailed description of all parameters can be found here.
Going over all parameters is beyond the scope of this article. Instead, I will concentrate on optimizing the following selected tree booster parameters to increase the accuracy of our XGBoost model:
1. Parameters that help prevent overfitting (aliases are for the XGBoost Python sklearn wrapper, which uses the sklearn naming convention)
eta [default=0.3, range: [0,1], alias: learning_rate ]
Step size shrinkage used in update to prevents overfitting. After each boosting step, we can directly get the weights of new features, and eta shrinks the feature weights to make the boosting process more conservative.
max_depth [default=6, range[0,∞]]
Maximum depth of a tree. Increasing this value will make the model more complex and more likely to overfit. 0 is only accepted in lossguided growing policy when tree_method is set as hist and it indicates no limit on depth. Beware that XGBoost aggressively consumes memory when training a deep tree.
min_child_weight [default=1, range[0,∞]]
Minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight , then the building process will give up further partitioning. In linear regression task, this simply corresponds to minimum number of instances needed to be in each node. The larger min_child_weight is, the more conservative the algorithm will be.
gamma [default=0, range[0,∞], alias: min_split_loss ]
Minimum loss reduction required to make a further partition on a leaf node of the tree. The larger gamma is, the more conservative the algorithm will be.
subsample [default=1, range: [0,1] ]
Subsample ratio of the training instances. Setting it to 0.5 means that XGBoost would randomly sample half of the training data prior to growing trees. and this will prevent overfitting. Subsampling will occur once in every boosting iteration.
colsample_bytree [default=1, range [0,1]]
Colsample_bytree is the subsample ratio of columns when constructing each tree. Subsampling occurs once for every tree constructed.
lambda [default=1, alias: reg_lambda ]
L2 regularization term on weights. Increasing this value will make model more conservative.
alpha [default=0, alias: reg_alpha ]
L1 regularization term on weights. Increasing this value will make model more conservative.
2. Parameter to handle imbalanced dataset
scale_pos_weight [default=1]
Control the balance of positive and negative weights, useful for unbalanced classes. A typical value to consider: sum(negative instances) / sum(positive instances) .
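The "typical value" suggested above is a one-liner to compute. Here is a quick stdlib-only example on a made-up imbalanced label set:

```python
# scale_pos_weight starting value: sum(negative) / sum(positive).

labels = [0] * 80 + [1] * 20        # 80 negatives, 20 positives (made up)
neg = labels.count(0)
pos = labels.count(1)
scale_pos_weight = neg / pos        # starting point to pass to the model
```

You would then pass this value as scale_pos_weight when constructing the classifier, and tune it from there (the article ultimately settles on 4 for its dataset).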
3. Other parameters
n_estimators [default=100]
Number of gradient boosting trees. Equivalent to number of boosting rounds.
All the above parameter definitions are from the official XGBoost website.
With this abbreviated knowledge of tree booster parameters, let’s import libraries, load our dataset, create independent and dependent variables, and split our dataset into training and testing sets.
Now, let’s train our model with the default parameters.
As you can see, even with default parameters, the model provided us with acceptable results. To find optimal parameters, I used GridSearchCV, a function from sklearn's model_selection package. It helps loop through predefined hyperparameters and fit our model on the training set, so in the end we can select the best parameters from the listed hyperparameters. I tried three values for each of the following parameters: learning_rate, max_depth, min_child_weight, gamma, subsample, and colsample_bytree.
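For intuition, this is roughly what a grid search does under the hood: enumerate every combination of candidate values and keep the best-scoring one. The evaluate function below is a hypothetical stand-in for "fit the model with these params and return a validation score", and the candidate values are illustrative:

```python
# Hand-rolled grid enumeration: three values per parameter, 3^6 combos.
from itertools import product

param_grid = {
    "learning_rate":    [0.1, 0.2, 0.3],
    "max_depth":        [4, 6, 8],
    "min_child_weight": [1, 3, 5],
    "gamma":            [0.0, 0.1, 0.2],
    "subsample":        [0.8, 0.9, 1.0],
    "colsample_bytree": [0.8, 0.9, 1.0],
}

def evaluate(params):
    # Placeholder scoring function: pretend 0.2 is the best learning rate.
    return -abs(params["learning_rate"] - 0.2)

names = list(param_grid)
combos = [dict(zip(names, values)) for values in product(*param_grid.values())]
best = max(combos, key=evaluate)
```

GridSearchCV adds cross-validation and parallelism on top of this loop, which is why it is the practical choice even though the idea is this simple.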
Next, I trained our model with updated parameters. Since I was decreasing learning_rate, I increased the number of gradient boosting trees (n_estimators).
With this one simple step, I managed to increase validation accuracy from 76.67% to 80.00%, and validation AUC (area under the curve) from 68.81% to 75.48%.
With plenty of time and computing power, one can expand the range of values for the booster parameter search and use the GridSearchCV results as a base for further parameter investigation. For example, if GridSearchCV decreases learning_rate from the default of 0.3 to 0.2, in the next round of search we can move the range further to the left, like [0.05, 0.1, 0.2].
Next, I tried to optimize regularization parameters reg_lambda and reg_alpha.
The GridSearchCV found default values of reg_alpha and reg_lambda to be optimal. The last parameter left to optimize was scale_pos_weight, which I kept increasing until a value of 4 gave me the best results. I then trained my model with these final optimized parameters. Below is the code with extra metrics, including sensitivity, specificity, positive predictive value, and negative predictive value.
Conclusion: With the help of GridSearchCV, in a few simple steps, I managed to increase validation accuracy from 76.67% to 78.33% and validation AUC (area under the curve) from 68.81% to 77.09%. Although one can get the best accuracy improvements with feature engineering, XGBoost parameter optimization is also worthwhile.
Thank you for taking the time to read this post.
Best wishes in these difficult times.
Andrew
@tampapath | https://medium.com/analytics-vidhya/predicting-heart-failure-using-machine-learning-part-2-b343471dbde8 | ['Andrew A Borkowski'] | 2020-10-10 16:11:52.952000+00:00 | ['Parameter', 'Xgboost', 'AI', 'Optimization', 'Machine Learning'] |
Supervised Learning Algorithms | Hmmm….Algorithms huh!!!
In my last article, I pledged that the next one would be about algorithms.
Here I am, buddies.
Algorithms are the core of building machine learning models, and here I provide details about most of the algorithms used for supervised learning, to give you an intuitive understanding of where to use each one and where not to.
By the end of this article, you will have an intuitive level of understanding of these algorithms.
CAVEAT: I AM NOT DESCRIBING THE MATH BEHIND THESE ALGORITHMS, BUT RATHER HOW THEY WORK AND WHERE TO USE THEM.
So, folks, here we go.
1.NAIVE BAYES
Naive Bayes is a family of classification algorithms based on Bayes' theorem, and it is one of the foundational algorithms to know for machine learning.
Advantages:
It is very helpful for handling large datasets and generalizes accurately on them. It is applied mostly to classification problems, e.g., spam detection and filtering, sentiment analysis, fraud detection, recommendation engines, etc.
Disadvantages:
It is naive, i.e., it doesn't understand data in an ordered format, as in text learning. (Still, it is preferred for its speed and ease of use.)
For example, it is poorly suited to problems like stock price prediction.
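As a minimal illustration of the spam-detection use case mentioned above, here is a scikit-learn sketch. The word counts and labels are made-up toy data, purely for illustration.

```python
# Minimal Naive Bayes sketch on a toy spam-like bag-of-words problem.
from sklearn.naive_bayes import MultinomialNB

# rows: word counts per message; label 1 = spam
X = [[3, 0, 1], [2, 0, 0], [0, 2, 3], [0, 3, 1], [1, 0, 0], [0, 1, 2]]
y = [1, 1, 0, 0, 1, 0]

clf = MultinomialNB().fit(X, y)
pred = clf.predict([[2, 0, 1]])[0]   # message dominated by "spammy" words
print("predicted class:", pred)
```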
2.LOGISTIC REGRESSION
Logistic regression sounds, by its name, like an algorithm for regression, but in fact it is a classification algorithm. It is a linear model and one of the simplest classifiers.
Pros:
It is simple and interpretable. It works best for linearly separable data, i.e., when the classes we are trying to predict are non-overlapping and can be split by a straight line (or hyperplane).
Cons:
When the class boundary is non-linear, it will fail. It can't handle complex problems.
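A minimal logistic regression sketch on (roughly) linearly separable synthetic data; the dataset parameters are illustrative assumptions.

```python
# Logistic regression works well when classes are close to linearly separable.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, n_informative=2,
                           n_redundant=0, class_sep=2.0, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")  # high, since the data is ~separable
```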
3.LINEAR REGRESSION
Linear Regression is also a linear model but used for regression problems.
Advantages:
It is also simple, interpretable and hard to overfit. It is best when the relationship between input and output variables is linear.
Disadvantages:
It will underfit the data when the relationship between input and output is non-linear, i.e., it fails to generalize non-linear data accurately. It also can't model complex relationships.
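A linear regression sketch on synthetic data with a genuinely linear input/output relationship:

```python
# Linear regression recovers a linear relationship almost exactly.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=100, n_features=3, noise=5.0, random_state=0)
reg = LinearRegression().fit(X, y)
r2 = reg.score(X, y)
print(f"R^2 on linear data: {r2:.3f}")  # close to 1 when the data is linear
```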
4.K_NEAREST_NEIGHBORS
It is an algorithm that has the ability to model non-linear data as well as linear data efficiently. It is used for both regression and classification problems.
Advantages:
While simple and interpretable, it is highly flexible and efficient at learning more complex, non-linear relationships. It is used in recommender systems, like those at Netflix and Spotify.
Disadvantages:
It doesn't work well as the number of observations and features grows, i.e., it doesn't generalize well to large datasets.
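A k-nearest-neighbors sketch on a synthetic non-linear dataset; the choice of k=5 and the moons dataset are illustrative assumptions.

```python
# KNN handles non-linear class boundaries by looking up the k closest points.
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # non-linear classes
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
acc = knn.score(X, y)
print(f"accuracy on non-linear data: {acc:.2f}")
```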
5.SUPPORT VECTOR MACHINES (SVM)
SVMs are highly flexible algorithms that construct a separating boundary between classes. They can be used for both regression and classification.
Advantages:
It can handle complex datasets as well. It works for nonlinear data too.
Disadvantages:
Prone to noise. Don’t work well for large datasets.
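An SVM sketch on data that no straight line could separate; the RBF kernel and the circles dataset are illustrative assumptions.

```python
# An RBF-kernel SVM separates concentric classes a linear model cannot.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, noise=0.1, factor=0.4, random_state=0)
svm = SVC(kernel="rbf").fit(X, y)
acc = svm.score(X, y)
print(f"accuracy on circular classes: {acc:.2f}")
```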
6.TREE-BASED METHODS
Tree-based methods are among the most effective algorithms developed for solving extremely complex problem domains. They are suitable for both classification and regression problems.
There are many tree based methods:
1. Decision tree, 2. Bagging, 3. Random Forests, 4. Boosting (Gradient Boosting, AdaBoost, XGBoost).
Advantages:
These methods are among the best supervised learning choices for prediction problems. They handle complex relationships, along with missing data and categorical features, in an adept way.
Disadvantages:
Difficult to interpret, and the model might take a long time to train as well.
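A sketch comparing a single decision tree with a random forest (bagged trees) on the same synthetic data:

```python
# A single decision tree vs. a random forest on identical train/test splits.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
tree_acc = tree.score(X_test, y_test)
forest_acc = forest.score(X_test, y_test)
print(f"decision tree: {tree_acc:.2f}, random forest: {forest_acc:.2f}")
```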
7.NEURAL NETWORKS
Neural networks are the state-of-the-art technique for generalizing even the most complex problems out there in the world. These algorithms fall under deep learning, which is the most complex yet the most effective approach for handling cumbersome problems and getting the best metrics. Since these methods are really complex, we should first try the simpler models above before getting our hands dirty with neural networks.
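A small neural-network sketch using scikit-learn's MLPClassifier; for the truly complex problems mentioned above you would reach for a dedicated deep-learning framework instead. The layer sizes and dataset are illustrative assumptions.

```python
# A tiny multi-layer perceptron learning a non-linear boundary.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=300, noise=0.2, random_state=1)
mlp = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=1)
mlp.fit(X, y)
acc = mlp.score(X, y)
print(f"accuracy: {acc:.2f}")
```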
Hooo🥱…..finally the article is over, but not the learning process. I have provided a basic understanding of these algorithms used for machine learning from an intuitive perspective, so that you can grasp them with ease. Next, it's up to you to get more adept at these topics.
I hope you got a feel for these algorithms from this article. I hold my pen here. Oops, I take my hands off my keyboard😂😂.
Anyway….
Thank you. And yeah, be happy and don't worry. Just take a small step at a time and you will reach the summit in a jiffy. | https://medium.com/analytics-vidhya/supervised-learning-algorithms-ad934e0b1834 | ['Anjan Parajuli'] | 2020-09-16 14:39:49.185000+00:00 | ['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Deep Learning', 'Algorithms'] |
The Revelation of Johnnie Ray | There’s hints of him in a few places, like in the 1982 hit song “Come On Eileen” by Dexys Midnight Runners.
Poor old Johnnie Ray,
Sounded sad upon the radio;
He moved a million hearts in Mono.
Our mothers cried;
Sang along, who’d blame them.
He’s in Billy Joel’s 1989 song “We Didn’t Start the Fire.” A 1994 biography by Jonny Whiteside is out of print.
Academics discuss him sometimes. “Few details about Johnnie Ray conform to sexual scripts of the postwar era,” notes Vincent L. Stephens in Rocking the Closet. “Deaf, effeminate, bisexual, openly approving of black culture, and loathed by many music critics, he was seemingly an anomaly.”
I pull up newspaper archives, trying to follow the story as it’s happening. He wasn’t the first to sing “Cry.” He was the first man. It’s a woman’s song. The narrative is one woman is talking to another who’s been dumped by a boyfriend, now sits listening to the radio.
“So let your hair down and go right on and cry . . .”
In January 1952 a mystified reviewer tries to follow the fluctuations in gender. “His voice breaks tearfully over a phrase as if it couldn’t go on, but it rises manfully from the abyss of its own despair. Frequently it lingers longingly over a word with the secret regret of an old maid fingering a faded rose in her memory book.”
Then Johnnie Ray moves — the way men usually don't. "His body rocks, his fists are clenched desperately, and his tightly shut eyes frequently yield tears in the middle of a song." He's interviewed. "I really try not to cry," he says. "I mean, after all, it doesn't look very masculine on the floor of a nightclub or in a recording studio. But I get lost in a song, in a sad song, and I can't help it. The tears just come."
Why does he cry? “Maybe there’s a sadness in me, from when I was a boy,” he says. “I was half deaf and I still wear a hearing aid. As a kid I couldn’t hear the teachers very well. Other kids got tired of shouting. I was left by myself.”
His plans include saving money, and getting married. “I’ve never been in love,” he says. “I’d like to be. I’d like to be very much.”
“There is no in-between on Johnnie Ray’s almost weird singing style — you either like it intensely or you detest it,” a reviewer writes.
The latter is more evident in newspaper coverage. “There is an affliction upon us, momentarily, in the form of a singer called Johnnie Ray,” goes another update.
But his live shows became, briefly, unmissable. In her memoir, Rosemary Clooney recalls catching one at the Copa. "Frankie Laine was there, Tallulah Bankhead, the Duchess of Windsor." She notes Marlene Dietrich and Yul Brynner there, separately, though everyone knows they're having an affair. Johnnie sang, Clooney writes, "and everyone waited for him to fall down on his knees and cry and do all the outlandish things that people had never seen done on the Copa stage before."
Dietrich, I notice, is quoted in a Leonard Lyons showbiz column speaking of the show, saying she approves of the ‘exaggerated mannerisms’. “There’s too much underplaying in show business today,” she says.
He got good notices. “It was a masterful display of showmanship that evoked mass hysteria resembling a Holy Roller meeting,” Billboard reports.
The New York Times sent its classical music reviewer. “When he sings ‘Cry’, Johnnie Ray looks and sounds like a man possessed by an unassuageable grief,” he writes. The kids like it, but it’s all a passing fad, he thinks. “His pain may be their pain. His wailing and writhing may reflect their secret impulses. His performance is the anatomy of self-pity.”
I read that and think: they're reading him as gay, and probably all know about the incident in the bathroom—Johnnie's run-in with a vice officer in Detroit the previous year.
He’d perform for decades, but it’s like he tried to turn into someone else. As Vincent L. Stephens puts it: “Ray attempted to downplay the very things that made him interesting. He went for middle of the road instead of propelling himself forward musically and culturally in the lane he had already carved out.”
The pivotal moment might be when Ray marries in May 1952. “Crying Crooner is Paralyzed at Wedding,” the headlines read. As the report reads: “Even in the small room, none could hear him say ‘I do’.”
He was born January 10, 1927 in Dallas, Oregon.
“That son of a gun could play the heck out of a piano,” a childhood friend says. I’m flipping through Whiteside’s biography. Ray grew up Christian during the Depression, with a real strong mother and almost totally silent father, emotionally destroyed after losing a family farm.
Little Johnny sings in church, loves country music, then black music. “We could hear him all over the neighborhood,” a childhood friend recalls. He left an impression for curious clothes and odd episodes, like dying his hair green. “Show business was his obsession, all the time,” his sister says.
He’d later describe performing in nearly religious terms, as a ‘calling’, saying: “I had to become what I had to become.”
Age 13, at a swimming hole, he was with a group of boys, throwing each other in the air using a pup tent. He was up ten feet in the air, and coming down, fell on the grass. A blade of grass jammed into his left ear, punctured his ear drum and he lost half his hearing. Disoriented, he walked home.
Unsure what his problem was, he drifted into a private world. When he got his first hearing aid, he sat raptly listening to rainfall, as his family cried.
Though it’s not clear, here as always, how reliable he is when telling stories of his past. The scholar Cheryl Herr, in a 2009 paper on Ray’s hearing disability, suspects the accident story was invented, and the hearing loss was congenital. As a boy, he’d have denied it as long as he could, “as though to accept the diagnosis of a severe hearing impairment were deeply embarrassing and painfully dishonoring to his family.”
His sister tells Whiteside: “Mama was very domineering, and if she could have called it, she would have kept him in her little world, tied by the apron strings.”
His father, suddenly, spoke up, saying Johnnie should go follow his dreams. “Go as far as you can go, and if when you get there you don’t have the money to get home, I’ll help you.”
Ray went to L.A. to become a movie actor, but got no interest. “I wasn’t handsome,” he’d say. “I knew I wasn’t going to be getting any leading man parts, so I did what I had to do to stay in the business, which was to sing and play the piano.” He’d credit one of the strippers with taking him under her wing. “She taught me stage presence and she and a couple of comics were always on hand to throw a drunk out if he tried to pick a fight.”
All along, he was working on his hazy sexuality. “There were women and there were men,” an L.A. friend tells Whiteside. “He was a little little kid with a big dick. He liked to talk about that and people liked to hear about it.” When he’s with women, it seems like they’re doing a lot of work.
By 1949, after a year or so in L.A., he was ready to go home. “Starved to death, literally—I remember stealing lemons for breakfast once,” he recalls. He called home and his dad sent him money for a return bus trip.
He came back and worked clubs in Oregon, worked on his songwriting, and on becoming an alcoholic—his lifelong romance.
There was God in there too. He had a cosmic moment lying on a riverbank at night on the Umpqua River, saying up to the heavens, "I guess I'm licked." Staring at a passing cloud, he 'heard things', he said, and that night wrote a song, "The Little White Cloud that Cried." It would be the B-side to "Cry," and set the narrative focus for his career, a descent into a realm of taboo: male grief and male feeling.
In club appearances he started getting wild, “very physical and demonstrative,” he’d say. “I had difficulty keeping jobs.” He worked across the country, getting a footing in Detroit, where clubs let him go crazy onstage. He recalls later “the piano bench would go flying across the stage, draperies would be coming down, the piano would get beaten, the music scattered everywhere. I was all over the place.”
A talent scout for Columbia Records thought Johnnie was a superstar. His bosses didn’t agree.
“They all thought I was pitching a girl who sounded like Dinah Washington!” Danny Kessler tells Whiteside. “Finally, I convinced them that she was a boy, and then I had to break the news that she was a white boy.”
Local police were even less impressed. Having spotted Ray as a “scrawny white queer with the gizmo stuck in his ear,” in Whiteside’s words, they grabbed him one night at a bar when he hit on a cop. Johnnie Ray’s pick-up lines are preserved in the incident report.
“Hi, hot isn’t it?” he says.
“Sure is,” the cop replies.
“I know where it’s cool,” Johnnie says. “Real cool.”
“Where might that be?” the cop asks.
“My place,” Johnnie replies.
They scuffled when the handcuffs came out. Hauled in, Ray turned off his hearing aid, pled guilty, and returned home.
He was friends with Tony Bennett, who’d credit him as “the father of rock and roll.”
In an interview with Whiteside, Bennett recalls that music at the time was "very sentimental, lovely, well-written songs, done very sweetly. But Johnnie became a visual performer. He was the first to charge an audience." In part it was because Ray, when performing, couldn't hear himself.
In his inner silence, he seemed to go into a hyperactive state. The piano bench was inevitably sent flying, the curtains torn, as Johnnie pioneered in taking the microphone on trips around the stage. “He broke tempo, rules, piano lids, music stands and hearts every time he performed,” Whiteside writes. “More and more, his audiences responded in kind.”
With little idea that his record had even been released, he went to a promotional appearance to find hundreds of teenage girls waiting. They were for him.
He put out for them, getting “pretty carried away,” a friend recalls, “so he was rolling on the floor, crying and tearing his hair out, but that’s not the worst. He then stood up, took off one of his loafers, which had a metal cleat on it, and commenced banging out the rhythm on the top of this brand new, uninsured grand piano! And the audience was going crazy, absolutely crazy.”
The birth of rock owed much to a demographic shift. The male population was depleted. After the carnage of World War 2, there was the Korean War. Girls were thirsty? Maybe for a man who wasn't intent on planetary destruction.
“Cry” was written by Churchill Kohlman, a black man working as a night watchman at a Pittsburgh dry cleaning factory.
It would be perfect to capture his emotional fervor and, as Whiteside notes, “unorthodox blend of male power and female sensitivity . . .”
“There was an electricity to that recording session that nobody could put their finger on,” Ray would recall.
Johnnie Ray and Churchill Kohlman (1952)
Released in late October 1951, the song was a cultural phenomenon. He tried to present his fame as an open license to being emotional. “People are too crowded inside themselves these days. They’re afraid to show any love. And boy, what is the primary existence for existing? It’s beauty and love.”
But his theme was grieving. “Cry” was a song about male feelings—locked up in a dungeon, now pouring out.
It made him into a figure of fun. He was the ‘Cry Guy’, the ‘Prince of Wails’. “He is America’s Number One Public Weeper,” a headline read.
And then Johnnie had a girlfriend. Herman Hover, who owned the Earl Carroll Theater and hosted Ray shows, gives Whiteside a colorful interview. “Someone told me that Johnnie was homosexual,” he says, “but I didn’t place any weight in that because most of your top attractions are homosexuals. It was so prevalent that it didn’t matter.”
What mattered was business, and Marilyn Morrison, the girl pursuing Johnnie, was the daughter of a prominent club owner on the Sunset Strip. A clear effort, in his mind, to take control of Johnnie’s career.
But Johnnie liked a woman in charge, as much as his dad did. Marilyn was adept at navigating the world of showbiz, arranging his business contacts, and staging photos. After they divorce in 1954, he becomes a sort of companion to gossip maven Dorothy Kilgallen.
Rosemary Clooney notes: “Johnnie might be gay, as people said, but clearly not all the time.”
His real romance might be with alcohol. His longtime road manager, Tad Mann, later writes a memoir, Beyond the Marquee. “With most of Johnnie’s love affairs there was a correlation of one’s ability to drink glass to glass.”
His thoughts on the Kilgallen thing? “Though she would find moments with him memorable she knew Johnnie would be unable to break away from his male relationships.”
An icon of gay history, if just for his music. In “(Here Am I) Heartbroken” he sings, “It’s bad enough that I lost her / I had to lose him too.”
And a 1959 song, “A Sinner Am I”—where it’s not clear if he’s singing to God or a lover who isn’t described.
Tell me am I a sinner?
Tell me am I wrong?
Something is wrong with the picture
This is not my fault
So am I a sinner?
Am I a sinner?
In 1959, I’d say, that’s as gay as it gets.
In 1984, interviewed at his home in Hollywood, he’s asked if he ever sings happy songs.
He did once, he says, but the audience didn’t like it. “What they wanted from me was guilt and rejection. I gotta be pretty broken-hearted for them to enjoy me.”
In St. Louis, later the same year, he’s “struggling most of the time,” a reviewer thinks. His voice “no longer has much range, and he could hardly present the material with much of the impact he once had.” Only “Cry” and “The Little White Cloud that Cried” seem to come across.
He’s aging badly. There’s a darkness. He does AIDS benefits. He never publicly addresses his sexuality. In 1988, he’s in Australia, being interviewed before a show. How does he manage to keep crying on cue?
“I think I can move myself to tears on stage because I am a sensitive person,” he says. “I had an isolated and lonely childhood. My tears were always genuine and not just part of the performance.”
He never enjoyed fame, he says. “It was a lonely life. It was difficult to meet people, especially girls. There were lots of them hanging around but I was far too gallant to take advantage.”
He looks ‘haunted’ a moment, the reporter thinks.
October 7, 1989. He plays the Grand Theater in Salem, Oregon. It’s the first time he’s ever performed near where he grew up. “I’m a little dodgy about that,” he tells a reporter in his hotel room. “These people have never seen Elmer’s kid work.”
Both his parents had died. He gives the newspaper a photo of him with them, a family, and talks about his career. “All of a sudden, Johnnie Ray came along, and that wasn’t fun at all. There was a lot of controversy.”
He says he’s writing a memoir. “Some of it is kind of painful, and some of it is kind of funny,” he says.
He dies months later of liver failure. The concert in Salem is the last item noted in his obituary. “It doesn’t get any better than this,” he’d told the audience. | https://medium.com/prismnpen/the-queerest-sound-1b234b9eeb80 | ['Jonathan Poletti'] | 2020-06-27 19:57:34.281000+00:00 | ['Creative Non Fiction', 'Sexuality', 'LGBTQ', 'History', 'Music'] |
How to Reverse Diabetes and Lose Belly Fat in 60 Days | How to Reverse Diabetes and Lose Belly Fat in 60 Days
In less than two months, I dropped my HbA1c from 8.2% to 5.8% and lost 24 pounds by making targeted habit changes
Photo by Ehimetalor Akhere Unuabona on Unsplash
The title sounds like a bold claim, right? It’d be hard to believe it if I hadn’t done it myself. Here’s the story of how I did it — and how you can too.
On August 17, my doctor said one sentence that rocked my world: “You are a Type 2 diabetic.” Naturally, she prescribed diabetes-management drugs.
I was like, “No, thanks. Diabetes is a lifestyle condition. I’ll make lifestyle changes to reverse it.”
With the experience of someone who’s probably had this discussion more than a few times previously, she said, “That’s not likely.”
Hold my beer.
Less than a month later, on September 12, I had dropped 14 pounds and 1.5 inches of waistline. A second blood panel showed my triglycerides and LDL cholesterol had dropped to normal levels — for the first time in my life — and my HbA1c level had dropped — but still indicated diabetes. (It was at 8.0% against a normal max of 6.0%.)
On October 15, I had dropped another 10 pounds and two more inches of belly fat. My third blood panel showed that my HbA1c level had moved to 5.8% — within the normal range.
I had started at 194 pounds and was at 170 at that point. My target weight is 155, and so my journey continues. But here’s how I achieved these remarkable results within just 60 days. | https://medium.com/better-humans/how-to-reverse-diabetes-and-lose-belly-fat-in-60-days-7f73c3d48c27 | ['Bob Wuest'] | 2020-11-21 11:49:20.967000+00:00 | ['Health', 'Diabetes', 'Keto', 'Weight Loss', 'Fasting'] |
Lies about Statistics | As an applied scientist, understanding and use of statistics properly is essential. Unfortunately, statistics is the most misunderstood and least developed skill in the populace. This is amazing to me because all children under the age of five are natural statisticians, more specifically they learn using stochastic processes.
Stay with me. A stochastic process isn't bad just because it sounds like a scary word; it's your friend, embrace it. But what is it? Let's use a simple analogy.
A child drops a spoon while eating. The parent picks it up and gives it back to the child. A younger child repeats this process more times than an older child. The child is performing an experiment using a stochastic process. In other words, the child is using statistics to determine how likely a spoon will hit the ground when dropped. (Basically testing gravity). Gravity is pretty unforgiving, so the child learns that a dropped spoon will ALWAYS hit the ground (i.e., the outcome is 100% certain). 100% certain is another phrase for a 100% probability that an attempt will have the same outcome. This is the basis of statistics and a stochastic process. After this is learned, performing the experiment may be amusing, but it does not provide any more knowledge, so the child moves on to another experiment. Usually, something that will test the boundaries of a parent's patience.
Almost all college students need to pass the dreaded statistics course, even liberal arts majors (who try to avoid science in favor of art and humanities — something that does not require math). The best statistics teacher I knew in college was my political science professor. After passing the course, almost all students continue to avoid the subject. This is because it is highly technical and difficult to do properly. The key here is properly.
The core reason statistics are synonymous with lies is that doing statistics properly is hard. Studies that are wrong are published all the time. This reiterates the belief that statistics are bullshit. Bad statistics are worse than bullshit; they are usually dangerous. Testing a vaccine is ALL about statistics. Almost all the statisticians I know crunch numbers for drug research. Not releasing a vaccine before proper research is done (i.e., before the probability of it failing or killing someone is shown to be as close to 0% as possible) is pretty important.
It's sad to say that some really smart people making important decisions are bad at statistics. The first step in any study is determining if the data is valid. This is accomplished using the "run test". It basically determines if the data comes from a repeatable process where each point is collected under the same conditions. This is where the problem occurs. Conditions change. Unless the conditions are locked down, it's not possible to determine the probability of a cause having an effect. For anything other than dropping a spoon, controlling conditions is hard. It's even harder when trying to determine cause and effect when the process used to collect the data is unknown.
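The run test mentioned above can be sketched in a few lines. This is a minimal version of the Wald-Wolfowitz runs test: it compares the observed number of runs in a two-valued sequence to what a random process would produce, and a z-score far from zero suggests the data is not a plausible random sample.

```python
# Minimal Wald-Wolfowitz runs test sketch (two-valued sequences only).
import math

def runs_test_z(seq):
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    n1 = sum(1 for x in seq if x == seq[0])
    n2 = len(seq) - n1
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return (runs - mean) / math.sqrt(var)

z = runs_test_z([0, 1] * 10)   # perfectly alternating: far too many runs
print(f"z = {z:.2f}")          # |z| well above 2 flags non-random data
```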
Public officials are not required to use statistics to make decisions. This has a direct effect on public welfare. The wide variation in directives issued in response to COVID-19 outbreaks is making this clear. In reality, determining the cause of an outbreak is the only way to determine if changing behavior will have an effect on the outcome. The US strategy of urging self-quarantine and taking away ALL opportunities for infection is basically killing a fly with a cannonball. This is because experiments to determine cause are not or cannot be performed in a manner that allows statistics to be valid. Without using this scientific method, we are left to trust the wisdom of our leaders and governors.
Wisdom is a funny thing. It only works when the conditions that led to a belief are valid for the context in which the wisdom is used. For instance, restricting the occupancy of a restaurant affects the mortality rate of COVID-19. This may be true in some instances, but to what degree depends on conditions. Some governors of localities (i.e., mayors) have measured the infection rates traced back to restaurant visits. At least one calculated a 1.5% infection rate; basically, it is 98.5% UNLIKELY that an infection occurs from a restaurant visit. Should restrictions be placed on restaurants across the board if this is true? If conditions across the board are the same, NO. If conditions across the board are NOT the same, YES.
It is pretty widely known that a single person who does not wear a mask properly, or at all, infects a large area. It is best to restrict the movements of this "polluter" to contain the pathogen. In a sit-down restaurant with good practices, people don't move around much. In a fast-food restaurant with self-service, people move around a lot. They are more likely to be in a hurry and less likely to be "polluters". So, maybe restricting self-service restaurants is warranted but restricting sit-down restaurants is not.
How do we know? Performing experiments would tell us. Let’s say we have two restaurants in the same type of community with staff trained in the same manner. Say the restaurants have the same owner. Let’s also say the restaurant has surveillance that records activity. Data can be gathered to determine the likelihood of “polluting” activities. This data can be analyzed statistically to determine cause and effect. One restaurant could impose a stricter policy, say 25% occupancy vs. another at 50% occupancy. If the data indicates the policy is being followed, the opportunity for pollution can be calculated to a reasonable degree of certainty. Guidelines and directives can therefore be more detailed and more effective, without completely shutting down entire sub-economies, like the restaurant industry.
OK, so maybe this is hard, but it is necessary. It will remain necessary until inoculations are ubiquitous enough to make the probability of infection too low for concern. Say 1 in a million infections per month across the US. This is possible. It took decades to eliminate polio and measles through inoculations (after development of an effective vaccine). But it took a couple of years to eradicate malaria in the area around the Panama Canal during construction. (A massive force killing mosquitoes was employed and breeding grounds were found and destroyed relentlessly.)
The notion that you can make statistics say whatever you want is only true for BAD statistics. Obtaining GOOD statistics is hard, and distinguishing between GOOD and BAD statistics is also hard. So, yeah, you can make BAD statistics say whatever you want. However, making a GOOD decision requires a GOOD statistical study with high probability from a trusted (i.e., tested/competent) source. If the study results in a probability of a specific cause having a specific effect that matches the conditions being controlled by a decision, the result is predictable. It is rarely possible to do this; therefore GOOD statistics are very hard to come by.
That being said, the effort to base decisions on GOOD statistics MUST not be abandoned. If anything, more education on statistics is needed. We are tested periodically to determine if we have the skill to drive. Having a test for making decisions based on good statistics seems like a good idea, but unlikely to occur. A good leader will take the time to appreciate this skill and surround themselves with people who provide good statistics and detect when the conditions under which they are employed are valid. This requires the leader to listen before AND AFTER a policy is made and deployed. A good statistician will be vocal about the validity of the policy given what effect it actually has and bring new recommendations to the leader's attention. A good leader will change the policy and recognize that conditions in the real world may or may not be the same as those used to calculate the statistics. When conditions are not the same, better statistics must be calculated and policies adjusted. If done properly the matter will be resolved as quickly as possible. | https://medium.com/out-of-your-mind/lies-about-statistics-6b59e3f6e7aa | ['Joe Bologna'] | 2020-11-08 14:45:49.831000+00:00 | ['Society'] |
Image classification (persons, animals, other) on raspberry pi from pi-camera (in live time) using custom model .h5 (output to terminal), using TensorFlow | Note before you start:
So, let’s start :)
Hardware preparation:
Software preparation:
1. Create a neural network model to predict 3 classes: persons, animals, others.
I have already done it in the article below. So in this article, you do not need to do this.
2. Try to predict a class from the raspberry pi camera in live time, with output to the terminal, using TensorFlow Lite and the custom .h5 model (which I trained and saved above).
For that, you need to run the next code:
#install TensorFlow Lite
pip3 install https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp37-cp37m-linux_armv7l.whl

#clone repo with my code
git clone https://github.com/oleksandr-g-rock/classify_picamera_in_live_time_cusom_model.git

#go to directory
cd classify_picamera_in_live_time_cusom_model

#copy h5 custom model
wget https://github.com/oleksandr-g-rock/create-image-classification-for-recognizing-persons-animals-others/raw/main/animall_person_other_v2_fine_tuned.h5

#run script
python3 classify_picamera_with_live_time_custom_model.py
3. You should see something like this.
Image classification (persons, animals, other) on raspberry pi from pi-camera (in live time) used custom model .h5 (output to terminal) (Photo,GIF by Author) https://github.com/oleksandr-g-rock/classify_picamera_in_live_time_cusom_model/blob/main/0_3yXD6SiSVVOvWpJ5.gif
Result:
In this article, we have created image classification (persons, animals, other) on a Raspberry Pi from the Pi camera (in live time) using a custom .h5 model (output to terminal). The full code is located here. | https://medium.com/analytics-vidhya/image-classification-persons-animals-other-on-raspberry-pi-from-pi-camera-in-live-time-used-5e7ccc236781 | ['Alex G.'] | 2020-12-18 16:14:21.121000+00:00 | ['Python', 'Image Classification', 'Raspberry Pi', 'TensorFlow', 'Colab'] |
Provisioning a Network Load Balancer with Terraform | Are you using some form of load balancing in your application? Don’t answer. It’s a rhetorical question. You bet I am, you scream defiantly. How else am I going to ensure that traffic is evenly distributed?
Load Balancers come in all shapes and sizes. In the past, it used to be a concern for the operations folks. As an application developer, you could spend years without having to think about them. That’s not always the case in the cloud. Luckily, AWS makes it easy for us to create such resources. More so if you use Infrastructure as Code (which I’m sure you are). I’m going to use Terraform in this article to provision Network Load Balancer instances.
It’s AWS. Of course, there are multiple competing options
There are three different types of load balancers in AWS.
Classic load balancers are becoming a relic of the past. Usually, your choice is between an NLB (Layer 4) and an ALB (Layer 7). If you are worried about the number of features, they got you covered. All three are managed infrastructure. AWS handles the availability and scaling transparently for you.
Let’s talk about NLBs. Being a Layer 4 means that you don’t know about the application protocol used. Even so, most of your load balancing needs in life can be covered with an NLB. Unless you want routing based on an HTTP path, for instance. In that case, you need an ALB, which I’ll cover in a future post.
Setting up a basic load balancer
Setting up a load balancer requires provisioning three types of resources.
The load balancer itself
The listeners that will forward the traffic
The target groups that ensure that the traffic reaches its destination
The most typical setup is a Virtual Private Cloud (VPC) with a public and a private subnet. The load balancer goes in the public subnet. The instances live in the private subnet. To protect ourselves against outages, we deploy everything to multiple Availability Zones (AZ).
Assuming that we have an existing VPC (identified by vpc_id ), this snippet creates the load balancer.
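A minimal sketch of that snippet (the resource names and the public-subnet lookup are illustrative assumptions; newer provider versions replace aws_subnet_ids with aws_subnets):

```hcl
data "aws_subnet_ids" "public" {
  vpc_id = var.vpc_id

  tags = {
    Tier = "public"
  }
}

resource "aws_lb" "this" {
  name               = "basic-load-balancer"
  load_balancer_type = "network"
  subnets            = data.aws_subnet_ids.public.ids

  enable_cross_zone_load_balancing = true
}
```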
The aws_lb resource is confusing because it represents both NLBs and ALBs, depending on the load_balancer_type argument. Some arguments only apply to one type, so you've got to read the documentation carefully.
enable_cross_zone_load_balancing is an interesting parameter. It'll help prevent downtimes by sending traffic to other AZs in case of problems. Cross-AZ traffic ain't free, so make that an exception!
Listeners
Our load balancer is not being a good listener right now. We’ve got to fix that. Through the aws_lb_listener resource, we specify the ports we want to handle and what to do with them. We want to listen to both port 80 and 443, so we'll set up two different resources using for_each . Let's have a look at the code.
You see the ports defined in the ports variable. Next is the protocol . If we only want to forward the request, we use TCP or UDP. We can also choose to terminate the TLS connection by using TLS as a protocol. For that, we'd need to set up a certificate, though.
After port and protocol are there, we need the action to perform. The most common action is to forward it to our receiver target group. Additionally, we can do redirects, fixed results, or even authentication. By the way, I showed how to do authentication in this article.
Target Groups
The last step is defining the target group(s) so that the load balancer knows who will receive the requests. We do that with the aws_lb_target_group resource. Here we branch again, as there are different possibilities.
Instance-based
The target group can point to specific instances. That’s the default target_type . You don't want to explicitly specify instances (What if they go down?), but rather create an Autoscaling Group (ASG). We assume an existing ASG in the code.
We add a depends_on block containing the lb resource so that the dependencies are properly modeled. Otherwise, destroying the resource might not work correctly.
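Put together, a sketch of the instance-based wiring (the ASG name variable is assumed, and the attachment's target-group argument has been renamed across provider major versions, so check yours):

```hcl
resource "aws_lb_target_group" "this" {
  name        = "instance-target-group"
  port        = 80
  protocol    = "TCP"
  target_type = "instance"
  vpc_id      = var.vpc_id

  # model the dependency explicitly so destroys happen in the right order
  depends_on = [aws_lb.this]
}

resource "aws_autoscaling_attachment" "this" {
  autoscaling_group_name = var.asg_name
  alb_target_group_arn   = aws_lb_target_group.this.arn
}
```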
IP-based
We use the target_type ip when using IPs instead of instance ids. We assume that these IPs are available and readable through a data resource. They are connected to the target group through an aws_lb_target_group_attachment .
The connections to the ENIs are expressed as a list of [port, ip] pairs. That requires some ungainly terraform loops to define everything properly.
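One way to express those loops, assuming the IPs arrive through a variable or data source (everything here is illustrative):

```hcl
locals {
  # every [port, ip] pair we want registered
  pairs = setproduct(var.ports, var.target_ips)
}

resource "aws_lb_target_group" "ip" {
  name        = "ip-target-group"
  port        = 80
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = var.vpc_id
}

resource "aws_lb_target_group_attachment" "ip" {
  # build a stable key like "80-10.0.1.10" for each pair
  for_each = { for pair in local.pairs : "${pair[0]}-${pair[1]}" => pair }

  target_group_arn = aws_lb_target_group.ip.arn
  port             = each.value[0]
  target_id        = each.value[1]
}
```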
These are two typical examples, but it’s not the only way of doing it. ECS supports adding target groups to reach services directly. If you are working with Lambda, that needs an ALB.
I’ve left a bunch of details out to avoid writing a 10k words article. You can customize the health check ( health_check ) associated with each target group, the algorithm used ( load_balancing_algorithm_type ), and a host of other things. The flexibility can be overwhelming. I recommend starting small.
Security Groups
Oh yes, security groups. Every so often, running curl against your shiny, new infrastructure results in timeouts. Inevitably, you forgot the security groups.
Network load balancers don’t have associated security groups per se. For both instance and IP based target groups, you add a rule that allows traffic from the load balancer to the target IP.
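A sketch of such a rule (the right source CIDRs depend on your setup: NLBs preserve the client IP for instance targets, so public traffic needs the listener ports open to the world, while health checks arrive from within the VPC):

```hcl
resource "aws_security_group_rule" "allow_lb_traffic" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = var.instance_security_group_id
}
```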
Getting to our load balancer
That’s about it. With all these resources, we’ve got ourselves a working load balancer!
All load balancers are reachable through their automatically assigned DNS entry. We can programmatically find it thanks to the AWS CLI.
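For instance (the load balancer name is the placeholder used earlier):

```shell
dns=$(aws elbv2 describe-load-balancers \
  --names basic-load-balancer \
  --query 'LoadBalancers[0].DNSName' \
  --output text)
echo "$dns"
```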
Provided there is a registered target, we can query it using the content of dns and see that our setup, in fact, works.
Internal Load Balancers
A load balancer doesn’t always have to be publicly available. Let’s say you use VPC endpoints to keep your traffic inside AWS’s network. We don’t want to expose our load balancer to the public if it’s going to sit behind a VPC endpoint service. Instead, you set the internal parameter to true . The LB can live in a private subnet.
Operations
Operations is a bit of a strong word. There is not a lot to operate here. Not for us, at least. Still, let’s finish with some thoughts about that.
Out of the box, a lot of CloudWatch metrics are exported for your convenience. The AWS Console has some nice charts to look at. You could use another monitoring tool if you wish.
Pretty lines make everything better
What about costs? As you can see on the pricing page, an NLB has a fixed price, plus a fairly arcane operating cost based on Load Balancer Capacity Units (LCU). Honestly, the easiest way to monitor expenditures is by looking at previous months in the Cost Explorer.
Lastly, performance. An NLB scales like there is no tomorrow. Each unique target IP can support 55000 simultaneous connections, and the whole thing should be merrily passing along requests long after your applications have collapsed into a smoking pile of ashes. The word managed is genuinely appropriate because you’ll rarely have to do anything past the provisioning.
Conclusion
Load balancers are an integral part of every cloud setup. It’s a vast topic as well, and thus I could only scratch the surface. However, this is enough to get started with a rock-solid foundation. | https://medium.com/swlh/provisioning-a-network-load-balancer-with-terraform-3c44624ba436 | ['Mario Fernández'] | 2020-11-17 20:37:30.997000+00:00 | ['Terraform', 'Infrastructure As Code', 'Load Balancing', 'AWS', 'Networking'] |
Opportunities for New Contributors | We’re always excited when Bokeh users express interest in becoming contributors, and usually the first question is: how can I help?
Our issue list on GitHub is a good place to start… but can be daunting if you’re new to the project. To make things easier for you, several of these are tagged good first issue, which are some of the more accessible fixes and improvements, and a great way for a new contributor to make a difference. Good first issues generally come in two flavors: technical, but specific and contained; and less technical but broad, touching several aspects of the project to accomplish a goal.
We’ve chosen some of these good first issues to highlight, and encourage our users to get involved. For all of these, contributors should feel free to add to the discussion of the issue and get a clear picture of how best to solve the problem. If you have a question about an issue, chances are very good that other people have the same question!
We’d like to make this a regular feature on the Bokeh blog, to encourage new contributors on an ongoing basis. Here’s our inaugural list.
Good First Issues
Document bokeh.sampledata (9329). This section of our Reference Guide is nearly empty! The GitHub issue includes a couple of suggestions for how to address this.
Add fill properties to slope (9194). This one proposes the ability to shade an infinite area above or below a linear boundary. The issue poses this as a modification to the Slope annotation.
Support semi-infinite bands (6767). This one’s a similar concept: the ability to have a Band with only one boundary, so that the filled area extends infinitely.
Add a progress bar to bokeh widget list (6556). This is one that would benefit a lot of users, and requires both Python and TypeScript components; the core team is happy to help interested contributors with examples and direction.
Expose tap and hover tool hit radius (2230). Originally brought up in a Stack Overflow question about making it easier to select points with TapTool and HoverTool.
GridPlot ignores ‘title’ property (1449). In a gridplot layout, it would be useful to be able to set a single master title that refers to all of the plots in the layout. This issue also lists a couple of approaches to get this top-level title working.
Also, as always, we are interested to see what you make. Our examples gallery could use some fresh new instances of Bokeh at work. Our examples focus on readable code that illustrate how to use Bokeh; there are interactive applications that show how to leverage Bokeh Server, but also plenty of simple or static plots that act as clear visual references for users who are exploring their options for visualization within the library. So whatever you’re designing, big or small, we’d love to see it and consider it for inclusion in our examples set!
As always, anyone interested in helping out should drop by the Dev Chat Channel and say hello!
Thanks,
Carolyn Hulsey | https://medium.com/bokeh/opportunities-for-new-contributors-39bfe736c962 | [] | 2020-06-29 01:39:22.523000+00:00 | ['Data Science', 'Python', 'Data Visualization', 'Bokeh', 'Open Source'] |
MOST VALUABLE BOOKS IN FUTURE 2021 FREE PDFBest Seller [PDF] [EPUB] TAtomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones Download gamemmmj Dec 28
Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones BY James Clear ;
[PDF]-Read Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones || EPUB Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones Kindel — Version!
Publisher : Avery; Illustrated edition (October 16, 2018)
Language : English
Hardcover : 320 pages
ISBN-10 : 0735211299
ISBN-13 : 978–0735211292
Item Weight : 1.15 pounds
Dimensions : 6.27 x 1.19 x 9.29 inches
Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones PDF — Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones Epub — Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones Mobi — Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones Audiobook — Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones Kindle
IN THIS LINK http://123books.xyz/0735211299
The #1 New York Times bestseller. Over 1 million copies sold!
Tiny Changes, Remarkable Results
No matter your goals, Atomic Habits offers a proven framework for improving — every day. James Clear, one of the world’s leading experts on habit formation, reveals practical strategies that will teach you exactly how to form good habits, break bad ones, and master the tiny behaviors that lead to remarkable results.
If you’re having trouble changing your habits, the problem isn’t you. The problem is your system. Bad habits repeat themselves again and again not because you don’t want to change, but because you have the wrong system for change. You do not rise to the level of your goals. You fall to the level of your systems. Here, you’ll get a proven system that can take you to new heights.
Clear is known for his ability to distill complex topics into simple behaviors that can be easily applied to daily life and work. Here, he draws on the most proven ideas from biology, psychology, and neuroscience to create an easy-to-understand guide for making good habits inevitable and bad habits impossible. Along the way, readers will be inspired and entertained with true stories from Olympic gold medalists, award-winning artists, business leaders, life-saving physicians, and star comedians who have used the science of small habits to master their craft and vault to the top of their field.
http://123books.xyz/0735211299
Learn how to:
• make time for new habits (even when life gets crazy);
• overcome a lack of motivation and willpower;
• design your environment to make success easier;
• get back on track when you fall off course;
…and much more.
Atomic Habits will reshape the way you think about progress and success, and give you the tools and strategies you need to transform your habits — whether you are a team looking to win a championship, an organization hoping to redefine an industry, or simply an individual who wishes to quit smoking, lose weight, reduce stress, or achieve any other goal.
Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones PDF EPUB Download Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones ebooks with Apple Books on iPhone. In fact, a real bookworm will never limit this dear activity to only certain venues and times. Free Kindle Books, Nook Books, Apple Books and Kobo eBooks Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones PDF EPUB Download Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones Can I read Amazon ebooks on Android? Ask for PDF and EPUB documents with Google Play Books. Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones PDF EPUB Download Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones Download and save PDF files on an iPhone using the Books app that is part of the iOS operating system. Books for iPhone, iPad, and Android.
EPUB is an e-book file format that uses the “.epub” file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers. EPUB is a technical standard published by the International Digital Publishing Forum (IDPF).
An electronic book, also known as an e-book or eBook, is a book publication made available in digital form, consisting of text, images, or both, readable on the flat-panel display of computers or other electronic devices.[1] Although sometimes defined as “an electronic version of a printed book”,[2] some e-books exist without a printed equivalent. E-books can be read on dedicated e-reader devices, but also on any computer device that features a controllable viewing screen, including desktop computers, laptops, tablets and smartphones. | https://medium.com/most-valuable-book-in-futures-free-pdf-book-atomic/most-valuable-books-in-future-2021-free-pdfbest-seller-pdf-epub-tatomic-habits-an-easy-3fa3da515daf | [] | 2020-12-28 15:37:28.249000+00:00 | ['People', 'Novel', 'Construction', 'Culture', 'Pop Culture'] |
Stop Shaming Failure | Stop Shaming Failure
Failure is the greatest indication of ambition.
Photo: Jan Antonin Kolar / Unsplash
Somewhere, sometime, we decided that failure was bad. We stopped focusing on whether someone gets back on the horse, and instead judged them for falling off in the first place. We began to consider failures as embarrassments. It was egocentric of us, as rationally we knew others fail, and that it made sense to fail, yet we still refused to grant ourselves the same slack. We started shaming failure, viewing it as a reflection of character or will.
“You just didn’t want it bad enough.”
But sometimes we do want something, we want it so much but still don’t get it. And that’s life because if everyone got it, it wouldn’t be worth the same amount to us. Yet we still feel embarrassed by failure. We keep it to ourselves, letting it rot away within us. And it’s a cycle, as by keeping it to ourselves and not discussing it, we further promote the shameful concept of it, and stop others from sharing.
Failure is not a bad thing. Failure has many purposes:
1. Failure lets us question how badly we want something.
I recently applied to be a spinning instructor and went to the audition, absolutely terrified. I was so sure I wanted this, I rehearsed for hours and dreamt of getting it. But then the audition was a disaster, and it took everything for me not to burst into tears until I finally left an hour later. No big surprise but I didn’t get it. They, and the few people I told, all said I should keep practising and try again. But I haven’t, and now I realise maybe I don’t want to. Because if the idea of going through that audition puts me off the job would follow, perhaps I don’t want it enough. Failing let me actually determine whether I want this.
Of course, you’ll be disappointed to fail. You’ll need time to lick your wounds and pick yourself up. But after you do, if you’re unsure of whether you want to try again, that says a lot. Maybe you needed the failure to realise that.
2. Failure separates the determined.
Because if you do get back up and try again, that says more about you than if you got it in the first place. The people who keep trying are the ones we look to for inspiration. You can’t know your strength until you are knocked down. I want to be a writer, and I’ve received dozens of manuscript rejections, and I’ll probably receive dozens more. It sucks. It sucks, and I barely talk about it, because I feel like I am failing, like I am not good enough. It makes me question if I’m good enough to be a writer. But maybe being a writer isn’t just about being the best in it; it is about wanting it enough to keep trying, to be so determined that you wait for your chance.
Talk about your failures, because that shows your strength, as you keep trying.
3. Failure grounds us.
Failure is incredibly human. It’s a reminder that we have limits, that we are imperfect. Thank goodness we are, as if we were good at everything, we wouldn’t enjoy specific things as much. The areas we excel in bring us a bucketload of serotonin, the happy neurotransmitter. The low of failure only accentuates the high of success. You cannot be successful if you do not experience failure. We would be plateaued, stationary, without the gorgeous highs and lows that create us. Failure ensures we never stop giving all of the efforts, as not everything will come easy. We can’t get too comfortable, so we need failure to rock the boat.
4. Failure connects us.
As much as we keep failure to ourselves, it is a prime moment for human connection. It’s a chance to reveal our weakness, and by sharing that we allow someone else to see us and to empathise with us. It’s harder to empathise with someone who is always doing fantastic; jealousy may get in the way. Think of wolves or dogs, how they will roll over and bare their neck for the other animal. This is their most vulnerable point, and by displaying it to the other, they are declaring themselves subordinate. Dogs will follow this by playing together; they do not need violence or territorial behaviour. Show your neck to someone by being open about failure. Because they have failures too, and by bringing them into the conversation, you allow them to do the same.
5. Failure teaches you.
Failure highlights your weaknesses, which can be incredibly confronting. But wouldn’t it be worse to not be aware of these weaknesses? To let them hold you back without realisation? Failure isn’t rejection; it’s highlighting the areas for improvement. Like I said, no one is perfect, and failure simply shows us where we can improve on imperfections. Take failure as a lesson. Allow yourself to be disappointed, but then consider what lessons you can take from it.
My manuscript was rejected. Do I need to look at my cover letter? Do I need to consider a new project? Do I need to test my synopsis with others?
I failed my driving test. I am unsafe for the roads at the moment. Do I need to work on specific things, or is the issue in my confidence?
You got rejected for a job. Are you sure want to pursue that job or is a tiny part of you almost relived? Find out what influenced their decision, what can you do better next time? When you get your next job, the relief will be far greater and more satisfactory. | https://medium.com/the-ascent/stop-shaming-failure-4b8e7c287c94 | ['Fleurine Tideman'] | 2020-10-19 19:32:16.680000+00:00 | ['Self Improvement', 'Advice', 'Self', 'Growth', 'Society'] |
I Already Know I’m Fat | Newsflash!
I Already Know I’m Fat
Your attempts to shame me for it only stoke my fire
Photo by Tiffany Burke (provided by Author)
When you walk through life in a body that society deems untenable, moments of imposed shame are so constant they become commonplace. It’s not that you get used to them exactly; more that when they happen there is a complete lack of surprise.
Fat shaming comes in countless shapes and sizes.
Shame is a pervasive emotion; it’s primal and raw, and often times devastating. Much like human beings, fat shaming comes in countless shapes and sizes. At its core, it’s an act of bullying, singling out, making fun, joking about, or discriminating against someone because they are fat. Fat shaming is dangerous. It promotes disordered eating and can lead to stress and depression. It’s insidious and it eats away at us from the inside out.
When I sat down to write about fat shaming and how it feels, I looked up definitions and articles and perused the net. I can explain what fat shaming is, but that’s not why I’m here. Unsettled, I couldn’t seem to narrow in on what I really wanted to say about the self-hatred our society expects of fat women in particular. The mere idea of being shamed for the shape of my body brings up such vitriol in me that my fingers freeze on the keyboard.
I could tell you about all of the times I’ve been shamed for the size and shape of my body. There are so many that it’s easy to recall examples, and at the same time I feel as though if I start listing them, there may not be an end in sight.
At the dentist, when the hygienist inexplicably mentions that eating a low carb diet can help with something or other, then follows it up with “oh, but you’re probably already doing that,” I can only assume she means because I must be diabetic? Or trying to lose weight? I don’t ask for clarification, but later I kind of wished I had.
On the cross-state journey to visit my sister, when I have to ask three separate times on three separate planes for a seat belt extender despite the fact that cars and trucks and buses come with belts long enough for bigger bodies.
At any newsstand when I see magazine after magazine with words like success and new life and impressive tacked onto the afters, but where bodies like mine are only in the failure of before.
I could try to explain how it feels to have people find your existence offensive.
At the theater to finally see Hamilton, something I’ve been dreaming of for years, when I sit down and feel how the armrests squeeze my hips, when I know I’ll be conscious of my seat for the entire show.
At McDonald’s, when we stop for a family lunch after running around at the park, and I don’t fit in the booth my kids pick out in the play area. This seems especially ironic to me when I consider where we are.
At the office, when New Year’s rolls around again and I get to listen to my co-workers talk daily about diets, salads, and gym routines and how much better they’ll be if they lose weight.
I could try to remember, dredging up old memories of feeling singled out, discriminated against, or looked down on because my body takes up more space. I could try to explain how it feels to have people put on a mask of concern when they try to convince you that the reason they are shaming you is for your own good, your own health, rather than because they find your existence offensive.
If I really want to get personal, I could delve into the connection between my fatness and my romantic life. Maybe I could tell you about all of the times I’ve heard that I’m not fat, sometimes followed by ‘you’re beautiful,’ as if being fat and being beautiful are mutually exclusive and you can’t possibly be one and the other.
Then, there’s the mother of all fat shames, ironically born of someone’s own shame at perceived rejection. This particular type of venom is spat in the moments when someone tries to convince himself that he can’t be rejected by you because you are a fat, ugly beast and he never wanted you to begin with.
why didn’t you tell me AFF is adult fat finder? Cannot believe the women and their photos… what world do you live in?
Well, this is a loaded question if I’ve ever seen one. Ironically, I don’t think he actually wanted to know what world I live in, but I don’t let that stop me. Ask and you shall receive.
I live in a world where people think it’s okay to send hateful, negative messages asking me how I could possibly dare to exist.
I live in a world where grown men who feel rejected deal with it by lashing out and trying to hurt strangers to make themselves feel less insecure. In this particular exchange, the first message he sent me said “Hi” and had a more friendly tone. I can only assume that he got angry because 8 hours later, I hadn’t responded. So he decided that he wasn’t actually interested anyway, because I’m too fat for him.
*record scratch*
In what world does that make any kind of sense? This isn’t elementary school, there are no backsies, attempts to convince me that you weren’t interested in the first place by insulting me are poorly executed at best. I’m not even going to get into the thinking behind the lack of consideration for the idea that I just hadn’t had time to check my messages yet that day.
I live in a world where people think it’s okay to send hateful, negative messages asking me how I could possibly dare to exist, to want to date, or to experience physical intimacy given my weight. In this world, I can be objectified, I can be someone’s BBW fetish, but if I don’t respond to it positively I become repulsive. The first thing they jump to is always the size and shape of my body.
I live in a world where people see me as less than because I am more than. In this world, it’s so easy for people to pull out the insult fat, they barely even have to try. I have learned that the world believes that because of my weight, I should be alone, I don’t deserve to have men like me, I don’t deserve to feel sexy, I don’t deserve intimacy or pleasure. I don’t even deserve to be treated with common courtesy.
If you think you’re going to catch me off guard with the fact that I’m fat, I’ve got some news for you. I already know I’m fat. I’m sorry to steal the satisfaction you would have felt if I’d been surprised at this fact, or at the fact that some men are not attracted to me. There seems to be this strange and pervasive idea that fat women look in the mirror and see something other than what’s there.
Why is the idea that I deserve to be treated just as well as anyone else regardless of my BMI or pants size something that takes this much convincing?
I think about the size of my body all the time. There’s a reason I am careful to include a full body shot on any dating profile I make. The idea of it is that people like this clever dude, who only prefer one type of woman, will not contact me. There are plenty of men who find me attractive just the way I am, and I don’t need each person who doesn’t to tell me so.
I’m not interested in your assessment of my shape, or of your opinions about how ashamed I should feel. Being surrounded by the media, TV, movies, and magazines imposing shame and other-ness on me is exhausting. So when people like this guy come along telling me how disgusting and worthless I am, you’ll have to forgive me for not having any patience left to give.
It’s taken me over 35 years to really believe that it’s not me who needs to change. The world needs to get with the times and recognize that people come in all shapes and sizes. I’m tired of being asked to stop existing. Some days I feel like I’m screaming into a void. How many times do I have to recount being told I’m not good enough for people to understand it? Why is the idea that I deserve to be treated just as well as anyone else regardless of my BMI or pants size something that takes this much convincing?
So, what world do I live in? I live in a world of bullies, but I’m not worried because I left that shit behind in grade school.
I am lucky that my mind is in a place where I get to make a choice every day. That I have developed the strength over time to use your intended shame as tinder to stoke the fire in my belly. Some days I don’t feel strong, but most days the muscle memory of the shaming I’ve experienced is just more fuel. I am righteous, and I am worth being treated with respect.
This is how I look. I am done being ashamed.
Don’t miss a thing! Sign up for my weekly newsletter here.
You might also enjoy… | https://medium.com/fattitude/i-already-know-im-fat-ff9ceefd344f | ['Rachael Hope'] | 2019-10-22 18:12:23.698000+00:00 | ['Self', 'Body Image', 'Health', 'Fat', 'Equality'] |
Creating 3D animations with a python graphics engine. | In our last article, we created a program in Python to render a wireframe model of a 3D object. (If you have not already read it, I recommend having a look here https://medium.com/@henrynhaefliger/3d-graphics-with-the-python-standard-library-af3794d0cba). In this post we will take a look at improving that model and applying it to animations. As usual, you can also find the complete code for this project on my Github repository https://github.com/hnhaefliger/PyEngine3D.
To begin, we will give our models a more solid look, which means correctly rendering solid faces. To do this, we will calculate a new value for each triangle called avgZ — the average Z coordinate of the points in each triangle. This allows us to render the triangles closest to the camera last, meaning they will cover those behind them. The code for the math is simple:
We then need to sort this new list by its 4th value; we can do this with a lambda function and the sorted() builtin:
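Since the original listings were figures, here is a minimal sketch of the avgZ calculation and sort; the function and variable names are illustrative, not necessarily those used in the repository:

```python
# Each triangle is a list of three indices into the list of projected points;
# each point is an (x, y, z) tuple.
def sort_triangles_by_depth(triangles, points):
    # avgZ: the average z coordinate of a triangle's three vertices,
    # appended as a 4th value on each triangle record.
    with_depth = [
        [a, b, c, (points[a][2] + points[b][2] + points[c][2]) / 3]
        for a, b, c in triangles
    ]
    # Sort by the 4th value so the triangles nearest the camera are drawn
    # last and cover those behind them (the painter's algorithm).
    # Depending on your camera convention you may need reverse=True.
    return sorted(with_depth, key=lambda tri: tri[3])
```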
We can now insert this into the render function of our Engine3D
We also need to update our createTriangle function so that the triangles are not transparent:
This will allow us to render an image like this:
As you can see, the model is no longer transparent. All that is missing now is an animation.
To do this, we will create a function called rotateY, with the parameter angle. This function will apply a formula to every point to rotate it around the Y axis by the given angle:
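A sketch of such a rotation function, using the standard 2D rotation formula in the x/z plane (the sign convention here is one common choice; the repository may rotate in the opposite direction):

```python
import math

def rotate_y(point, angle):
    """Rotate an (x, y, z) point around the y axis by `angle` degrees."""
    x, y, z = point
    rad = math.radians(angle)
    cos_a, sin_a = math.cos(rad), math.sin(rad)
    # A standard 2D rotation applied in the x/z plane; y is unchanged.
    return (x * cos_a - z * sin_a, y, x * sin_a + z * cos_a)
```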
This function can be modified for any axis but will suit our purposes for now. For example if we run this code:
We will get an output like this:
Now that we can rotate the object, it is a matter of applying the tkinter.Tk.after() method:
This function tells the engine to clear the screen, rotate the model by 0.1 degrees, render the image and then repeat again after 1ms. When running this code, we will get a smooth animation of the shark rotating around the axis. | https://medium.com/quick-code/creating-3d-animations-with-a-python-graphics-engine-35ede0c01e3d | ['Henry Haefliger'] | 2019-10-15 21:32:55.867000+00:00 | ['Python', '3d', 'Coding', 'Tkinter', 'Graphics'] |
To My Beloved Brother | Photo by Annie Spratt on Unsplash
To my beloved brother,
It has been almost twelve years since I’ve last seen you. It has been almost twelve years since I’ve last heard your voice. It has been almost twelve years since I’ve had a sibling.
I miss you so much, and not a day goes by where I don’t think about you. There have been many milestones I have celebrated without you, and a part of me breaks every time I realize that you can’t be there with me. Even more so knowing how many more milestones I will celebrate without you.
I have so many questions that will never be answered. Whenever I ask myself these questions, all I’m left with is silence and emptiness. I still long for the relationship we once had, and I can’t help but wonder how our relationship would be like if you were still with us.
What happened twelve years ago left a deep wound in our family, but we have recovered, we are happy, and we have grown closer.
Dad, Uncle, and I mentioned you at my wedding when we each gave our speeches. I realize that, even though we have recovered, we will never forget you. You have left such an enormous impact on our family. Even though you aren’t here, I still want you to know that you are well-loved and we still cherish you.
With love,
Your Loving Sister | https://medium.com/this-shall-be-our-story/to-my-beloved-brother-7a5e0b4af069 | ['Tiffany Hsu'] | 2020-12-13 23:17:22.692000+00:00 | ['Siblings', 'Death', 'Family', 'Future', 'Healing'] |
The Complete Guide To SCSS/SASS | In this tutorial Sassy, Sass and SCSS will refer to roughly the same thing. Conceptually, there isn’t much difference. You will learn the difference as you learn more, but basically SCSS is the one most people use now. It’s just a more recent (and according to some, superior) version of the original Sass syntax.
To start taking advantage of Sass, all you need are the key points.
They will be explored in this tutorial.
(I tried to be as complete as possible. But I am sure there might be a few things missing. If you have any feedback post a comment and I’ll update the article.)
Note: All Sass/SCSS code compiles back to standard CSS so the browser can actually understand and render the results. Browsers currently don’t have direct support for Sass/SCSS or any other CSS pre-processor, nor does the standard CSS specification provide alternatives for similar features (yet.)
Let’s Begin!
You can’t really appreciate the power of Sassy CSS until you create your first for-loop for generating property values and see its advantages. But we’ll start from basic SCSS principles and build upon them toward the end.
What Can Sass/SCSS Do That Vanilla CSS Can’t?
1. Nested Rules — Nest your CSS properties within multiple sets of {} brackets. This makes your CSS code a bit more clean-looking and more intuitive.
2. Variables — Standard CSS has variable definitions. So what’s the deal? You can do a lot more with Sass variables: iterate them via a for-loop and generate property values dynamically. You can embed them into CSS property names themselves. It’s useful for property-name-N { … } definitions.
3. Better Operators. You can add, subtract, multiply and divide CSS values. Sure, the original CSS implements this via calc(), but in Sass you don't have to use calc() and the implementation is slightly more intuitive.
4. Functions — Sass lets you create CSS definitions as reusable functions.
Speaking of which…
5. Trigonometry — Among many of its basic features (+, -, *, /) SCSS allows you to write your own functions. You can write your own sine and cosine (trigonometry) functions entirely using just the Sass/SCSS syntax just like you would in other languages such as JavaScript.
Some trigonometry knowledge will be required. But basically, think of sine and cosine as mathematical values that help us calculate the motion of circular progress bars or create animated wave effects for example.
6. for-loops, while-loops, if-else statements. You can write CSS using familiar code-flow and control statements similar to other languages. But don't be fooled: Sass still results in standard CSS in the end. It only controls how properties and values are generated. It's not a real-time language, only a pre-processor.
7. Mixins. Create a set of CSS properties once and reuse them or “mix” together with any new definitions. In practice, you can use mixins to create separate themes for the same layout, for example.
Sass Pre-Processor
Sass is not dynamic. You won’t be able to generate or animate CSS properties and values in real-time. But you can generate them in a more efficient way and let standard properties (CSS animation for example) pick up from there.
New Syntax
SCSS doesn’t really add any new features to CSS language. Just new syntax that can in many cases shorten the amount of time spent writing CSS code.
Prerequisites
CSS pre-processors add new features to the syntax of CSS language.
There are 5 CSS pre-processors: Sass, SCSS, Less, Stylus and PostCSS.
This tutorial covers mostly SCSS which is similar to Sass. But you can learn more about Sass at www.sass-lang.com website.
SASS — (.sass) Syntactically Awesome Style Sheets.
SCSS — (.scss) Sassy Cascading Style Sheets.
Extensions .sass and .scss are similar but not the same. For command line enthusiasts out there, you can convert from .sass to .scss and back:
Figure 1 — Convert files between .scss and .sass formats using Sass pre-processor command sass-convert.
Sass was the first specification for Sassy CSS, with file extension .sass. The development started in 2006. But later an alternative syntax was developed with extension .scss, which some developers believe to be a better one.
There is currently no out-of-the-box support for Sassy CSS in any browser, regardless of which Sass syntax or extension you would use. But you can openly experiment with any of the 5 pre-processors on codepen.io. Aside from that you have to install a favorite CSS pre-processor on your web server.
This chapter was created to help you become familiar with SCSS. Other pre-processors share similar features, but the syntax may be different.
Superset
Sassy CSS in any of its manifestations is a superset of the CSS language. This means, everything that works in CSS will still work in Sass or SCSS.
Variables
Sass / SCSS allows you to work with variables. They are different from CSS variables that start with double dash — var-color you’ve probably seen before. Instead they start with a dollar sign $:
Figure 2 — Basic $variable definitions.
You can try to overwrite a variable name. If !default is appended to variable re-definition, and the variable already exists, it is not re-assigned again.
In other words, this means that the final value of variable $text from this example will still be “Piece of string.”
The second assignment “Another string.” is ignored, because default value already exists.
Figure 3 — Sass $variables can be assigned to any CSS property.
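As a minimal sketch (variable names and colors here are illustrative):

```scss
$brand-color: #3498db;
$brand-color: #e74c3c !default; // ignored: $brand-color already has a value

.button {
  background-color: $brand-color; // compiles to background-color: #3498db;
}
```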
Nested Rules
With Standard CSS nested elements are accessed via space character:
Figure 4 — Nesting with standard CSS.
The above code can be expressed with Sassy’s Nested Rules as follows:
Sassy scope nesting looks less repetitious.
Figure 5 — Nested Rules.
As you can see this syntax appears cleaner and less repetitious.
This is particularly helpful for managing complex layouts: the way nested CSS properties are written in code closely matches the actual structure of the application layout.
Behind the veil the pre-processor still compiles this to the standard CSS (shown above) code so it can actually be rendered in the browser. We simply change the way CSS is written.
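For example, this nested SCSS (selectors illustrative):

```scss
nav {
  background: white;
  ul {
    margin: 0;
    li { display: inline-block; }
  }
}
```

compiles to the equivalent flat CSS:

```css
nav { background: white; }
nav ul { margin: 0; }
nav ul li { display: inline-block; }
```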
The & character
Sassy CSS adds the & (and) character directive.
Let’s take a look at how it works!
Figure 6 — On line 5 the & character was used to specify &:hover and converted to the name of the parent element a after compilation.
So what was the result of above SCSS code when it was converted to CSS?
Figure 7 — The & character is simply converted to the name of the parent element and becomes a:hover in this case.
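A small sketch of the & directive in use:

```scss
a {
  color: blue;
  &:hover { color: red; }          // compiles to a:hover
  &.active { font-weight: bold; }  // compiles to a.active
}
```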
Mixins
A mixin is defined by @mixin directive (or also known as mixin rule)
Let’s create our first @mixin that defines default Flex behavior:
Figure 8 — Now every time you apply .centered-elements class to an HTML element it will turn into Flexbox. One of the key benefits of mixins is that you can use them together with other CSS properties. Here I also added border:1px solid gray; to .centered-elements in addition to the mixin.
You can even pass arguments to a @mixin as if it were a function and then assign them to CSS properties. We’ll take a look at that in the next section.
Multiple Browsers Example
Some experimental features only work in the browsers they were designed for, such as -webkit- prefixed properties in WebKit-based browsers or -moz- prefixed properties in Firefox.
Mixins are helpful in defining browser-agnostic CSS properties in one class.
For example if you need to rotate an element in Webkit-based browsers, as well as the other ones, you can create this mixin that takes $degree argument:
Figure 9 — Browser-agnostic @mixin for specifying angle of rotation.
Now all we have to do is @include this mixin in our CSS class definition:
Figure 10 — Rotate in compliance with all browsers.
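A sketch of such a browser-agnostic mixin and its usage (the class name is illustrative):

```scss
@mixin rotate($degree) {
  -webkit-transform: rotate($degree); // WebKit-based browsers
  -moz-transform: rotate($degree);    // Firefox
  -ms-transform: rotate($degree);     // older IE/Edge
  transform: rotate($degree);         // standard property
}

.tilted-element {
  @include rotate(45deg);
}
```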
Arithmetic Operators
As in standard CSS syntax, you can add, subtract, multiply and divide values, but without having to wrap them in the calc() function.
But there are a few non-obvious cases that might produce errors.
Addition
Figure 11 — Adding values without using calc() function. Just make sure that both values are provided in a matching format.
Subtraction
Subtraction operator — works in the same exact way as addition.
Subtracting different types of values.
Multiplication
The star is used for multiplication. Just like with calc(a * b) in standard CSS.
Figure 12 — Multiplication and Division
Division
Division is a bit tricky, because in standard CSS the division symbol is reserved for use with certain shorthand properties. For example, font: 24px/32px defines a font with a size of 24px and a line-height of 32px. But SCSS claims to be compatible with standard CSS.
In standard CSS the division symbol appears in the shorthand font property, but it isn't used to actually divide values. So how does Sass handle division?
Figure 13 — If you want to divide two values, simply add parenthesis around the division operation. Otherwise, division will work only in combination with some of the other operators or functions.
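For example (note that recent Dart Sass versions deprecate / for division in favor of math.div()):

```scss
$width: 600px;

.box {
  font: 16px / 24px sans-serif; // plain CSS shorthand: no division happens
  width: (600px / 2);           // parentheses force division: 300px
  height: $width / 2;           // a variable also triggers division: 300px
}
```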
Remainder
The remainder operator (%) calculates the remainder of a division operation. In this example, let's see how it can be used to create a zebra stripe pattern for an arbitrary set of HTML elements.
Creating Zebra stripes.
Figure 14 — Let’s start with creating a zebra mixin.
Note: the @for and @if rules are discussed in a following section.
This demo requires at least a few HTML elements:
Figure 15 — HTML source code for this mixin experiment.
And here is the browser outcome:
Figure 16 — Zebra stripe generated by the zebra @mixin.
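One way such a mixin can be written (the class names, element count and colors here are illustrative, not necessarily those from the figures):

```scss
@mixin zebra($count: 6) {
  @for $i from 1 through $count {
    @if $i % 2 == 0 {
      .stripe-#{$i} { background: #333; color: #fff; }
    } @else {
      .stripe-#{$i} { background: #fff; color: #333; }
    }
  }
}

@include zebra; // generates .stripe-1 through .stripe-6
```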
Comparison Operators
Figure 17 — Comparison Operators.
How can comparison operators be used in practice? We can try to write a @mixin that will choose padding sizing if its greater than margin:
Figure 18 — Comparison operators in action.
After compiling we will arrive at this CSS:
Figure 19 — Result of the conditional spacing @mixin
Logical Operators
Figure 20– Logical Operators.
Figure 21 — Using Sass Logical Operators, to create a button color class that changes its background color based on its width.
Strings
In some cases it is possible to add strings to valid non-quoted CSS values, as long as the added string is trailing:
Figure 22 — Combining regular CSS property values with Sass/SCSS strings.
The following example, on the other hand will produce compilation error:
Figure 23 — This example will not work.
You can add strings together without double quotes, as long as the string doesn’t contain spaces. For example, the following example will not compile:
Figure 24 — this example will not work, either. Solution?
Figure 25 — Strings containing spaces must be wrapped in quotes.
Figure 26 — Adding multiple strings.
Figure 27 — Adding numbers and strings.
Note: the content property works only with the ::before and ::after pseudo-elements. It is recommended to avoid using the content property in your CSS definitions and instead always specify content between HTML tags. Here, it is explained only in the context of working with strings in Sass/SCSS.
Control-Flow Statements
SCSS has functions() and @directives (also known as rules). We've already created a type of function when we looked at mixins you could pass arguments to.
A function usually has a parenthesis appended to the end of the function’s name. A directive / rule starts with an @ character.
Just like in JavaScript or other languages SCSS lets you work with the standard set of control-flow statements.
if()
if() is a function.
The usage is rather primitive. The statement will return one of the two specified values, based on a condition:
@if
@if is a directive used to branch out based on a condition.
This Sassy if-statement compiles to:
Figure 27 — Example of using a single if-statement and an if-else combo.
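A sketch of an @if / @else branch (variable name and colors illustrative):

```scss
$theme: dark;

body {
  @if $theme == dark {
    background: #222;
    color: #eee;
  } @else {
    background: #fff;
    color: #222;
  }
}
```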
Checking If Parent Exists
The AND symbol & will select the parent element, if it exists. Or return null otherwise. Therefore, it can be used in combination with an @if directive.
In the following examples, let’s take a look at how we can create conditional CSS styles based on whether the parent element exists or not.
If parent doesn’t exist & evaluates to null and an alternative style will be used.
@for
The @for rule is used for repeating CSS definitions multiple times in a row.
Figure 28 — for-loop iterating over 5 items.
This loop will compile into the following CSS:
Figure 29 — Outcome of the for loop.
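A typical sketch of such a loop (selector names illustrative):

```scss
@for $i from 1 through 5 {
  .item-#{$i} {
    width: 20px * $i; // .item-1 gets 20px, .item-5 gets 100px
  }
}
```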
@each
The @each rule can be used for iterating over a list of values.
Iterating over a set of values.
This code will be compiled to the following CSS:
Figure 30 — Compiled animal icons.
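An @each loop along these lines would produce such icon classes (the list values and image paths are illustrative):

```scss
@each $animal in cat, dog, bird {
  .#{$animal}-icon {
    background-image: url("/images/#{$animal}.png");
  }
}
```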
@while
Figure 31 — While loop.
Figure 32 — Listing of 5 HTML elements produced by while loop.
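A sketch of a @while loop producing 5 elements (class names and sizes illustrative):

```scss
$i: 1;
@while $i <= 5 {
  .block-#{$i} {
    font-size: 6px + 2px * $i;
  }
  $i: $i + 1;
}
```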
Sass Functions
Using Sass / SCSS you can define functions just like in any other language.
Let’s create a function three-hundred-px that returns value of 300px.
Figure 33 — Example of a function that returns a value.
When the class .name is applied to an element a width of 300px will be applied to it:
Of course Sass functions can return any valid CSS value and be assigned to any CSS property you can think of. They can even be calculated, based on a passed argument:
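For instance (function and class names illustrative):

```scss
@function double($value) {
  @return $value * 2;
}

@function spacing($multiplier) {
  @return 8px * $multiplier;
}

.card {
  padding: spacing(2);  // compiles to padding: 16px;
  width: double(150px); // compiles to width: 300px;
}
```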
Sass Trigonometry
Trigonometry functions sin and cos are often found as part of built-in classes in many languages, such as JavaScript, for example.
I think learning how they work is worth it, if you’re looking to reduce the time taken to design UI animations (Let’s say a spinning progress bar, for example.)
Here I will demonstrate a couple of examples that reduce code to a minimum for creating interesting animation effects using the sin() function. The same principles can be expanded upon to be used for creating interactive UI elements (movement around a circle, wavy designs, etc.)
Using trigonometry together with CSS is a great way to reduce bandwidth compared to using .gif animations, each of which might take an extra HTTP request to load, since .gif animations cannot be placed into a single sprite-map image.
You can write your own trigonometry functions in Sass.
Writing your own functions in Sass
This section was included to demonstrate how to write your own functions in Sass/SCSS.
In trigonometry many operations are based on these functions. They all build on top of each other. For example, the rad() function requires PI(). The cos() and sin() functions require rad() function.
Writing functions in Sass/SCSS feels a lot like writing them in JavaScript or similar programming languages.
Figure 34 — pow() function.
Figure 35 — rad() function.
Figure 36 — sin() function.
Finally, to calculate tangent using the tan() function the functions sin() and cos() are required components.
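A sketch of how rad() and sin() can be written entirely in SCSS, using a Taylor-series approximation ($deg is a unitless number of degrees; six terms is plenty of precision for CSS values):

```scss
$pi: 3.14159265359;

@function rad($deg) {
  @return $deg / 180 * $pi;
}

// Taylor series: sin(x) = x - x^3/3! + x^5/5! - ...
@function sin($deg) {
  $x: rad($deg);
  $result: $x;
  $term: $x;
  $sign: -1;
  @for $n from 1 through 6 {
    $term: $term * $x * $x / ((2 * $n) * (2 * $n + 1));
    $result: $result + $sign * $term;
    $sign: $sign * -1;
  }
  @return $result;
}

// Cosine is just a phase-shifted sine.
@function cos($deg) {
  @return sin($deg + 90);
}
```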
If writing your own math and trigonometry functions isn’t exciting you can simply include compass library (see next example) and start using sin(), cos() and other trig functions out of the box.
Oscillator Animation
Let’s take everything we learned from this chapter to put together a sinusoid oscillator animation.
Figure 37 — Combining everything we know about Sassy CSS and CSS we arrive at this oscillating animation. | https://jstutorial.medium.com/the-complete-guide-to-scss-sass-49ac053fcde5 | ['Javascript Teacher'] | 2020-10-24 02:14:31.796000+00:00 | ['Coding', 'Design', 'Front End Development', 'Web Development', 'CSS'] |
Learn to use an Artificial Intelligence Model in 30 Minutes or Less (Part 2: Windows) | Learn to use an Artificial Intelligence Model in 30 Minutes or Less (Part 2: Windows) by RJ Jain, Feb 14
This is a continuation from part 1 here, which describes the steps necessary to initialize an AI model using IBM Watson’s Natural Language Understanding service. In this part, we’ll be taking a look at how we can utilize our AI model on Windows 10. If you’re a macOS user, click here to view the relevant guide.
Let’s get started!
W-21
Open Notepad by searching for it in your taskbar, or finding it in your Start Menu.
W-22
Go to File > Save As, and navigate to the folder you created in step 17. Before creating a file name, click on the drop down menu labeled ‘Save as type:’ and select ‘All files (*.*).’
W-23
Now, in the ‘File name:’ text box, type in ‘MediumArticleAnalysis.bat’ to name the file. You don’t have to use this exact file name if you don’t want to, but make sure the *.bat* file extension is still used.
The *.bat* file is a Windows batch file. A Windows batch file saves commands that can be entered into the Windows command prompt.
Doing this:
Is exactly the same as double-clicking on a *.bat* file that looks like this:
W-24
Now that we have our empty batch file saved, it’s time to add some commands! Go ahead and copy the following code, and paste it into your file:
title Learn to use an Artificial Intelligence: Article Analysis
curl -X POST -u "apikey:[YOUR-API-KEY]"^
--header "Content-Type: application/json"^
--data "{\"url\":\"[CHOSEN-ARTICLE-URL]\",\"features\":{\"sentiment\":{},\"categories\":{},\"concepts\":{},\"entities\":{},\"keywords\":{}}}"^
"[YOUR-IBM-WATSON-NATURAL-LANGUAGE-UNDERSTANDING-URL]/v1/analyze?version=2019-07-12"^
> MediumArticleAnalysisOutput.txt
echo Finished!
pause
W-25
If you’re not too familiar with programming, this might look a little imposing, but don’t worry! We’ll go through what it means line-by-line.
The first line is the simplest: it contains the title for the window running the program. It isn't necessary, but it can make it easier to keep your workspace organized when you don't just have multiple windows titled 'Command Prompt.'
The second line contains a command called ‘curl,’ often stylized ‘cURL.’ This is short for ‘Client URL,’ and as one might imagine, the command is used to transfer data between a client computer and a server computer located at a certain URL. As such, you can think of the following code as a ‘letter’ that is eventually going to be sent to the IBM Watson service we set up earlier, in order to get a request.
As we continue down the second line, we see ‘-X POST.’ The ‘-X’ is what is known as a ‘flag,’ which is essentially a way to change an option for a specific command. A command can operate in many ways, and by using flags we can select a specific option. Normally, the ‘curl’ command only receives data from a server. In this case, we want to send data before receiving a response, so we use the ‘-X’ flag to change the method we’re using to interact with the server, and then the ‘POST’ option for that flag in order to signal to the computer that we’re sending data.
The next flag we see on the second line is the ‘-u’ flag. This flag tells the server-side computer that we’re sending some sort of credentials in order to identify ourselves. Generally, this would be a username and password, but in our case it’s going to be the API Key we noted down earlier. Go ahead and replace the ‘[YOUR-API-KEY]’ block with the API Key you recorded in step 15. Make sure to delete the square brackets! We don’t need those.
The third line calls forth the ‘ — header’ flag. A header is any sort of supplemental data, which tells a computer how to process your main data. In this case, the header is telling the server-side computer that we’re sending it JSON information. JSON stands for JavaScript Object Notation, and it is a data format derived from the popular JavaScript programming language.
The fourth line is where we get to the ‘heart’ of our code. Here we have the data that is being sent to IBM’s server for processing. Paste the URL of the web article you selected in step 19 where you see ‘[CHOSEN-ARTICLE-URL]’ and again, make sure to remove the square brackets! One thing to be careful about is the backslash (‘\’) that comes after the URL. Make sure you don’t delete this! Otherwise the computer will find it hard to recognize where our URL ends.
After you paste in the URL, take a look at the rest of the line. You should see the word ‘features.’ This tells IBM what features of the AI Model we want to utilize, here we have selected ‘sentiment,’ ‘categories,’ ‘concepts,’ ‘entities,’ and ‘keywords.’ There’s a whole list of what we can use in IBM’s documentation here, but for now this should be plenty!
The fifth line has no flag. When the ‘curl’ command encounters a value with no flag, it assumes that this is the URL it’s supposed to access. As such, we want to make sure that we have the URL for our IBM Watson Natural Language Understanding service entered. Replace the ‘[YOUR-IBM-WATSON-NATURAL-LANGUAGE-UNDERSTANDING-URL]’ block with the URL you saved in step 15 earlier. Once again, make sure you didn’t leave the square brackets in!
The sixth line is fairly simple, the ‘>’ tells the computer to send the output of the command to a file instead of just displaying it in the window, and the ‘MediumArticleAnalysisOutput.txt’ provides a name for the file the output will be stored in.
The seventh line ensures that the command line window lets you know that the command has finished, by printing out the word 'Finished!', and the eighth line ensures that the window stays open instead of closing automatically once the file has been fully executed.
Now you should be ready to run your first AI analysis!
Just to double check, make sure you completed the following steps:
Replace the ‘[YOUR-API-KEY]’ block with the API Key you recorded in step 15. Make sure to delete the square brackets! We don’t need those.
Paste the url of the web article you selected in step 19 where you see ‘[CHOSEN-ARTICLE-URL]’ and again, make sure to remove the square brackets! One thing to be careful about is the backslash (‘\’) that comes after the URL. Make sure you don’t delete this!
Replace the ‘[YOUR-IBM-WATSON-NATURAL-LANGUAGE-UNDERSTANDING-URL]’ block with the URL you saved in step 15 earlier. Once again, make sure you didn’t leave the square brackets in!
Save the file, and you should be ready! Your file should look something like this:
If you’re having trouble, I’ve uploaded the file I used here. Just open it in Notepad and replace the API Key and URL
W-26
Exit out of Notepad, and go back to the file explorer window showing the folder you created earlier.
W-27
Double click on the file, and you should see the following screen pop up:
If so, congratulations! You’ve just completed your first AI analysis. Now, let’s take a look at the results.
W-28
When you head back to the folder containing your batch file, you should now see a second file titled “MediumArticleAnalysisOutput.txt’ which contains the results from your analysis. Open the file to see the results.
W-29
Let's go through what we see.
The first block of text is titled “usage,” and this tells us what all Watson has analyzed for us. ‘text_units’ essentially tells us how many articles Watson has analyzed, and ‘text_characters’ tells us how many characters were in those articles. ‘features’ tells us how many of Watson’s features we used.
The next block of text contains the first piece of analysis Watson has given us: a sentiment analysis. A sentiment analysis tells us the overall tone of a piece of language; whether it is positive or negative on a scale from -1 to 1. The closer it gets to -1, the more negative the language is. The closer it gets to 1, the more positive the language is. This article seems to be fairly negative, with a sentiment score of around -0.57. Sentiment analysis can be extremely useful for companies and politicians as they analyze recent news stories and social media activity. It lets them know how they’re being perceived by the public, without having to manually sort through thousands of articles and posts.
Below that is the section titled ‘keywords.’ Here, Watson analyzes the article to determine the most relevant keywords for describing the text. It then assigns these keywords a score from 0 to 1 in order to describe how relevant they are.
Below the keywords section, is a section titled ‘entities.’ Entities tell us the subjects described in an article. They tend to be more specific than keywords. For example in an article about Dell laptops, a keyword might be ‘personal-computer manufacturers,’ while an entity might be ‘Dell Inc.’ Watson tells us the type of entity, the name of the entity, how relevant it is to the article on a scale of 0 to 1, how confident it is in its assessment on a scale from 0 to 1, and how many times the entity is mentioned in the article.
Next is the section titled ‘concepts.’ Concepts tell us what the article is about, without relying exclusively on words within the text. Keywords give us general descriptors about portions of the article, entities give us specific subjects mentioned in the article, but concepts tell us the general categories of information an article might fall into. For example, an article about Microsoft Azure would return the concept ‘Cloud Computing,’ even if cloud computing isn’t explicitly mentioned anywhere in the article.
The last section is titled ‘categories.’ Categories sort the analyzed language in accordance with IBM’s five-level taxonomy. IBM’s taxonomy starts by sorting the article into one of 21 ‘level 1’ categories. It then sorts it further into more specific categories based on the article’s content, until it can no longer confidently analyse the overall subject of the article. Watson also assigns a score to the category, which tells us on a scale from 0 to 1 how well it thinks the article falls into the category.
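Putting the sections together, the overall shape of the response looks roughly like this (the field names follow IBM's documented schema, but all of the numbers and strings here are illustrative):

```json
{
  "usage": { "text_units": 1, "text_characters": 5432, "features": 5 },
  "sentiment": { "document": { "score": -0.57, "label": "negative" } },
  "keywords": [ { "text": "example keyword", "relevance": 0.95 } ],
  "entities": [ { "type": "Company", "text": "Example Inc.", "relevance": 0.91, "count": 3 } ],
  "concepts": [ { "text": "Cloud computing", "relevance": 0.9 } ],
  "categories": [ { "label": "/technology and computing/software", "score": 0.88 } ]
}
```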
W-30
Now that we’ve analyzed an article, let’s analyse our own language!
Follow steps W-21 to W-23 again in order to create a new batch file in the folder, only this time name the file ‘MediumSentenceAnalysis.bat’
W-31
Now, copy the following code into Notepad:
title Learn to use an Artificial Intelligence: Sentence Analysis
curl -X POST -u "apikey:[YOUR-API-KEY]"^
--header "Content-Type: application/json"^
--data "{\"text\":\"[YOUR-SENTENCE]\",\"features\":{\"sentiment\":{\"targets\":[\"[TARGET-1]\",\"[TARGET-2]\",\"[TARGET-3]\"]},\"keywords\":{\"emotion\":true}}}"^
"[YOUR-IBM-WATSON-NATURAL-LANGUAGE-UNDERSTANDING-URL]/v1/analyze?version=2019-07-12"^
> MediumSentenceAnalysisOutput.txt
echo Finished!
pause
W-32
In step W-25 above, we replaced the ‘[YOUR-API-KEY]’ and ‘[YOUR-IBM-WATSON-NATURAL-LANGUAGE-UNDERSTANDING-URL]’ blocks respectively with the API Key and URL we noted down in step 15, making sure to remove the square brackets. I explained what that does in step W-25, so be sure to scroll up and give it a read through in case you don’t recall.
You’ll notice that the code looks identical to what we saw for the article analysis earlier, with the exception of line 4, which starts with ‘ — data.’
If you recall from earlier, the ‘ — data’ flag contains the information that we’re sending to Watson to analyze, so it makes sense that this would be the only thing to change. The first block in that line reads ‘[YOUR-SENTENCE],’ and we’ll be replacing this with a sentence or a few sentences to analyze.
Go ahead and think of something you want Watson to read! There are a few things to keep in mind though. You want to make sure there is some sentiment or intention in the sentence(s). For example, using the sentences ‘The sky is blue. Grass is green.’ would be extremely boring for Watson to analyze, because they’re simply statements of fact.
I used the sentences ‘Dogs are awesome; they’re always there to pick us up! Spiders are terrifying; I can’t stand the way they scurry around!’ because they’re statements of opinion, not fact, and because they don’t explicitly tell Watson what I mean. For example, I could have used ‘I love dogs! I hate spiders!’ as my sentences, but that would leave no work for the AI model to do. Feel free to use the same sentences as me, or to choose your own!
If you scroll to the right, you’ll see the word ‘sentiment.’ If you recall from earlier, a sentiment analysis determines how positive or negative the analyzed language is. Further to the right, we see three blocks which read ‘[TARGET-1],’ ‘[TARGET-2],’ and ‘[TARGET-3].’ This time, when we conduct the sentiment analysis, we’re going to choose specific subjects for Watson to focus on. Instead of just telling us the sentiment of the language as a whole, this will give us the sentiment of the language in regards to certain subjects.
Make sure you choose targets! If you don’t designate targets, you won’t get a very accurate sentiment analysis. The results from this exercise will show us why.
The sentences ‘Dogs are awesome; they’re always there to pick us up! Spiders are terrifying; I can’t stand the way they scurry around!’ may be fairly neutral when looked at together, but they would clearly be positive in regard to dogs, and negative in regards to spiders. Choosing targets allows us to ensure we don’t miss anything by overgeneralizing. The three targets I chose are ‘Dogs,’ ‘Spiders,’ and ‘Cats.’ Two of these are clearly mentioned in my sentence, while one isn’t. I wanted to see what Watson does in this scenario.
If you want to use my sentence and targets, copy the fourth line from here:
--data "{\"text\":\"Dogs are awesome; They're always there to pick us up! Spiders are terrifying; I can't stand the way they scurry around!\",\"features\":{\"sentiment\":{\"targets\":[\"dogs\",\"spiders\",\"cats\"]},\"keywords\":{\"emotion\":true}}}"^
Copying from the paragraph text tends to lead to errors, so make sure not to do so! When we copy and paste from certain fonts, instead of pasting single and double quotation marks properly like this:
single: '
double: "
Pasted quotation marks often end up looking like this:
single: ‘ or ’
double: “ or ”
These quotation marks work well for improving readability for humans, but they can confuse computers, as they’re technically different characters.
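As an illustrative aside (not part of the original batch-file workflow), a few lines of Python can normalize those typographic quotes back to the plain ASCII characters the command line expects:

```python
# Map typographic ("smart") quotation marks to the plain ASCII quotes
# that shells and batch files expect.
SMART_QUOTES = {
    "\u2018": "'",  # left single quotation mark
    "\u2019": "'",  # right single quotation mark
    "\u201c": '"',  # left double quotation mark
    "\u201d": '"',  # right double quotation mark
}

def normalize_quotes(text: str) -> str:
    for smart, plain in SMART_QUOTES.items():
        text = text.replace(smart, plain)
    return text

print(normalize_quotes("\u201chello\u201d and \u2018world\u2019"))  # "hello" and 'world'
```

Running pasted commands through a helper like this is a cheap way to avoid the confusing “invalid character” errors that curly quotes cause.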
If you continue down the line, you can see that we’re once again using Watson’s keywords feature. This time, however, we’re adding an option to keywords, called ‘emotion.’ This means that for each keyword Watson identifies, it will also evaluate how the language seems to feel about the keyword. The emotions Watson analyzes for are sadness, joy, fear, disgust, and anger.
To recap:
Replace the ‘[YOUR-API-KEY]’ block with the API Key you recorded in step 15. Make sure to delete the square brackets! We don’t need those.
Replace the ‘[YOUR-IBM-WATSON-NATURAL-LANGUAGE-UNDERSTANDING-URL]’ block with the URL you saved in step 15 earlier. Once again, make sure you didn’t leave the square brackets in!
Replace the ‘[YOUR-SENTENCE]’ block with a sentence, or a couple of sentences, of your choosing. No square brackets! Make sure not to delete the backslash (‘\’) at the end.
Replace ‘[TARGET-1],’ ‘[TARGET-2],’ and ‘[TARGET-3]’ with three targets for the sentiment analysis. No square brackets! Make sure not to delete the backslash (‘\’) at the end.
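To double-check the recap, the request body sent through the ‘--data’ flag can be assembled programmatically. This is an illustrative Python sketch, not part of the original tutorial; the sentence and targets are the ones used in this walkthrough, and the variable names are my own:

```python
import json

# The sentence to analyze and the targets for the sentiment analysis,
# matching the walkthrough above.
sentence = ("Dogs are awesome; they're always there to pick us up! "
            "Spiders are terrifying; I can't stand the way they scurry around!")
targets = ["dogs", "spiders", "cats"]

# The same JSON body that the --data flag sends to Watson's
# Natural Language Understanding endpoint.
payload = {
    "text": sentence,
    "features": {
        "sentiment": {"targets": targets},  # targeted sentiment analysis
        "keywords": {"emotion": True},      # per-keyword emotion scores
    },
}

print(json.dumps(payload))  # a paste-ready value for the --data flag
```

Printing the serialized payload and comparing it against the fourth line of the batch file is an easy way to spot a stray bracket or smart quote before running the request.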
If you’re having trouble, I’ve uploaded the file I used here. Just open it in Notepad and replace the API Key and URL.
Go ahead and save the file. Here is what my code looked like:
W-33
Close out of Notepad and head back to the folder the file is saved in. Run the batch file, and you should once again get a window that looks like this:
If so, your analysis should be complete!
W-34
Head to your folder, and you should see a new file called ‘MediumSentenceAnalysisOutput.txt’ which contains the analysis from the AI model. Open it to see what Watson has to say about our sentence. Here’s what my file looks like:
As you can see, my sentiment analysis showed a highly positive sentiment in regards to dogs, and a highly negative sentiment in regards to spiders. You can see how choosing targets for a sentiment analysis can be useful; the overall score of the sentences was positive, as shown by the score under ‘document,’ even though my views on spiders were clearly negative. Watson completely ignored the ‘cats’ target I had set, as the word ‘cats’ didn’t appear anywhere in my text.
Scrolling down, we come to the ‘keywords’ section. For me, ‘Spiders’ and ‘Dogs’ were the most relevant keywords, in that order. Watson found that my most significant emotion in regards to spiders was fear, and my most significant emotion in regards to dogs was joy.
There you have it! That was a quick introduction to utilizing AI models with Watson. If you’re interested in learning more about what you can do with Watson’s Natural Language Understanding service, their documentation is available here. For more information about Watson in general, see here.
I chose Watson in part because of today’s date, but I also find it to be the most accessible and easy-to-use AI platform available today. Google, Amazon, and Microsoft also have incredibly robust and powerful AI offerings. There’s a great introduction to Google Cloud Platform and TensorFlow here, Microsoft’s Azure AI here, and Amazon’s AWS AI Services here if you’re interested. Microsoft’s tutorial is probably the easiest and most accessible of the three, but I’d recommend Google’s course if you’re interested in a thorough and easy-to-understand introduction to artificial intelligence and machine learning. Unfortunately, AWS isn’t nearly as learner-friendly as its competitors, but Timo Böhm has still managed to put together a spectacular introduction.
I hope this was an informative and enjoyable experience for everyone following along, thank you for reading! | https://medium.com/analytics-vidhya/learn-to-use-an-artificial-intelligence-model-in-30-minutes-or-less-part-2-windows-7ecf41eb9bb9 | ['Rj Jain'] | 2020-02-20 04:24:02.151000+00:00 | ['Tutorial', 'Naturallanguageprocessing', 'Ibm Watson', 'Artificial Intelligence', 'Beginner'] |
Marshall Chess’ Introduction To Chess Records | He’d learned the family business from an early age. Starting to work in the summer holidays, when he was 13, Marshall’s first job was to break up the cardboard boxes that Chess records would arrive in. “All my summers were there,” he says. “I was always around. I had a little motorbike I would ride to work. It’s almost as if your dad was in the circus… I loved the atmosphere and I wanted to be around my dad. The only way I could have a relationship with him was to go to work.” When he left university, Marshall Chess joined the family business full-time. “I said, ‘Dad, what’s my job?’ And he said, ‘Motherf__ker, your job’s watching me!’”
Immersed in Chess Records from an early age, Marshall Chess finds it almost impossible to pick his favourite songs from the label. “They all live with me,” he says. “It’s part of my life.”
There is, however, one song in particular that he can honestly claim to be his favourite. Marshall Chess reveals it to uDiscover Music below, kicking off an exclusive introduction to Chess Records, as seen through the eyes of a man who was there when most of it happened.
Chuck Berry: ‘Maybellene’ (1955)
Marshall Chess: I have one favourite: Chuck Berry, ‘Maybellene’. That came out in 1955 and I was 13. My life changed. Before that we were a strictly blues label. We sold music to black people, who, in America, didn’t even have record players. And there were no record shops in the black neighbourhoods in the 40s. People bought records at the barber shop, at the general store. The biggest blues hit could have been 20,000, 30,000 [sales]. Most of them sold 8,000, 10,000, 15,000 at 25 cents. It wasn’t a lot of money, in other words. Even though we were having hits, I was living in a third-floor walk-up apartment.
My son, years and years ago, wanted to meet Chuck Berry. He was 88 years old and he was touring his final tour, and he was in New York at a club called BB King’s. I hadn’t seen Chuck in about 10 years. I knew him very well. And I said, “When that came out, everything changed.” You know, we moved to a house. And he took my hand, and tears were sort of in his eyes, and he said, “What are you talking about? Don’t you think my life also changed in 1955?” Because he was the first black guy that made money — enough. He made money and he sacrificed a lot. He gave away the writer’s share on ‘Maybellene’ for the first few years to the DJ, Alan Freed, who broke the record. Played it all night long in New York over and over. So that’s why it’s my favourite. It affected my life so much.
Muddy Waters: ‘Mannish Boy’ (1955), ‘I Just Want To Make Love To You’ (1954)
Marshall Chess: My №2 favourite artist at Chess was Muddy Waters, whom I was also very close with, and also was our first star — our biggest blues star. And also a close friend of my father’s. First time I met him he was like an alien from outta space. He came to the house and I was, I don’t know, I might have been 11 or 10, and he had on a bright-green fluorescent suit with shoes that were made out of — you could see the skin, like a pony skin. You could see the hair on them. He was a sharp-dressed man, with that really high, processed hair. And he came out of his car and he said, “You must be young Chess. I’m here to see your pappy.” And that’s how I met him… I love so many of his songs but I would pick ‘Mannish Boy’ and ‘I Just Want To Make Love To You’.
Bo Diddley: ‘Bo Diddley’ (1955)
Marshall Chess: 1955 was a banner year for Chess and this was one of the first crossovers that white people bought… [Chess] exploded with white people in the UK first. Way before America. America, we noticed when Muddy Waters played the Newport Jazz Festival… and we put out Muddy Waters At Newport [1960]. That album was the beginning of the album business… and we noticed that, in Boston, in the New England area, people were buying that album — more than we’d ever sold. And it was people that went to that festival. That’s when we first saw this white market in America growing.
Howlin’ Wolf: ‘Smokestack Lightning’ (1956), ‘Evil’ (1954)
Marshall Chess: My two favourites — even though I probably have 10 favourites — I would say ‘Smokestack Lightning’ and ‘Evil’… Being around those blues lyrics as a young kid, talking to those guys, what it instilled in me about life at a very young age — about pain and trouble — like the lyric, “Another mule is kicking in your stall.” I didn’t know what that meant. You know what that means? Another man is f__king your wife or your girlfriend. But I would ask that, find that out. I’d have to figure that out when I was 14. So yeah, that changed me immensely as a person.
Sonny Boy Williamson II: ‘Help Me’ (1963)
Marshall Chess: Another artist that, really, I just loved so much was Sonny Boy Williamson. He was such a character. And my favourite song of his is ‘Help Me’. Mainly because, as a young kid I was exposed to all these lyrics — many of them sexual and many of them psychological, like ‘Help Me’. And I’d hear them over and over again. In fact, I always tell people this. They ask me: what did these blues guys talk to you about? I was a kid! You know what they would always ask me? Did I get any yet? Had I had sex yet? “You get any yet, motherf__ker?” I mean, the lyrics are all about women and sex — a lot of ’em. And about problems. And, of course, growing up, I had problems. And ‘Help Me’ — you know, you’ve got that feeling when you’re growing up.
Little Walter: ‘Juke’ (1952)
Marshall Chess: Little Walter changed the whole face of blues. He was a harmonica player in Muddy Waters’ band and he had a very big ego. He wanted to go on his own, and his first record was ‘Juke’, an instrumental. My uncle always used to tell me, “You know, before ‘Juke’, blues bands didn’t have harmonica players. But after ‘Juke’, which was such a big hit, every band had an amplified harmonica.” Miles Davis once told me Little Walter was a genius. He listened to him a lot.
My younger sister, Elaine, they used to always have her listen to a record, both sides, and say, “Which is the A and B?” We felt some melody or something that would attract her would be the right A-side. And with Little Walter, with ‘Juke’, at that time we had a building with an awning in front of it by the bus stop — it was a few feet away. And with no air conditioning, man — hot Chicago, hot summer. Doors open in the summer. And when they were playing Little Walter’s first session, when they were playing that ‘Juke’ record, someone at the front noticed these women all dancing around by the bus stop. And that inspired them to rush that right out.
Chess Soul
Marshall Chess: There was all these Chess hits that I liked. Bobby Moore And The Rhythm Aces, ‘Searching For My Baby’. Loved that. We had these great doo-wop records, and I lived some of the doo-wop. I loved The Moonglows: ‘Ten Commandments Of Love’, ‘Sincerely’. And then you get into the 60s: Fontella Bass, ‘Rescue Me’. Billy Stewart, ‘Summertime’. Etta James, ‘At Last’. And then, of course, The Dells — I could keep naming artists. I loved Rotary Connection, that was my group that I founded. That last track that they made when I was just leaving, ‘I Am The Black Gold Of The Sun’. Fantastic. Great song.
Then you go into what’s called Northern soul now. That blew me away. Only in England, when I discovered all those Northern soul songs. A lot of them I was involved in — executive producing or involved — that were never even hits that Northern soul people love. So that’s also a buzz. It never stops. It’s such an amazing repertoire of music that goes from the 40s right until Chess was sold [in 1969]. We had this tremendous creative output.
Some of the best Chess Northern soul rarities are collected on the 7” box set Chess Northern Soul: Volume III, which is due out on 16 March and can be ordered here.
Join us on Facebook and follow us on Twitter: @uDiscoverMusic | https://medium.com/udiscover-music/marshall-chess-introduction-to-chess-records-d79c2e7b96f6 | ['Udiscover Music'] | 2018-08-10 19:33:39.763000+00:00 | ['Music', 'Features', 'Soul', 'Blues', 'Pop Culture'] |
Rolling goals: How to get things done? | I’ve been trying to write this post for a few weeks now, every time I start writing my mind just start traveling and I cannot get it done. The funny thing is that this post is just about that, I have a really hard time being clear about the things I want to do but I don’t need to.
Once a thing is a must it magically gets crystal clear to me what needs to be done, that’s a piece of cake:
I need to finish my work
I need to buy some food
I need to deliver the paper before the deadline
Now, there are things that I don’t need but want to do, and that’s a whole different story:
I want to create my personal website
I want to write a book on Web Mapping using Mexico as the topic
I want to publish more photos on Unsplash
I want to get an idea for an open source project into an usable prototype
I want to go out running at least three times a week
Those are real items from my list, and sadly, they have been there for a long time. But it gets worse: I thought this was only happening to me, but then, job after job, I realized it is a very common problem in enterprises.
In this post I decided to put a few ideas together on the topic and share with you some insights that seem to be working for me. My only original idea here is the approach to the problem, and probably even that has been discussed more than once in books or other posts. All the ideas are from books that I’ve been reading, and I’ll be happy to share the titles if you are interested. I am bad at keeping track of pages and quotes, so don’t ask me for that; I often change words a lot and just keep sources as inspiration.
I know sometimes things don’t seem to be related, but please bear with me until the end of the post. TL;DR: Goals, like any other thought, evolve inside our heads. Write them down and frame them as much as possible; it does not mean the problem won’t change, but it means you’ll have something to work upon. Changes can be applied incrementally or not, but in both cases that will be part of a new goal.
The first step to solve a problem is to define the problem
First I thought it was procrastination, I thought it was lack of discipline and I thought it was my fault. None of this was true. I am a very active person, I am into many things at the same time, that is true, but I am really efficient doing some stuff that needs to be done, like laundry, washing the dishes or work, I am usually on time for appointments and for things that I need to deliver. So that was not the problem.
Then I discovered that sometimes at work, when I had to do things I did not want to, I would start feeling anxious and getting distracted easily; even if I ended up doing things as expected, it was really inefficient. That was the first clue to finding what my problem really was. The things I didn’t want to do were usually things that I knew would have to be redone in the following weeks, or things I was not sure about the meaning of but did not want to ask, because even if it is not openly accepted, reading minds is a task that is often confused with seniority in a job.
At this point it was clear, I am too lazy to work twice, at least on the same thing… and I get bored.
Make the problem appealing to all parties involved
I had a problem, but it was not only my problem; it was really common on both a personal and a professional level. But we all agree that telling your employer you are lazy is not the best thing to do. So I had a second problem: making others understand that my problem and their problem were really the same problem. I know, it sounds like a tongue twister.
The first argument is, always, that things change: the ‘time is money and business moves fast’ kind of argument. I am not talking about my long-term job expectations, 10 years from now; I am talking about what I’ll be doing today, tomorrow, next week, and how to plan a bit ahead. So I needed to take a deep breath and patiently explain that it is not about flexibility or rigidity. You can be very flexible with proper definition and short-term goals, or very rigid without proper definition and long-term goals.
The second argument, also common, is that polls did not show that people have a feeling of uncertainty or anxiety; that is, tasks are supposedly already well defined. Well, people have feelings of uncertainty about completely unreachable and vague things like crises and the global economy, but they don’t feel anxious about gaining weight because of poor fast-food consuming habits. That’s the way we are; we have biases in place. Our brains trick us into thinking we have things sorted out; otherwise we would not be able to deal with it all. If estimations are systematically wrong and people cannot work without ‘having to confirm some facts,’ chances are the tasks are not well defined, even if everyone feels they are.
The human mind is a marvel of evolution, but it does have its shortcomings: if you don’t have clear, well-defined, and framed goals, chances are that by the time you get there you will have a very different perspective than the one you had when you started. It makes sense, since we all learn by doing, and we are doing things; thus we are learning.
Fitting all pieces together: the solution
I know I started talking about me and my goals and suddenly I was talking about work experiences. Well, I think that is one of the beautiful things about humankind: we are social by nature and we can extrapolate things from different contexts. Suffering those things at my workplace opened my eyes to solving problems I face as an individual.
Now, I’ll start with the work context but end up at the personal level. Agile is a common methodology nowadays, and everyone jumps on board, but everyone has a different definition of what it means. Two core concepts are Acceptance Criteria and Definition of Done, and no, they are not the same concept, or they should not be. Here is the misconception in two lines:
It will be accepted when it is done
It will be done when it is accepted
Can you spot the trick? There is a circular reference! And people will use it at their convenience.
I won’t dig into Agile here, but let’s get this clear: acceptance criteria are about a checklist. How can I say that a given task is done? The answer lies in the description of the task: what was expected of it? Definition of Done is a more general procedure: what does ‘done’ mean within a project? What tests are expected to be applied? The link between the two is this: the Definition of Done is used to create acceptance criteria for each task. The Definition of Done is a theoretical thing (properties satisfied when something is done), and the acceptance criteria are a practical and often pragmatic checklist of things that must be worked on in a task.
In leadership and career counseling they often recommend tangible, actionable items as objectives, and it is usually also mentioned that these must be measurable. Well, the thing is, it is hard to evaluate something if it does not have those properties, for work-related issues and for personal issues as well.
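To make the Definition of Done versus acceptance criteria relationship concrete, here is a deliberately tiny Python sketch; every item in it is invented for illustration, not taken from any real project:

```python
# Hypothetical project-wide Definition of Done: properties that must
# hold whenever *any* task in the project is called done.
definition_of_done = [
    "unit tests pass",
    "code reviewed by a peer",
    "documentation updated",
]

def acceptance_criteria(task_expectations):
    """Turn the theoretical DoD into a practical per-task checklist by
    combining it with what this particular task promised to deliver."""
    return definition_of_done + list(task_expectations)

# A per-task checklist: the shared DoD plus this task's own promises.
checklist = acceptance_criteria(["endpoint returns JSON", "handles empty input"])
print(checklist)
```

The circular reference disappears once the direction is fixed: the DoD is the input, and each task’s acceptance criteria are derived from it plus the task description.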
Often, when we are working on personal projects in our spare time, we want to move fast and cut some corners, but we are biased and we do it in a not-so-intelligent way. We are afraid of overthinking or never getting to action, so we avoid sitting down and defining a plan; but that plan is what we need in the first place, and it will be really helpful to keep us motivated.
Granularity is very important: it refers to how much detail is needed for each definition, or what the minimum information is. There is no simple rule to say how much is enough. It is an introspection exercise to define that. I guess the best guidelines are previous definitions. We have already left the whole Agile thing behind; it is not about time estimation or efficiency, but about well-being and self-awareness. Did it really work? Was I happy with the end results? Did I spend more time or energy than I wanted to? In the end, it all started from things I wanted to do, so there is no right or wrong, only a delta, a difference between actual results and the desired or expected outcome.
Write it down!
It is very important. As I mentioned, our mind is not built in a way that is good at tracking absolute things; everything is relative to our current state. Having a written record can help you realize how your mind plays tricks on you.
Examples
I know it is a complex topic, and this probably won’t make much sense without examples, please feel free to disagree with them but use them as a guide for define your own.
Writing a post
Bad idea: I’ll write a great post about parenting
Good idea: I’ll write a two pages post about the problems that parenting poses on young people with unreliable income
Photo by NeONBRAND on Unsplash
It is hard to define what is great; it also depends on whom you ask, and parenting is a really broad topic. So, saying it is two pages makes you aware that when you have the first page you are halfway there, and framing it to a subgroup makes it easier to keep yourself focused. It says nothing about the title, which can be as short or as long as you wish; it is a definition of the task. If you are a writer with a lot of experience, maybe saying that you’ll write 3 articles a month is more than enough, everything else being assumed to be constant, that is, as it has been the last months. But if you are just starting, maybe even more definition is needed, like writing a paragraph a day on that topic.
2. Contributing to open source
Bad idea: I want to get 5 fixes in a month.
Good idea: I want to contribute 5 code-related fixes in a month on a given project that I’ve been working for some time/ I’m interested in learning and get them merged.
Photo by NESA by Makers on Unsplash
You may think that the hard part is getting the code working; usually it is not. There are tons of guidelines in the projects, and some projects, although not a majority, are open source but do not accept external contributions. If you are not clear enough from the beginning, you might end up discouraged by the first rejection, or you may have a few talks and then abandon ship. Being clear about your expectations helps you build the discipline needed, not the other way around.
3. Reading a book
Bad idea: I’ll read 52 books a year.
Good idea: I’ll read 20 minutes a day of books that I like and 10 minutes of books I don’t like and I won’t read more than one book at the same time.
Photo by Stephen Andrews on Unsplash
Often, once you start reading, you will end up reading a bit more (at least that happens to me), but it is a way to do some small but steady work every day. I think it is as important to set an upper limit as a lower one, because it is possible to stop doing something else and then feel guilty about it, or to avoid starting something you like because you know you’ll spend too much time on it, like video games. | https://medium.com/dev-genius/rolling-goals-how-to-get-things-done-917cda88dedc | ['Migsar Navarro'] | 2020-06-16 08:01:49.257000+00:00 | ['Procrastination', 'Time', 'Software Development', 'Productivity', 'Time Management']
The Design of the Business World (商业世界的设计) | in In Fitness And In Health | https://medium.com/shidanqing/%E5%95%86%E4%B8%9A%E4%B8%96%E7%95%8C%E7%9A%84%E8%AE%BE%E8%AE%A1-dfe503b5d9a8 | [] | 2017-04-25 12:04:50.588000+00:00 | ['Design', 'Business']
4 Things Tom Cruise’s Frustration Showed Us That Being A Leader Is Stressful | 1) They Have To Repeat Themselves nearly ALL THE TIME
“We want the gold standard. They’re back there in Hollywood making movies right now because of us! Because they believe in us and what we’re doing!!!”
Today, we live in a world where there is just too much information to be stored in our heads. There is not enough memory to remember it all, and too much information will gain you no audience at all. That is why leaders must repeat themselves nearly all the time.
It may sound annoying to repeat what you say all the time, parents understand this very much, but it’s necessary.
Repetition is the key to successful communication; it has great power, especially for learning, and it is important in speeches meant to motivate others. Leaders must also refine the way they repeat themselves so that they won’t bore their followers: keep it simple and concise. That way, it reminds their colleagues and followers what they are fighting for.
In Tom’s case, he knew that he is creating a blockbuster movie. Perhaps the crews forgot? Who knows.
In the rant shown above, we can see that he repeated himself three times, each in a different way, around one message: creating a great-quality movie. First, he told them that they are held to a high standard. Second, he reminded the crew that they (I’m not sure who Tom meant by ‘they,’ but I’m assuming stakeholders or producers) are the reason why Hollywood is back in business. Lastly, he ended by telling them that there are people out there trusting the project to work.
Another example of his repetition of the message was when he threatened to fire his crews, and he delivered it repetitively and some with different context:
I have told you and now I want it and if you don’t do it you’re out. If I see it again you’re fucking gone — and you are — so you’re going to cost him his job, if I see it on the set you’re gone and you’re gone. That’s it. Am I clear? You understand what I want? Do you understand the responsibility that you have? Because I will deal with your reason. And if you can’t be reasonable and I can’t deal with your logic, you’re fired. That’s it. That is it. I trust you guys to be here. That’s it. That’s it guys.
As you can see here, because of how serious the Covid situation is in the US, Tom had to ensure that all his people were safe. No protocol could be broken; otherwise, the whole movie would incur more cost if they had to delay the project just because someone broke the protocol.
2) They Work Later Than Others
I’m on the phone with EVERY f*cking studio at night, INSURANCE COMPANIES, PRODUCERS!!
Simon Sinek once said:
“Leaders are the ones willing to look out for those to the left of them and those to the right of them. When it matters, leaders choose to eat last.”
Leaders who prioritize others before themselves will gain even more followers. In the working world, in order for the people you hire to be comfortable having you as their leader, you are going to have to prioritize their well-being before your own.
As the quote above shows, Tom worked harder and later than others, telling everyone that he had to speak with a lot of people to keep their movie in progress at a time like this.
Because, like any leader would think, who else will do it if not him? Sure, maybe he hired people to deal with the other stakeholders, but ultimately, he’s the one who must be making the decision.
Doing this is exhausting. It’s messy and a lot of work to do. But leaders must do it for everyone if they want the business going, especially for Tom’s case. Who else would be dealing with insurance companies and producers than himself? | https://medium.com/live-your-life-on-purpose/4-things-tom-cruises-frustration-showed-us-that-being-a-leader-is-stressful-a20a9da95c94 | ['Nicole Sudjono'] | 2020-12-24 13:24:25.693000+00:00 | ['Leadership', 'Movies', 'Business', 'Motivation', 'Self'] |
Writing About the Self in the Age of Narcissism | But how does she do it? What allows her to disappear into herself? Most artists I’ve known or have studied are exquisitely sensitive and needy creatures, craving acceptance and love to an abnormal degree. They swing wildly between the two poles of grandiosity — “I’m the greatest” and “I’m the worst” and they still have a child-like desire for approval. Okay, maybe I’m just describing myself. But I don’t think I’m alone! Try to name a classic rock band or rapper who didn’t start making music as a way to get love or get laid. On the higher brow end of the spectrum, David Foster Wallace crudely said of writing Infinite Jest, his 1,100 page masterpiece, that it was a way to impress a woman he was crushed out on: “It was a means to her end.” But the motivating force behind any great work doesn’t really matter. Whatever spurs an artist to enter that deep space where creations materialize is fair game. When I recently started a Facebook thread on this subject, my writer friend Mela Heestand commented,
“The deeper you go into the particular self, the closer you get to the universal. I absolutely want any work of art to go as far as the artist is willing to go into the self. It’s generous, even if the artist is also looking for adulation.”
Whew, that’s good to hear and I know it to be true. Of course, once an artist becomes a celebrity, what started out as productive self-absorption often becomes a true psychological disorder: malignant narcissism.
My parents had a bit of this more malignant strain, whereas I suffer from narcissistic injury, according to my Jungian therapist who specializes in this kind of thing. My injury really has two main symptoms. First, it frequently presents as an angry voice in my head, which, when observed from the proper angle, is actually hilarious. Pompous, humorless, demanding, insulting, aggrieved, this voice is the personification of a non-stop tantrum, raging at the injustice of being “put upon” by the world in all its disappointing reality: “How dare that person drive so slow in front of me! Why is that guy standing on the sidewalk so close to where I’m locking my bike? I shouldn’t have to fold my pile of laundry — I have pressing ideas to get down on paper!” Progress for me is exposing these kinds of thoughts to the people around me so we can all laugh at them together.
The other symptom rears its ugly head whenever I’m tasked with listening. If anyone talks to me for more than three minutes, I start to check out. My eyes get heavy, my mind wanders, I fight to stay engaged. It’s sometimes physically painful to have to sit there and keep listening. I love my wife and children and my closest friends and I’m committed to giving them what they need from me, so I keep trying. But the task is Herculean! “I’m bored. I’m hungry. Did he always have that mole on his neck? Why is she telling me all of this? I listened to my relatives talking at me for most of my life. When will it be my turn to talk?”
The malignant narcissists out there are probably beyond hope. They lack the self-awareness and empathy needed for therapy (think: Tony Soprano, Trump). It’s probably best to just steer clear of them. But I believe their children have a fighting chance (think: AJ, Meadow, and dare I say Barron?). I think of myself as a recovering narcissist; my narcissism is latent. Because of various factors in my childhood, my tendency to think and act like an asshole is something to which I’m predisposed. But with the right tools I’m able to live close to an asymptomatic life, with a minimum of assholery. In the past, I’ve had streaks of full-blown narcissistic mania and my behavior was ugly and shameful (see my new book Qualification for a blow-by-blow account). But, paradoxically, having the thing I have makes it possible for me to go deeper into myself than most. And my intention is to share what I find so that it actually helps others. We’re traumatized into being artists and it’s a curse and a gift. | https://medium.com/swlh/writing-about-the-self-in-the-age-of-narcissism-e244b0e4382d | ['David Heatley'] | 2019-12-19 21:41:48.426000+00:00 | ['Trump', 'Creativity', 'Autobiography', 'Memoir', 'Narcissism'] |
ENGLISH-TO-MALAYALAM MACHINE TRANSLATION USING PYTHON | This article is based on a project which I did with my friends Nahid M.A, Paul Elias and Shiv Shankar Nath.
Today, the internet supports a wide array of languages. So, the concept of machine translation has indeed emerged as an important factor in connecting people who speak different languages. In this article, we are going to take a look at the process of translating English to Malayalam using Transfer Rules.
WHAT IS MACHINE TRANSLATION?
Machine translation can be defined as the process by which software converts text or speech in one language to another language. In other words, it is the study of designing systems that translate text from one natural language to another. Machine translation helps people from different places to understand an unknown language without the aid of a human translator.
WHY MACHINE TRANSLATION?
Machine translation is considerably cheaper than human translation, and an MT system can sift through extremely large amounts of data within a very short span of time. Computer programs can translate enormous quantities of data consistently within a small time frame; done manually, the same work would take weeks or even months to complete.
“Without translation, I would be limited to the borders of my own country. The translator is my most important ally. He introduces me to the world.”
– Italo Calvino
TRANSFER RULES IN MACHINE TRANSLATION
Transfer rules can be defined as a set of linguistic rules specifying correspondences between the structure of the source language and that of the target language. Making use of transfer rules is one of the most common methods of machine translation.
MT using transfer rules can be divided into three steps :
Analysis of the source language text to determine its grammatical structure
Transfer of the resulting structure to a structure suitable for generating text in the target language
Generation of the output text
In this project, we make use of the Malayalam transfer rules. These are a set of rules which have to be followed in order to construct Malayalam sentences with correct grammatical structure:
Image Source : Anitha T Nair, Sumam Mary Idicula, 978–1–4673–2149–5/12/31.00 IEEE 2012
All the “codes” mentioned in the above table represent the various parts of speech.
Image Source : https://pythonspot.com/nltk-speech-tagging/
Various transfer rules were used in this program in order to attain accurate results. NP (Noun Phrase) and VP (Verb Phrase) are considered as the parent tags.
These are some of the Transfer Rules that were implemented:
If the parent tag VP contains child tags VBZ NP, it is reordered as NP VBZ
If the parent tag NP contains child tags NP PP, it is reordered as PP NP
If the parent tag NP contains child tags NP VP, it is reordered as VP NP
If the parent tag VP contains child tags VBG NP, it is reordered as NP VBG
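As a rough sketch of how one of these reorderings could be coded (this is an illustrative toy, not the project's actual implementation), the first rule above, reordering "VBZ NP" to "NP VBZ" inside a VP, might look like:

```python
# Toy version of the first rule: inside a VP, children tagged "VBZ NP"
# are swapped to "NP VBZ" before word-for-word translation.
def reorder_vp(children):
    """children: list of (tag, words) pairs for a VP's immediate children."""
    tags = [tag for tag, _ in children]
    if tags == ["VBZ", "NP"]:
        return [children[1], children[0]]
    return children

vp = [("VBZ", ["is"]), ("NP", ["a", "car"])]
print(reorder_vp(vp))  # [('NP', ['a', 'car']), ('VBZ', ['is'])]
```

The remaining rules follow the same pattern: match a sequence of child tags under a parent tag and emit them in the target-language order.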
POS Tagging of the input text
PACKAGES IMPORTED
DATASET USED
The Olam English-Malayalam dataset has been used for this project. This is a growing, free and open, crowd sourced English-Malayalam dictionary with over 200,000 entries. The dataset consists of English words, their Malayalam definitions, and part / figure of speech tags.
Link to the dataset : https://olam.in/open/enml/
Olam Dataset
ALGORITHM
SAMPLE OUTPUT
Consider the input text “She is driving a car”
Initially, the POS tagging of each word takes place, as shown below.
POS Tagging of input text
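Since the tagging step was shown as a screenshot, here is a toy stand-in (a hand-rolled lookup table; the real pipeline would use a trained tagger such as NLTK's `pos_tag`, which is statistical and handles words outside a fixed vocabulary):

```python
# Minimal lookup tagger for illustration only.
TOY_TAGS = {"she": "PRP", "is": "VBZ", "driving": "VBG", "a": "DT", "car": "NN"}

def toy_pos_tag(sentence):
    # default unknown words to NN, a common fallback choice
    return [(w, TOY_TAGS.get(w.lower(), "NN")) for w in sentence.split()]

print(toy_pos_tag("She is driving a car"))
# [('She', 'PRP'), ('is', 'VBZ'), ('driving', 'VBG'), ('a', 'DT'), ('car', 'NN')]
```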
Reordering of words
After applying the transfer rules and translating the words, we get the output.
Output Text
In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language. — Page 98, Deep Learning, 2016.
ADVANTAGES OF MT USING TRANSFER RULES
Machine Translation using Transfer Rules has its advantages over other conventional translation methods. These include:
This method takes the grammatical structure of the translated Malayalam sentence into account.
This method produces more meaningful outputs compared to Rule-Based MT (RBMT).
Using POS tags, we can identify the part of speech each word represents in the sentence.
DISADVANTAGES OF MT USING TRANSFER RULES
This method of Machine Translation also has its fair share of disadvantages. These include:
In order to improve the accuracy, we need to add a large number of rules.
In some cases, POS tags are assigned to the words without considering the context of the sentence. This can affect the accuracy of the output.
Writing the transfer rules requires a lot of time. Moreover, good linguistic knowledge is necessary: one needs to be well versed in the language in order to deduce the transfer rules.
Inability to accurately translate sarcasm and idioms. In such cases, the literal meaning of the input is considered. The non-literal, expressive meaning of idioms such as “It’s a piece of cake” and “Let the cat out of the bag” will not be considered.
CONCLUSION
To conclude, Machine Translation is the task of automatically converting source text in one language to text in another language. In this case, we are implementing MT using Transfer Rules to convert English to Malayalam. This method can even be applied to other languages. Throughout the years, the accuracy of MT systems has been constantly improving. Now, we have AI translation models which are capable of producing highly accurate results at a very fast rate.
We can only wonder what the future of MT holds. Whatever it turns out to be, it will undoubtedly keep producing significant ripples in the language industry.
REFERENCES | https://medium.com/analytics-vidhya/english-to-malayalam-machine-translation-using-python-e61f3c76deee | ['Joel Jorly'] | 2020-12-22 16:40:22.094000+00:00 | ['Machine Translation', 'AI', 'Machine Learning', 'Malayalam', 'Language Translation'] |
Exploring token economics for electronic health records | There are a number of projects trying to build a blockchain based electronic health record (EHR) right now. Here are some of the reasons:
Multiple disparate parties with equally diverse incentives need access to the same data
Having a single network with interoperable data is a dream
The integrity of that data is of the utmost importance
Control of patient data should be distributed to the edges, into the hands of patients, who the data really is about
The open architecture associated with blockchains would be a welcome development in a world dominated by the walled gardens of Epic and Cerner
The Center for Biomedical Blockchain Research documents 28 companies tackling personal health records alone. Most, if not all of these platforms, are trying to enable some form of data monetization and selling data is the primary transaction that takes place on them.
Moreover, the majority of these 28 have had an initial coin offering (ICO), otherwise known as a token sale. The reality is that these tokens are primarily used as a fundraising mechanism and their functional purpose is treated as an afterthought, if thought of at all. Their tokens have suffered accordingly.
The reality today is that these tokens are primarily used as a fundraising mechanism and their functional purpose is treated as an afterthought, if thought of at all.
There are a wide array of token models, but particularly popular within the current generation of blockchain enabled EHRs is the “medium of exchange” token model, where a token is used as the native payment within an ecosystem. This suffers from the well documented “velocity” problem, which has significantly contributed to healthcare ICOs’ poor performance to date.
Token economics
This is unfortunate and a disservice to everyone. If designed right, tokens can be an extremely powerful tool. Token economics or tokenomics is a burgeoning field for the study of how to design tokens. The objective of token economics is to use economic incentives to achieve a desired objective. Restated another way, token economics tries to design a system to achieve a desired objective and make money for token holders.
For a blockchain based EHR you could have several objectives, but I think the principle objective should be to maximize the sharing of data. This is for two reasons: the first is that incumbent systems lack the business incentives to share data today, and the second is that the future of healthcare lies in AI, and data will be the fuel for that AI.
Moreover, getting token economics right in a blockchain based EHR could yield a number of further benefits:
Driving network effects
Effective and decentralized network governance
Fundamentally new business models
The leveraging of cryptoeconomic primitives like token curated registries
Allowing regular people to gain from the increased value of the network that is created
Encouraging more adoption as a result of an appreciating native token
How do we design a system that achieves this? Well, token economics isn’t an exact science yet, but we do have a few tools in our metaphorical token toolbox. There are a few token mechanisms we can introduce that could move us towards our goal of data sharing and accrue value to the token holders.
Token mechanisms for a blockchain based EHR
Our blockchain based EHR in this example will need its own native token and I’m going to label this new token the Blockchain Enabled EHR Token, or BEET.
Token burning
A simple token burn could be executed like this:
A percentage of the value of all data sales denominated in BEET are sent to a burn address and forever removed from the supply
The above percentage parameter could be agreed upon by on-chain protocol governance
The goal of burning tokens is to reduce the available supply, which, with all else held equal, should cause an appreciation in price.
In theory, this should encourage more data sales in aggregate, as more sales results in a further reduced supply and further appreciation in price.
Staking
Staking is the process of “setting aside” tokens for a period of time.
A percentage of the value of all data sales denominated in BEET could go towards a pool used to pay node hosters/miners of this network
If a participant agrees to stake a certain amount of tokens, say, $25,000, then the fee they pay is reduced
Different amounts staked and duration of staking could merit different levels of fee reduction
By staking your tokens you are reducing the available supply, if only temporarily, as well as contributing to an overall lower velocity for the token. Both of these should add upwards pressure to the token and hopefully cause appreciation in price.
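One hypothetical shape for the "different amounts staked merit different levels of fee reduction" idea (every tier and number here is invented for illustration):

```python
# (minimum tokens staked, fee discount), largest tier first -- hypothetical schedule
TIERS = [(100_000, 0.50), (25_000, 0.25), (5_000, 0.10)]

def network_fee(base_fee: float, staked: float) -> float:
    for min_stake, discount in TIERS:
        if staked >= min_stake:
            return base_fee * (1 - discount)
    return base_fee  # no qualifying stake, full fee

print(network_fee(100.0, 25_000))  # 75.0
print(network_fee(100.0, 0))       # 100.0
```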
Governance
Governance refers to a broad set of processes enshrined by code, formal or informal processes, and norms that govern how a blockchain changes over time. Fred Ehrsam lays out a great case for why governance is important here.
I think governance in a blockchain enabled EHR is interesting because it distributes control between the many different parties that use an EHR. In particular, it gives a way for patients to exercise real influence over decisions they wouldn’t otherwise have.
An example where governance could be important is in deciding the underlying data standards:
There will need to be a shared data standard all participants agree upon. Occasionally this will need changes as the underlying standard is upgraded. Those changes could be proposed and ratified using an on-chain governance mechanism.
Any participant could propose a data standard change and conduct a simple yes/no poll.
Network users, whether providers, pharma companies, universities, patients, patient advocacy groups, payers, etc. could vote yes/no, with weight proportional to their tokens.
The side with more tokens behind it would win and the proposed change is either adopted or rejected.
There are many variants of this, a popular one being quadratic voting.
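To make the difference concrete, here is a toy tally comparing plain token weighting with quadratic voting, where committing t tokens buys only sqrt(t) votes (the ballot numbers are invented):

```python
import math

# tokens each voter commits to a side of the yes/no poll
ballots = {"yes": [400, 100], "no": [900]}

linear = {side: sum(tokens) for side, tokens in ballots.items()}
# quadratic voting: committing t tokens buys isqrt(t) votes
quadratic = {side: sum(math.isqrt(t) for t in tokens)
             for side, tokens in ballots.items()}

print(linear)     # {'yes': 500, 'no': 900} -> the single whale wins
print(quadratic)  # {'yes': 30, 'no': 30}   -> a tie; concentrated holdings count for less
```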
By governing the network this way you resolve disputes between parties, let anyone participate in that resolution, and create a shared data standard.
Parties will have preferences on which data standards are used. Perhaps your company's engineers are used to working with one particular standard and would have to spend time learning a new standard and updating current systems. Obviously, you would be willing to spend some money to prevent this from happening.
Cryptoeconomic primitives
Cryptoeconomic primitives are the “well established, generic building blocks” of the crypto world. Jacob Horne has a stellar introduction you can read here. Some of these, like token curated registries (TCRs) and curated proof markets, need a native currency to work. MedCredits is using a TCR to curate a decentralized registry of physicians, as an example.
Here are three potential applications of cryptoeconomic primitives that could use BEET:
A TCR in this system could be used to curate a list of healthcare providers and their associated wallet addresses
Curated proof markets could be used to price different data sets
Having ownership of the data from a clinical trial on the same ledger as ownership of IP that results from that clinical trial poses some interesting possibilities. Patients could be given the opportunity to buy into that IP via bonded curves.
At the end of the day primitives are simply tools that can be combined and interchanged to achieve particular goals. There are an infinite number of potential applications and there will be more cryptoeconomic primitives to come. The trick is to find one or a combination that will help a network achieve a goal (i.e more flow of data) as well as create value for token holders.
Concluding thoughts
I want to be clear that I don’t know what the right model for success is. No one does. But, the above mechanisms are levers we can use to try and nudge a network towards our goals while creating value for token holders. Implementing them is simply that, a nudge in the right direction.
People behave in odd ways, often contrary to how you would expect, and without trying these systems out in real life we won’t know what works or what doesn’t work. The first generation of token economics was homogeneous and disappointing, the next should be experimental and full of bold pioneers. And, when someone does get it right, it will achieve a lot of good and be amazing to watch in action.
Onwards!
Connect with me
You can also follow me on here on Medium or on Twitter. I appreciate feedback! Please write me with your ideas for potential token economic mechanisms. | https://medium.com/hackernoon/exploring-tokenomics-for-electronic-health-records-89de5598053 | ['Robert Miller'] | 2018-10-17 13:46:02.204000+00:00 | ['Blockchain Healthcare', 'Health', 'Electronic Health Records', 'Blockchain', 'Token Economics'] |
Mastering Data Aggregation with Pandas | Data aggregation is the process of gathering data and expressing it in a summary form. This typically corresponds to summary statistics for numerical and categorical variables in a data set. In this post we will discuss how to aggregate data using pandas and generate insightful summary statistics.
Let’s get started!
For our purposes, we will be working with The Wines Reviews data set, which can be found here.
To start, let’s read our data into a Pandas data frame:
import pandas as pd
df = pd.read_csv("winemag-data-130k-v2.csv")
Next, let’s print the first five rows of data:
print(df.head())
USING THE DESCRIBE() METHOD
The ‘describe()’ method is a basic method that will allow us to pull summary statistics for columns in our data. Let’s use the ‘describe()’ method on the prices of wines:
print(df['price'].describe())
We see that the ‘count’, the number of non-null values of wine prices, is 120,975. The mean price is $35 with a standard deviation of $41. The minimum wine price is $4 and the maximum is $3,300. The ‘describe()’ method also provides percentiles: here, 25% of wine prices are below $17, 50% are below $25, and 75% are below $42.
Let’s look at the summary statistics using ‘describe()’ on the ‘points’ column:
print(df['points'].describe())
We see that the number of non-null values of points is 129,971, which happens to be the length of the data frame. The mean score is 88 points with a standard deviation of 3. The minimum score is 80 and the maximum is 100. For the percentiles, 25% of wines score below 86 points, 50% below 88, and 75% below 91.
USING THE GROUPBY() METHOD
You can also use the ‘groupby()’ to aggregate data. For example, if we wanted to look at the average price of wine for each variety of wine, we can do the following:
print(df['price'].groupby(df['variety']).mean().head())
We see that the ‘Abouriou’ wine variety has a mean price of $35, ‘Agiorgitiko’ has a mean price of $23, and so forth. We can also display the sorted values:
print(df['price'].groupby(df['variety']).mean().sort_values(ascending = False).head())
Let’s look at the sorted mean prices for each ‘province’:
print(df['price'].groupby(df['province']).mean().sort_values(ascending = False).head())
We can also look at more than one column. Let’s look at the mean prices and points across ‘provinces’:
print(df[['price', 'points']].groupby(df.province).mean().head())
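Going one step further than the walkthrough above, `groupby` can also compute several statistics at once via `.agg()`. A self-contained sketch, using a tiny stand-in frame in place of the wine CSV:

```python
import pandas as pd

# tiny stand-in for the wine reviews frame
df = pd.DataFrame({
    "province": ["A", "A", "B", "B"],
    "price": [10.0, 20.0, 30.0, 50.0],
    "points": [85, 87, 90, 92],
})

# mean, min, and max of both columns per province, in one call
summary = df.groupby("province")[["price", "points"]].agg(["mean", "min", "max"])
print(summary)
print(summary.loc["A", ("price", "mean")])  # 15.0
```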
I’ll stop here but I encourage you to play around with the data and code yourself.
CONCLUSION
To summarize, in this post we discussed how to aggregate data using pandas. First, we went over how to use the ‘describe()’ method to generate summary statistics such as mean, standard deviation, minimum, maximum and percentiles for data columns. We then went over how to use the ‘groupby()’ method to generate statistics for specific categorical variables, such as the mean price in each province and the mean price for each variety. I hope you found this post useful/interesting. The code from this post is available on GitHub. Thank you for reading! | https://towardsdatascience.com/mastering-data-aggregation-with-pandas-36d485fb613c | ['Sadrach Pierre'] | 2020-09-03 21:35:45.919000+00:00 | ['Programming', 'Software Development', 'Python', 'Data Science', 'Technology'] |
Forwarding CloudWatch Logs to Logentries using CloudFormation | Photo by Robert Larsson on Unsplash
The Logentries documentation describes how to setup CloudWatch Logs forwarding via the AWS Console. This process is tedious and time-consuming if you need to forward many log groups, so let’s save time by using CloudFormation!
Note: Logentries is now called Rapid7 InsightOps, but that’s a mouthful, so I’ll refer to the service by its former name.
High-level architecture
CloudWatch Logs can stream log data to other AWS services by attaching a subscription filter to a log group, with a filter pattern determining which logs are sent to downstream services (and which aren’t). We’ll use this feature to selectively stream data to a Lambda function which will upload the logs to Logentries.
This diagram depicts how log data moves from the originating AWS resource, through CloudWatch Logs to Lambda, and finally, Logentries.
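In CloudFormation terms, the subscription side of that diagram boils down to two resources: a subscription filter on the log group, and a permission letting CloudWatch Logs invoke the Lambda function. A minimal sketch follows; the logical names, log group name, and function reference are placeholders for illustration, not the article's final template:

```yaml
Resources:
  ForwarderInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref LogentriesForwarderFunction  # placeholder: the function defined elsewhere
      Principal: logs.amazonaws.com                   # service principal for CloudWatch Logs

  AppLogSubscription:
    Type: AWS::Logs::SubscriptionFilter
    DependsOn: ForwarderInvokePermission
    Properties:
      LogGroupName: /aws/lambda/my-app  # placeholder log group
      FilterPattern: ""                 # empty pattern forwards every log event
      DestinationArn: !GetAtt LogentriesForwarderFunction.Arn
```

With the resources templated like this, forwarding a second log group is just another `AWS::Logs::SubscriptionFilter` block rather than another round of console clicks.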
Preparing the deployment package
Download the official Logentries Lambda function from GitHub. This is the code which will send our CloudWatch Logs to Logentries. Technically, it can be used as-is; however, to reduce the size of the deployment package, we’ll remove the unneeded files.
Being Covid-19 Negative is The Best Christmas Present | We’re less than a week away from Christmas and it seems that despite the harrowing year we’ve all endured, people have finally lost their patience.
Stores are full, nobody is social distancing and businesses are defiantly open, with restaurants and bars offering indoor dining.
We are failing ourselves.
The number of covid cases in the U.S. has made it abundantly clear that we do not have this pandemic under control.
Hospitals are at capacity and then some. Healthcare workers are at their wit’s end, overworked, underpaid, and forced to work in unsafe conditions. Nurses, the canaries in the coal mine, have been screaming their lungs out all year about the reality of what is going on in hospitals, and they are ready to quit.
Beyond this pandemic, our healthcare providers will continue to suffer from PTSD thanks to the horrors they’ve witnessed, the lives they couldn’t save.
We are not listening anymore.
The idea that because someone is so bored from quarantining that they should be allowed to just ignore the state of the world, the real suffering of many, and do whatever they want is ABSURD.
What justification would suffice? It is infuriating beyond words.
It’s far from over.
We are in the very thick of the worst of this pandemic. We haven’t yet seen the full extent of the November-Thanksgiving surge. Many of the people still in the hospital being kept alive by intubation will die, and they will be replaced shortly after by more people who will inevitably succumb to this terrible disease.
This is the reality we face.
Have hope, but be practical.
The vaccine is not a safety net. We are already starting to see shortages. There is no definite schedule as to when anyone outside of healthcare workers will receive it.
It is not going to be there to save you if you get sick in the next few months.
When in the history of the United States of America have you seen such a massive rollout of vaccines? It is going to be a bumpy process, made all the more difficult by those who attempt to sabotage the effort with their cries of FREEDOM! and illogical conspiracy theories.
It’s not the most wonderful time of the year
Do yourself and everyone you know a favor, stay home. Social distancing works. If you must go out, wear a mask and be extra careful. Wash your hands with actual soap and water, not just anti-bacterial gel, and call it a day.
Be mindful of others as you go about your day. And if you must see your family, get tested. | https://medium.com/illumination-curated/being-covid-19-negative-is-the-best-christmas-present-c515545f3150 | ['Valerie Mercado'] | 2020-12-21 16:39:45.259000+00:00 | ['Covid 19', 'Health', 'Self', 'Christmas', 'Pandemic'] |
How To ‘Manage’ A Woman | How To ‘Manage’ A Woman
When she has the audacity to challenge you
Image: Wikipedia
The most important thing to do when a woman has challenged or chided you in some way is to firmly remind her that you are the man. You are the reasonable and rational one, the one who truly understands the situation in a way that she doesn’t. In other words, you are the adult and she is the petulant child who is emotionally spouting off.
But because she’s such a petulant creature, mostly you can’t just come right out and say that directly. Instead, you have to handle her, and manage her, using both your innate superiority and all the gaslighting you can muster to help rein her back in. This isn’t always an easy task, so here are some tips about how to accomplish this more effectively:
The first thing to try is flaring your metaphorical neck frill to exert dominance — unless of course, you have a literal neck frill. Then use that. Flaring your neck frill lets her know that she has stepped over the line and that you are displeased by this. It gives her the opportunity to back down without a further fuss. Unfortunately, in this day and age, this heavy-handed form of dominance posturing rarely works on a woman who has done the laborious work of deprogramming and healing herself from a lifetime of exposure to this kind of domination. In most cases, you are going to have to use a lot more finesse.
The next step is to appeal to your connection. It’s well known that women are all about relationships, and you should take any opportunity to help her to understand that you are only seeking to correct her because you care. What kind of patriarch, er, I mean, friend would you be if you didn’t help her to better understand the error of her ways? Say things that sound supportive but that actually undermine her credibility at the same time, such as “I want you to keep sharing your ideas because that’s something that’s important to you.” Up the effectiveness of this by refusing to actually engage with her ideas in any substantive way.
Many women will respond appropriately to this kind of handling. They don’t actually want to have the relationship disrupted and will jump at the chance to smooth things over once you’ve made it clear that you are telling her to pipe down not because you don’t like her, but because you actually like her a lot and you are just looking out for her best interests.
Use phrases like, “I think we can both learn a lot from each other,” and she’ll be so disarmed by your caring that she’ll never bother to ask you what exactly you’ve learned from her. If that still isn’t enough, you’re going to have to bring out your best ‘splaining tools. Remind her that you not only know more about the world than she does but you actually know more about her than she does. That’s an important one. After all, she’s really just kind of a big kid thrashing around. You are the sensible one. When you tell her about what she really meant, what she believes, what her goals are, etc., you can gain the paternalistic upper hand by reminding her of that fact.
If she bristles at this, assure her that you are not condescending; you are simply trying to stand up for yourself and your beliefs. Isn’t that allowed? Are you expected to walk on eggshells around her? Most women have been deeply conditioned over a lifetime to prioritize the feelings of others, particularly the men in their sphere, so this actually has a good chance of shutting her down. She will start to feel guilty about standing firm in her own position if you make her believe that this is somehow being unfair to you. That’s how you win!
You can expand on this by telling her things like, “Trying to start a culture war isn’t going to get you anywhere,” even though she is merely advocating for basic human rights. Remind her that her advocacy for herself and others is really cute, but also kind of meaningless. She doesn’t have the insight, the intelligence, or most importantly, the power, to make any real difference in the world, so she should just stop attempting it — unless it’s undertaken in a way that you approve of and support.
Remind her to watch her tone and her rhetoric because if you find it uncomfortable and disconcerting, she really needs to be brought down a peg or two. Remind her who the parent is and who is the child. You can usually rely on her inclinations to prioritize your comfort over hers for this, as well as the cultural notion that angry girls are unattractive. Being angry about something, particularly as relates to herself, is a breach of the social contract wherein women nurture and care for those around them, lovingly putting their own needs last in the equation. If you can find a way to remind her of this without saying it outright, you can usually get her back in line and thoroughly managed without her even realizing quite what happened.
Remember that even women who seem confident and strong in their beliefs can be reined back into their correct societal place if you know how to manage them properly using a combination of bullying, charm, and misdirection. Girls, even before they reach puberty, know that they should not challenge boys, and should not exhibit anger towards them — not if they want to be well-liked and considered “cool.” Many women are still run by this same narrative and can be coerced and cajoled into behaving appropriately if you know what to do — and now you do — whether you actually have a working neck frill or not.
This story was not inspired by either James or Nat. They are both wise enough and self-confident enough to treat the women in their lives as partners and equals.
© Copyright Elle Beau 2020
Elle Beau writes on Medium about sex, life, relationships, society, anthropology, spirituality, and love. If this story is appearing anywhere other than Medium.com, it appears without my consent and has been stolen. | https://medium.com/inside-of-elle-beau/how-to-manage-a-woman-f7ac80015275 | ['Elle Beau'] | 2020-08-30 19:46:30.911000+00:00 | ['Society', 'Satire', 'Feminism', 'Equality', 'Essay'] |
Google Tests Larger URL in SERPs | A good testing strategy is an integral part of improving every website, and Google is always testing something new. Whether it is the color of the background behind AdWords ads, or a recent test to change the color of their black top navigation bar, Google continues to investigate which tweaks can help users find the results they are looking for as quickly as possible.
I just spotted what looks to be one of the current tests Google is performing. The green color remains the same at hex color #093, but the font size has increased from 13px to 16px.
Google has certainly tested other elements of the URL before, but this seems to be one of the first times they have increased the size of this critical page element. For example, Google does show enhanced URLs in organic results for those who markup their breadcrumbs with Schema.org or other rich snippets.
This could benefit larger brands with recognizable domain names, and I wonder how the increased URL size will impact CTR of paid AdWords ads compared to organic results. | https://medium.com/jordan-silton/google-tests-larger-url-in-serps-8a3ea501903c | ['Jordan Silton'] | 2016-04-29 01:34:01.096000+00:00 | ['SEO', 'Google', 'Testing'] |
How to not be dumb at applying Principal Component Analysis (PCA)? | Laurae: This post is an answer about how to use PCA properly. The initial post can be found at Kaggle. It answer three critical questions: what degree of information you allow yourself to lose, why truncating the PCA, and what should be fed in your machine learning algorithm if you intend to understand what you are working with.
DoctorH wrote: Let’s say I have a classification problem. First, I will do some feature engineering, possibly using one hot encoding. This may mean that I end up with, say, 500 features. Presumably, the correct thing to do at this point is a PCA. But how? Okay, here are some explicit questions: Should the PCA be used merely for feature selection? In other words, should I look at the pearson correlation of the features with the first few PCA vectors, and let that guide which features to choose? Or perhaps it is better to forget the old features altogether, and train my algorithm on the PCA vectors? When applying the algorithm that finds the PCA vectors, should I feed into it only the 500 features, or is it better to also feed into the category column (one hot encoded) as well? Obviously the test data doesn’t have a category column, but one can do the following: use the PCA vectors trained on the 500 features + the category column (one hot encoded), and then project the test data to the linear subspace spanned by the projection tof those vectors to the first 500 coordinates. Presumably that might be better, because then those vectors might detect patterns regarding what correlates with various categories, no? Do people do that sort of thing? Why is it a bad idea, if they don’t?
Answering point by point your questions.
DoctorH wrote: 1. Let’s say I have a classification problem. First, I will do some feature engineering, possibly using one hot encoding. This may mean that I end up with, say, 500 features. Presumably, the correct thing to do at this point is a PCA. But how?
Not always. The question remains: it depends on the objective you have. If:
You are looking for maximum performance: you take all PCA and initial features and feed them through an L1 regularization to do “fast” feature selection, or you use any other feature selection method you like. You can also take just the first principal components (e.g., the components covering the top 95% of variance).
You are looking for maximum interpretability: do not use PCA unless your data is in a good shape afterwards. See picture below.
DoctorH wrote: 2. Should the PCA be used merely for feature selection? In other words, should I look at the pearson correlation of the features with the first few PCA vectors, and let that guide which features to choose? Or perhaps it is better to forget the old features altogether, and train my algorithm on the PCA vectors?
Yes and no. Principal components are all uncorrelated with each other (correlation = 0). Higher variance concentrated in a smaller number of variables does not mean it is better. See the picture below.
In any case, it depends on the machine learning algorithm you are going to apply. For instance:
If you are going to apply an algorithm that is not robust to correlation (ex: LDA, Linear Regression…): you must clear out all high correlations, which might otherwise shut down the performance of the algorithm, and also clear out all the correlation chains (i.e., break your one-hot encoded feature once: remove one column from the final encoding). Or you just use all PCA vectors.
If you are going to apply a correlation-robust algorithm (ex: Random Forests, xgboost…): you do not need to care about correlation.
DoctorH wrote: 3. When applying the algorithm that finds the PCA vectors, should I feed into it only the 500 features, or is it better to also feed into the category column (one hot encoded) as well? Obviously the test data doesn’t have a category column, but one can do the following: use the PCA vectors trained on the 500 features + the category column (one hot encoded), and then project the test data to the linear subspace spanned by the projection of those vectors to the first 500 coordinates. Presumably that might be better, because then those vectors might detect patterns regarding what correlates with various categories, no? Do people do that sort of thing? Why is it a bad idea, if they don’t? When applying the algorithm that finds the PCA vectors, should I feed into it only the 500 features, or is it better to also feed into the category column (one hot encoded) as well?
PCA looks at variance. If you do not standardize your features, they will have different weights in the PCA. As a good starting point, it is common to standardize to {mean, variance} = {0, 1}, thus {mean, std} = {0, 1}.
If we assume your category column expands to 200 one-hot columns, the total of these 200 columns should have the same weight in the PCA as one other column. Therefore, you would standardize these 200 columns to {mean, variance} = {0, 1/200} = {0, 0.005}, i.e., {mean, std} = {0, ~0.0707}. Hence, these 200 columns (derived from 1 original column) would together have the same weight as one other column.
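To make that weighting concrete, here is a small pure-Python sketch (mine, not from the original answer) that rescales a block of k one-hot columns so that the whole block carries the variance of a single standardized column:

```python
import statistics

def standardize(col, target_var):
    """Center a column and rescale it to a chosen variance."""
    mean = statistics.fmean(col)
    std = statistics.pstdev(col)
    scale = (target_var ** 0.5) / std
    return [(x - mean) * scale for x in col]

# Toy data: 4 one-hot columns derived from one categorical feature.
k = 4
one_hot = [
    [1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 1],
    [0, 0, 0, 1, 0, 0],
]

# Give each of the k columns variance 1/k so the whole block
# carries the same weight in PCA as one {0, 1}-standardized column.
scaled = [standardize(col, 1 / k) for col in one_hot]

total_var = sum(statistics.pvariance(col) for col in scaled)
print(round(total_var, 6))  # ~1.0, i.e., the weight of one ordinary column
```

With this pre-scaling in place, the PCA cannot be dominated by the sheer number of dummy columns a single categorical feature expands into.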
Presumably that might be better, because then those vectors might detect patterns regarding what correlates with various categories, no?
Yes and no. Check picture below.
Do people do that sort of thing? Why is it a bad idea, if they don’t?
Yes and no. It all started in the literature, and some researchers warned it is not always a good idea and you must check what you have after you used the transformation. This also applies to similar methods, such as Independent Component Analysis, MCA, FA… The best picture to understand why is below (for the fourth time :p ). | https://medium.com/data-design/how-to-not-be-dumb-at-applying-principal-component-analysis-pca-6c14de5b3c9d | [] | 2017-01-03 19:44:37.568000+00:00 | ['Machine Learning', 'Design', 'Data Science', 'Statistics', 'Data Visualization'] |
Learning D3 — Multiple Lines Chart w/ Line-by-Line Code Explanations | Set Up the Canvas
To set up the canvas for D3 graphs, in your HTML file:
Line 4: Load D3 directly from d3js.org — so you don’t need to install locally.
Line 5: Load colorbrewer — we are going to use a color palette from this package.
Line 8–30: Style section to style different elements.
Line 34: onload=“lineChart()” means we are telling the system to load the lineChart() function immediately to show D3 graphs after the page has been loaded.
Line 36–37: Create a SVG in the size of 1200px by 750px for us to put graphic elements in later. Load the grid image from the URL to set it as the background image for the SVG canvas.
Scale values for x-axis and y-axis
Now we move on to create the D3 lineChart() function. In this example, our data is the seven-year ad spending data for each of the six main media channels, pulled from eMarketer.com.
Line 4–47: Input data.
Line 50–51: Set the left and top margin. Because we will draw our x-axis and y-axis from the left and from the top, we want to leave some space on each side so the labels will be completely shown.
Line 54–58: We need to tell D3 that the values in the year column are years, not integers, so we scale them correctly when drawing the x-axis. d3.timeParse(“%Y”) converts the input data in the year format (%Y) to a format that D3 recognizes as years. Then we use the .forEach() function to pass each element in the year array through it.
Line 61–62: When drawing an axis, we need to scale the value range so it will draw correctly scale-wise on the canvas. d3.extent returns the range of the year. Then we use d3.scaleTime() to scale the time, pass the range to .domain() and then scale it to the range we will draw the axis on [leftMargin, 900] .
If we use d3.extent for the y-axis
Line 65–66: We will do a similar scaling for the y-axis. But instead of using d3.extent , we find the maximum of the value using d3.max and add the top margin to make sure we are leaving enough space on top of the y-axis. Notice that for the y-axis, the range is [600,0] , because the y-axis is drawn from bottom (600) to top (0).
Origin (0,0) of the SVG Canvas is at the top-left corner
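D3 is not required to see why the flipped range works. The helper below is a simplified stand-in for what d3.scaleLinear() computes between a domain and a range (it ignores clamping, ticks, and the other features of the real scale):

```javascript
// A minimal stand-in for d3.scaleLinear(): maps [d0, d1] onto [r0, r1].
function makeLinearScale([d0, d1], [r0, r1]) {
  return (value) => r0 + ((value - d0) / (d1 - d0)) * (r1 - r0);
}

// y-axis: data values 0..10 mapped onto pixels [600, 0].
// The flipped range means larger values land closer to the top (y = 0),
// because the SVG origin (0,0) is the top-left corner.
const y = makeLinearScale([0, 10], [600, 0]);

console.log(y(0));   // 600 -> bottom of the chart
console.log(y(10));  // 0   -> top of the chart
console.log(y(5));   // 300 -> middle
```

The same interpolation with range [leftMargin, 900] is what positions the x-axis, which is why no flipping is needed there.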
Draw x-axis and y-axis with D3
Line 2–3: Set up the xAxis function we will call later. d3.axisBottom() is a function that will create a horizontal axis, ticks will be drawn from the axis towards the bottom, labels will be below the axis as well.
Line 5–9: Draw the x-axis. It will be drawn from the origin (0,0) top-left corner, so we need to move it down using translate(0,620)
Line 10–13: Append x-axis label, and position it to be at the middle-center below the x-axis — .attr(“x”,(900+70)/2) and .attr(“y”, “50”)
Line 16–18: Set up the yAxis function we will call later. d3.axisLeft() is a function that will create a vertical axis; ticks will be drawn from the axis towards the left, and labels will be on the left side of the axis as well. .ticks(10) specifies the number of ticks we want to show.
Line 20–24: Draw the y-axis. We move it a little bit to the right so we will have the left margin left for labels, and a little bit down so it will intersect the x-axis using .attr(“transform”,`translate(${leftMargin},20)`) .
Line 25–30: Append the y-axis label. By default, the text is drawn from left to right, so we need to rotate it anti-clockwise by 90 degrees .attr(“transform”, “rotate(-90)”). "text-anchor” is used to tell d3 that the (x,y) position of the text should be based on “start”, “middle” or the “end” of the text.
Your output so far should look like below.
x-axis and y-axis with D3
Draw multiple lines with D3
Line 2–4: In order to draw multiple lines with each line representing one media, we need to group the data by media using the .nest() function. .key(d=>d.media) tells d3 to group the data by media column.
Line 6: Always a good idea to use console.log() to print out the data object so you can get a concrete idea of what it looks like.
Line 9–10: We use the .map() function to return the array of keys (media channels) in the nested data. Then we use scaleOrdinal(), a function to scale ordinal data, to match each color in colorbrewer.Set2 to each media channel.
Line 17: Use the nested data .data(sumstat) so a line will be drawn for each group.
Line 19–25: Draw the lines by appending “path” . The attr(“d”) defines the path to be drawn, within it, we call the d3.line() function that will create the d attribute of the path following the sequence of points we will define. Set x-coordinate to be the year, y-coordinate to be spending, for the curve, we use the curveCardinal type d3.curveCardinal.
Line 27: For the color of each line, we call the color() function we created before to give each group its assigned color.
Line 32–41: We will draw a circle for each data point to highlight that those are the discrete data points we have vs. continuous data throughout the years. To do this, we will use data instead of the nested data sumstat , because we are not drawing one circle for each group. .attr(“r”) defines the size of the circle, .attr(“cx”) defines the x-coordinate of the center of the circle, .attr(“cy”) defines the y-coordinate of the center of the circle. Finally, we use the color() function to give each circle color based on its media.
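If you want to see the shape of sumstat without opening the browser console, the grouping that d3.nest().key(d => d.media) performs can be reproduced with a plain reduce. This is only an illustrative stand-in (the real d3.nest also supports rollups, sorted keys, and deeper nesting levels), and the sample rows are made up:

```javascript
// Reproduce the shape of d3.nest().key(d => d.media).entries(data):
// an array of { key, values } objects, one per media channel.
function nestBy(data, keyFn) {
  const groups = data.reduce((acc, row) => {
    const k = keyFn(row);
    (acc[k] = acc[k] || []).push(row);
    return acc;
  }, {});
  return Object.keys(groups).map((key) => ({ key, values: groups[key] }));
}

// Made-up sample rows in the spirit of the ad-spending data.
const data = [
  { media: "TV", year: 2019, spending: 70 },
  { media: "TV", year: 2020, spending: 66 },
  { media: "Digital", year: 2019, spending: 132 },
];

const sumstat = nestBy(data, (d) => d.media);
console.log(sumstat.map((g) => g.key)); // ["TV", "Digital"]
console.log(sumstat[0].values.length);  // 2 rows in the TV group
```

Binding this array with .data(sumstat) is what lets D3 draw one path per group.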
Your output so far should look like below.
Append legend, source, and title with D3
The final touches of this chart are its legend, source, and title. We will cover annotation and tooltips in our next tutorial. These are important but often-neglected elements of an effective chart (more on this here).
Line 2–7: Set everything up for drawing a legend by appending g to group all the elements, entering the nested data sumstat so we draw one circle for each group, assigning class .attr(“class”, “legend”) for styling.
Line 9–13: We first draw the circles of the legend. We want to position them in one column, which means each circle will have the same x-coordinate, .attr(“cx”,1000), while the y-coordinate increases by 30px from one to the next, .attr("cy",(d,i)=>i*30+355); i indicates the position number in the array. Again, color will be assigned using the color() function we created before.
Line 15–18: We create the text labels in a similar way to the circles, setting the x-coordinate a bit to the right of the circles: .attr(“cx”,1020)
Line 21–39: Finally, we add in the titles and source. For a single chart, it’s very easy to do. We just append text to the position we want. The code should be pretty self-explanatory. I recommend using live-server so you can instantly see the position of the text as you change your code.
Your final output should look like below.
Multiple Lines Chart with D3.js
Hope this post is helpful for you. As always, feel free to reach out if you have any questions. You can find the complete code on my Github.
Finally, a little preview — for the next post, I will go through how to create hover-over tooltips and annotations with D3. Follow me so you can be notified when the post is up! | https://medium.com/javascript-in-plain-english/learning-d3-multiple-lines-chart-w-line-by-line-code-explanations-40440994e1ad | [] | 2020-12-15 09:53:08.774000+00:00 | ['Visualization', 'JavaScript', 'D3js', 'Web Development', 'Programming'] |
B2B Marketing Methodology (Part 2): Why You Shouldn’t Do Mass Broadcasting, and How to Use Inbound Marketing to Turn Your Product into Its Own Advertising | https://medium.com/y-pointer/b2b-saas-content-marketing-ae078808d295 | ['侯智薰 Raymond Ch Hou'] | 2019-05-16 16:40:21.124000+00:00 | ['Work', 'Product', 'Business', 'B2B', 'Marketing']
How to Build a Reporting Dashboard using Dash and Plotly | A method to select either a condensed data table or the complete data table.
One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present:
Code Block 17: Radio Button in layouts.py
The callback for this functionality takes input from the radio button and outputs the columns to render in the data table:
Code Block 18: Callback for Radio Button in layouts.py File
This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the selected dates (via the callback statement Output('datatable-paid-search', 'data')), this callback changes the columns presented in the data table based upon the radio button selection (via the callback statement Output('datatable-paid-search', 'columns')).
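The column-switching logic is easy to isolate from Dash itself. The sketch below uses hypothetical column names (not the article's actual schema) to show the kind of helper such a callback can delegate to: given the radio selection, return the subset of column definitions to render:

```python
# Hypothetical column definitions in the format Dash data tables expect.
ALL_COLUMNS = [
    {"name": "Campaign", "id": "campaign"},
    {"name": "Spend", "id": "spend"},
    {"name": "Revenue", "id": "revenue"},
    {"name": "Revenue YoY (%)", "id": "revenue_yoy"},
    {"name": "Clicks", "id": "clicks"},
    {"name": "Impressions", "id": "impressions"},
]

# The subset shown when the radio button is set to "condensed".
CONDENSED_IDS = {"campaign", "spend", "revenue", "revenue_yoy"}

def columns_for(radio_value):
    """Return the column list for the 'condensed' or 'complete' radio choice."""
    if radio_value == "condensed":
        return [c for c in ALL_COLUMNS if c["id"] in CONDENSED_IDS]
    return list(ALL_COLUMNS)

print([c["id"] for c in columns_for("condensed")])
# ['campaign', 'spend', 'revenue', 'revenue_yoy']
```

Inside the real callback, you would simply return columns_for(radio_value) as the columns output.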
Conditionally Color-Code Different Data Table cells
One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table to be highlighted based upon a metric’s value; red for negative numbers for instance. However, conditional formatting of data table cells has three main issues.
There is lack of formatting functionality in Dash Data Tables at this time.
If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly.
There is a bug in the Dash data table code in which conditional formatting does not work properly.
I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide:
Code Block 19: Conditional Formatting — Highlighting Cells
The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash.
*This has since been corrected in the Dash Documentation.
Conditional Formatting of Cells using Doppelganger Columns
Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements:
Code Block 20: Adding Doppelganger Columns
Then, the conditional cell formatting can be implemented using the following syntax:
Code Block 21: Conditional Cell Formatting
Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values.
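As a sketch of the pattern (with made-up column names), the rule list can be generated programmatically, one rule per hidden/visible pair. Note that recent Dash releases call the filter key filter_query, while older releases used filter, so adjust the key to your version:

```python
# Build one conditional-style rule per (doppelganger, visible) column pair.
# The filter tests the hidden "_conditional" column, but the styling is
# applied to the visible, formatted column.
PAIRS = [
    # (hidden column used for the comparison, visible column that gets styled)
    ("Revenue_YoY_percent_conditional", "Revenue YoY (%)"),
    ("Spend_YoY_percent_conditional", "Spend YoY (%)"),
]

def negative_value_rules(pairs):
    rules = []
    for hidden, visible in pairs:
        rules.append({
            "if": {
                "filter_query": "{{{0}}} < 0".format(hidden),
                "column_id": visible,
            },
            "color": "red",
        })
    return rules

rules = negative_value_rules(PAIRS)
print(rules[0]["if"]["filter_query"])  # {Revenue_YoY_percent_conditional} < 0
```

The generated list can then be passed as style_data_conditional, which keeps the rules in one place as more doppelganger columns are added.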
The complete statement for the data table is below (with conditional formatting for odd and even rows, as well highlighting cells that are above a certain threshold using the doppelganger method):
Code Block 22: Data Table with Conditional Formatting
I describe the method to update the graphs using the selected rows in the data table below. | https://medium.com/p/4f4257c18a7f#906f | ['David Comfort'] | 2019-03-13 14:21:44.055000+00:00 | ['Dashboard', 'Towards Data Science', 'Data Science', 'Data Visualization', 'Dash'] |
Launch AWS instance and attach volume using AWS CLI & script | For getting the instance-id, you can create a command for getting the instance id using the query and filters. Though you can also use the simple command, but that won’t help us in automation, that we’ll do in the latter part of this article.
aws ec2 attach-volume --device /dev/xvdb --instance-id i-0159851f2556c1a04 --volume-id vol-0536d4b73a5937242
Part II — Automating the things using scripting
Tip: To read the full code at once, go to the end of the article
For automating the above things, means for creating those resources with a faster pace and no manual involvement, you just need to do a few more things.
First, you need to notice that the commands we discussed above are for the Windows Command Prompt or PowerShell, and if you try to run some of them in Linux, they will give errors. For automating, we are going to write a bash script, and for that you need a bash shell, which you can get from any Linux distro or even Git Bash.
So, Let’s start the scripting part
The first thing, you need to do is to declare the variables for the tags or the name of resources.
#!/bin/bash

#Declaring Variables
keyname=testingclikey
instancename=testinginstance
sgname=testingsg
ebsname=testingebs
Now, let’s say you want to create the key pair from bash. But here, you need to note that if you are going to give a tag to the key pair, as we have done in the command at the start, it will give an error. This is because we have used curly braces, {}, in that command, which are intended for other things in Bash. So, to tackle this error, you can use double quotes “” as done in the command below:
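To see what bash's brace expansion actually does to an unquoted tag argument, try this harmless echo experiment (the value "demo" is just a placeholder; under bash, the unquoted version is rewritten into two separate words before the command ever runs):

```shell
# Brace expansion demo: bash rewrites unquoted {a,b} before the command runs.
unquoted=$(echo Tags=[{Key=Name,Value=demo}])   # bash: two words, braces gone
quoted=$(echo "Tags=[{Key=Name,Value=demo}]")   # quotes keep it intact

echo "$unquoted"
echo "$quoted"    # Tags=[{Key=Name,Value=demo}]
```

This is why the tag specification must be wrapped in double quotes when the command moves from PowerShell into a bash script.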
#creating the key pair
aws ec2 create-key-pair --key-name $keyname --tag-specifications "ResourceType=key-pair,Tags=[{Key=Name,Value=$keyname}]" --query "KeyMaterial" --output text > $keyname.pem
Similarly, you can create the security group:
#Creating the security group
aws ec2 create-security-group --group-name $sgname --description "created from CLI" --vpc-id vpc-50f0e456
Now, the heart of Scripting
Till now, everything was really fine, but when you are going to create the inbound rule for the security group, you need to retrieve the group-id of the security group that you have created and then pass that group-id to another command.
So, for getting the security group info, we can use the aws ec2 describe-security-groups command. But that will show you the info about all the SGs, and in JSON format. So, we need to format that output and then pass it to other commands. For that, you can use the following command:
aws ec2 describe-security-groups --query "SecurityGroups[].GroupId" --filters Name=group-name,Values=$sgname
If you have noticed, that the above command is in windows and the output is not what we require, so, now switch to Linux, and let’s do some change in the output.
Now, the output is in the shape of what we require, so, instead of printing it in the console, let’s store it in the variable sgid. Here, for formatting the output, we are using sed and tr. See the command below:
sgid=`aws ec2 describe-security-groups --query "SecurityGroups[].GroupId" --filters "Name=group-name,Values=$sgname" | sed -n 2p | tr -d \"`
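You can convince yourself of what the sed and tr pipeline does without touching AWS at all, by feeding it a mock of the CLI's JSON output (the group id below is made up):

```shell
# Mock of what `aws ec2 describe-security-groups --query "SecurityGroups[].GroupId"`
# prints: a JSON array whose 2nd line holds the quoted group id.
mockoutput='[
    "sg-0abc1234def567890"
]'

# The article's pipeline: keep line 2 only, then strip the double quotes.
sgid=$(printf '%s\n' "$mockoutput" | sed -n 2p | tr -d \")

# Note: the JSON indentation survives the pipeline, so trim whitespace as well
# before handing the id to another command.
sgid=$(printf '%s' "$sgid" | tr -d '[:space:]')

echo "$sgid"   # sg-0abc1234def567890
```

The same pattern (query for one field, pick the line, strip the quotes) is what the later volumeID and instanceID commands rely on.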
Now, we have the security group id stored in sgid variable. So, let’s use this and create inbound rules.
#Adding the inbound rules to the security group created
aws ec2 authorize-security-group-ingress --group-id $sgid --protocol tcp --port 22 --cidr 0.0.0.0/0
For launching the instance, the command is the same as discussed previously except that we are now using the variables inside the command:
#Launching the instance
aws ec2 run-instances --image-id ami-0e306788ff2473ccb --count 1 --instance-type t2.micro --key-name $keyname --security-group-ids $sgid --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$instancename}]"
For Creating the volume, not much difference except using double quotes and variable in place of the name of the volume.
#Creating the Volume
aws ec2 create-volume --availability-zone ap-south-1a --size 1 --tag-specifications "ResourceType=volume,Tags=[{Key=Name,Value=$ebsname}]"
Now, for attaching the instance volume, you need the volume id and the instance id and then, pass the values to another command. To achieve, we have used the following commands:
#Storing the value of Volume ID in the variable volumeID
volumeID=`aws ec2 describe-volumes --query "Volumes[*].VolumeId" --filters "Name=tag:Name,Values=$ebsname" | sed -n 2p | tr -d \"`

#Storing the value of instance ID in the variable instanceID
instanceID=`aws ec2 describe-instances --query "Reservations[*].Instances[].InstanceId" --filters "Name=key-name,Values=$keyname" | sed -n 2p | tr -d \"`

#Attaching the volume to the instance
aws ec2 attach-volume --device /dev/xvdb --instance-id $instanceID --volume-id $volumeID
So, Let’s sum up everything into one script file:
GitHub Repo: https://github.com/theadarshsaxena/AWS-CLI-Scripting/ | https://theadarshsaxena.medium.com/launch-aws-instance-and-attach-volume-using-aws-cli-script-bc0fecbcd6a9 | ['Adarsh Saxena'] | 2020-10-25 08:00:54.552000+00:00 | ['AWS', 'Automation', 'Aws Cli', 'Cloud', 'Scripting'] |
Snow White and the Seven Unwanted Messages in Children’s Literature | Childhood is wonderful, like a flower on its way to full bloom, like a butterfly emerging gracefully out of a cocoon. A child’s mind is young, willing to take in all the information it can. It is curious and impressionable and more susceptible to the messages seemingly harmless story books convey than we realize.
Progress in children’s literature occurs at an almost glacial pace. For far too long, children are subjected to the same old fairy tales, the same old ‘classics’, and, consequently, the same old gender stereotypes incorporated in them. These distorted values are instilled into them through books, through movie adaptations, through cartoons inspired by these stories. When the world children are exposed to seems to persevere to maintain the flawed ideas of the role of young boys and girls, it is hardly surprising how the glaring differences between the two genders still persist.
One can identify 7 common themes across most children’s stories. Here are 7 insidious messages children should not be hearing right before they go to sleep at night (or ever):
1. Damsel in distress
No one is unfamiliar with the poor damsel who always seems to find herself in an unpleasant situation that she herself does nothing to get out of. She is delicate and helpless like a deer caught in headlights, and someone somewhere is always out to get her. Her main contribution to a story is often simply looking beautiful and passively accepting her circumstance. Every now and then, she may shed a tear, but that’s about where it ends. Too often, she is not given any agency or opportunities to make decisions. The damsel is captured and then the damsel is rescued. Her role is responding to what is being done to her, even when she is the protagonist of the story. This teaches young girls with immense potential to never tap into their capabilities.
2. Knight in Shining Armour
Of course, one virtue the female lead of the story is permitted to have is that of patience. She is good at waiting, and waiting is all she does. With no attempt to escape a distressing situation, she awaits her knight in shining armor to get rid of her troubles for her. He always succeeds (there was never any doubt about that, he is a Man, after all!), and relieves her of her many woes. He is always the solution, always the light at the end of the tunnel. This only teaches young girls that dependency will be rewarded.
3. Older Vicious Women
The portrayal of older women and older men in children’s fairy tales is quite unfair, to say the least. Older women, in most cases, are described as horrendous in appearance. They are malicious and always wish to inflict harm on the young female protagonist. They are witches, evil stepmothers, and sisters. Their sole purpose is to make the life of the young girl miserable because they envy her beauty.
Older male characters, on the other hand, are powerful kings and kind, elderly men with a pure heart. This contrast between the two contributes significantly to the way the perception of men and women is shaped in a child’s mind.
4. Vanity
Speaking of older women envying the beauty of a young girl, what is with their unrealistic preoccupation with good looks? Why are they obsessed only with the preservation and acquisition of a pretty face? Women are shown to be vain to such an extent that, not only do they spend excessive amounts of time pursuing beauty but are willing to cause harm to another woman in the process. This is a common stereotype that is constantly reinforced in children’s stories and fairy tales. The impact of this can be seen even today when girls are ridiculed for wanting to look ‘good’, for it is automatically assumed to be the only thing they exert any amount of effort into.
5. Rivalry
Another common theme recurring in these fairy tales is of women constantly being pitted against each other. They are almost never there to empower each other. They are in battle, always in opposing teams, and fighting over things like beauty and men. This not only reinforces the point about vanity but also establishes beauty and men as things worth quarreling over. It is no wonder so many young girls still feel like they are in competition with each other.
6. Body Type Being Associated with Traits
The media receives significant criticism for portraying only one type of body as the ‘ideal’, while the fairy tales kept away on the bookshelf of every house get away with doing the same. The female lead character is always seen to have a slender body and tiny waist. It is designed to be pretty and graceful and dainty and easy for a prince to lift when he inevitably rescues her. Not only is this body type always saved for the female protagonist, but any other type is also assigned to women who are evil and hold bad intentions. This leaves any body type that is not slender associated with villainous traits and the quality of being extremely wicked. Embossed into the minds of young girls at a very early age, this leads to problems relating to low self-esteem. While the media is accepted to be quite foul in its discrimination between body types, the reinforcement of the association of moral goodness with one’s appearance in a seemingly safe storybook is particularly harmful.
7. Virtues in men and women
Finally, the fairy tales and stories narrated to children have strong implications in the way they define what is expected of them in the future. The benchmark of excellence for both differs vastly and is infuriatingly low for girls. Girls are encouraged to aim to achieve perfection in appearance, in being well-groomed, in being still and delicate, and in stroking the ego of the male hero. Boys are encouraged to fight for what they want, to be brave. Why is she not taught to stand up for herself and others? She is to be kind to everyone but herself. She is celebrated for doing pretty much nothing other than be innocent and cower in the face of a challenge. Strength and courage are unimaginable traits for her to possess.
The tame and gentle way in which children’s literature inculcates certain unwanted values in children makes it all the more insidious. It is left unquestioned and considered innocent, which is perhaps why it is so difficult to see the need to change it significantly. When themes with the potential to trap young children in rigid gender stereotypes appear at an age when they are so gullible, one must acknowledge that it only goes downhill from there. Room must be made for empowering young girls and balancing the roles of male and female characters in writing meant for younger readers. It is time to stop propagating these messages under the mask of bright colors and magical characters.
Art Docents of Los Gatos Launch “Big Data” Art Workshop For Sixth Graders | A big data art workshop at Fisher Middle School in Los Gatos. The spirals illustrate how each student spends their time during the day. Each spiral represents a student’s time. The different colors represent different activities, with the favorite activity in the center, and other activities in descending order spiraling out. The number of dots for each activity represent the proportion of time they spend on the activity. The students were by turns fascinated or jealous at the amount of time their peers got to do certain things — especially how much time their peers got from their parents for “screen time.”
These multi-colored spirals lined up on a giant sheet of white paper might look as mysterious as Egyptian hieroglyphics to the uninitiated, but they’re actually representations of how a class of sixth graders spend their time at Los Gatos’ Raymond J. Fisher Middle School.
The spirals form a “data portrait” inspired by the Austin, Texas-based Laurie Frick, an artist who “explores the bumpy future of data captured about us.”
Each spiral pictured here represents the hours spent in a typical sixth grader’s day: The colored dots at the center of a spiral represent a child’s favorite category of activity, with the other categories (sleep, meals, school, homework, screen-time, physical activity, creative and “other,” spiraling outwards to the least favored.
The kids created these data portraits during a workshop and lesson about the concept of “big data.” It’s a new 90-minute class taught during a math period by the Art Docents of Los Gatos, and is part of a push by the Los Gatos Education Foundation and the school district to incorporate more of a STEAM approach to learning.
STEAM stands for science, technology, engineering, the arts, and mathematics; it refers to an educational approach that guides student inquiry, dialogue, and critical thinking by incorporating elements of all five disciplines into lessons.
The idea behind the innovative and unusual new lesson is to engage students in a simple, accessible way in order to introduce them to the more complex idea of “big data,” and how this modern phenomenon can reveal important insights about the world around us.
The parent volunteer-run workshop does this by engaging the kids in a relatively simple and fun hands-on charting and art activity. The insights revealed about their days tend to surprise and excite the sixth graders as the spirals show them an aspect of their lives they hadn’t explicitly thought about before. (They also love learning about how their peers spend their time, and how much screen time their buddies are allowed to have every day, for example.)
The Art Docents’ Big Data Workshop showcases the work of several notable big data artists to illustrate how modern computing and art can quantify and reveal what’s otherwise invisible around us.
The group’s goal is to tie as much of the idea of big data and big data art to aspects of the kids’ lives. So, for example, the kids learn about the eCloud dynamic sculpture of weather data above the walkway between gates 22 and 23 at the San Jose International Airport. Through discussion and the introduction of key terms, the workshop also introduces the idea of data science as a career.
“I think if you expose kids to the vocabulary, it opens their world to new possibilities,” explains Julie Ferrario, the Art Docents of Los Gatos’ president.
In addition to videos of Frick’s art works of self-quantification, the lesson shows how 25-year-old climate scientist and artist Jill Pelto highlighted the declining population of Coho Salmon through an illustrated chart of the population data; how artist Stanford Kay used data visualization in the shape of a foot to illustrate the relative “carbon footprints” of countries around the world; how artist Aaron Koblin visualized and found interesting patterns in flights across the U.S. and New Year’s Eve text message flows across wireless phone networks.
The lesson explains what Big Data is and how activities in our everyday lives generate it, through our use of social media and navigational devices, for example. The kids learn, for example, that the millions of Instagram photos, Twitter messages and uploaded documents online add up to “quintillions” of bytes exchanged online every day, and how that data can be analyzed and interpreted to detect trends. The kids learn that hometown company Netflix analyzes viewing patterns to create shows, and that other entities are using and manipulating big sets of data to build self-driving cars, and to find cures for diseases, among other things. Kids also learn what kinds of professional roles data scientists can take.
“Art is as much of a product of the technologies available to artists as it is of the sociopolitical time it was made in, and the current world is no exception,” noted writer Jacoba Urista in a 2015 article in The Atlantic.
The class was inspired by Rick Smolan and Jennifer Erwitt’s coffee table book “The Human Face of Big Data,” which was sponsored by Silicon Valley companies Cisco, EMC, Originate, Tableau, and VMware (and FedEx). Ferrario’s husband, Dale, works at VMware and suggested that Julie consider teaching about Big Data Art as part of the Art Docents’ repertoire of art appreciation classes in the Los Gatos Union School District.
“Given where we live, I think the expectation from parents is that the kids are going to get the latest innovations as part of their STEM experience at school, so I was trying to somehow get technology more deeply into what [the Art Docents] do all the time,” he said.
The Art Docents of Los Gatos’ curriculum team worked with district staff and iterated several versions of the workshop over the past year and a half to incorporate the innovations of Silicon Valley into a fun and educational part of middle school math class.
The non-profit, founded in 1973, is a volunteer-run arts and cultural educational organization. It serves more than 3,000 students in the Los Gatos Union School District, which includes Daves Avenue, Blossom Hill, Van Meter, and Lexington elementary schools and Fisher Middle School. | https://laistirland.medium.com/art-docents-of-los-gatos-launch-big-data-art-workshop-for-sixth-graders-64197b3266d4 | [] | 2019-07-10 15:13:18.993000+00:00 | ['Big Data', 'Arts In Education', 'Silicon Valley', 'Education', 'Art'] |
If You’re Struggling to Create, You Should Ambulate | If You’re Struggling to Create, You Should Ambulate
How walking will unlock your inner creativity
Stuck for ideas? If you’ve signed up to feed the digital beast with data, then you know the feeling. She’s a hungry taskmaster. The need to generate content on an almost continuous basis can become overwhelming. If you’re staring at a blank page or screen and that nagging little doubt monster is lurking, then it’s time to go for a walk.
Those of you who follow my disjointed verbal ramblings know by now that I am in the Philippines. Before I settled, I would walk across wide swathes of it, from village to village. Just me, my backpack and the open road. Before you send me lunch money, it was out of choice, driven by my desire to experience the country on a level no tourist package could ever provide.
Discovering the secret known only to those who walk.
It was scary at times, interesting, exciting, entertaining and immensely taxing, physically. I shed a small whale's worth of western blubber in a year. This was in most part due to a lack of beer, sweltering temperatures, walking, and an aversion to eating food that was staring back at me. I broke a finger, suffered a mild concussion, lost pints of blood to the local mosquitoes, and walked my way through six pairs of shoes.
When I eventually found the place I wanted to call home, I was feeling twenty again, rather than the fifty-something laps shown on the clock.
During my hundreds and hundreds of miles covered on foot, I discovered a few things about myself, the most important, without doubt, was that I was comfortable being alone with myself. That took a while. What’s of relevance here though is that I also discovered a secret. It’s a secret known to all walkers and one which they guard jealously.
The rhythm of placing one foot in front of the other for an extended period of time is cathartic. It is liberating, illuminating, healing and so much more. It mutes the noise of the world, allowing your creative mind to escape the deafening cacophony of those mundane daily duties that can crush creativity.
If you’re going to ambulate with a view to create, it is an activity best undertaken alone and I’ll explain why, how and when you should plan your walks.
The only rule is no tech.
The only rule that applies to walking for creativity is no tech. Smartphones and their digital counterparts are the very antithesis of creativity. You can bring along your smartphone after, and only after, you’ve engaged its flight mode. There can be no interruptions. Fight the separation anxiety and rather lock it in the car or leave it on your desk.
Next, pick a route that will give you at least 30 minutes of uninterrupted walking. Don’t skimp on time, it takes a while to get those creative juices flowing. Drive away from neighborhoods, neighbors and any potential interactions, to a place you know you won’t be disturbed. Even a local sports track will do the trick. If you can’t escape people, stick in your earphones. Don’t plug them in, just use them to appear unsociable. Print a T that says “Pee Off. I’m Walking.” Wear a hoodie. Do whatever it takes to ensure your walk is going to be purely ‘you’ time.
It should take between five and ten minutes for your cognitive brain to switch off and the creative side to kick in. It’s the daydreamer you’re looking for. Find and follow those thoughts, don’t push. Let it come naturally. There is no mantra required, no affirmations or catchy phrases whilst facing the rising sun. Just walk. One foot in front of the other and the muse will find you. I guarantee it.
When she does, you can be overwhelmed by the flow. You will have days when the floodgates really open and it becomes too much. That is where your phone or a dictaphone can come in handy for voice notes. Ideas are the rare commodity that fuels creativity and you should absolutely never waste them or risk them being lost later to life’s drudgery.
I’m a little old fashioned and still use a small digital dictaphone. It’s light, has insane battery life and cannot tempt me to check for WiFi when I pass below a promising looking palm tree. It’s only occasionally that I find I have a need for it and as I said earlier, try for zero distractions.
You can use your walking time to explore your characters, play out plots in your mind, draft frameworks for new pieces or simply contemplate the meaning of life if you’re that way inclined. If you blog, you’re definitely going to need a recording device as the ideas come thick and fast. Don’t just think plot, think life. Walking, it turns out, accesses your onboard philosopher.
I have a theory about our creativity. I see it as a bottomless pit filled with endless ideas and when creativity fails us, it is not that the pit has run dry, it is simply our extraction method that needs attention.
Walking rips that pit right open.
Some of you are going to raise the time issue. Where are you supposed to find that extra half an hour? Let me ask you this: how much unproductive time do you waste sitting in front of your keyboard, tapping your fingers and waiting for the smallest glimmer of an idea you can build on? Technology doesn’t foster creativity, it merely serves as an outlet for it. Your half an hour walk will provide you with a week's worth of ideas and your waistline will be rooting right alongside the muse for her outing.
So, if your brain tends to fancy, take it for a walk and set it free. It will soar to heights you never imagined possible and return laden with gifts from the muse above. | https://medium.com/lighterside/if-youre-struggling-to-create-you-should-ambulate-9fc6bd8a301f | ['Robert Turner'] | 2019-12-19 02:41:36.379000+00:00 | ['Writers Block', 'Walking', 'Lighterside', 'Creativity', 'Writing Tips'] |
Lin-Manuel Miranda’s Revisionist History | Hamilton An American Musical
The thrill and the joy of Lin-Manuel Miranda’s musical “Hamilton” cannot really be overstated. I had been waiting, basically since it opened, to see it. New York wasn’t possible nor was Chicago, but, once it landed in Toronto, I was not throwing away my shot. Tickets were bought, hotel rooms arranged and an outfit worthy of the occasion selected. And then: Covid hit. Yet, thanks to the wonder of Disney+ I was really not throwing away my shot as I ended up watching “Hamilton” in my mother’s house, in a comfy chair; wearing shorts and a pyjama t-shirt, double fisting free cocktails and feasting on charcuterie all while having a front row seat and the ability to rewind should there have been anything I missed. Needless to say, it was awesome, and it is also unlikely I will ever buy a ticket to go to the actual theatre again.
It wasn’t just the setting, the snacks or the seating though that made the “Hamilton” experience exceptional. Those things gilded the lily for sure, but, it was the experience of “Hamilton” itself that was magic and lived, incredibly, up to all of its hype as things rarely, if ever, do. It should be stated upfront that while “Hamilton” was nearly universally praised it was also, fairly and rightly, criticized as ignoring altogether Alexander Hamilton’s role in slave trading and Native American genocide and those are, obviously, not small things and, again, obviously, rightful criticisms. While Lin-Manuel Miranda tackles Hamilton’s shortcomings through his character flaws including loquaciousness and arrogance, as well as his extramarital affair, there were glaring omissions that are better encompassed in Ron Chernow’s book on which the musical was based.
While the revision of history through its omission is problematic, Miranda’s ability to divine the future through the lens of the past, thereby revising both, is pure and unadulterated magic. There is no other word for a play and a playwright who manage to foresee a future while portraying the past and this is where “Hamilton” gets so, so, so much right. The cast is full of black and brown bodies inhabiting a landscape that, in 1776, belonged almost exclusively to white men. It tells the story of these white men through rap, hip hop, reggae and R&B; forms and styles appropriated now being reclaimed by those who created them. And, it’s not that these forms and styles can’t be shared, on the contrary, as white bodies inhabit the landscape of “Hamilton” too. What those white bodies don’t do in this play though is dominate or subjugate.
Moreover, it is the women of “Hamilton”, the Schuyler sisters: Angelica, Peggy and the future Mrs. Alexander Hamilton, Eliza, who claim centre stage. In fact, spoiler alert: Eliza lives to tell her own story, as much, and perhaps even more, than her husband Alexander’s. It was she who lived long enough to work to establish the Washington monument, move toward abolishing slavery, enshrine her husband’s legacy and open an orphanage in New York City. It is astounding to see women as more than figments of others’ imaginations and, instead, having rich, full and complex lives all their own.
While Eliza lived, Alexander did not, having been killed by a man as ambitious, as driven, as arrogant, loquacious and domineering as himself: Aaron Burr. Toxic masculinity man; it’ll get you every time. In all seriousness though (which toxic masculinity, toxic anything really, most certainly is), these men were portrayed as thoughtful, idealistic, scholarly and, like the play itself, flawed. Yet, in an incredibly refreshing turn, it was their flaws that, yes, broke them, but, also, that fuelled the engines of profound and lasting change. It was their belief in their ability to create a world not yet imagined that drove them and that drove America to become more than previously imagined.
Isn’t that what always fuels change? Dark as much as light. Insanity as much as soberness. Recklessness as much as deliberation. It is always in the middle, at the tensest point between opposites, where growth and change can most fully happen. Burr’s caution in the face of Hamilton’s relentless tirelessness. Lafayette and Laurens’ willingness to die on the altar of change versus Seabury’s allegiance to the status quo. We always need everything and we always need everyone; sometimes and especially those who annoy us and push our buttons the most. These men, and women, as written by Miranda, were driven by creativity to create and creation is messy. As Hamilton says, “every action’s an act of creation” and these were people willing and determined to create at all costs.
The creation of a new world is often a disaster; full of bloodshed, war, harm and the potential for destruction of self and others. It can be devastating and yet there is no other way to do it. Hamilton knew it then and Miranda knows it now. Black Lives Matter and every single protester know it too. As Laurens sings in “Hamilton”, it is those forced to live on their knees who, ultimately, rise up and we are witnessing such a rising right now. If the protesters didn’t understand that some sacrifices are worthy of risk, they would not be marching in a global pandemic. They would not risk their health or their lives. Heather Heyer and Summer Taylor would not have left their homes to, ultimately, make the ultimate sacrifice, without knowing the possibility of grave risk. No revolutionary should have to risk their lives, but, no revolution has ever come without often painful sacrifice.
True revolutionaries make a decision that it is worth sacrificing and fighting for a future that will be better for those that will follow because, on earth as it was in “Hamilton” they know that “tomorrow there will be more of us”. And who is the “more of us” who will rise up? It’s men, and women, like Hamilton and his crew: immigrants, poor, sons of whores; sons without fathers who become founding fathers. Because, as they astutely note in Hamilton, “immigrants get the job done”. Take that Stephen Miller. No great nation or great society was ever built by sameness. It was created by difference and forged by strife; the only ways to ever arrive at a more perfect union.
Hamilton did not create a perfect world or a perfect economic system but he created one that was better than what had been there before. Miranda through his play also did not create a perfect world but he created a world more perfect and representative than the one which came before it. If either of these men, Hamilton or Miranda, had been cancelled by the culture at large, the change they envisioned could not have happened. A world more just, equal and fair would be nothing more than a pipe dream.
We cannot cancel people or things when they are not perfect. It is thoughtless and careless and will never get us to where we need and want to go. The only way we can ever get where we need and want to go is through resistance and revolution. Lin-Manuel has said, “History is entirely created by the person who tells the story”. Well, he told a story that will one day be relegated to the history books and, while that story may be imperfect, it was more perfect than its origin story. It is more perfect because it divines, through the lens of the eighteenth century, the world of the twenty-first. It revises history as it embodies and represents the world that revolutionaries — imperfect and flawed as they may have been — envisioned and were willing to work and fight and strive for. Hamilton is referred to as “An American Musical” and that is accurate as it is a very American story. It would also have been accurate to call it “Hamilton: A Human Story” as it is like humanity itself. It is messy, flawed, striving, imperfect and, in the end, beautiful, amazing, wondrous, perfect in its imperfectness and just absolutely stunning in its gloriousness. | https://quagchri.medium.com/lin-manuel-mirandas-revisionist-history-13c9f2e20967 | ['Christine Quaglia'] | 2020-07-11 19:20:55.175000+00:00 | ['Hamilton', 'Society', 'Equality', 'Justice', 'Race'] |
What it’s Like to Audition at a Las Vegas Strip Club | What it’s Like to Audition at a Las Vegas Strip Club
Have you ever wondered what a half-naked job interview looks like?
Photo by Alfonso Scarpa on Unsplash
When I was a stripper, I liked to relocate often and work at different strip clubs across the country. I lived and stripped in Colorado and California, back and forth, until I decided to spend a month in Barcelona. Then Boston. Then New York. Stripping allowed me the schedule, finances, and flexibility to do so. Thankfully, I had friends who were also strippers, meaning that my friend group consisted of able and adventurous individuals.
When one of my girlfriends invited me to tag along to work in Las Vegas for a week, I couldn’t wait.
Las Vegas was once the holy grail for strippers. I’m not sure if Sin City still holds its title, considering the current state of the world, but in the “before times,” traveling to Las Vegas to work at one of the strip clubs for a few nights (especially during a convention) translated to a ridiculous amount of money.
I was already two years into my career, and I was comfortable auditioning at different clubs across the country. But I knew Vegas was a completely different ball game.
In Las Vegas, the clubs were bigger. The music was louder. Club management was stricter on who they hired. Customers were rowdier and drunker and there were more of them. The other strippers (aka my competition) were way hotter in Vegas and there were three times as many girls working one night at a Vegas club as there were in a regular Denver club.
Las Vegas clubs were a regular strip club experience magnified by 1000.
It made sense that the audition process in Las Vegas was magnified too. This is what I learned about getting hired to strip in Vegas. | https://medium.com/sexography/what-its-like-to-audition-at-a-las-vegas-strip-club-b3d0f473dfa | ['Erin Taylor'] | 2020-10-08 16:23:22.803000+00:00 | ['Sexuality', 'Lifestyle', 'Sex Work', 'Work', 'Society'] |
Life Among the Rooftop Pirates | Looking up, I notice the plant baskets hanging from the balcony railings have gone and so has the furniture. I can still picture the chair Kitten used to jump on as a mid point to the square Everest from which he surveyed the word with twitching whiskers, fascinated. I would gaze at his small gray and white face, hoping something would catch his attention long enough.
So he wouldn’t jump back down just yet.
The first time I spotted Kitten, I was locked into a staring contest with Periscope, his housemate, an orange tabby whose favorite mode of observation was sticking his neck out through the railings as far as it’d go. One day, there seemed to be an extra pair of tiny ears floating next to him. That was my view.
My neighbors could survey my kingdom at leisure but I could only glimpse theirs from below.
During golden hour, one of us would sit by the window with our legs resting on the edge and Periscope would stare encouragingly at the human whose fingers moved excitedly across a shiny metal rectangle according to a mysterious melody.
The human would have a strange contraption around their head and smile a lot, their upper body animated, like that moment when you go from excited to asleep within a second because you’re still a kitten.
Periscope stared and, sometimes, the human stared back.
The long-haired one never seemed to blink, probably thanks to those small transparent eyeball covers that keep ocular globes extra moist. Cats know about your disposable contact lenses and may have eaten the odd one, which is why litter box gifts sometimes look at you.
Today, Kitten and Periscope moved out.
There were no goodbyes. | https://kittyhannaheden.medium.com/life-among-the-rooftop-pirates-6f93b3d7cd05 | ['A Singular Story'] | 2020-06-09 12:36:24.638000+00:00 | ['Fiction', 'Society', 'Humor', 'Cats', 'Netherlands'] |
Fine-tune a non-English GPT-2 Model with Huggingface | Originally published at https://www.philschmid.de on September 6, 2020.
Introduction
Unless you’re living under a rock, you probably have heard about OpenAI’s GPT-3 language model. You might also have seen all the crazy demos, where the model writes JSX or HTML code, or shows off its capabilities in the area of zero-shot / few-shot learning. Simon O'Regan wrote an article with excellent demos and projects built on top of GPT-3.
A downside of GPT-3 is its 175 billion parameters, which result in a model size of around 350GB. For comparison, the biggest implementation of the GPT-2 iteration has 1.5 billion parameters, less than 1/116 of GPT-3's size.
In fact, with close to 175B trainable parameters, GPT-3 is much bigger than any other model out there. Here is a comparison of the number of parameters of recent popular NLP models; GPT-3 clearly stands out.
This is all magnificent, but you do not need 175 billion parameters to get good results in text-generation.
There are already tutorials on how to fine-tune GPT-2. But a lot of them are obsolete or outdated. In this tutorial, we are going to use the transformers library by Huggingface in their newest version (3.1.0). We will use the new Trainer class and fine-tune our GPT-2 model with German recipes from chefkoch.de.
You can find everything we are doing in this colab notebook. | https://towardsdatascience.com/fine-tune-a-non-english-gpt-2-model-with-huggingface-9acc2dc7635b | ['Philipp Schmid'] | 2020-11-07 15:38:25.738000+00:00 | ['Data Science', 'AI', 'NLP', 'Machine Learning'] |
The Wonders of OOPS | Introduction
Once upon a time programming was a mystic art! An art that could only be performed by the ones well versed in the ways of computers. But this image of programming has long since changed. There are many tutorials out there, both in text and video, that make this art much easier to learn. One can learn any language and build their applications. But is that all? Or is there more to this than meets the eye?
The Story
You can now get started with building your app in any language. You settle on Python as it is a beginner-friendly language. You google “Python tutorial”, select one of the top results and start learning the language. You are now ready and one step closer to building your app. During this journey, many errors are going to attack and prevent you from completing the app, but your Google-Fu is top-notch. Those puny errors will be no match for you! After all the think, write, and debug cycles, you have reached your goal. The app you wanted to build is now ready!
Now happy and excited, you show off your learnings to your peers. You get compliments and also some interesting feedback for additional features. Inspired by the well-built apps from big companies such as Microsoft, Google and Adobe, you decide to improve your app. You get to work and start writing code to add all those cool features your friends suggested. But wait! Now you are facing a different kind of problem. Your codebase is now a whopping 200 lines long, and you’re finding it difficult to add new code. The errors are now much more frequent, with functionalities breaking on adding new code. Frustrated, you decide to take a break for a few days.
Now with a fresh spirit and mind, you decide to complete your app. You start scanning through your code to reach the point where you left off. But to your horror you are now facing difficulties understanding your own code. Still undeterred, you push yourself for a couple of hours. To your dismay, you are still facing the same trouble as before. You lose all your motivation and finally decide to quit.
So what happened? What went wrong?
The Analysis
What you are experiencing is a classic problem of poor code structuring. This leads to low maintainability and makes it very hard to add new code. This is common in many beginners’ projects, where the main focus is on functionality. So how do we write code that is not only functional but also easily maintainable and extendable?
The Solution
The term “Object-Oriented Programming (OOP)” was coined by Alan Kay in grad school in 1966 or 1967. It took off in the software industry, replacing the largely prevalent procedural programming paradigm.
The advantages of using this new paradigm are,
It improves simplicity, as software objects are analogous to real-world objects. It brings down program complexity by providing an intuitive idea of the structure.
It encourages modularity, as each object is a separate entity, decoupled from the others.
It improves extensibility, as OOP has the concepts of classes and inheritance, where a class can extend the functionality of another class.
It increases modifiability, as we can change the internal implementation of class methods without affecting other classes, provided we don’t change the public interface. The internal implementations of objects are abstracted away from each other.
It improves maintainability, as classes are self-contained and can be maintained separately. This makes locating and fixing errors easier.
It encourages reusability, as we can instantiate a class anywhere and any number of times.
So, following the OOP paradigm, we can easily resolve our code extensibility and maintainability issues. But hold on! How do we learn about it?
The Path
The OOP paradigm is a vast field with many programming languages that have slightly different implementations of it. But there are some core concepts that one needs to learn first to grasp it.
Encapsulation
Abstraction
Inheritance
Polymorphism
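To make the four pillars concrete before diving into exercises, here is a small, self-contained Python sketch (the shapes example is my own illustration, not taken from any particular course):

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    """Abstraction: callers program against this interface, not the details."""

    @abstractmethod
    def area(self):
        ...

    def describe(self):
        return f"{type(self).__name__} with area {self.area():.2f}"

class Circle(Shape):              # Inheritance: Circle extends Shape
    def __init__(self, radius):
        self._radius = radius     # Encapsulation: state kept behind a leading underscore

    def area(self):
        return math.pi * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        self._side = side

    def area(self):
        return self._side ** 2

# Polymorphism: the same call behaves differently per concrete class.
shapes = [Circle(1), Square(2)]
print([s.describe() for s in shapes])
# ['Circle with area 3.14', 'Square with area 4.00']
```

Notice how the caller at the bottom never needs to know which concrete class it holds; that decoupling is what makes an OOP codebase easier to extend than a 200-line script.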
I believe in the philosophy of Learn by Doing, where you learn about something by getting hands-on with it. I have recently completed some exercises at Crio.Do on these concepts. Solving a real-world problem using the learnt concepts goes a long way toward solidifying the intuition and the concept together.
Conclusion
At the beginning of every programmer’s life, their first major project is a structural mess. Learning the basic OOPS concepts helps to introduce a structure to the code. But this is just the beginning of the journey. There is more to learn, like the SOLID and GRASP principles and so on. This journey is a continuous evolution and has no real end. As applications scale and requirements change, there might arise a need for a new structure in the future. So staying updated is a crucial job for a developer in this ever-changing field of IT.
So good luck with your journey! And thanks for taking time to read my article. | https://medium.com/swlh/the-wonders-of-oops-e2c5aefa08b9 | ['Arunangshu Biswas'] | 2020-07-31 22:40:51.031000+00:00 | ['Software Architecture', 'Programming', 'Software Engineering', 'Software Development', 'Oops Concepts'] |
How Minimalism Improves Fitness | How Minimalism Improves Fitness
Progress through simplicity
As any avid enthusiast understands, regular fitness training leads to a better, healthier you. While everyone’s definition of “better” may differ, one truism endures: it doesn’t have to be complicated, it doesn’t have to be excessive, it just has to be consistent.
The best way to ensure consistency? Perform physical activities you enjoy. You’re much more likely to stay on track if you’re engaged with what you’re doing. Playing volleyball on the beach with friends burns a lot of calories.
I’ve always loved Minimalism for its conceptual simplicity. How you can appreciate negative space as an art form, just as you would a sculpture or painting. How appreciating the value of things rather than quantity repels devils like materialism, greed and dishonesty. How decluttering your environment declutters your mind.
From this, freedom arises.
These are not novel ideas, but ideas often ignored. Staying on top of your fitness and adopting a more minimalist mindset are easy to understand, but not easy to adopt. This starts by identifying points of intersection between fundamental minimalist ideals and cornerstone fitness concepts. | https://medium.com/in-fitness-and-in-health/how-minimalism-helps-fitness-c8099e9618d3 | ['Scott Mayer'] | 2020-10-04 00:32:10.188000+00:00 | ['Fitness', 'Health', 'Lifestyle', 'Life', 'Minimalism'] |
How to get the next page on Beautiful Soup | Recursive function — The trick to get the next page
Ok, here’s the trick to get the job done: recursion.
We are going to create a ‘parse_page’ function. That function will fetch the 10 albums the page has.
After the function is done, it is going to call itself again with the next page, to parse it, over and over again until we have everything.
Let me simplify it for you:
I hope it is clear: As we keep having a ‘next page’ to parse, we are going to call the same function again and again to fetch all the data. When there is no more, we stop. As simple as that.
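Stripped of all scraping details, the pattern looks like this; the in-memory `PAGES` dict below is just a stand-in for real HTTP requests, to show the shape of the recursion:

```python
# Minimal skeleton of the trick: parse a page, then call yourself with the
# next page until there is no next page. PAGES fakes a 3-page website.
PAGES = {
    "/page1": {"items": ["album-1", "album-2"], "next": "/page2"},
    "/page2": {"items": ["album-3", "album-4"], "next": "/page3"},
    "/page3": {"items": ["album-5"], "next": None},
}

results = []

def parse_page(url):
    page = PAGES[url]                 # real code: requests.get + BeautifulSoup
    results.extend(page["items"])     # fetch this page's albums
    if page["next"]:                  # is there a 'Next' link?
        parse_page(page["next"])      # recurse into it
    # no next page -> the recursion unwinds and we are done

parse_page("/page1")
print(results)   # ['album-1', 'album-2', 'album-3', 'album-4', 'album-5']
```

Every real detail (the request, the parsing, the 'Next' detection) just slots into this skeleton, which is what the steps below build up.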
Step 1: Create the function
Grab this code, create another function called ‘parse_page(url)’ and call that function at the last line.
The data object is going to be used in different places, so take it out and put it after the search_url.
We took the main code, created a parse_page function, called it using the ‘search_url’ as a parameter and took the ‘data’ object out so we can use it globally.
In case you are dizzy, here’s what your code should look like now:
Please check this line:
Now we are not fetching the ‘search_url’ (the first one) but the URL that we pass as an argument. This is very important.
Step 2: Add recursion
Run the code again. It should fetch the first 10 albums, as always.
That’s because we haven’t used recursion yet. Let’s write the code that will:
Get all the pagination links
From all the links, grab the last one
Check if the last one has a ‘Next’ text
If it has it, get the relative (partial) url
Build the next page url by adding base_url and the relative_url
Call parse_page again with the next page url
If it doesn’t have the ‘Next’ text, just export the table and print it
Once we have fetched all the cd attributes (that’s it, after the ‘for cd in list_all_cd’ loop), add this line:
We are getting all the ‘list item’ (or ‘li’) elements inside the ‘unordered list’ with the ‘SearchBreadcrumbs’ class. That’s the pagination list.
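In case the embedded snippet doesn’t render here, that line looks roughly like the following; the HTML fragment is a trimmed stand-in for the real results page, not the site’s actual markup:

```python
from bs4 import BeautifulSoup

# Trimmed stand-in for the pagination markup on the results page.
html = """
<ul class="SearchBreadcrumbs">
  <li><a href="/search/?page=1">1</a></li>
  <li><a href="/search/?page=2">2</a></li>
  <li><a href="/search/?page=2">Next</a></li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")

# Every list item inside the unordered list with the SearchBreadcrumbs class.
pagination = soup.find("ul", class_="SearchBreadcrumbs").find_all("li")
print(len(pagination))   # 3
```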
Then, we go to the last one and get the text. Add this after the last code:
Now we check if ‘next_page_text’ has ‘Next’ as text. If it does, we take the partial url and add it to the base to build the next_page_url. If it does not, there are no more pages, so we can create the file and print it.
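That check can be sketched like this on the same kind of trimmed markup (`base_url` here is a placeholder, not the site’s real address):

```python
from bs4 import BeautifulSoup

base_url = "https://www.example.com"   # placeholder for the site's base url
html = ('<ul class="SearchBreadcrumbs">'
        '<li><a href="/search/?page=1">1</a></li>'
        '<li><a href="/search/?page=2">Next</a></li></ul>')
soup = BeautifulSoup(html, "html.parser")

last_item = soup.find("ul", class_="SearchBreadcrumbs").find_all("li")[-1]

if "Next" in last_item.text:
    relative_url = last_item.find("a")["href"]   # partial url: /search/?page=2
    next_page_url = base_url + relative_url      # what we pass to parse_page(...)
    print(next_page_url)   # https://www.example.com/search/?page=2
else:
    print("No more pages: create the file and print it here")
```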
That’s all we need. Run the code, and now you are getting dozens, if not hundreds of items!
Step 3: Fixing a small bug
But we can still improve the code. Add this 4 lines after parsing the page with Beautiful Soup:
Sometimes there is a ‘Next’ page when the number of albums is a multiple of 10 (10, 20, 30, 40 and so on), but there are no albums on it. That makes the code end without creating the file.
With this code, it is fixed.
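The exact four lines aren’t visible here, but the guard they describe amounts to something like the following, placed right after the soup is built (the selector and the `export_table` helper are hypothetical names, not the original gist’s):

```python
def parse_page(soup, data, export_table):
    """Top of parse_page with the empty-page guard added."""
    list_all_cd = soup.find_all("div", class_="card")   # hypothetical selector
    if not list_all_cd:        # a 'Next' page exists but holds no albums
        export_table(data)     # still create the file before stopping
        return
    # ...the rest of parse_page continues as before (loop over list_all_cd, etc.)
```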
Your coding is done! Congratulations! | https://medium.com/quick-code/how-to-get-the-next-page-on-beautiful-soup-85b743750df4 | [] | 2019-09-15 07:06:18.476000+00:00 | ['Web Scraping', 'Python', 'Programming', 'Beautifulsoup', 'Coding'] |
What it’s like to eat at one of the best restaurants in the world?

We humans have always been fascinated by awe-inspiring experiences. Since ancient times, explorers and artists have found the risk of entering new fields worth taking in the hope of discovering something. With the passage of time and the advances in science and technology, there are fewer external experiences that can generate genuine awe in us. Basically, nothing impresses us, nothing gets us out of our comfort zone. We believe we can just google everything.
That informational arrogance comes with a dangerous dark side: the laziness to expand our perception. Travel, technology and food are some of the few areas we still use to experience more awe in our lives, awe being defined as the emotion that arises when one encounters something so strikingly vast that it provokes a need to update one’s mental schemas (Keltner & Haidt, 2003). It is what Neil deGrasse Tyson calls the cosmic perspective, where we see the big picture and we are reset, we are transformed. It forces us to re-accommodate our mental models of the world in order to assimilate the experience.
A study out of Stanford University on the subject of awe found that regular incidences in which we experience awe leave us with increased feelings of compassion, empathy, altruism and general well-being. In other words, getting your mind blown is good for you.
In 2014, I did research into how a small restaurant on the remote Costa Brava in Spain was named the best restaurant in the world five times, and how Ferran Adria became one of the most influential chefs to have ever existed. After going deep into El Bulli’s creative process, I found it fascinating how they applied the concepts of lean startups, design thinking, user experience and agile to build the legacy of this iconic restaurant, way before those concepts existed in IT. I gave a short talk at CanUX that year titled How El Bulli turned dining into an experience. After the talk, one of the most common questions I got was: “What is it like to eat at El Bulli?” Unfortunately, I could not answer that question because the restaurant had closed in 2011.
Just after El Bulli closed, Ferran’s brother and former creative director, Albert Adria, opened a new restaurant in Barcelona named Tickets. It was initially planned as a traditional tapas restaurant, but after El Bulli fans quickly set the expectations, Tickets pivoted to avant-garde cuisine and began its rise to the top. Albert Adria was named The World’s Best Pastry Chef 2015 and Tickets one of The World’s 50 Best Restaurants in 2018. You can see more details in the Netflix series Chef’s Table: S05 E04 (highly recommended).
This fall, after years of wondering, I finally had the chance to answer the original question: what is it like to eat at one of the best restaurants in the world?
But before the food experience, let me share some facts about avant-garde restaurants:
Getting a table at Tickets is not easy. Mark Zuckerberg and Lionel Messi are some of its famous visitors. It is not as impossibly difficult as it was for El Bulli, but it requires booking 3 or 4 months in advance. We were extremely lucky to arrive without a reservation and get the help of Albert Adria himself to find a table. Maybe I’ll share how I made that happen in another post.
Once you are at the table, you can choose to order your plates from the menu, but that will limit the experience to your knowledge of avant-garde cuisine. We opted for the degustation menu, selected by the waitress based on our preferences and allergies.
Be ready to stay and eat for at least 3 hours. Portions are small, and often you finish a dish in one or two bites, but you will try a large number of dishes and desserts.
You will feel a weird sensation of being fooled by the food, but you will love it.
This is not a regular-price dinner. You don’t have to take out a second mortgage to pay for it, but count on a bill 3 to 4 times more expensive than a regular fancy dinner. We can debate the ROI of this, but sometimes we pay for expensive food that is basically garbage, while in Singapore, for example, you pay only $3 for a Michelin-star dish at the legendary Liao Fan Hong Kong Soya Sauce Chicken Rice. It is all relative to how it makes you feel at the end.
Back to Tickets: the decoration resembles the world of the circus and Willy Wonka, with lights and colorful posters

(Source: https://solanojuan.medium.com/what-its-like-to-eat-at-one-of-the-best-restaurants-in-the-world-3bd05b7b60e6, by J. P. Solano, 2019-01-24. Tags: Creativity, Foodies, UX, Inspiration, Food)
Cope vs. Create

Change of season got you down? You’ve got plenty of company. While SAD (seasonal affective disorder) is known to affect only 5% of US adults every year, the symptoms — pervasive sadness, undue fatigue, difficulty concentrating, lost interest in normally enjoyed activities — may well be plaguing many millions more as a result of life under quarantine. And if doomscrolling panic, Zoom fatigue, SAD szn, remote work burnout and pent-up post-election hysteria aren’t already enough, with a second wave of coronavirus starting to hit and more stay-at-home orders sweeping the nation seasonal depression could soon get a whole lot worse.
What to do? Now that usual remedies like more in-person interaction and big social gatherings aren’t readily available, we need to find more creative ways to cope. Metapod, for one, to the rescue. It’s that big, green, Pokémon-inspired suit which doubles as a cocoon for humans and sold out less than seven hours after launching this month.
Unzip the Metapod, climb inside, and watch Sarah Cooper’s new Netflix comedy special Everything Is Fine, which feels like a tribute to the “this is fine” dog meme that first went viral in 2016 as our collective denial of a rapidly deteriorating situation. Everything, in fact, was certainly not fine back then, and still isn’t. An NPR review called the show “the first piece of pandemic entertainment to successfully convey the feelings of instability and emotional unsteadiness that have been part of so many of our lives since the spring.”
The “how it started vs. how it’s going” meme that people usually share to show glow-ups and big career moves has been overtaken by a more satirical view of the state of the world in 2020. How it started: blissful ignorance. How it’s going: dumpster fires everywhere.
When lockdown is officially Collins Dictionary’s word of the year and we can buy life-sized cocoons online, there’s no denying our months-long quarantine has forced us to turn inwards. The Pandemic Logs series from the New York Times features seemingly mundane but telling details from readers’ diary entries about living in self-isolation. “My ‘remembering Dad’ sunflower bloomed today,” wrote Ann Bovee of Redmond, WA. “I went on a three-mile walk alone with my podcasts and wished I had a walking buddy,” wrote Pam R. of Sarasota, FL.
And while fast food brands on Twitter may be the cheeriest corner on the Internet, even McDonald’s, purveyor of Happy Meals, is questioning our mental well-being.
This new kind of vulnerability in brand communications is refreshing. We want to get in touch with our anger, sadness and regret — emotions we experience but rarely discuss in public. America’s Secret Playlists, a research study that analyzed thousands of private Spotify playlists, called it a shift towards “emotional realism,” where 44% of Americans confess they have listened to music to purposely feel dark emotions.
So how do we find emotional release? Snack giant Mondelez International wants to help by “humaning”. That’s what the company calls its new global marketing strategy. The goal: Move away from pure data-driven tactics towards making more “human” connections with consumers. Not surprisingly, humaning has already racked up Pepsi levels of ridicule in the ad world for tone-deafness.
Who’s kidding whom? When it comes to releasing and providing relief from strong or repressed feelings, corporate America’s emotional range has so far been limited at best. Artists, on the other hand, know a thing or two about catharsis. “Make art. Talk to your family. Read. Watch a movie. Move a muscle — it changes a lot,” says Marilyn Minter in Hirshhorn Museum’s video diaries that take us inside artists’ studios as part of a living record of the worldwide pandemic.
People need the cathartic power of creativity now more than ever. As more of us feel the blues while being cooped up, the idea isn’t just to cope but to create. It’s why bedroom pop, with its homespun quality and lo-fi sounds, is one of the most popular genres today. Since 2018 it’s grown from an obscure Spotify playlist to one with more than 600,000 subscribers. Artists skew young and female, produce dreamy, contemplative music from their bedrooms and write about deeply personal issues. Your sad-music starter pack must include: mxmtoon, Clairo, FKA Twigs, Beabadoobee, Faye Webster.
In the gaming world, Animal Crossing’s popularity is similarly revealing. Says Bitch Media’s social media editor Marina Watanabe, everything about the social simulation game “from the cute aesthetic to the wholesome music to the mundane tasks and lack of combat or high stakes feels like it’s designed to keep me calm.”
Whether through DIY indie music or soothing virtual islands, more people are channeling their innermost thoughts and feelings in isolation by activating the imagination. The challenge for businesses, then, is to incubate creativity and fuel catharsis by providing an outlet for what can’t be shown or said. It makes us feel less alone.
When it comes to mental health, today’s marketing has been serving up the Four Seasons version when we clearly need more Total Landscaping — work and words that cut deeper, hit us right in the feels and snap us out of the numbness. As quarantine thickens my own cocoon of isolation and targeted ads about therapy from BetterHelp start to flood my social media feeds, the healing power of making is just what the doctor ordered.

(Source: https://medium.com/thoughtmatter/cope-vs-create-be6ad597c495, 2020-11-19. Tags: In The Know, Sad, Catharsis, Creativity, Depression)
The Big Economic Shift: Democratic Candidates 2020 Energy and Environment Report

by Andrew Busch and Leah Hamilton
For the Democratic presidential candidates, there has been an intense focus on climate change and the energy sector.
A large number of the candidates’ policy proposals tie into or support the Green New Deal (GND). The GND was one of the broadest and most aspirational proposals put forward by Democrats to change both the energy sector and the economy, with the aim of reducing greenhouse gases and fossil fuel use. It is also the costliest plan ever proposed in the country’s history. It was introduced by Rep. Alexandria Ocasio-Cortez in February 2019, and of the 2020 Democratic candidates we are covering, it was co-sponsored by Sanders, Warren, Harris, and Booker. Biden has also indicated his support for the GND.
The GND sets out several goals, including:
· to achieve net-zero greenhouse gas emissions through a fair and just transition for all communities and workers
· to create millions of good, high-wage jobs and ensure prosperity and economic security for all people of the United States
· to invest in the infrastructure and industry of the United States to sustainably meet the challenges of the 21st century
· to secure for all people of the United States:
clean air and water
climate and community resiliency
healthy food
access to nature
a sustainable environment
· to promote justice and equity by stopping current, preventing future, and repairing historic oppression of indigenous peoples, communities of color, migrant communities, deindustrialized communities, depopulated rural communities, the poor, low-income workers, women, the elderly, the unhoused, people with disabilities, and youth (referred to as “frontline and vulnerable communities”).
The GND then includes a number of proposed plans to bring about these goals, including:
· building resiliency against climate-change related disasters
· repairing and upgrading infrastructure in the US
· meeting 100 percent of the power demand in the United States through clean, renewable, and zero-emission energy sources
· upgrading to smart power grids
· upgrading existing buildings
· spurring growth in clean manufacturing
· working with farmers and ranchers to make agriculture more sustainable
· overhauling transportation systems
· mitigating the long-term effects of pollution
· removing greenhouse gases
· protecting threatened ecosystems and endangered species
· cleaning up hazardous waste
· identifying other emission sources
· promoting the international exchange of technology and expertise.
There are two main limbs to the GND: climate justice and investment. The issue of climate justice relates to how the potential effects of climate change will affect ordinary people. In terms of investment, the idea is that public investment would create green jobs.
Almost 50% of Americans worry “a great deal” about the effects of climate change. In addition, “215 of the world’s biggest companies, including giants like Apple, JPMorgan Chase, Nestlé, and 3M, see climate change as a threat likely to affect their business within the next five years,” and a Carbon Disclosure Project report noted that 73% of companies that reported to them have oversight of climate-change related issues at a board level. Democratic candidates in 2020 are responding to these concerns from businesses and the public by putting forward policies that look into carbon emission reduction, resilience, research, and infrastructure changes. The GND is a framework that the candidates are using to pursue these aims.
Many of the candidates have slightly different ideas or nuances to their plans that mean they approach these issues with varying proposals. However, there is overlap in their plans to address environmental issues and the structuring of the energy sector. As an example, all of the 2020 Democratic candidates support rejoining the Paris agreement.
Key Takeaways:
1. All of the 2020 Democratic candidates support the GND in some way and propose numerous different approaches to implementing its framework.
2. A large focus is on reducing reliance on fossil fuels and the oil and gas industries, with a shift towards 100% clean or renewable energy through a focus on research, innovation, and regulatory changes in the manufacturing, transport, and agriculture sectors.
3. The final issues that all candidates attempt to tackle are climate resiliency, climate justice (focusing on those most potentially impacted by climate change), and “making polluters pay”.
Due to Harris dropping out of the race, the research will now focus on Biden, Sanders, Warren and Buttigieg.
Biden
Biden’s focus on environmental issues is long-standing and he has been described as a “climate change pioneer” by Politifact. He supports portions of the GND and notes it as a “crucial framework for meeting the climate challenges we face,” but has not explicitly endorsed it.
One of Biden’s primary plans is his Plan for a Clean Energy Revolution and Environmental Justice. In this plan Biden describes the issue of climate change as a “climate emergency,” and puts forward a number of proposals to implement the GND, as well as to “revitalize the U.S. energy sector and boost growth economy-wide.” He believes that the US can be made into a “clean energy superpower,” and that clean energy technology can be created and exported in a way that creates and preserves middle-class jobs in America.
The Biden Plan sets out five primary goals, including:
1. Ensure the U.S. achieves a 100% clean energy economy and reaches net-zero emissions no later than 2050.
2. Build a stronger, more resilient nation (to climate issues).
3. Rally the rest of the world to meet the threat of climate change.
4. Stand up to the abuse of power by polluters who disproportionately harm communities of color and low-income communities.
5. Fulfill our obligation to workers and communities who powered our industrial revolution and subsequent decades of economic growth.
Biden also states that he would roll back the Trump administration’s Tax Cuts and Jobs Act tax cuts to pay for the above goals, with the aim of creating a clean energy future. The total cost of his plan is estimated to be $1.7 trillion in federal investment, combining this with private sector and state and local investments to total $5 trillion.
To enact his climate plan and the above five goals, Biden has also set out a number of more-detailed plans. For example, to enact his plan for net-zero emissions no later than 2050, he would sign a series of executive orders. He does not explicitly state what he would include in each executive order.
In addition, he would also demand that Congress enact legislation making a historic investment in energy and climate research and innovation, establishes an enforcement mechanism, and incentivizes the rapid deployment of clean energy technologies.
In addition, he would:
· Require aggressive methane emission limits for new and existing oil and gas operations.
· Use the Federal government procurement system to drive towards 100% clean energy and zero-emissions vehicles.
· Ensure that all US government installations, buildings, and facilities are more efficient.
· Reduce emissions from transportation by preserving and implementing the Clean Air Act and developing fuel economy standards.
· Establish new appliance and building efficiency standards.
· Require public companies to disclose climate risks and greenhouse gas emissions from their operations.
· Protect biodiversity and the Arctic National Wildlife Refuge.
Biden would also establish ARPA-C, a new Advanced Research Projects Agency focused on climate. The purpose of ARPA-C would be to help the US achieve 100% clean energy by using “game-changing technologies”, focusing on:
grid-scale storage at one-tenth the cost of lithium-ion batteries;
small modular nuclear reactors at half the construction cost of today’s reactors;
refrigeration and air conditioning using refrigerants with no global warming potential;
zero-net energy buildings at zero-net cost;
using renewables to produce carbon-free hydrogen at the same cost as that from shale gas;
decarbonizing industrial heat needed to make steel, concrete, and chemicals and reimagining carbon-neutral construction materials;
decarbonizing the food and agriculture sector, and leveraging agriculture to remove carbon dioxide from the air and store it in the ground; and
capturing carbon dioxide from power plant exhausts followed by sequestering it deep underground or using it to make alternative products.
Biden would also target airline emissions as a large source of greenhouse gas emissions. In addition, he would promote and fund carbon capture and storage research and technologies.
He would also improve the deployment of clean energy throughout the economy, including energy-efficient buildings, electric vehicles, local transportation solutions, better agriculture practices, mitigating urban sprawl, and creating a national strategy to develop a low-carbon manufacturing sector. Tax credits and subsidies would also be available to businesses so that they can upgrade their equipment, factories, and processes, and be able to deploy these low-carbon technologies.
Another part of the Biden plan is to build a more resilient nation when it comes to climate issues. This includes developing building codes to build and rebuild before and after natural disasters and working with the insurance industry to manage and reduce risk, and the cost of transferring risk. For example, he would aim to lower insurance premiums for homeowners and would work with FEMA to expand the Community Rating System. He would also promote and expand the development of climate resilience industries (such as coastal restoration or resilient infrastructure design) to increase jobs.
He would expand both passenger and freight rail systems to ensure the United States has the “cleanest, safest, and fastest rail system in the world.”
Biden also notes a number of steps that he would take to expand the global response to climate change, including:
Convene a climate world summit to directly engage the leaders of the major carbon-emitting nations of the world to persuade them to join the United States in making more ambitious national pledges, above and beyond the commitments they have already made.
Lead the world to lock in enforceable international agreements to reduce emissions in global shipping and aviation.
Embrace the Kigali Amendment to the Montreal Protocol, adding momentum to curbing hydrofluorocarbons, an especially potent greenhouse gas, which could deliver a 0.5-degree Celsius reduction in global warming by mid-century.
This plan also includes putting in place measures to stop other countries from “cheating on their climate commitments,” by interlinking trade policy with climate policy. For example, a Biden administration would stop China from subsidizing coal exports and outsourcing carbon pollution. He would also demand a worldwide ban on subsidies for fossil fuels. Biden would also reform the IMF and development bank debt repayment standards, so that projects with high climate costs would become riskier to take on. Further discussion of trade-related climate plans is discussed in our report on trade.
He also proposes a form of climate justice, essentially promoting policies to protect those who could be disproportionately affected by climate change. To do this, he would make it a priority for all agencies to “engage in community-driven approaches to develop solutions for environmental injustices affecting communities of color, low-income, and indigenous communities.”
Biden also has a plan to handle the impact of the energy transition on workers in industries such as coal miners and power plant workers.
In addition, Biden’s plan also includes:
over 500,000 public charging stations for electric cars;
incentivizing innovation in sustainable aircraft fuels to reduce airline emissions; and
improving commuter and freight train travel.
Biden would also aim to improve appliance and home-building manufacturing standards, to reduce emissions, and would “put in place a national program to target a package of affordable energy efficiency retrofits in American homes.” A further goal is to have low-income housing made more efficient, and for the US Department of Energy to increase their efforts to add new efficiency standards.
Finally, he would restore the full electric vehicle tax credit.
Sanders
Sanders’ plans on the environment and energy sector are the largest policy section on his website, titled “Green New Deal”. Some of these policies are also covered in other sections, but the majority of his plans are covered in this part.
His first proposal is to reach 100% renewable energy for electricity and transportation by 2030 and complete decarbonization by 2050. Part of this would be done by creating another Power Marketing Administration and expanding existing PMAs to build more solar, wind, energy storage, and geothermal power plants. $526 billion would be spent on creating a new, modern, underground, renewable electricity grid. As part of the plan to reduce electricity use, he proposes to improve home and business energy efficiency with regard to buildings and lower energy bills. Heating and cooling would also be brought onto the electric grid, instead of being fueled from natural gas, propane, or oil.
In addition, he intends to reduce emissions generally by 71% by 2030 and reduce emissions among less-industrialized nations by 26% by 2030. All energy generated as a result of GND plans would be publicly owned.
Sanders would not allow any new nuclear power plants to be built.
In addition, he proposes to create 20 million jobs for solving the climate crisis, primarily in the industries of “steel and auto manufacturing, construction, energy efficiency retrofitting, coding and server farms, and renewable power plants.” He also proposes to add jobs in the sustainable agriculture and engineering fields, as well as creating a Civilian Conservation Corps. Some of the jobs that the Civilian Conservation Corps would do include “building green infrastructure, planting billions of trees and other native species, preventing floods and soil erosion, rebuilding wetlands and coral, cleaning up plastic pollution, constructing and maintaining accessible paths, trails, and fire breaks; rehabilitating and removing abandoned structures, and eradicating invasive species and flora disease; and other natural methods of carbon pollution sequestration.”
He states that to create all of the above jobs, he would make a “historic” $16.3 trillion public investment “in line with the mobilization of resources made during the New Deal and WWII.” He does not lay out the specifics of this plan.
He would also make sure that displaced fossil fuel industry workers would be guaranteed “five years of a worker’s current salary, housing assistance, job training, health care, pension support, and priority job placement.” For any fossil fuel industry workers who are displaced by the plan, the Work Opportunity Tax Credit would be provided to employers who hire them.
Another of his efficiency proposals is to build “affordable and high-quality, modern public transportation”, and he would also set up grants and trade-in programs for the purchase of high-efficiency electric vehicles. Electric vehicle charging infrastructure would also be expanded, at the cost of $85.6 billion. All school and transit buses would also be replaced with electric vehicles, and all diesel shipping trucks would be replaced with long-range electric trucks.
With regard to the transport industry generally, Sanders aims to massively increase public transit ridership. He would expand the high-speed rail system, with a focus on intercity rail. In addition, Sanders proposes to invest in decarbonizing the shipping and aviation industries as soon as possible.
He would also make large investments in sustainable agriculture practices.
Sanders would create a $40 billion Climate Resiliency Fund, intended to protect those who would be most affected by changes occurring as a result of climate change.
He would also put forward $200 billion towards the Green Climate Fund and would rejoin the Paris Climate Agreement. The Green Climate Fund helps to promote the transfer of “renewable technologies, climate adaptation, and assistance in adopting sustainable energies” to developing countries, so that they can shift more rapidly towards low-carbon economies.
To pay for the GND, Sanders suggests:
Making the fossil fuel industry pay for their pollution, through litigation, fees, and taxes, and eliminating federal fossil fuel subsidies.
Generating revenue from the wholesale of energy produced by the regional Power Marketing Authorities. Revenues will be collected from 2023–2035, and after 2035 electricity will be virtually free, aside from operations and maintenance costs.
Scaling back military spending on maintaining global oil dependence.
Collecting new income tax revenue from the 20 million new jobs created by the plan.
Reduced need for federal and state safety net spending due to the creation of millions of good-paying, unionized jobs.
Making the wealthy and large corporations pay their fair share.
One of his ideas to promote research and development in sustainable energy is to expand funding into an initiative called StorageShot. He would put $30 billion towards this initiative. It is based on the previously successful program, SunShot, which was run by the Department of Energy in 2011. The intention of the StorageShot plan is to “invest in public research to drastically reduce the cost of energy storage, electric vehicles, and make our plastic more sustainable through advanced chemistry.” The aim of this would be to replace coal and natural gas plants that serve as base generation for the electricity grid.
Under a Sanders administration, particular trade deals would be renegotiated with the aim of reducing pollution. Agreements would be renegotiated in a way to “ensure strong and binding climate standards, labor rights, and human rights with swift enforcement.”
Public infrastructure would be retrofitted with climate resilience in mind. This would include readying public highways, bridges and water systems to ensure that they are ready for any climate impacts that may come. He would also invest in infrastructure and programs that are intended to protect the people and communities that will be most affected by climate change, including flooding, wildfires, extreme storms, drought, and sea-level rise.
One of the major proposals in Sanders’ policy is that he would “make the fossil fuel industry pay for their pollution.” As part of this, he would massively raise taxes on corporate polluters’ and investors’ fossil fuel income and wealth. In addition, he would raise penalties on pollution from fossil fuel generation and would require fossil fuel infrastructure owners to buy fossil fuel risk bonds to pay for disaster impacts.
He would also require corporations to audit and report their climate risks, which would contribute to a national report called the Climate Risk Report. This Climate Risk Report would be created by the SEC and the EPA. Corporations the violate domestic climate goals would be subject to sanctions.
Sanders would also end fossil fuel subsidies and would end all new and existing fossil fuel extraction from public lands. In addition, he would ban offshore drilling, and would stop the “permitting and building of new fossil fuel extraction, transportation, and refining infrastructure.” Old and abandoned fossil fuel infrastructure would be required to be cleaned up, and leaking infrastructure would be required to be repaired by fossil fuel corporations.
Under a Sanders administration, fracking would be banned, and mountaintop removal coal mining would also be banned. The import and export of fossil fuels would also be banned. Federal pensions would also be divested away from fossil fuels, and “financial institutions, universities, insurance corporations, and large institutional investors” would be pressured to move their investments away from fossil fuels and into clean energy bonds.
For imports, Sanders would place a fee on Carbon Pollution-Intensive Goods under the World Trade Organization General Agreement on Tariffs and Trade, Article 20.
Several different government agencies would be reorganized and restructured to deal with a shift towards a clean energy economy, including the Department of Energy, Department of the Interior, Bureau of Land Management, Bureau of Safety and Environmental Enforcement, Bureau of Ocean Energy Management, Energy Information Administration, Federal Energy Regulatory Commission, and Federal Emergency Management Agency. He would require these departments to work as part of a “centralized taskforce to phase out fossil fuels by expediting research, development, deployment, and technical support for polluting industries to ensure a smooth transition.”
He also proposes that all projects flowing from the GND would have “fair family-sustaining wages, local hiring preferences, project labor and community agreements, including buying clean, American construction materials and paying workers a living wage.” Workers in clean energy jobs would also be encouraged to form a union, by establishing Bernie’s Workplace Democracy Plan.
Sanders proposes to support family farms by investing in regenerative and sustainable agriculture. He would fund $410 billion towards carbon sequestration ideas, increasing resiliency, and focusing on design, technical assistance, equipment, infrastructure, and repaying debt. As part of his plan he would also invest $41 billion to help large animal feeding farms to transition towards more regenerative practices, and $41 billion to help socially disadvantaged and beginning farmers who have traditionally been underserved by USDA programs. Sanders also proposes the Rural Energy for America program to promote clean energy options for the agriculture industry. This would allow farmers to grow and harvest renewable energy alongside their crops.
For more about Sanders’ proposals in relation to agriculture, take a look at our report.
Sanders’ plan also sets out that communities that need extra assistance in the transition to a clean energy economy would be eligible for additional funding through regional commissions. $5.9 billion in funding would be distributed as follows:
$2.53 billion for the Appalachian Regional Commission
$506.4 million for the Delta Regional Authority
$304 million for the Denali Commission
$405 million for the Northern Border Regional Commission
$94 million for the Southeast Crescent Regional Commission
$2.02 billion for Economic Development Assistance Program
GND funding for parks and public lands would be distributed equally throughout urban, rural, and suburban areas, and urban sustainability initiatives would be undertaken to improve the environmental and social conditions of low-income neighborhoods and communities.
Finally, he sets out a number of environmental justice principles. This includes that “hazardous waste sites, chemical and industrial plants, aging lead pipes, and decaying infrastructure that endanger the health of all citizens will be fully regulated.” He would also expand permitting rules to measure cumulative environmental impacts and require polluters to remediate those impacts. In addition, all agencies would be required to comply with Executive Order 12898, which requires them to “identify and address the disproportionately high and adverse human health or environmental effects of their actions on minority and low-income populations, to the greatest extent practicable and permitted by law.”
Sanders notes that the first two years of the GND would be a big upheaval, and he would focus the first portion of the transition on energy assistance, by passing the Low-Income Home Energy Assistance Program, to “help low-income families pay their heating and cooling bills.” This would ensure that price changes due to the GND do not impact families as much. He would also invest $964 billion for low- and moderate-income families to invest in cheaper electricity for heating and cooling needs. Sanders introduced the Residential Energy Savings Act, which created a voluntary loan program allowing property owners or tenants to finance energy efficient upgrades to residential buildings.
Sanders and Rep. Alexandria Ocasio-Cortez have also proposed the Green New Deal for Public Housing Act. This legislation would fund $180 billion over 10 years, with the purpose of retrofitting and improving public housing to reduce the energy costs of these homes. Energy retrofits would include things such as “new cladding, efficient window glazing, and electric appliances.” This Bill was also co-sponsored by Warren.
Due to the link between energy and food, Sanders would also fund $215.8 billion for free, universal school meals.
Warren
Warren also has a large number of policies on energy and environment-related issues. For example, some of her plans include her plan for 100% Clean Energy for America, as well as her plan for Accelerating the Transition to Clean Energy. She also sets out a dedicated plan for Fighting for Justice as we Combat the Climate Crisis, and a plan for Leading in Green Manufacturing. She also has a separate plan titled Tackling the Climate Crisis Head On.
Warren's plans overlap significantly, and some repeat the same proposals as others. Her plan for Tackling the Climate Crisis Head On aggregates summaries of many of her plans in one place.
In her Plan for Tackling the Climate Crisis Head On, Warren sets out the areas in which she proposes energy and environmental sector plans. This includes:
· Green Manufacturing
· Fighting Corruption
· Clean Energy
· Green Infrastructure
· Environmental Justice
· Protecting Public Lands
· Sustainable Agriculture
· International Standards
· Improving Trade
· Tribal Lands
· Clean Air and Water
She then goes into more detail on each of these plans. She first notes that her Green Manufacturing Plan “would invest $2 trillion over the next ten years in green research, manufacturing, and exporting.” Her Green Manufacturing Plan has three key arms:
· Green Apollo Program
· Green Industrial Mobilization
· Green Marshall Plan
The Green Apollo Program would commit $400 billion over 10 years to clean energy research and development. This program would also create a model based on the National Institutes of Health, to create National Institutes of Clean Energy. The type of research that will be prioritized is “research that can be commercialized to help close the gap in hard-to-decarbonize sectors — such as aviation and shipping — and in areas otherwise underrepresented in the existing R&D portfolio, like long-duration grid storage.” Warren would also expand existing energy R&D programs like ARPA-E.
The plan for Green Industrial Mobilization is primarily the idea of using federal procurement and needs to drive demand, investing a $1.5 trillion federal procurement commitment in “American-made clean, renewable, and emission-free energy products for federal, state, and local use, and for export.” This plan is intended to cover a wide range of technologies and would require federal contracts to receive energy efficiency designations.
Finally, the Green Marshall Plan is aimed at encouraging other countries to “purchase and deploy American-made clean energy technology.” The Green Marshall Plan proposes to do so by creating a new federal office “dedicated to selling American-made clean, renewable, and emission-free energy technology abroad,” and would use a $100 billion commitment to help other countries purchase this technology. The new federal office would offer financing options to foreign purchasers to create incentives to buy American technology.
Warren also notes that the impact of climate change, as well as the impact of a shift to a clean energy economy, will affect different communities in different ways. As a result, she suggests “prioritizing resources for frontline and disadvantaged communities,” as well as “benefits to uplift and empower workers who may be hurt by the transition to a more green economy.”
Her plan to Accelerate the Transition to Clean Energy involves “using the power of public markets to accelerate the adoption of clean energy.” The primary shift would be that she proposes that companies should be required to share “how climate change might affect their business, their customers, and their investors.” She notes that there are a lot of companies that could be hugely affected by climate change, as well as that there would be large effects for particular industries such as the energy industry, during the transition to clean industry.
She would also require the SEC to tailor the disclosure requirements by industry, so that, for example, fossil fuel industry companies would have to make more detailed disclosure.
In her plan for 100% Clean Energy for America, Warren notes that she is following an approach already set out by Washington Governor Jay Inslee, in the form of the ten-year action plan to achieve 100% clean energy. She ties this in with her Green Apollo plan to invest $400 billion in clean energy R&D, noting that her focus would be on a few key industries, because “electricity, transportation, buildings, and related commercial activity are responsible for nearly 70 percent of all U.S. carbon emissions.” She also notes that the Green Manufacturing Plan and Green Marshall Plan would be a part of this goal, and that she would fund an additional $1 trillion on top of Governor Inslee’s proposals, funded from the reversal of the Tax Cuts and Jobs Act.
Her plan is to achieve:
By 2028, 100% zero-carbon pollution for all new commercial and residential buildings;
By 2030, 100% zero emissions for all new light-duty passenger vehicles, medium-duty trucks, and all buses;
By 2035, 100% renewable and zero-emission energy in electricity generation, with an interim target of 100% carbon-neutral power by 2030.
Like Biden and Sanders, Warren also proposes measures to help workers affected by the transition to a clean energy economy, noting that she would “ensure benefits to uplift and empower workers who may be hurt by the transition to a more green economy … [such as] … providing them with financial security — including early retirement benefits — job training, union protections, and benefits, and guaranteeing wage and benefit parity for affected workers.” New jobs created under her plans would also be unionized.
She also proposes the adoption of 100% clean electricity, by setting high standards for utilities nationwide, requiring utilities to achieve 100% carbon neutral power by 2030, and to achieve clean, renewable, zero-emission electricity generation by 2035. She also proposes the creation of a Federal Renewable Energy Commission, to replace the Federal Energy Regulatory Commission to regulate the US electrical grid. Federal agencies would also be required to achieve 100% clean energy in their power purchases by the end of her first term.
She would also expand federal subsidies to speed up clean energy adoption and would “establish refundable tax incentives to speed utilities’ deployment of existing smart grid and advanced transmission technologies,” and would expand the coordination between regions and states.
Finally, in her 100% Clean Energy Plan she proposes 100% clean vehicles and buildings. To do this, she proposes setting standards for vehicle emissions, including 100% zero-emissions for all new light- and medium-duty vehicles by 2030. In addition, she would invest in the modernization of the manufacturing base and would expand consumer tax credits for the purchase of these kinds of vehicles. She also proposes a “Clean Cars for Clunkers” program, modelled on the Recovery Act trade-in program, to encourage consumers to replace fuel-inefficient cars with zero-emission vehicles. Other forms of transit would also be decarbonized, including maritime, rail, and aviation.
For clean buildings Warren proposes to adopt new zero-carbon building standards by 2023 and will link Federal agencies’ grant processes to energy and pollution standards. She would also eliminate all fossil fuel use in new and renovated federal buildings by 2025. Part of the Green Manufacturing Plan would also be used to purchase clean energy products for federal building purposes, such as retrofits and heating technology. She proposes to use a combination of tax credits, direct spending, and regulatory tools to also encourage private capital investments in energy efficient buildings.
Warren also has a plan to protect public lands and proposes “a total moratorium on all new fossil fuel leases, including for drilling offshore and on public lands.” She also suggests using these public lands instead for “providing 10% of our overall electricity generation from renewable sources.”
Warren also makes a number of proposals with regard to sustainable agriculture in her plan for A New Farm Economy, which can be read about in our Agriculture report.
Her plans on Environmental Justice include a recognition that some communities could be more negatively impacted by climate change than others. To resolve this issue, she proposes to improve environmental equity mapping and to implement an equity screening for climate investments. She also reiterates her plan to support fossil fuel industry workers and those displaced by a shift towards clean energy. In addition, she proposes stricter water quality standards, including under the Safe Drinking Water Act, and the capitalization of the Drinking Water State Revolving Fund and the Clean Water State Revolving Fund.
She would also attempt to mitigate flood and wildfire risks by instructing FEMA “to fully update flood maps with forward-looking data, prioritizing and including frontline communities in this process,” and by improving fire mapping and prevention programs. At-risk populations would be prioritized in this process.
Finally, she would also “encourage the EPA and Department of Justice to aggressively go after corporate polluters.”
Warren states that the cost of her Green Manufacturing Plan would be covered by her Real Corporate Profits Tax, which ensures that “the very largest and most profitable American corporations don’t pay zero corporate income tax,” as well as by ending federal oil and gas subsidies, and closing corporate tax loopholes. As noted, many of her other climate plans are proposed to be paid for by reversing the tax cuts in the Tax Cuts and Jobs Act.
Buttigieg
Buttigieg’s plans for energy and environment are included in several different policies. His primary plan is titled Rising to the Climate Challenge.
Buttigieg’s plan has three different pillars:
· Build a Clean Economy
· Invest in Resilience
· Demonstrate Leadership
His section on “Build a Clean Economy” is the most extensive. First, he sets out proposals for becoming a zero-emissions economy by 2050, including:
· By 2025, double the clean electricity generated in the U.S.
· By 2035, build a clean electricity system with zero emissions and require zero emissions for all new passenger vehicles.
· By 2040, require net-zero emissions for all new heavy-duty vehicles, buses, rail, ships, and aircraft and develop a thriving carbon removal industry.
· By 2050, achieve net-zero emissions from industry, including steel and concrete, manufacturing, and agriculture sectors.
He then sets out more-detailed plans for each of these goals. First, he proposes investment to help the US become a leader in clean energy technologies. He would create an economy-wide price on carbon that would increase each year. A border-adjusted tax would also be applied to imported goods that had not been subject to a price on carbon where they were produced.
Buttigieg would quadruple clean energy R&D funding to $25 billion per year by 2025, and over $200 billion over 10 years. He would also create a number of clean energy investment funds, including:
· American Clean Energy Bank, with $250 billion of capitalization, to provide loans, grants, and guarantees to finance clean energy technologies, energy efficiency, and resilient infrastructure projects.
· Global Investment Initiative, with another $250 billion fund, to partner on clean energy and resilient infrastructure projects that use American technology and are built by American companies.
· American Cleantech Fund, which will be capitalized with $50 billion to support projects of new technologies that are too risky for the private sector.
He would also issue US climate action bonds, to help pay for clean energy and resilience deployment projects. In addition, he would abolish subsidies for the oil, gas, and coal industries, including the intangible oil and gas deduction, excess over cost depletion, and other subsidies.
Next, he proposes to prioritize energy efficiency in a number of ways. First, he would expand federal programs that offer affordable electricity access, including the doubling of the Weatherization Assistance Program funding, and $1 billion to the Low-Income Energy Assistance Program.
He also suggests a number of tax incentives, including an energy efficiency rebate to cover 30% of the costs of improvements for residential homes and apartments. In addition, he suggests a tax credit for commercial building efficiency, and a new CarbonStar program that would provide consumers with information on which products have a lower carbon footprint.
His next proposal is to transform the energy sector in several ways. His first proposal on this point is to establish a national Clean Electricity Standard (CES). This standard would set national standards that still allow states and regions to develop more-tailored solutions. However, the overall goal would be to meet the goal of 100% clean electricity by 2035.
He would also incentivize clean energy deployment with tax credits for solar, wind, geothermal, and other clean energy technologies, as well as long-duration battery storage. He also suggests integrating high quantities of renewables into the grid, with a nationwide network of high-voltage direct current transmission lines, and rules set by the Federal Energy Regulatory Commission that would help to work towards the zero-emissions clean electricity system by 2035.
On transportation, he would require that all new passenger vehicles be zero-emissions by 2035, and all heavy-duty vehicles be net-zero emissions by 2040. The electric vehicle tax credit would be expanded to a maximum of $10,000 per vehicle, with the aim of helping lower- and middle-income families to afford electric vehicles. He would also expand the EV infrastructure tax credit to build out charging infrastructure around the country.
Technology transition loan guarantees would also be offered to vehicle manufacturers, to help existing automobile assembly lines to shift towards different technologies. In addition, he would expand the use of biofuels, and would establish a national clean fuel standard.
Finally, he would expand on the Department of Transportation’s Smart City Challenge and would invest $100 billion over 10 years in surface transportation for cities, including modernizing subways and other transit systems. He would also aim to enhance heavy-duty vehicle efficiency standards to try to move towards 100% clean energy heavy-duty vehicles by 2040.
Buttigieg also suggests setting up clean industrial technology standards and modernizing the manufacturing process, with the aim that industrial plants in steel, cement, and petrochemicals (for example) would be net-zero emissions by 2050. To this end, $1 billion per year in R&D would be invested in advanced, low-carbon manufacturing research. He would also enact rules that aim to reduce methane emissions and would support a Buy Clean program for federal government purchases.
He also has plans for carbon removal by 2040, including direct air capture. The captured carbon would be stored underground, and he proposes to use it for carbon fiber material creation.
On farming and agriculture, he would support farmers to develop new tools and technologies to make agriculture more sustainable and productive and would commit $50 billion to R&D in agriculture for reducing carbon emissions. He would also promote research in soil carbon measuring technologies and soil carbon sequestration.
Buttigieg also notes that he plans to create high-paying clean energy and infrastructure jobs in this transition, incentivize strong labor standards, and provide transition assistance for displaced workers and families. He also notes that vulnerable and indigenous communities would receive stronger support during this transition. Finally, he supports establishing the US Climate Corps.
In his second section, on investing in resilience, he proposes establishing “Regional Resilience Hubs,” funded at $5 billion annually, to help communities understand and manage risks. Buttigieg would also make sure that all federal infrastructure investments are climate resilient and would develop federal guidelines for investments and for implementing approaches such as nature-based climate solutions. This is intended to build resilience to flooding, fires, and drought. He would also establish national Catastrophic Extreme Weather Insurance.
His final section addresses how he would demonstrate leadership in this space. He proposes using “every executive authority available” to take action to reduce emissions and require resilience in infrastructure. He would also sign a “Buy Clean” executive order for federal purchasing. In addition, he would strengthen SEC guidance on the disclosure of material climate risks faced by publicly listed companies. He would also hold a Pittsburgh Climate Summit.
He also notes that he views climate change as a national security threat. He would submit a more-ambitious US emissions reduction goal on an international level and would also take part in global efforts to reduce non-CO2 emissions. He would also pledge $5 billion per year to collaborate with foreign governments on climate issues and would double the US pledge to the Green Climate Fund. Bilateral and multilateral relationships on climate change, such as with China and India, would be focused on encouraging more climate discussions.
Summary
As stated in the Warren research, the Green New Deal would create significant changes to energy production and usage in the United States, with significant costs coming alongside. Many of the proposals are also reliant on technology that has not yet been invented. This is a strong short-term negative as the economy will need time to adjust to new energy priorities.
The major issue with the GND, and the candidates’ various proposals, is the cost. The estimated cost of the GND is large and variable. There have been a number of different estimates, such as “$51.1 trillion to $92.9 trillion, or $316,010 to $419,010 per household,” from the Competitive Enterprise Institute, and up to $93 trillion from the American Action Forum.
In response, many Democrats note that the costs of not dealing with climate change could be much higher and have much more than monetary effects.
The pathway to pay for these costs is a difficult one, as spending will need to be limited in other areas. Otherwise, taxes will need to be raised, or fines (for example, for polluters) would need to be significantly increased.
Another issue is the balance of energy sources that candidates are considering, as there is a difference between “clean energy” and “renewable energy”. There are also major questions as to whether aiming for 100% renewable energy is feasible or realistic, especially in such a short time frame.
The problem with renewables including wind and solar is that they are variable. They are not “dispatchable”, i.e. available to dispatch as need requires. Rather, the amount of energy they produce changes day-by-day and hour-by-hour. This non-dispatchable energy needs to be balanced with dispatchable energy from other sources, including nuclear and non-renewables. Some suggest that non-renewable energy emissions could be balanced with carbon capture and sequestration, but this also comes with problems.
A major review performed by researchers for the Energy Innovation Reform Project found that effectively, above 60–80% decarbonization, the costs rise sharply and the benefits reduce. While 100% decarbonization may be possible, it may be much faster and much less expensive to get 80% of the way there, and retain the use of some non-renewable sources, as well as nuclear.
The candidates’ focus on 100% renewables or complete decarbonization of the economy may result in increased costs that are simply unnecessary. A better approach is a significant move towards decarbonization that is more balanced, faster, and more stable in terms of energy supply. With heavy use of natural gas and coal, decarbonization past 60–80% would not be possible, but their continued use in limited amounts as backup energy sources could help an economic and energy sector transition towards significant decarbonization in a much quicker way.
Another good approach to reach decarbonization is the expansion of the grid to support the sharing of variable renewable energy sources, which can help to balance out dips or variability in supply. One major positive proposed by Sanders is the StorageShot program, which aims to allow energy to be stored much more effectively. This kind of research could revolutionize the energy sector and allow a much greater use of renewables, with storage largely eliminating the issue of variability.
On the positive side, the Green New Deal for Public Housing Act could promote massive manufacturing and job growth in areas with large amounts of public housing. Many of these areas are so-called “red” states, and many blue-collar jobs would be needed to retrofit public housing in the ways proposed by the Act. For more about issues on housing, read our report.
Also, the creation of an energy department similar to ARPA-E would likely yield new innovations in electricity storage and batteries. If scale electricity storage could be achieved, then the need for fossil fuels would abate rapidly as solar and wind would be able to generate the needed power. If the candidates would divert more of their proposed spending to this task of solving storage, they would be able to solve the problem faster and have a larger, faster impact on the environment.
Essentially, the problem of greenhouse gases and climate change is one that the Democratic candidates are highly focused on, and their proposals are extensive in light of that. The large costs of the GND make it a difficult “political pill” to swallow, as it would create major disruptions to the energy industry and would also require economic policy changes to get the necessary funding.
A difficult question is also a matter of cost: what are the costs to the economy and the population if radical changes to the energy sector do not occur (such as through a framework like the GND)? While the economic costs of implementing the GND can be estimated, the costs of not implementing it are much harder to quantify. Striking the balance between these options is therefore complex, as the results of each argument “winning” are not necessarily simple to determine.
URL sources | https://andrewbusch.medium.com/the-big-economic-shift-democratic-candidates-2020-energy-and-environment-report-1d2a368a8fc | ['Andrew Busch'] | 2020-01-13 22:13:44.348000+00:00 | ['Elections', 'Politics', 'Climate Change', 'Energy', 'Environment'] |
How I Became Myself Again: What Writing Did for My Self-Worth in 2019 | How I Became Myself Again: What Writing Did for My Self-Worth in 2019 Jason Forrest Jan 2 · 12 min read
I love year-end lists not only to learn about the best movies, books, articles, and events of the previous year, but also because I really value the act of self-reflection. Just like the end of a sprint in Agile development, this time of the year is for retrospection.
I want to share my story of 2019 so that anyone can see what is possible with some planning and a bit of luck. Often simply beginning is the hardest aspect of doing anything, and giving yourself the freedom to experiment and get things wrong is equally important. But in order to tell you who I am now, I need to tell you who I was a year ago.
My Story (Or How I Went From Being Relatively Famous to Missing Something in My Life)
Looking back at the last year, I’m struck with how far I have come not only as a writer but also as a person in search of a passion.
The author performing with the Birthday Party Berlin crew (link for more)
Here’s my story:
What many in the dataviz community may not know is that I am a very well-known figure in a small but global electronic music scene. Just before the turn of the last decade, and shortly after the birth of my son, I decided to move into the next chapter of my career. Much to everyone’s surprise, I had literally accomplished all my dreams as a musician and it was time to move on.
Over the span of a few years, I stopped being a professional musician and moved into iOS development, and then co-founded a start-up for online video called Network Awesome. Going into Network Awesome, I knew that there could only be two outcomes: (1) that my company would work and I could do that as a job, or (2) I would gain the skills to “get a real job.” After a few years, Network Awesome was almost successful (i.e., not) and after a painful job search, I found myself working as a UX Designer. It was a job that I liked a lot, and it taught me how to use a system and leverage a process to make something with a purpose. But despite how interesting it was, it was ultimately just a job — not an identity like I had as a musician.
Not having a purpose (or a passion) was extremely difficult for me. I had gone from being a person that others sought out to just a guy with a job. Certainly happy in my life, but missing that certain spark.
“A2Z+ Alphabets & Other Signs” (amazon)
In 2017, after working in UX Design for eight years, I started to go back and re-explore the foundations of design that I first learned in art school. I had always been interested in typography, so I spent a few months learning and experimenting. Shortly thereafter, I chose to move into data visualization as a way to marry my design interests with my technical and UX process knowledge. It all seemed related and I guess I wasn't wrong.
One day I found a book of rare typographic specimens and diagrams called A2Z+ and that’s where I stumbled across some charts created in the year 1900 by the African-American political activist W.E.B. Du Bois. There wasn't much information in the book, so I hit Google and started to look around. I was shocked. My initial search showed no substantial writing on his work. I found myself digging further and further into research. Despite feeling so massively unqualified to write about his landmark series of charts, I published my first article on his charts in July 2018. That piece is my most-read article to date and eventually led to a presentation at the Tapestry Conference in November of that year. | https://medium.com/nightingale/how-i-became-myself-again-what-writing-did-for-my-self-worth-in-2019-1ed69286686 | ['Jason Forrest'] | 2020-01-02 16:14:55.914000+00:00 | ['Career Advice', 'Advice', 'Careers', 'Data Visualization', 'Writing'] |
Pickling Machine Learning Models | How to Pickle
The pickle module has two main methods.
pickle.dump()
The pickle.dump() method dumps a Python object into a pickle file. This creates a .pickle file in your current working directory ( pickle.dump(what_are_we_dumping, where_are_we_dumping_it) ):
In the code snippet above, we are creating an example_dict.pickle file from an example_dict dictionary. Statements 1 and 2 perform the same task of converting the dictionary into a pickle file. Using the with statement ensures that open file descriptors are closed automatically after the program execution leaves the context of the with statement. The 'wb' in the open statement means we are writing bytes to the file.
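The snippet referenced here appears to have been lost in extraction. A minimal reconstruction consistent with the description — the dictionary contents are assumed for illustration — might look like:

```python
import pickle

# An example dictionary to persist (contents are illustrative).
example_dict = {1: "6", 2: "2", 3: "f"}

# Statement 1: open the file manually, then close it.
pickle_out = open("example_dict.pickle", "wb")
pickle.dump(example_dict, pickle_out)
pickle_out.close()

# Statement 2: the with statement closes the file automatically.
with open("example_dict.pickle", "wb") as pickle_out:
    pickle.dump(example_dict, pickle_out)
```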
pickle.load()
The pickle.load() method lets you use the .pickle file ( pickle.load(what_do_we_want_to_load) ) by loading it into memory: | https://medium.com/better-programming/pickling-machine-learning-models-aeb474bc2d78 | ['Aryan Deore'] | 2020-08-19 15:23:21.225000+00:00 | ['Machine Learning', 'Programming', 'Data Science', 'Python', 'Coding']
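The corresponding load snippet is also missing from this extract. A sketch consistent with the text — the dump call at the top recreates the file so the snippet runs on its own — could be:

```python
import pickle

# Recreate example_dict.pickle so this snippet is self-contained.
with open("example_dict.pickle", "wb") as pickle_out:
    pickle.dump({1: "6", 2: "2", 3: "f"}, pickle_out)

# 'rb' means we read bytes from the file.
with open("example_dict.pickle", "rb") as pickle_in:
    example_dict = pickle.load(pickle_in)

print(example_dict)  # → {1: '6', 2: '2', 3: 'f'}
```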
A beginner’s guide to Git — how to start and create your first repository | After a short introduction on what is Git and how to use it, you will be able to create and work on a GitHub project.
What is Git?
Git is a free and open source software created by Linus Torvalds in 2005. This tool is a version control system that was initially developed to work with several developers on the Linux kernel.
Many control systems exist, like CVS, SVN, Mercurial and others, but today Git is the standard software for version control.
Version control, right?
If you are new in the development world, these words will not tell you anything. However, don’t worry after this short paragraph, you will exactly know what a “Version Control System (VCS)” is.
Version control is a management system that tracks the modifications you make to a file or a set of files (for example, a code project). With this system, developers can collaborate and work together on the same project.
Version control also provides a branching system that allows developers to work individually on a task (for example: one branch, one task; or one branch, one developer) before combining all the changes made by collaborators into the main branch.
All changes made by developers are tracked and saved in a history, which makes it easy to see the modifications made by every collaborator.
Version Control System (VCS) change history — Copyright to ToolsQA post
Where to find Git repositories
If you want to start using Git, you need to know where to host your repositories. There are many hosting platforms where you can put your code free of charge; paid options exist, but you rarely need them except in specific cases.
Here are the three most popular Git hosting services:
GitHub: Owned recently by Microsoft. Launched in 2008 (31 million users in October 2018).
GitLab: Owned by GitLab Inc. Launched in 2011.
BitBucket: Owned by Atlassian. Launched in June 2008.
Note: Hosting platforms are available in two ways, on the cloud (hosted online) or self-installed on your server (private hosting).
Why use Git as a developer
This tool is indispensable for developers worldwide. Here is a list of the advantages of this tool:
No more copies: when you finish working on a significant update for your application or a bug fix, you just need to “push” your project online to save it.
If you delete or break your code, you just need to type a command to go back to the previous version and continue your work.
Work with your friends without sending an e-mail with the compressed project every time the code changes.
You can afford to forget what you did. A simple command is enough to check your changes since the last time you saved your work.
Those are the main advantages if you don’t use Git at the moment. Believe me, this tool can become paramount. As an example, you can configure services to work with Git and automatically deploy and test your code.
Now, let’s practice with Git and GitHub
Now that you know what Git and Github are, it’s time to practice with concrete exercises.
After these exercises, you will be able to create and manage your projects via GitHub with all the basic features of Git.
Note: I chose GitHub as our hosting service for Git because it’s the most used in the world. Don’t be afraid; the procedure is much the same on other services. Please remember that this article assumes you know the basic shell commands. If not, some parts of this article will be confusing.
#1 step — Time to start!
Looking forward to getting started? Let’s do it!
This first exercise is not very complicated; it’s divided into two steps. The Git installation and GitHub account creation.
a. GitHub account creation
To create your account, you need to go to the main GitHub page and fill in the registration form.
GitHub main page with registration form
Nothing more! You are officially a new member of GitHub!
b. Git installation
Now you need to install the Git tools on your computer. There are different Git clients, but it’s better to install the basic command-line one to start. We will use the command line to communicate with GitHub.
Once you are more comfortable with the command line, you can download Git software with a user interface.
For Ubuntu:
First, update your packages:
$ sudo apt update
Next, install Git with apt-get:
$ sudo apt-get install git
Finally, verify that Git is installed correctly:
$ git --version
For MacOSX:
First, download the latest Git for Mac installer.
Next, follow instructions on your screen.
Finally, open a terminal and verify that Git is installed correctly:
$ git --version
For Windows:
First, download the latest Git for Windows installer.
Next, follow instructions on your screen (you can leave the default options).
Finally, open a terminal (example: powershell or git bash) and verify that Git is installed correctly:
$ git --version
For all users:
One last step is needed to complete the installation correctly! You need to run the following commands in your terminal, with your own information, to set the default username and email that will be attached to your saved work:
$ git config --global user.name "Gaël Thomas"
$ git config --global user.email "example@mail.com"
#2 step — Your first GitHub project!
Now that you’re ready, you can return to the main GitHub page and click on the “+” icon in the menu bar.
GitHub menu bar with “+” icon
Once you click on this button, a new menu appears with a “New repository” entry. Click on it!
Submenu with “New repository” entry
The repository creation page will appear. Choose a cool name for your first repository and put a small description before clicking on the “Create repository” button.
Note: In the context of this article, please don’t tick “Initialize this repository with a README”. We will create a “README” file later!
Repository creation menu
Well done! Your first GitHub repository is created. If you want to see all your repositories, you need to click on your profile picture in the menu bar then on “Your repositories”.
Submenu with “Your repositories” entry
#3 step — A good cover
It’s time to make your first modification to your repository. What do you think about creating a cover for it, a kind of welcome text?
a. A local version of your project
Your first mission is to get a copy of the repository on your computer. To do that, you need to “clone” the repository. On the repository page, you need to get the “HTTPS” address.
Repository page with “HTTPS” address
Once you have the repository’s address, you need to use your terminal (through shell commands) to move to the place where you want to put the local copy (for example, you can move into your “Documents” folder). When you are ready, you can enter:
$ git clone [HTTPS ADDRESS]
This command will make a local copy of the repository hosted at the given address.
Output message of “git clone” command
Now your repository is on your computer. You need to move into it with:
$ cd [NAME OF REPOSITORY]
Note: When you clone, Git will create a repository on your computer. If you want, you can access your project with the computer user interface.
b. Repository edition
Now you can create a file named “README.md” in your folder (through the terminal or your computer’s user interface). There is nothing special about this step: open your folder and add a file as you would in any standard folder.
If you want to do something cool, copy and paste this template in your “README.md” file. You can replace information between the hooks to personalize the output.
c. Let’s share our work!
Now that you have modified your project, you need to save it. This process is called committing.
To do this, get back to your terminal. If you have closed it, go back in your folder.
When you want to save your work, four steps are required. These steps are called: “status”, “add”, “commit” and “push”. I have prepared a standard procedure for you to perform each time you want to save your work.
Note: All the following steps must be performed within your project.
“status”: The first thing you need to do once your work is done is to check which files you have modified. To do this, you can type the following command to make a list of changes appear:
$ git status
“git status” output in our project
“add”: With the help of the change list, you can add all the files you want to upload with the following command:
$ git add [FILENAME] [FILENAME] [...]
In our case, we are going to add “README.md” because we want to save this file.
$ git add README.md
Note: If you type “git status” again, “README.md” will now appear in green. This means that we have added the file correctly.
“commit”: Now that we have added the files of our choice, we need to write a message to explain what we have done. This message may be useful later if we want to check the change history. Here is an example of what we can put in our case.
$ git commit -m "Added README.md with good description in it."
“push”: You’re there, you can now put your work online! If you type the following command, all your work will be put online and visible directly on the repository page.
$ git push origin master
You did it! If you come back to your repository page on GitHub, you will see your “README.md” file with a beautiful preview of it.
Repository page with “README.md” file
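The whole four-step procedure above can be rehearsed locally in a throwaway repository before touching GitHub. The path and commit message below are illustrative, and the final push is left commented out because it needs a real remote:

```shell
# Practice repository; everything below is local, no GitHub needed.
rm -rf /tmp/demo-repo
mkdir -p /tmp/demo-repo
cd /tmp/demo-repo
git init -q
git config user.name "Demo User"          # local identity for this repo only
git config user.email "demo@example.com"

echo "# My Project" > README.md

git status --short                        # "status": README.md shows as untracked
git add README.md                         # "add": stage the file
git commit -q -m "Added README.md with good description in it."   # "commit"
git log --oneline                         # the commit now appears in the history

# "push": with a real remote configured, the last step would be:
# git push origin master
```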
Useful commands for Git
You are still missing some essential commands as a beginner with Git. Here is a list that will be useful to you during your project.
Display the history of commits (all modifications made on the project).
$ git log
Revert all your changes since the last commit.
$ git checkout .
Revert all changes on a specific file since the last commit.
$ git checkout [FILENAME]
Display the last changes on a file since the last commit.
$ git diff [FILENAME]
Remove all unexpected files in your project (not committed).
$ git clean -dfx
Add all files and make a commit at the same time.
$ git commit -am "[MESSAGE]"
Love Them or Hate Them, Coding Exercises Are an Essential Part of Software Engineering Interviews | Love Them or Hate Them, Coding Exercises Are an Essential Part of Software Engineering Interviews
You can learn a lot about someone’s proficiency by asking the right questions
Photo by ThisisEngineering RAEng on Unsplash.
When interviewing for a software engineering job, it’s common to be handed a dry erase marker and told to solve some arbitrary problem:
“Write a function that determines if the letters in a given string can be rearranged to form a palindrome.”
“Implement a memoization function.”
“How would you sort an array containing up to 100,000 randomly generated integers?”
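To make the first prompt concrete, here is one common way a candidate might answer it (a sketch, not the only acceptable solution): a string can be rearranged into a palindrome exactly when at most one character occurs an odd number of times.

```python
from collections import Counter

def can_form_palindrome(s: str) -> bool:
    """Return True if the letters of s can be rearranged into a palindrome."""
    # A rearrangement reads the same forwards and backwards exactly when
    # at most one character count is odd (the odd one sits in the middle).
    odd = sum(count % 2 for count in Counter(s).values())
    return odd <= 1

print(can_form_palindrome("carrace"))  # True  -> "racecar"
print(can_form_palindrome("hello"))    # False
```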
As an interviewee, I used to hate whiteboard problems. The pressure of having to understand and solve a seemingly pointless or obscure problem in front of a stranger is enough to give anyone anxiety. I’ll never actually write a merge sort during my day-to-day responsibilities as a software engineer anyway, so what’s the point?
Now, as I sit on the other end of the conversation conducting the interviews, I’m beginning to see the merit of this format. The truth is, watching someone code for 30 minutes can tell you more about them than you would ever learn by asking them a hundred theoretical questions.
In practice, a mix of theoretical questions and coding exercises is important to include in the interview. For now, though, let’s examine why the whiteboard questions are so crucial. | https://medium.com/better-programming/love-them-or-hate-them-coding-exercises-are-an-essential-part-of-software-engineering-interviews-f66da65aecea | ['Tyler Hawkins'] | 2020-10-07 14:57:02.429000+00:00 | ['Programming', 'Interview', 'JavaScript', 'Startup', 'Coding Interview'] |
How to Build a Reporting Dashboard using Dash and Plotly | A method to select either a condensed data table or the complete data table.
One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present:
Code Block 17: Radio Button in layouts.py
The callback for this functionality takes input from the radio button and outputs the columns to render in the data table:
Code Block 18: Callback for Radio Button in layouts.py File
This callback is a little more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the date-selection callback changes the data presented in the data table via the callback statement Output('datatable-paid-search', 'data'), this callback changes the columns presented in the data table based upon the radio button selection, via the callback statement Output('datatable-paid-search', 'columns').
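Since the original code block is not shown here, the core of that columns callback can be sketched as a plain function, stripped of the @app.callback decorator so it runs standalone. The column names are hypothetical, not the dashboard's real ones:

```python
# Hypothetical column sets; the real dashboard defines its own names.
CONDENSED_COLUMNS = ["Week", "Spend", "Revenue YoY (%)"]
COMPLETE_COLUMNS = CONDENSED_COLUMNS + ["Sessions", "Orders", "Revenue"]

def columns_for_view(radio_value):
    """Map the radio-button selection to the columns the table renders,
    in the list-of-dicts shape the Dash DataTable expects."""
    chosen = COMPLETE_COLUMNS if radio_value == "complete" else CONDENSED_COLUMNS
    return [{"name": col, "id": col} for col in chosen]

print(columns_for_view("condensed"))
```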
Conditionally Color-Code Different Data Table cells
One of the features the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric’s value: red for negative numbers, for instance. However, conditional formatting of data table cells has three main issues.
There is lack of formatting functionality in Dash Data Tables at this time.
If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly.
There is a bug in the Dash data table code in which conditional formatting does not work properly.
I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide:
Code Block 19: Conditional Formatting — Highlighting Cells
The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash.
*This has since been corrected in the Dash Documentation.
Conditional Formatting of Cells using Doppelganger Columns
Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements:
Code Block 20: Adding Doppelganger Columns
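The original block is not reproduced above, but the doppelganger idea can be illustrated with a toy frame (assuming pandas; the column values are invented). Multiplying by 100 means a filter that honors only the integer part of a threshold still compares the full value correctly:

```python
import pandas as pd

# Toy stand-in for the paid-search table; values are invented.
df = pd.DataFrame({"Revenue YoY (%)": [-3.5, 0.25, 1.75]})

# Hidden "doppelganger" column: the same metric scaled by 100, so a
# conditional filter that truncates thresholds to their integer part
# still separates positive from negative values correctly.
df["Revenue_YoY_percent_conditional"] = df["Revenue YoY (%)"] * 100

print(df)
```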
Then, the conditional cell formatting can be implemented using the following syntax:
Code Block 21: Conditional Cell Formatting
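Here is a sketch of what such a style_data_conditional entry looks like, written as plain dictionaries so it can be inspected without Dash installed. Note that recent Dash versions use the filter_query key; the article's original code predates that syntax, and the column names are illustrative:

```python
# Plain-dict sketch of the conditional style entry; no Dash import is
# needed just to look at the structure.
style_data_conditional = [
    {
        "if": {
            # Filter on the hidden doppelganger column...
            "filter_query": "{Revenue_YoY_percent_conditional} < 0",
            # ...but apply the style to the visible, formatted column.
            "column_id": "Revenue YoY (%)",
        },
        "color": "red",
    }
]

print(style_data_conditional[0]["if"]["column_id"])
```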
Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values.
The complete statement for the data table is below (with conditional formatting for odd and even rows, as well highlighting cells that are above a certain threshold using the doppelganger method):
Code Block 22: Data Table with Conditional Formatting
I describe the method to update the graphs using the selected rows in the data table below. | https://medium.com/p/4f4257c18a7f#4476 | ['David Comfort'] | 2019-03-13 14:21:44.055000+00:00 | ['Dash', 'Dashboard', 'Data Science', 'Data Visualization', 'Towards Data Science'] |
4 Software Development Techniques to Level up Your Data Science Project | Software development is the process followed by developers and programmers to design, write, document, and test codes. Regardless of what programming language you use or what your target application field is, following the specific guidelines of good software development is essential in building a high-quality, maintainable project.
Data science projects, perhaps more than other types of software projects, should be built with maintainability in mind. That is because, in most data science projects, the data is not constant and is frequently updated. Moreover, any data science project is expected to be extendable and crash-resistant: it should be immune to any mistake in the data.
Because every single part of the code in a data science project is built to fit a specific shape or form of data, giving the wrong data to the code might break it. Of course, you never want your code to break, no matter what data it is fed. Hence, when designing and building the code, there are a few things to keep in mind to make your code more resilient.
There are many guidelines to follow to design and write good, stable code. However, in this article, we will focus on what I think are the four most important rules, or skills, needed to build a solid data science project.
So, let’s get right to it…
Documenting
We can’t talk about good software without mentioning documentation. There are two steps to keeping your code clean and well documented. The first step is commenting your code. Comments are critical for walking readers of your code, and most importantly your future self, through your thought process when you wrote the code.
Comments need to be simple, no more than two sentences long, and straight to the point. Never forget to write a descriptive docstring whenever you define a class or a function, or when you create your own modules. When writing comments, always remember:
Comments are not there to explain code to people; code is there to explain comments to the computer.
Once your codes and comments are done — well, for the time being since code is never done — you need to build sufficient documentation to your code. Documentations are external explanations of the code written — usually — in plain English. Documentations are often created using documentation processing tools, such as Sphinx and DocUtils. Documentations are often a part of your project’s website.
When it comes to best practices, it’s a good idea to start writing your documentation before you start coding. It will act as a guide to what needs to be done. Unfortunately, most of us — including myself — don’t follow this rule. However, we all need to start practicing it.
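As a small illustration of the docstring habit mentioned above, a documented helper might look like this (the function itself is invented for the example):

```python
def normalize(values):
    """Scale a list of numbers to the 0-1 range.

    Args:
        values: a non-empty list of numbers containing at least two
            distinct values.

    Returns:
        A list of floats between 0.0 and 1.0.
    """
    low, high = min(values), max(values)
    return [(v - low) / (high - low) for v in values]

print(normalize([0, 5, 10]))  # [0.0, 0.5, 1.0]
```

Documentation generators such as Sphinx can pick docstrings like this up automatically, which is one reason they pay off beyond plain comments.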
Testing
When we write code, we often write it based on some variables and datasets. However, it is very common that your code may contain some bugs that will only appear in some particular cases or with a specific dataset. Therefore, testing your application before deploying it can be crucial.
But, testing can get quite complicated, especially when it comes to data science projects. Often, data science projects are tested using reviews from other data scientists because most of the well-known testing methodologies are difficult to apply in case of data science projects.
That is because a simple change in data could lead to significant changes in the performance of the code. Through the years, researchers and developers have looked for the best way to test data science projects. They found out that the best way to test data science applications is through unit testing.
Unit testing is a type of testing that is used to detect changes that may break the flow of your program. They help with maintaining and changing the code. There are many Python testing libraries that you can use to perform unit testing.
Unittest is the built-in library in Python that is used to perform unit testing. Unittest is often referred to as PyUnit, and it is an easy way to create unit testing programs. Pytest is a complete testing tool, and it is my favorite. Pytest has a simple, straightforward approach to building and using unit tests. Hypothesis is a unit-test-generation tool. The goal of developing Hypothesis is to assist developers in creating and using unit tests that tackle the edge cases of your code.
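A minimal unit test in the pytest style is just a function whose name starts with test_ and that makes assertions; the data-cleaning helper here is invented for illustration:

```python
# Invented helper under test: drop records that are missing entirely.
def drop_missing(rows):
    return [row for row in rows if row is not None]

# pytest collects any function whose name starts with "test_" and
# reports a failure whenever an assert inside it is false.
def test_drop_missing_removes_none():
    assert drop_missing([1, None, 3]) == [1, 3]

def test_drop_missing_keeps_clean_data():
    assert drop_missing([1, 2]) == [1, 2]
```

Running pytest in the folder containing this file discovers and executes both tests automatically, with no boilerplate class or runner code.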
Data Management
Getting a little more specific to data science projects: when dealing with data, one thing we need to be careful with is managing it. We need to consider many things, such as: How is your data created? How big is it? Will it be loaded every time or stored in memory?
When working with data, we need to be very careful with memory management and how the code is interacting with the data. One thing to consider is how Python functions call affect the memory usage of your code. Sometimes, function calls take up more memory than you realize.
One way you can overcome that is by using Python’s automatic memory management capabilities. Here’s how Python deals with function calls:
Every time you call a function, an object is created with a counter of the number of places this function is used. Whenever we use or reference this function, the counter is incremented by 1. When a code reference to the function object goes away, the counter is decremented by 1 until it hits 0. Once that happens, the memory is freed.
If you’re wondering how you can write code that uses this automatic memory management, wonder no more. Itamar Turner proposed three different ways you can make your functions more memory efficient:
Try to minimize the use of local variables.
If you can’t, re-use variables instead of defining new ones.
Transfer ownership of objects out of functions that take a lot of memory.
Using Domain-specific tools
Last but not least, to help you build resilient projects, make use of tools built specifically for data science. Of course, there are well-known tools, such as IPython, Pandas, Numpy, and Matplotlib.
But let me shed some light on two lesser-known tools:
GraphLab Create: a Python library used to build large-scale, high-performing data products quickly. You can use GraphLab Create to apply state-of-the-art machine learning algorithms, such as deep learning, boosted trees, and factorization. You can perform data exploration through visualization, and you can quickly deploy your project using Predictive Services.
Fil: a Python memory management tool for data science. You can use Fil to measure peak memory usage in your Jupyter notebook, measure peak memory usage for normal (non-Jupyter-based) Python scripts, and debug out-of-memory crashes in your code. Moreover, Fil can help reduce your memory usage significantly.
Takeaway
Nowadays, building a good data science project is not enough to make you stand out. You need your project to be crash-resistant and memory efficient. That’s why, by using some software development skills, you can take your data science project to the next level and make it stand out.
The software development skills we discussed in this article are:
Efficient documenting and commenting.
Testing, testing, and then some more testing.
Wise data and memory management.
Special tools that can ease your work and increase the efficiency of your project.
What we didn’t talk about, though, is the most crucial skill any developer must obtain: the ability to always work on improving your skills and knowledge base, and to keep up to date with recent technologies and tools.
8 Effective Ways to Upgrade Your Mindset for Success | I am a strong believer that success and happiness are all about mindset. Your mindset and belief system affect everything in your life from what you think and feel to what you do and how you react to the world around you.
In order to achieve your goals, your mindset needs to match your aspirations, otherwise it will hold you back from being successful in your endeavours.
Here are 8 effective ways you can upgrade your mindset:
1. Change your Self-Talk
The conversations you have with yourself are a direct reflection of your mindset. If you are telling yourself “I am not good enough to achieve my dreams”, your thoughts will create your reality and your mindset will hold you back from having the life you want. To upgrade your mindset, change your negative self-talk to an empowerment speech. Sounds cliché, but telling yourself “I can do this” or “I got this”, really works.
2. Change your Language
After changing your inner thought dialogue and the story you are telling yourself, change the way you talk to other people. To encourage a growth mindset, avoid phrases like “I am always like this” or “I am always doing this”. Furthermore, make it a habit to talk about the things that are going well in your life instead of complaining and talking about your problems. This will encourage a mindset of abundance instead of fear and lack.
3. Determine the mindset you need and act as if
Pick a goal you want to achieve and ask yourself: “Which mindset do I need to achieve this goal?” and “Which mindset do people have that were successful at this goal?”.
For example, healthy & fit people might share the mindset “I love taking care of my body, nourishing it with whole foods and exercising every day.”. If it’s your goal to be healthy & fit, act as if you already HAVE the mindset of a healthy & fit person. This way, you are basically tricking your brain into adopting a new mindset and reinforcing it with action.
4. Identify & Overcome your Personal Mental Blocks
We all have certain limiting beliefs that hold us back from realizing our full potential. Most of these mental blocks are created in our childhood when we learn to see the world in a certain, sometimes limiting way. What we see, hear and experience (e.g. financial scarcity) becomes our default and our filter for reality. We then go through life creating more of the same experiences that match our view of the world.
Identifying and overcoming your own specific mindset blocks is one of the most effective ways to completely transform your mindset. If you are interested in learning more about the 6 most common mental blocks that hold you back from success, you can download my free ebook here.
5. Learn & Apply
Read books from great minds to understand and adapt their thinking. Read books about how the mind and brain works. Learn from mindset experts through online courses, events and coaching.
Here are some of my favourite mindset resources:
Mindset by Carol Dweck to learn about the growth mindset
Everything from Gabrielle Bernstein to adopt a mindset of abundance and align with the flow of life
The writing of Thomas Oppong on Medium for great nuggets on improving your thinking
The Online Courses from Denise DT for upgrading your money mindset
My Free Ebook “6 Mental Blocks That Keep You Stuck in Life”
6. Surround yourself with people that match your desired mindset
Want to upgrade your money & success mindset? Start hanging out with people that are very successful and seem to have an abundance of money flowing their way at any time. It is easier to adapt a new mindset when you see that it is already working for other people. Learn how they think and adapt their daily habits to match their mindset.
7. Create new habits to support your mindset change
Integrate powerful habits into your day that support your mindset change and reinforce your thinking with action. For example, if you are upgrading from a “fixed” to a “growth” mindset, schedule time for learning and note down your learnings and achievements every day. Also, actively seek out challenges so you can prove to your mind that you can grow and learn to overcome whatever life throws at you.
The easiest way to act yourself into thinking differently is to ask yourself: “What would I do if I had ______ mindset?” and then go and do that! When I started my own business, I needed to upgrade from an “I am not good enough to do this” mindset to believing “I have all the tools, skills and resources I need to start a successful company”. And when I asked myself this question, I realized I literally just needed to start doing something. So I scheduled time to write an article for my blog every morning before I went to the office for my day job.
8. Jump out of your comfort zone
If you put yourself in situations that challenge you, you have no other choice than to rise to the occasion and upgrade your mindset. It becomes a necessity to survive.
So ask yourself “What situations can I put myself in that will require me to operate on a higher mindset?”. Basically, the idea is to engineer your environment to upgrade your brain! | https://medium.com/swlh/8-effective-ways-to-upgrade-your-mindset-for-success-e1687830f649 | ['Liz Huber'] | 2019-05-31 02:11:03.237000+00:00 | ['Self', 'Mindset', 'Brain', 'Success', 'Goals'] |
Learning to Photograph: My First Few Shots | in In Fitness And In Health | https://medium.com/justins-journal-of-findings/learning-to-photograph-my-first-few-shots-751c787b734c | ['Justin Lyle'] | 2019-10-27 23:03:33.329000+00:00 | ['Music', 'Labs'] |
Halloween Costumes, Cultural Appropriation, and The Death of Fun | I have met the enemy of All Hallows’ Eve enjoyment.
And it is a small but very judgmental, vociferous group.
For those of you who follow me on Twitter, you’ll know that I’ve declared intellectual war on those who aim to quash our freedom of expression. Much of my recent Twitter activity has been devoted to battling the idea that people, in particular white people, are banned from wearing costumes that exhibit a culture they don’t belong to. At least, that’s the sentiment expressed by those who have abolishing innocuous buffoonery in their sights.
Horseshit, I say. Complete and utter horseshit.
People in an enlightened society like ours have the capacity to differentiate between harmless charades and genuine racial hatred, I declare with a raised fist and puffed chest. Let’s encourage them to exercise it.
No, they say, with closed minds and blaring mouths. It’s insensitive. The feelings of a few must come before the pleasure of the majority.
Because the tyranny of dressing up as something scandalous is so oppressive to certain groups that have been historically marginalized, Halloween is now an acceptable casualty on the quest for a more just and humane future for all.
And the weapon that’s used to bludgeon the carefree masses into ideological submission every Halloween?
Cultural appropriation.
Specifically, accusations of cultural appropriation.
In the Woke Folk Lexicon, cultural appropriation is generally defined as the act of adopting aspects of one culture by members of another culture. While not inherently heinous, the term takes on a more poisonous connotation in cases when members of a culture considered “privileged” adopt elements from a culture considered “marginalized”.
To be fair, there is a modicum of validity to the concept; most people would agree that trivializing facets of another culture in a way that’s offensive or perpetuates negative stereotypes of it should be frowned upon. And perhaps there is some risk of a cherished custom becoming diluted when others begin to embrace it in different ways.
However, accusations of cultural appropriation are now far too numerous, with many innocent displays of affection from members of one culture to another being thrust into the same division of bigotry, right alongside acts that would make the most fanatical Klan member blush.
This may be because the intent of offenders no longer matters; it’s the cultural crime that matters.
If you’re white and you wear a sombrero and drink margaritas and partake in Cinco de Mayo festivities because your friends invited you out for a good time, then you better have an apology prepared for when your social sin is shared with the general public.
Another reason why cultural-appropriation-as-an-evil should be met with fierce scrutiny is the fact that we wouldn’t have most of the comforts of modern civilization if it weren’t for cultural appropriation. Sublime works of art, music, film and other forms have birthed from the marriage of various components of disparate cultures, along with written language, math, science, technology, philosophy, etc.
Yes, even Halloween as we know it came from cultural appropriation; ancient Celtic harvest festivals, in particular the Gaelic festival Samhain, are generally believed to be the source of today’s Halloween practices.
Because Halloween is largely based on people pretending to be something they’re not for a day, those who participate in the tradition are, of course, prime targets of those with porcelain sensibilities, loud opinions, and too many Twitter followers.
Halloween is the one time of year where we can all act edgier, scarier, and more sensual than we usually are.
It’s when we can shed our inhibitions, let our creativity run wild, break some taboos, and practice our freedom to shock, gleefully sticking up our middle fingers to the shackles of political correctness and polite society.
And yet a deafening — albeit, microscopic — cultural splinter group has made it their mission to slaughter any joy that can be had while temporarily assuming the identity of another in the name of whimsy. Despite this group’s diminutive size, there’s no denying that they’re the ones with the social megaphone, declaring their rigid idea of kindness on the rest of us without thought or remorse for its ramifications on even the most trivial aspects of our daily lives.
By placing such Victorian dictates on what we can wear, this group removes the fun, magic, and mystery of a holiday that, in many ways, is about embracing the darkness and laughing at death.
If we can briefly put away our fears of the Great Unknown, then surely we can briefly put away our insecurities about race and culture?
Halloween must return to what made it so diverting and seductive. We as a collective — not fragmented — culture must raise our tolerance for offense and give people the benefit of the doubt; if you see someone donning a costume that’s influenced by your culture, ask yourself this: “Is that person displaying their hatred of my people, or do they just want to quit being themselves for a while?”
It’s most likely the latter.
What Woke Folk call “appropriation”, I call “exchange”, “preservation”, and “openness”.
Adherents of cultural-appropriation-as-an-evil don’t understand that by inviting others to participate in your culture, you’re reducing the boundaries that divide us. When I as a Mexican tell white people that they’re not allowed to dress up as a bandito for Halloween or engage in Día de Muertos festivities, do you really think I’m promoting racial tolerance?
One of the great things about cultural appropriation is that it gives us an opportunity to share, enjoy, and expand each other’s cultures. By treating the customs and traditions of our respective cultures as dynamic and malleable, and welcoming outsiders interested in exploring them and celebrating in their own way, we can only gain more friends and supporters.
And what better time to encourage such social fluidity than the holiday that encourages us to turn off our restraint and scoff at the hang-ups of life?
I’m not proposing that everyone should adorn themselves in the most distasteful costumes they can find. And certainly some forms of extremely racially charged attire should be discouraged.
What I AM proposing is that we should think twice before calling out a complete stranger for wearing something that upsets us in order to win social points, and take a moment to recognize that that person probably just wants to have some breezy entertainment.
This Halloween, let’s stop allowing our desire to provoke, frighten, and arouse to be held hostage by an authoritarian few, and revel in the forbidden fun that comes with exchanging our identities for something more outrageous, the beau monde’s edicts be damned.
For culture is a dish, like Halloween candy, that’s best shared with everyone.
If you want some humor with your horror, check out my publication: | https://joegarzacreates.medium.com/halloween-costumes-cultural-appropriation-and-the-death-of-fun-9ebbc0cd5d8b | ['Joe Garza'] | 2019-11-05 20:28:50.832000+00:00 | ['Social Media', 'Culture', 'Halloween', 'Society', 'Life'] |
Reconciling Spiritual Oneness and Identity Politics | Reconciling Spiritual Oneness and Identity Politics
It’s about realizing subjectivity.
Photo by Daria Rom on Unsplash
Sometimes I find myself asking, “Why don’t I see more Black women New Age spiritual gurus?”
Looking at the spiritual teachers who sell out weekend workshops and rack up millions of hits on YouTube, I see primarily white men. Sure, there are some white women in the mix, some men of color (whose wisdom somehow always seems to get Orientalized), but looking around the world of capital-S Spirituality, transcendence into divine Oneness seems the purview of the Alan Wattses and Eckhart Tolles of the world. I just don’t see a lot of Black women. This is certainly not to deny the deeply powerful spiritual teachings of countless Black women, simply that the widely-hailed champions of this space I’ll here dub “New-Age-hippie-land” seem to be predominantly white, and slightly less predominantly male.
And I wonder, why is that? Is it the way we frame Blackness and womanhood as a culture that keeps their teachings from finding a broad audience? Certainly, evidence would suggest that both Blackness and womanhood require a visionary teacher to justify themselves far more in the eyes of society.
Is it more that the contemporary New Age spiritual movement is already overwhelmingly white, often financially privileged, and is part-and-parcel with teachings, rituals and practices that are primarily accessible to the white and financially privileged? Yoga retreats to Central America are accessible to the upper-middle class of California, not to most actual residents of Central America. The psychedelic movement can be all good vibes and rainbows when your body hasn’t been criminalized for generations, especially for drug use.
Or is it that this whole “movement” seems to frame Oneness in a way that weaponizes it against the realities of oppression for particular identities? How can we declare “We Are All One” when our distinctions in the eyes of society can literally determine who lives and who dies at a traffic stop?
Two World-views
Walking a spiritual path through the contemporary political landscape is a chronic paradox. Do we create our own realities, or are we acted upon by socio-political-economic systems? Is the culprit “scarcity thinking,” or colonial capitalism? How can we reconcile the perfection and harmony of all existence with the drone bombing of Yemen? Is there room for both universal transcendence and social justice?
I am of course not going to attempt to speak for Black women, all women, or any folks of color. Doing so would not be my place. All I can speak to is the seeming diametric opposition of trying to both awaken, and get woke, while refusing to compromise either.
The truth is, the two are not at all disjointed.
I believe that the common “Spiritual self-help” narrative of “Work on your own growth! Follow your own bliss!” can be toxic and individualizing, and erases the reality of our interdependence and need for solidarity. I believe that the common political narrative of “It’s all systemic! We are the victims of an unjust world!” can be toxic and infantilizing, and erases the reality of our power to create and the need to come into union with our true selves.
It may appear that I am writing about two utterly irreconcilable ways of seeing the world. In a sense, I am. But the remedy to suffering in both world-views is, actually, the same. Socio-political liberation and spiritual realization are the same process.
Realizing Subjectivity
To be awakened is to be fully subjective — in the sense of being the subject of the sentence. In the sentence “I am something,” spiritual transcendence is about realizing yourself as simply the “I,” and no longer the object, the “something.” To be the creator rather than the role, the experiencer rather than that which is experienced.
Oppression is an act of objectification. Systems of oppression make objects out of subjects. They then exploit what they have dubbed objects. From the objectification of Black bodies to the objectification of women, from the objectification of workers to the objectification of land — there is no difference in process between the Ego-mind’s perception of other, and oppression. This is not to say that all Ego-mind perception is oppression, simply that both are the act of objectification.
To oppress another, you must first view the object of your oppression as an object. The language we use more commonly is that we “dehumanize” that which we oppress, attack or exploit. We make our object less-than-human. What we do, first and far more simply, is objectify it — we perceive it as something separate from ourselves, that our individual will, our Ego-mind “I,” can act upon. But that Ego-mind’s “I,” whose particular subjectivity exists only in contrast to an object, is not the true I.
More plainly: “I am not you” is not the same understanding as “I am.”
When we view anything as an object, we view it as existing to serve a particular function. An object can only exist in contrast. Its meaning is always defined by the role we have decided that it fills. Its worth is always judged by how it fills that role. Only when something is an object can it be exploited or punished, attacked or denigrated.
Spiritual awakening is the act of realizing subjectivity. To “realize” means both “to understand as real,” and “to make real.” In a space beyond definition and separation, in the dimension of Oneness, there is no difference between acting within yourself and acting within the world. All is self, and self is all. Spiritually speaking, this is truth —and the perception of difference can only ever be a perception.
Realization, then, is an act of reconciliation: reconciling our own perception with the reality of universal Oneness, in all its harmony, freedom, peace, and love. Realizing is the act of creating, in the lived experience, the truth of universal subjectivity.
Systems of oppression make a false subjectivity for particular categories of beings. Under white supremacy, whiteness has subjectivity. A white person can be fully human, and just be. Under colonialism, the colonizer has subjectivity. Under patriarchy, men have subjectivity. Under capitalism, the wealthy have subjectivity. Under ableism, the able-bodied have subjectivity. What is considered subjective is allowed to simply be, and everyone else must first live as their role.
Realizing subjectivity is the liberation from objectification. We typically think of spiritual work as coming to understand the self not as a defined thing, but as the subject, the “I” beyond all objects. But spiritual work is the liberation from objectification, and is likewise done by making subjectivity more real. We come to know ourselves as subjects, and we make our subjectivity real. They are one and the same act. Seeing ourselves as subjects, whose purpose and worth and identity have nothing to do with role, and seeing others as subjects, are spiritually the same. In Oneness, there is no other.
Awakening is coming to know, in full being-ness, beyond role. From a world of full subjects, whose subjectivity is reflected in treatment and opportunity, in law and culture and economy and politics, can arise the universal experience of subjectivity — y’know, that whole “Oneness” thing that gets talked about in New-Age-hippie-land.
Reclaiming the Sacred
Worker is a role. Boss is a role. Ruler is a role. Landlord is a role. House is a role. Farmland is a role. Pipeline is a role. Teacher is a role. Partner is a role. All roles are objects. In the same breath — within a society with any objectification, woman is a role. Man is a role. Black person is a role. White person is a role. Queer person is a role. Disabled person is a role. These are identities that arise out of contrast, and are defined by being separate from a different identity.
It makes intuitive sense to me that, when one has been forced to defend one’s identity from attack, denigration, oppression, murder, slavery or genocide, one might want to fight for the goodness, freedom, righteousness and sacredness of one’s identity. The talk of transcending identity and realizing ultimate Oneness might ring hollow. It might look like an invalidation of the beauty and sacredness of those identities. It might seem to completely miss the point.
That is because, in this context, it does miss the point — because spiritual awakening is the act of realizing subjectivity. Union (or yoking, or yoga) is the act of uniting what is perception (separateness, objects) with what is reality (Oneness, universal subjectivity). The problem of systemic oppression is that it creates objects. Establishing the goodness, freedom, righteousness and sacredness of an oft-denigrated identity is an act of reclaiming the subjectivity of what has been treated as an object.
Creating a world of more subjects, more full beings allowed to live as full beings — with the freedom to express and to be, to love and receive love, to interact as complete selves containing multitudes and infinities, with others who are complete selves containing multitudes and infinities — this is spiritual work.
Reclaiming sexuality from the clutches of being a sex object is spiritual. Reclaiming meaningful, self-determined work from the clutches of exploited labor is spiritual. Fighting for the sanctity of Black lives is spiritual. Defending land and water from extraction and desecration is spiritual. Look at that word — desecration. Etymologically, it comes from “de-consecrate,” or, to undo the process of a thing being sacred.
Heaven on Earth = Utopia
A world beyond the Ego mind’s judgment — that space of Heaven on Earth — is a world of true equality and absolute sacredness. We reach that world not from painting all with the same brush, but from everything shining fully. The universe is sacred when everything in it is sacred. Union is reached when our experience of the universe aligns with the truth of its sacredness. To enshrine the sacred — from land to womanhood to Black life — is to move towards Heaven on Earth.
We do not reach that world by declaring that we have transcended all distinctions. We transcend all distinctions by reaching that world.
When we embody Oneness, we live in a world that is sacred. Everything is a subject, and life emerges out of deep relationships between subjects that are themselves One. When we live in a world that is sacred, we embody Oneness.
Just as (many of us) seem to grasp that under this system, all lives will matter when Black lives matter, so too must we understand on a universal level: elevating and consecrating the identities we have desecrated is the act of transcending identity. To realize Utopia is to realize Heaven on Earth.
To answer my own question, every fist raised in defense of the sacredness of Black life is a spiritual teacher. It does not matter to the universe whether or not the word “universe” is uttered. Liberation is liberation, from the yoga mat to the Capitol Hill Autonomous Zone. In wrenching back sacredness from the jaws of oppression, declaring subjectivity in the face of objectification, we do sacred work. We transcend. | https://medium.com/dogs-with-buddha-nature/reconciling-spiritual-oneness-and-identity-politics-d7a0d2d03357 | ['Anna Ronan'] | 2020-09-10 05:32:12.984000+00:00 | ['Universe', 'Spirituality', 'Identity Politics', 'Society', 'Politics']
POMprompt # 19 | POMprompt
POMprompt # 19
Things that go ‘bump’ in the night
Y’all knew this one was coming. It’s Fall, October, the owls calling out in the night. That freaky Netflix show, The Haunting of Bly Manor. Halloween on the horizon of a hilltop moonlit night. It’s all eerie. Eerie, indeed.
It’s time to embrace the creepy, the freaky, the weird things that go bump in the night…this one’s about fear.
When I was a child, lots of things frightened me. Being an anxious child who couldn’t sleep — even worse. I was wracked with nightmares, sleepwalking and sleep staring (whatever that is) and all of the nervous energy between. There were creepy things beneath my bed. Behind the closet door. Tapping on the window. I was afraid of a lot as a child, even going to school. The grocery store, parking lots, white vans, men with red hair — yeah, the list was long and I had a wild imagination.
There. That place beneath the bed in the dead of night — that’s where I want you to go. Feel that fear a bit. Whatever frightened you terribly as a child, or even now, is great material for poetic expression.
USE IT.
And write your poem. Use language that sets the mood. Let us feel the fear and mystery with you. Scare us too.
This time you won’t be alone.
Your poetry prompt for POMprompt #19 is Things that go ‘bump’ in the night. Now, let’s see those super-creepy, scary, and mysterious poems!
Happy Halloween, Y’all!
Poetically yours,
Christina M. Ward, EIC of The POM | https://medium.com/the-pom/pomprompt-19-78709b95129 | ['Christina M. Ward'] | 2020-10-20 00:05:35.120000+00:00 | ['Poetry', 'Creativity', 'Pomprompt', 'This Happened To Me', 'Prompt'] |
Sustainable Agriculture: A Key Link in Climate Action Policy | In the United States and Europe, there is a growing consensus that farmers play an integral role in climate change solutions. With one-third of global greenhouse gas emissions coming from agriculture, climate goals cannot be achieved without shifting food systems to a sustainable path. That is why the EU’s landmark Green Deal includes the Farm to Fork Strategy, a comprehensive approach to achieving sustainable food systems that will reduce the environmental and climate footprint of the EU food system.
Ahead of Climate Week NYC, the European Union, in partnership with The Nature Conservancy, convened a panel of experts and policy makers on September 15 to discuss sustainable agriculture on both sides of the Atlantic. Entitled “Bringing Farmers to the Table: Agriculture, Climate, Economy and Equity,” the online event involved more than a hundred U.S. stakeholders and opened with remarks from European Commissioner for Health and Food Safety Stella Kyriakides.
“We want Europe to become the world’s first climate neutral continent by 2050 built on a fair and equal society, and green economic growth,” Commissioner Kyriakides said in her opening remarks. “We want to bring our trading partners with us in this transition.”
Recognizing the clear dangers posed by climate change and the key role the agricultural community can play in climate solutions set the stage for a robust discussion focused on three themes: the benefit of precision farming, the potential of regenerative farming practices, and the need for equitable risk sharing.
“Farmers are front-line workers,” said Greg Viers, a life-long grain farmer from Central Iowa. “Farmers see the effects of climate change sooner and to a larger effect than other people in general,” making them a natural constituency for climate action and sustainable practices. Viers also serves as a durum wheat procurement manager for Barilla America, the world’s largest pasta manufacturer.
Viers stressed that precision farming, guided by innovative technologies and better data, allows farmers to farm better and use only the amounts of fertilizer and seed they need. This practical approach to conservation is what Rep. John Curtis (R-Utah) described as a “win-win climate solution” that’s good for the earth and financially for the farmer.
Modifying certain farming practices can not only save resources, it can benefit the soil and the earth’s climate as a whole. Dr. Yichao Rui, a soil scientist at the Rodale Institute, outlined the important role that regenerative agriculture can play in mitigating climate change. By implementing regenerative farming practices, such as having diverse crop rotations, eliminating synthetic fertilizers and pesticides, using organic soil matter for soil fertility, and using cover crops in the winter, farmers can sequester carbon in the soil and reduce greenhouse gas emissions. “To promote carbon sequestration, we should promote microbial biomass production and protection. We can achieve that through regenerative practices,” he said.
Today, 1–5% of agricultural land around the world is being worked in a regenerative manner. Recognizing the benefits of regenerative farming, the EU’s Farm to Fork Strategy lays out concrete targets to be met by 2030, including reducing the use of pesticides by 50%, reducing nutrient losses by at least 50%, and reducing the use of fertilizers by 20%. The EU also aims to reduce sales of antimicrobials in farming and agriculture by 50% and increase the amount of agricultural land under organic farming in the EU to 25%.
Making food systems more sustainable cannot be left to farmers and farm-level solutions. Dr. Christine Negra, Senior Advisor for Climate and Agriculture at the United Nations Foundation, said governments need to work with private sector partners to ensure that innovation is reaching all farmers, and that they have equitable access to these cutting-edge tools. She also argued that the risks for adopting new technologies cannot rest only with the farmer, but that collaboration is required between “public sector, private sector, and farmers to mitigate risk wherever you can, and where they cannot be mitigated, to share the risk as best you can.”
Rep. Curtis agreed, saying “one of the largest impediments to [farmers] being good stewards of the land are financial.” That is why he supports the bipartisan Growing Climate Solutions Act. Rep. Chellie Pingree (D-Maine) also supports the bipartisan legislation, which, she said, “would lay the groundwork and reduce the barriers to entry for farmers who want to diversify their agricultural income and earn money for implementing climate friendly practices.”
Looking at the world around us, it is becoming clear that the threat of climate change is ever-present. As the EU’s Deputy Head of Delegation Michael Curtis said, “The planet is changing in front of our eyes. For some, climate change is not a distant threat, but a shocking reality today.” That is why the EU is approaching the issue of climate change holistically from all fields, including agriculture. Forty percent of the EU agriculture budget will contribute to climate action, totaling $24 billion a year.
With this strong commitment, and in collaboration with international partners, the EU is ready, Commissioner Kyriakides said, “to lead this transformation, for people, the planet, and our long-term prosperity.” | https://medium.com/euintheus/sustainable-agriculture-a-key-link-in-climate-action-policy-1fe8c3d3a3f9 | ['Matthew Stefanski'] | 2020-09-25 15:45:00.028000+00:00 | ['Sustainability', 'Sustainable Agriculture', 'European Union', 'Climate Change', 'Agriculture'] |
Types & Scales of Data in Descriptive Statistics | Descriptive statistics help you understand the data, but before we look at what data is, we should know the different data types used in descriptive statistical analysis. The diagram below gives an overview of them.
Types of Data
A data set is a grouping of information that is related to each other. A data set can be either qualitative or quantitative. A qualitative data set consists of words that can be observed, not measured. A quantitative data set consists of numbers that can be directly measured. Months in a year would be an example of qualitative, while the weight of persons would be an example of quantitative data.
Now, let’s suppose you go to KFC with your friends to eat some burgers. You place the order at the coupon counter, and after collecting it from the food counter everyone eats what you ordered on their behalf. If someone asks the others about the taste, the ratings will vary from one person to another, but if asked how many burgers you ordered, everyone will arrive at the same definite count. Here, the taste ratings represent Categorical Data and the number of burgers is Numerical Data.
Types of Categorical Data:
Nominal Data: When there is no natural order between categories, the data is of nominal type.
Example: Color of an eye, Gender (Male & Female), Blood Type, Political Party, Zip code, Type of living accommodation (House, Apartment, Trailer, Other), Religious preference (Hindu, Buddhist, Muslim, Jewish, Christian, Other), etc.
Ordinal Data: When there is a natural order between categories, the data is of ordinal type. Here, however, the size of the difference between the ordered values does not matter.
Example: Exam Grades, Socio-economic status (poor, middle class, rich), Education-level (kindergarten, primary, secondary, higher secondary, graduation), satisfaction rating(extremely dislike, dislike, neutral, like, extremely like), Time of Day(dawn, morning, noon, afternoon, evening, night), Level of Agreement(yes, maybe, no), The Likert Scale(strongly disagree, disagree, neutral, agree, strongly agree), etc.
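The nominal/ordinal distinction can be made concrete in a few lines of code. Here is a minimal sketch in plain Python; the Likert levels come from the examples above, while the `rank` mapping and the `responses` list are my own illustration:

```python
# Ordinal data: the Likert scale has a natural order, so we can map
# each level to a rank and sort responses meaningfully.
likert = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]
rank = {level: i for i, level in enumerate(likert)}

responses = ["agree", "neutral", "strongly disagree", "agree"]
print(sorted(responses, key=rank.get))
# → ['strongly disagree', 'neutral', 'agree', 'agree']

# For nominal data (e.g. eye color or blood type), no such rank
# mapping exists: any ordering we impose is arbitrary.
```

Sorting by rank is meaningful precisely because ordinal categories carry order; trying the same trick on nominal categories would only reflect whatever arbitrary order we invented.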
Types of Numerical Data:
Discrete Data: The data is said to be discrete if the measurements are integers. It represents counts, or items that can be counted.
Example: Number of people in a family, the number of kids in class, the number of cricket players in a team, the number of cricket playing nations in the world.
Discrete data is a special kind of data because each value is separate and distinct. For any data, ask the two questions below; if it can be counted but cannot be divided into smaller and smaller parts, it is discrete.
1. Can you count it?
2. Can it be divided into smaller and smaller parts?
Continuous Data: The data is said to be continuous if the measurements can take any value, usually within some range. It is a scale of measurement that can consist of numbers other than whole numbers, like decimals and fractions.
Example: height, weight, length, temperature
Continuous data usually require a tool, like a ruler, measuring tape, scale, or thermometer, to produce the values in a continuous data set.
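The two-question test above (can you count it? can it be divided further?) can be sketched as a tiny classifier. This is a simplification of my own, not from the article: it treats integer-valued measurements as discrete and anything with fractional values as continuous, which is only a heuristic, since continuous data can happen to be whole numbers.

```python
def classify(values):
    """Return ('numerical', 'discrete' | 'continuous') or ('categorical', None)."""
    # Categorical: anything that isn't a plain number (bools excluded).
    if not all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in values):
        return ("categorical", None)
    # Discrete if every measurement is a whole number, continuous otherwise.
    if all(float(v).is_integer() for v in values):
        return ("numerical", "discrete")
    return ("numerical", "continuous")

print(classify([2, 3, 1, 2]))                # burgers ordered per person
print(classify([1.72, 1.65, 1.80]))          # heights in metres
print(classify(["poor", "middle", "rich"]))  # socio-economic labels
```

The burger counts from the KFC example come out as numerical/discrete, the heights as numerical/continuous, and the labels as categorical, matching the taxonomy above.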
Scales of Measurement:
Data can be classified as being on one of four scales: nominal, ordinal, interval or ratio. Each level of measurement has some important properties that are useful to know.
Nominal Scale: Nominal data, defined above, can be placed into this category. Nominal values don’t have numeric meaning, so they can neither be added, subtracted, divided nor multiplied. They also have no order; if they appear to have an order, then you probably have ordinal variables instead.
Ordinal Scale: Ordinal data, defined above, can be placed into this category. The ordinal scale contains things that you can place in order: for example, hottest to coldest, lightest to heaviest, richest to poorest. So, if you can rank data by 1st, 2nd, 3rd place (and so on), then you have data that is on an ordinal scale.
Interval Scale: An interval scale has ordered numbers with meaningful divisions. Temperature is on the interval scale: a difference of 10 degrees between 90 and 100 means the same as 10 degrees between 150 and 160. Compare that to an Olympic running race (which is ordinal), where the time difference between the winner and runner-up might be 0.01 second and between second-last and last 0.5 seconds. If you have meaningful divisions, you have something on the interval scale.
Ratio Scale: The ratio scale has all the properties of the interval scale with one major difference: zero is meaningful. When the scale reads 0.0, there is none of that quantity. For example, a height of zero is meaningful (it means you don’t exist), and 0.0 Kelvin really does mean “no heat”. Compare that to a temperature of zero degrees Celsius, which, while it exists, doesn’t mean anything in particular (although admittedly, it’s the freezing point of water).
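The interval-versus-ratio difference shows up as soon as you try to take ratios. A small sketch (the temperature values are my own illustration): 20 °C is not “twice as hot” as 10 °C, but the ratio of the same temperatures on the Kelvin scale, where zero is meaningful, is physically interpretable:

```python
def c_to_k(celsius):
    # Kelvin = Celsius + 273.15; Kelvin has a true zero ("no heat").
    return celsius + 273.15

print(20 / 10)                  # 2.0 — a misleading ratio on an interval scale
print(c_to_k(20) / c_to_k(10))  # ≈ 1.035 — the meaningful ratio on a ratio scale
```

This is why statements like “twice as much” only make sense on a ratio scale: they depend on the scale having a non-arbitrary zero.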
Scales of Measurement | https://medium.com/analytics-vidhya/types-scales-of-data-in-descriptive-statistics-d3aa439c0b1e | ['Manoj Singh'] | 2020-01-29 08:18:06.772000+00:00 | ['Machine Learning', 'Data Science', 'AI', 'Descriptive Statistics', 'Statistics'] |
Fortune 500 Companies on Social Media in 2016 — Analysis | Fortune Magazine annually compiles a list of America’s largest corporations, aptly named the “Fortune 500” given their size and wealth.
In 2008, the University of Massachusetts Dartmouth Center for Marketing Research released one of the first studies on social media adoption among the F500 and has repeated that study every year since. Here are the highlights from the 2016 edition:
‘Infogram Insights’ offers a deeper look at relevant, newsworthy topics — visualized with Infogram. Every week we explore the data that is forever changing our world.
Follow Infogram on Medium and sign up for our weekly newsletter! | https://medium.com/infographics/f500-companies-on-social-media-in-2016-fbcc56d3919d | [] | 2016-12-30 10:01:56.090000+00:00 | ['Social Media', 'Marketing', 'Infographics', 'Stock Market', 'Data Visualization'] |
It’s Time to Urbanize Technology | London by night, Source : NASA
An AI world tour
One year ago, I went on an artificial intelligence (AI) world tour. This journey aimed to answer one question: can AI help us build sustainable cities? In other words, inclusive and dynamic cities respectful of the environment. To answer this question, I explored 12 cities and met 130 entrepreneurs, scientists, and experts around the world [1].
Some people I’ve been lucky enough to meet during my world tour. From left to right starting from the bottom : Alia Al Mur, Yoshua Bengio, Luc Julia, Joy Bonaguro, Alex Pentland, Nigel Jacob, Monique Savoie and Yutaka Matsuo
During this project, I gradually understood the huge potential and the risks of AI for our cities. But as my encounters and explorations progressed, unexpected questions came to me: What if the way we dwell in cities could show us how to build better social media? What if our built environments could reveal how to better design technologies? What if urban wanderings could tell us how to explore the web?
Indeed, it’s remarkable to observe the resilience of cities in the face of the political, social and technological disruptions that have taken place throughout our history. Cities have not only resisted this perpetual change, they made it possible and liveable. They succeed in what Hannah Arendt called one of the main challenges of humanity: creating a common world for different generations [2]. In a nutshell, cities have supported and empowered our civilizations for centuries. This success, however imperfect it may be, should help us better design the digital world and new technologies.
Social contracts for digital spaces
First observation from those who look at a city is that it’s a space where people live peacefully together. Citizens cohabitate, collaborate and share infrastructures daily. Thus, millions of singularities interact harmoniously in a remarkable way.
Now let’s imagine a new kind of city. A city where people would insult and injure you at any time, where some group of men could steal from you or force you to work for free just because they are stronger, where some people could enter your “home” and steal anything they want, as there are no property laws. It would be a “war of all against all”[3] ruled only by the law of the strongest. Actually, such a city would not even exist, as men and women couldn’t gather.
Frontispiece of Thomas Hobbes’ Leviathan by Abraham Bosse
As surprising as this description looks, this is what’s currently happening in our digital spaces. Hate speech is increasing on social media, we have no ownership of our data and we do not even have access to our digital footprint (which, however, generates a huge economic value for others). It looks very much like what Thomas Hobbes called a “state of nature”.
What Hobbes taught us is that the first step out of the state of nature and towards a society is the creation of a social contract. To trust each other, people must agree around a common political project. For Hobbes, this political project must lead to a monarchy, but for other philosophers, such as Jean-Jacques Rousseau, social contracts can establish a democracy. What will be of interest here is not so much the political outcome as the deliberative process by which a people agree on a mode of governance. This agreement begets laws and customs which are the cement of the social fabric. Only then can people become citizens who will construct what Richard Sennett calls, using a French word, a “cité”[4]. A cité doesn’t represent a city’s built environment (this is what Sennett calls the “ville”) but its uses and people. In other words, a cité is a common space shaped by people who share interests and values. We could say that a cité is the materialization of a social contract.
So here’s our first urban lesson: before building a city we must create a cité. And before creating a cité we must agree on a social contract.
Ancient Agora in Athens, often used to illustrate the concept of cité. Source : greeka.com
We can easily see that very few digital spaces follow this principle. Most of the platforms and social media we're using follow rules that have been decided unilaterally. You can accept them or leave the digital place. But that's not how a social contract, or even a cité, works.
A cité is built from "an accumulation of small interventions that contribute to a feeling of 'this is also my city'" (Saskia Sassen). If we transpose this logic to digital spaces, we find open-source software, contributory platforms (such as Wikipedia), and decentralized social media (such as Mastodon), which are the closest current examples of a kind of social contract around technology. In these places, where the hacker ethic prevails [5], contributors can say "this is also my digital".
Materializing data through interfaces
In Building and Dwelling: Ethics for the City, Richard Sennett shows that cities work as long as the "ville" echoes the "cité" and vice versa. In other words, when our built environment embodies our history and embraces our customs. This idea has also been beautifully expressed by Saskia Sassen when she describes "the capacity of the material to make itself visible".
We could say that a material is visible as long as it tells us a kind of truth, no matter its nature. It can be historical, personal, or even philosophical. For example, it happens when you see a place that expresses a piece of your country's history, a bench that reminds you of your first kiss with your partner, or a statue that illustrates the human condition. This urban dialogue between the city and its inhabitants makes the city readable and, most of all, liveable.
The Weight of Oneself (Elmgreen & Dragset), in Lyon (France). Source : Onlylyon
Technologies such as sensors and algorithms function in the opposite way. They are all invisible. In order to create a frictionless and user-friendly experience, technologies have been designed to be imperceptible. While in a city the built environment is telling you something, with invisible sensors and algorithms you're the one who is unintentionally telling a personal truth (your political preferences, sexual orientation, location, etc.). This dehumanising relationship between digital infrastructures and citizens has something of Valdrada, the city imagined by Italo Calvino in his Invisible Cities:
“Thus, the traveller, arriving, sees two cities: one erect above the lake, and the other reflected, upside down. Nothing exists or happens in the one Valdrada that the other Valdrada does not repeat, because the city was so constructed that its every point would be reflected in its mirror, and the Valdrada down in the water contains not only all the flutings and juttings of the façades that rise above the lake, but also the rooms’ interiors, with ceilings and floors, the perspective of the halls, the mirrors of the wardrobes. Valdrada’s inhabitants know that each of their actions is, at once, that action and its mirror-image, which possesses the special dignity of images, and this awareness prevents them from succumbing for a single moment to chance and forgetfulness”
Similar to Valdrada, our behaviors and personalities are reflected in a lake — a data lake. With the significant difference being that we have no access to our reflection.
Following the urban dialogue principle, sensors and algorithms should at least be visible. Instead of creating frictionless technologies, we should make interactive ones which empower our singularity. For example, by giving us access to some of the knowledge that algorithms gain about us by aggregating our data. This means that we're not only asking to see what's behind the algorithm (or the sensor); we also want to make it readable.
This principle emphasizes the necessity of working on interactive and visible interfaces between human beings and technologies. Interfaces which materialize data and make them perceptible.
From wandering to deviating algorithms
When you succeed in harmoniously merging the cité and the built environment, you have a city. Such a city is characterized by its ability to produce liberty, at least at the individual level.
Streets, squares and public spaces in general are spaces of encounters and exploration. Encounters with other individuals and explorations of new cultures. Whether through street performances, shop windows, monuments, or even just a face [6]… the city dweller is constantly tempted to deviate, in the literal sense of diverting from a trajectory. And, much like the Epicurean clinamen [7], this deviation gives rise to freedom. The freedom to amble and roam new imaginary worlds. The freedom to undertake and break free from social determinism. The freedom to interact with unexpected people.
This is why cities are lands of opportunities. They give you the possibility to evolve, to become who you are.
Filter Bubble as described by Eli Pariser in his 2011 TED Talk
Social media and platforms function in a radically opposed way. The more you use them, the more you're locked in an echo chamber (or filter bubble). This is not only dangerous because it polarizes our societies; it's also alarming because you can no longer explore new possibilities and ways of thinking. Deviating behaviours, which make the evolution of societies and species possible, are standardised away by optimised algorithms.
It's interesting to notice that these over-optimised algorithms are also making wandering impossible in cities. Mesmerised by their smartphones, individuals turn into "smombies"[8], impervious to their environment. This phenomenon is so significant and dangerous that some cities, such as Seoul, have even been forced to build lighting infrastructure to encourage "smombies" to bring their attention back to the street.
Warning sign to inform of the presence of “smombies”. Source : CHRISTOPH SCHMIDT / DPA / AFP
Following the wandering principle, we should create deviating algorithms, meaning algorithms that would show you content that contradicts who you are and what you believe. Not always, but just enough to give you the possibility to explore something other than yourself. This principle can be generalized to almost all industries which use recommendations to show you content and products or to suggest connections. We would probably spend less time on such platforms, but it would enable us to become better human beings and more empathetic societies [9].
Urbanizing technology
I've tried to synthesize how some urban properties could help us make liveable digital spaces and humanist technologies. We could call this process of exporting urban principles to the field of digital and new technologies the "urbanization of technologies".
While social contracts, urban dialogue, and wandering can lead to urbanized technologies[10], many other principles are still to be discovered: Eli Pariser recently wrote an article about online parks, the French think tank hérétique raised awareness about the dangers of our smartphones by representing them as a city called "Algoville", and architect Pierre Bernard imagined how Baron Haussmann could inspire engineers and designers. Initiatives are growing and we are just at the beginning of this tech urbanization.
“Algoville” by hérétique
Obviously, this urbanization process has limitations, as the digital world has its own specificities. This is why it's not about building "digital cities" (which would be nonsense), but about taking inspiration from the history of cities and urban planning to make our technologies liveable.
This urbanization is more necessary than ever, as digital technology is threatening our social fabric while new technologies are making our cities uninhabitable. People and territories are increasingly morphing into data to serve algorithms. It's time to reverse this paradigm. The challenge is huge, but we have hundreds of generations with us to succeed.
[1] : The insights and discoveries of this project are freely available in URBAN AI Report.
[2] : An idea mainly developed in Between Past and Future
[3] : Leviathan, T.Hobbes
[4] : Building and Dwelling: Ethics for the City, R.Sennett. Using the French language, Sennett distinguishes two urban realities : cité (people and their customs) and ville (the built environment).
[5] : The Hacker Ethic and the Spirit of the Information Age, Pekka Himanen
[6] : The theme of the face as an invitation to stroll in the city has been developed by Louis Aragon in Aurélien
[7] : In Epicurean physics, clinamen refers to the deviation of atoms from their vertical fall into the vacuum. This clinamen breaks “the laws of fatality” (Lucretius), generates the meeting of bodies and gives birth to freedom
[8] : Portmanteau of smartphone and zombie, referring to city dwellers who are constantly looking at their phones
[9] : Alex “Sandy” Pentland also showed that algorithms that enable exploration improve economic efficiency in Social Physics and in Beyond the Echo Chamber
[10] : The concept of “urbanized technologies” comes from Saskia Sassen, who developed this idea in her contribution to the URBAN AI Report: Urbanized Technology | https://medium.com/swlh/its-time-to-urbanize-technology-141fa5580574 | ['Hubert Beroche'] | 2020-12-17 18:59:36.844000+00:00 | ['Innovation', 'Digital', 'Technology', 'Urban Planning', 'AI'] |
Python Web Crawler for Beginners: Parse Data from the Static Website | Introduction
A web crawler is an efficient way to get data when we don't have REST APIs or libraries to retrieve it.
What we most often want to do with web crawlers is retrieve data in real time. A web crawler program usually sends a request to the target website, such as a flight company's site, an e-commerce website, or a gallery of products. It then parses the response from the website and extracts the information we expect.
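As a sketch of that request-and-parse loop, here is a minimal crawler using only the Python standard library. The URL, the `product-title` class name, and the page structure are all hypothetical; a real crawler has to target the IDs and classes of the actual site.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class ProductTitleParser(HTMLParser):
    """Collect the text of elements carrying the class 'product-title'.

    The class name is a made-up example; a real crawler must target
    whatever IDs/classes the site actually uses. This simple version
    does not handle tags nested inside a title element.
    """

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if "product-title" in classes:
            self._in_title = True

    def handle_endtag(self, tag):
        self._in_title = False

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.titles.append(data.strip())


def parse_product_titles(html):
    """Parsing is kept separate from fetching so it can be tested offline."""
    parser = ProductTitleParser()
    parser.feed(html)
    return parser.titles


def crawl(url):
    """Send the request to the target site, then parse the response."""
    with urlopen(url, timeout=10) as response:  # the network step
        return parse_product_titles(response.read().decode("utf-8"))
```

Calling `crawl("https://example.com/deals")` would fetch and parse in one step; the URL is of course a placeholder.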
We can present the data in different ways, such as web pages, APIs, or an executable file. Here are some cases where I used web crawling to solve a problem.
Case 1. Retrieve the most popular products from Yahoo
One of my projects was a crawler that retrieved popular products and their discount rates from Yahoo in 2017.
The web crawler may stop working once engineers modify or refactor the target website, because the way web crawlers retrieve data depends on specific IDs or class names. I think it may not work now.
Case 2. Retrieve data from Medium
On Daily Learning and my web resume, I implemented a web crawler to retrieve articles from my Medium publication and show the data on the websites. The advantage of doing that is that we don't have to save the data in a database or maintain the articles in different sources. | https://medium.com/a-layman/python-web-crawler-for-beginners-parse-data-from-the-static-website-2955b37e1ae6 | ['Sean Hs'] | 2020-11-08 09:51:18.218000+00:00 | ['Project Management', 'Médium', 'Python', 'Software Development', 'Web Crawler'] |
How to Integrate Cognito with Social Sign-in | by Ryan Bui, Javascript Developer
If you want to manually integrate social sign-in using Cognito for your app, you've come to the right place! I couldn't integrate social sign-in with what Cognito offers out of the box. This was because the front-end couldn't easily integrate with Amplify or Cognito user pools due to its use of Flutter. Flutter is currently too young to integrate with those services.
I also didn't want to create an identity pool to grant temporary AWS credentials to the front-end. To solve this problem, we enabled an /auth endpoint that signs users up and logs them in to Cognito programmatically. The users in turn get auth tokens.
The users must send their own social tokens from Google, Facebook, or Apple. The validation of those tokens is done on the back-end.
Signup
What we need is to use the "CognitoIdentityServiceProvider" client and provide the necessary credentials and the attributes that we want to assign to the user (email, name, etc.). Those attributes have to be marked as required when the pool is created (example below).
In return, we get the userID of the user we just created.
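The article's snippets use the Node `CognitoIdentityServiceProvider` client; purely as an illustration, here is an equivalent sketch in Python with boto3. The client ID, attribute names, and helper names are assumptions for the example (and if your app client has a client secret, a `SecretHash` parameter is also required).

```python
def build_user_attributes(attrs):
    """Convert {"email": "a@b.com"} into the Name/Value list that
    Cognito's sign_up call expects for UserAttributes."""
    return [{"Name": name, "Value": value} for name, value in attrs.items()]


def sign_up_user(client_id, username, password, attrs):
    """Programmatic signup; returns the new user's UserSub (user ID).

    client_id is the app-client ID of your user pool (placeholder here).
    """
    import boto3  # imported lazily so the pure helper above stays testable offline

    client = boto3.client("cognito-idp")
    response = client.sign_up(
        ClientId=client_id,
        Username=username,
        Password=password,
        UserAttributes=build_user_attributes(attrs),
    )
    return response["UserSub"]
```

For example, `sign_up_user("MY_CLIENT_ID", "jane", "S3cret!pass", {"email": "jane@example.com"})` would return the new user's ID on success.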
Login
Upon login, we need to provide the username and password. If you want, you can configure the pool to log in using the email only. When successful, we get the idToken that will be used later when authorizing user calls to our API.
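Sketching the same flow in Python with boto3 (the article itself uses the Node SDK): `initiate_auth` returns the tokens, and a small helper lets you peek at the ID token's claims while debugging. The function and variable names are illustrative, and the payload decoding deliberately skips signature verification, which must be left to the API Gateway authorizer.

```python
import base64
import json


def decode_jwt_claims(token):
    """Read the payload of a JWT WITHOUT verifying its signature.

    Useful for local debugging only; real validation belongs in the
    API Gateway authorizer.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def login(client_id, username, password):
    """Password login against Cognito; returns the IdToken used to
    authorize later API calls. USER_PASSWORD_AUTH must be enabled on
    the app client."""
    import boto3  # imported lazily so decode_jwt_claims stays testable offline

    client = boto3.client("cognito-idp")
    response = client.initiate_auth(
        ClientId=client_id,
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": username, "PASSWORD": password},
    )
    return response["AuthenticationResult"]["IdToken"]
```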
Authorizing incoming requests
There are two ways to configure authorizers. If we do it manually, we have to open the AWS console, head to the API Gateway service, and create an authorizer in our API.
Next, give a name to the authorizer and select the Cognito user pool you want. Then select a method where you want to use the authorizer and click on 'Method Request' to link this method to the authorizer.
First, under Authorization, select the authorizer. For OAuth scopes, select "NONE" to expect ID tokens. If you want to use access tokens with scopes, you can follow the AWS documentation.
So that’s all you need to do for manual configuration! If you want to automate it with serverless, here’s how you can do it:
Here we just need to provide the ARN of the user pool. We don't want to cache tokens, which is why we use resultTtlInSeconds: 0. As the identity source, put the path to the authorization header.
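As an illustration, a sketch of what that serverless.yml fragment could look like; the function name, handler path, and the account/pool IDs inside the ARN are placeholders:

```yaml
functions:
  getProfile:
    handler: src/profile.handler
    events:
      - http:
          path: profile
          method: get
          authorizer:
            # ARN of the Cognito user pool (account and pool IDs are placeholders)
            arn: arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_XXXXXXXXX
            # don't cache tokens
            resultTtlInSeconds: 0
            # path to the authorization header
            identitySource: method.request.header.Authorization
```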
Now the consumer of this API just needs to pass the Cognito ID token in the authorization header.
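To sketch the consumer side in Python (the URL and token values are placeholders, and depending on how the authorizer's identity source is configured, some setups expect a `Bearer ` prefix on the header value):

```python
from urllib.request import Request, urlopen


def auth_headers(id_token):
    """Build the header the authorizer inspects
    (method.request.header.Authorization)."""
    return {"Authorization": id_token}


def call_protected_api(url, id_token):
    """GET a protected endpoint with the Cognito ID token attached."""
    request = Request(url, headers=auth_headers(id_token))
    with urlopen(request, timeout=10) as response:
        return response.read().decode("utf-8")
```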
A quick recap…
Cognito doesn't integrate well with Flutter, so we used the AWS SDK to sign up and log in. By doing so we can generate tokens that we pass back to the client. Those tokens are validated by authorizers in the API Gateway.
Special thanks to Serguey Arellano Martínez for co-authoring this article. Check out his tutorials on:
How to Configure AWS on Route 53 & How to Proxy an S3 Static Website | https://medium.com/tribalscale/how-to-integrate-cognito-with-social-sign-in-c2bcd73ba0cc | ['Ryan Bui'] | 2020-12-16 17:09:03.902000+00:00 | ['Cognito', 'Tutorial', 'AWS', 'Flutter', 'Front End Development'] |
Where To Start as Web Developer? | For a beginner this is HUGE, now to the fun stuff. What you need to do next is start learning a programming language, there are a LOT of them out there but I strongly suggest JavaScript and that’s because this language specifically is the backbone of the web dev domains, learning it necessary because in a way or other you have to use it in your process. How to learn it? Simple just search for it on YouTube and don’t forget don’t learn too much. JavaScript is a High-Level language which means you can, it is close to English and you won’t find any problem understanding it. After that you have successfully become a front-end web dev. Make sure you work on personal projects to maintain your knowledge and to develop yourself.
Now a whole world is open to you. What I suggest is to start learning about frameworks and libraries, which are pre-written code files that will make your life a lot easier. Start with jQuery; it's a JavaScript library and you will love it, but don't get too used to it.
Next, and ONLY IF YOU FEEL OK WORKING WITH ALL I HAVE MENTIONED ABOVE, you can step it up and learn a framework such as Vue, Angular, React, Ember, and so on. I can't really tell you which one to learn; you will need to decide by yourself (just google them and see what you like).
At this point you’re done so big up and let us start the REAL fun :) | https://medium.com/codinoon/where-to-start-as-web-developer-62296563a50b | [] | 2020-11-27 12:08:45.056000+00:00 | ['JavaScript', 'Python', 'Front End Development', 'Web Development', 'CSS'] |
What is the Best Type of Singing Lessons for You? | What is the Best Type of Singing Lessons for You?
Music school, a choir, a private tutor, or video lessons? Let's take a look at the alternatives and find the most suitable option for your needs!
Photo by Andrea Piacquadio from Pexels
So, you decided to start singing lessons. Maybe you want to sing along with your instrument, want to improve your public speaking skills or you are just pursuing a new hobby.
How are you planning to learn? Singing lessons can be expensive and require considerable weekly time, so you want to make sure you find the best place and method. Let's take a look at the options you have and compare the pros and cons of each.
Disclaimer: I do not have any affiliation with any of the entities mentioned as examples in this post.
Music schools
Photo by John Matychuk on Unsplash
Every music school is different. In big cities, it is easy to find schools specialized in one specific genre, e.g. a jazz school or a rock school, while other schools teach music in general. Some academies specialize in singing, while others provide classes for different instruments.
Every school has its own teaching methods. It is common to have some sort of evaluation to acquire a certification. Good institutions invest in qualified teaching staff, appropriate equipment, facilities, and resources to provide successful education programs.
Advantages:
You become part of a community with similar-minded people, where you receive personal mentoring and can easily keep updated about music projects and gigs inside this community.
Usually, the curriculum includes different courses like singing, music theory, and combo, so you can acquire a complete education.
The certifications may be useful if you are willing to teach music in the future.
Disadvantages:
Fixed schedules for classes, with periodic evaluations, which may be hard to accommodate if you do not have an organized schedule. However, some schools have flexible schedules as well.
Requires financial investment.
How to find the most suitable school for your needs?
Look up schools around you and try to get some information in person or on their website.
If the school organizes concerts, attend one to determine if you see yourself doing that type of work. Another good idea is to connect with some students to clarify any questions you have about the school methods. Some schools even organize open classes, where you can assist one class and chat with the teachers and students by the end.
Once you identify a potential school for you, there is nothing better than trying for yourself! Many schools offer trial classes, so you can schedule one and see how it works for you.
Choir
Choirs are another option to learn how to sing. There are different types of choirs, organized by age, by gender, or by genre. For example, I have been part of three choirs in the past — a child choir, a religious choir, and an adult choir focused on classical music — and the repertoire was very different in all of them.
I've heard a lot of misconceptions about what it is like to be part of a choir, for example, that you don't have to be a skilled singer to be part of one and that you only sing boring music in churches. I can assure you this is not true. First, choir rehearsals include serious singing training. Second, choir repertoires vary very much. During my choir experience, I have sung religious music, worldwide music, and even Queen songs.
Lisbon Youth Choir, photo by Teatro Nacional de São Carlos
Advantages:
Just like the singing school, the choirs are a community where you get singer friends with different music projects and life experiences to share.
Less pressure to learn, since your individual voice is not in the spotlight.
It is easy to find choirs with affordable fees (or even for free).
Disadvantages:
Lack of personal coaching.
Fixed schedule for rehearsals.
How to find the most suitable choir for me?
You can apply the same process I suggested for finding a music school: search for choirs around you, attend one of their performances (or look them up online), connect with some members to clarify any doubts, and ask to attend one rehearsal.
Personal Coach
Singing is a complex skill. It involves a lot of body knowledge and training, just like a sport, but also requires poetry interpretation, word pronunciation, and performance experience. It is a useful skill not only for someone who aims to become a singer. For professions that require public speaking, having voice control is essential to becoming a confident speaker and avoiding voice damage. A vocal coach is the best option for someone who wants customized lessons.
After the 2020 pandemic, remote online lessons became popular, which creates more opportunities. Although sound delay affects the quality of the lesson, it may be the only option for someone living in a small village to receive mentoring from a personal vocal coach, or the only opportunity to have lessons with an international top vocal coach.
Vocal Coach Cheryl Porter with a student, photo by Cheryl Porter
Advantages:
Receiving personalized education and dedicated mentoring.
Disadvantages:
It is an expensive option
How to find the most suitable vocal coach for me?
Vocal coaches can be hard to find in small towns, and good ones can be hard to find even in big cities. Once you get a contact, research their teaching background. Many singers start to teach to make extra income but don't have tutoring experience. Being a good musician doesn't mean someone is also good at teaching, mentoring, and motivating, so even if the vocal coach is a well-known musician, you must look up their teaching experience and get student reviews before signing up.
Video Lessons
YouTube is full of free video lessons from many different vocal coaches. They are easily accessible, entirely free, and you can attend the lessons on your own schedule. Many vocal coaches also offer their own e-learning courses you can purchase.
As a singer, I have to advise you not to rely on video lessons if you are a beginner. Receiving feedback is the most important factor when learning to sing. You must make sure you are learning proper technique not only to sound better but also to be sure you are not damaging your voice.
I believe video tutorials may be very useful for some specific purposes:
Review previously learned technique, if you already are a skilled singer;
Find warm-up exercises, again, if you already are a skilled singer;
Music theory lessons;
Singing tutorials that don't require actually singing, e.g. tricks to remember lyrics, songwriting tips, or insights about the music industry.
Nicola Milan is a vocal coach with dozens of jazz & blues tutorials on her YouTube channel
Advantages:
Free schedule
Affordable price (or even for free)
Disadvantages:
You have to motivate yourself
You don’t receive any feedback
There is no support if you have any doubts
How to find the most suitable online lessons for me?
I don’t know how to answer this one. There are online lessons to learn every singing technique and even to learn how to sing specific songs. The best online lessons will depend on your goals.
Online lessons are a good complement if you already have some vocal training and understand the tutor’s vocabulary. But please, be careful when learning how to sing by yourself. The voice is a very delicate instrument and bad technique can be harmful.
In conclusion
Music schools are most suitable if you want to learn not only singing as an instrument but music as a whole. It is a good investment if you are committed to being part of the music community and connect with other musicians.
Choirs can be a more affordable option than a music school and also allow you to integrate a music collective.
A personal coach can give you the most personalized feedback possible, and video lessons are the most flexible option in terms of schedule and investment.
No option is better than the others. There are good and bad music schools, as well as good and bad choirs, personal coaches, and online lessons. The most worthwhile option will depend on your personal goals, available time, and financial capacity. Look up the options you can access and consider your alternatives carefully. | https://medium.com/age-of-awareness/how-to-find-worthy-singing-lessons-c222b05d6ad9 | ['Mariana Vargas'] | 2020-10-21 15:05:08.307000+00:00 | ['Education', 'Learning', 'Music', 'Teaching', 'Singing Lessons'] |
Taylor Swift’s New Song Has Me Rethinking Think Pieces | Taylor Swift’s new pro-LGBTQ single has once again riled up her detractors on the Internet with criticisms of money-grabbing, self-serving, and poor ally-ship. But here’s an alternative take.
Promotional Image for “You Need to Calm Down” (copyright: Taylor Swift/Republic Records)
“You Need to Calm Down”
Last Friday, Taylor Swift dropped her new song “You Need to Calm Down,” the second single from her forthcoming album Lover (her seventh, due August 23rd).
Like the first single, “Me!” (a duet with Brendon Urie of Panic! At the Disco), it is a candy-coated, rainbow-colored pop confection that seems determined to show us that she has fully moved on from the relatively darker and more cynical era that surrounded her last album, Reputation.
Unlike the first single, however, underneath the synth beats and clever hooks there is actually a coherent, progressive, and dare I say bold theme embedded in the lyrics of “You Need to Calm Down.” This is hardly the first time Taylor Swift has written about living your best life despite the pathetic haters that try to get you down (two of her best songs — “Mean” and “Shake It Off” cover this exact ground). But, it is the first time she got quite so specific about it.
The first verse references the admittedly long-list of haters that Taylor Swift has and how they tend to hide behind their Twitter handles (“You say it in the street that’s a knockout/But you say it in a tweet that’s a cop out”). She argues that from personal experience obsessing over an enemy only leads to internal suffering and then pleas, “Can you just not step on my gown?”
In the second verse — which has been the source of much debate — she takes on the homophobia faced by her LGBTQ friends: “You are somebody that I don’t know/But you’re coming at my friends like a missile/Why are you mad, when you could be GLAAD? Sunshine on the street at the parade/But you would rather be in the dark ages/Making that sign must’ve taken all night/You just need to take several seats and then try to restore the peace/And try to control the urge to scream about all the people you hate/Because shade never made any less gay!” She then makes a parallel plea from the first verse, “Can you just not step on his gown?”
And in the bridge, she shifts back to a more self-centered approach noting how the internet likes to set all of the female pop stars against each other (“We see you over there on the internet/Comparing all the girls who are killing it/But we figured you out/We all know now we all got crowns.”) And then, inevitably, “Can you just not step on our gowns?”
Clocking in at under 3 minutes, it is a bright and breezy listen that involves some clever lyrical themes, a nice beat, and an explicitness in its support of LGBTQ people that is exceptionally rare in popular music. I was hardly ready to call it a classic, but I was delighted and playing it on repeat.
Taylor Swift hangs out with some LGBT icons in the video for “You Need to Calm Down” (copyright: Taylor Swift/Republic Records)
And then came the think pieces…
Enter the Think Pieces
Articles started popping up all over the internet with titles like “Taylor Swift’s New Single is a Teachable Moment about How Not to Be an Ally” and “Taylor Swift’s ‘You Need to Calm Down’ Misses the Point about Being an LGBT Ally.” The authors of this article mercilessly deconstructed the song, its promotion, and its writer, interpreting everything through the most cynical possible lens.
Criticisms include (but are certainly not limited to): She should have made the song just about LGBTQ people and not inserted herself into it. She shouldn’t have made song about LGBTQ people at all. She shouldn’t be telling people how to feel or how to act as allies. As a (presumably) heterosexual person, she had no right to perform at the historic Stonewall Inn and doing so was queer-baiting. Name-dropping GLAAD was lazy writing. The list goes on… and on… and on…
After reading these, I couldn’t help but think to myself, “Am I so shallow that I can’t see the horrible disservice she has done to my community with this song? Am I so easily seduced by a pop beat and my love for Taylor that I can’t see how she’s appropriating our culture for her own personal gain?”
But then I saw the trailer for the music video (released earlier today) and saw the incredibly impressive long list of LGBT icons who were slated to appear in it, including Ellen DeGeneres, LaVerne Cox, Adam Rippon, and Jesse Tyler Ferguson. Clearly these influential people didn’t think Taylor’s heart or mind were in the wrong place or they wouldn’t have agreed to appear.
So then I got thinking about all the problems with think piece culture.
Rethinking Think Pieces
Before I enumerate my problems with current think piece culture, I have to make something extremely clear. I do not think anyone should be dissuaded from writing a critical or cynical think piece about anything or anyone. To all the think piece writers out there — your emotions and thoughts are valid and the Internet (and the world) is a better place because you critically examine our culture.
As I see it, the problem with think piece culture isn’t that there are an over-abundance of critical and cynical think pieces. The problem is that there are rarely positive ones to balance them out.
And I see all too easily how this happens. The instant the think pieces came out about Taylor’s new song, I decided I had two choices as a blogger. One choice was to not write anything at all. After all, the public had spoken and decided this was bad so who was I to say otherwise. My other choice was to take on the think pieces and dismantle their arguments, thus inviting a heated social media debate.
Only this morning did it occur to me that I had a third option — I could just express my far less cynical view of the whole thing and send it out into the world. It won’t be as flashy or click-baity or sharable as the takedowns and I run the risk of being called naive or — worse — being the subject of my own think piece takedown. (But, thankfully, I am not nearly an important enough voice for anyone to waste their time.) So here’s an alternate take.
An Alternate Take on “You Need to Calm Down”
Taylor Swift’s personal roots are in suburban Pennsylvania and her professional roots are in the country music industry — neither are bastions of LGBTQ acceptance. Born in 1989, her social brain’s formative years were dominated by the AIDS epidemic, the rise of the Evangelicals, and a time when any celebrity coming out of the closet was unheard of. Yet on her road to superstardom, she nevertheless became a major LGBTQ ally.
One of the notably apolitical Swift’s first public statements was denouncing an anti-LGBTQ political candidate in Tennessee. Since then she has put her money where her mouth is, donating large sums of money to fight anti-LGBTQ laws in the state (nicknamed the “Slate of Hate”) and becoming increasingly vocal about her views on LGBTQ equality.
Many of her songs have always had an appeal to LGBTQ listeners, with lyrical themes related to beating the bullies (“Mean,” “Shake It Off”), owning the narrative (“Blank Space,” “Look What You Made Me Do”), unrequited love (“You Belong With Me”), and heartbreak (“Back to December,” “All Too Well.”) But despite her activism and her LGBTQ-friendly lyrical themes, prior to “You Need to Calm Down” she had never explicitly discussed LGBTQ rights in her lyrics — well, save a cheesy, vague line about boys liking boys and girls liking girls in “Welcome to New York.”
And here’s the thing — very few pop stars have ever done so either. Gay icons like Madonna, Barbra, Celine, Mariah, Cher, Beyonce, and Britney have all made queer-friendly jams and publicly expressed their support for the community, but have never directly confronted these issues in their lyrics. Two of the only pop artists that have are quite complicated. Katy Perry gave us the cringe-inducing lesbian-as-a-sexy-experiment “I Kissed a Girl” and Lady Gaga gave us the self-empowerment of “Born This Way,” which is a hard comparison because unlike those other artists Gaga actually self-identifies as bisexual.
Not only does Taylor Swift use the word “Gay” and reference Pride and GLAAD (the influential non-profit that fights for LGBTQ equality), but she goes a step further. She’s not content to just tell the LGBTQ community that they are loved or should love themselves more; she actually takes to task the homophobes whose toxic hate is as destructive to LGBTQ people as it is self-destructive.
The lyrics also take one of the most oft-used criticisms of queer people and turn it against those who hate them. Queer people have historically been derided as too flashy, over-the-top, dramatic, sensitive, demanding, sexual, and flamboyant. Here, the only people who are “too loud” and “need to calm down” are the haters. [Also, regarding the oft-criticized line “Shade never made anybody less gay,” I think it is quite clear she is saying that hate does not change people’s actual sexual orientation, not displaying ignorance of the fact that homophobia often leads to drastically increased concealment of one’s sexual identity.]
The video is a love letter to the LGBTQ community, with the aforementioned list of icons frolicking in a candy colored trailer park, living their best lives despite the protestors. (OK, the ending where Taylor ends her longstanding feud with Katy Perry by having them embrace each other while dressed in fast food costumes is exceedingly random, but it certainly gets points for shock value.)
Taylor Swift gets some pool time in the video for “You Need to Calm Down” (copyright: Taylor Swift/Republic Records)
So, I have now argued that I think “You Need to Calm Down” is pro-LGBTQ and actually represents an important milestone in pop music. Now there’s the issue of whether or not Taylor is an opportunist. Note that it was not her who called the song a queer anthem, but her critics. Note that her performance at the Stonewall Inn was a surprise and not televised or pre-publicized. Note that the inclusion of GLAAD in the lyrics led to a spike in donations for the organization (as was intended). Note that the song is being released in the context of her doing important work for LGBTQ people in her private life. Note that the song is perfectly in keeping with the lyrical themes and aesthetic of her new era. Note that her 7th album was going to be a huge commercial hit across the board (including with LGBTQ people) regardless of whether she explicitly supported them in her lyrics or not. Now counter-argue that she is merely an opportunist trying to profit off a community she is not a member of. I’m willing to listen.
People can hate the song. They can think its beat is lazy, its lyrics are trite, and that it represents a missed opportunity to do something even bolder or more queer-positive. But I honestly struggle with the idea that the song’s message is negative or problematic for the LGBTQ community. After decades of asking our pop culture icons for more explicit allyship, we get one who makes a damn good effort and the Twitterverse seems determined to take her down.
But ultimately, the takedown mentality is par for the course given Taylor Swift’s penchant for riling up detractors and the cynical think piece culture we are immersed in. | https://medium.com/rants-and-raves/taylor-swifts-new-song-has-me-rethinking-think-pieces-281cb242a507 | ['Richard Lebeau'] | 2020-12-21 00:14:02.663000+00:00 | ['Culture', 'Feminism', 'Media', 'LGBTQ', 'Music'] |
Bobandii talks Te Reo, technology and tracksuits on new album | Bobandii talks Te Reo, technology and tracksuits on new album
‘Running Handshake / B.L.U.E.’ proves that, given the right ingredients, New Zealand has the talent to step up to the digital production plate.
Coming to terms with your creative chaos can be a tough journey. Our minds can be consumed by thoughts, ideas, aspirations and self-consciousness. Many of us attempt to rein in our creativity to improve it. We build walls around our intuition and spend far too much time battling with those walls. On Running Handshake / B.L.U.E., Bobandii has taken a big step towards breaking down his creative barriers. The album is eclectic, genre-bending and emotionally unruly, but somehow it bleeds together to build an immensely rewarding listening experience.
The album is packed with thematic threads:
Running Handshake / B.L.U.E. side-steps quickly between styles and moods. Tracks like Tracksuit and Falcon are energetic head boppers, whereas Free and Focus are atmospheric and emotional. What ties the album together so well are the themes. There is a lot to read into on the project, and I don’t want to constrain the ideas expressed into a single sentence, but they include Bobandii’s culture, his love-hate relationship with technology and his artistic journey. Sometimes these themes are expressed using three words or a whole verse. Sometimes they’re communicated in the mood of the track, and other times they crop up through cunning wordplay. If you have difficulty opening yourself up to a bit of an introspective journey, you may find it tricky to spot the threads, but they’re there.
International acclaim:
New Zealanders are incredibly talented in many genres. We have internationally cherished metal, pop, hip-hop, electronic and reggae artists. One area in which we don’t always meet an international standard is experimentation. With so many digital production tools proliferating, it could be argued that New Zealand has yet to take advantage of them in our own way. While we can be proud of our traditional musical success, Bobandii provides New Zealand with a unique opportunity to make an international name for experimental digital audio production. Given the right attention and support, Bobandii has the potential to influence music in the same way as Flume, Sophie, Bon Iver, James Blake and the like. Running Handshake / B.L.U.E. proves it’s only a matter of time. | https://medium.com/hendon/bobandii-talks-te-reo-technology-and-tracksuits-on-new-album-f1efe403a7f | [] | 2019-06-30 03:06:43.855000+00:00 | ['Bobandii', 'Running Handshake', 'Msc', 'Review', 'Music'] |
Does Libra matter? | Does Libra matter?
Facebook’s new cryptocurrency, Libra - how it impacts banks, social networks, and you.
This week, Facebook launched a stable, fiat-backed cryptocurrency, called Libra, built to facilitate cross-border money transfers. Facebook’s goal is to build a payments network around Libra by creating an online ecosystem (on/off the social network) where users can make purchases and peer-to-peer (P2P) transfers. Its structure and backing create the potential for Libra to be a catalyst in the cryptocurrency market.
Why is Facebook interested in crypto?
A large technology player coming into the payments market with a solution that excludes cards has long been a concern for banks, card payment networks and other traditional financial institutions. Libra is one such attempt. It is early and opinionated, but it offers a glimpse of what the financial system could shape into:
A global blockchain-based payments system could lead to fundamental changes in the digital consumer economy — payments can become faster, more reliable and cheaper.
A significant portion of banks’ profits comes from cross-border payments and foreign exchange, both of which would be directly impacted by this alternative.
Libra has a long path ahead: decisions by global policy makers will have a major impact on the future of the cryptocurrency.
It’s important to consider the notable differences between Libra and Bitcoin, the latter of which has been the basis for most cryptocurrency policy and design. Of note:
Libra will not be fully decentralized or anonymous (at least initially).
Libra is meant to be exchanged, not to be used as store value.
Libra is backed by various global fiat currencies in order to avoid massive volatility experienced in most existing cryptocurrencies.
Facebook going after global payments validates the massive opportunity to improve existing cross-border and P2P payments systems. Transferring money from one country to another should be as easy as it is to send a text message. Banks can leverage their existing market position with new technologies to better fill this gap, giving them a better shot at avoiding disintermediation when (and if) Libra (or something similar) should take off.
What is Libra?
“a simple global currency and financial infrastructure that empowers billions of people” — Libra White Paper
The Libra is a stablecoin — a digital currency that’s supported by established government-backed currencies and securities. The goal is to avoid massive swings in value so Libra can be used for everyday transactions, one of the main challenges with more volatile cryptocurrencies, like Bitcoin.
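To make the basket idea concrete, here is a toy valuation sketch. The weights and exchange rates below are invented purely for illustration — they are not Libra’s actual reserve composition, which had not been published in this detail.

```javascript
// Toy illustration only: the weights and USD rates are hypothetical,
// not Libra's actual reserve composition.
function basketValueUSD(weights, usdRates) {
  // weights: units of each currency backing one coin
  // usdRates: USD price of one unit of each currency
  return Object.keys(weights).reduce(
    (sum, ccy) => sum + weights[ccy] * usdRates[ccy],
    0
  );
}

const weights = { USD: 0.5, EUR: 0.3, JPY: 20 };    // hypothetical basket
const rates = { USD: 1.0, EUR: 1.12, JPY: 0.0092 }; // hypothetical FX rates

console.log(basketValueUSD(weights, rates)); // ≈ 1.02
```

The point of the diversified basket is damping: a 10% swing in any single currency moves the coin’s value by far less than 10%.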
Facebook created a new subsidiary, called Calibra, to build the new wallet and focus on the company’s blockchain efforts. The digital currency is expected to launch in 2020.
How will it work?
In order for the Libra to reach the usability and acceptance scale required to make it work as a payment method, it needs to be widely trusted. Facebook is creating a new payments network on which users can buy things and pay each other. This ecosystem consists of:
The Libra Blockchain: a new, open rail for anyone to build on top of.
The Libra Reserve: a reserve of real assets, distributed globally, to back Libra.
The Libra Association: a governance structure for the entire Libra network and Reserve.
Members of the Libra Association will each invest $10 million in the network through the Libra Reserve, which will govern the digital coin. While Facebook will be one of the founding members, it will be just one equal member of the Association. The Founding Members are:
Payments: Mastercard, PayPal, PayU (Naspers’ fintech arm), Stripe, Visa
Technology and marketplaces: Booking Holdings, eBay, Facebook/Calibra, Farfetch, Lyft, MercadoPago, Spotify AB, Uber Technologies, Inc.
Telecommunications: Iliad, Vodafone Group
Blockchain: Anchorage, Bison Trails, Coinbase, Inc., Xapo Holdings Limited
Venture Capital: Andreessen Horowitz, Breakthrough Initiatives, Ribbit Capital, Thrive Capital, Union Square Ventures
Nonprofit and multilateral organizations, and academic institutions: Creative Destruction Lab, Kiva, Mercy Corps, Women’s World Banking
Why it makes sense for Facebook
The large majority of Facebook’s revenue comes from advertisement. Payments are a potential way for Facebook to turn its messaging platforms — Messenger, WhatsApp — into complementary, revenue-generating businesses.
With its 2.5 billion users, Facebook’s network dwarfs that of any other bank or payment system globally — with a proven network effect. Monetizing its user base could lead to a global commerce network inside the Facebook Ecosystem.
As expected, this is a project that will take time and face several regulatory hurdles. There is a long way to go. Facebook has attempted to address some immediate regulatory concerns by establishing a governance structure that places Facebook as an equal member of the governing body. The Libra Association’s charter is to define a sustainable governance structure, chart a path towards decentralization, and oversee the Libra Network and Reserve.
Facebook will be heavily involved in developing the technology behind Libra throughout 2019. The Libra Blockchain is intentionally open-source, enabling any developer to build applications on top of it. This is an initiative that the entire payments ecosystem should be following.
Let’s see what happens when you point 2.5 billion people at a cryptocurrency. | https://medium.com/hackernoon/does-libra-matter-7cf94da982a5 | ['Rita Waite'] | 2019-06-24 01:41:06.747000+00:00 | ['Fintech', 'Facebook', 'Payments', 'Cryptocurrency', 'Bitcoin'] |
The Intuition in The Queen’s Gambit | Netflix’s The Queen’s Gambit must be the ultimate mathematical geek series. One cannot underestimate the attention to detail you find in this movie. That detail takes you back into a different time and leaves you with the impression that it was all too real. The movie is not only about chess, but rather our relationship with our minds, the changes in society, and the passions that we are obsessed with.
The movie takes place in the 1960s, where a young child (Elizabeth Harmon) is sent to an orphanage after her mother’s tragic death. What’s incredible is how this movie reveals the changes of the 1960s: the technology, architecture, interior decoration, music, fashion and even inflation. The movie shows in astonishing detail how the main character lives in a world that is changing at an incredible pace. The aesthetics of the series are stunning. It’s a different kind of film when the environment is also a big part of the story.
The movie begins in an orphanage in Kentucky. She then gets adopted by a family and gets introduced to the 1960s single-family home with a car on every driveway. Her adoptive mother stays at home while the father is always away on business trips. The home has all its walls covered in wallpaper (rarely seen today).
One can see how technology changes as cars evolve. Even small details of how the telephone changes from rotary phones (is “dialing” a phone number still understood today?) to touch tones are taken care of. This was a very different time, when technology changes were brutally evident. The movie takes you back in time and shows you how remarkably quickly everything changed.
But that’s just a slice of the movie; it’s about a gifted chess player. Few may have noticed that her natural mother wrote a group-theoretic Ph.D. dissertation. In one of Harmon’s recollections of her childhood, while her mother burned books, she picked up a green book titled “Monomial Representations and Symmetric Presentations” by Alice Harmon. This reveals her unique innate ability to work with the mathematics of groups and symmetry. It was at this point that I began to wonder if this was a real story of someone I didn’t know about.
The thing that surprised me about the orphanage was that they prescribed daily vitamins and tranquilizers (a benzodiazepine sedative delivered in half-green capsules) to their children. The daily regimen of benzodiazepine gave Harmon the ability to hallucinate and accelerate her learning of chess. Don’t try this at home: despite how real it may seem, this movie is still a work of fiction!
There is an underlying theme here that an intense passion for a subject can drive one insane. This is not an uncommon observation in the world of chess. Harmon’s tutor Beltik realized how one’s passion could drive someone mad. Humans and chess could be like moths attracted to a flame. However, the movie reveals examples of players with a mature relationship with the game and their passion. The player Luchenko, who looked like an eccentric grandmaster, sought beauty in the game rather than gamesmanship.
The orphanage had few opportunities for intellectual engagement. If you lived in the 60s, or even in the decades that followed, you will recognize this. You see none of the distractions and stimuli that we experience daily today. Information was sparse. I was indeed surprised that they had chess magazines in those days.
Harmon learned to play from the janitor, who read books about chess (‘Modern Chess Openings’ by Griffith and White). This movie makes heavy use of chess vocabulary. In my youth, I did play chess and read some books, so the jargon was familiar to me (examples: the Sicilian Defense, the Ruy Lopez, Capablanca). I’m not sure how this plays for people without that exposure. I gather it’s the same experience of ignorance I feel when people talk about the game of Go. I’ve never played Go.
The movie reveals the popularity of chess even in the US. Despite this, it shows how the rest of the world took chess more seriously than the US did. This certainly was true; the US has a strange relationship with intellectual pastimes.
It’s fascinating how this movie explores how Harmon becomes better at chess through different mentors who expose her to new approaches. She begins by reading games played by other masters that aligned with her style. She gets exposed to blitz chess, a fast-paced type of chess that was frowned upon in the movie ‘Searching for Bobby Fischer’. Harmon even makes use of sense-deprivation methods like going underwater.
The movie talks about the intuitive chess player. The player who has no fear of sacrificing her pieces. The player who breaks the rules. Harmon in one scene even remarks that she does not do chess puzzles because they are unlikely to happen in a real game. Harmon indeed has a keen insight into intuition. It is as if the patterns of real games are different from the patterns in made-up chess puzzles. It is perhaps why playing blitz chess can also mess up your intuition. I used to play a lot of reverse chess, and that certainly messed up my intuition! The movie lets the outsider into the minds of chess-playing enthusiasts.
This movie reveals to its audience the incredible passion, work, and ability of people to become masters of this game. It also reveals to us the background of a rapidly changing world. In the decade of the 1960s, the changes were visible for all to see. In perhaps the last two decades, they have not been as obvious.
The movie showed how passionate the Russians were about the game of chess. Transfer that image to East Asian countries like China, Japan and Korea, which had a similar passion for an intellectual game like Go. The game of Go is much older than chess and is likely to be rich in strategies and tactics. Now fast-forward into the future (actually several years ago now), when an artificial machine (see: AlphaGo) bested the best human player. How does this affect the psyche of a nation? You can appreciate this better by watching Queen’s Gambit.
But there’s more. The chess world champion, the Russian Borgov, is characterized as a machine-like player who plays by the book without error. In chess (unlike Go), there is a fascination with those players who introduced an unorthodox way of playing. The kind of intuitive player whose moves were entirely unique and novel. The kind that would break all orthodoxy, the kind that does not follow the rules.
Unfortunately, that kind of chess player was finally discovered to be not a human, but rather a machine (see: AlphaZero). Humans exchange knowledge by giving names to different lines of thought. That is why in the movie they talk about the Sicilian Defense, the Queen’s Gambit, the Ruy Lopez and other chess jargon. Experts play chess by following patterns and becoming experts at variations on patterns. That is why it was not a good strategy to play against an expert like Borgov. In contrast, an intuition machine like AlphaZero doesn’t play with any patterns and rules that come from humans. It plays an alien form of chess: the kind commonly understood to be intuitive, where the word intuition is used to mean something beyond the rules.
The movie’s attention to detail cannot be overemphasized. The final game is based on actual gameplay. In real life, the game ended in a draw. Yet the creators of the movie took the additional effort to find an improvement in the original game so that one side wins! You can find analyses of the game in the movie online. This improvement wasn’t published until it was shown in the film. I would not be surprised if AlphaZero was used to discover the novel line of moves. My fearless prediction is that the next chess-playing AI is going to be called Harmon.
The exponential changes in our society have surpassed our own human intuitions of this world. We are at a point in civilization where we cannot physically see these changes. The 1960s were more than half a century ago; back then the changes were obvious and rapid. To avoid distractions, one could simply leave the phone off the hook. (Note: terms like “off the hook” or “dialing” a number are now far removed from the semantics of modern phone usage.)
The Queen’s Gambit will be loved by all the geeks who grew up around that time or grew up in chess. It will be difficult to recreate this kind of experience in today’s world. But perhaps the ending of the movie offered a clue: that we discover our humanity when we discover how the love of a game connects us all. It was a unique experience, and this movie lets us all re-experience those fascinating times. | https://medium.com/intuitionmachine/the-intuition-in-the-queens-gambit-f1c3c04da1ed | ['Carlos E. Perez'] | 2020-11-11 10:38:46.769000+00:00 | ['Chess', 'AI'] |
React Native: How to Load and Play Audio | React Native: How to Load and Play Audio
Working with audio clips in React Native and Expo AV
Expo AV: Reliable Native Audio Support
This article introduces the expo-av package, a universal audio (playing and recording) and video module for React Native projects. The package is maintained by Expo, the go-to library for bootstrapping React Native projects targeting a range of platforms.
Audio packages have come and gone in the React Native ecosystem, most of which either do not work or have not been updated for a significant amount of time. When looking at packages to adopt — especially for critical tasks like audio playback — a reliable and well-maintained package is needed. Major iOS updates launch on a yearly basis, and Android updates arrive considerably more often. These updates sometimes come with breaking changes, so choosing reliable source code on the React Native side is in every developer’s interest.
Note that react-native-audio has not been updated in over 2 years, and react-native-sound has been stagnant for over 1 year (as of the time of writing this article); these packages should be avoided as they will not support the latest native APIs, and will lead to dead-end troubleshooting when they do not work in your project.
With all this being said, expo-av ticks all the boxes in terms of maintenance and platform support. With weekly downloads at >25,000, and the last update published a month ago (at the time of writing), it is a fairly strongly adopted package that the developer should have confidence using. Not all apps require audio, so it can be hard to gauge the popularity relative to the whole React Native ecosystem.
expo-av also has strong documentation that is kept up to date. The reader can visit the package on GitHub as the true source, but the majority of documentation is hosted on the Expo website, where there is a dedicated audio page along with an API reference page for the Expo AV module. It is this documentation that we’ll be referencing throughout this article as we build some audio tools.
What this article will cover
Now that I have hopefully persuaded you that Expo AV is the way to go with React Native audio, it is time to delve into some development. Instead of writing a standard tutorial of loading and playing an audio clip, we will make things more interesting to more closely reflect real-world use cases. After installation, this piece will:
Explain the audio loading and playing workflow and the key APIs needed to get audio working.
Walk through an audio controller class that loads, plays, resets and stops audio clips. This class will be separate from React components as to separate the audio logic from component logic.
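As a rough sketch of what such a controller might look like — note this is not the article’s actual code: the class shape and the injected `createSound` factory are illustrative, with the factory standing in for a thin wrapper around expo-av’s `Audio.Sound.createAsync`:

```javascript
// Sketch of an audio controller kept separate from React components.
// The async `createSound` factory is injected, so the class has no direct
// expo-av dependency and can be unit-tested with a mock sound object.
class AudioController {
  constructor(createSound) {
    this.createSound = createSound; // async (uri) => playable sound object
    this.sounds = {};               // loaded clips keyed by name
  }

  // Load one clip from a remote URI and cache it under `name`.
  async load(name, uri) {
    this.sounds[name] = await this.createSound(uri);
  }

  // Play a previously loaded clip from the start (replayAsync resets the
  // playback position); refuse if the clip was never loaded.
  async play(name) {
    const sound = this.sounds[name];
    if (!sound) throw new Error(`Clip "${name}" is not loaded yet`);
    await sound.replayAsync();
  }

  async stop(name) {
    const sound = this.sounds[name];
    if (sound) await sound.stopAsync();
  }

  // Release every clip, e.g. when the screen unmounts.
  async unloadAll() {
    await Promise.all(Object.values(this.sounds).map((s) => s.unloadAsync()));
    this.sounds = {};
  }
}
```

Injecting the loader keeps the class free of any direct expo-av import, which is what makes it possible to test the controller’s logic without a device.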
Demonstrate how to load and manage multiple audio clips simultaneously. For this piece we will assume that a male and female audio clip need to be loaded, whereby the end-user can choose which gender to listen to. Of course, switching the gender can be achieved simply by toggling that gender via state management, but the corresponding audio itself must be loaded and available, ready to be played.
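A hedged sketch of the parallel-loading idea (again illustrative, not the article’s code): `loadClip` stands in for whatever async loader the audio layer exposes, injected so the helper stays framework-agnostic.

```javascript
// Load every clip concurrently and return a name -> sound map, so the UI can
// switch between "male" and "female" instantly once loading completes.
async function loadVoiceClips(loadClip, uris) {
  const entries = Object.entries(uris);
  const sounds = await Promise.all(entries.map(([, uri]) => loadClip(uri)));
  return Object.fromEntries(entries.map(([name], i) => [name, sounds[i]]));
}
```

Awaiting `Promise.all` means neither clip is considered ready until both have loaded, which keeps the gender toggle safe to press.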
Walk through a <PlayButton /> component that will initialise the audio (request to load the audio files from a remote server) and manage the audio state. More specifically, we will visit concepts such as auto-play, tap to play and tap to stop, and reflect these in component state. Another thing we need to manage with audio is the component state while the audio is being retrieved. This means asynchronously requesting the audio and waiting for it to be fully loaded before it can be played. This adds further (but needed) complexity to the component, so as not to attempt to play an unloaded audio file, which would result in a runtime error. | https://rossbulat.medium.com/react-native-how-to-load-and-play-audio-241808f97f61 | ['Ross Bulat'] | 2020-12-25 07:28:49.655000+00:00 | ['JavaScript', 'App Development', 'React Native', 'Software Engineering', 'Programming'] |
An Evening With the Hearts | It wasn’t just the Rolling Stones who attended a pedestrian gig I played back in my youth to pay the rent. Ann and Nancy Wilson of Heart came to one of my rent-paying jobs as well — proving that you never know who you’re going to run into and where.
As a young road warrior back in my 20’s, I almost always managed to find some work to escape the city and its heat. I didn’t really care whether the band, music, or even pay was to my liking. I just wanted out and would take anything that came my way.
As such, I spent the summer of ‘77 touring with the Shirelles (which was actually about the best summer employment I found during that period), playing 29 nights just in August alone! Six of those dates were at the Atlanta Hyatt, a brand new downtown hotel (complete with a breathtaking atrium). This was by far our fanciest gig on the run — what with playing in the Showcase of the South — and in the city’s flashiest venue. Accordingly, we performed low-volume and clad in tuxes for the Southern gentry those six nights.
At the time, there were a couple of sisters tearing it up in the rock world — a band called Heart — playing at the Omni (the city’s sports arena) while we were in Atlanta.
As aficionados of Herbie Hancock and everything funky, jazzy, and syncopated, the guys (including me) knew almost nothing about the band staying in the suite on the top floor of the hotel. Except they were bigger than we were because they had the suite — and we didn’t!
Well…as fate would have it…the Omni was blacked out one night and Heart couldn’t perform their concert. So where did they end up for the evening? Watching the Shirelles in the hotel lounge.
While the previous audiences had been demure and civilized, Heart (or should I say the girls) were rock and roll raucous. Clearly, the Wilsons were rebels with a cause. And that cause was to party hearty in a faux sophisticated environment. Oddly, while the girls carried on as if they were male rock stars, the guys sat still almost like any politically incorrect move might get them fired.
Anyway, the Shirelles always held a q & a session after “Soldier Boy.” And when Ann shouted out “y’all get a lot of groupies?”…Doris (one of the Shirelles) acknowledged their rock stature by telling the audience
“Tonight, we have the pleasure of performing for ‘THE HEARTS.’”
Ouch! So much for the girls’ rock stardom. To their credit, Heart barely grimaced as I guess they weren’t really expecting some oldies group from 10 years before to understand how popular they were.
Curious about the act who’d come to see us play, I came to discover after our tour was over that they were (and are) huge talents. In fact, I became a big fan and am to this day. Their songs contain timeless melodies delivered in a genre (rock) that isn’t always that melodic! And Ann has a voice for the ages.
Looking back on the two times I “gigged’ anonymously to pay the rent, only to find myself performing for superstars, I wonder why I didn’t approach those million-selling artists on either occasion.
With the Stones, I almost get why I ran away. The band and I sounded horrible. It wasn’t exactly the showcase with which to impress rock and roll icons. But with Heart…my friends and I had been backing the Shirelles every night for a month and knew the show cold. We were rehearsed and tight. And we had nothing to be ashamed of.
But for whatever reason, none of the guys in the band I was in approached the girls or their backup musicians during that week. And that’s really too bad because as I said, I became a big fan of the Wilson sisters.
Oh well! Another opportunity wasted. The story of my life, apparently. But to be truthful…not entirely. Like that very night, I did have really good sex with the blond waitress at the club in the Hyatt! | https://medium.com/my-life-on-the-road/an-evening-with-the-hearts-da0a8d7affb2 | ['William', 'Dollar Bill'] | 2020-10-28 20:56:31.732000+00:00 | ['Inspiration', 'Music Business', 'Music', 'Travel', 'Culture'] |
What Is the JAMstack and How Do I Get Started? | What Makes Up the JAMstack?
Back to the JAMstack: it is typically made up of three components: JavaScript, APIs, and Markup.
Its history stems from growing the term “static site” into something more meaningful (and marketable). So, while ultimately a static site is the end result, it’s blown up to include first-class tooling for every step of the way.
JAMstack breakdown
While there isn’t any specific set of tools that you need to use, or any tools at all beyond simple HTML, there are great examples of what can make up each part of the stack. Let’s dive into each component a little bit.
JavaScript
The component that’s probably done the most work to popularize the JAMstack is JavaScript. Our favorite browser language allows us to provide all of the dynamic and interactive bits that we might not have if we’re serving plain HTML without it.
This is where a lot of times you’ll see UI frameworks like React, Vue, and newcomers like Svelte come into play.
“A Simple Component” example from reactjs.org
They make building apps simpler and more organized by providing component APIs and tooling that compile down to a simple HTML file (or a bunch of them).
Those HTML files include a group of assets like images, CSS, and the actual JS that ultimately get served to a browser via your favorite CDN (content delivery network).
APIs
Utilizing the strengths of APIs is core to how you make a JAMstack app dynamic.
Whether it’s authentication or search, your application will use JavaScript to make an HTTP request to another provider which will ultimately enhance the experience in one form or another.
Gatsby coined the phrase “content mesh” that does a pretty good job of describing the possibilities here.
You don’t necessarily have to reach out to only one host for an API, but you can reach out to as many as you need (but try not to go overboard).
For instance, if you have a headless Wordpress API where you host your blog posts, a Cloudinary account where you store your specialized media, and an Elasticsearch instance that provides your search functionality, they all work together to provide a single experience to the people using your site.
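A sketch of what that stitching can look like in the browser — the endpoints are placeholders, and `fetchJson` is injected rather than calling the global `fetch`, so the example carries no real network dependency:

```javascript
// One page, several providers: each API call contributes a slice of the
// final experience, and the browser assembles them with plain JavaScript.
async function loadPageData(fetchJson) {
  const [posts, media, search] = await Promise.all([
    fetchJson('https://cms.example.com/wp-json/wp/v2/posts'), // headless CMS
    fetchJson('https://media.example.com/resources'),         // media host
    fetchJson('https://search.example.com/_search?q=jam'),    // search service
  ]);
  return { posts, media, search };
}
```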
Markup
This is the critical piece. Whether it’s your handwritten HTML or the code that compiles down to the HTML, it’s the first part you’re serving to the client. This is kind of a de facto piece of any website, but how you serve it is the most important piece.
To be considered a JAMstack app, the HTML needs to be served statically, which basically means not being dynamically rendered from a server.
If you’re piecing a page together and serving it with PHP, it’s probably not a JAMstack app. If you upload and serve a single HTML file from storage that constructs an app with JavaScript, it sounds like a JAMstack app.
Static output from Gatsby on AWS S3
But that doesn’t mean we have to always build 100% of the app within the browser. Tools like Gatsby and other static site generators allow us to pull in some or all of our API sources at build time and render the pages out as HTML files.
Say you have a WordPress blog: we can pull in all of the posts and ultimately create a new HTML file for each post. That means we’re going to be able to serve a precompiled version of the page directly to the browser, which usually equates to a quicker first paint and a faster experience for your visitor. | https://medium.com/better-programming/what-is-the-jamstack-and-how-do-i-get-started-b09a05a195f1 | [] | 2020-04-01 22:26:42.097000+00:00 | ['Technology', 'Programming', 'JavaScript', 'Jamstack', 'Development']
How Home Videos Reminded Me of My Journey as a Film Artist | “I cannot believe that I was doing all this stuff way back then,” I said to my mother after we had watched several of the family movie tapes. “It’s exactly what I’m doing right now with my work. It’s so validating to me to see that, but also so frustrating. Why didn’t I just follow my art?”
I have struggled with this question a lot in the past five years. Something happened to me the moment I turned 18 — I think I felt the pressure of the world push in on me. People kept telling me I would never make it as a novelist and I’d have to live in Hollywood to be a director (and having just left that region five years before, I wasn’t in a hurry to return).
Things were very different in the mid-90s. It was the days of dial-up internet, long before the gig economy was even a twinkle in the Muses’ eyes. The odds really were against you if you wanted to be an artist of any kind. There were only a few, very specific paths you could take, in very specific locations, and even then, you’d have to be the best of the best to have a chance to gain entrance into these exclusive careers.
It took me almost ten years to get my bachelor’s degree because I kept waffling between getting a practical degree in teaching or following my dreams and pursuing an MFA in writing or going to film school. I can’t even begin to tally up how much money I wasted during that time of indecision — especially if you add on the $25,000 MAT I eventually earned for that “practical” teaching career that I did not enjoy and had to leave by the age of 38.
Oh, if only I had followed Young Yael’s dreams. I look at her now, in those videos (well, I can’t see her from behind the camera, but I can look at her through her work), and I see the woman I am today. I see that same creative spirit. I see the same person who is looking to capture the little moments in life, the little details, and make people take notice of them. Make people realize that all those tiny things that we think are so insignificant are the things that really matter. The way the light falls. The little wildflower growing through the crack in the sidewalk. The song of a robin.
I wonder where I’d be now if I hadn’t taken the practical route, if I had skipped over the $25,000+ journey into teaching and followed my artist’s heart, instead.
Maybe it doesn’t matter. Maybe all roads would have led to this point.
Maybe the bigger point is just that I’ve always been this person. I’ve always been a storyteller. And just knowing that, having evidence of that, is such a comfort to me.
I am who I have always been.
© Yael Wolfe 2020 | https://medium.com/wilder-with-yael-wolfe/how-home-videos-reminded-me-of-my-journey-as-a-film-artist-544d3d62a75f | ['Yael Wolfe'] | 2020-05-16 18:39:38.397000+00:00 | ['This Happened To Me', 'Filmmaking', 'Artist', 'Creativity', 'Photography'] |
When a List is Not Enough- Python Arrays | As a beginner in Python, lists have to be my favorite objects to work with. List comprehension reduces multiple lines of code to one line. However, if you have to work with big data, lists may not be the most efficient objects to work with. Therefore we need a more efficient alternative. Say hello to Python arrays.
Arrays behave much like lists, but their contents are type-constrained: you have to specify the content type. Python arrays are much like arrays in the C language. Consider using an array especially if your list is going to contain only numbers. When creating an array, you have to provide a type code (for a list of type codes, I found this page very useful). This is the key to arrays’ efficiency. For example, if you use the type code ‘b’, which in C is signed char and can hold small Python ints, each item is stored in a single byte, which saves a lot of memory for large sequences of numbers. Since arrays are type-constrained, Python will not let you put in any item that does not fit the specified type. You are therefore trading off the flexibility of lists for efficiency in return. However, most of the things that you can do with lists are supported by arrays as well.
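To make the memory savings concrete, here is a small sketch (a rough illustration; exact byte counts vary by Python version and platform, and the variable names are mine):

```python
import sys
from array import array

n = 10_000
as_list = [i % 128 for i in range(n)]               # a list of small ints
as_array = array('b', (i % 128 for i in range(n)))  # one signed byte per item

# The array stores raw bytes in a single buffer; the list stores
# one pointer per item, each pointing at a full int object.
print(sys.getsizeof(as_array))  # roughly n bytes plus a small header
print(sys.getsizeof(as_list))   # roughly 8 * n bytes, before counting the ints
```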
To make an array, you import the array module. The array constructor takes two arguments: the first is the type code and the second is the content. Let’s say we want to make an array holding the numbers 0 to 9.
from array import array

a = array('b', (x for x in range(10)))
As you can see it is very easy. However to demonstrate the power of arrays, let’s create one with 10 million random floats.
from array import array
from random import random

floats = array('d', (random() for i in range(10**7)))
Just like a list, you can check the number of items in an array by using len.
len(floats)
Arrays also make it very easy to save data to a file and load it back.
with open('floats.bin', 'wb') as file:
    floats.tofile(file)
To load the file, we first define an empty array and then load the numbers from the saved file into it.
floats2 = array('d')

with open('floats.bin', 'rb') as file:
    floats2.fromfile(file, 10**7)
Let’s check that the two arrays are the same.

floats == floats2

This should return True.
If you were to save the same amount of data in a text file, it would be much slower and the resulting file would be much larger. Large amounts of numeric data can therefore be processed, stored, and loaded efficiently using arrays.
Numpy Arrays
Any article about arrays in Python would be incomplete without a mention of NumPy arrays. NumPy arrays are multi-dimensional, can hold not only numbers but also user-defined objects, and provide efficient element-wise operations.
Let’s say we want to define an array that contains the numbers 0 to 11.
import numpy as np

a = np.arange(12)
Let’s see what our NumPy array looks like:

a
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
You can check the shape of any array as follows:

a.shape
(12,)
We can see that our array has 12 items in one dimension. We can change the shape as follows:
a.shape = 3, 4
Let’s check what our array looks like now:

a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
To simplify it in my head, I think of NumPy arrays as lists of lists, but that’s oversimplifying it. However, it makes sense when you want to retrieve a row.
a[1]
array([4, 5, 6, 7])

a[1] thus retrieves the row at index 1.
If we want to retrieve a specific element, we can use its index in both rows and columns.

a[1, 2]
6
Think of the first number as the row index and the second number as the column index, both starting from 0.
We can also pick a whole column. Let’s say I want to pick column 1, with counting starting from 0.

a[:, 1]
array([1, 5, 9])
Last, we can also “flip” the array with transpose. Note that transpose() returns a new array rather than modifying a in place:

a = a.transpose()
a
array([[ 0,  4,  8],
       [ 1,  5,  9],
       [ 2,  6, 10],
       [ 3,  7, 11]])
That’s it! My short introduction to arrays in python. | https://medium.com/python-in-plain-english/when-a-list-is-not-enough-python-arrays-11008bd57940 | ['K. Nawab'] | 2020-11-07 21:32:50.832000+00:00 | ['Python Programming', 'Coding', 'Programming', 'Python', 'Arrays'] |
Can You Jumpstart Your Career With AWS Certifications? | Cloud expertise is currently one of the most sought-after tech skills out there, and cloud jobs are some of the highest paying in the industry. But how does one break into this exciting field?
A couple of years ago, I found myself stuck in a dead-end job, itching to break into a career in the cloud, but I did not know how.
After discovering AWS, I came up with a plan: I was going to get all three associate-level certifications, which would make me a cloud developer in extreme demand, right?
I just need to cram for a couple of tests. Easy peasy!
So how did that plan pan out?
(See my original blog post from 2017 about my certification journey here)
Certifications Are Controversial
Certifications are a bit of a touchy subject in tech: some people swear by them, others think they are overrated.
I kinda agree with both of those sentiments!
Let me explain.
If you think that a certification alone will make you an expert in a given topic, then you are in for a surprise. You will not gain expertise by passing a test. This attitude is what gives certification a bad rap. A growing number of so-called “certified experts” that seem to lack an understanding of the very topics they are certified in does not help.
With that being said, certifications have some benefits, two of which are:
1. They provide an easy-to-follow path for studying and self-learning: for complex topics like AWS services, one can be easily overwhelmed, but having a defined and structured list of topics to cover makes it much easier to learn
2. While most technical hiring managers tend to be indifferent towards certifications, many recruiters and HR people like them, so having a certification could potentially improve your chances of being noticed and might help get your foot in the door
Brief Overview of the AWS Certification Path
As of 2020, AWS has multiple tracks for certification, as seen in the image below.
AWS Certification Path as of 2020 — https://aws.amazon.com/certification/
Foundational: for those completely new to the cloud and seeking to get a general understanding. Avoid this track if you are more technically-minded
Associate: this is where most people will start (and where I started). The Solutions Architect is a good one to start with
Professional: recommended only after 2+ years of experience, and I really encourage you to follow this recommendation
Specialty: deep dives into specific topics that require lots of experience and knowledge (security, networking, etc.)
For most technically-inclined people, I recommend starting with the associate-level certifications.
How I Broke Into the Cloud Career
As I mentioned before, a couple of years ago I decided that pursuing the three associate certifications from AWS would be all I need to break into a career in the cloud.
After months of studying, I did manage to pass all three certifications and I started applying to jobs.
Initial Failure
I was getting called back, but my interviews were not going that great.
While the knowledge I gained from all those certifications was helpful, it was not the only thing employers were looking for. In my pursuit of passing my AWS tests, I neglected other skills that were equally critical.
One of the most critical skills I ignored at the time: knowing how to write quality code. It turns out that if you want to be a cloud professional, writing code is one of the most important skills that employers are looking for.
My mistake: I leaned too heavily on my certifications.
Turning Things Around
After many failed interviews and after realizing what my areas of weaknesses were, I went to work on filling the gaps.
For me, my Achilles’ heel was the coding challenges: I was doing great on most technical and behavioral aspects of my interviews, but I was bombing the code challenges.
Only when I focused on my weaknesses and accepted that my AWS knowledge alone will not suffice, things started to turn for me. After devoting a considerable amount of time studying object-oriented programming and doing a lot of practice, I started interviewing more confidently and doing well on coding-related questions and challenges, which very quickly landed me what has been the most fulfilling and enjoyable career I’ve had so far.
My cloud knowledge gained through studying for those certifications, in tandem with my coding skills, ended up being the winning ticket.
Should You Pursue Certifications?
Definitely!
Shia LaBeouf thinks you should!
If you are new to the cloud, or interested in jumpstarting your career, pursuing an AWS certification will be a great place to start.
Just remember though: depending on your background, those certifications alone might not be enough.
Suggested Path
I highly recommend you start with the AWS Solutions Architect — Associate certification, and then possibly pursue the two other certifications afterward (Developer and SysOps Administrator). Unless you have a couple of years of experience in the cloud, do not pursue the professional-level certifications.
But do not stop with the certification: understand that more than likely, you will need to supplement with additional skills and experience.
Final Thoughts
You Can Absolutely Do This
If you have been thinking about transitioning into the cloud but have been looking for a sign, this is your sign!
Demand for cloud professionals is only increasing. A global pandemic has caused more companies and workloads to move to the cloud, and increased the demand even further.
Call to Action
I highly recommend that you explore the certification path that AWS offers and consider pursuing one (or more).
If you are overwhelmed by all the options and “additional skills” that you need to develop, I have put together a free ebook that will guide you through the whole process and provide you with the best online resources for self-learning. I even give you a structured program that you can follow.
You can get my ebook for free here: https://www.moneerrifai.com/ebook/ | https://medium.com/hackernoon/can-you-jumpstart-your-career-with-aws-certifications-317e5e2e9170 | ['Moneer Rifai'] | 2020-05-27 21:15:26.458000+00:00 | ['Cloud', 'AWS', 'Certification', 'Career Advice'] |
Gut the EPA and Courts, and We’ll End Up Like China | Gut the EPA and Courts, and We’ll End Up Like China
Without recourse, lacking enforcement, a Beijing activist got desperate
by DAVID AXE
Pres. Donald Trump has attacked the U.S. judicial system after it blocked his Muslim ban. At the same time, Trump’s top advisor Steve Bannon has vowed to “deconstruct the regulatory state.” Scott Pruitt, Trump’s corrupt Environmental Protection Agency head, has begun doing just that with regard to America’s ecological protections — pledging to cut back rules preventing air and water pollution.
The president’s assault on the courts and the EPA could have a devastating effect on Americans’ health and safety and the fates of countless other species. There’s a great, smoggy example of what happens in the absence of a strong judiciary and environmental enforcement — China, one of the dirtiest countries in the world.
Two-thirds of the world’s high-end steel-producers — together burning 800 million tons of coal per year — are located in a geographic ring surrounding Beijing. In all, China consumes half the world’s coal in what are, by world standards, highly inefficient plants. As a result, the air in Beijing typically registers around 100 micrograms of particulate per cubic meter, six times what the EPA considers safe. And Beijing isn’t even China’s dirtiest city.
China has no independent judicial branch and also possesses a tiny, weak environmental agency — the Ministry of Environmental Protection employs just 400 people in a country of 1.4 billion. The EPA, by contrast, employs nearly 15,000 people in a country of 300 million, and U.S. courts frequently intervene to block polluters. Unlike Americans, everyday Chinese people suffering from polluted air and water have few legal ways of fighting for cleaner air and water.
“Rules and regulations are there, enforcement is not,” Ma Jun, director of the Institute for Public and Environmental Affairs in Beijing, told DEFIANT during a visit to his Beijing office. What’s more, “in China, environmental litigation is quite difficult,” Ma added. The Chinese People’s Institute of Foreign Affairs, a Hong Kong-based non-profit organization, sponsored the visit. | https://medium.com/defiant/gut-the-epa-and-courts-and-well-wind-up-like-china-197ded87db0d | ['David Axe'] | 2017-02-25 04:00:08.886000+00:00 | ['Environment', 'Climate Change', 'Defiant Science'] |
A sudden sense of liberty | The title doesn’t even appear in the song, of course.
Same with Blue Monday, Thieves Like Us, Bizarre Love Triangle, Fine Time. Titles of New Order hits rarely appeared in their lyrics. But then they rarely appeared on their record sleeves, either. New Order seemed determined to keep you at arm’s length.
True Faith drew me in. I was around 12 or 13 and most singles I taped off the radio had hooks. Most videos did what videos were supposed to do — make pop stars look like pop stars. This was different.
The song was weird, in a really exciting way that I didn’t really understand or know what to do with. For the first time weird felt OK — warm, intoxicating, mysterious. The video was silly-memorable yet also mysterious. It made New Order seem the least pop-like pop stars I’d ever seen.
Things stick at that age, and True Faith got to me. It is euphoric, but also wistful and melancholic. It shape shifts its mood, resists definition. That ambiguity set me free, I think. I found it at the perfect time.
When I was a very small boy,
very small boys talked to me.
Now that we’ve grown up together,
they’re afraid of what they see.
I wasn’t quite a very small boy then. But I hadn’t grown up yet either. I was in limbo, able to look back but uncertain about the future. This verse, with its malevolent nursery rhyme rhythm, subjected me to an involuntary premonition. It was a weird, outside-yourself kind of feeling. But then, True Faith is a weird, outside-yourself kind of song. It was a madeleine into the future. It’s now a portal back to that feeling, that limbo. A limbo between innocence and experience, between the past and the future, between freedom and the price we pay for it.
I feel so extraordinary
Something’s got a hold on me
I get this feeling I’m in motion,
A sudden sense of liberty
The words of someone transported. An epiphany of the faithful, perhaps. Is this a song about religion? Perhaps back then I was dimly aware it might be. Its title isn’t sung, its dread of the future never labelled, yet that image of fearful men is coloured by each. Small boys grown up together, but pulled apart. Perhaps even then I figured that would require a force with the capacity to shape minds and erode trust.
But maybe I’m conflating my own sudden sense of liberty — my initiation into pop music — with the idea of religious epiphany. Certainly back then the song itself felt like a divine occurrence. Music wasn’t simply a click away. Songs you liked appeared on the radio and TV, suddenly, without warning. It was a blessed experience, and you simply prayed you’d be ready with the blank tape on pause.
I don’t care ’cause I’m not there,
And I don’t care if I’m here tomorrow
So maybe it’s not about religion, after all. Perhaps it’s about drugs. Or music. Or music and drugs. It would figure — Manchester, late 80s. But this is hedonism, not oblivion. Limbo — a glimpse from outside yourself, a realisation that only now and this really matter.
Looking at it now the song seems less about a particular object of devotion, and more about devotion itself — the dreamlike state of True Faith, testimony from someone momentarily stood outside of themselves. Blinkers off, for better or worse.
That’s the price that we all pay,
our valued destiny comes to nothing.
I can’t tell you where we’re going,
I guess there’s just no way of knowing.
Despite the sense of abandon, growing up comes at a terrible cost in True Faith. References to adult relationships are dotted around the three verses. They bring fear, guilt, disappointment and failure. | https://medium.com/a-longing-look/a-sudden-sense-of-liberty-306b83fba4da | ['James Caig'] | 2015-11-03 09:30:28.519000+00:00 | ['Lyrics', 'New Order', 'Music'] |
It's all about Outliers | What causes the outliers?
Before dealing with the outliers, one should know what causes them. There are three causes of outliers: data entry or measurement errors, sampling problems, and natural variation.
1. Data entry/measurement errors
An error can occur while running an experiment or entering data. During data entry, a typo can record the wrong value by mistake. Let us consider a dataset of ages where we find a person whose age is 356, which is impossible. This is a data entry error.
These types of errors are easy to identify. If you determine that an outlier value is an error, you can fix it by deleting the data point, since you know it’s an incorrect value.
2. Sampling problems
Outliers can also occur while collecting random samples. Consider an example where we have bone density records for various subjects, and one subject shows unusual bone growth. After analysis, it was discovered that the subject had diabetes, which affects bone health. The goal was to model bone density growth in girls with no health conditions that affect bone growth. Since this data point is not part of the target population, we will not consider it.
3. Natural variation
Suppose we need to check the reliability of a machine. The normal process includes standard materials, manufacturing settings, and conditions. If something unusual happens during a portion of the study, such as a power failure or a machine setting drifting off the standard value, it can affect the products. These abnormal manufacturing conditions can cause outliers by creating products with atypical strength values. Products manufactured under these unusual conditions do not reflect your target population of products from the normal process. Consequently, you can legitimately remove these data points from your dataset.
Impact of the outlier
Outliers can change the results of the data analysis and statistical modeling. Following are some impacts of outliers in the data set:
It may cause a significant impact on the mean and the standard deviation
If the outliers are non-randomly distributed, they can decrease normality
They can bias or influence estimates that may be of substantive interest
They can also impact the basic assumptions of regression, ANOVA, and other statistical models
To understand the impact deeply, let’s take an example to check what happens to a data set with and without outliers in the data set.
Let’s examine what happens to a data set when an outlier appears. For the sample data set:
1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4
We find the following mean, median, mode, and standard deviation:
Mean = 2.45
Median = 2
Mode = 2
Standard Deviation = 1.04
If we add an outlier to the data set:
1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 400
The new values of our statistics are:
Mean = 35.58
Median = 2.5
Mode = 2
Standard Deviation = 114.77
As you can see, having outliers often has a significant effect on your mean and standard deviation.
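As a quick check, the standard library’s statistics module computes these summary statistics directly (stdev here is the sample standard deviation):

```python
import statistics as st

data = [1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4]
with_outlier = data + [400]

for label, d in (("without outlier:", data), ("with outlier:", with_outlier)):
    print(label,
          round(st.mean(d), 2),   # mean
          st.median(d),           # median
          st.mode(d),             # mode
          round(st.stdev(d), 2))  # sample standard deviation
```

A single added value of 400 drags the mean and standard deviation far away, while the median and mode barely move.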
Methods to Identify outliers
There are various ways to identify outliers in a dataset, following are some of them:
1. Sorting the data
2. Using a graphical method
3. Using the z-score
4. Using the interquartile range (IQR)
Sorting the data
Sorting the dataset is the simplest and most effective way to spot unusual values. Let us consider an example of an age dataset:
In the above dataset, we have sorted the ages and can see that 398 is an outlier. The sorting method is most effective on small datasets.
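As a tiny sketch (the ages are made up to mirror the 398 example), sorting pushes the suspicious value straight to the end:

```python
ages = [31, 25, 398, 27, 23, 34]  # hypothetical ages with one data entry error

print(sorted(ages))  # [23, 25, 27, 31, 34, 398] -- 398 immediately stands out
```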
Using graphical Method
We can detect outliers with the help of graphical representations like scatter plots and box plots.
1. Scatter Plot
Scatter plots often have a pattern. We call a data point an outlier if it doesn’t fit the pattern. Here we have a scatter plot of Weight vs height. Notice how two of the points don’t fit the pattern very well. There is no special rule that tells us whether or not a point is an outlier in a scatter plot. When doing more advanced statistics, it may become helpful to invent a precise definition of “outlier”.
2. Box-Plot
A box plot is one of the most effective ways of identifying outliers in a dataset. When reviewing a box plot, an outlier is defined as a data point that is located beyond the whiskers of the box plot, as seen in the box plot of bill vs. days. The box plot uses the interquartile range (IQR) to detect outliers.
Using z-score
Z-score (also called a standard score) gives you an idea of how many standard deviations away a data point is from the mean. More technically, it’s a measure of how many standard deviations below or above the population mean a raw score is.

Z score = (x - mean) / standard deviation
In a normal distribution, it is estimated that
68% of the data points lie between +/- 1 standard deviation.
95% of the data points lie between +/- 2 standard deviation.
99.7% of the data points lie between +/- 3 standard deviation.
Formula for Z score = (Observation - Mean) / Standard Deviation

z = (X - μ) / σ
Let us consider a dataset:
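The dataset in the original post was shown as an image, so here is a self-contained sketch with a made-up sample. For a sample this small the usual |z| > 3 cutoff can never trigger, so a threshold of 2 is used instead:

```python
import statistics as st

data = [10, 12, 13, 14, 14, 15, 30]  # hypothetical sample
mean, sd = st.mean(data), st.stdev(data)

z_scores = [(x - mean) / sd for x in data]
outliers = [x for x, z in zip(data, z_scores) if abs(z) > 2]

print([round(z, 2) for z in z_scores])  # 30 is the only value with |z| > 2
print(outliers)  # [30]
```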
Using the IQR interquartile range
The interquartile range (IQR) is just the width of the box in the box plot and can be used as a measure of how spread out the values are. An outlier is any value that lies more than one and a half times the length of the box from either end of the box.
Steps
1. Arrange the data in increasing order
2. Calculate the first quartile (Q1) and the third quartile (Q3)
3. Find the interquartile range: IQR = Q3 - Q1
4. Find the lower bound: Q1 - 1.5 * IQR
5. Find the upper bound: Q3 + 1.5 * IQR
Anything that lies outside the lower and upper bounds is an outlier
Let us take the same example as for the z-score:
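The worked example in the original post was an image, so here is the same idea as a self-contained sketch (the sample is made up; note that quartile conventions vary, and statistics.quantiles uses the 'exclusive' method by default):

```python
import statistics as st

data = [10, 12, 13, 14, 14, 15, 30]  # hypothetical sample
q1, _, q3 = st.quantiles(data, n=4)  # quartiles; q1 = 12.0, q3 = 15.0 here
iqr = q3 - q1                        # 3.0

lower = q1 - 1.5 * iqr  # 7.5
upper = q3 + 1.5 * iqr  # 19.5

outliers = [x for x in data if x < lower or x > upper]
print(outliers)  # [30]
```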
As you can see, the lower and upper bounds come out to 7.5 and 19.5, so anything that lies outside these values is an outlier. | https://medium.com/analytics-vidhya/its-all-about-outliers-cbe172aa1309 | ['Ritika Singh'] | 2020-08-31 04:59:45.735000+00:00 | ['Outliers', 'Data Science', 'Data Analysis', 'Visualization', 'Statistics']
The Man Who Predicted the Housing Market Crash Just ‘Went Short’ on Tesla | If there’s anything that I have learned in my investing journey thus far, it’s to ‘never bet’ against Elon Musk. Whenever skeptics went ‘short’ on Tesla, the vast majority of traders lost money betting against this “extraordinary billionaire visionary”.
In short Elon Musk can be compared to a ‘real-life Tony Stark’: “Genius, Billionaire, Playboy, Philanthropist”.
But Dr. Micheal Burry who is well known for his bets against big banks during the 2008 housing market crash has recently gone on Twitter to express his views about Tesla, and this clash of the titans has Wall Street in shackles.
According to a recent tweet, the veteran investor stated the following:
“So,@elonmusk, yes, I’m short $TSLA, but some free advice for a good guy… Seriously, issue 25–50% of your shares at the current ridiculous price.That’s not dilution. You’d be cementing permanence and untold optionality. If there are buyers, sell that #TeslaSouffle”
Image taken from Dr. Burry’s Twitter account on December 2nd, 2020
When taken at face value the tweet can simply be viewed as a clash of egos between two “gurus” who both know their game well. But besides the tweet, Dr. Burry had posted a spreadsheet that had details of all the leading auto manufacturers in the world alongside Tesla, to justify his hypothesis.
Dr. Burry’s main hypothesis was the fact that the revenue, EBIT, and total market capitalization for the 32 leading auto manufacturers were $2.28 trillion, $99 billion, and $806 billion, whereas for Tesla the revenue, EBIT, and total market capitalization were $24.5 billion, ($69 million) and $438 billion.
In a nutshell, if you see the numbers from a valuation perspective you are essentially paying roughly 50 percent more than Tesla to acquire a stake in all the other 32 auto manufacturers, and in the process, you’d have gotten 90 times more in revenue!
Shares of Tesla Inc. (NASDAQ:TSLA) plummeted by 7.4 percent early Wednesday morning but recovered slightly towards the end.
This is essentially what short-sellers benefit from!
In simple terms, when you are selling short a particular stock you are betting against the stock price going up. The simplest way to profit is to sell the shares at a high price to repurchase them later at a lower price, and the “short-seller” pockets the difference.
The one thing which needs to be kept in mind is that the shares that were sold short are, in most cases, borrowed from someone else. It is therefore essential to buy the shares back later so that the short position can be closed. If the shares continue to go up in value, the short-seller has to buy them back at the higher price and takes a loss.
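The arithmetic behind this can be sketched in a couple of lines (the share count and prices below are made up, and real trades also pay borrowing fees and margin interest):

```python
shares = 10           # hypothetical number of borrowed shares
sell_price = 600.00   # price when the short position is opened
buy_price = 550.00    # price when the shares are bought back

profit = shares * (sell_price - buy_price)
print(profit)  # 500.0 -- the short-seller pockets the difference

# If the price rises instead, the same formula goes negative:
loss = shares * (sell_price - 680.00)
print(loss)  # -800.0
```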
What we don’t know is how long Dr. Burry plans on keeping the short position ‘open’.
A few weeks prior to Tesla’s stock split, the share price rose to a whopping ~$1,100 (it surged even further before the stock split), taking the company’s market cap to an incredible ~$209 billion. This put Tesla roughly $6 billion ahead of Toyota back then, making it the most valuable automaker in the world!
Tesla’s massive ambitions were initially heavily reliant on the success of the Model-3, an affordable mass-market electric car that was supposed to revolutionize the auto industry.
Elon Musk had outlined this in his master-plan memo way back in 2006 —
“The strategy of Tesla is to enter at the high end of the market, where customers are prepared to pay a premium, and then drive down the market as fast as possible to higher unit volume and lower prices with each successive model. Step 1: Build a luxury electric sports car that could make some money. Step 2: Use the revenue from selling the sports car to create a more affordable luxury sedan. Step 3: Once the luxury sedan takes off, use proceeds from this sale to market and sell a truly affordable electric car that the masses could afford”.
Elon Musk bet the entire company’s future on this little dream due to which he even upgraded the manufacturing facilities and invested heavily in new factories. However, the significant delays in delivering the end-product to customers had produced a backlog in terms of both expectations and revenues later on.
In March 2016, when the company started accepting reservations for the Model-3, more than 325,000 bookings (roughly $14 billion in implied future sales) meant that the company had to ramp up its facilities even further to churn out more than 5,000 Model-3s a week just to break even. This, later on, gave rise to what Musk referred to as “production hell”.
As an article in Wired previously quoted —
Even Musk had conceded that the company’s fully automated factory vision, the “alien dreadnought,” wasn’t working. Workers ripped out conveyor belts inside the Fremont plant. Employees began carrying car parts to their workstations by hand or forklift and stacking boxes in messy piles. At one point, Musk halted production for an entire week to make repairs. On some level, Musk seemed to recognize that he was undermining Tesla. “Excessive automation at Tesla was a mistake,” Musk tweeted. “To be precise, my mistake.” He once told a colleague: “We just have to stop punching ourselves in the head.”
Observing the chaos Wall Street started aggressively shorting the stock back then, and it was more or less guaranteed to make traders a hefty profit by betting against Musk’s capabilities.
The same premise now seems to come back into play and when you combine that with the backlog of orders, Dr. Burry’s bet, and the email sent to Tesla employees regarding cutting costs, the possibility of Tesla stock getting “crushed like a souffle under a sledgehammer” does seem to make sense.
What will be interesting to see is whether investors actually manage to recognize the company’s potential as a leading battery manufacturer and alternative energy provider heading into the future, or whether they will continue to take a “short-sighted” trading approach based on daily price fluctuations.
With Goldman Sachs upgrading its price target for Tesla to $780 given its EV growth, Wall Street will be at a crossroads for the next few weeks. “To go long, or to go short?” That is indeed the question. | https://siahmed007.medium.com/the-man-who-predicted-the-housing-market-crash-just-shorted-tesla-2293595a9293 | ['S I Ahmed'] | 2020-12-23 10:26:56.360000+00:00 | ['Business', 'Investing', 'Finance', 'Money', 'Startup']
Make a grid-based game with Unity: dev-log day 1 | Grid-based Game: day 1
The first thing I’m going to do is generate a 2D grid in Unity.
I barely know how to do anything in Unity, so after installing it and starting a new 2D project with “placement-name,” I googled exactly what I wanted to do: “How to generate a 2D grid in Unity.”
I found this awesome tutorial.
The first thing I do when I find informative videos that solve my problem is to save them to a list. I have separate lists for every major project I work on so that I can reference them whenever I want.
Saving valuable tutorials is critical in this day and age because there is a ton of noise on the web that will misdirect you when you’re trying to learn technically difficult stuff like this.
Here are a couple of my favorite sources:
Sebastian Lague
Lost Relic Games
For this tutorial, I needed a picture of some kind to be my floor, so I used GIMP to make something absurdly simple. For those who don’t know, GIMP is basically Photoshop, except free and open source. I’ve used it a lot in the past, so I prefer it.
Here is what I made:
In order to keep development moving, I’m going to start every feature with the dumbest, simplest solution possible and fine-tune it later. This floor block is no exception.
This is also where I decided to make this a 32px game. Each grid cell, and therefore everything else in the game, is going to fit inside a 32-pixel box. This is kind of an aesthetic choice; I want it to feel kind of retro.
After following the tutorial, I ended up with this code:
There’s nothing fancy going on here, but there are some important fundamental concepts taught in this video. It teaches how to make a basic prefab and how to instantiate and reference game objects.
For those who are totally new, a GameObject is just that: an object in the game that has a lot of properties or other behaviors attached to it.
A prefab is like a reusable resource with some extra stuff attached that gives it a little more functionality than a simple PNG file. We don’t need a more intimate view than that right now.
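The code block itself didn’t survive in this copy of the log, so as a stand-in, here is a rough sketch of what a grid generator along the tutorial’s lines typically looks like. The class and field names (GridGenerator, tilePrefab) are my own placeholders rather than the tutorial’s, and the 0.32 world-unit spacing assumes Unity’s default 100 pixels-per-unit sprite import setting:

```csharp
using UnityEngine;

public class GridGenerator : MonoBehaviour
{
    public GameObject tilePrefab;   // the 32px floor tile, saved as a prefab
    public int width = 30;
    public int height = 30;

    const float tileSize = 0.32f;   // 32 px at the default 100 pixels-per-unit

    void Start()
    {
        // Instantiate one copy of the prefab per grid cell, parented under
        // this object so the hierarchy stays tidy.
        for (int x = 0; x < width; x++)
        {
            for (int y = 0; y < height; y++)
            {
                Vector3 pos = new Vector3(x * tileSize, y * tileSize, 0f);
                Instantiate(tilePrefab, pos, Quaternion.identity, transform);
            }
        }
    }
}
```

Drag the floor tile’s prefab onto the tilePrefab slot in the Inspector, press play, and the grid appears.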
Here is what I’ve ended up with. A 30x30 grid of my floor tile.
In the next log, I’m going to get a character set up and have the camera follow him around. | https://medium.com/dev-genius/make-a-grid-based-game-with-unity-dev-log-day-1-7feebfd400cd | ['Taylor Coon'] | 2020-12-28 16:54:42.673000+00:00 | ['Coding', 'Development', 'Game Development', 'Unity', 'Programming']
The Year Everyone Got Horny | The Year Everyone Got Horny
Leave it to months of social isolation to bring out peoples’ kinkier sides online
Photo: Igor Ustynskyy/Getty Images
Though it feels like light-years ago, last January gave us the debut of Gwyneth Paltrow’s vagina-scented candle, an electrifying hug between Brad Pitt and Jen Aniston, and an eerily portentous dating show called Love Is Blind. So with a lusty start like that, it should come as no surprise that 2020 has turned out to be one of the internet’s horniest years on record.
When the term “horny on main” first became a fixture of internet meme culture circa 2016, it was considered to be an embarrassing and pathetic quality bestowed upon those unable to keep their chaste public and filthy private lives separate on social media. When Ted Cruz’s Twitter account was caught liking a clip of incest porn, he was guilty of being horny on main. As was Pope Francis when his official Instagram account liked a picture of adult entertainer Natalia Garibotto dressed as a lingerie-clad schoolgirl in November. And who could forget the overly enthusiastic Beto O’Rourke supporter who penned a 2018 viral tweet comparing the presidential campaigns of Richard Ojeda and Michael Avenatti to “the guy who thinks good sex is pumping away,” while Beto is more like “the guy who is all sweet and nerdy but holds you down and makes you cum until your calves cramp.” But there’s nothing like months and months spent indoors exclusively in your own company to swing the needle of public opinion in favor of certain online sexual practices formerly deemed cringeworthy.
In mid-March, as the coronavirus swept the globe, single people everywhere were forced to sound the death knell on their romantic lives as love and sex moved online for the foreseeable future. So with all of our normal outlets for pent-up libidinal energy suddenly stripped from us, a very isolated public turned to the only forum they had left after consuming copious amounts of pornography: being openly, unabashedly horny on main.
Naturally, Caroline Calloway was an early adopter of the 2020 zeitgeist of being oversexed and underserviced. The scammer and social media star decided on April 1 to pin a full-frontal nude to the top of her Twitter feed, followed shortly by the launch of an OnlyFans account. The adult-content venture appears to have been a very lucrative one for Calloway, who reportedly earned over $100,000 this year, which she used to pay back the advance for that book she never wrote.
It’s hard to imagine another point in history where a song as sexually explicit and pro-female pleasure as “WAP” would even be released, let alone become the #1 song in the country.
After months of layoffs, unemployment, and sheltering-in-place, Calloway’s not the only one flocking to start their own OnlyFans. A number of high-profile influencers and celebrities like Chris Brown, Bella Thorne, Cardi B, and Michael B. Jordan have also joined the site over the last few months to share content ranging from sex tapes and charitable thirst traps to behind-the-scenes videos addressing the latest tabloid rumors.
Even as little as a decade ago, a celebrity signing a deal with an adult entertainment website would have caused a serious tabloid sensation. But it seems the country has slowly become, if begrudgingly, more permissive of A-listers’ sexual slipups and exploits. Social media has also done much to humanize these stars and influencers, making them feel like a friend instead of a public brand. Adding OnlyFans into the mix is simply another way for celebrities to turn their followers’ desire to have a more intimate relationship with them into cold hard cash. A paywalled platform of that size also gives celebrities the opportunity to dispel rumors, speak directly to their most die-hard fans, and express themselves more freely without fear of backlash in the press.
Regular folks have also increasingly pivoted their social media accounts into their own personal, much more public, OnlyFans, posting full nudes to raise funds for both themselves and various charities. According to Mashable, terms like “nudes” and “dick pics” tweeted alongside “coronavirus” jumped 384% on Twitter from the beginning of March to April, and the peach emoji spiked 46%.
With all of this newfound free time on our hands spent not making love, TikTok has also surged in popularity in America. The app has seen numerous trends shaped by the female gaze sweep the platform in the last several months, giving rise to video categories like Maid Tok, men whose entire schtick is chopping firewood or throwing clay while topless, and a flood of double-entendre-heavy content created by the floppy-haired, single-earring-bedecked young gentlemen who dominate the medium. “Accountant” TikTok has also been on the rise in 2020 — the neutral catchall profession used by a wide variety of sex workers to dole out tips about their vocation without getting flagged.
Given the litany of spectacularly shitty things that have transpired over the last 12 months, who are we to deny anyone even a modicum of joy?
However, all of the above was simply part of 2020’s grand plan to prepare us for the unveiling of the undeniable musical sensation that is “WAP.” It’s hard to imagine another point in history where a song this sexually explicit and pro-female pleasure would even be released, let alone become the #1 song in the country. The hit also helped usher Megan Thee Stallion and her particular brand of female sexual empowerment into the mainstream. As she told GQ in October, “Sex is something that it should be good on both ends, but a lot of times it feels like it’s something that men use as a weapon or like a threat. I feel like men think that they own sex, and I feel like it scares them when women own sex.”
Owning up to being extraordinarily horny on main, however, doesn’t always lead to these types of inclusive, culture-shifting conversations. This time of great sexual suppression has also resulted in plenty of people acting out and making some very questionable, if not outright bad, choices. Around May we got the unholy invention that is the A Bug’s Life fleshlight, and by October, Jeffrey Toobin had accidentally exposed himself to his colleagues after deciding he was overdue for a wank sesh mid-Zoom meeting.
But overall, our mass sexual repression and reappropriation seem to be rapidly pushing society in the general direction of increased sex-positivity. And given the litany of spectacularly shitty things that have transpired over the last 12 months, who are we to deny anyone even a modicum of joy? It’s never been clearer that everyone is simply doing the best they can while navigating historically dark circumstances. So if all it takes to keep you going is getting a little horny on main, go ahead and get your rocks off. | https://gen.medium.com/the-year-everyone-got-horny-7fd8692bfa0b | ['Emily Kirkpatrick'] | 2020-12-22 06:33:15.080000+00:00 | ['Social Media', 'Sexuality', 'Culture', 'Onlyfans', 'Music'] |
The Next Black Mirror Movie | There’s an episode of the UK TV show “Black Mirror” called Be Right Back, where a grieving widow uses an online service that builds an AI profile of her deceased husband. It texts her, talks to her on the phone, and such, and eventually she orders this AI of her dead husband installed into a robot. The robot emulates her husband in every way, even sexually. But in typical Black Mirror style, it gets creepy and the episode focuses on the idea that we can have emotional attachments and interpersonal relationships with machines, and such. In some ways, it’s a bit like the article I wrote here, about my experiences monitoring and shutting down my wife’s social media accounts after she passed away:
This Black Mirror episode strikes me very deeply because I would have the same conundrum as the main character, and I don’t know what I would do in that conundrum. I don’t think I’d buy the AI now, but I probably would have then.
As I think more about the plot of that Black Mirror episode, and I look at the state of both the internet around me, the world at large, and the state of the modern human condition, I think the episode missed a very important conclusion. If mankind had that technology, we as a species probably wouldn’t be using it to assuage our anguish over losing a relative like the main character did, or like I might have. Mankind would use it to fuck.
Fuckbots and Infidelity
Follow me through a thought experiment, where that particular Black Mirror universe was real. The company has a product where they can take social media, emails, voicemails, and all the collective information gathered about us by Google or similar megacorporate information gathering agencies, and can emulate humans to a nearly precise degree. They obviously wouldn’t market this product to widows and widowers. There aren’t enough of us. They’d market it to everyone. And the buyers wouldn’t be emulating their loved ones, lost or living. They’d be emulating themselves.
It would be nice to have a robot that answered the door at your house, that looked like you, acted like you, and managed the house the same way you would. It would be nice to have a robot that did the dishes. It would be nice to have a robot that did your work for you. But the robot in Black Mirror also had sex. If every person in the world who ever had an urge to have an extramarital relationship also had the ability to replace themselves with a robot so good at emulating them that their partner wouldn’t even notice, they’d never get caught. They could drop the robot into their lives, sneak out, and have all the extramarital sex they liked, if they were the kind of person who was so inclined.
But what if their partner was doing the same thing?
And this is where Black Mirror really missed an opportunity to make a deeper, weirder, stranger episode. Herein, witness my pitch for the overarching storyline of the next full-length Black Mirror Movie feature film, to hit theaters whenever they open from Covid.
Do Androids Dream of Electric Dildos?
Our main character, Adam, is a husband driven to cheat by his wife Evelyn’s disaffection. He connects with a woman online, Satin, meets her at a coffee shop, and is hesitant about crossing the line of infidelity. Satin assures him that it will be safe, he just has to buy a synthetic replacement for himself, by the Forbidden Fruit Corporation (we can’t call it Apple unfortunately for trademark reasons) which will replace him while he’s seeing Satin on their affair. He does so, and uploads all his relevant information into the Forbidden Fruit database. It generates an AI to install into the robot that will emulate him completely. The Forbidden Fruit End User License Agreement reserves the right to retain any information he provides, and he doesn’t read it because nobody reads those things. Mission successful, bot is in place, and he partakes in a sordid sexual encounter with Satin while Evelyn is none the wiser. Everybody loves love scenes. Put some throwbacks to Fatal Attraction in the backdrop.
Adam returns home, wracked with guilt, but tries to go about his daily routine with Evelyn. Eventually he starts noticing suspicious things about Evelyn’s behavior, things that just don’t seem quite right. He talks with Satin about them, because he now suspects that his wife may be cheating on him, and using a robot to cover her own infidelity. But he struggles with how to confront her about it, or even if he should considering he is now a philanderer. Satin does her best to calm him, assuring him that he is just imagining things, and invites him out for coffee. Adam again replaces himself with the robot, meets Satin, they talk, things get sexy, and they return to Satin’s apartment and we have the movie’s next love scene.
This time, though, the scene is a split scene, because Adam’s replacement robot is having sex with Evelyn, which makes the scene twice as hot, as well as twice as weird as the camera bounces back and forth between each bedroom.
Now we pivot completely over to Evelyn, which the movie has not focused on much to this point, as she has a conversation with her workout friend Lilith at her spin class. Evelyn expresses disaffection in her own marriage, and Lilith recommends she buy a Forbidden Fruit robot so she can go “find herself.” She does so, goes through the same motions Adam went through in an abridged fashion, installs her robot into their marriage and sneaks off to hit the dance clubs. The scene as she’s leaving shows the robot interacting with Adam, but it’s the same scene the movie opened with from a different camera angle.
The viewer draws multiple implications at this point. Adam’s original disaffection with Evelyn may not have been with Evelyn at all, perhaps it was with a robot. And the “twice as hot” scene above involved a sexual encounter not between a woman and a robot, but between two robots.
Next, we see Lilith and Satin having a glass of wine at a bar, talking about their weekly conversion ratios and their Forbidden Fruit stock options. Towards the end of this conversation, it becomes clear that both are Forbidden Fruit Marketing Robots.
You get the idea I think. Turtles All The Way Down and such. Any talented writer could carry this to wherever they wanted to go. It could turn into a dystopian nightmare where the entire human race is systematically replaced by fuckbots because we failed to control our primal urges and allowed the technology of gratification to consume our minds. We could discover that Forbidden Fruit is in fact run by an AI, the Grandmaster Fuckbot of them all. We could see Adam and Evelyn both discover the terrible truth and go on an action filled spree of violent terror against their fuckbot overlords like in “They Live.” Perhaps we discover that Adam and Evelyn are in fact the only two remaining humans, and that the entire world is being acted out in a strange script by machines simply doing what we, the humans who so foolishly bit from the tree of knowledge, created to replace us. Perhaps Adam and Evelyn are reunited in their violent reign of terror against Forbidden Fruit and move off to an island to try and make the first new human baby of the new world.
Date Movie
Let’s extend our thought experiment and pretend you took your boyfriend or girlfriend to see this movie on your first date, and that your date is an intelligent, interesting person, and clearly not a fuckbot. You go for drinks afterwards.
Your conversation opens with the obvious “fuckbots are evil” lesson; they are fruits of thousands of years of technology flowing from Eve biting the Apple of Knowledge. A harder lesson may be that gratification of primal urges through technology is itself unhealthy. That perhaps traditional value systems which suppress the gratification of primal urges succeed in building societies that work, where gratification-based value systems fail on some historical “axis of approximated metaphorical fuckbot.” An even tougher one may be that automation itself is evil, because it knows no bound, and somewhere within that bound of automation lies the entirety of human experience. Once the entire experience is automated, nothing remains. That our addiction to technology is itself an addiction to becoming steadily less human, until there are no humans left.
And then you and your date decide to go to church together. | https://medium.com/handwaving-freakoutery/the-next-black-mirror-movie-75bc8688eb64 | ['Bj Campbell'] | 2020-09-18 19:07:24.078000+00:00 | ['Sex', 'AI', 'Culture', 'Random'] |
9 Best Keto Blogs To Follow in 2020 | Image source: Pexels
9 Best Keto Blogs To Follow in 2020
Here are some of the best keto blogs to educate, inspire, and motivate at every stage of your keto diet journey.
How to Decide Which Keto Blogs to Follow
First, think about what you’re hoping to gain from keto blogs. Are you only looking for recipes? Or are you looking for authoritative information about the ketogenic diet? Deciding this matters, because there’s no point in wasting your time navigating blogs that don’t offer the kind of info you need to make your diet work.
Once you’ve decided what your priority is, make sure the blogs you’re following offer quality content. But how do you decide what counts as “quality?”
Say, for example, you’re looking for keto diet recipes. In that case, you first need to familiarize yourself with the basics of keto nutrition and check if the recipe blog you’re looking at follows these rules. You also want to make sure that their recipes have nutrition information, preferably where you can see the macros for each serving of the meal. An additional plus is any blog that has a YouTube channel — demonstration videos make keto cooking easy for keto beginners.
But if you just need reliable information about the ketogenic diet, choose blogs with content written or reviewed by medical professionals. This can prevent you from falling prey to misinformation.
Some blogs offer both, which can help you get the best of both worlds: loads of keto diet info with tasty keto recipes.
9 Best Keto Blogs
Below are some of the most informative and authoritative keto diet blogs on the Internet. Most of these offer both informative and keto cooking content. They’re listed in random order.
1. KetoConnect
If you’ve ever researched the ketogenic diet on YouTube, there’s a good chance you’ve come across KetoConnect. It’s one of the most popular and fastest-growing keto channels, with 800,000+ subscribers. The couple Megha and Matt own both this channel and a blog of the same name, KetoConnect. They’re both experienced keto dieters, and you’ll find their content informative, useful, and, above all, fun.
They share everything you may want to know about the keto diet, including lots of delicious recipes on their YouTube channel and their blog. The biggest plus about this blog is the connection between the owners and their audience (you).
2. PerfectKeto
PerfectKeto is another popular keto blog, owned by Dr. Anthony Gustin, a former sports rehab clinician. Like KetoConnect, this blog covers the A to Z of everything you may want to know about the ketogenic diet.
One of the major perks of this blog is that all the content there is written by health care professionals and experienced health writers, which means you’re looking at reliable info. Another plus is that they have a private 26,000+ member keto community on Facebook where you can interact with other newbie and experienced ketoers.
3. Ruled.me
Ruled.me is a pioneer in informative keto content. It is one of the first keto diet blogs to offer comprehensive guides for the ketogenic diet. It was started by Craig Clarke, who shares his 12+ years of first-hand experience with low-carb living. All content is written and reviewed by experienced writers and health professionals.
Ruled.me consistently receives millions of readers each month. The blog features in-depth guides on almost every burning topic in the keto sphere. They also have an extensive collection of keto recipes and over 300,000 social media followers.
4. LowCarbYum
As the name implies, Low Carb Yum is a blog dedicated to tasty keto recipes. From low carb takes on classic dishes like pot pie and pizza to truly unique keto recipes like fat bombs, Low Carb Yum has you covered.
The recipes here are ordered by course, diet, and cooking method. This blog also features gluten-free, dairy-free, paleo, vegetarian, and many other options. Each recipe has easy-to-understand step-by-step instructions, complete with high-definition pictures to make cooking easy and fun. The recipes on this blog have been featured on popular websites such as Shape, Women’s Health, and Men’s Health. Like many popular blogs, Low Carb Yum also has a YouTube channel with over 50,000 subscribers and over 1.7 million followers on Facebook. Follow this blog on social media to get their latest recipes.
5. KissMyKeto
Long-time friends Alex and Michael created KissMyKeto after struggling to find ketogenic products that truly met their needs and expectations. Their focus, as a result, was to develop and distribute high-quality keto supplements and other products like keto snacks, keto broths, ketone meters, and more.
Besides that, they want to offer keto dieters content they can trust. To make that possible, they have teamed up with talented writers and medical professionals such as doctors, dietitians, and nutritionists who review every article published on their site. On the KissMyKeto blog, you’ll find the latest information on the science of keto as well as practical tips on how to make the diet work — this also includes a recipes section. KissMyKeto also has a dedicated YouTube channel, Instagram account, Facebook, and more.
6. Healthline
This might come as a surprise, but Healthline deserves a special mention as a comprehensive and reliable source of keto information.
Healthline is a powerful authority website that receives over 90 million page views per month, and it ranks number 1 on Google for many ketogenic diet topics. All of the content published on this site is written and reviewed by experienced medical professionals. This means it can be (and probably already is) your go-to place for reliable information about the keto diet and more. Use search engines or the website’s own search bar to look for keto topics. You can also follow them on social media like Facebook and Instagram.
7. DietDoctor
DietDoctor was founded by Swedish family physician Dr. Andreas Eenfeldt in 2011. He stopped working as a physician in 2015 to dedicate himself full-time to his website. Dr. Eenfeldt’s goal is to revolutionize how people look at nutrition.
DietDoctor is one of the oldest and one of the most trustworthy sites covering keto diet and nutrition information. Medical doctors and experts write all the articles on the site’s blog. DietDoctor has over 600 000 followers on social media and has a large collection of comprehensive keto guides and recipes.
8. Ketogasm
If you’re a woman who’s looking for a blog that’s as informative as it is entertaining, this is the place to go. Ketogasm is a website dedicated to helping women succeed on keto, as explained on its “About us” page. It was founded by Tasha, a talented nutrition educator and author who started following the ketogenic diet to lose weight and treat her PCOS. She shares everything she’s learned about the keto diet on her blog.
Besides informative articles, Ketogasm also has a free course, meal plans, a keto calculator, recipes, and everything a woman would need to get started on the keto diet. You can also read Tasha’s book of guidelines and recipes and listen to her podcast, both linked from the website.
9. Headbanger’s Kitchen
Headbanger’s Kitchen is a highly popular blog with a fast-growing YouTube channel, as evidenced by its 400K+ subscribers. The blog and channel are both dedicated to keto-friendly recipes. The creator behind them is Sahil, a metal musician who shares his love of both keto and metal music on his blog. If you’re someone similar, you’ll definitely enjoy Sahil’s creativity.
Where the keto diet is concerned, his blog features many unique keto diet recipes — some being keto versions of classic Indian, Japanese, Hungarian, Chinese, and international cuisine in general. The blog has easy-to-understand step-by-step instructions for all the recipes featured. Also featured on his blog are informative keto resources, personal vlogs, and merchandise.
Takeaways
Following the keto diet can be difficult for many beginners. That’s where blogs and other websites can help with informative content and easy-to-follow guides. When navigating the endless scope of keto diet information on the internet, it’s sometimes best to choose a couple of reliable sources for all your keto info and recipes.
The blogs listed here are some of the best out there, offering expert-reviewed content, creative keto-proof recipes, and even personal stories to help get you inspired and keep you motivated. | https://medium.com/age-of-awareness/9-best-keto-blogs-to-follow-in-2020-6fc545142912 | ['Hana Hamzic'] | 2020-03-31 20:58:41.373000+00:00 | ['Health', 'Fitness', 'Keto', 'Blog'] |
Hide or Show Floating button on Scroll in Flutter | So let’s start,
Initialize the required variables in the State object/class:
Call initState() method
This method is called once at the beginning, when the widget is created; the framework calls it exactly once for each State object it creates. That makes initState() the best place to implement behaviors that should occur before the widget builds. So in the initState() method, I call the handleScroll() function, which is responsible for updating the isScrollingDown variable whenever the user scrolls the ListView in the reverse (down) direction.
Call dispose() method
This method is called when the State object is removed permanently, meaning the framework calls dispose() when the State object will never build again. After dispose() has been called, we no longer have the ability to call setState(). That makes dispose() the best place to remove the scroll listener. Put the dispose() method after the initState() method.
Create methods to hide and show the floating action button
Whenever the user scrolls in the reverse (down) direction we should hide the floating action button, so hideFloationButton() is called. Likewise, whenever the user scrolls forward (up) we should show the floating action button, so showFloationButton() is called. Inside both methods we change the value of the _show variable; since the floating action button’s visibility depends on _show, changing it is what hides or shows the button.
Implement the method to handle the scroll
Inside this function we add the scroll listener, so that whenever a scroll occurs the listener triggers. This is the main operation that fulfills our requirement of hiding and showing the floating action button on scroll. When the user scrolls in the reverse direction we call hideFloationButton(), so the floating action button disappears; when the user scrolls forward we call showFloationButton(), so the floating action button appears.
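The embedded code blocks didn’t survive in this copy of the article, so here is a rough sketch of how the pieces described above fit together in a State class. Only hideFloationButton(), showFloationButton(), handleScroll(), isScrollingDown, and _show come from the text; the widget name, list contents, and other details are illustrative stand-ins:

```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart'; // for ScrollDirection

class MyList extends StatefulWidget {
  const MyList({Key? key}) : super(key: key);

  @override
  State<MyList> createState() => _MyListState();
}

class _MyListState extends State<MyList> {
  final ScrollController _scrollController = ScrollController();
  bool isScrollingDown = false;
  bool _show = true; // the FAB's visibility depends on this

  @override
  void initState() {
    super.initState();
    handleScroll(); // register the listener before the first build
  }

  void showFloationButton() => setState(() => _show = true);

  void hideFloationButton() => setState(() => _show = false);

  void handleScroll() {
    _scrollController.addListener(() {
      final direction = _scrollController.position.userScrollDirection;
      if (direction == ScrollDirection.reverse && !isScrollingDown) {
        isScrollingDown = true;
        hideFloationButton(); // scrolling down: hide the FAB
      } else if (direction == ScrollDirection.forward && isScrollingDown) {
        isScrollingDown = false;
        showFloationButton(); // scrolling up: show the FAB
      }
    });
  }

  @override
  void dispose() {
    _scrollController.dispose(); // also releases the listener
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      floatingActionButton:
          _show ? FloatingActionButton(onPressed: () {}) : null,
      body: ListView.builder(
        controller: _scrollController,
        itemCount: 50,
        itemBuilder: (_, i) => ListTile(title: Text('Item $i')),
      ),
    );
  }
}
```

The build() method at the end is what wires the pieces together: the FloatingActionButton is only built while _show is true.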
Build the widget | https://nuwanthafernando95.medium.com/hide-or-show-floating-button-on-scroll-in-flutter-636d660ff9fb | ['Nuwantha Fernando'] | 2020-05-06 05:43:25.775000+00:00 | ['Application Development', 'Flutter', 'Mobile App Developers', 'Development', 'Software Development'] |
InfoViz Week 4 | InfoViz Week 4
Data-Ink, Chartjunk and Graphical Design
The concept of data-ink, though apparently obvious, was highly illuminating due to its simple formulation that emphasizes the visual balance of data vs. everything else. Never before had I thought about, say, grids being visual clutter, most likely because (as Tufte reasons) those are quite helpful when plotting by hand, but not in print. The example with null data-ink ratio also made it resoundingly clear that trend lines may not accurately reflect the data (since not a single point necessarily lies on one). Nonetheless, I have quite a few objections this time around, beginning with a want for unambiguous principles. For example, the oft-used phrase “within reason” is subject to interpretation and has not been substantiated by a framework of reasoning that could be applied to a wide range of graphics. Next, he argues that portraying symmetric pieces of information is redundant, yet without any convincing study of how people perceive a graphic after a particular half is trimmed. There is only a fleeting mention elsewhere that eventual familiarity with “new” designs (such as the half face) will make them seem just as reasonable. Similarly, when erasing or consciously repeating data representation (an example of the latter being the display of surface ocean currents), what is a good metric to decide the extent to which one must erase or repeat? A more formal treatment describing what is jarring (or conducive) would really help here.
Moving on, we are given a look at the various ways in which injudicious use of ink leads to the equivalent of fashion disasters in data graphics. At first, the cynicism was exhausting, but the dizzying optical art made it seem rather restrained! In recent times, though, with the widespread popularity of “flat” design, we encounter chartjunk much less often, which is an interesting scenario of more general design sensibilities having a positive impact on visualization. Tufte also proposes ways to make grids subdued yet more effective, which is curious and even surprising, as the variation of color (even shades of gray) and line thickness is omitted from previous chapters on graphical excellence, probably out of spite for the aesthetic. Again, while the improvements are visible to me, a more concrete rationalization of, say, contrast perception among people would drive the point home. Finally, I’m probably guilty of making a few duck-graphics myself, as a child with newfound access to plotting tools on a computer and the inclination to decorate excessively. There is, however, no excuse for when such things find their way into professional publications, and that has been well articulated and affirmed by Tufte.
Rounding up the principles proposed in the previous chapters, Tufte tries to create better iterations of existing data graphics. My impression of the redesigns is simply this: is it a matter of taste, or can we find a basis in more precise theories (of perception, etc.)? I would do some redesigns differently, or not change them as much. A recurring pattern is that in the concluding paragraphs Tufte takes a milder stance; in this case, he calls the exercise an “experiment”, conceding that a notion of beauty underlies even statistical graphics. | https://medium.com/cogs220/infoviz-week-4-dbd859b8eaf8 | ['Kandarp Khandwala'] | 2016-10-17 02:41:50.263000+00:00 | ['Data Visualization', 'Design'] |
Building A Mental Model for Backpropagation | As the beating heart of deep learning, a solid understanding of backpropagation is required for any deep learning practitioner. Although there are a lot of good resources that explain backpropagation on the internet already, most of them explain from very different angles and each is good for a certain type of audience. In this post, I’m going to combine intuition, animated graphs and code together for beginners and intermediate level students of deep learning for easier consumption. A good assessment of the understanding of any algorithm is whether you can code it out yourself from scratch. After reading this post, you should have an idea of how to implement your own version of backpropagation in Python.
Real-valued circuits, and a force of correction
Mathematically, backpropagation is the process of computing gradients for the components of a function by applying the chain rule. In the case of neural networks, the function of interest is the loss function. I like the interpretation by Andrej Karpathy in CS231n: take the compute graph as real-valued circuits with logic gates. The gates are the operations in the function, e.g. addition, multiplication, exponentiation, matrix multiplication, etc.
This is a great mental model because it means backpropagation is a local process. Every gate in the circuit can compute its output and the local gradient without knowing the big picture.
During the backward pass (backpropagation), the gate applies the chain rule, i.e. taking the gradient of its output with respect to the final output of the circuit and multiplying it by the local gradients with respect to all of its inputs. The backward pass can be implemented with a recursive approach that traverses from the output of the circuit back to all the inputs.
Intuitively, the final effect of backprop and its associated weight updates is that the circuit “wants” to output a value that’s closer to whatever target value we have. Take the gradient of the add gate (-4) in the graph above as an example, it means that changing q by +1ε will result in a change of -4ε in f. If we’d like a higher f, we could make q lower. This is essentially what a gradient is. People sometimes call it “sensitivity”. Another great analogy is a force of correction. The sign of the gradient indicates the direction of correction, and the magnitude indicates the strength.
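The circuit figure isn’t reproduced here, but the numbers match the classic CS231n example f = (x + y) * z with x = -2, y = 5, z = -4; here is a minimal hand-worked sketch of the forward and backward passes (an illustration, not library code):

```python
# Forward pass through a tiny circuit: f = (x + y) * z
x, y, z = -2.0, 5.0, -4.0
q = x + y   # add gate: q = 3
f = q * z   # multiply gate: f = -12

# Backward pass: chain rule from the output back to every input.
df_dq = z            # local gradient of the * gate w.r.t. q
df_dz = q            # local gradient of the * gate w.r.t. z
df_dx = 1.0 * df_dq  # the add gate routes the gradient through unchanged
df_dy = 1.0 * df_dq

print(df_dq, df_dx, df_dy, df_dz)  # -4.0 -4.0 -4.0 3.0
```

The -4.0 on q is exactly the “force of correction” described above: nudging q up by ε pulls f down by 4ε.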
From function to compute graph
One of the best ways to visualize backprop is to draw the compute graph of the function. Let’s look at this weird function below to demonstrate how to draw its compute graph and then do backprop on it manually. (σ() is the sigmoid function)
To calculate its gradients, we can decompose it into add, sigmoid, square gates as shown in the animated steps below:
Concretely, the process consists of 3 high-level steps
1. Build the compute graph from operations (gates)
2. Run the forward pass through each operation
3. Run the backward pass based on (1) the values computed in the forward pass, and (2) the backward function for each gate to calculate their local gradients
You can follow along and manually calculate the values. I’ll show how to implement it in the last section, but now let’s look at a trick that will help us simplify the process.
Staged computation
Any kind of differentiable function can act as a gate, and we can group multiple gates into a single gate whenever it is convenient.
Since we should never explicitly solve for the gradients analytically, the selection of these function components becomes a problem to consider. Take the sigmoid function for example:
We can decompose it into add, multiply, negate, exponentiate, reciprocal gates individually as shown in the following compute graph:
A simple sigmoid already has so many operations and gradients, it seems unnecessarily complex. What we could do alternatively is just to have one sigmoid gate applied to the output of the red box, along with the function that calculates its gradient. The gradient of the sigmoid is very simple: σ′(x) = σ(x)(1 − σ(x)).
This way we avoid a lot of unnecessary computation. It saves us time, space, energy, makes the code more modular and easier to read, and avoids numerical problems.
The code for the sigmoid gate could look something like:
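The embedded snippet didn’t survive here, so below is a hedged sketch of what such a grouped sigmoid gate might look like (the class and method names are illustrative, not the article’s exact code):

```python
import math

class SigmoidGate:
    """A 'grouped' gate: one forward output plus one local gradient."""
    def forward(self, x):
        self.out = 1.0 / (1.0 + math.exp(-x))
        return self.out

    def backward(self, dout):
        # Local gradient s * (1 - s), chained with the upstream gradient dout.
        return self.out * (1.0 - self.out) * dout

gate = SigmoidGate()
s = gate.forward(0.0)    # 0.5
dx = gate.backward(1.0)  # 0.25
```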
It has both the forward pass output and the function to compute the local gradient for the backward pass in just a few lines of code. In the next section, I will fit this into the bigger picture and show how to write a mini autograd library with such components.
Autograd: a recursive approach
To let a computer calculate the gradients for any function expressed as a Directed Acyclic Graph (DAG) using the chain rule, we need to write the code for those 3 high-level steps mentioned before. Such a program is often called auto-differentiation or autograd. As you’ll see next, we can structure our code into a Tensor class that defines the data and operations, so that it can not only support building dynamic compute graphs on the fly, but also backpropagate through them recursively.
Forward pass: building the graph
Code from Karpathy’s micrograd
A Tensor object has data, grad, a _backward() method, a _prev set of Tensor nodes, and an _op operation. When we execute an expression, it builds the compute graph on the fly since we have overridden the Python operators such as +, * and ** with customized dunder methods. The current Tensor's _backward(), _prev and _op are defined by its parent(s), i.e. the Tensor(s) that produced it. For example, the c in c = a + b has the _backward() defined in __add__, with _prev = {a, b} and _op = '+'. This way we can define any operation we want and let Python construct the graph.
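As a sketch of that machinery (modeled on micrograd’s design, but simplified and not the verbatim source), two overridden operators are enough to see the graph being built:

```python
class Tensor:
    def __init__(self, data, _prev=(), _op=''):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # set by the op that produced this node
        self._prev = set(_prev)
        self._op = _op

    def __add__(self, other):
        out = Tensor(self.data + other.data, (self, other), '+')
        def _backward():
            self.grad += out.grad   # the add gate passes the gradient through
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, (self, other), '*')
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

a, b = Tensor(2.0), Tensor(3.0)
c = a + b  # Python calls __add__, which records _prev and _op on the fly
print(c.data, c._op, c._prev == {a, b})  # 5.0 + True
```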
Here we are talking about neural networks, so the expression we care about is the loss function. Take the MSE loss for example (using 1 scalar data point for simplicity), MSELoss = (w*x - y)**2 where w , x and y are Tensor objects and are initialized to be 3, -4, and 2 respectively. The graph is then automatically constructed as:
Notice that subtraction is actually negation and addition. I named the intermediate nodes for illustration purposes only. With the graph ready, we can implement backprop!
Backward pass: topological sort
Code from Karpathy’s micrograd
backward should compute gradients one by one starting from the current Tensor node and move to its ancestors. The order of the traversal needs to be topologically sorted to make sure the dependencies are calculated at every step. One simple way to implement this topological sort is a depth-first search. Here's an animation to show how it's executed for the MSELoss example.
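Put together with minimal + and * ops so it runs standalone, the DFS-based backward pass described above might look like this (a sketch in the spirit of micrograd, not the exact source):

```python
class Tensor:
    def __init__(self, data, _prev=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_prev)

    def __add__(self, other):
        out = Tensor(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort with a DFS so every node runs after its dependents.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()

a, b, c = Tensor(2.0), Tensor(3.0), Tensor(4.0)
f = (a + b) * c
f.backward()
print(a.grad, b.grad, c.grad)  # 4.0 4.0 5.0
```

For f = (a + b) * c the gradients come out as a.grad = b.grad = c.data and c.grad = a.data + b.data, matching the chain rule by hand.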
This is graph traversal 101: a plain old DFS. If you have a big complex neural net you just replace the upper left corner that is wx with your big graph, and the DFS will go there and compute the gradients, provided that all the operations and their _backward are defined in the Tensor class. Some of the obvious operations we need here include max , sum , matrix multiplication, transpose, etc. To use the gradients to update your weights, do:
for p in <your_parameters>:
p.data -= learning_rate * p.grad
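One habit worth adding (my suggestion, not from the article): validate any hand-rolled autograd against a numerical gradient. For the MSELoss example above with w = 3, x = -4, y = 2, the analytic gradient is dL/dw = 2(wx - y)x = 112:

```python
def numerical_grad(f, x, eps=1e-6):
    # Central difference: exact for quadratics up to floating-point error.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

loss = lambda w: (w * -4.0 - 2.0) ** 2    # MSELoss with x = -4, y = 2
analytic = 2 * (3.0 * -4.0 - 2.0) * -4.0  # chain rule by hand at w = 3: 112.0
assert abs(numerical_grad(loss, 3.0) - analytic) < 1e-3
```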
There you have it, the autograd algorithm in a few lines of code. It is the backbone of any deep learning framework. Now that the hard part is behind us, to complete the implementation of your mini deep learning framework, you just need to implement a model interface with a collection of layer types, loss functions, and some optimizers.
Recommended readings
This post is heavily inspired by Andrej Karpathy’s awesome CS231n lectures and beautifully written micrograd: tiniest autograd engine. If you would like to take a look at a different and more extended version of a DIY deep learning framework that closely resembles PyTorch, check out the one implemented in Grokking Deep Learning by Andrew Trask. If you prefer diving into PyTorch’s autograd directly instead, Elliot Waite has a great video on Youtube which will save you a lot of time digging into the source code. Keep on learning!
References | https://towardsdatascience.com/building-a-mental-model-for-backpropagation-987ac74d1821 | ['Logan Yang'] | 2020-10-27 05:46:54.388000+00:00 | ['Machine Learning', 'Python', 'Backpropagation', 'Deep Learning', 'Algorithms'] |
Four Things You Probably Don’t Know About the Accordion | The first thing you probably don’t know about the accordion is that the whole world loves it. At least, that was the first thing I learned at the 2011 International Accordion Festival in San Antonio, Texas, which featured bands from every non-Antarctic continent playing more than a dozen completely different styles of accordion music. There were the German-style polka players, sure, but there was also zydeco and klezmer, Irish music and Cape Verdean music, Danish music by way of Iowa. Seriously, the whole world loves the accordion.
I was at the festival with my boyfriend Niko and his mom Sarah, both of whom are musicians who had played with some of the bands performing there. And while I’m on the universal appeal of the accordion, I should mention that they started in on the foreign language with the first band we saw, a Tejano group called Albert Zamora y Talento. When I say “foreign language,” I don’t mean Spanish — I mean that impenetrable language spoken by people who actually know about music.
Zamora is a Grammy-nominated artist from nearby Corpus Christi, so when he took the stage, the crowd went wild. They went wild again every time he pulled some showman stunt like playing the accordion behind his back. I was in awe — the dude was shredding on the accordion — but my boyfriend and his mom rolled their eyes. “It would be impressive if he modulated up half a step,” said Sarah.
“Or changed chords,” said Niko.
“I definitely know that those words have to do with music,” I said, quietly, to myself, in my head.
Of course, this all changed when Zamora and his band members, who were perhaps a bit on the husky side, started doing coordinated dances involving a lot of thrusting hips. Then Sarah slapped her knees with delight. “This isn’t an American thing,” she said. “Big, fat, shambling guys trying to look sexy — and it’s working!” To be fair, they weren’t that big and fat, and the hip thrusts demonstrated pretty conclusively that they weren’t shambling. It was, however, working.
***
Let’s back up for a moment, because the second thing you probably don’t know about the accordion is what it even actually is. The simplest definition of an accordion, I learned, is that it’s a free-reed instrument. A reed is the same basic deal as the mouthpiece of, say, a saxophone, except with an accordion, instead of blowing into the reed with your mouth, you blow air over it with the bellows (known in the biz as the “crinkly in-and-out part”). To play a melody, you press buttons on the side(s) of the accordions.
Lots of other free-reed instruments, members of the extended accordion family, showed up at the festival. For example, the frontman of the Hector del Curto Tango Quartet played not the accordion but the bandoneón. (Niko, regarding the quartet’s supermodel-looking Juilliard-trained cellist: “It’s every chubby composer’s dream to poach a beautiful cellist from Juilliard.” His mom: “Maybe every chubby composer should learn to play the bandoneón like that.”)
For another example, a group called Riyaaz Qawwali, which plays Sufi devotional music, used the harmonium rather than the accordion. They played the type of ambiguously worded love songs that can be sung either to God or to your main squeeze. (Accordion? Squeeze? I don’t know. I tried.) Translations of song titles included “‘My Eyes Are Thirsty For You’…but not in a creepy way.”
***
The third thing you probably don’t know about the accordion is that not many women play it. At the festival, there was a female vocalist, a female guitar player, the aforementioned female cellist, and that was … it. Granted, that’s a problem throughout the music industry, but it was still disappointing — especially given the presence of my boyfriend’s mom, who plays multiple instruments and studied accordion with one of the festival’s featured artists. Put her onstage!
I’m not saying the lack of women was definitely part of a worldwide patriarchal conspiracy to ensnare us all in male-dominated religions, BUT I’m just SAYING we were told there would be a Mass with mariachi music (because, accordions) at a local Catholic church, and when we went, there was no mariachi music. And then we just had to sit through a whole Mass. In Spanish. I mean, it’s cool, I learned that I have to wear my vestido de fiesta when God invites me to his banquete, but I feel like we could have avoided some awkwardness if the Catholic church didn’t lie about mariachi music or prevent women from being priests and getting abortions and performing at the 2011 International Accordion Festival.
***
The fourth thing you probably don’t know about the accordion is that, as the emcee put it, “accordion music is dance music.” It was true; by the second day, Sarah was wishing she had a capable dance partner. “Maybe I should place a personal ad,” she said. “‘Large Determined Woman Wishes to Dance.’”
Ironically, the only performances I attended where I didn’t see any dancing were the Lammam Group’s. The Lammam Group is three brothers who play classical Arabic music. One of them is Elias Lammam, who is Sarah’s accordion teacher and literally one of the best accordionists in the world. When Ivan Milev first saw Elias play, he kissed his fingers. When Abu Seoud first saw Elias play, he announced, “You are the new Abu Seoud!” I am told this is as impressive as it sounds. The festival’s blurb on Elias explains how “[Arab] accordionists began tuning their instruments to Middle Eastern scale — maqams. These scales make prominent use of microtones — notes not found in Western scales — and are impossible to play on standard accordions.” Maybe it’s just harder to dance to music that’s so good it actually achieves the impossible.
Aside from the Lammams, no matter what style of music was playing, people would get up and then, subsequently, get down. Young Latino couples with babies in strollers, middle-aged white couples wearing cowboy gear unironically, bespectacled oldsters, bejorted hipsters — everyone. At one point, a man in a black fez with a Bulgarian flag draped around his shoulders led twenty or thirty strangers in a folk dance that couldn’t form a proper circle without pushing half the dancers into a river. I don’t think I’ve ever seen so many different people just enjoying themselves together. I guess what I’m trying to say is … accordions for president??
Lauren O’Neal is from the Internet and lives in Austin, Texas. | https://medium.com/the-hairpin/four-things-you-probably-dont-know-about-the-accordion-254983452a8d | ["Lauren O'Neal"] | 2016-06-01 20:10:07.272000+00:00 | ['Accordions', 'Music'] |
Negotiation 101 for Freelancers | Building confidence
Being confident and enthusiastic is the key to successful negotiation. I know that I was certainly lacking in confidence when I first started out as a freelancer. Confidence is rooted in the knowledge that you will probably be successful at something because you have been successful at similar tasks in the past. But can we project confidence even when we lack self-assurance? Yes! In fact, there are several things we can do to achieve this:
Act the part
Imagine yourself to be a confident person. Get that image in your mind and act it out. Even if you’re nervous, act confidently. Stand up straight, dress professionally, and try to play the part. Fake it till you make it, if you will. As a natural introvert, this took me a while, but I got there eventually.
Give yourself a pep talk
Listen to what those voices are saying inside your head. If they are eroding your confidence, stop those tapes and replace them with new, encouraging messages. I know I’m my own worst critic, and maybe you’re the same. But positive self-talk can do wonders for your confidence.
Listen
Confident people are generous enough with their time to listen to others. Show interest by asking relevant questions to make your counterpart feel heard and understood. This will help your case. I definitely feel more comfortable and appreciated when my counterpart seems interested in what I have to say.
Know your stuff
Your confidence obviously can’t be all a front. If you are prepared and sure of your facts, you’ve got a better chance of projecting confidence. That’s because you’ll feel the part and know you won’t be put on the spot or called out. | https://medium.com/the-innovation/negotiation-101-for-freelancers-ac449a524cc8 | ['Kahli Bree Adams'] | 2020-08-02 20:25:26.203000+00:00 | ['Business', 'Small Business', 'Entrepreneurship', 'Solopreneur', 'Freelancing'] |
Product feedback is a gift | I am lucky enough to work at a company where we often joke that every employee is a self-declared Chief Product Officer. Whether internal or external, every Strava user has an idealistic image of what they want the Strava app to do for them.
Because many of us are athletes ourselves, Strava employees are a concentrated representation of our most loyal user base. Whenever I am out on a bike ride wearing my Strava kit, I am often stopped by someone to tell me 1) how much they love Strava and 2) they would really love if Strava did X.
When your users care so much about your product, how can an engineering manager funnel these strong and differing opinions into something productive?
we’re pretty enthusiastic about sport
Internal Feedback
Well before starting to build a product, our leads group (engineering, product, design and analytics) assembles a product brief to keep us focused on solving a core athlete problem.
When launching a product internally, we include a feedback Slack channel to direct comments and concerns. Within seconds, we have tens of screenshots and paragraphs of opinions. (Are people constantly monitoring Slack for an excuse to not do their normal work?)
As an engineering manager, I deeply understand the product goal and the engineering steps required to fulfill it. I can accept a barrage of feedback and clearly define what engineering tasks are most important for our product to be as effective as possible. I use this unique position to enable our talented engineers to deliver the most impact from their work.
Managing Feedback
the new web training log — walks, hikes, swims, SUPs and rides all together
As builders of the product, only we engineers know what we have time to do. We must be explicit about what feedback we are looking for, but more importantly, what we are not looking for. Making expectations clear up front allows us to focus on important things like browser rendering issues while saving the color preference comments for later. At the internal release stage, we are not typically seeking design feedback, but rather, feedback on core functionality.
One of our most-loved features is the Training Log — a visual overview of your activities each day — which uses color to differentiate sport and size to represent length of an activity. When we launched Training Log for internal testing, we had already spent tens of hours analyzing data and discussing implementations. We had debated how to handle multiple activities on a day, sport bounds (is an hour of yoga the same size as a 3 hour bike ride?) and the sport color palette (we have 37 different activity types). This was not the time to rehash discussions about how multiple bubbles appear on a given day.
Stating upfront that I am choosing to send any non-critical issues to the backburner helps both our alpha testers and engineers stay focused on the right things.
Deciding Priority
In late summer 2019, we launched Fitness on mobile. This feature tracks your training load and fatigue over time. Our employees were over-the-moon excited to have their beloved fitness graph now available in the palm of their hand. However, our six mobile engineers only had so much time before the public release, and their time had to be best utilized.
Fitness & Freshness on Web (left) and the simplified mobile version (right)
To decide the priority of issues to address, we go back to the product brief and the core athlete needs. How can we help all athletes track their progress over time and at a glance? For Fitness, this was super clear. To enable this feature to all athletes, it was absolutely essential that we build a flow to help new athletes log a Perceived Exertion score to generate their fitness graph without existing heart rate data. The fatigue and form lines that were core to the Fitness and Freshness feature on the web, were removed for clarity on mobile.
When the barrage of internal feedback comes in, I pipe all the Slack messages into a Google sheet and create JIRA tickets for everything, allowing us to group similar issues. With usually one sprint before launch, we look at the whole picture and draw a distinctive line between the “must-do”s and the “nice-to-have”s. This helps engineers conceptualize exactly what needs to be done before launch and only pick up the bonus tasks if they have time.
External Feedback
the feedback module on Activity Detail, Route Detail and Training Dashboard
At Strava, our user experience is so important to us that we built a feedback system that we drop into high-touch features such as activities, routes and fitness. Users can tell us directly what they think, inside the app. We regularly analyze this at an aggregate level, but we also read the free-form text input! We also have a community-driven ranked feature request, where athletes can vote and comment on what they want to see next. We are extremely fortunate to have enthusiastic athletes who are not afraid to tell us what they want from Strava as a whole, and will find any form to communicate with us directly. We use this data to inform what and how we build.
Sometimes feedback can be more abrasive than we would like. It is my job as an engineering leader to carefully consider user opinions but never, ever forget the careful thought and long hours the team has put into building the end product. Some engineers, especially early career ones, haven’t yet developed the thicker skin of industry experience; we must constantly remind ourselves that singular opinions do not define our work.
Celebrate
Launching a product to millions of passionate users is a special moment. In this excitement, celebrate your team. Remember to personally thank the individuals who stepped up over the course of the project. Call out the cross-functional partners who worked hard to deliver design assets, go-to-markets and strategic feedback. Now go enjoy a launch happy hour and start scheming up your next product launch. | https://medium.com/strava-engineering/product-feedback-is-a-gift-ec0c96762f4c | ['Melissa Huang'] | 2020-08-26 19:08:35.774000+00:00 | ['Mobile App Development', 'Product Management', 'Software Development', 'Strava', 'Engineering Mangement'] |
Grace Health Develops Ethical Framework for SRHR Information in AI-based Solutions
With Grace Health’s vision of improved health for all women everywhere and leveraging the strength of mobile technology, automation and artificial intelligence to put female health directly into the hands of women worldwide we know that we have an amazing opportunity but also a responsibility. That is why we brought together a multidisciplinary group of experts to look closer at the ethical impact of our AI, and to develop a framework for bias-reduced Sexual and Reproductive Health and Rights (SRHR) information and services inside AI-based solutions.
Why are we doing this?
At Grace Health, we are pioneering a sector that traditionally has relied upon human interaction for information dissemination. Societal norms have put women in disadvantaged and discriminatory structures. Our vision at Grace Health is improved health for all women everywhere. We believe that we can realize this vision by leveraging the strength of mobile technology and artificial intelligence (AI) to put female health directly in the hands of all women worldwide and deliver a scalable platform to the 1.9 billion women in emerging markets who own their own phone but in many cases lack access to smart and relevant health services and information. We see AI and automation as a scalable tool to overcome discrimination and obsolete social norms and to solve the current problem of accessing health services and information instantly and with discretion.
But making use of AI does not mean that we can completely disregard the impact of human interaction. Services like the Grace chatbot that use AI and Machine Learning (ML) to remedy challenges of discrimination often overlook the fact that AI itself is a tool created by humans and thus prone to biases and norms. As we at Grace Health aim to serve women at scale and to develop services for the next billion users, designing for inclusivity has always been one of our top priorities. Therefore, we decided to partner with some of the top expertise on norm critical content as well as the ethical and societal impact of AI: to dig deeper into the impact of biases and norms on our AI.
We approached the Swedish Innovation Agency, Vinnova, whose focus on norm critical innovation perfectly aligned with the work we wished to do and last fall we received a grant to carry out the work.
The project
For the past six months, Grace Health has worked closely with the project partners to design workflows and frameworks for creating norm critical and bias-reduced educational content on the topics such as identity, sexuality, sexual pleasure, violence and more.
“With the current 250 000 women using Grace Health’s AI-powered female health assistant, Grace, and with a steadily increasing growth, we couldn’t be happier about collaborating with this knowledgeable multidisciplinary group of experts to drive development and usage of methods, tools and processes to support norm critical and norm creative innovation within AI ” Therese Mannheimer, Founder and CEO of Grace Health
Building on the insights of this work, the larger aim of the project will be to together with our academic partners develop an ethical framework for SRHR education, information dissemination and services inside AI-based solutions.
“Women’s health and sexual education are of crucial importance to empower women across the world and to improve their knowledge and behaviour. AI-based systems can help make information available to women in a timely and personalised way, in particular in places where women would be less able to get this information due to cultural tabus, demographics or geographic distances. However, developing and using AI in a responsible way is crucial, given the importance and sensitivity of the topic. This is more than ensuring unbiased data, but also requires transparency of processes, accountability for the results and use of the system, ensuring participation and inclusion of users and stakeholders, and openness about aims and approaches used. We are happy to work towards these aims with Grace Health.” Virginia Dignum, Professor at Umeå University, Wallenberg chair on Responsible Artificial Intelligence.
We hope our joint work can serve as an important catalyst, inspiration and framework for startups and larger companies and help them design and build norm critical AI.
“In general, society is biased, and this bias is reflected in collected data which is then used to build AI. So, bias is propagated from humans, over data, to AI systems. We have to realize that it’s kind of a double-edged sword: AI has great benefits for our society, but we also know that AI can be biased in a way that leads to prejudice and discrimination. We hope that our research can help to raise more awareness about biased AI and develop computational methods that mitigate unwanted bias so that it is not amplified” Suna Bensch, Associate Professor at Umeå University
The project parties
RFSU, the Swedish member association of International Planned Parenthood Federation (IPPF), has a long-standing expertise in providing sexuality education and methodologies to address stereotyped norms, counteract myths and address misconceptions.
The Department of Computing Science at Umeå University, is one of the leading research departments in ethical artificial intelligence. It consists of a collective of renowned researchers with expertise on AI ethics and responsible AI, human-AI interaction, computational linguistics, chatbots and social robots. Within the project, the team is exploring the potential ethical challenges, potential threats, and how to best make sure that AI capability is well understood and applied properly.
Virginia Dignum is a professor at the Department of Computing Science at Umeå University, Sweden where she leads the research group Social and Ethical Artificial Intelligence. She is also a member of the European High Level Expert Group on AI, and Fellow of the European AI Association (EURAI). Expertise on AI, AI ethics, and human-AI interaction.
Suna Bensch is Associate Professor at Umeå University. She holds a Master’s degree in computational linguistics, and a Ph.D. in theoretical computer science. She recently completed her role as PI for the VINNOVA Vinnmer project “Integrative Human Language Technology”.
Botkyrka municipality represents great diversity with 52 % of its population having another ethnic background than Swedish. The Botkyrka municipality has a long and in-depth competence in developing inclusive and accessible educational material to people of all ages.
What’s next
The results and learnings of the project will be published by the end of 2020. Stay tuned for progress updates along the way.
To learn more about the project don’t hesitate to get in touch at hello@grace.health. To learn more about Grace Health and the first-ever digital women’s health clinic designed for the next billion women online, go to www.grace.health | https://medium.com/grace-health-insights/grace-health-develops-an-ethical-framework-for-srhr-information-in-ai-based-solutions-4e6624054c17 | [] | 2020-04-14 19:39:46.546000+00:00 | ['AI', 'Bias', 'Ai For Healthcare', 'Srhr'] |
Build Your Own Vue 3 SWR Hook | The Hook
Now it’s time to build our custom hook. We create a hooks folder inside src and then create a swr.js file.
We’ll start by creating a global cache and the function that will be exported and do all the work we need. By putting the cache outside of the returned function we ensure that it’s unique and accessible to all callers. The function will receive a key and a promise, and will return the cached value if it exists. After that, we’ll resolve the promise and update the cache and/or return the corresponding response. We’ll use a named export for the function (just a personal preference):
We get a big problem with this code because whether we have or don’t have the cached value, we’ll resolve the promise and return the updated value (or error). But in our piece of code, if we get the cached value it’s returned and that’s it. With this approach, we can’t move on and resolve our promise to revalidate the cache. Another problem is that we’re returning two kinds of response, one is pure data (from the cache) and the other one is a promise. And the error treatment is a little bit rough.
To make this work we'll use ref from Vue's composition API. This utility creates a reactive and mutable object. By using it, all we have to do is return the reactive constant, and the callers will be notified of any changes to it. We'll initialize this constant with the cached value for the key, or null in case the key doesn't exist. To prevent the caller from changing our state, we'll use another composition API utility, readonly. The second version of our hook code now looks like this:
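Since the original snippet isn't shown here, this is a sketch of that second version reconstructed from the description (the variable names are my guesses), assuming Vue 3's composition API:

```javascript
import { ref, readonly } from 'vue'

const cache = new Map()

export function useSWR(key, promise) {
  // Start from the cached value, or null if the key isn't cached yet.
  const data = ref(cache.has(key) ? cache.get(key) : null)

  // Always resolve the promise so the cache gets revalidated;
  // because `data` is reactive, callers are notified of the update.
  promise.then(result => {
    cache.set(key, result)
    data.value = result
  })

  // readonly prevents callers from mutating our state directly.
  return readonly(data)
}
```

The caller now always gets a single reactive value, but still has no way to tell whether revalidation is in flight or has failed.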
It's a lot better, but there's still room for improvement. I think we can add an optional parameter to load the initial state (in case it's not already in the cache) and return other values so that the caller knows if we are revalidating and whether an error has occurred (and which error it was). Since we are now returning multiple values, it's a better idea to create a state object with all the keys inside and update them accordingly. In this case, reactive is more suitable than ref. Another change we'll have to make, so that the caller can use destructuring and get individual reactive values, is to use the composition API utility toRefs.
Another feature I think would be cool is localStorage support. With this addition, if the key has already been requested at any time in the past, the user will instantly be provided with the stored data. To save the state automatically whenever the data changes, we can use watchEffect. We'll wrap localStorage's setItem method in a try-catch to avoid problems when the fetched data exceeds the storage quota, which would otherwise make our application stop working.
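Putting all of these pieces together, a sketch of the final hook could look like the following (again, the names are assumptions on my part, and this assumes Vue 3's composition API plus a browser localStorage):

```javascript
import { reactive, readonly, toRefs, watchEffect } from 'vue'

const cache = new Map()

export function useSWR(key, promise, initialData = null) {
  // Prefer the in-memory cache, then localStorage, then the caller's
  // optional initial data.
  const stored = localStorage.getItem(key)
  const state = reactive({
    data: cache.has(key)
      ? cache.get(key)
      : stored !== null ? JSON.parse(stored) : initialData,
    isValidating: true,
    error: null,
  })

  // Persist to localStorage whenever the data changes; the try-catch
  // keeps a quota error from breaking the application.
  watchEffect(() => {
    try {
      localStorage.setItem(key, JSON.stringify(state.data))
    } catch (err) {
      // Quota exceeded or storage unavailable: skip persisting.
    }
  })

  // Resolve the promise to revalidate, tracking progress and errors.
  promise
    .then(result => {
      cache.set(key, result)
      state.data = result
      state.error = null
    })
    .catch(err => {
      state.error = err
    })
    .finally(() => {
      state.isValidating = false
    })

  // toRefs lets the caller destructure while keeping reactivity:
  //   const { data, error, isValidating } = useSWR(key, promise)
  return toRefs(readonly(state))
}
```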
With these final changes, our custom hook is ready to be used | https://medium.com/better-programming/build-your-own-vue-3-swr-hook-f54124ee6ed6 | ['Rogerio Amorim'] | 2020-08-24 16:53:20.490000+00:00 | ['Vue 3', 'Programming', 'Vuejs', 'JavaScript', 'Vue'] |
Fitness Cycling Over 60 | Gorilla Bow Week 4 Workout 2b | Mitigating Shoulder Pain | https://youtu.be/KxeV-HhjzGY
Shoulder pain is not new for me. I have dealt with it for at least two decades and went through over a year of physical therapy for it about five years ago. I do certain stretches many times a day in an attempt to keep it under control. I thought the Gorilla Bow routine was helping it at first. But toward the end of week three the pain intensified and is now reaching intolerable levels. Thing is, I like the results of these workouts in every other respect. So I am going to bring my resistance down a notch or two to see if I can continue and work through it. I will also reintroduce some light resistance shoulder exercises to see if that helps.
Gorilla Bow 355 lbs Week 4 Workout 2b
-Dead Lift 15x355
-Chest Row 10x160 (underhand and 5 second holds)
-Bicep Curls 10x50 (5 second holds)
-Toe Raises 15x355
My “Daily Visit with God” Journal/Devotional tool is online on Amazon.
Check it out at, https://www.amazon.com/dp/1723870420?ref_=pe_870760_150889320
This is a journaling book I created and published to help people who want to keep a record of their walk with God.
Hey! I created a new team on an app called Charity Miles. They have sponsors who pay 10 cents per mile for cycling and 25 cents per mile for walking or running to the charity you select. I selected Wounded Warrior Project. My team is @Christians_Care. We now have 21 members on our team and have donated over 6000 miles. I invite you to join me.
I give my YouTube vlogs the main title of “Fitness Over 60.” It is my goal to build a community of like minded riders whose purpose is to get and stay active and fit into their senior years. At 60, I see myself as just entering those years. I am not as fit as I would like to be, but neither am I trying to become a superstar athlete. I am becoming more fit all the time with:
-Regular cycling,
-Elliptical exercises
-Frequent resistance training (#mygorillabow) and
-Ketogenic/Intermittent Fast eating
These are the main elements of my fitness plan.
I'm getting cash-back rebates from my online orders through BSP — Rewards Online Shopping Mall. I shop everything from Walmart to my local Tractor Supply Store. I have received more than $1325 back in my bank account from online purchases I would have made anyway.
Check it out at http://www.bsp-rewards.com/M04VB.
Check out our church website at, http://www.puyallupbaptistchurch.com
Listen to sermons I have preached at https://www.youtube.com/user/marvinmckenzie01
Check out the books I have written at my author spotlight on Lulu.com:
http://www.lulu.com/spotlight/marvinmckenzie
My author Page for Kindle/Amazon
http://www.amazon.com/author/marvinmckenzie
Notice: I do not endorse or agree with everything I hear on the podcasts I make reference to. | https://medium.com/fitness-cycling-after-fifty/fitness-cycling-over-60-gorilla-bow-week-4-workout-2b-mitigating-shoulder-pain-dd699ddb8e9c | ['Marvin Mckenzie'] | 2018-12-07 18:39:21.867000+00:00 | ['Fitness', 'Health', 'Life', 'Strength', 'Seniors'] |
Free Dental Care at a “Day for Special Smiles” | On December 9, the dental college at A.T. Still University in Mesa, Arizona will be closed — unless you are a Special Olympics Arizona athlete. For that day, the students and entire faculty will completely devote their time and efforts to fill cavities, conduct cleanings, and provide a multitude of other dental care procedures to dozens of athletes in need — all free of charge.
The event is the brainchild of Abrahim Caroci, a student at the college who has been increasingly active in the Special Olympics Special Smiles program.
“This is all a product of me being exposed to Special Olympics,” he said. “Special Smiles is a great idea, but ideas are only great when you put them into action locally.”
And acting locally is exactly how Special Olympics Arizona is reaching out and finding the athletes most in need. On Oct. 15, the Program held its fall games and offered Healthy Athletes screenings. More than 100 athletes were screened in the Special Smiles tent, and about half were found to be in need of follow-up care and invited to the December event.
“This is very exciting for us, to be able to bridge the gap between screening and dental care for the athletes,” said Isaac Sanft, Healthy Athletes coordinator for Special Olympics Arizona.
According to Abrahim, however, the December event is just a fraction of the impact he hopes to have on the health of people with intellectual disabilities. Using social media and a website he launched — www.globaloralhealthleaders.org — he wants to be able to connect dental students with resources, volunteer opportunities with Special Smiles wherever they live, research, and each other to collaborate and compare notes. Although just getting started, there are already "chapters" established in California, Texas, and even Brazil.
“If you want to change things, start on the student level,” Abrahim said. “They are the future oral health leaders.” | https://medium.com/specialolympics/free-dental-care-at-a-day-for-special-smiles-6bc79e46b85e | ['Special Olympics'] | 2016-11-08 15:47:01.017000+00:00 | ['Health', 'Nonprofit', 'Dental', 'Arizona'] |