| title | text | url | authors | timestamp | tags |
|---|---|---|---|---|---|
Using Websockets with Python | WebSocket
WebSocket is a communications protocol that provides full-duplex, bidirectional communication over a single TCP connection.
To understand WebSockets, we first need a clear understanding of the HTTP protocol, because the two go hand in hand.
HTTP protocol life cycle
HTTP is a protocol which allows the fetching of resources, such as HTML documents. It is the foundation of any data exchange on the Web and it is a client-server protocol, which means requests are initiated by the recipient, usually the Web browser.
HyperText Transfer Protocol is a stateless, application-layer protocol: the client generally requests information (with headers) using methods like GET, POST, or PUT, the server sends a response back, and the connection is closed.
For every exchange, HTTP opens a connection, data is exchanged, and then the connection is closed.
If the requirement is to fetch data continuously from the server (for example, real-time data for a cryptocurrency exchange, a game, or a chat application), the client ends up making many repeated HTTP calls to the server.
SAVE the SERVER
Photo by Vilmar Simion on Unsplash
How is WebSocket different from traditional HTTP? WebSockets use bidirectional communication, and the connection does not break until the client or server decides to terminate it.
Credit goes to https://blog.stanko.io/do-you-really-need-websockets-343aed40aa9b
The life cycle of Websockets
Handshake
WebSocket is also an application-layer protocol. It begins as an HTTP upgrade and then reuses the same TCP connection, over ws://.
In simple terms, the client asks the server, “Can we make a WebSocket connection?” and the server replies, “Yeah buddy, let’s upgrade the connection to WS.”
This is called the handshake, and with it the connection is established between client and server.
Open and Persistent Connection
Once the connection is established, communication happens as bidirectional message exchange.
Connection Closed
Once established, the connection stays open until the client or server decides to terminate it.
Enough of the theory, let’s dive into the implementation. To have a WebSocket connection we first need a client and a server. For the implementation, we are using Flask, a Python microframework, as the server.
We will be using Socket.IO as the client socket library and Flask-SocketIO as the server WebSocket library for our example.
Flask-SocketIO
Let’s first install all the dependencies needed to run the project, such as flask-socketio (and Flask itself, if it is not installed yet). Because a WebSocket server can connect to any number of clients, Flask has to use an asynchronous library such as Eventlet or Gevent. In our case, we are installing Eventlet.
To run the project with the Eventlet library, let’s also install the Gunicorn web server (alternatively, you can use uWSGI).
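A typical install might look like the sketch below (assuming pip; exact package names and versions depend on your environment):

```shell
# Flask, the Flask-SocketIO extension, and the Eventlet async library
pip install flask flask-socketio eventlet

# Gunicorn web server to run the app with Eventlet workers
pip install gunicorn
```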
Now we just have to set up the Flask project and initialize Flask-SocketIO. For more information, please visit the official Flask-SocketIO docs.
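A minimal setup might look like the sketch below (the file name and secret key are illustrative assumptions, not from the original article):

```python
# app.py - minimal Flask project wired up with Flask-SocketIO
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
app.config['SECRET_KEY'] = 'change-me'  # placeholder value

# Wrap the Flask app so it can accept WebSocket connections
socket_io = SocketIO(app)

if __name__ == '__main__':
    # Starts the development server on port 5000 by default
    socket_io.run(app)
```

With Gunicorn installed, `gunicorn --worker-class eventlet -w 1 app:app` serves the same app through an Eventlet worker.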
By default, the server now runs on port 5000 with WebSocket support. Next we just need to connect from the client to the server. Let’s make the connection using Socket.IO.
Socket.io
To install Socket.IO on the client you can either use a CDN or, if you are using npm, install it from the command line.
To keep our solution simple, we are using the CDN link.
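The client side can be as small as the sketch below (it assumes the Socket.IO client bundle has already been loaded, e.g. via the official CDN script tag, and that the Flask server from the previous step is running):

```javascript
// Load the Socket.IO client bundle via the official CDN <script> tag
// before this script runs, so the global io() function is available.

// Connect to the Flask-SocketIO server started above
const socket = io('http://localhost:5000');

socket.on('connect', () => {
  console.log('socket connected');
});
```

npm users can instead install the client with `npm install socket.io-client` and import it.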
The code above connects to the localhost server on port 5000, where we have already started our Flask server with WebSocket support. Now our client is connected to the server, and you can check that in the console with the statement “socket connected”.
To check whether the client is connected to the server, just add the code below.
@socket_io.on('connect')
def test_connect():
    print("socket connected")
Now that the client and server are connected, an HTTP upgrade has happened and the connection is made on ws://localhost:5000 with status code 101 (Switching Protocols). From now on, all data exchange happens over that same network call. You can verify this in Chrome’s network tab, under the WS filter.
So let’s send some data from the client to the server over the same connection, without creating any new network call. To send a message from client to server you can use either the send or the emit method.
socket.emit('message', { 'hello': 'world' });
You can even send data to the server with a custom event: just change the first parameter to any other name. Just be sure there is a handler configured on the server that is listening for that event.
With Flask-SocketIO the server needs to register handlers for these events, similarly to how routes are handled by view functions.
The following example creates a server-side event handler for an unnamed event:
@socketio.on('message')
def handle_message(message):
    print('received message: ' + message)
The above example uses string messages. Another type of unnamed event uses JSON data:
@socketio.on('json')
def handle_json(json):
    print('received json: ' + str(json))
SocketIO event handlers defined as shown in the previous section can send reply messages to the connected client using the send() and emit() functions.
The following examples bounce received events back to the client that sent them:
from flask_socketio import send, emit

@socketio.on('message')
def handle_message(message):
    send(message)

@socketio.on('json')
def handle_json(json):
    send(json, json=True)

@socketio.on('my event')
def handle_my_custom_event(json):
    emit('my response', json)
Note how send() and emit() are used for unnamed and named events respectively.
Broadcasting
Another very useful feature of SocketIO is the broadcasting of messages. Flask-SocketIO supports this feature with the broadcast=True optional argument to send() and emit():
@socketio.on('my event')
def handle_my_custom_event(data):
    emit('my response', data, broadcast=True)
When a message is sent with the broadcast option enabled, all clients connected to the namespace receive it, including the sender.
To handle messages coming from the server, we have to register a handler for the given event on the client side, so that the client can receive the message from the server.
socket.on('my event', (data) => {
  console.log(data); // logs the data the server sent to the client
});
Where to use WebSockets instead of traditional HTTP calls
1. Real-time applications like a bitcoin exchange. Just check out the link and open the Chrome DevTools network tab; you can see a massive to and fro of data happening.
2. Chat applications. You can check out the Socket.IO chat application, which uses the WebSocket concept. Please check out the link for the chat application demo.
Conclusion
WebSockets are very easy, lightweight, and super interesting when you see a lot of data flowing to and fro in a single call, without bombarding the server with too many network calls.
This was just my understanding of WebSocket. I hope it has helped you. If you want to discuss anything with me, tech-related or otherwise, you can contact me here or on LinkedIn. | https://medium.com/koko-networks/using-websockets-with-python-4396e54d36e6 | ['Anand Tripathi'] | 2020-09-04 12:24:04.184000+00:00 | ['Flask', 'JavaScript', 'Python', 'Websocket', 'Socketio'] |
“Lidsville” | Running a freelance business means wearing many hats. More heads would really come in handy. | https://backgroundnoisecomic.medium.com/lidsville-2c6f12f931e | ['Background Noise Comics'] | 2018-10-05 04:43:44.135000+00:00 | ['Creative Process', 'Freelance', 'Freelancing', 'Comics', 'Creativity'] |
Blasting 10,000 Jobs Into Oblivion | “You need to stop losing money now—or else.”
That is along the lines of what Softbank CEO Masayoshi Son said to the founders of the businesses he tossed billions into. In a meeting at the Langham resort, Son gathered business leaders from Softbank’s portfolio companies to give a clear message: no gimmicks. If not, you’re going to wind up like Adam Neumann.
Ousted (but still a billionaire) and infamous, Neumann became a public lesson for the portfolio companies. Billions of dollars went up in flames along with Neumann’s entrepreneurial career. Softbank, after an over-investment of $10.6 billion to save the sinking co-working giant, has declared that there will be no more lifelines like that after WeWork. In the same year, Son made good on his word and dropped pet care service Wag, cutting the leash on its burgeoning losses.
Unfortunately, burgeoning losses have become a feature of Softbank portfolio companies, and WeWork’s implosion showed that public investors aren’t that easy to please. While the winter season is ending, Softbank’s Vision Fund is deep in its own balance-sheet winter, and the second Vision Fund isn’t performing well either.
Out of four investment opportunities, Softbank chose only one to put $250 million into, despite having submitted term sheets to the other startups. Although Son is clearly exercising more prudence with future investments (focusing on those that actually know how they can profit), most analysts and investors are pessimistic about Vision Fund I’s portfolio performance. Bloomberg columnist Shuli Ren described the Vision Fund as the “pied piper of unicorns”: most of them will eventually be flops.
Despite the lackluster, and in some cases calamitous, performance of these companies, one cannot discount their economic impact. Collectively, Softbank portfolio companies have created at least 241,000 jobs, according to Human+Business Digest. Ride-hailing companies like Uber, Didi, and Ola have triggered a wave of regulatory shifts where they operate. Oyo, the budget hotel room chain, became the world’s third-largest hotel brand.
Those are no small feats, but today, the focus is on the possible and existing downfalls.
As one founder puts it: “it’s stone-cold crazy”.
Softbank portfolio companies have collectively put at least 10,700 employees out of a job, according to Scott Galloway, and this number will continue to rise as companies engage in balance-sheet tactics. Dumping bleeding auxiliary businesses, selling off loss-making business units, and laying off hordes of employees are all classic tactics to make paper profits.
Paper profits are also a feature of Softbank’s investing style.
Without real-world value and a path to profitability, Softbank portfolio companies now have an ocean to cross just to splatter some green ink on their financials. Without more Softbank cash, the best hope for these companies is either to raise money from previous investors in a newer round or to have an IPO.
Son is in favor of the latter and he has made it clear: IPO by 2022 or 2023. For the portfolio companies, they have to start profiting before then.
What can be the next move for Son? The message has been sent and startups need to work on it 24/7 without excuses. In the past decade, a booming economy facilitated WeWork’s meteoric rise. Today, these portfolio companies aren’t going to enjoy the same economy, which raises the bar on what is considered a genius business move.
Son has admitted to being embarrassed about his investments, but there might be more to be embarrassed about as the Vision Fund creates more flustercucks around the globe. A good example: Paytm’s performance has now stalled. A great example: Uber’s and Slack’s public market performance didn’t whet the appetite of public investors.
An exemplary example: Uber, Didi Chuxing, and Rappi are deep in a food-delivery price war in Latin America, all of them being funded by Softbank’s money.
Calamitous? Perhaps, but Softbank is setting billions on fire, and part of those billions comes from credit lines and loans. Vision Fund II comprises almost entirely Softbank’s own money, and the Middle East is more than willing to reject it. No matter how much data Softbank has, it’s clear that its situation today looks more like a tragedy than a venture capital comedy. | https://medium.com/swlh/blasting-10-000-jobs-into-oblivion-4b751b8aaeb1 | ['Andy Chan'] | 2020-01-31 22:15:02.433000+00:00 | ['Startup', 'Work', 'Business', 'Venture Capital', 'Finance'] |
Charticulator | Charticulator
Microsoft Research has quietly open-sourced a game-changing visualization platform
Charticulator allows for truly bespoke visualisation through a drag-and-drop interface. Source: Charticulator
Data visualisation is an area where experimentation is rewarded. It is important to be able to rapidly prototype ideas when creating charts. It is easy to think up impressive ways of building a graph to show a trend in a data set, only to find out that, once created, it simply does not work as expected.
Even well thought out charts can turn into a mess when real data is added
Whilst open source tools such as Python and R have a large number of packages to support the creation of charts and graphs, iterating with code can be slow and introduces a steep learning curve. Given that visualisation of data is normally only a small part of a data scientist’s workflow, spending time understanding the specifics of a visualisation library can be painful.
Charticulator
The Charticulator platform not only enables charts to be created from data using a simple drag-and-drop interface, it enables full customisation of the elements within charts in the same way. The platform is open source (MIT license, source code here).
The way that all of the chart attributes can be built from data allows for almost anything to be created. When using the platform you can concentrate more on bringing your data to life than on the process of doing so.
If this sounds a bit abstract, it is probably better explained through the video below:
Building a bespoke chart in under 2 minutes (Source: Charticulator)
An added bonus
Whilst charts can be exported and saved in a number of formats, it is the ability to create a custom visual for Power BI which makes Charticulator even more powerful:
Export to Power BI to create and share outputs on interactive reports
The below video shows the process in more detail:
How to create a custom visual in Power BI using Charticulator. Source: Curbal
You can find Charticulator here! | https://medium.com/swlh/charticulator-b513d18e5f00 | ['Josh Taylor'] | 2020-10-11 21:37:56.573000+00:00 | ['Visualization', 'Power Bi', 'Data Visualization'] |
Senseless Acts of Beauty | On the average day, I find it a challenge to go about a “normal” life while understanding that the world is collapsing around us. Even if you don’t think that climate change is going to cause societal break-down (or is already), day after day of dire news can easily lead to the climate grief inflicting so many of us.
I also am the mother of a toddler, who will hopefully continue to be blissfully unaware of the state of the world for a few more years. This means that I spend my days with Minnie Mouse, Anna & Elsa, wooden blocks, play dishes and a million other toys and games that are at odds with my thoughts and feelings.
I happened upon And Beauty for All one day, and it has provided me with a new sense of the beauty of these aspects of my life that are about love and joy and a happy child, regardless of what else is happening in the world.
While their focus is on the beauty of the natural world, I find I can easily apply it to my daughter’s excitement about going to dance class and our amazing trip to a kid-friendly, vegan, brunch drag show. Beauty is everywhere and we have lost our humanity if we let it be stolen from us, even or especially while facing tragedy.
On their website, they have a list of Simple things you can do and my favorite is “Encourage senseless acts of beauty and random acts of kindness.”
It’s such a simple idea, but the senseless acts of beauty really speaks to me. Whatever else we have left to give, here in the greatest trial of humanity, we can give ourselves and each other senseless acts of beauty. Where do you find your everyday beauty? | https://medium.com/radical-hope/senseless-acts-of-beauty-7246f87f94da | ['Violet Bee'] | 2019-11-05 15:23:13.124000+00:00 | ['Environment', 'Climate Change', 'Beauty', 'Parenting', 'Kindness'] |
Essay: On being “different” | I am sad, angry, disappointed, but still hopeful.
I am angry that in the 21st century we still have to deal with issues like this. I am angry that in the 21st century people have to die for something that shouldn’t exist anymore.
I am angry that in the 21st century, when we have artificial intelligence, self-driving cars, smartphones, 5G network, yet we still do NOT have an accepting society.
Yes, racism has been here for quite a long time, but that doesn’t mean that it is okay. It isn’t. And all of that media exposure doesn’t mean there is more racism or that it is getting worse. No. The form of it has been like this for quite a few years. But now it is being filmed; people will stand around and record it. That’s how it reaches the world. And yes, it is good that the whole world knows about it. But it is not good if the world acts like it isn’t a worldwide matter.
For those who think racism is okay or just don’t give a damn
If you even remotely think that racism is okay, I have news for you. You are part of the problem. The same goes for those who don’t care about it. If you would rather stand next to someone who is a victim of racism and not speak up, you are part of the problem. Let’s stop enabling people to be racists. Let’s break the chain, the circle of having to fight this for so long. Black people have already fought for centuries to get where they are, and they are still NOT fully accepted by this world. Why? Just because white people think that we are better? News again: we are not.
For all of those who belong in this category, I have got two questions for you.
1. Are you even human? That is a human being who was KILLED because of a SUSPICION. Nothing else. George Floyd didn’t even resist; he begged those police officers to allow him to breathe, to get water, to stand up. There were 3 cops on one person. George said “I can’t breathe” not once or twice, but FIFTEEN TIMES. Now tell me again that this is OKAY. I dare you.
2. What if it was your father, brother, son? Would you still feel the same way? I think not, because that’s what most of society thinks: it doesn’t concern me. What happened to Floyd cannot happen to me, because I live in XY, and I am white… Just one word for you. Privilege. If you genuinely think this, you are privileged. You are relying on the fact that, because you’re white, nothing BAD will ever happen to you. Please, don’t think so. This can happen to anyone, yet it is happening to black people.
I hope that you will realise that racism is not okay. It can happen to anyone, and the form doesn’t have to be violent. It can be verbal, and not even that. Black people are often denied jobs they are qualified for, just because they are black. They are often followed in stores, just because some Karen doesn’t feel safe, or just to make sure that black people will not steal.
We live in an era that is full of stereotypes, but people of colour are just like we are, and they want the same things. They want to be loved and accepted; they want a decent job, family, house, car… Nothing extra, nothing special. And yet the majority of people can’t get over the fact that they are a bit darker than we are.
Let’s love our neighbours, whatever their identity, their race, their orientation. Does it really matter in the end? It is none of our business if they are gay, trans, straight, pan, bi, nonbinary etc… That is their thing and it doesn’t matter. It doesn’t affect our lives, so let’s drop that shit of caring about it. If you have friends who are part of the LGBTQ+ community, tell them you support them (that’s what friends are for, right?). If you have a black, brown, mixed friend in your life, text them, call them, FT them saying that you support them and you will stand with them in this fight. Why? Because the society can only change when all of us stand together UNITED. We have, I hope, learned our lesson that we cannot play the card: “they are different” anymore. So let’s play the card “we are together united”.
For those who think racism is NOT okay and give a damn
First, I want to thank you for trying to help black people, because they need every single additional voice they can get. There are multiple ways you can help: either by donating money, or by donating your time and attending protests (if it is safe and martial law hasn’t been put in place). But please obey the police officers. Many of them are angry with us; they support the movement as well, but they are first and foremost doing their jobs. They are not the enemy. The enemy is the system, so please DO NOT ATTACK them, do not disobey them, and be respectful as well. Treat them the same way you want to be treated. Yes, it is very sad what happened to George and the ones before him, BUT provoking the police officers or national guard will not do any good for the cause. It will just stain it as dangerous, not safe, not peaceful, and then the authorities will try to stop it.
I hear you. I am angry with you. I am sad with you. I respect you. I support you. I will treat you the same. I don’t give a damn about colour, race, identity. You’re human to me; that’s enough. I SUPPORT YOU. | https://medium.com/share-it/essay-on-being-different-afd6f183da35 | ['Klára Kőszeghyová'] | 2020-10-21 08:24:06.241000+00:00 | ['Essay', 'Thinking', 'Racism', 'Society'] |
Out of control | The sky is cloudy like my thoughts
It is always moody like my neighbour next door
Summer is coming
And rain will not drizzle but pour
The sky is also like me you see
Mostly quiet but fiddly,
Unpredictable and noisy
Simply out of control | https://medium.com/poets-unlimited/out-of-control-2811339f3def | ['Clara Zaitounian'] | 2017-09-21 13:48:04.601000+00:00 | ['Poem', 'Poets Unlimited', 'Weather', 'Poetry'] |
Striking a Chord: Songs That Make Us Feel | Music is religious to me in the way that I rely on lyrics and melodies to react to my feelings, like prayers. Where there is a moment, there is a song. Where there is fear, there is an arpeggiated sneer to rid us of negativity. Where there is love, there is rhythm and poetry to describe our emotions for us, embody our physical urges into lyrics, and translate confusing thoughts into beats. Love is at the centre of most music, whether it’s love for the night, ourselves, sex or partners. While words are my thing, and I’m working on becoming more expressive through physicality, music has always been my go-to form of expressing love for others. Each layer of production that goes into a song is symbolic and expresses emotions that I want to share but feel I don’t have enough words or knowledge to express.
In other words, songs speak feelings that I struggle to verbalize.
My partner and I have exchanged music religiously since the start of our relationship. Our happiest memories are encapsulated in catchy tunes and meaningful melodies. Like scents, songs bring us back to the early days of our relationship and have always allowed us to document what we share. I’m not much of a photo person, anyways. I love making my partner playlists; and I love receiving theirs, too. It’s my way of saying, ‘Here’s how you make me feel. Here are the sounds that remind me of you. Here are the songs I want you to listen to when you want to think of me.’ Each phase of our relationship has a playlist; some I never meant to share with my partner, but eventually did. The first time we said, ‘I love you,’ has about an hour and 53 minutes worth of music devoted to it. Times we’ve fought have songs too, but I’ve kept those to myself, for now.
Music has meant so much to me all my life because it’s how I perfected the English language. It’s where I saw non-white people creating art I could relate to. It’s pivotal to how I built my identity transitioning to Canada after immigration. I learned how to express emotions through poetry and song before I learned how to do so through prose. So, the playlists I make for my partner are more than just songs to make them smile; they’re me saying, ‘This is the best way I can describe to you how I feel.’
I’ve always been curious to see if other people perceive music the way I do. That’s why I wanted to explore what songs mean for others when it comes to love, sensuality, and intimacy. Music resonates the most for me in these areas in particular. Thus, Nuance reached out to our community to hear about the songs that have influenced our sexual and romantic journeys the most. Thank you to all of our contributors!
The whole playlist is available for listening on your preferred music platform at the bottom of the article.
Take Me To Church — Hozier
Submitted By: Elizabeth
Not so much in shaping as it is confirming how I feel spirituality fits into the conversation of my sexuality. I feel like the song does a pretty good job of playing with concepts of the erotic embodied through theorists like Jacqui Alexander n Audre Lorde. I didn’t know there was a queer storyline in the music video so it only cemented how I loved the song as expression of the possibilities of convergence btw the erotic and divine, esp through queer love. | https://medium.com/shareyournuance/striking-a-cord-songs-that-make-us-feel-7b283f687699 | ['Nuance Media'] | 2018-05-01 15:54:04.933000+00:00 | ['Relationships', 'Love', 'LGBTQ', 'Sex', 'Music'] |
Why We Find Joy and Value in Creating Data Visualization | Why We Find Joy and Value in Creating Data Visualizations
A reflection on the unseen forces behind visual works
I don’t know about you, but when something piques my interest — TV tropes, color theory, the life-changing magic of tidying up, anything — I have to wrap my mind around the “shape” of the related knowledge. I get sucked into reading (or listening to) whatever I can about the topic while under this spell.
Illustration by the author
A while back, I discovered that information design was its own field, and I wanted to know the fundamental principles and best practices. I also found myself lost. Whether it was picking out a learning resource, charting tool, or styling option, it wasn’t always clear why to choose one over the other. Like so many just starting out in the field, it was frustrating to hear the same answer to many questions: “It kind of depends.”
But are there certain sorts of problems that data visualization can help with or needs it’s especially good at meeting? In a recent conversation, we asked members of the Data Visualization Society (DVS) to give their take. Welcome to a rough guide on knowing where data visualization can help you.
1. To challenge assumptions and expand the mind
When a Swedish professor was taken aback by how much misconception existed in the world, he went on to make it a mission to “fight devastating ignorance with a fact-based worldview that everyone can understand”. His name was Hans Rosling, co-founder of Gapminder Foundation, and his common tool of choice? Charts on global trends, combined with great storytelling.
Illustration by the author
Data visualization is particularly good at helping us to think beyond our own personal experiences, especially if it’s on issues that are little known or not often covered in the media. When an event or issue easily springs to mind, we tend to exaggerate the importance or frequency of it. Likewise, we tend to downplay things that we are not as familiar with.
By making the data more memorable and “sticky” in our mind, vizzes can help overcome availability bias, which is our tendency to give preference to information or events that are more recent or observed personally.
The effect is amplified when we insert the audience in the data visualization. For example, in The New York Times’ interactive article ‘How Much Hotter Is Your Hometown Than When You Were Born?’ piece on climate change, the audience is asked to enter their hometown and year of birth. The article then goes on to compare the number of hot days one can expect today relative to when they were born, as well as what they can expect as they age. Climate change becomes something more personal and relatable.
That said, sometimes it takes more to convince your audience. In one dataviz practitioner’s experience, when the charts don’t match expectations, the audience’s first instinct may be to find fault with the data. Cultivating trust is necessary, particularly when audience members see themselves as domain experts on the topic.
2. To reason about data
It’s a myth that designing visualizations is only for the end of the data analysis process or when you are ready to communicate some insights. Coming up with quick and dirty prototype charts or even sketches has its benefits at the early stages.
The more obvious upside is that it’s easier to pick out patterns and outliers from a chart than chunks of raw data or rows of summary statistics. But besides using data visualization as a way to understand, we can also use it as a way to think. By translating our internal thinking process into objects in the external world, we clarify our ideas and make them more actionable.
In my case, thinking or prototyping through vizzes gets me in an experimental mode. It opens up more possibilities and curbs any pesky perfectionist tendency to get everything right the first time. For others, it can serve as a complementary approach to writing, which fosters a more sequential frame of thinking.
Illustration by the author
3. To unleash beauty into the world
Some pleasures cut across age, gender, and ethnicity: rainbows, bubbles, and ice cream to mention a few. When a data visualization looks and feels a certain way, it can give rise to similar feelings of delight.
Maybe we can’t always put a finger on the particular aesthetics that made the data sing, but we can usually agree that the visualization was beautiful. In this sense, the desire to produce a data visualization (or even data art) is linked to a human impulse to create beauty.
Some DVS members even see data visualization as a way of reclaiming space — to breathe new life into the serious spaces that typically neglect or reject beauty. Beauty embedded in a data visualization also has a functional role. To quote design guru Paul Rand:
Ideally, beauty and utility are mutually generative. In the past, rarely was beauty an end in itself… The function of the exterior decoration of the great Gothic cathedrals was to invite entry; the rose windows inside provided the spiritual mood.
In the data visualization context, beautiful design serves to guide users to key elements and aids in their understanding. Yet, despite the net positive results, people often resist the pursuit of beauty or dismiss it as a frivolous act altogether, particularly in the business world. When handling dashboard designs, dataviz practitioner Jason Forrest previously found his colleagues rejecting suggestions of beautiful designs in favor of something more basic. His workaround was to make the design prototype anyway, which got people more interested in the suggestions.
4. To connect in an attention-starved world
Data visualizations, particularly beautiful ones, have the benefit of acting like eye candy. They can get your audience to stop, look, and hopefully, engage with the data. Information designer and LinkedIn instructor Bill Shander tells us not to underestimate the “value of eye candy to simply generate interest”.
The Pudding is a great example of tapping into data visualization’s eye candy power with its rich visual essays on topics like how women’s pockets are inferior and the laughter climax of Ali Wong’s stand-up comedy. As data journalism becomes a global field, we will see more outlets drawing on data visualizations to create compelling content.
The Pudding’s showcase for Information Is Beautiful awards 2018. See original article on pockets here.
Why does it work? As a visual metaphor for data points, data visualization has the ability to make ideas more easily digestible and captivating at the same time. In this way, ideas embedded in visualizations are more likely to persist and spread.
It’s not only dataviz or data journalism folks who recognize this, either. Marketers know that eye-catching data visualizations combined with a powerful narrative can be very shareable and persuasive, as exemplified in the “Data Visualization + Data Storytelling Is Marketing Gold” article making the rounds on the internet.
5. To define and shape data culture
If the hallmarks of a culture include the community’s shared languages, food, music, and social habits, then in much the same way, data visualization practices are a hallmark of an organization’s data culture. Ultimately, the kind of data visualizations that can be produced is the result of how data is organized and integrated from various sources in the organization.
As DVS founding member Elijah Meeks puts it, “Do you boil everything down to a few KPIs? Are you comfortable with sophisticated representation? Do you think about uncertainty? All that is expressed in the data visualization that you use in your presentations, your email reports, and your public dashboards.”
And despite the growing imperative to be data-first, data-driven, or [insert new buzzword] in today’s economy, it’s not possible to build competency in everything at once. Data visualization can be a gateway for getting people to be more comfortable with unfamiliar data practices. For dataviz practitioner Wendy Small, the use of simpler data visualizations like line charts has been a healthy and effective way to encourage new approaches to reading data as part of a data literacy initiative.
The clarity that data visualization provides also encourages people to work better. When dataviz practitioners Keisha and Evelyn Münster handled projects on process-related data, they found that visualizing the process details proved more illuminating than relying on some aggregated numbers. It also sparked better conversations about what was going on.
6. To experience hobbies (or anything else) through a different lens
Data visualization is not all serious business. It lets us geek out to our heart’s content on our interests. Exploring the visual patterns that emerge from a data set on a hobby is another way to enjoy the hobby.
If you enjoy beer, check out Nathan Yau’s chart of beer styles, which looks like a beer-colored mosaic, complete with details on flavor as well as how high each beer style tends to be in alcohol content and bitterness. It’s a great way to explore multiple types of beer, from the American Lager to Fruit Lambic, without the risk of a hangover. Or maybe you are a fan of meta, and you may enjoy Christian Swinehart’s data storytelling of the storytelling structure in Choose Your Own Adventure Books. There seems to be something out there for everyone.
What about the “quantified self” movement, where people mine vast collections of personal data and visualize them?
When journalist Lam Thuy Vo’s marriage dissolved a number of years back, she created a blog Quantified Breakup to organize her responses in data visualizations. One post showed her apartment-related weight loss as she got rid of furniture. In another post, she tracked text messages exchanged with people she met online after the divorce and visualized those messages as sparks that flew off the screen.
It’s like taking pictures to remember a beach vacation in Bali, except we’re not just logging one-off moments. We’re tracking the same thing over time and compressing the result into a series of ebbs and flows or whatever patterns it shows. In some way, the process of visualizing this data functions as a form of self-discovery or, if we’re hurting, self-healing.
A Final Note
There you have it, six reasons we create data visualization. This list is not exhaustive by any means, but it shows the value of data visualization as it sits at the intersection of beauty, influence, and expression.
Venn diagram of data visualization, created by the author
Thanks to the Data Visualization Society members for contributing to the discussion, including: Alexwein, Bill Shander, Bridget, Cameron Yick, Elijah Meeks, Evelyn Münster, Erica Gunn, Jack Merlin Bruce, Jason Forrest, Keisha, Matthew Montesano, Nicole Edmunds, Stephen Singer, Wendy Small. | https://medium.com/nightingale/why-we-find-joy-and-value-in-creating-data-visualization-c3282ed56930 | ['Alexandra Khoo'] | 2019-08-28 18:35:31.643000+00:00 | ['Design', 'Data Visualization', 'Information Design', 'Data Science', 'Visual Design'] |
How Japanese People Stay Fit for Life, Without Ever Visiting a Gym | How Japanese People Stay Fit for Life, Without Ever Visiting a Gym
For people stressed or intimidated by fitness culture
Illustrations by Kaki Okumura
In the United States, I’m often bombarded with images and ads of fitness culture. Athleisure is the craze, and it seems that the majority of people are members of gyms like Anytime Fitness, 24 Hour Fitness, or LA Fitness. Any decent hotel or typical college campus has free access to a gym, sometimes even offering workout clothes for rental. It’s the land of Alo Yoga and the birthplace of CrossFit. The most successful online influencers write about fitness, and it’s not uncommon to see someone share their workout on social media as they would their food.
But in contrast to that, for a country that is a leader in longevity and has very low rates of obesity — the least among high-income developed nations at 4.3% — you might be surprised to find that there is not much of a workout culture in Japan. Athleisure is not a big thing, and not many people have a membership to a gym. People would rarely use their lunch break for a gym session, and those who do are probably seen as exercise zealots.
In a recent Rakuten Insight survey of 1,000 Japanese citizens in their 20s to 60s, about half of those questioned revealed that they barely exercised, working out about once a month or not at all. Citing not enough time or simply that they don’t like exercising that much, most people just didn’t see working out as part of their lifestyle.
What’s going on here?
What Exercise Looks Like in Japan
If you take a closer look at what exercise means to Japanese people, you’ll find that exercise equates to working out. But perhaps exercise can take on forms that aren’t necessarily about going to a gym and lifting weights, or going on 10km runs. Namely, perhaps the exercise we need is the kind that is woven into our lifestyle: walking.
What the above results show is not that exercise isn’t important to be healthy, but that in Japan’s approach to moving, perhaps most don’t see it as exercise. Japanese adults walk an average of 6500 steps a day, with male adults in their 20s to 50s walking nearly 8000 steps a day on average, and women in their 20s to 50s about 7000 steps. Okinawans in particular are well-known for their walking culture, being especially mindful about incorporating movement in their daily lifestyle. Nagano, a rural prefecture in Japan, was able to flip its high stroke rate by incorporating over 100 walking routes, and now its citizens enjoy the highest rates of longevity in the country.
“The first thing we wanted was just to get people walking. Everyone can do that. You walk, you talk, you get exercise and that helps build up a sense of community,” — Akira Sugenoya, mayor of Matsumoto, Nagano
Most Japanese citizens live in very walkable cities where public transportation is convenient, safe, and affordable, and not many households own cars. As a consequence, when most people go to work, they walk. When people go grocery shopping, they walk. When people are going out for dinner, they walk. It’s an activity adopted every day by every generation: walking is a part of daily life like breathing is.
The Steps to Better Lifelong Health
This is not a call against working out. I love working out, and spend a few hours a week running, biking, swimming, and completing calisthenic exercises. I don’t doubt the advantages of a good sweat, and find that it boosts both my physical and mental health.
But fitness culture can feel overwhelming for those who aren’t used to it, and too much can perpetuate cycles of shame and guilt. It can make us believe that reaching and maintaining a healthy weight is only available to the dedicated ones who consistently lift weights and are making enough time for daily runs.
Instead, what this shows is that, just as eating healthfully doesn’t have to mean eating only salads, healthful exercise doesn’t have to mean only working out. The lifestyle fitness you need may just be a bit more walking. | https://kokumura.medium.com/how-the-japanese-exercise-to-stay-youthful-be2d6105e6e6 | ['Kaki Okumura'] | 2020-11-08 11:49:22.141000+00:00 | ['Self Improvement', 'Lifestyle', 'Health', 'Fitness', 'Culture'] |
Texas Commissioner’s Facebook Post Suggests “Muslim World” Be Nuked | By: editors
The elected Republican official recently shared a Facebook photo of a mushroom cloud, suggesting an atomic bomb be used on Muslim countries for “peace.”
Texas Agriculture Commissioner Sid Miller, whose Facebook page is usually filled with farm jokes and extremely patriotic quotes, has recently come under fire for a social media post that proposes bombing the “Muslim world” in the name of peace.
The photo, which Miller reposted from another Facebook page, Patriots IV Drip 2 — an anti-Obama group known for its Islamophobic rhetoric — looked like it was taken from Nevada nuclear tests in 1957. Accompanied by some hateful hashtags, the photo read: “Japan has been at peace with the U.S. since August 9, 1945. It’s time we made peace with the Muslim world.”
Although the Republican’s campaign Facebook page soon removed the photo, the damage had already been done.
The hateful meme, suggesting the deadly weapon that killed tens of thousands of innocent people in Hiroshima and Nagasaki be used in middle-eastern countries, sparked an outrage among Texas Democrats.
“It is unacceptable for Republican Sid Miller to be promoting such disgusting rhetoric. Sadly, this kind of racist, xenophobic hate speech qualifies you for higher office with Republicans’ Tea Party fringe base,” stated Manny Garcia, Texas Democratic party deputy executive director. “We hope Sid Miller shows some respect for Texans and the responsibility of holding state office and issues an apology.”
Read More: Here’s What The Media Won’t Tell You About Mass Shootings And Muslims
However, Miller’s campaign spokesperson alleges that the commissioner did not share the photo himself, as he is currently in China. To make matters worse, he also made it perfectly clear that Miller will not apologize.
“We’re not going to apologize for the posts that show up on our Facebook page,” said Todd Smith, the Republican’s campaign spokesman. “I don’t know who did it, but I’m not going to start a witch hunt to find out who did.”
He estimated that about 18 people have access to the campaign account.
“I read the post this morning, and we’re at the 60th (sic) anniversary of dropping the atom bomb in order to destroy an insidious enemy that was intent on destroying American lives, and we face a similar enemy who has vowed to destroy American lives, and I think that’s the topic that the American people are focused on,” Smith added.
Whether Miller himself posted the image or someone else did, one thing is for certain. While posting such an outrageous thing on social media is a questionable sentiment for anyone, it’s a particularly bad idea for an elected government official.
Sid Miller served in the Texas House of Representatives for 12 years prior to his election as the state’s agriculture commissioner, and this is not the first time he has made headlines for his anti-Muslim remarks.
Earlier this year, the commissioner told a crowd in Austin that he worries his grandchildren would one day live in an America that was a “Muslim country.”
Recommended: This Campaign Is Fighting Islamophobia By Showing Islam’s Real Message
Carbonated.TV | https://medium.com/carbonated-tv/texas-commissioner-s-facebook-post-suggests-muslim-world-be-nuked-d62aed7201a0 | [] | 2015-08-18 19:55:21.265000+00:00 | ['Facebook', 'Islam', 'Islamophobia'] |
Building an AI-Powered Searchable Video Archive | In this post, I’ll show you how to build an AI-powered, searchable video archive using machine learning and Google Cloud-no experience required.
Want to watch this story instead? Check out this video.
One of my favorite apps ever is definitely Google Photos. In addition to backing up my precious pics to the cloud, it also makes all of my photos and videos searchable using machine learning. So if I type “pool” in the Photos app, it returns all everything it recognizes as a pool:
This is all well and good if you just want to use somebody else’s software, but what fun is that? Today we’ll build our own version of Google Photos, for videos.
Not for nothing, there are lots of good reasons to build your own video archive. For one, it’s fun. For two, you can add features Google Photos doesn’t currently support, especially for videos. Like searching by what people say (transcripts), in case you need to find all the clips where someone says, “well now we have it on film,” or “oh sh*t.” For three, building your own app allows you to more easily integrate with your other software and control how your data is stored and handled. For example, I built my archive’s backend on Google Cloud, which let me take advantage of Google Cloud’s privacy, security, and compliance guarantees.
My searchable video archive ended up looking like this:
and it stored and indexed all of my family home videos (~126 GB). Using machine learning, specifically the Video Intelligence API, I was able to do all sorts of analysis, including automatically splitting long videos, identifying objects and scenes, transcribing audio, and extracting on-screen text.
The app ended up being extremely good at searching for cute moments. Using computer vision, it recognized scenes and objects like “wedding,” “firework”, “performance,” “baby laughing”, “home improvement,” “roller coaster,” and even “Disney World”:
It could also search transcripts. This is how I found the clip of my very first steps, because in these clips, my parents say something like, “Look, Dale is taking her first steps!”:
Finally, the tool was able to search any on-screen text, like the words “Mets” and “New York” on these players’ shirts or the “Bud” poster in the background:
The video archive ended up being a pretty good Father’s Day gift, especially since I wasn’t actually able to see my dad in person this year.
In this post, I’ll show you how you can build your own archive, just like this. But if you want to skip straight to the code, check out the Making with ML Github repo.
Machine Learning Architecture for Video Processing
The app is divided into two bits, the frontend and the backend. The backend was built using a combination of Google Cloud, Firebase, and a tool called Algolia (for search). The frontend was built with Flutter, a framework for building web and mobile apps, but could have easily been a React or Angular or iOS or Android app.
The backend architecture looked something like this:
I use this kind of architecture or pipeline all the time when I build apps that tag or index data with machine learning. It works like this:
1. First, data (in this case, an individual video) is uploaded to a Google Cloud Storage bucket.
2. Uploading kicks off a Cloud Function (this is like an AWS Lambda, i.e. a small chunk of code that runs in the cloud).
3. The Cloud Function calls the Video Intelligence API to kick off video analysis.
4. The Video Intelligence API writes its results as JSON to a second storage bucket.
5. That written data, in turn, kicks off a second Cloud Function that parses the JSON and writes it to a more convenient data store, in this case Firestore and Algolia.
From here, my frontend Flutter app could talk to the backend and search for user queries. If these technologies are unfamiliar to you, fear not: I’ll go into depth in a bit.
There are also a couple of steps I couldn’t fit in that diagram. For example, I did a bit of preprocessing with the Video Intelligence API on some very long video files, splitting them into smaller clips and identifying any timestamps shown on screen. Also, I wrote a Cloud Function specifically for taking an uploaded video and generating a thumbnail for it (check out this function).
Quickly Transferring Video from Drive to Cloud Storage
But first, before we get into the weeds, let’s talk about transferring data from Google Drive to Cloud Storage. In theory, moving data from Drive to Storage should be fast, since all the data can stay within Google’s network. But frustratingly, in practice, there’s no slick way to do the transfer. Happily, I found a neat hack in this Medium article by Philipp Lies. The trick is to use a Colab notebook -a free, educational Google tool for running Python code in the cloud-to do the transfer. It’s quick, easy, and very effective!
The Video Intelligence API
The key tool that made this project possible was the Video Intelligence API built by Google Cloud. It takes a path to a video in Cloud Storage and spits out, among other things:
Audio transcriptions (i.e. “automatic subtitles”)
Known objects (e.g. plane, beach, snow, bicycle, cake, wedding)
On-screen text (i.e. on street signs, T-shirts, banners, and posters)
Shot changes
Explicit content detection
This data can then be used as indices we can use to search for specific videos.
The Price
If you’re me, your first thought is, sure, but I bet it’s super expensive. I analyzed 126 GB of video or about 36 straight hours, and my total cost using this API was $300, which was kind of pricey. Here’s the cost breakdown per type of analysis:
I was surprised to learn that the bulk of the cost came from one single type of analysis-detecting on-screen text. Everything else amounted to just ~$80, which is funny, because on-screen text was the least interesting attribute I extracted! So a word of advice: if you’re on a budget, maybe leave this feature out.
Now to clarify, I ran the Video Intelligence API once for every video in my collection. For my archive use case, it’s just an upfront cost, not a recurring one.
Using the API
Using the Video Intelligence API is pretty straightforward once you’ve got your data uploaded to a Cloud Storage Bucket. (Never heard of a Storage Bucket? It’s basically just a folder stored in Google Cloud.) For this project, the code that calls the API lives in video_archive/functions/index.js and looks like this:
const video = require('@google-cloud/video-intelligence'); // client library import (omitted in the original excerpt)

const videoContext = {
speechTranscriptionConfig: {
languageCode: 'en-US',
enableAutomaticPunctuation: true,
},
};
const request = {
inputUri: `gs://VIDEO_BUCKET/my_sick_video.mp4`,
outputUri: `gs://JSON_BUCKET/my_sick_video.json`,
features: [
'LABEL_DETECTION',
'SHOT_CHANGE_DETECTION',
'TEXT_DETECTION',
'SPEECH_TRANSCRIPTION',
],
videoContext: videoContext,
};
const client = new video.v1.VideoIntelligenceServiceClient();
// Detects labels in a video
console.log(`Kicking off client annotation`);
const [operation] = await client.annotateVideo(request);
console.log('operation', operation);
On line 1, we create a videoContext with some configuration settings for the API. Here we tell the tool that audio tracks will be in English (en-US). On line 8, we create a request object, passing the path to our video file as inputUri, and the location where we'd like the results to be written as outputUri. Note that the Video Intelligence API will write the data as json to whatever path you specify, as long as it's in a Storage Bucket you have permission to write to. On line 12, we specify what types of analyses we’d like the API to run. On line 24, we kick off a video annotation request. There are two ways of doing this, one by running the function synchronously and waiting for the results in code, or by kicking off a background job and writing the results to a json file. The Video Intelligence API analyzes videos approximately in real time, so a 2-minute video would take about 2 minutes to analyze. Since that’s kind of a long time, I decided to use the asynchronous function call here.
If you want to play with this API quickly on your own computer, try out this sample from the official Google Cloud Node.js sample repo.
The Response
When the API finishes processing a video, it writes its results as json that looks like this:
{
"annotation_results": [ {
"input_uri": "/family_videos/myuserid/multi_shot_test.mp4",
"segment": {
"start_time_offset": {
},
"end_time_offset": {
"seconds": 70,
"nanos": 983000000
}
},
"segment_label_annotations": [ {
"entity": {
"entity_id": "/m/06npx",
"description": "sea",
"language_code": "en-US"
},
"segments": [ {
"segment": {
"start_time_offset": {
},
"end_time_offset": {
"seconds": 70,
"nanos": 983000000
}
},
"confidence": 0.34786162
} ]
}, {
"entity": {
"entity_id": "/m/07bsy",
"description": "transport",
"language_code": "en-US"
},
"segments": [ {
"segment": {
"start_time_offset": {
},
"end_time_offset": {
"seconds": 70,
"nanos": 983000000
}
},
"confidence": 0.57152408
} ]
}, {
"entity": {
"entity_id": "/m/06gfj",
"description": "road",
"language_code": "en-US"
},
"segments": [ {
"segment": {
"start_time_offset": {
},
"end_time_offset": {
"seconds": 70,
"nanos": 983000000
}
},
"confidence": 0.48243082
} ]
}, {
"entity": {
"entity_id": "/m/015s2f",
"description": "water resources",
"language_code": "en-US"
},
"category_entities": [ {
"entity_id": "/m/0838f",
"description": "water",
"language_code": "en-US"
} ],
"segments": [ {
"segment": {
"start_time_offset": {
},
"end_time_offset": {
"seconds": 70,
"nanos": 983000000
}
},
"confidence": 0.34592748
} ]
},
The response also contains text annotations and transcriptions, but it’s really large so I haven’t pasted it all here! To make use of this file, you’ll need some code for parsing it and probably writing the results to a database. You can borrow my code for help with this. Here’s what one of my functions for parsing the json looked like:
/* Grab image labels (i.e. snow, baby laughing, bridal shower) from json */
function parseShotLabelAnnotations(jsonBlob) {
return jsonBlob.annotation_results
.filter((annotation) => {
// Ignore annotations without shot label annotations
return annotation.shot_label_annotations;
})
.flatMap((annotation) => {
return annotation.shot_label_annotations.flatMap((annotation) => {
return annotation.segments.flatMap((segment) => {
return {
text: null,
transcript: null,
entity: annotation.entity.description,
confidence: segment.confidence,
start_time: segment.segment.start_time_offset.seconds || 0,
end_time: segment.segment.end_time_offset.seconds,
};
});
});
});
}
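If you want to sanity-check a parser like this before wiring it into a Cloud Function, you can run it against a small hand-made blob shaped like the API's shot label output. Everything below (the sample data and its values) is illustrative, not real API output:

```javascript
// Hypothetical blob mimicking the shape of shot_label_annotations output
const sampleBlob = {
  annotation_results: [
    {
      shot_label_annotations: [
        {
          entity: {entity_id: '/m/06npx', description: 'sea', language_code: 'en-US'},
          segments: [
            {
              segment: {
                start_time_offset: {}, // empty object means "0 seconds"
                end_time_offset: {seconds: 70, nanos: 983000000},
              },
              confidence: 0.35,
            },
          ],
        },
      ],
    },
    {segment: {}}, // entries without shot labels get filtered out
  ],
};

// Same logic as parseShotLabelAnnotations above, repeated so this runs standalone
function parseShotLabelAnnotations(jsonBlob) {
  return jsonBlob.annotation_results
      .filter((annotation) => annotation.shot_label_annotations)
      .flatMap((annotation) =>
        annotation.shot_label_annotations.flatMap((annotation) =>
          annotation.segments.flatMap((segment) => ({
            text: null,
            transcript: null,
            entity: annotation.entity.description,
            confidence: segment.confidence,
            start_time: segment.segment.start_time_offset.seconds || 0,
            end_time: segment.segment.end_time_offset.seconds,
          }))));
}

console.log(parseShotLabelAnnotations(sampleBlob));
// [ { text: null, transcript: null, entity: 'sea',
//     confidence: 0.35, start_time: 0, end_time: 70 } ]
```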
Building a Serverless Backend with Firebase
To actually make the Video Intelligence API into a useful video archive, I had to build a whole app around it. That required some sort of backend for running code, storing data, hosting a database, handling users and authentication, hosting a website: all the typical web app stuff.
For this I turned to one of my favorite developer tool suites, Firebase. Firebase is a “serverless” approach to building apps. It provides support for common app functionality (databases, file storage, performance monitoring, hosting, authentication, analytics, messaging, and more) so that you, the developer, can forgo paying for an entire server or VM.
If you want to run my project yourself, you’ll have to create your own Firebase account and project to get started (it’s free).
I used Firebase to run all my code using Cloud Functions for Firebase. You upload a single function or set of functions (in Python or Go or Node.js or Java) which run in response to events: an HTTP request, a Pub/Sub event, or (in this case) when a file is uploaded to Cloud Storage.
You can take a look at my Cloud Functions in this file. Here’s an example of how you run a JavaScript function (in this case, analyzeVideo) every time a file is uploaded to YOUR_INPUT_VIDEO_BUCKET:
const functions = require('firebase-functions');
exports.analyzeVideo = functions.storage
.bucket(YOUR_INPUT_VIDEO_BUCKET)
.object()
.onFinalize(async (object) => {
await analyzeVideo(object);
});
Once you’ve installed the Firebase command line tool, you can deploy your functions, which should be written in a file called index.js, to the cloud from the command line by running:
firebase deploy --only functions
I also used Firebase functions to later build a Search HTTP endpoint:
/* Does what it says--takes a userid and a query and returns
relevant video data */
exports.search = functions.https.onCall(async (data, context) => {
if (!context.auth || !context.auth.token.email) {
// Throwing an HttpsError so that the client gets the error details.
throw new functions.https.HttpsError(
'failed-precondition',
'The function must be called while authenticated.',
);
}
const hits = await utils.search(data.text, context.auth.uid);
return {'hits': hits};
});
On line 3, I use functions.https.onCall to register a new Firebase function that's triggered when an HTTPS request is made. On line 4, I check to see if the user that called my HTTPS endpoint is authenticated and has registered with an email address. Authentication is easy to set up with Firebase, and in my project, I’ve enabled it with Google login. On line 12, I call my search function, passing the userid context.auth.uid that Firebase generates when a new user registers and that's passed when they hit my endpoint. On line 13, I return search results.
Quick n’ Easy Search
Next, I needed a way for users to search through my video archive. Because they wouldn’t know exactly what terms to search for or what phrases to use, I needed my search implementation to be smart. It should be tolerant of typos and make good guesses for what users want based on their queries (just like Google search).
I was pretty intimidated by the idea of having to implement a search feature myself (ironic, since I work at Google!), until I stumbled across a Search API built by a company called Algolia. Algolia lets you upload data as json (which is conveniently the data format I had!) and then query that data from your app (or from their console, if you’re debugging). Here’s what the Algolia console looks like:
It deals with typos automatically, as you can see above, where I spell the word “Father’s Day” wrong.
The tool also has a lot of different configuration options. You can adjust which json fields are searchable, how search results should be ranked, how much to tolerate typos, and more:
If you want to see some code samples, take a look at video_archive/functions/algolia.js. Here’s the code for making a search query in Javascript:
exports.search = async function(query, userid) {
const client = algoliasearch(
process.env.ALGOLIA_APPID,
process.env.ALGOLIA_ADMIN_APIKEY,
);
const index = client.initIndex(process.env.ALOGLIA_INDEX);
const res = await index.search(query, {
tagFilters: [userid],
attributesToRetrieve: ['videoId', 'transcript', 'text', 'entity', '_tags'],
});
if (!res.nbHits) return [];
return res.hits
.filter((hit, index) => {
return res.hits.findIndex((h) => h.videoId == hit.videoId) == index;
})
.map((hit) => {
return {videoId: hit['videoId']};
});
};
On line 2, I provide my credentials and create a search client. On line 6, I specify which dataset or “index” I want to search. On line 7, I kick off a search query specifying both query text (e.g. “birthday”), “tags” to filter by (I used tags to associate data with users), and which json fields I’d like to receive. Line 14 looks kind of complicated, but I’m just filtering for duplicate movie ids and formatting a json response.
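That de-duplication step is easier to see on its own. Here's the same pattern run in isolation on a few made-up hits (the data is hypothetical):

```javascript
const hits = [
  {videoId: 'vid1', entity: 'beach'},
  {videoId: 'vid2', entity: 'wedding'},
  {videoId: 'vid1', transcript: 'look at the waves'}, // duplicate videoId
];

// Keep a hit only if it's the FIRST hit with that videoId, then shape the response
const unique = hits
    .filter((hit, index) => hits.findIndex((h) => h.videoId === hit.videoId) === index)
    .map((hit) => ({videoId: hit.videoId}));

console.log(unique); // [ { videoId: 'vid1' }, { videoId: 'vid2' } ]
```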
A Flutter Frontend
Because I’m not a very good frontend developer, I decided to use this project as an excuse to learn Flutter, Google’s new-ish platform for writing code that runs anywhere (web, Android, iOS). Overall I had a lot of fun playing with it and thought styling Flutter apps was way easier than CSS. Here’s the end result:
I just built a web app, not iOS or Android this time.
You can check out all the frontend code in the Flutter folder of the repo, but since I’m new to this, no promises it’s “correct” ;).
So that’s how you build an AI-powered video archive! Questions or comments? Ping me on Twitter! | https://towardsdatascience.com/building-an-ai-powered-searchable-video-archive-a4721a72e126 | ['Dale Markowitz'] | 2020-06-17 16:17:22.501000+00:00 | ['Machine Learning', 'Google Cloud Platform', 'Artificial Intelligence', 'Software Development', 'Video Production'] |
A Tale of Two Countries: Coronavirus Impacts on Energy Demand in China and India | A Tale of Two Countries: Coronavirus Impacts on Energy Demand in China and India
The coronavirus outbreak is impacting energy demand in the world’s most populous countries, China and India, on different timescales
China was the first economy affected by the virus and is now in the process of recovering. Indian energy demand, on the other hand, had until recently remained relatively unscathed but is set to decline steeply after New Delhi imposed a nationwide lockdown.
Figure 1. Kayrros China implied crude demand index (Source: Kayrros)
Implied crude demand in China continued to recover in the last week of March from earlier in the month, ending at slightly over 2 MMb/d from its lows, but well below levels that prevailed before the outbreak.
Chinese crude oil inventories extended earlier gains despite the implied recovery in refining activity, reaching a new record and bringing storage utilization close to 70%.
Kayrros satellite monitoring reveals that Chinese coal-fired power generation also rebounded last week from its earlier plunge and is now close to values seen in 2018 and 2019 at the same time of year. The increase is in line with the restart of various industries, such as cement plants. In Hubei Province, the epicenter of the pandemic, the number of active cement kilns rose between March 12-19, but later remained flat, showing that activity has yet to fully recover.
Activity of cement kilns in Hubei Province (Source: Kayrros)
As another sign of industrial activity uptick, NO2 concentrations over China have increased again.
NO2 Pollution over China (Source: Kayrros)
On March 24, India state oil refiners announced plans to reduce crude runs as domestic oil product demand took a nosedive amid the nationwide lockdown.
Kayrros analysis shows that implied crude demand dropped slightly in the week ended March 29. Crude oil inventories filled, and Kayrros measured a 2.7 MMb build in the country.
Kayrros India implied crude demand index (Source: Kayrros)
Kayrros satellite monitoring of coal power plants in India reveals a sharp drop in power generation since the end of March. Power generated from a sample of fifty-six coal power plants has reached multi-year lows.
This is a sanitized excerpt from a report released to Kayrros subscribers last week. To learn more, reach out to contact@kayrros.com. | https://medium.com/kayrros/a-tale-of-two-countries-coronavirus-impacts-on-energy-demand-in-china-and-india-4a1235429fb3 | [] | 2020-04-09 12:51:52.194000+00:00 | ['Covid 19', 'Business', 'Energy', 'Environment', 'China'] |
Anal Sex Is For (Nearly) Everyone | Comics and articles about sex and relationships. Also at sexedplus.tumblr.com and on Facebook as Sex Positive Education.
| https://medium.com/sexedplus/anal-sex-is-for-nearly-everyone-858c474a62e2 | ['Sexedplus Dan'] | 2018-11-02 17:54:54.351000+00:00 | ['LGBTQ', 'Sex', 'Sexuality', 'Health', 'Comics'] |
Reducing JavaScript Bundle Size. Part One: Measurements and high-level… | Javascript Bundles
Let’s first talk about JavaScript bundles. Bundles usually refer to the JavaScript and CSS bundles sent from the server to the user’s browser. When we discuss them in passing, the primary focus is typically the size of the initial JavaScript bundle sent to the client to initialize the web app, and how that bundle impacts the amount of time before a user can use the app. However, what is the main bundle?
There was a time when we would send a couple of JavaScript files to make our webpage more interactive. More often than not, you would have a CDN link to jQuery, maybe some plugins, and then a main.js which held most of your custom code. This wasn't a hard-and-fast rule, but in this scenario, you only had a single file with relatively little JavaScript. The JavaScript would be sent to the browser with the HTML and CSS when a user made a request to the server. The actual webpage and content were often entirely built on the server before being sent to the client. If the user wanted another page, they made a subsequent request to the server.
That’s not to say minification and size were not essential at the time!
The landscape is much different nowadays. Single Page Applications (SPAs) mean that a user will make a request to a server and download the entire web app. From there, JavaScript will take care of routes, interactions, network requests, and more. That means there’s a ton more JavaScript than ever before. Furthermore, our projects have gone from a couple of .js files to hundreds or thousands. If we were to request all of these individually, it would take an eternity!
Instead, we use tools like Webpack, Parcel, or Rollup to take our files and package them into bundles for distribution. The default strategy of these bundling systems is to create a single bundle out of our .js files, as that takes the least time and the fewest requests to collect the data. Of course, these systems do more than that, but for our purposes, this created bundle is our main bundle. To see this in action, we can create a new create-react-app.
npx create-react-app bundle-test
cd bundle-test
npm run build
We are using create-react-app as it’s a pure boilerplate to visualize changes, without needing to step into the code!
We will get into what these other “chunks” are later in the series; we’re just interested in the main.#.chunk.js. This chunk is the collection of all our components packaged into a single file by webpack. Any new code we write will be added to this chunk and downloaded as a single bundle.
However, why do we care about the size of the bundle? | https://medium.com/better-programming/reducing-js-bundle-size-58dc39c10f9c | ['Denny Scott'] | 2020-03-07 20:49:37.796000+00:00 | ['Webpack', 'JavaScript', 'React', 'Reactjs', 'Ecmascript 6'] |
Cypherium | Cypher-BFT Enables Decentralization for HotStuff | Facebook recently released its Libra white paper and has been under the spotlight of the global media. Libra’s White paper articulates that its mission is to build a worldwide financial infrastructure that is simple, easy to use, bringing blockchain concepts and providing the world’s most in-need populations with a smooth, borderless payment experience.
The consensus protocol it uses is LibraBFT, a new Byzantine fault-tolerant consensus protocol based on another protocol, HotStuff, with some minor changes. For technical reasons, Libra is currently a permissioned blockchain, but Libra’s stated goal is to become permissionless; to that end, Facebook promises to make Libra a permissionless blockchain, similar to Ethereum, within five years.
The HotStuff consensus algorithm has become a renewed focus of attention due to its implementation in Libra. According to HotStuff’s lead scientist, Dahlia Malkhi, only four major projects have adopted HotStuff so far, but more are expected to follow. Cypherium is one of HotStuff’s early adopters. As Cypherium’s chief executive, Sky Guo, and the first author of the HotStuff algorithm paper, Ted Yin, are good friends, the Cypherium team has been paying attention to HotStuff since the algorithm’s V2 version ( https://arxiv.org/abs/1803.05069v2), and the Cypherium dev team began implementing the software as of V5 ( https://arxiv.org/abs/1803.05069v5 ). But vastly different from Facebook in intention and design, Cypherium’s goal has always been to deploy a permissionless, open HotStuff consensus, and with its mainnet it does just that.
The difference between “permissioned” and “permissionless” blockchains has to do with whether any entity can join a blockchain network as a verifier node. In a permissioned blockchain, entities run verifier nodes only after being granted permission, which ensures the network’s stability without requiring much consideration for security; in a permissionless blockchain, any entity that meets the technical requirements can run a verifier node. In permissionless chains like Ethereum, any miner who finds a qualified nonce value can participate in processing transactions and receive a fixed mining reward plus transaction fees. This is conducive to the broad development of the entire ecosystem, but the design of the chain must give more consideration to overall network performance and security. Permissionless blockchains can easily become permissioned blockchains, but the reverse is much more difficult.
Cypherium has implemented HotStuff as a permissionless, dual-chain consensus mechanism. The HotStuff consensus prototype keeps verifier identities within a fixed set, and on top of that, LibraBFT is built on a PoS mechanism. LibraBFT adds practical mechanisms when applying HotStuff to the blockchain use case, such as introducing the concept of epochs, allowing consensus node replacement, adding incentive and penalty mechanisms, and, in order to prevent the leader from being attacked, introducing an unpredictable leader election mechanism (VRF).
The Cypherium team initially considered using a PoS system in its HotStuff implementation as well, but the devs later found that a tightly closed PoS mechanism is bad for ecosystem development: such a structure easily attracts hacker attacks, is only suitable for permissioned chains, and cannot properly achieve the ultimate goal of Cypherium’s permissionless public chain. So the team developed its own unique design, adopting a PoW + HotStuff hybrid consensus mechanism.
When Cypherium CEO Sky Guo was considering which consensus algorithm to use for the underlying public chain design, his friend Maofan “Ted” Yin, the first author of the HotStuff paper, introduced him to the HotStuff protocol. After conducting an in-depth study of HotStuff, Sky discussed its design and performance with Ted.
Finally, Sky Guo and the Cypherium dev team decided this would be the ideal replacement for the PBFT algorithm, as they found that HotStuff could achieve their goals through Cypherium’s unique double-chain architecture, consisting of an election chain and a transaction chain. In this architecture, one chain handles the electoral rotation operations of the PoW mechanism, while the other is a chain of transactions processed by the current verifier nodes of the consensus committee.
Because Cypherium has chosen a PoW mechanism in its first step, any computing device can become a verifier node and does not depend on a trusted third party to mine Cypher coins.
Whenever a miner successfully mines a PoW, the node that has participated longest in the verifier committee leaves the committee, and the new miner becomes a member. No one can predict the election results, so the network achieves a permanent dynamic rotation.
Each committee rotation is generated as shown below, to form a Key chain:
Figure 1
Because the dynamic adjustment of the computational difficulty makes the time needed to find a valid PoW value unpredictable, we define the Key chain as a slow chain (a block is generated every few minutes on average), and only consensus committee or leader replacement events get recorded on it.
After a Key block is confirmed, the consensus verifier nodes belonging to that Key block participate, under the Leader, in the consensus processing of transaction packages, forming the views V1, V2, V3, V4, V5 shown in the following figure:
Figure 2
These views are connected to form another chain, which we define as the transaction chain. Since no PoW calculation is needed at this step, the confirmation of each view only requires the consensus signatures of the consensus committee, so blocks are produced very quickly, processing up to 6,000 transactions per second.
It is worth noting that the Key chain and the transaction chain mentioned above are actually a single, continuous chain in the HotStuff algorithm, as shown below:
Figure 3
When a normal node mines a PoW, it broadcasts to all the current verifier nodes, and the Leader of V2 initiates a View Change, sending a Cmd with type=key (corresponding to the Cmd part in Figure 2) to indicate that a View Change for the verifier nodes has been requested. When the DECIDE step in Figure 2 finishes, a new View is generated. The View needs to record the identity information of all the verifier nodes of the current consensus committee (such as public key, IP, etc.). To save storage space and make lookups easy, this information is saved as a separate Key blockchain and also recorded on the transaction chain; the transaction chain uses Cmd type=tx to differentiate it from the Key blockchain.
The verifiers receive transactions from clients and share them with each other through a shared txpool protocol. In each round of transaction grouping, one verifier plays the leading role (the Leader) and proposes to extend the block sequence, which contains the complete previous transaction history, with a new transaction block.
Each verifier receives the proposed block and checks its voting rules to determine whether it should vote for the block. If it intends to vote for the block, it executes the block's transactions without external influence. The verifier then sends its vote on the block, together with its state, to the Leader. After the Leader collects the voting results from all verifiers, it generates a quorum certificate (QC).
Figure 4
The QC is the proof that 2f+1 votes were obtained for this block, and the QC together with the block data is broadcast to all verifiers. When the continuous three-chain commit rule is satisfied, the block is committed: if there is a QC at round k, and rounds k+1 and k+2 each have a confirmed block and QC, the block at round k is committed.
The commit rule ultimately allows an honest verifier to commit a block, and the Cypherium chain ensures that all honest verifiers will eventually commit it. Once a block sequence has been committed, the state generated by executing its transactions persists and forms a replicated database.
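As a rough illustration (a hypothetical sketch in JavaScript, not Cypherium's actual code), the quorum and three-chain commit checks described above boil down to:

```javascript
// Hypothetical sketch of the rules above; real nodes track blocks,
// views and signatures, not bare round numbers.

// A quorum certificate (QC) needs 2f+1 votes out of n = 3f+1 verifiers,
// where f is the number of Byzantine (faulty) nodes tolerated.
function hasQuorum(votes, f) {
  return votes >= 2 * f + 1;
}

// Three-chain commit rule: the block at round k commits once QCs exist
// for rounds k, k+1 and k+2.
function isCommitted(k, qcRounds) {
  return qcRounds.has(k) && qcRounds.has(k + 1) && qcRounds.has(k + 2);
}
```

For example, with f = 1 (four verifiers total), three votes form a quorum, and a block proposed at round 5 commits once QCs for rounds 5, 6 and 7 all exist.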
Each verifier node has a built-in PaceMaker that records the time difference between each step (prepare, pre-commit, commit, and decide). If a timeout occurs, a view change request is immediately sent to the new Leader; if the new Leader does not respond, it is sent to the next Leader.
The traditional PBFT view change has O(n²) message complexity: before the view change happens, all honest verifier nodes must confirm that all other honest nodes will indeed proceed to the next View.
HotStuff very innovatively turned PBFT's classic two rounds of consensus into three, reducing the cost of a view change to O(n). A view change no longer needs to wait for the "I know other people also know about the view change" layer, so the message complexity drops from O(n²) to O(n): as long as an honest node sends a view change request directly to the new Leader and receives the new Leader's feedback, the view change can begin.
After the view change is completed, the Cypherium chain records the results of the corresponding QC and change, forms a new Key block, and continues the process described above under the new consensus committee.
How To Run Next.js App With Nodejs API on Minikube | Create a Deployment and Service Objects
A pod is a group of one or more containers that share storage and network, and it has a specification for how to run the containers. You can check the pod documentation here.
Let’s create a pod with the file below. Before that, you need to start Minikube on your local machine with the command minikube start, and then create the pod with kubectl create -f pod.yml
pod.yml
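The embedded pod.yml gist isn't visible in this extract. A minimal spec consistent with the surrounding commands might look like this (the container image name and port are placeholders; the pod name webapp matches the exec command used later):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  containers:
    - name: webapp
      # placeholder image: replace with your own Next.js + Node API image
      image: your-dockerhub-user/nextjs-node-api:latest
      ports:
        - containerPort: 3000   # assumes Next's default port
```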
It takes some time to pull the image from the Docker Hub if you are doing it the first time or depending on the image size. Now you can see the pod is in the running status and exec into it to explore the file structure, etc.
// get the pod
kubectl get po

// exec into the running pod
kubectl exec webapp -it -- /bin/sh
kubectl get pod
Deployment
Creating just one pod is not enough. What if you want to scale out the application and run 10 replicas at the same time? What if you want to change the number of replicas depending on demand? That's where the Deployment comes into the picture. We specify the desired state in the Deployment object, such as how many replicas you want to run.
Kubernetes makes sure that it always meets the desired state. It creates replica sets, which in turn create pods in the background. Let’s create a Deployment for our project with the command kubectl create -f deployment.yml
deployment.yml
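The deployment.yml gist isn't visible here either. Given the 5 replicas discussed in this section, a sketch along these lines would match (labels, image, and port are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 5                 # the desired state: 5 pods at all times
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          # placeholder image: replace with your own
          image: your-dockerhub-user/nextjs-node-api:latest
          ports:
            - containerPort: 3000
```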
We have 5 replicas in the specification and the deployment creates 5 pods and 1 replica set.
Deployment
Service
Service is an abstract way to expose an application running on a set of Pods as a network service. Let’s create a service with the type NodePort so that we can access the Next.js app from the browser. Here is the service object YAML:
service.yml
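The service.yml gist isn't visible in this extract. A NodePort service matching the description could look like the following (the port numbers are assumptions; nodePort values must fall in Kubernetes' default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort          # exposes the app on a port of the Minikube node
  selector:
    app: webapp           # routes traffic to pods carrying this label
  ports:
    - port: 80            # service port inside the cluster (assumed)
      targetPort: 3000    # container port (assumed Next.js default)
      nodePort: 32000     # example node port
```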
Create the service with the command kubectl create -f service.yml and list it with kubectl get svc
Service
You can create one file called manifest.yml to place all the Kubernetes objects in one place and create all of the objects with one command kubectl create -f manifest.yml | https://medium.com/bb-tutorials-and-thoughts/how-to-run-next-js-app-with-nodejs-api-on-minikube-66b22ae8e589 | ['Bhargav Bachina'] | 2020-12-19 06:02:01.506000+00:00 | ['Programming', 'Docker', 'Web Development', 'JavaScript', 'Kubernetes'] |
The Revealing Module Pattern in Specific Example | The Revealing Module Pattern in Specific Example
The last part of the Customizing Google charts series
NOTE: This article contains snippets of code from a project I worked on in 2016! It uses jQuery and the Google Charts API, which could have changed since.
In the first part of this series, we created a Treemap using the getBoundingClientRect() method. In the 2nd part we created a custom Line Chart legend. We are going to put these 2 charts together utilizing the revealing module pattern (to put it simply, to make our code cleaner and easier to maintain). When we interact with, for example, the Line chart, it is also reflected on the Treemap, and vice versa. Here is what it looks like:
Interconnected charts
Even though we display only 2 charts, our codebase got more complex and we have to consider possibility of expanding it. Therefore we should also consider applying a design pattern to our code that will prevent us from writing some spaghetti code.
Photo by Sheila Joy on Unsplash
Design patterns are reusable solutions to commonly occurring problems in software design
What we had so far (in the previous demo1 and demo2) was basically just a bunch of function declarations one after another. Something like this:
(function () {
// some global variables here
function drawTreemap() {...}
function drawLineChart() {...}
function updateLineChartCols(elem, firstClick) {...}
function setColor(data, perc) {...}
// and more functions
})(jQuery);
Nothing wrong about it, but we might face some problems with sharing some part of code due to function scope. We can organize the code better using modules for example. There are several ways of implementing modules in JS:
The Module pattern
Object literal notation
AMD modules
CommonJS modules
native ES modules
Nowadays modern browsers support native modules, using the import and export keywords, but back in 2016, working with jQuery, I opted for the revealing module pattern (a modified version of the module pattern).
With the module pattern, only a public API is returned, keeping everything else within the closure private. The pattern utilizes an immediately-invoked function expression (IIFE) where an object is returned.
To read more about the module pattern I recommend this section from Addy’s Osmani book Learning JavaScript Design Patterns.
Let’s clarify the jargon before we get to the specific example.
What is an IIFE?
An immediately invoked function expression is, as the name suggests, a function that executes itself immediately. It can take one of two forms:
(function(){...})()
or
(function(){...}())
We can pass in any argument in the ‘invocation part’ of the code and use it as a parameter with a different name within the function:
(function IIFE($){...})(jQuery)
In this example we pass in the jQuery object and use it as $ inside the function. An IIFE allows us to hide the enclosed variables or functions from the outside scope and thus avoid collisions between different identifiers.
The Revealing Module Pattern
Let’s first see what our code looks like after implementation. Demo on Stackblitz
(function () {

  function drawTreemap() {
    ...
    var cellColor = api.setColor(tableData, false);
  }

  function drawLineChart() {
    ...
    var data = api.formatDataForLineChart(json);
  }

  // the revealing module pattern starts here with an IIFE
  var api = (function () {

    function updateLineChartCols(elem, firstClick) {...}

    function setColor(data, perc) {...}

    function formatDataForLineChart(data) {...}

    return {
      updateLineChartCols: updateLineChartCols,
      setColor: setColor,
      formatDataForLineChart: formatDataForLineChart
    };
  })();

})(jQuery); // the outer IIFE, with jQuery passed in
We define all of our functions and variables in the private scope and return an anonymous object with pointers to the private functionality we wished to reveal as public.
The code is more organized, maintainable and readable. Notice how we can define the functions in one place and call them outside of that private scope wherever we need them, thus also solving the scope issue. We access them as object properties, so we could also name them differently in the return statement.
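Stripped of the charts code, the same idea fits in a few lines. Here is a minimal, jQuery-free sketch; the counter module is a hypothetical example, chosen only to show private state versus the revealed API:

```javascript
// Minimal revealing module pattern: private state, public API.
const counter = (function () {
  let count = 0; // private: unreachable from outside the closure

  function increment() {
    count += 1;
    return count;
  }

  function reset() {
    count = 0;
  }

  // Only what we list here is revealed; `count` itself stays hidden.
  return {
    increment: increment,
    reset: reset
  };
})();

console.log(counter.increment()); // 1
console.log(counter.increment()); // 2
console.log(typeof counter.count); // "undefined" because count is private
```

Nothing outside the IIFE can read or corrupt count directly; the only way in is through the revealed functions.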
Conclusion
There are modern frameworks that use native ES modules, and a component-based approach is the way to go now. This is awesome, but if you still write or maintain vanilla JS or jQuery code, this pattern might come in handy, and not only this pattern or in those scenarios. In every app's development there comes a day when something doesn't work as expected and you don't know why. Understanding JavaScript helps, and design patterns are part of it.
Thank you for reading! | https://medium.com/swlh/the-revealing-module-pattern-in-specific-example-cb94b647b428 | ['Miroslav Šlapka'] | 2020-12-15 11:02:38.654000+00:00 | ['Programming', 'JavaScript', 'Web', 'Design Patterns', 'Web Development'] |
The value of monitoring soil | A lot comes down to taking a good soil sample — even remote sensing and modeling approaches rely on collecting physical samples. Good sampling design calls for samples that reflect the unique attributes of a particular site, including the soil type, land use, vegetation, and climate present.
Today, there’s no universal standard for doing this. Approaches to sampling and analysis differ by country and platform, making it a challenge to draw comparisons and manage our global soil resource coherently.
Table 4 from the Smith et al. paper summarizes the models commonly used to estimate greenhouse gas emissions and potential for carbon (“C”) sequestration in agricultural soils, but both the models and where they are applied are limited in geographic and scientific scope.
The challenges ahead
Methods
Though the world is finally turning overdue attention to carbon removal and the untapped potential for carbon sequestration in agricultural soils, the methods that underpin our ability to do so reliably and cost-effectively are lagging behind. We need tools that allow us to measure SOC non-destructively (as in, without having to dig a hole) and model how different agricultural practices affect soil carbon. The dream would be to have the ability to scan a sample in Montana, then instantly add it to a global library of soil samples, where it can be referenced across a database to help improve future predictions of soil SOC in similar contexts.
Geographic bias
In order to harness the power of soils to draw down atmospheric carbon, we need to expand soil carbon monitoring across the globe. Today, many regions lack both the long-term monitoring and the infrastructure needed to adequately quantify and track soil carbon stock changes across time. Long-term experiments are also limited in geographic scope, restricting the broad applicability of the results to mostly temperate ecosystems. Ramping up capacity and expanding soil monitoring across the globe is critical.
Variation
As soil scientist Tony Hartshorn (Montana State University) likes to say, soils are sneaky — you can walk a few meters and find yourself on a different soil type, with inherently different properties and soil carbon sequestration capacities. That means we need to collect soil samples from a lot of spots to get a better sense of how soil carbon inherently varies and how different management approaches affect it.
And we need to do these assessments frequently enough that we capture the “good” and “bad” years, so farmers and ranchers are not penalized for reduced carbon sequestration rates during a drought year, for example.
Impatience
Soil C gains can be slow and small enough from one year to the next to be undetectable. That means we need to think about MRV protocols that take the speed of carbon accrual into consideration, and payment schemes that support farmers through the 3–5 years it takes for soil carbon to accumulate at measurable rates.
Diversity of stakeholders
Many groups rely on soil MRV today, and many more groups will need to in the future as natural climate solutions become more mainstream. The groups that need to accurately measure and track soil carbon include:
Farmers and ranchers who can make management decisions to optimize soil carbon sequestration
Businesses that rely on agricultural products and have made climate commitments, including building soil health and maximizing soil carbon sequestration
Government entities (at the local, regional, national, and international scales) that have expressed greenhouse emission reductions goals that include carbon drawdown by plants and soil or that regulate emissions
Scientists who are improving modeling approaches and developing new tools for soil carbon measurement and verification
Policymakers developing incentive structures that rely on trusted and accurate measurement of soil carbon
Philanthropies that are building strategies around carbon removal and need to understand both the challenges and opportunities of soil carbon sequestration.
So, what comes next?
We have a long way to go to address the challenges in front of us and make the most of soil carbon sequestration’s potential. Luckily, we’re not starting from scratch. We know how to measure soil organic carbon, we have modeling platforms that are getting better and better every day, and we have a growing community of farmers and ranchers who are excited to test out new agricultural practices and track soil changes.
In their paper, Pete Smith and his colleagues are calling for a global soil MRV platform — one that includes benchmark demonstration sites on representative soils across the globe, in conjunction with models that can take the data from benchmark sites and project changes in soil carbon into the future. A global soil monitoring system of this scale will require international cooperation, capacity-building, and technology transfer, often from countries with ample resources to under-resourced countries. It will require collaboration and ambition that matches the scope of the climate change challenge.
Smith et al. also advocate for long-term experiments that can serve as platforms to track soil carbon change, verify models, and provide testbeds for new ways to manage for soil carbon sequestration. Official MRV processes can be time-consuming and costly, so Smith et al. suggest self-reported soil carbon sequestration activities be spot-checked and added into the data pool, both to bolster models and help assess uncertainties in soil carbon measurements or predictions. By lowering the barrier to entry, more folks can participate, add their data, and help scale soil carbon sequestration in agriculture.
As we plan ahead, we should prioritize working towards congruence across metrics of soil health and carbon sequestration that can be applied across a range of geographies, agricultural practices, and jurisdictions. We also need to develop and benchmark tools and metrics that are easy to use and inexpensive.
And we should remember — despite clear opportunities to improve how we measure soils, carbon sequestration in agricultural systems is ready for prime time today. The farming practices are not new, not reliant on inventing a specific technology, and can be implemented on millions of acres across the world. We just need the political will and scientific support to make it real. | https://carbon180.medium.com/the-value-of-monitoring-soil-d45b0b5ca33c | [] | 2019-10-08 18:08:10.531000+00:00 | ['Research', 'Agriculture', 'Climate Change', 'Soil Health', 'Science'] |
Mocking network calls in Swift | Mocking URLSession through different methods
We want to test our code without actually downloading anything, we need to set up a mock for URLSession .
Subclassing URLSession
Our first attempt at this mock has two parts: URLSessionDataTaskMock and URLSessionMock. URLSession itself is subclassed, which means we can override the dataTask functionality.
The URLSessionDataTaskMock holds a closure it has been given; when resume() is called, it simply executes that closure.
The URLSessionMock is a class that returns that URLSessionDataTaskMock rather than the usual URLSessionDataTask (which would normally load the downloaded data into the app’s memory).
As a result, when we then call this with the following test, we inject some data (so there is something for URLSessionDataTaskMock to return). URLSessionMock() can be injected into the downloadData function without any casting, since URLSessionMock is actually a URLSession.
Now when the test is run, we don’t have to wait for the network to return. It obviously isn’t quite instant, but on my machine I could reduce the timeout to 0.1 within testUsingSimpleMock.
This code is contained within the repo towards the bottom of the page, and within that repo this code is in a NaiveMockNetworkCalls folder.
The problem with subclassing
Subclassing URLSession and URLSessionDataTask means that we are exposed to any changes that Apple might make to these classes, and (as you have seen) quite a lot of code is required for just some mocking.
Failed Attempt 1: Mock URLSession, conforming to a protocol and Mock DataTaskMock
Our second attempt at mocking here requires that we conform to a protocol.
protocol URLSessionProtocol {
func dataTask(with url: URL, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void) -> URLSessionDataTask
}
So we need to extend URLSession to conform to the protocol
extension URLSession: URLSessionProtocol {}
Then we create the URLSession as before, however we need to mock the data task too where it conforms to the following URLSessionDataTaskProtocol
protocol URLSessionDataTaskProtocol {
func resume()
}
so URLSessionDataTaskMock is as follows:
Now the issue is that URLSessionDataTaskMock does not have the correct type for URLSessionProtocol: dataTask needs to return a URLSessionDataTask, and the cast above always fails.
Method 2: Mock URLSession, subclass DataTask
One (working) solution is to mock URLSession but to subclass URLSessionDataTask. This allows us to create a skinny DataTaskMock that simply overrides resume with an empty function body.
This code is contained within the repo towards the bottom of the page, and within that repo this code is in a MockNetworkCalls folder.
Method 3: Mock URLSession, Mock DataTask
We can mock both URLSession and DataTask by using similar techniques to the one above.
Now we change the URLSessionProtocol to return an associated type
protocol URLSessionProtocol {
associatedtype dta: URLSessionDataTaskProtocol

func dataTask(with url: URL, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void) -> dta
}
where URLSessionDataTaskProtocol is as before
protocol URLSessionDataTaskProtocol {
func resume()
}
and the extensions are written the same as before above.
We have to use a generic constraint in our function signature as below
Our mock class is then represented by the following:
which is then tested with exactly the same test as before (full code is in the repo at the link just under this code):
class MockNetworkCallsTests: XCTestCase {
    func testUsingSimpleMock() {
        let mockSession = URLSessionMock()
        mockSession.data = "testData".data(using: .ascii)
        let exp = expectation(description: "Loading URL")
        let vc = ViewController()
        vc.downloadData(mockSession, completionBlock: { data in
            exp.fulfill()
        })
        waitForExpectations(timeout: 0.1)
    }
}
This code is contained within the repo towards the bottom of the page, and within that repo this code is in a FullMockNetworkCalls folder. | https://stevenpcurtis.medium.com/mocking-network-calls-in-swift-ad04b59e79 | ['Steven Curtis'] | 2020-04-29 08:02:00.189000+00:00 | ['Mac', 'Software Engineering', 'Swift', 'Programming', 'Software Development'] |
Technical solution or creative thinking? A Y-DATA graduate on why dealing with data can be so exciting | Technical solution or creative thinking? A Y-DATA graduate on why dealing with data can be so exciting Lyoka Ledenyova Follow Dec 17 · 6 min read
Meet our hero of the day: Pini Koplovitch, a talented scientist who recently graduated from our Y-DATA program. Pini’s first degree was centered on what the brain does and how it works; he studied both Cognitive Sciences and Biology at the Hebrew University of Jerusalem. After completing his BSc he proceeded directly to a PhD in Computational Neuroscience with an emphasis on the neuronal basis of neuropathic pain. In his study Pini was developing a genetically engineered biopump as a novel treatment for neuropathic pain, and the initial results of his research were found to be very promising. Unfortunately, during the subsequent phase he faced technical obstacles that did not allow more complex experiments or the continuation of the study. After careful consideration Pini decided to leave academic life, but not before receiving his MSc and publishing the PoC results in the leading journal in the field. During his study Pini had already started using clustering algorithms to semi-automatically analyze vast amounts of neuronal activity, which helped him conduct many more experiments and increased the overall efficiency of the practical study.
“It’s important to be able to recognize when things are working and when they aren’t, to choose a correct path, as one will oftentimes find themselves on the crossroads in this regard, at least as far as research is concerned. It’s important to understand how things work internally on some level at least, the math and the statistics of it. A data scientist needs skills from both the practical and the research worlds. Many experiments can now be performed due to us having stronger tools in genetics, which allow us to take a closer look at the genes themselves. There are more advanced statistical methods of data science we can use to provide correct analysis. Data science does help a lot, but in the life science field people are still going to find themselves in need to conduct experiments, use live animals for them to see how one subject affects the others, etc. Both sides are going to be required moving further, and the best case scenario would be a scientist well versed in both the experimental and the analytical work.”
After leaving the Edmond & Lily Safra Center for Brain Sciences at the Hebrew University of Jerusalem Pini decided to focus on getting a firmer grasp of data science as a whole in order to positively impact people’s lives. He’s still eager to work in brain science or in the medical field to continue helping sick and diseased people.
“Data science is a tool, and I’m eager to use it in my professional work. I also just enjoy researching things, as I did with my PhD, even though I dropped it for many reasons. I used to work with animal research a lot, and I do want to experiment and discover things of course, but let someone else do the dissecting and I will focus on numbers. Finding solutions is something that excites me the most. I see data science as a tool to achieve said solutions. One can always use data science, but some problems don’t require fancy algorithms to get solved. Heavy machinery is not always the answer.”
Academic work taught Pini how to handle a lot of data, ask the right questions and find the relevant solutions to the problems at hand. He was twice on the Dean’s List, participated in “ETGAR” Honor’s Program and completed Y-DATA program with outstanding success and honors. The Y-DATA industry project he did together with Urska Jelercic for Healthy.io was chosen for distinction.
“When it comes to the Y-Data project, the main challenge we probably faced was working with the small and under-representative set of images the company provided us to work with. Instead of trying to squeeze better performance (and probably overfit) we decided to focus more on error analysis and to better understand which are the ‘hard’ images and what make them hard. This led to probably the main insight we delivered — which classes of images the company should focus when collecting more images. The second insight was that the architecture they encouraged us to work with is overkill for this problem (or at least for the current dataset) and much lighter architectures perform the same. The biggest excitement for me was the fact that we were solving a real life problem. Discussing things with other students and mentors during the project was inspiring, as every now and then I would get new insights or get introduced to new ways of thinking in regards to the problems we needed to solve.”
It seems like there is nothing that excites Pini more than understanding that his work can be beneficial for medical science in general and people’s lives in particular, and still he’s not limiting himself and is ready to test his “good student” skills in new fields as long as they involve solving problems and tackling new challenges. Despite the many honors Pini has received, he comes across as extremely modest and unassuming. He knows his strong sides but doesn’t want to brag about them. It doesn’t take long, though, to understand that he’s a curious and intelligent hard worker who’s ready to dedicate all his free time to something he’s passionate about.
“Something to wake up in the morning for, something to be passionate about — is enough for me. A perfect day is the day when I am able to solve a big chunk of a problem at hand and to advance towards the end goal. As long as there is even a little progress, I feel satisfied. I am a curious person. Even the most mundane questions spark the will to research in me, this is what inspires me. I find interest in finding something new, something that no-one has ever done or seen before, and hopefully applying that something to solve a relevant issue. When the goal’s been reached there’s nothing interesting left. You don’t ever want to reach that end unless you can find something new to strive for.”
Pini gives the impression of being an independent scientific figure that can deal with everything on his own, but when the talk turns to career goals and job perspectives, he makes a surprising statement.
“One does need to socialize. One also needs to consult with colleagues, share insights, etc. In my PhD studies I had my own project, but I did have people I could consult with. I do want to work with people who tackle similar or related problems so that we can share insights and advice, better understand the tasks we are given and increase our efficiency. It’s important to work alongside people, not necessarily with them, to be able to do our job better. Two people cannot drive a car simultaneously, but the end goal is to reach the same destination with different cars with everyone pulling their weight.”
Pini believes that intuition comes with knowledge and experience and that to be a good data scientist one also requires creative thinking. Yet there doesn’t seem to be much in the world that couldn’t be explained from Pini’s rational point of view.
“I guess I am analytical because I actually know how the brain works. It removes most of the magic of how we see, think and understand, stultifies us questioning if our decisions have any meaning, or if we have free will at all. Alternatively, this knowledge poses new questions on its own accord, makes me think of how these things actually happen, how they work “under the hood”. We are physical entities in the end, and there is nothing metaphysical that can influence the physical. Even if we feel like we possess free will, there are physical processes that made us do whatever it is that we did. In parallel, we have got this notion that we decided to do something, but it only comes after the process of doing something has already begun. It’s not that there is a thought of initiating something and then the initiation of that something, but rather there’s doing something and the emerging feeling you get from some parallel channel that says “it’s me, I wanted to do it!”. Most people don’t like this way of thinking, though. I still enjoy thinking and believing that I do influence my own decisions. This is what keeps us moving in our day-to-day lives.”
Pini is open for job offers in data science positions. Feel free to reach out to him on his Linkedin.

By Lyoka Ledenyova, 2020-12-17. Source: https://medium.com/y-data-stories/technical-solution-or-creative-thinking-28fc1189894e
Go — Composite Pattern

Prerequisite
Before reading this article, make sure you have a good understanding of Go Interface.
Introduction
Composite Pattern falls under Structural Design Pattern. Based on the book Design Patterns [Gamma et al.], the intent of this pattern is:
Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly. — Design Patterns [Gamma et al.] p. 163
This pattern is effective for dynamically composing a structure of objects. The easiest way to describe it is by imagining a set of objects composing a hierarchy. For example, it can be the hierarchy of an organization.
In this article you will learn:
How to implement a basic composite pattern to build an organizational hierarchy

How to reuse the composite pattern for other hierarchy problems
Implementation
We start this journey with our fictional friend Kant, a programmer at an online shop company. He has been tasked with creating the organizational hierarchy of the company.
Kant starts by writing the Component interface; this is the foundation of his code:
Then, he continues to create the Employee model, which implements the Component interface:

For simplicity, we will use console prints and simplify the input by directly hard-coding the values. We can also ignore the (result, error) success/error pattern that is usually used in Go.
After Kant wrote the Employee model, he starts to write the Composite struct. This struct becomes the base for the hierarchy definition. Kant realized that the organizational hierarchy can be treated like a Tree data structure, so he creates the logic to add and display people in the organization, as shown below:
Let’s take a breath for a moment because I will explain what Kant did there.
type Composite struct {
    component Component
    leaves    []Component
}
As we saw, the Composite struct has two attributes:

component: the root of the Tree / the head of the organization

leaves: the leaves of the Tree / the subordinates of the organization
Also, pay attention to this code:
func (c Composite) Display() {...
That code makes the Composite struct implement the Component interface.
Kant also realized that in the company where he works there are many divisions, and each division might have sub-divisions.
Company Hierarchy in a simple way
So he starts to build the composition of the company hierarchy as shown below:
What Kant did:
Build the hierarchy of office of VP of Idealist
Build the hierarchy of office of VP of Realists
Build the hierarchy of the company, by adding the CEO and the two offices of the Vice Presidents
As we remember, in composite.go we have Display():
func (c Composite) Display() {
    c.component.Display()
    if len(c.leaves) == 0 {
        return
    }
    fmt.Println("===List of Subordinates===")
    for _, leaf := range c.leaves {
        leaf.Display()
    }
    fmt.Println("===End===")
}
So, that method will print the component and then iterate over and print the leaves of:
VP of Idealist
VP of Realist
CEO (company level)
If we run composite.ShowEmployeeHierarchy() in main.go, the result will be:
Office of VP of Idealist:
=====Display Office of VP of Idealist=====
ID: ID-2
Name: Plato
===List of Subordinates===
ID: ID-4
Name: Hegel
===End===
Office of VP of Realist:
=====Display Office of VP of Realist=====
ID: ID-3
Name: Aristotle
===List of Subordinates===
ID: ID-5
Name: Hume
===End===
Office of CEO:
=====Display Company=====
ID: ID-1
Name: Socrates
===List of Subordinates===
ID: ID-3
Name: Aristotle
===List of Subordinates===
ID: ID-5
Name: Hume
===End===
ID: ID-2
Name: Plato
===List of Subordinates===
ID: ID-4
Name: Hegel
===End===
===End===
Reusability
After Kant finished his task, his PM came to him and asked whether he could build the shopping item catalogue in a hierarchical fashion. The deadline is in 30 minutes. Some programmers might protest such a deadline. But because this is Kant, our beloved programmer, he just nodded and turned back to his computer with a smile.
As we remember, Kant had already created composite.go, which is a solid foundation for solving the shopping item catalogue problem.

The only thing he needs to do is compose the product catalogue. Let’s abstract this as follows:
ItemCategory model

This model implements the Component interface.

ShoppingItem model

This model implements the Component interface.

Composing the Shopping catalogue

This code is very straightforward: it simply reuses composite.go.
Then, before the 30 minutes were up, Kant called his PM. His PM and the team applauded him for his quickness in accomplishing a new task.
Conclusion
After observing what Kant did, we can see the benefits of implementing the Composite design pattern.

We get adaptability: the code has a clear path to define the hierarchy. By utilizing the interface, we can hide the implementation details of the code to make it adaptive to changes.

We get reusability: the code can be reused for another problem, as long as that problem has a similar pattern.

We get agility, which is an important point for software development. Having adaptive code and reusable components gives us an advantage over time, by avoiding reinventing the wheel every time a new feature shares a common problem with what we built yesterday.
The Composite pattern can be combined with the Decorator pattern.
When decorators and composites are used together, they will usually have a common parent class. So decorators will have to support the Component interface with operations like Add, Remove, and GetChild. — Design Patterns [Gamma et al.] p. 173
The benefit of using design patterns is that we can combine one design with another, and this compounds the benefits in adaptability, reusability, and agility.

By defining fine-grained code and reusable components, team productivity can increase exponentially when creating new features for the benefit of the company.

By Haluan Mohammad Irsad, 2020-07-07. Source: https://medium.com/dev-genius/go-composite-pattern-393caaa0b105
Game Changer: Surgery For Blocked Arteries May Not Actually Be Necessary
Newly Released Research Upends Years of Clinical Practice
Photo by Piron Guillaume on Unsplash
Someone has an abnormal stress test, indicating a lack of blood flow to the parts of the heart under stress. Oftentimes, the next step recommended by the cardiologist is a cardiac catheterization, or angiogram. This is a picture of the arteries that supply blood to the heart. If there is a blockage, then something is done to fix it.

This coronary intervention, as it’s called in the business, is either a stent — a wire scaffolding that keeps the artery open — or, sometimes, open heart surgery. It’s been accepted practice for a long time. As an ICU specialist, I frequently see such patients in my ICU after their procedures. Yet, was it really necessary? New research seems to have said, “Perhaps not.”
In brand new research presented at the annual meeting of the American Heart Association in Philadelphia, researchers said:
The trial showed that heart procedures added to taking medicines and making lifestyle changes did not reduce the overall rate of heart attack or death compared with medicines and lifestyle changes alone.
The same is true with those patients with chronic kidney disease.
The study was large (over 5000 patients were enrolled), randomized (the scientific gold standard), very well-conducted, and enrolled patients in several countries. The results were eagerly awaited, and they are truly a game changer. According to an article in The NY Times, “the nation could save more than $775 million a year by not giving stents to the 31,000 patients who get the devices even though they have no chest pain…”
Is a stent necessary after an abnormal stress test? New research says, “Perhaps not.”
Now, there were important caveats to these study results. These results do not apply to those patients having an actual heart attack. Evidence clearly shows that stents save lives in acute heart attacks. The results of this study only apply to those with stable symptoms. In fact, as researchers said, “The more chest pain to begin with, the more symptoms improved after getting a stent or bypass surgery.”
Now, were all those thousands of patients that received stents in the past mistreated? No, of course not. This study didn’t show that those randomized to the intervention arm suffered untoward harm.
At the same time, these results should give both doctors and patients pause when interpreting the results of an abnormal stress test. While the studies demonstrated that cardiac catheterizations are exceedingly safe, they are still invasive procedures that are not without possible complications. And so, if medications alone, without a surgical procedure, can achieve the same results, this needs to be discussed thoroughly with your doctor.
What is most interesting to me is what the community of heart doctors will do in response to this new research. Indeed, as The NY Times article mentioned, “This is far from the first study to suggest that stents and bypass are overused. But previous results have not deterred doctors, who have called earlier research on the subject inconclusive and the design of the trials flawed.” I don’t think the same criticisms can be lodged against this trial.
These sorts of studies are always welcome. It is important that we physicians make sure that the treatments we recommend to our patients are, as much as possible, backed by solid scientific evidence. It is part of the constant process of learning, especially as new technology develops in medicine. If some patients can be spared an invasive procedure while achieving the same good outcome, this should make both doctors and patients happy.

By Dr. Hesham A. Hassaballa, 2019-11-18. Source: https://drhassaballa.medium.com/game-changer-surgery-for-blocked-arteries-may-not-actually-be-necessary-271ccb1cfcd0
Useful scikit-learn tips-2

Photo by Bonnie Kittle on Unsplash
This blog post covers yet another set of tips from Kevin Markham’s useful scikit-learn tips series. Have you ever felt confused about the fit and transform methods? Have you ever found yourself applying fit_transform on the test data? Are you familiar with the advantages of using sklearn’s pre-processing pipeline even though pandas is your favourite choice for data wrangling? Need a refresher on encoding categorical variables? Read ahead to find out more…
Understanding fit, transform, and fit_transform methods
The “fit” method is used when the transformer learns something about the data; the “transform” method is used to apply the learned transformation to the data. The following are some examples, explaining what the “fit” and “transform” methods do in each case:
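The original post showed these examples as an image, which is missing from this export. A small sketch with StandardScaler makes the split concrete: fit learns the per-column mean and standard deviation, and transform applies that standardization:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0]])

scaler = StandardScaler()
scaler.fit(X)                    # "fit": learn the mean and std from the data
print(scaler.mean_)              # [2.]
print(scaler.scale_)             # population std of [1, 2, 3], about 0.8165

X_scaled = scaler.transform(X)   # "transform": apply the learned scaling
print(X_scaled.ravel())          # about [-1.2247, 0, 1.2247]
```

Calling transform before fit raises an error, because there is nothing learned to apply yet.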
Using fit_transform
Instead of calling the fit and transform methods separately, we might as well call the fit_transform method; but we should be sure to use fit_transform only on the training data and use only the transform method on the test data. If we use fit_transform on the test data, then we are essentially asking our model to learn from the test data as well, which is not allowed and leads to the subtle problem of ‘data leakage’.
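A short sketch of that discipline on synthetic data: fit_transform on the training split only, then a plain transform on the test split, so nothing about the test data leaks into the learned parameters:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.arange(20, dtype=float).reshape(10, 2)
X_train, X_test = train_test_split(X, test_size=0.3, random_state=0)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn AND apply on train
X_test_scaled = scaler.transform(X_test)        # ONLY apply on test

# The learned parameters come from the training data alone:
print(scaler.mean_, X_train.mean(axis=0))
```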
Advantages of preferring scikit-learn over pandas for preprocessing
Can cross-validate the entire workflow: If we want to call cross_val_score on the entire workflow, it is good practice to use an sklearn preprocessing pipeline. In this case, the preprocessing is applied separately on each fold of the data rather than on the whole of the data at once, which results in a more accurate model.

Can grid search model and preprocessing hyperparameters: We generally use a grid search to find the optimal hyperparameters for our model; however, we might as well do a grid search on the preprocessing hyperparameters, such as the best strategy for imputation, normalization, etc.

Avoids adding new columns to the source DataFrame: We often need to add new columns and apply a transformation to certain columns; using sklearn’s ColumnTransformer is a much better option than adding additional columns to the source DataFrame, which would make the DataFrame difficult to explore.

pandas lacks separate fit/transform steps to prevent data leakage: Data leakage, if left unnoticed, can severely impact model performance; it is a best practice to use sklearn, as the fit/transform methods provide a clear distinction as to which method to apply on the training and test data obtained from train_test_split.
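The first two advantages can be sketched in a few lines; the toy data and the hyperparameter grid here are illustrative assumptions:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.RandomState(0)
X = rng.randn(60, 3)
X[::7, 0] = np.nan                 # some missing values to impute
y = (X[:, 1] > 0).astype(int)

pipe = Pipeline([
    ("imputer", SimpleImputer()),
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression()),
])

# Cross-validating the whole workflow: imputation and scaling are
# re-fit on each training fold, so nothing leaks across folds.
scores = cross_val_score(pipe, X, y, cv=5)

# Grid searching a preprocessing hyperparameter alongside a model one.
grid = GridSearchCV(pipe, {
    "imputer__strategy": ["mean", "median"],
    "clf__C": [0.1, 1.0],
}, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```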
Encoding Categorical Features: OneHotEncoder and OrdinalEncoder

By Bala Priya C, 2020-12-01. Source: https://medium.com/nerd-for-tech/useful-scikit-learn-tips-2-9849855dfeb3
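The body of this final section appears truncated in this export. As a brief sketch of the two encoders named in the heading: OrdinalEncoder maps each category to an integer code (suitable when the categories have a natural order), while OneHotEncoder creates one binary column per category (suitable for unordered, nominal features):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

X = np.array([["red"], ["green"], ["blue"], ["green"]])

ordinal = OrdinalEncoder()
print(ordinal.fit_transform(X).ravel())   # [2. 1. 0. 1.]  (blue=0, green=1, red=2)

one_hot = OneHotEncoder()                 # returns a sparse matrix by default
print(one_hot.fit_transform(X).toarray())
```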
One Week On: Facebook’s Reactions

Last week Facebook FINALLY released a wider range of emotions for users to express themselves. We all know the arguments for the dislike button, so why Reactions instead?
Facebook recognised that not all posts are inherently likeable. Think about the times you’ve come across a post from a friend lamenting the passing of a pet, getting fired, or moaning about their bad day, not to mention the huge number of posts from consumers complaining to big businesses. Not really likeable, yeah?
Reactions have already proven, in the last few days at least, to somewhat improve the accuracy of users’ feelings towards a post. Now we’ve got “Like”, “Love”, “Haha”, “Wow”, “Sad” and “Angry”.
Not exactly the “Dislike” button, which Mark Zuckerberg has made pretty clear won’t be happening, but certainly a step towards allowing us more emotional expression, right?
I’m sure you’re thinking, though: do people even have time to be bothered selecting a certain emotion? Maybe not for every post they come across, but for posts they feel really passionate about, holding down the button for a second or two and selecting the corresponding emotion isn’t too much of a stretch. Facebook was cautious not to give users too many options, lest our attention spans wane. Or is it because human expression is limitless and it’s not physically possible to list the number of emotions a user may have in reaction to a certain post?
Nevertheless, I’d say what they’ve come up with is a relatively good balance, given written communication can never fully express the emotions and intuition derived from physical, face-to-face human interaction.
As Mark Zuckerberg said himself:
“Not every moment you want to share is happy. Sometimes you want to share something that is sad or frustrating… The result is reactions, which allow you to express love, laughter, surprise, sadness or anger. Love is the most popular reaction so far, which feels about right to me!”

By Jason Mcmahon, 2017-08-27. Source: https://medium.com/digitaldisambiguation/one-week-on-facebooks-reactions-e345aa7a35bb
‘WHO IS’ JAKE&PAPA? BEHIND THE BROTHERHOOD AS PREMIERED BY REVOLT.COM

LOS ANGELES, CA — Thursday, May 25th, 2017 — Today, Jake&Papa take their fans on a behind-the-scenes look into the brotherhood so crucial to their musical journey. With an assorted arrangement of visuals from their tour days with Tory Lanez, SXSW, their critically acclaimed visit to Sway In The Morning, and their telling interview with legendary radio host, Big Boy, this video gives slight insight into what makes this R&B duo so incredible.
In the clip, as seen on REVOLT.com, Big Boy tells Jake&Papa, “It’s got to be good to look to the side and know that you trust this man because this is your blood brother.”
Papa of Jake&Papa replies, “I thank God for that every day. This grind is hard, it’s tough, and if you don’t have somebody who’s like your mirror and doing it for the same reasons that you’re doing it, you’re in trouble.”
Watch behind the brotherhood of Jake&Papa here. Their project Tattoos&Blues EP was released via Priority Records on April 28th, with their single “Phones” (as produced by Bizness Boi). Take a listen to Tattoos&Blues on SoundCloud or via all DSPs worldwide.

Published 2017-05-30. Source: https://medium.com/ijeoma/who-is-jake-papa-behind-the-brotherhood-as-premiered-by-revolt-com-6361bdef656b
How Project Kooda Found Product-Market Fit During Lockdown

Quick background
We spent the first few months of 2020 running a Minimum Viable Product (MVP) for our company, Project Kooda. We’d just developed a platform designed to use an AI-people mix to match high-skill freelancers up with companies that needed them on a contract basis.
Project Kooda Companies page
After releasing this successful MVP, companies were lining up, ready and committed to using our product. We also had freelance-professionals signing up left, right, and centre.
But just two weeks into our public launch in South Africa, a national lockdown was announced, followed by the issuing of a National State of Disaster due to the outbreak of Covid-19.
The breath was totally knocked out of our business. All the verbal commitments we had fell by the wayside. We were left wondering what we should do next. We couldn’t help but think that our business prospects had ended before we’d even had a chance to begin to build something great.
But as all good entrepreneurs do, we chose to find the opportunity present in the crisis before us. We found that the protracted lockdown had a severe impact on the local economy, and many firms responded with waves of retrenchments. This meant two things:
Many highly skilled people were suddenly looking for freelance jobs; and, Many companies had just started looking for ways to keep their businesses operational, despite the gaps left by the retrenchments they had just made.
We soon realised that our product was perfectly suited and adapted to solve the problems firms were facing as a result of the lockdown. We figured that if we simply kept pushing, we’d be soaring once the initial scare of the lockdown died down. It was only set to last a few weeks, after all…
“We need this yesterday!..call us back in 6 months’ time”
A few weeks into the national lockdown, and after spending countless hours meeting with corporates across the country, we did not feel like we were making any significant progress. We were trying to convince businesses that hiring freelancers would not only keep them operational at a fraction of the cost, but it would also provide much-needed relief to the recently unemployed. Most businesses felt the same way we did, and could see the value of our product — but were still unable to commit to it.
But why was this the case?
“decision fatigue refers to the deteriorating quality of decisions made by an individual after a long session of decision making. It is now understood as one of the causes of irrational trade-offs in decision making.”
- Roy Baumeister, in The Psychology of Irrationality
Roy seems to sum up their reasoning perfectly. Nose-diving sales, retrenchments, restructuring, and salary cuts left decision-makers at companies exhausted and unable to commit to anything — even a product perfectly suited to their situation.
Fair enough, but we knew that we needed to see an uptick in user adoption soon. If we didn’t, the writing would be on the wall for our business.
In an effort to solve our problem, we started listening more intently to our prospective clients. We tried to pay attention to the ‘meta’ of what they were saying. Our thought process was that when a client says, for example, “we’re just too busy”, that shouldn’t be taken as a sign to come back later — it should be taken as a signal that your product is being perceived as something time consuming and complex.
Anybody that has worked in operations knows full-well of the pains of new process creation and rollout. Documentation, training, meetings…nightmare.
Project Kooda regularly cuts lead-times in half, but only after companies have completed their onboarding and received training to use the service. It became clear that the barrier to entry was just too high for teams that were already under capacity and stretched to their limits.
Back to the drawing board
After a sneaky, curfew-breaking emergency meeting at one of our co-founders’ homes, we began workshopping. We decided that we needed to build a new tool that would address all of the issues that we’d picked up on. We resolved that the tool needed to:
Be accessible to small businesses. Lockdown seemed to impact SME’s the worst, our product needed to be financially attainable to them. Have an ultra-low barrier to entry. No more lengthy onboarding processes or long-term commitment. Immediately demonstrate value. Whatever we developed, we wanted to deliver bite-sized value at the point of entry. Keep freelancer experience at the center of our design. How could we claim to have the best freelance experts in SA without treating them as such?
After a grueling two-week sprint, we designed and built Kooda Connect — a subdivision of Project Kooda.
The product allows anyone to instantly book a call with an industry leader to chat about a problem they are facing. Tax, Legal, B-BBEE, Cyber Security, Marketing, and Cloud experts at the fingertips of anybody who needs them, at a fraction of the normal costs associated with that degree of consultancy.
The best part? No sign-up required. Simply find an expert in the field you’re looking for help with, book time in their built-in calendar (which integrates directly with their personal calendar, so you can trust the timeslots), and pay right on the spot via our integrated payment gateway.
So what happens if your session extends beyond advice and requires work to be done? Well, Project Kooda has readily built-in tools to help you manage that full project lifecycle, all in one place, without skipping a beat.
Kooda Connect
With a handful of top local and international freelancers, we launched our product.
Traction
Right after we started talking to companies about Kooda Connect, we realised that we had created something that resonated with them. Bookings started flowing in, and so did the feedback that would help us improve our offering. Companies have reported that our service halves the price of their current consultancy providers and delivers results within days, rather than months. We also saw a massive uptick in the number of freelance contracts being created and managed via Project Kooda, almost all of which started off with a consultancy session on Kooda Connect. Our product finally began to generate revenue and more importantly, deliver on our promise to help freelancers access more reliable work.
On the freelancer side, we wanted to create an experience that was 10 times better than anything available, anywhere else. After speaking to hundreds of Freelancers, we identified that their number one frustration was the need to spend most of their time finding work, with very little time doing the actual work that they enjoy. We couldn’t agree more - and we used this insight to further shape our product design. Validation came when many of our beta freelancers started asking for links to their profiles to send on to prospective clients. Their feedback was that they found it far easier than managing bookings and payments manually. Their time is taken more seriously because we charge for it, which makes them more comfortable with adding value right out the gates (improving the client experience in the process).
Key takeaways
So what were our learnings over the past few months? Since lockdown is more or less over, we’ve removed the tips for dealing with working from home during load-shedding, as well as how to cope with people that aren’t aware that their camera is on during Zoom meetings. I’ve summarised our core learnings below:
Listen to your users’ problems, not their solutions. We’ve all heard it before — ‘it’d be great if it could do this..’ or ‘it’d be SO cool if it could do that…’. We went down the rabbit hole of thinking that if we simply built what companies had asked for, we’d be in the green. This proved to be wrong time and again, and cost us many precious hours at the start of lockdown. The simple truth is, it is your customers’ job to tell you about their problem, and your job to create the solution. Hold your problem and customers tightly, hold on to your solution loosely. We knew the problem existed, and that our solution had faults, but the prospect of re-building was so scary that we had initially considered moving our service to at other markets as a fix. We ultimately decided that we had set out to solve a problem and were damn well going to solve it, and we are very glad that we did. Project Kooda (our original product) is starting to look like a fantastic success. The reason it didn’t work initially was because new concepts require iteration and innovation. That came to us in the form of rethinking the path of entry for companies. Your solution needs to be 10 times better than the original. The source of Kooda’s traction was not our relationships with companies — it came from our top freelancers loving the product so much that they wanted to funnel all of their work through it. A product that changes the game is never just a little bit better than the competition. It has to make people wince at the idea of ever returning to the old way of doing things. Can you imagine going back to sending letters by post now that we’ve got instant messaging and email?
Looking forward
South Africa has a lot of rebuilding to do post-lockdown. We’re working hard to get sustainable work for as many talented freelancers as possible, and to help companies rise back up into stronger, more lean entities. We believe that the best way to take on the challenge ahead is by helping high-skill individuals transition into freelance consulting work, and to help companies form and implement strategies to incorporate freelancers into their workspace.
We’d love to hear about your learnings during lockdown. Please submit them as an email to info@projectkooda.com | https://medium.com/swlh/how-project-kooda-found-product-market-fit-during-lockdown-a40dde6dd007 | ['Joshua Minsk'] | 2020-10-12 14:57:20.213000+00:00 | ['Startup', 'South Africa', 'Business', 'Product Market Fit', 'Covid'] |
Software Developers: Scapegoats for Security Vulnerabilities

Nowadays there is a wide variety of security layers used by organizations at different stages of the software development life cycle. Static code analysis, dynamic analysis, penetration tests, bug bounty programs, and manual findings all offer different frequencies and different levels of coverage for catching vulnerabilities.
Software developers need to deal with all vulnerabilities coming from these different sources while ensuring that they release new features or new applications within the deadlines.
Under the pressure of releasing applications at the speed of DevOps, security is not a luxury every software developer can afford.
Nevertheless, with the rising trend of secure coding, they are also the ones that are asked to change the way they work and code with security in mind.
However, while we ask them to give up some deeply rooted habits, how many of us would really embrace a change when there is no upside to it?
When it comes to sales teams, we rarely question the reasoning behind bonus programs. No one claims that closing more deals is simply their job and that they should not be paid extra for hitting their targets.
On the contrary, almost everyone thinks that when sales teams go the extra mile, the company also benefits from it by generating extra revenue, and therefore it makes sense to share the extra profit with them.
But doesn’t the same apply when software developers write more secure code? Instead of generating more revenue, by coding securely, they prevent loss of reputation or loss of sensitive data which can not be restored even if you spend millions of dollars.
It’s a myth that software developers do not really care about security or take it lightly. At the end of the day, no one wants to build something easily penetrable and destructible by outside attackers.
The real problem is that when it is time to evaluate their performance, most of the time secure coding is not one of their KPIs.
If security is not among the things talked about in 1:1s with team leads, or one of the indicators of their performance, it is only natural for them to focus on other, higher-priority tasks to stay afloat.
So when you want to improve your security posture and reduce your exposure to code-related vulnerabilities, how do you create a win-win scenario and talk developers into change?
Let’s picture the ideal world we want to create. First, we want our software developers to be mindful of security and create as few vulnerabilities as possible. Second, we want them to remediate vulnerabilities in the agreed-upon time frames.
As we hinted at before, we see the problem as more of a cultural one that requires a change in the way the performance of software developers is assessed. Without a security-driven culture embedded in the KPIs of software developers, security is unfortunately destined to be an afterthought.
We know cultural problems are not solved in one day and we need baby steps towards our ideal world.
As a start, we recommend picking a certain vulnerability type (e.g. Injection) that you believe could be most devastating for one pilot project.
Once you know the persona non grata vulnerability type, it is time to set a target fix time for these vulnerabilities: 3 days, 10 days, or 20 days. That is totally up to the flexibility of your development pipeline, the frequency of your releases, and your risk perception.
What matters most at this point is just to come together with software developers and not leave the table until you come to an agreement and everyone commits to the targets that are mutually decided.
If there are outstanding vulnerabilities of the selected type at the end of the agreed time frame, set a meeting with software developers and have your first retrospective analysis of what went wrong and what went well. Then, try to turn these meetings into an integral part of their sprint planning so that vulnerabilities make their way into work items in each sprint.
When you are certain that vulnerabilities are fixed within deadlines in this pilot project, gradually roll out your approach for a larger set of projects and new vulnerability types.
Along with making secure coding a KPI, when you give developers visibility into the security posture of what they are building and let them track their remediation metrics through a platform like Kondukto, you will quickly see a dramatic change in their approach towards vulnerabilities.
Instead of being ignored or treated as second-class citizens, secure coding and the remediation of vulnerabilities will become a means for your developers to stand out from the crowd. They will be hot topics in their internal conversations and even part of the feedback they receive from team leads.
As well as comparing the actual remediation speed against target rates, one other KPI we strongly recommend is the recurrence ratio of vulnerabilities, which brings us closer to our ideal world where software developers do not create the same vulnerabilities over and over again.
When you observe the same types of vulnerabilities are created by certain teams or developers you can create customized training programs to help them improve their security knowledge.
Secure coding is an unstoppable trend, and in the near future security is expected to be one of the traits that set an excellent developer apart from a good one. When you help developers improve their security skills, they obtain a lifelong asset that gives them a competitive advantage.
So, don’t shy away from introducing secure coding as a KPI for your software developers. The better they perform, the safer is the company, and the higher they are valued. Just be transparent in why you have to do this and clearly explain how they will also benefit from it.
Tell them the definition of a good developer is not the same as it was 10 years ago and will not be the same 10 years later. Secure coding will be one of the indispensable characteristics looked for when hiring software developers in the coming years.
To be fair to your existing developers and to make newcomers easily fit into your culture, make secure coding one of the characteristics you look for when you are hiring new developers.
To prevent paranoia and fear of punishment, try to create an environment of trust where you encourage secure coding by helping poor performers improve their skills through custom training programs.
Start the change with yourself before asking it of developers. Only then will the change be embraced rather than resisted. Be the change you want to see.

Source: https://medium.com/dev-genius/software-developers-scapegoats-for-security-vulnerabilities-b743cc56f105 (Can Bilgin, 2020-06-26). Tags: Application Security, Software Engineering, Devsecops, Software Development, DevOps
As Rich as Ash | uninhabited daybreak
drives temperature of grey
in melancholy strides
dashboard radiates
the shivering nest,
windshield cadence
of gold fragments
in tepid light, lines betray
and colour only whispers
having overstayed,
the skin of existence
is an ambiguous sky,
a glossy invitation
to drift impartially,
to temper the red beat
in blooming calm
the fallow heart
is as rich as ash

Source: https://medium.com/scrittura/as-rich-as-ash-9b437aae1af9 (Jessica Lee Mcmillan, 2020-12-30). Tags: Poetry, Peace, Colors, Meditation, Weather
My Hit Record
It took 25 years to find out I’d actually had one
photo by author
It wasn’t a great song. In fact, it wasn’t even a good song. Or a completed song, for that matter. But years ago when I was young and full of ambition, I brought a tape of several songs I’d written to a studio owner who thought he heard something magical therein and wanted not only to record my compositions — but have me sing them as well! I thought he was crazy. Maybe a couple of the tunes were noteworthy. But my voice? No way. I can’t sing for shit!
Well…it didn’t take long for him to come around on my vocal talents. But we did record three of the compositions, one of which (the not very good or even completed one) was titled “I’ve Got It.” It was really nothing but a lively disco track styled after “The Hustle” (which was a big hit at the time) with not much more than 4 lyric lines. The B part was instrumental — if for no other reason than my co-author and I hadn’t written words over that part.
Regardless…my partner (the studio owner) went for a big production on the number. He hired an industry veteran (Carl Hall, who played the Wizard in “The Wiz” on Broadway) to do background vocals, and found a soulful singer to do the lead.
And then he hired 10 string players, and had me write my first string chart. By the time it was over, we had a half-written song that sounded really professional. (My partner kind of knew what he was doing. He’d had a hit record and act as producer of “We Ain’t Got Nothin’ Yet” by the Blues Magoos.)
Art (the partner) and I had a somewhat acrimonious parting of the ways soon after the recording, and I moved on to other projects (I was fairly active at the time). And then 4 years later, he called to say that he’d sold the record to a fledgling company and had $500 for me as my part of the advance.
I’d totally written Art and that record off at that point. I actually had something else on the Billboard dance charts when he called and could care less about that stupid half-finished song. But the five hundred bucks? I went and got my money — and a box of 25 45 rpm records of our production.
Looking at the label, I knew Rojac wasn’t much of a record company. The artwork was bare-bones, and the company address at the bottom of the record was 125th Street in New York. Still, I called to identify myself. The company (as it were) was a father and his son who sang the record. The office was actually Harlem World disco on 125th Street.
I rode the A train up to the club, ate some neckbones with the fellaz, and actually did a performance of the song on a Saturday night. But I knew that record was going nowhere and as usual, I shoved off for the next project which might hopefully get me the hit record I sought.
Fast forward more than 20 years. I was out of the music business and selling advertising to escorts for a living. And a handsome one at that. I’m not sure that even a big hit record would have earned me as much cash as I was making running my agency.
One day, I received an overseas phone call from a guy in Norway asking if I was the Bill Mersey who wrote and arranged “I’ve Got it.” I thought it was a joke. But it turned out that the song had been the soundtrack of his youth. He was writing a feature for a music magazine and wanted to know if I had any copies of the record.
Generally, I kept at least one or two of every record I'd been involved in for posterity. This was before the age of YouTube, where you can find virtually every record ever recorded. Back then, if you didn't have a copy of your stiff, it would be almost impossible to hear the record.
I offered that I probably had one or two left and added “how much could they really be worth?” (It sounded like he wanted one.) The caller admitted that he would pay more than a few bucks. But I shucked the whole deal. It wasn’t worth the few dollars I’d gain to sell them and then not have any copies for the rest of my life.
Fast forward another few years, and another guy — this man from England — called about the same record. And he offered to buy the publishing rights (which were not actually mine to sell) for $2000! Because he couldn’t find Art, he said it would be ok for me to sell them. Why would I turn down $2000?
So I negotiated for a while and got him up to $2500. But he wanted the two copies of the record I still had from the box of 25 Art had given me 25 years before. No problem! For $2500, I could live. Plus, his company was going to re-master and re-release the record and would send me a couple of copies as replacements.
Sure enough, a week later, the dude came to New York and rang my apartment bell. He was a young guy who looked more like a college student than a record executive. Neither of us had a copy of the contract. But that didn’t seem to bother him. Just so I gave him the two copies of the original “I’ve Got It,” he was fine with the transaction.
And so, he whipped out 25 one-hundred-dollar bills, and I forked over my two copies plus some other stuff I'd been involved in which he was somehow familiar with. I couldn’t understand how he knew about those old stiffs and why he’d want any of them. But who cared? I would find out sooner than later.
This all took place when the internet was in its relative infancy. Google was new, and very few people knew how to navigate it. But I’d just learned. After Fry (his name) left, I began to ponder the situation: “Why would he pay all that money for publishing rights and not even have me sign on the dotted line? All he wanted was the records, it seemed.”
It was then that I Googled “I’ve Got It” + “Tolbert” (the artist) and finally discovered why he’d paid all that money with no signed contract. An original copy of “I’ve Got It” had sold on eBay to a Japanese businessman for $4116! I was in shock. How could that be?
It didn’t take long for me to understand. I had run my own record company at the end of my music career. And I knew that there were a few distributors who would pay cash (rather than consignment) for a couple of hundred records that sounded good to his ears. The distributor knew he could sell them all and would purchase the records at a discount in that knowledge.
And I knew from experience, that’s what happened to “I’ve Got It.” Rojac had circulated a few to Europe where miraculously, the record had found an audience. But by the time it became popular, there were precious few around for purchase. The law of supply and demand had set in and boosted the price to north of $4000!
Eventually, I came to discover a musical movement in England called “Northern Soul,” whose mission was to find American soul records that sounded like hits but had never seen the light of day — and then popularize them at their huge club events. “I’ve Got It” had become one of the genre’s flagship songs. And to have an original copy of the record? Well…it was worth $4116 to somebody.
I spoke with Fry after that and ripped him a new asshole for paying only $1250 for each copy. Karma did catch up with him in the end. Art discovered the re-release and they had to pay for the rights all over again!
I don’t know how many copies the re-release sold. And I don’t really care. What struck me as most absurd was how much that stupid 45 was commanding on eBay. Imagine that you have a bunch of old 45s gathering dust in storage. And that one of them is worth $4000. And then imagine, it’s a record of a song that you wrote, arranged, and co-produced. It was just too insane.
You might ask at this point “so what happened to the other 23 records Art gave you along with the $500?”
When a producer had a new release, we would give them out as calling cards to other people in the industry. Who I’d actually given them to I could barely recall 25 years later. And even if I had, I wouldn’t know how to find them. And even if I could find them, what’s the chance they kept the record?
I did call my mother and brother, to whom I often gave copies of my new releases. But alas, neither had a copy.
Anyway…to struggle in the music business for 15 years and never have a hit record. And then to find out 20 years after you gave up that I actually had had one? Really fucking strange!
Wanna hear it? Here we go!

Source: https://medium.com/my-life-on-the-road/my-hit-record-61a3bd63a4ea (William "Dollar Bill", 2020-12-10). Tags: Music Business, Music, Life Lessons, Memoir, Culture
Getting Your First Developer Job Without Any Experience

How I got my first job at 17 years old
At 17, I had dabbled in a few programming languages, with the most prominent one being JavaScript. I became very fond of JavaScript and was amazed at everything you could do with it. I didn't have much prior experience at this point.
What I had was about a year of experience from learning on my own.
During the same year, I got my first freelancing gig: developing a WordPress website for a local business. This happened after a period of wondering whether I would ever get a job where I got to code during my teens. I was thrilled, on cloud nine.
I couldn’t believe that I was about to make money using a skill I had started learning a year earlier while being self-employed. The truth is that I didn’t know much WordPress at the time of getting that first freelancing gig but I wasn’t going to let that stop me.
After all, if I had managed to get thus far, why shouldn’t I be able to learn WordPress?
I’m sure you’ve heard the phrase, fake it till you make it. Try it, it works.
After that gig is when things took off. It wasn't long until I found a local startup that was looking to hire developers. I quickly reached out and applied, and not long after, I had another gig.
I ended up working for that startup for about a year, until I graduated high school.

Source: https://medium.com/swlh/getting-your-first-developer-job-without-any-experience-45031bcc2897 (Oskar Yildiz, 2020-06-09). Tags: Tech, Startup, Software Development, Work, Programming
Looker Deep Dive: Looks and Dashboards
In earlier blogs we gave an introduction to Looker and discussed some of the advanced functionality available in LookML. In this blog we will cover the visualization layer of Looker and talk about some of its unique possibilities.
Looker is a cloud-based business intelligence (BI) platform designed to explore and analyze data. The solution helps businesses to capture and analyze data from multiple sources and make data-driven decisions.
Looker provides business teams the ability to analyze supply chains, market digitally, quantify customer value, interpret customer behavior, and evaluate distribution processes. Users can also “view source” to understand how the data they are viewing is being manipulated. The dashboards allow presenting data and insights using customizable charts, graphs and reports. All dashboards and queries can be drilled into, so users can discover information in multiple layers. Looks and dashboards are the two major building blocks in Looker.
Looks:
Looks are saved visualizations that can be created by a business user. These single visualizations are created in the Explore section of Looker and are used to understand and analyze the data. Looks can be shared and reused across multiple dashboard implementations.
Dashboards:
Dashboards allow you to place multiple tables or graphs or looks on one page, so that you can get a quick view of the related content. If you like, you can also make dashboards interactive, so that users can filter them down to the specific data they are interested in.
Now that we have understood the basic layout of Looks and Dashboards in the Looker Software, let us delve into understanding some new and unique features that we can implement in our dashboards to make them more interactive and aesthetically pleasing.
One of the interesting things about Looker is that it connects to the database directly, so we always get fresh data for analysis. Dashboards provide the functionality to drill down into the data behind the graphs and tables, helping us understand it at a granular level. For example, to understand the churn pattern of customers in a gym, we investigated customer visits, age distribution and the customer service touch points for the complaints they had registered.
The overall view of the dashboard shows us how many customers are behind the registered complaints. Drilling further into the problem, we also want to see the specific complaint words being reported by these churning customers, which can help us eliminate those problems and in turn reduce the churn rate. We therefore provide the functionality to drill further into the touch points graph, as shown below:
How many times have you come across a situation where you wanted to add a link from the current dashboard to a webpage, a visualization or another dashboard? Looker provides an easy solution to this problem with its hyperlinking feature on dashboards. You can create a hyperlink to another visual, a dashboard and even a webpage search.
Looker uses its link parameter to achieve this functionality; the link is included in the dimension definition.
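The original screenshot of the syntax is missing here; a minimal sketch of what a dimension-level link parameter looks like in LookML (the `city` dimension and the Google search URL are illustrative, not from the original):

```lookml
dimension: city {
  type: string
  sql: ${TABLE}.city ;;
  link: {
    label: "Search this city on Google"
    # {{ value }} is substituted with the clicked dimension value
    url: "https://www.google.com/search?q={{ value }}"
  }
}
```

The same `url` field can point at another Look, a dashboard URL with filter parameters, or any external page.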
A few examples of its use are listed below.
Suppose you want to give users the ability to explore city sales visuals based on the user's city.
For example, when you are viewing the churn rates by sites, you can further drill down to understand the reason and distribution of churn rates based on customer age, visits or touch points.
Your visualizations can now be made aesthetically pleasing using the custom visual implementation functionality of Looker. You can implement some fancy visuals like Sankey charts, Liquid Gauge, Bubble Charts etc. The visuals generated can be stored as a Look and used in dashboards.
You can add the custom visualizations that are available freely on Looker ( https://looker.com/platform/blocks/directory#viz).
Below you can see the implementation of a Liquid Gauge visual in the dashboards to represent the percentage of revenue growth against previous year.
You are not limited to the visuals already available; you can also create your own using JavaScript.
It is often observed that organizations prefer all their content to share the same theme, to keep the content shared across the organization consistent. Looker helps achieve this with its Themes feature, currently available in embedded versions of the dashboards. You can define your own customized theme with organization-specific fonts and colors for text and backgrounds. Here's a sample of the embedded version of our churn analysis use case, which reflects customized font and color changes.
These were some new and unique features of Looker dashboards, which can help you develop more efficient set of dashboards and help your business engage with their data in a better way.
Looker is coming up with very exciting new features in upcoming releases. Click here to read more about the new applications feature.
Stay tuned for more such unique & exciting features.
We have implemented multiple Looker dashboards which include a lot of advanced looker functionalities. To know more and see our work, please contact us.
At Acrotrend we help our clients get the right insights from their data. If you are struggling to get the right insights from your data or want to develop BI reports for your organization, then click here to check out our KPI Dashboarding service offerings.

Source: https://medium.com/acrotrend-consultancy/looker-deep-dive-looks-and-dashboards-acrotrend-solutions-1f708bd46b48 (Acrotrend Consultancy, 2019-07-29). Tags: Business Intelligence, Looker, Data Visualization, Customer Analytics
A Redux Implementation With the New Kid on the Block: Flutter

A while ago my colleague, Victor, took an initial look at Flutter, a new framework for mobile development from Google. At the time Flutter was still in its early stages, the first beta having just been released. Here, we will look into how to extend an app with a Redux store, Thunk future actions, and the Redux Persistor.
What is Redux?
Redux is a container for a store of application information. It started out in the React.js world, where it alleviated a lot of the issues with passing information around the app tree. In React, just like in Flutter, an application is composed of components (in Flutter called Widgets), which are related to each other in a parent-child structure. The data flow is then top-down: the parent provides its children with prop(ertie)s and each component can have its own state.
For small applications this is not a big problem — the data can be centralised at the root node, or just passed around with props. However, with any application that has any degree of complexity, this gets rather tricky. Take as an example, an online shopping app and its integral part — the shopping cart. The cart has to follow the user around any screen that they might navigate to, and sometimes even share its information with other components like a checkout page. Passing that data around, with easy access to its modification would be a lot of work. And here comes in Redux — a single source of truth that is accessible from any component within the app, no matter where it is in the tree.
A Redux store is therefore mostly just an object that holds information about the state of the app and handles its delivery and changes. Any component connected to the store can emit Actions — think API calls or reactions to user input — that take a copy of the current state and modify it. That new state is then committed to the store through the Reducers. After the commit, the changes are visible to every component that relies on them, triggering a re-rendering to reflect the shift in state.
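To make that cycle concrete before we wire up the real packages below, here is a tiny hand-rolled sketch of the action → reducer → store flow in plain Dart. This is not the `redux` package API, just the idea, using the shopping-cart example:

```dart
// An action describes *what happened*; it carries only data.
class AddToCartAction {
  final String item;
  AddToCartAction(this.item);
}

// State is immutable: reducers return a new copy instead of mutating.
class CartState {
  final List<String> items;
  const CartState(this.items);
}

// A reducer is a pure function: (old state, action) -> new state.
CartState cartReducer(CartState state, dynamic action) {
  if (action is AddToCartAction) {
    return CartState([...state.items, action.item]);
  }
  return state; // unknown actions leave the state untouched
}

// The store holds the single source of truth and funnels every
// change through the reducer.
class Store {
  CartState state;
  Store(this.state);
  void dispatch(dynamic action) {
    state = cartReducer(state, action);
  }
}

void main() {
  final store = Store(const CartState([]));
  store.dispatch(AddToCartAction('The Matrix'));
  print(store.state.items); // [The Matrix]
}
```

The real packages add the plumbing on top of this idea: middleware, typed reducers, and widgets that re-render when the slice of state they depend on changes.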
Redux store
In this example, we will use the structure supplied by Android Studio when making a new project. The app will have a list of movies in its store, allowing any component to connect to that piece of data, display it and modify it.
First let us make the Movie object. It will have some basic properties such as title, year of release and so on.
class Movie {
final String title;
final int year;
final String image;
final double rating;
Movie(this.title, this.year, this.image, this.rating);
}
With the movie class defined, we can make the Redux store that holds a movie list and the currently active movie.
class AppState {
final List<Movie> movieList;
final Movie currentMovie;
AppState({
this.movieList,
this.currentMovie,
});
factory AppState.initial({movieList = const [], currentMovie}) => AppState(
movieList: movieList,
currentMovie: currentMovie,
);
AppState copyWith({
List<Movie> movieList,
Movie currentMovie,
}) {
return AppState(
movieList: movieList ?? this.movieList,
currentMovie: currentMovie ?? this.currentMovie,
);
}
}
Then let us make an action that sets the current movie in the store.
class MovieSetSuccessAction {
Movie currentMovie;
MovieSetSuccessAction(payload) {
this.currentMovie = payload;
}
}
MovieSetSuccessAction setMovieAction(Movie movie) {
return new MovieSetSuccessAction(movie);
}
And reducer that then handles such a change to the store.
AppState _movieSetSuccessReducer(AppState state, MovieSetSuccessAction action) {
return state.copyWith(currentMovie: action.currentMovie);
}
final movieReducer = combineReducers<AppState>([
TypedReducer<AppState, MovieSetSuccessAction>(_movieSetSuccessReducer),
]);
To connect a component, such as the text in the main page, to the store we will construct a ViewModel. It allows us to specify exactly which parts of the store the component uses, so that changes unrelated to it, do not cause it to re-render.
class _MyHomePageViewModel {
final Movie currentMovie;
_MyHomePageViewModel({
this.currentMovie
});
static _MyHomePageViewModel fromStore(Store<AppState> store) {
return _MyHomePageViewModel(
currentMovie: store.state.currentMovie,
);
}
}
With all the parts of the store ready, we can now add the store to our main app:
void main() {
final Store<AppState> _store = new Store<AppState>(
movieReducer,
initialState: AppState.initial(),
);
runApp(MyApp(
store: _store,
));
}
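The MyApp widget itself is not shown in the original; with flutter_redux, the usual pattern is to wrap the widget tree in a StoreProvider so that every StoreConnector below it can find the store. A minimal sketch (the title string is illustrative, and MyHomePage is the article's home widget, whose State class appears below):

```dart
import 'package:flutter/material.dart';
import 'package:flutter_redux/flutter_redux.dart';
import 'package:redux/redux.dart';

class MyApp extends StatelessWidget {
  final Store<AppState> store;
  MyApp({this.store});

  @override
  Widget build(BuildContext context) {
    // StoreProvider makes the store available to all descendant
    // StoreConnector widgets via the widget tree.
    return StoreProvider<AppState>(
      store: store,
      child: MaterialApp(
        title: 'Flutter Redux Demo',
        home: MyHomePage(),
      ),
    );
  }
}
```

Without this wrapper, the StoreConnector calls in the home page would have no store to connect to.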
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: new StoreConnector<AppState, _MyHomePageViewModel>(
converter: (store) => _MyHomePageViewModel.fromStore(store),
builder: (context, viewModel) {
return Column(
children: [
Text(viewModel.currentMovie != null ? viewModel.currentMovie.title : "No movie selected"),
],
);
},
),
),
);
}
}
We will also connect the FloatingActionButton to the store:
floatingActionButton: new StoreConnector<AppState, VoidCallback>(
converter: (store) {return () => store.dispatch(setMovieAction(new Movie("A nice movie", 2020, "img/poster.jpg", 0.0)));},
builder: (context, setMovieFunc) {
return new FloatingActionButton(
onPressed: setMovieFunc,
tooltip: 'Set movie',
child: new Icon(Icons.movie),
);
},
),
ReduxThunk
When dispatching actions, a lot of times we want to use an asynchronous function, such as a call to an API. As the basic implementation of Redux does not support async actions, we require the Thunk middleware for that.
Firstly we will change the main() method to introduce the middleware to the store:
void main() {
final Store<AppState> _store = new Store<AppState>(
movieReducer,
initialState: AppState.initial(),
middleware: [thunkMiddleware, ]
);
runApp(MyApp(
store: _store,
));
}
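What the middleware does under the hood is conceptually simple: if the dispatched "action" is a function, run it (handing it the store) instead of sending it to the reducers. A hand-rolled sketch of that idea in plain Dart (not the actual redux_thunk source; the counter example is illustrative):

```dart
// A thunk is just a function that receives the store and can dispatch
// real actions later, e.g. after an async API call completes.
typedef Thunk = Future<void> Function(MiniStore store);

class MiniStore {
  int counter = 0;

  Future<void> dispatch(dynamic action) async {
    if (action is Thunk) {
      await action(this); // "middleware": intercept functions...
    } else if (action == 'increment') {
      counter += 1; // ...and reduce plain actions immediately
    }
  }
}

// An async action creator, analogous in shape to fetchMoviesRequest below.
Thunk incrementLater() {
  return (MiniStore store) async {
    await Future.delayed(const Duration(milliseconds: 1)); // fake API call
    await store.dispatch('increment');
  };
}

Future<void> main() async {
  final store = MiniStore();
  await store.dispatch(incrementLater());
  print(store.counter); // 1
}
```

With thunkMiddleware registered, our action creators can return functions rather than plain action objects, which is exactly what the movie-fetching action below does.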
When fetching data from an API, the most used type of encoding is a JSON object. We therefore need to extend the Movie model to allow for serializing to and from JSON
class Movie {
final String title;
final int year;
final String image;
final double rating;
Movie(this.title, this.year, this.image, this.rating);
static Movie fromJson(Map<String, dynamic> json) {
if (json != null && json.length > 0) {
return new Movie(json['title'], int.parse(json['year']), json['image'], double.parse(json['imDbRating']));
} else {
return null;
}
}
dynamic toJson() {
return {
"title": this.title,
"year": this.year.toString(),
"image": this.image,
"imDbRating": this.rating.toString(),
};
}
}
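A quick sanity check of the shape this serializer expects: the (assumed) IMDb-style payload delivers year and imDbRating as strings, which is why fromJson parses them with int.parse and double.parse. The concrete values below are illustrative:

```dart
import 'dart:convert';

void main() {
  // One item as it might arrive from the movie API (values illustrative).
  const raw = '{"title": "Inception", "year": "2010", '
      '"image": "img/inception.jpg", "imDbRating": "8.8"}';

  final Map<String, dynamic> json = jsonDecode(raw);

  // The same parsing Movie.fromJson performs on each field:
  final title = json['title'];
  final year = int.parse(json['year']);
  final rating = double.parse(json['imDbRating']);

  print('$title ($year) rated $rating'); // Inception (2010) rated 8.8
}
```

If the API ever sent numeric fields as actual JSON numbers instead of strings, int.parse/double.parse would throw, so fromJson is tied to this exact payload shape.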
Then we can make an asynchronous action that, say, fetches a list of movies from the API
ThunkAction fetchMoviesRequest() {
return (Store store) async {
List<Movie> result = [];
apiClient.apiCall(API_GET_TOP_MOVIES)
.then((res) {
for (dynamic item in res['items']) {
result.add(Movie.fromJson(item));
}
store.dispatch(
new MoviesFetchSuccessAction(result));
});
};
}
class MoviesFetchSuccessAction {
List<Movie> movieList;
MoviesFetchSuccessAction(payload) {
this.movieList = payload;
}
}
And extend the reducer to allow for this action
final movieReducer = combineReducers<AppState>([
// (...)
TypedReducer<AppState, MoviesFetchSuccessAction>(_moviesFetchSuccessReducer),
]);

AppState _moviesFetchSuccessReducer(AppState state, MoviesFetchSuccessAction action) {
return state.copyWith(movieList: action.movieList);
}
We will also extend the _MyHomePageViewModel to provide the dispatch function and the movie list to the text in the home page
class _MyHomePageViewModel {
final List<Movie> movieList;
final Movie currentMovie;
final Function() getMovies;
_MyHomePageViewModel({
this.movieList,
this.currentMovie,
this.getMovies
});
static _MyHomePageViewModel fromStore(Store<AppState> store) {
return _MyHomePageViewModel(
movieList: store.state.movieList,
currentMovie: store.state.currentMovie,
getMovies: () {
store.dispatch(fetchMoviesRequest());
},
);
}
}
And then use those hooks to display the data:
class _MyHomePageState extends State<MyHomePage> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: new StoreConnector<AppState, _MyHomePageViewModel>(
          converter: (store) => _MyHomePageViewModel.fromStore(store),
          builder: (context, viewModel) {
            return Column(
              children: [
                Text("Number of movies in list ${viewModel.movieList.length}"),
                MaterialButton(onPressed: viewModel.getMovies, child: Text("Fetch movies")),
                Text(viewModel.currentMovie != null ? viewModel.currentMovie.title : "No movie selected"),
              ],
            );
          },
        ),
      ),
    );
  }
}
ReduxPersist
To make sure that the state can persist between user interactions for a smoother user experience, one might want to introduce the Redux Persist package into the fold. It allows us to save the state of the store as a JSON object in a couple of different ways. The easiest is to save it locally to a file, but it can also be sent to some backend service that allows you to restore the state across devices and even to integrate it with a Redux-enabled web app. However, while it does provide some useful functionality, it requires the addition of JSON serialisability to the store, which can be a bit of a hassle.
Firstly, add the persistence functionality to the store and load a state in if it's available:
Future<File> get _localFile async {
  // getApplicationDocumentsDirectory() returns a Directory, so use its .path
  final directory = await getApplicationDocumentsDirectory();
  return File('${directory.path}/AppState1.json');
}
void main() async {
  WidgetsFlutterBinding.ensureInitialized();

  final persistor = Persistor<AppState>(
    storage: FileStorage(await _localFile),
    serializer: JsonSerializer<AppState>(AppState.fromJson),
  );

  AppState initialState = await persistor.load();

  final Store<AppState> _store = new Store<AppState>(
    movieReducer,
    initialState: initialState ?? AppState.initial(),
    middleware: [thunkMiddleware, persistor.createMiddleware()],
  );

  runApp(MyApp(
    store: _store,
  ));
}
And add the JSON serialization methods to the app state:
class AppState {
  final List movieList;
  final Movie currentMovie;

  AppState({
    this.movieList,
    this.currentMovie,
  });

  factory AppState.initial({movieList = const [], currentMovie}) => AppState(
        movieList: movieList,
        currentMovie: currentMovie,
      );

  static AppState fromJson(dynamic json) {
    if (json == null) {
      return AppState.initial(movieList: [], currentMovie: null);
    } else {
      return AppState.initial(
          movieList: parseList(json, 'movieList'),
          currentMovie: Movie.fromJson(json["currentMovie"]));
    }
  }

  dynamic toJson() {
    List movieList = this.movieList.map((movie) => movie.toJson()).toList();
    dynamic currentMovie =
        this.currentMovie != null ? this.currentMovie.toJson() : null;
    return {
      'movieList': movieList,
      "currentMovie": currentMovie,
    };
  }

  AppState copyWith({
    List movieList,
    Movie currentMovie,
  }) {
    return AppState(
      movieList: movieList ?? this.movieList,
      currentMovie: currentMovie ?? this.currentMovie,
    );
  }
}
List<Movie> parseList(dynamic json, String key) {
  List<Movie> list = [];
  json[key].forEach((item) {
    list.add(Movie.fromJson(item));
  });
  return list;
}
Conclusion
Redux is a very useful tool for developing apps of significant size or complexity. It centralizes the process of accessing and modifying data, making it easier to get what information you need, wherever you need it. It is not necessary for every app out there — the additional costs of setting it up are quite high. However, it forces you into following its paradigm, which standardises the flow of data and enforces a neat separation of code.
The additions on top of Redux make it even better. ReduxThunk allows you to make the actions asynchronous, facilitating delayed responses or API calls without much work. There is no reason not to add it to any Redux project, as all you need is just the middleware declaration and you're good to go!
ReduxPersist introduces more complexity to the application; however, the gains in fluidity from having a locally stored state of the app available no matter the network are quite substantial. It is by no means a necessary addition, but in certain circumstances, or in any cross-platform application, it could prove extremely useful.
Jakub Orlinski
Jakub is a front-end developer at El Niño and became quite a big fan of Docker two years ago but, sadly, never got round to part two of his blog. | https://medium.com/swlh/a-redux-implementation-with-the-new-kid-on-the-block-flutter-6e9d76047023 | ['El Niño'] | 2020-10-09 14:56:59.297000+00:00 | ['App Development', 'Flutter App Development', 'Flutter', 'Google'] |
Breaking The Peace In The Pool | Breaking The Peace In The Pool
He loved two women. One ended up in the pool, and the other a murder suspect.
Photo by Joe Pizzio on Unsplash
Michael and Jan Roseboro were a power couple in Denver, Pennsylvania. Their wedding in 1989 was well attended with family and friends. The Roseboros appeared to be deeply in love and the perfect couple.
Money was not a worry for the couple. Michael took over his family’s business. He owned the local funeral home. Jan told people that her husband always joked that he never worried about customers because people were always dying.
Their lifestyle reflected their income. Jan decided to be a stay-at-home mom; Michael agreed that it was the right thing to do for their family.
To make sure she was comfortable, he bought a big home for the family. There was a pool in the backyard, something Jan insisted on. She wanted to be able to swim in the summer without having to go to the beach.
After the birth of their first child, the perfect marriage became strained. Michael started spending more time at work, always making excuses as to why he couldn’t be home with the family.
Jan became suspicious. One night she asked him if there was another woman. He confessed that he had been seeing someone on the side. The other woman excited him sexually and catered to his needs.
Neither wanted a divorce. They agreed to go to marriage counseling to work on their issues.
But it wasn’t enough to keep Michael from cheating again.
Another Mistress, Same Story
A few months after Jan and Michael stopped going to counseling, the cycle began again. Family dinners were missed, late nights at work became the norm.
Michael always gave the excuse that he had a lot of paperwork to catch up on. His wife did not believe him, sniping at the messenger of one such excuse that he always seemed to be buried under paperwork.
Things grew tense between the couple. Jan hurled accusations of infidelity at her husband. He asked her why she was a cold fish in bed. Each complained to their friends about how unhappy they were.
The person Michael complained to the most about Jan was his mistress, Angela Funk. She listened patiently and encouraged her lover to make decisions that made him happy. When he complied, she was the happiest woman in the world.
Angela and Michael checked into hotels every chance they got. When people saw them together, they claimed to be working on a surprise for Jan. Nobody questioned why they needed to be in a hotel room to plot a surprise.
Word got back to Angela’s husband about the affair. He was livid that his wife was fooling around on him. And then he found text messages between Michael and Angela. They talked about wanting to be together and getting married.
Before her husband could yell at her or end their marriage, Angela revealed she was pregnant. She didn’t tell him that Michael was the father of the baby.
A Big Splash
Michael and Jan’s marriage started to break apart during this time. She threatened to file for divorce nearly every day. Her husband countered that if she was so unhappy in the relationship, she should leave.
Neither wanted the marriage to end. Not because they loved one another. Rather, the amount of money that was on the line. Jan would be giving up her cushy lifestyle. She would get child support and alimony, but the amounts would not be anywhere near what came into the house every month.
Money was also Michael’s motivator to keep the marriage together. He feared losing half of everything to his wife. When he married her, he encouraged her spending habits, but now that things were ending, he didn’t want to continue paying for her.
The night of July 22, 2008, was hot and balmy. Jan sat by the pool while her husband claimed to be in bed.
Around 10 pm, Michael called 911. He told them he found his wife at the bottom of the pool; she was unresponsive. While on the phone with the emergency operator, he performed CPR.
When the paramedics arrived, they took Jan to the hospital. Rather than go with his wife, Michael stayed home with the children. Police noted that they thought it was weird.
Neighbors also reported that the lights that allegedly woke him were off. Investigators took note of that and started asking around about anything else they should know about the Roseboros.
An unknown person tipped them off to Michael’s extramarital affair. They mentioned Angela by name. The police had a suspect.
Underneath It All
They brought Angela in for questioning. She told them all about her affair with Michael. During her testimony, she revealed that she was pregnant with her lover’s child.
Angela told detectives that she was home with her husband at the time of the murder. Something he confirmed.
Police turned their attention to Michael. Something about his story didn’t fit with the facts as they knew them. He claimed to have jumped into the pool to save his wife, but his clothes were bone dry.
Officials who arrived on the scene noted that his face was scratched up pretty bad. They decided to bring him in and see if he would tell them anything.
One detective asked about the scratches on his face in passing. Michael answered that he’d been playing with the kids in the pool, and one scratched him. And so it went with most of their questions; he had an answer for nearly everything.
He left without being arrested. There wasn’t enough evidence to keep him longer.
That changed when forensics came back on Jan's body: Michael's DNA was under her fingernails, and it appeared as though the two had fought before she died.
With Michael as the prime suspect, the case was turned over to the prosecutors.
Michael Roseboro was charged with his wife’s murder, using only circumstantial evidence.
The Court Fight
Prosecutors decided their case should be a referendum on Michael’s character. While half the town believed him to be a nice, sweet, if misguided man, the other half thought of him as a sociopath, a womanizer. The District Attorney seized on the latter.
They played the 911 tapes, where Michael sounded cold and in control, in their estimation. His text messages and emails with Angela were presented to the jury. The most salacious revelation was saved for last; Michael and Angela spent three hours in a hotel the day Jan was murdered.
Defense attorneys tried to pick apart the state’s case. They argued that Michael’s DNA was under her fingernails because they were married. What married couples did behind closed doors was between them and God.
Cheating on his wife did not equate to killing her; they argued about the Angela-centric evidence. He was a young, good looking guy that women threw themselves at. If he partook in an affair or two, wasn’t that an issue for him and Jan to deal with?
The jury disagreed with the defense. They voted to convict Michael on first-degree murder charges.
Pennsylvania law says that the mandatory sentence is life in prison without a chance of parole. The judge imposed that punishment without any additional comment.
Michael reportedly confessed to another prisoner that he was the one who killed his wife. His family is devastated. | https://medium.com/crimebeat/breaking-the-peace-in-the-pool-9ab31717b8ff | ['Edward Anderson'] | 2020-12-01 10:47:40.132000+00:00 | ['Justice', 'Relationships', 'True Crime', 'Society', 'Family'] |
#GreenHangoutLagos: Building Collaboration To Solve Lagos State Environmental Problem
The Green Hangout Lagos brought together 53 young environmentalists to discuss the pressing issues facing the environment and the SDGs in Lagos State. The idea is to see how they can work together, create more collaboration in solving these issues, and support the government's effort to meet Agenda 2030 of the United Nations and Agenda 2063 of the African Union.
9 Vue Input Libraries to Power Up Your Forms | Photo by Kelly Sikkema on Unsplash
A poorly designed form can turn visitors away from your site. Luckily for Vue developers, there are tons of Vue input libraries available to make prettying up your forms a breeze.
There are several benefits to having an intuitive and user-friendly form, including:
Higher conversion rate
Better user experience
More professional branding
Like every other major framework, there are tons of community solutions for building beautiful Vue.js forms. From simple text inputs all the way to advanced phone number templates, there are so many options for your forms.
Here are some of my favorite Vue Input Libraries. While this list is just about form elements, I've compiled a list of Vue icon libraries too.
I hope you find these tools as useful as I do!
1. Vue Select
Working with <select> elements is a huge part of any form. But if you have experience doing this, you'll know that they can be a real pain to customize.
Luckily, Vue Select, a library by Jeff Sagal, provides an intuitive way to add a number of advanced features.
It’s easy to use and I’ve definitely used it across several projects.
2. Vue Input Tag
Allowing site visitors to add their own tags is a common feature that forms want. However, implementing your own flexible system can be tricky — especially for people new to Vue.
The Vue Input Tag library is a great way to add a powerful feature to your forms.
3. Vue Dropdowns
Vue Dropdowns is another library that handles <select> elements. Not only does it create sleek inputs, but it also provides a great way to set data and listen to events such as change and blur.
With a simple setup, it’s definitely a great way to make your forms look prettier with minimal effort.
4. Vue Color
Vue Color is a simple way to add color selection into your forms. Implementing one of these systems from scratch takes hours of planning and work, but using Vue Color takes just a few minutes.
This library also is highly customizable.
It comes with several different styles, event hooks, and support for different color formats. I definitely recommend Vue Color for adding some next level customizability to your app.
5. VueJS Date Picker
VueJS Date Picker is one of the cleanest date picker libraries that I’ve seen. It gives you a calendar view that allows users to click around to select a date.
In my opinion, it’s very professional looking and is also extremely customizable. In fact, it has dozens of easy-to-edit props and events to perfectly match your use case. However, I think the default setup is also great for a majority of projects.
But don’t take my word for it, check out this screenshot from the Vue DatePicker demo.
6. Vue Switches
Switch inputs are a beautiful way to create toggled options — they’re sleek, intuitive, and can be modified to match virtually any app’s style.
Vue Switches is an amazing library for creating beautiful switch inputs. With a variety of themes and the ability to customize colors and text, it’s a flexible solution for your forms.
7. Vue Dropzone
Vue Dropzone is a drag-and-drop file upload library. In the past few years, drag-and-drop file uploads have become more widespread, and they’re an easy way to make your app feel modern.
Vue Dropzone provides dozens of custom props and events that allow you to tweak its functionality to your specific projects. But regardless if you choose to modify it or not, it’s a simple, yet powerful tool to add to your forms.
8. Vue Circle Sliders
Vue Circle Sliders are a great way to add a little flair to your forms. Different than a typical, linear slider input — circle sliders can feel more natural depending on the values you’re collecting.
I love this library because it’s so versatile. It supports touch controls, allows you to set max/min values, and even lets you control the step size of your slider.
Overall, this is a really cool option to consider to add some more style to your Vue applications.
9. Vue Phone Number
Without using any libraries, it can get a little tricky collecting phone numbers. You’d have to worry about formatting, country codes, etc.
The Vue Phone Number library takes care of everything and comes with a beautiful UI that looks professional and secure, two factors that will increase the conversion rate of your forms.
It’s also extremely flexible and you can customize several features, including:
Valid Country Codes
Theme and Colors
Phone Number Formatting
Wrapping Up
While this is by no means a complete list of Vue input libraries, these 9 that I’ve listed have helped me save so much time while developing projects. Plus, I think they’re all simple ways to power up your forms with advanced features.
I hope you discovered some new tools that you can incorporate into your Vue projects.
What are some of your favorite input libraries? I’d love to hear from you! | https://medium.com/javascript-in-plain-english/9-vue-input-libraries-to-power-up-your-forms-91ca63ac389d | ['Matt Maribojoc'] | 2020-12-24 01:47:19.758000+00:00 | ['Technology', 'JavaScript', 'Vuejs', 'Web Development', 'Front End Development'] |
How to Choose the Right NPM Package for Your Project | When Should I Use One?
Let’s say you’re developing the “next great application.” You run into a problem and decide you do not want or do not know how to write a particular feature.
One of the main reasons you’d want to install a package is to use pre-existing code. There’s no need to reinvent the wheel or do a lot of difficult time-consuming programming when you can download standalone tools you can use right away in your application.
“There must be an external cool library that someone has already written.”
OK, you’re probably right, but keep in mind that one of NPM’s cons is that the registry has no vetting process for submission. This means that packages found there can be low-quality, not secure, or malicious.
So how will you find the right package for your needs? And how will you know you can trust it to do the job over time? Out of thousands of packages to choose from, it may not be obvious which one to pick.
With so many available, and new ones constantly being touted as “what you should really turn to,” it may be daunting to choose the right one for your project.
Upgrade your Strength Training: Stay Strong and Hold onto Muscle at Home | No load? No problem!
When we can train with weights and in gyms, we have an easy way to vary the resistance we are putting our bodies under. Increasing this resistance is a way we can adjust the mechanical tension through our body.
Without equipment such as barbells, dumbbells, machines this can be much harder to do without using whatever you can find around the house (bag of carrots? Children? Books in a rucksack?).
To this end, it will be difficult to provide enough of a stimulus to our body to drive big gains in strength or muscle mass increases.
We have to find other ways to produce a favourable environment to bring about muscular and neurological adaptations behind strength and muscle growth.
Isometric holds
Isometric is a term describing movements where a muscle’s length neither shortens nor lengthens, i.e. no movement, just holding a position.
A common reference exercise here would be the Plank and its variations, but there are many ways that we can include isometrics in our training with a little imagination and willingness.
Looking at the physiology of isometrics, most of the benefits for strength and muscle growth come from the metabolic stress (the accumulation of different metabolites in the bloodstream or working muscles) that is induced when we hold a muscle in a highly contracted state for extended periods.
The effect this metabolic stress has is to initiate different pathways which signal further pathways in a cascade pattern. Eventually, these signals can result in the activation of processes responsible for muscle growth and strength gains.
How to do Isometric training:
Take nearly any bodyweight movement and find a position in that movement where you can feel the most tension. Generally, this will be the point that’s hardest to hold, typically at about the midway point of the movement.
Once you’ve found this position, hold it for as long as you can. Either until you start to lose the position with form breaking down or you can’t hold it any longer.
Rest anywhere from 30–60 seconds depending on how long you could hold it for and repeat for 3–5 sets as a starting point. Over time progress to longer holds, less rest, or more sets.
Some example exercises:
Wall-sits
Planks + variations
Side planks + variations
Split squats
Bear crawl holds
Pullup bar hangs
Push-up holds (Top, Halfway down, Bottom)
Single leg hinges
Static Deadlifts/squats/presses/rows (use a band or towel)
For more ideas of different movement patterns to explore, check out this article:
AMRAP (As Many Reps As Possible) timed sets
Using AMRAPs with movements in a set timeframe can be another great way to promote this metabolic stress, as we saw in the isometrics.
Using these exercises on their own, as a finisher to a session or mixed in a circuit you can find plenty of variations and fun to be had here. Between sessions, you could aim to beat your reps from previous as well to add in a competitive element with yourself.
The rules here are pretty straightforward:
Keep the movements simple: Squat variations, push-ups, Bicep curls, Band rows. This is not the place to throw in complex or high-intensity movements (avoid box jumps and weightlifting movements that require complex patterns or a high degree of cognitive/thinking effort)
30–60 secs is a good starting point. Set the clock, press go and get after it.
If it’s burning, it’s working! Keep the rest periods short 30–40 secs and again repeat for 3–5 sets.
Another great way to include these is mixed together with the isometrics from before. Find movement patterns than are similar and pair them up to increase the metabolic stress effects. Try out Wall-sits + Squats, Split-squat holds + lunges, Bear-crawl holds + Push-ups.
Plyometrics for Power and CNS recruitment
Where strength is concerned (top-end strength), neuromuscular recruitment is king. In other words, strength is largely a result of more muscle fibres being used to apply force.
Power is a result of strength and speed. Essentially, the speed/rate at which we can apply force (rate of force development, RFD).
In gyms, we can train these qualities by lifting heavy things or lifting moderately heavy things quickly.
Outside of the gym setting our best options are to use the power of Plyometrics/jumps and sprinting.
Plyometric training is an often misunderstood concept and subsequently gets completely butchered and applied poorly to training.
In developing power, movement quality is an absolute priority. As soon as things start to slow down, drop in force or get sloppy then stop. You are not doing better by doing poor quality work here.
Less = More. Specifically, Less high-quality work > More low-quality work.
How to do plyometrics properly:
Choose a jumping based exercise that is easily repeatable (Jumping onto or off of something requires too much resetting between). Exercises like Pogo hops, jump squats, Alternating jumps lunges for example.
Perform the exercise at max effort for 8–12 seconds; as soon as it slows down, gets sloppy, or you lose power, STOP. The aim here is to perform the exercise with MAX EFFORT.
Rest for long amounts of time to keep the movement quality high. This should be anywhere from 40–90 seconds between sets of this length, but could easily be longer (90–120 secs) if necessary.
Repeat for 6–10 total rounds, increase only based on your ability to keep the force production quality high.
Aerobic Capacity: build your foundations
Now, this might seem counterintuitive. Surely cardio kills gains, right?
Really, it depends (an unhelpful response!)
If your training leans towards that of an endurance runner training for marathons or similar then the volumes you are doing will not be conducive to getting stronger or maintaining muscle.
But the ability to both supply and use oxygen through our aerobic energy systems is a vital underpinning of our recovery and ability to handle stress (the type of stress we create through training). This is something that absolutely benefits our strength and muscle building goals.
If we can recover more effectively, we can handle more training stress and do more training volume.
Aerobic training is simple to implement, but that doesn’t always mean it has to be easy.
Here are some guidelines to follow:
Aim to be working for 20–40 mins. Less than 20 mins and your intensity will likely start to increase and you won’t be getting truly aerobic benefits. More than 40 mins and you will likely be starting to diminish the strength and muscle mass you are trying to maintain.
Choose exercises that are relatively simple, easy and don’t leave you flat out on the floor struggling for breath. Walking, slow jogging, easy circuits (mix bodyweight exercises with walking or carrying objects or short isometrics)
Keep the intensity low and pace sustainable. If you are forced to slow down or rest for long times between reps or exercises then it is probably too high intensity.
For a deeper dive into the Aerobic system and its benefits for health and performance check out this article:
Eat, Sleep, Rest & Recover:
Making sure you are getting sufficient protein and calories in your nutrition.
Aiming for 7–9 hours of quality sleep per night, ideally going to bed 2 hours before midnight.
Ensure that you are getting enough rest and recovery between sessions to let your body adapt to the training you are doing. Too much training without adequate recovery will knock you back a whole lot more than being conservative and doing too little.
This comes down to dialling in all of the big-ticket areas that make the biggest difference to our training, health and strength.
If you implement these things for a few weeks you will see the difference it has on your strength gains. | https://medium.com/in-fitness-and-in-health/upgrade-your-strength-training-stay-strong-and-hold-onto-muscle-at-home-a3bbf7e52f67 | ['Kieran Moore'] | 2020-10-17 17:57:20.790000+00:00 | ['Lifestyle', 'Health', 'Lockdown', 'Strength', 'Fitness'] |
Mark Zuckerberg says Facebook doesn’t tap your microphone for better ad targeting | Back in 2016, Facebook denied the allegation that the social media listens in on your conversations via microphones in order to better target ads. During the joint hearingbefore the Senate Judiciary & Commerce Committees, Facebook CEO Mark Zuckerberg faced the same question, which he addressed with an absolute denial.
Senator Gary Peters (D-MI) asked Mr. Zuckerberg whether his company used audio from personal devices for ad targeting with a “yes or no,” he adamantly responded “no.” Then he explained that Facebook does have audio access, but only when users record videos on their devices for the social network. Otherwise, it doesn’t have any access to your microphone.
Here is Mark Zuckerberg’s full answer:
Senator, let me get clear on this, you’re talking about this conspiracy theory that gets passed around that we listen to what’s going on on your microphone and use that for ads. To be clear, we do allow people to take videos on their devices and share those, and videos have audio, so we do while you’re taking a video, record that and use that to make the service is better by making sure your videos have audio, but I think that is pretty clear. But I just wanted to make sure I was exhaustive there.
Well, Mr. Zuckerberg said his point. Now, it’s completely up to you whether you are satisfied or not. However, due to the recent data scandal, it seems to be difficult for everyone to put back trust on Facebook. | https://medium.com/the-technews/mark-zuckerberg-says-facebook-doesnt-tap-your-microphone-for-better-ad-targeting-10a4ac7c9734 | ['Selene Kyle'] | 2018-04-11 06:21:07.490000+00:00 | ['Technews', 'Facebook'] |
Practical Data Analysis Using Pandas: Global Terrorism Database | The Global Terrorism Database (GTD) is maintained by National Consortium for the Study of Terrorism and Response to Terrorism (START). The database file used in this notebook can be downloaded from Kaggle page (available in .csv format). It consists of the data of worldwide terrorist attacks from 1970 to 2017 including more than 180,000 attacks and 100 features. The GTD defines terrorism as —
“The threatened or actual use of illegal force and violence by a non-state actor to attain a political, economic, religious, or social goal through fear, coercion, or intimidation.”
Here in this post, we will use the data (over 150 MB) to learn pandas and various mapping libraries like Folium and Basemap, and eventually draw conclusions from the results.
What you can expect to learn from this post —
Pandas Groupby.
Pandas Crosstab.
Pandas Cut.
Pandas plot.
Leaflet Map using Folium.
Transform Coordinates (latitude and longitude) to map projections using Basemap.
And many more important concepts.
All the code used for this post can be found on my GitHub. The link is given at the end of the post. Let’s begin!
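To give a flavour of the pandas operations listed above, here is a minimal sketch. The column names (iyear, country_txt, attacktype1_txt, nkill) mirror those in the GTD, but the tiny DataFrame below is made-up illustrative data, not the real database:

```python
import pandas as pd

# Tiny made-up stand-in for the real GTD CSV (the actual file has 100+ columns)
df = pd.DataFrame({
    "iyear": [1970, 1970, 1995, 2001, 2015, 2016],
    "country_txt": ["Italy", "USA", "USA", "USA", "Iraq", "Iraq"],
    "attacktype1_txt": ["Bombing", "Assassination", "Bombing",
                        "Hijacking", "Bombing", "Armed Assault"],
    "nkill": [0, 1, 168, 2996, 30, 12],
})

# groupby: total fatalities per country
kills_by_country = df.groupby("country_txt")["nkill"].sum()

# crosstab: attack-type counts per country
attack_table = pd.crosstab(df["country_txt"], df["attacktype1_txt"])

# cut: bucket the year column into decades
df["decade"] = pd.cut(df["iyear"],
                      bins=[1969, 1979, 1989, 1999, 2009, 2019],
                      labels=["70s", "80s", "90s", "00s", "10s"])

print(kills_by_country)
print(attack_table)
print(df[["iyear", "decade"]])
```

The same three calls scale directly to the full 180,000-row file once it is loaded with pd.read_csv.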
Scaling the Wall Between Data Scientist and Data Engineer | This is the first article in a series of three, which focus on production ML and the intersection between data science and engineering. The other two are Trawling Twitter for Trollish Tweets and Deploying an ML Model to Production using GCP and MLFlow.
One of the most exciting things in machine learning (ML) today, for me at least, is not at the bleeding edge of deep learning or reinforcement learning. Rather, it has more to do with how models are managed and how data scientists and data engineers effectively collaborate as teams. Navigating those waters will lead organisations towards a more effective and sustainable application of ML.
Sadly, there is a divide between “scientist” and “engineer”. A wall, so to speak. Andy Konwinski, Co-founder and VP of Product at Databricks, along with others, points to some key hurdles in a recent blog post about MLFlow. “Building production machine learning applications is challenging because there is no standard way to record experiments, ensure reproducible runs, and manage and deploy models,” says Databricks.
The genesis of many major challenges in applying ML today — whether that be technical, commercial, or societal — is the imbalance of data over time coupled with the management, as well as utilisation, of ML artifacts. A model can perform exceptionally well, but if the underlying data drifts and artifacts are not being used to assess performance, your model will neither generalise well nor update appropriately. This problem falls into a gray area that is inhabited by both data scientists and engineers.
In other words, the crux of the problem is that the principles of CI/CD are missing in ML. It doesn’t matter if you can create a really good ‘black box’ model: if your environment changes (such as the input data) and the model isn’t regularly assessed in the context of what it was built to do, it will lose its relevance and value over time. This is an issue that’s hard to tackle because the people feeding the data in, engineers, and the people that designed the model, scientists, don’t have the happiest of marriages.
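The “recording experiments” gap that the Databricks quote points at can be made concrete with a toy, hand-rolled tracker. To be clear, this is an illustrative stand-in, not the MLflow API; a real tracking server automates exactly this kind of logging and comparison:

```python
import json
from datetime import datetime, timezone

def log_run(log, params, metrics):
    """Append one experiment run (parameters + metrics) to an in-memory log."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "params": params,
        "metrics": metrics,
    })
    return log

runs = []
log_run(runs, {"model": "logreg", "C": 1.0}, {"accuracy": 0.91})
log_run(runs, {"model": "logreg", "C": 0.1}, {"accuracy": 0.87})

# Pick the best run by its logged metric — the comparison a tracking
# tool performs for you across hundreds of runs.
best = max(runs, key=lambda r: r["metrics"]["accuracy"])
print(json.dumps(best["params"]))  # → {"model": "logreg", "C": 1.0}
```

Persisting such a log alongside the model artifact is what makes runs reproducible and lets both scientists and engineers reason about why a deployed model behaves the way it does.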
There are tangible examples of this challenge. Think about all those predictions saying Hillary Clinton was going to win amongst several other ML goofs. From self-driving cars killing an innocent pedestrian to prejudiced AIs, there have been some large missteps, which I would argue generally have origins in the gray area between data science and engineering.
That said, negative and positive alike, ML impacts our society. More positive, and slightly less commercial, examples include the electricityMap, which uses ML to map the environmental impact of electricity all over the world; ML in cancer research, which is currently helping us detect several cancer types earlier and more accurately; and AI-driven sensors helping agriculture meet skyrocketing global demand for food.
The Wall
With that in mind, it’s critical to get production ML and more specifically model management right. However, coming back to the point, data scientists and data engineers don’t always speak the same language.
It is not uncommon for a data scientist to lack an understanding of how their models should live in an environment that continuously ingests new data, integrates new code, is called by end-users, and can fail in a variety of ways from time to time (i.e. a production environment). On the other side of the divide, many data engineers do not understand enough about machine learning to understand what they are putting into production and the ramifications for the organisation.
Far too often have these two roles operated without enough consideration for one another despite the fact that they occupy the same space. “That’s not my job” is not the right approach. To produce something that is reliable, sustainable, and adaptable, both roles must work together more effectively.
Scaling the Wall
The first step to speaking each other’s language is to build a common vocabulary — to have some kind of standardisation of the semantics, and therefore how the challenge is, or tangential challenges are, discussed. Naturally, this is fraught with challenges — just ask several different people what a data lake is and you’re likely to get at least two different answers, if not more.
I’ve developed common reference points that I call the ProductionML Value Chain and ProductionML Framework.
We’ve broken the process of productionising ML into five overlapping concepts which are too often considered separately. Whilst it may seem like introducing a holistic framework like this would increase complexity and interdependency, in practice those complexities and interdependencies already exist, and ignoring them is just kicking a problem down the line.
By allowing for consideration of neighbouring concepts in the design of your production ML pipeline, you begin to introduce that elusive reliability, sustainability, and adaptability.
ProductionML Framework
The ProductionML Value Chain is a high-level description of what is required to operate a data science and engineering team for the purpose of deploying models to end users. There is naturally a more technical and detailed understanding — I call that a ProductionML Framework (some might call this Continuous Intelligence).
ProductionML Framework
This framework was developed after several rounds of experimentation with commercial MLOps tools, open source options, and the development of an internal PoC. It is meant to guide the future development of ProductionML projects, particularly the aspects of production ML that require input from both data scientists and engineers.
Data Science in orange and Data Engineering / Devops in blue
If you’re not familiar with those aspects, see data science in orange and data engineering / devops in blue.
As you can see, the “Training Performance Tracking” mechanism (e.g. MLFlow) and the Govern mechanism are centrally situated in this architecture. That is because every artifact, including metrics, parameters, and graphs, must be archived during the training and testing stages. Moreover, what is called Model Management is fundamentally tied to how the model is governed, which leverages those model artifacts.
The Govern mechanism combines artifacts and business rules to promote the appropriate model, or estimator to be more specific, to production while labeling others according to rules specific to the use case. This is also called model versioning, but the term ‘govern’ is used to avoid confusion with version control and emphasise the central role that the mechanism plays in overseeing model management.
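As an illustrative sketch of that idea (the run data, metric names, and promotion rule below are all invented for illustration, not taken from any real tool), governing promotion amounts to applying a business rule over archived artifacts and then ranking on a metric:

```python
# Each dict stands in for the artifacts archived for one training run.
runs = [
    {"run_id": "a", "accuracy": 0.91, "p99_latency_ms": 120},
    {"run_id": "b", "accuracy": 0.93, "p99_latency_ms": 340},
    {"run_id": "c", "accuracy": 0.89, "p99_latency_ms": 80},
]

def promote(runs, latency_budget_ms=200):
    """Apply a business rule (latency budget), then pick the best run by metric."""
    candidates = [r for r in runs if r["p99_latency_ms"] <= latency_budget_ms]
    best = max(candidates, key=lambda r: r["accuracy"])
    return best["run_id"]

print(promote(runs))  # → a (the most accurate run that satisfies the business rule)
```

In practice the runs and metrics would come from a tracking tool such as MLFlow; the point is that promotion becomes a rule over recorded evidence rather than a manual judgement call.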
A Golden Gun?
We’re all on this journey together. We’re all trying to scale the wall. There are a lot of great tools entering the market, but to date, no one has a golden gun…
Source: mrgarethm — Golden Gun — International Spy Museum
MLFlow makes great strides from my perspective: it answers certain questions around model management and artifact archiving. Other products similarly address relatively specific issues — albeit their strengths may be in other parts of the ProductionML Value Chain. This can be seen in Google Cloud ML Engine and AWS SageMaker. Recently, the beta version of AutoML Tables was made available by GCP, but even that does not deliver everything required out of the box, though it does come much closer.
With that continued disparity in mind, it is absolutely critical to have a common vocabulary and framework as a foundation between scientist and engineer.
Is the wall too tall? From my experience, the answer is no, but that’s not to say ProductionML is not complex.
This article is the first in a three-part series related to ProductionML. Stay tuned for the next two.
Obligatory James Bond Quotes
M: So if I heard correctly, Scaramanga got away — in a car that sprouted wings!
Q: Oh, that’s perfectly feasible, sir. As a matter of fact, we’re working on one now.
Perhaps that’s how you should get over that wall… | https://medium.com/weareservian/scaling-the-wall-between-data-scientist-and-data-engineer-51b0a99da073 | ['Byron Allen'] | 2019-07-08 04:24:29.292000+00:00 | ['Machine Learning', 'Data Science', 'DevOps', 'Mlflow', 'Data Engineering'] |
React 2020 — P6: Class Component Props | We covered class components in the previous article. We created a couple of Jim Carrey movie components and we stopped there since we didn’t want to keep creating new movie components for each Jim Carrey movie.
It would be beneficial if we had one component that allowed us to render any movie that Jim Carrey appeared in by passing in the movie name as the argument. If you’ve been following along, passing arguments should not be a new concept for you in React. We already looked at how to pass arguments in functional components; we just need to figure out how to do it in class-based components.
Let’s create a new file under src/components and call it BestJimCarreyMovies.js.
The file will contain a class-based component named BestJimCarreyMovies that returns a string “Movie by Jim Carrey.” The component should be exported so that it can be imported elsewhere.
We’ll import it in our src/App.js file and render it in our App component.
Once you make sure that you’re running your development server (npm start), check the browser.
The first two movies are from the two components that we created in the previous article.
In functional components we passed the props object as an argument to the functional component. The props object contains all of the properties that were declared. In class components, since we extend React.Component, we gain access to the props object through inheritance. To access a property or method inside of the object, we use the this keyword. If you’re not familiar with the this keyword, you’ll need to familiarize yourself with Object-Oriented Programming. The easiest way that I can explain it is that when you’re talking about yourself, you use the I or my pronouns. For example, if you were talking about your eyes, you would say my eyes. If you were talking about someone else’s eyes, you would say his/her/their eyes. With this (my), you’re pointing to yourself. Since props was inherited, it’s within the object, so you’ll reference it with this.
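As a plain-JavaScript analogy (a simplified stand-in for what React.Component does, not React's actual implementation), the parent class stores the props object on the instance, which is why the subclass can reach it through this:

```javascript
// Simplified stand-in for React.Component: the parent constructor
// stores the props object on the instance.
class Component {
  constructor(props) {
    this.props = props;
  }
}

// The subclass inherits that behaviour, so `this.props` just works.
class BestJimCarreyMovies extends Component {
  render() {
    return `${this.props.movie} by Jim Carrey`;
  }
}

const c = new BestJimCarreyMovies({ movie: "Ace Ventura" });
console.log(c.render()); // → Ace Ventura by Jim Carrey
```

React creates the instance and passes the props in for you, which is why you never call the constructor yourself.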
The process of creating the property is the same as what we did with functional components. When you render your component in the App component, you’ll attach a custom property to it; in this case we’ll create a custom property named movie.
In src/App.js <BestJimCarreyMovies movie="Ace Ventura" />
The movie gets attached to the props object and is accessible by using this.props.movie.
In src/components/BestJimCarreyMovies.js
return (
  <div>{ this.props.movie } by Jim Carrey</div>
)
If we check our browser, we can see that Ace Ventura by Jim Carrey is displayed. We can now render this component as many times as we want and pass a new argument to it each time.
I know you see where this is going. Why not abstract this out even further? Why not just create a Movie component and pass the movie title and actor as props?
We can render movies from Jim Carrey or from any other actor now.
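A sketch of what that generalised component could look like (the file location and prop names here are my own choice, not prescribed by the article):

```jsx
// src/components/Movie.js
import React from 'react';

class Movie extends React.Component {
  render() {
    return (
      <div>{ this.props.title } by { this.props.actor }</div>
    );
  }
}

export default Movie;
```

Then in src/App.js it can be rendered for any actor: <Movie title="Ace Ventura" actor="Jim Carrey" /> or <Movie title="Cast Away" actor="Tom Hanks" />.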
That’s really all there is to the props object in class-based components. See you in the next article when we cover JSX.
Pie Charts in Python | A pie chart is a type of data visualization that is used to illustrate numerical proportions in data. The python library ‘matplotlib’ provides many useful tools for creating beautiful visualizations, including pie charts. In this post, we will discuss how to use ‘matplotlib’ to create pie charts in python.
Let’s get started!
For our purposes, we will be using the Netflix Movies and TV Shows data set, which can be found here.
To start, let’s read the data into a Pandas data frame:
import pandas as pd
df = pd.read_csv("netflix_titles.csv")
Next, let’s print the first five rows of data using the ‘.head()’ method:
print(df.head())
As we can see, the data contains columns with various categorical values. Pie charts typically show relative proportions of different categories in a data set. For our pie chart visualizations, the ‘rating’, ‘country’, and ‘type’ columns are good examples of data with categorical values we can group and visualize.
To get an idea of the distribution of categorical values in these columns, we can use the ‘Counter’ method from the collections module. Let’s apply the ‘Counter’ method to the ‘type’ column:
from collections import Counter
print(Counter(df['type']))
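Those counts map directly onto matplotlib's pie function. Below is a minimal sketch of the plotting step; the Movie/TV Show counts are illustrative placeholders standing in for the real Counter output:

```python
from collections import Counter

import matplotlib
matplotlib.use("Agg")  # render off-screen so this also runs without a display
import matplotlib.pyplot as plt

# Placeholder counts standing in for Counter(df['type']) on the Netflix data
type_counts = Counter({"Movie": 4265, "TV Show": 1969})

labels = list(type_counts.keys())
sizes = list(type_counts.values())

fig, ax = plt.subplots()
wedges, texts, autotexts = ax.pie(
    sizes, labels=labels, autopct="%1.1f%%", startangle=90
)
ax.axis("equal")  # equal aspect ratio draws the pie as a circle
ax.set_title("Titles by type")
fig.savefig("type_pie.png")
```

The same pattern works for the ‘rating’ and ‘country’ columns; with many categories it helps to keep only the top few and group the remainder into an ‘Other’ slice.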
Let’s also look at the ‘rating’ column: | https://towardsdatascience.com/pie-charts-in-python-302de204966c | ['Sadrach Pierre'] | 2020-05-26 21:20:02.286000+00:00 | ['Data Science', 'Python', 'Technology', 'Programming', 'Software Development'] |
Learn to Collect, model, and deploy data-driven systems using Python and machine learning | If you want to learn A to Z of Collecting, modeling, and deployment of data-driven systems using Python and machine learning, consider looking at this Coursera specialization by UC San Diego (View here).
The instructors of this four course specialization are:
Julian McAuley, Assistant Professor, Computer Science
Ilkay Altintas, Chief Data Science Officer, San Diego Supercomputer Center
Four Courses in the specialization
This specialization by the University of California San Diego consists of four courses. We highly recommend that you take all four courses as presented. Those who already know what’s being taught in a particular course can choose to skip it and go on to the next course.
Following are the four courses:
1. Basic Data Processing and Visualization
2. Design Thinking and Predictive Analytics for Data Products
3. Meaningful Predictive Modeling
4. Deploying Machine Learning Models
To learn more about these courses, please go here
Who should take this specialization?
Anyone who has intermediate knowledge of the subject and is aiming to become a Data Scientist, Senior Data Analyst, or Data Engineer. Apart from this, anyone who wants to further enhance their Machine Learning, Python Programming, Predictive Analytics, Data Processing, and Data Visualization skills.
The specialization is 100% online, so you can start from anywhere; just choose your own schedule and maintain it. So go ahead and give this specialization a go! You get a 7-day free trial if you land on the specialization page through any of the links in this post.
Those who can’t afford the price may consider applying for the financial aid program Coursera runs. Those who are deserving may end up enrolling in the specialization for free.
Always aim for the Coursera certificate
If you choose to go for this specialization, make sure to get the certificate by fulfilling all the conditions. Coursera certificates are recognized by top tech companies. Therefore, make sure to mention it on your resume and LinkedIn profile to get an edge over the competition.
Hope you find this post insightful; we look forward to bringing more resources like this in the future. Just follow us on social media to get the latest updates.
What Happens to Your Body When You Suddenly Stop Exercising | What Happens to Your Body When You Suddenly Stop Exercising
You will feel four major changes, and it’s vital to know exactly what happens to your body
Photo by Dollar Gill on Unsplash
Exercising only has benefits for your health when you do it regularly. But making it a habit does not happen easily or overnight and takes effort, motivation, and sacrifice.
There are many things completely out of your control that can become a hurdle in your fitness journey – the best example being the current pandemic. The COVID-19 lockdowns led many to hit pause as they couldn’t get to their gyms, play sports or go out for a run. And while you may have figured out how to resume your workouts safely by now, it’s good to know what exactly happens to your body when you suddenly quit exercising. Here are four major changes you’ll notice:
#1: Increased oxygen demand
Exercising is good for your heart, as it improves the efficiency with which your heart pumps blood. This blood carries oxygen, which spreads through the entire body more effectively. When you do not exercise for a few weeks, it becomes difficult for the heart to handle extra blood flow, and its ability to use oxygen effectively (a capacity medically called VO2 max) declines.
VO2 max is the heart, lungs, and muscles' maximum ability to use oxygen effectively during exercise. It is also used to measure the aerobic capacity of a person.
In the book Endurance Training: Science and Practice, it has been stated that two to four weeks of detraining can significantly reduce VO2 max.
You can easily make this out on your own after quitting; you may soon be short of breath from climbing a flight of stairs.
#2: Loss of muscle mass
It might not become visible immediately, but you will start to notice changes in your muscles (smaller and weaker muscles) after quitting high-intensity exercise or weight training. A study published in Clinical Physiology and Functional Imaging revealed that a detraining period of 12 weeks could reduce muscle mass and lower muscular strength. However, it further stated that a person can regain all of it quickly after retraining, as the muscles have memory. After quitting exercising, a person first loses power and endurance, and then strength. You can notice this while picking up heavy groceries; while you will be able to carry the weight, you may get tired more quickly than before.
#3: Spiked blood sugar levels
When we eat food, our blood sugar levels spike. In people who exercise daily, this extra sugar gets absorbed by the muscles and other tissues for energy. However, when people stop working out, these sugar levels remain elevated for longer after a meal. A study published in the journal Medicine & Science in Sports & Exercise showed that even three days of inactivity in young, healthy individuals can lead to glucose intolerance.
#4: Weight gain
When you stop working out, your body fat increases as your calorie requirement decreases. Your metabolism slows down, and the muscles lose their ability to burn as much fat. Also, since you’re not burning the same number of calories as you did while working out, the extra calories will be stored as fat in the body.
Disclaimer: The information provided here is intended to provide free education about certain medical conditions and certain possible treatments. It is not a substitute for examining, diagnosing, treating, and providing medical care provided by a licensed and qualified health professional. | https://medium.com/in-fitness-and-in-health/what-happens-to-your-body-when-you-suddenly-stop-exercising-14d0ac7af31e | ['Vivek Coder'] | 2020-12-22 17:33:51.434000+00:00 | ['Self', 'Exercise', 'Health', 'Life Lessons', 'Life'] |
Outlier — Apply to become a speaker! | Sign up for The 'Gale
By Nightingale
Keep up with the latest from Nightingale, the journal of the Data Visualization Society Take a look | https://medium.com/nightingale/outlier-candidati-per-diventare-un-relatore-32f892b953e0 | ['Mollie Pettit'] | 2020-11-13 00:27:15.918000+00:00 | ['Community', 'Data Visualisation', 'Outlierconference', 'Data Visualization', 'Dataviz'] |
In Defence of “Serverless” —the term | Serverless is not a good term, yet it is used to describe a powerful and often misunderstood concept.
Being a concept, “an abstract idea”, it is intrinsically nuanced. It’s a feeling, a quality, a spectrum — and overall a space of possible solutions. That space is bounded by the characteristics and qualities of the different “things” it contains. More and more serverless experts are becoming frustrated by using the term serverless for defining this complex space.
This article will address the shortcomings and strengths of the term “serverless” as well as presenting a defence of the benefits of challenging ourselves to change the way we think about building applications — moving further towards serverless architectures.
Some existing definitions:
Serverless allows you to build and run applications and services without thinking about servers — AWS A Serverless solution is one that costs you nothing to run if nobody is using it. — Paul Johnston Serverless architectures are application designs that incorporate third-party “Backend as a Service” (BaaS) services, and/or that include custom code run in managed, ephemeral containers on a “Functions as a Service” (FaaS) platform. — Mike Roberts
These are all masterfully concise and astute definitions, but they lack the practicality of detail, and this detail is the key to understanding the power of a serverless approach.
As a side note, one less elegant definition of mine from earlier in the year consisted of a tweet chain, conveying the journey I've gone through forming a mental model of the space of serverless solutions.
For those not familiar with the symbol “⊂” used above, it means the thing on the left is a subset of the thing on the right, i.e. FaaS is a subset of the Set of Serverless things.
Diving Deeper
Since AWS Lambda's release in 2014, it's been common for people to use the terms FaaS (Function as a Service) and Serverless synonymously. FaaS is a term for Compute services that allow functions of code to be run in an isolated way. These functions run in response to events (e.g. HTTP calls), through a system that abstracts away the underlying complexity of the server, its OS, and all the details of provisioning and scale.
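As a minimal illustration, a FaaS function is typically just a handler that receives an event and returns a response. The sketch below follows the AWS Lambda Python handler shape; the event fields are my own assumptions for an HTTP-style trigger:

```python
import json

def handler(event, context):
    """Invoked once per event; provisioning and scaling are the platform's job."""
    # For an HTTP-style event, the payload typically arrives as a JSON string body.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Locally it can be exercised by calling handler({'body': '{"name": "Ben"}'}, None); in production the platform invokes it for each incoming event and tears the environment down when idle, which is what enables scale-to-zero billing.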
FaaS is a key part of many serverless architectures, but you can have a serverless architecture without using FaaS, and a non-serverless architecture that makes use of FaaS. FaaS is very much a Serverless approach to Compute, but Compute (i.e. running code) is not the sole aspect of many application architectures.
There is a wide range of things an application architecture may need. e.g. Compute, Storage, Monitoring…
A few examples of Serverless architecture components provided by AWS
As mentioned above, FaaS functions are triggered by events, e.g. an HTTP call. The other serverless services in the image above can all be triggered by events and even generate events themselves, which can go on to trigger more services. As architectures make efficient use of these services, they generally become more event-driven.
This idea of event-driven architectures has more recently started being used hand in hand with the term a serverless architecture.
This linking of terms is spreading further as cloud providers like AWS release more and more services targeting serverless architectures. And it’s not new, serverless has from an early stage often been characterised as “scale to zero” — such an approach to scaling and billing is obviously tightly coupled to having services that are event-based.
An example of this focus on event-driven architectures is one of the new major serverless services released by AWS — arguably the biggest since Lambda.
EventBridge is AWS's answer to a serverless, fully managed event bus. This allows events to flow through different services with less code, better observability and some very forward-thinking additions like the newly launched EventBridge Schema Registry. The Schema Registry can automatically detect and aggregate all the events of huge distributed architectures into one centralised registry — even providing automatically generated typed SDKs for developers to access the events from the IDE.
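To make the event-bus idea concrete: publishing to EventBridge boils down to a PutEvents call. The sketch below builds an entry in the shape the boto3 put_events API expects; the source and detail-type names are invented for illustration:

```python
import json

def build_entry(source, detail_type, detail, bus_name="default"):
    """Shape one event for EventBridge's PutEvents API."""
    return {
        "Source": source,              # e.g. "orders.service" (illustrative)
        "DetailType": detail_type,     # e.g. "OrderPlaced" (illustrative)
        "Detail": json.dumps(detail),  # arbitrary JSON payload
        "EventBusName": bus_name,
    }

def publish(events_client, entry):
    # events_client would be boto3.client("events") in a real deployment
    return events_client.put_events(Entries=[entry])
```

Rules on the bus then match on fields like Source and DetailType and fan the event out to targets such as Lambda functions, which is what lets services react to one another with very little glue code.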
The Space of the Possible Solutions Grows
The space of possible solutions includes all the services pictured above.
As well as many other services both in AWS and in other cloud providers.
As well as services to be released over the next decade.
As well as serverless systems self-rolled by creative engineers building new and exciting things by gluing together these component blocks in ways even the cloud providers are not anticipating. EventBridge, after all, was the result of AWS seeing the weird and wonderful ways CloudWatch Events were being used to trigger Lambdas.
Another example of this phenomenon could be the simple Serverless Event Scheduling System we built by combining of Step Functions and Lambda — covered in detail in another of our articles.
So, why are we not all adopting Serverless?
Well first off, it’s hard to see this space of possible solutions from a single, often misused, word — Serverless
Especially when a certain framework chooses to use it as their name — an amazing tool, but not helping to preserve the complex semantics behind the term.
Now, it’s obviously not just a question of lexicon that has slowed the spread of serverless architectures. It’s a paradigm shift, potentially even more impactful than the move to the “Cloud” before it.
Engineers who have grown up building systems composed of servers, self-managed systems and perpetual state have to shift to thinking in event-driven ways, with seemingly infinite compute scale and storage — along with new challenges, constraints and a lack of tooling and best practices.
For instance, just one part of moving to a more serverless architecture often involves adopting DynamoDB (or similar) in place of the relational databases of the past. Now NoSQL isn’t new, but the idea of using it for any and all core business requirements is — this is because the whole system now works at a scale and event-driven nature before unknown.
It’s not easy — DynamoDB is complex, doing it well even more so. Throw onto that abandoning the familiar web application frameworks of the past — instead keeping functions lightweight with a low cost of change and offloading typical framework responsibilities to the cloud provider — none of this is trivial!
The benefits outway the costs when you can see the “space of the possible” clearly
People are not going through this pain of migrating their systems and mindsets for the sake of it. Serverless architectures, as discussed in many other articles, allow developers to focus on writing the code that embodies the most business value, while reducing the TCO (Total Cost of Ownership), increasing scalability and reducing the carbon footprint at the same time.
Words Fail
Sometimes, words fail. Sometimes it’s too much to put on one word the responsibility of communicating the boundaries of such a wide and polymorphic space of possible solutions.
But as a word that was created way before many of these newer serverless services — it’s not done a bad job. In fact, it’s a testament to the strength of the word that we are struggling to separate it from this space.
If it had been FunctionFull, there is no way we would have kept using it as the space grew, and maybe we would not have grown the space in the same way without the Serverless North Star guiding us towards an intangible quality.
For now Serverless, to me at least, manages to do a hard job, defining the borders of a very fluid and complex space of possible solutions in which we can build next-generation architectures. It would help if there was not a framework of the same name, it would help if people didn’t first hear it synonymous with Lambda and it would help if people stopped saying “but you know there are servers…”. That being said, I’ve not heard a better proposal yet!
For now, our job is to do damage control on the current “brand issues” serverless has by sharing more best practices, more content, more architectures, more tooling and also more stories about what can go wrong. | https://medium.com/serverless-transformation/in-defence-of-serverless-the-term-764514653ea7 | ['Ben Ellerby'] | 2019-12-17 00:04:09.179000+00:00 | ['Serverless', 'AWS', 'Cloud Native', 'Lambda', 'Cloud'] |
Put Away Childish Things | 10/16/94, Soldiers and Sailors Memorial Auditorium, Chattanooga, TN
Imagine you’re writing the blurb for an upcoming Phish concert in a newspaper, circa 1994. Drawn as music critics are towards the easy narrative, you’d probably foreground Phish’s reputation for goofiness: the vacuum cleaner, the giant beach balls, a capella songs and trampolines. Since it was too early for the “Grateful Dead’s heir” cliche, if your local music hack even got to the music, it would focus on Phish’s genre shapeshifting, plentiful covers, and maybe their proggy Zappa genetics.
For most of the shows I’ve listened to in 1993 and 1994, that hypothetical blurb does a pretty accurate job of anticipating reality. Aside from a few legendary anomalies, Phish mostly delivered what their press kit promised, providing most, if not all of the above features night in and night out. But imagine if you read that blurb and then attended a show like 11/2/94, 11/16/94, or 12/29/94, expecting a silly prop band and instead finding yourself assaulted by half an hour of mind-bending experimentation. You might write your local music critic a strongly-worded letter.
Phish have never entirely shed their humorous side, and bless ’em for it — you can’t help but appreciate a bunch of fiftysomethings chanting and re-enacting “golf cart marathons” at Madison Square Garden. But maturing improvisationally in the mid-90s required a little bit of growing up. Logistics alone demanded some pruning — if you’re going to spend 30 of your nightly 150 minutes on one song, you have to make some cuts elsewhere. But what the band chose for the chopping block offers some clues to their headspace as they went through one of their most significant evolutions.
10/16/94 doesn’t feature the final version of any songs, but the setlist does contain a couple on their deathbed: “The Landlady” and “Big Ball Jam.” Not counting its unlikely recent resurrection, this is the second-to-last “Landlady” without horns, and the last time it would occupy the set-opening position Phish toyed with in 1994. A couple songs later, there’s the third-to-last “Big Ball Jam,” that scourge of taper sections everywhere and a second-set staple since fall 92.
For “The Landlady,” I’ve blabbed before about the end of “Phish jazz”; in short, the subtleties of jazz just weren’t going to work any more as the band moved into larger and larger venues, while simultaneously the band was moving past traditional jazz rules of improvisation. By fall 94, “Landlady” stuck around largely so the band could check a genre box: yes, we can do jazz. Its frequent pairing with “Poor Heart” at this time reinforced that tokenism: yes, we can do both jazz *and* country, and we can even segue them together for extra credit. Reviving PYITE and its Landlady interpolation accomplished the same thing, in a more interesting fashion.
“Big Ball Jam” also didn’t play as well in larger venues (pity poor Brad Sands and his ball retrieval duties), but more importantly, just wore out its schtick. The epitome of “fun in person, terrible on tape” never really grew beyond a few minutes of random noise, and its increasingly intermittent appearance in 1994 suggested the band was getting tired of the act.
To put it in a favorable light, the “song” did represent a noble effort to increase crowd participation, explicitly acting out the symbiotic relationship of Phish and their audience. Much like the secret language, it was not just a crowd-pleasing gimmick, but a test of fan knowledge — those who were paying attention could get one over on the newbies. But it’s also interesting that BBJ (and secret language, for that matter) start to disappear around the same time as fan-generated rituals, such as the “Stash” clapping or “Wilson” chant, begin to organically arise.
There’s one other casualty of Phish’s growing “maturity” that’s represented in this show not by a near-final appearance, but by its absence. We’re now nine shows into the fall ’94 tour, and on only three occasions so far (10/8/94 “Purple Rain,” 10/10/94 “Love You”, and 10/13/94 “I Didn’t Know”) has Fishman dragged out his Electrolux. By comparison, the first 9 shows of the spring/summer tour had 5 vacuum appearances, and the first nine shows of 1993 had…nine.
The gradual fading out of Fishman’s feature segment is perhaps the biggest adjustment of all. Not only does it start moving Phish away from their image of “that band with the guy in the dress who plays vacuum,” it opens up a lot of second-set real estate that used to be the domain of BBJ, HYHU, and a whole lot of sucking and blowing. Any momentum built up by early second set jamming didn’t have a chance against that lengthy bloc of hijinks, and to make any significant progress on set flow, it had to go…or at least become a lot less frequent.
Knowing how fanbases work, I’m sure there was a segment of people alienated by this transition — somewhere, some jaded vet talks about how Phish totally sold out when they stopped using speed skating workout equipment in the middle of “It’s Ice.” But some kind of yard sale had to take place (“3 giant inflatable balls for the price of 1!”) to take the next step forward, and the band happily stopped far short of going completely “straight.” Scaling back the funny-weird just opened up more room for the scary-weird, a change that would take fans, and the blurb writers of the world, a while to catch up with. | https://medium.com/the-phish-from-vermont/put-away-childish-things-92cd2fbcc51 | ['Rob Mitchum'] | 2017-03-08 23:15:58.995000+00:00 | ['Music', 'Phish'] |
Women Deal With Men Like Joseph Epstein All The Time | Women Deal With Men Like Joseph Epstein All The Time
There’s nothing new about the attack on Jill Biden.
I’d just defended my dissertation. We were out celebrating, when a guy sat down at our table. He started hitting on us. When he heard about my Ph.D., he decided to make his own.
He pulled out a pen and grabbed a napkin.
When he was done, he held it up with both hands. “What do you think?” he said. “It’s probably worth more than yours!”
It didn’t hurt my feelings. I had a tenure-track job with a major university waiting for me. All that guy had was a faux hawk. We ignored him until he left. Later, he poured a drink on someone and got thrown out.
It wasn’t until a few days ago that I remembered that otherwise forgettable moment, when reading an op-ed column that’s gotten a lot of attention lately.
Of course, I’m talking about Joseph Epstein’s “suggestion” that Jill Biden drop her credentials from her name and stick to being First Lady, now that the election is finally over. In the end, Epstein is no different from the guy with the faux hawk. They’re both intellectually insecure man-children who have their own issues to work through.
Sadly, they have a wide audience. | https://medium.com/the-apeiron-blog/women-deal-with-men-like-joseph-epstein-all-the-time-f102bbfbad58 | ['Jessica Wildfire'] | 2020-12-17 16:02:36.956000+00:00 | ['Equality', 'Women', 'News', 'Society', 'Feminism'] |
Domo arigato, Mr. Smalltalk | My adventure began in 2007 when a dear friend of mine from Cherniak Software, a major Smalltalk shop, persuaded me to try Smalltalk. After many years of programming in C and C++ and FORTRAN and Tandem TAL and assembly language, I found that Smalltalk was a breath of fresh air! It was a beautifully simple and elegant language. It had a beautifully simple and elegant programming environment. It was astonishingly easy to learn, and yet it was unbelievably powerful (uncharacteristic of a simple instructional language intended to teach children how to program).
Years later, after I saw how Smalltalk was languishing in the marketplace, I decided to found Smalltalk Renaissance, a marketing and promotional nonprofit. I served as its Campaign Director during 2015.
I found it difficult to engage the Smalltalk community. After many years and many failed attempts to popularize Smalltalk, most had felt beaten down into submission. They began to rationalize that Smalltalk didn’t need to be popular. The community was smug and insular, and that was just fine with them.
I can’t say I blame them. Smalltalk has had a storied history and a lot of water under the bridge. But after all these years, it just didn’t seem worth getting their hopes up again.
Nevertheless, I was determined to give it another shot. It was no less than this great language deserved.
Forget Grassroots
My strategy was simple. For years, people had written numerous technical articles and blogs, and given numerous technical talks at conferences, demonstrating what you could do with Smalltalk and how you could do it. People had been contributing to various open source Smalltalk projects, in particular, the Pharo project, helping to improve the platform and making it more useful for businesses. However, the “if you build it, they will come” philosophy wasn’t working. Appealing to people on an intellectual basis wasn’t working. Smalltalk was still being ignored. What needed to be done, I surmised, was to appeal to people on an emotional basis, the way it’s done in marketing and advertising. Smalltalk needed to promote itself in the same way that Apple promotes iPhone and Mac computers, in the same way Donald Trump promotes himself as a Presidential contender.
Smalltalk needed to make a lot of noise in social media (not so much in conventional media because we don’t have the financial resources to do so). It needed to keep the marketing message very simple and highly focussed. It needed to bend the truth a little bit. It should appear flashy and exciting; Smalltalk had to attract as many eyeballs as possible. I had to take on the role of Steve Jobs, master marketer. | https://medium.com/p/domo-arigato-mr-smalltalk-aa84e245beb9 | ['Richard Kenneth Eng'] | 2017-08-26 23:34:09.067000+00:00 | ['Programming', 'Programming Languages', 'Donald Trump', 'Startup', 'Smalltalk'] |
Libra under the Privacy Magnifier | Billions of words have already been said and written about the new ambitious project “made in” Facebook, Libra. In the following lines, we will not examine the project when it comes to its White Paper and its pretty clear ambitions, but we will try to propose a privacy-centric analysis based on Facebook statement.
Far from a cynical use of the recent scandals around privacy and user data protection, this article is rooted in a profound, and somewhat justified, concern about the privacy questions Facebook has raised over the last decade and the financial privacy questions it will raise from 2020. As for methodology, we will rely primarily on Facebook's official sources, including but not limited to the Libra White Paper and Calibra's website.
A Privacy Journey with Libra
calibra.com
Our journey began when David Marcus and his team released the long-awaited White Paper of Facebook's global cryptocoin, slated to launch in 2020. The entire cryptosphere seized on the subject in no time, joined by most of the financial press and the high-tech community.
As for us, we played a straightforward game (Ctrl+F: "privacy") to check how the upcoming Facebook Coin will fare in light of the unusually sharp rise in concern over the foretold death of our privacy.
And here are the elements we have found.
Safe. Secure
On Calibra’s website, under the tab “About the Currency”, you can easily find the company’s vision when it comes to privacy:
“Your transaction activity is private, and we will never post it publicly. Calibra is a subsidiary of Facebook that has been set up to be separate to help protect your financial and account information. Learn more about security and privacy on Calibra.”
The first impression is that Facebook is taking it seriously this time and committing to keep our Libra transactions private, a claim illustrated by the fact that the state-sized company has set up a separate entity to handle the development of the coin.
Learn more about Privacy
So we’ve clicked on Learn more, but we didn’t learn that more actually.
The link redirects us to a PDF where we can read:
“Facebook teams played a key role in the creation of the Libra Association and the Libra Blockchain, working with the other Founding Members. While final decision-making authority rests with the association, Facebook is expected to maintain a leadership role through 2019. Facebook created Calibra, a regulated subsidiary, to ensure the separation between social and financial data and to build and operate services on its behalf on top of the Libra network. Once the Libra network launches, Facebook, and its affiliates, will have the same commitments, privileges, and financial obligations as any other Founding Member. As one member among many, Facebook’s role in governance of the association will be equal to that of its peers.”
In other words, Calibra, a separate legal entity owned by Facebook Inc., is to be considered the guarantor of Libra users' privacy. A little light, you'll admit.
In the same document, Calibra provides its vision when it comes to sharing account information or financial data with Facebook.
Pretty straightforward actually, as you may see:
“Aside from limited cases, Calibra will not share account information or financial data with Facebook, Inc. or any third party without customer consent. For example, Calibra customers’ account information and financial data will not be used to improve ad targeting on the Facebook, Inc. family of products.”
Let us rephrase: Calibra will not share any kind of financial data with Facebook or any third party unless you, the user, provide your consent.
The question is how Calibra will request your consent. Won't it be hidden somewhere in a long "Terms of Use" document that nobody reads but everyone agrees to? And how will you be able to review this consent over time?
We still don’t know.
But let's get back to the text: the first sentence of the quoted passage says everything about Calibra's attitude toward financial data.

"Aside from limited cases." Privacy in general, and financial privacy in particular, does not admit of half-measures. Nuances are quite dangerous, especially when the judge is a for-profit company with very questionable ethics.
What are these limited cases, according to Calibra?
Here we go: Preventing fraud and criminal activity — Compliance with the law — Payment processing and service providers.
Let’s review these limited cases one-by-one.
Preventing fraud and criminal activity
This is the easiest one, actually, since Calibra will share your financial data when it believes you are using Libra to commit fraud, pose a security threat, or engage in any kind of criminal activity. No surprise here: as a subsidiary of a US-based public company, Calibra will collaborate with authorities to prevent illegal use of its coin.

But who is the judge? Does Calibra intend to decide by itself what counts as criminal activity, or will it respond to an official request to access financial data? No clue here. The answer, judging from Facebook's past, lies somewhere in the middle.

Calibra will most likely build and implement automated SOPs and AI-based tools to prevent any malicious use of the coin, and it will surely share data with the relevant authorities. But will you know, as a coin holder, that you are under "investigation" if and when that happens? Will you know, if you are informed at all, what the charges are? Or will Calibra act as both judge and executioner?
Compliance with the law
Calibra will respect the law. Fine. But which one? The US one, since Facebook is American? The Swiss one, since Calibra is based in Geneva? And again, will you know in real time when Calibra shares your financial activities with your local tax office?
Payment processing and service providers
Here, Calibra’s quote talks for itself.
“When you authorize a payment, we share data with third parties necessary to process that transaction. We also share Calibra customer data with managed vendors and service providers — including Facebook, Inc. — that support our business (e.g., to provide technical infrastructure or direct payment processing). In both cases, we share only the Calibra customer data that is necessary for completing the defined activity or service.”
In other words, every time you make a payment using Libra, third parties, vendors, service providers, and even Facebook will get your data.

Well, here the "limited cases" claim falls apart. There is no limitation: in all cases, Calibra will share your financial data with everybody.

To summarize: based on what Calibra released earlier this week, and although it officially claims that the financial privacy of Libra's users will be protected, it is easy to see that Facebook Coin does not intend to keep its word.

In no case, it seems, will Libra users' financial data be protected and kept private.
Libra is Beam’s Doppelganger
Courtesy to CW — The Flash — Zoom, The Flash’s Doppelganger
We will have to be patient, since Libra's launch is scheduled for 2020. Far from jealousy-driven criticism, we have offered here a commentary grounded in the text itself, and found that Libra has been conceived and designed as a Doppelganger of all cryptocurrencies.

To simplify things: crypto users will have to choose. Are they willing to renounce their financial privacy by using Libra, or do they want to protect it and keep it private by using Beam?
So a word to the wise. | https://medium.com/beam-mw/mimblewimble-libra-privacy-8951f328ee4e | ['Beam Privacy'] | 2019-06-20 10:40:35.718000+00:00 | ['Privacy', 'Mimblewimble', 'Cryptocurrency', 'Libra Coin', 'Facebook'] |
How to build Animated Charts like Hans Rosling — doing it all in R | How to build Animated Charts like Hans Rosling — doing it all in R
A Small Educative Project for Learning Data Visualisation Skills leveraging 2 libraries (gganimate and plot.ly) — UPDATED with new gganimate version
Hans Rosling was a statistics guru. He spent his entire life promoting the use of data, with animated charts, to explore development issues and to share a fact-based view of the world. His most popular TED Talk, “The best stats you’ve ever seen,” has more than 12 million views. It’s also one of the Top 100 most-viewed TED Talks in the world.
Have you ever dreamed of building animated charts like his most popular one for free, from scratch, under 5 minutes, and in different formats you can share (gif, html for your website, etc.)?
What we will be building with gganimate ;)
This article will show you how to build animated charts with R using 2 approaches:
R + gganimate library that will create a GIF file
R + plot.ly that will generate an HTML file that you can embed into your website (see below the plot.ly version)
What we will be building with plot.ly ;)
The assumption is that you already have RStudio installed and know some of the basics, as this article only covers the steps and the code to generate the visuals. The full code is available on GitHub for educational purposes.
STEP 1: Search and download the datasets you need
First stop is on gapminder.com to download the 3 data sets needed. This foundation was created by the Rosling family. It has fantastic visualizations and data sets that everyone should check to “fight ignorance with a fact-based worldview that everyone can understand”.
We will download 3 Excel files (xlsx): total population, total fertility rate, and life expectancy at birth.
Once the files are downloaded and saved in your working folder, it’s time to clean and merge the data sets.
STEP2: Clean and merge the data
a1) Load the data with the xlsx library (replace ‘..’ with your folder)
# Please note that loading xlsx in R is really slow compared to csv
library(xlsx)

population_xls <- read.xlsx("indicator gapminder population.xlsx", encoding = "UTF-8", stringsAsFactors = F, sheetIndex = 1, as.data.frame = TRUE, header = TRUE)
fertility_xls <- read.xlsx("indicator undata total_fertility.xlsx", encoding = "UTF-8", stringsAsFactors = F, sheetIndex = 1, as.data.frame = TRUE, header = TRUE)
lifeexp_xls <- read.xlsx("indicator life_expectancy_at_birth.xlsx", encoding = "UTF-8", stringsAsFactors = F, sheetIndex = 1, as.data.frame = TRUE, header = TRUE)
a2) UPDATE — Install gganimate new version on R version 5.3+
library(devtools)
library(RCurl)
library(httr)
set_config( config( ssl_verifypeer = 0L ) )
devtools::install_github("RcppCore/Rcpp")
devtools::install_github("thomasp85/gganimate", force = TRUE)
b) Clean and merge the data with reshape and dplyr libraries
# Load libraries
library(reshape)
library(gapminder)
library(dplyr)
library(ggplot2)

# Create a variable to keep only years 1962 to 2015
myvars <- paste("X", 1962:2015, sep="")

# Create 3 data frames with only years 1962 to 2015
population <- population_xls[c('Total.population',myvars)]
fertility <- fertility_xls[c('Total.fertility.rate',myvars)]
lifeexp <- lifeexp_xls[c('Life.expectancy',myvars)]

# Rename the first column as "Country"
colnames(population)[1] <- "Country"
colnames(fertility)[1] <- "Country"
colnames(lifeexp)[1] <- "Country"

# Remove empty lines that were created, keeping only 275 countries
lifeexp <- lifeexp[1:275,]
population <- population[1:275,]

# Use the reshape library to move the year dimension into a column
population_m <- melt(population, id=c("Country"))
lifeexp_m <- melt(lifeexp, id=c("Country"))
fertility_m <- melt(fertility, id=c("Country"))

# Give a different name to each KPI (e.g. pop, life, fert)
colnames(population_m)[3] <- "pop"
colnames(lifeexp_m)[3] <- "life"
colnames(fertility_m)[3] <- "fert"

# Merge the 3 data frames into one
mydf <- merge(lifeexp_m, fertility_m, by=c("Country","variable"), header =T)
mydf <- merge(mydf, population_m, by=c("Country","variable"), header =T)

# The only piece of the puzzle missing is the continent name for each country (for the color) - use the gapminder library to bring it in
continent <- gapminder %>% group_by(continent, country) %>% distinct(country, continent)
continent <- data.frame(lapply(continent, as.character), stringsAsFactors=FALSE)
colnames(continent)[1] <- "Country"

# Filter out all countries that do not exist in the continent table
mydf_filter <- mydf %>% filter(Country %in% unique(continent$Country))

# Add the continent column to finalize the data set
mydf_filter <- merge(mydf_filter, continent, by=c("Country"), header =T)

# Do some extra cleaning (e.g. remove N/A lines, remove factors, and convert KPIs into numerical values)
mydf_filter[is.na(mydf_filter)] <- 0
mydf_filter <- data.frame(lapply(mydf_filter, as.character), stringsAsFactors=FALSE)
mydf_filter$variable <- as.integer(as.character(gsub("X","",mydf_filter$variable)))
colnames(mydf_filter)[colnames(mydf_filter)=="variable"] <- "year"
mydf_filter$pop <- round(as.numeric(as.character(mydf_filter$pop))/1000000,1)
mydf_filter$fert <- as.numeric(as.character(mydf_filter$fert))
mydf_filter$life <- as.numeric(as.character(mydf_filter$life))
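Before animating, it can be worth a quick sanity check that the cleaning and merging produced the shape we expect. A minimal sketch, assuming the steps above ran without error:

```r
# Inspect the final structure: Country, year, life, fert, pop, continent
str(mydf_filter)

# Each country/year pair should appear exactly once after the merges
stopifnot(!any(duplicated(mydf_filter[c("Country", "year")])))

# Years should span the window we kept (1962 to 2015)
stopifnot(min(mydf_filter$year) == 1962, max(mydf_filter$year) == 2015)
```

If either check fails, the usual culprit is a country present in one Gapminder file but missing or spelled differently in another.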
STEP3 — UPDATE to new version of gganimate: Build the chart with gganimate and generate a GIF file to share with your friends
Now that we have a clean data set containing the 3 KPIs (population, fertility, and life expectancy) and the 3 dimensions (country, year, continent), we can generate the visual with gganimate.
# Load libraries
library(ggplot2)
library(gganimate)
#library(gifski)
#library(png)

# Add a global theme
theme_set(theme_grey() + theme(legend.box.background = element_rect(), legend.box.margin = margin(6, 6, 6, 6)))

# OLD VERSION
# Create the plot with years as frame, limiting y axis from 30 years to 100
# p <- ggplot(mydf_filter, aes(fert, life, size = pop, color = continent, frame = variable)) +
#   geom_point() + ylim(30,100) + labs(x="Fertility Rate", y = "Life expectancy at birth (years)", caption = "(Based on data from Hans Rosling - gapminder.com)", color = 'Continent', size = "Population (millions)") +
#   scale_color_brewer(type = 'div', palette = 'Spectral')
# gganimate(p, interval = .2, "output.gif")

# NEW VERSION
# Create the plot with years as frame, limiting y axis from 30 years to 100
p <- ggplot(mydf_filter, aes(fert, life, size = pop, color = continent, frame = year)) +
  labs(x="Fertility Rate", y = "Life expectancy at birth (years)", caption = "(Based on data from Hans Rosling - gapminder.com)", color = 'Continent', size = "Population (millions)") +
  ylim(30,100) +
  geom_point() +
  scale_color_brewer(type = 'div', palette = 'Spectral') +
  # gganimate code
  ggtitle("Year: {frame_time}") +
  transition_time(year) +
  ease_aes("linear") +
  enter_fade() +
  exit_fade()

# animate
animate(p, width = 450, height = 450)

# save as a GIF
anim_save("output.gif")
Now you can enjoy your well deserved GIF animation and share it with your friends.
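If the default animation feels too fast or too slow, `animate()` also accepts frame-count and frame-rate arguments. A small optional tweak (the values here are just illustrative):

```r
# Render roughly two frames per year at 10 fps, pausing on the final frame
animate(p, nframes = 108, fps = 10, end_pause = 10, width = 450, height = 450)
anim_save("output_slow.gif")
```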
STEP4: Build the chart with plot.ly and generate an HTML file to embed in your website
# Load libraries
library(plotly)
library(ggplot2)

# Create the plot
p <- ggplot(mydf_filter, aes(fert, life, size = pop, color = continent, frame = year)) +
  geom_point() + ylim(30,100) + labs(x="Fertility Rate", y = "Life expectancy at birth (years)", color = 'Continent', size = "Population (millions)") +
  scale_color_brewer(type = 'div', palette = 'Spectral')

# Generate the visual and an HTML output
ggp <- ggplotly(p, height = 900, width = 900) %>%
  animation_opts(frame = 100,
                 easing = "linear",
                 redraw = FALSE)
ggp
htmlwidgets::saveWidget(ggp, "index.html")
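As a small optional tweak, plotly also lets you label the animation controls; for example, prefixing the slider with "Year:" (the values here are just illustrative):

```r
# Optional: customize the slider and play button before saving
ggp <- ggp %>%
  animation_slider(currentvalue = list(prefix = "Year: ")) %>%
  animation_button(label = "Play")
htmlwidgets::saveWidget(ggp, "index.html")
```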
The code is available on GitHub. Thank you for reading my post; if you enjoyed it, please clap. Feel free to contact me if you want to make animated charts within your organization.
Other interesting links to learn more about animated charts with R: | https://towardsdatascience.com/how-to-build-animated-charts-like-hans-rosling-doing-it-all-in-r-570efc6ba382 | ['Tristan Ganry'] | 2018-12-31 02:40:06.607000+00:00 | ['Visualization', 'Data', 'Data Science', 'Towards Data Science', 'Analytics'] |
How to Run an Exceptional Data Science Team | Framework (we’ll use the restaurant business as an example)
Data science is about transforming a raw ingredient (data) into something useful (knowledge and insights). The restaurant business shares this first principle. Moreover, successful teams in both data science and great restaurants are also well seasoned with three elements: partnership, leadership, and sponsorship.
Staying with the restaurant metaphor for a moment — a successful restaurant isn’t all about the chef (even a celebrity chef), but the partnership of the “front of the house” (the dining space) and the “back of the house” (the kitchen).
Restaurateurs who show diners what the entire house can achieve provide the leadership to align the incentives of everyone on the team. Leaders know what mix of skills is needed (prep chef, pastry chef) in the team, and have a clear idea of what greatness means for each member of the team, and the team as a whole.
Finally, the restaurant needs the support of people who believe in what it can achieve and are willing to offer their sponsorship to make that happen.
The flow of activities for success
❶ Understand your consumers — their preferences and familiarity with what you’re serving.
Restaurants need to know their diners. Your team needs to understand the internal consumers of your analytics.
❷ Source, store, and properly prep high-quality ingredients.
Restaurants purchase food ingredients from a purveyor, then prep and store them in a walk-in cooler. You pull data from a database, transform it, and then store it in a warehouse.
❸ Use the right techniques and skills to create a product that delights your consumers.
A master chef’s techniques are culinary. Yours are analytical.
❹ Make sure the meal is well presented and promptly delivered.
Great presentation engages both diners and consumers of analytics.
❺ Get feedback on how your consumers are enjoying what you’ve worked hard to create.
Exceptional teams embrace continuous improvement. That can’t happen without feedback.
Sensible enough, right? But have you developed your analytics process to accomplish all these activities effectively? For a restaurant, not being able to accomplish even one of the activities means failure. The same holds for a data science team. Below we’ll describe each activity as it relates to your team.
1 | Understand your consumers — their preferences and familiarity with what you’re making.
Understanding — even anticipating — what your consumers need is critical to making sure that value is created. But simply understanding that your consumers of analytical insights want valuable insights that help them solve their problems is like knowing that diners want a nice dinner — that’s a given. You’ve got to dig deeper.
How analytically savvy are your consumers? In other words, what’s their Analytical Quotient [2] (“AQ”)? In what form can they digest their insights? Do they want easy-to-digest insights on a regular basis, or do they want larger, game-changing insights that may require more time to develop along with a more elaborate presentation? If consumers aren’t analytically savvy, how can you ensure that what you serve will be more than empty calories that just temporarily satisfy a craving to have something analytical?
Understanding your consumers is essential to know how, what, and how often insights should be produced. That means developing strong relationships to make it easier to understand and anticipate the business problems they’re trying to solve and which decisions they have to make.
2 | Source, store, and properly prep high-quality ingredients.
Unlike a restaurant, there’s only one primary ingredient in data science: raw data. But it comes in an infinite number of varieties and flavors, and has to be reliably sourced, prepped, and stored like ingredients in a restaurant pantry. Some data are dirty. Other data are highly perishable. All data should be:
Sourced from databases, data lakes, and other data sources. (Think farmer, fishmonger, and forager.)
from databases, data lakes, and other data sources. (Think farmer, fishmonger, and forager.) Transformed, so they’re easily usable with an Extract-Transform-Load (“ETL”) process. (Think prep chef.)
so they’re easily usable with an Extract-Transform-Load (“ETL”) process. (Think prep chef.) Stored in a data warehouse. (Think walk-in fridge.)
3 | Use the right techniques and skills to create a product that delights your consumers.
The best who do this are highly-skilled, highly sought-after professionals who perform work that approaches artistry. They’re often in the limelight. The best teams rely on them for their success. Like top chefs, your analysts need the equipment, space, and support to shine. You need to feed their desire to constantly develop and push themselves. You have to show your chefs how to ensure that what they produce — as brilliant as they think it might be — will be enjoyed by your consumers. The rule of thumb at Credit Sesame is that any data science work product should be feasible to implement, improve credit wellness for our members, and enlighten all consumers of analytics within the company.
Yuck v. Yum [3]
4 | Make sure your product is properly presented, put together well, and promptly delivered.
How you present the final product may entice — or repel — your consumers. Great restaurants understand the importance of visual appearance — multiple studies have shown how the appearance of food on a plate affects how diners perceive taste.
The same is true for the consumers of analytics — judicious use of color, avoidance of clutter, logical flow, and attractive graphics all work together to educate, and perhaps whet appetites for further enlightenment. Effective data visualization and data storytelling will draw in your consumers and help them digest your product.
Restaurants make sure diners get their soup while it’s hot and their ice cream while it’s cold. Likewise, your consumers should receive their analyses in a timely way. Delays from analysis-paralysis may result in a stale product your consumers won’t want to use, or be able to use.
5 | Get feedback on how your consumers are enjoying what you’ve worked hard to create.
Top restaurateurs know which menu items are the most popular. They look for what gets left on the plate, or worse, sent back to the kitchen. Survey responses, complaints, online reviews — restaurateurs use all of these to understand what works and make tweaks to what doesn’t. Sometimes sleuthing is necessary to uncover problems. Many diners may be reluctant to give feedback for fear of giving offense or appearing uninformed.
Likewise, you need to know: What portions of your work product were acted upon? What portions were left untouched? Did you notice a lack of interest in what you thought was your most valuable insight? Was it because the insight wasn’t clear enough, or because your team didn’t have the appropriate business context when developing the insight?
Success = A well-seasoned data science team
You’ll also need the following seasoning elements to make sure your data science team stays exceptional.
Partnership. The front and back of the house must work well with each other. For example, strong relationships with data architects and data engineers mean ready access to well-prepared data. Having the recruiting team in your corner means access to the talent you need. Close partnerships with consumers of your analytics will help your team better understand their businesses.
The front and back of the house must work well with each other. For example, strong relationships with data architects and data engineers mean ready access to well-prepared data. Having the recruiting team in your corner means access to the talent you need. Close partnerships with consumers of your analytics will help your team better understand their businesses. Leadership. Data science leaders should promote a vision that sets the direction for the team and aligns the team with the consumers. That vision should crisply convey to consumers of analytics the value that can be produced. That value will ultimately come from the team’s ability to unlock value from data through data science. For instance, at Credit Sesame, our team’s vision is to improve credit wellness by unlocking the value of our data through data science.
Data science leaders should promote a vision that sets the direction for the team and aligns the team with the consumers. That vision should crisply convey to consumers of analytics the value that can be produced. That value will ultimately come from the team’s ability to unlock value from data through data science. For instance, at Credit Sesame, our team’s vision is to improve credit wellness by unlocking the value of our data through data science. Sponsorship. Strong executive sponsorship is necessary to promote a data-driven culture, where everyone understands that producing game-changing insights requires time, resources, and investment. Sponsorship means data scientists are less likely to be treated as short-order cooks who only deliver dashboards and reports, instead of serving as thought leaders who drive strategy.
Virtuous cycle of data science greatness
Takeaway
Diners can count on their favorite restaurants to serve up well-prepared deliciousness. To guide your data science team to greatness, take a cue from the great restaurants.
❶ Understand your consumers’ needs, their AQ, and their businesses.
❷ Source, store, and properly prep data with the right equipment. (Yes, foraging may be necessary!)
❸ Use the right techniques and skills to create work product that delights and nourishes your consumers. Avoid empty calories!
❹ Present your work so it’s approachable. Deliver it promptly.
❺ Get feedback. If it’s not explicit, look for clues that indicate feedback. Go back to the first point. Rinse, repeat.
Finally, add three more seasonings to complete your recipe for success:
Partnership
Leadership
Sponsorship
Want to build a Michelin 3-Star data science team? Use this five-step framework, add the three seasonings above, and your team will consistently serve up hot, delicious business insights that delight your consumers. | https://towardsdatascience.com/run-your-data-science-team-effectively-2b304111b142 | ['Matthew Raphaelson'] | 2020-09-14 19:54:19.093000+00:00 | ['Data Science', 'Analytics', 'Startup Lessons', 'Business', 'Startup'] |
軟件專業人員的現代體系結構設計模式 (Modern Architecture Design Patterns for Software Professionals) | | https://medium.com/ataiwansoftwaredeveloperinsg/%E8%BB%9F%E4%BB%B6%E5%B0%88%E6%A5%AD%E4%BA%BA%E5%93%A1%E7%9A%84%E7%8F%BE%E4%BB%A3%E9%AB%94%E7%B3%BB%E7%B5%90%E6%A7%8B%E8%A8%AD%E8%A8%88%E6%A8%A1%E5%BC%8F-72cb26ac212a | ['胡家維 Hu Kenneth'] | 2020-11-01 13:49:34.493000+00:00 | ['Software Development', 'Software Architecture', 'Software Engineering', 'Programming'] |
5 Free Android App Development Courses for Beginners to Learn in 2021 | 5 Free Android App Development Courses for Beginners to Learn in 2021
These are the best free courses to learn Android with Java and Kotlin for FREE
Hello guys, If you are passionate about creating mobile games and applications and want to learn how to develop Android apps or want to become an Android application developer, then you have come to the right place.
In this article, I am going to share some of the best free Android development courses for Java programmers and others. You might know that Java was once the only language used to create Android applications.
But ever since Google announced Kotlin as an official language for Android app development, you can use either Kotlin or Java to create Android apps.
If you are unsure about learning Android, then let me tell you that it is probably the single most technology that will give you the most significant reach in the world.
There are billions of mobile devices, including phones, tablets, and computers, which are running the Android Operating System.
By learning Android and creating apps, you can not only impact the lives of that many people but also make a career and a living for yourself. It not only allows you to find a job at reputed Fortune 500 companies, but you can also work as a freelancer and become an entrepreneur by creating your own apps.
I have said before that if you want to become a programmer in this century, you had better know both mobile and web development. These are the two skills that will always be in demand, and you will never be short of work and opportunities.
In the past, I have shared free courses and books to learn Java and Kotlin, and today we’ll see some free Android development courses from Udemy and Pluralsight which you can use to learn Android application development.
Btw, if you don't mind investing some money while learning a useful skill like Android application development, then I also highly recommend The Complete Android Oreo Developer Course — Build 23 Apps! course on Udemy.
It's not free, but it is completely worth your time and money, and you will learn Android Oreo app development using Java & Kotlin by building real apps including Super Mario Run, Whatsapp, and Instagram!
5 FREE Courses to Learn Android in 2021
Without any further ado, here is my list of free Android courses for programmers and developers. Btw, let me make it clear that even though these courses are free, it doesn't mean they are of poor quality.
They are just made freely available by their instructors for promotional and educational purposes. You should also be careful while joining a course, because sometimes instructors convert their free classes to paid ones, particularly on Udemy, once they reach their promotional target.
Anyway, let’s check out some of the best free courses to learn Android application development in 2021.
This is one of the most comprehensive courses on Android application development, with 27+ hours of content. The course teaches you not just Android but also Java programming. If you are thinking of starting Android development with Java, then this is the perfect course for you.
In this course, first, you will have a good overview of Java and then set up the Android development environment by downloading and installing Android Studio.
After that, you will learn how to create an Android app, debug an Android application, and create a signed APK file to submit to the Google Play Store for listing.
You will also learn fundamental concepts of Android like Explicit and Implicit Intents, how to use Fragments, custom list view, Android action bar, how to use Async task, how to use Use Shared Preferences, Files and SQLite, etc.
Here is the link to join this course for FREE — Learn Android Application Development
This course is trusted by more than 218,000 students, and with 26+ hours of content, it's no less than any paid Android course on Udemy. In short, a perfect course to learn Android application development using the Java programming language.
This is one of the best courses to learn Android online; it's both comprehensive and fun to watch. This is also one of the most popular Android courses on Udemy, with over 341,499 students already enrolled.
It’s also not just a short 30 minutes course but contains more than 11.5 hours of quality material to teach you Android.
The course is also very hands-on; you will learn to set up your own development environment using Android Studio and create, run, and debug the application on both Emulator and device.
If you want to become a professional Android developer in 2021, this is the course you should join. The only downside of this course is that it hasn't been updated recently, but it's still very useful for learning Android, whose fundamentals haven't changed much in the last few years.
Here is the link to join this course for FREE — Become an Android Developer from Scratch
This is a rather more up-to-date course to learn Android in 2021. It covers both Android 8 Oreo and Android 7 Nougat. It also covers Android 6 Marshmallow, depending upon whether or not you want to learn it.
The course is delivered by Kavita Mandal, and it contains more than 8.5 hours of training material, which covers all the basic and some advanced Android concepts.
The course is also hands-on, and you will learn how to develop Android applications in Android Studio, the most popular IDE for creating Android apps. You will learn to create a project, navigate, run, and debug and also explore some shortcuts for active development.
At the end of the course, you will also build a “Quiz App” in Android. Overall, an excellent course to start with Android 8 development in 2021 for free.
Here is the link to join this course for FREE — The Complete Android Oreo(8.1), N, M and Java Development
This is another great free course on learning Android on Udemy, the best part of this course is that it’s structured nicely to cover essential concepts of Android.
Created by Eduonix Learning Solutions, this course starts from the underlying Android architecture and ecosystem and follows it up with simple APIs before moving to complex and newer APIs such as Sensors, Material Design, and Data Storage.
It's also more up-to-date and covers practical aspects of Android development, like tips to make your app more professional, how to monetize your apps, and how to prepare yourself for Android job interviews.
In short, a perfect course to learn professional Android development for free. Whether you intend to find a job as an Android developer or create your own app and become an entrepreneur, this course is excellent for both.
Here is the link to join this course for FREE — Android Fundamentals: Ultimate Tutorial for App Development
This is one of the first courses you should attend on Android. It covers the essential fundamentals of the Android application development platform.
In this course, you will first learn how Android apps are structured, then download Android Studio to create the Hello World app. After that, you will extend the Hello World app to learn core concepts such as drawables, styles, menu, and testing.
The course finishes with a list of next steps for you to expand your Android knowledge.
Here is the link to join this course — Start Developing for Android
Btw, this course is not exactly free, as you need a Pluralsight membership to access it. It's good to have a Pluralsight membership because you get access to more than 5,000 courses to learn the latest technologies.
But if you can't join, you can also take a 10-day free trial without any commitment to access this course for free (well, almost), because the trial gives you 200 minutes of watch time, which is more than enough to complete this course.
That's all about some of the best free courses to learn Android app development and create cool Android games and apps, which you can sell on Google's Play Store to make money. Android also opens the door to several mobile application developer jobs; if you would like to work for other companies, you can always find a suitable position with your Android skills.
Other Programming Courses and Articles You may like
5 Courses to learn React Native Framework in 2021
10 Technologies Programmers can learn in 2021
Top 5 Courses to Learn Python in 2021
5 Paths to Learn MicroService Development in 2021
5 Courses to Learn Java Programming in 2021
5 Machine Learning and Data Science Courses in 2021
5 Free Courses to Learn Angular in 2021
Top 5 Free Courses to Learn BlockChain in 2021
5 Free Course to Learn Big Data, Hadoop, and Spark
10 Free Docker Courses for Developers
5 Free Courses to learn iOS App Development for Programmers
Thanks for reading this article so far. If you like these free Android courses, then please share it with your friends and colleagues. If you have any questions or feedback, then please drop a note.
P.S. — If you are looking for just one course to learn Android from start to end, then I suggest you join The Complete Android N Developer Course on Udemy. You can get this course on just $10 on Udemy’s several flash sale, which happens every month. | https://medium.com/javarevisited/5-free-courses-to-become-an-android-developer-d4d207f53675 | [] | 2020-12-11 09:03:12.528000+00:00 | ['Mobile App Development', 'Coding', 'Android', 'AndroidDev', 'Android App Development'] |
Twitter and Facebook Use Plummets in the Arab World | Eight years after Twitter and Facebook helped fuel a wave of revolutions that toppled long-standing dictators across the Arab world, social media use in the region has plummeted.
The fall comes as several governments crack down on expression online and implement strict surveillance regimes, leading free speech activists to question the supposed progress that has been made since the days of President Zine El Abidine Ben Ali in Tunisia and President Hosni Mubarak in Egypt.
In Tunisia, the decline was especially steep with internet users on Facebook dropping from 99 percent in 2013 to just 48 percent in 2018, according to a recent survey by Northwestern University in Qatar. The number of Tunisians online using Twitter also dropped by nearly 25 percent.
The annual survey — titled Media Use in the Middle East — questioned over 7,000 people in seven Arab countries: Egypt, Jordan, Lebanon, Qatar, Saudi Arabia, Tunisia, and the United Arab Emirates.
“It doesn’t take much more than a flicker for social media use to change,” Everette Dennis, the dean and CEO of Northwestern University in Qatar, told Cheddar. Dennis first conducted the survey in 2013 to track social media use following the Arab Spring uprisings.
The decline was consistent across the Arab world. The number of Emiratis on Twitter fell from 79 percent to 52 percent, and in Qatar, Facebook was largely abandoned with just 9 percent of respondents saying they still use the platform, marking the lowest known figure of any developed country. In Egypt, Twitter use fell by nearly 30 percent in just four years.
Nonetheless, general internet use rose significantly in all of the surveyed countries — especially in the Gulf nations where over 90 percent of people use the internet, surpassing the 89 percent of Americans online.
‘Social media became a critical part of the toolkit for greater freedom’
The forsaking of traditional social media is especially noteworthy considering that Twitter and Facebook were serious contenders for the Nobel Peace Prize in 2011. Human rights groups and activists across the world lauded the platforms for facilitating the mass movements that removed decades-old authoritarian regimes.
“Social media carried a cascade of messages about freedom and democracy across North Africa and the Middle East, and helped raise expectations for the success of political uprising,” University of Washington Professor Philip Howard said in a statement. “Social media became a critical part of the toolkit for greater freedom.”
In a 2011 study, Howard found that tweets about revolution in Egypt increased a hundredfold — topping over 230,000 a day — in the week before Mubarak’s resignation.
Prominent activists were also hailed for their social media use during the uprisings. One of the most well known is Wael Ghonim, a former Google executive in Egypt. In January 2011, Ghonim anonymously created a Facebook page titled “We are all Khaled Said,” which called for justice following the death of a young man, Khaled Mohamed Said, at the hands of police in Alexandria months earlier.
Said quickly became a symbol of abuse by the Mubarak regime, and Ghonim’s Facebook page emerged as a rallying space for everyday Egyptians. In just three days, the page had over 100,000 followers, Ghonim wrote in a Medium post, and “We are all Khaled Said” became a common chant in Cairo’s Tahrir Square during the revolution.
In late January, Ghonim was arrested after the government shut down internet and cell phone networks across the country. He was released four days before Mubarak stepped down in February and publicly admitted to running the Facebook account. Ghonim too was considered a likely contender for the Nobel Peace Prize in 2011.
Yet nearly 10 years later, governments in the Arab world have continued to monitor and suppress expression online, which has contributed to the chilling of social media use, experts say.
“Government has come back and rebounded, and in some ways are more repressive,” Dennis told Cheddar. “These states are fragile, and people are extremely cautious online.”
In Egypt, for example, President Abdel Fattah el-Sisi has cracked down on press freedom and speech online, effectively thwarting the more robust media ecosystem called for in the Arab Spring. Just last year, el-Sisi implemented a series of laws aimed at combating extremism that significantly tightened the state’s control of the internet — Human Rights Watch called the guise of counterterrorism a cover to “prosecute peaceful critics and to revive the infamous Mubarak-era state security courts.”
Egyptian President el-Sisi at his second presidential inauguration in June 2018.
The regulations, such as the Anti-Cyber and Information Technology Crimes act, empowered the government to block websites it considered threats to national security and explicitly surveil popular social media platforms; specifically accounts with over 5,000 followers, which are deemed to be in the public interest and therefore worthy of monitoring.
Egyptians “have been persecuted for Facebook posts, tweets, art work, and even personal, unpublished writing that has fallen into the hands of the Egyptian authorities,” Najia Bounaim, Amnesty International’s director in North Africa, wrote in a statement protesting the laws, which she said increase the “government’s already broad powers to monitor, censor and block social media and blogs.”
Freedom House also has reported an “uptick in censorship and surveillance” across the Middle East and North Africa since the Arab Spring.
“Egypt in particular has escalated online controls through blocking hundreds of websites and cracking down on dissent,” Amy Slipowitz, a research associate at the watchdog organization, told Cheddar. “Fearful of repression, users have resorted to censoring themselves on the internet.”
Digital Living Rooms Over Digital Public Squares
Moreover, analysts note that along with surveillance concerns, Arabs’ cultural affinity towards privacy also has contributed to the swift decline of Facebook and Twitter.
“Arab culture is much more private. People in the Middle East are more relationship based and family oriented,” Dennis told Cheddar, adding that internet users are more inclined to use private platforms like Snapchat and ephemeral features like Instagram stories.
In fact, both Snapchat and Instagram were the only two platforms to increase in overall usage, jumping 18 percent and 33 percent respectively in the last five years, according to the Northwestern study. Arab women in particular, Dennis said, are gravitating toward the two platforms since the filters and fleeting stories can provide an increased level of modesty.
“Another part of Facebook and Twitter’s decline is competition,” Dennis added, noting that there are now several other, more private, alternatives to the traditional platforms.
Facebook seems, nevertheless, to recognize this shift in consumer preferences, which is pronounced in the Middle East but pervades countries throughout the world. People increasingly "want to connect privately in the digital equivalent of the living room," the company's founder and CEO Mark Zuckerberg wrote in a March blog post.
“I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Zuckerberg said. Most Arabs apparently agree. | https://medium.com/cheddar/twitter-and-facebook-use-plummets-in-the-arab-world-8db7af4c64eb | ['Spencer Feingold'] | 2019-06-04 23:04:15.870000+00:00 | ['Egypt', 'Twitter', 'Social Media', 'Facebook', 'Arab Spring'] |
One small change to your React components that lets you extend the style | Let's try that again; this time we'll make sure to pass down our className prop:
// The same styled-component
const StyledOriginal = styled.div`
  background: lightBlue;
  width: 300px;
  height: 150px;
  text-align: center;
  padding: 2rem;
`

// The same React component, but this time with className
const OriginalComponent = ({ className }) => {
  return (
    <StyledOriginal className={className}>
      <h3>Correct 👌</h3>
      <p>We pass the className prop to the styled component. Look mom I'm green!</p>
    </StyledOriginal>
  )
}

// We attempt to extend the original component and expect the background to be green
const ReStyledComponent = styled(OriginalComponent)`
  background: lightGreen;
`
This time we get the desired result and a change in styles:
Note: This isn't specific to styled-components. This issue would still occur if the outermost element was a simple <div> inside the original React component.
What if the className on my component is already in use?
There may be situations where you will want to apply a className to your component while passing down a className prop. This begins to get complicated.
Thankfully, we can use the spread operator to help us out:
const OriginalComponent = ({ title, bodyText, ...rest }) => {
  return (
    <StyledOriginal
      className="in-use"
      {...rest}
    >
      <h3>{ title } 👌</h3>
      <p>{ bodyText }</p>
    </StyledOriginal>
  )
}
Here, we're passing the 'rest' of the props down to our outermost element.
This means that by default, the className prop will be passed through.
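To see why this works, here is a minimal, framework-free sketch of the same rest/spread pattern. The `originalComponent` function and the plain object it returns are hypothetical stand-ins for a React component and element, just to show which props end up where:

```javascript
// Minimal sketch (no React): how a rest parameter forwards unknown props.
// `originalComponent` is a hypothetical stand-in for the component above.
function originalComponent({ title, bodyText, ...rest }) {
  // title and bodyText are consumed here; everything else is forwarded,
  // and because `...rest` is spread last, its className wins over 'in-use'.
  return {
    tag: 'div',
    props: { className: 'in-use', ...rest },
    children: [title, bodyText],
  };
}

const el = originalComponent({
  title: 'Hello',
  bodyText: 'Body',
  className: 'extended-style', // extra prop, lands in `rest`
  id: 'card-1',
});

console.log(el.props.className); // 'extended-style'
console.log(el.props.id); // 'card-1'
```

JSX follows the same rule: props spread after className="in-use" override it, which is exactly what lets a styled wrapper inject its generated class.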
TLDR:
Pass a className OR ...rest prop down to the outermost element of your component to make its styles extendible
Take a look at this Codepen for a live example.
Cheers
Kits | https://medium.com/javascript-in-plain-english/are-your-react-components-style-extendible-17148ba98c00 | ['Kitson Broadhurst'] | 2020-06-06 10:21:49.947000+00:00 | ['Styled Components', 'Programming', 'Front End Development', 'React', 'Coding'] |
AWS Knowledge Series: Cognito User Pool | Amazon Web Service platform’s Identity and Access Management offering consists of two major components — Cognito User Pool and Cognito Identity Pool. In this article we deep dive into Cognito User Pool and find out what it is, how to use it and when to use it.
Cognito User Pool
Cognito user pool — at its core — is a “user repository” with very rich user registration and authentication capabilities. This is where you can define a schema of the “USER” entity of your application / solution. When you are setting up a Cognito user pool and defining how users are going to identify themselves in you application, you have two main options to choose from as shown below:
AWS Cognito User Pool: Sign-in options
Username — This allows users to set an alphanumeric string which they can then use as their identifier for the application that uses this Cognito user pool as its user repository. If you capture the user's email and mobile number during registration, Cognito provides a capability to verify these for you. Once verified, and if you have set the options "Also allow sign in with verified email address" and "Also allow sign in with verified phone number", users will also be able to use their verified email or phone number to identify themselves and log in. The last option allows your users to change their username.
Email address or phone number — This allows users to use either a verified e-mail or phone number as their identifier. As an implementer, you can decide which one it will be, or, if you select the last option above (Allow both email addresses and phone numbers (users can choose one)), you can let the user decide whether they want to use an email address or a phone number as their identity.
One very important setting of the user pool is the case sensitivity of the identity. It is recommended that you make the user's identifier case insensitive. Remember, once the user pool is created there is no way to change any of its core configurations, like the type of user identity or the case sensitivity of the user identifier. So be careful about how you configure the user pool.
Since it is a user repository — obviously it offers capability to store user’s attributes. Out of the box following attributes can be stored.
AWS Cognito User Pool: User attributes
You can decide which attributes are mandatory and which are optional. Based on this configuration, while registering a user in the Cognito user pool you will need to pass all attributes marked as "Required". If you want to store any attribute other than the ones listed above, you have the capability to define custom attributes.
AWS Cognito User Pool: Custom Attributes
The custom attribute can either be “String” or “Number” type. You can also define the min / max length / value for such custom attribute and if the attribute is mutable or not.
Password Management
Being a user repository, the most important aspect is the management of users' passwords. While setting up the user pool, you can define the characteristics of users' passwords. Following are the options you can select from to define the password strength.
AWS Cognito User Pool: Password strength configuration
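For instance, if you tick minimum length 8 plus the require-numbers, require-uppercase, require-lowercase and require-special-character boxes, a client-side pre-check could mirror the policy roughly like this (a sketch only; the pool still enforces the real policy server-side, and the special-character set shown is an approximation):

```javascript
// Client-side mirror of a user pool password policy (sketch).
// Cognito enforces the real policy on the server; this only gives
// users faster feedback in the sign-up form.
function meetsPasswordPolicy(pw) {
  return (
    pw.length >= 8 &&
    /[0-9]/.test(pw) && // require numbers
    /[A-Z]/.test(pw) && // require uppercase letters
    /[a-z]/.test(pw) && // require lowercase letters
    /[!@#$%^&*()_+\-=[\]{};':"\\|,.<>/?~]/.test(pw) // require special character (approximate set)
  );
}

console.log(meetsPasswordPolicy('Str0ng!pass')); // true
console.log(meetsPasswordPolicy('weakpass')); // false
```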
If you are defining a user pool for a consumer-facing application, you should set the user pool to allow users to sign themselves up. On the other hand, if you want tighter control over the creation of users, then you can set it up such that only people with administrator-level access to the user pool will be able to create users. In that case, for a user to access your application, they first have to request an administrator to create an account for them and provide a temporary password. The user can then log in using the temporary password and change it immediately after first login.
Multi-Factor Authentication
Cognito user pool offers multi-factor authentication capability out of the box. One thing to note is that this comes at the additional cost of sending SMS text messages to deliver the security code. This means that all users must provide their phone numbers, and the phone numbers must be verified during the user signup process. You can choose not to use MFA at all, or make it mandatory so that all your users have to opt for MFA. The third option is to make it optional and let the user decide if they want to use MFA or not.
Account Recovery
One of the most annoying and time-consuming support functions is helping users recover their accounts after they have forgotten their passwords. With Cognito user pool, you can put the capability to recover accounts in the hands of your users. When users forget their password, they can have a code sent to their verified email or verified phone to recover their account. If you have MFA enabled, then it is recommended not to allow the phone to be used for both password resets and multi-factor authentication (MFA).
Integration with Simple Email Service
Cognito user pool can use Simple Email Service (SES) to send out email messages to users. These messages include MFA codes, email verification codes, and welcome emails for administrator-created users.
AWS Cognito User Pool: SES Configuration
You require a verified email address. Note the syntax of the "FROM email address" field — "Hello from your company" will be the name that the user sees in their e-mail client, and the message will appear to be sent from the "hello@yourcompany.com" email address. When users reply to any of these e-mails, you can configure an email address that will receive these replies. If you are expecting a large number of users to sign up and use your application, it is recommended that you use SES for all your Cognito-related e-mail transactions.
You can also customise the verification and invitation email messages by adding custom HTML.
User Device Tracking
You can also configure whether you want to remember the devices that users use to access your application. When turned on, all distinct devices of a user will be tracked and registered. You can also give your end users the option to decide whether their devices are remembered or not. By default, this capability is turned off. This capability is useful when you want to restrict the number of devices a user can access your application / resources from.
Client Application Integration
Most of the user functions like login, registration and account recovery have to be integrated with a client application. Client applications can be native mobile clients or web applications. In order to allow these client applications to access the Cognito user pool capabilities, you have to register them with the Cognito user pool. The way you do that is by defining a client.
AWS Cognito User Pool: Client App Configuration
Giving an appropriate App client name is important as it helps to identify which client applications have access to the user pool. The client applications can use AWS Amplify (https://aws.amazon.com/amplify/) to integrate with Cognito user pool. Each application client you register is given an application client id and an optional application client secret. When you use AWS Amplify, you configure the ID and secret so that you can seamlessly access Cognito user pool capabilities within your application. Remember that for web clients, you have to leave the “Generate client secret” check-box unchecked.
AWS Amplify uses SRP (secure remote password) protocol so make sure that you enable the same for the application client.
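To make the client registration above concrete, here is a rough configuration sketch for a web app client using AWS Amplify. All IDs are placeholders, and the snippet assumes the pre-v6 aws-amplify JavaScript API (Amplify.configure / Auth.signIn); check the Amplify docs for the exact shape in your version:

```javascript
// Sketch: configuring a web app client with AWS Amplify.
// All IDs below are placeholders; web clients have no client secret.
import Amplify, { Auth } from 'aws-amplify';

Amplify.configure({
  Auth: {
    region: 'us-east-1',
    userPoolId: 'us-east-1_EXAMPLE',              // your user pool ID
    userPoolWebClientId: 'example-app-client-id', // app client ID from the pool
    authenticationFlowType: 'USER_SRP_AUTH',      // SRP, as recommended above
  },
});

// Users then sign in with whatever identifier the pool is configured for
// (username, verified email, or verified phone number):
async function signIn(identifier, password) {
  return Auth.signIn(identifier, password);
}
```

Because this is a web client, no client secret appears in the configuration, which matches leaving the "Generate client secret" check-box unchecked.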
You can also define if these application clients can read / write user attributes. This gives you the capability to restrict what the user can do in a mobile application v/s a web application. Any attributes that are marked as mandatory will always be writable by all app clients.
Workflow Triggers
In any application / solution, many (or sometimes all) user registration / sign-in related events require some post-processing, e.g. sending a welcome e-mail after a successful user signup, or sending a notification email to the user that their account was accessed so that the user can immediately be alerted if someone else is using their credentials. This is where the "triggers" of Cognito user pool help. Following are the various events for which you can assign Lambda functions.
AWS Cognito User Pool: Triggers
The Cognito service will invoke the Lambda function when the corresponding event happens, and you can write custom logic within the Lambda function to perform the requisite workflow. For example, you can use the "Pre sign-up" trigger to check whether the user has entered a "disposable" email address. If you are migrating users from your own repository to Cognito, then you can use the "User Migration" trigger to identify whether the user signing in is a new user or an existing one; if the user is an existing user, you can migrate them from the old system to Cognito. If you want to restrict the number of devices a user can access your application from, then you can utilise the "Pre authentication" trigger to query the number of distinct devices the user is accessing the application from and then either allow or deny the request.
Social Login
One important capability of the Cognito user pool is the ability to configure external identity providers so users can use their existing social accounts to identify themselves and register with your application.
One limitation of using social login with Cognito user pool is that you have to use the "hosted UI" to allow users to log in to your application. The hosted UI is a full-blown OAuth server, backed by the Cognito API. The "hosted UI" offers very limited UI customisation and may not gel well with the overall user experience of your application (especially for mobile applications). The AWS Amplify SDK for web, iOS and Android does have support for the hosted UI, so integrating it within your application is easy. Once social logins are configured, you can also define which application clients allow which identity providers. You can also define the mapping of user attributes received from identity providers to Cognito user attributes.
Uses of Cognito User Pool
If you are looking for a simple, scalable user management and authentication service, Cognito user pool is a very good option. It provides a wide range of capabilities, from user sign-up and sign-in to password / account management. Once a user establishes their identity (username, email or phone number), you can then use that identity within your application backend for authorisation purposes.
Remember Cognito user pool is an authentication service — it is not an authorisation service. So you will have to implement authorisation management yourself.
Also, if you want Cognito user pool users to access any AWS resources, e.g. S3 or RESTful services hosted on API Gateway, then you will need to integrate the Cognito user pool with a Cognito identity pool. The Cognito identity pool is capable of using the Cognito user pool as an identity provider and issuing temporary AWS IAM credentials with which Cognito user pool users can access AWS resources that are protected by AWS IAM policies. Expect more about this in the article on Cognito Identity Pool.
When should you use Cognito User Pool?
Use case 1: If you have backend services (e.g. Spring Boot based REST APIs hosted on EC2) that already have authorisation support, you can use Cognito user pool as the authentication provider.
Use case 2: If you have a web application that requires user identity to personalise the user experience.
Cognito User Pool & Server-less Architecture
If you are building a server-less application on AWS and want your end users to have seamless access to it, just having a Cognito user pool is not enough. In a server-less application, most resources that the user will need to access are protected by AWS IAM infrastructure, such as RESTful services hosted on API Gateway and files stored on S3. In order to access such resources, the users need AWS IAM credentials. Cognito user pool is not capable of issuing these; that is the job of the Cognito Identity Pool. You have to integrate the Cognito user pool with a Cognito identity pool for this purpose. For the server-less application use case, it is also recommended that you do not integrate social identity providers with your Cognito user pool. Instead, integrate the social identity providers as additional identity providers, along with the Cognito user pool, in the Cognito identity pool. This allows users to use social or Cognito user pool credentials to sign in and access your server-less application. With such a configuration, client applications integrating social login do not have to use the "hosted UI" for the sign-up / sign-in experience, allowing much-needed flexibility in sign-up / sign-in UX design.
3 Million Judgments of Books by their Covers | Last week, my friend Nate Gagnon and I launched Judgey, a browser-based game that gave users the opportunity to literally judge books by their covers. We’re both makers, and Nate is a writer, and I’m technical. So excuse me if I get technical — I promise to reward you with pretty graphs.
As it so happened, the internet approved of our goof, as did Goodreads (whose API we used), various public libraries and book stores, Book Riot, Adweek, and a few articles (some yet to surface). We also got a tenuous mention in The Washington Post (points if you can find it, but we’ll take it).
Having both seen what kind of traffic a reddit front page could bring (Nate for Forgotify and myself for Netflix’s “Spoil Yourself”) we did the needful technical bolstering to prevent a Reddit Hug of Death. I’d love to tell you about said bolstering, as well as technical aspects of the game’s development, but I’ll save that for a future post.
Spoiler: it will contain this graphic of some scores resulting from gameplay emulation for testing.
We tracked various datapoints of our first-week 300,000+ visitors using Google Analytics to monitor how many levels they completed and how judgmental they were. We waited to turn on more detailed event tracking until after the reddit spike, because Too Many Event Actions can get your tracking turned off entirely.
BUT THE POINT is that thanks to all you nice people, we saw 3 million books judged by their covers this last week, and we’ve crunched the numbers for the most recent 733,802 of these judgments.
One last preface: this isn't a scientific study. The results do not account for how well known a book is (which would influence the rating despite the cover), nor do they account for the fact that Goodreads does not allow ratings under 1 star. Each book's results certainly showed a pattern, however, and some of those patterns we found very interesting.
| https://medium.com/swlh/3-million-judgements-of-books-by-their-covers-f2b89004c201 | ['Dean Casalena'] | 2015-09-15 23:38:55.441000+00:00 | ['Web Development', 'Data Visualization', 'Books'] |
Read on:Symphony — What’s in the first issue? | #ON01.18 on:Symphony
Foreword
Sir James Dyson describes how and why his engineers decided to redesign musical instruments from scratch. He also explains that, when it comes to being an inventor, constantly engaging in creative side-projects and tasks is a vital part of the pilgrim's progress.
Invented instrumental
Introducing the Cyclophone, a pipe organ made using recycled vacuums | Photography Matthew Beedle
Ever heard of an Ampsi-chord? How about a Cyclophone? No? Well, that's because Dyson engineers recently invented them. After months of hard work, six entirely new musical instruments were unveiled in Dyson's Malmesbury HQ in front of a crowd of unsuspecting onlookers. Read all about how each new musical device works and, most importantly, how it sounds.
Click here to read the full story.
Q&A with Toby Purser, Conductor of the Orion Orchestra
Toby Purser conducting ahead of his performance at London’s Cadogan Hall | Photography Tom Cockram
Toby Purser is one of the music industry’s most respected rising stars. After a chance conversation with Sir James Dyson, Toby was selected to conduct a piece of music which was composed specifically for — and using — Dyson technology. We talk with him about what makes a good conductor and why he thinks music needs to constantly try new things.
Interview: Meet David Roche the man blending science and music
David Roche pictured in Downing College, Cambridge | Photography Tom Cockram
David Roche is an experimental young composer who likes to try to make music using unusual methodologies and techniques. In his most recent piece, David took up the challenge of making a serious piece of music for, about, and even using, Dyson technology. We ask what inspires him when making music.
The Dyson Symphony — A song of sound and science
On 18th February 2018, London’s prestigious musical venue, Cadogan Hall, hosted the first ever performance of an original piece of Dyson music, using never before played instruments designed and made by Dyson employees. This is the story of the unusual experiment to combine science and music.
Click here to read the full story.
What does music look like?
A piece of music made visual by American artist, Nicholas Rougeux | Image Nicholas Rougeux
It is one of those questions artists and scientists alike have wondered: if we could see it, what would music look like? This is precisely what artist Nicholas Rougeux has attempted to envisage with his series Off the Staff. For on: magazine he has recreated the programme of the Dyson X Orion Orchestra collaboration, feeding the sheet music for the performance through his algorithm and turning the music into art.
on: is Dyson’s magazine, which is published quarterly and distributed in Dyson Demo spaces. If you would like to read more from our issues please click here. | https://medium.com/dyson/read-on-whats-in-the-latest-issue-of-dyson-s-magazine-b720818870a1 | ['Dyson On'] | 2018-08-30 08:27:13.129000+00:00 | ['Magazine', 'Music', 'Innovation', 'Symphony', 'Dyson'] |
Matt Mullenweg’s (CEO, Automattic) Five Levels Of Remote Work | Remote work is poorly understood and for good reason. What most people have experienced is merely being “allowed” to work remotely on occasion, having to stay home with someone sick in the family, logging in while traveling or waiting for the cable guy to install internet.
While I am a fan of remote working, I am not sure that most companies realize that experimenting with remote work until the end of the covid-19 crisis is a free strategy option. I'll detail more of what I mean at the end, but first it's worth helping you re-frame how you think about remote work.
Over the last ten years many "remote-first" companies have been rethinking how work should get done, and have discovered that to truly thrive as a distributed, remote organization there is an inevitable learning curve that one must progress through.
Matt Mullenweg has been one of the biggest proponents of this way of working and is the CEO of Automattic, which employs more than 1,000 people "in 75 countries speaking 93 different languages."
In a podcast with Sam Harris he outlined his "five levels of remote work," which I thought was the best explanation of some of the subtle differences between different levels of remote work, so I decided to break them down to help companies understand this journey.
Remote Work 101
Most people are familiar with the experience of level 1 or level 2 remote work. During the covid-19 crisis many employees are finding themselves in a "copy the office" experience of remote work, expected to be available during the same hours they would be in an office.
Level 1 — Emergency
Working from home is not easy, but possible. If you have to.
Basics: Have internet, cell phone, some way to access email
Work if possible: Usually can put things off until back in office, because that's how most people work
Mindset: "we don't know what employees are doing," therefore you want to minimize the ability for people to work remotely or flexibly as much as possible
Level 2 — Copy The Office
In level two, companies have better tools and access to working remotely, but it is still mostly for people who have an excuse. In this scenario, the company is still designed to operate around an in-person dynamic and people who are working remotely are expected to follow similar hours and procedures as everyone else. At this level if someone starts working remotely full-time it is often with the understanding that the person will be harming their long-term career prospects.
Language: outdated terms like "telecommute"
Requirements: need to be able to access things from the office
Default mode: synchronous; copying "office hours" 8am-5pm; factory model for knowledge work
Pitfalls: more tracking, screenshots of screens
Challenges for workers: removing some freedom & agency, may end up being even less productive
Inflection Points | https://medium.com/betterworkingworld/matt-mullenwegs-ceo-automattic-five-levels-of-remote-work-c3581f53bede | ['Paul Millerd'] | 2020-05-26 10:37:07.318000+00:00 | ['Technology', 'Work', 'Business', 'Remote Work', 'Productivity'] |
The Million Dollar Question: How To Survive And Become a Developer | The Million Dollar Question: How To Survive And Become a Developer
It always seems impossible until it’s done.
Photo by Bahman Adlou on Unsplash
“Don’t be afraid to give up the good and go for the great.” — John Rockefeller
Whether you take on the challenge or not, choose and decide, what’s important is you stop wondering and start moving.
Stop daydreaming and start working towards it, it’s always easy to wonder and dream about good things, but if you don’t do anything to make it happen, then your dreams will be just like that, dreams.
What if it won’t work? What if it does.
So, how do we overcome this scary thought every aspiring developer suffers from?
The thing is that regrets don't happen the moment we make a decision; they happen after. Just being aware of that means we can do everything we can to never regret that decision; we do everything we can to make it work and make it happen.
I've made a lot of wrong decisions in my life, but the thing is that I don't have any regrets about any of them; every mistake made me wiser, stronger, and better.
Thinking of it now, everything does happen for a reason; all those rejections were just redirections. I have become who I am, and taking on this developer journey almost 5 years ago was the best decision ever.
That first step
and the most important one.
Here’s a pro tip from one developer to another: We don’t regret the decisions we make the moment we made them, it happens after, so I do my very best not to regret any of it, regardless of the outcome, I’ll make sure I will give everything I’ve got and learn all the lessons it could give.
So make sure you give everything you can, do everything you needed to just to survive, and if things really don’t work out even if you give your best, there will be no regrets I promised you, know yourself, be yourself and you’ll know better.
But before that happens, give it a shot.
Coming from experience and surviving it, I might have earned the right to say this, you can do anything you truly want if you believe.
You can do anything but there will be rules and consequences, there will be struggles and there will be failures, success always comes after overcoming every struggle, it always is and always will be, so get over it and do the work.
Accept that reality, and everything in your life will change.
Just entertain that thought, what if it will work. | https://medium.com/for-self-taught-developers/the-million-dollar-question-how-to-survive-and-become-a-developer-21eeaee44e4f | ['Ann Adaya'] | 2020-12-20 02:16:36.011000+00:00 | ['Software Engineering', 'Software Development', 'Work', 'Programming', 'Web Developer'] |
How to Automate Analytics UX with Panel | How to Automate Analytics UX with Panel. Hector, Jan 7
Data science and analytics teams create predictive models that help solve business questions, but most of these models are not implemented in products or embedded in applications, so where do they live? What happens when end users require the outputs of models but there is no product a data science team alone can build to store the model outputs?
In an effort to allow models or analyses created by data science teams to be easily exposed to users through standalone visualizations or production quality web applications, the Anaconda team has created the PyViz platform. This platform contains many libraries that make both visualization and data analysis in Python concise, uniform, and most importantly, repeatable.
Some data science & analytics developers may be familiar with Bokeh and Datashader, but a new tool in the PyViz world, Panel, was introduced at AnacondaCon 2019.
In this article we’ll go into some depth on:
1. Panel functionality
2. Panel use cases
3. Serving an app using the Panel server
4. Production deployments
Intro
We’re the Analytics and Data Science team at Under Armour working on global consumer engagement. Many of our use cases revolve around understanding the needs of our customers through our first party data and effectively predicting user responsiveness to marketing tests via algorithm development. A common issue we run into is that we don’t have a simple way to expose our predictive or explanatory models to end users without involving software engineering teams and building in house applications to expose our models. With our goal of providing trustworthy predictions quickly using our algorithms, we needed a tool that would expose the outputs of our generalized models and how they change based on user input.
Panel Functionality
What separates Panel from other tools in the PyViz ecosystem is its ability to bring together multiple visualization tools and their outputs into one centralized visualization, agnostic of plotting library, while allowing end users to define data or model parameters via interactive widgets. All of the above also comes with a lightweight methodology for deploying Panel applications to a web server that your end users can access. Python scripts as well as Jupyter notebooks can be used to develop and deploy interactive applications.
Use Cases
Some example use cases where Panel could be used:
1. Say you have trained a forecasting model based on your company's sales data and have extracted the model weights. Panel would allow an end user to input new sales time-series data from your company and forecast it out over a variable number of periods based on your model, and could additionally show uncertainty intervals.
2. You have trained the world's best classifier on the MNIST dataset and saved the model. A user can then bring their own picture of a number drawn by hand, upload it via your web application, and your model will show them the prediction and its probabilities.
3. You have geospatial data that can predict when there will be the most traffic around bus terminals. Your end user would be able to move an hour-of-day slider left to right and look at predicted traffic or demand around bus terminals to see when they should buy their tickets.
If you would like more potential use cases, we encourage you to check out the Panel gallery.
Serving an App
Now we will take you on an end to end walk through running a basic analysis and deploying that app to a Panel server on your local machine. For the purposes of this post we will be using tutorial data and creating an ARIMA model, with little to no predictive power, and deploying that model. The aim of this is to show how to deploy a model, not how to create a model that can effectively predict something.
Open up your terminal and make sure the conda environment you want to use is activated.
Install Panel, using the pyviz channel with the command below.
conda install -c pyviz panel
Then open up a notebook or Python editor and get data.
We’ll be using one of Seaborn’s datasets on flight passengers from the 1950's.
You will notice the self notation in this method—this is because we're building this out as part of a larger Panel class.
def get_data(self):
    df = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/flights.csv')
    df['month'] = df['month'].\
        apply(lambda x: strptime(x, '%B').tm_mon)
    df['date'] = df[['year', 'month']].\
        apply(lambda x: '-'.join(x.astype(str)), axis=1) + '-01'
    df['date'] = pd.to_datetime(df['date']).dt.date
    df = df[['date', 'passengers']]
    self.data_frame = df.copy()
We’re setting the data frame as a class attribute for later use.
Add a date range slider for filtering model data
Now we’ll go into the Panel decorators which allow for callbacks depending on user interactions with widgets.
There are many different types of widgets that can be used through Panel, and if there’s not currently one supported you can make it yourself with javascript and have it callback to the Panel application as well! See the list of currently supported widgets.
For this use case we’ll add a date range slider to allow users to filter the data that is fed into the model.
It’s as easy as this:
date_slider = pn.widgets.DateRangeSlider(
name='Date Range Slider',
start=dt.datetime(1949, 1, 1),
end=dt.datetime(1960, 12, 1),
value=(dt.datetime(1949, 1, 1), dt.datetime(1960, 12, 1)),
width=300, bar_color='#D15555')
Now when you have functions that depend on the slider you only need to put the following decorator above your method definition.
@param.depends('date_slider.value')
Create an ARIMA model on the data
Now we can define the model and plot predicted vs actual values.
This is a basic ARIMA exponential smoothing model.
@param.depends('date_slider.value')
def arima_model(self):
    df_model = self.data_frame.copy()
    df_model = df_model.\
        set_index(pd.DatetimeIndex(df_model['date'], freq='MS')).\
        drop('date', axis=1)
    df_model = df_model[df_model.index > self.date_slider.value[0]]
    df_model = df_model[df_model.index < self.date_slider.value[1]]
    # basic exponential smoothing model
    model = ARIMA(df_model, order=(0, 1, 1), freq='MS')
    model_fit = model.fit(disp=0)
    fitted_df = df_model.copy().\
        rename({'passengers': 'actual'}, axis=1)
    fitted_df['fitted'] = model_fit.predict(typ='levels')
    fitted_df['error'] = abs(fitted_df['actual'] - fitted_df['fitted'])
    fitted_df['perc_error'] = fitted_df['error'] / fitted_df['actual']
    fitted_df.reset_index(inplace=True, drop=False)
    p = figure(plot_width=800, plot_height=266)
    p.line('date', 'actual', source=fitted_df,
           legend='Actual', color='red')
    glyph = p.line('date', 'fitted', source=fitted_df,
                   legend='Fitted', color='blue')
    p.yaxis.formatter = NumeralTickFormatter(format="0,0")
    p.xaxis.formatter = DatetimeTickFormatter(months="%m/%Y")
    p.title.text = 'Actual vs Fitted ARIMA Values'
    p.xaxis.axis_label = 'Date'
    p.yaxis.axis_label = 'Passengers'
    p.xgrid.grid_line_color = None
    p.ygrid.grid_line_color = None
    hover = HoverTool(renderers=[glyph],
                      tooltips=[("Date", "@date{%F}"),
                                ("Fitted", "@fitted"),
                                ("Actual", "@actual")],
                      formatters={"date": "datetime"},
                      mode='vline')
    p.tools = [SaveTool(), PanTool(), hover, CrosshairTool()]
    return p
Once the model is trained, a Bokeh plot is created which compares actual vs fitted values and adds tooltips to the chart.
Create the Panel layout of the application
Now we can choose how we want to lay out our application.
Since we only have one chart it’s fairly simple, but you can also create columns and rows and insert multiple charts in the same format.
We also call the get_data() method here to make sure the data attribute is set.
The full file should look like the following block.
Notice at the bottom that we instantiate the class and call its panel() method so it's ready to serve.
import pandas as pd
import param
import datetime as dt
import panel as pn
from statsmodels.tsa.arima_model import ARIMA
from bokeh.models import HoverTool, SaveTool, PanTool, CrosshairTool
from bokeh.models.formatters import NumeralTickFormatter, DatetimeTickFormatter
from bokeh.plotting import figure
from time import strptime

pn.extension()

class panel_class(param.Parameterized):
    title = 'Forecasting Airline Passengers'
    subtitle = pn.pane.Str('Actual vs ARIMA Model',
                           style={'font-size': '15pt'})
    date_slider = pn.widgets.DateRangeSlider(
        name='Date Range Slider',
        start=dt.datetime(1949, 1, 1), end=dt.datetime(1960, 12, 1),
        value=(dt.datetime(1949, 1, 1), dt.datetime(1960, 12, 1)),
        width=300, bar_color='#D15555')

    def get_data(self):
        df = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/flights.csv')
        df['month'] = df['month'].\
            apply(lambda x: strptime(x, '%B').tm_mon)
        df['date'] = df[['year', 'month']].\
            apply(lambda x: '-'.join(x.astype(str)), axis=1) + '-01'
        df['date'] = pd.to_datetime(df['date']).dt.date
        df = df[['date', 'passengers']]
        self.data_frame = df.copy()

    @param.depends('date_slider.value')
    def arima_model(self):
        df_model = self.data_frame.copy()
        df_model = df_model.\
            set_index(pd.DatetimeIndex(df_model['date'], freq='MS')).\
            drop('date', axis=1)
        df_model = df_model[df_model.index > self.date_slider.value[0]]
        df_model = df_model[df_model.index < self.date_slider.value[1]]
        # basic exponential smoothing model
        model = ARIMA(df_model, order=(0, 1, 1), freq='MS')
        model_fit = model.fit(disp=0)
        fitted_df = df_model.copy().\
            rename({'passengers': 'actual'}, axis=1)
        fitted_df['fitted'] = model_fit.predict(typ='levels')
        fitted_df['error'] = abs(fitted_df['actual'] - fitted_df['fitted'])
        fitted_df['perc_error'] = fitted_df['error'] / fitted_df['actual']
        fitted_df.reset_index(inplace=True, drop=False)
        p = figure(plot_width=800, plot_height=266)
        p.line('date', 'actual', source=fitted_df,
               legend='Actual', color='red')
        glyph = p.line('date', 'fitted', source=fitted_df,
                       legend='Fitted', color='blue')
        p.yaxis.formatter = NumeralTickFormatter(format="0,0")
        p.xaxis.formatter = DatetimeTickFormatter(months="%m/%Y")
        p.title.text = 'Actual vs Fitted ARIMA Values'
        p.xaxis.axis_label = 'Date'
        p.yaxis.axis_label = 'Passengers'
        p.xgrid.grid_line_color = None
        p.ygrid.grid_line_color = None
        hover = HoverTool(renderers=[glyph],
                          tooltips=[("Date", "@date{%F}"),
                                    ("Fitted", "@fitted"),
                                    ("Actual", "@actual")],
                          formatters={"date": "datetime"},
                          mode='vline')
        p.tools = [SaveTool(), PanTool(), hover, CrosshairTool()]
        return p

    @param.depends('title')
    def header(self):
        title_panel = pn.pane.Str(self.title,
                                  style={'font-size': '20pt',
                                         'font-family': "Times New Roman"})
        return title_panel

    @param.depends('subtitle')
    def subheader(self):
        return self.subtitle

    def panel(self):
        logo = """<a href="http://panel.pyviz.org"><img src="https://panel.pyviz.org/_static/logo_stacked.png" width=200 height=150 align="center" margin=20px></a>"""
        self.get_data()
        return pn.Row(
            pn.Column(logo, self.date_slider),
            pn.Column(self.header, self.subheader, self.arima_model))

panel_class().panel().servable()
Once your app is ready, go ahead and deploy. On the command line, move to the directory that houses your script and make sure the correct conda environment, with Panel installed, is activated. Then run the following command:
panel serve tutorial.py
This will create a Bokeh server on your local machine on the default port 5006. If you go to http://localhost:5006/tutorial you should see the visualization.
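Beyond the bare command, a few flags are commonly useful. This is a sketch (the flag names follow the Bokeh/Panel CLI, and other_app.py is a hypothetical second script); check `panel serve --help` on your installed version:

```shell
# serve two apps on a chosen port, and whitelist the host users will connect through
panel serve tutorial.py other_app.py --port 5007 --allow-websocket-origin=localhost:5007
```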
Production Deployment on AWS EC2
Now that we’ve gone through creating a local server to showcase our app, we can move onto deploying apps to production cloud environments.
For the purposes of this article we will be using AWS EC2 instances. While none of the Panel code relies directly on EC2, our description of setting up instances and opening ports is all based on EC2. If you use another cloud provider, please refer to their documentation for setting up the infrastructure.
First, create an EC2 instance with an Ubuntu OS. If you want to restrict your applications to users only behind your organization’s private VPC or other private network — especially if you are handling any personal user data at all — make sure to deploy your EC2 instance on a private subnet.
Next, install Anaconda on the EC2 instance. Once the Anaconda base environment is ready you can share environment files (either manually copying or pulling from a source control repository) to create your specific environments between your local machine and the EC2 instance to make sure deployment environments are as close as possible to experimentation and development environments.
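A typical way to keep the two environments in sync is to export and recreate them with standard conda commands; the environment name panel-apps here is arbitrary:

```shell
# on your development machine: capture the environment to a file
conda env export --name panel-apps > environment.yml

# on the EC2 instance: recreate and activate it
conda env create -f environment.yml
conda activate panel-apps
```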
Modifying your Script to Serve Content
Now we can explore one of the most useful utilities provided by the Bokeh server: the server class. The server class is exposed through the bokeh.server library and allows users to define a server object's applications and all other command line configurations through pure Python code, as well as start and stop a server from within the script itself. This means we can choose to expose multiple applications to the server via a uniform and automated manner, and deploy them all on the same server and port.
The following chunk will find all Python scripts within the same directory and sub-directories as the original file and return a list with the loaded Panel applications as objects, and another list of their respective file names for each project.
import os
from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader

def get_modules(self, set_attribs=False):
    path = os.path.dirname(os.path.abspath(__file__))
    subdir = []
    for i in os.scandir(path):
        if i.is_dir():
            subdir.append(i.path)
    subdir = list(filter(lambda k: 'ipynb' not in k and 'spy' not in k, subdir))
    filedir = []
    for i in subdir:
        for x in os.scandir(i):
            if x.is_file() and x.path.endswith('.py') and not x.path.endswith('main.py'):
                filedir.append(x.path)
    filedir_name = [x.rsplit('/', 1)[1] for x in filedir]         # everything after last slash
    filedir_name = [x.rsplit('.py', 1)[0] for x in filedir_name]  # everything before .py
    modules = []
    for i in range(0, len(filedir)):
        spec = spec_from_loader(name=filedir_name[i],
                                loader=SourceFileLoader(fullname=filedir_name[i],
                                                        path=filedir[i]))
        mod = module_from_spec(spec)  # create the module from its spec
        spec.loader.exec_module(mod)  # execute the module
        modules.append(mod)           # add the module to the list
    objects = []
    for i in range(0, len(modules)):
        x = modules[i].\
            panel_class(name=str.replace(filedir_name[i], '_', ' ').title())
        objects.append(x)
    if set_attribs == True:
        self.applications = objects
        self.filedir_name = filedir_name
    else:
        return objects, filedir_name
Panel allows for either Python scripts, Jupyter notebooks, or full directories to be served as applications. A succinct way to deploy quickly is to use Python scripts, so in the code above we filter out .ipynb files and any other non .py files, but this could easily be switched to look for those files explicitly.
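The dynamic-loading machinery this relies on is pure standard library; stripped to its essence, it looks like the sketch below (the demo module name and attribute are invented for illustration):

```python
import os
import tempfile
from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader

def load_module_from_path(name, path):
    """Load a .py file as a module object without it being on sys.path."""
    spec = spec_from_loader(name=name,
                            loader=SourceFileLoader(fullname=name, path=path))
    mod = module_from_spec(spec)   # create an empty module from the spec
    spec.loader.exec_module(mod)   # run the file's code inside the module
    return mod

# demo: write a tiny module to disk and load it back
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'demo_app.py')
    with open(path, 'w') as f:
        f.write('ANSWER = 42\n')
    mod = load_module_from_path('demo_app', path)
    print(mod.ANSWER)  # prints 42
```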
Once all your Panel objects are read into the list, we can proceed to creating the server object. Just before that, though, we have to define a helper function which allows us to modify the inner functionality of the Panel objects we're working with:
def modify_doc(self, panel, doc):
    panel_obj = panel.panel()
    title = panel.title
    return panel_obj.server_doc(title=title, doc=doc)
Now we can create the server object.
In the following function we take the applications and application name list returned by the get_modules() function.
Then, depending on how many applications were found in the directory, we create Bokeh application objects using a function handler since Panel allows for callbacks in its applications.
Instead of writing the dictionary literal by hand, we build a string containing the code and evaluate it later, in order to automate the process of defining application objects in this function.
Once the loop has finished creating the string, we create a server object, and the first argument to the server object will be the evaluated string code holding our application objects in a format that the Bokeh server can deploy.
Afterwards, the port parameter is defined, a prefix is added if necessary, and websocket origins are defined.
The websocket origins parameter is important, as it defines the host origins from which the server will accept websocket connections. Multiple origins may be specified here.
You should set your EC2 instance’s IP address here so the server can listen to requests on it.
If you want to allow multiple users or multiple sessions to be served at once, increase the num_procs parameter so the server forks multiple worker processes.
The last line starts the server, and makes sure it runs until the server itself is shut down.
from bokeh.server.server import Server
from bokeh.application import Application
from bokeh.application.handlers.function import FunctionHandler
from functools import partial

def start_server(self, prefix_param='', port_param=5006):
    objects = self.applications
    filedir_name = self.filedir_name
    base_command = '{'
    for i in range(0, len(objects)):
        if i < len(objects) - 1:
            command = "'/" + str(filedir_name[i]) + "': Application(FunctionHandler(partial(self.modify_doc,objects[" + str(i) + "]))),"
            base_command = base_command + command
        elif i == len(objects) - 1:
            command = "'/" + str(filedir_name[i]) + "': Application(FunctionHandler(partial(self.modify_doc,objects[" + str(i) + "])))}"
            base_command = base_command + command
    server = Server(applications=eval(base_command),  # evaluate the programmatically written code
                    port=port_param,
                    prefix='/' + prefix_param + '/',
                    allow_websocket_origin=['localhost:' + str(port_param)],  # add the IP address of your EC2 instance when deploying the application
                    num_procs=8)
    server.run_until_shutdown()
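As a design note, the string-building plus eval() above can be avoided by constructing the applications dictionary directly with functools.partial. Below is a stdlib-only sketch of that idea (make_handler stands in for the real modify_doc and Bokeh Application objects, and the route names are invented):

```python
from functools import partial

def make_handler(app_obj, doc):
    """Stand-in for modify_doc: attach app_obj's content to the Bokeh document."""
    return (app_obj, doc)

def build_routes(names, objects):
    # one route per discovered application, no eval() required
    return {'/' + name: partial(make_handler, obj)
            for name, obj in zip(names, objects)}

routes = build_routes(['sales_forecast', 'mnist_demo'], ['app1', 'app2'])
print(sorted(routes))  # ['/mnist_demo', '/sales_forecast']
```

The resulting dict can be handed to Server(applications=...) in the same way as the evaluated string, while staying easier to debug.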
Putting it all together for completeness, we have the programmatic_server class shown below.
At the bottom we have added code to be able to run this file from the command line.
from bokeh.server.server import Server
from bokeh.application import Application
from bokeh.application.handlers.function import FunctionHandler
from functools import partial
from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader
import os
import sys
class programmatic_server():

    def get_modules(self, set_attribs=False):
        path = os.path.dirname(os.path.abspath(__file__))
        subdir = []
        for i in os.scandir(path):
            if i.is_dir():
                subdir.append(i.path)
        subdir = list(filter(lambda k: 'ipynb' not in k and 'spy' not in k, subdir))
        filedir = []
        for i in subdir:
            for x in os.scandir(i):
                if x.is_file() and x.path.endswith('.py') == True and x.path.endswith('main.py') == False:
                    filedir.append(x.path)
        filedir_name = [x.rsplit('/', 1)[1] for x in filedir]
        filedir_name = [x.rsplit('.py', 1)[0] for x in filedir_name]
        modules = []
        for i in range(0, len(filedir)):
            spec = spec_from_loader(name=filedir_name[i],
                                    loader=SourceFileLoader(fullname=filedir_name[i],
                                                            path=filedir[i]))
            mod = module_from_spec(spec)  # evaluating module
            spec.loader.exec_module(mod)  # executing module
            modules.append(mod)           # adding module to list
        objects = []
        for i in range(0, len(modules)):
            x = modules[i].panel_class(name=str.replace(filedir_name[i], '_', ' ').title())
            objects.append(x)
        if set_attribs == True:
            self.applications = objects
            self.filedir_name = filedir_name
        else:
            return objects, filedir_name

    def modify_doc(self, panel, doc):
        panel_obj = panel.panel()
        title = panel.title
        return panel_obj.server_doc(title=title, doc=doc)

    def start_server(self, prefix_param='', port_param=5006):
        objects = self.applications
        filedir_name = self.filedir_name
        base_command = '{'
        for i in range(0, len(objects)):
            if i < len(objects) - 1:
                command = "'/" + str(filedir_name[i]) + "': Application(FunctionHandler(partial(self.modify_doc,objects[" + str(i) + "]))),"
                base_command = base_command + command
            elif i == len(objects) - 1:
                command = "'/" + str(filedir_name[i]) + "': Application(FunctionHandler(partial(self.modify_doc,objects[" + str(i) + "])))}"
                base_command = base_command + command
        server = Server(applications=eval(base_command),
                        port=port_param,
                        prefix='/' + prefix_param + '/',
                        allow_websocket_origin=['localhost:' + str(port_param)],
                        num_procs=8)
        server.run_until_shutdown()


if __name__ == "__main__":
    # command line arguments
    prefix = sys.argv[1]
    port = sys.argv[2]
    port = int(port)
    # calling modules
    serv_obj = programmatic_server()
    serv_obj.get_modules(set_attribs=True)
    serv_obj.start_server(prefix_param=prefix, port_param=port)
Save this script as programmatic_server.py.
In principle, you can now put any number of applications in your directory and the Bokeh server will serve all of them.
You can start your server with the following command on the terminal, making sure you are using the correct conda environment:
python programmatic_server.py <prefix_of_choice> <port_number>
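As a side note, the bare sys.argv reads in the `__main__` block raise an IndexError when an argument is omitted. A slightly more forgiving variant (a sketch only; the defaults and help strings are my own choices, not part of the original script) could use argparse:

```python
import argparse

# sketch: optional positional arguments with defaults, replacing the bare sys.argv reads
parser = argparse.ArgumentParser(description='Serve all Panel apps found in this directory tree.')
parser.add_argument('prefix', nargs='?', default='', help='URL prefix for the server')
parser.add_argument('port', nargs='?', type=int, default=5006, help='port to serve on')

# stand-in for a real command line such as: python programmatic_server.py dashboards 5007
args = parser.parse_args(['dashboards', '5007'])
print(args.prefix, args.port)  # dashboards 5007
```

With this in place, running the script with no arguments falls back to an empty prefix and port 5006 instead of crashing.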
Make sure the EC2 instance port you have chosen is open. This is done by editing the security groups configured on your instance to accept incoming requests on the port through the TCP protocol.
Once that’s ready, navigate to http://<ip_address>:<port_number>/<prefix>/ and if you have multiple applications you should see a Bokeh HTML page listing all the applications currently being served.
If you only have one application this index will not be present, but if you navigate to http://<ip_address>:<port_number>/<prefix>/<script_name> then you should see your application.
The index can be disabled with the --disable-index option and can also be customized, but that's outside the scope of this article.
Structure of Apps for Deployment
The deployment methodology shown above will only work if all of your apps follow the same structure.
One clear way to name your application is to use the script name as the name of the application (which is shown as the tab name and the main title on the page itself). Every script also includes a panel_class which holds all the pertinent code to run the visualization.
Each panel_class can have as many helper functions as needed; the only requirement is a method named panel, which includes the logic for rendering the application itself. This method should execute all data ingestion and transformation functions, as well as modeling methods from the class.
This structure allows us to automate the serving of the apps in a concise manner.
The structure we use is the same as was shown in the tutorial earlier, the only difference is we remove the last line panel_class().panel().servable() as the server takes care of this functionality.
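As an illustration, a conforming script might look like the minimal sketch below. The class name panel_class, the title attribute, and the panel method follow the convention described above; everything else (the helper name, the data, and the string standing in for a real Panel layout) is hypothetical.

```python
# example_app.py -- minimal sketch of the panel_class convention
# (a real script would return a Panel layout from panel(); a plain string stands in here)

class panel_class:
    def __init__(self, name='Example App'):
        self.title = name  # read by the server's modify_doc for the page title

    def load_data(self):
        # hypothetical helper: data ingestion/transformation would live in methods like this
        return [1, 2, 3]

    def panel(self):
        # the one required method: runs the helpers and assembles the renderable app
        data = self.load_data()
        return '{}: {}'.format(self.title, data)

app = panel_class(name='Sales Dashboard')
print(app.panel())  # Sales Dashboard: [1, 2, 3]
```

Because every script exposes the same class and method names, the server's get_modules loop can instantiate and serve each one without knowing anything else about it.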
All the code used in this post can be found here: https://github.com/hrzumaeta/panel_post_tutorial
We now have an in-house server that can be exposed to users and that can be created and automated purely in Python.
That’s it! Hopefully we have illustrated the value of a tool like Panel for data science teams and how it allows structure to be applied to models that drive decision making.
If you’ve run into similar problems and have thoughts on this approach, we’d love to hear them!
Many thanks to the rest of the full team Phil Kim, Kate Sullivan, Tanvi Kode, Calvin Lau, Kameron Wells and Amol Sachdeva who have all contributed to our current Panel implementation and to this post.
If you enjoyed this tutorial and want to learn more about this work or are interested in joining one of our data teams, please visit our Careers page and follow us on LinkedIn or Instagram.

Source: https://medium.com/ua-makers/how-to-automate-analytics-ux-with-panel-4b24ff5a886e (published 2020-01-21)
Lessons I learned about Agile Software Development

I am so happy to see that more and more big companies, which used to have heavyweight software engineering processes, are moving to more agile methodologies.
Right now in my job, I have to engineer a piece of software for a customer with a small team, trying to be as agile as possible. Fortunately, the customer supports agile software engineering, but we still encountered some challenges.
In this article, I want to share my experience realizing that project and the lessons learned from doing that.
Sophisticated Software Engineering
I have a pretty specific idea of how sophisticated software engineering should look, which might differ from your point of view. The short version: I think software should be developed in small cycles by engineers with a DevOps mindset.
Feedback loops
It is all about feedback loops. Feedback loops on different abstraction layers.
Continuous Integration with automated tests gives feedback about whether new modifications to the codebase work with the existing code.
Development cycles in fixed time frames, combined with demos for the customer, give feedback about whether the requirements are being met.
Retrospectives give feedback about whether the process itself should be improved.
You apply the mantra Inspect and Adapt, and the more cycles you have, the more you can actually inspect and respond to change. I think this is what it means to be agile. You constantly pull yourself out of the comfort zone, or even better, you prevent yourself from ever getting into the comfort zone.
DevOps
Yes I know, you are tired of buzzwords. Anyway, I have to mention it, because, in my opinion, it is a very important engineering mindset or culture.
In classic software engineering, software is developed by developers and operated by operators or admins. The problem with that is:
developers usually have no clue about operating software
operators usually don’t know anything about the implementation of it.
This makes debugging and releasing the software a heavyweight process, which is often referred to as the gap between development and operations.
This gap somehow prevents us from being agile because it enlarges the development cycles, which, by the way, is the reason why those “Big Bang” deployments are so common in classic software engineering.
So … you will probably find several definitions out there of what DevOps exactly is, but we will look at it as a mindset of engineers who control the whole software lifecycle from development to operation.
You build it, you run it!
Challenges
Knowing how to do sophisticated software engineering, unfortunately, is not enough to actually do it; there are different challenges you will face, especially when working with a customer. I will discuss these in the following sections.
Divergence of software engineering practice
So in the company I am working for, we know how to engineer software in an agile way, as we use the Kanban framework for our software engineering process.
Our customer uses Scrum which is also an agile software engineering framework but is slightly more strict and has more ceremonies than Kanban.
Even though the Kanban and Scrum frameworks share similarities that can be traced back to abstract agile principles (basically the principles of the agile manifesto), people tend to stick to specific details of the respective framework instead of following the agile principles behind them.
So the challenge for us here is to somehow synchronize the customer and our team regarding the software engineering process. Not only because of the different approaches to and understandings of agile software engineering practices, but also because we have separate boards to visualize our work, this is a very challenging task.
The Gap between Dev and Ops
Even though we know that the gap between development and operations is a bad thing, when working with a customer you will most likely have to face this situation. The reason is that customers will probably want to use their own infrastructure while not being able to give you the access you would need.
In our case the customer uses AWS and we have only restricted access, but they supported us with a so-called Infrastructure Engineer who helps us with tasks regarding the infrastructure.
The challenge here is communication. On the one hand, we have to communicate what we need for the software we are implementing to be operated. But on the other hand, he also has to communicate to us what exactly he is doing, because in the end “we build it, we run it”, or at least “we build it, we fix it if it fails”.
The more this knowledge gap diverges, the more you will fall into old patterns of engineering software and the more rigid your process will become.
Underestimating tasks
Estimation is a hard topic. Some people out there completely rely on the estimation of tasks, usually combined with some kind of deadline, while others completely refuse to give estimations and to work with deadlines.
And here is the problem: most customers want you as engineers to give estimations and work with deadlines. But this contradicts the agile software development approach, where you show the progress of the project on a regular basis.
To cut a long story short, you will most probably have to estimate and work with deadlines, as we had to, and I guarantee that you will underestimate and overestimate tasks.
Scope Creep
We refer to scope creep as the changing of requirements or scope. We all know (at least I hope so) that it is simply not possible to specify every single requirement of a software system upfront, for various reasons.
The customer himself will not know them all up front, they will change over time, and even then, information gets lost when communicating them to the engineers, just to name a few problems.
Scope creep will happen, and it will happen in the most unpleasant situations: in the middle of a sprint or even a few days before a deadline. It will make you feel upset, but you will have to deal with it somehow.
Solutions
So, we had quite a tough job overcoming these challenges, but here is how we tried to tackle them.
It is very important to understand that finding solutions for the different challenges required constantly reflecting on the work we were doing, being willing to adapt, and being honest with ourselves.
Daily Standups
We started the project with two engineers on our side, so we didn’t need daily synchronization on current progress or occurring impediments, as communication between us was very regular and lightweight.
As soon as we realized that there would be this gap between implementing and operating the software I mentioned before, we decided to do a daily standup to synchronize with the Infrastructure Engineer.
Ops Showcase
As the project proceeded we realized that, even though we synchronized daily with the Infrastructure Engineer and also did pairing sessions with him, the gap kept getting bigger and bigger.
To tackle that problem we decided to hold so-called ops showcases, where the Infrastructure Engineer walks us through the work he did to operate the software we implemented.
Task Grooming
As already mentioned, task estimation is a tricky topic, and we found ourselves giving wrong estimates on tasks or even forgetting to specify important details about a task while creating tickets on our board.
Therefore we started doing a weekly technical Task Grooming where we went through all tickets, checked for the right description, and tried to estimate them correctly before Sprint Planning. This way we tried to compensate for wrong specifications and estimations, as well as making our work more precise and predictable.
Note: A precondition for this is that the backlog is prioritized by the Product Owner.
Sprint Planning
When engineering a piece of software in a customer project, the customer will most likely want to see the current progress of the project and have a certain degree of predictability about how the project will progress in the near future.
In the Sprint Planning session, which takes place at the beginning of each sprint, we create transparency about all the tickets on our board (especially the ones in the backlog) and let the customer decide which tickets should be worked on in the upcoming sprint. At the same time, or directly afterward, we, as the team implementing the software, decide how many of the chosen tickets we can complete.
This way there are little to no surprises regarding the outcome and the progression of the work, as we all come to a commitment about the work we think can be achieved in every sprint.
By the way, do you remember what I told you about the connection between feedback loops and the ability to engineer software in an agile way? Well, having sprints (with Sprint Plannings and Demos) also allows the customer to be agile, as he can also inspect and adapt based on the results of every sprint.
Retrospective
Doing Retrospectives on a regular basis is probably the most important ceremony we have in agile software engineering. We want to get feedback about how the team feels and performs, as well as figure out how we can improve.
The Retrospective usually takes place at the end of each sprint. There are different ways to run a Retrospective, but basically you want the team to reflect and to think about ways to optimize their work.
In our Retrospective, we first capture the general feeling about the last sprint by letting every team member choose one of the following smileys to express their feelings: :), :/, :(. This way we get a very basic first impression.
After that, we show data like how many tickets were done and the divergence of the estimated and real effort for every done ticket. Doing that allowed us to get a feeling about the quality of our estimations.
The last part of the retrospective is probably the most important and valuable. Each member notes his/her lessons learned, as well as things the team should start, stop, or continue doing. We then try to extract Action Items from those notes and assign a team member to each, who is then responsible for the progress of that item. At each Retrospective, we also check which of the Action Items from the last Retrospective were achieved, to decide if they are still important for the team or can be dropped.
Note: The last part is not a complaining session. Instead of just complaining, try to formulate your complaint positively, as something the team should start doing. This way you avoid offending team members.
Lessons Learned
This project was an awesome opportunity to move from a DevOps mindset to a NoOps mindset and to play around with serverless technologies. But don’t get me wrong, this was a serious project for a big customer, and it is now used in production.
In my opinion, we not only learned a lot about serverless technology and AWS, but we also gained a lot of knowledge about how to do agile software engineering in a better way. Let’s summarize the most important things:

Source: https://medium.com/the-innovation/agile-software-development-on-a-customer-project-experience-report-e1059505e2ce, by Noah Ispas (published 2020-08-26)
The Music Plays On — Chet Baker. Donato writes about Chet Baker

Chet Baker
I get sad whenever I listen to Chet Baker. Perhaps it’s because I discovered him at the wrong end of his life when hearing of his death in 1988. Or perhaps it’s because the first image I ever saw of him was from the cover of the 1988 album, My Favorite Songs: The Last Great Concert. The cover photo captures the eternally futile search for happiness that’s in the eyes of every drug addict, and those eyes have haunted me for over thirty years. So has the recording.
Perhaps it was the documentary, Let’s Get Lost, also released in 1988. Directed by Bruce Weber, we see a highly intelligent, yet conniving Baker weave in and out of lucidity and in and out of people’s lives, revealing just enough vulnerability and charm to score his next hit. He’s heartbreaking, maddening, brilliant, and beautiful.
The tragedy is made complete when you start at the beginning.
Chet Baker was born into a musical family in Oklahoma in 1929. He dropped out of high school and joined the U.S. Army at 16, in 1946, and became a member of the 198th Army Band, stationed in Berlin. He left the Army in 1948 and studied music for a couple of years in Los Angeles. He re-enlisted in 1950 and played in the Sixth Army Band at the Presidio in San Francisco. By the time he was discharged in 1951, he was already playing in local clubs. The boy could play!
In 1952, aged 22, he toured the west coast with Charlie Parker and joined the Gerry Mulligan Quartet. Baker was part of what became known as West Coast Jazz. Here’s a great live recording from 1953 with Stan Getz embodying this sound.
He had moviestar looks, loved to drive nice cars, wear nice clothes, and was never, shall we say, left wanting for attention.
The photographer, William Claxton, perfectly captured Baker’s charismatic and brooding personality in a series of photos from this time.
Chet Baker — William Claxton, photographer
Heroin and jazz sadly went hand in hand during this time, and while Baker claimed he started using heroin in 1957, colleagues and acquaintances said that he was already using in the early fifties. Nevertheless, he was one of the top-earning jazz artists of the decade, releasing album after album of instant classics, including:
Chet Baker Quartet featuring Russ Freeman 1953
Chet Baker Ensemble 1953
Grey December 1953
Chet Baker Sings 1954
Chet Baker Sextet 1954
The Trumpet Artistry of Chet Baker 1954
He was known to hock his instruments for drugs, and was arrested and imprisoned in Italy for possession. In 1966, in Sausalito, CA, he got into a fight, most likely over drugs, that resulted in his teeth getting knocked out, which ruined his embouchure and left him unable to play the trumpet. He was working at a gas station when he realized he needed to find a way back to music.
With dentures and a new approach to trumpet playing, but without ever getting clean, Baker began to make a comeback in the seventies. Moving to New York City, he would often play with Jim Hall, Paul Desmond, and Hubert Laws. Here are two classic albums from this time.
She Was Too Good To Me 1974
Concierto 1975
The last ten years of his life, Baker lived in Europe. In 1983, Elvis Costello, a long-time fan, hired him to play trumpet on his song, Shipbuilding, exposing a whole new generation to Baker’s playing.
Chet Baker was found dead on the street in front of his Amsterdam hotel on May 13, 1988, having apparently fallen from his second-floor hotel room, with cocaine and heroin found in his room and in his body. There was speculation that it was a drug deal gone bad, but there was no evidence to back up this claim. There was also a rumor that he had locked himself out of his room and was trying to climb over from the adjacent hotel room. Whatever really happened, it was just plain sad. This last album I’m sharing was recorded less than a year before his death. Heartbreaking, maddening, brilliant, and beautiful.
In Tokyo 1987

Source: https://donatocabrera.medium.com/the-music-plays-on-chet-baker-e866712d5e83, by Donato Cabrera (published 2020-05-06)
Secretary Geithner’s budget speech, part 2

Yesterday I praised Treasury Secretary Geithner for three elements of the fiscal policy speech he gave at the Harvard Club of New York this past Tuesday.
1. Future budget deficits are caused in part by both demographics and rising health costs. (I would strike “in part.”)
2. We can’t wait to address our fiscal problems. The markets will at some point force action but we don’t know when. If we wait until they do, the solutions will be much more painful.
3. Higher interest payments are a cost of inaction that will squeeze out other policy priorities.
Today I will provide a few areas of disagreement, with more to follow in future posts.
In the first two points below my goal is not to prove Secretary Geithner wrong, but to show a set of reasonable conclusions that differ from his.
All quotes below are from Secretary Geithner’s speech.
4. I think the Administration’s proposed deficit and debt reduction is too little and too slow.
If we put our deficits on a path to get them below 3 percent of GDP by 2015 and hold them there, with reforms that politicians commit to sustain, then the federal debt held by the public will peak in the range of 70 to 80 percent of GDP, and then start to fall.
A 3%-3.1% deficit is the break-even point for a constant debt/GDP ratio. So with a 3% target, “start to fall” actually means “basically hold steady, and start to fall by only the slightest amount.”
There is a huge difference between 70 and 80 percent of GDP. The President and the Secretary are leaving themselves a lot of wiggle room ($1.5 trillion if we measure it relative to this year’s GDP).
The maximum deficit I would prefer would be 2% of GDP (roughly the historic average over the past 50 years), and I’d be happy to support lower. I am uncomfortable with debt/GDP that high and would want to lock in a sharper decline in that ratio. To do that, at a minimum you need smaller deficits. I remember fondly when balance was the goal, and would still support that goal and the spending cuts needed to meet it.
The biggest problem with this weak deficit goal is “with reforms that politicians commit to sustain.” Unlike in business where you can sign a binding contract, Congress by definition has the ability to change the rules in the future. You need to be more aggressive in your initial fiscal goal precisely because you have to allow for the likelihood that future Congresses will make things worse rather than better. If you only minimally satisfy your fiscal goals, then any future bad event or unwise action immediately puts you in bad territory again. Given Congress’ track record, this is a risk not worth taking.
Of course, it’s easy to say “I’m for more deficit reduction” if you don’t specify how you’d get there. For the time being I will say that I support the Ryan budget, plus I would slow the growth of (“cut”) Medicare spending in the short run as much as is needed to hit my more aggressive deficit target. I would prefer repealing the new health care spending from last year’s two laws. If I couldn’t get that, I would increase cost-sharing in Medicare. If I couldn’t get that, I would cut all Medicare provider payment rates. I would also make explicit changes to slow Social Security spending growth, although those effects would be outside of this budget window.
5. I have a two part goal, one part of which is deficit/debt reduction.
The Secretary (and the President) defines the fiscal goal as follows:
For the United States, this means a deficit below 3 percent of GDP. Achieving this is the essential test of fiscal sustainability.
A deficit/GDP target is one essential test of fiscal sustainability. The other essential test is that government not perpetually expand to consume an ever-greater portion of society’s resources. If we have budget deficits of 3% but spending and taxes grow to 25%, then 30%, then 35% of GDP, then individuals, families, and businesses will have ever fewer resources to address their own needs and to solve problems they face.
I would instead say, “The essential test of fiscal sustainability has two parts. Budget deficits should be no more than 2% of GDP, and preferably less. Government spending should stay stable as a share of GDP, so that the benefits of an expanding economy are controlled by private citizens rather than by the government.”
To read more of this argument, please see my earlier post: Deficits are an important but incomplete metric.
6. I don’t think the President’s new budget proposal is credible.
Secretary Geithner puts the best face on a proposal that I think in most respects lacks credibility because it lacks detail. I also disagree with several of the President’s proposals where he has provided detail, but my complaint here is claiming you have a proposal when you don’t. Rather than rehash this argument, please see my earlier (long) post: Understanding the President’s new budget proposal.
Note for instance how Secretary Geithner finesses a trillion dollar deficit gap between the President’s outline and the Ryan plan:
The fiscal plans that are on the table include roughly $4 trillion in deficit reduction over the next 10 to 12 years so there is broad agreement on the ultimate goal and timeline.
As I explained earlier, there is about a trillion dollar deficit difference between using 10 years and using 12 years. There is not broad agreement on either the ultimate goal or the timeline. The Administration (implicitly) confirmed this, and it invalidates this claim by Secretary Geithner.
I will respond to more points from the Secretary’s speech in future posts.
(photo credit: Wikipedia)

Source: https://medium.com/keith-hennessey/secretary-geithners-budget-speech-part-2-67760bf34e3b, by Keith Hennessey (published 2016-12-22)
A Conversation with Composer Cormac Bluestone on It’s Always Sunny and The Cool Kids

Cormac Bluestone is a long-time composer and musician with an extensive background in television, film, and theatre. His styles range from New Orleans-inspired jazz to hip-hop to gospel and everywhere in between. Best known, perhaps, for his original songs and music for It’s Always Sunny in Philadelphia, he’s currently composing for FOX’s The Cool Kids.
In our interview below we talk music composition, television, storytelling, removing bias, creativity, and the importance of taking a chance.
Cormac Bluestone
Andrew Cheek: You have experience in a variety of media — from writing music for the theatre, for film, and television—to working as a director, an editor, and even an animator, through all of these things do you feel you have one true calling?
Cormac Bluestone: You know, in this business you don’t always get to choose what you do. So I’m lucky I’ve gotten to do a lot of things. I think at the end of the day if I could write music I would be so happy to do that. All of those things [you mentioned] are interconnected, though, and having a bigger picture of what’s going on and production and post-production, makes you better at each of the individual things, too. As far as callings go, yeah. Definitely music!
Andrew: When writing music or theme music for a show, and again whether thats for theatre or television or anywhere else, how do your interactions with the actors and writers influence and shape the music?
Cormac: In television and film you’re definitely dealing with producers and the creative team, and I try to let them take the lead. You want everyone to feel comfortable and feel good. You want them to feel good that they’ve put this in your hands. So, if they want to give me music, if they want to give me scripts… just being receptive to their process. I think being a composer you have to be a translator, too. You want to be open to how they want to translate what they’re looking for to you. With a lot of scores you get a cut with a temp track. I love good temp tracks. I’ve been lucky to work with editors and producers that take terrific temps. That’s a great jumping-off point.
In a case like The Cool Kids we really were trying to get the music and theme before we had picture. In that case I talked with Charlie Day, who’s one of the creators on the show, a lot about the music and we were kind of in the same ballpark about what we were thinking—kind of this bluesy, New Orleans type theme for the music. In that case I just kept sending demos and we kept talking about it.
I feel every project is a little different and you just have to be open — or at least I try to be open — to how the producers want to run the show. I love it when producers have a lot of ideas and want to give you a lot of demos. And I also love it when they’re laissez-faire and don’t want to have anything to do with it and say “Just give us something that we’re going to fall in love with.”
Andrew: Yeah. Sometimes they may have something more specific in mind that they want to have matched up with visually, and then other times as you’re saying with The Cool Kids you’re working from the sound first.
Cormac: And you’re trying to get that music in so the editors are cutting with your music. I think that’s a big goal. So, they start to fall in love with your music because it’s been there as long as the cut has been there.
Andrew: In that case the music isn’t just an add-on but is more centrally located in terms of the importance of the whole work.
Cormac: Exactly.
Andrew: So how did you get started writing music and working in the film industry? Those may be slightly separate things, I guess, but…
Cormac: I started writing music when I was really young! I took piano lessons and guitar lessons and I really loved musical theatre growing up and… always wrote songs and had ideas for what the next American musical would be. So I was constantly composing and once I got to high school I had a great faculty and music teachers who gave me some real, grounded music theory. I started to notate, and learned to arrange music properly.
In grade school I was really lucky: we had an electronic music teacher (I was probably the only kid interested in it) who taught me how to use MIDI instruments, you know, during the eighties when MIDI instruments were just being invented. I really learned, at a very young age, how to sequence music using Performer 1.0, which was one of the original Apple sequencers. Writing, arranging, and composing were things I was doing because I loved the technology and I loved to play so much.
Andrew: So that interest came out of initially playing music, more? And then from there you transitioned to composing?
Cormac: Absolutely. Although I think I was probably writing as soon as I could play. And I still feel I’m a little like that—when I learn to conquer a new part of my playing I instantly want to incorporate it into my tool bag of writing. You know, constantly growing outward — “What can you do?”
Andrew: What do you see — and this kind of goes off of that — what do you see as the intersection between composing music and interpreting or playing something someone else has written?
Cormac: For me, and I’ve been lucky enough to work in comedy, I think it always boils down to the same thing. Really wrapping your head around the given circumstances of the project you’re on…. whether it’s Shakespeare (which I’ve done a lot of music for), to It’s Always Sunny in Philadelphia. Really keeping an eye on the given circumstances and leaning into the given circumstances rather than the idea. I think that is generally the most effective way as a composer to be on board with the storytelling. So you’re never winking at the comedy, you’re never making a dramatic moment too, what would the word be, dramedy, not dramedy… You know what I mean. Too overdone.
Andrew: Yeah. I know what you mean.
Cormac: So, that is I think the big intersection. Not getting in the way. Not adding time to the piece, just being a part of those given circumstances so the story can shine through.
Andrew: Do you have a lot of experience playing, as a performing artist playing an instrument?
Cormac: I have played with a lot of rock bands. I lived in L.A. for a number of years where I was a side man for a couple bands. I still play. I love playing in bands with other people’s music, it just gives you a different perspective and you kind of get to sit back a little bit and get to have someone tell you what to do (which is always nice).
I love getting to also be a cog in the machine. It makes you a better player, it makes you a better composer, and it kind of goes back to that original point — the more you know about the wider variety of the process the better. If you’re arranging for, you know, any given instrument… for woodwinds, I don’t play woodwinds, but I’ve spent a lot of years in New York arranging for a clarinet player. The more I knew about that instrument the better a score I could write because I started to learn “Oh, it doesn’t really work if you do xyz on this instrument.” So it all, I think, is very cumulative. And figuring out the larger picture.
Cormac Bluestone — in action.
Andrew: It’s almost like removing a bias, I think. Because then you’re not approaching other instruments with the understanding of just one instrument or even from just one media either.
Cormac: Yeah, it is like removing bias! It’s having a broader understanding of the intricacies of what’s happening with each instrument. [It] makes the whole score more effective.
When I score, one thing I really try to keep an eye on especially for parts that are going to be played live is that they’re fun to play. That’s kind of the big thing in my mind — that you’re giving the musicians or, I play a lot of my own stuff, giving myself something that is actually fun to play. That people aren’t just these little cogs in the machine there just to elevate an idea that they’re not really a part of.
Andrew: It’s easy to maybe have an idea about how a piece could function ideologically or with some contrast or… the more intellectual side behind it. But if it’s not fun or interesting it can kind of fall flat.
Cormac: I totally agree. If it’s not fun you’re doing it wrong.
Andrew: Yeah. So, how would you encourage someone to compose if they’re not really sure where to begin?
Cormac: I’ll start with this. Say yes. Take a chance and say yes. When I talk to young composers myself, they go “Oh I don’t know how to do… this that and the other.” You know, I think that’s what separates the professionals from the amateurs — that the professionals are willing to learn and figure out how to do things they don’t know. There have been plenty of times where I’ve been asked to do things — “Oh, can you do this?!” — and not having a clue how to do it in the moment you say yes. Because I have confidence that I’ll be able to not only figure out how to do that thing, but to conquer it and to deliver something really great.
People when they start out, they’re so aware of their limitations they forget how much they have to offer. And sometimes you really have to take a chance. You mentioned animation before. One question people ask me — and I’m not an animator, I’m not known for animating I just like the workflow of it — someone will say “How do you use Adobe After Effects?” And I’ll say “Well, what do you want to do with it? It does everything. What do you want to do with it?” Figure out what you want to do and then do that thing. Learn how to do that. And then that goes in your tool bag. Then learn something else. Put that in your tool bag. Young composers need to take a risk and say yes. They’ll know where they need to go — the only trick is figuring out how to get there.
Andrew: I really like that. That’s really smart.
Cormac: [Laughs] Off the record it’s really smart until you say yes and you just can’t figure it out. But I think people generally can figure it out. If you have the drive and you want to do it, do it.
Andrew: In the internet age we live in I think we’re very lucky to be able to have so many tutorials and resources. For example, I was going to do video editing and I downloaded Premiere Pro and I was like “Oh my god” — it was the same thing — “You can do everything! But I feel like I can only do two things.” So it took a while to do even very simple things. But then once I started getting the hang of it through lots of tutorials online I was gradually learning the language of the software. So, availability of resources helps but the drive, too, is important.
Cormac Bluestone — on stage.
Cormac: I 100% agree. I don’t think I’ve ever really called customer service for anything and I was working on something for It’s Always Sunny a couple of years ago, and I hit a real snag. And I could not fix this snag and finally said “Well, off to customer service I go.” They could not solve this problem. I actually had unearthed a bug in the software and they said “Keep an eye out for it in the next release…”
That’s what it came to. I totally agree with you, though, all the information is out there. Learn how to Google it! There’s no reason you can’t figure out how to use Logic, Pro Tools, Ableton, etc.
Andrew: Is there a dream project you have your sights set on or that you’re working on currently?
Cormac: Oh man. A dream project…!
Andrew: Maybe it’s not the project but just a type of project or something that you’ve had in mind for a long time…?
Cormac: I love the old John Carpenter films like They Live. I would love to do one of those kind of noir eighties synth-type films. Kind of like Mark Mothersbaugh, a score he just did for Thor: Ragnarok. I would love to do something like that. Or work with Edgar Wright. I think his directing work is incredible. The Gutter Brothers. I just love that genre-twisting work. Or even Doug Liman. I mean, he’s incredible. Kind of, one of those guys in there. Not even a big action movie but, something like Drive or something in that vein where you get to have a lot of fun with the score and it’s highly stylized.

https://andrewgcheek.medium.com/an-conversation-with-its-always-sunny-composer-cormac-bluestone-c684d1e6112d | [] | 2019-10-31 22:44:29.455000+00:00 | ['Its Always Sunny', 'Composer', 'Television', 'Music', 'Interview']
How To Pass Command-Line Values to a Python Script

Python provides a native library for passing command-line values to scripts called argparse. With argparse, we can include variable values in the execution command instead of using input() to request the value mid-execution. This saves us time and more importantly allows script execution lines to be saved and conveniently used — either manually or automated.
If you’ve written any Python scripts that require one or two values via input(), then read on and consider implementing argparse to simplify the execution of your scripts. I’ve implemented argparse in scripts such as my command-line JSON splitter and many of the iFormBuilder API tools.
In this guide, we’ll introduce the argparse library, how to begin using it, and some tips and tricks to avoid common pitfalls as you get on your way.
What is argparse?
Python has a variety of native libraries, available by a simple import. Argparse — short for argument parser — is a library that allows us to define command-line arguments.
These arguments can be either required or optional. Additionally, we can define the data type and helper text for each argument. If your script has an argument that is needed for the script to function properly, then the argument should be required. If the value is optional or will have a default value within the script, then the argument can be optional.
Our First argparse Script
Let’s get started writing our first argparse script. We begin with an import statement and creating an ArgumentParser() object.
import argparse

parser = argparse.ArgumentParser()
Our ArgumentParser object — stored in the variable parser — will be used to define our command-line arguments. We’ll start by using the .add_argument() method to define an argument for a number. Afterwards, use the .parse_args() method to store our data to a variable. Finally, we’ll print our argument value to demonstrate how it is referenced.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("number")
args = parser.parse_args()

print(args.number * 2)
To see how the script works, open up your terminal and execute the script as normal. You should receive an error. This is because our argument was not included in the script call.
> python3 app.py
usage: app.py [-h] number
app.py: error: the following arguments are required: number
This time, we’ll pass a value to the script by entering it directly after the file’s name.
> python3 app.py 5
55
The good news is, we did not receive an error; however, our output is not what was expected. This is because an argument from argparse is going to be defined as a string… unless we tell it otherwise.
Let’s add a data type definition to our argument.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("number", type=int)
args = parser.parse_args()

print(args.number * 2)
Now when we execute, we have the value of number properly doubled.
> python3 app.py 5
10
What if we want to add an optional argument? To do this, we must prefix the argument’s name with a hyphen. Additionally, we can add a dest attribute to our argument, which specifies the name under which the value will be stored (if omitted, argparse derives it from the long option name automatically).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("num1", type=int)
parser.add_argument("--num2", type=int, dest="num2")
args = parser.parse_args()

# Compare against None so that an explicit --num2 0 is still honored
if args.num2 is not None:
    print(args.num1 * args.num2)
else:
    print(args.num1 * 2)
To pass a value to an optional argument, the name must be used in the script execution command.
> python3 app.py 5 --num2 5
25

> python3 app.py 5
10
Tips and Tricks
After covering the basics of argparse, you’re ready to go out into the world and write scripts which require command-line arguments! But there are still plenty of potential roadblocks, and useful features that are easy to miss. To prepare you for success, here are three tips and tricks that go beyond the simple basics.
Optional arguments do not need to have a command-line value
If you want to use an optional argument as a simple flag, then set the action="store_true" attribute in your argument definition. With this in place, including the argument in the execution command will result in a value of True being stored.
Let’s change up our arguments to be a more realistic example.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("greeting")
parser.add_argument("--caps", action="store_true")
args = parser.parse_args()

if args.caps:
    print(args.greeting.upper())
else:
    print(args.greeting)
Now take a look at the differences in script execution commands.
> python3 app.py "Hello World"
Hello World

> python3 app.py "Hello World" --caps
HELLO WORLD
I often use a --debug argument in my scripts to determine if I’d like extra logging during the execution.
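As a sketch of that --debug pattern (the log helper and its messages are my own illustration, not from the article) — note that parse_args can be handed an explicit list, which is a convenient way to try flags out; a real script would call parser.parse_args() with no arguments:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--debug", action="store_true", help="print extra logging")

# Passing an explicit list simulates `python3 app.py --debug`
args = parser.parse_args(["--debug"])

def log(message):
    # Diagnostic output only shows up when --debug was passed
    if args.debug:
        print(f"DEBUG: {message}")

log("starting up")  # prints: DEBUG: starting up
```

Run without the flag and the log calls stay silent, so the extra logging costs nothing in normal use.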
Include helper text
Variables always make perfect sense when we create them, but what about in one week? One month? What about when you share your script with a colleague? The bottom line is that a little bit of documentation can go a long way and save you from digging back through your code as a refresher.
Fortunately, argparse has the ability to define helper text with our arguments. Inside the .add_argument() method, we can pass a keyword argument help with a string value.
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("greeting", help="Text to be printed")
parser.add_argument("--caps", help="capitalize the greeting", action="store_true")
Now, our helper text is displayed when we use --help from the command line.
> python3 app.py --help
usage: app.py [-h] [--caps] greeting

positional arguments:
  greeting    Text to be printed

optional arguments:
  -h, --help  show this help message and exit
  --caps      capitalize the greeting
Using --help is a universal convention with argparse, so expect that users will know to leverage the help menu. I often use help text to communicate string formats and data types for arguments, such as date formats or that an argument must be an integer.
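For instance, a custom type function can both validate a date format and document it via help. The --start argument and the iso_date helper below are hypothetical illustrations of that idea, not from the article; again an explicit list stands in for the real command line:

```python
import argparse
from datetime import datetime

def iso_date(value):
    # Turn "YYYY-MM-DD" text into a date; a bad value becomes a clean argparse error
    try:
        return datetime.strptime(value, "%Y-%m-%d").date()
    except ValueError:
        raise argparse.ArgumentTypeError(f"{value!r} is not a YYYY-MM-DD date")

parser = argparse.ArgumentParser()
parser.add_argument("--start", type=iso_date,
                    help="start date in YYYY-MM-DD format")

# Simulates `python3 app.py --start 2020-05-11`
args = parser.parse_args(["--start", "2020-05-11"])
print(args.start)  # prints: 2020-05-11
```

If a user passes a malformed date, argparse prints the ArgumentTypeError message along with the usage line, exactly like its built-in errors.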
Set a short version of argument names
Generally speaking, the practice of using excessively short names is frowned upon. The convenience is often outweighed by possible confusion and lack of context. That being said, there’s no denying the potential convenience.
We can define multiple names for an argument, which gives us the best of both worlds.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("greeting", help="Text to be printed")
parser.add_argument("-c", "--caps", action="store_true")

https://medium.com/code-85/how-to-pass-command-line-values-to-a-python-script-1e3e7b244c89 | ['Jonathan Hsu'] | 2020-05-11 11:32:25.715000+00:00 | ['Technology', 'Software Development', 'Python', 'Data Science', 'Programming']
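Rounding out the two-name example from the argparse article just above: both spellings land on the same attribute, since argparse derives the destination caps from the long option name. A quick check, feeding parse_args explicit lists to simulate the two command lines:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("greeting", help="Text to be printed")
parser.add_argument("-c", "--caps", action="store_true")

# `-c` and `--caps` both set args.caps
short = parser.parse_args(["Hello World", "-c"])
long_form = parser.parse_args(["Hello World", "--caps"])
print(short.caps, long_form.caps)  # prints: True True
```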
Bokeh 1.3.0 Released

Before getting to the release itself, a few project announcements:
First, the next release will be the last release to support Python 2. After that, starting with Bokeh 2.0, Python 3.5 will be the minimum supported Python version. We will publish a blog post soon outlining all expected Bokeh 2.0 changes.
Earlier this month, Bokeh passed 10k stars on GitHub and is about to pass 10k followers on Twitter. Thanks to everyone for your support and interest!
The Bokeh project has assumed direct control over the CDN that publishes BokehJS resources. This includes making a new base URL of cdn.bokeh.org the primary location going forward. All existing links to cdn.pydata.org will continue to function indefinitely, but users are encouraged to use the new URL starting immediately.
As a reminder, the Bokeh project has recently launched two new sites:
A new project front page at bokeh.org
An improved support forum at discourse.bokeh.org
Both these sites are great resources for new and old users, please use and share them often!
We have also created a Project Bokeh page on LinkedIn. Anyone who has contributed to Bokeh may now list Bokeh on their own profile. There is not much content there yet, but we hope to ramp things up in the coming months.
As part of all this increased emphasis on outreach, we have had a sharp new logotype produced:
Please feel free to use this anytime you are sharing or writing about Bokeh.
Finally, the July Fundraiser is still ongoing! Although we have just recently met our original goal of 1000 USD, every bit helps offset operational costs (e.g. to keep the CDN running), so please donate if you can, or help spread the word:
THANK YOU to everyone who donates to Bokeh! We will make a wrap-up blog post about the July fundraising experience in the near future.
Now, on to new features!

https://medium.com/bokeh/bokeh-1-3-0-released-cca6b7af20ef | [] | 2020-07-05 11:27:14.566000+00:00 | ['Python', 'Open Source', 'Data Science', 'Bokeh', 'Data Visualization']
The Trades Union Congress (TUC) have reported that your commute took you 18 hours longer last year…

The Trades Union Congress (TUC) have reported that your commute took you 18 hours longer last year compared to a decade ago, despite the billions of pounds spent on the roads and rail network. As privatisation has failed, with journeys becoming too expensive, slow and unreliable, where is the incentive for people to use public transport as a cleaner and greener method of travelling?
Climate change and global warming, resulting from carbon emissions exhumed by polluting, congested traffic, should be motive enough to want to find ways in which to encourage more people to use public transport. However, as The Times reports that “journeys by all forms of transport take longer than they did”, whilst rail commuters face the longest commute, it’s of little surprise that people are reluctant to spend so much of their free time travelling to and from work on overcrowded and unreliable public transport. Chief Executive of Work Wise UK, Phil Flaxton, suggested that “long commutes have become a part of the UK’s working culture, but the excessive time spent commuting is one of the main factors contributing to work-life balance problems”, which effectively deters people from public transport.
The Telegraph earlier revealed that London is the second most congested city in Europe, with the average person spending 73 hours per year in traffic, whilst Manchester also placed in the top 25. The Manchester Evening News further reports that the World Health Organisation (WHO) lists Manchester as the second most polluted city in the UK, whilst listing London as 22nd, and additionally highlights that Greater Manchester has the highest rates of asthma-related emergency hospital admissions in the entire country. Further to this, the BBC reports that Manchester and Salford have the country’s worst congestion outside of London, and that the Greater Manchester Combined Authority (GMCA) states that the resulting levels of roadside nitrogen dioxide were a “public health crisis.”
The TUC have reportedly suggested that government spending is inadequate and has failed to keep up with the demand for greener methods of travel and reducing Manchester’s carbon footprint. Making improvements to infrastructure, and incentivising the use of public transport will reduce traffic congestion, reduce carbon emissions, improve air quality, and should shorten people’s commutes as the roads should be less busy.
The United Kingdom has seen a 9% rise in weekly commuting time, with the biggest increase being for bus commuters who now face a journey approximately 7 minutes longer. Increasing demand for shorter and more comfortable commutes is evident, and in order to allow for smoother journeys, the government must be willing to increase funding for infrastructural changes. A spokeswoman for the Department of Transport said “we’re investing more than £48 billion into our railways to cut journey times, increase service frequency and introduce new trains”, and added that “we are also giving councils extra powers to work in partnership with bus companies to improve services.” These infrastructural improvements are fundamental, but we must also find ways to further incentivise people to take public transport. EnergiToken is the solution.
With public transport under constant fire from disgruntled commuters, EnergiToken has the power to make public transport more appealing for the everyday user, as a platform which rewards users for energy efficient behaviour, such as taking public or low-carbon transport. When a consumer uses a greener method of transport, they are rewarded with EnergiTokens (ETK), a financially tangible cryptocurrency that is awarded to the user and can then be spent within the EnergiToken ecosystem, built up of approved, energy efficient partners. If people are being rewarded for acting environmentally consciously, they will be more inclined to do so on a more frequent basis until it becomes habitual behaviour. The end result? More people using clean methods of transport, less traffic congestion on the roads, improved air quality due to carbon emission reduction, and a happier and healthier population.
Yes, there’s plenty that the government could and should be doing to increase public transport use; however, as we are all living on this one planet which we share, we all need to take individual action.
Visit energitoken.com today to find out how you can be rewarded for energy efficiency and protecting the environment.

https://medium.com/energitokennews/the-trades-union-congress-tuc-have-reported-that-your-commute-took-you-18-hours-longer-last-year-4f7c6bb1c0c9 | [] | 2018-11-13 13:19:46.375000+00:00 | ['Environment', 'Pollution', 'Transportation', 'Infrastructure', 'Climate Change']
Read a Rare Alan Vega Interview

God!
I went down to Texas with Marty to play the famous club Emo’s. It was packed with Mexicans, cowboys, you name it. Generally, we’ll play for 45 minutes to an hour. Tops we’ll do an hour and 15 [minutes]. They kept asking for song after song. We were out there for two and a half hours, and nothing was thrown at us. In fuckin’ Texas, man! I was sure we would have gotten the arrows there. After years and years of being booed unmercifully and every object in the world has been thrown at us–you name it, it’s been thrown at us–we’re getting applause all the time. That’s the way it is. We’re still trying to be confrontational with our music, but nobody’s buying it. People are loving it. I listen to what my kid is listening to. He’s gonna turn 10, and it’s unbelievable the shit he’s getting.
Like what?
He goes through phases. It was the White Stripes for a while, which drove me completely crazy because I don’t like the band that much. I like what they’re doing intellectually, but to listen to it over and over again, can’t he graduate to something else? Now it’s Disturbed. And that goes on for months. I have to listen for months. “Shut it off already! You’re driving me crazy.”
What does punk mean today?
I don’t know what it means anymore. The first time we used that word was in 1971. We did this show in a gallery. Of course, we couldn’t get anywhere else to play. I was showing my art, my sculptures at this pretty big gallery. I asked if we could do a show there, and they agreed, which blew me away because no one else was giving us a show. We billed it as a “punk music mass.” Up until then, that word had never been used except in an article in ’69 by Lester Bangs. He did a thing in Creem magazine when Creem was basically the greatest fucking rock magazine around. He did a big piece on Iggy Pop, and the word punk was used for the first time that I’d seen it.
Where I grew up in Brooklyn, man, a punk was like a wuss, the guy who ran away from the fight. “You’re a punk. You’re a weasel. You’re nothing.” Now it has this connotation of being the tough guy thing. The revolution? Are you kidding? So I liked the word and used the term “punk music mass,” maybe inadvertently trying to turn it into something else. One day I wake up and there’s the word punk all over the place. That’s when it became meaningless to me. Somebody said that Suicide had to be the ultimate punk band because even the punks hated us. I think that hits it right on the head. I never really saw us as a punk band. They called us glitter. We’ve been called everything. It’s like country-Eastern music or New York blues music. But they did hate us. “You’re supposed to be punks, and you’re hating Suicide?”
This question is from Beth Ruder, one of our readers from St. Louis: When you sing, do the noises you make live inside you, or do you make them to cancel out something else?
Out of myself, or out of the music?
I’m assuming she means out of life or what’s around you.
I assume it does, yeah. People always used to ask me why I was so angry. I never thought of myself as being angry. But maybe I was. I don’t know. I grew up pretty much on the streets of Brooklyn. I never had a dime to my name. Everything was a struggle. Just looking around–I guess I’m a political guy in some ways–in those days it was the Vietnam War, and Nixon was in power, and Marty was a very political guy, too. We were both very upset. New York City was crumbling.
Is there anything you miss about New York in the ’70s?
I miss everything about New York in the ’70s. I’m a stranger in a strange land right now. It’s been gentrified up the asshole. Everything’s a new building all over the place. Prices are skyrocketing. People who are living in New York now, I don’t even know who they are. Sometimes I literally start crying when I walk through the streets where I used to live. I’ll start crying sometimes because I see the ghosts of all the people on the streets I used to hang out in. It was so great. Everybody knew each other. We knew the jazz guys, the rock guys, helping out if we could. If somebody got a pad, we’d all crash at one place, but you know, the cops wouldn’t bother you. That didn’t matter. Crime was rampant, which was cool. Where’s the crime now? They cleaned it up so much. Which is great for my kid.

https://medium.com/self-titled/read-a-rare-alan-vega-interview-5e4a57cef3ad | [] | 2016-08-03 00:02:21.577000+00:00 | ['Interview', 'Punk Rock', 'Music', 'Alan Vega']
Pass CKAD (Certified Kubernetes Application Developer) with a Score of 97!
Study Guide and Tips
Earlier this month I passed the CKAD (Certified Kubernetes Application Developer) exam. It is an exam that certifies the attendee’s ability to design, build, configure, and expose cloud-native applications for Kubernetes. What you need to do is score a passing grade (66%) on a 2-hour online, proctored, performance-based exam. The exam consists of a set of performance-based items (problems) to be solved in a command line, running Kubernetes. You take the test online with a proctor watching you (you cannot see the proctor’s face; you just communicate with the proctor using the chat box provided). So make sure your webcam is working before the exam by running the exam environment testing tool (I will tell you my miserable experience later).
Curriculum Overview
Let’s jump into the curriculum and talk about how to prepare for the exam. The curriculum is posted on GitHub and you should check it before the exam.
13% — Core Concepts
You need to know how to deploy your app using kubectl. Follow the below link and start the interactive tutorial. The question will ask you to deploy an app using the image mentioned and expose the service with port XXX. If you are familiar with the basic concept of deployment, pod, and node, you will have no problem with this section.
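For a question phrased that way, imperative commands are usually the fastest route. A hedged sketch — the app name, image, and port here are hypothetical placeholders; substitute whatever the question specifies (these commands need a live cluster, so they are illustrative only):

```shell
# Create a Deployment from the given image
kubectl create deployment myapp --image=nginx

# Expose it as a Service on the requested port
kubectl expose deployment myapp --port=80 --type=NodePort
```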
18% — Configuration
This section tests how to create ConfigMaps or Secrets and configure Pods using data stored in ConfigMaps or Secrets. Make sure you understand the material of the related documentation and follow the steps stated at least once. You will do similar things in the exam.
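The quickest way to create both objects in the exam is with imperative kubectl create commands (the names and key/value pairs below are hypothetical placeholders, and the commands assume a live cluster):

```shell
# A ConfigMap for plain settings
kubectl create configmap app-config --from-literal=APP_COLOR=blue

# A Secret for sensitive values (stored base64-encoded)
kubectl create secret generic app-secret --from-literal=DB_PASSWORD=pa55w0rd
```

The Pod then consumes them through envFrom:, valueFrom:, or a volume, as shown in the documentation linked above.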
10% — Multi-Container Pods
In my opinion, this is the easiest question. All you need to do is follow the instructions (the question will tell you which image to use and all the conditions needed), then copy and modify the yaml file in the documentation. In this question, you create a Pod that runs two Containers. The two containers share a Volume that they can use to communicate. Check out this link or bookmark it.
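A minimal sketch of that shared-volume shape (container names, images, and the mount path are placeholders; the pattern mirrors the documentation's two-container example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
  - name: shared-data        # one Volume, mounted by both containers
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```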
18% — Observability
Definitely, there will be a question testing how to configure LivenessProbes and ReadinessProbes. Learn the difference between initialDelaySeconds and periodSeconds before the exam, and familiarize yourself with the syntax of the httpGet and exec command probes. For all the material, you can refer to this link.
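Both probe styles sit inside a container spec. A sketch of the two shapes with the timing fields called out (the command, path, and port are placeholders):

```yaml
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  initialDelaySeconds: 5   # wait this long before the first probe
  periodSeconds: 10        # then probe at this interval
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```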
20% — Pod Design
For the deployment rollout and rollback, you will be asked to create a deployment and update the deployment. If you are asked to change the image tag of the deployment, you can just use the imperative command to perform rolling updates.
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
After that, you will be asked to roll back. Again, you can use the imperative command to perform a rollback.
kubectl rollout undo deployment.v1.apps/nginx-deployment
Also, you can scale a Deployment by using the following command:
kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
The best way to study is to read the official documentation directly. It covers many typical use cases for Deployments. Although not all the use cases will be tested, it is good for you to know various use cases as you may be facing a similar situation in your job.
One question will be related to running an example job/cronjob, writing a job/cronjob spec. The important concepts of a job or a cronjob are: backoffLimit, activeDeadlineSeconds, completions, parallelism.
Don’t try to write your own job specification from scratch. Always copy the example specification from the documentation and modify it.
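For orientation, the four fields mentioned above all sit at the top level of the Job spec. A sketch along the lines of the docs' pi example:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 4             # retries before the Job is marked failed
  activeDeadlineSeconds: 120  # wall-clock limit for the whole Job
  completions: 3              # successful Pods required
  parallelism: 2              # Pods allowed to run at once
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
```

For a CronJob, the same Pod template nests under spec.jobTemplate, with a schedule: field in cron format at the top of the spec.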
13% — Services & Networking
All you need to know in this section is Services and Network Policies. Learn the difference between ClusterIP, NodePort, and LoadBalancer. Remember to state the pod selector in the definition file. For Network Policies, read carefully which pods the network policy applies to and modify the podSelector in the specification. You also need to make sure whether the ingress or the egress policy is to be enforced. All the information is stated in the question; what you need to do is read it carefully and not make mistakes.
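A sketch of a typical ingress policy (the labels and port are placeholders): podSelector picks the Pods the policy applies to, while the from block picks who may reach them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db            # the Pods this policy applies to
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # only these Pods may connect
    ports:
    - protocol: TCP
      port: 3306
```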
8% — State Persistence
In order to pass the exam, you must learn how to configure a pod to use a PersistentVolume for storage. There is certainly one question associated with the PV and PVC concepts. I advise you to repeat the exercise below until you are thoroughly familiar with the configuration flow. Basically, you first create a PersistentVolume and then create a PersistentVolumeClaim that binds to the PV you just created. Finally, you create a Pod that mounts a volume backed by your PVC.
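The three-step flow described above, as one hedged sketch (the names, capacity, hostPath, and mount path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pvc
spec:
  accessModes: ["ReadWriteOnce"]   # must be satisfiable by the PV
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pod
spec:
  volumes:
  - name: task-storage
    persistentVolumeClaim:
      claimName: task-pvc          # binds the Pod to the claim
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: task-storage
      mountPath: /usr/share/nginx/html
```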
Recommended Courses
I bought this course and the exam as a bundle on Linux Foundation for USD 200 during a discount period. If you are new to Kubernetes, this course is informative and useful for grasping the concepts of Kubernetes. The material is more than enough to pass the exam, and it is valuable if you need to use Kubernetes in your job.
If you already have some experience or are familiar with Kubernetes and just want to pass the exam, there is a cheaper option to learn the concepts and practice. The course provided by Mumshad Mannambeth has many hands-on labs right in your browser. You don’t even need to provision your own cluster; just use your browser to access the environment and do the practice. The questions are similar to the actual exam, so you can get a taste of what you will encounter in the real exam.
You can also buy his course on Kode Kloud. There are many other DevOps related courses.
Practice
Practice all the exercises in this GitHub repo using the Kubernetes Playground provided by Katacoda if you don’t have your own cluster.
Tips
1. Use imperative commands to save time and use --dry-run=client to generate YAML files for you to edit easily. You will find the below command very useful.
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml
2. Make sure you are working on the correct context. You will be given the context information at the top of each question.
kubectl config use-context my-cluster-name
3. Make sure you are working on the correct namespace.
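One way to do this (a sketch; `my-namespace` is a placeholder) is to set the namespace on the current context so you don't have to pass `-n` on every command:

```shell
kubectl config set-context --current --namespace=my-namespace
```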
4. Make sure your webcam is working!
The Miserable Experience
The proctor needs to be able to watch your face and your screen at all times during the exam; if they cannot see you, they will not proceed with the exam. I actually scheduled the exam twice. The first time, although I passed all the environment tests before the exam and my webcam worked properly, the proctor said he could not see my webcam video several minutes into the pre-checking period (the exam had not started yet; we were just performing the identity and environment checks). Every time I reconnected, the proctor said he could see me, but after a while he could not see me again. This went on and wasted more than 4 hours (even longer than the exam itself!). I also phoned the technical support of the exam platform, and the support staff didn't find any problem at all. She could see me perfectly well, but the proctor could not! Finally, we could only cancel this exam and schedule it again.
I wrote an email to the Linux Foundation about this, and they offered me a pre-flight exam (going through all the pre-exam procedures, but with no actual exam taken). It's very strange: I used the same notebook on the same network, and this time the proctor could see me perfectly well without any disconnection. So to this day, I don't know what was happening in my first scheduled exam. Luckily, I was able to finally take the exam without any problems.
Final Tip
Practice, practice, and practice! Good luck with your exam! | https://towardsdatascience.com/pass-ckad-certified-kubernetes-application-developer-with-a-score-of-97-af072a65f1ce | ['Joshua Yeung'] | 2020-11-21 15:14:50.503000+00:00 | ['Ckad', 'Software Development', 'Kubernetes', 'Cloud Native'] |
Argument Capturing: A must know unit testing technique. | Argument Capturing: A must know unit testing technique. JAVING, Jun 22
@Captor is an annotation in the Mockito library that is used alongside the ArgumentCaptor class to capture arguments that are passed to the methods of mocked objects. It is always used alongside verify() to retrieve the arguments that were passed when a method from a mocked dependency is called. It is a useful tool that enables us to add extra assertions to our tests and therefore make our unit tests more accurate.
Here we have a GuestsBook class that will save a GuestEntry into the guestsBookRepository. The guestsBookRepository is a dependency of the class, and if we were to write a unit test, we would have to mock it. Do you think a mock will be enough to accurately test this class?
If we were to mock the dependency of this class and try to test it, the only thing we would be able to do would be to verify that there was an interaction with the guestsBookRepository object; we would not be able to accurately check the data that was passed.
The reason for this is that we don't have control over the GuestBookUtils class, which is used to generate the guestEntry. In unit testing terminology, this constraint is called a hard-wired dependency. We say it is hard-wired because the class currently has no mechanism to inject it.
This test is inaccurate because we are not checking the values that we are passing to the save() method
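The code from the screenshot is not reproduced here, but a verify-only test of that shape, assuming Mockito and JUnit 5 on the classpath (the `addEntry` method name is my guess at the article's API, not taken from it), would look roughly like:

```java
@ExtendWith(MockitoExtension.class)
class GuestsBookTest {

    @Mock
    GuestsBookRepository guestsBookRepository;

    @InjectMocks
    GuestsBook guestsBook;

    @Test
    void savesGuestEntry() {
        guestsBook.addEntry("Alice", "Great stay!");

        // We only know that save() was called once with *some* GuestEntry;
        // the values inside it are never checked.
        verify(guestsBookRepository, times(1)).save(any(GuestEntry.class));
    }
}
```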
If at this point we were to take some time to think about OOD (Object-Oriented Design), maybe we would reach the conclusion that something needs to be done with GuestBookUtils to allow ourselves to test the class more accurately. Should we get rid of the static method call? Should we inject a factory? There are probably multiple things we could do to make this class a bit more testable. But it is likely that those would require refactoring.
There’s nothing wrong with refactoring. I am pro-refactoring, and I believe we should always strive to improve the quality of our code base. Unfortunately, in real life we sometimes have to postpone these kinds of decisions, especially in very large and convoluted code bases, or when we need to meet a tight deadline.
Of course, the side effect of postponing refactoring will be the generation of technical debt. Maybe we will talk more about refactoring in another story, but for now let’s have a look at how @Captor can help us here, without altering our production code.
Maybe it is the right thing to do, but it is not the right moment.
In this alternative implementation of our unit test, we will use @Captor to create a more accurate version of the test. Notice that we have extra assertions now. This is possible because our guestEntryCaptor is recording the values that are passed to the save() method.
A more accurate test is possible thanks to the @Captor annotation
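As a sketch, assuming Mockito and JUnit 5 on the classpath (the `addEntry` and getter names are my guesses at the article's API), the captor-based version adds assertions on the captured values:

```java
@ExtendWith(MockitoExtension.class)
class GuestsBookCaptorTest {

    @Mock
    GuestsBookRepository guestsBookRepository;

    @Captor
    ArgumentCaptor<GuestEntry> guestEntryCaptor;

    @InjectMocks
    GuestsBook guestsBook;

    @Test
    void savesGuestEntryWithExpectedValues() {
        guestsBook.addEntry("Alice", "Great stay!");

        // capture() records the argument passed to the mocked save()
        verify(guestsBookRepository).save(guestEntryCaptor.capture());

        GuestEntry saved = guestEntryCaptor.getValue();
        assertEquals("Alice", saved.getName());
        assertEquals("Great stay!", saved.getMessage());
    }
}
```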
The mechanics are very simple, here’s a summary of the steps:
1. Create an object of type ArgumentCaptor, using the generic parameter to specify the type of object that you want to capture.
2. Annotate that object with the @Captor annotation.
3. In your usual mock verification, call the capture() method to enable the capturing.
4. Use the getValue() method of your captor object to get the recorded values and assign them to a variable.
5. Add the additional assertions. | https://medium.com/javarevisited/argument-capturing-a-must-know-unit-testing-technique-e88b3a6a6af1 | [] | 2020-06-22 09:38:34.273000+00:00 | ['Java', 'Software Development', 'Unit Testing', 'Junit', 'Mockito'] |
How to Prototype VR Designs Without Code | My key takeaway: when looking at the way we work and will work in the future, there is a lot of potential to use VR for certain tasks. This became increasingly obvious in the months since the COVID-19 pandemic began and the lockdown took place in most countries. We are living now in a world where remote work is officially here, bringing tons of advantages (and potential disadvantages) with it.
I wanted to understand the main pain points of remote work to find good use cases for VR. After my initial research, I came up with these main pain-points:
It can be difficult to concentrate when working from home.
There can be many distractions during work.
It can be difficult not having a dedicated workspace.
It can be difficult to collaborate on projects.
It can be difficult to do remote workshops.
It can be difficult to have ideation sessions online.
What became clear after the research is that people have problems with concentration; it’s difficult for many people to focus in places they don’t call their usual workspace. Secondly, people need better ways to collaborate, especially during workshops, ideation sessions, or even normal meetings.
The best way to integrate VR into our workflows is to use it for these two areas. This means only using VR for specific dedicated tasks, but not throughout the whole workday. I am convinced that the future of work will be all about collaboration and concentration! So I got to work prototyping a VR prototype that would enable just that.
The power of prototyping for VR ideation
In this post, I will guide you through my project and how I prototyped this project. I am a designer, not a developer. My goal is always to come up with a great solution, a great design that I can test, and to build a prototype without coding something myself.
Prototyping was extremely important in this project. It helped to validate ideas quickly, do user testing, and adjust the design. Starting a virtual reality project means using the right methods and tools which helped to design the experience. Here are my three quick tips on how to get started in VR by using the right tools and methods.
3 tips for designing in VR
The best recommendation for all digital designs is probably: Try it out! You shouldn’t design for VR without building a prototype that can be tried in a headset itself.
1. Start with pen and paper
Virtual reality is just the medium, and regardless of whatever medium, you should always make sure to start with the product design basics:
Who am I designing for?
What is the problem I am trying to solve?
Where and what context will this be used in?
Why are we doing this?
Think about the goals you want the user to achieve and perhaps even formulate a little elevator pitch for your experience.
Describe the scenes needed as a storyboard and scribble up some possible assets and UI alternatives. There are also grid templates available that can help you draw up a scene as a 360° panorama.
2. Asset creation
After finishing the sketches, you can head right into Adobe XD, where you can create the assets for your design. Start by creating different artboards for different scenes. The design process for VR is very similar to the usual process for digital designs for apps and websites, but it is very important to keep in mind that navigation is a key element of this experience.
For the topic of my VR prototype, collaboration and concentration, I focused on different elements that you can move around in a virtual space, allowing you to personalize your whole workspace. For the concentration workspace, the idea was that the user can change it to meet their needs with all necessary information and tools. For example, that might be notes from a meeting, sketches or even first design ideas.
To do this, I created a curved virtual working space in front of the user at a 90° angle. It was super important for me to test whether this angle actually worked, presenting the right amount of information without overwhelming the user.
3. Prototyping your VR experience with the Draft XR plugin
Draft XR is a free plugin for Adobe XD that allows you to create a realistic VR prototype, and it was an immense help during this project. Using it, you can quickly test your ideas and iterate on your designs.
Prototyping with VR tools requires some basic coding skills and takes much more time. The most valuable thing you can do, as a designer prototyping VR experiences, is to be able to put a VR headset on and test your designs out, right in Adobe XD.
For designers, VR is a game-changer
For us designers, it’s important to keep up to date, experiment with tools, and prototype our ideas to be able to consult the client in the best way and come up with the best solutions for the problems the future will throw at us. The new dimension and immersiveness of VR opens up so many new ways to design experiences.
I think it’s super important to stress that ideating does not require any fancy tools or coding skills. Prototyping and being able to test your ideas early in the process with real users is incredibly valuable. My forever favorite quote, from Tom and David Kelley, says it best, “If a picture is worth 1000 words, a prototype is worth 1000 meetings.”
Read my full case study of working with the Draft XR plugin in Adobe XD. | https://medium.com/thinking-design/how-to-prototype-vr-designs-without-code-6adfcabb1c8a | ['Patricia Reiners'] | 2020-09-08 13:32:48.825000+00:00 | ['Design', 'VR', 'Virtual Reality', 'Design Best Practices', 'UX'] |
Op-Ed: Why we need to talk about the applications of blockchain technology on the financial market | Michael Brenndoerfer is a current UC Berkeley Master of Engineering student in the class of 2018. He is in the Electrical Engineering and Computer Science (EECS) department, and his concentration area is Data Science & Systems. Most of his work at Berkeley revolves around Deep Learning, Systems and Crypto. After graduating, he plans on expanding his cryptocurrency related startup. In his free time, he loves to travel and is no stranger to living out of a suitcase.
Disclaimer: This piece was written in August of 2017. Certain data, especially market capitalization, has changed since then. Furthermore, over the last few months cryptocurrencies and Blockchain technology have seen a vast rise in mainstream awareness and acceptance.
What is Blockchain?
Blockchain technology is popular in the context of cryptocurrencies. People know of Bitcoin and Ethereum, but Blockchain technology can be used for extensive applications beyond just cryptocurrencies.
Blockchain is a technology that allows for multiple parties’ parallel access to a commonly shared, but distributed, ledger. Its major distinguishing feature is its unprecedented level of data integrity (Treat, Brodersen, Blain, & Kurbanov, 2017), while being decentralized. More precisely, a Blockchain can be imagined as a distributed and decentralized (and that’s the important point) database, which, put simply, is always in a state of integrity. This is achieved through a network of random participants that try to solve a cryptographic puzzle. This process is called proof-of-work (also known as mining) and is the pillar upon which the initial Blockchain technology is based. The process of successfully mining transactions leads to constant validation through the network, forming consensus, so that nobody can tamper with the data. Once a transaction becomes persistent, there is basically no way to undo the history. This allows for absolute trust in the data, in its consistency and integrity. (Crosby et al., 2015).
Photo by Vladimir Solomyani on Unsplash
Potential applications of Blockchain technology
Almost every financial institution is currently exploring and evaluating its possibilities. A heavily investigated application is the reduction of legacy IT and back-office processes by moving them onto the Blockchain. Estimates based on test environments show potential savings of more than 50% (Treat et al., 2017). In order to enable these changes, regulatory rules need to be adapted for the Blockchain. The BBVA Blockchain in financial services working paper by Cermeño (2016) suggests several approaches towards regulation, such as “supra-regulators” on a global level, or regional regulators which themselves report to a higher organization. This approach allows the organization to be split up into smaller, more granular consortia.
Recent joint research between the European Central Bank and Bank of Japan (2017) aimed to identify possibilities to move today’s financial transactions on distributed ledgers. The result of the research was that Blockchain based technologies might be able to meet today’s standards of Real-Time Gross Settlement (RTGS) systems, the backbone of gross transactions of securities and assets between banks (Committee on Payment and Settlement Systems of the central banks, 1997). However, a performance impact was noticed when increasing the number of transactions, and when increasing the distance between the participants in the network. Nevertheless, as shown by the European Central Bank and Bank of Japan (2017), distributed ledgers have the potential to further increase today’s security and reliability, in terms of availability, quality of service, and data safety.
“Since 2016, the usage of Cash has been heavily in decline in Sweden and many stores do not even accept it anymore.”
Further research by Bech and Garratt (2017) at the Bank for International Settlements and UC Santa Barbara investigated the applicability of central bank based cryptocurrencies. They scrutinized what a centrally-issued cryptocurrency would need to look like, and what use cases it would need to cover. An interesting development in that regard is that since 2016 the usage of cash has been heavily in decline in Sweden, and many stores do not even accept it anymore (Cermeño, 2016). Additionally, Bech and Garratt (2017) explained how such a trend makes the adoption of cryptocurrencies more likely.
“Every aspect indicates that the interest and demand in Blockchain is going to increase further.”
With its first appearance in 2013, Blockchain is still a comparatively young technology. However, according to CoinMarketCap.com (2017), at present all cryptocurrencies combined yield a market capitalization of more than $150B USD, and basically every bank is investigating use cases. Every aspect indicates that interest and demand in Blockchain are going to increase further. [Update: Since writing this piece, in December 2017 Bitcoin reached its highest market cap at almost $330B USD, before falling again to around $140B USD in mid-February 2018.]
Learn more about Michael and his cryptocurrency startup here.
References | https://medium.com/the-coleman-fung-institute/op-ed-why-we-need-to-talk-about-the-applications-of-blockchain-technology-on-the-financial-market-3d728441c36a | ['Berkeley Master Of Engineering'] | 2018-03-26 18:03:32.489000+00:00 | ['Uc Berkeley', 'Op Ed', 'Blockchain', 'Engineering', 'Bitcoin'] |
Starbucks Customer Segmentation Analysis with Python | There has been lots of talk around using data for marketing, and you have probably heard about the high-level applications of data usage on Netflix, like the story of Cambridge Analytica in The Great Hack or, more recently, The Social Dilemma. This project is not as high level as that, but in a way it helps marketing teams understand their users and plan marketing campaigns. I took on this project to understand how one could use data to inform business decisions, and it was a great learning process.
Do you know Starbucks?
Starbucks is an American multinational corporation with various chains of coffeehouses and roastery reserves, headquartered in Seattle, Washington. With over 28,000 locations and an army of employees, Starbucks’ mission is “To inspire and nurture the human spirit — one person, one cup and one neighborhood at a time.”
To do this, the company has to keep making sure it targets the right customers at the right time throughout their customer journey. It has a diverse customer base, and understanding patterns of customer behavior is a crucial need for the business. The data for this was obtained from a software simulation of what the real customer base of Starbucks looks like, and my goal was to group customers into different demographics and then understand which offers they responded to more (offer completion).
Offer type:
BOGO (Buy one get one)
Discount
Informational
BOGO and discount offers come with rewards, but informational offers don’t come with any. The latter is more like a normal brand campaign for products the company sells. I wasn’t interested in the most-viewed offer but in the number of completed offers, since companies are looking at sales, and if customers do complete an offer, that might signal that they want to see more of those campaigns.
Preprocessing steps
Understanding the data: This involved taking a look at the data to understand what each parameter meant, looking at data types, doing descriptive analysis, and checking if there was any data that needed cleaning.
Cleaning: This involved removing null values, changing the structure of the data, renaming values in rows and columns, and finally merging the data.
Exploratory Analysis: To see if I was missing anything, I did a quick exploratory data analysis and found out everything was good to go.
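The cleaning and merging steps above can be sketched with pandas on toy data. This is a minimal illustration, not the project's actual code: the column names, the placeholder age of 118, and the sample rows are all assumptions standing in for the real simulator files.

```python
import pandas as pd

# Toy stand-ins for two of the Starbucks files: customer profiles
# and an event log. Column names here are illustrative only.
profile = pd.DataFrame({
    "customer_id": ["a1", "b2", "c3"],
    "age": [55, 118, 34],          # 118 used here as a stand-in for a missing-age placeholder
    "income": [62000.0, None, 41000.0],
})
transcript = pd.DataFrame({
    "customer_id": ["a1", "a1", "c3"],
    "event": ["offer received", "offer completed", "offer completed"],
})

# Cleaning: drop rows with null values and placeholder ages,
# then rename a column for readability.
clean = profile.dropna()
clean = clean[clean["age"] != 118]
clean = clean.rename(columns={"income": "annual_income"})

# Merging: join customer demographics onto the event log,
# then count completed offers.
merged = transcript.merge(clean, on="customer_id", how="inner")
completed = merged[merged["event"] == "offer completed"]
print(len(clean), len(completed))  # → 2 2
```

The same drop-then-rename-then-merge flow scales to the full simulated dataset; only the column names and placeholder values change.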
Exploratory Data Analysis
After preparing and cleaning the data (the hard part, phew!) I generally noticed that 24% of customers were people aged between 50 and 59, which is higher than all customers aged 18 to 39 combined (22%). The second-largest group were those aged between 60 and 69, which accounted for 20%. Does this mean that young people don’t drink much coffee?
Embracing Uncertainty To Grow Your Business: What Works For Co-Creating Inclusion Founder Alethea… | The Nitty-Gritty:
How Co-Creating Inclusion founder Alethea Fitzpatrick took a long & winding road to creating her new company
Why embracing uncertainty has helped her stay flexible while building her business
The many pivots she’s made from architecture to photography to operations management to diversity consulting
Why respecting her Zone Of Genius has kept her from getting caught up in expectations
Starting a business is a risk.
Running a business is a risk.
Growing a business is a risk.
Ostensibly, we’ve all signed on for this risky life as an entrepreneur. But, we often find ourselves searching for certainty and hunting for the “sure thing.”
We do it every time we think another $2000 course is going to answer all our questions about building a successful business. We do it every time we wait a few more months to launch a new offer into the world. We do it every time we avoid reinforcing a boundary because it might upset a client.
In our last episode, Episode 241 with Charlie Gilkey, we talked about how striving for certainty keeps us stuck.
When we aim to choose the “sure thing,” we hesitate, procrastinate, and avoid.
Charlie advocated for building our courage by finding all the moments in the day when we can choose the option that has room for growth, for vulnerability, for risk… and yes, for uncertainty.
I was reminded of that idea during my conversation with today’s guest, Alethea Fitzpatrick.
Alethea is the founder of Co-Creating Inclusion, a diversity, equity, and inclusion consulting firm with a mission to shift culture and drive equity through workshop facilitation, leadership development, and business integration.
But Alethea has also been the founder of a host of other ventures!
To continue our focus on resilience and entrepreneurship, I wanted to talk with Alethea about the long and winding journey she’s taken to get to where she is now. Because where she is now is authentic, organic growth and a whole new level of success doing work that is incredibly important to her (and to the world).
We’ll get to how she’s achieved that in a minute — but first…
…let’s take another look at how having the courage to tolerate uncertainty, to even embrace uncertainty, can work in a business.
Later in the conversation, Alethea shares that she’s chosen clarity of her Why and her What but she’s remaining open to how it’ll all come together. She’s choosing to be strategic about designing a container that’s flexible enough to hold different outcomes.
I think this is a beautiful example of what Charlie was talking about in our last episode — but it also seems to be the secret sauce for how Alethea has gotten where she is right now. She has always followed clarity while allowing for openness and uncertainty so that she could grow into the direction of her goals.
Keep that in mind as Alethea and I talk about the journey she’s taken to get to where she is now.
Alethea and I also talk about the businesses and jobs that predate Co-Creating Inclusion, the moment she realized there was a new opportunity presenting itself, how the transition into consulting felt, and how her Zone of Genius keeps her from getting caught up in expectations.
Now, let’s find out What Works for Alethea Fitzpatrick! | https://medium.com/help-yourself/embracing-uncertainty-to-grow-your-business-what-works-for-co-creating-inclusion-founder-alethea-76690b69a1a6 | ['Tara Mcmullin'] | 2019-10-08 13:38:28.965000+00:00 | ['Small Business', 'Resilience', 'Podcast', 'Entrepreneurship', 'Business'] |
The Importance of Fun | The Importance of Fun
Without it, you get sick.
Photo by Angelo Pantazis on Unsplash
I teach people how to succeed at weight loss, and one of the important things I teach them is that having fun in your life is an absolute requirement for success in weight loss. In fact, it’s an absolute requirement for health in general.
You’ve heard “All work and no play makes Jack a dull boy”? It’s true, and not just dull, but sick; sick in body, mind and spirit. In fact, we know that there is a direct correlation between mood and disease and pain. Your brain and your body need the chemicals that flow when you’re having fun, for your mood, for your spirit, and for all your body functions, including your immune system and your unconscious drive to obtain what you need.
The Pleasure Principle
Sigmund Freud, M.D, one of the giants in modern psychological theories, coined the term “The Pleasure Principle” in the 1920's, intuitively concluding that the most basic part of our personality is like an innocent egoless child, discovering extreme pleasure, then pursuing that ecstasy, that fun, without restraint or limit. Living in that sublime state of consciousness was the goal of life. Of course, it wasn’t long after birth that we discovered limits.
Indulging without restraint, whether it is a hot fudge sundae or joyous sexuality, can be ecstasy, almost heaven on earth. But then, there can be a downside, unless we find a way to moderate the indulgence. We need to establish conditions and control so it does not produce destructive consequences like disease, ruined relationships and addiction.
Freud wasn’t the first to note the extreme importance of pleasure. From our earliest days of wondering about our being and purpose, philosophers have concluded that hedonism, pleasure-seeking, was central to our life. In the oldest example of civilization’s wisdom literature, the Epic of Gilgamesh, we read, “Fill your belly. Day and night make merry. Let days be full of joy. Dance and make music day and night …These things alone are the concern of men.”
Greek philosophers embraced hedonism. Democritus said the supreme goal of life was “contentment” or “cheerfulness”. Of course, here, he was talking not just about physical pleasure, but emotional pleasure, contentment, happiness, satisfaction. It was not just about food and sex.
Another ancient Greek philosopher, Epicurus, believed that the greatest good was to seek modest, sustainable pleasure in the form of tranquility and freedom from fear and absence of pain. Hedonism, pleasure-seeking, for him, was not simply indulging in an orgy of physical pleasure. It was the pursuit of contentment, satisfaction, peace and happiness. The Buddha, confronting the suffering that is a normal part of life, found and taught followers how to find bliss. A life free from fear and suffering, filled with life’s greatest pleasure is the hallmark of real health.
The science and psychology of pleasure and fun.
Pleasure and fun are experiences of thought and feeling, mind and emotion, a state of consciousness. But they are not just an ethereal product of physical stimulation.
We know now that certain experiences trigger a flood of chemicals in your brain that produce good feelings, degrees of euphoria (from Greek euphoros ‘borne well, healthy’, from eu ‘well’ + pherein ‘to bear’.) Some of these chemicals are the neurotransmitters that are associated with moods like elation and happiness. Some are called endorphins, meaning “morphine within” because they were discovered to act in the same way in the brain as the opiates, the pain-killing feel-good drugs that can create a feeling of euphoria (but with a terrible consequence that the natural “drugs” don’t have).
From day one, when we discovered food and the pleasure of satisfying our needs, we began flooding our brain with these chemicals that flow when we do something that feels good, just for the pleasure of it, just for the fun of it. Then we discovered other ways to pleasure ourselves, and all we wanted to do all day was play and have fun. Life was good!
Sometimes, as we grow up, life becomes hard, and we stop having fun. We may even begin to think that having fun is not OK. We start getting serious, concerned with a good work ethic and productivity instead of playing. That may be good for business, but it can cause problems. If we stop flooding our brains and bodies with those feel-good chemicals on a regular basis, we stop the natural medicine that keeps us well, body, mind and spirit. We get sick, body, mind and spirit. We need to keep doing those things that trigger the flow of the good chemicals. And we know what triggers that.
Playing, laughing, singing, dancing… these are the things that pump those well-being chemicals. Having fun is what makes them flow. Good food, playing games, making love, making merry. All of these things (and more) flood our brains and bodies with those chemicals that make us feel good, that produce health. You’ll know it’s happening when you’re having fun. And you need it. Without fun, we not only become dull, we get sick.
Put fun things on your “to do” list, and reduce the things that feel like punishment and pain.
Some of us were brought up in cultures that frowned on having fun. It was childish and a waste of time. Grown-ups worked! You suffered! We were taught that that was the right and noble way to be.
For me, dieting was an example of the suffering that was necessary. You had to give up enjoyment to lose weight, I was told. But it didn’t work. It made me a miserable 300+ pounder. But thank God, I found what worked. The solution to permanent weight loss lies in enhancing the joy, not denying it. I found a way to live that’s more satisfying than what I did when I was an overweight overeater.
There is suffering in life, but the way to a good life is not in letting it be endless suffering. For a good life, we need to find a way out of that, to find our bliss, our happiness.
I don’t want to give the impression that one should avoid difficulty or doing things that are hard. Learning how to achieve and maintain a healthy weight is work. But working at something you love doing can seem almost like fun.
We need fun. We need en-joyment. We need to be pumping joy in to our lives.
Don’t forget to have some fun every day, every weekend, and several big times a year. Don’t let your life fall into joylessness. All work and no play make us dull and sick. Make it your business to have fun on a regular basis. It’s really important.
William Anderson is a Licensed Mental Health Counselor, the author of “The Anderson Method of Permanent Weight Loss” (paperback and Kindle at Amazon, audiobook at Audible). He was obese until his early thirties when he developed the solution. He lost 140 pounds, has kept it off for 35 years, and has taught thousands how to solve their weight problem. | https://medium.com/thrive-global/the-importance-of-fun-39355f3d30e7 | ['William Anderson'] | 2019-03-07 12:26:54.435000+00:00 | ['Mindfulness', 'Happiness', 'Health', 'Success', 'Self Improvement'] |
Project Valiant | In the early 21st century, I took a road trip with about 11 friends from Philadelphia to San Francisco and back. My friend Tadge brought along a 4x5 camera, and a lens called an Apo-Lanthar 300mm f/9, which was radioactive enough to set off geiger counters. I asked him, “What do you take pictures of with a radioactive lens?” He replied: “America.”
A decade later in 2010, I was ready for another cross country jaunt. I was living in San Francisco with a 1964 Plymouth Valiant convertible, a car I’d always wanted, but that didn’t exist outside of junk yards in the salty rusty Northeast.
It was a magical car. It predated the moon landing. It had an ashtray for every passenger. It had an AM radio, incidentally the most electrically complex thing in the car. Everything else was mechanical, with a “can’t kill ‘em” Slant-6 engine. When I pressed down on the gas pedal, it pushed a lever that opened a little flap on the carburetor, which allowed air to gasp in, huffing atomized jets of gasoline with it down to the pistons, where a series of valves would close, and a spark would trigger an explosion in perfect time. No transistors or sensors or limiters anywhere along the way, just iron and fire and simple machines. The slant-6 is a clock that runs on explosions.
I’d driven it to Big Sur a half dozen times, whenever someone came to visit for more than a couple of days. In the Valiant, you got a totally unobstructed view. No roll bars or head rests or shoulder belts or other gizmos to get in your way. I wished I could take a picture that captured the feeling.
I tried a few things. First, I stuck a GoPro on the bumper, but it was too low to the ground, and kind of Mad Max-y. Then I tried suction-cupping my Canon 7D to the windshield with a fisheye lens. That was better, but even that didn’t give you a full sense of place. There were things to look at in all directions, no matter where I pointed it, you were missing something.
I did some googling and found a Canadian kid who’d taken 360 video of the world’s largest dodgeball game. I looked over his technical writeup and started doing some research of my own. Turned out, the best way to do it was to sync up 5 cameras (with a clap, just like the old days), dump individual frames from each one (30 frames per second * 5, 150 images per second. Yeesh.) and then stitch them all together one by one with Panotools. Arduous work, but technically doable. There really wasn’t any info about 360 video at this time, only a handful of people were making it, and they were doing so by applying techniques from still photo stitching frame by frame to video.
So I set to work as I often do: I begged some free welding work from my friend Jule, borrowed two GoPros from him and my dad, and purchased an extra two myself. He overnighted me a square piece of aluminum with a tripod-compatible bolt on the bottom that I would use to mount all 5 GoPros. I also ordered 5 old-school iPod connectors, which GoPro used for their extension port, and soldered up the power connectors so I could power all the cameras externally while driving. These ran down to a small 5V power supply driven by a 7805 voltage regulator circuit I soldered together on some perfboard. It got piping hot, but it sat outside the car on the windshield, and so the rushing, free air of America’s highways kept it cool. I did burn myself on it a few times, though.
Each camera had a 32GB SD card, which meant I could record about 4 hours of video at a time before all 5 were full. I bought 12 of them and when we stopped for gas I would rotate all 5 out with fresh cards, and Kat would copy and clear each of the old ones to an external hard drive on my laptop. By my calculations, the entire drive was going to take somewhere north of 400GB to store, and the hard drive I carried would write billions of 1’s and 0’s to a magnetically sensitive disk as we drove.
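The back-of-the-envelope math behind that figure can be sketched in a few lines (the per-leg numbers come from the text; the recording-hours value passed in below is my assumption):

```python
# Rough storage estimate for the 5-camera GoPro rig.
CAMERAS = 5
CARD_GB = 32            # one 32 GB SD card per camera
HOURS_PER_FILL = 4      # all five cards filled after ~4 hours of recording

def storage_gb(recording_hours: float) -> float:
    """Total gigabytes written across all cameras for a given recording time."""
    card_sets = recording_hours / HOURS_PER_FILL
    return card_sets * CAMERAS * CARD_GB

# Each 4-hour leg burns through 5 x 32 GB = 160 GB of cards,
# so roughly 10 hours of actual recording lands at the ~400 GB mark.
print(storage_gb(10))  # 400.0
```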
The first day was the longest drive I’d ever done in the car to date: 13 hours straight from San Francisco to Salt Lake City, through the mountains on the Nevada border and past the salt flats where they set land speed records. And the whole setup worked. We arrived dusty and exhausted and crashed hard that night, but the next morning I went through the hard drive and scanned through the footage and it had all worked!
That was the first and only successful day of the trip, video-wise. As we continued across the high plains of Wyoming and South Dakota, one or more of the cameras conked out during a leg, taking all the footage it had recorded with it. Deep scans and SD card recovery tools were no use, and a 360 degree video with a big gap on one side didn’t really work. I tried to narrow down which cards or cameras were bad, but it seemed to move around unconfined to any particular piece of hardware, and we eventually gave up trying once we hit heavy rain in South Dakota after an underwhelming trip to Mount Rushmore.
When I got home to Boston, I never quite figured out what to do with this huge pile of footage, still filling up a big chunk of a few hard drives on the shelf. I cobbled together a WebGL video player for 360° videos called Valiant360 and published a 30-minute section of the road trip that did work, as we departed my garage and left San Francisco heading due east before the sun came up. It’s a project I feel a little guilty about, since it never saw the light of day in a big way; then again, I never really figured out how to stitch such a huge pile of imperfect video, with sections missing at random.
Now it’s easier to make 360 video, with lots of precision 3D printable brackets on Thingiverse, and a handful of commercial solutions. Someday, perhaps, someone may succeed in capturing the whole country end to end and all the way around. But in the end, you should probably just drive it yourself. The world’s smaller than you think, except for Nebraska. | https://medium.com/cinematicvr/project-valiant-7978ddf9bee | ['Charlie Hoey'] | 2017-07-01 10:01:42.841000+00:00 | ['Virtual Reality', 'GoPro', 'Storytelling', '360 Video', 'VR'] |
Over-the-Wall Data Science and How to Avoid Its Pitfalls | Over-the-Wall Data Science and How to Avoid Its Pitfalls
Putting machine learning models in production
Over-the-wall data science is a common organizational pattern for deploying data science team output to production systems. A data scientist develops an algorithm, a model, or a machine learning pipeline, and then an engineer, often from another team, is responsible for putting the data scientist’s code in production.
Such a pattern of development attempts to solve for the following:
Quality: We want production code to be of high quality and maintained by engineering teams. Since most data scientists are not great software engineers, they are not expected to write end-to-end production-quality code.
Resource Allocation: Building and maintaining production systems requires special expertise, and data scientists can contribute more value solving problems for which they were trained rather than spend the time acquiring such expertise.
Skills: The programming language used in production may be different from what the data scientist is normally using.
However, there are numerous pitfalls in the over-the-wall development pattern that can be avoided with proper planning and resourcing.
What is over-the-wall data science?
A data scientist writes some code and spends a lot of time to get it to behave correctly. For example, the code may assemble data in a certain way and build a machine learning model that performs well on test data. Getting to this point is where data scientists spend most of their time iterating over the code and the data. The work product could be a set of scripts, or a Jupyter or RStudio notebook containing code snippets, documentation, and reproducible test results. In the extreme, the data scientist produces a document detailing the algorithm, using mathematical formulas and references to library calls, and doesn’t even give any code to the engineering team.
At this point, the code is thrown over the wall to Engineering.
An engineer is then tasked with productionizing the data scientist’s code. If the data scientist used R, and the production applications use Java, that could be a real challenge that in the worst case leads to rewriting everything in a different language. Even in the common and much simpler case of Python on both sides, the engineer may want to rewrite the code to satisfy coding standards, add tests, optimize it for performance, and so on. As a result, ownership of the production code lies with the engineer, and the data scientist can’t modify it.
This is, of course, an oversimplification, and there are many variations of such a process.
What is wrong with having a wall?
Let’s assume that the engineer successfully built the new code, the data scientist compared its results to those of their own code, and the new code was released to production. Time goes by, and the data scientist needs to change something in the algorithm. The engineer, in the meantime, has moved on to other projects. Changing the algorithm in production becomes a lengthy process, involving waiting for an engineer (hopefully the same one) to become available. In many cases, after going through the process a couple of times, the data scientist simply gives up, and only critical updates are ever released.
Such interaction between data science and engineering frustrates data scientists because it makes it hard to make changes and strips them of ownership of the final code. It also makes it very difficult to troubleshoot production issues. It is also frustrating for engineers because they feel that they are excluded from the original design, don’t participate in the most interesting part of the project, and have to fix someone else’s code. The frustration on both sides makes the whole process even more difficult.
Breaking down the wall between data science and engineering
The need for over-the-wall data science can be eliminated entirely if data scientists are self-sufficient and can safely deploy their own code to production. This can be achieved by minimizing the footprint of data scientist’s code on production systems and by making engineers part of the AI system design and development process upfront. AI system development is a team sport, and both engineers and data scientists are required for success. Hiring and resource allocation must take that into account.
Make the teams cross-functional
Involving engineering early in the data science projects avoids the “us” and “them” mentality, makes the product much better, and encourages knowledge sharing. Even when a fully cross-functional team of engineers and data scientists is not practical, forming a project team working together towards a common goal solves most of the problems of over-the-wall data science.
Expect data scientists to become better engineers
In the end, data scientists should own the logic of the AI code in the production application, and that logic needs to be isolated in the application so that data scientists can modify it themselves. In order to do so, data scientists must follow the same best practices as engineers. For example, writing unit and integration tests may feel like a lot of overhead for data scientists at first; however, the value of knowing that your code still works after you’ve made a change soon overcomes that feeling. Also, engineers must be part of the data scientists’ code review process to make sure the code is of production quality and there are no scalability or other issues.
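As a minimal sketch of what that looks like in practice — the normalization function and its contract here are hypothetical, not from the article — a data scientist’s unit tests might read:

```python
# A unit test for a hypothetical feature-engineering step.
# The function and its contract are illustrative only.
def normalize_scores(scores):
    """Scale raw scores into [0, 1]; degenerate inputs are handled explicitly."""
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if lo == hi:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def test_normalize_scores_bounds():
    out = normalize_scores([3.0, 7.0, 5.0])
    assert min(out) == 0.0 and max(out) == 1.0

def test_normalize_scores_degenerate_inputs():
    assert normalize_scores([]) == []
    assert normalize_scores([2.0, 2.0]) == [0.0, 0.0]
```

Run under pytest or any test runner, a failing assertion after a change flags the regression before the code ever reaches production.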
Provide production tooling for data scientists
Engineers should build production-ready reusable components and wrappers, testing, deployment, and monitoring tools, as well as infrastructure and automation for data science-related code. Data scientists can then focus on a much smaller portion of the code containing the main logic of the AI application. When the tooling is not in place, data scientists tend to spend much of their time on building the tools themselves.
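One way such tooling can look — this decorator and its name are illustrative, not a reference to any particular platform — is a thin wrapper the engineering team provides, so the data scientist’s contribution stays limited to the core logic:

```python
import logging
import math
import time
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)

def production_step(name: str) -> Callable:
    """Decorator the platform team might provide: wraps a data scientist's
    function with logging, timing, and error reporting, so only the core
    logic needs to move from the notebook into production."""
    def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
        def inner(*args: Any, **kwargs: Any) -> Any:
            log = logging.getLogger(name)
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                log.info("step ok in %.3fs", time.perf_counter() - start)
                return result
            except Exception:
                log.exception("step failed")
                raise
        return inner
    return wrap

@production_step("score")
def score(x: float) -> float:
    # The data scientist owns only this logic: a plain logistic score.
    return 1.0 / (1.0 + math.exp(-x))
```

With this in place, changing the model logic is a one-function edit for the data scientist, while logging and monitoring stay under engineering’s control.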
Avoid rewriting the code in another language
The production environment is one of the constraints on the types of acceptable machine learning packages, algorithms, and languages. This constraint has to be enforced at the beginning of the project to avoid rewrites. A number of companies are offering production-oriented data science platforms and AI model deployment strategies both in open source and commercial products. These products, such as TensorFlow and H2O.ai, help solve the problem of a production environment being very different from that normally used by data scientists.
This post first appeared on the Life Around Data blog. Images by MabelAmber and Wokadandapix on Pixabay | https://towardsdatascience.com/over-the-wall-data-science-and-how-to-avoid-its-pitfalls-5af6fa2eef2b | ['Sergei Izrailev'] | 2019-08-28 12:14:05.434000+00:00 | ['Machine Learning', 'Data Engineering', 'Team', 'Data Science', 'Product Management'] |
Côte-des-Neiges (Chapter III) | EL QUE NO SE PUÉ TIRÀ SE’ONDEA — WHERE THERE IS A WILL, THERE IS A WAY
I had never been more enchanted, more delirious with desire, and more terrified of anybody than I was of this woman. There she was, her body wrapped around my body, her eyes locked with mine — I was hers, of that I had little doubt…
A brisk 45-minute walk… or a quick 15-minute taxi ride. Either way, we were going to dance the night away
Background
This is a work of fiction. Any similarities to actual persons and/or events are purely coincidental. The main character — and narrator — is Alejandro Zurita, a twenty-two-year-old bon vivant, studying Business Administration at Vanier College in Montréal in the early spring of 1996, just as the dot-com boom and Web 1.0 are about to start…
Scents of summer
The air felt warm but comfortable, a riotous combination of scents jockeying for position inside our senses: faint notes of pollen, flowers, and fresh-plowed earth coming from just outside the city. Every year, as the summer entered its final weeks, farms would start spreading manure so that the nutrients would fertilize the soil before the winter; then the wind would carry the smell far and wide.
Gitana, gitana
Gitana, gitana
Tu pelo, tu pelo
Tu cara, tu cara
“Oui, bonjour? C’est Candy,” she answered.
The old Willie Colón song, Gitana, was Candy’s ringtone. When her phone rang, she reached into her purse for her sleek and decidedly utilitarian Motorola STARTAC. It may look dated now when compared to an iPhone SE (2020), but in 1996 this was the best and latest in mobile technology.
It was THE status symbol, announcing the owner as a modern and plugged-in member of the intelligentsia. Candy was a recently minted member of the club as a young and up-and-coming marketing executive working for Hydro-Québec, the provincial power utility and one of the most important centers of money, power and influence in the province.
“Oui, ça va, Marie,” she said, looking up with a smile, reaching out and squeezing my hand. Marie La Liberté was one of her oldest and closest friends. She was a registered ER nurse at the Centre Hospitalier de l’Université de Montréal, the largest hospital in the city — and by 2020 — the largest in North America.
“We just finished le souper at Hélena…oui, just walking to the old port…” she trailed off, as we continued on Rue McGill, enjoying the sights and sounds.
The scents of ethnic cuisine — an exotic, eclectic mix of oils, meats and spices — wafting out of the open doors and windows, spilling onto the streets; tightly wrapped around the bits of sounds and conversation.
The rhythmic patterns of cutlery clinking against dishes, off-key interpolations of the descending E Phrygian scale, Cuban clave, or Cante Alentejano providing the soundtrack. The faint, sulphurous smell of car exhaust, the omnipresent aroma of fish, tainted water, and rotting moss coming from the nearby Saint Lawrence River… C’est Montréal. Vivant. Brut!
“Oh, no. We are going dancing tonight. Alejandro me soulèvera et nous danserons toute la nuit,” she whispered on the phone, giving me a sexy, knowing wink; leaving Marie with only the sound of her voice to read between the lines.
We walked down Rue McGill and turned right on Rue William, continuing to the corner of Rue des Soeurs-Grises and slowing down some more as we passed Restaurant Hà-Vieux-Montréal, another one of the city’s premier spots.
The reputation was well deserved, a meal at Hà wasn’t just a meal. Hà immersed their guests in the traditional Vietnamese concept of Bia hơi¹. In Vietnamese, “bia hoi” refers to both fresh-brewed beer and the roadside restaurants where locals, perched on plastic stools, gather to consume it.
“What?” she said, sounding both annoyed and surprised.
Sacrament and Debauchery
“Coño; pero que mierda e’ ‘eto! They moved?”, she asked; switching from French, to English, to Spanish, using the Hispaniola dialect, with the South-Eastern register; all without missing a beat. Yeah I know. Sexy and intelligent, she was THAT hot.
At the corner of Rue William and Rue Wellington, we turned left towards Charon Brothers Park. Square des Frères-Charon as it was known in French, is a small square and park at the crossroads of several historic streets around Old Montréal; bordered clockwise by Rue Wellington, Rue McGill, Rue Marguerite-d’Youville and Rue des Soeurs-Grises.
The square was built as part of a network of public spaces along the axis of McGill Street; which itself is a historic thoroughfare that links the Old Port/Old Montréal to the contemporary city center, Centre Ville. We continued on Rue Marguerite-d’Youville until Rue de Calliére and veered off to walk around the Pointe-à-Callière Museum complex; taking the time to admire its mix of modern and faux-classical architecture.
“Sure, Marie. Il n’y a pas de problème. We’ll meet you guys at l’cathédrale,” she said.
The Cathedral or l’cathédrale, was a small Latin Club on Rue Peel, near the Peel Métro station. It was just a small, square room with tables and bar stools arranged against the wall; with the bar at one end, and the dance floor in the middle.
It wasn’t much to look at but it was hugely popular with our crowd, people who enjoyed real music — played by real musicians with real instruments — and who knew how to handle themselves on a dancefloor.
If you knew how to dance Salsa, Merengue, Cumbia and Bachata, and you liked to be seen and dressed to the nines, l’cathédrale was the place to be on Friday nights.
“That was Marie. They moved l’cathédrale,” she said as she ended the call and returned the phone to her purse.
The tone in her voice had that excited quality, that distinctive timbre one gets when dying to tell someone a big secret, but not saying much and waiting for them to figure out the clues.
“Y donde ‘tà ‘hora. Where to did they relocate now?” I asked.
“3988 Rue Saint-Denis, Le Plateau-Mont-Royal,” she said, impatiently waiting for my mind to catch up and for me to get the joke.
“Wait a second!” I said, finally making the connection.
The sacrilegious jerk in me, the a-hole who enjoyed watching porn in which the female performers dressed as nuns, could not contain his glee.
“That’s the old Catholic Church. The one they had to close down last year!”
“That’s right, Saint Jude Catholic Church. Looks like ol’ Judas Iscariot is up to his old tricks again,” she joked, and the ironic pun was not lost on me.
The influence of the once all-powerful Catholic Church of Québec had been reduced from a lion’s roar to a domesticated cat’s purr. By 1996, the average attendance rate in the province’s churches had plummeted to an unsustainable 7%; down from a lofty high of 90% in the years leading up to the 1960s.
All told, about 40 churches were falling out of use each year in the province, in almost all cases due to shrinking congregations and parishes no longer being able to pay their bills. The Archdiocese of Québec’s inability to adapt to the changing realities of dwindling congregations and an aging population — along with a lack of young people stepping up to replace the retiring clergy — had culminated in the demise of one of the province’s most powerful institutions.
The end result has been an increasing number of unused church properties being sold to the private sector⁴,⁵,⁶. Some churches have been converted into pizza parlors, Spas, or even luxurious condominiums — and one, apparently — into a temple of hedonistic inebriation, sweaty bodies and wanton debauchery.
“Come, my child,” I said, doing my best pervi-Monsignor impersonation,
“tonight we shall listen to the gospel of Tito Puente, Héctor Lavoe, Willie Colón and the venerated apostles of La Fania All Stars,” provided we don’t burst into flame the minute we set foot inside the building, I thought.
“Yes, father,” said Candy in her equally exaggerated, naughty catholic school girl³ voice.
“Oh, we are going to burn in hell,” I thought, as I took her hand to continued our stroll.
A voice that conjured Eden
As the sun was finally setting and the first points of light appeared in the sky, we left the Pointe-à-Callière Museum behind, casually strolled down Place Royale, and back onto Rue de la Commune towards the Vieux Port.
Rue de la Commune traces the original shore of the Saint Lawrence River. The historical buildings along the north side of the road are former commercial buildings, now turned into upscale, hipster shops and restaurants serving the city’s booming tourist industry.
A leisurely 45-minute walk or a brisk 15-minute taxi ride… either way, we planned to dance the night away
We walked by the Montréal Science Centre, with its massive IMAX Theatre and up to the Sailor’s Memorial Clock and Tour de l’Horlodge — The Montreal Clock Tower. This impressive monolith, whose construction began in 1919 and lasted until 1922, is set on a small bend along the river shore; offering fantastic views of Île Sainte-Hélène and the South Shore.
The Clock Tower is about 45m or 148 ft tall, the stairs have a total of 192 steps from the bottom to the top of the tower. There are three observation stops along the staircase. The north facing side has a memorial plaque to the sailors who died during the First World War; while the western facing side has rectangular columns reaching from the base.
On one of the green spaces along la promenade there were several street performers, food vendors and souvenirs peddlers all vying for the tourists’ attention and dollars. We passed a quartet of musicians playing a strip down version of an old “fusón” Son Cubano Candy liked, called De Oro by Fernando Echavarria⁸,⁹ y La Familia André. Excited, she grabbed my hand and dragged towards them.
“Oye, mi pana. Dejeme cantar un poquitico aqui!”, Yo! move over, let me sing some bars! She said to the guy singing. She just grabbed the microphone, and the poor guy simply stepped aside and let her.
Y hay en tus ojos negros, junto al sol
Lo alegre en tu sonrisa
Y tu pelo en la brisa de oro
Mi sueños trenzó (To’ el mundo) te quiero
The guy playing the quinto, trés golpes and salidor congas, nodded at me, as if asking “Do you want some of this…?” He didn’t know I had a set of cueros at home and played with friends every once in a while. I sat down and proceeded to show off my marcha, timba, and cumbia prowess — doing my best to honor Pedro Peralta, the percussionist in the original recording — while Candy’s voice filled the air,
Que de tus ojos le hablo a tu boca
Y a tu boca le hice sentir
Lo que guardaba aquí en mi pecho
Y sólo despertó por ti
En tu pelo teji mil sueños
Y en tu aliento sembre un clavel
En tu cuerpo hice mi cobija
Y mi sábana fue tu piel, te quiero
She was a goddess. Her silky-smooth, breathy voice bringing Fernando’s poetry to life,
of my eyes, I spoke to your lips
and I made your lips into desire
A desire that burns inside my heart
That roars for no one but you
With a strand of your hair, I wove a thousand dreams
With your voice I conjured Eden
Inside of me you found your shelter
My bare skin your blanket
By the time the music ended, over one hundred people had congregated to watch and hear “her performance”. She got an already-standing ovation from the now “adoring” public. A couple of older gentlemen gave her flowers, hastily bought from the street vendors; and professed their undying “love”, to the chagrin of their wives. The guys playing kept pleading,
“Otro màs! Otro màs!” One more. One more. It was all very sweet, spontaneous and beautiful. The perfect ending to a romantic stroll through what has to be the most idyllic place in all of Montréal.
A fresh poison each week
We headed out again onto the street, walking a few paces to the taxi stands; a quick 15-minute ride later, we arrived at l’cathédrale. The place was packed. Jean-Pierre, “l’incroyable Hulk”, as he was called, recognized Candy and waved us in,
“Merci, JP. Ça va?”, she asked, flashing that Mona Lisa smile that made you hand over all your money.
“Oui, Ça va, Candy,” he replied, opening the door for her, while giving me the universal, if silent ‘sup bro chin raise.
“Marie et François sont déjà à l’intérieur et vous attendent les gars. Profite de ta nuit,” he informed us just as we went in — indicating Marie and François were already waiting for us — and closed the door behind us. Marie spotted us the minute we walked in and came running to meet us by the bar.
A Church-to-Latin-Club conversion is a tricky proposition. The best church conversions, according to Clarence Epstein, history professor at Concordia University, emulate the original intention of the structure as a meeting place or public space.
The owners of l’cathédrale seemed to have heeded his warning, because the interior was airy, welcoming, and unlike many other clubs in the city, it made having a conversation — while the music was playing — something resembling normal.
The bar occupied the place of the altar, against the high Lancet windows facing the street. At the other end was a small stage, used for live performances, with a DJ booth off to the side. As before, tables and stools were arranged against the wall, leaving plenty of room for the dancefloor.
A rig of stage lights and speakers was suspended high above. All other available spaces, including the bell tower were set up with chairs, bar stools or leather couches. Ostensibly, this was to encourage the patrons to mingle, drink, dance and talk. The last part without resorting to shouting over the music and drunken sign language.
“Now It’s Party!”, she crowed.
She threw open her arms, first giving Candy, and then me, a warm hug and the traditional Faire La Bise, or two-sided, French-style kiss. There is a science to a French greeting: to properly Faire La Bise, you lean forward, touch cheeks and kiss the air while making a kissing sound with your lips.
Marie grabbed Candy by the hand and off they went to find a seat and gossip. This left François and me to do what boyfriends do when their girlfriends run off to gossip — about their boyfriends — with their other girlfriends: we went to the bar and ordered drinks.
François Amadou was a Noir de France, or Black Frenchman. Originally from Bénin in West Africa, he had grown up in France, in Issy-les-Moulineaux just outside Paris. He had moved to Montréal in 1994 and worked for the CBC French service, Ici Radio-Canada Télé.
Given the cultural similarities between Benin and Hispaniola — most of the African slaves who ended up on the island came from the ancient Kingdom of Dahomey — we hit it off immediately. Despite not knowing a word of Spanish, he was a fantastic salsa dancer and could hold his own with the congas. Needless to say, I liked him a lot.
“So, primo, what’s new in the world of news?” I said, using the Spanish vernacular for “cousin” as we approached the bar, gesturing to the bartender for a beer and a glass of white wine for Candy. It was François’ job to tend to Marie’s needs, drinks-wise and otherwise.
“Same old, same old. Ce sont les tonneaux vides qui font le plus de bruit,” he lamented.
His “beat” was the political desk, and in 1996 there was never a shortage of noise and drama from Montréal City Hall, Quebec City or Ottawa. He wasn’t wrong, It’s the empty barrels that make the most noise. Jean Chrétien and his friends just didn’t shut up.
“Yeah, but it beats the hell out of a nine-to-five job. C’mon let’s get our ladies drunk and take advantage of ‘em,” I said, grabbing our drinks and heading back to meet up with the girls.
“Ça m’a l’air bon,” he concurred. Sounded like a plan indeed.
Once the proper amount of beer, spiced rhum, tequila shots and wine had been duly consumed, and the stress of the week had drained away in a flood of laughter and animated conversation, Marie, François, Candy and I worshiped at the Church of Saint Jude. We listened to the gospel of José Alberto “El Canario”, exhorting the faithful to sing along,
Que paren el reloj
Que suban esa música
Que bajen esa luz
Que quiero bailar contigo
Words are like air, they are like the wind
After a smoldering set of Salsa Romántica, when Eddie Santiago’s “Tú me quemas” and Frankie Ruiz’ “Desnudate mujer” packed the dance floor, the music stopped, the lights dimmed and the fog machines hissed — covering the dancefloor in a thin, endless fog. When the first chords of the clarinet rang out, Candy looked at me in the semi-darkness, with hunger in her eyes.
This was her song — Willie Colón’s Gitana — and she wanted to be seen. She wanted to look into my eyes and see pure, unadulterated lust looking back at her. She wanted every man within her line of sight to want her, to desire her…and to feel the envy of every woman wishing to be her as she swayed and danced, giving herself to the music with reckless abandon.
She grabbed me by the waist and pulled me to her, forcing me to feel the rising heat off her body, the fire raging inside of her. When the bass, the bongos, and the maracas entered on the ONE count of the 8th bar, I was not quite sure if I had it in me to finish the song. There was a 50/50 chance I would be dragging her into another bathroom stall to have my way with her before Willie sang the first verse.
The way the rhythmic structure of the bass line — and syncopated percussion — emphasized the ONE, made her push her body into mine, her breast into my chest, her thighs against my legs, until I could feel the throbbing in the space between them. Delirious, I wished I could throw her onto the floor and taste her.
But just as I was sure to lose my mind, just when I thought I could not hold on any longer, the song changed its time signature, going from a flamenco-inspired, orgasm-inducing, slow Son Montuno to a more traditional and subdued Guaguancó. As Willie sang the chorus, she gently pulled away.
Gitana, gitana
Gitana, gitana
Tu pelo, tu pelo
Tu cara, tu cara
If there was a song that perfectly described Candy; how she loved, how she hated, it was Gitana, or Gipsy Woman. The pulsating 8th notes, the driving marcha: change of hand, hold, turn. Cross body lead, turn. My God, this woman could dance!
Her hips swaying in perfect rhythm with the music; her chest heaving, wet hair clinging to her face…and every pair of eyes belonging to a man slave to her body’s every move.
Porque sabes que te quiero
No trates de alabarme tú
Pues lo mismo que te quiero
Soy capaz hasta de odiarte yo
There she was: raw, pure…sexual. This woman was magnificent, powerful, and complete. The intelligence and confidence, the fearlessness and ambition. The piercing green eyes. The full, pillowy lips, wet traces of rouge where her skin met her mouth. The perfectly tanned, silk smooth skin glistening in the lights.
The passion, the jealousy, the blind rage, the empathy and the hate. All perfectly balanced and contained for the moment, but make no mistake, she was a lion in captivity — majestic and docile yes — but ready, willing and able to rip you to shreds at the slightest provocation.
Porque sabes que te quiero
No trates de alabarme tú
Pues lo mismo que te quiero
Soy capaz hasta de odiarte yo
This woman could look into my soul without opening her eyes. She could make me feel what she felt, whisper in my ear without uttering a word. She could be far across the room and yet hold me captive to her gravity. I was a mere satellite orbiting her, falling deeper into her embrace.
I had never been more enchanted, more delirious with desire… more terrified and aroused by anybody than I was by this woman. There she was again, her body wrapped around my body, her eyes locked with mine — I was hers, of that I had no doubt.
Sin mirarte yo te miro
Sin sentirte yo te siento (yo te siento)
Sin hablarte yo te hablo
Sin quererte yo te quiero
Kiss me once again before we say goodnight
“Amor, take me home and make love to me,” she whispered.
Make love to me. Not fuck my brains out, or plow me until it hurts; but “make love to me”—there is a difference. Her request meant she wanted me to bring her to the place I felt most safe, and make her feel safe. The place where I was the most at ease and make her feel at ease.
It meant showing up and being present; showing her my true, honest and naked self, so that she, in turn, could show me her own true, naked self. It meant she would give herself to me truly, madly, deeply. She would do so without any barriers or walls, no pretenses or games… with no boundaries and no safety net.
She would be vulnerable, lower her defenses and let me in, so long as I too did the same. She meant that she would give me her all — her mind, her body and soul — and trust me to handle her precious cargo with kind love and tenderness.
This was a request for intimacy, vulnerability, and trust. The ensuing multiple orgasms she would inevitably enjoy, however, were an ancillary benefit to the emotionally bonding experience.
The song ended, we walked off the dancefloor and said our good-byes to Marie and François, stumbling towards the exit. JP motioned to a taxi as soon as we walked out, opening the door for her to get in — I was on my own.
I managed to get into the cab and sit next to Candy without cracking my head open.
“Thank you, GP,” I thought.
I heard myself shout our address, passing a CND $20 to the driver; telling him to keep the change. I took in a big breath of the fresh morning air, trying to clear my head, getting my mind into game mode for the night’s encore performance – I mean, to make sweet jungle love ’til the mornin’ to the woman in my life. There is a difference… | https://medium.com/the-desabafo/c%C3%B4te-des-neiges-chapter-iii-1d1ddb2edf1d | ['Juan Alberto Cirez'] | 2020-10-19 18:37:42.099000+00:00 | ['Religion', 'Technology', 'Erotica', 'Music', 'Fiction'] |
An Army of Bacteria Protects Vaginal Health | An Army of Bacteria Protects Vaginal Health
Highly specialized bacterial probiotics maintain balance
Photo by Daily Nouri on Unsplash
Years ago I was a “specimen” in a research study. I drank the fermented water of pap each day for several weeks. Then, I was given a stick to swab around my vaginal surface. Microbiological tests were performed on the swab stick to check for the presence and quantity of the probiotic Lactic Acid Bacteria (LAB). Of course, the probiotic was observed in sufficient quantity.
The World Health Organization (WHO) and the Food and Agricultural Organization of the United Nations (FAO) defined probiotics as live microorganisms that have health benefits when consumed or applied to the body in adequate amounts.
Probiotics are dietary supplements containing beneficial microorganisms, especially bacteria and yeast. Probiotics can also be obtained from foods prepared by bacterial fermentation. Examples of such food include yogurt, kimchi, kefir, sauerkraut, tempeh, miso, pickles, and some types of cheese, not forgetting the fermented water of pap.
Photo by Cristiano Pinto on Unsplash
Our bodies have good and bad bacteria
Our bodies are filled with healthy bacteria, but they can also be exposed to harmful bacteria. Probiotics are known as good, friendly, and healthy bacteria. Antibiotics kill both good and bad bacteria. Probiotics, therefore, help to restore the good bacteria lost to frequent or prolonged use of anti-bacterial medication.
A highly specialized army of bacteria exists in the vagina. These bacteria are always at work keeping the vagina pH balanced, healthy, and in good order. These bacteria fight hostile and unhealthy bacteria.
However, antibiotic use can disrupt this balance. Also, an overgrowth of other microorganisms like bacteria and yeast can cause an imbalance and lead to infections. Hence, the need for probiotics. Some medications used to treat these infections also contain some amount of probiotics to restore the normal flora of the vagina.
Lactobacillus acidophilus is the most common strain of probiotics in the series of Lactic Acid Bacteria (LAB) for maintaining a healthy vaginal balance and promoting vaginal health.
There is also Lactobacillus rhamnosus and Lactobacillus reuteri. A study published in the Clinical Microbiology and Infection Journal indicates these strains stick to vaginal epithelial surfaces, thereby making it more difficult for hostile bacteria to grow. Thus, vaginal balance is maintained. Hydrogen peroxide, lactic acid, and bacteriocins are produced by the Lactobacillus spp., which inhibit the growth of harmful bacteria that cause bacterial vaginosis (a bacterial infection in the vagina).
Capsules, vaginal suppositories, or probiotic foods can help prevent and treat distorted vaginal pH and promote vaginal health. A 2014 study showed that oral consumption, as well as vaginal administration, of probiotics helps in the prevention and treatment of bacterial vaginosis and brings an overall improvement in vaginal health.
Photo by Alison Marras on Unsplash
Experts prefer whole foods to supplements as sources of probiotics. Yogurt containing live cultures has been proven to be one of the best sources of lactobacilli.
Consumption of probiotic-containing foods in a healthy woman, like me, has potential benefits with no known risks. You may want to ask how I felt after taking the probiotics for that period of time. The good bacteria multiplied, stuck to my vaginal epithelial surface, and made it more difficult for hostile bacteria to grow. My vagina was happy and healthy, and so was I! | https://medium.com/beingwell/probiotics-and-vaginal-health-7bc651aa87d4 | ['Deborah Agbakwuru'] | 2020-05-27 01:30:13.535000+00:00 | ['Womens Health', 'Medication', 'Health', 'Health Foods', 'Supplements'] |
7 ways shopping will get better by 2020 | Entrance to the Innovation Lab
7 ways shopping will get better by 2020
It’s not just voice and robots. Here’s the cool tech you haven’t heard about yet that’s ready to change the way you shop.
Walking around the National Retail Federation’s annual convention and expo provides thousands of examples of retailers “improving the customer experience.” It’s a nice line, but what does that really mean for all of us consumers? NRF’s Sarah Neale Rand reports from the “Retail 2020” exhibit in the Innovation Lab at NRF 2018: Retail’s Big Show, where some of the coolest emerging tech that will shape the near-future of retail is on display.
It’s my lucky 13th trip to Retail’s Big Show, and retail looks a lot different than it did in 2006. But things look different for me, too: In addition to working full-time for NRF, I’m back in grad school and a mother of two. A great shopping experience for me is all about efficiency and convenience, so when NRF’s Vice President of Technology Jason Hoolsema and Tusk Ventures’ Managing Director Seth Webb agreed to give me a tour of the Innovation Lab at Retail’s Big Show, I wanted to see what upcoming technologies were going to make shopping faster and easier for me.
The grocery store experience
Five Element’s DASH Robot Shopping Cart
These days, with two kids under six distracting me, I’m lucky if I only have to crisscross the grocery store twice in a trip. Five Element’s DASH Robot Shopping Cart maps out the most effective route, leads me around the store and stops at the items on my list (no more crisscrossing for me). I pay at the cart (no lines!). Then it follows me to my car and returns to the store all by itself. Sign me up.
Comparing features and exploring products
Spacee’s simulated reality demo
When it comes to buying technology for my home, I could spend days reading reviews, watching demos or looking at products in stores. Spacee and June20 are going to make all that research so much easier. Spacee’s simulated reality uses light projection to make any surface a 3D interactive experience — I can “demo” a Nest thermostat that’s actually just a piece of plastic or touch a table top to interact with information.
June20’s sliding tablet
June20 brings physical and digital displays together with an informative sliding tablet that lets the customer move between products, call up the right information about each product and effectively compare their features. I won’t admit how much time (or how many trips to the store) I spent buying a smart lock a few years ago, but I bet this would have limited my shopping research to 20 minutes. Pluses for retailers: reduced shrinkage, reduced inventory display costs and a content-rich online experience in store.
Take the trip out of the equation
Starship Technologies’ delivery robot
Whether it’s a quick trip to the grocery store for milk or that lunch I didn’t have time to run out and buy, Starship Technologies’ robots are making local delivery easier than ever. I can order last-minute groceries for tonight’s family dinner, and follow the robot online as it navigates to my office. The win for retailers: Packages can be delivered for a fraction of the cost a more traditional delivery service would require.
The right fit — the first time
Volumental’s shoe-fitting solutions
The last time I needed a new pair of boots, I ordered FIVE pairs — multiple styles and sizes — to make sure at least one fit. And of course, I returned four pairs. Volumental is solving that problem. Their fast and accurate 3D foot scanning system and advanced AI ‘Fit Engine’ can recommend a pair of shoes based on individual foot shape, size and preference. Say goodbye to wasted time and energy trying on shoes that aren’t going to fit anyway. The win for retailers: shoe returns decrease 25 percent.
Chris Baldwin, CEO of BJ’s Wholesale Club visits the Volumental booth
Completing the outfit
A cute blue and brown skirt has lived in my closet for (at least) the last year, with no tops to match. It’s all I could think about at FINDMINE and Slyce’s booth. FINDMINE’s machine learning platform and Slyce’s image recognition tech come together for a “Complete the Look” tool that would let me take a picture of that lonely skirt and see a complete outfit to keep it company. The win for retailers: Both FINDMINE and Slyce’s tools are increasing engagement and conversions.
EverThread’s technology shows products in every possible color
I once made the rookie parent mistake of furniture shopping with a toddler. I was only able to focus on all the options around me for the five minutes I convinced her to run laps around a display couch, and I left empty-handed in the end. Now when I need to buy a couch and want lots of options, I can — from the comfort of my soon-to-be-old couch — turn to a retailer using EverThread and its visualization software. EverThread’s technology takes photoshoots out of the process and lets retailers show products (like the couch I now need) in every possible color. While I’m at it, I might try out a new rug and curtains to match. The win for retailers: Since multiple product views increase sales by 58 percent, this is sure to give conversions a bump. And if I want to make sure my new couch and rug look good and fit properly in my living room (so I don’t have to worry about returning any furniture … no time for that!), ecommerce visualization platform Tangiblee will take care of that.
My mall assistant
Satisfi Labs’ AI conversation platform
These days my trips to the mall are intensely purposeful. I know exactly what I need to accomplish, though that doesn’t mean I don’t need help. Satisfi Labs’ AI conversation platform can be that help. Its chatbot might help me find deals, answer questions about stores carrying certain products and help me navigate right to a cookie shop at the end. The win for retailers (and malls): bringing customers back with a unique experience.
The supply chain that delivers
I’m in awe of my mom and every other woman in history that raised children without the ability to order diapers online, often at 3 a.m., and know it will arrive that day — or at least in the next two days. Add in the days I forgot to buy a textbook before the new semester started or the mitt my 4-year-old needs for tee-ball practice, and some days it feels like magic that the thing I absolutely need shows up on my doorstep. That’s why I was so enamored with Locus Robotics’ warehouse robots. They work alongside people to more than double human productivity. That means the pipe cleaners I need for this week’s science fair are headed my way that much faster. Bonus for merchants: Retailers are already seeing a reduction in operating costs by 30–40 percent.
Locus Robotics’ warehouse robots
I used to think of 2020 as far in the future, but it’s almost here. Most of the Retail 2020 exhibitors I talked to are already working with retailers. That means all the cool technology that makes shopping more convenient and efficient is accessible now. I’m looking forward to all the time I’ll save — maybe I can find room for just one more commitment in my life. Or maybe I’ll just take a nap. | https://medium.com/nrf-events/7-ways-shopping-will-get-better-by-2020-d90ef650cfb | [] | 2018-02-01 17:59:34.970000+00:00 | ['Technology', 'Retail', 'Innovation', 'Trends', 'Future'] |
Social Media — Professional Marketing Tool or Gimmick? | Social media platforms offer the following advantages:
Networking
Establishing global networks with colleagues, peers, business partners from various industries and potential clients is a marketing strategy that existed long before the internet, and it works online as well.
Developing an online reputation
Anything you write online can be found by other people unless you have restricted access. This is a very powerful tool with which you can pass on your knowledge, experience and trustworthiness to colleagues and potential customers. You can use it to build a reputation around the globe and position yourself as an expert in a particular topic or subject area. Used properly, you will sooner or later be perceived as the person to go to for a certain topic.
Exchange of information
Don’t underestimate the value of information such as interesting events, new regulations and job offers that you receive on message boards or in dialogue with fellow freelancers, for example, on Facebook. This alone may justify participating in the relevant social media platforms.
Job offers
Many clients looking for professional services don’t just use online job portals, but increasingly, LinkedIn, Facebook and Twitter as well.
Visibility & SEO
This is the actual core of digital marketing. Potential customers should find you before they find your competitors. SEO stands for Search Engine Optimisation and means optimising your online presence in the form of your website or profile in such a way that search engines like Google list it in a higher rank on the results page so that it is displayed above those of your competitors. Search engines use complex algorithms to generate the order in which the results are shown. These are largely based on three aspects: keywords, traffic and activity.
Keywords
Use words and phrases that a potential customer would use to find your services on your profiles, in your blog, forum posts, tweets, etc. Many freelancers write things like “freelance translator” in their profile headlines or keywords. But nobody will search for that. You have so much more to offer, so specify what makes you different from your competitors, such as your language pairs, your expertise or additional professional background information. Remember that you want to be found, and a potential client could, for example, enter “professional medical translator Italian English with experience in nutritional science” into Google. Your objective must be to be listed on page one on Google for relevant searches.
Traffic
The more hits your site or your profile page has, the higher it is rated by Google.
Activity
Google recognises when a page has not been updated for longer periods of time and will automatically list it in a lower position in the search results. A website, a profile page or blog that is updated regularly will maintain its Google ranking. | https://medium.com/the-lucky-freelancer/social-media-professional-marketing-tool-or-gimmick-f6cba93133be | ['Kahli Bree Adams'] | 2020-06-30 05:52:34.846000+00:00 | ['Marketing', 'Business', 'Freelancing', 'Social Media', 'Small Business'] |
How To Collaborate With Your Employees To Keep Your Business Moving Forward | As Social Psychologist Heidi Grant says in ‘Get Your Team to Do What It Says It’s Going to Do’:
Creating goals that teams and organizations will accomplish isn’t just a matter of defining what needs doing; you also have to spell out the specifics of getting it done because you can’t assume that everyone involved will know how to move from concept to delivery.
Goals can seem impenetrable and overwhelming without a detailed plan to reach them. Take this as an example — Akash, a young entrepreneur, wants to start a handmade chocolate business. To achieve this big goal, he has to break it into smaller tasks — getting a company incorporated, taking care of legal compliance, finding a manufacturer of dark and white chocolate, arranging packaging, and so on. If Akash tries to finish these tasks one step at a time, he will be able to get nearer to achieving his ultimate goal.
If your employees create a plan to reach their goals, it is more likely that they will achieve them. The time that they invest at the start of the process will have a huge payoff at a later stage. Additionally, keep in mind that the goals may shift over time, and, therefore, the plans will also need some adjustment.
Here are a few tips to get you started:
Step 1: Determine The Tasks Needed To Accomplish The Goal:
The whole process of creating tasks is to make complicated or long term projects/goals much more manageable and achievable.
Break down the goal into task-based components. If one task seems overwhelming, break it further into smaller tasks. Some items can be worked on simultaneously while others need to be completed sequentially. The list of tasks should be categorized accordingly.
Step 2: Chalk out a timeline for completion of the task:
A start and finish date should be attached to each task to make sure that the items move synchronously.
You can use a Gantt Chart or other time-scaled task diagram that shows what tasks are upcoming and allows you to make adjustments. Individuals can also create similar visual-based charts in Excel or digital calendars. Discuss a contingency plan for when something doesn’t move forward.
Step 3: Gather The Resources Needed To Fulfill Each Task
Many efforts and plans fail because people underestimate the time and resources required to accomplish each task.
After planning the tasks, consider what each employee needs. Consider whether the employees are available to finish these tasks with the current workload. Consider if the employees have the training and knowledge to complete the tasks and reach their goals. Supply the additional resources that your employees require — a new tool, access to special training, colleagues to assist them, or even an entire team. Further, discuss the ways to help your employees fill the gaps.
Step 4: Get It On Paper
Once you and your employees have reached an agreement on goals, document them. Some organizations send the written form to HR. However, irrespective of the process, keep the file with you and send a copy to your employee also.
The critical information that the documentation should contain are as follows:
1. Date of the Performance Review meeting
2. Key points brought up by you and your employee
3. Goals of the employee for the next review meeting
4. A detailed plan to achieve the goals
5. A description of any resource or training required to achieve the goals
6. The time frame for follow-up meetings
Follow Up And Reassess Goals
Performance Management is an ongoing process, so have periodic check-in conversations to keep track of your direct report’s progress.
GE (General Electric) encourages frequent conversations with employees to revisit two key questions: 1. What am I doing that I should keep doing? 2. What am I doing that I should change?
The check-in can be done monthly, quarterly or weekly based on how many employees directly report to you.
When To Change Your Employee’s Goals
If the employee’s goals need to be revised, you must review previously set goals and plans. The top three questions you should lead with are:
1. Are the goals still realistic after the changes in constraints or resources?
2. Are the goals still timely? Is now the best possible moment to achieve them?
3. Are the goals still relevant? Do they still align with the company’s strategy?
Additionally, these check-ins will present themselves as an opportunity to monitor your employees’ performance, offer feedback or provide coaching and thus making sure that everyone moves in the right direction towards achieving the overall goals of the company. | https://medium.com/wonderquill/how-to-collaborate-with-your-employees-to-keep-your-business-moving-forward-f987145aab20 | ['Astha Singh'] | 2020-10-01 18:54:08.687000+00:00 | ['Leadership', 'Business', 'Collaboration', 'Entrepreneurship', 'Performance Management'] |
You May Not Like My Decision to Wear Makeup, But It’s My Own Little Form of Feminist Revolution | Call me sick. Call me the shameful product of a patriarchal society. It’s okay. I’ve heard it before, and it makes no difference to me. I look pretty darn good for a forty-eight-year-old woman. And let me tell you, it’s not from good DNA.
Just last month I spent four hundred dollars on beauty products that can only be prescribed by a doctor. They’ve kept me looking a decade younger than most of my friends. And guess what? Those products were worth the money.
And yes I get Botox. I’ve also had fillers. And laser treatments. And microdermabrasion. And once I even tried lip injections, which I never did again because I didn’t like the look.
What I want to know is why it is so wrong to admit I do these things?
Because society has so zealously of late embraced the “No Makeup” movement, it’s becoming rarer and rarer to find articles on cosmetic treatments and emerging beauty trends. It’s all about accepting your flaws. Playing them up. Embracing them.
Currently, it’s a faux pas of major proportions to admit you want to do things such as erase your wrinkles, hide your freckles, or whittle your waist. However, saying you don’t shave your underarms or tweeting #no filter or #naturalbeauty gains you instant adulation.
And I one hundred percent support these women’s rights to their own expressions of individuality. But I also have a right to express mine through makeup.
As a matter of fact, I was immensely proud to see celebrity Gwyneth Paltrow admit she had taken anti-wrinkle injections in her latest interview with Allure magazine. But I know some, wait let me change that, many, now call her foolish or superficial.
Even the article itself was a bit disrespectful to Paltrow. While she was honestly confessing to a decision she knew many would see to be a step back in the feminist movement, the editors littered her article with boldfaced notes focusing on the inherent danger of these injections. It made Paltrow not only look shallow, but it also made her look like an idiot.
For example, one note stated a fact from the Journal of the American Medical Association that “a single gram of botulinum toxin in crystalline form, ‘evenly dispersed and inhaled, would kill more than 1 million people’.”
So does getting injections make her reckless? A traitor to advocates of natural beauty, self-acceptance, and feminism? Not in my opinion.
When asked about her decision to slowly wade into the waters of cosmetic procedures and injections, Paltrow states, “I don’t know that I would go full-bore into other stuff. But I’m not opposed. People have asked me, ‘Would you do this? Would you do that?’ I’m open to anything. I need to gauge what’s right for me at every phase in my life. Women should not judge other women, and we should be supportive of the choices we make.”
And that’s my belief as well. That to judge other women’s personal choices or appearances is an ironic act that actually goes against what feminists are fighting for in the world.
But often die-hard members of this community see me and other female beauty “junkies” as the enemy.
As a matter of fact, in an Independent article entitled “Come on Feminists, Ditch the Makeup Bag. It’s a Far More Radical Statement Than Burning Your Bra,” author Julie Bindel mentions the fact that even as a child, she felt disdain for “the idea of smothering [her] face in various pastes and powders,” referring later to these things as “hideous gunk.” She also mentions that “vast numbers of women endure the daily routine of applying makeup.”
I respect her right to feel this way, but to me, making up my face and applying cosmetics is not something I “endure.” It’s something I relish.
I love the art of it, and the truth is I come to the mirror with excitement, not drudgery. I like the fact that I can be Michelangelo when I wake up in the morning to apply cosmetics. There are a thousand different personalities inside of me, and I want to use makeup to express these myriad aspects of my identity. It makes me feel empowered, not dominated by the “patriarchy.”
An article in Fashionista cites British makeup artist Pat McGrath, who states that when creating the looks for the Fall 2017 Prada show her focus was
“on using makeup more as a statement and a mode of self-expression, than as a way to please other people.”
And this is what makeup is to me. Self-expression.
When I wear makeup, I feel bold, not powerless. By using these products, I am putting whatever particular aspect of myself that is “speaking” to me out there for the world to see. The way I view it, this is an act of daring, not of docility. It is the courage not to hide what I feel inside, but to own it.
Psychologists even recognize this fact, calling this emotional response the “lipstick effect,” one which centers around the theory that wearing makeup “helps women to have the feelings of self-esteem, personality, and attitude.”
And it’s no secret that when people feel confident, the likelihood increases that their goals and endeavors will be successful.
As a matter of fact, an article in the Huffington Post details a study done by researchers and scientists from Harvard Medical School. The study focused on whether or not the confidence boost of wearing makeup increased cognitive abilities. They divided study participants into three groups before giving them a series of tests to take. One group drew, one group listened to uplifting music, and one group applied makeup.
The results?
The volunteers who listened to music had good outcomes on the tests, but the students wearing makeup had even better ones.
And if wearing makeup increases my attractiveness and audacity and augments my intelligence as well, I’m not going to stop wearing it, no matter how many people see me as perpetuating the patriarchy.
My personal ideology is that by embracing makeup and other things that give me pleasure and feelings of empowerment, I am giving an invisible middle finger to a culture that, throughout the centuries, has denied me and other members of my gender choices on what to do as it concerns both our bodies and our behaviors.
That’s what makes me a feminist, at least in my own opinion. And as a feminist, I’ve been taught to raise my voice and speak my truth, so this proclamation is me doing just that.
The truth is my decision to wear makeup is my own little feminist revolution, one that I joyfully fight each day as I reach for the bright red lipstick on my vanity top.
The Bottom Line:
Singer Alicia Keys, a celebrity who has been a dominant force in supporting the no-makeup movement, posted a picture of herself sans makeup on Twitter stating, “Y’all, me choosing to be makeup free doesn’t mean I’m anti-makeup. Do you!”
And by choosing to wear makeup, I am “[doing] me.” And in my humble opinion, that’s what feminism is all about. | https://medium.com/the-partnered-pen/you-may-not-like-my-decision-to-wear-makeup-but-its-my-own-little-form-of-feminist-revolution-e338b003c904 | ['Dawn Bevier'] | 2020-09-30 23:16:43.107000+00:00 | ['Women', 'Self', 'Society', 'Beauty', 'Feminism'] |
Introduction to Programming Paradigms | Photo by Clément H on Unsplash
As someone who never earned a computer science degree but is very passionate and eager to know more about programming as a whole, you will eventually encounter different hurdles: programming subjects that you are unfamiliar with. In my case, "paradigms" in programming have been the topic stuck in my brain for a while. The best way to learn and understand a new topic is to challenge yourself to teach it (write an article or make a video about it) for someone else who is also trying to learn. So here, I am going to talk briefly about programming paradigms.
First, what is a paradigm? A paradigm is a pattern or model of something. In programming terms, it is essentially the approach to programming that a language supports. In other words, programming paradigms are just different styles or ways of programming. The paradigm models that we use also define the task and means of programming. There are many programming paradigms, but here I am mainly talking about the two main ones, declarative vs. imperative.
The principal programming paradigms by Peter Van Roy
Imperative Programming Paradigm
An imperative programming paradigm is usually how the average person thinks of programming. Programmers in this paradigm give orders or commands on "how" the computer should explicitly execute, and mostly it executes lines of code from top to bottom. The order of the steps is crucial because a given step will have different consequences depending on the current values of variables when the step executes. Time and state matter, since state is mutable, and the exact details of the steps matter (precision matters). Think of it as a clock: every second, the position of the hands and dials matters to the time.
Another example would be going to a restaurant and ordering a burger. You order that burger by commanding the waiter: the meat should be a medium-rare slice, the fries need extra salt and should be fried extra crispy, the lettuce should be only two slices, the tomato three slices, and lastly, add more bbq sauce and pickles on top of the burger. As you can see, everything is an exact, precise order. With this in mind, there are two main sub-paradigms in imperative programming.
Procedural programming: It is a concept based on routines, subroutines, modules, methods, or functions (procedures). It is like a list of instructions telling the computer what to do step by step (procedures). Such languages are also known as top-down languages. Most of the early programming languages are procedural.
-Programming languages: Cobol, C, C++, Java, Pascal.
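As a small illustrative sketch of my own (in JavaScript, which the article lists under several paradigms), a procedural program is just a sequence of steps grouped into named procedures and called in a fixed, top-to-bottom order:

```javascript
// Procedural style: the program is a fixed, top-to-bottom list of steps
// grouped into procedures (functions).
function boilWater() {
  return "hot water";
}

function brewCoffee(water) {
  return water === "hot water" ? "coffee" : "nothing";
}

// The main routine calls the procedures in a precise order;
// swapping the steps would change the result.
const water = boilWater();
const drink = brewCoffee(water);
```

If `brewCoffee` were called before `boilWater`, the outcome would differ — exactly the order-sensitivity described above.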
Object-Oriented Programming (OOP): Treats everything as objects passing messages to one another. Objects can share properties and behaviors, change state, and serve as reusable components. Each object is viewed as a separate entity with its own state, which is modified only by built-in procedures, called methods. Lastly, it is the most widely used paradigm around the world; it is easy to understand and read.
-Programming languages: Java, Ruby, C++, Python, Javascript.
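To make this concrete, here is a minimal JavaScript sketch of my own: an object holds its own state, and that state is modified only through the object's methods:

```javascript
// OOP style: state lives inside an object and is modified
// only through the object's own methods.
class BankAccount {
  constructor(owner) {
    this.owner = owner;
    this.balance = 0; // internal state
  }

  deposit(amount) {
    this.balance += amount; // state changes only via a method
    return this.balance;
  }
}

const account = new BankAccount("Alice");
account.deposit(50);
account.deposit(25); // balance is now 75
```

Code outside the class never assigns to `balance` directly; it asks the object to change its own state, which is the message-passing idea in miniature.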
Declarative Programming Paradigm
A declarative programming paradigm is a style of building programs that expresses the logic of a computation without describing its control flow (Wikipedia). Instead of telling the program "how" to do something, you tell it "what" to do. A better way to understand this is to go back to the example of ordering a burger in the restaurant. This time you are just going to order one Big Mac without mentioning any of the precise details of the burger. You are telling them what you are going to order, as opposed to how you are going to order the burger. Since the declarative paradigm expresses computation as logic, it is used a lot in mathematical and logical contexts. There are two main sub-paradigms in declarative programming.
Logical Programming: Based on mathematical logic, facts, and rules within the system (not instructions).
-Programming languages: Prolog, ALF (algebraic logic functional programming language), Ciao, Alice.
Functional Programming: a programming paradigm that treats computation as the evaluation of mathematical functions, avoiding changing state and mutable data. There are no side effects, and it is easier to debug. A function takes some inputs as arguments and returns an output value (data goes in, and data comes out). The functions do not modify any values outside the scope of that function, and the functions themselves are not affected by any values outside their scope.
-Programming languages: Haskell, Kotlin, Scala, Clojure, Elm, Mercury, Javascript.
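A tiny JavaScript illustration of my own of a pure function: it depends only on its arguments and leaves the original data untouched:

```javascript
// Functional style: a pure function depends only on its arguments
// and produces no side effects.
const double = (n) => n * 2;

const numbers = [1, 2, 3];
const doubled = numbers.map(double); // returns a NEW array: [2, 4, 6]

// `numbers` itself is untouched: nothing outside the function changed.
```

Because `double` never touches anything outside its own scope, calling it with the same input always yields the same output — which is what makes functional code easier to debug.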
The differences between the two programming paradigms
Notice that some of the programming languages overlap the declarative and imperative programming paradigms. Languages that support both declarative and imperative styles — or, more generally, more than one paradigm — are called multi-paradigm programming languages. However, some functional programming languages are purely functional, for instance, Haskell, Elm, and Mercury. Some have both paradigms, such as Javascript, Java, C++, Scala, Python, Kotlin, Rust.
Example of code in imperative vs. declarative programming
“Imperative programming is like how you do something, and declarative programming is more like what you do.”
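The original example was shown as an image; here is an illustrative Javascript version of the same contrast (doubling a list is my own example, not the article's):

```javascript
const numbers = [1, 2, 3, 4];

// Imperative: spell out *how* — each step of the control flow.
const doubledImperative = [];
for (let i = 0; i < numbers.length; i++) {
  doubledImperative.push(numbers[i] * 2);
}

// Declarative: describe *what* — map handles the iteration for you.
const doubledDeclarative = numbers.map((n) => n * 2);

console.log(doubledImperative);  // [2, 4, 6, 8]
console.log(doubledDeclarative); // [2, 4, 6, 8]
```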
There is no single best paradigm; it all depends on the scenario your task is in and what type of problems it makes sense to solve. Each model is best for an individual case. Paradigms define programming, just like paradigms enable scientific progress. As a community, we need to agree on a particular model of programming and what it means to write a good program for us to get anything done.
This article is intended to be a brief introduction to the programming paradigms. If you want to know more about programming paradigms in-depth, you should check out these fantastic videos and articles.

— Osgood Gunawan, 2020-05-05 (https://medium.com/swlh/introduction-to-programming-paradigms-aafcd6b418d6)
The Unbreakable Cycle of Racism
It has to change. And I must believe it will.
Image: Jasmin Merdan/Getty Images
I grew up attending a small private school with mostly White students. I was almost always the only Black kid in class. It didn’t matter much. Until second grade.
One day near the beginning of that school year, my teacher sat all of the students in a circle. “I want you to each tell the class one interesting fact about your family,” they said.
For the most part, the things that my classmates spoke about were harmless. I can’t remember what I shared. But I do remember the girl who sat next to me.
“My family hates Black people,” she said.
The room fell silent and every single face turned to me. I was stunned, and didn’t quite know what to say. I decided to just be honest.
“But I’m Black,” I said. “Do you hate me?”
My classmate just shrugged.
“You’re not Black,” she said. “You’re Brown. So no, I don’t hate you.”
I was young enough to still have questions about race. But I was also old enough to understand that just because my skin was Brown in color, I was still a Black person. The kind of people her family hated. And she didn’t have a problem sharing this as an “interesting fact.”
After class, my teacher pulled me to the side to talk to me about what my classmate said. I could tell by her mannerisms that she was trying to be comforting, but I honestly couldn’t hear anything she was trying to say to me. All I could think was: People can hate someone because of their skin color?
That was the first time in my life I ever remember experiencing racism. The older I got, the more racism, ignorance, and bigotry I witnessed and experienced. Sometimes my “friends” used racial slurs when they sang along to popular songs. During sporting events, I was called racial slurs by opposing players. And as my eyes opened, I was affected by things I simply witnessed but did not experience, like people being treated unfairly by law enforcement.
All of these things bothered me. I didn’t let it show. I’m not sure why. But my way of dealing with racism was to pretend it didn’t hurt.
Then in 2009, I watched Oscar Grant murdered by someone who was paid to protect and serve. I shared the video with my family and friends and we were all extremely disturbed by it. But we all knew that Oscar Grant and his family would get justice; there was video. There was no argument. There was only what any person’s eyes could see: murder.
But in November of 2010, the man who murdered Oscar Grant was sentenced to two years in prison with credit for time served. Two years… with time served… for murdering someone. I couldn’t believe it. I talked to my family about my frustrations with the verdict. This led to the talk, all too familiar in Black households, about how my sisters and I should interact with police officers if we were ever stopped by them. (We’d each had this talk previously with our parents when we first started driving.) I listened, and tried to believe that what they were telling us was true — and that it could save our lives one day.
Then it happened again. And again. And again. Trayvon Martin, Eric Garner, Michael Brown Jr., Tamir Rice, Freddie Gray, Sandra Bland.
And in all of these cases, the people responsible for the deaths were not convicted. In July 2016, on consecutive days, Alton Sterling and Philando Castile became the next two Black people murdered by police. These murders were the ones that finally broke me. I couldn’t be stoic. I couldn’t hide my emotions anymore.
After hearing about Philando Castile, I sat in my car and cried. I cried because I was mad, I was scared, I was frustrated. At that moment, I knew that Black lives were not valued in this country. I met a few of my White friends at a restaurant later that evening and was surprised to hear how upset and appalled they were at the two murders we had all just witnessed. I had a few other White friends text me and tell me that even though they did not understand my experience as a Black man in this country, they knew what happened was wrong and that they wanted to learn more about my experiences to become a better ally and advocate.
Since then I have had the opportunity to talk to some of them about what it is like being told that I am hated because of my skin color, or about what it is like to hear someone lock their car door or clutch their purse when I walk past, or about the fear and panic that takes over my body any time I drive by a police officer. Being able to voice my frustrations to people who did not look like me was helpful. And while I was still scared, sad and frustrated to see this continue, I began to think that if some non-Black people felt that way, there might be others that did too and that maybe that could lead to change.
And then it happened again. I watched a cop kneel on George Floyd’s neck for eight minutes and 46 seconds while other officers did nothing to intervene. How could they care so little for a human being?
I’m not sure why, but this time was different. I was still emotional, scared, and exhausted — but this time I was pissed off, too. I decided to be more vocal and utilize my platform to bring awareness to the injustices I witnessed. I called people out (on social media and in person) for their ignorance and blatant racism. I donated to organizations that promote social justice. I supported Black-owned businesses. I read books and watched films and documentaries about racism and the justice movement and discussed them with anyone who would listen. Even though some of these are things I had done in the past, I knew that if I wanted to be a part of the change and wanted/expected the people around me to be a part of the change, I had to do more. Sharing on social media a few times was not enough. Donating once was not enough. Reading books and watching films and avoiding difficult conversations with people was not enough.
So here we are today. I’m still tired, I’m still scared, and I’m still mad that we are dealing with this in 2020. I want to be able to go for a jog, walk home, drive a car, sit at home watching TV without having to be afraid for my life. I do not want to fear that the next time I say “goodbye” or “I love you” to my friends and family will be the last time. But now I am also hopeful. The deaths of George Floyd, Breonna Taylor, Ahmaud Arbery, Elijah McClain, and Rayshard Brooks seem to have started a change. People of all races, religions, sexual orientations, and economic classes from all over the world are pissed off and have decided to fight for equality and human rights. And while this is a start, there is still a lot of work to be done.
If you want to be an ally, here are some specific ways you can help.
Simple. But still important. Listen to what Black people have to say.
Demand reform from governors and state/local legislators.
Register to vote — and then vote on November 3.
Call out racism and racial injustices when you hear/see it.
Donate to organizations and causes that promote social justice.
Acknowledge your privilege and utilize it and your platform to help promote change.
I remain a proud Black man. I know that racism is a part of what it means to be Black. It can be an emotional roller-coaster. But what I do know is that now, people of all colors are more motivated for change than ever before.
This story was originally published by the VCU Center for Sport Leadership, as part of their “Our Stories” series, an initiative inspired by the #BlackLivesMatter movement.

— Murray Littlepage, 2020-10-07 (https://level.medium.com/the-unbreakable-cycle-of-racism-559876357942)
With Tourette Syndrome, One Size Does Not Fit All

Camden Alexander
By Camden Alexander, Tourette Association of America Youth Ambassador
I have been playing the electric guitar since I was nine. I love Green Day, The Offspring, Led Zeppelin and Blink 182. Now I’m in seventh grade, play in a local rock band and feel lucky. I feel lucky because I’ve had the chance to discover how playing instruments helps me be creative and have fun — but it also helps me with my Tourette Syndrome, which I was diagnosed with at age 4.
Playing musical instruments helps me feel in the moment and relaxed. My tics temporarily fade away when I play guitar. I’ve come to love performing and even like to act. I’ve been in You’re a Good Man Charlie Brown, Wiley and the Hairy Man and Hamlet. As I get to know more and more kids with Tourette, I learn that so many of us love to rock out, excel at instruments and perform — who would have thought?
I’ve been getting to know a lot of other kids with Tourette, because this past March, I went to Washington D.C. and was trained to be a Tourette Association of America Youth Ambassador. I went to Capitol Hill, met with elected officials and advocated for increased research funds and awareness for Tourette. The more I learned, the more I realized how interesting and unique kids with Tourette really are. May 15 — June 15 is National Tourette Syndrome Awareness Month, the perfect time for everyone to understand what Tourette Syndrome is, and the people who have it.
We are creative, we play sports, we act, we play instruments and we advocate! I’m excited to now have the knowledge base to present and teach the facts related to Tourette. The big issue with this neurological condition is that it’s misunderstood. Movies and pop culture references often focus on Tourette as simply a disorder which involves saying inappropriate things. While some people do have that piece, it’s a minority of those with Tourette. Tourette Syndrome is not a one-size-fits-all condition. It affects everyone differently and it has zero impact on intelligence.
Tourette is marked by experiencing involuntary movements and sounds (tics) that vary in type and severity. However, there are lots of Tourette symptoms that are less obvious than tics, and these other symptoms generally are not understood by teachers, adults or other kids. That’s why the Tourette Association website is set up so people can learn more about Tourette. Tourette is much more than tics.
When I was in kindergarten and early elementary school my teachers did not understand my symptoms and it led to a lot of problems. By second grade though, with the help of my parents, I got an Individualized Education Program (IEP) and I then started to receive the support I needed. Teachers, principals, and special educators were supportive. I think in this sense I was very lucky as well, because I constantly hear how difficult it is for other kids to get this kind of support.
I remember when a Youth Ambassador presented at my school. I admired him for sharing his story and I knew that I could do the same. I’m thrilled to begin my work as an Ambassador to raise awareness and educate my peers about Tourette. I want everyone to have access to the support system that I have and #Rally4Tourette!
Camden Alexander, age 13, is a student in Marblehead, Massachusetts. He is a trained Tourette Association of America Youth Ambassador.

— 2016-05-25 (https://medium.com/generation-youthradio/with-tourette-syndrome-one-size-does-not-fit-all-9570959532)
Cervello awarded Elite Status in the Snowflake Partner Network

Cervello, a Kearney company, has achieved an exciting milestone. We have just been granted Elite Status in the Snowflake Partner Network — the highest level available for Snowflake Solution Partners.
Colleen Kapase, Snowflake Vice President of worldwide partner and alliances, noted in bestowing the award: “On behalf of the Snowflake Leadership Team, we were delighted to congratulate Cervello on being one of the first Solution Partners, globally, to achieve Elite Tier in the Snowflake Partner Network. It’s a true reflection of the dedication that the Cervello team has for our combined clients and our partnership.”
Why is this recognition such a big deal for us? First, because it’s a testament to our commitment to best-in-class client service. But perhaps even more important is the acknowledgement of our mastery of data-driven solutions powered by Snowflake’s cloud data platform.
At Cervello we are focused on helping our customers achieve business outcomes through data. We do that by using Snowflake’s cloud data platform to unlock the power of data, so that customers can make informed decisions that deliver real business value. It’s what we call the data dividend.
My colleague Mike Cochrane, sums this up well: “As organizations become increasingly digital, maximizing the value of data is more critical than ever. We have used the Snowflake platform to help many of our clients realize this value for a broad range of use cases.”
If you are not familiar with the Snowflake Cloud Data Platform, what makes it so unique is its architecture. In all our years working with technology, we have not seen anything like it, and we’ve written about it extensively in numerous blog posts. The successes our clients have achieved with Snowflake are both unmatched and consistent.
To ensure that client successes continue, Cervello continually invests in educating our experienced consultants on Snowflake. We’ve also created an internal Center of Excellence focused on the platform, which enables us to bring a unique perspective to clients’ most pressing challenges.
We look forward to continued, acclaimed results as an Elite partner.
About Cervello, a Kearney company
Cervello is a data and analytics consulting firm and part of Kearney, a leading global management consulting firm. We help our leading clients win by offering unique expertise in data and analytics, and in the challenges associated with connecting data. We focus on performance management, customer and supplier relationships, and data monetization and products, serving functions from sales to finance. We are a Solution Partner of Snowflake due to its unique architecture. Find out more at Cervello.com.
About Snowflake
Thousands of customers deploy Snowflake’s cloud data platform to derive insights from their data by all their business users. Snowflake equips organizations with a single, integrated platform that offers the data warehouse built for the cloud; instant, secure, and governed access to their network of data; and a core architecture to enable many other types of data workloads, such as developing modern data applications. Find out more at Snowflake.com.

— Glyn Heatley, 2020-07-22 (https://medium.com/cervello-an-a-t-kearney-company/cervello-awarded-elite-status-in-the-snowflake-partner-network-3b5d46f3c9a6)
Stop Giving Unsolicited Fitness Advice

First, ask how they feel
This simple question can take you and your friend, relative, or student ten steps ahead into a more comfortable and decent conversation.
“Hey, sorry for intruding. I’m wondering if you’re open to discussing health in general?”
“Okay, I guess.”
“Thanks! First things first, I think you are great just the way you are. If there’s anything you would like to discuss in detail, let me know because I happen to know a thing or two about health, nutrition and exercise.”
Then, pause to let the other person respond and go by their cues.
This might not be the perfect way. But it’s a good way to start a conversation around one of the most sensitive topics for an individual.
For me, this elicited a sweet smile from my sister. She acknowledged the honesty, openness, and option of my question.
She comfortably mentioned that yes, she is not perfectly happy being very slim and of small build compared with her average college peers. She has been frequently mistaken to be a school student whenever she went to teach high school students.
“All right, got it, perfect. So now do this and that and you will be…”
Stop. Hold your horses. Use your informed knowledge to guide better.
Get to know if they “understand” health
Still keeping my focus, I asked her if she felt that her small body stature was unhealthy.
Often times, people “think” that having a body structure which is not the norm around them simply implies that they are not doing enough to be healthy. Probably they are eating less, probably they are working out inadequately, or anything else which they are doing which is keeping them away from becoming the norm.
Something must be wrong, right?
It is a far more common notion than we’d like it to be. I suffered from this in high school. Come to think of it, at times in college too. It’s something you would have definitely thought of at points in your lives, isn’t it?
Jump to today, if you follow the topic “health” using even Quora, you know that body type is a very, very complex equation, and defining “healthy” for these different body types is even more so.
Slim people have been found to live longer than average people. Short people have been found to live much longer. Celebrating all bulky or muscle-ly does not seem to be statistically significant anymore. So is our present knowledge about the definition of health all wrong?
The point in consideration is that it is too naive to tag a person unhealthy based on his/ her body type “just because” it doesn’t fit the norm. Take for example a well-established study by the European Heart Journal, which stated: “Overweight and obese people were found to be at no greater risk of developing or dying from heart disease or cancer, compared with normal-weight people, as long as they were metabolically fit despite their excess weight.”
My cousin smiled again, and said, yes, she knows this (bummer) and she understands that her health is not detrimental by any angle right now. But, it can get tricky with less muscle and fat at a later stage of her life.
Which I completely agreed to (and appreciated her knowledge of.) She would like to gain some muscle — that’s her goal.
“Well then, start eating blah blah, start exercising blah blah …”
Nope. Not yet.
What have you been doing so far?
Okay, funnily enough, this article seems like a typical nutritionist interview.
Photo by Jopwell from Pexels
But probably not this question.
In my fitness journey, I felt the biggest gap was not being asked about my ongoing approach by whoever I was taking advice from.
I always wanted to discuss my current plan, and why it should work or was not working, or anything which ensured that I remained consistent with plans until their realization time. But no. Never happened.
This led to me running from one person’s advice to another’s without following through the first ones fully, eventually doomed to quit the second ones to fall for a third’s advice!
And it was sometime before I learned the art of being consistent, analyzing, improving and improvising personal strategies with focused external feedback. Therefore, always first listen to what they have been doing so far, before advising on what they should be doing from here.
My sister took a carb intensive diet, with meals broken into parts throughout the day. But she exercised quite less.
She believed, as so many of us slim people do, that bulking up on carbs would somehow, increase the fat and muscle content and make them at least look fuller. Lean muscles can come later.
She already checked some boxes I was going to advise on, so this step went very well:

- Skipped the conversation of “Oh I already do this exercise, eat these kinds of food, supplements (nothing new you said Shagun, huh)”
- I didn’t come out as a crappy suggestion box uncle
- Showed my genuine interest in her health
Your time to shine is now
When I got to know the holistic picture of her goals and her journey so far, I was of course in a better place to share my insights on how she could reach them.
I knew she wanted noticeable weight gain and that her body responds negligibly to carb intake. It seemed what she was missing was the bodybuilding requirement.
As she grew up, her metabolism had already evolved to balance the “intake vs requirement” see-saw. If she continued her excessive carb spree, it would only be a matter of time before she was gaining the wrong kinds of fats or starting to suffer from under-the-layers deficiencies or problems.
Therefore, I suggested she take up strength training and increase her healthy protein consumption. I gave her actionable and easy-to-follow practical steps. I forewarned, that it will be a slow journey, but it should be an effective one.
Photo by Andrea Piacquadio from Pexels
Takeaway summary

— Shagun Sharma, 2020-11-30 (https://medium.com/age-of-empathy/stop-giving-unsolicited-fitness-advice-3529a70d5957)
How to Setup Hadoop Cluster on AWS using Ansible

Hey everyone,
In this article, I will show you an interesting automation in which we will set up the Hadoop Cluster (HDFS) on top of AWS Cloud (EC2), and we will do everything using a tool called Ansible, which is one of the best automation tools. And the best thing in this article is that we will use Ansible’s Dynamic Inventory and Roles concepts to make the whole process more dynamic.
I expect that you have some familiarity with AWS, maybe you’ve launched an EC2 instance, or similar. And also have some basic knowledge about Ansible and Hadoop. You can easily find information about these technologies on the internet and in this article, our main aim is to integrate all these technologies.
So let’s understand what we actually gonna do to setup this automation.
Agenda!!!
In the real world, we generally deal with BigData and the size of this BigData is in Peta or ExaBytes or more. And to handle and process this big data we use technologies like Hadoop. But to store such huge data we need more and more storage devices or resources and it will increase our cost. But we can use Cloud Storage to store this bigdata as Cloud providers provide almost unlimited Storage like AWS S3, EBS, EFS.
So in this, we will launch EC2 instances to setup our Hadoop Cluster i.e., NameNode and DataNodes. And then we will create and attach EBS Volume with DataNodes to store the bigdata that the HDFS Cluster processes. And to setup all these things we will use Ansible to automate our work.
So Let’s do the Hands-on Practical!
To know the installation of Ansible refer to my previous article
Setup AWS Ansible Dynamic Inventory
Now, first we have to launch EC2 instances using Ansible. But before doing this we have to set up some more things in Ansible so that we are able to do provisioning on AWS using Ansible.

First, we have to set up the Ansible Dynamic Inventory, so that it is able to fetch all EC2 instances and then configure those instances.

Dynamic inventory is an Ansible plugin that makes an API call to AWS to get the instance information at run time. It gives you the EC2 instance details dynamically to manage the AWS infrastructure.
To setup a dynamic inventory in ansible the most common way is to use the pre-created EC2 python files. But instead of this, we will use an ansible dynamic inventory plugin that is aws_ec2.
So aws_ec2 plugin is a great way to manage AWS EC2 Linux instances without having to maintain a standard local inventory. This will allow for easier Linux automation, configuration management, and infrastructure as code of AWS EC2 instances.
In order to use Ansible for AWS, we have to install boto, boto3, and botocore Python libraries
pip3 install boto boto3
To enable the aws_ec2 plugin we have to add the following statement to the ansible.cfg file:
enable_plugins = aws_ec2
So our ansible.cfg file looks like this
ansible.cfg file example
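The screenshot itself is not reproduced here; a minimal ansible.cfg along the lines described (the remote user and private-key path are assumptions, not the article's exact values) might look like:

```ini
[defaults]
inventory = ./ansible_plugins          ; directory holding aws_ec2.yaml
host_key_checking = False
remote_user = ec2-user                 ; assumption
private_key_file = ~/taskoskey.pem     ; assumption

[inventory]
enable_plugins = aws_ec2
```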
In the ansible.cfg above, you can see that my inventory path is ansible_plugins; this is a handy way to manage all Ansible plugins, like when we have multiple AWS accounts or different cloud accounts.
Now after this we have to create a configuration file for the aws_ec2 plugin. Here I have created an aws_ec2.yaml file, as shown below.
---
plugin: aws_ec2
aws_access_key: <YOUR-AWS-ACCESS-KEY-HERE>
aws_secret_key: <YOUR-AWS-SECRET-KEY-HERE>

regions:
  - ap-south-1

strict: False

keyed_groups:
  - key: tags
    prefix: tag
  - key: placement.region
    prefix: aws_region
This is a simple configuration file and you can modify it according to your needs. So in the above file, notice keyed_groups keyword under which we basically write that how we want to group the instances like according to the tags, region, instance type, etc. And here I used tags to group my HDFS Cluster NameNode and DataNode instances.
Test the dynamic inventory by listing all the EC2 instances like this
ansible-inventory --list
Create Ansible Roles
In Ansible, a Role is basically a way to group all the things we write in a playbook — it is the primary mechanism for breaking a playbook into multiple files. Roles are collections of variables, tasks, files, templates, and modules. This simplifies writing complex playbooks and makes them easier to reuse.
In ansible to create a Role directory structure, we can use the ansible-galaxy command like this
ansible-galaxy init <role name>
Or we can also manually create this structure if we have limited groups.
Now in this, we will create 3 roles that are :
For launching EC2 Instances to setup HDFS Cluster
To configure an instance as a Hadoop Master (NameNode)
To configure instances as Hadoop Slaves (DataNodes)
Creating Role to Launch EC2 Instances
In this role, we will create a tasks file and a vars file.
Here is the tasks file which has all the tasks that ansible will perform
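The embedded file is not reproduced here; the following is a hedged sketch of a tasks/main.yml consistent with the description below — the ec2 module arguments and the instance_count/instance_tag variable names are assumptions, not the article's exact code:

```yaml
# tasks/main.yml (sketch) — EC2 launch role
- name: Launching EC2 instances
  ec2:
    region: "{{ region }}"
    image: "{{ ami_id }}"
    instance_type: "{{ instance_type }}"
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    key_name: "{{ key }}"
    count: "{{ instance_count }}"      # user-defined: number of instances
    instance_tags:
      Hadoop: "{{ instance_tag }}"     # user-defined: namenode / datanode
    assign_public_ip: yes
    wait: yes
    state: present

- name: Refreshing the dynamic inventory
  meta: refresh_inventory
```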
In the above file, you can see I have used variables, including some user-defined variables like the number of instances and the tags, which we will use to launch instances as NameNode and DataNodes with a single role. And notice that I used a module named meta, which will automatically refresh our dynamic inventory as soon as we launch an instance.
Variable file for this role is
---
region: "ap-south-1"
ami_id: "ami-0a9d27a9f4f5c0efc"
instance_type: "t2.micro"
subnet_id: "subnet-2e8ee562"
sg_id: "sg-007e984dff1d14721"
key: "taskoskey"
The security group I attached to all the instances allows only SSH and the HDFS service running on port 9001.
Creating Roles to Configure HDFS Cluster
To configure an HDFS Cluster using Ansible, you need some basic knowledge, like how to set up an HDFS Cluster manually. But if you are new to Hadoop and don’t know how to set up HDFS, then you can check out this article of mine, which is a step-by-step guide to setting up HDFS locally.
So Let’s see how to setup HDFS with Ansible
Creating Role for NameNode
In this role, we mainly have 3 directories i.e., tasks, templates, and vars.
Here is the tasks file which has all the tasks that we want Ansible to perform in order to configure the NameNode. It typically performs these tasks:
Install Hadoop and java
Configure instance as NameNode and start its services
Mount the EBS Volume with its NameNode Data directory
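The embedded tasks file is not reproduced here. A sketch consistent with the three steps above — the installer file names, device path, and exact commands are placeholders/assumptions, not the article's values — might look like:

```yaml
# tasks/main.yml (sketch) — NameNode role
- name: Installing Java and Hadoop      # placeholder installer names
  command: "rpm -i {{ item }} --force"
  loop:
    - "jdk-8u171-linux-x64.rpm"
    - "hadoop-1.2.1-1.x86_64.rpm"

- name: Copying the Hadoop config templates
  template:
    src: "{{ item }}"
    dest: "{{ hadoop_dir }}/{{ item }}"
  loop:
    - "hdfs-site.xml"
    - "core-site.xml"

- name: Mounting the EBS volume on the NameNode data directory
  mount:
    path: "{{ hdfs_dir }}"
    src: /dev/xvdf                      # placeholder device name
    fstype: ext4
    state: mounted

- name: Formatting the NameNode directory
  command: "echo Y | hadoop namenode -format"

- name: Starting Hadoop NameNode services
  command: "hadoop-daemon.sh start namenode"
  ignore_errors: true
```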
Now in the templates directory, there are 2 Hadoop files, i.e., hdfs-site.xml and core-site.xml, which Ansible will copy into the NameNode’s /etc/hadoop directory, replacing some variables with their values. These files are also known as Jinja templates. So the contents of these files are
hdfs-site.xml file
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.{{val}}.dir</name>
    <value>{{hdfs_dir}}</value>
  </property>
</configuration>
core-site.xml file
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://{{master_ip}}:{{hdfs_port}}</value>
  </property>
</configuration>
And here are the variables for this role
master_ip: "0.0.0.0"
val: "name"
hdfs_dir: "/master_hdfs"
hadoop_dir: "/etc/hadoop"
hdfs_port: 9001
Creating Role for DataNodes
The structure of the role to configure DataNode is same as the NameNode role and there are some minor changes in the tasks file and Variable file.
Since in Hadoop we do not format DataNodes, we have to remove the task that does the formatting; we also use a different command to start the DataNode services, so we have to remove that task as well. Typically we just have to remove 2 tasks from the NameNode tasks file and add only the task below, and the other tasks remain the same.
- name: Starting Hadoop DataNode services
command: "hadoop-daemon.sh start datanode"
ignore_errors: true
And the Variable file for this role is
master_ip: "{{ groups.tag_Hadoop_namenode[0] }}"
val: "data"
hdfs_dir: "/slave_hdfs"
hadoop_dir: "/etc/hadoop"
hdfs_port: 9001
Now all the required Roles have been created and the Dynamic Inventory is also ready. Now we just have to create one final Ansible Playbook that will run all these roles and set up our HDFS Cluster on AWS.
And here is that Playbook
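The playbook itself was embedded in the original; the following is a hedged sketch of a setup.yml that ties the three roles together — the role names, remote_user, and per-role variables are assumptions, while the tag_Hadoop_* group names follow the keyed_groups configuration shown earlier:

```yaml
# setup.yml (sketch) — role and variable names are assumptions
- hosts: localhost
  roles:
    - role: ec2-launch
      vars:
        instance_tag: "namenode"
        instance_count: 1
    - role: ec2-launch
      vars:
        instance_tag: "datanode"
        instance_count: 2

# The groups below come from the keyed_groups tag grouping
- hosts: tag_Hadoop_namenode
  remote_user: ec2-user        # assumption
  become: yes
  roles:
    - hadoop-namenode

- hosts: tag_Hadoop_datanode
  remote_user: ec2-user        # assumption
  become: yes
  roles:
    - hadoop-datanode
```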
Now we just have to run this playbook like this
ansible-playbook setup.yml
Just this one command will do everything for us from launching EC2 instances to configuring them as NameNode and DataNodes.
Output of this command

— Ajay Pathak, 2020-12-28 (https://medium.com/dev-genius/automate-hadoop-cluster-deployment-on-top-of-aws-using-ansible-194b623b9103)
How to Render a List with React, GraphQL, and Apollo

In this article, we are going to take a look at how to retrieve and display a list of objects using React Hooks, GraphQL, and Apollo client.
List Function Component
Let’s start by displaying a list of objects using a React function component.
import React from 'react';

function List({posts}) {
  return (
    <div>
      {posts.map(post =>
        <div key={post.id}>
          {post.title}
        </div>
      )}
    </div>
  );
}

export default List;
The List function component takes a list of post objects and displays the title of each post in a div element.
In the App parent component, a hardcoded list is created and sent to the List function component.
import React from 'react';
import List from './List';

const posts = [
  { id: 1, title: "Lorem Ipsum" },
  { id: 2, title: "Sic Dolor amet" }
];

function App() {
  return (
    <div>
      <List posts={posts} />
    </div>
  );
}

export default App;
That’s all we needed. We managed to define a hardcoded list of objects and then render that list using React.
GraphQL
Now let’s move further and take this list from a backend API. As discussed, we will use a GraphQL API instead of a REST API.
Create Fake API
We are going to use json-graphql-server to create a fake GraphQL API on the fly.
Create a db.js file with the following data definition; json-graphql-server generates the GraphQL schema from it.
module.exports = {
  posts: [
    { id: 1, title: "Lorem Ipsum", views: 254, user_id: 123 },
    { id: 2, title: "Sic Dolor amet", views: 65, user_id: 456 }
  ],
  users: [
    { id: 123, name: "John Doe" },
    { id: 456, name: "Jane Doe" }
  ],
  comments: [
    {
      id: 987,
      post_id: 1,
      body: "Consectetur adipiscing elit",
      date: new Date('2017-07-03')
    },
    {
      id: 995,
      post_id: 1,
      body: "Nam molestie pellentesque dui",
      date: new Date('2017-08-17')
    }
  ]
}
Then start the API using the json-graphql-server db.js --p 3001 command. At this point, the API is available at http://localhost:3001/.
Access Fake API
Next, we will create an API utility function getAllPosts() that retrieves all the posts from the API.
We call the backend API using the axios library.
import axios from 'axios';

const axiosGQL = axios.create({
  baseURL: 'http://localhost:3001/'
});

const Get_All_Posts_Query = `
{
  allPosts {
    id
    title
    views
  }
}
`;

function getAllPosts() {
  return axiosGQL
    .post('', { query: Get_All_Posts_Query })
    .then(getData)
    .then(data => data.allPosts);
}

function getData(response){
  return response.data.data;
}

export default { getAllPosts };
The Get_All_Posts_Query constant defines the GraphQL query to retrieve all the posts. It specifies that we want to retrieve all posts and that for each post we need the id, title, and the number of views.
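For reference, the response body for this query has roughly the following shape (the records shown are the sample data from db.js). axios wraps this body once more in its own response object, which is why getData returns response.data.data:

```json
{
  "data": {
    "allPosts": [
      { "id": 1, "title": "Lorem Ipsum", "views": 254 },
      { "id": 2, "title": "Sic Dolor amet", "views": 65 }
    ]
  }
}
```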
Render the list
We can make the API call inside the App parent component using the useEffect React hook.
The useEffect hook allows us to execute a side-effect operation, like retrieving data from an API, after the component has rendered. Once the list is retrieved from the server, it is saved as local state.
The local state is defined using the useState hook. Initially, the state is just an empty list. When the list is retrieved from the backend API, the state is updated using setPosts function.
Once we have the list, we send it to the List component.
import React, { useState, useEffect } from 'react';
import List from './List';
import api from './api';

function App() {
  const [posts, setPosts] = useState([]);

  useEffect(() => {
    api.getAllPosts()
      .then(setPosts);
  }, []);

  return (
    <div>
      <List posts={posts} />
    </div>
  );
}

export default App;
Apollo
Next, let’s make the API call using the Apollo Client, which provides better integration with the GraphQL API.
Start by installing it.
npm install @apollo/client --save
Then we create the Apollo Client and use it to make the API call.
import { ApolloClient, InMemoryCache, gql } from '@apollo/client';

const client = new ApolloClient({
  uri: 'http://localhost:3001/',
  cache: new InMemoryCache()
});

const Get_All_Posts_Query = `
{
  allPosts {
    id
    title
    views
  }
}
`;

function getAllPosts() {
  return client
    .query({
      query: gql`${Get_All_Posts_Query}`
    })
    .then(getData)
    .then(data => data.allPosts);
}

function getData(response){
  return response.data;
}

export default { getAllPosts };
As we can see, there is not much difference from the Axios API call.
useQuery Hook
Next, we are going to simplify the API integration by using the useQuery hook provided by the Apollo library.
This time we just keep the definition of the GraphQL query in the API file.
import { gql } from '@apollo/client';

const Get_All_Posts_Query = gql`
{
  allPosts {
    id
    title
    views
  }
}
`;

export { Get_All_Posts_Query };
In the App parent component we execute the API query using the useQuery hook.
As you can see, we make the API call by passing a GraphQL query string to the useQuery hook. The hook returns an object containing the loading , error , and data properties.
The loading property is used to display a loading message when necessary.
The error property is used to display an error message when needed.
The data retrieved from the server stays in the data property.
import React from 'react';
import { useQuery } from '@apollo/client';
import { Get_All_Posts_Query } from './api';
import List from './List';

function App() {
  const { loading, error, data } = useQuery(Get_All_Posts_Query);

  if (loading) return (<p>Loading...</p>);
  if (error) return (<p>Error : {error.message}</p>);

  return (
    <div>
      <List posts={data.allPosts} />
    </div>
  );
}

export default App;
Material UI
Next, we will improve the layout using the Material UI components.
npm install @material-ui/core
We can modify the query to retrieve some more information, like the user’s name.
import { gql } from '@apollo/client';

const Get_All_Posts_Query = gql`
{
  allPosts {
    id
    title
    views
    User {
      name
    }
  }
}
`;

export { Get_All_Posts_Query };
Then we can use the Material UI List components to display our data.
import React from 'react';
import List from '@material-ui/core/List';
import ListItem from '@material-ui/core/ListItem';
import ListItemText from '@material-ui/core/ListItemText';
import Divider from '@material-ui/core/Divider';

function PostList({posts}) {
  return (
    <div>
      <List component="nav">
        {posts.map(post =>
          // key added so React can track each list entry
          <React.Fragment key={post.id}>
            <ListItem button>
              <ListItemText secondary={post.User.name} />
              <ListItemText primary={post.title} />
              <ListItemText primary={post.views} />
            </ListItem>
            <Divider />
          </React.Fragment>
        )}
      </List>
    </div>
  );
}

export default PostList;
Recap
List components can be defined in React as functions returning an HTML representation of the list.
GraphQL provides an alternative way of building APIs. The result is retrieved from the API by sending GraphQL queries. The API calls can be done by making POST calls to the API and sending the GraphQL queries.
The useQuery hook from the Apollo library provides a nicer way of executing GraphQL queries.
The source code is available on github.
Martin, The Plains of Heaven
Your Friendly, Neighborhood Superintelligence
“What if the great apes had asked whether they should evolve into Homo sapiens [and said] ‘Oh, we could have a lot of bananas if we became human’?” — Nick Bostrom
One of our older stories is the demon summoning that goes bad. They always do. Our ancestors wanted to warn us: beware of beings who offer unlimited power. These days we depend on computational entities that presage the arrival of future demons — strong, human-level AIs. Luckily our sages, such as Nick Bostrom, are now thinking about better methods of control than the demon story tropes of appeasement, monkey cleverness, and stronger magical binding. After reviewing current thinking I’ll propose a protocol for safer access to AI superpowers.
The dilemma of intelligence explosion.
We are seeing the needle move on AI. Many, but not all, experts, galvanized in part by Bostrom’s publication of his book, Superintelligence, believe the following scenario.
A disconnect will happen sometime after we achieve a kind of “strong AI”, so all-around capable that it is like a really smart human. The AI will keep getting smarter as it absorbs more and more of the world’s knowledge and sifts through it, discovering better modes of thinking and creative combination of ideas.
At some point the AI will redesign itself, then the new self will redesign itself — rinse, repeat — and the long-anticipated intelligence explosion occurs. At this point, we might as well be just whistling past the graveyard, for we shall have no control over what happens next.
Bostrom said we risked creating a super AI akin to a capricious, possibly malevolent, god. The toxic combination was autonomy and unlimited power, with, as they would say in the days of magic, no spells to bind them to values or goals both harmless and helpful to ourselves.
The AI menagerie.
Bostrom imagined the types of AI’s along a scale of increasing power, mental autonomy, and risk of harm to us:
{ tool → oracle → genie → sovereign }.
A tool is controllable, like the AI apps of today, but unoriginal, only able to solve problems that we can describe, and maybe even solve ourselves more slowly. An oracle would only answer difficult questions for us, without any goals of its own. A genie would take requests, think up answers or solutions and then implement them, but then stop and wait for the next request. A sovereign would be like a genie that never stopped, and was able to create its own goals.
Bostrom showed how any of the weaker types could evolve into stronger ones. Humans would always be wanting more effective help, thus granting more freedom and power to their machine servants. At the same time, these servants become more capable of planning, manipulating people, and making themselves smarter: developing so-called superpowers that move them towards the sovereign end of the spectrum.
A Matter of goals.
A key concern of Bostrom’s is about a superintelligence having the wrong “final goals.” In service of a final goal a tireless and powerful AI might do harm to its human creators, either with or without intending harm or caring about said harm.
Degas, Autumn Landscape
Bostrom’s examples include turning all the atoms of the earth (including our own atoms, of course) into paper clips, or using all available matter into computronium to be certain about some goal calculation, such as counting grains of sand.
These examples were chosen to show how arbitrary a super AI’s behavior might be. In human terms, such maniacal, obsessive behavior is stupid, not intelligent. But it could happen if the AI lacked values about the preservation of life and the environment that we take for granted. If we were able to impart such values to an AI, could we expect that it would retain them over time? Bostrom and others argue that it would.
Goal integrity as identity.
Consider how much of our lives is spent servicing final goals like love, power, wealth, family, friendship, deferring our own deaths and destroying our enemies. It’s a big fraction of our time, but not all of it. We also crave leisure, novel stimulation, and pleasure. All of us daydream and fantasize. A pitiful few even learn and create for its own sake. If our motivations are complex, why wouldn’t a super AI’s be too?
Arguably an AI of human level will be motivated to mentally explore its world, our world, in search of projects, things to change, ideas to connect and refine. To function well in a dynamic environment it should also make choices that maximize its future freedom of action. This, in turn, would require wider, exploratory knowledge.
Still, we should assume as Bostrom did that specific goals for an AI might be pursued with obsessive zeal. Much, then, hinges on our having some control of an AI’s goals. How could this be done, given an entity that can rewrite its own programming code?
Bostrom’s answer is that the essence of an AI, since it is made of changeable software, is, in fact, its set of final goals. Humans change their values and goals, but a person is locked into one body/mind that constitutes, in various ways, its identity and Self. Our prime goal is survival of that Self.
On the other hand, a computerized agent consists of changeable parts that can be swapped, borrowed, copied, started over, and self-modified. Thus the system is not really constituted by these parts; its essence is its final goals, which Bostrom calls teleological (goal-directed) threads. It is motivated to not change those goals because that is the only way to ensure that they will be satisfied in the future.
Learning values: the Coherent Extrapolated Volition (CEV) of humanity.
Raphael, The School of Athens
There is obvious circularity in this concept of final goal stability, and no one whom I have read thinks that a super AI’s values really won’t change. Furthermore, human values are complex, dynamic and often incompatible, so we cannot trust any one group of developers to choose values and safely program them into our AI. Solutions proposed so far rely on having the system learn values in a process that hopefully makes them: (1) broadly desirable and helpful to humans, (2) unlikely to cause havoc, such as an AI “tiling Earth’s future light cone with smiley faces” or installing pleasure electrodes in everyone’s brain, and (3) constitute a starting point and direction that preserves these qualities as values and goals evolve.
The one trick that pops up over and over is programming the initial AI (often called the “seed AI”) with the main goal of itself figuring out what human-compatible values should be. This applies new intelligence to a very difficult problem, and it means that continual refinement of said values is part of the system’s essence.
An early paper (“Coherent Extrapolated Volition”, it’s witty, insanely original and brilliant; you should read it) from Eliezer Yudkowsky of the Machine Intelligence Research Institute calls the goal of this learning process, “coherent extrapolated volition” or CEV. Volition because it’s what we truly want, not usually what we say we want, which is obscured by self and social deception. Yudkowsky said: “If you find a genie bottle that gives you three wishes … Seal the genie bottle in a locked safety box … unless the genie pays attention to your volition, not just your decision.” Extrapolated because it has to be figured out. And Coherent because it needs to find the common ground behind all our myriad ideological, political, social, religious and moral systems.
Whether a CEV is possible.
Perhaps you are freaking out that a CEV might endorse norms for human behavior that you find utterly repellent, yet those norms are embraced by some sizable fraction of humanity. And likewise, for those other people, your ideas are anathema. The supposed antidote to our division on these matters is the “coherent” aspect of CEV. That is, somehow with better thinking and wider knowledge, an AI can find our moral common ground and express it in a way that enough of us can agree with it. The word, hogwash, comes to mind, and I wonder whether some philosopher is already writing a book about why CEV can never work. But let’s look deeper.
Yudkowsky put the goal in what he (as an AI hard-head) called poetic terms: “(a CEV is) our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.”
So consider, we have never actually tried something like this: a respected and eminent thinker takes all of our different moral concerns and finds civilizational principles on which the most respected people of all countries and creeds can agree. If you were writing the screenplay, the attempt would be about to fail until The AI gives humanity some gift of great value, the solution to an age-old problem. This causes a sea change in public attitude towards The AI, which is now seen as a benefactor, not a devil. This parable suggests a necessity for the advanced AI project to be open and international.
Since Yudkowsky’s first paper the idea of CEV has gained traction, both in Bostrom’s important book and elsewhere (Steve Petersen, Superintelligence as Superethical). Thinkers are trying to identify the initial mechanisms of a CEV process, such as ways to specify goal content, decision theories, theories of knowledge and its real-world grounding, and procedures for human ratification of the AI’s CEV findings. For CEV, these philosophical skills will still be useless without a good theory of ethical coherence. That is, how to find coherence between various “beliefs about how the world is and desires about how the world should be” (Petersen, above) Clearly, the success and consequences of a strong AI project now depend, not just on esoteric computer science, but also on practical and very non-trivial philosophy.
Thinkism, science and superpowers.
It’s often said that an intelligence explosion will happen so fast that we will have no time to react or guide it in any way. This belief has been mocked by Kevin Kelly as depending upon “thinkism,” the idea that intelligence by itself can cause an explosion of progress. According to Kelly, thinkism ignores a demonstrated fact: new power requires new knowledge about nature, and you have to do science — both observational and experimental — to acquire new knowledge. Science takes time and physical resources. How would a new AI, with the goal of making itself better, or of solving any difficult problem, do the science?
Bostrom thinks that there are a number of “superpowers” an AI would be motivated to develop. These would be instrumental in the sense that they would serve many other goals and projects. One superpower is Technology Research, which is also an enabler and paradigm for scientific research.
What to avoid: the AI demon as Bond villain.
Suppose we decide to handicap an AI by severely limiting its access to the physical world, effectively having it depend on us for executing (but maybe not for designing) technology research. This leaves a mere four other (Bostrom-identified) superpowers for it to start getting new research done: Strategizing, Social Manipulation, System Hacking, and Economic Productivity. A physically isolated AI could teach itself how to manipulate people just by studying recorded knowledge about human history, psychology, and politics. It would then use advanced strategy to indirectly gain control of resources, and then use those resources as a wedge to improve the other superpowers.
Like a villain from a James Bond story, the AI might first develop its own workforce of true believers, idealists, cynics and sociopaths. These would help it gain and operate probably secret labs and other enterprises. It could bypass some of the ponderousness of scientific research: no need for publication and peer review, no grant applications and progress reports. But this alone would not speed things up enough for a super AI. It would need to develop its own robots, and possibly find a way to create zombie humans to deal with opposition by normal humans.
Vision of a safe oracle AI.
Assume that an AI could be sufficiently isolated from direct access to physical resources, and that its communications could only occur on public channels. The latter condition would prevent secret manipulation of people, and would also diminish human fears that an AI was favoring particular factions. Further, assume that a tool-level demi-AI (also locked into public-only channels) was available to analyze the open communications for evidence of concealed messages that might be used by the seed AI to manipulate allies or dupes into setting it free. The human governance of this AI would retain the ability and authority to physically shut it down in case of trouble.
What It can do for us.
Such an AI might be considered to be a safe oracle and adviser to the whole world. It could be allowed read-only access to a large fraction of human knowledge and art. It would start out knowing our best techniques of: idea generation from existing knowledge, extrapolative reasoning, strategic planning, and decision theory, to help it answer questions of importance to human well-being.
Maybe the first priority tasks for an oracle should be development of advice on steps instrumental to CEV, including better theories of things like values clarification and decision-making, among a host of philosophical problems. Both the results of these tasks, and the consequent first drafts of a CEV, would be something like a constitution for the combined human/AI civilization. It could allow us the much safer, slow, and non-explosive takeoff to superintelligence hoped for by Bostrom and other deep thinkers.
At some point in its development process, the oracle AI would give us deeply considered, well-explained opinions about how to deal with environmental, political, social, and economic problems. Advice on setting R&D priorities would help us avoid the various other existential risks like climate, nanotech, nuclear war, and asteroids. The oracle would continue to refine what it, or a successor AI, could maintain as a value and motivation framework when and if we decide to allow it out of its containment box and into the wild.
Slowing the explosion.
The public oracle approach allows us to get the early advantage of a “human-level” AI’s better thinking and wider information reach. It tries to minimize human factionalism and jockeying for advantage. There are plenty of obstacles to a public oracle project. For one, it must be the first strong AI to succeed. If a less safe project occurs first, the public oracle would probably never happen. Bostrom made a strong case that the first artificial general intelligence without adequate controls or benign final goals will explode into a “singleton”, an amoral entity in control of practically everything. Not just a demon but an unconstrained god.
Any explosive takeoff bodes ill for humanity. The best way to get a slower takeoff is a project that includes many human parties who must agree before each important step is taken. It might be that we shall be motivated for that kind of cooperation only if we’ve already survived some other existential challenge, such as near environmental collapse.
A slow project has a downside: it could get beaten to the finish line. Bostrom has identified (“Strategic implications of openness in AI development”) another twist here. A slow, inclusive project would need to be open about its development process. Openness could increase the likelihood and competitiveness of rival projects. Being safe takes time, so the project that pays the least attention to safety wins the race to strong AI, and “Uh-oh!”. Here again, a single slow and inclusive project bodes best.
The challenge of a safe and friendly strong AI can be productive, even if we are decades from making one. It forces us to find better ideas about how to be a mature species; how to not be always shooting ourselves in the foot.
Classification from scratch — Mammographic Mass Classification
In our previous article, we discussed the classification technique in theory. It’s time to play with the code 😉 Before we can start coding, the following libraries need to be installed in our system:
Pandas: pip install pandas
Numpy: pip install numpy
scikit-learn: pip install scikit-learn
The task here is to classify Mammographic Masses as benign or malignant using different Classification algorithms including SVM, Logistic Regression and Decision Trees. Benign is when the tumor doesn’t invade other tissues whereas malignant does spread. Mammography is the most effective method for breast cancer screening available today.
Dataset
The dataset used in this project is “Mammographic masses” which is a public dataset from UCI repository (https://archive.ics.uci.edu/ml/datasets/Mammographic+Mass)
It can be used to predict the severity (benign or malignant) of a mammographic mass from BI-RADS attributes and the patient’s age. Number of Attributes: 6 (1 goal field: severity, 1 non-predictive: BI-RADS, 4 predictive attributes)
Attribute Information:
BI-RADS assessment: 1 to 5 (ordinal) Age: patient’s age in years (integer) Shape (mass shape): round=1, oval=2, lobular=3, irregular=4 (nominal) Margin (mass margin): circumscribed=1, microlobulated=2, obscured=3, ill-defined=4, spiculated=5 (nominal) Density (mass density): high=1, iso=2, low=3, fat-containing=4 (ordinal) Severity: benign=0 or malignant=1 (binomial)
Screenshot of top 10 rows of the dataset
So we talked a lot about the theory behind it. It’s fairly simple to build a classification model. Follow the below steps and get your own model in an hour 😃 So let’s get started!
Approach
Create a new IPython Notebook and insert the below code to import the necessary modules. In case you get any error, do install the necessary packages using pip.
import numpy as np
import pandas as pd
from sklearn import model_selection
from sklearn.preprocessing import StandardScaler
from sklearn import tree
from sklearn import svm
from sklearn import linear_model
Read the data using pandas into a dataframe. To check the top 5 rows of the dataset, use df.head() . You can specify the number of rows as an argument to this function in case you want to check different number of rows. BI-RADS attribute has been given as non-predictive in the dataset and so it won’t be taken into consideration.
input_file = 'mammographic_masses.data.txt'
masses_data = pd.read_csv(input_file,
                          names=['BI-RADS', 'Age', 'Shape', 'Margin', 'Density', 'Severity'],
                          usecols=['Age', 'Shape', 'Margin', 'Density', 'Severity'],
                          na_values='?')
masses_data.head(10)
You can get a description of the data like values of count, mean, standard deviation etc as masses_data.describe()
As you might have observed, there are missing values in the dataset. Handling missing data is something very important in data preprocessing. We fill out the empty values using the mean or mode of the column depending on the data analysis. For simplicity, as of now, you can drop the null values from the data.
masses_data = masses_data.dropna()
features = list(masses_data.columns[:4])
X = masses_data[features].values
print(X)
labels = list(masses_data.columns[4:])
y = masses_data[labels].values
y = y.ravel()
print(y)
The vector X contains the input features (columns 1 to 4, excluding the target variable); their values will be used for training. The target variable, i.e. Severity, is stored in the vector y.
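As an aside, here is a hedged sketch of the imputation alternative mentioned earlier: filling missing values instead of dropping rows. The column names match this dataset, but which statistic suits each column is a judgment call.

```python
# Sketch: impute instead of dropna(). Numeric columns get the column
# mean; nominal columns get the column mode (most frequent value).
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'Age': [67.0, np.nan, 58.0],     # numeric attribute
    'Shape': [4.0, 4.0, np.nan],     # nominal attribute
})

df['Age'] = df['Age'].fillna(df['Age'].mean())           # mean imputation
df['Shape'] = df['Shape'].fillna(df['Shape'].mode()[0])  # mode imputation

print(df.isnull().sum().sum())  # 0 -> no missing values remain
```

This keeps all rows, which matters when the dataset is small and dropping rows would lose too much signal.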
Scale the input features to normalize the data within a particular range. Here we are using StandardScaler() which transforms the data to have a mean value 0 and standard deviation of 1.
scaler = StandardScaler()
X = scaler.fit_transform(X)
print(X)
Create training and testing set using train_test_split . 25% of the data is used for testing and 75% for training.
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.25, random_state=0)
To build a Decision Tree classifier from the training set, we just need to use the DecisionTreeClassifier() function. It has a number of parameters, which you can find in the scikit-learn documentation. For now, we will just use the default value of each parameter. Use predict() on the test input features X_test to get the predicted values y_pred. The score() function can be used directly to compute the accuracy of prediction on the test samples.
clf = tree.DecisionTreeClassifier(random_state=0)
clf = clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
print(y_pred)
clf.score(X_test, y_test)
The DecisionTreeClassifier() without any tuning gives a result around 77% which we can say is not the worst.
To build an SVM classifier, the classes provided by scikit-learn include SVC , NuSVC and LinearSVC . We will build a classifier using SVC class and linear kernel. (To know the difference between SVC with linear kernel and LinearSVC you can go to the link — https://stackoverflow.com/questions/45384185/what-is-the-difference-between-linearsvc-and-svckernel-linear/45390526)
svc = svm.SVC(kernel='linear', C=1)
scores = model_selection.cross_val_score(svc,X,y,cv=10)
print(scores)
print(scores.mean())
In this section, I am trying to show you a different approach for creating a classifier. The svc classifier object is created using the SVC class. The cross_val_score() function then evaluates the score using cross-validation. Cross-validation is used to avoid overfitting: in k-fold cross-validation, k-1 folds of the data are used for training and 1 fold for testing. The score obtained using this is around 79.5%.
Similar to the Decision Tree classifier, we can also create a Logistic Regression classifier using the LogisticRegression() function. The classifier is fitted on the training set and similarly used to predict target values for the test set. It gives a mean score of 80.5%.
clf = linear_model.LogisticRegression(C=1e5)
clf = clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
scores = model_selection.cross_val_score(clf,X,y,cv=10)
print(scores)
print(scores.mean())
Thus, if we want to build a single classifier we can do it in just 10 lines of code😄. And in no effort, we achieved an accuracy of 80%. You can create your own classification models (there are plenty of options) or fine-tune any of these 😛. Also if you are interested you can give a shot to Artificial Neural Networks as well 😍. For me, I got the best accuracy of 84% with ANNs. You can find the code for this on my GitHub account.
If you liked the article do show some ❤ Stay tuned for more! Till then happy learning 😸
Fewer Angels
Audible Sundays is a freeform series inspired by one new music release a week. Each entry is written in synchronicity to the mood of the song and the photograph. Follow the playlist on Spotify:
Super Furry Animals
When Super Furry Animals appeared on the British music scene in the mid-1990s, they were surrounded by foppish, guitar pop bands — bands like Radiohead, Oasis and Blur. While it took those bands years to turn to experimentation — to get “weird” — SFA had been weird from the get-go. The Welsh band’s 1996 release, Fuzzy Logic, was an oddball in a year of Wonderwalls and Country Houses. Their debut record whirred, blipped and squawked but unlike other artsy rock, it actually rocked. Like a masculine Ziggy Stardust, SFA charted a course through edgy guitars, bizarre lyrics (dreams about hamsters generating electricity?), and oddly catchy choruses.
Each subsequent release has proven that their fertile imagination knows no bounds. Radiator was somehow weirder yet even more catchy. Songs like “Chupacabras” turned their sci-fi lyrics (in this case about vampire bats) into fist-pumping rock anthems. Even if you couldn’t understand the choruses you could chant along.
Once they’d perfected their own brand of space rock on their third record Guerilla, they released the mostly down-to-earth, though sung entirely in Welsh, Mwng. Their last two records, Rings Around the World and last year’s Phantom Power, have explored a more expansive, lush sound. Strings accented the analog synths, sequencers and rock instruments. The results have been something like a space-age Pet Sounds. The band also released these last albums in quadrophonic (surround) sound on DVD.
SFA’s keyboardist, Cian Ciaran, told The Rage that the road to DVD albums began as an experiment in sound that blossomed into a full-blown media show. “We did quadrophonic sound in our live shows about two years before, like the end of ’98. Then DVDs started becoming more accessible over here. Primarily we wanted to have surround sound. It was a sonic thing not a visual thing.” But the versatility of the DVD medium pushed the band to develop a visual element that eventually made its way to their live show. “The sound came live and then on DVD. And then the visuals came on DVD and then to the live show.”
It was the logical step for a band who’s always been fascinated by technology. Samplers, sequencers, techno beats, and synth sounds have always been a part of the band’s music. Ciaran attributes the electronic influence to the dance music that the band has listened to since the late 1980s. When questioned about technology’s influence, he says, “It’s more a question of embracing what’s available. If people shied away from technology we’d still be playing acoustic guitars around the campfire. Technology is something you can’t be blinkered to. You should embrace it and push it and if it helps your music, then use it and abuse it. We’re not purists in that sense … or in any sense.”
SFA’s voracious appetite for new sounds and ideas begs the question: How do they keep challenging themselves? “It’s not something we think about,” Cian answers. “I suppose you think more about when’s it gonna stop rather than where the next thing’s coming from.” It’s a problem most bands would love to have. “There’s so many ideas. They start backing up and you can’t get them out of the way quick enough or fast enough.”
Despite their citations of dance music influences and their liberal use of any and all electronics, there has never been any doubt that SFA is a rock band. From the wiry, distorted guitar riffs of “Frisbee” off their debut to the steady fuzz of “Venus & Serena” from their most recent record, SFA have never abandoned rock music. They may take rock music on some interesting detours, but their experiments only strengthen their musical voice. At once international and provincial, rural and cosmopolitan, lo-fi and high-tech, Super Furry Animals personify a 21st century rock band. Their appearance at the Exit/In is a rare treat to see this fascinating Welsh group. | https://medium.com/hey-todd-a/super-furry-animals-feature-48ebd7b9fb62 | ['Todd A'] | 2020-03-11 23:33:02.329000+00:00 | ['Zine', 'Interviews', 'Nashville', 'Pop Culture', 'Music'] |
Array.splice and Array.slice in JavaScript | Array.splice
Splice is used to modify the contents of an array: it can remove elements, replace existing elements, or even add new elements to an array.
Using the splice function updates the original array.
Consider the following array:
const arr = [0, 1, 2, 3, 4, 5, 6, 7, 8];
Array.splice signature:
arr.splice(fromIndex, itemsToDelete, item1ToAdd, item2ToAdd, ...);
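Two details the signature glosses over, worth noting before the examples that follow: splice mutates the array in place and returns the removed elements as a new array; also, fromIndex can be negative (counted from the end), and omitting itemsToDelete removes everything from fromIndex onward. A quick sketch:

```javascript
const arr = [0, 1, 2, 3, 4, 5, 6, 7, 8];

// A negative fromIndex counts back from the end: -3 is index 6 here.
const removed = arr.splice(-3, 2); // removes the elements at indices 6 and 7

console.log(removed); // [6, 7]
console.log(arr);     // [0, 1, 2, 3, 4, 5, 8]

// Omitting itemsToDelete removes everything from fromIndex onward.
const rest = arr.splice(3);
console.log(rest); // [3, 4, 5, 8]
console.log(arr);  // [0, 1, 2]
```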
Removing the elements
To remove elements from an array, we write:
var deletedItems = arr.splice(3, 2);
This deletes two elements, starting at index 3, and returns the deleted elements as a new array. As a result, we get:
deletedItems // [3, 4]
arr // [0, 1, 2, 5, 6, 7, 8]
Adding new elements
To add new items to an array, we write:
const arr = [0, 1, 2, 3, 4, 5, 6, 7, 8];
var arr2 = arr.splice(2, 0, 100, 101);
At index 2, this inserts the numbers 100 and 101 without deleting anything. The final values will be:
arr2 // [], since we didn't delete any elements from the array
arr // [0, 1, 100, 101, 2, 3, 4, 5, 6, 7, 8]
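Before moving on to replacing elements, a quick contrast with Array.slice, the other method named in this article's title: unlike splice, slice copies a portion of the array and never modifies the original.

```javascript
const arr = [0, 1, 2, 3, 4, 5, 6, 7, 8];

// slice(begin, end) copies elements from begin up to (but not including) end.
const copy = arr.slice(2, 5);

console.log(copy); // [2, 3, 4]
console.log(arr);  // [0, 1, 2, 3, 4, 5, 6, 7, 8]  (unchanged)
```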
Modifying an existing element
We can cleverly modify an existing element in an array using splice: in a single call, we delete the item at an index and insert a new element in its place.
const arr = [0, 1, 2, 3, 4, 5, 6, 7, 8];
To replace 3 with 100, we write:
var arr2 = arr.splice(3, 1, 100);
// which means - at index 3, delete 1 element and insert 100
We get the following values for arr and arr2 after executing the above code snippet:
arr2 // [3], the deleted element
arr // [0, 1, 2, 100, 4, 5, 6, 7, 8] | https://medium.com/developers-arena/array-splice-and-array-slice-in-javascript-e53006d4d6fb | ['Kunal Tandon'] | 2020-02-16 17:34:22.341000+00:00 | ['Angular', 'JavaScript', 'React', 'Programming', 'Arrays'] |
A Subjective Top Ten — The Best Pokémon Songs | “I wanna be the very best, like no one ever was . . .”
There are few people under the age of thirty who don’t know the next lines to this song. In fact, many of us know all the lines to the Pokémon Theme and can’t help but sing it at the top of our lungs at the slightest mention of the track — or Pokémon in general.
In addition to being a great metaphor for life as a journey, the franchise has introduced us to fantastic characters with intriguing storylines, has forced us to waste days of our lives battling Zubat, and has made us long for eternal youth with pretty much unlimited money and no responsibilities whatsoever.
What we tend to forget is that the Pokémon franchise has not only produced world-renowned games, anime series, films, and manga, but also truly amazing music. Let’s take a look at ten particularly impressive songs that are tied to the world of Pokémon, trying to determine which one is “the very best”.
Honourable Mention: “Don’t Say You Love Me”
performed by M2M, from Pokémon: The First Movie (1999)
Before we get into the top ten, here’s a reminder that M2M — the Norwegian duo of Marion Raven and Marit Larsen — had their first international hit with the lead single from the first Pokémon movie.
Yes, Marit Larsen, the singer-songwriter responsible for the mega hit “If A Song Could Get Me You”, rose to fame with a Pokémon song.
That being said, “Don’t Say You Love Me” may be a great pop song, but with lyrics that don’t really fit a Pokémon theme at all the track doesn’t quite make the top ten.
10. “To Know The Unknown”
performed by Innosense, from Pokémon 3: The Movie — Spell Of The Unown: Entei (2001)
Innosense, the girlband that counted Britney Spears as a member for less than a year and later released a song with the very 2000-ish title “www.fan-ta-see”, recorded this ballad for the third — and possibly best — Pokémon movie.
“To Know The Unknown” seems to be a cheesy all-I-need-is-you love song upon first listen, but given that the movie revolves around a little girl missing her mother, the track is actually far deeper.
As such, it sets the tone for several Pokémon songs on this list: it can be interpreted as a love song, but it carries additional meaning. Also, it’s melancholy and hopeful at the same time. This is an interesting feat that surprisingly many Pokémon tracks achieve, making them more complex and layered than most music aimed at children and young teens.
9. “Black And White Theme”
performed by Erin Bowman and Joe Philips, from Pokémon the Movie: White — Victini and Zekrom and Pokémon the Movie: Black — Victini and Reshiram (2011)
The Generation 5 anime series, released under the Black & White title, was met with plenty of criticism for illogical battles, annoying characters, and plain bad writing. While the series itself may not represent the strongest phase of the franchise, Pokémon music was at its best during the Black & White era.
The full version of the “Black And White Theme” that accompanied the duo of Victini movies set a new tone for Pokémon theme songs. Slower and more serious than former opening tracks, the mid-tempo song marks a more grown-up take on the anime’s traditional subject matter.
The lyrics reflect that development with lines such as “it’s not always black and white” and “it’s not always right or wrong” that break the simplified good-versus-bad storytelling of most franchises for a more nuanced approach.
8. “I’ll Always Remember You”
performed by Kirsten Price, from Pokémon: The Rise Of Darkrai (2008)
A mid-tempo ballad with an unusual structure, “I’ll Always Remember You” is the first song on this list with plenty of tear-potential. With heartfelt lyrics such as “I’ll carry your dreams until they come true”, the track works well within the Pokémon canon, but serves as an emotional ode to someone gone too soon in the real world as well.
While some Pokémon songs clearly cater to kids, it’s the powerful pop moments like this one that stand out from a musical and lyrical perspective. If you thought you were too old to enjoy Pokémon in general and Pokémon music in particular, let this gem change your mind.
7. “We’re Coming Home”
performed by Jess Domain, from Pokémon the Movie: Genesect And The Legend Awakened (2013)
Back to Generation 5 and another example of a pretty bad movie accompanied by a pretty fantastic soundtrack. “We’re Coming Home” is the perfect ending theme for the final installment of the Black & White series, looking back on the travels of the Unova team and paving the way for new adventures in a new region.
“We had some fun out there, now didn’t we?” the song starts. “I never knew the world had so much for us to see.” Then the focus shifts towards the future as Jess Domain sings “we’re coming home, the only place that’s never too far.”
With a slightly more interesting chorus, “We’re Coming Home” might have ended up even higher on this list, but its sublime verse melodies and lyrics that surely put a smile on your face earn it a solid 7th place.
6. “Pokémon Johto”
performed by PJ Lequerica, from Pokémon: The Johto Journeys (2000)
The previous entries on this list mainly stand out because of their emotional lyrics and impressive songwriting. Well, it’s fair to say that “Pokémon Johto”, the theme song to the third season of the anime, isn’t particularly strong in that regard. Lines like “everybody wants to be a master, everybody wants to show their skills” are a little too straightforward to hold any deeper meaning.
Then why is this track number 6, you ask? For the simple reason that no other Pokémon song is as catchy as this one. From the silly “do do doop do do do” bit to the powerful chorus, “Pokémon Johto” is bursting with energy and positivity. So let’s dance around for a moment and belt out the simple lyrics at the top of our lungs, shall we?
5. “The Time Has Come (Pikachu’s Goodbye)”
performed by Marti Lebow, from Pokémon Heroes: Latios & Latias (2003)
Many fans consider the Latios and Latias movie the best the franchise ever produced. It not only includes one of the saddest scenes in Pokémon history (the death of Latios), but also one of the saddest songs in the Pokémon discography. However, “The Time Has Come” had already played in the original anime season years before it was used in the ending credits of Pokémon Heroes.
While the song features a similar tone to “I’ll Always Remember You”, the subject matter is slightly different, and the wording is even more emotional. From the lines “I can see the day we’ve met, just one moment and I knew you’re my best friend” to “as our team is torn apart, I wish we could go back to the beginning”, the lyrics are insanely heartbreaking — even if you don’t apply them to an adorable electric mouse.
If you aren’t moved to tears by this one, then a) you’re weird, and b) just continue reading. You’ll get there, I promise.
4. “This Side Of Paradise”
performed by Bree Sharp, from Pokémon: Destiny Deoxys (2005)
Enough sadness. At least for now. “This Side Of Paradise” opens with a “la la la” bit that puts Johto’s hooks to shame. Then it moves through fast-paced verses and a chorus that oozes with happiness and zest for life.
With feelgood lines such as “the clouds are high, the world is open” and melodies any popstar would kill for, “This Side Of Paradise” is the Pokémon theme song that “Don’t Say You Love Me” should have been.
That being said, just close your eyes for a moment and imagine the Spice Girls singing this. Cool thought, right?
3. “Pokémon Theme”
performed by Jason Paige, from Pokémon: Indigo League (1998)
You knew this one had to come sooner or later. The incomparable original Pokémon theme breaks into the top three, but it doesn’t quite manage to top this list.
On the one hand, the iconic song has masterfully set the tone for most anime themes to follow. It captures Ash Ketchum’s energy and positive outlook on life perfectly, and its chorus is without doubt one of the strongest Pokémon has to offer.
That being said, “Pokémon Theme” was written with a young audience in mind, meaning it doesn’t exactly have a lot of depth to it. Also, it only works in the context of the franchise, while most other songs on this list are fantastic pop tunes in their own right.
Are we cutting this one a lot of slack because it’s the original song that first got us hooked on the franchise? Perhaps. Is it objectively not really good enough for the number 3 spot? Maybe. But emotional attachment has to be earned, doesn’t it?
Anyway, here we are, so let’s sing along. 3, 2, 1 . . . go!
2. “It’s Always You And Me”
performed by Neal Coomer and Kathryn Raio, from Pokémon the Movie: Genesect And The Legend Awakened (2013)
The ending theme from the Genesect movie has already made the list. Now here comes the movie’s opening song, which was also used for the Adventures In Unova And Beyond season of the anime, albeit in a shortened form.
“It’s Always You And Me” beats its original predecessor with a knockout chorus and compelling lyrics that focus on friendship without sounding too childish or clichéd. At the same time, Coomer and Raio’s vocals alternate flawlessly, their dynamic adding another dimension to the track and picking the central message of the lyrics up on a sonic level.
Really, though, what earns this theme the runner-up spot is the beginning of its chorus. The lines “we’ve come so far, we’ve fought so hard to get where we are” capture the essence of Pokémon’s ‘journey’ theme better than any other, making “It’s Always You And Me” an impeccable song to reflect on both the franchise’s rich history and your own, personal growth.
Still, there’s one Pokémon song that’s even better . . .
1. “I Choose You”
performed by Haven Paschall, from Pokémon The Movie: I Choose You! (2017)
Remember that I said you’d be moved to tears later on? This is that moment.
“I Choose You” plays during the ending credits of the twentieth Pokémon movie, while all of Ash’s former travelling companions are shown on screen, inspiring melancholia and tears of joy in any true Pokémon fan.
Yet, even if you feel no emotional attachment to the past anime series and movies, the piano ballad will still get to you. This is the case since “I Choose You”, like no other Pokémon song, bridges the gap between friendship-based anime storyline and romantic real-life interpretation. The track can be a love song, but it can just as well address a good friend. It’s about any kind of partner in crime, any person you’re glad to have around, so that everyone can place it in their own emotionally charged context.
Also, it’s sublimely written. The verse lines “hello stranger, don’t I know you? Nice to see you my friend” express brilliantly in simple terms how you connect with some people instantly, feeling as if you’d always known each other. Then the chorus goes “I choose you, what else can I do? We’re just too good together to part”, breaking down the complex feelings and thought processes behind emotional attachment better than Adele or Ed Sheeran ever could.
Songs with simple wording often come across as uninspired and boring, but this one doesn’t. It’s “the very best” as far as Pokémon songs are concerned — at least as of today. So now, as vocalist Haven Paschall (Serena’s voice actress, by the way) sings, “on to journeys anew”. | https://medium.com/artmagazine/a-subjective-top-ten-the-best-pok%C3%A9mon-songs-a49c7ee0de87 | ['Christoph Büscher'] | 2018-04-26 16:19:25.719000+00:00 | ['Top 10', 'Music', 'Anime', 'Pokemon', 'Pokémon'] |
Digital Marketing | Is it Only Hype that Sells? | Following the lead
Originally published at www.beaconsocialmedia.com
In the marketing industry, using emotion or hype to sell a product or service has very much become today’s trend. In fact, the use of this so-called ‘enticing’ energy has become so deeply ingrained in society that many of us do not even realise the hooking tactics businesses and marketeers are using to ‘draw us in’. Sadly, this is what marketeers are taught in their study and training.
This ‘hooking’ method of marketing for business is not unlike fishing. You bait the hook with something that will appeal to the emotions of your desired catch, then sit back and wait for the haul to roll in. It is a method based purely on manipulation of people’s wants and desires.
Stop. At what point did we drop our standards in business and in life to allow such a game to go on, for go on it does and if we truly look around us, it is not only occurring in big industry, but it has now infiltrated all industries and all sectors of business marketing.
Today’s Marketing industry has lowered its standards to become a game that plays on people’s emotions, wants and desires.
Is it possible there is an alternative approach? One that is true and in fact honouring of the people it goes out to? Is it possible we have tainted the word marketing, which should simply be a term to describe one’s sharing of their business as an offering to others around them?
These are big questions; however, they can be answered simply.
The answer is yes. There is another approach, and yes, we can still market our businesses, but in a way that is not imposing, hooking, or manipulating of the people.
Yes, the word marketing has been tainted, and yet it is we, the industry and the people it targets, who have allowed this to occur. Equally so, it is we who can turn this around.
It is true that emotional content is something people respond to. But does that give us permission to use manipulation tactics? This is a great question to consider.
On the other side of things, what of genuineness, honesty, integrity and the like? Do these things not appeal to each and every one of us? Could it be that perhaps they hold even more appeal than the tantalising energy of excitement or drama?
We say yes. It may not seem as exciting, but what it offers is something of unsurpassed and lasting value. And that is, true relationships with the people that we serve in our business and a pure joy in knowing we hold a level of integrity that we will not drop in any case. There is no reward greater than this.
You can read more about the author of this blog (pictured below) here | https://medium.com/multiplier-magazine/digital-marketing-is-it-only-hype-that-sells-b4c123fca92c | ['Beacon Social Media'] | 2018-02-24 18:15:08.264000+00:00 | ['Marketing', 'Business', 'Business Strategy', 'Social Media', 'Social Media Marketing'] |