Dataset schema (per record):
title: string, 1 to 200 chars
text: string, 10 to 100k chars
url: string, 32 to 885 chars
authors: string, 2 to 392 chars
timestamp: string, 19 to 32 chars
tags: string, 6 to 263 chars
Writer’s Block? Stop Writing and Do This Instead.
Photo by Jernej Graj on Unsplash

Don't just stare at a blank page. Many people think that when they experience writer's block, the cure is to sit down in front of a blank page and not move until they've written something. This is an excellent way to write something really bad. Sure, "you can edit bad but you can't edit blank," and having something is better than nothing. But what if you could skip all that torture and get to the part where the writing flows again?

There is only one way to truly cure writer's block, and that's to get inspired. But inspiration is a tough cookie. It's elusive, transitory, and never seems to show up for work at the same time we do. So how do we induce inspiration? How do we make ourselves open to new ideas? How do we create that spark? Well, not by staring at a blank page and a blinking cursor.

Often, when I start to feel that familiar sensation of not knowing what to write next, I begin to panic. But I recently realized that writer's block is just a signal my brain uses to let me know it has run out of juice: a reminder that we need to feed our brains with more research, more prep work. We can feed our writing brain by giving it lots of great material to think about. It's very difficult to go into the kitchen and start cooking without a recipe or any ingredients; similarly, it's very difficult to sit down without having done any prep work and pump out 5,000 words. As writers, we have prep work that involves outlining and developing ideas, but we also have to do prep work in order to have great ideas in the first place. That kind of prep work is done through immersion. What does this "idea" prep work look like? What is immersion for a writer?
https://medium.com/a-life-of-words/writers-block-stop-writing-and-do-this-instead-690b0c0603b1
['Grace Claman']
2020-07-10 01:32:26.162000+00:00
['Inspiration', 'Writing Tips', 'Writing Life', 'Writers Block', 'Writing']
7 Skills of a Great Speaker: How to Present Better
https://medium.com/propitch-%D0%BF%D1%80%D0%B5%D0%B7%D0%B5%D0%BD%D1%82%D0%B0%D1%86%D0%B8%D0%B8-%D0%B8-%D1%81%D1%82%D0%BE%D1%80%D0%B8%D1%82%D0%B5%D0%BB%D0%BB%D0%B8%D0%BD%D0%B3/7-%D0%BB%D0%B8%D1%87%D0%BD%D0%BE-%D0%BC%D0%BE%D0%B8%D1%85-%D0%BF%D1%80%D0%B8%D0%BD%D1%86%D0%B8%D0%BF%D0%BE%D0%B2-%D0%B4%D0%BB%D1%8F-%D0%BF%D1%80%D0%B5%D0%B7%D0%B5%D0%BD%D1%82%D0%B0%D1%86%D0%B8%D0%B9-5ebbc055908a
['Стас Гуревский']
2017-02-04 12:13:02.286000+00:00
['Presentation', 'Public Speaking', 'Storytelling']
How TikTok Is Addictive
Psychological Impacts of TikTok's Content Recommendation System

TikTok is the fastest-growing social media platform in the world! Each month TikTok has 800 MILLION active users: more active users than Twitter, Reddit, Snapchat, and Pinterest! Unlike these and other rival platforms, TikTok at its core recommends content. Recommendations aren't just a feature; they are what makes TikTok work! TikTok receives more engagement per user than Instagram, and on average its users spend 52 minutes per day on the platform. These are shocking statistics for a platform that began in late 2016! This article will explore how the TikTok recommendation algorithm works, the implications of such a system, and what goals TikTok aims to accomplish by providing a platform that keeps its statistically young users so engaged.

TikTok Is Unique

Many social platforms use some variation of a recommendation algorithm to surface content that fits the historical behavior of their users. To spare you the details, TikTok is able to recommend you videos based on what you and others have watched: if you watched the same TikTok as other users, you are likely to be recommended the videos they have watched. These algorithms predict the preference you would give to a piece of content based on the activity of users similar to you. There are many methods and varieties of these recommenders; if you are interested, an in-depth description of YouTube's algorithm can be found here. (A minimal sketch of this idea appears at the end of this section.)

Normally, social platforms give you more control first: you decide what you do and do not see as soon as you enter the platform. Take Instagram as an example. Its central focus is to provide its users a photo and video sharing platform, and you are immediately introduced to a feed of images and videos from individuals you chose to follow. The section of the app for infinitely scrolling through recommended images takes a backseat to the posts from those you chose to follow. The same has historically been true of YouTube and Facebook, among other older social media platforms. TikTok, however, is recommendation first. As soon as you enter the platform you are hit with an infinite viewing experience of 15-second videos, made predominantly by young creators, that you never chose to watch. This makes the platform immediately stimulating.

TikTok's mission, as it claims, aims to ".. capture and present the world's creativity, knowledge, and precious life moments, directly from the mobile phone. TikTok enables everyone to be a creator, and encourages users to share their passion and creative expression through their videos." Although users can and do provide such content, the consequences of how the platform operates should be subject to rigorous scrutiny. Why would a platform with such an initiative employ a recommendation-first approach? Would that not favor captivation over creativity? If the platform aims to present creativity, knowledge, and precious life moments, why is the content restricted to 15-second videos?
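What the paragraph above describes is, at its simplest, user-based collaborative filtering. The following is a minimal, hypothetical sketch of that general idea; TikTok's actual system is proprietary and far more sophisticated:

```python
# A toy sketch of user-based collaborative filtering, the family of techniques
# described above. This is NOT TikTok's algorithm; it only illustrates the
# intuition of "recommend what similar users watched."
import numpy as np

# Rows = users, columns = videos; 1 means the user watched the video.
watch_matrix = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1 (similar to user 0)
    [0, 0, 1, 1, 0],   # user 2
])

def recommend(user, matrix, k=1):
    """Recommend unwatched videos, weighted by similarity to other users."""
    norms = np.linalg.norm(matrix, axis=1)
    sims = matrix @ matrix[user] / (norms * norms[user] + 1e-9)  # cosine similarity
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ matrix                # weight others' watches by similarity
    scores[matrix[user] > 0] = -np.inf    # never re-recommend watched videos
    return np.argsort(scores)[::-1][:k]

print(recommend(0, watch_matrix))  # -> [2]: user 1 is similar and watched video 2
```

Real recommenders replace the toy watch matrix with implicit-feedback signals (watch time, rewatches, shares) and learned embeddings, but the core intuition is the same.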
The Audience

A staggering 41% of self-reported TikTok users are aged 16–24. Since the platform is restricted to those aged 13 or older, and given the sentiment of those who use or recognize the platform, it would not be surprising if its audience skews even younger. Younger users are much more impressionable and naive.

It is easy to recognize the social dangers that arise when an open, auto-recommending platform uses the viewing data of an impressionable audience to keep it engaged. At the very least, this may manipulate a younger individual's perception of what counts as socially acceptable behavior and well-formed beliefs. This applies to issues of social and individual identity, and it can distract young users from the crucial task of such a formative age: engaging with the passions that can lead to a flourishing life. Furthermore, younger individuals may feel more obliged to engage with the platform. The more impressionable the user, the less control they have to disengage, and the more data the platform has to recommend them ever more appropriate content. The effect spirals: as both the social demands and the quality of the recommended content grow, these users are likely to spend ever more time on TikTok.

A Cornell student named Niko Nguyen wrote an opinion piece about their troubling experience with the platform. This is a quote from that piece:

“The majority of my past winter break was spent on TikTok, up until the moment when the app took its dying breath on my phone. But even when I wasn’t scrolling through the endless stream of cringe-worthy “Renegade” dances and entertaining life hacks I know I’ll never use, I started to notice TikTok everywhere I looked. My group chats were constantly inundated by waves of TikToks my friends found funny. Scrolling through Instagram stories and Twitter — two social media platforms competing against TikTok — I found myself consuming the short-form videos outside of the app itself. Offline, when I was with my friends, we would reference TikToks, discuss TikToks, joke about TikToks, remake TikToks. It squirmed and squeezed its way into every corner of my social life. In the blink of an eye, TikTok somehow dominated the social media industry, firmly planting itself into the culture of today’s youth.”

This speaks to the issues addressed here and adequately describes the sentiments many young individuals have about the platform. TikTok has also been accused of harvesting user data and of suppressing content made by queer, differently abled, and fat creators, accusations which, if true, could have drastic social and personal effects on its large and impressionable audience.

How TikTok Affects Us

To decide whether TikTok is addictive for you, it is important to understand what is representative of addiction. Common symptoms of addiction are: an inability to stop using, negative health effects, obsessive behavior, and use as a way to cope with outside problems. TikTok may be addictive if its content provides enough stimulus for users to exhibit addiction symptoms and a neurological reaction consistent with addiction. What sort of content can be addictive? Content containing information, especially when it is short and captivating. TikTok has a lot of information. Having access to relevant information can improve decision making, which is why new and relevant information is a reward for our brain. Rewards are treated in the midbrain with a dopamine response, just as an intake of foods high in sugar, fat, and salt is. This is true not only for the consumption of such a reward but also for its anticipation. Interestingly, TikTok's information is presented in a 15-second form.
This means that although users do get well-recommended, stimulating information, I am hard-pressed to believe that meaningful content can be adequately displayed in such a short amount of time. Twitter has been scrutinized for its character limit for a very similar reason: it is simply difficult to adequately convey the view you represent so quickly and with such limited, distanced engagement. This makes the platform more susceptible to the kinds of stimulus seen in addiction. TikTok as a platform fulfills some of the requirements for addiction. The short videos provide us with relevant information that stimulates a dopamine response, and the process is constantly reinforced by a supply of ever more appropriately recommended videos. I would also propose that TikTok allows viewers to engage intimately and anonymously with many content creators without being judged or feeling obliged to participate, while giving users permission to judge. This sense of connection and distant engagement, the ability to be anonymous and expressive, a stream of constantly relevant information, and recommendations chosen for us may together serve as a platform on which to develop an addiction. As of yet, there is no mental disorder listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM) for information/TikTok addiction.

What TikTok Does Well

Although I have been critical of TikTok, it is important to note that TikTok does provide a platform with advanced features for individuals to engage with and share their creativity. The platform helps individuals democratize their creativity. Are you a young painter struggling to present your talent to the world? Present yourself well enough and perhaps TikTok will recommend you to millions. Are you a comedian yearning to have your jokes heard? Make them into 15-second videos and perhaps you can start a successful show from your phone. The same holds true for dancers, singers, and the many sorts of artists who can fit their stories and art into 15 seconds. Although these social platforms are engaging, they can also be detrimental. If you are creative and recognize that this platform is an opportunity, understand that TikTok is enticing and has been criticized for controversial content and restrictions. Be aware of the possible social pressures and personal consequences when deciding how to engage your talents and use your time.

The Attention Economy

These problems speak to a larger issue present with many large platforms that use information and data as a commodity. The pressure on media companies to gain as much engagement for as long as possible is what is known as the attention economy. With its user base and a possible valuation of $100 billion in 2020, TikTok is firmly among those large platforms. Your attention is scarce, and it is also desirable. Large platforms compete for attention because it allows them to generate revenue through advertisements and the selling of data, goods, and services. Large platforms like Google and Facebook operate as a medium for others to create, buy, and sell, whether that is selling you a funny video or a tangible commodity such as merchandise. For these companies to thrive, they need not only to provide a well-designed, working platform, but to habituate you to it and ultimately make you dependent on it.
Sites like YouTube do this well because there is no other website quite like it and the cost of creating a directly competitive platform is too high. You can learn something new with visual representation, engage with creators, get the latest news, and more, all on one free and easily accessible website. Many argue that the attention economy keeps us from doing the work that gives us meaning and a sense of purpose. The constant distracting notifications and endless, carefully selected content control us by manipulating our behavior. The consequence is that we never have to fully face our loneliness, sadness, and poor sense of self, yet we also feel pushed to engage with things that, upon reflection, we should not have.

Conclusion

TikTok may well be a platform that is addictive to its consumers. If you enjoyed the read and have a critical, thought-provoking response, feel free to share it in the comments and upvote the article as many times as you deem worthy! Thanks!

Sources:
TikTok's mission statement: https://support.tiktok.com/en/privacy-safety/for-parents-en
TikTok statistics: https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/ and https://www.businessofapps.com/data/tik-tok-statistics/
"Midbrain Dopamine Neurons Signal Preference for Advance Information about Upcoming Rewards": https://www.cell.com/neuron/fulltext/S0896-6273(09)00462-0
Cornell student essay: https://cornellsun.com/2020/02/09/nguyen-the-terrifyingly-tantalizing-trend-thats-tiktok/
Accusations of data harvesting and content suppression: https://www.theguardian.com/technology/2020/feb/14/first-quitting-tiktok-statement-shows-popular-app-has-come-of-age
https://medium.com/dataseries/how-tiktok-is-addictive-1e53dec10867
[]
2020-09-13 13:03:26.654000+00:00
['Data Science', 'Social Media', 'Tik Tok', 'Addiction', 'Psychology']
Our Post COVID-19 World: A Request for Startups
The more we need social distancing, the more we need better tools to expand our social networks. Maintaining existing relationships, both personal and professional, is easy enough using the digital tools at our disposal. Plenty of products have emerged focused on enhancing the spontaneity of these interactions, but what about the serendipity of forming new connections? While some are optimistic that our entirely digital existence will break down barriers to network-based access, I think we'll instead see more creative ways to create positional scarcity digitally (i.e. Clubhouse), and therefore expanding or deepening personal networks will prove challenging while we're all stuck at home. How are we to establish relationships with people that aren't already on our radar or within our network? How can we still form pseudo-random connections? Companies like lunchclub begin to fill this void, but more are needed. Perhaps even more importantly (if hiring and funding decisions are to be made online), we'll need creative ways to recreate the environmental cues that spark dynamic conversations with new acquaintances, cues that are lacking within the constraint of static, face-to-face video chats. We'll need to design solutions that help us gain insight into the ways our counterparts interact with others, without the small cues we're used to perceiving while spending in-person time together in casual or social settings.

When we're forced to spend so much time in our digital world, its two-dimensionality becomes frustratingly apparent. Over the past two years, there has been a lot of focus on creating digital communities and hanging out in shared, digital spaces. While some such digital communities or spaces result in more social online experiences, most of them are frustratingly two-dimensional. Personally, the lack of three-dimensional space is perhaps the most noticeable difference between our old way of life and the new one we must temporarily inhabit. Some think the answer may be virtual reality, but that still requires users to overcome large obstacles (a hardware purchase, for example) and may make users feel even more isolated (a 1:1 relationship with the headset). Instead, I think we're going to see more innovation in spatial software. Spatial software is characterized by the ability to orient bodies and objects in space, in a parallel to the real world. It reorients software logic around our perception of space as it exists in the physical world, rather than around recency (WhatsApp) or popularity (Facebook). Spatial software begins to restore our perception of the relationships between us and others, between our location and our surroundings, between our thoughts and our environment, and between time and place, all of which are important orientations that have fallen away as we've shifted into our two-dimensional online world. All credit for articulating this concept goes to John Palmer.

Touchless technology becomes a necessity. I think the current preference for touchless transactions is likely to persist and will ultimately render touchscreens and physical artifacts (cash and cards) as antiquated as handshakes post-virus. This could lead to an accelerated shift towards voice, haptic technology, more flexible and capable robotics, digital payments, and, perhaps, central bank digital currencies. I will focus on the last three. To begin, the shocking fragility of our global supply chains⁶ has revealed a heightened necessity to shorten supply chains.
A recessionary environment will force companies to do so while controlling any incremental costs (offsetting a potentially higher relative cost of labor), which will likely require more flexible and capable robotics. Additionally, the newly required distancing between humans, and between human and machine,³ in instances in which a manager cannot physically visit a plant but needs to inspect its output, is a non-trivial shift that is not likely to be reversed after the virus is under control. According to Anna Shedletsky, CEO of Instrumental, a firm which uses machine learning to help manufacturers improve their processes, electronics manufacturing is "going to do five years of innovating in the next 18 months" in order to accommodate these shifts.³

Focusing now on digital payments and currencies: while some economies are already nearly cashless, the US has lagged in the adoption of digital payments. As more people become reluctant to use physical cash,¹ adoption of contactless solutions, like Apple Pay, will increase. Since onboarding is a one-time hassle while the ongoing user experience is frictionless, users that transition to digital payments are likely to be sticky. However, while pay-ins and peer-to-peer transfers work well enough on traditional payment rails, recent challenges in distributing fiscal stimulus demonstrate that payouts remain difficult. Mailing physical checks to citizens is a very inefficient way to distribute funds. While a large portion of the population can receive these funds via direct deposit, there is a sizable unbanked population that cannot.⁷ Original drafts of stimulus bills included proposals for a "digital dollar" operated and maintained by the Federal Reserve. The draft mentions "digital dollar wallets," which would essentially serve as free bank accounts through which users could receive money, make payments, and take out cash, in an attempt to ensure that everyone who is entitled to COVID-19 related relief would receive their funds quickly and inexpensively. Later proposals expanded the scope to include debit cards with prepaid "digital cash" balances, which would help distribute funding to those without bank accounts. At the same time, FinTech companies, such as Cash App and Venmo, are also providing ways for their users to receive stimulus funds digitally, even if they don't have a bank account. The government's new willingness to partner with FinTechs may diminish the need for a central bank digital currency (CBDC) to aid in widespread payouts. The focus on a CBDC in stimulus bill proposals may be motivated by the recent pilot launch of China's digital Yuan (the DCEP) across four regions.² In this context, it's still worth considering the impact that the introduction of central bank digital currencies could have on other parts of the monetary and payment ecosystems, long after the virus is contained.

The need to assess risk and prevent fraud in the distribution of government stimulus could accelerate the transition to digital IDs. As the government passes legislation allowing for unprecedented levels of fiscal stimulus, it is calling upon established FinTech companies to aid in the digital delivery of funds to SMBs. A lending practice that requires loan officers, in-person identification processes, and faxes was not functional before the crisis, and I don't see how it remains in place afterwards.
FinTech companies — like Toast, Brex, and Mindbody, among others — have high-frequency, comprehensive relationships with SMBs, allowing them to have a real-time financial picture of many of the small businesses applying for Paycheck Protection Program loans.⁴ The fact that they've also already performed Know Your Customer (KYC) checks on these merchants should help to reduce fraud. If this dynamic spurs the U.S. government to embrace digital processes, perhaps it also more quickly paves the way for digital IDs. With more reliable digital IDs, a whole host of paper-based, in-person processes could be digitized and made more efficient.

Privacy suddenly becomes tangible. I believe the rapid escalation of widespread citizen surveillance (a necessary measure to contain the virus) will lead to an increased desire for anonymity, pseudonymity, and privacy. Until now, violations of privacy were mostly abstract, because they occurred indirectly and online, whereas now they are concrete and apparent. If users didn't care about privacy before, they just might after this pandemic. I still don't think privacy will work as a product per se (since it generally still involves corresponding tradeoffs that outweigh the perceived benefit), but I do think that people will seek refuge in platforms that allow them to use an avatar or pseudonym and that grant them the option to communicate and transact anonymously as their offline lives become increasingly tracked and traced. Masks may finally be de-stigmatized in western culture, and maybe they even become a pro-health, pro-privacy fashion statement.

The importance of securing edge devices is underscored by the coordinated shift to remote work. Yes, as companies shift to remote work, we need video conferencing solutions and collaboration and communication tools to enable productivity, but we also need cybersecurity solutions that secure the millions of edge devices now accessing corporate networks and confidential files. Pre-virus, most security fell on IT departments, but now it is up to employees to ensure their personal computing devices are secure. This will require widespread adoption of better authentication measures, stronger endpoint security, and techniques to ensure data integrity.

Media for the people, by the people. While the trend towards user-generated content has been growing gradually for several years, shelter-in-place orders may force users to generate their own content en masse. While Hollywood and recording studios are shut down and sports are on hiatus, users are turning to TikTok and Patreon in droves. As Twitter becomes a place for rapid scientific research exchange and an avenue for first-hand reporting, the contrast in the quality of reporting relative to traditional media outlets is stark. Information exchange regarding the virus may just change the face of journalism; if not, it probably at least changes perceptions about who the purveyors of trusted information are. While this is not the first time we've experienced a breakdown in mainstream media reporting (the 2016 U.S. election, for example), it is the first time this breakdown has occurred on a global scale, impacting everyone at the same time, and on a personal level. Furthermore, given the corresponding economic impact, the media industry is likely to consolidate as local news outlets and smaller publications struggle to remain afloat. This will increase the level of centralization in editorial decisions at the same time that advertising-based revenues are pressured.
That sounds like a recipe for an environment in which it’s even harder to produce content devoid of politics and sensationalism, further driving users to get their news direct-from-experts and their entertainment direct-from-creators.
https://medium.com/swlh/our-post-covid-world-a-request-for-startups-707fb445ffe1
['Justine Humenansky']
2020-05-05 07:05:49.301000+00:00
['Investment', 'Founders', 'Venture Capital', 'Entrepreneurship', 'Covid 19']
Welcome to the Patient Experience Studio at Cedar
At Cedar, we're improving the patient financial and administrative experience, guiding each patient from pre-visit check-in to post-visit billing with ease. On the Cedar Design and Data Science teams, we believe that building a powerful product requires a deep understanding of the "What" and the "Why" of the patient experience. The "What" is the output of our analytics, custom analyses, and explanatory models; through these methods, we seek to identify the key factors driving patient behavior. The "Why" focuses on identifying the relevant emotions and subconscious behavior by listening to and understanding what patients have to say about their own experiences. Linking the two allows us to build hypotheses to improve our product, design thoughtful features, and deliver experiences that we continuously improve through experimentation. We foster a constant flow of insights and ideas between Data Science and Design to identify and fill our collective blind spots and contribute to each other's work. To achieve this, we collaborate daily, sometimes even hosting team events together (like that time we toured NYC to learn about the art of graffiti and completed "masterpieces" that are on display at our NYC office). We also regularly present our work across teams to get feedback and learn about our patients from all different angles. We even coined a phrase to express this collaborative back-and-forth: "data art and design science." This blog showcases some of our most interesting Design and Data Science methods and insights. Join us on our journey to build a better patient experience together. Happy reading! Amy Stillman, VP, Design at Cedar. Yohann Smadja, VP, Data Science at Cedar.
https://medium.com/the-patient-experience-studio-at-cedar/welcome-to-the-patient-experience-studio-at-cedar-15f25f8cc645
['Yohann Smadja']
2020-07-22 04:55:25.755000+00:00
['Data Science', 'Patient Engagement', 'Design', 'Healthcare', 'Patient Experience']
Beautiful Reasons
NEW LANGUAGES FOR DATA VISUALIZATIONS: STARTING A DIALOGUE

To be a designer you have to find new languages, new ways of entertaining people; and working with data you also have to make visuals that can become magnetic to people who are not familiar with data practices. We believe that, sometimes, the act of loading an analytical representation with emotional investment produces attention rather than distraction, and creates worlds that are evocative and nameless at the same time, able to inspire sensations, as long as we always respect the values in the data and do not manipulate the information. In this regard, we can define successful designs as the ones able to balance convention (familiar forms our minds already know) and novelty: new features that can engage and delight people, in the hope that they will stick around our visualizations a bit longer, and in the hope that we can help the conversations in our fields move forward. We believe that there isn't a unique truth in data visualization; there are many truths instead, more or less appropriate and effective depending on the scope and the goals, on the data and on the readers, on the situations and contexts. Dense and non-conventional data visualizations promote slowness in this era of short attention spans: if we can create visuals that demand the right slowness and the right level of engagement, people will slow down to meet them. This article is obviously not providing answers; I hope it can instead start a dialogue among practitioners and enthusiasts: how can we keep on exploring, guessing, imagining, hunching, trying combinations and trying to inspire feelings, as visual communicators who use images and symbols rather than words and numbers? How can we be faithful to scientific accuracy while allowing space for exceptions to flourish, with the aim of bringing a range of new possibilities to the table?
https://medium.com/accurat-studio/beautiful-reasons-c1c6926ab7d7
['Giorgia Lupi']
2016-01-21 17:04:06.478000+00:00
['Design', 'Data Visualization', 'Data']
A Mask Allows Me a Small Win #11
A poem that places a face on Coronavirus

Life is intimacy
The little happy moments
Time with friends
Love of family
Joy of pets

Humans were placed in the universe
Left to our own devices
Provided choices

I choose to wear a mask
It is my action to
Politicize caring for others
Campaign to stop the spread
Lobby for safety
When governments don't care
People must

My world has been infected
By bedbugs
They crawled in unnoticed
Silently, took up residency
Between my blankets of security
Interrupted my dreams
Infected my nightmares
And people I love

Why do I wear a mask?
Masks allow me
Small wins and
Hope for tomorrow

Sara, 24-year-old female
https://medium.com/the-pom/a-mask-allows-me-a-small-win-11-f7df3db7bc4a
['Brenda Mahler']
2020-12-29 12:31:45.124000+00:00
['Faces Of Covid', 'Reflections', 'Poetry', 'Coronavirus', 'Covid 19']
Explaining AlexNet Convolutional Neural Network
Overfitting Prevention

Having tackled normalization and pooling, AlexNet was faced with a huge overfitting challenge. Its 60-million-parameter model was bound to overfit, so the authors needed an overfitting prevention strategy that could work at this scale. Whenever a system has a huge number of parameters, it becomes prone to overfitting. Overfitting: you can answer a question you've already seen perfectly, but you'll perform poorly on unseen questions. They employed two methods to battle overfitting: data augmentation and dropout.

Data Augmentation

Data augmentation increases the size of your dataset by creating transforms of each image in it. These transforms can be simple scaling, reflection, or rotation (picture a digit 6 rotated in various directions). These schemes led to a reduction of 1% in the top-1 error metric. By augmenting the data you not only increase the dataset; the model also tends to become rotation invariant, color invariant, and so on, which prevents overfitting.

Dropout

The second technique AlexNet used to avoid overfitting was dropout. It consists of setting the output of each hidden neuron to zero with probability 0.5. The neurons which are "dropped out" in this way do not contribute to the forward pass and do not participate in back-propagation. So every time an input is presented, the neural network samples a different architecture. This new-architecture-every-time is akin to using multiple architectures without expending additional resources. The model is therefore forced to learn more robust features. (A small sketch of the mechanism follows.)
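As a concrete illustration, here is a minimal numpy sketch of the dropout mechanism just described. One assumption to flag: this uses the now-common "inverted" dropout, which rescales activations at training time; the original AlexNet paper instead scaled outputs at test time.

```python
# A minimal sketch of (inverted) dropout: each hidden activation is zeroed
# with probability 0.5 during training. Not the paper's exact formulation,
# which scaled at test time rather than training time.
import numpy as np

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations                        # no dropout at inference
    mask = np.random.rand(*activations.shape) >= p  # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)         # rescale to keep expected value

hidden = np.random.randn(4, 8)   # a toy batch of hidden activations
print(dropout(hidden))           # roughly half the entries are zeroed
```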
https://medium.com/x8-the-ai-community/explaining-alexnet-convolutional-neural-network-854df45613aa
['Rishi Sidhu']
2019-06-07 14:40:48.833000+00:00
['Machine Learning', 'Data Science', 'Technology', 'Neural Networks', 'Artificial Intelligence']
Reliable Performance Testing in C++
C++ is a language designed for scalability, performance, and control. The language was created over 20 years ago and has since developed a huge community of developers and libraries across the world. In the modern era, C++ is mostly used for systems or performance-critical software such as high-frequency trading platforms or databases. A requirement for almost any production-grade C++ software is reliable and consistent performance testing: meeting benchmarks and scalability requirements is an essential capability for many C++ applications. However, building and running C++ performance tests is not so straightforward. For quite a number of years, before the C++11 standard was released and incorporated into compilers, the language lacked a cross-platform clock or time utility in the standard library. The C language has long possessed a <time.h> header with various time and clock functions, yet many guarantees of those functions, such as whether or not a clock is monotonic and will never go backwards, are not promised by the C standard. Thus, any consistent performance test prior to C++11 had to be implemented through operating-system-specific functions, like those of Windows or Linux.

System vs. Monotonic Clocks

The C++ <chrono> header deals with two types of clocks: system clocks and monotonic clocks. A system clock in any operating system counts the ticks since some point in the past, called the epoch. This relationship to the epoch allows timestamps from a system clock to be easily converted to human-readable format. However, using a system clock leaves timings vulnerable to other programs modifying the system clock. The type std::chrono::system_clock guarantees access to the system clock on any platform or operating system. The other major type of clock is the monotonic clock. Called a "steady clock" (std::chrono::steady_clock), a monotonic clock simply counts the ticks since the machine the program is running on last restarted. Thus, a monotonic clock does not offer the ability to convert its time points into human-readable values, because it counts ticks from the moment the system last restarted. This is very different from a system clock, which counts time from an epoch, such as January 1st, 1970, in the case of UNIX time.

Single Interval Tests

The most fundamental performance test for any written code is a single interval test. The test involves recording a point in time, then running the desired code, then recording another point in time. The result of the test is the difference, also called the duration, between the first and second time points. A duration can only be derived from two time points recorded from the same clock type. Since we are only going to use a monotonic clock, all time points will be of the type std::chrono::time_point<std::chrono::steady_clock>. Durations between time points can be either floating-point durations or integer durations. The type of duration you should use depends on what purposes the results of your performance tests need to serve; floating-point durations allow statistical measurements like means to be calculated more easily. Recording two time points and deriving two durations, a floating-point and an integer one respectively, would look like this:
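The original article embedded the code here; a reconstruction along the lines the text describes might look like the following (the timed loop is a placeholder workload, not the article's original):

```cpp
// Record two steady_clock time points and derive a floating-point and an
// integer duration from their difference.
#include <chrono>
#include <iostream>

int main() {
    const auto start = std::chrono::steady_clock::now();
    volatile long long sum = 0;
    for (int i = 0; i < 1'000'000; ++i) sum += i;   // the code under test
    const auto end = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> fp_ms = end - start;
    auto int_ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);

    std::cout << fp_ms.count() << " ms (floating point), "
              << int_ms.count() << " ms (integer)\n";
}
```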
For the utilities in the <chrono> header, the - operator between two std::chrono::time_point<> objects evaluates to a duration. To get an integer duration, std::chrono::duration_cast<type> is always needed. Every duration has a Period, indicating the time period it represents. std::milli is a time period represented by a predefined ratio defined in the <ratio> header. Although custom time periods with custom ratios can be used when creating a duration, it's far more straightforward to use the predefined duration types in the <chrono> header: std::chrono::nanoseconds, microseconds, milliseconds, seconds, minutes, and hours. Furthermore, the type names for templated chrono types can become rather long, such as std::chrono::milliseconds int_ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start); If a performance test is using the same time period and duration type most of the time, there is nothing wrong with using the auto keyword here. Just don't forget that you will need a std::chrono::duration_cast for integer durations. To get the actual value of time from a duration, you have to use its count() method. Additionally, a duration has min() and max() methods to indicate the minimum and maximum values of the duration type, so you can check the time of a duration against its minimum and maximum values.

Mean Interval Testing

Producing reliable performance test results requires not only timing the duration a piece of code takes to execute, but running that code and averaging that timed duration over a number of trials. Performance testing is distinct from functional or regression testing in the sense that not all variables which affect the test can be accounted for: whereas a functional test can expect the same result given the same input, a performance test depends on the state of several pieces of hardware, what condition they are in, and the many processes running on them. Running the same test many times and taking the average result of those runs leads to more reliable and trustworthy performance testing, as in the sketch below. Mean durations over a larger number of test runs produce more confidence in the reliability of the test.
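Again reconstructing the embedded example from the surrounding description, a mean interval test using floating-point std::chrono durations might look like this (trial count and workload are placeholders):

```cpp
// Time the same workload over many trials and average the floating-point
// durations; floating-point durations make the mean easy to compute.
#include <chrono>
#include <iostream>

int main() {
    constexpr int trials = 1000;
    std::chrono::duration<double, std::milli> total{0};

    for (int t = 0; t < trials; ++t) {
        const auto start = std::chrono::steady_clock::now();
        volatile long long sum = 0;
        for (int i = 0; i < 100'000; ++i) sum += i;   // the code under test
        total += std::chrono::steady_clock::now() - start;
    }

    std::cout << "mean: " << (total / trials).count() << " ms\n";
}
```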
Mean interval testing is great for performance tests that need to be very accurate about a specific section of code and specific inputs completing within some time period. However, you might need to test the performance of a program or implementation where the inputs are not controlled, or where there are many smaller inputs. In those scenarios, mean interval testing won't work well, because it tells you only the average, not anything about the distribution of times across runs. To accomplish that, you need a histogram test.

Histogram Testing

Depending on the program or software being tested, some performance tests need to show the distribution of completion times rather than a mean duration or a singular result. You might want to check whether the mean completion time of your performance test is being driven by a lot of super fast and super slow runs, as opposed to times close to the mean. Additionally, you might need to guarantee that a particular program runs within a specific range of intervals most of the time across many inputs, rather than ensuring performance on one input. To illustrate such behavior, a histogram is typically used. A histogram is a representation of data that groups individual run times within a pool of test runs by the number of times they were achieved; a simple histogram might plot each time bucket against the number of runs that landed in it.

For this example, we will use a simple hash function, run on many string inputs, to generate the values for our histogram. All inputs will be the same length but have different characters; each run consists of generating the hash for one input string. Since a performance test does not care about the efficiency or collision rate of a hash function, we shall ignore the outputs. A histogram can be represented in C++ as a collection of test run data points that can be grouped into different interval arrangements. The more intervals a histogram has, the more detailed a picture it can show of the spread and difference of runs. To write this, three components are needed: a struct type to store each run of the hash function on an input and the time taken (milliseconds will be used here); a function to construct a vector of all times for all runs; and a function to query the vector of runs to see how many fall within a given range of times. This allows a custom distribution over different time ranges, for a more detailed view of the run times that occur. A sketch of all three components follows.
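Here is a sketch of those three components, reconstructed from the description above; the hash function and inputs are stand-ins for the article's stripped originals:

```cpp
// Three components of a histogram test: a struct storing one timed run, a
// function building the vector of runs, and a query counting runs that fall
// inside a time range.
#include <chrono>
#include <iostream>
#include <string>
#include <vector>

struct Run {
    std::string input;
    std::chrono::duration<double, std::milli> time;
};

// Placeholder hash (FNV-1a): the test cares about timing, not collision rate.
std::size_t toy_hash(const std::string& s) {
    std::size_t h = 14695981039346656037ull;
    for (unsigned char c : s) h = (h ^ c) * 1099511628211ull;
    return h;
}

std::vector<Run> run_all(const std::vector<std::string>& inputs) {
    std::vector<Run> runs;
    for (const auto& in : inputs) {
        const auto start = std::chrono::steady_clock::now();
        volatile std::size_t h = toy_hash(in);   // one run = one hashed input
        (void)h;
        runs.push_back({in, std::chrono::steady_clock::now() - start});
    }
    return runs;
}

// How many runs completed within [lo, hi) milliseconds?
std::size_t count_in_range(const std::vector<Run>& runs, double lo, double hi) {
    std::size_t n = 0;
    for (const auto& r : runs)
        if (r.time.count() >= lo && r.time.count() < hi) ++n;
    return n;
}

int main() {
    std::vector<std::string> inputs;                  // same length, varied chars
    for (int i = 0; i < 10000; ++i)
        inputs.push_back(std::string(64, static_cast<char>('a' + i % 26)));
    const auto runs = run_all(inputs);
    std::cout << count_in_range(runs, 0.0, 0.001) << " runs under 1 microsecond\n";
}
```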
https://medium.com/swlh/reliable-performance-testing-in-c-1df7a3ba398
['Joshua Weinstein']
2020-07-10 21:10:47.253000+00:00
['Performance', 'Technology', 'Cpp', 'Science', 'Programming']
Custom Tab Navigator Using React Navigation & SVG
The <TabsUI/>

In order to achieve the required result, we will have a few components, all working together. We start by defining the shape using d3-shape: we will create a line using a line generator. We'll divide our shape into 3 parts: left, center, and right. We'll start with the easy parts, left and right. The tabWidth is the screen width divided by the number of tabs we want to place. Since we are using the SVG coordinate system, the [0,0] point of the shape is the top-left corner. Keep in mind that we will use a constant value to store the tab bar height. For the left part we define a line going from [0,0] to about the center, [tabWidth * 2, 0]. For the right part we start a little after the center tab, [tabWidth * 3, 0], go to the end, [width, 0], then close the shape by going down to [width, height] and back to the start, [0, height]. For the center part we will create a V-like shape: continuing along the x-axis while going down to half of the height, then bringing the y-values back up to 0 while the path continues along the x-axis. There are many curve functions that can help us smooth the connection between the points; we'll use a curve function from d3. If you wish, you can experiment with this nice example. A sketch of the <TabShape /> component using the approach described above appears at the end of this section.

TabShape: creating a background, separating UI & functionality. Now that we have a background, all we have to do is overlay the navigation icons on top and we are done. We'll create a <TabsHandler> component for that (TabsHandler: creating handlers, separating UI & functionality). Notice that for the middle, larger icon we do not render the tab SVG with text; we use a Logo component instead. This Logo component is drawn at the center, instead of at the middle tab position. It is a separate component so we can add an opening animation later if we want to.
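The article's embedded gists are not preserved here; the following is a hypothetical sketch of a <TabShape /> along the lines described, assuming react-native-svg and d3-shape as dependencies (names, tab count, and dimensions are illustrative, not the original code):

```tsx
// Sketch: build the tab bar background from left, center, and right path
// segments generated with d3-shape, then render it as a single SVG path.
import React from 'react';
import { Dimensions } from 'react-native';
import Svg, { Path } from 'react-native-svg';
import * as shape from 'd3-shape';

const { width } = Dimensions.get('window');
const TAB_BAR_HEIGHT = 64;       // constant tab bar height, as described
const tabWidth = width / 5;      // screen width divided by an assumed 5 tabs

type Pt = { x: number; y: number };
const line = shape.line<Pt>().x(d => d.x).y(d => d.y);

// Left part: a flat line from [0, 0] to about the center, [tabWidth * 2, 0].
const left = line([{ x: 0, y: 0 }, { x: tabWidth * 2, y: 0 }]);

// Center part: a smoothed V-shape dipping to half the height.
const center = shape.line<Pt>()
  .x(d => d.x)
  .y(d => d.y)
  .curve(shape.curveBasis)([
    { x: tabWidth * 2, y: 0 },
    { x: tabWidth * 2.5, y: TAB_BAR_HEIGHT / 2 },
    { x: tabWidth * 3, y: 0 },
  ]);

// Right part: from just after the center tab to the end, then close the shape.
const right = line([
  { x: tabWidth * 3, y: 0 },
  { x: width, y: 0 },
  { x: width, y: TAB_BAR_HEIGHT },
  { x: 0, y: TAB_BAR_HEIGHT },
  { x: 0, y: 0 },
]);

const d = `${left} ${center} ${right}`;

export const TabShape = () => (
  <Svg width={width} height={TAB_BAR_HEIGHT}>
    <Path d={d} fill="white" />
  </Svg>
);
```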
https://medium.com/swlh/custom-tab-navigator-using-react-navigation-svg-b659b395a7c4
['Matan Kastel']
2020-09-21 18:00:14.299000+00:00
['React Native', 'Tab Navigator', 'D3js', 'React', 'SVG']
Metadata and Additional Responses in FastAPI
Customize your own API documentation for better readability

Photo by the author.

Building on top of our previous guide (Migrate From Flask to FastAPI Smoothly), we are going to explore the API documentation a little more today. By now, you should have realized that the generated interactive API documentation and ReDoc of a newly created FastAPI server are not that intuitive and lack proper examples of the input and output schema. Looking at the Swagger UI for such a server, we can deduce at a quick glance that there are two APIs available: the first route creates a user, while the second route gets a user. That information is definitely not sufficient for someone who is new to the API. Besides, the naming is based on the actual name of each function, which can get really confusing later on as more APIs are added. It would be a lot more convenient and self-explanatory if the documentation carried proper summaries, descriptions, and response examples; a new developer will have a much easier time reading such documentation. Let's proceed to the next section on adding metadata to the documentation.
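To preview what that looks like, here is a minimal, illustrative route using FastAPI's documented summary, response_description, tags, and responses parameters; the model and paths are hypothetical, not the article's exact code:

```python
# A minimal sketch of route metadata in FastAPI. Each keyword below feeds the
# generated Swagger UI / ReDoc pages directly.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(
    title="User Service",
    description="Example API with documentation metadata.",
    version="0.1.0",
)

class User(BaseModel):
    name: str

@app.post(
    "/users",
    response_model=User,
    summary="Create a user",
    response_description="The newly created user",
    tags=["users"],
    responses={409: {"description": "A user with this name already exists"}},
)
def create_user(user: User):
    """Create a user with a **name** field."""  # docstring becomes the description
    return user
```

Run it with uvicorn and the route appears in Swagger UI under a "users" tag with a human-readable summary instead of the raw function name.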
https://medium.com/better-programming/metadata-and-additional-responses-in-fastapi-ea90a321d477
['Ng Wai Foong']
2020-11-09 01:56:52.486000+00:00
['Python', 'Documentation', 'DevOps', 'Programming', 'Fastapi']
How Therapy Helped Me When I Went Back To Work
From one overwhelmed mother to another

Photo by Priscilla Du Preez on Unsplash

Growing up, I thought therapy was reserved for those with 'real problems'. Asians stereotypically don't do therapy. We suffer in silence and let that shit brew until it becomes stomach cancer. Does the word therapy still relate to "crazy people"? "Wow, there must be something seriously wrong with you!" "Can't you just keep that shit inside or talk to your friends about that over some wine and call it a night?" You may not say that out loud, but you may believe it. Your view of therapy depends on your level of understanding of what it is, your inherent biases, how you were raised, and social and cultural influences.

When I was growing up, the idea of getting help for your mental health was rarely talked about. And when it was discussed, it was usually about a Hong Kong celebrity going to rehab. Therefore, '90s Hollywood movies and TV shows became my source of information. In sitcoms or romantic comedies, the line "Well, my therapist says…" would be something a kooky character said to the main character as a joke. Cue laugh track…jump to the next scene. In dramas, I'd see unhappy couples on the verge of divorce going to therapy, returned soldiers going to therapy, widows and widowers going to therapy, people who had survived shootings, bombings, car crashes, or the Holocaust going to therapy…and the list goes on. Therapy was always something characters would do if they had experienced major trauma or had run out of options. It was as if therapy were the last resort when you'd completely broken down. You know, if all else fails, try therapy…because then you've literally exhausted all options. As a kid, that was my misperception of therapy: it's a joke if you're "normal" and reserved only for those with "real" problems. Despite my ongoing recovery from perfectionism, self-harm, depression, and disordered eating, I didn't go to therapy then. But I should have.

Entering working motherhood

After going back to work from my maternity leave, I started feeling extremely overwhelmed. I was really struggling to balance all the different hats I had on (working mom, writer/blogger, wife, sister, daughter, etc.). Every time I switched gears from one role to another, I became more flustered and anxious (biting-nails, picking-my-face type of anxiety). I realized that motherhood had replaced my ability to be mindful. I felt like I was unraveling at the seams. I am a natural single-tasker with a Type B personality, so motherhood forced me to become someone required to multitask with a Type A personality. I was having trouble balancing who I was and who I needed to be as a mother. My mind was constantly thinking for another person; it's like I had two minds going at once. I couldn't be my natural self and have one thought at each moment. There were always thoughts simultaneously running through my mind, and if I stopped for one second to be mindful, it was brief, because then she was grabbing something that shouldn't be touched or yelling for my attention. Even without her presence, I was thinking about whether she had eaten enough…if she had pooped…how long she had napped. As a result of this mindless multitasking, I lost my passion for the simple things in life that I used to indulge in.
The smell of the crisp air in the morning, the taste of a freshly peeled orange, the crunch of walnut between my teeth: all the small moments were rushed and completed mindlessly. I tried to find the balance between the demands of a mother and the needs of an individual. I had a life living in the present, and now it had changed. I needed to find a way to do what I used to do while integrating responsibility for this little person.

I was scared to seek help

I knew I should have sought help, but I procrastinated for months because I was scared. It got to the point where I couldn't sleep. I was eating crap. I was unhappy. I was constantly negative. I was always tired. I was sick of myself. I was merely going through the motions of life, letting it happen to me instead of making it happen for myself. I was incredibly unfulfilled. So one bright and sunny winter day, I made a call and got over my fear. I told myself, "Fuck your excuses, just do it."

One rainy Saturday evening

My therapist was a middle-aged woman (a working mother of 2 full-grown children) who wore dark-rimmed glasses. Her face was gentle, and her eyes were wise. The crease between her eyebrows would tense up as she listened, then soften as she spoke. We conversed. I unloaded. I expressed my feelings of being overwhelmed. I cried. I sobbed. Tears flooded my face, just like the raindrops running against the windowpane. I had 3 moments of clarity that I will continually remind myself of, so I can maintain balance during overwhelming times and keep myself sane enough to be a good imperfect mom.

1.) It Takes Time To Adjust

It takes an average of 8 to 12 months for a mom who has returned to work to feel normal again. This is an actual, clinical fact. It takes time. It was like a light bulb went off in my head. I could have easily read it in an article, but I needed someone (a professional, someone objective, someone with expertise) to tell me to my face that it's normal to feel the way I was feeling.

2.) I Am Valued

I told my therapist about my history of perfectionism and my ongoing road to recovery. Her response made me sob. Childhood perfectionism often arises when the child does not feel she is valued. The child believes that by acting 'perfectly,' she will create value for herself and those around her. When parents constantly praise a child for their maturity (grades, for example), it can rob them of their childhood. When I was in Grade 5, a friend of mine gave me the nickname "Mature Girl." Although it was a joke, there was an element of truth to it. I don't have memories of being just a kid. I always coloured inside the lines, making sure the sky was blue, the sun was yellow, and the trees were green. I would laugh at the kids who coloured their animals purple. I have to remind myself to let go of the idea that doing everything perfectly will make me valuable. I am valued because I value myself. I don't want to be a supermom; I want to be a healthy mom.

3.) Keep Writing

I didn't tell my therapist about my blog or that I've been a writer for as long as I can remember. So when she told me to start writing, it made me smile. She explained that the act of writing is an effective way for perfectionists to balance their emotions. The right brain is home to emotions and intuition. Perfectionists are often in a state of stress (trying to meet impossible demands and never satisfied), and so their right brain is in chronic overdrive. The left brain is home to logic and reasoning.
Writing helps activate the left brain, countering the emotional side of the overworked right brain.

I'm not afraid anymore

I've worked in mental health and did my graduate studies in the field. I fully understand why we need to take care of our mental and psychological health. I've encouraged and supported people around me going to therapy; however, I had never gone myself. Why? Because I always had this fear that I would appear weak or "crazy." Now I understand that this misconception stems from my upbringing and the social and cultural influences I had growing up. And after that first session, I've made strides to become happier, healthier, and more fulfilled. I hope sharing my story will help remove the stigma associated with mental health issues and give courage to anyone who feels overwhelmed, unhappy, or just completely stuck in life to seek help. I firmly believe that those who ask for help demonstrate more strength than those who keep it all in. So Readers, what are your thoughts on the word "therapy"? Have you gone? What was your perception of it growing up? Do you think your culture has influenced that view?
https://medium.com/modern-parent/how-therapy-helped-me-when-i-went-back-to-work-9b3608fbd77a
['Katharine Chan', 'Msc', 'Bsc']
2020-12-20 21:13:27.234000+00:00
['Parents', 'Mental Health', 'Parenting', 'Therapy', 'Motherhood']
Making Data Analytics Work on a Blockchain
Artificial Intelligence

Artificial intelligence with predictive learning capabilities, employing black-box algorithms, has been around for decades, used by institutions including governments and their militaries. Applications range from medicine to education to national security. However, do we know how it really works on a blockchain that provides behavioral data analytics?

DATAVLT was one of the first institutions to apply A.I. with machine learning capabilities on a blockchain, with the aim of providing a low-cost and efficient data analytics tool. They have provided in their whitepaper an overall data processing framework. The task: let us digest each component in simple terms. The discussion will be in this order: external and internal data, the algorithm black box, artificial intelligence, predictive (machine) learning capabilities, and the DATAVLT blockchain.

1st step: What type of data will be fed into the network?

The data can be external and internal. External data includes information from the global web, including that of similar entities and industries, Google Analytics, other third-party sources, and data from communication involving the consumption of goods and services, such as the social media accounts of the organization using the platform. Internal data includes the organization's private data held by different departments, especially the IT and planning departments, to facilitate better productivity and encourage innovation among employees so they can serve their customers better.

2nd step: What is the algorithm of the black box?

When all the data needed is gathered, it is time to feed it to the algorithm black box. The data is treated as inputs to the box (this box can be an algorithm, a transistor, or even the human brain). The box acts as an observer and selects all data related to the specifications or the industry of the tool's user. Afterwards, the selected data become outputs grouped by observable elements or similarities. That brings us to the next phase.

The black box problem, addressed by DATAVLT: despite its many uses, some will contend that a black box can bias the output data it processes, causing inaccurate analysis. Furthermore, creators cannot inspect some black-box algorithms' internal processes. However, this can be fixed if an organization devotes time to it. DATAVLT knows this problem exists and has been rigorous since inception about building the best black box. The reason most fail is the quick deployment of such systems without proper testing; on DATAVLT's road map, however, we can clearly see their commitment to deliver the best tool, since they will conduct numerous rounds of beta testing with partners before the official launch.

3rd step: The function of artificial intelligence

DATAVLT states that the A.I. will sort, unclutter, and reject duplicate and dubious data. This means the A.I. has to trim the insurmountable data outputs down to what is necessary for analysis, and to reject duplicated data resulting from departments keeping separate records of the same information.

4th step: The DATAVLT blockchain

Once the data is ready for processing and in-depth analysis, it should be secured and remain unaltered to prevent inaccurate reports.
This is done through blockchain since it has the characteristic of immutability and tamper proof where no one will be allowed to make deliberate and unintentional fraud within the network. 5th step: The machine learning capabilities (Predictive learning) The data will now be ready to be processed into information which can be communicated to the management team. Predictive learning capabilities allows the tracking of significant patterns of historical financial information, trends of customer behaviors, manager performances, variances useful in management by exception, and others.
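The whitepaper only sketches this immutability step; as a hedged illustration of the general idea (not DATAVLT's actual implementation), here is a minimal hash chain in Python. Each record stores the hash of its predecessor, so altering any earlier record invalidates every hash after it:

import hashlib
import json

def record_hash(record, prev_hash):
    # Hash the record together with the previous hash, so each
    # entry commits to the entire history before it.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis hash
    for rec in records:
        h = record_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev or record_hash(entry["record"], prev) != entry["hash"]:
            return False  # tampering detected
        prev = entry["hash"]
    return True

chain = build_chain([{"dept": "IT", "kpi": 0.92}, {"dept": "Planning", "kpi": 0.87}])
print(verify_chain(chain))          # True
chain[0]["record"]["kpi"] = 1.00    # tamper with an early record
print(verify_chain(chain))          # False

Real blockchains add distribution and consensus on top of this, but the hash chaining is what makes stored analytics inputs tamper-evident.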
https://medium.com/datadriveninvestor/making-data-analytics-work-on-a-blockchain-469db12b7557
['Ali Aswegui']
2018-06-25 10:28:04.081000+00:00
['Machine Learning', 'Artificial Intelligence', 'Blockchain', 'Technology', 'Bitcoin']
A Publication About Sexual Assault Survivors
A publication for sexual assault survivors to share their stories, poems, art, and narratives. Our voices need to be heard! Follow
https://medium.com/survivors/new-publication-for-sexual-assault-survivors-68aa07804ace
['Toni Tails']
2020-06-07 10:39:11.261000+00:00
['Mental Health', 'Family', 'Parenting', 'Life Lessons', 'Life']
Escort Recruiting Mega Style
Escort Recruiting Mega Style Forget About Pimps! Networks Like Lifetime and Hollywood Movies Recruit Escorts Into the Profession. And They Do It a Division At a Time! Dylan Kidd — Unsplash We read, hear and watch a lot of news about traffickers who lure girls into the escort profession. Maybe it's true…and maybe it's fiction. But I gained a little insight this afternoon that might really turn the mainstream on its ear. I was on the phone with an old buddy yesterday — an independently-operating escort who calls herself Candy — when somehow the conversation turned to that old rite of passage thing (the usual sexual abuse mythology which is generally true)…and then to how she was lured into the profession in the first place. And you might find the seminal recruitment tool sublimely enlightening. No, it wasn't a big, bad pimp driving a $100,000 Benz — or a foreign broker who "turned her out." It was a movie on THE LIFETIME CHANNEL that did the job! Yup! The girl was just 14 years old when one night she watched "THE MAYFLOWER MADAM" (the story of Sydney Biddle Barrows) on the "chick channel" — and that is where she found her calling! Now Candy is no dumbbell — at least academically speaking. En route to her destiny, the Sweet One enrolled in UCLA on scholarship. But once out in the world on her own, the lure of that fantasy, so carefully groomed by the movie she'd seen just a few years back, beckoned. And so she dropped out within weeks to pursue her particular dream — one of making the big bucks in the escort game. And sure enough, at just 18 years of age, Candy was living in her own cozy apartment, shopping till she dropped, and earning 900 bucks a day to pay the way! This I find fascinating. While law enforcement pursues any number of facilitators and traffickers who they think are the culprits, a freakin' cable channel just might be doing more to glamorize the escort world and recruit young girls into the profession than the people they're spending all that money to track! And what about "PRETTY WOMAN" the movie? How many women decided to give it a go based on that fucking fairy tale? I mean…come on! Who wouldn't want to marry a handsome trillionaire?? Put me in a wig and sign me up! Ya think maybe a few girls entered the rank and file based on that fairy tale? But that's not all of it! The media aids and abets the process in less obvious and more workaday ways. New York City's most notorious madam runs three separate places and is in constant need of new faces to titillate her endless list of regular customers. And while a significant percentage come from word of mouth and friends of her current employees, the lion's share find her via the internet help wanted ads she runs on a daily basis. Yessirree! The boss lady recruits via the Internet — yet another medium attracting young ladies. While her ads are fairly obvious, others are not. Emma, a beautiful Korean girl, found the business from the help wanted pages of a Korean daily. The ad was worded in such a way as to hoodwink the paper itself so the owner could run it in the first place; Emma answered thinking she'd only be giving massages. When she called, the boss asked "You know men, right?" She got the picture right away! But really…recruitment to the business can happen in any number of much more organic ways. Take Sexxxy Sadie, a college-educated Jewish woman born (by her own admission) with a healthy appetite for carnal fun.
She had a job writing insurance policies and one day commuted to a prospective client who as it turned out owned a massage parlor. He liked her…asked her out on a date…and then gently enlightened her. She never worked for him…but shortly thereafter, checked the help wanted ads in the Voice adult section and started work making the big bucks…all of which brings us back to the media as recruiters. When it comes to The Great American Hoochie Mama, the "hood rats" hit the bricks the old-fashioned way. Race or ethnicity notwithstanding (not all hood rats are women of color), these girls tend to get pimped by a smooth-talking player. He could be in the club. Or he could be driving down the boulevard in a tricked-out ride. After the initial overture, Homey might run some ads for her if he's an enterprising individual, or more often than not, will place her with an escort agency where she'll receive roughly 50% of the money she earns…and then go home to turn her share over to "daddy!" New York City is chock full of foreign escorts…especially of the Asian variety. How does recruitment work with them? Suzie, a phone girl who's managed virtually every massage parlor in the Big Apple, reports that recruitment back in the home country starts at what is called a "hostess bar," a ubiquitous and completely legal enterprise peculiar to their culture. In an American bistro, a waitress takes your order…slings it on the table…and then presents you with a check. But a Korean hostess bar is different. Often a bunch of businessmen will get their own room in which to drink. Their waitress not only pours their beverages…but sits and chats with them — and might even perform a little karaoke for their entertainment. While nothing illegal or sexual goes on, the leap to massage parlor work from the hostess bar isn't quite as pronounced as going directly into the fire. And in fact, the hostess bar serves as a sort of minor league training ground. Once working as a waitress, the girl networks with her colleagues and customers and sooner than later discovers other options to make bigger bucks. And so it goes in Korea…and China as well according to Suzie. Clearly, recruitment tools — be they old school or media-driven — supply workers for a business that isn't going anywhere anytime soon. The world's oldest profession has that moniker for a reason. Its survivability rivals that of a cockroach in a nuclear war! But really…when you think about it…isn't all the media coverage/glorification of the trade as culpable as any pimp or trafficker? It's such a powerful recruitment tool! Pimps do it one girl at a time. But Lifetime? Thousands and thousands! Not to mention Hollywood! OMG! Don't tell me networks and movie companies don't profit! How much do you think Garry Marshall earned on Pretty Woman? Of course, the Constitution protects people like Garry and networks like The Lifetime Channel. Thus, they get to earn millions without regard for how many mixed-up young girls they entice into the business while at the same time, some dude who posts an ad on an adult directory site as a favor to his favorite escort to gain her favor, runs the risk of arrest for so doing…all of which doesn't make a lot of sense to me. Nor does the whack-a-mole mentality of law enforcement which seems to target buck privates and not the commissioned officers. In a perfect world, this would all be legal. But that's a discussion for another day.
I’m not saying that it’s time to censor any of the media in this arena…just that recognizing how escorts are enticed into the business is more complicated than it seems. There are many forces at work (some unrecognized) doing an excellent job…as there doesn’t seem to be a shortage of practitioners despite law enforcement’s best efforts. Wanna read more escort exposés? Try these!
https://medium.com/everything-you-wanted-to-know-about-escorts-but/escort-recruiting-mega-style-18865fa74d30
['William', 'Dollar Bill']
2020-12-20 23:16:59.041000+00:00
['Culture', 'Escorts', 'Sex', 'Psychology', 'Life Lessons']
Mocking React hooks for testing with Jest and react-testing-library
Obligatory hook-related stock photo (📷 by Chunlea) Imagine this familiar scenario: a developer builds a life-changing todo application using React and has decided to leverage hooks to make the sharing of state logic much cleaner. Being a responsible developer with a passion for testing, they bring in jest and react-testing-library to start writing unit and integration tests for their components. Upon implementing these component tests, a challenge arises: How does one mock a hook’s value when its state exists outside the scope of a component? To help solve this, we can leverage jest.mock()
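The article's own example uses jest.mock() in JavaScript and is cut off here; jest.mock() replaces the module that exports the hook so the component under test receives a controlled value. Since the code examples in this collection are in Python, here is the analogous module-level mocking pattern with unittest.mock, explicitly a stand-in: the hooks namespace, use_todos, and render_todo_count below are hypothetical, not the article's actual code.

import types
from unittest import mock

# stand-in for a module that exports a stateful, hook-like accessor
hooks = types.SimpleNamespace(use_todos=lambda: ["real", "data", "here"])

def render_todo_count():
    # unit under test: formats whatever the "hook" returns
    todos = hooks.use_todos()
    return f"{len(todos)} todos"

def test_todo_count():
    # the real hook is swapped out for the duration of the test,
    # just as jest.mock() swaps out the hook's module in Jest
    with mock.patch.object(hooks, "use_todos", return_value=["a", "b"]) as m:
        assert render_todo_count() == "2 todos"
        m.assert_called_once()

test_todo_count()
print("ok")

The design idea is the same in both ecosystems: because the hook's state lives outside the component, you mock at the module boundary rather than inside the component itself.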
https://chanonroy.medium.com/mocking-hooks-for-testing-with-jest-and-react-testing-library-d34505616d12
['Chanon Roy']
2020-07-19 15:43:29.018000+00:00
['React Hook', 'Hooks', 'Jest', 'React', 'React Testing Library']
Measures of Variability — Range, Variance, Std. Deviation, Coefficient of Variation
Variability gives you an idea of how the data in a data set is distributed around its mean value; in other words, how far the data is spread out. Variation exists in our daily lives; for example, the time we wake up every day varies over a range. Too much variability affects other events during the day, and the outcome might not be favorable. Similarly, if your favorite dish in a restaurant varies a lot, you would not like that. This variation can be measured. Like the three measures of central tendency, we have several measures of variability, namely: Range, Variance, Standard Deviation, and Coefficient of Variation / Relative Standard Deviation. We will calculate each of the above for our height distribution (see the Python sketch below). 1. Range: The range is the simplest parameter of a distribution; it tells us the interval within which the data points lie, i.e., the min and the max values. The range for the above distribution is (62.0, 78.5). 2. Variance: Variance denotes the dispersion of data points around the mean. The mean for the above distribution is 69.09888. The graph above is the visualization of the data set below: 78.5 75.5 75.0 75.0 75.0 74.0 74.0 73.0 73.0 73.0 73.0 73.0 72.7 72.0 72.0 72.0 72.0 72.0 72.0 72.5 72.0 72.0 71.0 71.0 71.0 71.0 71.0 71.0 71.7 71.0 71.5 71.5 71.0 71.7 71.0 71.5 71.0 71.0 71.0 71.0 71.0 71.0 71.0 71.0 71.0 70.0 70.0 70.0 70.0 70.0 70.5 70.0 70.0 70.0 70.0 70.0 70.0 70.0 70.0 70.0 70.0 70.5 70.0 70.0 70.0 70.5 70.5 70.0 70.0 70.0 70.5 70.3 70.5 70.0 70.0 70.0 70.0 69.0 69.0 69.0 69.0 69.0 69.0 69.5 69.0 69.5 69.0 69.0 69.5 69.2 69.0 69.0 69.0 69.0 69.0 69.5 69.0 69.5 69.0 69.0 69.5 69.0 69.0 69.0 69.0 68.7 68.5 68.5 68.0 68.0 68.0 68.0 68.5 68.0 68.5 68.0 68.0 68.0 68.0 68.5 68.0 68.0 68.0 68.0 68.5 68.0 68.2 68.0 68.7 68.0 68.0 68.0 68.0 68.0 68.5 68.0 67.0 67.0 67.0 67.0 67.0 67.0 67.0 67.5 67.0 67.0 67.0 67.5 67.0 66.0 66.0 66.0 66.0 66.5 66.0 66.0 66.5 66.5 66.0 66.0 66.0 66.0 65.0 65.0 65.0 65.0 65.0 65.0 65.0 65.0 65.0 65.5 65.5 64.0 64.0 64.0 64.0 62.0 62.5 The formula for the variance of a sample (we won't be using the population formula) is s² = Σ(Xi − M)² / (N − 1). We get s² = 6.484943 inch² for the above data set. As you can see, the variance is the sum of the squared differences between each data point X1, X2, X3, … Xn and the mean M, divided by the degrees of freedom, i.e., N − 1. Note: variance comes in a squared unit, inch², which doesn't make much intuitive sense. 3. Standard Deviation: In very large data sets the variance can be a very large number, and it is expressed in squared units, so we take the square root of the variance. This is called the standard deviation. The formula for the standard deviation is simply the square root of the variance: s = √s². Therefore s = 2.546555 inches for the above data set. 4. Relative Standard Deviation / Coefficient of Variation: This is the standard deviation relative to the mean of the data set, an even better, unitless measure of dispersion. The coefficient of variation is obtained by dividing the standard deviation by the mean of the sample: CV = s / M. The relative standard deviation for our data set is 0.03675069. Note: both variance and standard deviation carry units, inches in our case. Imagine a scenario where you need to compare the dispersion of two data sets with different units; the coefficient of variation is preferred in such a situation.
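As a quick check of these definitions, here is a short Python sketch computing all four measures on a small, purely illustrative list of heights (not the article's full data set):

import statistics

heights = [62.0, 65.5, 68.0, 69.0, 70.0, 71.0, 72.5, 78.5]  # illustrative sample

low, high = min(heights), max(heights)   # range: (min, max)
mean = statistics.mean(heights)
var = statistics.variance(heights)       # sample variance, divides by N - 1
std = statistics.stdev(heights)          # square root of the sample variance
cv = std / mean                          # coefficient of variation, unitless

print(f"range: ({low}, {high})")
print(f"mean: {mean:.5f} inches")
print(f"variance: {var:.5f} inch^2")
print(f"std deviation: {std:.5f} inches")
print(f"coefficient of variation: {cv:.5f}")

Swapping in the article's full height list would let you verify the quoted values.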
https://medium.com/swlh/measures-of-variability-range-variance-std-deviation-coefficient-of-variation-b972dbc679ca
['Jayesh Rao']
2020-11-03 19:59:15.543000+00:00
['Statistics', 'R Programming', 'Analytics', 'Stats', 'Analysis']
Runner’s Life Newsletter
Runner’s Life Newsletter Highlights and stories from December 6-December 12, 2020 Photo by Chander R on Unsplash Welcome to the Runner’s Life newsletter! If you’ve missed previous newsletters, you can find the archive here. When I speak with other runners, the reasons why someone runs are varied. But frequently, there is a link between mental health and why someone started running. For me, it was a way to help me deal with the depression and anxiety I’ve experienced for most of my adult life. Growing up in a time and within a household where showing emotions resulted in being shamed, I learned to just deal with it. But all that did was make things worse. I wore a mask every day and appeared happy, but on the inside, I was dying. Along with therapy and medication, running has changed both my mental and physical health. Awareness of mental health issues and finding ways to reduce the stigma are important to me. For those of us who have experienced clinical depression, it is incredibly debilitating. This week, I want to share a video from Alexi Pappas, a long-distance runner, NCAA All-American, and the Greek national record holder in the 10k. Please take a moment to watch it. I Achieved My Wildest Dreams. Then Depression Hit.
https://medium.com/runners-life/runners-life-newsletter-df68b1d5c743
['Jeff Barton']
2020-12-14 01:22:24.981000+00:00
['Newsletter', 'Mental Health', 'Running', 'Life Lessons', 'Fitness']
How to save tabs in Chrome with Pin Tabs
Look at the picture. Does it look familiar? How many times have you ended up with too many tabs open in Google Chrome? The truth is that most of the time we do not need those tabs open, but we do not close them, maybe just because deleting them feels like a chore. And you do not want to save them all to your bookmarks either, because you know that if you ever use them again, it will be within the next few hours or days. I have this problem, and unfortunately I could not find anything really compelling in the Web Store to organise my tabs. I do not need a full tab manager, because that would mean spending time organising my tabs, and I want to be as efficient as possible. What I really need is a sort of auto-expiring bookmarks tool: something that keeps track of what I saved, but only for a very short period of time. I need a kind of box that an automatic timer empties after a while (see the sketch below for the idea). For this reason, I spent some of my time developing a Chrome extension that I called Pin Tabs (it could have been called "auto-expiring bookmarks manager", but that sounds too long and complicated 😱). It is really simple and does everything I need: save tabs, keep tabs, and delete tabs. Now, every time I want to clean up my browser's tabs, I check each one, and if I am not sure whether I will use it again, I save it in this box. Then, if I really need a tab, I add it to my bookmarks; otherwise I forget about it and the extension does the dirty job of removing it. This extension is free and you can find the link to download it here. I am looking forward to your feedback 😃
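The extension itself is JavaScript and its source isn't shown here; purely to illustrate the auto-expiry idea it describes, here is a minimal sketch in Python (the language used elsewhere in this collection) of a "box" whose entries vanish after a time-to-live:

import time

class ExpiringBox:
    """Stores (key, value) pairs that expire ttl_seconds after insertion."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.items = {}  # key -> (value, saved_at)

    def save(self, key, value):
        self.items[key] = (value, time.time())

    def get(self, key):
        self._purge()
        entry = self.items.get(key)
        return entry[0] if entry else None

    def _purge(self):
        # the "automatic timer": drop everything older than the TTL
        now = time.time()
        self.items = {k: v for k, v in self.items.items()
                      if now - v[1] < self.ttl}

box = ExpiringBox(ttl_seconds=2)
box.save("docs", "https://example.com/article")
print(box.get("docs"))   # the saved URL
time.sleep(2.1)
print(box.get("docs"))   # None: the entry has expired

A real extension would persist the entries and purge on a schedule, but the data-structure idea is the same: bookmarks that clean themselves up.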
https://medium.com/code-words/how-i-managed-chrome-tabs-a455b655f835
['Pier Roberto Lucisano']
2018-03-20 15:41:37.674000+00:00
['Chrome Extension', 'Chrome Custom Tabs', 'Productivity', 'Alumni', 'Google Chrome']
The Wefunder Petition
Kickstarter is awesome for funding creative projects. It’s one of my favorite startups at the moment, and it’s important. I lurves it. But I wish I could use something like it to invest in actual companies — both the tech startup variety that my friends work tirelessly on as well as the hyper local variety that make those special tapas that my neighbors are raving about. Soon, new laws may allow us to do just that. To raise awareness of these initiatives, a few friends and I tossed together the Wefunder petition to support HR2930 and Brown’s Democratizing Access to Capital Act (S.1791) that’s currently being debated in the US Senate. Please go sign it (click the “learn more” link on the site for more background information and links). These changes could have huge impacts for both entrepreneurs and investors, allowing bold new ideas to surface and creating a ton of important jobs and opportunities. I know that sounds like marketing speak, but it’s true. This matters. We’ll be going to DC next week to talk to Senator Brown’s people and see what else we can do to push this forward. We’ve also been fortunate enough to get some great coverage for our efforts at BoingBoing and ReadWriteWeb. Now go sign it already and help us send a message. And thanks!
https://medium.com/zerosum-dot-org/the-wefunder-petition-b105564227d5
['Nick Plante']
2017-11-04 17:26:01.174000+00:00
['Crowdfunding', 'Equity Crowdfunding', 'Fundraising', 'Investing', 'Startup']
A Children’s Fantasy Story about the World’s Greatest Truth
A Children’s Fantasy Story about the World’s Greatest Truth A review of April Graney’s new children’s book, *The Marvelous Maker* When is the last time a story quite literally captivated you? It pulled you in, captured your mind, and wouldn’t let go. For me, it was the Hunger Games trilogy; before that, Ted Dekker’s Circle series. A children’s picture book shouldn’t be able to create the same feeling, but I can truly imagine that outcome for this new book from B&H Publishing and author April Graney, The Marvelous Maker: A Creation and Redemption Parable. This wonderful book is written as a fantasy story about Adamus and Genevieve (Adam and Eve) and the story takes the reader through the true stories of creation, fall, and redemption with those two as stand-ins for all of humanity. Graney explains: While the real Adam and Eve died anticipating the promise of a savior, the characters in The Marvelous Maker live through the entire epic tale of the Bible. Adamus and Genevieve represent countless generations of believers who have been delivered from darkness and brought into the kingdom of light through Jesus Christ. Their story represents all of us, in the sense that every one of us has sinned and is living in a fallen world in need of redemption. The result is a beautiful book, in form and content, that will engross your child in the story of the Bible. Monica Garofalo’s illustrations are beautiful, and they fit the tone of the book perfectly. The vocabulary is high enough that just the words will be a learning experience for older children, but somehow my 5-year-old followed along well enough that it prompted a great conversation. My 5-year-old Eliana has been asking lots of questions about God recently. There was a version of “How come God doesn’t always answer my prayers?”. Then she asked, “When we breathe or walk or talk, does that come from God or does that come from our bodies?” (That was a fun conversation.) Then, after asking several questions about The Marvelous Maker, we stop three-quarters of the way through the book because she asks me to pray with her to ask God to “be her boss”. We had a long conversation about it and my wife and I decided that she probably wasn’t quite ready yet to accept Jesus as her savior and Lord of her life, but God is working in Eliana’s life. I pray that The Marvelous Maker will create space for these conversations in your precious days with your children. I received a review copy of The Marvelous Maker courtesy of B&H Publishing with a special thanks to Jenaye White, but my opinions are my own.
https://medium.com/park-recommendations/a-childrens-fantasy-story-about-the-world-s-greatest-truth-358b9a2e4a1f
['Jason Park']
2020-09-08 01:18:33.413000+00:00
['Children', 'Books', 'Reading', 'Religion And Spirituality', 'Christianity']
BitClave Weekly Update — May 21, 2018
Development Last week we finished the permission APIs at the platform level and made very significant progress on the integration and testing of BASE-Login in Desearch. On the platform side, we have resumed development on the REQUEST and OFFER entities and on the concept of REQUEST/OFFER matching and verification as per BitClave's white paper. At this stage we are focusing on interactions between REQUEST and OFFER and on secure data sharing between these two entities. You can see the progress and the latest on our engineering app for API testing at https://base-bitclave-com.herokuapp.com. Your feedback on the APIs, and in general, is always very welcome. Marketing Last week our Marketing team focused on Blockchain Week in New York City, attending different events and connecting with the who's who of the industry. Our Co-Founder Vasily Trofimchuk and Head of Growth were there in New York City for Blockchain Week. Ethereal Summit was about meeting the big folks in the industry and interacting with them, speaking about BitClave and our work with MatchICO and Desearch. Our team there met Joseph Lubin from ConsenSys, Laura Shin from Unchained/Unconfirmed, and Eva Kaili from the EU Parliament, to name a few. We were also introduced to the ConsenSys Mesh and how they are working to build a decentralized future with their hub-and-spoke model of operation. We are in talks with them to see if Desearch/BitClave can be a part of it in any way. Consensus 2018 saw over 8000 people in attendance compared to about 1700 in 2017. Though Vitalik Buterin boycotted the event and alerted the community not to attend, what we experienced there was completely the opposite of that warning. The Ethereum community there was buzzing along with the others. Consensus is probably the biggest blockchain event in the world today, with so many people attending. We were glad to meet and interact with many of our users there and share updates on our existing products and upcoming releases. We also attended a few side events, meetups, and parties where we could interact with the active blockchain community in person. Our Head of Growth was interviewed by the CNBC Crypto Trader team as part of their documentary about blockchain in general and Consensus 2018. He was also interviewed by a local media team in English as well as in his native language, Hindi. We have also received a good number of media requests that we will be following up on in the coming weeks. The last event we attended during Blockchain Week in NYC was the Crypto Influence Summit, where we met a lot of influencers, YouTubers, and podcasters in one forum. We were happy to learn that most of them know BitClave and that a few of them have already spoken about us. Pratik met Crypt0 Omar, who won the Best Influencer and Most Relatable Influencer awards at the event. In the coming days, we will be connecting with all the people we met during Blockchain Week. Regarding our push into video communications for our products MatchICO and Desearch, we've selected teams to create feature videos to introduce each project and have developed those narratives. We are excited to share them soon. If you or someone you know is interested in joining the BitClave team, check out our AngelList profile for postings. Our community members active on Telegram should be sure to join the Desearch.com channel on Telegram for the latest updates and to provide feedback to our developers. Events/meetings This week we attended the major conference Consensus 2018 in New York, May 14–16.
Our Co-Founder Vasily Trofimchuk, Head of Growth Pratik Gandhi, our events coordinator, and other team members were there meeting crypto influencers and other leaders. Head of Growth Pratik Gandhi also attended the Crypto Influence Summit on May 17. In addition, our Event Manager Stanislav Liutenko attended the Next Block Conference, which took place in Kyiv, Ukraine on May 18th. The highlight was meeting Bobby Lee, the co-founder of BTCC, who gave a speech on bitcoin's future. We have a lot of events in June to announce — stay tuned! Want to learn more? Don't miss our special news updates. Sign up for BitClave here. Join our Telegram Channel: https://t.me/BitClaveCommunity Github: https://github.com/bitclave/ Official Twitter: https://twitter.com/bitclave Official Facebook: https://fb.me/bitclave
https://medium.com/bitclave/bitclave-weekly-update-may-21-2018-d22d04ce585a
[]
2018-08-04 11:36:00.812000+00:00
['Technology', 'Marketing', 'Blockchain', 'Update', 'Decentralization']
How to explain non-additive measures, Part 2: Incremental contribution
How to explain non-additive measures, Part 2: Incremental contribution Interactive decomposition with atoti This article is the second part of a series of tutorials about interactive analytics in atoti's dynamic pivot tables. Check out Part 1: Pro-rata allocation. If you wish to read what non-additive measures are and why we may want to decompose them — please refer to the same Pro-rata allocation post. Solution #2: Incremental contribution In the previous post we decomposed a non-additive measure into additive components. There's another perspective on contribution analysis: instead of allocating the top-level number down to additive components, you evaluate the incremental, aka marginal, impact of a contributor — computing the impact of a scenario in which that contributor is hypothetically removed. In the following example, the “VaR Incremental BookHierarchy” measure shows the impact of each sub-portfolio on the firm-level VaR “as if the portfolio was removed”. For example, for the business line “Equities” it is +15,414, meaning that this portfolio has a positive +15,414 impact on the firm-level VaR. To test that, let's visualize the plain version of the “VaR” measure — not the incremental one. If we filter out “Equities” and compute the “VaR”, it goes from -593,129 to -608,543, which is exactly 15,414 lower (-593k - (-609k))! To implement this behavior in atoti, we will use the exclude_self=True parameter of the siblings aggregation. The first measure aggregates the positions that are “siblings” of a sub-portfolio, ignoring positions that belong to the current sub-portfolio. From that siblings aggregation we then compute VaR. To finalize the incremental calculation, we compute VaR above the current book and subtract the calculation that excludes the current sub-portfolio (see the numeric sketch below). You can find a notebook implementing this example in the atoti gallery: Value at Risk: A simple way to monitor market risk with atoti. Conclusion In this post, we discussed how to use parent and sibling relationships in atoti to implement contributory measures and explain non-additive measures. I hope the described techniques can help you build powerful analytic applications!
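The atoti snippets from the original post are not reproduced here, so as a stand-in, here is a minimal NumPy sketch of the incremental-VaR arithmetic itself: historical-simulation VaR on made-up P&L vectors, with the incremental contribution of each book computed as VaR(all books) minus VaR(all books except this one). The data, book names, and the 95% quantile are illustrative assumptions, not the article's actual setup.

import numpy as np

rng = np.random.default_rng(0)

# hypothetical P&L vectors: one row per book, one column per scenario
books = ["Equities", "Rates", "FX"]
pnl = rng.normal(0, 1_000, size=(3, 500))

def var(pnl_vector, level=0.95):
    # historical-simulation VaR: the (1 - level) quantile of the P&L
    return np.quantile(pnl_vector, 1 - level)

firm_var = var(pnl.sum(axis=0))
for i, book in enumerate(books):
    siblings = np.delete(pnl, i, axis=0).sum(axis=0)  # exclude_self=True, in spirit
    incremental = firm_var - var(siblings)  # positive: the book raises (improves) firm VaR
    print(f"{book}: incremental VaR = {incremental:+,.0f}")

The siblings aggregation in atoti plays the role of np.delete here: it sums everything at the same hierarchy level except the current member.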
https://medium.com/atoti/how-to-explain-non-additive-measures-part-2-incremental-contribution-37543284588a
['Anastasia V Polyakova']
2020-09-28 21:17:15.910000+00:00
['Fintech', 'Technology', 'Regulation', 'Risk Management', 'Big Data']
How to Stay Happy While You’re Unemployed During COVID-19!
“Because of your smile, you make life more beautiful.”- Thich Nhat Hanh. Life is quite an intriguing realm that we’re always living through, yet we can’t fathom its complications. For example, a beehive -its output yields numerous benefits, yet we still can’t comprehend how honeybees put together a nest full of divinity. The common perception of honeybees is they too have a life purpose similar to us. We’ll never know the workings of our nature, and like the late legendary singer Frank Sinatra famously said, “That’s life.” While honeybees work to serve their life’s purpose, similarly, we aim to do the same — mainly while we’re employed. We feel accomplished and develop a sense of self-worth when we do the work we love. Unfortunately, this COVID-19 pandemic has left millions of Americans unemployed, stripping them of their usual channel to gain happiness and self-worth. Being employed is vital as it serves as an assurance of stability and fuels us to strive to fulfill our life’s purpose. I’ll admit that being unemployed sucks, but there are many ways to stay happy! Positive perception is what is going to get you through the sorrows of unemployment! Trust me, fulfilling the desire to live a happy life through whichever means necessary ultimately negates the monticules we encounter along our life’s journey! :) One common statement I heard throughout my life is, “life is dynamic; it brings you sorrow along with an abundance of triumphant moments.” Mainly, the goal of achieving happiness is the fuel to get through life’s many presented obstacles. When you’re employed, you have the means to maintain a sense of safety and security — on the other hand, being unemployed shakes you because of the perceived loss of security. Trust me, while you’re unemployed, you’re still OK! You need to remain cool, calm, and collected! Your perception of life’s moments heavily weighs the outcome of your satisfaction. For example, I’m a big believer in the Latin phrase “Carpe Diem.” I live by it religiously, and I choose to see life matters from a half glass full perspective. I believe that having a half-glass full view of life results in more happiness. You’ll start to weigh out the pros, which will help you develop a positive paradigm. For example, I lost my job due to the COVID-19 pandemic; it set my life back by miles, stripped my happiness, and led my mental health to deteriorate. However, I found ways throughout my unemployment phase to cope. I created a pathway that has led me to believe in myself, and as a result, I’m happier. Instead, I choose to see things from a positive angle, which helps formulate my decisions that end in an optimistic outlook. I’ll admit that there are still kinks in my life — but I am more confident than before — and I want you to find your realm of happiness! :) Life is too short, so don’t drown yourself in misery. Everything will turn out great in the end. :) Where to begin? The COVID-19 pandemic deserves an entire yearly addition to the Guinness Book of World Records. It’s an invisible beast that has crawled its way into our lives, leaving us haunted by its harmful effects. 
For starters, it robbed the joy of millions of Americans: It has swiftly killed people in abundance It caused layoffs across the nation Robbed the enriched learning environment for kids Compelled people to rethink how to celebrate their successes Pushed people away from one another because of its contagiousness Like how President Trump commonly describes unprecedented matters, “I haven’t seen anything like this.” I strongly dislike this Twilight Zone era we are currently living in, as it has caused me to be unemployed now for almost eight months, halting my life goals and being detrimental to my happiness. However, I’m grateful for finding ways on how to get through it and maintaining a sense of stability. Here’s how you can stay happy while you’re unemployed during this pandemic: Having a positive perception Taking care of your mental health Taking care of your physical health Pursuing hobbies Social interaction Growing yourself Being grateful Working temporarily 1. Positive Perception- I know it’s hard to be happy when you’re unemployed. However, there are numerous ways to turn that frown upside down and into a smile! The key is to acknowledge your situation and switch up your perception. Remember, a period of unemployment is only temporary. Negativity gravitates towards negativity. Likewise, positivity gravitates towards positivity. Therefore, one solution is to see things positively. For example, consider seeing your situation from a glass-half-full perspective. One pro while being unemployed is that you have more time on your hands to do things you love. The utilization of this time is excellent for your mental well-being. For example, when you spend more time with your family, friends, pursuing new hobbies, reading, or finding ways to enrich yourself. Having this extra time allows for us to hit the refresh button and helps steer away from any downtrodden moments. When you have a positive perception during a negative period, it opens up doors to stay busy. Sure, not earning money or not doing what you love is undeniably a bummer. Furthermore, embracing the temporary situation by opening yourself to new avenues is tremendous for your mental health! 2. Mental Health Wellness- As you can tell from my point above, I emphasize mental health wellness! Maintaining your mental well-being is crucial, especially when you’re unemployed! Some great ways to keep your mental well-being is by checking in with yourself, journaling, meditating, staying proactive, getting adequate amounts of sleep, staying social, and indulging in hobbies! These all account for keeping your mental health inline. A particular benefit of being unemployed is that you have time to work on your mental health! For example, one thing essential to do continuously is to check-in with yourself throughout your unemployed period. One great way of checking-in is to journal, which allows you to track your progress. I journal regularly, and it keeps me focused and happy! Whenever I feel down, I flock to my journal because it helps me stay intact with my positive emotions. By journaling, you can set goals such as finding a job and can list your progress. Whenever you feel down, take a look at your progress or write about it because it will uplift your spirit. The tracking of your progression serves as reassurance that you are getting closer to your goal. Remember, like when we’re employed, we have goals and tasks to keep track of; the same goes for when we’re unemployed. 
When you’re unemployed, the end goal is to find a job or start a business. Every step is a sign of progression! By staying proactive, you accomplish small victories, which essentially keep you and your mind happy and sharp! 3. Physical Health Wellness- Similar to what I mentioned above, taking care of your physical health is just as fundamental as taking care of your mental health. We have one life to live, so why not live a sexy and comfortable experience. Remember, life is too short! #carpediem. If you already have a workout routine, then kudos to you! For those who don’t have a workout routine, I highly recommend incorporating a workout routine as it significantly uplifts your mood. I remember before the COVID-19 pandemic, I was twenty-lbs overweight. On April 1st, 2020, I checked my weight, and I weighed 201 lbs! I was flabbergasted, and I felt disgusted and awful about my weight. I knew I had to do something. I started working out four days a week, one to two hours a day. My workout routine: Monday-Chest and triceps followed by half an hour of biking Tuesday-Back and biceps followed by half an hour of biking Wednesday-rest or bike Thursday-Leg day followed by half an hour of biking Friday-Abs day followed by half an hour of biking Since April 1st, 2020, I’ve lost over twenty-two lbs. It’s an incredible feeling, I tell ya! I feel 10x happier than before! I’ve more energy now, and I feel much more optimistic! A significant benefit of working out is that it serves as fuel to remain positive, happy, and fit. For instance, when you finish working out, it’s not only mentally satisfying but also serves as a personal accomplishment. Notably, personal victories such as working out while you’re unemployed are vital because they help boost your self-worth. The upside of being unemployed is that there are many workout options; moreover, you shouldn’t have any excuse for not building a workout routine! Unemployment sucks, and feeling downtrodden as a result, isn’t sexy! Don’t forget, working out leads to more happiness, builds self-confidence, and feeling sexier! 4. Pursuing Hobbies- Hobbies, Hobbies, Hobbies! The three H’s of fun! I strongly urge pursuing your hobbies to keep you focused and happy! Immersing yourself in hobbies locks you into a bubble that not only challenges you but also keeps you entertained, yielding a vast amount of happiness! I love photography and traveling! I get lost pursuing them because they challenge me, help me grow, and ultimately make me happy. If you have hobbies, then keep following them! I suggest finding hobbies if you don’t have any yet. Indulging in your hobbies is essential because they lead to feeling accomplished and productive, making you feel good about yourself! You can have fun while you’re unemployed! Don’t let society’s opposing viewpoints bog you down! You have other talents so continue to excel at them in the meantime! Heck, hobbies can turn into full-time opportunities! My fellow readers, our world is your oyster! :) 5. Social Interaction-It’s the holiday season, time to party! Wait — not so fast! COVID-19 is notorious for forcing us into our shells. However, that doesn’t mean we can’t socialize! As human beings, we love to mingle like we’re single so we can’t stay contained for an extended time! Staying indoors for so long is only going to cause us to explode like the corn kernels that turn into popcorn! I understand the number of COVID-19 cases has risen worldwide — more significantly in America. Socializing is nutritious for our souls! 
Nonetheless, I'm an advocate for maintaining socialization while adhering to cautious practices. It has become difficult to see loved ones, friends, and significant others in person, especially during the holiday season. However, don't stop socializing. When you do associate in-person, be mindful and practice social distancing and wear a mask! We're all in this mess together, so let's all help contain this invisible monster! COVID-19 has caused many people across the globe to feel lonelier than they have before — mainly unemployed people. I know meeting in-person is the preferred way of communicating, but there are myriad ways of communicating! Don't keep yourself feeling isolated! While we're unemployed, it's crucial to stay in touch with your loved ones and friends. Negative thoughts can creep up faster than the speed of light, and having a supportive group can help you dodge those malicious bullets — Matrix-style. I recommend the following ways of communicating: Meeting in-person — practice social distancing Video calling — there's an abundance of options Calling Chatting on social media Texting You're not alone — so utilize your social sphere! The key is to remain happy throughout your unemployed period, despite the hardship. Most importantly, social interaction can help ease your pain. FYI, surround yourself with positive people! 6. Growing Yourself-One interesting aspect about genetics is that we are bound to blossom up to a predetermined mark. Unfortunately, we can't grow taller once we've reached our peak height — but we can grow ourselves in other dimensions: mentally, physically, and emotionally. We have one life to live, so why not enhance ourselves? When we grow ourselves, we're happier! While I've been unemployed, I've relentlessly improved myself in multiple realms. I've focused on: strengthening my mental health, getting in physical shape, enhancing various skills (digital marketing, writing, graphic design, and photo editing), spending more time with my family, significant other, and friends. Heck, I started writing on Medium this year and never looked back. I'm taking the time I have to create a better version of myself. I've been happier in my life despite not having a job, which I am determined to get soon! So will you! :) For my mental health, I've been meditating, journaling, getting a good night's rest, staying busy, and practicing being mindful. For my physical health, I've been working out at least four days a week up to two hours. I feel accomplished when I work out. Plus, I've lost twenty-two lbs, which is a huge mental boost. I'm continuing to work out so that I can maintain my sexiness ;). For my digital marketing skills, I've been creating content, writing more, making videos, crafting my graphic design, and enhancing my photo editing skills. I feel much better knowing I'm improving myself day by day. I've built my expertise by continually working on different avenues; this also makes me feel much more confident when interviewing. Plus, here I am, writing on Medium! I spend a significant amount of time with my family since I'm back home, haha. I'll admit being with my family throughout this temporary downturn has been a blessing. At times, there are stressful moments when my parents ask me how the job hunt is going, or I get bummed out when I see my family working. I combat these negative feelings by keeping myself busy and working on myself. I suppose you can try to be with your family or give them a visit! Once again — you're not alone!
Plus, having a stable significant other is tremendous for staying happy! I can't stress enough how grateful I am for my amazing girlfriend, Manleen! :) Suppose you have a significant other, then great! If not, possibly start dating and swarm like a bee! If you're not yet comfortable, that's understandable too! The point is to be happy and to get through your unemployed period happy! Since I've been back home, I've spent a lot of time with my friends. It got lonely while I was living in the Bay Area, so catching up has been great! I advise giving your friends a visit or suggesting they come to visit. Friends are like extended family, and their support matters a lot! The joy of friendship is priceless! Plus, in today's era, opening up about emotional matters is de-stigmatized, so don't be afraid to share what's going on with your friends! Make sure to stay in frequent contact in case you're away from them! You can also hop on Meetup.com and find groups based on your interests. Always stay curious about meeting new people! 7. Being Grateful-I know being unemployed is not easy, and it carries a gloomy ambiance. However, trust me when I say this practice helps — being grateful for the things you have every morning. When you state the things you're thankful for, you will feel much happier! List things you think of right away and gradually add to your list. I do this every morning and whenever I'm feeling down. The blessings you have in your life are the tools you have to help you get through this downturn. Blessings can be things such as: having a roof over your head, your skills, being close to your family, friends, living with a significant other, or having friends nearby. This list of blessings is one of many that helps to boost your mood and offers emotional support! You'll start to appreciate things more as you become more grateful. Whenever a negative feeling springs up, you can negate any negative emotions by counting your blessings! No doubt that having a job is excellent, but it isn't everything to keep you happy! Imagine only having a job but not having a family, significant other, or friends! Yikes, life would be lonely and miserable! So, start counting your blessings, and like DJ Khaled famously says, "Bless up." 8. Temporarily Work-I know being unemployed is not ideal, but one way you can keep your grip is by finding a temporary job! Wait, what? Look, I know you may be thinking, why would I look for a job outside of my expertise? What would employers or other people think if I picked up an odd job? I know what it's like to ask that same question! Don't forget: the goal is to stay happy! Plus, who's going to pay the bills? I found a temporary position to keep me feeling happy and accomplished! The sense of feeling accomplished enhances your self-worth, self-confidence, and self-esteem. I drive for Uber and Lyft in the meantime; although I know it's not ideal, it keeps me busy and focused, and I feel great at the end of the day. It offers me flexibility, allows me to pay my bills, and I get to continue my job search and build myself! Phew, we are at the end of this long blog post! I know this is a long read! I want my readers who are unemployed to have solutions to get through their unemployment period and find ways to stay happy! Staying happy is a lifelong solution to a great life! Unemployed moments are temporary, but they have this nasty effect on our happiness. To all unemployed readers out there — remember — you're not alone — you have help! :) Have a Happy Thanksgiving!
:) I also created a publication called The COVID-19 Chronicles, which provides solutions on how you can get through COVID-19! Click here for more! Make sure you give this post a clap and my blog a follow if you enjoyed this post and want to see more! :). If you like my blog post photographs, click here to see more! (I upload only my work unless stated otherwise.)
https://medium.com/age-of-awareness/how-to-stay-happy-while-youre-unemployed-during-covid-19-a73c6ff8dd2c
['Harry Dhaliwal']
2020-12-17 03:40:27.472000+00:00
['Mental Health', 'Unemployed', 'Self Help', 'Personal Development', 'Covid 19']
Dockerizing Vue.js App With NodeJS Backend
Dockerizing Vue.js App With NodeJS Backend Learn How to Dockerize and make it a deployable image Photo by Brandable Box on Unsplash Docker is an enterprise-ready container platform that enables organizations to seamlessly build, share, and run any application, anywhere. Almost every company is containerizing its applications for faster production workloads so that they can deploy anytime, sometimes several times a day. There are many ways to build a Vue.js app. One way is to dockerize the Vue.js app with a nodejs backend and create a Docker image so that we can deploy that image at any time. In this post, we look at an example project and walk through a step-by-step guide on how to dockerize a Vue.js app with nodejs as a server. Introduction Example Project Dockerizing the App Running The App on Docker Summary Conclusion Introduction Nowadays, it's very common to dockerize an app and deploy the Docker image in production with the help of container orchestration engines such as Docker Swarm or Kubernetes. We are going to dockerize the app, create an image, and run it on Docker on our local machine. We could also push that image to Docker Hub and pull it whenever and wherever we need it. Here is the complete guide on how to develop a Vue.js app with nodejs as a backend server. If you are not familiar with the process, or want to learn it before studying this guide, I would recommend going through it. Prerequisite As a prerequisite, you have to install Docker Desktop (whatever your OS is). Please follow this link to install Docker on your laptop. Once installed, you can check the Docker version and info with the commands docker version and docker info.
https://medium.com/bb-tutorials-and-thoughts/dockerizing-vue-app-with-nodejs-backend-33645f0f50ec
['Bhargav Bachina']
2020-08-30 05:01:01.217000+00:00
['Programming', 'JavaScript', 'Software Development', 'Vuejs', 'Web Development']
Why It’s Easier to Succeed With Learning Than You Might Think, Chapter 1 — Any Kid Can Code
'''
Importing two libraries:
1. turtle is our own friend
2. random - introducing a new library which will help to select random values so we get a different color for each circle
'''
import turtle
import random

# defining the shape of the turtle
jumper = turtle.Pen()
jumper.shape("turtle")

# picked 7 colors; you can select more, but make sure they exist! Check the documentation
# another data type, the list, is introduced here. color is a list
color = ["red", "green", "black", "cyan", "yellow", "blue", "orange"]

# loop running from 0-49 [50 times]
for j in range(50):
    # hiding the turtle; if you want to see it, comment out this line
    jumper.hideturtle()
    # setting the speed to 0, which is the fastest (1 is the slowest, 10 is fast)
    jumper.speed(0)
    # lifting our jumper up so that it won't draw a line and it's
    # easy to change the location based on a random number
    # if you want to see what random is doing, print the statement below:
    # print(random.randint(-200, 200))
    jumper.up()
    jumper.goto(random.randint(-200, 200), random.randint(-200, 200))
    jumper.down()
    # select a random fill color from the choices given in the list above
    jumper.fillcolor(random.choice(color))
    # begin the fill; whatever shape comes after this will be filled with the above color
    jumper.begin_fill()
    # deciding a random radius of the circle between 20 and 100
    # you could put random.randint directly here rather than creating a variable
    radius = random.randint(20, 100)
    # drawing the circle
    jumper.circle(radius)
    # end_fill stops the filling; the next iteration will change the color
    jumper.end_fill()

The code above is self-explanatory, and I have provided comments. Feel free to experiment and play with it to understand more. If you feel like something is not explained, play with it and change the values. Try printing them and see the change in results; it will definitely help you learn faster.
https://laxman-singh.medium.com/another-leaf-to-learnings-i-any-kid-can-code-4f6510487fd7
['Laxman Singh']
2020-12-02 03:10:24.268000+00:00
['Python', 'Kids', 'Technology', 'Tech', 'Python Programming']
Why Are Women Still Behind in the Design World?
Fifty-three percent of all graphic designers are women, but only 11% of creative directors are women. We know what you’re thinking: That’s it? 11%? You’re joking. But it’s true. It’s perfectly plausible that a female graphic designer might never work under a female creative director. In fact, 70% of young female creatives say they have never worked under a female creative director. But does it really matter if your CD is male or female? After all, your boss might never be your favorite person in the world, so does it make a difference if they look like you? It does. Women in graphic design who have worked for women — like Abbey Kuster-Prokell, creative director at Martha Stewart Living Omnimedia, and Claire Fraze, senior art director at Swift Agency — told us exactly how valuable that experience can be. “Over the course of my career, I’ve made a conscious effort to only work for strong, uber-talented women,” Kuster-Prokell said. “In most of these roles, I didn’t make as much as some of my friends, but I valued the experience more. Not to discredit my time with male CDs, but it’s the female CDs who have helped to shape not only my career, but who I am.” “I was a decade into my career before ever experiencing female leadership,” said Fraze. “When I was finally lucky enough to work under an incredible female CD, my self-confidence and growth just skyrocketed. She challenged me as much as she encouraged me and took the time to figure out what motivated me in a way I had not experienced previously.” Sydney Wisner, founder of the Portland chapter of Ladies, Wine & Design, thinks that cycle of male domination in the industry has been tough to break because of natural biases in hiring. “Men — yeah, not all men, but that’s not my point — are often intimidated by women with power and women with strong voices and opinions, which means in the hiring process, they are more likely to choose someone who is more agreeable and shares similar values,” Wisner said. “This candidate often ends up being another man, making it nearly impossible for women to climb career ladders all the way to the top.” Women influence upwards of 80% of consumer spending and 60% of social media sharing. But, if you’ll recall, they only hold 11% of the leadership jobs in graphic design. With 89% of teams being led by men, how can we expect content to accurately represent women — the ones who are the largest influencers of spending and media sharing? Wisner says, intuitively, that no one knows how to market to women better than women. “If companies really want to be setting themselves up for success,” she said, “they would be putting more relatable people in leadership positions.” Wisner cites an example of an inspiring company with female leadership: Bumble. Founded by Whitney Wolfe Herd, Bumble puts women first by putting online dating in their hands — only the female in a heterosexual match can initiate a conversation. The app goes above and beyond to make dating a safe space for women. “Do you think those values would have been installed if Bumble was run by a man?” Wisner asked. “No, it would be another Tinder.” The typical woman working as a designer earns $44,564 a year, just 73% of the median income of $60,944 for men in the profession. How is that large a pay gap still possible? Can any of these smart women designers help us understand — and if it can’t be explained, can we just get pissed off about it? 
“I think women are conditioned not to ask for more — that it would seem greedy or presumptuous,” said Meg Vazquez, creative director at Splice. “My friends in the industry and I are all incredibly open with each other about our experiences asking for raises or title changes, and it’s helped me be so much more comfortable advocating for myself. You’re your best advocate; you can’t trust that anyone else will go to bat for you.” Fraze added, “It is so disheartening (and enraging) to think that our work, our process, our ideas, our inspiration, and every other thing we give to our work is worth less just because of our gender alone.” Studies show many women won’t apply for a role until they meet 100% of the hiring criteria, while men will apply with only 50% met. Betti Iannucci, VP of design at Bloomingdale’s, thinks this difference in confidence, specifically a lack of it for women, is culturally induced. “Men seem to grasp for more even if their reach is not so broad. They try,” Iannucci said. “Women tend to be more constrained in their thinking. It’s an attitude. Which one is right? I think there is learning to do on both sides. Many people get the job and many fail the role they fought so hard for.” “It’s an advantage to completely meet the criteria for a job listing. That’s the long game,” said Angi Arrington, creative director at Watson Creative. “Frauds are exposed, one way or another. True confidence comes from deep preparation, investment, research, and refinement. True confidence is earned. It can’t be faked. Anything less is bad business. And bad businesses eventually sink.” “I think it again comes down to societal conditioning,” Vazquez said. “Women and people of color have to work so much harder than the next person to be seen as competent. Because of this I think we’re less likely to apply for positions that we might not think we’re a perfect match for. I remind myself that I should always be punching above my weight class because the person before or after me is definitely doing the same.” Let’s recap Women get paid less for the same work and get hired less often for leadership positions, even though they make the majority of consumer decisions. Now you’re going to say there’s nothing we can do about it? Uh, no. In fact, it’s up to us to step up. “At the end of the day, I don’t think women do themselves any favors trying to be more like men,” Arrington says. “I say double down on being a woman.”
https://modus.medium.com/why-are-women-still-behind-in-the-design-world-5eb3b56c43f5
['Olivia Brown']
2020-01-28 16:31:01.221000+00:00
['Work', 'Women In Design', 'Workplace Equality', 'Design', 'Career']
Code Sponsor joins Gitcoin
In December, the Code Sponsor platform was forced into a pivot due to compliance issues with GitHub. Since then, my great friends Freddy Shelton and Mike Smith with Rollbar have kept the flame alive through direct sponsorships. I spent the month thinking about how to best continue the Code Sponsor mission. Finding solutions to help sustain open source has become a passion of mine. Code Sponsor itself has helped generate over $10,000 in revenue for developers through the platform. Thanks to a tweet last year by Tomás Aparicio, I had the great pleasure to discover Gitcoin. During the exact moment I saw his tweet, I was contemplating how to manage payments to over eight different countries. Cryptocurrency seemed an obvious choice. That day marked the beginning of a four-month courtship between Kevin Owocki, CEO of Gitcoin, and myself. Since our first discussion in October, Kevin and I found that our company and personal goals are very similar. We both wanted to help grow the open source ecosystem by providing means for developers to generate paths of sustainability. Here’s an interview I did with Kevin discussing open source sustainability. I’m extremely excited to announce that Code Sponsor has joined the Gitcoin + Bounties Network family. This change allows me to work on Code Sponsor full-time and continue to find and provide ways to sustain open source. Simultaneously, I will venture into the world of blockchain as an employee of ConsenSys, a leading company in the space and lead investor in Gitcoin. The company has fully embraced decentralization, where spokes (like Gitcoin) are autonomous, yet supported by the ConsenSys Mesh. I look forward to serving the developer community and bringing the blockchain technology into OSS funding!
https://medium.com/codefund/code-sponsor-joins-gitcoin-b7d35966b93d
['Eric Berry']
2018-01-16 15:51:50.340000+00:00
['Open Source', 'Open Source Software', 'Blockchain', 'Sustainability', 'Bitcoin']
Creating a Variable RSI for Dynamic Trading. A Study in Python.
This is a way to gradually weigh the RSI lookback periods. Note that you can select whichever periods you want and optimize them according to your preferences. The default parameters on the Dynamic RSI (according to me) are the ones above. Let us now see the full function that gives out this indicator before we proceed to the back-testing step. Note that you must use it on an OHLC array with several extra columns to be populated by the function.

def dynamic_rsi(Data, momentum_lookback, corr_lookback, what, where):
    # Momentum: today's price relative to the price momentum_lookback bars ago.
    # The loop starts at momentum_lookback so we never index before the first row.
    for i in range(momentum_lookback, len(Data)):
        Data[i, where] = Data[i, what] / Data[i - momentum_lookback, what] * 100

    # Rolling correlation between the price and its momentum
    # (rolling_correlation is the helper defined earlier in the article).
    Data = rolling_correlation(Data, what, where, corr_lookback, where + 1)

    # Map each correlation reading to an RSI lookback period:
    # the stronger the correlation, the shorter the lookback.
    periods = [14, 10, 9, 8, 7, 6, 5, 4, 3, 2]
    upper_bounds = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00]

    for i in range(len(Data)):
        correlation = Data[i, where + 1]
        if -1.00 <= correlation <= 1.00:  # skips rows where the correlation is still undefined
            for period, bound in zip(periods, upper_bounds):
                if correlation <= bound:
                    Data[i, where + 1] = period
                    break

    # Pre-compute the RSI for every candidate period, each into its own column
    # (rsi is the helper defined earlier; assumed argument order: Data, lookback,
    # source column, destination column - destinations match the selector below).
    for k, period in enumerate(periods):
        Data = rsi(Data, period, what, where + 2 + k)

    # Bar by bar, keep the RSI value whose lookback matches the selected period.
    for i in range(len(Data)):
        for k, period in enumerate(periods):
            if Data[i, where + 1] == period:
                Data[i, where + 12] = Data[i, where + 2 + k]
                break

    return Data

Now, how do the results of the Dynamic RSI compare to the results of the regular RSI? After all, we need a benchmark or some form of comparison to properly judge our strategy. I will make an exception in this back-test regarding the risk management process I usually employ, and will instead respect the optimal risk-reward ratio of 2.00. This means that I will place my stops at 1x the 50-period ATR and my targets at 2x the 50-period ATR. Another way of saying it: I will be risking half of what I expect to gain in each trade.
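A minimal sketch of that exit rule (all numbers are hypothetical, chosen only to show the arithmetic):

# Illustration of the 2.00 risk-reward exit rule described above.
entry_price = 100.0    # assumed fill price of a long trade
atr_50 = 1.5           # assumed 50-period ATR value at entry

stop_loss = entry_price - 1 * atr_50     # risk one ATR below entry
take_profit = entry_price + 2 * atr_50   # target two ATRs above entry

risk = entry_price - stop_loss
reward = take_profit - entry_price
print(f'Risk-reward ratio: {reward / risk:.2f}')  # prints 2.00

For a short trade, the stop and the target are simply mirrored around the entry price.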
https://medium.com/swlh/creating-a-variable-rsi-for-dynamic-trading-a-study-in-python-2af3ff8eaf0c
['Sofien Kaabar']
2020-12-02 20:52:01.650000+00:00
['Machine Learning', 'Python', 'Data Science', 'Technology', 'Trading']
Behind Every User There’s a Character
Part 1 in a Series on Storytelling and Product In a recent podcast I described how storytelling can help product managers build better products, recruit stronger teams, and advance their careers. I want to unpack what makes storytelling powerful by showing how successful companies tap into a story’s core elements and how you can leverage each of these to build better products. Today’s focus is on character development and storytelling, illustrated by Netflix. I’m one of those early Netflix users who stopped using the service a few years ago in favor of easy substitutes — cable, Prime, Apple TV. I recently rejoined for the company’s original programming. Like Daredevil, released in full on April 10, 2015. Early in the first episode we witness the defining moment in Daredevil’s life, when as a boy he saves a man from an oncoming truck, only to be struck himself and blinded. A natural born hero who turns tragedy into strength. By the final episode we’ve watched the character develop from childhood to present day in such detail that we understand what motivates Daredevil to put on a mask and fight crime. Attention to character development is a staple of great storytelling. And product development has its equivalent: The characterization of our users’ motivations, needs and background. Companies that really focus on the character behind the user, like Netflix, grow new markets and distance themselves from competitors over time. But before we get there, let’s take a look at how things usually get (not) done. If you’ve worked in product development in media and technology long enough, you’ve felt the resistance to really fleshing out the user. Putting a stake in the ground about the user’s needs and motivations sounds like narrowing the opportunity — How can we reach the largest target market or appeal to the broadest base of advertisers if we focus only on a handful of things that matter to some people? And it takes guts — What if we’re wrong? I’ve seen this refusal to focus on distinct user motivations collapse seemingly invincible companies, and paralyze many others. Worse, these same companies fail to understand why their users adopt competitive products. You’ll often see this expressed via generalizations:
- they use our competitor’s product because it’s free, at some point they’ll want more;
- they’re only using that because it’s new, the novelty will wear off;
- our competitor’s targeted such a narrow base, they’ll never reach the mainstream.
Rarely are the specific needs being met — by the free, the new, or the tailored product — articulated clearly and taken seriously. So what do companies like Netflix do differently? Over the years much of what I’ve just described has been said by those who understood the Netflix user as motivated by “watching programming that originally aired elsewhere,” like on television or in a movie theater. And so Netflix always risked “dying” whenever their content partners wised up and delivered a similar service, or priced Netflix out of the market. Netflix, by contrast, doesn’t view their users as a generic audience motivated by “watching someone else’s stuff”. Instead they see their users as individuals looking for great programming, with richly varied tastes not only in what they like to watch, but how they like to watch it. They recognize that by digging deep into user motivations they can fuel a constantly growing business.
Competitors are now scrambling to keep up with a company that has spent years analyzing the programming tastes and viewing habits of millions of viewers. A company that’s approached the tagging and classifying of programming with that same thoroughness. And who now not only creates great programming of its own, but has seemingly overnight changed decades long patterns of how that programming is consumed. Where else can you watch an entire season of a new series within days after its release? To use character development in your own work of creating great products for users, start by looking for a motivation. To keep with our example, a description for a streaming video or cable service user might read “viewers who watch at least five hours of episodic programming per week across more than one genre”. A better one would add “and don’t want to wait for the next episode” to establish a specific motivation. From that single motivation — and a gutsy bet that behind the user lurks a binge watching character ready to trade sleep to finish the series, then tell all her friends to do the same — comes the competitively differentiated strategy of releasing a full season at once. Naturally, this example is specific to Netflix, but by focusing on motivations you can uncover insights that apply to almost any market. In the next part of this series I’ll look at how great companies use Setting, another core element of storytelling, to create great products.
https://medium.com/agileinsider/behind-every-user-there-s-a-character-adb0352dead4
['Valla Vakili']
2019-06-11 16:55:40.358000+00:00
['Storytelling', 'Product Managment']
How To Recognize Secure Attachment After Processing Trauma?
@adityaries unsplash.com One of the hallmarks of trauma recovery is this moment towards the end of your journey when you recognize how far you’ve come. It is almost like a flash of lightning that tells you that you are now an emotionally healthy individual. You never thought this could be your identity. You’ve inherited this identity slowly while working hard on your issues. As you emerge from the fog of indecision, you face the new therapist and tell her that you no longer need her because you’ve spent the last ten years working on your issues. The reason is this moment of recognition of secure attachment you saw last week, confirmed by a kind of confidence you feel at your inner core. The secure attachment is your validation that all of your work in the past few years has paid off. You say to your therapist, “I know that you are only a phone call away when I need you again. I know that I still have a few minor issues to work through. I know what my lingering issues are, and I have the skills now to heal myself. I have learned to recover. I’ll be okay.” You give her the smile that you have been flashing every day since, whenever you feel this intense love for the universe. In trauma recovery, there’s rarely a confident moment. Burdened by a lifetime of shame, you are never sure that you are 100% healed. But you know that good days are the norm. The bad days have gone. You have a lifestyle that’s putting faith back into your life. You also have a purpose for which you work diligently, every day. You have learned new skills to get up when you fall. You love yourself enough not to let yourself fall again. You feel emotions at your core and can process them normally. You notice your reactions, in your gut, in your toes, from your head, in your third eye. You have a gateway to your unconscious. You’ve accepted the whole of you, reflecting on all of the darkness and the light. You are compassionate toward yourself. If you are in trauma recovery, revisit how you are doing once every year and reflect on how far you’ve come. While reading trauma recovery books, I’ve learned that my intuitions helped to steer me on my recovery journey. After ten years, I know what secure attachment looks like and how to develop it in relationships.
You Are Straightforward With People. You Don’t Hide.
One of the most harmful lessons trauma teaches a person is that “hiding” is essential to obtain safety. You tend to minimize your anger, your discontent. You tend to please others. You tend to take on the responsibilities of others in every situation. You blame yourself. You shame yourself for other people’s actions. Ask yourself, have you been hiding your feelings for the last few weeks from people you know well? Being straightforward does not mean that you have to disclose everything or that words should hurt other people. Instead, you can detach from the situation’s outcome and sit with your feelings repeatedly to allow you to articulate your feelings better to other people. For instance, if you don’t know what to say, say this: “I’m sorry, I don’t know what’s going on with me. But I know that I’m not okay with this. I feel uncomfortable. I don’t know why. Let me get back to you.” When you are mad, are you hiding the emotions? I don’t mean that you have to throw a tantrum. Do you have a way to tell people that you are mad? Can you deal with conflicts as they arise? Do you have uncomfortable conversations all the time in your house? Can you tell someone straight to their face what is bothering you?
Can you call someone out if you think they’ve done something? Confrontation will stop the shame and guilt train from coming for you after the fact.
You Are Holding People’s Eye Contact And Relating To Them With Your Full Presence
If you are holding trauma in your body, you will escape any chance you get. A lot of trauma victims live in their heads. At least in your head, everything makes sense. You build narratives around your trauma to make sense of it. You are escaping when you are uncomfortable. When people don’t react the way you want them to, especially when you can’t control their reactions, you stop holding eye contact with them. You’ve decided to retreat into your head because you don’t feel safe with strong emotions. It doesn’t matter what these emotions are. They don’t have to be dark emotions. They could be any intense emotion, and you are uncomfortable. Try holding people’s gaze when they look at you. Try to look into people’s eyes at grocery stores. Try to look into people’s eyes as you are talking to them. When you see the discomfort in people, hold their eye contact anyway. Hold eye contact even when you are mad at them. Interestingly, when you sink into the emotions that they are reflecting back at you, you will find that these emotions are not as strong or negative as you thought. You may experience the wonder of seeing their human-ness.
You Are Not Clingy
When your SO goes to work, do you have anxiety about the day? When your children leave for school, do you feel a sense of dread? When you have a significant emotional meltdown and want to cry on the phone to your best friend, if she doesn’t answer, do you feel so terrible that you can’t go on? When you have a secure attachment to yourself, then you can give to others generously. You are the default person who can help you with just about anything life throws at you. What do you do with anxious thoughts about your SO? Do you let these thoughts fester? Do you sink into the “love” that you know and trust that they come back again and again? By not becoming clingy about outcomes from relationships or any event in your life, you are not writing that trauma narrative all over the wall of your house. You can sit with that anxiety for just a few minutes. You can breathe deeply and inhale the love that you know. You are then embarking on your day with no attachment to what you will find at the end of the day. When you have a significant emotional meltdown, do you have a process that you fall back on that involves no one else? You can fall back to yourself to take care of your heart, soul, body, and mind. You can trust that you will be there for you always.
You Give More Than You Take, But You Draw Firm Boundaries
One of the biggest challenges in overcoming trauma is this overwhelming sense of holding things inside. You don’t give love because you are living in numbness. Only when you start to love yourself unconditionally can you truly give love to other people. But, when you begin to show your love to others, it is not a free-for-all. It doesn’t come as floods of love anymore. Nothing is dramatic when you are giving love to friends, family, colleagues, etc. It becomes a natural occurrence of being compassionate toward people around you. You smile at the maintenance guy at your local park. You thank the cashier at your local gas station for adding extra vanilla into your coffee with firm eye contact, and you mean it.
You tell a coworker what you find unacceptable about their behavior, all the while offering a helping hand and an open heart to help them deal with the potential stresses in their life. You no longer hesitate to reflect on your behaviors. You put your best face forward every day, all the time, not because you have to, but because you want to. You have all of this love in you, and you give that generously to people you care about. You catch yourself when you think you need to draw boundaries. Then, you take action.
You Are Taking Actions To Improve Your Reality, But You Are Not Ego-Driven
Each one of us has a way to live in this world. Without feeling like the world is trustworthy and safe, it isn’t easy to engage in the world with any sense of love. When you heal from trauma, it’s tempting to want to live in that “happiness” that you think is the outcome of all the hard work of therapy. But, the actual result is the optimistic outlook at your core. It’s the belief that you have the confidence to deal with anything. It’s the belief that you are stable in your internal locus of control. You can channel it to achieve anything that you want to. You mold your reality into the vision that you see every day. But if, by some chance, you don’t get the outcomes you want along the way, you move on to find something else. You are flexible. You can deal with failure because you trust the world and how it will help you. You attach to the world healthily and securely no matter what the world becomes. You trust yourself. Therefore, you have immense trust in your role in this world.
You Recognize and Appreciate The People Who Are Securely Attached to You
The final step in trauma recovery is always to have healthy relationships based on trust. These healthy relationships confirm the image of what secure attachment is supposed to look like for you. These healthy relationships, where you feel safe enough to engage with your authentic self, validate your actions, give you confidence, and are sources of love as well as the destinations where you place your love. When you look into your child’s eyes, do you see that your child loves you unconditionally and is attached securely to you? When your child goes to school, do you want them to have a good time, make friends, and look forward to the stories they tell you when they come back? When you are away, does your child miss you but know that they will be looked after by people who love them? When you leave your child to play, does your child engage with the world with wonder and playfulness in their eyes? Do you unleash a sense of freedom in your friends, SO, and your loved ones where they can be anything and do anything with you by their side? If so, then you love unconditionally. Your loved ones are securely attached to you. You do not feel fear in your heart when they are away. You’ve managed to maintain a handful of relationships while you diligently processed your trauma. Perhaps you recovered because of these people who gave their love by your side. When you look at how they are still there, you recognize the new sense of freedom that all of you feel in your relationship.
The Takeaway
Trauma recovery is a long road. But, when you are in the thick of it, do you appreciate how far you’ve come? I didn’t recognize my secure attachment in my relationships, healthy boundaries, and unconditional love until I realized that I’d healed.
All of the hard work, therapy, meditation, bodywork, breathwork, being alone, engaging in work that fulfills my soul, self-improvement, reflection: it all paid off handsomely. When you have a new identity as an emotionally healthy person, wear it proudly. Savor it by going out and meeting new people, taking risks, and living the life that you’ve always dreamed you would have. What are you waiting for?
https://medium.com/jun-wu-blog/how-to-recognize-secure-attachment-after-processing-trauma-f3c7e6672cac
['Jun Wu']
2020-10-07 18:18:49.065000+00:00
['Self', 'Mental Health', 'Family', 'Attachment', 'Trauma']
Writing your first “Hello World” program in TypeScript with ease
Hello World Program
A simple Hello World program in JavaScript would print a Hello World! message to the console. In TypeScript, just like in JavaScript, it would use the console.log function call with a "Hello World!" string literal. Let’s create a simple project with the directory name hello-world and place a hello.ts file inside it. The .ts extension will help us and the TypeScript compiler understand that it is a TypeScript file.

/hello-world
└── hello.ts

The hello.ts program is just that single console.log call. It’s plain and simple JavaScript, nothing TypeScript specific at the moment. To run this program inside a browser (by importing it using a <script> tag) or in Node.js, we need to convert it to a .js file. So let’s use the tsc command and provide hello.ts as a source file. The TypeScript compiler accepts the hello.ts file and creates a hello.js file right beside the source file. Now we can import this hello.js file in the browser or run it directly inside Node.

Adding Type Support
Let’s modify the previous example and try to mimic a little modular project structure. Let’s also focus on making the program safe by adding TypeScript features such as type annotations.

/hello-world
└── src/
    ├── lib/
    |   └── utils.ts
    └── program.ts

In the modified project structure, the src directory contains all the source (.ts) files. The utils.ts file contains a sayHello function that program.ts imports and executes. These program files are sketched below. The :number part of the result variable declaration is called the Type Annotation. The type annotation declares the ultimate data type of an entity such as a variable, an argument of a function, or the return value of a function. This type annotation syntax is placed just after the name of the entity in the entity declaration statement. The number here in the type annotation is the built-in data type provided by TypeScript that represents all numbers. You can also create your custom types that may represent complex values such as an object of a specific shape or a function of a specific signature. Once an entity is annotated with a type, its type can’t be modified. That means an entity can represent only a set of values once it is declared. Therefore the result variable in this program can only contain number values during the lifetime of the program. You will see in a minute why this program fails to compile for exactly this reason. When we compile these program files using the tsc command, we could provide the file paths of both files, or just program.ts, since it imports utils.ts, which is therefore automatically included in the compilation process. Focus on the type annotations in the program. The sayHello function accepts an argument of type string and returns a value of type string. 💡 We will learn more about type annotations, basic and abstract types, and the type system in general in the upcoming lessons. We have used the import statement in the program; we are also going to discuss this in upcoming lessons. This type information will be used by program.ts and we can already see a problem here. The result variable assignment statement shows an error. If you hover over it, you will be able to see the problem. But let’s try to compile the program and see what the TypeScript compiler says about it. Oops, looks like we made a mistake. The sayHello function returns a value of type string but the result variable is of type number. We can’t store a string value in a number variable.
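For reference, here is a minimal sketch of the two files consistent with this lesson (the greeting text and the argument value are assumptions made purely for illustration):

// src/lib/utils.ts
export function sayHello(name: string): string {
    return `Hello, ${name}!`; // template string, downlevelled later by the compiler
}

// src/program.ts
import { sayHello } from './lib/utils';

// Type error: sayHello returns a string, but result is annotated as a number.
const result: number = sayHello('John');
console.log(result);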
So we need to either change the type of result to string, or change the return type of the sayHello function. Now if we compile the program, TypeScript won’t complain about anything and we also do not see any errors in the IDE. When the TypeScript compiler generates the .js output files, it places them right beside the source files, since it likes to maintain the original file paths.

/hello-world
└── src/
    ├── lib/
    |   ├── utils.js
    |   └── utils.ts
    ├── program.js
    └── program.ts

Generally, we do not like to pollute our working directory. This is obviously bad and we need to fix it. What we can do is provide an output directory where these files should be emitted. We can do that by invoking the tsc command with the --outDir command-line flag and the directory path. Now the output files are placed inside the dist directory, but TypeScript held on to the original file structure of the source files, which is a good thing. Let’s see what the compiled JavaScript looks like and why I said it’s a good thing. By default, TypeScript converts ES6 import statements into CommonJS require() calls so that the output code can be run inside Node. This however doesn’t work inside a browser since the browser doesn’t support the CommonJS module system. But you can change it using the --module flag. You can also see that the TypeScript compiler performed some operations on the source code. It converted the ES6 template string (inside the sayHello function) into normal string concatenation using the + operator. This process is called downlevelling since it down-compiles the code from a higher language version to a lower language version. This is done by the TypeScript compiler since the default target is set to ES3 (a JavaScript version) which does not support template strings. You can change the target to ES6 or above to bypass this process using the --target flag. 💡 You can learn more about these flags from the Compiler Flags lesson. If we want to run this project using Node, we just need to invoke the node command and provide the dist/program.js file, since it already imports the ./lib/utils.js file (relative to itself) using the require() call. Now you can see why keeping the original file structure in the output was a good thing. Had the output file structure been different, Node would not have been able to find the ./lib/utils.js file relative to the program.js file.

Using the tsconfig.json configuration file
So far we talked about the --target and --module command-line flags and used the --outDir flag to change the output directory. These flags configure the compilation settings of the TypeScript compiler. Similarly, we provided the source files to the TypeScript compiler from the tsc command itself. When the compiler options get larger, so does the tsc command, and it could be overwhelming to handle such a big command. To solve this issue, TypeScript lets us specify these options through a JSON configuration file, conventionally named tsconfig.json. This file should be placed in the root directory of the project. When we invoke the tsc command from this root directory, the TypeScript compiler uses this file to extract the compilation settings, just like values from the command-line options/flags. So let’s create the tsconfig.json in the hello-world project directory; a minimal version is sketched below. In this tsconfig.json, we have specified the source files to include in the compilation using the files field. The utils.ts file is redundant since it is already imported inside program.ts. The compilerOptions field contains the actual compilation settings.
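A tsconfig.json consistent with the flags we have used so far might look like this (the exact target and module values are assumptions based on the options discussed above):

{
  "files": ["src/program.ts", "src/lib/utils.ts"],
  "compilerOptions": {
    "outDir": "dist",
    "target": "ES6",
    "module": "CommonJS"
  }
}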
You can provide a custom file path for this configuration file using the --project or -p flag, such as tsc --project tsconfig.prod.json. You can also override the compilerOptions of the imported tsconfig.json file by using the command-line flags. For example, even though the tsconfig.json says the output directory is dist, we can override it using the --outDir flag. This could be useful while performing automated tasks.

Working with ts-node
So far we have learned that in order to run a TypeScript program, you first need to compile it and then run the output JavaScript (.js) files. But sometimes you really do not care about the output .js files, you just need to run the program using Node. So having this extra compilation step kinda seems unnecessary, though it is mandatory. To solve this issue, some brilliant people came together and made ts-node. It is an open-source command-line utility to run .ts files directly on Node without having to manually compile these source files to .js files, since ts-node does that internally with added optimizations. You can just run the ts-node program.ts command and the process we went through manually in the above example is taken care of by ts-node under the hood. To install this tool, follow the official documentation on GitHub. I would recommend you install this tool globally so that you can access it using the command ts-node from anywhere on your system. Once the installation is done, let’s move to our project and execute the ts-node command, providing the src/program.ts file. The ts-node tool uses the TypeScript compiler to first compile the program and then run the output .js file using the node command. So in a nutshell, ts-node is just a combination of the tsc and node commands, but with added improvements. If you want to provide custom compiler options to ts-node, then you should put the tsconfig.json file in the directory where the ts-node command is being invoked, perhaps in the root directory of the project. Just like the tsc command, the ts-node command looks for the tsconfig.json file, but only to extract compilerOptions. The files field is ignored (also the include and exclude fields) so that we can provide the executable source file path from the command line itself.
https://medium.com/jspoint/typescript-hello-world-program-b0826ee3d87d
['Uday Hiwarale']
2020-09-01 06:38:45.104000+00:00
['Typescript', 'Nodejs', 'JavaScript', 'Programming', 'Web Development']
Transactional Writes in Spark
Since Spark executes an application in a distributed fashion, it is impossible to atomically write the result of the job. For example, when you write a Dataframe, the result of the operation will be a directory with multiple files in it, one per Dataframe's partition (e.g. part-00001-...). These partition files are written by multiple Executors, as a result of their partial computation, but what if one of them fails? To overcome this, Spark has the concept of a commit protocol, a mechanism that knows how to write partial results and deal with success or failure of a write operation. In this post I’ll cover three types of transactional write commit protocols and explain the differences between them. The protocols being addressed are Hadoop Commit V1, Hadoop Commit V2 and Databricks’ DBIO Transactional Commit.

Transactional Writes
In Spark the transactional write commit protocol can be configured with spark.sql.sources.commitProtocolClass, which by default points to the SQLHadoopMapReduceCommitProtocol implementing class, a subclass of HadoopMapReduceCommitProtocol. There are two versions of this commit algorithm, configured as 1 or 2 on spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version. In version 1, Spark creates a temporary directory and writes all the staging output (task) files there. Then, at the end, when all tasks complete, the Spark Driver moves those files from the temporary directory to the final destination, deletes the temporary directory and creates the _SUCCESS file to mark the operation as successful. You can check more details of this process here. The problem with version 1 is that for many output files, the Driver may take a long time in the final step. Version 2 of the commit protocol solves this slowness by writing the task result files directly to the final destination, speeding up the commit process. However, if the job is aborted it will leave behind partial task result files in the final destination. Spark has supported commit protocol version configuration since version 2.2.0, and at the time of this writing Spark’s latest version, 3.0.1, has a default value for it that depends on the running environment (Hadoop version).
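For a quick illustration, here is how the algorithm version might be selected when building a session (a minimal PySpark sketch; the application name and output path are hypothetical):

from pyspark.sql import SparkSession

# Select commit algorithm version 2 for a faster commit phase.
# Trade-off described above: an aborted job can leave partial files behind.
spark = (
    SparkSession.builder
    .appName('commit-protocol-demo')
    .config('spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version', '2')
    .getOrCreate()
)

df = spark.range(1000)
df.write.mode('overwrite').parquet('/tmp/demo-output')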
https://medium.com/tblx-insider/transactional-writes-in-spark-2d7cb916f2fc
['Andriy Zabolotnyy']
2020-11-23 11:29:30.598000+00:00
['Software Development', 'Spark', 'Hadoop', 'Data Engineering', 'Databricks']
I Got Drunk And Figured Out Why I am So Pissed Off
I Got Drunk And Figured Out Why I am So Pissed Off File this post under “things I tried that bombed spectacularly.” At least I got a look into my mind with no filter Photo by Andrey Zvyagintsev on Unsplash Trigger Warning — if you are dealing with your own heavy shit right now, you may want to read something a little happier than this jumble of angst and fear. Twelve days into NaNoWriMo and my life is unraveling. It started OK, just as it always does, but it’s almost like my demons know I’m trying to accomplish something and increase the frequency and brutality of their attacks on my mind. Nightmares riddle my sleep — childhood memories and snapshots of events in my life that are molested and rubbed with filth. During the day, my anxiety level is off the charts, and I cannot seem to get past this depression and the feeling that I may never be good enough to do anything with my life and writing. The voices are slobbering and moaning with glee because they know the walls I erect to keep my mind safe are crumbling. The fucking voices are killing me. So I did the only thing I could do last night, I got drunk. At that moment, the only alternative seemed to involve blood and pain, and there is no way I am stepping on that road again. A week ago, I bought all the blueberry Soju on the grocery shelf because I go all out when I get obsessed with something. After spending a few hours in the freezer, they were ready to drink. I sat at my laptop and scrolled through my blessedly Trump-free timeline until everything stopped making sense. When I looked up, I had finished four bottles. I stumbled to the kitchen, bottles clanking against me, and grabbed another from the freezer. I was standing at the fridge, staring into the depths and hoping some Cheetos would magically appear when I realized what was going on in my head. Nothing. Silence. I felt nothing, saw nothing, and heard nothing. No screeching, no voices. The warmth spreading over my body replaced anxiety and I smiled my first smile in a few days. I was so wasted my mental illness went away! I was so relieved that the first thing I did was lie down in bed and give my mind a little dreamless sleep. I woke still very drunk, and with the sleep still clinging to my body, and drank another bottle — this time peach. The symptoms of my illness were still gone, and the more I thought about the realization that the only time I can manage my brain is when I am so drunk I can hardly walk, the more pissed off I became. Writing seemed like a good idea, so I grabbed my Soju and stared at a blank page. I think this is what I wrote: When I woke to get some work done, I found I was still drunk, so I grabbed another bottle of Soju and sat down at the laptop to create. The usual chorus of voices in my head, compliments of schizoaffective psychosis, are gone, replaced with a happy melody and the need to continue writing the words appearing in my mind. I am quite drunk. I thank the Koreans who carefully crafted this rice wine and added fairy dust so that all the words I’m typing sound as they come from the mind of a great writer like Hemingway. I don’t know why it took me so long to try writing when my brain is relaxed and receptive, but here I am. Lucky you, who happen to be reading the brilliance of many bottles of blueberry Soju and the pickled mind of a writer pushed to the edge of sanity one time too many. 
I feel like, at this time, I need to be as honest as possible because the truth serum is pulsing through my veins, and the barriers I typically erect to keep from embarrassing myself have vanished. There is some shit I want to say. If you read one of my recent posts about God and the universe, you probably came out of it confused about whether I was an atheist or a madman. I wanted to say something to whoever is controlling the universe, even if it turns out that everything happens because of millions of years of chance and evolution. If there is a God or a force that controls what happens in our lives, I have a message for you. What.The.Fuck? What the actual fuck is mental illness? I mean, you give a “normal” mind to underserving peons, and they squander it either by joining the cult of Trump or doing the opposite of everything you should do to be a success. Then you have me and my brethren, who only want to find their place in the world and a little of that sexy winner’s juice that the infomercials promote at 3 am. Instead, you fuck up our thought processes and make us hear and see things that aren’t there. You make us depressed and anxious in crowds of people. You make us so fucked up that we can only operate in survival mode, and deny us hope although we crave the good things in life like everyone else.. When we try to push ourselves out of the darkness and climb the ladder of success, a society that stigmatizes mental illness slaps us. We never had a chance to do anything with our lives. It would have been good to know that before we made an effort. What the fuck? I’m not trying to have a pity party, but this is seriously messed up. I’m given enough intelligence to know that there is more to life than what I have, but I cannot experience it because I have an abnormal mind? Why bother letting us have the wants and the need to climb to success when you deny us entrance to the club most of the time? What.The.Fuck?
https://medium.com/tmi-too-much-information/i-got-drunk-and-figured-out-why-i-am-so-pissed-off-fb964c0acdca
['Jason Weiland']
2020-11-11 17:04:19.110000+00:00
['Mental Illness', 'Self', 'Mental Health', 'Drinking', 'Anger']
Cap Table 101
1. KEY CONCEPTS
Before we dive into what a later stage cap table looks like, here are 5 key concepts to understand:

1.1 Common vs Preferred Shares
Common shares — impart the most basic rights, privileges and preferences to those that hold them, including rights to disclosures on company performance or to exercise a degree of control over operations. Typically held by founders, employees and angels.
Preferred shares — impart additional rights on top of those provided by common shares, generally giving the security holder priority on company returns in a liquidation event.
Figure 2. Ownership by stage (Source: Capshare)

1.2 Participating Preferred vs Non-Participating Preferred
Participating preferred shares — receive their full investment amount AND a pro-rata share of the remaining proceeds in a liquidation event.
Non-participating preferred shares — receive their full investment amount OR a pro-rata share of the liquidation proceeds, whichever is greater.
Figure 3. Participating Returns vs Non-Participating Returns

1.3 Convertible Notes
Convertible Notes (a.k.a. convertible bonds) — are debt agreements with the option to purchase equity at a discount in the following round. These are generally seen in seed/Series A rounds where there is a larger risk of the start-up not succeeding. A typical conversion carries an interest rate of ~10% and a discount to the next round of ~20%.

1.4 ESOP
Employee Stock Ownership Plan (ESOP) — provides employees with an ownership stake to align their interests with the company. ESOP gives employees a right to purchase shares in a company at a fixed price, and is generally distributed as part of an employee’s remuneration package. As an incentive mechanism, ESOP is a powerful tool to use early on in a company’s lifetime, especially when it has limited cash flow.

1.5 Dilution
Fully diluted % equity — Often in later stage cap tables, a fully diluted % equity column exists to illustrate the ownership structure should all convertible notes and options on the cap table be exercised. On raises, founders are usually first to give up their equity for capital, hence diluting their stake in their start-up.
Anti-dilution — a standard provision in investor shareholder agreements that prevents their stake from being diluted after new shares have been issued. Also considered a pre-emptive / protective / pro-rata measure.
Full ratchet dilution — In a case where an investor with the protection of this clause has paid a higher price per share in a previous round (say $10/share for a 20% stake), the clause would allow that investor to maintain their stake in a subsequent, lower-priced round by repricing their earlier shares at the new round’s price (at $5/share, doubling their share count). This provision can impair raises at higher valuations as certain investors may hold a disproportionate amount of shares, especially if the clause is used in an early round.
Weighted average dilution — This is an industry-standard clause that protects the initial investor by taking into consideration the difference between the number of shares issued to a new investor at the current round’s price and the price of the previous rounds, and the degree of dilution incurred by the raise.
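To make the participating vs non-participating distinction concrete, here is a small sketch (the invested amount, ownership and exit value are hypothetical, chosen only to illustrate the mechanics; real waterfalls also involve caps and seniority):

def preferred_payout(invested, ownership, exit_value, participating):
    # Simplified single-investor liquidation payout.
    if participating:
        # Full investment back, PLUS a pro-rata share of what remains.
        return invested + ownership * (exit_value - invested)
    # Full investment back OR the pro-rata share, whichever is greater.
    return max(invested, ownership * exit_value)

# Hypothetical: $5M invested for a 20% stake, company exits for $40M.
print(preferred_payout(5e6, 0.20, 40e6, participating=False))  # 8,000,000.0
print(preferred_payout(5e6, 0.20, 40e6, participating=True))   # 12,000,000.0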
https://medium.com/the-ouroboros-effect/cap-table-101-b0a9fcec0653
['Kevin Lu']
2020-06-16 06:15:11.533000+00:00
['Cap Table', 'From The Team', 'Startup', 'Founder Advice', 'Venture Capital']
5 Reasons Why We Love Fresno
If you’re intrigued by cryptocurrency, interested in a convenient way to get into crypto and purchase bitcoin, or are curious about our ATMs, come check our machines out. We have ATMs all over the country, and they are located in some pretty darn great cities (and we’re expanding to more in the coming months). One of these great cities is Fresno, California. For those of you who live in Fresno — or if you are planning on visiting soon — we have not one but TWO locations, in CellPros shops on Aslan and Olive. Both of these kiosks are situated in a part of the city that is close to a few of our other favorite places to visit. An afternoon filled with crypto, nature, and rare animals? Sign us up!
1. Fresno Chaffee Zoo
You don’t need a reason to go to the zoo. If you want to go, just do it. The Fresno Chaffee Zoo is home to approximately 190 species, including the African Elephant, the Colobus Monkey, the California Sea Lion, sloth bears, and warthogs, among many others. The zoo is less than a 10-minute drive away from our ATM, so it’s almost a little too convenient.
2. Old Fresno Water Tower
If there’s a historic building in town, we’re the first ones in line. The Old Fresno Water Tower was built in the American Romanesque style and stands 109 feet high. If you’re a fan of historic buildings, add this one to your list. And in addition to sharing that you used a crypto ATM, here’s an additional interesting fact you can share at dinner parties: the Old Fresno Water Tower can hold up to 250,000 gallons of water in its tank. You’re welcome.
3. Forestiere Underground Gardens
When a website’s About page begins with “Sometimes you have to go beneath the surface to find what you’re looking for,” you know it’s worth a visit. The Forestiere Underground Gardens is an open-air museum that is listed on the National Register of Historic Places and is California State Historical Landmark number 916. Sicilian immigrant Baldassare Forestiere began building it in the 1900s, setting out to create a citrus empire. It took Forestiere about 40 years to make the gardens, digging through sedimentary rock and reaching 40 feet below the surface of the earth. Are you intrigued yet?
4. Fresno Art Museum
Once you’ve purchased bitcoin at our ATM, go spend the rest of the afternoon strolling the hallways of the Fresno Art Museum. With a focus on modern and contemporary artworks — including painting, sculpture, prints, photographs, and other media — you’re bound to have a thought-provoking and meaningful experience.
5. Shinzen Friendship Garden
Located a short drive outside of downtown Fresno, the Shinzen Friendship Garden is a convenient escape into nature. Why wouldn’t you want to spend some time exploring, learning, and reflecting? And if there’s serenity and renewal included, yes, please! This is a quiet place to sit and think and enjoy the beauty of Mother Nature.
https://medium.com/coinme/5-reasons-why-we-love-fresno-5bd9ad769ca
[]
2018-04-30 20:26:23.139000+00:00
['Crypto', 'Cryptocurrency', 'Fresno', 'Startup', 'Bitcoin']
AI’s Impact on Social Interactions
AI’s Impact on Social Interactions When it comes to interpersonal relationships, “friction-free” may not be the goal By Irving Wladawsky-Berger I recently wrote about the event I attended on February 28 to celebrate the launch of MIT’s Schwarzman College of Computing. This new interdisciplinary College is MIT’s response to the rise of artificial intelligence — a profoundly powerful technology that will likely reshape our economy, society and personal lives in the decades to come. But, attaining AI’s broad potential will require not only a continuing slew of technological innovations, but equally important research on the challenges our society must get ready for, including ethical issues, workplace disruptions, and human interactions. The event included keynotes and panels on a wide variety of topics. One of the talks I found most interesting was Rethinking Friction in Digital Culture by Sherry Turkle, MIT Professor of the Social Studies of Science and Technology. In her many writings and seminars — including her 2011 best-seller Alone Together — Professor Turkle has long been warning about the impact of technology on our social interactions. “Most of us here today were introduced to the idea of friction free as a really good thing,” she said at the start of her keynote. “It’s an aesthetic of engineering efficiency, so why shouldn’t it be a really good thing?” This idea that technical things should be friction free and smooth easily bleeds into other domains. “Efficiency becomes aspirational — in politics, in business, in education, and in our thinking about relationships.” She cited a few examples of the kinds of friction free interactions she’s come across in her research on technology and social relationships. People of all ages have told her that they prefer to text rather than talk — even when interacting with a colleague in the next cubicle or office. People have said to her that they prefer to text their spouse rather than have a face-to-face conversation. Friction free, she explained, “is usually tied up with a hope for greater efficiency and less vulnerability.” AI, perhaps without meaning to, has now become part of this friction free story. AI, almost by definition, is all about “the promise of efficiency without vulnerability — or, increasingly, about the illusion of companionship without the demands of friendship. But by trying to move ahead toward the friction free, we are getting ourselves into all kinds of new trouble.” The problem with a friction free digital culture is that, in many cases, life teaches us one thing and technology teaches us another. “Technology encouraged us to forget what we knew about life. And we made a digital world, where we could forget what life was teaching us… It’s time to associate the digital with other values than the value of easy. Let’s say, the opposite of easy. And it’s time to remember that the opposite of easy is not just difficult. The opposite of easy is also evoked by words such as complex, involved, and demanding.” Professor Turkle concluded her talk by reminding us that while increasingly living in a technological world, we must keep in mind “what we know about life and the life we want to live. We have to work on the real world as hard as we work on our technology. We can’t just work on our technology and hope it fixes the real world.” Her thought-provoking talk can be seen in this video. Yale professor Nicholas Christakis made similar points in a recent article in The Atlantic — How AI Will Rewire US.
Science fiction has long portrayed the biggest threats of AI to humans as computers gone berserk, like the HAL 9000 in 2001: A Space Odyssey; or homicidal cyborgs, like The Terminator. Some fear that in a post-Singularity future, superintelligent general AI will far surpass human intelligence, posing an existential threat to humanity. But the real threat to humanity, said Professor Christakis, is that “For better and for worse, robots will alter humans’ capacity for altruism, love, and friendship.” Major innovations have long had an impact on the ways that people interact with each other. Technologies like the printing press, the telephone, radio, TV, and more recently the Internet, revolutionized how we access and exchange information. “As consequential as these innovations were, however, they did not change the fundamental aspects of human behavior that comprise what I call the social suite: a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching…” he wrote. “But adding artificial intelligence to our midst could be much more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are — not just in our direct interactions with the machines in question, but in our interactions with one another.” AI can both help improve the way we relate to one another and make us behave less ethically. Experiments with hybrid groups of people and robots working together have shown that the right kind of AI can help improve the group’s overall performance. But, in other experiments, he found that by adding a few bots posing as selfish humans, the same groups that previously behaved in an altruistic, generous way toward each other were now driven by the bots to behave in a selfish way. This shouldn’t surprise us, as over the last few years we’ve seen how the spread of misinformation by malicious bots over social media can have a highly negative, polarizing impact on large groups of people. “Cooperation is a key feature of our species, essential for social life. And trust and generosity are crucial in differentiating successful groups from unsuccessful ones. If everyone pitches in and sacrifices in order to help the group, everyone should benefit. When this behavior breaks down, however, the very notion of a public good disappears, and everyone suffers.” “The fact that AI might meaningfully reduce our ability to work together is extremely concerning… As AI permeates our lives, we must confront the possibility that it will stunt our emotions and inhibit deep human connections, leaving our relationships with one another less reciprocal, or shallower, or more narcissistic.” We will need rules, laws and policy oversight to help us deal with the potentially negative impacts of AI on society — not unlike how we’ve stopped corporations from polluting our water supply or individuals from spreading harmful cigarette smoke.
“Because the effects of AI on human-to-human interaction stand to be intense and far-reaching, and the advances rapid and broad, we must investigate systematically what second-order effects might emerge, and discuss how to regulate them on behalf of the common good.” “In the not-distant future, AI-endowed machines may, by virtue of either programming or independent learning (a capacity we will have given them), come to exhibit forms of intelligence and behavior that seem strange compared with our own,” writes professor Christakis in conclusion. “We will need to quickly differentiate the behaviors that are merely bizarre from the ones that truly threaten us. The aspects of AI that should concern us most are the ones that affect the core aspects of human social life — the traits that have enabled our species’ survival over the millennia.”
https://medium.com/mit-initiative-on-the-digital-economy/ais-impact-on-social-interactions-f64919fa2ebb
['Mit Ide']
2019-04-12 00:49:51.349000+00:00
['Artificial Intelligence', 'Robots', 'Human Machine Interaction']
Advanced Python: Learn How To Profile Python Code
Advanced Python: Learn How To Profile Python Code Explaining How To Detect & Resolve Performance Bottlenecks No one wants a slow data science application. Not when it is required to be consumed by high-demanding users who expect a quick turnaround. Profiling is one of those concepts that every Python programmer must be familiar with. It is required knowledge for becoming an expert in the field. This is an advanced-level topic for Python developers and I recommend it to everyone who is using, or intends to use, the Python programming language. We can learn the Python library and understand how to create objects and modules, but the true Python experts emerge when they encounter and fix tough technical issues. One of those issues is resolving performance bottlenecks, which revolves around profiling the code. Let’s understand how profiling works.
Profiling requires analysing, assessing and understanding the bottlenecks in the code
Profiling is often performed to find performance bottlenecks in the code. Finding bottlenecks in the code is an art and with experience and knowledge, it gets easier. There are some expert tricks and tips that I will outline in this article. This article will demonstrate how to profile Python code the right way. One of the skills to master is the ability to find the bottlenecks in the Python code and then fix them the right way to give an application a performance boost. The journey of fixing performance bottlenecks requires one to profile the code. Only then can the issues be resolved.
Article Aim
This article will start by explaining what profiling is. It will then demonstrate the steps and techniques that are required to profile Python code. The article will also provide an overview of what to look for and how to profile Python code the right way. I will present three profiling methodologies. Lastly, the article will touch on how to optimise the code and the tips to remember whilst optimising the code. To be an expert-level Python programmer, not only do we need to know the ins and outs of the Python programming language, but we are also expected to learn to use Python as a tool. If you want to understand the Python programming language from beginner to advanced level then I highly recommend the article below. It is common for advanced-level Python programmers to be able to profile and find bottlenecks in the code.
1. What Is Profiling?
In a nutshell, profiling is all about finding bottlenecks in your code. In simple words, the process of profiling concentrates on going over your application’s codebase and assessing or analyzing how long each function/line of code takes and how often it gets executed, to understand how the code can be optimised. Profiling is a quantitative technique: it produces a set of statistical measures. These statistics enable us to understand how long each part of the code took and how many times it was executed. The statistical measures can be presented to the team of programmers and they can then be used to channel the team effort in a calculated manner. Profiling can unveil the mystery of how and what is required to be improved.
The Steps Of Profiling
Essentially, the first step is to decide whether the required profiling technique is going to be at the macro or micro level. Macro-profiling is about profiling the entire program and generating statistical information while it is running.
Micro-profiling is profiling a specific part of the program. It is generally ad-hoc and manual in nature. There are multiple ways and a number of helpful packages available to profile Python code. I recommend using the cProfile library. It is a C extension and is suitable for profiling long-running applications. We can perform simple, intermediate, or advanced profiling. Let’s understand how to profile next.
2. How To Profile
The best way to understand profiling is to consider a real use-case as an example. This example is created to help us demonstrate the techniques of profiling better. The function below is used to calculate the total risk and return attributed to a company.

from datetime import datetime, timedelta

import numpy as np
import yfinance as yf

def get_company_details(symbol):
    end_date = datetime.now() - timedelta(days=1)
    start_date = end_date - timedelta(days=10)
    prices = yf.download(symbol, start=start_date, end=end_date)['Close']
    log_returns = np.log(prices) - np.log(prices.shift(1))
    total_return = log_returns.sum()
    total_risk = log_returns.std()
    return symbol, total_return, total_risk

The code snippet above takes in a company symbol as a function argument. It then queries the Yahoo Finance web service to get the last 10 days’ prices of the company. This is an IO-bound operation. From the prices, it computes the total return and risk of the company. Both of these are CPU-intensive operations. This function is called multiple times by the function get_all_companies_data() below.

def get_all_companies_data():
    symbols = ['ADM', 'ADT', 'AHT', 'III']
    for symbol in symbols:
        symbol, total_return, total_risk = get_company_details(symbol)
        print(f'SYMBOL: {symbol}. TOTAL RISK: {total_risk}. TOTAL RETURN: {total_return}')

We can run the code as shown below:

get_all_companies_data()

Running the code will print the Symbol, Total Risk and Total Return of all of the required companies:

SYMBOL: ADM. TOTAL RISK: 0.021869461018335933. TOTAL RETURN: 0.030534911987234015
SYMBOL: ADT. TOTAL RISK: 0.033611646937176394. TOTAL RETURN: 0.06830708186185985
SYMBOL: AHT. TOTAL RISK: 0.06504280244333693. TOTAL RETURN: -0.1232326135175989
SYMBOL: III. TOTAL RISK: 0.032170296104463876. TOTAL RETURN: 0.047178627683446606

This section will focus on the three techniques now.
2.1 Simple Profiling
The simplest form of profiling is to run the function and assess how long it took to execute and get the results. We can use the time package:

import time

start = time.time()
get_all_companies_data()
end = time.time()
print(end - start)

All we have done here is store the current time before and after the execution of the code. The difference gives the elapsed time between the two points, in seconds. The code above took 3.19 seconds to complete. We can add these timings to individual lines of the function to get micro-level profiling results. Now the question is regarding accuracy. For instance, assume that the Yahoo Finance web service was experiencing heavy traffic when this function was called and its response was slow because of that. Maybe I was experiencing network overload due to other applications running on my machine, or maybe the CPU was busy executing other tasks. Therefore we need a mechanism to run the code multiple times so that we can take the average. This brings me to intermediate-level profiling.
2.2 Intermediate Profiling
I have written a decorator below. Decorators are great in the sense that they can be shared, encourage code reuse, and make the code readable.
This brings me to intermediate-level profiling.

2.2 Intermediate Profiling

I have written a decorator below. Decorators are great in the sense that they can be shared, encourage code reuse, and make the code readable. I am going to use the timeit Python package:

import functools
import timeit


def time_me(number_of_times):
    def decorator(func):
        @functools.wraps(func)
        def wraps(*args, **kwargs):
            # Run the decorated function the requested number of times
            # and print the average duration per call.
            r = timeit.timeit(lambda: func(*args, **kwargs), number=number_of_times)
            print(r / number_of_times)
        return wraps
    return decorator

The decorator above is called time_me. It takes in a parameter: number_of_times. Internally it executes the decorated function as many times as specified by the input argument and then calculates the average time the function took to complete. This reduces the uncertainty that maybe there was network latency or the CPU was busy performing other tasks. To use it, simply decorate the function get_all_companies_data() with the decorator:

@time_me(100)
def get_all_companies_data():
    symbols = ['ADM', 'ADT', 'AHT', 'III']
    ....

This decorator will execute get_all_companies_data() 100 times and print the average time the function took. On average, it took 3.01 seconds on my machine. It's important to note that these timings depend on the machine and its current load, so they are not directly comparable from one machine to another.

There is an advanced-level profiling technique.

2.3 Advanced Profiling

In this section, I will introduce the cProfile library. We can directly execute the cProfile.run() function, passing the Python code to profile as a string, and it will output reasonable statistics for us. The cProfile profiler outputs the following statistical measures for each line of Python code:

The number of function calls, and how many times the function was recursed
Calls that were not induced via recursion, known as primitive calls
The number of times the function was executed
The total time spent in the given function
The total time per call

For instance, we can simply pass in the name of the file to profile and it will output the above stats for us:

python -m cProfile fintechexplained_profiling.py

To profile the code get_all_companies_data(), I have written a generic decorator. We can decorate any function with the decorator below and it will profile the code and output statistical measures for us:

import cProfile
import functools
import pstats
import tempfile


def profile_me(func):
    @functools.wraps(func)
    def wraps(*args, **kwargs):
        # Note: tempfile.mktemp() is considered insecure and is used here
        # for brevity; tempfile.NamedTemporaryFile is generally preferable.
        file = tempfile.mktemp()
        profiler = cProfile.Profile()
        result = profiler.runcall(func, *args, **kwargs)
        profiler.dump_stats(file)
        metrics = pstats.Stats(file)
        metrics.strip_dirs().sort_stats('time').print_stats(100)
        return result
    return wraps

This decorator takes in a function and outputs the profiling statistics.

The pstats.Stats() call produces the required metrics
The strip_dirs() method removes the extraneous path from all the module names
We then use the sort_stats() method, which sorts the metrics by the time column
Lastly, the print_stats() method is called to print out all the statistics

I recommend cProfile for profiling the target code to find performance bottlenecks.

We can also print details about a specific function, e.g. stats.print_callees('function_one')

All we have to do now is add the decorator to the function:

@profile_me
def get_all_companies_data():
    symbols = ['ADM', 'ADT', 'AHT', 'III']
    ....

It will output the metrics ordered by time. Let's go over the statistics: in my run, there were 9,221 function calls and 9,125 of them were not induced by recursion.
For each line, we can see the number of times the function was executed (ncalls), the total time spent in the given function (tottime), the total time per call (percall), and cumulative time details (cumtime and its percall). We can profile the entire code base this way. This information can help us understand where we need to speed up the code. We can also view the profiling stats as a diagram using the gprof2dot package.

Other than speed, we should also look out for memory consumption. If your process memory is increasing over time, even whilst it's idle, then it's best to find the root cause of the issue and fix the memory leak.

Now that we have the quantitative stats, we can focus the effort on improving the performance of the slow code. This is a quantitative methodology for optimising the code which we can present to the team, so we can make calculated decisions on how and where to spend our efforts. The next steps are about code optimisation.

3. How To Optimise Python Code

Lastly, I wanted to provide a quick overview of code optimisation. Over the years I have noticed a pattern in programming. Most, if not all, of the time, whenever we attempt to optimise a solution, we end up increasing its complexity. Increasing complexity tends to make the solution harder to maintain and debug. Consequently, it increases the overall cost of maintaining the application. If your optimised code makes the code harder to read and understand then it's better not to optimise it, because it will be costly in the long run.

There are a number of optimisation strategies that we can follow to optimise our Python application. They include introducing caching, concurrency and parallelism, using Cython or Numba, and so on. However, the most important yet overlooked optimisation strategy is to evaluate and choose the right data structure for the job. I highly recommend reading this article that explains the time complexities of the Python data structures. Choosing the right data structure alone will solve many performance issues.

If the data structures are appropriately chosen, then one of the quick ways to gain a performance boost is to introduce concurrency and parallelism in your code. I highly recommend reading this article that explains the concept of concurrency in Python from the basics. This article is considered the must-read article of the FinTechExplained publication by many readers.

An important tip is to only introduce parallelism and concurrency if you really have to. Plus, try not to introduce caching or multiprocessing until your code works, because these optimisations have other side effects that need to be considered. These include debugging and finding the root causes of issues when embarrassing bugs appear in the production environment. Two minimal sketches of these strategies follow below.
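To make the strategies concrete, here are two small sketches of my own; they illustrate the general techniques and are not code from this article. First, caching: Python's built-in functools.lru_cache can memoise a deterministic function with a single decorator.

import functools


@functools.lru_cache(maxsize=None)
def fib(n):
    # Deliberately naive recursion; the cache turns an exponential
    # algorithm into a linear one.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # returns instantly thanks to memoisation

Second, concurrency: since get_company_details() from section 2 is IO-bound (most of its time is spent waiting on the Yahoo Finance web service), a thread pool is one low-risk way to overlap the downloads.

from concurrent.futures import ThreadPoolExecutor


def get_all_companies_data_concurrent():
    symbols = ['ADM', 'ADT', 'AHT', 'III']
    # Threads suit this workload because each call spends most of its
    # time waiting on the network rather than using the CPU.
    with ThreadPoolExecutor(max_workers=4) as executor:
        for symbol, total_return, total_risk in executor.map(get_company_details, symbols):
            print(f'SYMBOL: {symbol}. TOTAL RISK: {total_risk}. TOTAL RETURN: {total_return}')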
4. Gotchas About Optimisation

Here are some optimisation tips I encourage everyone to read. The first step in optimising the code is to ensure that the solution works and solves the problem we expect it to solve. For everything that works, write unit tests. Unit tests are meant to be atomic in nature: they should only test a single unit of code. Hence we can easily ensure that the code, pre- and post-optimisation, is working as desired. Unit tests will ensure that our intended changes haven't broken the functionality. Always ensure every single path of the code branch is unit tested, to increase confidence in the refactor.

Whilst optimising the code, ensure it remains easily readable. For instance, if you want to introduce multiprocessing/threads or retry logic, abstract it out into another module, or introduce a decorator and pass the function as an argument, so that the code is cleaner and easier for others in the team to understand. The use of decorators is encouraged as it reduces code complexity and makes the code readable.

Summary

Profiling is often performed to find performance bottlenecks in the code. Finding bottlenecks in the code is an art and, with experience and knowledge, it gets easier.

This article started by explaining what profiling is. It then demonstrated the steps and techniques required to profile Python code. The article also provided an overview of what to look for and how to profile Python code the right way. I also presented three profiling methodologies. Lastly, the article touched on how to optimise the code and the tips to remember whilst optimising it.

Thank you for reading FinTechExplained.
https://medium.com/fintechexplained/advanced-python-learn-how-to-profile-python-code-1068055460f9
['Farhad Malik']
2020-07-20 21:47:06.740000+00:00
['Data Science', 'Python', 'Fintech', 'Technology', 'Programming']
Machine Learning is Requirements Engineering — On the Role of Bugs, Verification, and Validation in Machine Learning
Disclaimer: The following discussion is aimed at clarifying terminology and concepts, but may be disappointing for readers expecting actionable insights. In the best case, it may provide inspiration for how to approach quality assurance in ML-enabled systems and what kinds of techniques to explore.

TL;DR: I argue that machine learning corresponds to the requirements engineering phase of a project rather than the implementation phase and, as such, terminology that relates to validation (i.e., do we build the right system, given stakeholder needs) is more suitable than terminology that relates to verification (i.e., do we build the system right, given a specification). That is, machine learning suggests a specification (like specification mining and invariant detection) rather than providing an implementation for a known specification (like synthesis).

I have long been confused by the term machine-learning bug, especially when it refers to the fact that a machine-learned model makes incorrect predictions. Papers discussing the quality of machine learning models and systems tend to be all over the place about this. It is tempting to use terms like faults, bugs, testing, debugging, and coverage from software quality assurance also for models in machine learning, but discussions are often vague and inconsistent, given the lack of hard specifications. We tend to accept that model predictions are wrong in a certain number of cases — for example, 95% accuracy on training and test sets might be considered quite good — so we don't tend to call every incorrect prediction a bug. When a model performs poorly overall, we may be inclined to explore whether we picked the right learning technique, the right hyperparameters, or the right data — are these bugs? When a model makes more systematic mistakes (e.g., consistently performing poorly for people from minorities), we tend to explore solutions to narrow down causes, such as biased training sets — this may feel like debugging, but are these bugs?

Specifications

The term bug is usually used to refer to a misalignment between a given specification and the implementation. Though we rarely ever have a full formal specification for a system, we can typically still say that a program misbehaves when it crashes or produces wrong outputs, because we have a pretty good sense of the system's specification (e.g., thou shall not exit with a null pointer exception, thou shall compute the tax of a sale correctly). In the form of method contracts, the specification helps us to assign blame when something goes wrong (e.g., you provided invalid data that violates the precondition vs. I computed things incorrectly and violated postconditions or invariants).

Talking about specifications in machine learning is difficult. We have data that somehow describes what we want, but also not really: we want some sort of generalization of the data, but we also don't want a precise generalization of the training data — it's quite okay if the model makes wrong predictions on some of the training data if that means it generalizes better. Maybe we can talk about some implicit specification derived from higher system goals (e.g., thou shall best predict the stock market development, thou shall predict what my customer wants to buy), but it's unclear where such a specification comes from or how it could be articulated. Maybe a probabilistic specification would just be to maximize accuracy on future predictions?
This vague notion of specification is confusing (at least to me) and makes it very hard to pin down what "testing", "debugging", or "bug" mean in this context. To make things more concrete, let's take the well-known and highly controversial COMPAS model for predicting recidivism in convicted criminals, here approximated with an interpretable model created by Cynthia Rudin for her article Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead with CORELS:

IF age between 18–20 and sex is male THEN predict arrest (within 2 years)
ELSE IF age between 21–23 and 2–3 prior offenses THEN predict arrest
ELSE IF more than three priors THEN predict arrest
ELSE predict no arrest

Given some information about an individual at a parole hearing, the model predicts whether the individual is likely to commit another crime, which may be used for sentencing or for deciding when to release that person. What is the specification for how COMPAS should work? Ideally, it would always correctly indicate whether any specific individual would, in the future, commit another crime — but we don't know how to specify this. Even in a science-fiction scenario it seems more likely that we figure out how to manipulate behavior rather than how to predict future human behavior. Hence, any prediction must generalize, and our goal might be to create a best approximation of typical behavior patterns. In practice, a model like the one shown above is learned from past data on released criminals to generalize patterns about who is more or less likely to commit future crimes. The model won't perfectly fit the training data and won't perfectly predict the future behavior of all individuals; at best we can measure accuracy on some training or evaluation data, and maybe see how many individuals predicted not to commit another crime actually do so (the opposite is hard to measure if we don't release them in the first place). The model will be imperfect, and as broadly discussed among researchers and journalists, may even produce feedback loops and influence how people behave. Note that the challenge is how to evaluate whether the model is "acceptable" in some form, not whether it is "correct".

Models are learned specifications

I think the confusion comes from a misinterpretation of what a machine-learned model is. A machine-learned model is not an attempt at implementing an implicit specification; a machine-learned model is a specification! It is a learned description of how the system shall behave. The specification may not be a good specification, but that's a very different question from asking whether our model is buggy!

Consider the COMPAS model again. It is a specification of how the system should predict recidivism risk. We could now take that specification and implement it, say, in Java. If the Java implementation messed up the age boundaries, this would be an implementation bug — our implementation does not behave as specified in the model. However, if the Java implementation is implemented as specified, but we don't like the behavior, we should not blame the implementation; we should blame the specification. The problem is not that we have implemented the system incorrectly for the given specification, but that we have implemented the wrong specification.

In software engineering, there is a fundamental and common distinction between validation and verification (wikipedia).
Validation is checking whether the specification is what the stakeholders want — whether we build the right system. Verification is checking whether we implement the system corresponding to our specification — whether we build the system right. Validation is typically associated with requirements engineering, verification with quality assurance of an implementation (e.g., testing).

Validation vs Verification

If the machine-learned model is the specification, we usually do not worry about implementing the specification correctly, because we directly derive the implementation from the specification or even just interpret the specification. Bugs in the traditional sense — implementing the specification incorrectly — are rarely a concern. Verification is not the problem. What we should worry about is validation, that is, whether we have learned the right specification.

A requirements engineer's validation mindset

Requirements engineers deal with validation problems all the time — they are the validation experts. Their main task is to come up with specifications and make sure that they are actually the right specifications. Typically, requirements engineers will read documents and interview stakeholders to understand the problem and the needs of the system (I'm oversimplifying and limiting the discussion to interviews). Requirements engineers will proceed iteratively and propose some specifications, maybe build a prototype, and then go back to the customer and other stakeholders to see whether they have gotten the specifications right. Each iteration might provide insights about what was missing in the previous specification and how we might come up with a better one (e.g., interviews with stakeholders not previously considered). There are lots of challenges in identifying specifications, which is what requirements engineers study, such as interviewing representative stakeholders and avoiding injecting bias into those interviews. There are also lots of techniques, such as storyboarding, prototyping, and A/B testing, to check whether a specification meets user needs or system goals.

Conflicting requirements are also common during validation. Different stakeholders may have different goals and views (e.g., the company's goals vs. the employees' goals vs. the customers' goals vs. privacy requirements imposed by law). A key job of a requirements engineer is to detect and resolve vague, ambiguous, and conflicting specifications. Detection can be performed through manual inspection of text but also with automated reasoning about models (e.g., there is a huge body of work on using model checking to find conflicts in specifications). Resolving issues found during validation often requires going back to users and other stakeholders to discuss and negotiate tradeoffs.

Machine learning is like requirements engineering

Assuming machine-learned models are specifications, machine learning has a very similar role and very similar challenges to requirements engineering in the software engineering lifecycle.
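To tie this back to the COMPAS example above: the learned rule list is the specification, and any transcription of it into code is just one implementation. A minimal sketch in Python (rather than Java) might look like the following; getting a boundary such as 18–20 wrong here would be a verification bug, whereas disliking the predictions themselves is a validation problem:

def predict_arrest(age, sex, priors):
    # A direct transcription of the learned rule list; the rule list
    # itself is the specification, this function merely implements it.
    if 18 <= age <= 20 and sex == 'male':
        return True  # predict arrest within 2 years
    if 21 <= age <= 23 and 2 <= priors <= 3:
        return True
    if priors > 3:
        return True
    return False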
https://medium.com/analytics-vidhya/machine-learning-is-requirements-engineering-8957aee55ef4
['Christian Kästner']
2020-03-13 08:24:25.327000+00:00
['Machine Learning', 'Testing', 'Requirements Engineering', 'Software Engineering', 'Se4ml']
What is Consciousness?
Let's consider the question, what is consciousness? I'm fascinated by this question because it's mostly still a mystery. As in, we don't really know what consciousness is, why it happens, or why it exists! This might be the most difficult and complex question facing philosophy.

To examine this question, I'll provide a language distinction for two important variables: the mind and the brain. The brain refers to our three-pound, walnut-shaped organ placed on top of our spinal cord, the organ that routinely fires off one hundred billion neurons. Now, the juicy part for today: the mind. The mind is the mental state produced by the brain, such as visual sensations, emotions, memories, thoughts, attitudes, and beliefs. It's the you experiencing this right now, the mental state created by the brain.

Materialism Science

In order to understand how the brain produces the mind, we must turn to a scientific understanding of the objective study of the brain. This is where we can identify the material mechanisms that produce the mind — or that we think might produce the mind. The idea that consciousness is an extension of the material brain is what makes up the materialist worldview. However, this is where the mystery of consciousness comes into play. Although science has made significant progress in brain research, in understanding the neural correlations between consciousness and certain functions of the brain, we still understand very little about our mental states — those visual sensations, emotions, memories, thoughts, attitudes, and formations of stories and beliefs. Consciousness is still mostly the same mystery it always has been, even though science has made progress in pretty much everything around us in the material world.

It then raises the question for us: could something be missing? What if neuroscience achieves a full understanding of how the brain produces the mind, or how the brain creates a state of consciousness? And should we even assume this is possible, given how little progress we've made in understanding the brain-to-consciousness connection? Some materialist scientists in the study of mind put forward that we would understand how the "switch" of consciousness turns on if we fully understood the brain. But I think a full understanding of the brain will not fully explain the concept of human consciousness; neuroscience examines brain dynamics that connect with particular states of consciousness. Essentially, scientific research of the brain is looking to identify a brain function (state) x that correlates with the brain doing y. However, when we self-examine the state of consciousness — with personal experience — the personal state of mind is a much more subjective experience; a subjective state that cannot be fully explained, expressed, or understood through the objective proclamations of science.

Think of it this way: science is studying what an atom is, but not what it is to be an atom. This distinction matters, because when those same atoms come together they form a human that is a conscious being that creates story, experiences joy, suffering, and the subjective experiences around life. We can identify a connection with what it is to be human. Now, I'm not saying atoms are out experiencing the world, but instead of assuming there's a mysterious consciousness switch turning on when those atoms come together, it becomes less of a magical occurrence to just assume atoms have a level of "consciousness" intrinsic to them.
The ideas I just expressed are essentially the philosophical ideas behind panpsychism.

Consciousness is Everywhere

And the reason I find the idea of panpsychism so interesting is that it points out this problem of consciousness I've discussed. It's pointing out that we have something missing in the study of the mind: we are not looking at what it is to be an atom. So, to help express this idea more fully, it's saying that maybe there's a level of energy, whether it's particles or atoms, or whatever more accurate scientific word I could insert there, that makes something into what it feels like to be that thing. It's the idea that we might find some psychophysical laws around those mental states we experience — psychophysical laws that might help explain our subjective experience of consciousness.

Allow me to dissect this further. Think of our brain as an information-processing device; if we can understand how information is processed from inputs into output brain functions, we can better understand intelligence, cognition, and perception. At the very least this can help us understand an objective state of consciousness, even though it is not enough to fully understand the subjective state of mind. For clarification, philosophy itself will not provide a definitive answer to the question of consciousness, but it can help ask the right questions to bring us to a better understanding.

To express the problem of a purely scientific state of consciousness, imagine you have Person A and Person B staring at a red apple. Based on the scientific understanding of the brain, we know that neurons send electrical signals to one another, which produces a mental event that is the experience of seeing a red apple (I'm oversimplifying this process, but the general understanding is all that is necessary for this essay). When you stare at the red apple, you have an experience of red within your visual field from one neuron sending another neuron a signal, and so on — you can visualize a group of people continually sending messages to one another. Now, imagine placing a green apple next to the red apple: another neural process takes place that produces the experience of green. The brain itself isn't changing to green — we don't have a little being up in our heads changing the filter every time a new object of color is placed in front of us. We only have our brain producing states of mind.

The point is that a brain function produces a mental event of experiencing red and green apples, which means we have the mental experience of red and green, but red and green are not found in the brain — they are experiences of the mind, mental states. To further express this problem, imagine your friend looking at a red apple. Neurons A, B, and C fire off to produce the mental experience of red for your friend. We could identify this brain mechanism in a brain scan, but we cannot know if she's experiencing red. Even if she claims to be experiencing the mental image of red, she is only saying that because that's the mental experience she has learned to identify with red. For all we know, that brain mechanism of neurons A, B, and C firing off could be producing what we call green. Meaning, we have only identified the brain function for producing the experience of that mental state, but we cannot identify what that mental state is. This explanation of the problem of consciousness is essentially what philosopher David Chalmers coined in 1995: the Hard Problem of Consciousness.
It explains that even a complete understanding of the physical structures of a creature's thinking mechanisms (not limited to humans) leaves open the question of whether or not that creature is conscious. The Hard Problem demonstrates that even a full physical explanation is incomplete, because it is devoid of what it is like to be the subject.

Finding the Missing Link

Before I progress further, I must acknowledge my underlying assumption in denying Descartes's substance dualism. Descartes remains the holder of the most famous argument for the soul, with his famously credited phrase, 'I think therefore I am.' His arguments suggest that the mind and body must be separable entities because of our ability to doubt the existence of our brains but not our minds. He claims that it is not possible to doubt we have a mind because doubting itself is a mental exercise. His argument boils down to this: body and mind must be separable because you can doubt one but not the other. However, a distinction between the brain and mind does not mean they must be separate. Additionally, merely doubting your body is not a strong enough property for making proclamations about its nonexistence. Descartes can doubt the existence of his brain and not his mind, but that doesn't provide evidence of their distinct separation. It is merely describing how he thinks about it.

We can also acknowledge our limited understanding of the mind and brain's full nature — in that we do not know all there is to know. But we know enough to know that the brain and mind are inescapably interwoven; had Descartes understood this dependence, he just might have begun fully doubting the objective nature of himself. Descartes understood the body merely as a vessel for the soul. However, we now have a better understanding of the interconnectedness between the mind and brain, and further, an understanding of how other areas of the body (your gut) affect your brain, which then affects your mind.

The Hard Problem of Consciousness arises because of our understanding that the mind and body are interconnected — in that the mind is produced by the brain. But what is that subjective experience? In large part, Descartes's argument for dualism, although not proving dualism, helps express the problem with calling consciousness an illusion. Once you understand that you need consciousness in order to experience an illusion, you realize the position that it's an illusion is incoherent.

As humans, consciousness is the only thing we can be certain exists, yet it's the biggest riddle and mystery of our world. Given that science has done very little for the inquiry into the sensory qualities that we subjectively experience, maybe it's time we go back to the drawing board in our understanding of consciousness. And maybe when science begins seeing the importance of asking what it is to be x, it can become open to ideas that were once seen as absurd. I think only then will we begin seeing more reliable understandings of the conscious experience.

So what is consciousness? Well, I still don't really know, but it is the thing that gives us the emotions, perceptions, and sensory feelings that give life meaning, so it's time we examine this vital question with a more open mind.
https://medium.com/the-philosophers-stone/what-is-consciousness-2c31ff50e7ea
['Brenden Weber']
2020-02-20 20:07:40.849000+00:00
['Ideas', 'Philosophy', 'Consciousness', 'Education', 'Science']
4 Famous Women Who Played Hard to Get With the Men in Their Lives
4 Famous Women Who Played Hard to Get With the Men in Their Lives
These women knew how to play the game and win
Photo by Yamil Duba from Pexels

Have you ever heard of a self-help and dating book called All the Rules: Time Tested Secrets for Capturing the Heart of Mr. Right, first written by Ellen Fein and Sherrie Schneider in 1995? When this book came out, it was really controversial. Despite all the strides that women had made in their personal and professional lives, the book laid out dating ground rules that seemed to set women back into the Dark Ages.

The key precept of the book was that men should chase women when it came to love and relationships. It emphasized that a relationship was doomed to fail if the woman so much as made the first move by even uttering the word "hello". According to The Rules, the woman should never even call a man. The man should always lead, pay for dates, say "I love you" first, and of course be the first one to propose. Basically, the book advocated for the extreme version of playing hard to get.

There is still a lot of discussion in psychology and behavioral science as to whether playing hard to get really works. You can always find a case where it works and, just as likely, you could find another case where playing hard to get is the worst thing that a man or woman could do. However, in real life and in fiction and literature, you can always come across women who do play hard to get and end up winning in love and in life. I came across these 4 examples of famous women from history, fiction, celebrity culture, and royalty.
https://medium.com/indian-thoughts/4-famous-women-who-played-hard-to-get-with-the-men-in-their-lives-a0b9342ee710
['Anita Durairaj']
2020-12-20 14:44:44.115000+00:00
['Self', 'Life Lessons', 'Love', 'Psychology', 'Life']
Bodymovin to Android
I've previously taken a look at using AirbnbEng's Lottie to create vector animations on Android:

While Lottie offers a ton of power, it misses out on a lot of the performance benefits of Android's native vector animation format, AnimatedVectorDrawable. I came away wanting the best of both worlds: Lottie's tight integration with Adobe After Effects and AnimatedVectorDrawable's performance characteristics. Well, it seems that my dreams have come true: Bodymovin (the After Effects plugin Lottie also uses) version 4.10 brings direct support for exporting AE compositions as Android AnimatedVectorDrawables! 🙌 🎉 🥂🍾🎈 A huge thanks to Hernan Torrisi for adding this! Here's how it works:
https://medium.com/google-design/bodymovin-to-android-6e53e5f7a96
['Nick Butcher']
2017-11-14 15:02:12.498000+00:00
['Motion Design', 'Android', 'Animation', 'Design']
Adventures in Sound Poetry: Interview with Lane Chasek
Poetry doesn’t always age well. Several traditional forms of poetry seem to be on their last dying breath. I run a literary journal. If anyone submits a sonnet or a villanelle to us, it’s a safe bet that the form is used somehow ironically. I’m generally in favor of poets moving beyond traditional forms. But author Lane Chasek does the world an important service by reviving one nearly forgotten form of poetry: sound poetry. Unlike most established forms of poetry, which are based on strict structural rules, sound poetry is exactly the opposite, insisting on no formal structure whatsoever. Sound poetry goes so far as to do away with syntax and semantics entirely. This enables sound poems to explore musical and theatrical elements unlike any other form of poetry. Although sound poetry was officially invented in the 20th century, it’s also an ancient form, with its roots in oral poetry traditions. In his book, “Hugo Ball and the Fate of the Universe: Adventures in Sound Poetry,” Chasek unites the ancient form with its 20th century figureheads, and then pulls it all into the present day. His book is also a reflection on the creative process more broadly, offering a personal narrative about the struggles of a poet in the modern world. Below is an interview I conducted with Chasek via email about the making of his new book. What got you interested in sound poetry? Lane Chasek: I learned about the Dada movement from a friend I used to attend poetry open mics with, and I learned about sound poetry soon after. I never attempted writing a sound poem back then, but the idea always lingered in the back of my brain until I started writing this book. This was during my freshman year of college, so that would have been 2013. But my fascination with nonsense dates back to childhood. I used to invent long, meaningless words when I first learned to read and write, and I’d plaster those words on my bedroom walls and the pages of any notebook I could get my hands on. It was fun, but I really think this phase was my way of testing the limits of language. I was only six or seven, and the idea of words was still fresh to me. I didn’t know what they could do, so I went a little wild with them. What was the writing process for the book? Was it different in any way from your usual writing process? This was my first attempt at long-form nonfiction. Given the nature of the Fair-Minded Fraud & Forgery series, I always thought of this project as creative scholarship. Compared to past projects, I was surprised by how much of my time writing this book was spent . . . well, not writing. Most of it was spent researching. I’d often go weeks without writing a single word. This was agonizing at first, but this process took me to some unexpected places. For instance, I wrote a chapter where I combine the legend of the golem with the story of how my paternal great-grandfather came to the U.S. I never thought I’d write something like this, but it makes sense: an Eastern European city, mass carnage, an unpronounceable word that can create monsters. It’s a story about needless violence, heritage, and the power of language — themes which ultimately came to dominate my book. The original working title for the book was “Notes of an Aspiring Sound Poet.” How did the book evolve from when you first sat down to start writing it? Initially this book wasn’t going to focus so heavily on religion or science. 
So at the time, Notes of an Aspiring Sound Poet made sense as a title — I was (and still am) an aspiring sound poet, and these are my notes. But as my research expanded into cosmology, religion, and thermodynamics, I realized there was more at play here than sound poetry. When Hugo Ball and the Fate of the Universe was suggested to me, the focus of my book suddenly made more sense. Hugo Ball is sound poetry’s earliest protagonist, but beyond Ball and his work, I’d found myself writing about entropy, randomness, number theory, warfare, theology, etc. By studying sound poetry I’d opened my mind to topics that involve our fate as a species and the fate of the universe. The book is partly about your journey in writing your first sound poem. Did writing this book help you become a better sound poet? It hurts to say this, but no. If anything, my research and my attempt at writing sound poetry proved how much I still have to learn. Sound poetry is a deceptive topic. It looks simple, but beneath the hood there are so many intersecting webs of music, politics, theatre, and history — very daunting. I wasn’t an expert when I started this book and I’m still not an expert. I’m a sound poetry novice, and I’m fine with that. However, one thing that’s holding me back is the performative nature of sound poetry. It’s one thing to read a sound poem, but listening to one adds new dimensions and nuances. It’s like tasting artificial watermelon flavoring versus eating an actual slice of watermelon. Theatricality, I believe, is integral to great sound poetry, and a good sound poet always has some kind of performative flare. That flare is something I haven’t developed yet. Sound poetry is a form of language but without semantics or syntax. How can you judge a good sound poem from a bad one? Is it mostly about the performance? That’s a tricky one. It’s hard to put into words what works and doesn’t work in a poem that doesn’t use words. But I think I’ve developed a sense of what I prefer in a sound poem. For example, I prefer Hugo Ball’s earlier sound poems to his later ones. The later ones are so jagged and cacophonous, while his earlier attempts felt like an otherworldly imitation of human language. Noise is noise, but nonsense that sounds like it might have meaning is more interesting. Otherwise, the line between good and bad sound poetry seems to be in the performance. I’ve never read any of Jaap Blonk’s sound poems, but I’ve listened to his work and I love it. I’m not sure if I’d enjoy his work in print, but when you listen to his albums, you lose yourself in the sonic elements of the genre. For that reason alone, I think Blonk is the best starting point for anyone interested in sound poetry. I’d especially recommend the album Five Men Singing. Your book introduces a cast of historical figures who are all distinct and fascinating. Would it be fair to say that sound poets are some of the most colorful figures in the history of poetry? If we limit ourselves to Dada, a touch of color seems to be a defining characteristic of many artists in the movement, not just sound poets. And it makes sense — Dada was ultimately a movement for people who were fed up with what Western Civilization had been peddling for the past five centuries. The anti-structural, anti-authoritarian ethos of sound poetry and Dada attracted the disaffected, the angry, and the eccentric. In terms of today’s sound poets, I’m not sure. Every sound poet I’ve encountered has just looked and acted like a poet. 
The only unusual thing about them is their genre of choice. There have always been artists who are drawn to nonsense, or pure expression without any clear meaning. Famous writers who dabbled in nonsense include Lewis Carroll, Alfred Jarry, and of course Hugo Ball. Do you know why nonsense is such an enduring quality of art? A lot of it’s probably frustration. At least in literature, there’s this gulf between writer and reader that isn’t present in other media. You have to translate events and phenomena into words, which are imprecise. Sometimes you can’t find the right word or phrase, and even if you do, you can’t guarantee that the reader will understand what you meant. Theatre, painting, and music are more direct — the medium is the experience. Having to think about that gulf as a writer is discouraging. I often feel like a painter with my hands cut off, and that frustration is probably universal. I think artists who embrace the nonsensical have grown tired of the limits of their medium. Nonsense is both a creative and psychological outlet for them. They want to break things. When I finally wrote a sound poem, it felt like a creative temper tantrum — uncontrollable, but oddly liberating. I didn’t care if the world understood me. I could finally write and not worry about what a hypothetical reader would think. Hugo Ball viewed sound poetry to be a partly political art form. Can you explain the political side of sound poetry? Also, do you have any opinion about the role sound poetry — or art in general — should play in politics? Sound poetry is (basically) language, so I’d say it’s inherently political. Historically, though, the political dimension of sound poetry stems from the science and art of propaganda. Propaganda has always existed, but in the early 20th century, propaganda became a mobilizing, destructive force that could end millions of lives in a matter of months. To artists like Hugo Ball and Tristan Tzara, this had to be horrifying. Governments were suddenly using words and images, the domain of writers and artists, to end human lives. We take propaganda for granted now, but Dadaists back in the day probably felt disgusted and appalled. Sound poetry was Ball’s way to redeem human language: you remove meaning, you defang the language. Or at least that’s what he thought. In our post-truth world, I’d say we’re in a crisis similar to that faced by the first Dadaists. Like Ball and company, we’re witnessing the terrifying power of language. We live in an age where a few inflammatory Tweets feel like they could tear our country apart. What can sound poetry do to solve this? Not much. Our president talks and writes nonsense all day, so what would more nonsense accomplish? Do you think sound poetry has a future as a poetic form? Sound poetry will be around forever, I think, but it probably won’t gain popularity anytime soon. In its purest form it just doesn’t appeal to a mass audience. It’s always been a niche genre, but I don’t mind. There’s something special about discovering a writer or performer like Jaap Blonk and only one or two of your friends really “get” what he’s doing. You can share that forever. However, even if sound poetry isn’t popular in its own right, its children certainly are. And by children, I mean the ways in which sound poetry has influenced music. Scat singing, for example. Even if someone doesn’t know about sound poetry, they’re probably familiar with scat singing, whether it’s Mel Torme or Scatman John. But let’s face it, even jazz has become pretty niche. 
I think where we’re really seeing sound poetry’s lasting effects is in the newer generation of rappers, especially the ones who get labelled as “mumble” rappers. Which isn’t a fair label. “Mumble” implies that their style is lazy just because it’s occasionally nonsensical. If there’s anything I’ve learned, it’s that nonsense can be an artform. When someone complains that an artist like Lil Uzi Vert doesn’t use complex, sprawling rhyme schemes like Pharoahe Monch, I can’t help but laugh. It’s like comparing Hugo Ball to Alexander Pope. They’re different artists, they have entirely different goals. A lot of this newer music focuses on mood and the sonic experience more than the lyrics themselves. This isn’t the devolution of rap — it’s proof that the spirit of the first major sound poets is alive and well in the 21st century. What are you working on now? What’s your next writing project? I’ve been juggling two ideas for a while now. For the past six months I’ve been working on an autobiographical analysis of T.S. Eliot’s Old Possum’s Book of Practical Cats. A lot of Eliot’s theological and political themes from works such as “The Hollow Men” and Ash Wednesday, oddly enough, feature heavily in Practical Cats, and they make an interesting backdrop for a story about exploring sexual identity in the Midwest. Otherwise, I’ve been busy revising and organizing some of my recent poems into what will (hopefully) be a book soon. They mostly revolve around family history and collective guilt, but they all stem from a story that’s been passed down in my family for years. According to years of rumors, the sole reason my maternal great-great-grandmother left Sicily for the U.S. was because the Mafia wanted her dead. The working title is Mafioso. “Hugo Ball and the Fate of the Universe: Adventures in Sound Poetry” is currently available in print and on Kindle.
https://petermclarke.medium.com/adventures-in-sound-poetry-interview-with-lane-chasek-943e38c8a364
['Peter Clarke']
2020-07-01 22:20:32.652000+00:00
['New Book Release', 'Indie Publishing', 'Nonfiction', 'Interview', 'Books And Authors']
Writing Useful Functions to Explore Human Genetics with Python
Writing Useful Functions to Explore Human Genetics with Python
Simple Python Functions can help give an insight into Human Disease
Image courtesy of https://www.ted.com/playlists/357/how_does_dna_work

A Python and Sequence Data Example

Many human hereditary neurodegenerative disorders, such as Huntington's disease (HD), are correlated with an expansion in the number of tri-nucleotide repeats in particular genes. In genetics, codons, a type of tri-nucleotide repeat, code for amino acids, the building blocks of proteins. Specifically, in Huntington's disease, the pathological severity is associated with the number of CAA or CAG codon repeats. These codons specify the amino acid glutamine, one of the 20 small building blocks. As such, HD belongs to a group of diseases collectively referred to as polyglutamine (polyQ) diseases. More than 35 tandem repeats virtually assures the disease, while healthy variants of this gene have between ~6–35 tandem repeats. Fortunately, Python can be used to write simple functions to investigate the mRNA transcripts, the instruction template needed to assemble proteins, and determine the number of tandem repeats of the CAA/CAG codons.

Aim:

We are fortunate in biology that a lot of the files we may be interested in, from databases such as NCBI, are text-based files. This means we can treat them as ordinary Python strings. For the function that I will demonstrate here, I would like the output to be as informative as possible. Specifically, I would like to analyze a typical FASTA file of the mRNA transcript which encodes the huntingtin protein. In genetics, a FASTA file is a simple file with a header that contains a unique identifier for the sequence, and the sequence itself below this header, as shown below:

>FASTAheader
AGCTCGATACGAGA

I would like to retrieve the Accession, or identifier, for that file and work out the length of the DNA sequence. Importantly, I would like the user to choose their own tandem repeat number for the CAA/CAG codons to search for in the file, and have the output inform the user of the number of consecutive runs of CAA/CAG codons above the input tandem number, what those tandem sequences are, and how many repeats were found in each run.

Getting started

To begin, let's practice on a small DNA sequence. I have deliberately interrupted this DNA sequence with an 'X' to break up the runs of consecutive CAG/CAA codons. Firstly, I convert the DNA sequence to uppercase using the .upper() method. I then use the re module, where I search for the CAG or CAA codons occurring three or more times. The pipe character, |, represents "or", and the curly braces give the limit for how many times these sequences need to be found (here, the upper limit is left off, after the comma). Therefore, in this example, I want to find runs of CAA/CAG occurring three or more times in the DNA string, meaning that if this requirement is satisfied, a stretch of DNA at least 9 characters long should be found. I then iterate through the matches found, saved in the variable pattern_match, and print the sequences that fulfil this criterion. For clarity, I also append these sequences to a new list and print their lengths. 2 runs are found and both are above the minimum threshold of at least 9.

Working with Files

To open the file, I use the open function with the with statement. I then slice the file, because I do not want the Accession name (header) of the FASTA file contributing to my DNA count. This means I need to slice off element 0, which removes the header; I then read the sliced file f, being careful to remove any new-line characters.
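A minimal sketch of this step might look like the following (the filename is illustrative, and the pattern mirrors the practice example above):

from itertools import islice
import re

with open('test.fasta') as f:
    # Skip the header line (element 0) and join the remaining lines,
    # stripping new-line characters so codon runs are not broken.
    dna = ''.join(line.strip() for line in islice(f, 1, None)).upper()

pattern_match = re.finditer(r'(?:CAG|CAA){3,}', dna)
for match in pattern_match:
    print(match.group(), len(match.group()))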
An example illustrating the impact of not removing new-line characters is shown in the image below. These characters would break the consecutive runs of CAG/CAA, and add extra length to the DNA which would not be representative of the sequence data.

The impact of forgetting to remove new-line characters; the sequence is artificially longer and not representative of the sequence data.

I then use the re.finditer method as described above. For this code snippet to work, it is necessary to import islice and re:

from itertools import islice
import re

Shown below is the script, and the console output. As indicated, the accession name is shown, followed by the length of the sequence, how many runs of CAA/CAG greater than three are found, the sequences of those runs, and their respective lengths and tandem repeat numbers. This code, however, could be much more informative and easier to interpret. For example, a printed message before each section could inform the user what the output represents. In addition, this code is a little inflexible, as it only works with 'test.fasta' and checks for 3 or more consecutive runs of CAA/CAG. To improve upon this script, the most convenient option is to transform it into a function.

The output in the console is hard to interpret, and could include informative text alongside it.

Function Creation

Now that the main body of the code has been written, creating a function will not require too much additional work. As specified, the two parameters that the user should be able to choose for themselves are the DNA file to analyse and the limit of the tandem repeats. I define the function huntington_tandem_repeat_dna, and give the function two parameters inside the parentheses. The dna parameter is the file that is opened, and tri_nucleotide_repeat_num has a default value of 3, for cases where the user decides not to specify the tandem repeat number.

To try the function, let's test two cases. One is a dummy test file I created and saved with the extension .fasta. The contents of this file are shown below. I have included 2 consecutive runs of CAG/CAA codons in this file. One is 24 base pairs long and the other is 12. I have included this file to validate that the function is working as expected.

Boxed: The boxed section refers to a 24 base pair segment, comprised of 8 tandem runs of CAA/CAG codons.

The files I am running are in my local directory, and as such I have only specified their relative path. The other DNA sample I run is the mRNA sequence. Here, I wanted to make sure that the number of base pairs matched up when the file was analysed. The NCBI database informs me that the sequence length should be 13,498 bp, and indeed this is confirmed when the huntington_tandem_repeat_dna function is called with this DNA file. Further to this, the accession names match up in both cases, confirming that all is working as intended.

The output in the console is explicit.

Summary

To go full circle, the original aim has been met. The function reads the file and gives useful output which is clear to understand. To pick on the second example shown in the image above, we can clearly see that the DNA length is 13,498 bps, the user has searched for 23 or more consecutive runs of CAG/CAA, but only one run that matches that criterion is found.
The sequence of that match is printed, along with its length and the number of tandem repeats that comprise that run. The function is now perfectly amenable to scaling up, such as processing multiple files and writing the output to any particular destination, should the user want to add this extra functionality (a minimal sketch of the full function follows at the end of this piece). I hope this read showcased how writing functions in Python can deliver insight into human genetic diseases. For more code examples, follow me on LinkedIn and Medium.
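Below is a minimal reconstruction of the huntington_tandem_repeat_dna function described above; the exact output wording and variable names are my own, but the behaviour follows the description in this article:

import re


def huntington_tandem_repeat_dna(dna, tri_nucleotide_repeat_num=3):
    with open(dna) as f:
        accession = f.readline().strip().lstrip('>')  # header line of the FASTA file
        sequence = ''.join(line.strip() for line in f).upper()
    print('Accession:', accession)
    print('DNA length:', len(sequence))
    pattern = r'(?:CAA|CAG){%d,}' % tri_nucleotide_repeat_num
    runs = [m.group() for m in re.finditer(pattern, sequence)]
    print(f'Runs with at least {tri_nucleotide_repeat_num} tandem repeats: {len(runs)}')
    for run in runs:
        print(f'Sequence: {run}. Length: {len(run)}. Tandem repeats: {len(run) // 3}')

huntington_tandem_repeat_dna('test.fasta', 3)  # filename is illustrative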
https://towardsdatascience.com/writing-useful-functions-to-explore-human-genetics-with-python-3e135540ea0
['Stephen Fordham']
2019-08-23 21:49:29.352000+00:00
['Data Science', 'Software Engineering', 'Programming', 'Healthcare', 'Genetics']
The World Will Get Worse if Trump Loses… Here’s How
It is January 20th, 2021. Joe Biden is being sworn in as the 46th President of the United States. After being defeated in a modern-day landslide, Donald Trump packs up his belongings, family, and golden toilet. He leaves behind the White House as he heads for Mar-a-Lago in Palm Beach, Florida. Center and left-leaning Americans will celebrate in the streets. The orange buffoon will finally be gone. We can return to some semblance of normalcy. Or so we think. Be careful what you wish for, because this is what will happen next: Donald Trump will launch the Trump News Network (TNN), a new cable news source dedicated to far-right conservative reporting for his cult-like followers. TNN will continue to be used by the former President to brainwash and gaslight his millions of adoring fans directly. After all, they trust Donald Trump more than any other news source. No more fact-checking. No more dealing with hard questions. And no more needing to pretend to condemn groups that Trump thinks deserve a seat at the table, like the Proud Boys, QAnon, and other far-right fringe organizations.

But why call it the 'Trump News Network,' you ask? Well, the Donald has a compulsion to put his name on everything. That is his brand. It is recognizable and valuable. It is on his buildings and was on his university (closed), his foundation (closed), his steaks (closed), and his casino (closed). Plastering his name all over a network channel dedicated to right-wing news will likely be a great success and exactly what we expect from the king of Twitter.

All About Money

Donald Trump already has a built-in viewership, with more than 85 million followers across his social accounts. Advertisers do not care what political slant a channel has, as long as it means they can get their product, service, or message in front of millions of eyes. That equals billions in potential revenue for a network run by the Trump family. Fox News, the current king of conservative news, has been rubbing Trump the wrong way lately. Any dissent felt by the President usually results in him lashing out at the network. Currently, FNC is the third most valuable news channel in the world, at an estimated $11.4 billion. It is not outside the realm of possibility that a Trump News Network could pull in more than $1 billion in revenue in its first full year of operation and be north of $5 billion within five years. The former President could create a top 10 network based on revenue and viewership within the decade. This would be an extremely enticing opportunity for Donald Trump, whom we recently found out is on the hook for hundreds of millions of dollars in personal debt. Fox News launched in 1996, and it quickly became the most-watched news source in the United States. Leveraging his Oval Office experience and global name recognition, Donald Trump could very well see a similar rapid ascension and become a new-age Rupert Murdoch.

Licensing Over Launching

The most likely path to a Trump News Network would be Donald Trump doing what he always does: licensing out his name. The President rarely owns the things his name is attached to, usually opting for an easier route. There is no need to carry cumbersome real estate assets, fork over loads of start-up capital, or deal with hundreds of contractors. Just find an attractive property with high-end clientele and slap that big beautiful 'TRUMP' name on it. This is the standard practice of Donald Trump.
Regarding the Trump News Network, it is much more likely he partners with a newer, rising network that he can control, like One America News Network. OAN has become a Presidential favorite over the last couple of years because of its clear far-right lean and favorable reporting. A start-up network like this one would be the perfect landing spot for a newly branded TNN. With an existing lineup of conservative commentators and reporters, Donald Trump would have very little work to do to turn the operation into a full-fledged disinformation machine. A newer, smaller operation like OAN would also give Trump the authority to remake the network however he sees fit. Something he couldn't pull off at a far more established Fox or Breitbart. This would show an interesting evolution in the business mindset of Donald Trump and his family. Something I think he likely learned while running for and being elected President. Why sell the masses a thing, a product, when you can sell them a perspective?

This news network will expand his influence exponentially. Imagine a channel dedicated to 24/7 programming, approved and guided by Donald Trump. An entire world of disinformation, manipulated events, charismatic reporters, and flawed perspectives carefully crafted for maximum influence on his willing audience. The President will make more money than he ever dreamed of with his own news network, compared to the substantial losses of his hotels, golf courses, casinos, steaks, and university. And all it will cost the American people is our collective sanity. In the eyes of Donald Trump, that sounds like a fair trade.
https://medium.com/the-purple-giraffe/the-world-will-get-worse-if-donald-trump-loses-heres-how-3197e4a48d41
['Patrick Tompkins']
2020-10-22 13:11:34.668000+00:00
['Government', 'Leadership', 'Journalism', 'Elections', 'Politics']
Planning to Migrate a WinForms App to ASP.NET MVC? How to Get Started
If you have a WinForms app, there's a good chance you'd like to modernize it by migrating it to an ASP.NET-based model-view-controller (MVC) architecture. Doing so can make it easier to extend the application or integrate it with new frameworks. Undertaking a WinForms-to-ASP.NET MVC migration takes work. In this post, I'll explain the first key technical considerations when planning your migration. In some cases, the desktop application cannot be extended because of desktop application limitations (such as an IoT integration). Independent of the reasons to migrate, some considerations need to be analyzed before starting the migration. This article discusses those considerations by highlighting the first items you should think about when migrating a WinForms app.

Migrating to an ASP.NET MVC architecture

1. The conceptual migration: Make sure migration is the best option

The first consideration is making sure that migrating to an ASP.NET MVC architecture is the right decision in the first place. In some scenarios, you could more easily modernize your app in other ways. For example, you could possibly containerize the application and share access with a distributed team without rewriting. (As some developers say, the best code is the code we don't write.)

2. Identify the platforms' points of similarity

If migration is the only option, we need to identify the points of similarity between the current technology and the new one. Of course, the migration from WinForms to ASP.NET MVC will be smoother if the current application was developed with layers separating the logic in a construct similar to MVC. But we know that is rarely the case. Although the concepts of WinForms and ASP.NET MVC differ, your classes are likely to be the most reusable code, which makes them a good starting point. Many applications developed with WinForms have a persistent storage layer implemented inside the project, either as inline SQL code or as calls to stored procedures. Identify which approach your application uses and, after migrating the classes, start migrating the queries and the functions from stored procedures.

3. Identify fundamental platform differences

The most challenging problem is the view layer. Desktop applications involve many native events, and most can be mapped to web page events such as mouse and keyboard events; but other resources, such as the file manager, require rethinking the implementation. Other aspects of the migration are the implementation of the controllers, the management of routes, and the logic of page redirects.

4. Identify code changes to migrate from WinForms to ASP.NET MVC

Now that we've discussed the considerations to keep in mind before starting the migration from WinForms to ASP.NET MVC, let's take a look at what you should expect to change in your code in order to perform the migration. If you have SQL code in your project like the sample below, you can reuse the SQL in the query on the ASP.NET MVC project. Let's seek to understand this better by analyzing the code used to search users by country in the WinForms project.

// The SQL code to get users by country.
string sql = "SELECT * FROM Users WHERE countryID = @countryID";

// The connection handling.
using (SqlCommand sqlCommand = new SqlCommand(sql, connection))
{
    // Add the parameters to the query.
    sqlCommand.Parameters.Add(new SqlParameter("@countryID", SqlDbType.Int));
    sqlCommand.Parameters["@countryID"].Value = countryID;

    try
    {
        connection.Open();

        // Reader to get the users.
        using (SqlDataReader dr = sqlCommand.ExecuteReader())
        {
            DataTable dt = new DataTable();
            dt.Load(dr);

            // Bind the data table to the data grid view shown on the screen.
            this.dgvUsers.DataSource = dt;
            dr.Close();
        }
    }
    catch
    {
        MessageBox.Show("ERROR");
    }
    finally
    {
        connection.Close();
    }
}

This code contains the SQL, the connection handling, and the assignment of the result to the component that shows the list in the form. The database access code and the view are tightly coupled here. In addition, the error handling triggers a visual action by showing an alert window. Now, let's see the ASP.NET MVC code.

public async Task<ActionResult> UserDetail(int? countryID)
{
    // Verify that the parameter is valid; if not, return an HTTP error.
    if (countryID == null)
    {
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
    }

    // Find the users by countryID without writing SQL (Entity Framework).
    List<User> users = await db.Users
        .Where(u => u.CountryID == countryID)
        .ToListAsync();

    // Or search using a raw SQL query instead:
    // string query = "SELECT * FROM Users WHERE countryID = @p0";
    // List<User> users = await db.Users.SqlQuery(query, countryID).ToListAsync();

    // Verify that there is a result; if not, return an HTTP error.
    if (users == null)
    {
        return HttpNotFound();
    }

    // Send the list of users to the view.
    return View(users);
}

This code shows two ways to run the search. The first uses the Entity Framework; the second uses the raw SQL code, which can ease the reuse of code from a WinForms project with significant amounts of raw SQL. The Entity Framework approach is preferred, though, because it makes better use of ASP.NET MVC resources. In the view, we can loop over the list of users and render them in the CSHTML file:

@model List<User>
<ul>
@foreach (var user in Model)
{
    <li>@Html.ActionLink(user.Name, "UserDetail")</li>
}
</ul>

5. Decide how to build a new interface using ASP.NET: interface and windowing considerations

In addition to migrating the backend of your application, you also have to think about how to build a new interface using ASP.NET. Fortunately, this is not as difficult as it may seem, even if you are not a frontend developer by trade. The MVC architecture approach makes it possible to use simple HTML and JavaScript for building an interface. That means that building an interface for your app after the migration requires only basic web development knowledge. Plus, JavaScript provides common events like click, hover, and focus that are equivalent to those available from WinForms, so it is easy to implement the same interface functionality after the migration. Here's some sample code showing how you might approach crafting an interface after the migration without having to rearchitect your code to a significant extent. The only big difference between the before and after interfaces in the example below is that the web alert has no title on the prompt window.

// WinForms
Button alertButton = new Button();
alertButton.Click += new EventHandler(alertButton_Click);
Controls.Add(alertButton);

void alertButton_Click(object sender, EventArgs e)
{
    MessageBox.Show("My message", "My title", MessageBoxButtons.OK, MessageBoxIcon.Error);
}

// MVC
<button onclick="openAlert()">Open alert</button>
<script>
    function openAlert() {
        alert("My message");
    }
</script>

Migration Next Steps

WinForms has been in use for years, and ASP.NET MVC is a stable framework for building and migrating applications. What's needed in a migration scenario is a deep knowledge of how the current application works and what can be done with it. Migration is difficult and slow; the work should be done with an enthusiastic team. Additional things to consider:

- Business intelligence conversion: C# WinForms to C# MVC or JavaScript
- Client-side versus client-server events
- REST APIs
- Cloud services: converting local storage to cloud data storage
- Syntax differences

We will provide the next steps in a following article. In the meantime, if you have any questions, please leave them in the comment thread below.

by Brena Monteiro
Originally published on GrapeCity.com
https://medium.com/grapecity/planning-to-migrate-a-winforms-app-to-asp-net-mvc-how-to-get-started-e2854505c74e
['Grapecity Developer Solutions']
2019-01-24 19:30:29.793000+00:00
['Development', 'Technology', 'Web Apps Development', 'Web Development', 'Migration']
Avancargo, LATAM B2B trucking platform
Avancargo has raised $1M in total. We talked with Diego Bertezzolo, its CEO.

How would you describe Avancargo in a single tweet?

Avancargo is an on-demand B2B trucking platform, connecting FTL carriers and shippers in LATAM.

How did it all start and why?

It all started back in 2017, when the three founding partners were doing their MBA in Buenos Aires. We were all connected to logistics (my personal experience was managing sales and marketing for Volvo CE in Argentina and Uruguay) and found that there was a big opportunity to digitize and improve service in the sector. We arrived quickly at an MVP, and in a couple of months we found the first angel investors. Leveraging opportunities and supply in the agri sector was our first goal; Argentina alone moves over 4 million trips per year.

What have you achieved so far?

So far we can sum up our achievements in the following lines:

- Over 8,000 companies onboard, with more than 30,000 heavy trucks (10% of Argentina's)
- 800 shippers, among which we are regularly operating with Walmart, Cargill, Bunge, Cresud
- 14 people in our team, with IT and operations as the largest areas
- US$1 million investment, with some strategic partners such as Globant, Supervielle Bank, Murchison Group and Organization Roman
- Over 10,000 trips requested in the last 12 months

What do you plan to achieve in the next 2–3 years?

Our goal is to settle the Argentinian operation during 2019/Q2 2020 in order to scale the service to Chile, Peru, Colombia and Mexico.
https://medium.com/petacrunch/avancargo-latam-b2b-trucking-platform-51168c3d3707
['Kevin Hart']
2019-10-22 17:02:08.351000+00:00
['Travel', 'B2B', 'Startup', 'Technology', 'Latam']
Let’s read the Bible for the first time
Let's read the Bible for the first time
On Ann Nyland, translator

For me, it was a period of theological distress. The world I'd grown up in, Evangelical Christianity, seemed loveless and attack-oriented, and I couldn't stay in it anymore. I was interested in the Bible. But I only seemed to see "adultery" and "fornication" on every page—and, in my mind, the image of a man with a gray, cruel face staring back at me. Looking up Bible commentary one night, I noticed a mention of a Bible translator in Australia who'd said to Christianity: You got everything wrong. I said to myself: Yes, that's what happened.

In a 2005 interview, she speaks of her youth. I was born into a Christian family so was brought up with the Bible and accepted the Lord as my Savior when I was 6. My father was a Greek scholar (as was his uncle) and from an early age I can remember him talking about what the Greek really meant and how it was a shame it wasn't brought out in English translation. He had a collection of English translations. He was also a preacher and used to go on at length about how the King James said this, the other versions said this and that, but the Greek said something else.

Christendom had an open secret? The scriptures were not translated very well. She went to school to become a classical Greek scholar, but the Bible is what she was interested in—just different, unusual corners of it. Her dissertation was on a Hittite method of horse training, and continues to be cited. (In 1992, she wrote a published paper from it.) She got a job at the University of New England, in Australia, and says she published papers on "Greek lexicography from Homeric to Hellenistic times." But dealing with sexist translation of the Bible was the job that called out to her.

In the 1990s, the Christians vs. the Feminists was a regular media event. Dr. Nyland's stance was neither, or both? As she comments in a blog post: "I used to teach ancient Greek language at university, and then when I translated the New Testament from the Greek, I attracted all sorts of charges of feminist agenda simply for translating correctly." Bible publishing, she'd realized, is a bit of a racket, controlled by vast corporations and overseen by, as she calls them, "the lobbyists." Well-known Christian leaders, she writes, "have misrepresented lexicons, presented new meanings for common Greek terms, and displayed clear errors about elementary Greek grammar." Her initial efforts to push back weren't warmly received. She recalls in a 2007 interview: "A scholarly article I wrote in a peer-reviewed academic journal led to me being described as 'a shrill feminist author from Australia', rather than a Greek scholar commenting on the blatant mistranslation of a common Greek word by a group of lobbyists."

It seemed there was a commercial opportunity to update the translation of the Bible with new information. The scholarly literature was full of finds by archaeologists. For example, as she notes, Paul's phrase translated "husband of one wife" is found on the tombstones of women. There were new readings by scholars, adjusting the meaning of many poorly understood words. To translate and market a Bible was a daunting task, but for six years she worked on her New Testament, naming it after the Greek word kephalē. Does it mean 'head', or 'source'? Ever industrious, she seems to have set up her own publishing company to release it. Her Source translation of the New Testament was published in Australia in 2004. How was it different?
She explains in the introduction: For centuries, the meaning of numerous New Testament words remained unknown, thus translators were left to guess. In the late 1880s and again in the mid 1970s, large amounts of papyri and inscriptions were discovered. These impacted our knowledge of word meaning in the New Testament to such a degree that scholars labeled the finds “sensational” and “dramatic” . . . Yet nearly every New Testament translation of today follows the traditional translations of the earlier versions, which were published centuries before . . . . The Christian world had gotten locked into meanings created by translations. And when the translations were proven false, it created the problem she’d face. Do Christians get the message, or shoot the messenger?
https://medium.com/belover/the-woman-who-read-the-bible-for-the-first-time-d38412a2c11a
['Jonathan Poletti']
2020-05-02 06:32:24.183000+00:00
['Bible', 'Books', 'Christianity', 'Feminism', 'Sex']
Top books on bitcoin & blockchain you should read in 2020
Top books on bitcoin & blockchain you should read in 2020
LetKnowNews

Blockchain, over the years, has gained a lot of popularity, and rightfully so, because it can be decentralised and the data is cryptographically stored where no one can touch what's inside it. Blockchain is a revolution in digital ownership and digital privacy, which will eventually become a part of our daily lives. Many companies like Facebook and WhatsApp are already implementing blockchain to safeguard the way their users exchange money. Banking and investing probably make more out of blockchain technology than any other sectors. So, in recent years, blockchain has piqued the interest of everyone looking to invest, because of bitcoin. Naturally, if one is interested in it, then one has to read about it. Here we list down books on blockchain, in no particular order, that one should be reading:

Digital Gold

Digital Gold by Nathaniel Popper might be the perfect book for beginners. In this book, the author tells riveting tales of innovators, which might pique the interest of beginners who want to understand blockchain. The author presents events from millionaires, criminals and programmers who were hell-bent on creating a new form of digital money. What's interesting about this book is that anyone with no knowledge of blockchain can read it, and by the time they are finished, they will have a strong understanding and be ready to jump to the next level.

Blockchain: The Complete Guide to Understanding Blockchain Technology

Miles Price's book gives details about the implementation of blockchain, its technical underpinnings, and how to earn a profit through mining cryptocurrencies. This is a short and sweet guide to blockchain implementation, along with the history, mechanics and limitations of blockchain and much more. This book is another example of a great beginner's guide.

The Bitcoin Standard

The Bitcoin Standard: The Decentralised Alternative to Central Banking is a book that can be read by both blockchain beginners and intermediate readers. Saifedean Ammous's book is one of the best at demystifying bitcoin and money. Everyone, whether a beginner or an intermediate-level blockchain enthusiast, can gain an understanding of this sophisticated, transformational technology after reading this book.

Blockchain Revolution

Blockchain Revolution: How the Technology Behind Bitcoin and Other Cryptocurrencies Is Changing the World by Don and Alex Tapscott is an excellent resource for one who has a basic understanding of blockchain. This book is a unique combination of casual reading packed with technical information. The book assumes that the reader is an investor, so it might be better suited to someone with prior knowledge of blockchain investing.

Mastering Bitcoin: Unlocking Digital Cryptocurrencies

Mastering Bitcoin is from Andreas M. Antonopoulos and, strictly speaking, is for everyone who has a basic understanding of what blockchain is. This book gives a deep dive into bitcoin, and by the time one finishes the book, one will know how to write the next cryptocurrency applications.

"The invention of the Bitcoin Blockchain represents an entirely new platform to build upon, one that will enable an ecosystem as wide and diverse as the Internet itself. As one of the preeminent thought leaders, Andreas M. Antonopoulos is the perfect choice to write this book." — Roger Ver, Bitcoin Entrepreneur & Investor

Mastering Ethereum, another book by the same author, is also a great read and gives details about smart contracts and an additional layer of security.
https://medium.com/letknownews/top-books-on-bitcoin-blockchain-you-should-read-in-2020-346f0b56f2ad
[]
2020-01-22 07:44:08.676000+00:00
['Blockchain', 'Books', 'Cryptocurrency', 'Crypto', 'Bitcoin']
How to Secure a Spring Boot Application with TLS
Creating a Spring Boot Application

In this section, we will create a Spring Boot application and expose the following endpoints:

- GET v1/books/: List all books
- POST v1/books/: Create a new book
- GET v1/books/{book_id}: Get a book resource
- DELETE v1/books/{book_id}: Remove a book

Step 1: Creating a Spring Boot Project

In your favorite IDE, create a Spring Boot project with the web, h2, data-jpa and Lombok dependencies. Following is the pom.xml file:

pom.xml

Step 2: Configuring the h2 database

In this application, we will use the h2 in-memory database as our backing database. Add the following configuration in the application.properties file to configure the h2 database:

application.properties

Step 3: Creating the domain object

In this application, we will manage book information. Users of this application/API can perform CRUD operations. We have created the following book entity:

Book entity

Step 4: Creating the REST endpoints

To serve the aforementioned endpoints, we have created the following REST controller:

BookController.java

We also have created the following repository:

BookRepository

Step 5: Insert data at application startup

To help with the testing, let's insert a few book details in our database. To do so, create a new SQL file named data.sql in the src/main/resources directory. Spring Boot automatically executes this file at startup.

data.sql

Step 6: Testing the API

Let us now start the application and test a few endpoints to ensure the application is working as expected. We are using HTTPie to access the endpoints, and we receive data from the server as shown below:
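As a rough illustration of Steps 3 and 6, here is a minimal sketch of what the Book entity and an HTTPie call might look like. The field names are assumptions for illustration, not the article's exact listings.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// Hypothetical sketch of the Book entity described in Step 3.
@Entity
public class Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;   // assumed field
    private String author;  // assumed field

    // getters and setters omitted (the project uses Lombok, which can generate them)
}

With the application running on the default port 8080, a list request with HTTPie looks like:

http GET :8080/v1/books/

which should return the books seeded by data.sql as JSON.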
https://medium.com/swlh/how-to-secure-a-spring-boot-application-with-tls-176062895559
['Somnath Musib']
2019-12-03 05:01:02.046000+00:00
['Technology', 'Software Engineering', 'Coding', 'Software Development', 'Programming']
Deploying Spring Boot Applications
Spring Boot applications can be deployed into production systems with various methods. In this article, we will go through, step by step, the deployment of Spring Boot applications via the following 5 methods:

1. Deploying in a Java Archive (JAR) as a standalone application
2. Deploying as a Web Application Archive (WAR) into a servlet container
3. Deploying in a Docker container
4. Deploying behind the NGINX web server (direct setup)
5. Deploying behind the NGINX web server (containerized setup)

Deploying in Java Archive (JAR) as a standalone application

Spring Boot applications can easily be packaged into JAR files and deployed as standalone applications. This is done by the spring-boot-maven-plugin. The plugin is automatically added to pom.xml once the Spring project is created via Spring Initializr as a Maven project.

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

In order to package the application in a single (fat) JAR file, run the Maven command mvn package under the project directory. This will package the application inside an executable JAR file with all its dependencies (including the embedded servlet container, if it's a web application). To run the JAR file, use the standard JVM command:

java -jar <jar-file-name>.jar

Deploying as Web Application Archive (WAR) into a servlet container

Spring Boot applications can be packaged into WAR files to be deployed into existing servlet containers (such as Tomcat, Jetty etc.). This can be done as follows. First, specify WAR packaging in the pom.xml file via <packaging>war</packaging>. This will package the application into a WAR file (instead of a JAR). Second, set the scope of the Tomcat (servlet container) dependency to provided (so that it is not deployed into the WAR file):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>

Initialise the servlet context required by Tomcat by extending SpringBootServletInitializer and overriding the configure method as follows:

@SpringBootApplication
public class DemoApp extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(DemoApp.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApp.class, args);
    }
}

In order to package the application in a WAR file, run the standard Maven command mvn clean package under the project directory. This will generate the WAR package, which can be deployed into a servlet container. To run the application inside an existing Tomcat container, copy the generated WAR file to the tomcat/webapps/ directory.

Deploying in Docker Container

Before deploying the application into a Docker container, we will first package the application in a (fat) JAR file. This process is previously explained, therefore I will assume we have a JAR file. As the first step, we need to build a container image. For this, we start by creating a Dockerfile in the project root directory as follows:

# latest oracle openjdk is the basis
FROM openjdk:oracle

# copy jar file into container image under app directory
COPY target/demoApp.jar app/demoApp.jar

# expose server port to accept connections
EXPOSE 8080

# start application
CMD ["java", "-jar", "app/demoApp.jar"]

Note that, in the above snippet, we assumed that the application JAR file 'demoApp.jar' is located under the target directory of our project. We also assumed that the embedded servlet port is 8080 (which is the default case for Tomcat). We can now build the Docker image with the following command (from where the Dockerfile is located):

docker image build -t demo-app:latest .

where -t is the name and tag of the image to be built. Once the image is built, we can create and run the container via:

docker container run -p 8080:8080 -d --name app-container demo-app

where -p publishes (maps) the host port to the container port (in this case both are 8080). The option -d (detach) runs the container in the background, and --name specifies the name of the container.

Deploying behind NGINX web server (direct setup)

Configuring servlet containers (such as Tomcat or Jetty) for real production (i.e. running on port 80, without the root user and with SSL) may not be straightforward (but is doable). Therefore, it is recommended to use a web server (such as Nginx) in front of your Spring Boot applications. This can be done in two ways: a direct setup or a containerized setup. In this section, we will demonstrate the direct setup. In the direct setup, we run the Nginx web server and the Spring Boot applications directly on localhost (on different ports, of course), and we let Nginx proxy REST requests to the Spring Boot applications. For this:

Install the Nginx web server on Linux via sudo apt-get install nginx. Open the file /etc/nginx/sites-available/default with a text editor. Say we have two Spring Boot applications to be proxied. Then replace the 'location' block in the file with the following blocks for the two Spring Boot applications. Note that all Nginx-Java configs can be found here.

location /app1 {
    proxy_pass http://localhost:8080;
}

location /app2 {
    proxy_pass http://localhost:9000;
}

based on which the requests coming to http://localhost/app1/ will be directed to http://localhost:8080/, and requests coming to http://localhost/app2/ will be directed to http://localhost:9000/.

Load balancing

If you are running multiple instances of a Spring Boot application, you can enable Nginx to apply load-balancing. For example, suppose we have 3 instances of app1 running on ports 8080, 8081 and 8082. We can load-balance among these servers as follows. Open the file /etc/nginx/sites-available/default and add the following block at the top of the file (before the server block):

# configure load-balancing
upstream backend {
    server localhost:8080;
    server localhost:8081;
    server localhost:8082;
}

Modify the proxy_pass parameter of app1 as follows:

location /app1 {
    proxy_pass http://backend;
}

based on which the requests coming to http://localhost/app1/ will be directed to one of http://localhost:8080/, http://localhost:8081/ or http://localhost:8082/.

Deploying behind NGINX web server (containerized setup)

In the containerized setup, we deploy the Nginx web server and all the Spring Boot applications in separate Docker containers, and we let Nginx (running in its own container) proxy REST requests to the Spring Boot application containers. We start by packaging all Spring Boot applications in (fat) JAR files (which is previously explained). At this point, pay attention to setting individual server ports and root context paths for each Spring Boot application by adding the following lines to the application.properties (or application.yml) files:

server.port=8082
server.servlet.context-path=/search-service

Then we deploy the generated JAR packages in separate Docker containers (which is also previously explained). As an example, we deploy four Spring Boot applications: a single instance of the 'analysis-service' application and three instances of the 'search-service' application. The three instances of the search-service application will be load-balanced by Nginx. Our basic architecture looks as follows:

[architecture diagram]

We create an Nginx configuration file nginx.conf based on the default configurations. We add load-balancing and proxy information for each service as follows:

http {
    upstream backend {
        server search-service-1:8080;
        server search-service-2:8081;
        server search-service-3:8082;
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        root /var/www/html;
        server_name _;
        location /search-service {
            proxy_pass http://backend/search-service;
        }
        location /analysis-service {
            proxy_pass http://analysis-service:8083/analysis-service;
        }
    }
}
events {
    worker_connections 1024;
}

based on which the requests coming to http://localhost/search-service/ will be directed to one of http://search-service-1:8080/search-service/, http://search-service-2:8081/search-service/ and http://search-service-3:8082/search-service/, and requests coming to http://localhost/analysis-service/ will be directed to http://analysis-service:8083/analysis-service/.

After the configuration file (nginx.conf) is created, we will deploy the Nginx web server in a Docker container. For this, we create a Dockerfile as follows:

# latest nginx
FROM nginx

# copy custom configuration file
COPY nginx.conf /etc/nginx/nginx.conf

# expose server port
EXPOSE 80

# start server
CMD ["nginx", "-g", "daemon off;"]

And we build a Docker image for the Nginx web server as follows:

docker image build -t custom-nginx:latest .

Once all Docker images are built, the whole system can be deployed by running the docker-compose up command on the following docker-compose.yml file:
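As a minimal sketch, a docker-compose.yml matching the architecture above could look like this. The Spring Boot image names and the SERVER_PORT overrides are assumptions; what matters is that the service names match the hostnames referenced in nginx.conf.

version: "3"
services:
  search-service-1:
    image: search-service:latest      # assumed image name
    environment:
      - SERVER_PORT=8080              # Spring Boot picks this up as server.port
  search-service-2:
    image: search-service:latest
    environment:
      - SERVER_PORT=8081
  search-service-3:
    image: search-service:latest
    environment:
      - SERVER_PORT=8082
  analysis-service:
    image: analysis-service:latest    # assumed image name
    environment:
      - SERVER_PORT=8083
  nginx:
    image: custom-nginx:latest        # built from the Dockerfile above
    ports:
      - "80:80"                       # only Nginx is published to the host
    depends_on:
      - search-service-1
      - search-service-2
      - search-service-3
      - analysis-service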
https://medium.com/swlh/deploying-spring-boot-applications-15e14db25ff0
['Murat Artim']
2019-05-30 14:05:14.342000+00:00
['Spring Boot', 'Java', 'Docker', 'Deployment', 'Spring']
Apple Fitness+, Launches December 14
Apple Fitness+ — Apple Apple has just announced that the launch date for their newest service, Apple Fitness+, will be December 14. The long-awaited Fitness+ service gives users a catalog of different workouts led by expert trainers. The service will be powered by Apple Watch and will be able to provide real-time metrics like heart rate and calories burned. This will mean that in order to use the service you will need an Apple Watch. Apple Watch allows Fitness+ to dynamically integrate metrics like heart rate, in real time, on the screen. For those who like competition, some workouts will offer an optional burn bar that will contrast your effort to others who’ve also completed it. If you already own an Apple Watch, Fitness+ will be the center tab in the fitness app once the service becomes available. From there you can see your subscription and use the service. The fitness app will be available to Apple TV subscribers. Fitness+ has been created for everyone, whether you are a beginner or an expert. You can make modifications for all levels in every workout. You can do whatever exercise you want, choose the time, the trainer, and the workout. If you need to learn the basics, there is even an instructional program called Absolute Beginner. Many of the workouts on Fitness+ will not require any equipment at all but for some, you may need things like weights, a yoga mat, an exercise bike, a rowing machine, or a treadmill. The workout types available include treadmill walk, treadmill run, HIIT, rowing, cycling, dance, yoga, core, strength, and mindful cool-down. Fitness+ will have brand new content for all of their workout types every week, with different trainers, times, and music. There is also an intuitive filter that will make finding different workouts much easier. Music will be a key part of the Apple Fitness+ service. Users can choose from a variety of genres whilst completing workouts. They include Latest Hits, Chill Vibes, Upbeat Anthems, Pure Dance, Throwback Hits, Everything Rock, Latin Grooves, Hip Hop/R&B, and Top Country. There is an option to filter by music to match your current mood. You will not need an Apple Music subscription to enjoy the music, although if you do, you will be able to incorporate your own music. When you purchase a new Apple Watch you get a 3-month free trial of the service. The price is $9.99 per month / $79.99 per year. If you are subscribed to Apple One’s premier plan, Fitness+ will be included in your subscription. Apple Fitness+ will be available Monday, December 14, and requires iOS 14.3, watchOS 7.2, iPadOS 14.3, and tvOS 14.3. For Apple Watch users, Fitness+ will automatically appear as a new tab in the Fitness app on iPhone; the Fitness app for iPad will be available to download from the App Store; and on Apple TV, the Fitness app will automatically appear once users upgrade their software.
https://medium.com/adventures-in-consumer-technology/apple-fitness-launches-december-14-f2ecfb2279cf
['Yash Patak']
2020-12-12 13:56:38.277000+00:00
['Apple Fitness Plus', 'Apple Fitness', 'Workout', 'Apple', 'Apple Watch']
Life is Best Lived Because of Its Unpredictable Nature
You are awesome too, and continue to write some of the best quality writing we’re proud to feature: Ditching My iPhone for a Flip Phone Made Me See How Useless Smartphones Can Be by Megan Holstein Sunday night before last, I was lying in bed and looking at my Screen Time stats as I do every Sunday night. This weekly review is meant to help me keep an eye on my phone use and identify problems before they balloon out of proportion. → READ MORE. And you’ll love Megan for this one too… (Honest truth, I hate shopping, so I fell in love with this piece). My 2021 New Year’s Resolution is to Save My Wallet, Sanity, and the Earth by Doing a Clothes Shopping Ban Clothes shopping is a normal part of being a person, especially in 21st century America. Women plan girls’ night out with a stop to the mall, and parents plan shopping days with their children. → READ MORE. Why One-Word Intentions Work Better Than Goals and New Year Resolutions by Omar Itani “What’s your new year’s resolution?” That’s the one question that will flaunt around toward the final few days of every year. → READ MORE. 5 Unusual Things That Might Be Ruining Your Sleep by J.J. Pryor I still remember faking sick when I was a kid. I’d rub my forehead until it was red, wait a few minutes, walk into my parent’s room and raspily say “I don’t feel so good.” → READ MORE. How Public Health Crises Affect Literature by Jane Trombley In a sobering essay in March 2020, The New York Times’ columnist David Brooks tapped into a human truth: when things get ugly, we tend to leave the details to the dry record of historical fact. → READ MORE. Focus on Your Strengths to Reach Your Goals Faster by Jon Hawkins According to Psychologist Alison Ledgerwood, our perception of the world is naturally negative. → READ MORE. Time Is Your Most Valuable Resource by Pamela Hazelton Your time is worth everything. This undervalued asset is easily spent way quicker than money. It’s frequently wasted, with little regard to the fact it cannot be recouped. → READ MORE. ‘Hillbilly Elegy’ Does Not Need to Be a Political Analysis by Ryan Fan Last night, I watched Hillbilly Elegy on Netflix the Ron Howard adaptation of the 2016 memoir of the same name by J.D. Vance. I thought it was a very good movie with touching personal testimony and family drama. → READ MORE. Anxiety is a Smoldering Flame That Can Easily Ignite by Kristina H Anxiety is no joke. It is the one emotional challenge that, probably everyone, has at one time or another, and at varying levels. → READ MORE. How To Have Difficult Conversations When You Don’t Like Conflict by Matt Lillywhite For the longest time, I genuinely believed that avoiding conflict was smart. “If I don’t argue with anyone, all of my relationships will be absolutely perfect.” At least, that’s what I told myself. → READ MORE. Interrupt Often as an Effective Communication Tool by Mathias Barra As a child, I was taught to be silent when listening or interrupt my interlocutor. That’s how conversations unfold in France. → READ MORE. How Losing My Keys Made Me Question My Attachments by Crystal Jackson Paddling on the lake is just standard operating procedure for me. Paddling with a puppy — well, that’s new. So, it shouldn’t be surprising that the ultimate mishap happened when I was managing a puppy on a paddleboard. → READ MORE. Why Being Smart is No Guarantee of Success by Dr. Audrey I was only a kid, but I still clearly remember the day my mom returned from Germany and her grammar school reunion. 
Both my parents are German, and trips back and forth between Germany and the States weren’t a big deal. → READ MORE. Grow Your Mental Resilience to Stress Less About People’s Opinions by Martina Doleckova Next door to us lives a guy who is the embodiment of happy confidence. Whenever he walks past our windows, he’s singing out loud to his headphones. → READ MORE. Anxious Parents Don’t Have to Raise Anxious Kids by Eric Sangerma As a dad, I want to keep my children safe from all the dangers of the world. When they’re distressed or scared, I want to be the one to protect them. But according to all the parenting advice I’ve ever read → READ MORE. Two of the most powerful heart stories I’ve written recently can be found here and here. Sending peace and blessings with you now and into the new year, Nicole PS: Do you know I have a writing course?
https://medium.com/publishous/life-is-best-lived-because-of-its-unpredictable-nature-9f8d6ed118b
['Nicole Akers']
2020-12-19 03:12:11.073000+00:00
['Parenting', 'Life Lessons', 'Travel', 'Writing', 'Self Improvement']
The Need of Philosophy to refine Scientific thinking and understanding
The Need of Philosophy to refine Scientific thinking and understanding
Scientific Inquiry is an art of interrogatives, reasoning, and logic

While doing research, we were advised to follow the methodological approach that another author had used for their investigation. There are laboratory standards and protocols one has to follow in doing science. Working for weeks in the Philippine Textile Research Institute with my colleagues, investigating the plausibility of Green Chemistry for developing nanotechnology, we spent a great deal of time, money, and effort on doing something wrong. Apparently, some of the research upon which we laid the foundations of our methods did not agree with the actual results we had developed in the lab. My team and I, as young and curious researchers, mulled over how and why we were wrong; like someone who had lost out in a long and tedious engagement, we demanded justification. We were on a deadline, it's too expensive, forget it.

We discussed the matter with some experts in that field and gathered their insights; they amounted to something like: "I'm not sure why exactly, but perhaps step X was done wrong (or poorly); start with another approach". Thinking of it much further, one is faced with so many variables at play. To dare to seek a satisfactory justification means to unravel the intricacies of those chemical interactions. In other words, it would take a lot of effort and more resources to investigate such matters at a level that would scientifically satisfy even a simple question; we were on a deadline, it's too expensive, forget it. Reformulate the conceptual framework, recalibrate, and revise. It was a decision made considering the constraints we had, as high school students nerding out in national research laboratories. It was fun though. Thanks to our research adviser, our journey into science began: we were exposed to the culture of researchers and interesting people working on something at such a young age. But our question remained unanswered.

After all, what does it mean to disagree with mere experiments? Other than mistakes, we were also faced with a deep question, which is not very likely to be the case: what if the methodology was wrong? On what grounds should we settle the case of our investigation? What is the justification of a scientific method? A rational scientist cannot deny the possibility of falsifying a methodology; but when the overwhelming consensus of the scientific community seems to agree with such a method, what forms of justification could one use to attempt to falsify it? Certainly, experimental results alone cannot suffice to establish counterarguments. By the same token, one could also ask: what is the justification of a scientific method?
https://medium.com/dave-amiana/the-need-of-philosophy-to-refine-scientific-thinking-and-understanding-d97dbd599321
['Dave Amiana']
2020-07-07 03:28:28.890000+00:00
['Science', 'Education', 'Philosophy', 'Philosophy Of Science']
Ditching The Dreadful Bin Liner
Bin liners! Brightly coloured rolls of the things lurk in our kitchen drawers and barely register on our consciousness. When our kitchen bin gets full, we tie the liner top and take it to our outside bin. Then, we grab a new bag from the roll and line our kitchen tidy bin with it, while no doubt thinking of things less mundane. And the cycle continues. Every day or so, depending on how much kitchen waste you generate. Using something as seemingly innocuous as a kitchen bin liner may seem unimportant. Until you start to think about it. We buy this product only to throw it away. It has no other use. Its entire purpose is to hold rubbish for a short time before it and its contents end up in landfill. Do some quick calculations and you start to realize how appalling bin liners are from an environmental perspective. Hundreds per year from every household that uses them. Around the world, millions upon millions of them end up in garbage dumps. All that extra, environmentally destructive plastic waste. All those precious resources wasted to produce and transport them. All for no good reason. It’s not as though bin liners are an essential item. For your average household kitchen, they are completely unnecessary and we can easily do without them. Go Naked in the Kitchen! Well, you can keep your clothes on if you like, but your little kitchen tidy bin can and should go stark naked. Here’s how to eliminate bin liners and achieve kitchen bin nakedness with little effort. Recycle First, recycle everything you can! This is probably already happening at your house, but it's worth reviewing what you can and can't recycle from time to time. Information about recycling will likely be on your local council website or perhaps inside the lid of your council recycle bin. Your kitchen recycle bin should ALREADY be naked. Recyclable items should not be placed into bags because it makes sorting and processing much harder. Compost Even if you don’t do much gardening, you can help reduce kitchen waste by composting. I keep a small bin on the kitchen bench for food scraps and other compostable material. Every day, I transfer the bin’s contents to a larger compost bin in the garden. Once your food scraps are in the main compost bin, you don't really need to do much else. There are ways to enhance and speed up the composting process if you want, but otherwise, just leave it be. Every few months, you can remove the bottom contents of the bin and spread it over a garden bed. This enhances the health of the soil and keeps a lot of rubbish out of landfill. Keep a Little Bag Box Most household rubbish can go directly into your unlined kitchen bin. But, occasionally you might need to wrap some items such as messy food scraps that can’t be composted before binning them. To deal with that, I collect and keep food and produce bags that cannot be recycled. Bread bags, soft plastic bags from frozen vegetables, bags from packaged items and plenty more. I keep these bags handy in a box in the kitchen so that I can grab one if an item needs to be wrapped before binning. Clean Your Naked Bin at The End of Wash Ups One of the main reasons that people use bin liners is to keep the kitchen tidy bin clean. And, it's true that your naked kitchen bin might get a bit grubby without a liner. I empty the bin just before I wash up each day. That way I can give it a clean during the wash-up without wasting any extra water or detergent. 
If you use a dishwasher, and the bin won’t fit, giving the bin a quick rinse or wipe once a day should do the trick. Compromising If you’re not ready to completely ditch your bin liners, an alternative approach is to simply reuse the bags. Rather than dump the entire bag, just empty its contents into your outside bin and then put the bag back into your kitchen bin for a second round. One liner should do several days before it needs to be replaced. If you use the small unrecyclable bags you’ve collected to wrap messy rubbish, the bags should not get too dirty. While you’ll still be using liner bags, the amount you use over time should be greatly reduced. Aside: Plastic Shopping Bags Major stores and supermarkets here in Australia have now eliminated single-use plastic shopping bags. Shoppers need to either bring their reusable green bags with them or purchase multiple-use plastic bags at checkout. This is a very welcome and positive move that should be applauded. However, a small consequence of the move is that our households no longer have collections of used plastic shopping bags at hand. One of the primary uses for such bags was as bin liners. And, with the shopping bags no longer available, many people have reverted to actually buying bin liners. So, it’s even more important that we ditch the bin liner. We don’t want to compromise the environmental benefits of eliminating single-use shopping bags by purchasing a gazillion more plastic bags that are manufactured solely to be thrown away.
https://medium.com/positively-green/ditching-the-dreadful-bin-liner-9d4975646cb1
['Brett Christensen']
2019-10-04 04:34:36.793000+00:00
['Environment', 'Recycling', 'Kitchen', 'Plastic', 'Garbage']
Fizz Buzz as a problem in group theory
The standard solution to this problem (implemented in Python) is the familiar chain of conditionals, reconstructed in the sketch at the end of this article. That solution works, but the style is implicit. Some thought needs to be put into the order of the conditionals: the all(...) check must come first, or else we will be printing "Fizz" (or "Buzz") instead of "FizzBuzz" when i=15. Code like this, while correct, is difficult to maintain because of the implicit dependency between the conditionals. If someone didn't know about the FizzBuzz problem, he wouldn't realize that you can't shuffle the conditionals about, because nothing in the code would visibly break if he did. Dependencies which arise in the context of the requirements, and not explicitly in code, are what make code bug-prone. I think there is a better, more explicit way of solving this problem which highlights its inherent mathematical nature. And by appreciating the mathematical context, we get a more elegant way of implementing the FizzBuzz printer and remove implicit dependencies from the code.
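For reference, here is a minimal reconstruction of the standard solution discussed above; the exact formatting of the original snippet may have differed, but the all(...)-first structure is the point.

# The classic implicit-ordering solution: the combined check must come first.
for i in range(1, 101):
    if all(i % d == 0 for d in (3, 5)):
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)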
https://medium.com/analytics-vidhya/fizz-buzz-as-a-problem-in-group-theory-b49ffc425792
['Tang U-Liang']
2020-08-07 06:10:34.148000+00:00
['Programming Tips', 'Python', 'Group Theory']
A Fine Line Between Smart and Creepy Online Marketing
3. Random Comments

Another way to increase your website rankings is to pop in on hundreds of blogs and websites around the net and leave random comments at the end of each post, all with a link back to your site. Genius? THINK AGAIN!

Imagine for a moment you are walking down the street and "drop in" on every person you pass who is talking to someone else. Without listening to or understanding the conversation, you randomly say "Rhinoceros", or "I want shoes like yours. Where did you get them?" You can expect more than the odd look. Generally, people will think you a pest and tell you to go away (or they will just find you creepy and take great pains to avoid you). These sorts of comments will not have people racing towards you to buy your services.

The online world is the same. Adding value to quality conversations is great for your reputation. Dropping spam comments on random conversations is creepy and damages your reputation and even your rankings. Many business websites are now finding that the cheap SEO company they outsourced their work to used a random comment (or link spam) process to generate links, and their website has now been penalized by Google. The business owners now have to work with a reputable SEO company to send out thousands of requests to sites around the world to have these spam links removed. Cheap has definitely turned into creepy and nasty.

I'm serious… it will do you more damage than good. As a rule of thumb, if you don't know how to do it properly, hire somebody who can do the job and don't go cheap. Trust me, it will be worth every penny.
https://medium.com/swlh/a-fine-line-between-smart-and-creepy-online-marketing-2529945f658f
['Junie Rutkevich']
2019-05-22 16:09:01.980000+00:00
['SEO', 'Online Marketing', 'Marketing', 'Digital Marketing', 'Internet Marketing']
Parsing REST API Payload and Query Parameters With Flask.
Intro

For a very long time, Flask was my first choice of framework when it came to building micro-services (until Python 3.6 and async/await came to life). I have used Flask with a number of different extensions depending on the situation: flask-restful, flask-restplus, webargs, etc. All of these extensions were great until I needed to parse the request payload or query parameters; it gets painful, and the reason is that they are all built on top of marshmallow.

Marshmallow

Marshmallow is probably the oldest Python package for object serialisation and parsing, and probably most of the Flask extensions are built on top of it. Here is an example of marshmallow:

from datetime import date
from pprint import pprint

from marshmallow import Schema, fields

class ArtistSchema(Schema):
    name = fields.Str()

class AlbumSchema(Schema):
    title = fields.Str()
    release_date = fields.Date()
    artist = fields.Nested(ArtistSchema())

bowie = dict(name="David Bowie")
album = dict(artist=bowie, title="Hunky Dory", release_date=date(1971, 12, 17))

schema = AlbumSchema()
result = schema.dump(album)
pprint(result, indent=2)
# { 'artist': {'name': 'David Bowie'},
#   'release_date': '1971-12-17',
#   'title': 'Hunky Dory'}

The Pros

- Stable and provides a lot of complex functionality

The Cons

- Can be confusing sometimes when you use dump vs load
- It is old and does not use Python type annotations
- Documentation is terrible and ugly

Pydantic

Another package for object parsing, validation and serialisation, which is more intuitive and uses Python type annotations.

The Pros

- Uses Python type annotations
- Documentation is fabulous, check here
- No dump or load
- Can be used for settings management as well

The Cons

- It is new and is not yet battle-tested like marshmallow
- There are not many Flask extensions/packages built on top of it

The same example written using pydantic:

from datetime import date
from pprint import pprint

from pydantic import BaseModel

class ArtistSchema(BaseModel):
    name: str

class AlbumSchema(BaseModel):
    title: str
    release_date: date
    artist: ArtistSchema

bowie = dict(name="David Bowie")
album = dict(artist=bowie, title="Hunky Dory", release_date=date(1971, 12, 17))

schema = AlbumSchema(**album)
pprint(schema.dict(), indent=2)
# { 'artist': {'name': 'David Bowie'},
#   'release_date': datetime.date(1971, 12, 17),
#   'title': 'Hunky Dory'}

Here is the same exact example, written in a much more intuitive way.

Webargs

Webargs is a popular Python library for parsing and validating HTTP request objects, with built-in support for Flask, built on top of marshmallow:

from flask import Flask

from webargs.flaskparser import use_args
from marshmallow import Schema, fields

app = Flask(__name__)

class UserSchema(Schema):
    name = fields.Str()

@app.route("/")
@use_args(UserSchema(), location="query")
def index(args):
    return "Hello " + args["name"]

if __name__ == "__main__":
    app.run()

Luckily, there is another webargs package which uses pydantic now, called pydantic-webargs.
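To make the pydantic side concrete in a Flask view without committing to any particular extension, here is a minimal hand-rolled sketch; the route, model, and field names are illustrative, not taken from pydantic-webargs.

from flask import Flask, jsonify, request
from pydantic import BaseModel, ValidationError

app = Flask(__name__)

class UserQuery(BaseModel):
    name: str
    age: int = 0  # optional, with a default; pydantic coerces "42" -> 42

@app.route("/")
def index():
    try:
        # Validate and coerce the query string against the model.
        args = UserQuery(**request.args.to_dict())
    except ValidationError as err:
        # err.errors() is a JSON-serialisable list of failure details.
        return jsonify(err.errors()), 400
    return f"Hello {args.name}, age {args.age}"

if __name__ == "__main__":
    app.run()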
https://medium.com/swlh/parsing-rest-api-payload-and-query-parameters-with-flask-better-than-marshmallow-aa79c889e3ca
['Ahmed Nafies']
2020-10-25 09:59:55.540000+00:00
['Https', 'Json', 'Python', 'Web Development', 'Flask']
Infogram Insights: Startup Failure Rates
The challenges faced by successful businesses are as diverse as the entrepreneurial landscape itself. And sometimes, a simple series of unexpected pitfalls and unwary mistakes can shatter dreams of becoming “the next big thing.” ‘Infogram Insights’ offers a deeper look at relevant, newsworthy topics — visualized with Infogram. Every week we explore the data that is forever changing our world. Do you have a story you’d like to tell with data? Infogram features beautiful templates, 36+ chart types, 200,000+ icons, and 200+ regional maps.
https://medium.com/infographics/infogram-insights-startup-failure-rates-4820b8f33cb9
[]
2016-12-07 21:13:27.321000+00:00
['Business', 'Startup', 'Startup Lessons', 'Data Visualization', 'Infographics']
Graphing The SIR Model With Python
Graphing The SIR Model With Python Graphing and solving simultaneous differential equations to model COVID-19 spread If one good thing has come out of the COVID-19 pandemic, it’s the vast amount of data we have acquired. In light of technological advancements, we have access to more information and computing power, which we can use to predict and curb the spread of the virus. One of the simplest ways to do this is through the SIR model. The SIR is a compartmental model that categorizes a constant population into three groups, namely the susceptible, infected, and recovered. These can all be expressed as functions that take time as an argument. S(t) — The number of people who are susceptible to the disease I(t) — The number of people infected with the disease R(t) — Those incapable of infecting others; either recovered or diseased (hence a better name for this group may be “removed”) Of note, S(t) + I(t) + R(t) = N at any time t, where N is the total population. Importing the needed Python libraries from scipy.integrate import odeint import numpy as np import matplotlib.pyplot as plt There are several libraries that we can use to smoothen out the calculations and graphing process. The SciPy library contains calculation methods that are “user-friendly and efficient.” We will be using their numerical integration function for this program. We will also be using Numpy for float step values and Matplotlib for graphing. Finding the rate of change of each function We can’t directly find an equation for each function. But we can derive the rate of change of each function at time t. In other words, it's a derivative. The amount of susceptible people generally starts off close to the total population. This number then decreases with time as susceptible people become infected. The number of newly infected people is a percentage of the possible interactions between susceptible and infected individuals. We can call this infection rate as ‘a’, while the possible interactions being the product of S(t) and I(t). The change in susceptible people is therefore S’(t) = -a*S(t)*I(t) The decrease in the number of susceptible people is the same as the increase in the number of infected people. To find the entire derivative of I(t), we must also consider those who have recovered or died after being infected. That is simply the recovery rate multiplied by the current number of infected individuals. With the recovery rate as ‘b’, we then have I’(t) = a*S(t)*I(t) — b*I(t) Calculating the derivative of R(t) is then a simple matter, as it is just the second term of I’(t). In the SIR model, recovery (or more aptly removal) only increases with time. The increase in R(t) is the product of the recovery rate and the infected population: R’(t) = b*I(t) We can now use these derivatives to solve the system of ordinary differential equations through the SciPy library. Defining the necessary constants From our equations above, we see that there are two constants that need to be defined: the transmission rate and the recovery rate. For now, we’ll set the transmission rate to be 100% and the recovery rate to be 10%. a = 1 # infection rate b = 0.1 # recovery rate Creating function f(y, t) to calculate derivatives # FUNCTION TO RETURN DERIVATIVES AT T def f(y,t): S, I, R = y d0 = -a*S*I # derivative of S(t) d1 = a*S*I — b*I # derivative of I(t) d2 = b*I # derivative of R(t) return [d0, d1, d2] Next, we have to define a function to return the derivatives of S(t), I(t), and R(t) for a given time t. 
Remember that we actually solved for these already and are just encoding the equations in a function. In the following lines of code, d0, d1, and d2 are the derivatives of S, I, and R, respectively. d0 = -a*S*I # derivative of S(t) d1 = a*S*I — b*I # derivative of I(t) d2 = b*I # derivative of R(t) Note that the values of S, I, and R are not yet defined here (although we don’t need R(t) to find its derivative). Since these functions are dependent on each other, we will first get the previous values of S(t), I(t), and R(t) to calculate their derivatives. S, I, R = y # or S = y[0] I = y[1] R = y[2] Replacing the variable R with an underscore _ is also acceptable, but it’s best to be explicit and descriptive. Defining the necessary initial values Before we can calculate the values of the functions at time t, we must first find the initial values. The Philippines will be used as a sample population, t=0 on March 1, 2020. The initial number of susceptible people is the total population, which is around 109,581,078. Based on the Department of Health’s COVID-19 tracker, the initial cases on March 1, 2020, is 50 people. And of course, we can set the total recovered from being 0. It would make things cleaner to keep the values between 0 and 1. Such can be accomplished by dividing all the values by the population. S_0 = 1 I_0 = 50/109_581_078 R_0 = 0 It would be helpful to store these values in a list, and we’ll see why this is important later. y_0 = [S_0,I_0,R_0] Let’s also not forget to define the domain or the time range. We’ll do this with Numpy’s linspace so as to incorporate decimal values for steps if desired. t = np.linspace(start=1,stop=100,num=100) Solving the differential equations with ODEInt Now that we have defined all the needed variables, constants, and functions, we can now solve the system of ordinary differential equations. We will be solving each differential equation for a range of 100 days as specified in the time range. Doing so is actually simple with all the parameters set; the code is just one line: y = odeint(f,y_0,t) f is the function f(y, t), which we defined earlier, y_0 is the list of our initial values, and t is the list of equally spaced time values. The variable y is actually a Numpy ndarray, or an N-dimensional array. >>> print(type(y)) <class ‘numpy.ndarray’> In this case the odeint() function returns a 2 dimensional array. A row is a list of values for a given time t, while a column represents the values for S, I, or R. The values for S(t) would be found in the first column, I(t) in the second, and R(t) in the third. We can, therefore, access them as follows: An element of such an array located in the nth row, mth column can be indexed as element = y[n,m] And we can use list splicing to view all the values in a column. S = y[:,0] I = y[:,1] R = y[:,2] Graphing the values for each function plt.plot(t,S,’r’,label='S(t)') plt.plot(t,I,’b’,label='I(t)') plt.plot(t,R,’g’,label='R(t)') plt.legend() plt.show() The first 2 arguments in the plot() function indicate the x and y values. We can then specify the color and label of each line. The legend() function places a legend on the graph, referencing the “label” keyword argument for each line. Finally, we use the show() function to actually generate the graph. Further exploration It is worth pointing out that we can increase the precision of the graph by decreasing the specified step values. (In this case, by increasing the “num” keyword argument). 
t = np.linspace(start=1, stop=100, num=200)
t = np.linspace(start=1, stop=100, num=500)
t = np.linspace(start=1, stop=100, num=1000)

Moreover, making parameters such as the transmission and recovery rates closer to the actual data would make the model more accurate. All said and done, the SIR model demonstrates the invaluable role technology and mathematics play in dealing with real-world issues. You can find the code for this article here: https://github.com/JoaquindeCastro/SIR-Model/blob/main/SIR-ODE-Integrate.py
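For convenience, here is the full program assembled from the snippets above; everything is exactly as defined earlier, and the only addition is an optional printout of the day on which infections peak.

# Full SIR script assembled from the snippets above
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

a = 1    # infection (transmission) rate
b = 0.1  # recovery rate

def f(y, t):
    # Return the derivatives [S'(t), I'(t), R'(t)] at time t
    S, I, R = y
    return [-a*S*I, a*S*I - b*I, b*I]

# Initial values, normalized by the population of the Philippines
y_0 = [1, 50/109_581_078, 0]

# Solve over 100 days
t = np.linspace(start=1, stop=100, num=100)
y = odeint(f, y_0, t)
S, I, R = y[:, 0], y[:, 1], y[:, 2]

# Optional extra: the day on which infections peak
print("Infections peak around day", t[np.argmax(I)])

plt.plot(t, S, 'r', label='S(t)')
plt.plot(t, I, 'b', label='I(t)')
plt.plot(t, R, 'g', label='R(t)')
plt.legend()
plt.show()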
https://medium.com/towards-artificial-intelligence/graphing-the-sir-model-with-python-e3cd6edb20de
['Joaquin De Castro']
2020-11-05 14:45:21.389000+00:00
['Pandemic', 'Data Visualization', 'Python', 'Education', 'Mathematics']
The Word Boss
The Word Boss There’s no greater power than controlling words Photo by Bill Adler Leon’s chair groaned as he leaned his three hundred pounds backward. The banker’s lamp on his desk cast deep shadows on his acne-scarred face and a bright spot on the silver and red money counter. Although a cigar muffled his speech, Kathy understood him clearly enough. “Nikhedonia. That’s going to cost you one hundred.” Kathy met Leon’s words with a gasp. She almost brought her hand to her face, but her instinct for self-preservation stopped her. No sudden moves. “Pay up or don’t use nikhedonia.” Leon chuckled. His eyes blackened, becoming both a window into his soul and a fortress surrounding it. “I don’t make the rules, I just enforce them.” He flicked open the switchblade on his desk and used it to drill a small hole into the cigar tip. Like hell you don’t. Kathy’s face tensed. The two armed bodyguards on opposite sides of Leon moved their hands to the pistols tucked in their pants. She willed herself to appear non-threatening. “That’s outrageous. You probably don’t even know — ” Kathy instantly regretted that sentence. Leon had a volatile and violent reputation. She’d heard the stories. A friend of a friend self-published a novel without paying any word authorization fees. That writer spent a month in the hospital. Another writer, ironically a mystery author, was found dead at the bottom of a lake in the Catskills, weights tied to his arms and legs, a paperback thesaurus stuffed into his mouth. There were no free words, not if you valued your kneecaps or life. Leon leaned forward and slammed his beefy arms on the desk, making the room shake like a thunderclap had assaulted it. “I know what nikhedonia means!” he growled. “If you’re here to insult me, the price just went up twenty-five percent.” “No, no, sir.” Kathy lowered her eyes to the floor. “I’m sorry. It’s just that Mr. Saly’s fee is only twenty dollars a word.” “That’s indeed true, but he’s the boss of easy words like ‘refrigerator’ and ‘swim.’ You want to use a fancy word in your book, you have to pay fancy prices.” He released a long breath, like a deflating balloon. Leon seemed calmer to Kathy now, maybe because he knew that she knew she had no road to travel other than the road that ended at his wallet. “If you want to sound like you’re a know-nothing writer, fine then, just use the words controlled by Saly and the other dime-store word bosses. If you want your prose to shine with words like ‘jentacular’ and ‘crapulence’ you have to pay up.” Leon’s office had more in common with a coffin than an office. Dented filing cabinets lined three walls, and the floor was covered by a tattered, brown carpet that bore the impressions of dozens of fallen bodies. The carpet carried the metallic stench of dried blood. A photograph of an emaciated old man hung on the wall behind Leon. His father or grandfather? The man in the photograph was so skinny it was hard to imagine a genetic connection between him and the minivan-sized Leon. I might as well get it all over with today so I don’t have to come back. “Can I get perfidy and excoriate, too?” she asked. Leon shook his head. “Those words are in Joseph Talin’s territory. Mid-level words. I only deal with the fancy words. You’ll have to see him, but I gotta tell you, Talin doesn’t have a charming personality like me.” Leon interlaced his fingers, reversed his palms, and cracked his knuckles. “We bosses don’t encroach on each other’s territories. That’s our golden rule. 
The less violence, the better.” Except for the poor writers who don’t pay, Kathy thought. “Okay,” she said. Kathy offered the bodyguards a disarming smile, hoping to telegraph, “no weapon.” Slowly, she reached into her bag and withdrew her wallet. With trembling hands, she counted out five twenties and held them in her outstretched arm. The closest guard snatched the bills and deposited them on Leon’s desk. Leon scooped up the cash and slid the pile into the money counter, which whirred as it sorted and counted. Kathy spied the digital counter: $36,100. She wondered if that was Boss Leon’s haul for a day, a week, or maybe just an hour. “I love that sound,” Leon said. He chewed on his cigar for a few seconds. “You’re a good writer?” Kathy nodded like one of those bobble head dolls. “I think so. I get good reviews. My most recent novel was in the top ten in thrillers on Amazon.” “Tell you what. I don’t get many writers who want jentacular or poltophagy. You want to use either of those, they’re only fifty bucks each.” He made an odd purring sound. “Deals like that don’t come along every day.” Kathy wasn’t sure if Leon was being generous or making a demand. “Okay.” She retrieved another hundred dollars from her wallet. “Good then,” Leon said. “We’re done here.” He flicked his hand her way. Kathy stood and sidled toward the door. She hesitated before reaching for the doorknob and turned back toward Leon. “Can I ask you a question?” She had no idea where her sudden boldness came from. “Seeing how we’re going to be long-term business partners, sure.” “With these word authorization fees, how does any writer make a living?” “That’s not my problem. Now get out of here.” The bodyguards tapped the sides of their pistols. Kathy didn’t need to be told again.
https://billadler.medium.com/the-word-boss-a46f0c846b5f
['Bill Adler']
2020-12-23 03:18:41.418000+00:00
['Crime', 'Fiction', 'Short Story', 'Thriller', 'Writing']
Introduction to Apache Spark with Scala
This article is a follow-up note for the March edition of the Scala-Lagos meet-up, where we discussed Apache Spark, its capabilities and use-cases, as well as a brief example in which the Scala API was used for sample data processing on Tweets. It is aimed at giving a good introduction to the strengths of Apache Spark and the underlying theories behind those strengths.

Spark — An all-encompassing Data processing Platform

"If there's one takeaway it's just that it's okay to do small wins. Small wins are good, they will compound. If you're doing it right, the end result will be massive." — Andy Johns

Apache Spark is a highly developed engine for data processing on a large scale over thousands of compute engines in parallel, which allows processor capability to be maximized over these compute engines. Spark has the capability to handle multiple data processing tasks, including complex data analytics, streaming analytics, graph analytics, and scalable machine learning on huge amounts of data in the order of Terabytes, Zettabytes and much more.

Apache Spark owes its win to the fundamental idea behind its development: to beat the limitations of MapReduce, a key component of Hadoop. Thus far its processing power and analytics capability are several magnitudes (up to 100×) better than MapReduce, with the added advantage of in-memory processing: Spark is able to keep its data in a compute engine's memory (RAM) and perform data processing over that in-memory data, eliminating the need for continuous Input/Output (I/O) of writing and reading data from disk. To do this effectively, Spark relies on a specialized data model known as the Resilient Distributed Dataset (RDD), which can be efficiently stored in-memory and allows for various types of operations. RDDs are immutable, i.e., read-only, collections of data items that are stored in-memory and effectively distributed across clusters of machines; one can think of an RDD as a data abstraction over raw data formats, e.g. String or Int, that allows Spark to do its work very well.

Beyond RDDs, Spark also makes use of a Directed Acyclic Graph (DAG) to track computations on RDDs. This approach optimizes data processing by analyzing the job flow to apply performance optimizations, and it has the added advantage of helping Spark manage errors when there are job or operation failures, through an effective rollback mechanism. Therefore, in case of errors, Spark doesn't need to restart computation from the beginning; it can simply reuse the RDD computed before the error and pass it through the fixed operation. This is why Spark is designated a fault-tolerant processing engine.

Spark also leverages a cluster manager to properly run its jobs across a cluster of machines. The cluster manager helps with resource allocation and the scheduling of jobs in a master-worker fashion: a master distributes jobs and allocates the necessary resources to the workers in the cluster, and coordinates the workers' activity such that, in case a worker becomes unavailable, the job is reassigned to another worker. With the ideas of in-memory processing using the RDD abstraction, the DAG computation paradigm, and resource allocation and scheduling by the cluster manager, Spark has gone on to be an ever-progressing engine in the world of fast big data processing.

Spark Data Processing Capabilities.
Structured SQL for Complex Analytics with basic SQL

A well-known capability of Apache Spark is how it allows data scientists to easily perform analysis in an SQL-like format over very large amounts of data. Leveraging spark-core internals and the abstraction over the underlying RDD, Spark provides what is known as DataFrames, an abstraction that integrates relational processing with Spark's functional programming API. This is done by adding structural information to the data, giving it semi-structure or full structure using a schema with column names; with this, a dataset can be directly queried using the column names, opening another level of data processing.

Starting at version 1.6 of Spark, there is the Dataset API, which comes with the Structured SQL API. It provides a high-level SQL-like capability on top of the somewhat low-level RDD of spark-core. In literal terms, the Dataset API is an abstraction that gives an SQL feel and execution optimization to Spark RDDs, by using the optimized SQL execution engine without losing the functional operations that come with RDDs. Both the Dataset API and the DataFrame API form the Structured SQL API.

Spark Streaming for real-time analytics

Spark also provides an extension to easily manipulate streaming data, by providing an abstraction over the underlying RDD in the form of the Discretized Stream. Building on the underlying RDDs of spark-core has two main advantages: it allows other core capabilities of Spark to be leveraged on streaming data, and it makes the core operations that can be performed on RDDs available for streams as well. A Discretized Stream of data means RDD data obtained in small real-time batches.

MLLib/ML Machine learning for predictive modeling

Spark also provides machine learning capability, offering machine learning algorithms, data featurization algorithms and pipelining capabilities, optimized to scale over large amounts of data. The Spark machine learning library's goal is summarized thus: to make practical machine learning scalable and easy.

GraphX Graph Processing Engine.

The fourth data processing capability is Spark's ability to perform analysis on graph data, e.g. in social network analysis. Spark's GraphX API is a collection of ETL processing operations and graph algorithms that are optimized for large-scale implementations on data.

Initializing Spark

There are several approaches to initializing a Spark application depending on the use case; the application may be one that leverages RDDs, Spark Streaming, or Structured SQL with Dataset or DataFrame. Therefore, it is important to understand how to initialize these different Spark instances.

1. RDD with Spark Context: Operations with spark-core are initiated by creating a SparkContext. The context is created with a number of configurations, such as the master location, the application name, and the memory size of executors, to mention a few. There are two ways to initiate a SparkContext, shown in the sketch below, along with how to make an RDD with the created context.

2. DataFrame/Dataset with Spark Session: As observed above, an entry point to Spark could be the SparkContext; however, Spark also allows direct interaction with the Structured SQL API through the SparkSession. It likewise involves specifying the configuration for the Spark app. The sketch below also shows how to initiate a SparkSession and create a Dataset and a DataFrame with the session.

3. DStream with Spark Streaming: The other entry point to Spark is the StreamingContext, used when interacting with real-time data. An instance of StreamingContext can be created either from a Spark configuration or from an existing SparkContext, as also shown below.
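The code samples embedded in the original post did not survive here, so the following is a minimal reconstructed sketch of the three entry points; the application name, master URL, and input path are placeholder assumptions rather than the author's originals.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}

// 1. RDD with SparkContext: one way is to build it directly from a SparkConf
val conf = new SparkConf().setAppName("SparkIntro").setMaster("local[*]")
val sc = new SparkContext(conf)

// Creating RDDs with the context
val numbersRdd = sc.parallelize(1 to 100)           // from a local collection
val tweetsRdd  = sc.textFile("path/to/tweets.txt")  // from a text file (placeholder path)

// 2. Dataset/DataFrame with SparkSession; getOrCreate() reuses the context above,
// and spark.sparkContext is the other way to obtain a SparkContext
val spark = SparkSession.builder()
  .appName("SparkIntro")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._
val ds = Seq(("scala", 10), ("spark", 20)).toDS()   // a typed Dataset
val df = ds.toDF("word", "frequency")               // its DataFrame (named columns) view

// 3. DStream with StreamingContext: from an existing SparkContext, or from a SparkConf
val ssc = new StreamingContext(sc, Seconds(5))
// val ssc = new StreamingContext(conf, Seconds(5)) // the SparkConf variant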
Operations on RDD, Datasets and DataFrame

Having seen a good glimpse of the capabilities of Spark, it's important to show some operations that can be applied over the various Spark abstractions.

1. RDD

The RDD, which is Spark's main abstraction and sits at the center of spark-core, has two basic kinds of operations.

Transformations: Transformation operations are applied to existing RDDs to create new, changed RDDs. Examples of such operations include map, filter and flatMap, to mention a few; a full list of transformation operations in Spark can be found here. Once a SparkContext has been used to create an RDD, these operations can be applied to it, as seen in the sketch at the end of this article. It is important to note that transformations are lazily evaluated, in that they are not actually computed until an action operation is applied.

Actions: Action operations trigger an actual computation in Spark; they drive the computation to return a value to the driver program. The idea of action operations is to bring the results of a computation back from the cluster to the driver, producing a result in actual data types, away from Spark's RDD abstraction. Care must be taken when initiating action operations, because it's important that the driver has enough memory to manage the returned data. Examples of action operations include reduce, collect and take, to mention a few; the full list can be found here.

2. DataSet/DataFrame

As mentioned earlier, the Dataset is the RDD-like optimized abstraction for Structured SQL that allows both relational operations as in SQL and functional operations like map and filter, among many other operations that are possible with RDDs. It is important to emphasize that not all DataFrame SQL-like capabilities are fully available with Dataset; however, there are many column-based functions that are still very much available with Dataset. There is also the added advantage of encoding Datasets as domain-specific objects, i.e. mapping a Dataset to a type T; this helps extend the functional capabilities that are possible with the Spark Dataset, adding the ability to perform powerful lambda operations.

A Spark DataFrame can further be viewed as a Dataset organized in named columns, presenting as an equivalent relational table that you can query with SQL-like syntax or even HQL. Thus, on a Spark DataFrame, performing SQL-like operations such as SELECT COLUMN-NAME, GROUP BY and COUNT, to mention a few, becomes relatively easy. Another interesting thing about the Spark DataFrame is that these operations can be done programmatically using any of the available Spark APIs (Java, Scala, Python or R), as well as by converting the DataFrame to a temporary SQL table on which pure SQL queries can be performed.

Conclusion.

To conclude this introduction to Spark, a sample Scala application, a word count over tweets, is provided in the sketch at the end of this article; it is developed with the Scala API. The application can be run in your favorite IDE such as IntelliJ, or in a notebook as in Databricks or Apache Zeppelin. In this article, some major points covered are:

Description of Spark as a next-generation data processing engine
The underlying technology that gives Spark its capability
The data processing APIs that exist in Spark
A knowledge of how to work with the data processing APIs
A simple example to have a taste of Spark's processing power

This article was originally published here
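Continuing the reconstructed sketch above (again, the original embedded samples were lost, so this is a hedged approximation rather than the author's exact code): a few transformations and actions on an RDD, the word count over tweets referred to in the conclusion, and the SQL-like DataFrame operations just discussed.

// Transformations are lazy: nothing is computed at this point
val squaresRdd = numbersRdd.map(n => n * n)         // map
val evensRdd   = squaresRdd.filter(_ % 2 == 0)      // filter

// Actions trigger the computation and return plain values to the driver
val total     = evensRdd.reduce(_ + _)              // reduce to a single Int
val firstFive = evensRdd.take(5)                    // take a few elements
val allEvens  = evensRdd.collect()                  // collect everything (mind driver memory)

// The word count over tweets mentioned in the conclusion, in minimal form
val wordCounts = tweetsRdd
  .flatMap(line => line.split("\\s+"))              // flatMap: split each line into words
  .map(word => (word.toLowerCase, 1))
  .reduceByKey(_ + _)
wordCounts.take(10).foreach(println)

// DataFrame operations: programmatic column-based calls, or pure SQL via a temp view
df.select("word").show()
df.groupBy("word").count().show()
df.createOrReplaceTempView("words")
spark.sql("SELECT word, frequency FROM words").show()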
https://towardsdatascience.com/introduction-to-apache-spark-with-scala-ed31d8300fe4
[]
2019-04-02 13:22:33.048000+00:00
['Apache Spark', 'Data Science', 'Spark', 'Big Data', 'Scala']
Is Plastic-Free the New Organic?
image via Unsplash

The global food system has plenty of room for improvement: from the need to supplant the burdensome and unethical livestock industry, to mitigating the use of chemical pesticides, herbicides, and fertilizers, to overhauling the products we define as food in the first place, replacing artificial ingredients, excess sugars, and saturated fats with whole food ingredients. But there's another area getting a lot of attention lately: plastic packaging.

It's essential in the modern world: foods need delivery methods. Some of it is humble: a bag of frozen peas, a carton of almond milk. But we've also traded in fresh fruits and vegetables for packaged alternatives: fruit leather instead of apples, beans and peas in chip form, cauliflower pizza crusts. While all this packaging helps to extend the shelf-life of our beloved snacks, it also contributes to ever-increasing ocean pollution. Plastic packaging has been linked to exposure to harmful endocrine-disrupting chemicals like bisphenol-A and bisphenol-S. And just as consumers have been steadily increasing their spending on certified organic foods, particularly fruits and vegetables, in an effort to decrease their exposure to chemicals in nonorganic food, a growing shift away from plastic is gaining steam.

Malibu, San Luis Obispo, and Davis, Calif., Seattle, Miami Beach and Fort Myers, Fla., have all recently banned plastic straws, one of the most common pieces of beach debris. The UK is poised to ban straws, and just this week McDonald's announced that it would begin testing paper straws in select locations. "Giving up plastic straws is a small step," Diana Lofflin, the founder of StrawFree.org, an activist organization based in San Diego, told the New York Times, "and an easy thing for people to get started on. From there, we can move on to larger projects."

Earlier this year Dutch supermarket Ecoplaza made international headlines when it announced it was offering a "plastic-free" supermarket aisle for consumers seeking to reduce their dependence on and exposure to plastic. Now, a U.S. petition is calling on Kroger, the nation's largest supermarket retailer, to offer its customers a plastic-free aisle. The petition collected more than 115,000 signatures in one week.

via Unsplash

"Huge strides have been made in alternative packaging, but not nearly enough to make a dent in plastic's ubiquity," explains Food Dive. "This movement will require much more than a large supermarket changing a single aisle systemwide. It will require a fundamental shift. Though consumers say they want change, the U.S. plastic bottle recycling rate is less than 30%."

Concern over plastic is not new. Efforts to reduce, reuse, and recycle it date back to the 1970s. And despite victories like bans on plastic bags in cities in California, the UK, India, Australia, and Mexico, the substance keeps cropping up elsewhere, a game of environmental Whac-a-Mole. Plastic is one of the most durable substances. Unprocessed, plastic waste has a lifespan of hundreds of years. Not only does that require excessive landfill space, but plastic seems to find its way out of municipal waste systems and into the ocean, where it's coalescing into a massive gyre twice the size of Texas. Plastic waste is discovered inside the digestive systems of all sorts of marine animals. It's being discovered in humans, too. A recent study found that plastic will outnumber fish in the ocean by 2050.
"I think a lot of people feel overwhelmed by the magnitude of the plastic problem," says Lofflin. And it is overwhelming; more than 8 billion tons of plastic are on the planet, more than one ton per person. Studies have found numerous threats to human health from plastic exposure, and its impact on marine ecosystems is particularly detrimental.

And while the USDA's National Organic Program has been criticized of late, it has achieved one notable success: people recognize it. Consumers, whether they opt for organic or not, now recognize the symbol and understand, if not exactly how an organic carrot is grown, that it is most likely guaranteed to be a "better" carrot than the non-organic version. This delineation has propelled a movement in our food system driven by millennials seeking cleaner, healthier, and more ethical foods. It's part of the reason vegan and plant-based foods are so popular now. For many people the healthy quest started with an interest in pesticide-free foods, and from there a shift away from animal products followed. Regard for a plastic fork or water bottle that will outlive the person using it is following the same trajectory.

"We used science to create a material that lasts forever, and then we throw it away, all day, every day. That doesn't make any sense. And of course, there is no 'away.' Every single piece of plastic that has ever been created is still with us," Ayana Elizabeth Johnson wrote in Scientific American.

"We can't go back to a world free of plastic pollution," Johnson says. "But we can stop the problem from growing, and we can turn the trend around. To do that we really must break our addiction to plastic. We must refuse to use plastic. We must stop creating new plastic."

Is a plastic-free supermarket aisle the answer? Is offering plastic straws only by request the solution? Not really. But they start the critical conversation. "This will require a major cultural shift — the creation of new traditions, norms, trends, memes," Johnson says. "A shift away from a culture that accepts things being disposable."

Find Jill on Twitter and Instagram

Originally published on Organic Authority
https://jillettinger.medium.com/is-plastic-free-the-new-organic-90f8fb96c172
['Jill Ettinger']
2018-07-22 04:35:59.439000+00:00
['Environment']
Struggling With Major Depressive Disorder
Struggling With Major Depressive Disorder
I wish more people understood how it feels to have chronic depression.

Do normal people know depression feels like dying? Because it does. When it's a bad episode I'm sure I am slowly dying, but in that state I don't much care. Mostly I just lie in bed and wait to see. For the sake of those who love and depend upon me I hope to get better, to somehow get up, get going, shower, clean my house, engage in my hobbies, and enjoy my full and blessed life. But I can't do those things during a major depressive episode. Dying, it seems, would solve my biggest problem, which is that sometimes — more often since I'm older — I cannot muster the effort it takes to get out of bed.

The sheer weight of depression lies on me like a lead blanket. It's sometimes so heavy it seems I cannot lift my arms, and certainly not my whole body. Do people understand that depression causes those of us who are afflicted to feel guilty and ashamed that we cannot function properly? We are often thought to be lazy and shiftless for not getting up and doing the things adults must do in order to have good lives. Worse than what others say is that depression causes us to readily pass the same judgement on ourselves. It doesn't matter that we have a diagnosed brain disease and that its symptoms and manifestations have nothing to do with the quality of our character. We still feel the crushing weight of guilt on top of it all.

Depression Hurts

If one has the flu it's expected they will stay at home in bed. No one expects someone with the flu to be up and going about their regular business. It's not understood by most when depressed people can't get out of bed, shower, dress, go to work, shop, prepare meals, etc. We're often just seen as lazy. Too many in this supposedly enlightened time do not know that in addition to the psychological effects, depression has terrible physical effects. A major depressive episode hurts. Pain is magnified. The entire body often aches and joints feel stiff and painful. The worst, though, is the overwhelming exhaustion. We are too tired to live when we're suffering through an episode.

It's never that I actually want to die. Most depressives don't. Instead, I'm just too exhausted to live. Somewhere in my sick, chemically unbalanced brain I still want to get up and do the things I love. I want to keep working on the restoration of my old house — it's one of my favorite activities — and I want to go junking. I want to cook. I enjoy cooking for my husband and grandson. I want to go spend time with my mother. I want to take her places she can no longer get to by herself. I want to go to the library. I want to play with my paint and markers. I want to rearrange rooms and reorganize all the stuff stored upstairs. But sometimes I can't get up and get dressed. And even when I eventually can, I first lie in bed for hours trying to will myself to ignore the heaviness, the aching, and the feelings of hopelessness. When I finally drag myself out of bed, shower and dress, I'm exhausted. After that monumental effort, I've used up what energy I could muster. So I'm up and dressed, but usually I accomplish little or nothing.

Depressed People Aren't Lazy; They Are Incapacitated

If everyone understood depression can leave one as incapacitated as a physical disability, the lives of those who have it would be easier. How awful it is to be so ill and miserable and have others label us as "just lazy." It is not ok to say things like that.
It's also not ok to say things like, "snap out of it," or "rise and shine" or any of those well-meaning but irritating things that can trigger sadness, shame or, in some cases, extreme irritability.

I had a marriage fall apart because, according to him, I was "laying around crying all the time." I was. I was depressed and not on medication. He said I was feeling sorry for myself. I was guilty of that, too. I felt sorry for myself, but also hated myself for being unable to function. He found someone who was happy, he said, "like you used to be," and left me. Money was tight during that marriage and I didn't want to make it worse by running up bills seeing a psychiatrist or getting therapy. I'm over it, and found more happiness with the good man I'm married to now. But that divorce was largely due to untreated depression.

Depression Tricks Us Into Believing Nothing Will Help

It is depression — the very ailment we have — that tricks us into thinking it's hopeless to seek treatment. When one is in a bad patch, it's hard to imagine anything that could help. Major Depressive Disorder (MDD) can and does wreck lives. It cannot be minimized as "a feeling of sadness." It is so much worse than that. Depression comes and goes and no one knows why, except that the chemicals in the brain become unbalanced. There are no real answers as to why it happens. Moreover, it only happens to some of us, and happens temporarily for most people, while it can be a chronic and lifelong condition for others.

I first experienced depression as a teenager but had no idea what it was. Adults saw me as overly dramatic when I cried over the smallest things. I was kind of a melancholy kid from all the photos I see. I remember feeling as if no one loved me — although that was not true at all. I knew I was loved. But that's one of the disastrous effects of depression. It can cause us to adopt low self-esteem. It makes us feel as if no one could ever love us even if the facts prove otherwise. We often feel worthless. In the most acute phases of major depression, we have no hope for ourselves or the future.

Extraordinary Losses Often Caused By MDD

I've only been suicidal once, in my twenties, and I was able to get myself to a hospital and ask for help. After a few months of hospitalization I was well again. But I then had to cope with the ruins of what had been my life. The losses were extraordinary and changed the course of my life — though not in a good way. Whatever I had built was gone: my job, the trust people placed in me, and the lifestyle I had lived. My child, who was only 6 years old, was hurt by what she experienced as abandonment. My parents were scared half to death and worried whether I'd ever be alright. Fortunately, I got help instead of dying. It was excellent help, and I've never since been truly suicidal. But there were still heavy consequences. When I was released from the hospital I had to deal with them whether I felt strong enough or not.

Regular therapy and medicine kept me stable and I maintained mostly good mental health for years, although there was sometimes an underlying low-level depression. When I was younger it was easier to manage my depression. Like most people with clinical depression, my meds would make me feel fine. So well, in fact, that I would decide I no longer needed psych meds. Then shortly after I stopped taking them I would fall back into the black hole. It seemed to be worse each time and harder to get under control.
A combination of meds and therapy would bring me back to health, but it took some time. Now that I'm old I know better than to stop my medications. I've finally learned, through many bad experiences, that I cannot do that. Not now, or ever. The costs are too high, and relief is hard and slow to attain.

It Becomes Harder To Stay Well As We Age

I don't often feel fully free of depression anymore. Meds "wear out" and don't work like they once did. Then it takes weeks to change over and start a new medicine, and more weeks to wait and see if it works. Meanwhile, I'm depressed until the new med kicks in. And sometimes I'm left with full-blown major clinical depression if it doesn't. Then a new one is tried. Hopefully in a few weeks it will take hold and lift the black fog in which I must exist while I wait for my brain chemicals to get adjusted. Maintenance becomes more difficult as one gets older. But, as if to compensate — sarcasm here — major depression shortens one's life expectancy even more than smoking.* That's hardly cheerful news, and knowledge that adds to the hopelessness of depression.

There's depression, and then there's MDD. Anyone can become depressed over a terrible event or tragic loss. Almost everyone will at some time experience sadness that becomes depression to some degree. Most of these episodes are very treatable. The type of depression related to events or circumstances has a definite cause, unlike chronic clinical MDD. Situational depression can sometimes be alleviated with therapy alone. Medication and therapy don't always completely alleviate MDD.

Aging is tough for most people. But it gets hardest when our old stand-by medicines stop being effective and therapy seems pointless after one has already been counseled, analyzed, and run through an alphabet soup of therapy methods. Being unable to completely control my depression caused me to retire early from a job I loved. I had planned to work several more years but it became impossible. To make things worse, many of the newer, more effective antidepressants aren't recommended for people over 65 because of possible serious or fatal side effects. Isn't that just peachy?

What Does 40 Years Of Antidepressants Do To A Brain?

No one yet knows what effect a lifetime of taking antidepressants has on a person. There's not a lot of research on it. Maybe because we die relatively young? Before they can study the long-term effects of depression. One effect seems to be death, and I guess any others become moot issues once the research subjects kick the proverbial bucket? With antidepressant medication we basically force our brains, through artificial (medicine) means, to produce more of this chemical and less of that, etc. It can't be good for it. Early dementia is fast becoming a concern, too. Maybe the stuff we have to take wears out our brains, especially since they were atypical in the production of brain juice anyway.

No wonder those of us with MDD are so often bummed. First they tell us we have considerably shorter life expectancies than others. Then they tell us we're at increased risk for dementia. Basically we're going to go crazy and die early. Ok. So, don't worry and just be happy. In the last few years I'm more apt to feel the anger and increased irritability that can accompany some phases of MDD. It might have something to do with a prognosis that this disease is not only killing me before my time, but probably driving me crazy beforehand, too.
Psychiatrists, Too, Can Be Uninformed

When I see a new psychiatrist (which happens often at the teaching and research hospital where I get mental health services) I often feel like I'm more of an expert than they are. One bright and cheerful very young psychiatrist told me recently that major depression and bipolar disorder are "usually outgrown" by the time one is elderly. I argued, of course, and she informed me that the "newest theory suggests people will outgrow most brain disorders as they become senior citizens." What? And she said that with a straight face after seeing the list of medications I take and discounting everything I had just told her. I've never been violent in my life — but I really wanted to slap her smug little face, even though I wish she was right.

The next time I went in for a medicine check there was a different psychiatrist, also new, possibly an intern, since she seemed to be under the supervision of an older doctor. I was just too tired to go into detail — again — about my lifelong battle with depression. I told her I just needed my refills. The only thing worse than possibly needing a new medication is being put on one an inexperienced shrink thinks "might" work. That's ok, but no thanks. I've been there and done that. I'll stick with my current medicines and manage until I can see someone more experienced. As an old lady with chronic MDD, I don't feel like experimenting.

Depression is a disability that causes one to sometimes be unable to function. It can put me to bed as easily as the flu or any other acute illness. When it hits me I can't function normally. It is impossible for me to get up and dress to meet the day. I'm not saying that to seek pity. I'm saying it because it's true. It's a fact of my life, and I wish, for the sake of others as well as myself, that people would educate themselves and learn the symptoms and disabling characteristics of depression. It is not just feeling sad and wallowing in it. However, I try to forgive people for not understanding and for thinking I'm just malingering. After all, I thought that of myself before I adjusted to and learned the effects of my mental illness. I'm easier on myself now. I'm not ashamed on the days I don't get dressed. I don't bother to hide my frequent inability to get out of my house and I don't care who knows I have MDD.

I Suffer Along With 16 Million Other Americans

It could always be worse. Besides, I'm far from alone or unusual. An estimated 16 million people in the United States alone are afflicted with MDD and suffer a serious episode at least once a year. The World Health Organization estimates as many as 300 million people worldwide suffer from depression, and it's the world's leading cause of disability. A large number of depressed people commit suicide. It's estimated every 40 seconds someone with depression kills themselves.* More women (8.7%) than men (5.3%) suffer with depression, and people of both sexes most often experience their first episode between 18 and 25 years of age. So, while it's a serious and prevalent disorder, it's one that has sparked much research, and new medicines are constantly being developed. Someday not too far in the future it is hoped we will be able to conquer depression just as so many other maladies have been cured or prevented. Until then I'll just struggle along as a not-quite-crazy-yet old woman, take my medicine and write my articles. Amazingly, I can often write when I cannot do anything else.
Granted, I often write in bed under the covers, but mostly, I can write. With the one exception of when I had to be hospitalized so many years ago, depression has never rendered me completely unable to write. For that I am very thankful.

*Sources:
www.research.va.news/features/depression.cfm
https://psychcentral.com/news2018/07/13life-expectancy-in-mental-illness
National Institute of Mental Health
https://www.bbc.com/News/Health-13414965
World Health Organization
https://carolburt-15733.medium.com/i-have-major-depressive-disorder-55975539c8e
['Carol Burt']
2019-11-19 05:10:35.208000+00:00
['Mental Illness', 'Malingering', 'Mental Health', 'Life Expectancy', 'Depression']
5 Reasons Why Green & Silver May Be the New Black & Gold
5 Reasons Why Green & Silver May Be the New Black & Gold
How the Green and 4th Industrial Revolutions could mean interesting prospects for the White Metal in the future

This post is based on another post I wrote for Towards Data Science; see the original post here.

1. Dwindling Supply

The global Silver supply (which comes mainly from mines) has been declining over the past several years; this can largely be attributed to low Silver prices. The low silver price means that Silver mines have been struggling to operate at a profit, leading them to commit less and less capital to exploratory projects and to increasing production. A spike in the Silver price could, therefore, prompt increases in supply, but this would take time.

Total Global Silver Supply by Year since 1977 (GoldSilver Blog Post, CPM Silver Yearbook)
Historical Silver Prices per year since 1969 (macrotrends.net)

2. Silver's Physical Properties Make It Great for Solar Power

Silver is interesting as a precious metal in that its demand extends far beyond its use as jewelry and as a store of value; rather, its physical properties make it highly sought after in industry.

Thermal & Electrical Conductivities of Several Metals (Left), and Silver Nanoparticles Biocide Solution (Right)

Silver has the highest electrical and thermal conductivity of any metal on the face of the Earth. Not only this, but the metal is also a rather well-known biocide, meaning it kills bacteria. (Note that resistivity is the reciprocal of conductivity.) These properties, Silver's high electrical conductivity in particular, mean that Silver is a very useful material in all sorts of electronics, including photovoltaic solar panels.

Silver is used extensively to produce solar panels

But two factors pose a definite threat to an unbridled increase in Silver demand due to solar: namely, thrifting and other technology-related efficiency increases. Thrifting is the process by which manufacturers try to cut down the amount of silver used so as to cut costs. Indeed, the amount of silver needed to produce a conductive silver paste for the PV industry may be almost halved, from 130mg per cell in 2016 to 65mg by 2028, according to a report produced on behalf of the Silver Institute. The amount of silver used has already decreased to a much larger extent, from 400 to 130 mg between 2007 and 2016.

Below is the total global solar energy consumption, per year since the 1960s:

Total World Solar Energy Consumption, Per Year

And here is the amount of Silver used in solar cells, a good visual representation of thrifting:

Silver Loadings in Solar PV Production, Per Year (The Silver Institute)

If we assume that the Solar-Silver demand is directly proportional to both the Silver loadings and the amount of Solar consumption, and forecast, scale, and combine the above graphs, things get interesting:

Proxy for the Solar-Silver demand, based on Global Solar Consumption & Silver Loadings per Solar Cell? Please leave comments if my logic has failed me! Anyway.

What it looks like, if that graph is to be trusted, is that we are at a turning point for the Solar-Silver demand.
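As a toy illustration of that proportionality assumption, here is a tiny sketch; all the numbers are made up for demonstration and are not the article's data.

# Proxy demand = solar consumption x silver loading per cell (hypothetical figures)
solar_consumption = [100, 140, 200, 290]   # made-up consumption forecast, arbitrary units
silver_loading_mg = [130, 110, 90, 65]     # mg of silver per cell, declining with thrifting

demand_proxy = [c * l for c, l in zip(solar_consumption, silver_loading_mg)]
print(demand_proxy)
# Once loadings flatten near their floor, growth in consumption dominates the product.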
If there is indeed a physical limit to how far the amount of silver used in solar panels can be thrifted down, then it seems that, assuming an increase in solar consumption equal or similar to our forecast, the demand for silver from the solar industry will begin to climb relatively rapidly in the future, particularly once loadings reach this limit.

What it looks like, is that we are at a turning point for the Silver demand due to PV Solar.

Of course, some brainiac could always come up with a solar panel design that doesn't need Silver; but, judging by Silver's physical conductive properties, and the fact that the photoelectric effect (by which solar panels generate energy) was thought up by Einstein himself, I am taking my chances, at least for now.

3. The Internet of Things

Buzzwords and phrases are thrown around a lot these days; two of them, "Internet of Things/IoT" and "The Fourth Industrial Revolution", have enjoyed a comfortable position in the 'Buzzword Top 10' for a good while now. I think it is safe to say that electronics are here to stay and that electronic devices will certainly become more numerous in years to come (if recent years are anything to go by). This is interesting because Silver's global demand is driven primarily by industry, and Silver's industrial demand is in turn driven primarily by the electrical & electronics sector:

Global Silver Demand by Sector, by Year
Total Industrial Silver Demand by Industry, by Year

So if the number of electrical & electronic devices increases, it is a fair assumption that the Silver demand from this sector will increase too. This is, again, reliant on our hypothetical brainiac not inventing some material alternative to Silver for use in these devices.

4. The Next Global Economic Downturn

According to several prominent figures, such as billionaire investor and hedge fund manager Ray Dalio, and Michael Burry, the man who successfully predicted the 2008 housing market bubble, we are nearing another economic downturn.

According to Ray Dalio, The Economy is the Result of the Accumulation of Three Forces — watch on YouTube

Burry, in fact, has said that a downturn may come because we are currently in an 'index bubble', one similar in nature to the housing bubble of 2008: "Passive investments are inflating stock and bond prices in a similar way that collateralized debt obligations did for subprime mortgages more than 10 years ago," Burry told Bloomberg News in September 2019.

Now I don't know whether we are indeed in a bubble or not, nor how far away the next downturn or recession is. But I do assume that we haven't had our last one; and what happens during an economic downturn? The answer is that investors flock to precious metals like Silver, and this would mean a gargantuan increase in the Silver demand.

5. (Near)Future Technologies

The future holds some exciting new prospects, innovations, and technologies, many of which could make good use of Silver, again due to its excellent physical properties.

Electric Vehicles

I know that electric vehicles are not a technology of the distant future, and are indeed here already; but they definitely still have a lot of room in the market into which to grow. Due to its high electrical conductivity, Silver is widely used in and around the electric powertrain and other applications that are increasingly featured in hybrid Internal Combustion Engine (ICE) cars and Electric Vehicles (EVs) alike.
Silver Demand in the Automotive Industry, Silver Institute

Water Treatment & Space Exploration

Silver is a well-known, broad-spectrum biocide, proving effective in quashing a growing list of bacterial threats, including Legionnaires' disease, E. coli, Streptococcus, and MRSA; these microbes account for a large proportion of the germs that we encounter on a daily basis. Although the World Health Organization (WHO) has declared that, in its current applications, Silver is "not an effective disinfectant for drinking water", developments in the field of nanotechnology and the use of Silver nanoparticles may see this change in the coming years. Recent studies have shown that silver loss from Ag (Silver) nanoparticle sheets was lower than the standards for drinking water put forward by both the WHO and the Environmental Protection Agency (EPA), with the conclusion that filtration through paper deposited with Ag nanoparticles could be an effective emergency water treatment. In addition to this, recent studies have investigated the plausibility of using Silver as a biocide for spacecraft water systems. In fact, NASA is considering silver as the future biocide for exploration.

To summarize: if you, like Michael Burry, believe that the future will bring a considerable shortage of accessible freshwater, and that this same future will see a drastic increase in space exploration and electric cars, you may just want to place your bets on this fascinating metal.
https://medium.com/welded-thoughts/5-reasons-why-green-silver-may-be-the-new-black-gold-f13633b079cf
['Peter Turner']
2019-12-03 04:18:04.199000+00:00
['Data Science', 'Renewable Energy', 'Money', 'Solar Energy', 'Investing']
Helping Students with Autism Adjust When School Re-Opens
by Coleen Vanderbeek, Psy. D., LPC, Director of Autism Services, Effective School Solutions Students and teachers across the United States are finishing up the 2019–20 academic year via remote instruction after a demanding and anxiety-provoking four months under quarantine. The combination of fear, cabin-fever, social isolation, and “Zoom Fatigue” has left all members of the school community more vulnerable. As states move to contain the further spread of COVID-19 and to re-open schools and businesses, experts in multiple fields are fearing not only a second wave of infection, but also an epidemic of mental health symptoms, including anxiety, depression and post-traumatic stress (PTSD). The COVID-19 pandemic has given rise to a collective, mass trauma. People with reasonably good coping skills and who have the ability to self-regulate, are struggling, so perhaps it goes without saying that individuals who were already more vulnerable, including people with autism spectrum disorder (ASD), are particularly impacted by this crisis. National statistics show that 1 in 59 individuals are affected by ASD, and in New Jersey the ratio is a staggering 1 in 32. Children with ASD are at increased risk for both encountering traumatic events and developing traumatic sequelae, and although the topic is understudied, it is commonly believed that 100% of individuals with ASD will experience at least 1 traumatic event prior to age 18. The very experience of navigating the world with the social and communication deficits that are common to autism are for many, a trauma in and of itself. Educators are painfully aware that there has been a significant increase in mental health symptoms in the general student population in recent years. One in five students ages 13 to 18 has or will have a serious mental illness. About 11 percent have a mood disorder, and 10 percent have a behavior disorder. At the same time, the prevalence of autism spectrum disorder is increasing, and in many cases, mental health and autism challenges overlap. Experts estimate that 75–80% of individuals with autism are dually diagnosed with a mental health or psychiatric disorder. The most common psychiatric diagnoses are depression, anxiety and PTSD. Psychology professor and best-selling author Dr. Jean Twenge conducted a survey in April to assess the impact of the pandemic on U.S. adults, and found what she labeled a “devastating effect on mental health”. The 2020 survey revealed that 70% of participants met criteria for moderate to serious mental distress, compared with 22% in a similar survey conducted in 2018. Younger adults, ages 18–44, and parents with children under 18 at home, have been particularly hard hit. And, it is not much of a leap to assume that parents of special needs children are struggling the most, attempting to manage Individualized Educational Programs (IEP) at home, while either working from home, potentially worrying about the loss of employment, and managing their own stressors. So, what can school professionals expect when students with ASD return in the fall, either in-person, or with some blended version of remote and in-person learning? Preliminary data suggest that these students have been regressing during the pandemic despite the best efforts of districts to provide some version of IEP mandated services, and parents struggling to facilitate their children’s remote learning. 
The full impact of disruptions in speech and occupational therapy, skill development programs, and other services is not yet known, and will depend on each student's age, developmental stage, academic skill, the severity of ASD symptoms, family environment, and a myriad of other contributing variables. Mental health symptoms have increased in ASD-affected children, along with perseverative and self-soothing behaviors, and the quality of social relationships has declined.

Many students with ASD already feel socially isolated and ostracized, and when school resumes, might feel even more anxious and detached because of the quarantine. Stable, supportive relationships with teachers and members of their care team have been disrupted, and may take a while to re-establish. Because of the need for social distancing, special and enriching relationships with grandparents and other family members have been affected, potentially causing students to feel unsafe and insecure. While some students with ASD back away from physical contact due to sensory and/or social issues, others favor hugging and physical connection, and may struggle to respect boundaries and maintain social distance. Sensory issues might also affect students' comfort with wearing masks, with hand washing, or with the use of hand sanitizer.

Students on the spectrum need consistency and predictability, and almost all home, school, and recreational routines have been turned upside down over the last few months. Due to difficulty adjusting to change, students with autism will likely need more time to acclimate to a more typical school schedule. This will become even more complicated if a district needs to combine in-school and remote learning in order to safely reopen, as a student's schedule might vary from day to day. A predominant emotion of parents and students alike upon the return to school will be ambivalence — a welcoming of a return to some version of normal, coupled with fear about venturing out into a newly unsafe world.

After the trauma of Hurricane Katrina, child specialists observed that there was an increase in students' externalizing behaviors. Many students with ASD already have challenges communicating their wants and needs, so an increase in behavioral expression can be expected. Since trauma causes difficulty with self-regulation, hyper-vigilance, and changes in cognitive ability and introspection (already a challenge for many students on the spectrum), teachers can expect more emotional outbursts, more fearfulness, increased engagement in restricted interests, increased perseveration, and increased problems with verbalizing emotions and needs. On the other hand, some students may respond with hypo-arousal, appearing passive and detached from learning and social interactions.

Other changes brought about by the COVID-19 crisis may also impact students. Many have been less physically active, have spent long hours on computers and other devices, and have had sleep and eating patterns disrupted, and these disruptions of routine can have significant negative effects. Parental stressors, such as unemployment and financial hardship, can also impact a child's well-being and sense of safety. When school reopens, teachers may find that students with more stable and supportive family environments have greater difficulty separating from parents, while those in high-conflict families may have been subject to emotional and/or physical abuse because of the heightened tension associated with quarantine.
Some parents will be mourning the deaths of friends and family members taken by the virus, and although the loss of a person does have some impact on students with ASD, the loss of routine, the loss of tangible objects, and even the loss of a pet are shown to be greater contributors to their sense of grief.

If all of this sounds very daunting, it is. But there are many things that school professionals can do for students with autism to ease them back into school. There are a few key areas to consider in planning for re-opening:

Remember that ASD is a spectrum disorder; there are many subtypes, and each student can have unique strengths and challenges. Be cautious about discounting the difficulties of students with milder forms of ASD, as students with higher developmental capacities tend to be overlooked when it comes to supports. Since they do not complain, they do not get attention. The other contributing factor to the lack of support is the misconception that because a student has higher capacities, he should "know better", or have the ability to problem-solve on his own. Sadly, this is not true: if it were, these higher-functioning individuals would not carry the ASD diagnosis.

Encourage parents to frequently document where their children are in terms of behavioral and mental health symptoms over the summer, in order to track regression and provide a baseline for the beginning of in-person instruction and intervention in the fall. Work with parents to identify rewards and incentives that can help motivate their children to re-engage in academic and therapy activities. And be sure to inquire about tangible losses such as family deaths or job loss, so as to fully support students who are in mourning, or whose families are struggling financially.

If your district is proposing some form of blended instruction, with some students attending on some days and the rest on others, consider an exception for students with autism, e.g. daily attendance, or 3–4 school days in a row rather than alternating days. Plan a school, classroom, and schedule "walk-through" right before school re-opens so that students can be prepared for the new flow of their days. Create visual supports — label lockers and cubbies, desks and chairs, the bathrooms and closets; create task checklists and visual daily schedule cards, etc. to help orient students.

One of the universal losses during the COVID-19 crisis has been the loss of the sense of control over our lives. Where possible, offer students choices, e.g. which cubby they want, or what side of the classroom they want to sit on. Prepare students for new procedures or practices that might trigger sensory issues, such as frequent hand washing, wearing masks, or needing to refrain from hugging. If a blended instructional model will be used, make sure to share your screens and include a lot of visual material during remote video instruction, since individuals with ASD tend to be more visual than auditory with regards to learning. Consider sensitivity to volume and noise when adjusting computer sound settings. Complete an environmental check of the classroom and learning environments; go through each sense and modify the environment as needed based on the results (e.g., adjust the lighting, temperature, sound).

Maintain consistency, as students with ASD struggle with unexpected changes and transitions. The #1 reason that students with ASD present in psychiatric crisis is a disruption of their routines.
Take the time to be proactive and create a variety of options for routines and structure throughout the school day. Remember to include visual supports when possible. Go back to the basics of working with ASD affected students. Find a common interest from which to build rapport/relationships. Create a list of each student’s personal preferences (e.g., video games, TV shows, music etc.) with the student/family, then do some research on the student’s interest and regularly spend time touching base on these interests. Communicate frequently with members of each student’s support team, including community providers, psychotherapists, psychiatrists, primary care physicians, neurologists, OT, PT, ABA, speech therapy providers, and family caregivers. Such coordination is necessary in order for skills to generalize outside of the educational setting. Consider the function of typical ASD behaviors, and examine your responses. For example, tantrums and repetitive behaviors may occur when students feel threatened and are unable to verbally express distress or soothe themselves. See yourself as an important part of the equation in each teacher-student relationship; your stress level and reactivity cannot be dismissed. It is most helpful if school professionals can adopt and maintain a trauma-informed stance toward students. Essentially, this means asking yourself “what is going on here? what happened to this child? what does this child need?” rather than “what is wrong with this child?” A trauma-informed stance will favor the conceptualization of “problem” behaviors as fight-flight, trauma-related survival mechanisms, rather than viewing them as oppositional or “bad” behaviors. Where possible, help the student focus on internal feelings and sensations when feeling threatened, coaching them on how to attach words to feelings, and ask them what would help them feel safe. As in-person classes resume, don’t act like nothing happened, but don’t talk endlessly about the crisis either. Make room for students to express their experiences and distress, but redirect students to the educational tasks at hand, and toward hopeful planning and preparing for the future. Institutionalize classroom practices such as mindfulness, movement breaks, singing, or art tasks. The ESS Autism toolkit offers some suggestions about communicating more effectively with students with autism. Check in often with parents and caregivers who may remain highly stressed, especially if the district implements a blended educational model that incorporates in-home remote instruction. As school re-opens, some may greet you with a greater sense of respect and cooperation, while others may be more irritable and demanding. Try not to take it personally. It is also critically important to remember that the COVID-19 crisis has disproportionately affected minorities and poorer communities, so don’t assume that all families have the level of support and resources needed to keep food on the table, much less to supervise in-home instruction and interventions. In the aftermath of Hurricane Katrina, traumatized students looked to teachers to provide personal affirmation and hope. It is important to remember that even one strong, supportive relationship with a school professional can go a long way to help a student heal from trauma and grief. But remember: you are grieving many tangible and intangible losses as well. Both staff and students will need to “name and claim” this grief in order to move forward from losses of all kinds. 
Self-care is more important than ever, and ESS staff members are here to help with resources and referrals for both you and your students. Resources: Twenge, J.M., Cooper, A.B., Joiner, T.E., Duffy, M.E., & Binau, S.G. (2019). Age, Period, and Cohort Trends in Mood Disorder Indicators and Suicide-Related Outcomes in a Nationally Representative Dataset, 2005–2017. Journal of Abnormal Psychology, Vol. 128, No. 3, 185–199.
https://effectiveschoolsolutions.medium.com/helping-students-with-autism-adjust-when-school-re-opens-f40040f095ce
['Effective School Solutions']
2020-06-17 20:40:30.018000+00:00
['Mental Health', 'Education', 'Autism Spectrum Disorder', 'K12 Education', 'Autism']
10 Book Recommendations for Scintillating Brain Food
There is a rule in my household. If anyone wants to read a book, for any reason, I will buy it. Society has this false belief that education ends after receiving a formal, professional degree. It is absurd. There is no endpoint for mastering the human psyche. You don’t get public recognition for understanding why you screamed at someone questioning your morals. You don’t get a cake for untangling a knot in a friendship, only to cherish and enjoy each other’s company again. Maybe you should. Books, and talking about them, are one of the best ways to continually increase your intelligence and wisdom. If these lofty goals are uninteresting, just lose yourself in a state of flow for hours. Recharge your batteries. Appreciate exotic landscapes. Fall in love with characters and safely wrestle with their foreign perspectives. Contrary to notable magazines and newspapers, I do not provide an end-of-the-year list of books published this year. I take recommendations too seriously to care about publication dates. I will detail the best books I read over the past 12 months. This list is a portion of the 60+ books I read. The title is a hyperlink to purchase each book, so you can immediately initiate an emotional roller coaster or intellectual journey. I bet my reputation on these picks. Enjoy! 1. Other People’s Love Affairs by D. Wystan Owen For those who can appreciate melancholy. Not sadness, not depression, but melancholy. The state of being when, just like for every human being, some desirable features of life are out of reach. A friendship that no longer fits. Nostalgia for the silly conversations of youth. Being forgiven for a cruel act that only serves as a reminder of a shadow side you’d rather forget. And this is painful. And yet, beautiful. In melancholy, your senses are finely attuned. You are alive. This is a short collection of 10 stories that capture the full depth of trying, failing, and succeeding in relationships, only to repeat the cycle anew. I hesitate to give the storyline of any of them. But here are a few passages that I underlined: He made his hands into fists and pressed them under his eyes, the way Joey Makepeace had said you could do. You put your fists on the tops of your cheeks and you pressed there. It was a way to be brave. What he had been offered was a place on the periphery, a chance to play at something that was not quite his: like a plain, unmarried girl asked to hold the train of her younger sister’s wedding gown…the thought of such a role had always saddened him, but as he turned for home it seemed that perhaps it would be enough, that he might manage eventually to supplement his solitary pleasures with new vicarious and borrowed ones… If you want to feel, really feel, pick up this collection. 2. The Laws of Human Nature by Robert Greene You do not sufficiently understand why you behave the way you do. Why you take joy in seeing someone ostracized at a party, thankful that it’s not you. Why you cannot handle long silences at meals. Why you are too eager about any sexual opportunity. Why you feel compelled to make jokes in group situations when deeper conversations would be more satisfying. And you know even less about other people and their aggression, obsessions, envy, micromanaging, and desire to look like a saint (despite resisting an overflowing reservoir of cravings and lusts). This is a 586-page treatise that looks beneath the facade of what people try to be and into the deep, often dark role-playing games being played. 
I cannot say this enough — the insights are worth the effort. Here is one of many dog-eared passages: Remember: behind any vehement hatred is often a secret and very unpalatable envy of the hated person or persons. It is only through such hate that it can be released from the unconscious in some form. One criticism I have is that it is unclear which elements are based on scientific evidence. Regardless, you should not be agreeing with every point an author makes. If a book makes you think. If a book motivates you to refine arguments. If a book makes you consider alternative explanations for extreme behavior, then it is a success. By this metric, The Laws of Human Nature will stand the test of time. I strongly encourage you to buy the hardcover so it is easy to re-read passages. 3. Inferior: How Science Got Women Wrong — and the New Research That’s Rewriting the Story by Angela Saini What we seemingly know from scientific research is only as valuable as the methods used. If the only perspective adopted is that of men, if there is an assumption that only studying men is sufficient, then certain areas of inquiry will be half-baked. This book is a scathing exposé of how women have been rendered invisible through much of the history of science. Consider that the oldest scientific institution, The Royal Society of London, did not elect a woman to full membership until 1945. Consider that of the 10 prescription drugs deemed unfit for public consumption by the United States Food and Drug Administration, women had more adverse effects than men for 8 of them. One reason they made it to market was that the scientists only studied the effects on men. One reason the Food and Drug Administration failed is that these all-male clinical trials were deemed perfectly suitable for allowing prescriptions to women. One part of this book is about the invisibility of women. Another is the overreliance on stereotypes to understand women. Consider the large bodies of research, still discussed in textbooks today, suggesting that female brains are hard-wired such that empathy is easier and more frequently experienced, whereas male brains are hard-wired for analytical and mechanical reasoning. Read this book for a summary of the shoddy scientific data that these stereotypes are based on. Here is one of my many underlined segments that details the pressing need for this book: A phenomenon known as the “Nordic Paradox” shows that equality under the law doesn’t always guarantee women will be treated better. Iceland has among the highest levels of female participation in the labor market anywhere in the world, with heavily subsidized child care and equal parental leave for mothers and fathers. In Norway, since 2006, the law has required that at least 40 percent of listed company board members are women. Yet a report in May 2016 published in Social Science and Medicine reveals that Nordic countries have a disproportionately high rate of intimate partner violence against women. One theory to explain the paradox is that Nordic countries may be experiencing a backlash effect as traditional ideas of manhood and womanhood are challenged…This is why science matters for every one of us. The job ahead for researchers is to keep cleaning the window until we see ourselves as we truly are… Whether you disagree or agree with each argument, read this book if only to have more sophisticated, data-driven conversations about when demographics matter and when they don’t. 4. 
Just Mercy: A Story of Justice and Redemption by Bryan Stevenson Regardless of your political preferences, you would have to contort facts to believe that the justice and corrections system in the United States is working well. Sift through the stories in this book and know that if someone is found guilty by a court of law, by no means is this sufficient evidence that they engaged in wrongdoing. Of my 10 recommendations, this book had the strongest emotional effect on me — sadness, anger, resignation, and hope. This is a timely indictment of the lack of due process in the United States justice system. This book needs to be read broadly. There are too many cognitive biases in our justice system: Confirmation bias — we are drawn to information that fits our existing beliefs and theories about someone. Group bias — we are more helpful to people interpreted to be part of our group/tribe. Bias blind spot — we notice flaws more easily in others than we do in our own thinking. Conformity bias — we tend to self-edit our thinking to match that of the group members we want as friends. False causality bias — when events occur in a sequence, we tend to view the earlier events as explanations for the later events. Anchoring bias — we are heavily influenced by information that is already known or shown first, and do a poor job of properly re-calibrating beliefs based on new, inconsistent information. The list goes on and on for how we render fatal judgments on people’s character too quickly. The justice system is no different than office gossip. Don’t be convinced by me; read the poignant stories in this book. There are too few protections to prevent perverted decisions. Read this book and join me in the cause to fight back against malfunctioning systems run by biased people who are unfairly ruining lives. 5. Sex, Drugs, and Cocoa Puffs by Chuck Klosterman Here’s the thing about Klosterman: he delivers deep insights about culture by dissecting low-brow entertainment. How low-brow? He has a chapter on the crappy ’90s show Saved by the Bell to illustrate the mismatch between our memories of adolescence and what really happened, and whether this is a good thing. Here are a few references you will not find in any other philosophy book: Guns N’ Roses, MTV’s The Real World, Bravo’s Inside the Actor’s Studio, Jose Canseco, Jeff Buckley, Donnie Darko, My So-Called Life, SimCity, and KISS. This is the most entertaining book of my 10 recommendations. Here’s one of my underlined passages: A mind-numbing percentage of pro athletes are obsessed with God…some studies suggest that as many as 40 percent of NFL players consider themselves “born again.” This trend continues to baffle me, especially since it seems like an equal number of pro football players spend the entire off-season snorting coke off the thighs of Cuban prostitutes and murdering their ex-girlfriends. Argue with his thesis. Enjoy the memories of what grabbed your fancy in the 1990s. 6. Remembering Satan by Lawrence Wright I think this is the third year of book recommendations where Lawrence Wright earned an entry (see the links to prior years at the bottom of this post). Today’s witch hunt is a call-out culture where there is constant surveillance of people who say something that disagrees with extreme liberal or conservative worldviews. In the 1980s, the witch hunt was a wave of accusations about satanic ritual child abuse. It was a strange time, chronicled by Wright’s extensive research. You will not believe this happened. 
You will not comprehend how rumors can ruin lives until you read this book. I read it faster than any book on this list. I could not put this page-turner down. This is yet another book on the absurdity that pervades the justice system in the United States (see #4 for the other). Here is a telling excerpt of how freaking weird humans can get when emboldened by a witch hunt: As a result of information provided by a prison official in Utah, word circulated in the police workshops that satanic cults were sacrificing between fifty and sixty thousand people every year in the United States, although the annual national total of homicides averaged less than twenty-five thousand. Workshops were being sanctioned in police precincts despite the fact that it was statistically impossible for the problem to be as large as thought, and to this day, there is yet to be documented evidence of a single satanic ritual murder. Before you start judging characters from the 1980s, ask yourself about the quality of available evidence today to support the broad dissemination of mandatory unconscious bias workshops in the same police precincts. This book is a reminder to resist the emotional contagion of fast-moving societal trends. Be willing to stand out with caution and skepticism. History repeats itself, so read a fascinating story from the not-so-distant past. 7. Rejection Proof: 100 Days of Rejection, or How to Ask Anything of Anyone at Anytime by Jia Jiang Who among us is content with how rejection is handled? You could read a book on stoicism, or you can read the entertaining 100 days of adventures by Jia Jiang. You are going to love this guy. A reserved, neurotic guy who took it upon himself to change his personality. The things he does to conquer a fear of rejection are amazing. The stories are better than the descriptions of lessons learned. But don’t let this deter you. Jia can spin quite a yarn. Breezy, entertaining, life-affirming read. 8. The Last Girlfriend on Earth: And Other Love Stories by Simon Rich I’m not going to lie, I love Simon Rich’s books. He is the master of short stories told through the lens of strange narrators. Spoiled Brats has a fantastic story from the perspective of a hamster dealing with the delinquent kids in a kindergarten classroom. Ant Farm has an equally witty tale titled, “what goes through my mind when i’m home alone (from my mom’s perspective)”, which I read aloud to my kids and we all cried from laughing so hard. And yet, The Last Girlfriend on Earth is, in my opinion, the strangest and funniest collection. You might have heard of the Ancient Greek playwright Aristophanes and his myth of the missing half. In short, humans were born as circles and Zeus split them in half, leaving everyone circling the earth yearning for their other half so they could form a perfect shape again. Well, check out Simon Rich’s version: ACCORDING TO ARISTOPHANES, there were originally three sexes: the Children of the Moon (who were half male and half female), the Children of the Sun (who were fully male), and the Children of the Earth (who were fully female). Everyone had four legs, four arms, and two heads and spent their days in blissful contentment. Zeus became jealous of the humans’ joy, so he decided to split them all in two. Aristophanes called this punishment the Origin of Love. Because ever since, the Children of the Earth, Moon, and Sun have been searching the globe in a desperate bid to find their other halves. Aristophanes’s story, though, is incomplete. 
Because there was also a fourth sex: the Children of the Dirt. Unlike the other three sexes, the Children of the Dirt consisted of just one half. Some were male and some were female and each had just two arms, two legs, and one head. The Children of the Dirt found the Children of the Earth, Moon, and Sun to be completely insufferable. Whenever they saw a two-headed creature walking by, talking to itself in baby-talk voices, it made them want to vomit. They hated going to parties and when there was no way to get out of one they sat in the corner, too bitter and depressed to talk to anybody. The Children of the Dirt were so miserable that they invented wine and art to dull their pain. It helped a little, but not really. When Zeus went on his rampage, he decided to leave the Children of the Dirt alone. “They’re already fucked,” he explained. Happy gay couples descend from the Children of the Sun, happy lesbian couples descend from the Children of the Earth, and happy straight couples descend from the Children of the Moon. But the vast majority of humans are descendants of the Children of the Dirt. And no matter how long they search the Earth, they’ll never find what they’re looking for. Because there’s nobody for them, not anybody in the world. If you can appreciate a bit of vulgarity and dark humor, this is going to be a treat. If you are easily offended, move on to the next recommendation. 9. Creativity 101 by James Kaufman The title and online description is misleading. While this has been sold as an academic textbook, James is such a playful, thoughtful author that this is essentially the best book available on the topic of creativity. Many books have been written on creativity in the past few years including The Myths of Creativity, Originals: How Non-conformists move the world, Wired to Create, The Accidental Creative, Creativity Inc., and just plain ol’ Creativity. I read them all. Many of them are excellent books but none of them are as good as Creativity 101. Simple. James Kaufman is one of the leading creativity researchers and he did his homework for you. Walk away knowing everything you want to know about this wonderful psychological strength and powerful human process. Of the books mentioned, Creativity 101 has received the least marketing and hype. Be a rebel, read the best volume on the topic not the most widely sold. 10. Letters from an Astrophysicist by Neil deGrasse Tyson This might be the most popular book on this year’s list. Neil is the paragon of scientists educating the public about science. He is the ideal successor to Carl Sagan. The premise of this short book is that Neil publishes letters received from fans and his answers. The letter writers range from men in prison to small children. The exchanges made me smile and reminded me of the importance of really engaging with another person on sophisticated topics. Spend a few hours on this book and be inspired to mentor someone regularly. As always, please leave comments after reading the books above and offer your own recommendations. In case you missed the last 8 years of book recommendations, here are the links. 
Here is the list of books to read from 2018 Here is the list of books to read from 2017 Here is the list of books to read from 2016 Here is the list of books to read from 2015 Here is the list of books to read from 2014 Here is the list of books to read from 2013 Here is the list of books to read from 2012 Here is the list of books to read from 2011 Here is the list of books to read from 2010 So many great minds to converse with, so little time. Dr. Todd B. Kashdan is a public speaker, psychologist, professor of psychology and senior scientist at the Center for the Advancement of Well-Being at George Mason University. His latest book is The upside of your dark side: Why being your whole self — not just your “good” self — drives success and fulfillment. For more, visit: toddkashdan.com
https://toddkashdan.medium.com/10-book-recommendations-for-scintillating-brain-food-b59fcadcbc4
['Todd Kashdan']
2020-03-23 15:04:08.474000+00:00
['Wellbeing', 'Sexism', 'Psychology', 'Sexuality', 'Evolution']
One Percent Better
One Percent Better A New Approach To Improvement I Was Inspired by 1% Last weekend Chris Nikic became the first person with Down syndrome to complete an Ironman triathlon (2.4-mile swim, 112-mile bike ride, and 26.2-mile run completed in under 17 hours). His story of how he set a goal and then focused on getting 1% better each day has inspired me to improve in a new way. It’s about a commitment to manageable growth and a refusal to be complacent over time. Is 1% Each Day Possible? Depending on the starting point, there comes a point where improving by 1% each day might be a challenge. But what if 1% simply meant doing one thing better each time? Today maybe it’s running one minute longer, and tomorrow perhaps it’s hanging on the pull-up bar for 5 seconds more than usual or speaking words of encouragement to one more teammate.
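To see why such a small daily edge matters, here is a quick back-of-the-envelope sketch in Python. It is my own illustration of the compounding arithmetic, not anything from Chris Nikic’s training plan, and it assumes the idealized case where the 1% gain really does compound every day:

```python
# Compounding "1% better each day" over one year.
daily_gain = 0.01
days = 365

level = 1.0  # starting point, normalized to 1
for _ in range(days):
    level *= 1 + daily_gain  # each day builds on yesterday's level

print(round(level, 2))  # ~37.78, i.e. roughly 38x where you started
```

The point of the arithmetic is not the exact number; it is that small, consistent improvements dominate occasional heroic efforts.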
https://medium.com/the-partnered-pen/one-percent-better-c2497b898276
['Laura Mcdonell']
2020-11-14 19:41:25.676000+00:00
['Ironman', 'One Percent', 'Triathlon', 'Improvement', 'Productivity']
How IBM Sees The Future Of Artificial Intelligence
Ever since IBM’s Watson system defeated the best human champions at the game show Jeopardy!, artificial intelligence (AI) has been the buzzword of choice. More than just hype, intelligent systems are revolutionizing fields from medicine and manufacturing to changing fundamental assumptions about how science is done. Yet for all the progress, it appears that we are closer to the beginning of the AI revolution than the end. Intelligent systems are still limited in many ways. They depend on massive amounts of data to learn accurately, have trouble understanding context and their susceptibility to bias makes them ripe targets for sabotage. IBM, which has been working on AI since the 1950s, is not only keenly aware of these shortcomings, it is working hard to improve the basic technology. As Dario Gil, Chief Operating Officer of IBM Research recently wrote in a blog post, the company published over 100 papers in just the past year. Here are the highlights of what is being developed now. Working To Improve Learning What makes AI different from earlier technologies is its ability to learn. Before AI, a group of engineers would embed logic into a system based on previously held assumptions. When conditions changed, the system would need to be reprogrammed to be effective. AI systems, however, are designed to adapt as events in the real world evolve. This means that AI systems aren’t born intelligent. We must train them to do certain tasks, much like we would a new employee. Systems often need to be fed thousands or even millions of examples before they can perform at anything near an acceptable level. So far, that’s been an important limiting factor for how effective AI systems can be. “A big challenge now is being able to learn more from less,” Dr. John Smith, Manager of AI Tech at IBM Research, told me. “For example, in manufacturing there is often a great need for systems to do visual inspections of defects, some of which may have only one or two instances, but you still want the system to be able to learn from them and spot future instances.” “We recently published our research on a new technique called few-shot or one-shot learning, which learns to generalize information from outliers”, he continued. “It’s still a new technique, but in our testing so far, the results have been quite encouraging.” Improving a system’s ability to learn is key to improving how it will perform. Understanding Context One of the most frustrating things about AI systems is their inability to understand context. For example, if a system is trained to identify dogs, it will be completely oblivious to a family playing Frisbee with its beloved pet. This flaw can get extremely frustrating when we’re trying to converse with a system that takes each statement as a separate query and ignores everything that came before. IBM made some important headway on this problem with its Project Debater, a system designed to debate with humans in real time. Rather than merely respond to simple, factual questions, Debater is able to take complex, ambiguous issues and make a clear, cogent argument based on nuanced distinctions that even highly educated humans find difficult. A related problem is the inability to understand causality. A human who continues to encounter a problem would start to wonder where it’s coming from, but machines generally don’t. “A lot of the AI research has been focused on correlations, but there is a big difference between correlations and causality,” Smith told me. 
“We’re increasingly focused on researching how to infer causality from large sets of data,” he says. “That will help us do more than diagnose a problem in, say, a medical or industrial setting, but help determine where the problem is coming from and how to approach fixing it.” Focusing On Ethics And Trust Probably the most challenging aspect of AI is ethics. Now that we have machines helping to support human decisions, important questions arise about who is accountable for those decisions, how they are arrived at, and upon what assumptions they are made. Consider the trolley problem, which has stumped ethicists for decades. In a nutshell, it asks what someone should do if faced with the option of pulling a lever so that a trolley avoids killing five people lying on a track, but kills another person in the process. Traditional approaches, such as virtue ethics or Kantian ethics, provide little guidance on what is the right thing to do. A completely utilitarian approach just feels intuitively lacking in moral force. These basic challenges are compounded by other shortcomings inherent in our computer systems, such as biases in the data they are trained on and the fact that many systems are “black boxes,” which offer little transparency into how judgments are arrived at. Today, as we increasingly need to think seriously about encoding similar decisions into real systems, such as self-driving cars, these limitations are becoming untenable. “If we want AI to effectively support human decision making we have to be able to build a sense of trust, transparency, accountability and explainability around our work,” Smith told me. “So an important focus of ours has been around building tools, such as Fairness 360, an open source resource that allows us to ask questions of AI systems much as we would of human decisions.” The Future Of AI When IBM announced its System 360 mainframe computer in 1964 at a cost of $5 billion (or more than $30 billion in today’s dollars), it was considered a major breakthrough and dominated the industry for decades. The launch of the PC in the early ’80s had a similar impact. Today’s smartphones, however, are infinitely more powerful and cost a small fraction of the price. We need to look at AI in the same way. We’re basically working with an early version of the PC, with barely enough power to run a word processing program and little notion of the things which would come later. In the years and decades to come, we expect vast improvements in hardware, software and our understanding of how to apply AI to important problems. One obvious shortcoming is that although many AI applications perform tasks in the messy, analog world, the computations are done on digital computers. Inevitably, important information gets lost. So a key challenge ahead is to develop new architectures, such as quantum and neuromorphic computers, to run AI algorithms. “We’re only at the beginning of the journey,” IBM’s Smith told me excitedly, “but when we get to the point that quantum and other technologies become mature, our ability to build intelligent models of extremely complex data is just enormous.” An earlier version of this article first appeared on Inc.com. Image courtesy of IBM. Previously published at www.digitaltonto.com.
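To make the few-shot learning idea from earlier in the piece concrete, here is a minimal sketch of one common approach from the research literature: average the handful of labeled “support” examples for each class into a prototype, then classify new examples by the nearest prototype. This is a generic NumPy illustration with toy data of my own; it is not IBM’s actual technique, whose details the article does not give.

```python
import numpy as np

def build_prototypes(support_x, support_y):
    """Average the few labeled examples of each class into one prototype."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_x, classes, protos):
    """Assign each query point to the class of its nearest prototype."""
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Toy demo: two classes, only two labeled examples ("shots") each.
rng = np.random.default_rng(0)
support_x = np.vstack([rng.normal(0, 1, (2, 8)), rng.normal(3, 1, (2, 8))])
support_y = np.array([0, 0, 1, 1])
query_x = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(3, 1, (5, 8))])

classes, protos = build_prototypes(support_x, support_y)
print(classify(query_x, classes, protos))  # mostly [0 0 0 0 0 1 1 1 1 1]
```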
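And since Smith name-checks Fairness 360 (IBM’s open-source AI Fairness 360 toolkit), it is worth seeing what “asking questions of AI systems” can look like in practice. Rather than reproduce the toolkit’s API, here is a hand-rolled sketch of one standard metric such toolkits report, disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The predictions and group labels below are hypothetical.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates: unprivileged over privileged.
    Under the common "four-fifths" rule of thumb, values below 0.8
    are often treated as a red flag worth auditing."""
    rate_unpriv = y_pred[group == 0].mean()  # favorable rate, unprivileged
    rate_priv = y_pred[group == 1].mean()    # favorable rate, privileged
    return rate_unpriv / rate_priv

# Hypothetical predictions: 1 = favorable outcome (e.g., loan approved).
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = privileged group

print(disparate_impact(y_pred, group))  # 0.4 / 0.8 = 0.5
```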
https://greg-satell.medium.com/how-ibm-sees-the-future-of-artificial-intelligence-b1cb70b41887
['Greg Satell']
2019-03-10 11:32:36.517000+00:00
['Artificial Intelligence']
A Step-by-Step Guide to GUI Programming: Any Kid Can Code
What is a GUI? GUI stands for Graphical User Interface. Any software with visual components that the user can directly interact with counts as GUI software: e.g., the user types on the keyboard, presses a mouse button, etc. In games, we have an interface where we can use different inputs to play; that interface is the GUI that allows us to interact with the game.
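Since this guide is about Python, here is a minimal starter sketch using tkinter, the GUI toolkit bundled with the Python standard library. This is my own illustrative example rather than code from the guide itself: it opens a window with a label and a button, and reacts to both a mouse click and the Enter key, the two kinds of input mentioned above.

```python
import tkinter as tk  # GUI toolkit bundled with Python

def on_press(event=None):
    # Runs whenever the button is clicked or Enter is pressed.
    label.config(text="You interacted with the GUI!")

root = tk.Tk()
root.title("My First GUI")

label = tk.Label(root, text="Click the button or press Enter")
label.pack(padx=20, pady=10)

button = tk.Button(root, text="Press me", command=on_press)
button.pack(pady=10)

root.bind("<Return>", on_press)  # keyboard input triggers the same handler
root.mainloop()  # hand control to the event loop
```

Save it as any file, run it with Python, and a window appears; closing the window ends the program.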
https://laxman-singh.medium.com/gui-programming-any-kid-can-code-45089f32a896
['Laxman Singh']
2020-12-09 04:44:58.502000+00:00
['Python', 'Python For Beginners', 'Kids And Tech', 'Gaming', 'Kids']
Growing Dreams Into Sprouting Nightmares — Beyond Senses Known
She spoke out one too many times. Robyn’s inquisitions about the relationship between the moon and the earth infuriated Mr. Cavendish. When Ms. Edwards, the science teacher, finished her presentation on our solar system and left the classroom, Mr. Cavendish tensed then frowned at Robyn. “Ms. Mwezi, see me after class! You will be on detention until further notice.” His loud words dripped with unkind and unpleasant portent. Robyn’s confusion moistened her eyes, but before a tear fell, she recovered her composure and said, “Yes, Mr. Cavendish.” “That’s yes, Sir from you, young lady,” Mr. Cavendish said. “Yes, Sir,” Robyn said back with a snap and irritation. Mr. Cavendish pretended not to notice, but the reply scared him. She had an unexpected firm and confident tone that sent chills trembling along Jim Cavendish’s spine. After class, Robyn approached her teacher’s desk, and he ignored her for 5 minutes while he organized his class tests and assignments. Her silent, dead stare disturbed Mr. Cavendish, but he didn’t betray his unease. Finally, he jolted out of his seat and said, “Come to the board now. You will write, ‘I will be quiet and listen,’ 100 times before you can walk home.” Then, he grabbed Robyn’s hand and pulled her close. “You will not speak in my class unless you’re spoken to from now on. Do you understand me?” Robyn felt his hot breath on her forehead. She focused her eyes into a fierce glare and yanked her hand from his. Robyn’s Mom showed her how to break a strong grip. She twisted in a fast jerk and pulled with all her energy while turning her hips and stomping on the floor. Robyn slammed her foot on Jim Cavendish’s toe. Her heel felt like it went right through his patent leather shoes and pierced the tile underneath. Mr. Cavendish screamed in pain. He grabbed her by the hips, stood, and pushed Robyn to the marker board. His body pressed against hers. Her anger overcame the sick, nauseous thought of him touching her. It was more than wrong, and Mr. Cavendish was assaulting her. Robyn remembered the school safety meeting about predators. Before he left her at the marker board, he whispered in her ear, “I’ll be watching you until you’re finished, so don’t think about running away.” Mr. Cavendish turned and tripped over his desk chair. He fell on the floor, and his nose slammed into the tile. But, he didn’t say anything and dabbed at his blood, then placed the soiled cloth on his desk. The elementary school vice principal knocked at their classroom glass door, and Mr. Cavendish met with her in the hall. Robyn grabbed Mr. Cavendish’s bloodied cloth and stuffed it in her backpack.
https://medium.com/the-partnered-pen/growing-dreams-into-sprouting-nightmares-beyond-senses-known-6a12db4db313
['Greg Prince']
2020-10-21 01:59:08.532000+00:00
['Storytelling', 'Creative Writing', 'Fiction', 'Short Story', 'Horror']
LA BioStart Biotech Entrepreneur Boot Camp Day #1
Background information on what this biotechnology boot camp was: 🧬🦠 As stated on the LA BioStart website: “The Cal State LA BioStart Bioscience Entrepreneurs Boot Camp is a five-week, intensive training program that prepares emerging bioscience entrepreneurs to launch their own bioscience ventures. The boot camp is a collaborative project of Cal State LA, the Biocom Institute and the Los Angeles Cleantech Incubator. With investment from the U.S. Economic Development Administration, the training is offered at no charge to eligible participants.” I was accepted into LA BioStart’s third cohort over the summer of 2018.☀️ 👉 Here is a link to the cohort I was in: (I am “Skylar Björn” in Summer 2018) https://labiospace.calstatela.edu/biostart/cohorts/ Before the first day: 🎀 This boot camp was about 30 miles from me and the traffic in Los Angeles is terrible, so it was over an hour drive there. Once on campus, I remember having trouble finding the room too. When I found it, I walked in and all the attendees had nametags on the table while they waited for the speaker to present. The topic of day #1: July 16, 2018 🌀 The presenter mainly lectured about the stages of starting a biotechnology company. Information from day #1: 📝 Questions 🧐❓ 1️⃣ Why is it important to implement the commercial process early on? ⏩ Just because you have great science does not mean you will be successful. 2️⃣ What is the life science industry? ⏩ Diagnostics, pharmaceuticals, biotech, MedTech, digital health/wearables. Research steps involved in starting a biotechnology company 📚 Biopharmaceutical research and development process is long and requires significant investment — with potential for high returns if successful. ▶️ A lot of scouts are always talking to researchers to see what new discoveries could be of interest. 1️⃣ Discovery and development — pre-clinical research: (4.5–6.5 years) 🧐🔬 ▶️ Research for a new drug begins in the lab. ▶️ Drugs undergo lab and animal testing to answer basic questions about safety. ▶️ Extremely expensive to do any research in non-human primates. ▶️ Is this safe for humans? ▶️ Pre-IND meeting: must have a minimum amount of data that will influence the rest of the design of studies I need to do. Pre-IND package: FDA only responds to what you ask, so you need to ask the right questions. ▶️ Much stronger in saying we had a pre-IND meeting with the FDA; not all INDs get approved to go into human studies. ▶️ Once you get the green light to practice on people, getting healthy volunteers is faster and less expensive. 2️⃣ Clinical Research: (7–7.5 years) 💉 ▶️ There are 3 phases in clinical research. ▶️ Understand the maximum tolerated. ▶️ Risk profile varies on the demographic; ex: a young versus an old person. ▶️ The smaller the population the higher tolerance for risk and vice versa. ▶️ There are a variety of ways to conduct researches. ▶️ Design of study, CRO you are using, how are patients being taken care of. ▶️ The clinical site is where you actually conduct your studies. ▶️ Pivotal studies are phase 3 studies, these are the most expensive. ▶️ She was at a migraine meeting and they had 1,000 patients. ▶️ Cardiovascular has about 10,000 patients. ▶️ NDA BLA — submit these to FDA if it works. 3️⃣ FDA Review: (1–1.5 years) 📑 ▶️ If not done after approval, then still under monitoring. ▶️ You have to update the information to the FDA, and then the FDA evaluates it. ▶️ Pre IDE- for medical devices ▶️ Denovo - 5–10k ▶️ Post-market follow up varies on years. 
▶️ An oral drug can have a major impact on certain parts of the body. ▶️ If it is a new material that has never been used, additional studies may need to be done. 4️⃣ Post-approval research and monitoring 🧐📋 It takes $1–2 billion for a biopharmaceutical company, which includes the cost of failures. ▶️ Startups tend to not have enough money, so they do not always have another chance. ▶️ It is very expensive to develop products; there are opportunities for liquidation depending on how much value the product development has created when you exit. ▶️ Expenses are very high. ▶️ Software is way different. ▶️ You will need to bring in experts since no one is an expert in everything. Sometimes products are not approved, which can delay things 6 months to a year; stock prices tend to tank when you hear this. ▶️ More difficult moving from phase 2 to phase 3. ▶️ There are 2 sides: risk and reward. ▶️ Studies are a lot more expensive. ▶️ You may need to scale up for the phase 3 study. ▶️ When trying to exit with a larger company there are different components to the deal structure; payouts could come when you reach certain milestones. ▶️ With royalties, what percentage of sales would you get? ▶️ You typically tend to license this product from academia, like UCLA. Biopharmaceutical vs. Medical Devices 🔬🧪 Biopharmaceutical: 💊 ▶️ Always have to have clinical studies. ▶️ Way more expensive. ▶️ Tend to be higher priced, with larger markets. Medical Devices: 🧬 ▶️ The life cycle is way shorter. ▶️ Tend to make a version, then over time develop and make new versions. ▶️ Doing product [outer atoms] is more attractive. ▶️ Nanotech is slow since there is not much regulatory precedent, so it would cause other companies to take risks. ▶️ SBIR funding. ▶️ How much skin in the game does the management team have? Reasons for failure in the biotech industry 🧨 The hardest point where most companies fail is when they do not choose the right process of approval. ▶️ Young entrepreneurs may not choose the right CMOs and often fail. What you are looking for dictates the CMO, and there are tons of CMOs out there. Bridging the gap between R&D and commercial interests 🤔 ❇️ Funding/investors 💸 ▶️ Entrepreneur grants and scholarships. $5–10 million to get to first-in-human studies, sometimes $15 million. ▶️ Nothing turns off investors more than saying you are going into a billion-dollar market when it is not one. ▶️ She would score people and see if they should be funded or not. ▶️ What would bug her is that she would hear how people raised like $30 million but had done no work. ▶️ A company has one shot at getting it right. ▶️ Mistakes at launch are expensive, if not impossible, to correct, and they can impact sales for years to come. What to think about in development 🛠 ▶️ When making devices they are getting feedback on how studies should be designed. ▶️ It is really about mindset more than the need to hire more people. ▶️ How are you addressing their pain points? ▶️ How hard is it to gain access to that patient? The biggest challenge is that every second literally counts. ▶️ You need to make things convenient for the patient; understanding these things helps lead to commercialization. ▶️ The point of entry is in the patient population that has an unmet need. ▶️ The payer dictates how the products are used. ▶️ The focus is not on approval, but on reimbursement and coverage. ▶️ No one may want the product because it may take another few years of research, and what if it can’t be commercialized? Payers: private payers, such as Blue Cross; also the government, such as Medicare, etc. 
▶️ Sometimes with devices, they can start as self-pay. ▶️ Information gathering during the pilot study. ▶️ To be first line you need head-to-head studies; Botox is 3rd and 4th line. What people will ask 🤓 ▶️ People tend to ask who your competitors are. Competitors are not just who exists now, but what is coming downstream; so pay attention to all of that. ▶️ You better have a very clear idea of how your product is better. ▶️ Look at technical vs. commercial fit. 4 pillars of focus in a successful product 🏅 1️⃣ Approvability 2️⃣ Manufacturability 3️⃣ Reimbursability 4️⃣ Commercializability ⏩ These are the four key areas to decrease risk and increase the probability of success. ▶️ Successful product development goes beyond obtaining regulatory approval. ▶️ You have to look at the cost of commercial scalability. ▶️ Get something that is very scalable moving forward. Going international 🌎 ▶️ When they think of global approval they tend to do the U.S. and Europe. ▶️ In Europe all they need is a CE mark, which is very easy; although sometimes when approved no one will buy it. Things to remember 🧠 It is all about the money — Botox made doctors a lot of money since they could bill a procedural code, and then on top of that they can mark up the product. ▶️ Psychiatrists almost lose money writing prescriptions. ▶️ People are used to a product you take every single day that lasts some hours; they had never heard of one where you need 21 injections that last 3 to 4 months. ▶️ Most often in small startup companies, they bring in different consultants with specialties in different areas. ▶️ They never bring the group of consultants together since it is too expensive, so they have one-on-one meetings. Integrated product development & commercialization 🖌 ▶️ Business development and licensing, product development strategy & project management, regulatory CMC, and more. ▶️ You can go through all those things, and if you don’t have money it will be very hard to move forward unless you are extremely wealthy with the ability to fund yourself. ▶️ Large molecules are much more expensive. ▶️ A CRO can’t do the manufacturing side of things. What types of things are commercial value-driving? 🚙 ▶️ Even when trying to raise money, they expect you to have a business plan and strategy: how is the market, how are patients being treated, will barriers change over time; do a SWOT analysis; show a fundamental understanding of the market. What to think about and realize upon launching a biotech company 🚀 ▶️ How are you addressing pain points for stakeholders? ▶️ What is your initial target product profile? ▶️ What do you think you want to be when you grow up? ▶️ Business development licensing: You will sometimes talk informally to people who may be interested in the technology. ▶️ The testing product profile. ▶️ They talked to 5–7 key opinion leaders and 5 out of 7 were interested, etc. ▶️ Talk to people whom you want to invest money in your company. Starting a life science business is not difficult; it is obtaining the funding that is. ▶️ As a startup, you don’t want to hire a lot of people since you cannot afford it. ▶️ Outsource a lot of what you do since it is very easy to cut off. ▶️ You want a board. ▶️ As an entrepreneur, you have to be willing to do anything and everything. Investors, advisors, and other key stakeholders evaluate both the jockey and the horse; a jockey who does not know how to get the best out of the horse will not win. ▶️ They are looking for people who have been there and done it before. 
▶️ You have to show that you are willing to learn and listen. ▶️ You need to stand out among other investment options for the investor. ▶️ Who do you know that you can connect with that will help you out? You need to do homework when it comes to investors to see what they have invested in already. ▶️ Understand what you want and what the investors are looking for. ▶️ For women, look for groups in which women are the decision-makers, since they are most likely looking for companies in which women are in the C-suite. ▶️ She often contacts female VCs not for funding but for advice as a first-time female CEO. ▶️ Initial investments are easier to get since investors have to start investing. ▶️ Does the management team have the ability to maneuver through the unknowns? ▶️ You cannot guarantee that things will be successful. ▶️ You need to be willing and able to change course. Kill fast, kill early; but it is really hard for founders to do this. ▶️ Know when to engage with experts. ▶️ You are a salesman; you need to know when to pitch. ▶️ Understand risks; what irritates VCs, and investors in general, is saying there is no risk with your company. ▶️ Know how to meet the unmet need. ▶️ Investors have a herd mentality and can be very fickle. They say VCs all have ADD, especially once they start looking at their phones and step out to take a call. ▶️ Highlight your best data. ▶️ They all like stories; it’s all about a story. Tell a story in terms of what you are doing. ▶️ The main thing is focusing on the main thing. 4 major things comprise LA’s bioscience economic development master plan 🗞 1️⃣ Bioscience commercialization 2️⃣ Bio lab development 3️⃣ Bioscience talent development 4️⃣ Marketing & branding Random: 💫 ▶️ The eye is a very isolated organ, so some companies start out in the eye before expanding. LA is known for Hollywood, but not as much for innovation, etc. ▶️ MedTech: can be all sorts of things in MedTech, it is very broad. ▶️ Wearables/digital health: software for healthcare. ▶️ Greater and greater integration of these areas. ▶️ The process is the product. ▶️ At first people in life sciences were older, but now younger tech people are speeding up the process. ▶️ Genetics and genomics in the life science industry. ▶️ A lot of companies are outsourcing manufacturing. ▶️ CROs: contract research organizations; tons of different ones; some can work on humans, etc. ▶️ ****CMOs AND CROs****** ▶️ So many different types of commercialization: which journals to choose, people to endorse you, PR agencies, press releases, what is your marketing mix, how are you going to brand, IP lawyers, corporate lawyers when starting a company and when doing fundraising, etc. ▶️ A lot goes into running the life science industry. After the first day: 😜 The boot camp provided lunch every day, so there was always a buffet-style meal to eat, and I ate a lot. We then all got our headshots taken for the LA BioStart website by a photographer. Key takeaways: ✅ 1️⃣ The biopharmaceutical research and development process is long and requires significant investment — with potential for high returns if successful. 2️⃣ 4 pillars of focus in a successful product 🏅: approvability, manufacturability, reimbursability, commercializability. 3️⃣ Starting a life science business is not difficult; it is obtaining the funding that is. Tips: 🏁 1️⃣ The hardest point where most companies fail is when they do not choose the right process of approval. 
2️⃣ It takes $1–2 billion for a biopharmaceutical company, which includes the cost of failures. 3️⃣ Startups tend to not have enough money, so they do not always have another chance. 4️⃣ $5–10 million to get to first in human studies, sometimes $15 million. 5️⃣ The payer dictates how the products are used. 6️⃣ Initial investments are easier to get since investors have to start investing. 7️⃣ A lot of companies are outsourcing manufacturing. Thank you for reading! 😄🎊🎊🎊 Social Media: 😃 💞 💜 Instagram:💜 @skylarbjorn https://www.instagram.com/skylarbjorn/ ❤️ Youtube:❤️ Skylar Björn https://www.youtube.com/channel/UCcq5kwiiM1-hPN8SIL0g9gA?view_as=subscriber 💙 Facebook:💙 Skylar Björn https://www.facebook.com/skylarbjornsky
https://medium.com/california-dreaming/la-biostart-biotech-entrepreneur-boot-camp-day-1-c853bfc095b
['Skylar Björn']
2020-09-01 02:33:42.471000+00:00
['Biotech Startup', 'Innovation', 'Biotech', 'Bioscience', 'Biotechnology']
How I Grew an Instagram Account From 4000 Followers to 190k in a Year
I started @theminimalistwardrobe in 2017 for two reasons. I wanted to create an audience for a business I was planning, while at the same time wanting to learn the game of Instagram. At the time I had an existing business with its own Instagram account, but I was too afraid to try things out. I was stuck in my safe routine. What would my customers think if I suddenly posted 8 posts one day? Would they be annoyed? Is it weird if I post something else than my products? These were the insecurities I had, and a fresh account with no responsibilities was the perfect solution to test everything. The new business I was planning to launch was a clothing brand, with high-quality essentials and minimal branding. After some tinkering with names on Instagram, I settled on ‘The Minimalist Wardrobe’. That wasn’t meant to be the name of the clothing brand, I simply wanted to create a like-minded audience, so I didn’t have to launch to crickets. My first post. I sourced photos from Instagram and stock photo sites. I posted my first post on February 19, 2017. It was a low-resolution photo of clothes rack with some shirts and a few pairs of shoes underneath. There was no real strategy here. I just enjoyed the freedom of posting whatever and analyzing the results. Little did I know what it would lead up to. From 0 to 4000 The first followers are always the hardest to get, everyone knows that. I got my first few followers by posting a few posts and engaging with some similar accounts. That’s a method that still works, but it’s not scalable. Engaging with other accounts is time-consuming, and even if you’d automate it, Instagram is cracking down hard on all software that is against their terms of service. I grew the account to a little over 4000 followers in 8 months. Nothing to write home about, but during this time, I didn’t really use any strategy. I just learned a little from every post I posted and leaned into what worked. I didn’t make any groundbreaking discoveries but learned how to use hashtags, what kind of photos and captions my audience seemed to like, and the best times to post. I started scheduling posts with Later so that I could create a bunch beforehand, and not be on my phone the whole day. The followers came from my engagement, and from the posts that reached new people through hashtags. After 8 months I just stopped posting. I had scrapped the clothing brand idea a long time ago, as soon as I realized how much work it would require. I also happened to find some brands that had executed my idea better than I ever could (a shoutout to Asket, from where I still buy my clothes). As for the learning part, well, I felt like I had learned some useful things, and honestly just lacked the motivation to continue playing around with a useless account. I logged off the account for half a year. When it Finally Clicked For Me I can’t remember why I logged back into the account after 6 months. Maybe I had a boring day. In any case, that was one the most significant days for The Minimalist Wardrobe, because that was the day when I understood that I’m on to something. To be a little more specific, I understood it the next morning. I had published a post in the evening and woke up to over 300 likes. The caption said “Long time no see! Did ya miss us?” Long time no see! Did ya miss us? Now, 300 likes with 4000 followers is nothing to brag about, but it was still enough for me to understand that there’s an actual audience that really enjoyed what I was posting. 
I realized that the account is promoting something that people wanted. Beautiful photos of clothes racks and basic garments painted a picture of simplifying your wardrobe. I had somewhat unintentionally conveyed my own philosophy for clothing. This is the moment I decided to apply a real strategy to grow the account, and treat it as its own project. Now things got interesting. Sliding Into DMs All Day Long The first thing I started doing was contacting accounts of the same size (or smaller), asking them to do a shoutout exchange with The Minimalist Wardrobe. They’d simply post about me on their feed, and I’d post about them. I spent hours and hours finding suitable accounts to cross-promote with, and I must’ve sent over a hundred DMs — daily — to people. I didn’t mind if the accounts were smaller. Anything over 1000 was worth it for me, as posting was easy, and my audience seemed to enjoy the posts. This is how I usually reached out to people. Once I grew, I could get bigger accounts on board, which is why the growth was exponential. I had also perfected my strategy by only contacting accounts with good engagement, and instructing them on how to promote The Minimalist Wardrobe when agreeing on the shoutouts. A clear call-to-action to follow made a huge difference. From Shoutouts to Deeper Collaborations Sending DMs for hours every day wasn’t sustainable, but the results were undeniable. I needed a better solution. Essentially I wanted collaborations that would give me constant exposure, but only needed to be set up once. I decided to build a simple website and set up a blog. Then I reached out to sustainable and slow fashion bloggers and asked if they’d like to write for my new blog. I’ve always believed in fair relationships, not just because I’ll sleep better, but because at some point the one who’s getting the worse end of the deal will call it quits — it’s just a matter of time. Fortunately, The Minimalist Wardrobe’s following was somewhere around 15k at this time, so it was a great opportunity for the bloggers to get in front of a new audience and gain new followers too. Every time someone wrote a post for my blog, we’d both promote it on Instagram. That way both reached a new audience. Eventually, I had over 20 guest bloggers, with a new blog post 5 days a week — each of them promoted by the blogger. The big 100. Still with the old logo — I now have a new one by my favorite designer, Hannah. The account kept growing fast, by over 2000 daily followers at best. 30k, 40k, and 50k were just simple milestones which I celebrated with a smile and started counting when the next one would come. I hit 100k in late November, just 6 months after taking this seriously. My Experience With the Infamous Follow/Unfollow The account’s growth kept accelerating, and I didn’t stop exploring different ways to grow. I decided to try the most despised way of growing an Instagram account: Follow/Unfollow. For those of you unfamiliar with it, the idea is to follow accounts so that they get a notification, and a percentage of them follow you back. Then at some point, you unfollow them. I did it for a while but stopped doing it for a couple reasons. Firstly, I hated the idea of it the whole time I was doing it. It was a cheap tactic, and honestly, I didn’t need it. My curiosity simply won and I couldn’t help myself. Secondly, it wasn’t sustainable either. I was back to tapping for hours on my phone. Truth to be told though, it did work. My growth rate increased. 
It’s hard to say how much this influenced it, but it definitely helped. (Un)fortunately, Instagram has cracked down on action limits recently, so this shouldn’t be as viable anymore. The Real Reason For the Growth The collaborations with bloggers were great, as were the earlier shoutout exchanges. I got a boost from following a lot of people. My analytical approach to using hashtags and putting effort into each caption paid off — many posts reached thousands of new people, turning a good amount of them into new followers. All these strategies accelerated the growth of the account, but the real reason why so many wanted to follow The Minimalist Wardrobe was simple: The core idea was something that people were interested in. I was posting content that people wanted to see. None of these growth hacks would’ve worked if the foundation of the account wouldn’t have been golden. Now, I got lucky by being into something hundreds of thousands of people are also into and happened to create an Instagram account for it. I probably got lucky with the timing too. Nevertheless, the core idea of the account is the key to exceptional growth. How you execute it is almost as important. Growth hacks lag far behind. Instagram suggesting The Minimalist Wardrobe for new followers of The Minimalists. When you truly have an account people want to follow, Instagram will help you out too. They’ll suggest you to new followers whenever someone follows an account that’s related to yours, and your posts will often be featured on the explore page. Can This Be Recreated? Is it still possible to grow any account to almost 200k followers in a year? Sure it is. There’s nothing that’s stopping you. The growth strategies I wrote about here aren’t difficult to copy. If you have the drive to hustle, you can do exactly what I did. The challenge is coming up with — or stumbling upon, as I did — an interesting idea for your account. That’s really the message I’m trying to send here. I even wrote an article about how to get your Instagram foundation right. It’s too common to see people apply perfectly good growth strategies to their accounts, and not seeing any growth. The Minimalist Wardrobe isn’t growing as fast as it used to anymore, and that’s fine. It grew into something so big so fast, that I wanted to take a step back and turn it in to something helpful, not just inspirational. I took my foot off the pedal for a while and am investing in the core idea, which I think will pay off in the future. “Build it and they will come” is bad advice. You need marketing to grow — at least initially, before word of mouth kicks in. The thing is, the methods to accelerate growth aren’t rocket science. What I did wasn’t particularly sophisticated, and the results were tremendous. If you put most of your effort into creating a valuable product — which in this case was the Instagram account — you’re setting yourself apart from the masses. Way too many businesses have great marketing with a mediocre product. Don’t make that mistake.
https://medium.com/swlh/how-i-grew-an-instagram-account-from-4000-followers-to-190k-in-a-year-543d238341ad
['Sebastian Juhola']
2019-10-03 07:01:02.371000+00:00
['Instagram', 'Growth', 'Social Media', 'Marketing', 'Growth Hacking']
The Therapist Emeritus has a Breakthrough
The Therapist Emeritus has a Breakthrough Chapter 3 Image from Pikist Mother Earth continued to moan and groan for a few long minutes, like the old lady she was; long enough for even the newcomers to get used to it and realize it wasn’t the end of the world. No earthquake followed even though the scientist-types insisted the sounds were earthquake related. No volcano blew its top even though the more imaginative envisioned fire and brimstone. If there was an apocalypse, it passed by the Epiphany Café. So, the Lisping Barista went back to work. Soon you couldn’t hear the Moodus Noises over the coffee grinder. It took longer for us at the Epiphany Café to get used to the fact that the Lisping Barista had said yes to the Geeky Guy. The event was as uncanny as wonderful. As dangerous as astonishing. It was the next step in a dark room. A jump into the cold water. If this could happen, then what else was possible? There was one person, though, who knew it was going to happen. She knew because she had set it up. She knew because she was a master head shrinker of the eclectic school of Narrative Rogerian, Experiential Jungian, Integrated Lacanian, Interpersonal Freudian, ad hoc Cognitive Dialectical Behavioral Family Therapy. She knew more about you than you could ever know, and she hasn’t even met you. She could read your mind and tell you why you did the things you didn’t know you did. She could interpret the dreams you forgot. If she analyzed you, you’d stay analyzed. If she hypnotized you, you’d bark like a seal. She knew because she, and only she, was the Therapist Emeritus. Other therapists practiced in the Kenilworth area. There was a community mental health clinic up the road in Middletown where the staff were so busy that they did their paperwork while you talked. There was a pinstriped psychiatrist who was free with the anxiolytics until you got addicted, then he wouldn’t see you anymore. There was a score of young women in private practice who took more care picking out their outfits and selecting their office furniture than they spent on your anguish. There was a halfway house all the way out in the woods where the counselors would shout slogans by day and take a Bacardi behind a tree at night. There were self-help groups, mutual-help groups, and groups that were no help at all. There was a whole league of life coaches who would never utter a discouraging word. With its enchanted forests, hills that grind their teeth, caffeine addicted river running uphill, and well-insured, half-mad clientele, the area was a boomtown for therapists, a hotbed of holistic healing. The business of head shrinking was expanding in Kenilworth. All species of psych people flocked to the area, but none like the Therapist Emeritus. Alas, she had recently retired. The Therapist Emeritus had taken inventory of her mutual funds, took down her shingle, and sold her couch on Craig’s List. She dutifully parceled out her clients to colleagues and planned to take up weaving. There was no special reason to weave, she was already a woolly woman with hair as curly, fine, and gray as a sheep. She thought she would like to work with her hands rather than her ears, spinning fibers into threads and threads into yarn, shuttling between warp and woof. The woven cloth would gather warm on her lap. She could lose her thoughts in its intricacies. Her cat would play at her feet. When it was finished, well, something would be finished. She had never finished anything before. 
The Therapist Emeritus liked buying the materials well enough, loading skeins in her arms when she could have used a shopping cart. She insisted on assembling the loom herself and spent the better part of a week doing so, cursing at the instructions written in a language other than her own. Then, when it was time for her to make her first blanket, she found the blanket would not make itself. She called her friends and invited herself over for tea. Once she started going to tea, she forgot all about the weaving. Wrapping her fingers around the cup, slowly rocking in her chair, nodding and making encouraging sounds whenever they were called for, seemed to fit her better. She felt more at home doing that than she ever felt on the bench by the loom. On the bench, she had been a strand out of place, a loose thread, a dropped stitch. She was made for tea and trouble. Because she was a reflective person, the Therapist Emeritus reflected that the way she spent her years changed her. Just as a laborer develops calluses on his hands, and may develop them on his heart, fitting him better for his work, so too, does spending one’s life as a therapist. It made her reflective, for one. It also gave her a capacity to ever so slightly nudge things along and sit and watch the rest happen. Blankets don’t get made by nudges. The Therapist Emeritus had a lot of friends, but not enough friends to fill up a retirement, so she started calling her old clients. They were all glad to hear from her and told her stories about their disappointing new therapists. Nine of her former clients’ therapists talked too little, six talked too much. One had an annoying thing she did with her pen. Another seemed intent on the clock on the wall. Still another didn’t match his socks to his tie and one shoe was more scuffed than the other. There was something not right about her former clients’ new therapists, nothing that deserved calling the ethics board, but still, something not quite right. The plants by the windows failed to grow and the books on the shelves looked like they’d never been read. No couch was as comfortable as the Therapist Emeritus’ couch, no one’s tea was nearly as hot, no one’s stress balls were quite as firm. It didn’t take long before the Therapist Emeritus started meeting her old clients for tea. It turned out that the Therapist Emeritus liked her old clients better than her friends and certainly liked them better than weaving; so, she sold the loom and most of her fibers before she had even made a single scarf, leaving a ball of yarn for the cat. She volunteered to see her old clients gratis at the Epiphany Café and soon had a permanent spot in the comfy chair over in the back corner, behind a potted plant. The Geeky Guy had been one of the Therapist Emeritus’ old clients for years. They met twice a week. She decided he was lonely, so she began a new nudging campaign. The Therapist Emeritus had found that it did no good to tell people what to do, make recommendations, prescribe courses of action. Instead, she would nudge. Soon the Geeky Guy was asking every woman in the café out on a date so that the Therapist Emeritus could observe. He thought it was his idea. Every woman turned him down until there was one left, the Lisping Barista. She’d been saved for last because no one thought he’d have a chance. But the Lisping Barista said yes, surprising everyone including the Therapist Emeritus. You might say, by knitting people together, the Therapist Emeritus already was a master weaver.
https://medium.com/who-killed-the-lisping-barista-of-the-epiphany/the-therapist-emeritus-has-a-breakthrough-6185636b32e3
['Keith R Wilson']
2020-08-30 13:16:01.731000+00:00
['Mental Health', 'Novel', 'Fiction', 'Psychotherapy', 'Weaving']
Serving configuration data at scale with high availability
Pavan Chitumalla and Jiacheng Hong | Pinterest engineers, Infrastructure

We have a lot of important and common data that’s not modified frequently but accessed at a very high rate. One example is our spam domain blacklist. Since we don’t want to show Pinners spammy Pins, our app/API server needs to check a Pin’s domain against this domain blacklist when rendering the Pin. This is just one example, but there are hundreds of thousands of Pin requests every second, which generates enormous demand for access to this list.

Existing Problem

Previously, we stored this kind of list in a Redis sorted set, which gave us an easy way to keep the list in time-sorted order. We also have a local in-memory and file-based cache that’s kept in sync by polling the Redis host for updates. Things went well in the beginning, but as the number of servers and the size of the list grew, we began to see a network saturation problem. In the five minutes after the list was updated, all the servers tried to download the latest copy of data from a single Redis master, saturating the network on the Redis master and resulting in a lot of Redis connection errors. We identified a few potential solutions:

- Spread the download of this data over a longer period. But for our use case, we wanted the updates to converge within a few minutes at most.
- Shard the data. Unfortunately, since this data is a single list, sharding it would add more complexity.
- Replicate this data. Use a single Redis master with multiple Redis slaves to store this data and randomly pick a slave for reads. However, we weren’t confident about Redis replication (we were running v2.6). Moreover, it wouldn’t be cost effective, since most of the time (when the data is not updated) these Redis boxes would sit idle thanks to client-side caching.

Solution

As each of the above solutions has its own shortcoming, we asked ourselves: how would we design a solution if we were building from the ground up? Formalizing the requirements of the problem:

- Frequent read access (>100k/sec) and rare updates (several times a day, at most).
- Updates converge quickly (within one minute, or several at most) across all boxes.
- Ideally, a push-based model instead of clients polling for updates.

We engineered a solution by combining the solutions to the smaller problems:

- Cache the data in-memory so that high read access won’t be a problem.
- Use Apache ZooKeeper as a notifier when updates are made. This is conceptually similar to the design of ZooKeeper resiliency, but if we stored the entire data in one ZooKeeper node, it would still cause a huge spike in network traffic on the ZooKeeper nodes during an update. Since ZooKeeper is distributed, the load would be spread across multiple ZooKeeper nodes. Yet, we didn’t want to burden ZooKeeper unnecessarily, as it’s a critical piece of our infrastructure.

We finally arrived at a solution where we use ZooKeeper as the notifier and S3 for the storage. Since S3 provides very high availability and throughput, it seemed to be a good fit for our use case in absorbing the sudden load spikes. We call this solution managed list, aka config v2.

Config v2 at work

Config v2 takes full advantage of the work we have already done, except that the source of truth is in S3. Further, we added logic to avoid concurrent updates and to deal with S3’s eventual consistency. We store a version number (that’s actually a timestamp) in a ZooKeeper node; this version also serves as a suffix of the S3 path identifying the current data.
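To make the design concrete, here is a rough, hypothetical sketch of what the daemon side of this scheme could look like, using the kazoo and boto3 libraries: watch the znode for a version bump, then pull the matching snapshot from S3. The znode path, bucket name and key layout are invented for illustration and are not Pinterest’s actual values.

import json

import boto3
from kazoo.client import KazooClient

BUCKET = "configs"                       # assumed bucket name
ZK_NODE = "/managed_lists/spam_domains"  # assumed znode holding the version

s3 = boto3.client("s3")
zk = KazooClient(hosts="zk1:2181,zk2:2181")
zk.start()

cache = {"data": None}  # stand-in for the local in-memory cache

@zk.DataWatch(ZK_NODE)
def on_version_change(data, stat):
    # kazoo calls this once at registration and again on every znode update.
    if data is None:
        return
    version = data.decode()
    # The version (a timestamp) doubles as the suffix of the S3 key.
    obj = s3.get_object(Bucket=BUCKET, Key="spam_domains/%s" % version)
    cache["data"] = json.loads(obj["Body"].read())

Because every server downloads from S3 rather than from a single Redis master, the burst after an update is absorbed by S3’s throughput instead of saturating one box’s network link.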
If a managed list’s data needs to be modified, a developer has the option to change it via an admin web UI or a console app. The following steps are executed by the Updater app on save:

1. First, grab a Zk lock to prevent concurrent writes to the same managed list.
2. Then, compare the old data with the one in S3 and only upload the new data to S3 if it matches — a Compare-And-Swap update. This prevents dirty writes while a previous update is converging.
3. Finally, write the version to the Zk node and release the Zk lock.

As soon as the Zk node’s value is updated, ZooKeeper notifies all its watchers — in this case, triggering the daemon processes on all servers to download the data from S3.

How we grappled with S3’s consistency model

Amazon’s S3 gives great availability and durability guarantees even under heavy load, but it’s eventually consistent. What we needed was “read after write” consistency. Fortunately, it does give “read after create” consistency in some regions*. Instead of updating the same S3 file, we create a new file for every write. And yet, this introduces a new problem of synchronizing the new S3 filename across all the nodes. We solved this problem by using ZooKeeper to keep the filename in sync across all the nodes.

Introducing Decider

When a new feature or service is ready for launch, we gradually ramp up traffic in the new code path and check to make sure everything is good before going all in. This resulted in the need to build a switch that allows a developer to decide how much traffic should be sent to the new feature. Also, this traffic ramp-up tool (aka “Decider”) should be flexible enough that developers can add new experiments and change the values of existing experiments without requiring a re-deploy to the entire fleet. In addition, any changes should converge quickly and reliably across the fleet.

Earlier Solution

Every experiment is a ZooKeeper node and has a value [0–100] that can be controlled from the web UI. When the value is changed from the web UI, it’s updated in the corresponding node, and ZooKeeper takes care of updating all the watchers. While this solution worked, it was plagued with the same scaling issues we previously experienced, since the entire fleet was directly connecting to ZooKeeper. Our Decider framework consisted of two components: a web-based admin UI to control the experiments and a library (both in Python and Java) that can be plugged in where branch control is needed.

Current Solution

Once we realized the gains of managed list, we built a managed hashmap and migrated the values of all Zk nodes containing the experiments. Essentially, the underlying managed hashmap file content is a JSON dump of the hash table that contains experiment names as the keys and an integer [0–100] as the value.

API

import random

experiments = {}  # populated from the managed hashmap (illustrative)

def decide_experiment(experiment_name):
    # Roll a number in [0, 100) and ramp traffic in proportion to the
    # experiment's configured value.
    return random.randrange(0, 100, 1) < experiments.get(experiment_name, 0)

How this is used in code:

if decide_experiment("my_rocking_experiment"):
    pass  # new code
else:
    pass  # existing code

Another use case of Decider: dark read and dark write

We use the terminology “dark read” and “dark write” when we duplicate the production read or write request and send it to a new service. We call it dark because the response from the new service doesn’t impact the original code path, whether it’s a success or failure. If asynchronous behavior is needed, then we wrap the new code path in gevent.spawn().
Here’s a code snippet for dark read:

try:
    if decider.decide_experiment("dark_read_for_new_service"):
        new_service.foo()
except Exception as e:
    log.info("new_service.foo exception: %s" % e)

*In the rare event that S3 returns “file not found” due to eventual consistency, the daemon is designed to refresh all the content every 30 mins, and those nodes will eventually catch up. So far, we haven’t seen any instances where the nodes got out of sync for more than a few minutes.

If you’re interested in working on engineering challenges like this, join our team!

Pavan Chitumalla and Jiacheng Hong are software engineers on the Infrastructure team.

For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter. Interested in joining the team? Check out our Careers site.
https://medium.com/pinterest-engineering/serving-configuration-data-at-scale-with-high-availability-8612521c1108
['Pinterest Engineering']
2017-02-17 22:24:54.785000+00:00
['Engineering', 'Pinterest', 'Infrastructure', 'Redis', 'Data']
The Third Third
The Third Third Notes from a life well underway but nowhere near over. I had breakfast with a friend of mine not too long ago and our conversation turned, as it often does with those hovering around the 60-year mark, to the subject of retirement. That started with a passing comment about my father, who had recently entered his 90th year, as he fondly and often tells us. “I sometimes wonder,” I said, “if my parents had known they were going to live this long if they would have organized their lives any differently.” My grandparents were, for the most part, gone in their late sixties or early seventies. Only my mother’s mother just made it to her late seventies, but she was the last of that generation in our family by a long shot. So my parents, roughly 25-to-30 years their junior, had little to go by other than, most likely, they had roughly 25-to-30 years to live. At the same time their children — my generation — would be thinking they had 50 years if they thought about it at all. Which we didn’t, in our invincible, indestructible teens and twenties. As my mother and father retired in their early sixties they might have thought of making the best of the ten years they were almost guaranteed to have together, short of accident or illness. Anything beyond that would be an unexpected bonus, and therefore an unplanned one. I’m not suggesting this was a conscious, daily thought, but if they planned decades at all, I believe they were only planning for one. In reality, they were embarking on nearly three. So far. Ironically, they would only know their guess was wrong when it was proven to be so and likely too late to do much about it. “Think about it,” I said to my friend, “we allowed something like 20 years to ‘grow up’, learn how to read, write, do some arithmetic along with memorizing a little history for the test and enough geography to read a map.” He looked confused and uncomfortable, but I went on. “We then spent an open-ended blob of time dedicated to education, career and family. Whatever is left over is for some sort of retirement. Ideally, time to pursue that thing we probably should have pursued all along if we only had the perspective of age and gift of youth at the same time.” Now my friend’s look had evolved to either impressed or terrified, I’m not sure which. He may have just realized, at that moment, that his time was a-wasting. He had better start figuring out where he was going to go from here on in, as opposed to spending one more minute listening to a second-rate, dime-store philosopher without much of a plan of his own. These days, if we at least break even on the genetic lottery and take reasonable care of ourselves, whatever happens after 60 should run 20, 25 or maybe even 30 years. A full third of our lives. Whether that’s a lot or a little depends on how we are eventually able to look back on it. It is going to seem like an unhappy, merciless purgatory if used to watch round-the-clock, breathless cable news coverage of the latest contrived outrage. On the other hand, if spent being that artist or musician or travel writer we’ve always known we were, but nobody let us be, we’ll eventually wonder where the time went, and so quickly. This is where I diverge from the happy, beautiful people in those annoying financial planning ads who seem to be forever walking on the beach, building wooden boats or driving convertibles under endless blue skies. I’m going to have to work in the third third of my life. Maybe harder than I ever have.
With the evaporation of defined benefit pension plans just as I was starting my career—and left to my own devices with whatever defined contribution I have been able to hang onto — I’m on the Freedom 85 plan. So heigh ho, it’s off to work I go. Although not given to what-the-hell, come-what-may optimism — the exact opposite, in fact — I’m actually looking forward to the third third of my life. Mostly because I’m once again being forced to leap without a net and where the consequences for getting it wrong are pretty dire. The choices I make matter. What I do matters. Not doing it — whatever it turns out to be — is going to be a problem. I’m thankful, for yet another of countless reasons, for long-lived parents. Amongst myriad blessings, they also provide the best and most reliable indication of how my DNA will fare over the years. Pretty well as it turns out. From what I’ve seen so far, though, better to have a little too much on my plate as opposed to a little too little. I was helped along to this point by events over which I had little control. The industry which had paid so many bills over the years went into a catastrophic tailspin from which it is uncertain it will ever recover. Skills which were narrowly applicable to that industry were rendered suddenly and unexpectedly useless. I was forced to look at whatever natural skills fate had given me and had the opportunity to develop in some way, and finally ask “with who I am, and with what I know, now what?” The first third of a 90 year life should be devoted to growing up in every way which includes getting an education both in theory and in practice. If you are going to be really stupid, get it out of your system before this first third is over. Your life — and your body — simply become less able to cope with your abuse as the years go by. Most importantly, any decision made in this formative first period should come with a right of absolute revocation because you really don’t know what the hell you are doing. Do over. Then do it over again. The second third is getting your life in top gear and putting your foot to the floor. Thirty years is a long time, but really not that long at all. This is the time where your personal desires take a back seat to making a good living, being a good citizen, a good spouse, a good parent and a good son or daughter. This is a time for sacrifice, not gratification. Best just buckle in, buckle down and get it done. You’ll be surprised how seemingly little time goes by before, for the first time, you’re passed over for someone younger, smarter, cheaper and better looking than you. Once that happens, the second third is rapidly coming to a close. Wipe away those tears, stop feeling sorry for yourself and most importantly stop believing you have accomplished something by simply staggering over what you think is the finish line. What you have accomplished, by entering the third third of your life, is survived the first two thirds intact and developed a pretty clear idea of who and what you are. You are fully formed and worn smooth by experience. Time to make the best of things with the tools you have in your bag right now. Stop competing with youngsters. That’s a race you’re eventually and inevitably bound to lose. In my case, the first time I was asked a question based primarily on how really, really old I was, I bristled. I had always thought of myself as sort of 30-something, in attitude if not by the clock. Whenever I hung around with 30-somethings that feeling was reinforced. 
I began to realize, however, they weren’t thinking of me like a peer, as one of their own, but like their not-very-hip dad or oddball uncle. Wow. Wow. That was it — the second third was over. Recently a great new job, completely unrelated to anything I had done previously, fell out of the sky and hit me in the head. What took some time to realize, looking around this wonderful, precocious startup I had joined, was that I was the old man — the very oldest employee they had. The most concrete impression in the relentless blur of the first few weeks was how on earth I was ever going to keep up. They all seemed to be running around at breakneck speed and speaking a language I barely understood. It wasn’t exhilarating. It wasn’t challenging. It wasn’t a great opportunity to learn new things and meet new people. It was terrifying. It felt like I had been thrown out of the airplane with some cloth and a sewing machine and told to make a parachute before I hit the ground. For quite some time, I could not think of one good reason why they would have hired me. It seemed like it was just a matter of time before they realized they had mistakenly skipped a line on the short list and hired the guy they shouldn’t have hired in a million years. I seriously thought about just handing in my resignation. That said, metaphorical seppuku really wasn’t an option for me. I had made enough mistakes in the first two thirds of my life that trying to put them right in the third third was simply something I had to do. As a result, there was little option but to leave it all out there every single day. If it wasn’t good enough, at least I knew there was nothing left to give. I had done my best. At that moment, I realized what I had been trying to do and the big mistake I was making. I was trying to live the third third of my life using the unforgiving rules from the second third. That came to me when I finally and absolutely accepted that maybe this company had hired me not in spite of being older, but because of it. I was not another person to run around with my hair on fire. Not a chance. There wasn’t enough left to which to put the match. I was not there to be developed into something else. I was there precisely because of who I was at that specific moment. What an incredible relief. It was an affirmation that I must have done something right in the first 57 years of my life. In the first two thirds you are constantly tormented with the notion that there is another, alternative path that, if you just work a little harder, you can achieve. It’s all about not accepting reality and feeling compelled to put some sort of Jobsian dent in the universe. Not smart enough? Go back to school. Not rich enough? Save more or get a second job or both. Not enough kids? Have some more. The whole game is figuring out what is lacking in your life and then formulating a response that will address that in some way. The third third, on the other hand, is all about doing the best you can with what you have accumulated along the way. The time for rehearsals and do-overs is over. Whether the audience cheers or boos is entirely out of your control. There are a few things about the third third of my life I now know for sure. Actually, it’s mostly what it’s not: It’s not the end of anything. It’s certainly not a time to kick back and relax. And it absolutely is not a time to put in time until I finally get to do what I want. It’s also not the time for wishing away the days thinking about what might have been, could have been or should have been.
I’ll admit, I have absolutely no idea where this third third of my life will take me. That’s what makes it beautiful and scary and thrilling. However, there’s an open road stretching out beyond the horizon. It’s a cool and brilliantly clear, summery evening and I have just filled the tank. I’m here. It’s now. Time to go. ©2018 Terence C. Gannon
https://terencecgannon.medium.com/the-third-third-f283c164adb
['Terence C. Gannon']
2018-05-03 14:15:34.247000+00:00
['Retirement', 'Podcast', 'Nonfiction', 'Life Lessons', 'Career Advice']
Your hidden personality and how it guides you
Your hidden personality and how it guides you Your theories of action: Theory in Use vs. Espoused Theory Photo by Icons8 Team on Unsplash, modified by author Your “hidden personality” is hidden only from you — others get a clear understanding of it from your actions and words because behavior conveys personality. You’ve undoubtedly noticed that some people view themselves very differently from how others see them. For example, assholes don’t view themselves as such, though from time to time, one will, with a shock of recognition, see that they have acted exactly as an asshole would. Sometimes that results in a positive change — they are, as it were, scared straight and take a more thoughtful and careful approach in future interactions. Theory in Use vs. Espoused Theory Our theory of action consists of the ideas, values, and assumptions that guide our actions — what we do and say. Chris Argyris saw the difference between how people view themselves and how others view them as the result of having two theories of action. One is our theory in use — what we do and how it appears to an impartial observer — and the other is our espoused theory — our view of what we are doing and how we would describe it to others. For some, these two diverge considerably. I once worked for an extremely controlling manager who thought he gave free rein to his subordinates and supported their independence when in fact he continually checked on them and required that they clear any decisions with him. Argyris studied management and organizations and found that the more a person ascended in the hierarchy of an organization, the more frequently theory in use and espoused theory diverged, because the higher a person’s position, the less likely their subordinates will provide frank and honest feedback (one of the ways that power corrupts). His book Increasing Leadership Effectiveness describes his experience with a small group of young CEOs. Among other things, he helped them give each other the frank and honest feedback they no longer got from subordinates. For example, they pointed out to each other instances in which their actions contradicted their statements of values. And in Theory in Practice: Increasing Professional Effectiveness, Argyris and Donald Schön apply those ideas directly to the practice of education. The Adaptive Unconscious vs. the Constructed/Conscious Self Our adaptive unconscious and our conscious self each have a personality — characteristic behavior and responses — and the two personalities are relatively independent. Our espoused theory is what our conscious self does when it is making conscious decisions; our theory in practice is what our adaptive unconscious has us doing from habit — unconsciously. Timothy Wilson’s excellent book Strangers to Ourselves: Discovering the Adaptive Unconscious provides a good description of the unconscious and what we have learned about it. Wilson notes: The adaptive unconscious is more likely to influence people’s uncontrolled, implicit responses, whereas the constructed [conscious] self is more likely to influence people’s deliberative, explicit responses. For example, the quick, spontaneous decision of whether to argue with a coworker is likely to be under the control of one’s nonconscious needs for power and affiliation. A more thoughtful decision about whether to invite a coworker over for dinner is more likely to be under the control of one’s conscious, self-attributed motives.
How to take conscious control It’s a good idea to learn the personality of your adaptive unconscious, since it directly affects how people view you. That is, it’s a good idea to know the theory of action you (unconsciously) express in practice and whether (and how much) that differs from your espoused (conscious) theory of action. So how can you do that? As Argyris demonstrated, frank and honest feedback, preferably with the guidance of a skilled facilitator, is one way — and group therapy is an example of that. That’s not always available, and in any case, it’s good to know what you can do on your own. Wilson offers this observation: Because people cannot directly observe their nonconscious dispositions, they must try to infer them indirectly, by, for example, being good observers of their own behavior (e.g., how often they argue with their coworkers). How important is this kind of insight? It doesn’t have to be perfect, because some positive illusions are beneficial. However, it is to people’s benefit to make generally accurate inferences about the nature of their adaptive unconscious. The Moral-Mirror Journal can help you learn the personality of your adaptive unconscious. This journal, written from a bystander or onlooker’s point of view, describes your interactions with others. Don’t include in the journal your thoughts at the time or your reasons for what you did. Record only what was said and done: an objective account. When you review it after some time has passed, you may not be able to recall your thoughts and reasons, so you will have to judge what happened just as others do: based only on your words and actions. Over time, patterns will emerge, and in those patterns, you will see the personality of your adaptive unconscious and the theory in practice that you use. You may find that these differ from what you thought, and from that, you can get guidance for real change. If you continue the journal, you can judge the effectiveness of the change.
https://medium.com/age-of-awareness/your-hidden-personality-and-how-it-guides-you-b10eae3ae45c
['Michael Ham']
2020-12-26 01:03:43.757000+00:00
['Self Development', 'Life Lessons', 'Psychology', 'Journaling', 'Self Improvement']
Wool Weekly — Volume 13
It has been a very busy week at Wool Digital; we have pitched for new projects, conducted interviews, and attended some insightful networking events. Find out more about what has been happening this week here.
https://medium.com/wool-digital/wool-weekly-volume-13-374782440a0d
['Emma Cookson']
2017-11-02 19:39:53.556000+00:00
['Startup', 'Digital', 'Manchester', 'Tech', 'Digital Marketing']
How to find if the given Strings are Isomorphic?
How to find if the given Strings are Isomorphic? Day 48 — 100 Days to LinkedIn, Yahoo, Oracle Photo by Marc-Olivier Jodoin on Unsplash Out of Free Stories? Here is my Friend Link. 100 Days to LinkedIn, Yahoo, Oracle

Introduction

Hey guys, today is day 48 of the 100 Days to LinkedIn Challenge.

Free For Kindle Readers

If you are preparing for your interview, or even if you are settled in your job, keeping yourself up-to-date with the latest interview problems is essential for your career growth. Start your prep from here! Last month, I researched the most frequently asked problems from these companies. I have compiled 100 of these questions. I am not promising that you will get these exact questions in your interview, but I am confident that most of these “interview questions” share similar logic and employ the same way of thinking as this set of challenges. Before we move on to the first problem: if you are wondering why I chose LinkedIn, Yahoo and Oracle over FAANG, it is because I have already completed a challenge focusing on Amazon and Facebook interviews.

New Day, New Strength, New Thoughts

Day 48 — Isomorphic String🏁

AIM

Given two strings s and t, determine if they are isomorphic. Two strings are isomorphic if the characters in s can be replaced to get t. All occurrences of a character must be replaced with another character while preserving the order of characters. No two characters may map to the same character, but a character may map to itself.

Example🕶

Input: s = "egg", t = "add" Output: true

Follow House of Codes for keeping up to date in the programming interview world.

Code👇 (a runnable sketch appears at the end of this article)

Edge Cases

Check if the lengths of the two given strings are unequal. If so, you can directly say that the strings can’t be isomorphic.

Algorithm

Create 2 hashmaps. Traverse the strings. At each index, add the first string’s character as the key and the second string’s character as the value in map1, and do the vice versa in map2. At each index, also check consistency: if a key-value pair already exists, verify that the current two characters match the stored mapping, since a character cannot be mapped to more than one character. Return True at the end of the traversal🔚

Complexity Analysis

Time Complexity: O(n), where n is the length of the strings; each character pair is examined once. Space Complexity: O(1) extra space, since the two maps hold at most one entry per distinct character (bounded by the alphabet size).

Thanks for Making this #1 New Release 🖤

Further Reading

4 Incredibly Useful Linked List Tips for Interview Top 25 Amazon SDE Interview Questions Do you think you really know about Fibonacci Numbers? 9 Best String Problems Solved using C Programming One Does not Simply Solve 50 Hacker Rank Challenges

End of the Line

You have now reached the end of this article. Thank you for reading it. Good luck with your Programming Interview! If you come across any of these questions in your interview, kindly share it in the comments section below. I will be thrilled to read them. Don’t forget to hit the follow button✅to receive updates when we post new coding challenges. Tell us how you solved this problem. 🔥 We would be thrilled to read them. ❤ We can feature your method in one of the blog posts. Want to become outstanding in java programming? A compilation of 100 Java(Interview) Programming problems which have been solved. (Hacker Rank) 🐱‍💻. This is completely free 🆓 if you have an amazon kindle subscription.
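As a quick sketch of the approach above: the series targets Java interviews, but the two-hashmap logic is language-agnostic, so here it is in Python (the helper name is mine):

def is_isomorphic(s: str, t: str) -> bool:
    # Strings of different lengths can never be isomorphic.
    if len(s) != len(t):
        return False
    s_to_t, t_to_s = {}, {}
    for a, b in zip(s, t):
        # Any existing mapping in either direction must match the current pair.
        if s_to_t.get(a, b) != b or t_to_s.get(b, a) != a:
            return False
        s_to_t[a] = b
        t_to_s[b] = a
    return True

assert is_isomorphic("egg", "add") is True
assert is_isomorphic("foo", "bar") is False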
https://medium.com/dev-genius/how-to-find-if-the-given-strings-are-isomorphic-80206255434a
['House Of Codes']
2020-12-14 17:31:00.904000+00:00
['Programming', 'Coding', 'Java', 'Software Development', 'Interview']
A little fun with Tkinter
Logic trumps all

Under the hood, the program is really not that complicated. There are two simple Python scripts: one that handles the interface behavior using the tkinter module, and a second “backend” script that creates the enemy and player classes used by the program. Let’s take a look at the backend script first. The program is run by a simple Character() class. This class deals with the main attributes of any given character, such as their features (strength, agility, intelligence, etc.) and the basic methods of attack and defense, as outlined in the sketch further below. The Character class is, in fact, a mother class that allows us to create enemy and player class objects with ease. In this way, we can have 3 different player classes (Combat Operative, Tech Operative and Covert Operative) just by adjusting the base parameters of our Character class. Note that Enemy is just another instance of Character with different stats. The methods attack() and defend() are simple and based on the stats for each class. The attack method will favor the player classes with higher agility and strength (the base damage is a function of the player’s strength), while the defense favors the constitution stat. The idea behind these rules is to create a balance between the different player classes and the enemy, so it is not too easy or too hard for them to beat each other. The second script controls all the interface aspects by making use of tkinter. Here, we will find mirror images of the attack() and defend() functions (attack_press() and defend_press()), but this time dealing with the events that occur after a button press. The whole screen is handled through the use of label frames, neatly allowing the arrangement of the different consoles, the canvas to show the player and enemy graphics, and the action buttons. Now let’s take a closer look at attack_press():

def attack_press():
    enemy_init_health = enemy.health
    damage = player.attack()
    enemy_health = enemy.defend(damage)
    absorbed_damage = (damage - (enemy_init_health - enemy_health))
    Action_button = ttk.Button(player_control, text="Action", command=command_action)
    Action_button.grid(column=0, row=0)
    Action_button.configure(state="disabled")
    Flee_button = ttk.Button(player_control, text="Flee")
    Flee_button.grid(column=0, row=1)
    Flee_button.configure(state="disabled")
    Guard_button = ttk.Button(player_control, text="Guard")
    Guard_button.grid(column=1, row=0)
    Guard_button.configure(state="disabled")
    Pass_button = ttk.Button(player_control, text="Pass", command=enemy_attack)
    Pass_button.grid(column=1, row=1, sticky=tk.W)
    player_message = ttk.Label(action_console, text="Player Attacks! deals %s damage" % damage)
    player_message.grid(row=0, column=0)
    enemy_hp = ttk.Label(enemy_stats, text="Hp: %s" % enemy_health)
    enemy_hp.grid(row=0, column=0)
    enemy_message = ttk.Label(enemy_console, text="Enemy absorbs %s damage" % absorbed_damage)
    enemy_message.grid(column=0, row=1)
    if enemy.health == 0:
        mBox.showinfo("Python Message Info Box", "You've won, restart the simulator!!!")

In the first few lines, the Character attack() method is invoked, and with it we calculate the base damage dealt by the player to the enemy. The enemy will, in turn, invoke the defend() method to calculate how much damage, if any, is absorbed.
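For reference, here is a minimal sketch of what such a Character mother class might look like. The stat names come from the article itself, but the constructor signature and the damage/absorption formulas are my assumptions rather than the author’s original code:

import random

class Character:
    """Mother class for the player classes and the Enemy alike."""

    def __init__(self, name, strength, agility, intelligence, constitution, health):
        self.name = name
        self.strength = strength
        self.agility = agility
        self.intelligence = intelligence
        self.constitution = constitution
        self.health = health

    def attack(self):
        # Base damage is a function of strength; agility adds some variance.
        return self.strength + random.randint(0, self.agility)

    def defend(self, damage):
        # Constitution absorbs part of the incoming damage; health floors at 0.
        absorbed = min(damage, self.constitution // 2)
        self.health = max(0, self.health - (damage - absorbed))
        return self.health

With stats like these, a Combat Operative would simply be a Character created with high strength and constitution, while a Covert Operative would trade those for agility.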
When the player is done attacking, it must hit the ‘Pass’ button, giving the enemy a chance to attack (this occurs with the enemy_attack() method). While the enemy_attack() method also invokes the Character attack() and defend() methods, it must also reset the state of the buttons used by the player so that a new run can be performed. The player will constantly hit attack and pass until either the enemy or the player runs out of HP, at which point the program throws a little alert. And that’s it. What’s fun about this program is that its concept can be developed for bigger simulations, and with a polished UI it could make a fairly decent RPG. I find that, most of the time, the difficulty lies in balancing the game’s rules, so maybe one could pick up some tabletop RPG’s rulebook and bring its rules to life in the computer. By using Python’s libraries, the development possibilities are varied and quite extensive indeed. I have more ideas for game development but, as usual, this takes some time as I mostly focus on Analytics and Artificial Intelligence, which are quite extensive topics in and of themselves. However, sometimes I take a break from the more academic stuff and write up some development ideas in my notebook, and eventually I try to find the time to put them together. Until the next one
https://josebsg75.medium.com/a-little-fun-with-tkinter-e2ef2f499721
[]
2020-08-17 19:38:32.708000+00:00
['Python', 'Tkinter', 'Rpg']
“She Has Unstable Relationships:” The Gut-Wrenching Moment That Confirmed My Mother’s Mental Illness
Photo by Riccardo Mion on Unsplash My therapist, Dr. D., knew after our first dozen hours or so discussing my unstable childhood. He looked me square in the eyes. “Amy, it sounds like your mother has Borderline Personality Disorder.” “I don’t know what that is. A mental illness of course?” “The way she rages at you, her terror of being abandoned, how she tries to force her shame on you.” “Ew, don’t say it like that! Gross.” He laughed. He understood there are certain words and phrases I can’t stand to hear. Anything that smacks of boundary invasion, for one, that reminds me of a mother who stands too close, or who won’t leave the room when you need to change into your pajamas. And words like penetrate. Yuck. Fantasy. Blech, too inexplicable and personal. And the worst two: desire, and its sister word, longing. It’s painful just to write those words. I’m fighting the urge to jump up and down to shake them out of me. “You might want to consider researching it,” he said. “It may help you understand her behavior in a more helpful context.” “I really don’t want to understand her, you know. I want to understand me, that’s fucking hard enough.” D. gave me that look, his eyes bright with a smile that’s bemused but somehow not condescending. Maybe because he knows I’m just about to get it. “Don’t tell me one leads to the other! I don’t want that!” “Okay, Amy.” Of course I went home and looked up Borderline Personality Disorder, or BPD. Within a half hour I called my sister. “Jess!” “Yeah?” “Mom has Borderline Personality Disorder!” “She has what now?” “Listen!” I read the symptoms from the web site for NAMI, the National Alliance on Mental Illness: Regular fits of rage. Frantic efforts to avoid abandonment. Unstable relationships. Distorted self-image. Impulsive, sometimes dangerous behavior. Intense depression and anxiety. Thoughts and threats of suicide. Feeling empty. Uncontrollable anger. Dissociative feelings. Splitting — veering from “I love you” to “I hate you” quickly and frequently. “Sounds on the nose to me,” I said. “What do you think?” “Holy crap. She has something, clearly, but… holy crap. She’s definitely a narcissist, that we already knew.” “Apparently the two sometimes coexist. Put all that shit together and it’s Mom.” Jess and I had often talked about Mom’s proclivity to love you and hate you in the same day. I was either an angel or a demon, and I never knew what I’d be in any given hour. And what triggered the change in Mom’s view was usually a mystery to me. Sometimes Mom would retreat to her room and stay in bed for days, eating toast with margarine and jam that Dad brought her. At some point, Dad would round up his three kids — me, Jess, and our older brother, Jack — make us file into the bedroom, and apologize to Mom for making her so depressed. We would each say “I’m sorry” and file back out, wondering what we were sorry for. My stomach hurt all the time. I tried to be helpful around the house and otherwise invisible by staying in my room to read. Nothing, however, could prevent her frequent rages, a repetition of shouted accusations about my worthlessness: I was spoiled. Ungrateful. Lazy. I didn’t know what it meant to suffer. I had no idea how good I had it or what she and Dad had sacrificed for me. The moments when she approved of me — asking me to play piano while she cleaned the kitchen, praising an essay I’d written for English class — were as unpredictable as her rages. If something happened at school that upset me, she might understand.
The exact thing would happen again and she’d yell at me. I had to learn to take it, she’d say. Stop being bothered by everything. Stop being so damn sensitive. I was sensitive, very. I still am and always will be. But I learned to fake it, to wear a hard shell, never cry or complain, laugh when I was hurting. By the time I was in college, I rarely cried, and I learned to keep my face still even as she raged. The more my sister and I discussed the possible BPD diagnosis, the more we believed it. Still, there were things we couldn’t find in the host of symptoms listed for BPD: Mom’s obsession with germs and cleaning, but how she told us we should eat the cereal even when fleas got inside the box. How things went missing from our rooms never to be seen again— certain clothes, toys, art projects. Her accusing me of “sleeping around” in high school, claiming I would get pregnant and embarrass her, even though I hadn’t yet had sex. I didn’t bother defending myself. By then, I’d learned that contradicting her would spark an argument I couldn’t win. She’d never believe me. Then came the day, some years later, when I began to write my memoir about surviving her abuse and my father’s enabling behavior. By then, Jess and I had estranged ourselves from our parents, who afterward made no effort to contact us or find out why. I heard through others that Mom was telling everyone “Amy destroyed the family.” I decided to research BPD by reading several books about it, something I’d refused to do until that point, concerned that it would trigger too much emotional pain. Now I felt it was a necessary part of understanding and portraying my mother on the page, to better understand her and flesh her out as a whole human being. The bombshell came as I was reading Kimberlee Roth’s Surviving the Borderline Parent. It felt as if I were reading a description not of a mental illness, but specifically about my mother: …they may seem to repeatedly be at the center of conflict when it arises. They may have a hard time respecting others’ boundaries and may ruin a child’s cherished possessions, give away or euthanize a child’s pets, or withhold affection or care… Euthanize a child’s pets. Blood drained from my face as I grew suddenly numb with shock. I thought that was just my Mom. When I was eleven, I came home from school and wondered why my miniature red dachshund, Hansel, wasn’t waiting at the door, tail wagging, to greet me. I walked into the kitchen looking for him and found my mother washing dishes at the sink, her back to me. “I had to take Hansel to the vet today and have him put down,” she said. “He was getting too mean. I was afraid he might hurt somebody.” She never turned to look at me. I went to my room and collapsed on my bed in sobs. Pain shredded my stomach. My eyes grew swollen but I couldn’t stop crying. An hour later, my father came home and sat on the edge of my bed. He adjusted his glasses as he spoke. “Amy, I know this is hard for you. But imagine how much harder it is for Mom. She’s the one who had to take him to the vet. You’re making her feel worse by being so upset.” By this age, I was already primed to value my mother’s feelings more than my own, more than anyone’s. Guilt crept inside me and I stopped crying. I stayed in my room until the next morning, missing dinner. To this day I have never cried again about Hansel. It’s as if that part of my psyche is locked up tight and there’s no key. I called Jess. “So I learned this thing and it’s freaking me out.” I read the passage to her. 
Jess was quiet for a few moments before replying. “I have to tell you something.” “Okay.” “It’s going to make you mad.” “Okay…” Silence. “Jess, I want to know,” I said. “I don’t care if something makes me mad or whatever, I’d rather not be in the dark.” She let out a breath. “Right. Well… do you know how Sugar died?” Sugar was a kitten Dad had brought home for me in high school. “They told me she ran away,” I said. “But it’s something else? Oh god — did that Doberman next door get her?” “No…” “She got lost in the swamp mess across the street?” We’d lived in a rural area with a lot of undeveloped land. “No. Dad… Dad shot her with that handgun.” “What? Are you… you’ve got to be kidding me.” “Mom told him to do it.” Now it was my turn to be silent. “I found out because I was visiting them a couple years ago, and Dad said something,” she said, “some joke about shooting the cat for Mom. He was drunk. Mom kicked him under the table to shut him up.” Jess was right — it made me mad. Anger spread through me. My cheeks burned with rage and other memories flooded my mind — the two cats we had when I was very small, who didn’t come with us when we moved. My cat Spunky who “ran away.” My cat Butterscotch who “ran away.” Another cat, Puffinstuff, who “ran away.” They said Sugar “ran away.” And Hansel, the sweetest, most amiable dog, who “might hurt someone.” My mother wasn’t just a BPD/Narcissist with weird, unique behaviors. She was textbook. She fit the diagnosis like a perfectly-sized dress. And now I had to confront the fact that she’d most likely had all my pets put down. I assured Jess that it was good she told me, that I was glad, and it was true. This horrible new knowledge struck me as so cruel that even I couldn’t find a way to rationalize it. So what if Mom had a mental disease? No half-decent parent would do that to her kid. Anger is a good emotion. It’s clarifying. It’s cleansing. And feeling the heat of it, the way it traveled up through my stomach and throat and face, was a relief. I can and do sometimes feel sympathy for my mother — her childhood was no picnic, and I’m sure she was often in a lot of pain. But taking away my pets required thought and planning; it offered opportunity for reflection. It was purposeful. She wanted to hurt me. Her actions devastated me and there’s no excuse. And now I know. I also realize how much I don’t know. How many lost things were really stolen things? When she “accidentally” ironed over the decal on my favorite t-shirt, was that really an accident? As always, whatever triggered my mother’s rages and hurtful behavior is mostly a mystery. But each day, I’m more convinced that it may not have been my fault. _______________________ Quote Source: Kimberlee Roth. Surviving a Borderline Parent: How to Heal Your Childhood Wounds and Build Trust, Boundaries, and Self-Esteem (Kindle Locations 334–336). Kindle Edition.
https://amygrierwrites.medium.com/she-has-unstable-relationships-the-gut-wrenching-moment-that-confirmed-my-mothers-mental-12d42801ddd9
['Amy Grier']
2020-03-31 18:40:34.177000+00:00
['Borderline Personality', 'Mental Health', 'Depression', 'Narcissism', 'Motherhood']
Could CA’s Internet Privacy Law Help Save Journalism?
SACRAMENTO, Calif. — Journalists and internet privacy activists are speaking out in the wake of a recent hearing on big data and privacy in Washington, D.C., arguing that tech companies’ ad practices are hollowing out the news business and even threatening democracy. They contended the problem is that companies like Google and Facebook use people’s personal data to control 60% of the digital advertising market. Brian O’Kelley, who invented the back-end system that supports digital ads, said big tech can out-compete news sites for ad dollars because they monetize users’ every move online. “Google and Facebook specifically are really aggressive with how they use our personal data,” O’Kelley said. “And one reason they can make so much money in advertising is because of the data they have that the fragmented, local newspapers and news stations just don’t have.” Thousands of journalists have been laid off nationwide this year, and a University of North Carolina study found more than 1,800 local newspapers have closed since 2004. The Los Angeles Times and San Diego Union Tribune have both seen waves of layoffs in recent years. Perhaps ironically, both papers got a lifeline when tech billionaire Patrick Soon-Shiong purchased them in 2018. Data privacy activists who would like to loosen big data’s grip urged Congress to pass a federal version of California’s Consumer Privacy Act of 2018. The law forces tech companies to tell people what kinds of data they are collecting, and delete it on request. It also allows users the right to opt out of having personal information collected, with no penalty. Laura Bassett, a freelance journalist formerly with the Huffington Post, said she wants Congress to break up — or at least better regulate — Facebook and Google, so news companies can recapture the ad revenue to fund their reporting efforts. “We create the content, write the stories. Facebook takes that content for free and posts it on their site and then takes all of the advertising money that would otherwise be going to us,” Bassett said. “And suddenly, we’re not getting paid for the work that we’re doing, and having to lay off journalists and unable to financially survive.” Bassett pointed out a vibrant press helps ensure a free society by investigating corruption, explaining the impact of government policies and fostering a sense of community.
https://medium.com/save-journalism/could-cas-internet-privacy-law-help-save-journalism-c4b33470631c
['Save Journalism']
2019-05-29 14:24:32.899000+00:00
['Privacy', 'Big Tech', 'News', 'Journalism', 'Local News']
DeepMind AI Predicts Traffic on the Roads
Making models generalise through customised loss functions

Conceptual Idea

The idea behind DeepMind’s traffic prediction work lies mostly in making models generalise through custom-built objective functions. While the ultimate goal of DeepMind’s system is to reduce errors in travel estimates, they discovered that using a weighted linear combination of multiple loss functions greatly increased the generalization ability of the model.

Specifically, we formulated a multi-loss objective making use of a regularising factor on the model weights, L_2 and L_1 losses on the global traversal times, as well as individual Huber and negative-log likelihood (NLL) losses for each node in the graph. By combining these losses we were able to guide our model and avoid overfitting on the training dataset. While our measurements of quality in training did not change, improvements seen during training translated more directly to held-out test sets and to our end-to-end experiments.

Future Research

DeepMind is now focusing on examining possible applications of the MetaGradient approach for altering the composition of the multi-component loss function during training, using the reduction in travel estimate errors as a guiding metric. This work is inspired by the MetaGradient efforts that have found success in reinforcement learning, and early experiments show promising results.
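To make the quoted objective concrete, here is an illustrative sketch of such a weighted multi-component loss in PyTorch. This is not DeepMind’s code: the weighting coefficients, the Gaussian form of the NLL term, and all of the names are assumptions made for the example.

import torch
import torch.nn.functional as F

def multi_loss(pred_total, true_total, node_pred, node_true, node_logvar,
               params, w_l2=1.0, w_l1=1.0, w_huber=1.0, w_nll=1.0, w_reg=1e-4):
    # L2 and L1 losses on the global traversal times.
    l2 = F.mse_loss(pred_total, true_total)
    l1 = F.l1_loss(pred_total, true_total)
    # Per-node Huber (smooth L1) loss on the node-level travel times.
    huber = F.smooth_l1_loss(node_pred, node_true)
    # Per-node Gaussian negative log-likelihood with a predicted log-variance.
    nll = 0.5 * (node_logvar + (node_true - node_pred) ** 2 / node_logvar.exp()).mean()
    # Regularising factor on the model weights.
    reg = sum((p ** 2).sum() for p in params)
    return w_l2 * l2 + w_l1 * l1 + w_huber * huber + w_nll * nll + w_reg * reg

Tuning the w_* coefficients by hand is exactly the kind of composition change that the MetaGradient approach mentioned above could automate during training.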
https://medium.com/ai-in-plain-english/deepmind-neural-network-predicts-traffic-on-the-roads-b292ad4bcdfa
['Mikhail Raevskiy']
2020-09-22 13:01:14.397000+00:00
['Machine Learning', 'Data Science', 'Neural Networks', 'Artificial Intelligence', 'Deep Learning']
Anaconda is bloated — Set up a lean, robust data science environment with Miniconda and Conda-Forge
While it is possible to get started doing data analysis within the base environment, I recommend creating a completely separate environment.

A note on directory path

For the rest of this tutorial, I will use miniconda3/bin and miniconda3/condabin as shortcuts referring to the full path to these locations. If you followed the installation instructions from above, the full path will look similar for all operating systems. The full path for miniconda3/bin will be one of the following:

- Windows — C:\Users\<UserName>\miniconda3\bin
- macOS — /Users/<UserName>/miniconda3/bin
- Linux — /home/<UserName>/miniconda3/bin

Deactivating the base environment

By default, the base environment will always be active upon opening the terminal. Specifically, the miniconda3/bin directory will be added to your path and allow you to start python and all the other programs listed above. In my opinion, this isn't good practice and it's better to explicitly activate the environment. We can change conda’s configuration settings so that it does not automatically activate the base environment upon opening of the terminal. On the command line run the following:

conda config --set auto_activate_base false

Exit the shell, re-enter it, and output the path again. You should notice that only the miniconda3/condabin directory is on your path, and not the other one. This command seems to have no effect on Windows Anaconda Prompt. Windows users will have to manually deactivate their base environment with conda deactivate. Also, notice that ‘(base)’ is no longer prepended to the prompt.

Activating the base environment

You can reactivate the base environment with the following command:

conda activate base

This will prepend the miniconda3/bin directory to your path and add '(base)' to the prompt. You can deactivate it again with the command:

conda deactivate

Creating a new environment just for data analysis

While it’s possible to use the base environment to do all of our data science work, we will instead create a new environment where all of the packages are installed from the conda-forge channel. But, before we do that, it’s important to understand what a conda channel is.

Conda Channels

A conda channel is simply a repository of Python packages. There are dozens (if not hundreds) of channels available, each with their own collection of Python packages. Whenever you install a new package using conda, its contents will come from exactly one channel. By default, conda will only install from the defaults channel. You can verify that the defaults channel is the only one available by running the following command:

conda config --show channels

All channels have at least one URL where the repository is located. You can find it with the following command:

conda info

The above results are from my macOS. Linux and Windows channel URLs will look very similar. Notice that there are multiple URLs for this one channel. There’s even a URL for R packages, which seems bizarre, but conda is not a tool just for managing Python packages. It is a general-purpose package manager that can work with any other programming language. Navigate to one of the URLs and you will see a list of the packages available to download. The default channel contains a hand-picked list from the team at Anaconda of popular and powerful packages to do scientific computing.
However, there are many thousands of packages that exist that are not available in the defaults channel. This is where the conda-forge channel becomes important.

The conda-forge channel

Anaconda, the company, allows anyone to create a channel and will host these packages in the Anaconda Cloud. You can create an account right now and start your own channel with your specific collection of packages. conda-forge is the most popular channel outside of the defaults and contains many more packages. My recommendation at the time of this writing is to install packages only from conda-forge (if possible) and not from the defaults. The reasons for this are described in the conda-forge documentation, reprinted below:

- all packages are shared in a single channel named conda-forge
- care is taken that all packages are up-to-date
- common standards ensure that all packages have compatible versions
- by default, we build packages for macOS, linux amd64 and windows amd64
- many packages are updated by multiple maintainers with an easy option to become a maintainer
- an active core developer team is trying to also maintain abandoned packages

One of the main reasons to use a single channel such as conda-forge is the consistency it provides with package compatibility. For packages that have components written in a compiled language like C, compatibility improves when they are all compiled from the same base C library.

Creating our environment

It’s finally time to create the new environment that we will use for data science. There are a few different ways to accomplish this. One way will be shown now, with other alternative ways shown later on.

Create an empty environment

Let’s create an environment with the name ‘minimal_ds’ that has no packages in it, not even python itself.

conda create -n minimal_ds

Confirm its creation by finding its location in your file system at miniconda3/envs/minimal_ds. Any downloads for the environment will be located here. Activate the environment with:

conda activate minimal_ds

By default, this environment will install packages from the defaults channel.

Add the conda-forge channel

Let’s add the conda-forge channel as an option for just this environment. The --env option ensures that conda-forge is added only to our currently active environment.

conda config --env --add channels conda-forge

Confirm that the channel has been added.

conda config --show channels

Running the first command from above will create a configuration file named .condarc in the environment's home directory ( miniconda3/envs/minimal_ds ). You can verify this by outputting its contents to the screen, which will be identical to the output of the previous config command.
cat miniconda3/envs/minimal_ds/.condarc Adding a channel will not remove any previous channels. Instead, it will become the first channel that conda looks in to find packages. Currently, if conda cannot find a package in conda-forge, it will then look in defaults for it. But, if the same package exists in both, then it will choose to install it from the channel with the newest version. For instance, if conda-forge has pandas version 0.23 and the defaults has version 0.24, then conda will install pandas 0.24 from the defaults channel. This behavior is unintuitive to me, and it makes more sense to always use the channel that appears first in the channels list regardless of the version. Conda gives us a way to change this with the following command: conda config --env --set channel_priority strict Let's verify the configuration change. conda config --show channel_priority The .condarc file has also been updated with the same information. Changing this setting will cause conda to always install packages from conda-forge unless they don't exist in it at all, in which case it will look to the defaults channel. There are some packages that only exist on the defaults channel. Installing the packages We are finally ready to install packages into our new environment. conda will only look in the conda-forge channel unless a package is missing and then turn to the defaults. To me, a minimal data science environment has numpy, scipy, pandas, scikit-learn, and matplotlib along with the newest stable version of python (which is 3.7 at the time of this writing). It will also have jupyter notebooks available. Let's start this installation now. conda install pandas scikit-learn matplotlib notebook Notice that python, numpy, and scipy weren't explicitly included in the list of packages to install. These packages are dependencies of at least one of the included packages and will be installed along with many other dependencies. Let's take a look at the packages to be installed before confirming. Conveniently, this list shows the name of the package, the version number, the size, and the channel. If the last column has a blank value, it indicates that the package is being sourced from the defaults channel. Note that matplotlib does have qtconsole (a fairly large package) as a dependency for Windows and Linux systems.
Only installing from conda-forge Our current setup allows for packages not found on conda-forge to be searched for on defaults. It may reduce compatibility issues to use only the conda-forge channel. To ensure that packages only come from conda-forge, you'll have to specify the channel name (with the -c option) together with the --override-channels option. conda install -c conda-forge --override-channels <package_name> Conda always asks for confirmation of the installation after showing you the plan, which allows you to verify the channel before proceeding. Installing packages not in conda-forge As discussed, if a package does not exist in conda-forge, it will be searched for in the defaults channel. If it does not exist in the defaults channel either, then the installation will fail with an error message. You can specify a different channel to use as long as you know its name. The easiest way to find the channel name of a package is to visit anaconda.org and search for it at the top of the page. For instance, plotly_express is not available in either conda-forge or defaults. Searching for it on anaconda.org reveals its channel as plotly. Let's install it by specifying the channel. conda install -c plotly plotly_express Note that the package plotly is a dependency and will also be installed from the plotly channel. If you search for the package plotly, you will see that it is available on conda-forge, but now it's being installed from the plotly channel and not from conda-forge. Any channels provided to the -c option will take precedence over the channels in the .condarc file. Verify that Jupyter Notebooks execute in the correct environment Although we have created our own environment with the ability to create Jupyter Notebooks, we are not guaranteed that they will execute Python in the same environment that they were launched from. For instance, if we launch a Jupyter Notebook from the minimal_ds environment, it is possible that we are executing python from the base environment. This is quite surprising behavior, as you would expect environments to be isolated from one another, but this isn't quite the case. We need to verify that notebooks launched from the minimal_ds environment execute python from the minimal_ds environment and not from anywhere else. With the minimal_ds environment activated, run jupyter notebook and start a new 'Python 3' notebook. In the first cell of the notebook, execute the following two lines of code: import sys sys.executable The result should return a path to the environment python ( miniconda3/envs/minimal_ds/bin ). If it returns the location for the base environment ( miniconda3/bin ) then you aren't executing python from the minimal_ds environment. The cause of this is a 'User' kernel that is masking the environment kernel. Run the following command to see the list of kernels: jupyter kernelspec list Check to see if your python3 kernel is indeed a User kernel. Find the default locations for User kernels for your OS. The User kernel takes precedence over the Environment and System kernels. Yes, that's right, even when you have an active environment, the User kernel takes precedence and allows you to execute python from other environments. There really isn't a need for a User kernel when you are working in an active environment. I recommend removing the User kernel with the following command: jupyter kernelspec remove python3 Rerun the command jupyter kernelspec list and you will see the Environment kernel with the same name (python3).
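If you prefer to automate this verification, here is a minimal sketch of my own (not part of the original tutorial) that compares the running interpreter with the active conda environment. It relies on the CONDA_PREFIX environment variable, which conda activate sets to the root directory of the active environment; run it in a notebook cell or as a script:

import os
import sys

# conda activate sets CONDA_PREFIX to the root of the active environment
conda_prefix = os.environ.get("CONDA_PREFIX")

if conda_prefix is None:
    print("No conda environment appears to be active.")
elif sys.executable.startswith(conda_prefix):
    print(f"OK: {sys.executable} belongs to {conda_prefix}")
else:
    print(f"Mismatch: running {sys.executable}, but the active environment is {conda_prefix}")

If this prints a mismatch inside a notebook, the masking User kernel described above is the most likely culprit.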
You should now launch another Jupyter Notebook and verify that the python executable is located in the active environment. Alternative Environment Creation In the above tutorial, we created an empty environment first and then installed the packages with a separate command. We did this so that we could add the conda-forge channel and set its priority. It is possible to do this in a single step, though this won't permanently change the configuration file. conda create -n minimal_ds -c conda-forge --strict-channel-priority pandas scikit-learn matplotlib notebook Another method for creating an environment is with a text file usually given the name environment.yml . The contents of the file contain the name, channel(s) and packages. The contents of the file that would have created our environment look like this: name: minimal_ds channels: - conda-forge dependencies: - pandas - scikit-learn - matplotlib - notebook We could then run the command: conda env create -f environment.yml An issue with this method is that there is no (current) way to set the channel_priority to strict.
https://medium.com/dunder-data/anaconda-is-bloated-set-up-a-lean-robust-data-science-environment-with-miniconda-and-conda-forge-b48e1ac11646
['Ted Petrou']
2020-11-26 00:15:42.847000+00:00
['Machine Learning', 'Data Science', 'Python', 'Anaconda', 'Miniconda']
This Week In The Economy: France, Germany Propose €500 Billion Recovery Fund, CBO Expects Weak US Jobs Recovery, Fed Worried About Long-Term Damage, Housing Market Skid
This Week In The Economy: France, Germany Propose €500 Billion Recovery Fund, CBO Expects Weak US Jobs Recovery, Fed Worried About Long-Term Damage, Housing Market Skid Welcome to a regular snapshot-review of U.S. and international economic news that aims to 1) provide a window into the challenges and decisions facing businesses today, 2) determine the direction of economic policy — such as the speed at which central banks decide to raise interest rates, and 3) assess what the impact will be for consumers. Confirmed Coronavirus Cases Globally Cross 5 Million Mark The number of confirmed COVID-19 cases around the world has now exceeded 5 million — of that number 2,822,226 are active cases and there have been 333,001 deaths. The United States has over 1.6 million confirmed cases — with 1,204,803 active and 95,087 fatalities. The number of infections in Russia continues to soar, and it is now firmly ensconced in second place behind the United States. Overall, there are 326,448 confirmed COVID-19 cases in Russia, with 223,374 of them active, and there have been 3,249 deaths. Brazil has also experienced a significant spike in its coronavirus infections, with 310,087 confirmed cases. The United Kingdom has 250,908 confirmed cases, 212,948 of them active, and there have been 36,042 deaths. Spain has 280,117 confirmed cases, 100,822 of them active, and 28,001 fatalities. France, Germany Propose Creation of European Recovery Fund In certain parts of the world, governments are beginning a slow transition from fighting the pandemic and limiting the fallout of strict lockdowns, to contemplating recovery efforts to rebuild battered economies. In the European Union, the debate has raged around how much of the financial burden should be shared vs. each nation footing the bill individually. Southern members of the bloc, notably Italy, have pushed for a joint rebuilding effort — using grants, not loans that come with austerity strings attached, while fiscally conservative Northern members such as Austria and the Netherlands are reluctant to loosen their purse strings. A potential breakthrough occurred this week, as German Chancellor Angela Merkel and French President Emmanuel Macron proposed the creation of a €500 billion fund to finance the recovery. "This would not provide loans, but rather budget funding for the sectors and regions hit hardest by the crisis," Merkel said in a briefing to announce the proposal. "We firmly believe that it is both justified and necessary to now provide funding for this from the European side that we will gradually deploy across several European budgets in the future." "We need to take action at the European level so that we emerge sound and stronger from this crisis," she added. However, the proposal will require the agreement of all 27 EU countries, as under it member states would borrow together on financial markets and use the €500 billion to bring financing to the hardest-hit economic sectors and regions. CBO Predicts Steepest Labor Market Deterioration Since Great Depression The U.S. Congressional Budget Office this week released updated economic projections to account for the coronavirus pandemic and the social distancing measures that shut down economic activity. It expects real GDP to contract by 11.2% in Q2, -37.7% compared to the same quarter a year ago, before picking up in the second half of 2020. Consumer spending, a vital engine of the U.S.
economy, is forecast to plunge 11.6% in Q2, with the expectation of a minor rebound as household spending is likely to remain constrained. Still, domestic consumption is expected to offset declines in business investment, which is expected to fall by 15.8% in 2020 (-12.2% in Q2 alone) due to the significant drop in demand. "Furthermore, significantly lower oil prices will hit investment in the oil and gas industries disproportionately hard," the CBO said. On the jobs front, "the labor market is projected to see the steepest deterioration since the 1930s," the report said, with the unemployment rate expected to average 15% in Q2, up from less than 4% in the fourth quarter of 2019. While the CBO does expect labor market conditions to improve more materially after Q3, it warns that "some degree of social distancing is still expected to persist through the third quarter of 2021," placing partial constraints on business activity and the demand for labor. "In addition, the expected pace of labor market recovery is dampened by the prospect that many businesses may not survive the earlier, extended period of revenue loss," it said. It projects a still-high 8.6% unemployment rate by Q4 2021, with "about 3 million fewer people" projected to be in the labor force. This comes as the U.S. Labor Department reported that 2,438,000 new claims for state unemployment benefits were processed last week, bringing the number of Americans who have filed for unemployment since mid-March to roughly 40 million. The number of people continuing to receive unemployment insurance is now more than 25 million, clocking in at 25,073,000 for the week ending May 9, an increase of 2,525,000 from the previous week's level. Federal Reserve Officials Concerned About Long-Term Economic Damage The CBO isn't alone in its concern about prolonged damage from the pandemic to the U.S. economy. The Federal Reserve this week released the minutes from the April 28–29 meeting of the Federal Open Market Committee, the central bank's policymaking body. According to the report, Fed officials believe the effects of the coronavirus outbreak and the ongoing public health crisis "would pose considerable risks to the economic outlook over the medium term." They also expect household spending will be weighed down by a decrease in consumer confidence and a spike in precautionary saving. As for business activity, the Fed officials noted the "particular challenges" being faced by small businesses, and are concerned that a large number may not be able to endure a shock that had long-lasting financial effects. "Participants were further concerned that even after social-distancing requirements were eased, some business models may no longer be economically viable, which could occur, for example, if consumers voluntarily continued to avoid participating in particular forms of economic activity. In addition, participants expressed concern that the possibility of secondary outbreaks of the virus may cause businesses for some time to be reluctant to engage in new projects, rehire workers, or make new capital expenditures," the report added. This could spell even more trouble for an already-worrying employment situation. Fed officials are concerned that temporary layoffs could become permanent, and that unemployed workers could face a loss of job-specific skills or may become discouraged and exit the labor force.
Furthermore, "Participants were additionally concerned that employees who were on low incomes would be the most severely affected by job cuts because they were employed in the industries most affected by the response to the outbreak or because their jobs were not amenable to being carried out remotely." A survey by the U.S. Census Bureau published this week showed that 47% of respondents had either lost employment income themselves or lived with another adult who had lost employment income since March 13. Thirty-nine percent of adults expected that they or someone in their household would lose employment income over the next four weeks. Being unable to pay rent or mortgage on time was reported by 10.7% of adults, while another 3.2% reported they deferred payments. When asked about the likelihood of paying next month's rent or mortgage on time, 21.3% reported only slight or no confidence that they could. Pandemic-Fueled U.S. Housing Market Skid Continues The housing market has not been immune from the effects of the coronavirus outbreak and social distancing efforts, with building and sales activity screeching to a halt. On the supply front, the Census Bureau said the number of building permits plunged 20.8% in April from March, and is down 19% compared to April 2019. Single-family home authorizations were down 24.3%. As for new residential construction, April housing starts fell 30.2% from March and are 29.7% below the April 2019 rate. Single-family housing starts are down 25.4% from March. The rate of home completions is 8% below March and down 12% from the same month a year ago. As for demand, the National Association of Realtors reported home resales dropped 17.8% from March, and are down 17.2% from a year ago. The month-over-month drop is the largest since July 2010. The median existing-home price for all housing types in April was $286,800, up 7.4% from April 2019 ($267,000), as prices increased in every region. This marks 98 straight months of year-over-year gains. "Record-low mortgage rates are likely to remain in place for the rest of the year, and will be the key factor driving housing demand as state economies steadily reopen," the NAR said, adding that "more listings and increased home construction will be needed to tame price growth." Housing inventory at the end of April totaled 1.47 million units, down 1.3% from March, and down 19.7% from one year ago (1.83 million). Unsold inventory sits at a 4.1-month supply at the current sales pace, up from 3.4 months in March and down from the 4.2-month figure recorded in April 2019. Around The Horn Cautious Optimism In Germany — The May ZEW Indicator of Economic Sentiment for Germany recorded an increase for the second time in a row, even as the assessment of the economic situation in Germany continued to decrease slightly. "Optimism is growing that there will be an economic turnaround from summer onwards. This is also reflected in the significant improvement in expectations for the individual sectors … . However, the catching-up process will take a long time," the group said in a statement. Record Drop In UK Retail Sales — April retail sales in the UK — the first full month of lockdown measures — fell by a record 18.1%, following a 5.2% decline in March. All sectors saw a monthly decline in volume sales except for a record increase in sales for non-store retailing at 18.0% and, not surprisingly, a continued increase in sales for alcohol stores at 2.3%.
The share spent online soared to the highest on record in April 2020 at 30.7%, which compares with the 19.1% reported in April 2019. EU, UK Brexit Talks Off To A Rocky Start — The lead negotiators for the EU and UK engaged in a war of words this week, albeit via correspondence, with both sides accusing the other of negotiating in bad faith. In a letter to his counterpart, Michel Barnier, the UK's David Frost accused the EU of offering much less favorable terms compared to free trade agreements negotiated with other countries. "Overall, at this moment in negotiations, what is on offer is not a fair free trade relationship between close economic partners, but a relatively low-quality trade agreement coming with unprecedented EU oversight of our laws and institutions," he said. Barnier countered in his own letter that "We do not accept selective benefits in the Single Market without the corresponding obligations, we also do not accept cherry picking from our past agreements. The EU is looking to the future, not to the past, in these negotiations." South Africa, Turkey Central Banks Lower Interest Rates — The Reserve Bank of South Africa this week lowered its key interest rate to 3.75%, aiming to ease financial conditions and provide support to households and firms against the economic implications of Covid-19. The Bank also eased regulatory requirements on banks to free up more capital for lending by financial institutions. Turkey's central bank reduced its target interest rate from 8.75% to 8.25%, citing the "crucial importance to ensure the healthy functioning of financial markets, the credit channel and firms' cash flows."
https://medium.com/discourse/this-week-in-the-economy-france-germany-propose-500-billion-recovery-fund-cbo-expects-weak-us-3847c02417b4
['Brai Odion-Esene']
2020-05-22 16:54:30.132000+00:00
['Federal Reserve', 'Macroeconomics', 'Monetary Policy', 'European Union', 'Coronavirus']
Data Science in the Film Industry Part 1: What is my Preference?
On the surface, the film industry seems pretty easy to understand. Viewers only get a glimpse of the actors on their screens. Audience members rarely think about the inner workings and the complex processes hidden behind the big screen. There are multiple layers within the industry that power its success. Big data is a useful tool for Data Scientists working in the film industry. Data Scientists have a key role in ensuring triumph with this data by collecting relevant samples and analyzing important trends to gauge the preferences of the public. Based on the data collected, companies can predict customer preferences and viewing habits. Many streaming platforms dread the day when their customers will get bored of the recommended content. Continuous and repetitive content might cause people to turn to another source after a while. To prevent this, companies are working to improve their streaming algorithms and relying on recommender systems. Big Data and Recommender Systems Data Scientists in prominent production companies (Amazon, Netflix, Hulu) analyze the trends in the data to understand the viewing preferences of the general public and cater to their predilections. Big Data is commonly gathered by combing through sites on the internet and public data reserves to get accurate information. Recommender systems use this data to gauge the public's preferences. Recommender systems are "algorithms aimed at suggesting relevant items to users". There are two parts to this system: collaborative filtering and content-based filtering. Figure 1. Collaborative and content-based filtering methodologies. Source: Towards Data Science A. Collaborative Filtering The collaborative method relies on the relationship the user had with past options. These interactions are then stored in the "user-item interactions matrix", displayed below. Figure 2: The user-item interactions matrix; Source: Towards Data Science The user preference is expressed using two categories. Explicit Rating refers to a value on a scale, such as the number of stars one would give to a movie they watched. Implicit Rating documents the user's activity, such as page views, number of clicks, purchase records, or whether or not they listened to a certain music track. The information stored in the matrix is then used to determine which ratings match which user. Collaborative filtering uses the Nearest Neighborhood algorithm. For user-based collaborative filtering, there is a matrix of ratings with dimensions (a × b), where a denotes the number of user IDs and b denotes the number of items. If a given target user did not watch or rate a certain item, we can still predict that target user's rating of it. To do this, we need to compute the similarities between all the users and our target user. After that, we gather the top X similar users, and then take the weighted average of the ratings from the X users, with similarities as weights. People sometimes give ratings that are consistently higher or lower than their true intended value. To prevent this phenomenon from skewing the data, subtract every user's average rating for all the items when calculating the weighted average, and then add it back for the target user. This is the idea of the Nearest Neighborhood algorithm. Item-based CF is when two items are said to be similar because they received similar ratings from the same users. The recommendation system first finds the similarities between the item pairs and then proceeds to the model-building stage.
Linear regression and weighted sum are used by the system during the calculation. Linear regression is used to determine the relationships between the rating habits of different users. The system analyzes the items the user has rated, checks the similarities between the items, and then creates a recommended list. B. Content-Based Filtering Content-based filtering recommends items that align with the user's preferences based on their ratings from the past, specifically higher ones. As the user continues to rate items, the algorithm becomes more and more accurate at predicting the user's preferences. It is easier to narrow down the choices based on the information stored. The benefit of content-based filtering is that it does not require data about other users because this type of filtering is user-specific. This method can also help understand the user's specific interests, so it can recommend content that, perhaps, only a few other users consume. It can even work when a product does not have any reviews. The represented content also varies, which opens up options to other approaches such as text processing techniques. A limitation is that the model can only recommend content using current interests rather than understanding the interests on a broader scope. This tends to create a filter bubble, a phenomenon in which only certain types of content are recommended to the user by the algorithm. For example, if the user is currently interested in action movies, the majority of the suggested movies will be action movies. Variety is out of the question in this case. Repetitive content can bore the user, which leads them to turn to another option. Companies are trying to overcome this hurdle and make the approach more effective in the long run. Case Study: Netflix Netflix has used recommendation systems to understand its customers' preferences. Upon creating an account, users are told to select a few titles that interest them. These titles are used to "start" running the algorithm that matches content based on the user's interest. Those who do not choose any titles will start off at square one as they are picking shows. Netflix adds tags to each work that summarize the main parts of the title, such as "nostalgic drama" or "romantic comedy." Figure 3: Users usually scan vertically rather than horizontally. Source: Netflix: Binging on the Algorithm Netflix's new recommendation system, which uses content-based filtering, relies on image data, the "covers" the user sees when browsing titles. Netflix uses a framework that uses big data to ultimately decide what images work for each user. After conducting many experiments about user preferences, it found that certain emotional ranges appeal to the users. Netflix's algorithm essentially works to display a show's cover page that reflects aspects, emotional ones in particular, of the show that are important to the user. A. Variation of a Recommender System Netflix found a way to improve the audio and visual quality of its content to completely immerse viewers in what they are watching. Predictive caching is used to permit a video to increase its speed or play at a higher quality. If a viewer is watching a television series, for example, the following episode will be partly cached. Fortunately, all that research and hard work has paid off.
Netflix’s strategy of frequently providing new, varying content and undeniable experience in the entertainment industry makes it a formidable competitor. Netflix profits have increased by more than 30% since 2015, with its revenue being a staggering $16.614 billion yearly. Customer Experiences While Big Data is useful to increase sales, it also helps determine the problem areas in a company. If customers are having difficulties with a particular brand, they will most likely change to another option. Even the most devoted customers cannot tolerate more than one unfortunate incident. In a survey by PricewaterhouseCoopers, 59% stated that they will change to a different brand after multiple terrible experiences, and 17% after just one terrible experience. Figure 4: Consumer Interaction after bad experience(s); Source: Bad experiences are driving customers away — faster than you think As seen in the figure above, even one experience can discourage a certain percentage of customers. Several bad incidents do lead to a significantly higher percent of displeased people. The adoption of Big Data and more effective recommendation systems help companies mitigate the problem areas to ensure positive customer responses. Although there are many places to improve, companies such as Netflix are in a good position. These companies will continue to innovate and develop their technology to increase the percentage of satisfied customers. Recent developments in recommendation systems have already led to much success because of the system’s ability to understand their user for the most part. The system will continue to improve its accuracy as time goes on.
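To make the user-based collaborative filtering from section A concrete, here is a minimal sketch of my own (toy ratings and hypothetical variable names, not code from Netflix or any other platform). It mean-centers each user's ratings, scores user similarity with cosine similarity, and predicts a missing rating as a similarity-weighted average with the target user's own mean added back:

import numpy as np

# Toy user-item rating matrix (rows = users, columns = items, 0 = not rated)
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

rated = R > 0
user_means = R.sum(axis=1) / rated.sum(axis=1)          # each user's average rating
centered = np.where(rated, R - user_means[:, None], 0)  # mean-centered ratings

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def predict(target, item, k=2):
    # Similarity between the target user and every other user who rated the item
    sims = np.full(len(R), -np.inf)
    for u in range(len(R)):
        if u != target and rated[u, item]:
            sims[u] = cosine(centered[target], centered[u])
    top = np.argsort(sims)[-k:]            # indices of the top-k most similar raters
    top = top[np.isfinite(sims[top])]
    weights = sims[top]
    if len(top) == 0 or np.abs(weights).sum() == 0:
        return user_means[target]
    # Weighted average of centered ratings, then add the target's own mean back
    return user_means[target] + (centered[top, item] @ weights) / np.abs(weights).sum()

print(round(predict(0, 2), 2))  # predicted rating of user 0 for item 2

Production recommenders of course operate on millions of users with sparse matrices and approximate nearest-neighbor search, but the mean-centering and weighted-average steps are the same ones described above.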
https://medium.com/ds3ucsd/data-science-and-the-entertainment-industry-part-1-what-is-my-preference-f1f0cd1cba6e
['Riya Mhatre']
2020-12-09 01:58:33.987000+00:00
['Machine Learning', 'Data Science', 'Entertainment Industry', 'Recommender Systems', 'Big Data']
They
bring people together and sooner or later groups will form and true to form they will dispute every foot and come to blows until one group bows because no two clans share the same plans though each will consider the other a bother fit only for the work they themselves would shirk its them and us at the heart of a fuss whatever it happens to be same as in history though leveled fingers and ideological crossfires have been known to have borne compromise now and then with all men full of smiles for a while especially when common sense dictates an alliance against some other up and coming power (which could also offer weight to their coffers) in any conflict someone has to merit blame for the war and the law of self preservation demands the crucifixion of a suitable oddball pleasing to all and the sputum in the vacuum will be the weakest in the nest the rising sun is a phenomenon when you see the infallibility with which they find a way to duck all blame and consign to the flames someone less adroit and you can bet on it that they will always say it wasnt them and that in any system there has to be someone handy who will always have to pay the price of avarice
https://medium.com/poetry-en-motion/they-85a4a1b88c8c
['Tyrone Graham']
2017-05-10 20:00:51.729000+00:00
['Writer', 'Life', 'Poetry', 'Poem', 'Writing']
Innovation beyond best practices: the what, who, how and where of it.
Innovation is a popular word. But scratch the surface and you'll realize that there are widely varying expectations of what we mean by it. And questions like — can there be a process to drive innovation or is it serendipitous? Does it have to be disruptive or can it be incremental? Does it need a separate group like R&D or can it be ingrained? And perhaps most importantly, what is meaningful innovation? A decade ago, Robert Wolcott, a professor at Northwestern, set about (together with Mohan Sawhney, Inigo Arroniz and Jiyao Chen of the Kellogg School) collecting 500 data points around innovation from managers in innovation roles across 19 corporations, including companies like Boeing, Chamberlain Group, ConocoPhillips, DuPont, eBay, FedEx, Microsoft, Motorola and Sony. They took the results, whittled them down to 100 measures and further tested them down to a set of 12, framed within four quadrants of what has come to be known as the innovation radar. These dimensions of innovation and contributing factors are a way not just to understand the type and scope of innovation within a company but to frame a strategy for one. There are 12 directional vectors aligned to a 360-degree view framed by offerings, customer, process, and presence. Innovation around an offering, driven by platforms and/or solutions, is perhaps the most obvious measure, of which many examples abound. The pharmaceutical industry, for one, is powered by R&D that delivers innovation through great new compounds, but the level of process or customer innovation is quite low. Innovation driven by customer experience and/or value is exemplified by Ritz-Carlton hotels or concierge medicine practices in healthcare, where high-touch personalized engagement is the differentiator instead of the product itself (aka the hotel room or the diagnosis/prescription). Process innovation around an organization and/or supply chain could be focused on making a product affordable — as in the case of Nokia phones targeting emerging markets as opposed to feature-rich iPhones. Or innovations in the supply chain, such as P&G's continuous replenishment model of having just-in-time stocks to retailers based on projected demand. Healthcare process innovation is in its infancy, but the use of biometrics for real-time monitoring of at-risk and post-operative patients by hospital systems is one such example. Innovation around a brand and channels (networks) is framed as presence — think direct-to-patient advertising starting in the 90s (though it's been legal since 1985), triggered in part by the FDA's easing of the need for a complete listing of side effects on infomercials. Today a similar situation exists with social media, which is legal to promote and participate on, but where the rules of engagement aren't clearly defined. According to ongoing data capture as part of the initiative, Wolcott states that when looking to define a strategy for innovation, it is best to focus on 2–5 of the 12 measures, and uni-directional innovation (who, what, how, or where) has a far better shot at being meaningful and becoming operationalized than a broad approach. So at the risk of stating the obvious, it's critical for innovation to have a mission. Best practices seldom drive anything that can be the best. But setting about an innovation goal using a directional framework such as this one is like asking a good open-ended question. After all, the quality of the answers we get often depends on the quality of the questions we ask.
https://medium.com/healthwellnext/innovation-beyond-best-practices-the-what-who-how-and-where-of-it-b56180972395
['Pro Bose']
2016-11-30 20:47:36.706000+00:00
['Startup', 'Innovation']
My Understanding of Exploratory Data Analysis
Exploratory data analysis (EDA), pioneered by John Tukey, laid a foundation for the field of data science. The key idea of EDA is that the first and most important step in any project based on data is to look at the data. By summarizing and visualizing the data, you can gain valuable intuition and understanding of the project. Exploratory analysis should be a cornerstone of any data science project. Part 1: Location estimator Trimmed mean It is defined as the average of all values after dropping a fixed number of extreme values. With the values sorted as x(1) ≤ x(2) ≤ … ≤ x(n) and p values dropped from each end, the trimmed mean is (1 / (n − 2p)) · Σ x(i) for i from p+1 to n−p. A trimmed mean eliminates the influence of extreme values. Weighted mean/median The weighted mean is the sum of all values times a weight divided by the sum of the weights: x̄w = Σ wi·xi / Σ wi. There are two reasons in favor of the weighted mean/median: Some values are intrinsically more variable than others, and highly variable observations are given a lower weight. For example, if we are taking the average from multiple sensors and one of the sensors is less accurate, then we might down-weight the data from that sensor. The data collected does not equally represent the different groups that we are interested in measuring. For example, because of the way an online experiment was conducted, we may not have a set of data that accurately reflects all groups in the user base. To correct that, we can give a higher weight to the values from the groups that were underrepresented. The weighted median is calculated in this way: instead of the middle number, the weighted median is a value such that the sum of the weights is equal for the lower and upper halves of the sorted list. Like the median, the weighted median is robust to outliers. The implementation is as follows (using the Python packages wquantiles and numpy): np.average(state['Murder.Rate'], weights=state['Population']) wquantiles.median(state['Murder.Rate'], weights=state['Population']) Summary The basic metric is the mean. While robust estimators (median, trimmed mean, weighted mean/median) are valid for small data sets, they do not provide added benefit for large or even moderately sized data sets. Statisticians and data scientists use different terms for the same thing. Statisticians use the term estimate while data scientists use the term metric for location statistics. Location and variability are referred to as the first and second moments of a distribution. The third and fourth moments are called skewness and kurtosis. Skewness refers to whether the data is skewed to larger or smaller values; kurtosis indicates the propensity of the data to have extreme values. Part 2: Variability estimator Deviation: the difference between the observed values and the estimate of location. Variance: the sum of squared deviations from the mean divided by n − 1, where n is the number of data values. We use n − 1 because it yields an unbiased estimate of the variance. There are n − 1 degrees of freedom since there is one constraint: the deviations are computed from the sample mean. For most problems, data scientists do not need to worry about degrees of freedom. Mean absolute deviation: the mean of the absolute values of the deviations from the mean. It is also called the l1-norm or Manhattan norm. Median absolute deviation from the median (MAD): the median of the absolute values of the deviations from the median. Normally, we need to normalize MAD by a multiplicative factor of 1.4826, which puts it on the same scale as the standard deviation for normally distributed data (50% of a normal distribution falls within ± one unscaled MAD of the median).
See statsmodels.robust.scale.mad for an implementation: import numpy as np import pandas as pd import seaborn as sns from statsmodels import robust data = np.random.randn(30, 2) df = pd.DataFrame(data, columns=['column_1','column_2']) print(df['column_1'].std()) print(robust.scale.mad(df['column_1'])) Range: the difference between the largest and the smallest value in the data set. Order statistics: metrics based on the data values sorted from smallest to biggest. Percentile and quantile Interquartile Range (IQR): the difference between the 25th percentile and the 75th percentile. data = np.random.randn(30, 2) df = pd.DataFrame(data, columns=['column_1','column_2']) df.head(3) df['column_1'].quantile(0.75)-df['column_1'].quantile(0.25) Percentiles df['column_1'].quantile([0.05, 0.75, 0.95]) Part 3: Distribution estimator Boxplots horizontal thick line in the box: median top/bottom horizontal thin lines of the box: 75th and 25th percentiles the dashed lines are called whiskers, and they extend to the furthest point within 1.5 times the Interquartile Range (IQR). Any data outside of the whiskers is plotted as single points or circles (often considered outliers). range: the y-coordinate range df['column_1'].plot.box() An improved version of the boxplot is the violinplot: sns.violinplot(df['column_1'], inner='quartile') The density is mirrored and flipped over, and the resulting shape is filled in, creating an image resembling a violin. The advantage of a violin plot is that it can show nuances in the distribution that aren't perceptible in a boxplot. On the other hand, the boxplot more clearly shows the outliers in the data. Frequency table and histogram binned_column = pd.cut(df['column_1'], 10) print(type(binned_column)) binned_column.value_counts(sort=False) A histogram is a way to visualize the frequency table. ax = df['column_1'].plot.hist(figsize=(6, 6)) ax.set_xlabel('range value') Density plots and estimates A density plot is a smoothed version of a histogram. ax = df['column_1'].plot.hist(figsize=(6, 6), bins=11) df['column_1'].plot.density(ax=ax) ax.set_xlabel('range value') We can also use a bar or pie plot to show categorical data: ax = df['column_1'].plot.bar() # or df['column_1'].plot.pie() ax.set_xlabel('') ax.set_ylabel('') For categorical data, the mode is the value that appears most often in the data; it is generally not used for numeric data. Part 4: Correlation Variables X and Y (each with measured data) are said to be positively correlated if high values of X go with high values of Y, and low values of X go with low values of Y. If high values of X go with low values of Y, and vice versa, the variables are negatively correlated. We can use a heatmap to visualize the correlation matrix. print(df.corr()) sns.heatmap(df.corr()) Like the mean and standard deviation, the correlation coefficient is sensitive to outliers in the data. Another way of visualizing the relationship between two measured data variables is with a scatterplot. The x-axis represents one variable and the y-axis another, and each point on the graph is a record. df.plot.scatter(x='column_1', y='column_2') However, a scatter plot might not be appropriate for data sets that contain a large number of records. In this case, a better option is to use a hexagonal binning plot. Rather than plotting points, which would appear as a monolithic dark cloud, we group the records into hexagonal bins and plot the hexagons with a color indicating the number of records in that bin.
left: scatter; right: hexagonal binning df.plot.hexbin(x='column_1', y='column_2') An alternative is to use a contour plot: the contours are essentially a topographical map of two variables; each contour band represents a specific density of points, increasing as one nears a "peak." left: scatter; right: contour In summary, to show the correlation between two numerical variables, we have the following solutions: scatter plot: it is often used when we have a small number of data records hexagonal binning and contour plot: they are used when there are a lot of data records heat map: it is often used to illustrate a matrix. For example, a correlation matrix, a confusion matrix and so on Part 5: Contingency table A contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the (multivariate) frequency distribution of the variables. import pandas import numpy # creating some data a = numpy.array(["foo", "foo", "foo", "foo", "bar", "bar", "bar", "bar", "foo", "foo", "foo"], dtype=object) b = numpy.array(["one", "one", "one", "two", "one", "one", "one", "two", "two", "two", "one"], dtype=object) res = pandas.crosstab(a, [b], rownames=['a'], colnames=['b']) res Part 6: More variables (more than 2) Both boxplots and violinplots support multiple variables. For example: airline_stats.boxplot(by='airline', column='pct_carrier_delay') sns.violinplot(airline_stats['airline'], airline_stats['pct_carrier_delay']) The same goes for scatter plots, hexagonal binning plots and so on. The basic idea is conditioning variables in a graphics system, and showing pairs of variables sequentially. loans_income = pd.read_csv(LOANS_INCOME_CSV, squeeze=True) sample_data = pd.DataFrame({ 'income': loans_income.sample(1000), 'type': 'Data', }) sample_mean_05 = pd.DataFrame({ 'income': [loans_income.sample(5).mean() for _ in range(1000)], 'type': 'Mean of 5', }) sample_mean_20 = pd.DataFrame({ 'income': [loans_income.sample(20).mean() for _ in range(1000)], 'type': 'Mean of 20', }) results = pd.concat([sample_data, sample_mean_05, sample_mean_20]) g = sns.FacetGrid(results, col='type', col_wrap=1, height=2, aspect=2) g.map(plt.hist, 'income', range=[0, 200000], bins=40) g.set_axis_labels('Income', 'Count') g.set_titles('{col_name}') plt.tight_layout() plt.show() Part 7: Reference Book Practical Statistics for Data Scientists 50+ Essential Concepts Using R and Python: CHAPTER 1 Exploratory Data Analysis Code
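As a small supplement to Part 1, here is a sketch of my own (not code from the book) showing the trimmed mean, both via scipy and manually, which makes the definition explicit:

import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])  # one extreme value

# Built-in: drop 10% of the values from each end before averaging
print(stats.trim_mean(x, 0.1))  # 5.5

# Manual version: sort, drop p values from each tail, average the rest
def trimmed_mean(values, p):
    s = np.sort(values)
    return s[p:len(s) - p].mean()

print(trimmed_mean(x, 1))  # 5.5 as well; the plain mean would be 14.5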
https://majianglin2003.medium.com/my-understanding-of-exploratory-data-analysis-5eec824434d1
['Ma Jianglin']
2020-12-01 09:40:12.245000+00:00
['Exploratory Data Analysis']
Demystifying the Membership Inference Attack
Understanding the factors that influence the MIA Let's start by studying the MIA on the standard MNIST dataset. This is the Hello World! of machine learning, a very simple problem to solve. We train the target model for 10 epochs on a simple convolutional neural network: This is PyTorch code defining a convolutional neural network. For instance, nn.Conv2d(1, 10, 3, 1) defines a 2D convolutional layer that accepts single-channel (grayscale) images as input; it has 10 trainable filters of size 3x3, and the filters move over images with a stride of 1. The target model gets 99.89% accuracy on its training set and 98.82% accuracy on its test set. The shadow models are copies of the target model, trained for 10 epochs on their shadow datasets. We then generate training samples for our 10 attack models with the trained shadow models. Attack models are simple multilayer perceptrons: To measure the success of our attack we compute the mean accuracy of the attack models on their test set (train and test samples executed by the target model). We get 51.97% accuracy, hence this attack is not significantly better than a coin toss. The original paper [4] states that the MIA is linked with overfitting. The authors measure the overfit with the gap between the training set accuracy and the test set accuracy. In this case the gap is close to 0. Hence, we carry on with a harder problem on which it is harder to generalize to unseen samples. CIFAR-10 is known to be a harder problem to solve. It is composed of miniature images of 10 object classes. We use a more complex convolutional model for the target and shadow models: In this experiment we do not modify the architecture of the attack models, and we train the target and shadow models for 15 epochs. The target model has 98% accuracy on its training set, 66% on its test set, and the attack models have a mean accuracy of 67.44%. We start to see some success for the MIA. Effectively, the train-test accuracy gap is 32%, and the attack models partially succeed in differentiating the in from the out confidence levels. Let's investigate the attack models' sample distributions to understand how it works: Confidence distributions for in (left) and out (right) test samples (generated by the target model). This figure is a violin plot [15] of the confidence distributions of in and out attack model test samples (samples output by the target model). We plot only the distribution of the confidence level for class 0 for the attack model for class 0 here. The difference between the in and out confidence levels is clear, but the out confidence distribution seems to be a mixture of two distinct distributions. To investigate this phenomenon, we separate samples that get a correct prediction from the target model for class 0 (true positives) from those that get a wrong prediction (false negatives). We do not have false positives and true negatives here because we only consider samples of label 0 for the test "did the model predict label 0?". Distributions of in and out samples, false negatives only Distributions of in and out samples, true positives only With this sample separation we are able to single out the two different distributions that compose the total distribution. The clear differentiation that we were able to glimpse before is no longer visible here. Actually, the difference in form between the total distributions is due to the differences in the number of true positive samples and false negative samples among the in and out samples.
The common hypothesis for the MIA success is that in samples exhibit a higher confidence level for their class compared to out samples [4]. This hypothesis does not hold here. In fact, a better hypothesis is: the attack models learn to single out samples that get a correct prediction from samples that get an incorrect prediction. To summarize, we could replace the whole membership inference attack process with a simple rule: "If the target model predicts the label correctly for a sample, then it comes from the training set, otherwise it does not". We refer to this rule later as the correct classification rule (CCR). Let's measure the accuracy of the CCR. We assume that, to measure accuracy, there are as many in samples as out samples to test. This is indeed the case in our implementation of the MIA. On the MNIST experiment, the CCR accuracy is 50.53% ((99.89 + (100 − 98.82)) / 2) and on the CIFAR-10 experiment the CCR accuracy is 66%. It correlates closely with the attack models' mean accuracy, respectively 51.97% and 67.44%. In fact, with these experiments, the rule is as effective at inferring sample membership as the MIA with shadow models. Does this mean that, finally, the hypothesis of MIA success based on the confidence level is purely an illusion? It seems that the success of the attack is mainly based on the train-test accuracy gap, and the CCR captures this phenomenon. We studied all the public implementations that we could find, and reproduced as many experiments as we could from the literature. Some experiments presented in research articles expose a low train-test accuracy gap, but show a high MIA accuracy. Unfortunately we have not been able to reproduce those experiments. Either the dataset was not public, or the processing steps were described too vaguely to allow for reproduction. Moreover, a majority of public implementations are currently wrong, and show unrealistically good results because of induced biases. Those biases are often a bad dataset splitting strategy leading to biased results, or the testing of the attack models on samples produced by the shadow models instead of the target model. In the end, we could not find counterexamples to our hypothesis, which is alarming because it questions the reality of the MIA with shadow models. Lastly, our hypothesis explains some curious assertions like: "our attacks are robust even if the attacker's assumptions about the distribution of the target model's training data are not very accurate" [4], "even restricting the prediction vector to a single label (most likely class), which is the absolute minimum a model must output to remain useful, is not enough to fully prevent membership inference" [4]. Indeed, the CCR does not need confidence levels.
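To make the CCR concrete, here is a minimal sketch (my own illustration; the names are hypothetical, not taken from the article's codebase). Membership is guessed directly from prediction correctness, and the expected accuracy on a balanced in/out set follows the formula above:

import numpy as np

def ccr_attack(pred_labels, true_labels):
    # Guess "member" (1) when the target model classifies the sample correctly, else "non-member" (0)
    return (np.asarray(pred_labels) == np.asarray(true_labels)).astype(int)

def ccr_expected_accuracy(train_acc, test_acc):
    # Expected CCR accuracy (in %) on a balanced set of members and non-members
    return (train_acc + (100.0 - test_acc)) / 2.0

print(ccr_expected_accuracy(99.89, 98.82))  # ~50.53, as in the MNIST experiment
print(ccr_expected_accuracy(98.0, 66.0))    # 66.0, as in the CIFAR-10 experiment

Note that this baseline needs neither shadow models nor confidence vectors, which is exactly the point of the argument.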
https://medium.com/disaitek/demystifying-the-membership-inference-attack-e33e510a0c39
['Paul Irolla']
2019-09-19 11:23:12.325000+00:00
['Machine Learning', 'Artificial Intelligence', 'Security', 'Privacy', 'Explainable Ai']
Learn To Develop The Code: My Way
Okay, following the 80/20 rule we made 80% of the progress and spent 20% of the time. Sweet! We did not write a line of code yet, but we know what to do, and we know where to do it. Every application starts with a task. The task contains the requirements you want to implement. It documents the code you are going to write. And it answers the question "Why did I do it that way back in the day?". Tasks are fuel. You pick a task and transform it into code. So create a list of tasks you are going to complete, and thus you form the project backlog. You can create a board for the project, for instance, in Trello, Jira, or GitHub. The board doesn't need to have Kanban features or automation hooks; it can be a simple layout with columns "Todo", "In Progress", "Done". You can imagine your super board with columns I haven't mentioned, it is up to you. Put your backlog tasks in the "Todo" column and be ready to start! Managing the board gives you solid experience and good practice in setting up the work on a project. It gives transparency to the process and gives you moments of satisfaction when you see how many tasks you have done already! (That warm feeling, yes). If the company pays for your education, the board will give managers an update on your progress and the milestones you have passed. Typical board for one of my projects One thing about tasks. Make them as detailed as possible, and as small as possible. Tasks with the text "Implement website" or "Write user settings page" do not make any sense. Such tasks are hard to complete, and they don't contain any historical info on what you did exactly. Try to split them and add more context about what you expect to do. Tasks like "allow a user to change email from the account settings page" or "add TypeScript into the project" are small and meaningful. And yes, use labels! Mark tasks with labels. Feature, bug, high priority, need to think about it — you can imagine yours. Discuss them with your mentor (if you have one). Labels will give you feedback about task status and will draw your attention to the thing you worked on the other day. Keep your board in good shape. Okay then, we are ready to write some code, huh? Let's finish a few tasks. A few simple things to see some results. Let's say, a hello world on the browser page, or a health-check API if you are working on a backend application. Why should we implement a few of them? We don't want to waste time deploying the application, running tests, building a Docker image, and doing other routine steps before diving into the code. Spend some time at the start of the project and play a DevOps role. I do believe developers have to know the basics of the continuous integration process. You don't have to learn to write Terraform code for AWS services, but you should be able to write simple CI scripts for CircleCI, GitHub Actions, or another CI service. Imagine you've just merged another great pull request into the main branch and then sit back and relax watching it build and deliver to the real environment. Automate as much as possible, do not waste your time on routine actions. Finally, you can start writing the code, learning the framework you have selected, reading tons of manuals and documentation, finding solutions on StackOverflow, and grumbling "why does this example not work on my machine?!". Sounds cool, right? Not so fast, one last thing. Learn Git. Seriously, learn it.
Learn to merge and rebase, create branches, open pull requests, or merge requests (hey GitLab), or whatever-they're-called requests in your version control platform. Ask for a review. Ask somebody to look at your code. We never write perfect code, but we can get better at it, and requesting a review is a way to learn and do better. Get used to it, it's a typical development process in almost every company. Code review is a perfect way to learn, share ideas, and discuss the current implementation. And write the code. The more you write, the more you learn, the better you get. Read about the language you have selected, about common patterns, about testing, and imagine how it applies to every task you are going to implement.
https://n-srg.medium.com/learn-to-develop-the-code-my-way-56e35cc1276e
['Sergey Nikitin']
2020-10-26 06:39:37.086000+00:00
['Development', 'Education', 'Programming', 'Personal Development', 'It']
Portraits of Life
Painting pictures with a jigsaw of words Photo by Trevor McKinnon on Unsplash Words come to me in a jumble. 500, 1000, 2000 pieces, lying together in the box of my mind, waiting. I open the lid. I pour them out into a pile. I begin to sort through them. The edge pieces first. I look for the structure, the frame, the outline that encompasses my thoughts. I am working quickly. I do not want to lose the shape of the puzzle, its borders. Then I slow down. Now I begin to mindfully choose, turning the words this way and that until they snap together with a pleasant snap. I am absorbed in my work. Head bent over the puzzle of my words, a woman enthralled. I move them around. I try them first here, and then there, wondering what I will create. I force some words into the wrong slots. They seemed to fit, but now I have to pry them out where I have wedged them in. I move them around some more. Suddenly I see the places they belong, fitting neatly beside their neighbor. I stand back to inspect my work. I can see I have created a picture from the pile of words.
https://medium.com/literally-literary/portraits-of-life-bf671907fd8e
['Beth Bruno']
2019-08-20 21:00:59.198000+00:00
['Poetry', 'Life', 'Writing Life', 'Poem', 'Writing']
If You Only Read One Article Today to Improve Your Writing, Make It “An”
Like boxing, writing is a dance of the familiar and the unexpected. Using the right mix of articles such as “a” and “the” is useful for staying on rhythm while waiting for just the right moment to strike and knock out your reader’s expectations. Tactical use of the 3rd article, “an,” provides a powerful punch. Less common than either of its siblings, “an” is worth examining up close and in our faces. Legendary writers, which are mostly Hemingway, withheld their “an’s” for just the right moment — such as in front of a noun that starts with a vowel! So, today, if you read only one article to improve your writing, let it be “an,” your new secret weapon for a surprise reader takedown! Read on for three daring, Hemingway-approved uses of “an.”
https://medium.com/the-haven/if-you-only-read-one-article-today-to-improve-your-writing-make-it-an-a8f91c386c4e
['Grin Spickett']
2020-09-11 20:33:21.211000+00:00
['Humor', 'English', 'Grammar', 'Authors', 'Writing']
Thankful for Stuff I Used to Hate
Breaking It Down

Winter

Winter storm, Nov. 26, 2019. Photo by A. Burton.

Commiseration about the weather, whatever it's doing, is like a national sport…and winter gets punted all the time. My friend's thinking prompt was, "Give Thanks for Your Favourite Season." This is like choosing your favourite kid from among four, three of whom you like, and one you disinherited long ago. I live not too far from the eastern slopes of the Canadian Rockies. We often have snow from September (3 feet this year) through May. Winter, for us, entails shoveling and more shoveling, avoiding driving or choosing to drive in dangerous conditions because winter roads are nearly always dangerous, thinking about survival each time you exit the house, and, consequently, a whole lot of outdoor avoidance. Although I start thinking about Spring's renewal in the Fall, love Summer's warmth and its contrast of productivity and relaxation, and am awed by Autumn's glory, Winter challenges all of my senses, determination, imagination and gratitude for what is given more than any other time of the year. So I am admitting, for the first time in my life, that although every season is lovable in its own right, I am most grateful for Winter because it stretches me and makes me strong.

Anxiety

Photo by Jeremy McKnight.

Simple fact: I grew up anxious — sick anxious. I am not grateful for how debilitating and stifling that was, except…. Over the decades of my adult life, I've studied strategies and coping mechanisms, reached deep for strength I didn't know I could access, and pretty much revolutionized my eating/sleeping/moving lifestyle to support healthy, rested responses and endurance in stress-inducing situations. That is all good and things are much better, but what I'm grateful for about anxiety is that when someone else is experiencing it, I can feel with them. I "get" anxiety enough to shift gears and pedal at their speed. Anxiety was the cost. Personal growth was the investment. A degree of empathy is the dividend.

Getting Older

"Age is irrelevant, unless you're a cheese…or old." Photo by Taylor Smith.

When I turned 40, I was blithe. I use this word to suggest blissfully unconcerned. I felt accomplished in my maturity that I was not one iota fazed by cresting the hill. What hill? What's with the attitude that it's all downhill from here? Like I said, unconcerned. Then some of the more visible effects of ageing started showing up in my hair, on my mirror, and around my waist. Little aches came from nowhere. I had a scary heart incident. Friends' warnings that you turn fifty and there goes the warranty started to seem possible. Although I'm more conscious of my own mortality, and although I still mourn my shiny brown hair, the relative ease of shedding 5 pounds once in a while, and smooth, wrinkle-less skin, I've become glad for the power I have to contribute to my own longevity by becoming smarter and more pro-active. Hard-earned wellness feels great…and takes the edge off ageing.

Dark Chocolate

Photo by Dovile Ramoskaite.

Once you've become habituated to smooth and creamy, it's awfully hard to transition to bitter, woodsy, and slightly (or more than slightly) chalky. But, hear this: Dark chocolate (high in cacao or cocoa) is higher in antioxidants than superfruits like acai, pomegranate and blueberry. This means dark chocolate can prevent the effects of cell-disruptive free radicals before they do damage. For me, it's also proven to have built-in self-control. How much bitter can you do?
Dark chocolate, I've come to know, is an excellent source of preventative, balancing nutrition.

Video calling

Photo by Rachel Moenning.

Twenty years ago, the thought that someone would be able to see me on a call was appalling. This probably says more about my state of dress or readiness for the day when I answered a phone than it does about privacy concerns, but the fact is, I would have been a late adopter of the technology if it hadn't been for working in a distributed office. Most of my meetings were online and seeing the participants really enhanced connection. Now, it's all useful and used for good: Skype, Zoom, Google Hangouts, FB video calling, Marco Polo. Each helps me see "my" people better, learn right where I am, and reach others around the world. Spoiler alert if you're not already there: It is the best technology in the universe when your grandchildren live far away.

Avocados

Photo by Thought Catalog.

Q. What has skin like a dried toad, just a hint of flavour in its oily, slimy, green flesh, and a seed that takes up 30% of the space for fruit? A. Yup. That thing I avoided for the first 30 years of my life. The transition started with small dollops of guacamole because it was a Mexican restaurant and I was an adult in the company of other adults…and we all know how much fun finicky eaters are. Learning that avocados nourish eyes, regulate blood sugar, lower bad cholesterol, prevent cancer, treat arthritis and more makes them all the more delicious. Vive l'avocado toast!

Early mornings

Photo by Andreas Kind.

Two a.m. was my prime time at one time. I could get everything done faster after midnight. Two a.m. also makes for cranky, underslept people when you can't adjust your wake-up time to compensate for short sleep hours. Switching to early mornings for productivity and peace took (and still takes) a lot of self-discipline and preparation, but it is the biggest contributing factor in my better health, better readiness for the roles I play, and better self-confidence. (I procrastinate less when I'm honest with myself that late nights are self-defeating.) Early mornings rule.

Ugly workouts

Photo by Hans Reniers.

I might have been a teenage athlete, particularly in long-distance running, if I hadn't looked so gosh-awful in the heat of physical exertion — blotchy red cheeks, pouring sweat, and lifeless hair — and been so self-conscious about it. When I adopted early morning as my self-care time and signed up for fitness classes that start at 6 am, I had to stop caring. It was my health or my ego. Health won, and I am glad for what my dishevelled, post-workout appearance suggests these days: she takes care of herself.

Hardship, roadblocks, setbacks, resistance

I wouldn't wish the death of a loved one, a financial slam-back, a brick wall in the middle of your path toward success or the discouraging, dampening effects of resistance on anyone. But personal experience has shown me that exponential growth — deep, abiding, actual change-for-the-better — is often influenced by the degree of difficulty we face. The harder the challenge, the greater the gain in resilience, ingenuity, resourcefulness, character, and faith. Life shocks are arduous, tough, heavy, demanding, soul-wrenching, and punishing. Pushing past nay-sayers and the reluctant, visionless company we sometimes encounter is also grinding. And yet, here I am; there you are: still standing and, if we take some time to evaluate, we're probably standing a little taller and more sure of our footing.
I am grateful for tough times and trying circumstances because they are inherent in life and call on — and show us — our potential. A full life in a direction I love is what I'm here for. If there are obstacles in the way, so be it. Over, up, around or through, I believe we're here to make it.

People knowing the truth about me and my family

Photo by Caleb Woods.

The family I grew up in, the family I married into, and the family I've co-created all value privacy. Even though there has been a performer or two in the lot, these are not flashy people who have sought or seek attention as a general rule. In fact, we tend to live routinely and "small" by some standards — avoiding attention, negative or otherwise, and keeping our business to ourselves. That we have struggled, and continue to struggle, is a fact. Relationship challenges, mental illness, financial devastation, bad decision-making, loss, regular unhappiness, and missed opportunities are some of the difficulties we've faced and continue to face. As a general observation, I think we don't tend to own up to what's hard and real even to ourselves, let alone others. Optimism plays a part in this, as does pride: the fear of losing regard or respect motivates a good deal of what we keep private. I don't disagree with a family culture of loyalty or cohesiveness. Both serve us well. What I have actively avoided is letting people know we bleed, too…we bleed, we fight, we cry, we despair, we make large and small mistakes, we sin, we forget, we hide. This was brought home a while ago when our tendency to present our best-side-only became a public issue. A beautiful young woman who was dating one of our sons was asked, in effect, "What makes you think you're good enough for their family?" When the story came back to me, I felt sick. I would rather be known, rough spots and all, for what we do about our challenges (try again, mop up, hang tough, hold on, pray, ask for forgiveness, backtrack, repent, recalibrate, and learn hard lessons) than to be known inaccurately as having it made. I'm grateful to have arrived at a place where I can be whole and honest about my flawed self a lot of the time. I am also grateful for the people who know us, inside and out, and still like us. It leaves room to breathe and to be human.

How about you?

I challenge you to explore your own shifted "appreciation landscape." What are you thankful for now that you couldn't imagine being thankful for in the past? The switch from hateful to grateful is a liberation.
https://medium.com/publishous/thankful-for-stuff-i-used-to-hate-fb1ed37ac7c7
['Heather Burton']
2019-12-04 15:44:53.564000+00:00
['Lifestyle', 'Life', 'Self Improvement', 'Gratitude', 'Writing']
The Most Common Machine Learning Classification Algorithms for Data Science and Their Code
The Most Common Machine Learning Classification Algorithms for Data Science and Their Code

A roundup of the most common classification algorithms along with their Python and R code: Decision Tree, Naive Bayes, Gaussian Naive Bayes, Bernoulli Naive Bayes, Multinomial Naive Bayes, K Nearest Neighbours (KNN), Support Vector Machine (SVM), Linear Support Vector Classifier (SVC), Stochastic Gradient Descent (SGD) Classifier, Logistic Regression, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Fisher's Linear Discriminant….

Classification algorithms can be performed on a variety of data — structured and unstructured. Classification is a technique where we divide the data into a given number of classes. The main goal of a classification problem is to identify the category or class under which new data will fall.

Important terminology encountered in machine learning classification algorithms:

classifier: An algorithm that maps the input data to a specific category.

classification model: A model that draws conclusions from the input data given for training. It will predict class labels or categories for new data.

Binary classification: A classification task with two possible outcomes. Eg: gender classification (Male / Female).

Multi-class classification: Classification with more than two classes. In multi-class classification, we assign each sample to one and only one target label. Eg: an animal can be a cat or a dog, but not both at the same time.

Multi-label classification: A classification task where each sample is mapped to a set of target labels (more than one class). Eg: a news article may be about a sport, a person, and a location at the same time.

These classification algorithms are used to build a model that predicts the class or category for a given dataset. The data can come from different platforms. Depending on the dimensionality of the dataset, the attribute types, missing values, and so on, one algorithm can give you better accuracy than another. Let's get started…

1. Decision Tree

Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero in on the classification. For example, if you wanted to build a decision tree to classify an animal you come across while on a hike, you might construct the one shown in the figure. Decision tree classification models can easily handle qualitative independent variables without the need to create dummy variables. Missing values are not a problem either. Interestingly, decision tree algorithms can be used for regression models as well. The same library that you used to build a classification model can also be used to build a regression model after changing some of the parameters. Although decision-tree-based classification models are easy to interpret, they are not robust. One major problem with decision trees is their high variance. One small change in the training dataset can give an entirely different decision tree model.
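To make the decision tree section concrete, here is a minimal sketch in Python with scikit-learn. This example is an illustration added here rather than the article's original code; the iris dataset, the 70/30 split, and max_depth=3 are arbitrary choices.

# A minimal decision tree classifier sketch using scikit-learn.
# Capping max_depth is one simple way to rein in the high variance noted above.
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

clf = DecisionTreeClassifier(max_depth=3, random_state=42)  # shallow tree to limit variance
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))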
2. Naive Bayes

Naive Bayes models are a group of extremely fast and simple classification algorithms that are often suitable for very high-dimensional datasets. Because they are so fast and have so few tunable parameters, they end up being very useful as a quick-and-dirty baseline for a classification problem. The Naive Bayes classifier is based on the Bayes Theorem. The Bayes Theorem says the conditional probability of an outcome can be computed using the conditional probability of the cause of the outcome. The probability of event C occurring on its own is the prior probability: the knowledge that something has already happened. Using the prior probability, we can compute the posterior probability, which is the probability that event C will occur given that x has occurred. The Naive Bayes classifier uses the input variables to choose the class with the highest posterior probability. The algorithm is called naive because it makes simplifying assumptions about the data: that the features are independent, and that they follow a particular distribution. The distribution can be Gaussian, Bernoulli, or Multinomial. Another drawback of Naive Bayes is that continuous variables have to be preprocessed and discretized by binning, which can discard useful information.

3. Gaussian Naive Bayes

The Gaussian Naive Bayes algorithm assumes that all the features have a Gaussian (normal / bell curve) distribution. This is suitable for continuous data, eg: daily temperature or height. The Gaussian distribution has 68% of the data within 1 standard deviation of the mean and 95% within 2 standard deviations. Data that is not normally distributed produces low accuracy when used in a Gaussian Naive Bayes classifier; in that case, a Naive Bayes classifier with a different distribution can be used.

4. Bernoulli Naive Bayes

The Bernoulli distribution is used for binary variables — variables that can take 1 of 2 values. It denotes the probability of each of the values occurring. A Bernoulli Naive Bayes classifier is appropriate for binary variables, like Gender or Deceased.

5. Multinomial Naive Bayes

Multinomial Naive Bayes uses the multinomial distribution, which is the generalization of the binomial distribution. In other words, the multinomial distribution models the probability of rolling a k-sided die n times. Multinomial Naive Bayes is used frequently in text analytics because it has a bag-of-words assumption — the position of the words doesn't matter. It also has an independence assumption — that the features are all independent.

6. K Nearest Neighbours (KNN)

K Nearest Neighbours is the simplest machine learning algorithm. The idea is to memorize the entire dataset and classify a point based on the class of its K nearest neighbours. The figure from Understanding Machine Learning, by Shai Shalev-Shwartz and Shai Ben-David, shows the boundaries in which a point will be predicted to have the same class as the point already inside the boundary. This is 1 Nearest Neighbour: the class of only the single nearest neighbour is used. KNN is simple and makes no assumptions, but the drawback of the algorithm is that it is slow and can become weak as the number of features increases. It is also difficult to determine the optimal value of K — which is the number of neighbours used.
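As a hedged side-by-side sketch of the two families just discussed (again my own illustration, assuming scikit-learn; the iris data and k=5 are arbitrary choices):

# Gaussian Naive Bayes vs. K Nearest Neighbours on the same train/test split.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

nb = GaussianNB().fit(X_train, y_train)  # assumes each feature is normally distributed per class
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)  # memorizes the training set

print("Gaussian NB accuracy:", nb.score(X_test, y_test))
print("KNN (k=5) accuracy:", knn.score(X_test, y_test))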
7. Support Vector Machine (SVM)

An SVM is a classification and regression algorithm. It works by identifying a hyperplane that separates the classes in the data. A hyperplane is a geometric entity which has a dimension 1 less than its surrounding (ambient) space. If an SVM is asked to classify a two-dimensional dataset, it will do it with a one-dimensional hyperplane (a line); classes in 3D data will be separated by a 2D plane, and N-dimensional data will be separated by an (N-1)-dimensional hyperplane. An SVM is also called a margin classifier because it draws a margin between classes. The image shown here has classes that are linearly separable. However, sometimes classes cannot be separated by a straight line in the present dimension. An SVM is capable of mapping the data into a higher dimension such that it becomes separable by a margin. Support vector machines are powerful in situations where the number of features (columns) is greater than the number of samples (rows). They are also effective in high dimensions (such as images), and memory efficient because they use a subset of the dataset to learn the support vectors.

8. Linear Support Vector Classifier (SVC)

A Linear SVC uses a linear (straight-line) boundary to classify data. A Linear SVC has much less complexity than a non-linear classifier and is only appropriate for small datasets. More complex datasets will require a non-linear classifier.

9. Stochastic Gradient Descent (SGD) Classifier

SGD is a linear classifier that computes the minimum of the cost function by computing the gradient at each iteration and updating the model with a decreasing learning rate. It is an umbrella term for many types of classifiers, such as Logistic Regression or SVM, that use the SGD technique for optimization.

10. Logistic Regression

Logistic regression estimates the relationship between a dependent categorical variable and the independent variables. For instance, it can predict whether an email is spam or whether a tumor is malignant. If we used linear regression for this problem, we would need to set up a threshold for classification, which generates inaccurate results. Besides this, linear regression is unbounded, and hence we arrive at the idea of logistic regression. Unlike linear regression, logistic regression is estimated using the Maximum Likelihood Estimation (MLE) approach. MLE is a "likelihood" maximization method, while OLS is a distance-minimizing approximation method. Maximizing the likelihood function determines the mean and variance parameters that are most likely to produce the observed data. Logistic regression transforms its output using the sigmoid function in the case of binary logistic regression. As you can see in the figure below, if 't' goes to infinity, Y (predicted) will become 1, and if 't' goes to negative infinity, Y (predicted) will become 0. The output of the function is the estimated probability. This is used to infer how confident the predicted value can be, compared to the actual value, for a given input X. There are several types of logistic regression:

Binary Logistic Regression: two categories. Eg: Spam (1) / Not-Spam (0)

Multinomial Logistic Regression: three or more categories without ordering. Eg: predicting which food is preferred, like Veg, Non-Veg, Vegan

Ordinal Logistic Regression: three or more categories with ordering. Eg: book ratings from 1 to 5

11. Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis (LDA) is performed by starting with 2 classes and generalizing to more. The idea is to find a direction, defined by a vector, such that when the two classes are projected onto the vector, they are as well separated as possible.
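For the linear classifiers above, a sketch along these lines would work (my own illustration, assuming scikit-learn; the breast cancer dataset and the hyperparameters are arbitrary, and the features are scaled because SVC and SGD are sensitive to feature scale):

# Comparing several linear classifiers: Linear SVC, SGD with hinge loss, logistic regression, LDA.
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Linear SVC": make_pipeline(StandardScaler(), LinearSVC(max_iter=10000)),
    "SGD (hinge loss)": make_pipeline(StandardScaler(), SGDClassifier(loss="hinge")),
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))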
12. Quadratic Discriminant Analysis (QDA)

QDA is the same concept as LDA; the only difference is that we do not assume the classes share a single covariance matrix. Therefore, a different covariance matrix has to be built for each class, which increases the computational cost because there are more parameters to estimate, but it fits the data better than LDA.

13. Fisher's Linear Discriminant

Fisher's Linear Discriminant improves upon LDA by maximizing the ratio of the between-class variance to the within-class variance. This reduces the loss of information caused by overlapping classes in LDA.
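Finally, a small sketch contrasting LDA and QDA (my own illustration, assuming scikit-learn; the synthetic dataset and 5-fold cross-validation are arbitrary choices):

# LDA vs. QDA: QDA's per-class covariance matrices can fit boundaries LDA cannot.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, n_informative=5, random_state=0)

for model in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, "mean CV accuracy:", round(scores.mean(), 3))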
https://medium.com/swlh/13-machine-learning-classification-algorithms-for-data-science-and-their-code-e185e8fca507
['Bhanwar Saini']
2020-10-10 08:17:09.582000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Programming', 'Classification Algorithms']
Dribbble shot designs can make you a worse UI designer
How to practice UI deliberately and become a better designer

Practicing UI is not a bad thing; however, it's worth remembering that UI is one part of a much larger process, and by going through the full design process, you'll reach the UI stage with more context and can make better decisions as a result. When I practice UI, I typically mimic the full design process in a short period of time in order to get context and direction; however, I spend the bulk of my time in the UI stage to ensure that I'm getting the practice I set out to get. After all, a UI exercise is a UI exercise. This way I can make better decisions but also draw the line a lot sooner on unnecessary explorations that don't contribute towards the main goal.

Mimic the full design process but spend most time in the UI phase

When I feel that I've got a promising solution to a problem, I typically prefer to stop there and set up a new UI project for myself instead of coming up with another hundred iterations. This keeps me disciplined as I learn to iterate responsibly and try to get to a sensible solution in a fair amount of time. This discipline is valuable because no amount of iteration will ensure that a project will be successful, so you may as well learn to get your design to be great enough, then ship it to see how it performs. To be more specific on how I quickly and loosely mimic the full design process, here are the four stages that I cover when working on UI exercises:

1. I pick a product I already use and list some of my pain points.

2. I ideate on solutions that address the pain points.

3. I prototype my ideas.

4. I put my prototypes in people's hands.

So without further ado, let's take a good look at each step.

1. Pick a product you use and list some pain points

A good starting point is to pick a product you already use and think about your personal pain points with that product. Alternatively, you may think about things you wish the product offered you. Taking this approach will help you begin with three things in mind:

1. A real customer — yourself

2. Of a real product — the product you've chosen

3. With a real problem — the thing that frustrates you about that product

With the list above you're already putting yourself on the fast track as far as gaining context of the problem to be solved and who you're designing for. Although your pain points may not reflect those of the broader audience, starting here is still better than jumping straight to the pixels, as it reinforces the process of thinking about people and problems as a first step. This is what designers do at work, especially as they grow into more senior positions. By deliberately practicing this phase you'll be further acknowledging its relevance and make a habit of starting at the right place. With this context in mind, and some real pain points, you're ready to jot down one or a few problems that you think are worth solving. To keep the exercise lean and focused, I'd recommend listing very few problems around a single area of the experience. I typically aim for one to three related problems at the very most. Now, because this exercise is all about practicing UI, I don't recommend that you spend days or weeks choosing a product and listing problems. In fact, I'd recommend that you carry out this initial step relatively quickly. If you're familiar with the product you've selected, you may be able to jot some problems down in a single sitting of less than an hour.
2. Ideate solutions to the pain points

To get the most out of this part of the exercise, you're better off separating it into two parts:

Part 1 — Learn from other products

The first part is about finding products which have solved problems similar to those you're about to solve. This will help you discover what solutions already exist but, more importantly, how they compare to one another and how they actually work. Doing this will also make you knowledgeable about more products and expand your reference base. As time goes by, you'll have a better understanding of how UI works and will be able to make better decisions sooner.

Part 2 — Iterate and iterate some more

The second part is about iterating to solve the problems you'd like to solve. It's perfectly fine to incorporate ideas from other products you've tried out if you've found solutions that can solve the problem you're addressing. With this being said, always make sure to borrow for the right reasons. If you're borrowing a solution because it addresses your problem and feels familiar and intuitive, then that's a thumbs up. If you're borrowing a solution because of a nifty interaction, then ask yourself how that helps solve the problems you've listed earlier. Remember that great UI should help people extract value from a product and has nothing to do with self-indulgence or artistry. During the iteration phase, I like to force myself through a few rounds of iteration in order to challenge myself to rapidly come up with the best solutions possible. Here's how I do it, more or less:

Create low fidelity flows — critique them.

Improve low fidelity flows — cement them.

Apply medium fidelity UI to flows — critique UI.

Explore high fidelity UI improvements round 1 — critique UI.

Explore high fidelity UI improvements round 2 — critique UI.

Explore high fidelity UI improvements round 3 — draw line.

As you may have noticed, the process is about covering the broader picture first to get that cemented. After that, the rest is about multiple rounds of refinement based on feedback; each round brings the solution closer to the final product. It's good practice to draw the line at the right time; UI can certainly keep you busy for long and eventually provide diminishing returns compared to the number of iterations being produced. After solving the problem and pushing yourself through a healthy number of rounds of iteration, you're better off moving into prototyping. This is because even though you may have countless ideas on how to solve a problem, you'll only know the value of your work after putting it in people's hands.

3. Prototype your design

This phase is about turning your static designs into a convincing product, ready for people to use. No UI is truly complete unless it factors in all of the interactions that come along with it. As people navigate through screens, each interaction serves to communicate what's happening and makes the experience more learnable and seamless. Great interaction is subtle and additive to the experience; it is not about making objects move around for the sake of it. Imagine a story with a beginning and an ending but nothing gluing the two together; that's what UI without interaction feels like. When working with interaction, it's worth keeping to standard interaction models related to the UI patterns you've chosen. You may see how these interactions work by observing the products you researched.
This is because people come to products with learnt expectations they've developed from other products; we call these expectations 'mental models'. Deviating from these models for no valid reason may frustrate users, as things will feel very unfamiliar to them. Unless you're working on a specific case which begs a new interaction model, playing it safe usually guarantees higher levels of understandability for users. In some cases, you'll notice that some designs are awkward to design great interactions around, and this may force you back to wireframing. This isn't a bad thing. With practice you'll get used to thinking about interactions whilst putting wireframes together. This is another benefit of adding a prototyping phase to your daily UIs: you'll start to become more mindful of the finished product and how people will navigate it. With each new project, you'll find that the prototyping stage takes less time to complete. This will indicate that you're getting more and more skilled at the craft.

4. Put your prototype in people's hands

By the time you reach this stage you should have a prototype which addresses some issues with a real product that you, and perhaps others, have used too. Now's the moment of truth, the most exciting part. This is where you can put your prototype into the hands of people who've used the product you designed for and get their thoughts on your prototype. Instead of going for a formal round of user testing, you may find people who are happy to try your prototype and share their thoughts with you. Remember that this exercise is about mimicking the design process merely to help practice better UI; the main focus therefore should be the UI and not design thinking in its entirety. Even though this doesn't mimic how user testing really happens, the idea here is to reinforce 'ending' with feedback from users, as this is how things work at product companies. In many cases, after gathering feedback, you'll notice that your work has further room for improvement and you may be tempted to go back and continue working on your prototype. This is fine, and if you see value in doing so then go for it! I personally prefer to gather my learnings and draw the line here to avoid turning a rapid UI exercise into a massive project. Drawing the line at this point allows me to begin a new project afresh and improve my entire process from start to finish. Each project will teach you a few lessons; by going through many projects you'll learn a variety of lessons which will make you a more rounded designer with many experiences to draw from.
https://uxdesign.cc/dribbble-shot-designs-can-make-you-a-worse-ui-designer-5227ac906f42
['David Portelli']
2019-10-17 00:05:11.845000+00:00
['Practice', 'UI', 'Dribbble', 'Design']
Meditation isn’t hard — we’re just doing it wrong
When you think of meditation, what comes to mind? Probably something along the lines of "I really should be trying not to think right about now." Well, what if I were to tell you that the whole point of meditation is actually to keep thinking — not to stop thinking? Maybe you want to believe me, but you don't want to make the rookie error of taking time out of your meditation to actually ponder it. At present, you're desperately trying to stop every sliver of a thought from sauntering through the distracted chaos that is your mind, all the while trying to retain some sense of feeling in your slowly numbing ass as your back muscles start to ache from sitting up in such an unnaturally straight posture. Your body is screaming, and so is your mind. But still you stubbornly sit and listen out for some unobtainable solace of silence. Stop. It's not coming. And it's not supposed to.

Meditation isn't about cutting off the power supply to every wayward thought and emotion that flitters through your head. It is not the gathering of empty darkness that will invite inner peace and clarity to swoop in. The point of meditation is actually to observe your thinking patterns to help you decipher the mismatch of incomprehensible twists and turns of the mechanical madness that comprises the maze of topsy-turvy mayhem that is the holistic constitution of your mental faculty. What? Exactly. That got you thinking again. Now we're getting somewhere.

When I sent myself off to meditation camp, I thought I'd return with yogic super powers that enabled me to sit dead still for hours on end in full lotus, or at the very least, come away with the answer to every childhood hangup or emotional melodrama I'd ever inflicted on myself and the poor souls around me — but I didn't. What I came away with was far more valuable. I came away with a deeper understanding of how I think. In other words — I came away with a deeper understanding of who I am. Which is actually the same thing. As James Allen says in As a Man Thinketh: "The outer conditions of a person's life will always be found to be harmoniously related to his inner state…Men do not attract that which they want, but that which they are."

Meditation doesn't teach you to stop thinking — meditation teaches you to become a silent observer of your own thought patterns, which give rise to your emotions and, subsequently, the actions and reactions that stem from them. Your humanness is the very essence which will bring you peace and fulfillment. The more you deny your thoughts and emotions, such basic human instincts, the more you cut yourself off from who you actually are. So rather than telling yourself that meditation is "too hard" and you just can't possibly have a single moment spare to even try it, here are a few tips that may help you do it right.

Tips to Help You Meditate

Practice mindfulness whenever you remember to. When you are eating, when you are driving, when you are brushing your teeth or closing a door, just allow yourself to be aware of what you are doing. That's all you have to do — think about what you are doing.

Sit in silence in nature for at least 10 minutes a day. Take a small smidgen of time out of your daily schedule to remove yourself from stress and hurriedness to contemplate trees. Take off your shoes, run your finger along the roughness of bark, make friends with the insects, take a breath and relax.

Meditation doesn't mean you have to sit for hours on end.
Try standing, walking at different speeds, or bringing mindfulness into seemingly mundane tasks like washing the dishes or cutting vegetables. Bring your full focus into what you are doing and turn your work into a form of meditation.

Focus on your breath. This is how Buddha reached enlightenment. By observing his in and out breath he was eventually able to completely disassociate his consciousness from this human instinct and so became an outside observer of his own body. Easier said than done, but give it a try.

When you feel an emotion arising, rather than reacting instinctively, pause and immediately take note of what you are thinking. What are the habitual thinking patterns that have caused this emotion to arise, and is it really necessary to allow yourself to feel this way? Self-observation is the first step to positive change.

Try to incorporate these simple meditative principles into your everyday living and see the way you think, the way you are, miraculously start to change.
https://medium.com/dreamer-do/meditation-isnt-hard-you-re-just-doing-it-wrong-6c8ba25b95d5
['Camilla Marsh']
2019-10-16 03:44:38.401000+00:00
['Self-awareness', 'Mindfulness', 'Inner Peace', 'Meditation', 'Life Lessons']
Aisle Rocket Studios, the agency behind Whirlpool, masters remote collaboration
Aisle Rocket Studios is the digital powerhouse behind iconic household brands like Whirlpool, Maytag, Sears and more. The agency's approach is all about being fast and nimble. With a portfolio of 25+ brands, 250+ employees, four offices and remote associates, Aisle Rocket Studios is not your typical agency. As featured in CIOReview, ARS's Chief Digital Officer Kashif Zaman explains, "Our clients see us as an extension of their team. We are their growth hackers and relentless problem solvers." Distinguishing themselves from the typical agency model, the designers at ARS identify themselves as builders. As Zaman puts it, "It's the idea that the development team is technically a member of the creative team and there is a level of homogeneity between the roles — their skill sets are different but the creative process is interchangeable." Therefore, one of the core design principles at ARS is that 80 percent of the creative process should take place in the browser. But given that the agency is spread out across four locations with remote associates, real collaboration was a challenge. That was until the Whirlpool account team made big waves.

So many files in so many places!

Matt Carson, Creative Director at ARS working remotely in Scottsdale, was brought onto the Whirlpool account last minute to help with a particular project with a tight deadline. With a short turnaround looming, Carson recalls colleagues sending all sorts of native work files — some in Sketch, others in Photoshop. "It was a hot mess," explains Carson, "there was no central process for the organization." Carson's primary objective quickly became to establish a centralized toolkit. So when Isaac Vander Ark, Associate Creative Director at ARS, sent Carson a Figma link, Carson immediately laughed, "you've gotta be kidding me, another piece of software!" But unlike other tools, Carson was intrigued by Figma's multiplayer collaboration feature. As Carson put it, "it was exactly what ARS needed to have a real-time space to collaborate and work across time zones — it was a no-brainer."

"I romanticize Figma because it creates this total collaborative environment where no longer is a writer, designer or developer working in different mediums and different spaces." — Matt Carson, Creative Director, Aisle Rocket Studios

Collaboration = more creativity.

The Whirlpool account team was the ideal testing ground for Figma because everyone was dispersed. Figma would act as the new source of truth and point of collaboration for the Whirlpool account team. "We put to bed the other workflows and tools and centralized everything on Figma almost overnight," said Carson. In no time, the team was iterating and designing together in a single file, running internal meetings on Figma, soliciting feedback from internal stakeholders and sharing prototypes with clients.

"We now have a dynamic, collaborative environment that enables us to brainstorm together, bounce ideas off one another and ultimately be more creative." — Taylor Madaffari, Copywriter, Aisle Rocket Studios

Client Glenn Roper, Digital Brand Manager at Whirlpool Brand, is able to visualize complex design elements in a single view. Roper says, "With Figma I'm able to view and share digital concepts with simulated functionality without needing a login or having developers code the experience." As a result, the review process between agency and client became significantly shorter because they can iterate together in real-time.
ARS uses nested components in prototyping, so updates made to the parent components are automatically reflected in the prototype.

Inclusive design empowers.

With a shared workspace in Figma, it's all the easier to host 80 percent of the creative process in the browser. Zaman explains, "We don't need to physically share a screen to collaborate, rather developers can follow along with designers or copywriters with designers, all within Figma." For Taylor Madaffari, copywriter at ARS, having access to make changes to copy directly within designs has completely changed her approach as a copywriter. She no longer has to work in a separate space, drop in copy and make updates so it fits within a design; instead, she and Vander Ark are already collaborating in the early stages of design. Madaffari explains, "Figma leveled the playing field and made the entire design process more democratic. For the first time, I feel empowered to have input on designs." Copy is no longer an afterthought but rather a distinct design element.

Madaffari and Vander Ark collaborate in real-time.

And developers have embraced Figma, too. Before, the team was handing off static files to development, which resulted in a lot of back and forth. "It was excruciating at times for our developers so they welcomed Figma with open arms," recalls Carson. Figma reduces the friction of turning visual concepts into code because developers are granted viewer access to files early to make sure designs are achievable from the onset.

A small team makes a big impact.

The Whirlpool team started to capture the attention of the larger agency; they were leaner, faster and more collaborative. As Zaman puts it, "they had the magic sauce and everyone wanted in." With Figma, the team consolidated their tooling, tightened complex workflows and centralized files. Even client Glenn Roper, digital brand manager at Whirlpool, agrees, "Figma enables greater collaboration between my team and our agency, saving time and money without sacrificing creativity." Today, collaborative open design is the standard at ARS. Inspired by the collaborative nature of the Whirlpool team, Vander Ark is helping get the entire design team at ARS onto Figma. "People are seeing Figma as the new industry standard," explains Vander Ark. To get more people onboarded onto Figma across the agency, Vander Ark created a playground of assets in Figma in which colleagues are tasked with re-imagining existing pages.

What's next? Design systems.

The Whirlpool team is embarking on a new journey to build more consistency across the digital brand. Following the principles of atomic design, the goal is to have the atoms built to spec in Figma and saved to the team library as a source of truth. And as the agency moves towards standardizing on Figma, they're looking to create a one-to-one design language system for base components that can be used across brands — be it Maytag or Whirlpool. The sky's the limit for this agency.
https://medium.com/figma-design/aisle-rocket-studios-the-agency-behind-whirlpool-masters-remote-collaboration-c0ada83d93e6
['Morgan Kennedy']
2018-09-21 15:01:02.469000+00:00
['Case Studies', 'UI', 'Design', 'UX']
Letters.
why do i feel like i don't own you? your words, phrases that once echoed my soul now seem like an abyss abandoned by its invisible being. why does all of you appear new to me? …that every time i roam my eyes over your entirety, i barely remember a thing. tell me, had i written you? did i spend hours to fulfill my fixation upon you? why does it feel like everything is brand new? …like i am back at the start where it's all scraps and full of trash. i am lost yet to find.
https://medium.com/poets-unlimited/letters-897b613df476
[]
2019-06-18 13:42:21.096000+00:00
['Poetry', 'Writing']
Efficiently deleting one million things per hour
For ten years people have known WeTransfer as the go-to service for getting their stuff from A to B. Simply, safely, and (dare we say so ourselves?) beautifully. But what about when the files reach B — what happens after that?

Transfer Konmari

About 55 million transfers are created every single month on wetransfer.com. Most of these transfers expire after a week, which means they need to be deleted. Sounds pretty straightforward but, with an average of 20 transfers uploaded every second, that's 20 transfers we need to delete per second just to keep up. And sadly, no, we can't just hit 'delete all'. We keep track of uploaded transfers, their senders and their intended recipients in a centralized MySQL database. The database tables have a simple structure, with several child tables (containing things like download details and file metadata) belonging to a single parent table called transfers. Often transfers can contain hundreds of files and potentially millions of downloads. So when it comes to deleting 20 per second, that's a serious amount of rows to take out in one go. And we've got to stay on top of it. Not deleting the metadata would be a violation not only of people's trust and expectations but also of regulations like the GDPR. Plus, these tables grow to multiple terabytes in size, making them problematic to store. Deleting so much in such a short space of time and at such a regular cadence was taking its toll. For a long time, we would actually take the site offline for a couple of hours throughout the year and remove the rows manually, since trying to do it while the site was up would result in mysterious downtimes. So, as WeTransfer continued to expand and develop, we decided to get to the bottom of the issue once and for all.

Problem #1: Too slow

Knowing which transfers we needed to delete was straightforward, since the tables have an indexed delete_at column. All our database tables are ActiveRecord models and they already had the relevant relations defined between them. Naturally, our first attempt was something like this:

# Load expired transfers in batches and destroy them one by one.
until finished() do
  some_transfers_to_delete = Transfer.where("delete_at < ?", Time.now).limit(BATCH_LIMIT)
  some_transfers_to_delete.destroy_all
end

It worked, but not as well as we'd hoped. Our logging showed that roughly 66% of the database time taken to destroy an object was spent in the COMMIT phase. A large percentage of the time was also spent allocating ActiveRecord objects and then immediately garbage collecting them. So our next iteration deleted multiple transfers per transaction and skipped allocating ActiveRecord models altogether:

until finished() do
  some_transfers_ids_to_delete = Transfer.where("delete_at < ?", Time.now).limit(BATCH_LIMIT).pluck(:id)
  ActiveRecord::Base.transaction do
    # This generates DELETE FROM recipients WHERE transfer_id IN (1,2,3 …);
    Recipient.where(transfer_id: some_transfers_ids_to_delete).delete_all
    Download.where(transfer_id: some_transfers_ids_to_delete).delete_all
    FileEntry.where(transfer_id: some_transfers_ids_to_delete).delete_all
    Transfer.where(id: some_transfers_ids_to_delete).delete_all
  end
end

This helped us reach an acceptable speed, deleting just under 6000 rows per second across all the tables. Sadly, it didn't last, and we encountered an unusual surprise when running the script for longer periods of time.

Problem #2: Too fast

InnoDB, the storage engine for MySQL, returns a positive response to a COMMIT as soon as the transaction has been written to its redo log.
At that point, even if there is a hardware failure, the data can still be recovered from the redo log. The engine also marks the relevant pages in its in-memory cache of disk pages (the buffer pool) as 'dirty'. One of the background threads in InnoDB is responsible for writing these dirty pages to the underlying storage, after which the transaction can be deleted from the redo log again. That is, until you reach the maximum checkpoint age (see https://www.percona.com/blog/2011/04/04/innodb-flushing-theory-and-solutions/). The checkpoint age is the number of bytes above which no more transactions are accepted in InnoDB. If you reach this limit, your database will be unresponsive while background threads work to bring the checkpoint age down as fast as they can. If your application depends on the database (like ours does), this usually means downtime. When issuing commands which change many different pages (like deleting millions of rows), it's quite easy to raise the checkpoint age faster than the background threads can bring it back down. Clearly, we had to slow down our delete scripts, but by how much?

Problem #3: Just right?

Running the script too fast would risk downtime if we misjudged our traffic at any point. But leaving it slow enough to be reasonably sure we wouldn't hit the maximum checkpoint age meant running much slower than we wanted to. To tackle the problem we resorted to a very straightforward feedback controller, limiting the checkpoint age to about 66% of its maximum value:

until finished() do
  some_transfers_ids_to_delete = Transfer.where("delete_at < ?", Time.now).limit(BATCH_LIMIT).pluck(:id)
  ActiveRecord::Base.transaction do
    # This generates DELETE FROM recipients WHERE transfer_id IN (1,2,3 …);
    Recipient.where(transfer_id: some_transfers_ids_to_delete).delete_all
    Download.where(transfer_id: some_transfers_ids_to_delete).delete_all
    FileEntry.where(transfer_id: some_transfers_ids_to_delete).delete_all
    Transfer.where(id: some_transfers_ids_to_delete).delete_all
  end
  # Note: 2.0 / 3 rather than 2/3, since 2/3 is integer division (zero) in Ruby.
  sleep 10 if checkpoint_age() > ((2.0 / 3) * MAX_ALLOWED_CHECKPOINT_AGE)
end

After this, the checkpoint age no longer reached unsafe levels — success. But there was one problem left. While most transfers are only downloaded a handful of times, some are downloaded in the millions. Deleting all those rows in one go could still exceed the maximum checkpoint age with a single query, rendering our protective check pretty useless. We found this out the hard way when a single transfer (uploaded by a certain mononymous musician) was downloaded over 19 million times. Luckily MySQL supports LIMIT clauses on DELETE queries. So instead of having a single large query of an unknown length, we can chop it up into an unknown (but finite) number of transactions of a known length. We can also verify our checkpoint age after each of those queries to make sure we're not overloading the database:

# Seed num_deleted so the loop condition lets the first iteration run.
num_deleted = MAXIMUM_NUMBER_OF_ROWS_PER_TRANSACTION
until num_deleted < MAXIMUM_NUMBER_OF_ROWS_PER_TRANSACTION do
  ActiveRecord::Base.transaction do
    num_deleted = ActiveRecord::Base.connection.delete("DELETE FROM … WHERE … LIMIT …")
  end
  sleep 10 if checkpoint_age() > MAX_ALLOWED_CHECKPOINT_AGE
end

With this final addition, the problem appeared to be solved. The code has been up and running unattended without a single problem yet (knock on wood).
We've also been able to apply automated checkpoint age monitoring to other write-heavy operations, proving that investing in the deep understanding of our tools certainly pays off (and deleting one million things per hour is unbelievably satisfying).

Images (from top to bottom):
Nasa via Unsplash
Fabio via Unsplash
Joshua Sortino via Unsplash
Margaret Weir via Unsplash
https://medium.com/wetransfer/https-medium-com-wetransfer-efficiently-deleting-one-million-things-per-hour-fa92262b4854
['Wander Hillen']
2019-11-18 06:45:30.174000+00:00
['Code', 'Technology', 'Software Engineering', 'Tech', 'Web Design']