Unnamed: 0 int64 0 3k | title stringlengths 4 200 | text stringlengths 21 100k | url stringlengths 45 535 | authors stringlengths 2 56 | timestamp stringlengths 19 32 | tags stringlengths 14 131 |
|---|---|---|---|---|---|---|
1,500 | The M1 Macbook Pro is Blazing Fast | Photo by Daniel Cañibano on Unsplash
The M1 Macbook Pro is Blazing Fast
On my old 12" MacBook with a Core m3 chip, the slowest part of my workflow was opening Affinity Photo.
Affinity Photo, a faster and more affordable alternative to Photoshop, is my tool of choice for designing book covers, marketing materials, web design assets, and anything else I need for my writing business.
It’s a great piece of software, and I am a raving fan, but opening that thing on an m3 MacBook was slow. I would click to open the program and then pick up my phone to pass the next half minute. And that’s the blazing fast Affinity Photo. Photoshop, Affinity Photo’s larger (and slower) competitor, would easily take five minutes to load fully on my MacBook.
All in all, the m3 Macbook is not as slow as you would expect an m3 core to be, but it’s still slow. For writing articles or browsing simple websites, it is a fabulous machine, and I still recommend it to writers who are looking for their first writing computer. But for the more complicated parts of my business workflow — designing marketing materials, configuring my website — it showed its limitations.
The M1 MacBook Pro is a world apart.
After fully setting up my M1 MacBook Pro, the first thing I did was open Affinity Photo. It opened in exactly two heartbeats. One heartbeat for the preview screen, one heartbeat for the main photo editor to load. Done.
I gasped so audibly my mother asked me what was wrong.
The initial configuration process for the machine, a process that notoriously takes ten to fifteen minutes, passed in a matter of moments. I barely processed that I was waiting on my new computer to load before it was fully loaded and ready to rock-and-roll. Indexing the entire machine, which normally takes about an hour, took fifteen minutes. | https://medium.com/macoclock/the-m1-macbook-pro-is-blazing-fast-eb259757ca28 | ['Megan Holstein'] | 2020-11-18 05:46:15.236000+00:00 | ['Gadgets', 'Tech', 'Digital Life', 'Apple', 'Technology'] |
1,501 | Meet the team: Software Engineer & CIO Daniel Kirchgesner | Short Introduction:
I am Daniel Kirchgesner, a 22-year-old software developer at SiriusX FinTech. After completing my Bachelor’s degree in Software Engineering at Heilbronn University, it was a big and uncertain step for me not to take the usual path of joining a large, long-established company to gain professional experience. Instead, I decided to take this opportunity here, which is why I’m doing this interview now. My long-time buddy Peter Metzler contacted me over a year ago with the idea for SiriusX, which he developed together with Patrick Scheiter, and asked me for technical support. Convinced of the concept, I started creating the website during my studies and have been an integral part of the team ever since.
What does your support look like in concrete terms?
My tasks are planning, creating and managing websites and apps. I am responsible for the operation and maintenance of the products as well as for quality assurance. System tests are carried out under my direction, the results are evaluated and documented accordingly and decisions are derived from them.
How do you judge your career decision in retrospect? | https://medium.com/@siriusfintech/meet-the-team-software-engineer-cio-daniel-kirchgesner-fd89319535c2 | [] | 2019-11-15 19:44:49.247000+00:00 | ['ICO', 'Blockchain', 'Blockchain Startup', 'Software Development', 'Blockchain Technology'] |
1,502 | JavaScript’s Magical Tips Every Developer Should Remember | JavaScript is the most popular technology for full stack development. While I have been focusing mainly on Node.js, and somewhat Angular.js, I have realised that there are always some tricks and tips involved in every programming language irrespective of its nature of existence.
I have seen (so many times) that we programmers tend to make things overcomplicated, which in turn leads to problems and chaos in the development environment. It is beautifully explained by Eric Elliott in his post.
Without further ado let’s get started with the cool tips for JavaScript:
1. Use “var” when creating a new variable.
Whenever you are creating a new variable, always keep in mind to use “var” in front of the variable name unless you want to create a global variable. This is because if you create a variable without using “var”, its scope will automatically be global, which sometimes creates issues unless that is what you require.
There is also the option to use “let” and “const” depending on the use case.
→ The let statement allows you to create a variable with the scope limited to the block on which it is used.
Consider the following code snippet.
function varDeclaration() {
  let a = 10;
  console.log(a); // output 10
  if (true) {
    let a = 20;
    console.log(a); // output 20
  }
  console.log(a); // output 10
}
It is almost the same behaviour we see in most languages.
function varDeclaration() {
  let a = 10;
  let a = 20; // throws syntax error
  console.log(a);
}
Error Message: Uncaught SyntaxError: Identifier ‘a’ has already been declared.
However, with var, it works fine.
function varDeclaration() {
  var a = 10;
  var a = 20;
  console.log(a); // output 20
}
The scope will be well maintained with a let statement and when using an inner function the let statement makes your code clean and clear.
I hope the above examples will help you better understand the var and let commands.
→ const statement values can be assigned once and they cannot be reassigned. The scope of const statement works similar to let statements.
function varDeclaration() {
  const MY_VARIABLE = 10;
  console.log(MY_VARIABLE); // output 10
}
Question: What will happen when we try to reassign the const variable?
Consider the following code snippet.
function varDeclaration() {
  const MY_VARIABLE = 10;
  console.log(MY_VARIABLE); // output 10
  MY_VARIABLE = 20; // throws type error
  console.log(MY_VARIABLE);
}
Error Message : Uncaught TypeError: Assignment to constant variable.
The code will throw an error when we try to reassign the existing const variable.
2. Always use “===” as comparator instead of “==” (Strict equal)
Use “===” instead of “==” because when you use “==” there is automatic type conversion involved which can lead to undesirable results.
3 == ‘3’ // true
3 === ‘3’ //false
This happens because in “===” the comparison takes place among the value and type.
[10] == 10 // is true
[10] === 10 // is false
'10' == 10 // is true
'10' === 10 // is false
[] == 0 // is true
[] === 0 // is false
'' == false // is true
'' === false // is false
3. undefined, null, 0, false, NaN, and ‘’ (empty string) are all falsy values.
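The list above is easy to verify for yourself; a quick sketch:

```javascript
// Each of these values coerces to false in a boolean context.
var falsyValues = [undefined, null, 0, false, NaN, ''];

var results = falsyValues.map(function (value) {
  return Boolean(value);
});

console.log(results); // → [false, false, false, false, false, false]

// By contrast, values like '0', [] and {} are all truthy.
console.log(Boolean('0')); // → true
console.log(Boolean([]));  // → true
```

This is also why a bare `if (value)` check treats all six of these values the same way.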
4. Empty an array
var sampleArray = [2, 223, 54, 31];
sampleArray.length = 0; // sampleArray becomes []
5. Rounding number to N decimal places
var n = 2.4134213123;
n = n.toFixed(4); // computes n = "2.4134"
6. Verify whether your computation will produce a finite result.
isFinite(0/0); // false
isFinite('foo'); // false
isFinite('10'); // true
isFinite(10); // true
isFinite(undefined); // false
isFinite(); // false
isFinite(null); // true
Let’s take an example to understand the use case of this function. Suppose you have written a database query over a table containing a large amount of data, and after executing the query you are not certain about all the possible result values, because the results change based on what you put into the query dynamically. In such a case you cannot be sure the result is finite, and if you use that result directly it can produce an infinite value, which can break your code.
Hence, it is recommended to use isFinite() before any such operations so that infinite values can be handled properly.
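As a minimal sketch of that recommendation (the function name and values below are made up for illustration, not from any real query):

```javascript
// Hypothetical example: `total` and `count` stand in for values
// coming back from a query; the names are illustrative only.
function safeAverage(total, count) {
  var result = total / count;
  if (!isFinite(result)) {
    // 0/0 gives NaN, and n/0 gives Infinity; handle both here.
    return 0;
  }
  return result;
}

console.log(safeAverage(10, 2)); // → 5
console.log(safeAverage(10, 0)); // → 0 (Infinity caught)
console.log(safeAverage(0, 0));  // → 0 (NaN caught)
```

Note that `isFinite()` catches both `NaN` and `Infinity` in one check, which is why it works as a single guard here.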
7. Use a switch/case statement instead of a series of if/else
Using switch/case is faster when there are more than 2 cases, and it is more elegant (better organised code). Avoid using it when you have more than 10 cases.
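As a quick illustration (the status codes and messages here are just an example, not from the original article):

```javascript
// Four cases plus a default: a range where switch reads more
// clearly than a chain of if/else statements.
function httpStatusMessage(code) {
  switch (code) {
    case 200:
      return 'OK';
    case 301:
      return 'Moved Permanently';
    case 404:
      return 'Not Found';
    case 500:
      return 'Internal Server Error';
    default:
      return 'Unknown Status';
  }
}

console.log(httpStatusMessage(404)); // → "Not Found"
```

Because each case returns directly, no `break` statements are needed; if you use `console.log` inside the cases instead, remember to add them.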
8. Use of “use strict” inside your file
The string “use strict” will keep you from worrying about declaration of a variable which I mentioned in 1st point.
// This is bad, since you create a global without anyone telling you
(function () {
  a = 42;
  console.log(a);
  // → 42
})();
console.log(a);
// → 42
“use strict” will keep you from making the above mistake. Using “use strict”, you will also get quite a few more errors:
(function () {
  "use strict";
  a = 42;
  // Error: Uncaught ReferenceError: a is not defined
})();
You could be wondering why you can’t put “use strict” outside the wrapping function. Well, you can, but then it will be applied globally. That’s still not bad, but it will affect any code that comes from other libraries, or everything if you bundle it all into one file.
9. Use && and || to create magic
"" || "foo"
// → "foo"

undefined || 42
// → 42

function doSomething() {
  return { foo: "bar" };
}

var expr = true;
var res = expr && doSomething();
res && console.log(res);
// → { foo: "bar" }
Additionally, don’t forget to use a code beautifier when coding, and use JSLint and minification (JSMin, for example) before going live. This will help you maintain a consistent coding standard across your project.
The above is a mere reflection of the power of JavaScript. | https://medium.com/swlh/javascripts-magical-tips-every-developer-should-remember-38c71b1cbfba | ['Tarun Gupta'] | 2020-12-25 14:45:06.231000+00:00 | ['Programming', 'Technology', 'Productivity', 'JavaScript', 'Software Engineering'] |
1,503 | Work smartly with QR code. | QR code functionality is significantly reducing human work and making us work smartly.
How?
If you just take a look a couple of years back at how we used to exchange information, whether personal or professional, it was all done manually, correct?

Now, technology has been changing significantly over time; however, we still aren’t utilizing it the way we should.

I’m going to give you a simple example:

You’re meeting with your client and he asks for your contact details. You provide your visiting card so he or she can call you later for any further discussions, right? You’re still making your client work hard instead of smart.

If you provide an OLD card like the one below, I’m damn sure you’re going to lose the opportunity, as your card signals that you’re an outdated version.

In today’s world, we have to stay more up to date with technology than anyone else, and you should impress your client by letting him use the cool technology. We don’t require those outdated visiting cards anymore; instead, you can create your own customized DIGITAL visiting card like the one below.

Here you go: the required details are embedded in the form of a QR code. Just let your client scan it, and he or she will only have to import the contact details.
Scan it to see the same results as below from your mobile scanner.
This is how it looks after scanning. Just import it and it will be saved to your phone’s directory. Isn’t it cool?

You can even customize it: append your logo, title, etc.

There are many apps that let you create QR codes for free. Try the one below:

https://www.qr-code-generator.com/ to generate free QR codes.
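For context, contact QR codes like the one above typically just encode a plain-text vCard that the phone’s scanner knows how to import. A minimal sketch of building that string (all contact details below are made up for illustration):

```javascript
// A contact QR code usually encodes a plain-text vCard.
// This sketch builds that string; the field values are invented.
function buildVCard(contact) {
  return [
    'BEGIN:VCARD',
    'VERSION:3.0',
    'FN:' + contact.name,
    'ORG:' + contact.company,
    'TEL:' + contact.phone,
    'EMAIL:' + contact.email,
    'END:VCARD'
  ].join('\n');
}

var card = buildVCard({
  name: 'Jane Doe',
  company: 'Example Inc.',
  phone: '+1-555-0100',
  email: 'jane@example.com'
});

console.log(card);
```

Feed a string like this to any QR generator and the result is a scannable contact card.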
Always work smartly not hardly 😎 | https://medium.com/@shahednawaz01/work-smartly-with-qr-code-d814b2e2ddf4 | ['Mohammed Abdul Nawaz'] | 2020-12-14 05:07:27.525000+00:00 | ['Qr Code', 'Qrcodegenerator', 'Technews', 'Smartphones', 'Technology'] |
1,504 | Why we Still Need Bookshops in the Internet Age | Dare the question: do we really need bookstores (or even libraries) today? In theory, no. If you’re in search of a book a simple click on the Internet can satisfy it: within 24 hours it is delivered in your mailbox. Better, you can have it immediately in its digital version. Better yet, lying in your couch you can ask your personal assistant — Alexa, Watson, Siri whatever… — to take care of the purchase. Better yet, it can advise you on your next reading. Better still, the machine can even read it to you.
An access to the planet’s library without moving from your sofa.
A dream comes true …
Except this dream is not yours.
It’s Jeff Bezos’s.
And it’s a lure. This infinite choice is a mirage. From our sofa, with our laptop, space is shrinking ever more. The algorithm of the machine is in tune with our “inner algorithm”, a force that pushes us to choose the same dishes at a buffet displaying a thousand meals.
A nice book published in England in 2014 reveals what the use of a bookstore is in such a context.
It’s called The Unknown Unknown by Mark Forsyth. The title is an allusion to a quote from Donald Rumsfeld, George W. Bush’s own Secretary of Defense. Entangled in the scandal of the war in Iraq, in order to justify the merits of military strikes, he had given journalists an improvised course in epistemology. He subsumed human knowledge into three broad categories, three continents of knowledge: the “known known”, those things that we know we know (for example, I know that Umberto Eco wrote The Name of the Rose, that Napoléon was a French Emperor, that the Beatles were four…); the “known unknown”, i.e., the things we know we do not know (for example, I know that I do not know the exact population of Tanzania, or how to say “Thank you” in Japanese…); and, finally, the “unknown unknown”, those things we do not know we do not know (and of which I could give no example, since precisely I do not know that I do not know them).
This third continent (let’s call it Terra Incognita Incognita) is the largest continent. Almost infinite. And contrary to what we believe, this continent remains out of reach via the Internet. It remains a blind spot of our computers and our smartphones. Yet we live in the illusion that all the knowledge of the world is directly accessible to us on the Internet. In theory, it is. But in practice, it’s a different story.
Internet serendipity — this ability to make us discover new things— is partly an illusion or a myth. It certainly exists — denying it would be absurd — but it is a smooth serendipity that proceeds from what we already know based on the same principle as « Italy-alley cat-cat and dog-dog leg » …
In fact, when we are surfing on the infinite expanses of the Internet, we remain, even reluctantly, bound to what we already know. We do not move very far from familiar shores, stuck to our echo chamber: for comfort, we trample the “known known” and we explore on tiptoe the “known unknown”. But “the unknown unknown” stays out of our sight beyond our horizon. How can we manage to google something whose mere existence is unknown to us?
Actually, the Terra Incognita Incognita is a country almost unreachable by our own means. It is precisely the bookseller’s or the librarian’s mission to welcome us to this third continent: a leap into the “unknown unknown” through books we did not even suspect existed. Towards unsuspected promises of reading pleasures. And that’s the very definition of a good bookstore: the one we always leave with something we did not come to get.
More than ever, at a time when algorithms confine us to our own choices, when we tend to duplicate our own tastes, we need bookstores or libraries able to take us out of our “cultural bubble”.
Far more sophisticated than the stupid algorithm of Amazon (which blindly follows our course and stupidly aggregates the books bought by other customers), the bookseller’s algorithm is a key that opens us to our own desire, the one that we do not know yet. Not a desire that we would have checked off in advance, as on a dating site or an online order, but a novel desire, that is to say, an unknown unknown desire.
An absorbing mission of exploration that requires to immerse in the plethora of editorial production in order to detect the nuggets. A risky mission too that involves coming out of the comfort of the mainstream recommendations.
But it is an essential mission for the sake of cultural diversity. Because, contrary to what the giants of the Internet claim, they decline to act for the “long tail”. Bookshops and libraries do it by delivering the accessible diversity pledged by the Internet: by making it alive, by ensuring both plurality and duration in book curation. By passion, too.
Lately, an advertising campaign in France compared booksellers to superheroes. Indeed, they are. Their superpower is that of taking us to the unknown unknown… It’s up to us to help them keep going. By pushing the door of a bookshop, wide open on the unknown, rather than clicking, comfortably seated on our sofa, browsing what we already know. ¶
This text is an extract of our new essay “Délivrez-vous!” published in France at Les Editions de l’Observatoire. | https://paulvacca-58958.medium.com/why-we-still-need-bookshops-and-librairies-in-the-internet-age-87d971bb7ab2 | ['Paul Vacca'] | 2019-04-27 15:04:01.871000+00:00 | ['Serendipity', 'Technology', 'Books', 'Reading', 'Libraries'] |
1,505 | Reolink vs Amcrest: Which IP Camera is The Most Popular? | Reolink vs Amcrest — Quick Analysis
When it comes to security cameras for outdoor use, Reolink and Amcrest are names you can completely trust. Reolink and Amcrest security cameras are both among the most popular in 2020, but their model lineups differ in several features. Today we will compare both of them, so please keep reading.
Full Reviews here: Reolink vs Amcrest
In fact, Amcrest has been the go-to option for many users as a nanny camera, a baby monitor indoors, and a wireless security camera outdoors.

Even though Reolink is no longer a newcomer to the security camera market after numerous years helping to enliven it, it is not underestimated by observers of surveillance systems, because its flagship products compete with products from large vendors like Hikvision, Lorex, or Amcrest. Reolink offers products at lower prices than other products with the same specs from other vendors.
Reolink vs Amcrest
This time we’ll compare the technical specs of Reolink vs. Amcrest for some of their featured products with dome, bullet, wifi, and PTZ cameras. We take the featured product from Reolink from each of the bullet, dome, and PTZ models, compare it with similar products from Amcrest, and see how the prices differ between the two camera models.
Reolink vs Amcrest wifi camera
For home or indoor needs, Reolink has released several camera models, particularly home wifi cameras. This time we take one of the flagship products in the new wifi camera line, namely the Reolink E1 Pro, and we take the comparison product from Amcrest, which is the IP4M-1051.
You may like: Reolink security camera reviews
Amcrest
Foscam US re-branded themselves as Amcrest Technologies in early 2016. Foscam US (Foscam Digital Technologies) used to be an independent distributor for the Chinese manufacturer/supplier Foscam Shenzhen. However, in 2016 the Chinese supplier allegedly started undercutting Foscam US. Foscam US says they had no choice but to cut all ties with Foscam Shenzhen and go it alone as Amcrest Technologies.

Foscam security cameras never had a great reputation, and once Amcrest launched their own products, they quickly overtook Foscam in quality, reliability, support service levels, and reputation. Foscam is still around, though.
Amcrest mostly re-brands Dahua cameras.
Lens
Although Reolink uses a slightly higher resolution than Amcrest, the optical zoom is much different. With the Amcrest IP4M-1053 you can zoom from a wide angle of 58° to the narrowest angle of 5° with 12x motorized optical zoom.
Night vision
For night vision, Amcrest also extends further, up to 330 ft, while Reolink only reaches 190 ft. For business needs, the Amcrest IP4M-1053 is ideal for covering very large areas like parking lots, factory areas, mining project areas, and marine projects.
Amcrest vs Reolink Spotlight Camera
One security camera model that functions as an active deterrent is the spotlight camera. When the camera detects motion at night, in low light or full darkness, it triggers spotlights that shine bright light around your property.

Amcrest offers the ADC2W spotlight camera with a built-in 110dB siren, while Reolink introduces the new Reolink Argus 3. The Argus 3 is a wire-free camera with two built-in spotlights and a siren.
Resolution
Both spotlight cameras come with the same full HD 1080p resolution but with different frame rates (frames per second): Amcrest delivers real time at 30 fps, while the Reolink Argus 3 delivers 1080p at 15 fps.
Spotlight
One thing that makes the Argus 3 different is that it comes with a Starlight CMOS sensor that works perfectly in low-light conditions and delivers full-color night vision when the spotlights are ignited to illuminate the frame. Amcrest just floods the lights around without switching the B/W night vision to full color like the Argus 3.
Power
The other difference is the power: the Argus 3 comes with three power options, DC power, a rechargeable battery, or the Reolink solar panel. Solar power is the preference, so you always have an always-on camera when the sunlight is good.
Reolink vs Amcrest Bullet
Reolink doesn’t have many camera models with 4K resolution except the Reolink B800 bullet model, though more may come within the next few months (as of this writing, August 2019). For comparison we take one of the Amcrest products with 4K resolution, the Amcrest IP8M-2496.

When the power runs out, plug the camera into a regular outlet, or install a solar panel to avoid continuous recharging.

The company claims that the battery can last up to six months on a single full charge. However, you can expect this to be a bit of a stretch.

Nonetheless, if you’re not into checking video feeds regularly, the camera can out-perform others in terms of longer battery life.
Cloud Storage Service
The camera allows cloud storage of up to 10 GB of video for 15 days, a limit that is sufficient for most users.

Also, it’s safer than the microSD card storage option, as you’ll have access to the saved footage even if the camera goes missing.

You can install this security camera without worrying about privacy leaks. It’s ideal for keeping an eye on your property, kids, or pets when you are away from home.
Voice Commands
Get hold of your live video with a simple voice command! It will be visible on Chromecast-enabled TVs or the Google Nest Hub, as per your settings.
The camera is compatible with both Alexa and Google Assistant, which allows for hassle-free installation and use of the safety camera.
Stunning Night Vision

Along with incredible night vision and vivid 1080p HD quality, this outdoor camera can provide all the coverage you need.

The Reolink Argus 2 comes equipped with advanced 33 ft night vision and a 130-degree wide field of vision.

You will have access to crystal-clear videos and pictures even in the complete darkness of night.
PIR Motion Detection
Motion detection is, by far, the favourite feature in security cameras.

The device automatically triggers instant motion alert notifications. You can even set up siren alarms to deter potential burglars!

The motion detection alerts with the Reolink Argus 2 are prompt and fast enough to let you see even slight movements.

In addition to this, the sensitivity is also excellent, although you should keep it on “medium” to avoid the occasional false alarm.
Two-Way Audio
The camera features a built-in mic and a speaker that you can use to greet someone at the door or warn potential burglars even when you are away from home.

Unfortunately, there’s a 2.5-second delay in delivering your voice to the outside. This makes it confusing to carry on a conversation with someone on the other side.

It is, however, handy enough to convey one-liners or ask the delivery man to wait while you get to the door in time.
Reolink vs Amcrest: Final Verdict
When it comes down to the final choice, what will you pick? There are a number of features you need to take into consideration before selecting a security camera for your home or workplace.

Reolink may seem like it’s lacking some key advancements; however, it’s popular as an affordable product that can actually be quite powerful if used right. The camera is a breeze to set up and promises long-term use.

For the Amcrest Pro-HD, the bottom line is that it’s far more capable of delivering sharp image quality and cleaner recordings. In contrast, this security camera is harder to set up and configure. It also may not be a budget-friendly choice in comparison with the Reolink Argus 2.

The camera works on a rechargeable lithium-ion battery. You don’t have to worry about cables and wires running around during installation. You can mount it just about anywhere.
Bottom Line
In this Amcrest outdoor camera review, you’ve seen how well the lens is tailored to capture what’s happening in your home or business. This system is straightforward to install over Ethernet, and it’s simple to monitor on any computer.

While the Amcrest doesn’t have a sound recording device, the field of vision and resolution from the Sony sensors in this system will make up for not having audio recording capabilities.

If you’re really upset by not having sound, you should find an affordable way to add a microphone. The local customer support from Amcrest will likely be able to guide you through that procedure. In this comparison between Reolink and Amcrest, you’ve got the winner.
1,506 | All About Timeboxing | Timeboxing refers to the allocation, or “boxing”, of a specific amount of time to a task, depending on its importance and required effort. It allows you to structure your day based on chunks of time dedicated to a certain number of tasks, helping you organize your schedule and maximize your productivity. To-do lists, the traditional method of managing tasks pales in comparison to timeboxing: the ultimate productivity hack.
Why To-Do Lists Don’t Work
Stop making to-do lists and start timeboxing. To-do lists, a perfect resort for the procrastinator, are highly ineffective and inefficient. There are five fundamental problems with to-do lists.
1. To-do lists overwhelm us with too many choices. Students are busy people, and creating lists of dozens of items is intimidating. As a result, the running lists promote procrastination and task avoidance.
2. To-do lists lack a commitment device. They don’t prevent you from choosing the most pleasant and easy tasks over the most important and difficult ones, because they don’t keep you accountable for the time you use inefficiently.
3. The clutter of to-do lists causes us to be naturally drawn to simpler tasks. Even when we try to prioritize, we have a tendency to complete tasks that are more easily accomplished. This leads to the putting off of essential task items and ultimately lowered productivity.
4. On the flip side, we are rarely drawn to important-but-not-urgent tasks. Activities like studying for the next final are often put off when looking at a long list of to-dos.
5. To-do lists lack the context of what time you have available. A simple list does not capture the vital bits of information that you need to make efficient use of your time, such as how long a certain task will take you to complete. This inherent flaw of to-do lists has significant negative implications for one’s productivity.
Why Timeboxing Works
Timeboxing is the perfect alternative to to-do lists. In a study done by Harvard Business Review, it was found that out of 100 productivity hacks, timeboxing was ranked as the most beneficial. The five points below outline why timeboxing is effective, especially for students.
1. Timeboxing enables the relative positioning of work. It’s visual, intuitive, and obvious, allowing you to position your study sessions around test dates or your homework sessions around deadlines. It is a great option for students who are constantly swamped with a never-ending list of dates and deadlines.
2. It provides a comprehensive record of what you have done. Looking back into your timeboxed calendar will give you a detailed account of what you worked on in the past few days and how much time you allotted to each task.
3. You will feel more in control. As you timebox, you decide what to do, when to do it, and how much time you spend on it.
4. You will be substantially more productive. Rather than a vague, low-commitment to-do list, you are imposing a sensible, finite time for a series of tasks.
5. Timeboxing helps you beat procrastination by forcing you to get started. The allotted block in your calendar keeps you accountable for that period of time and pushes you to get started on the tasks you have been avoiding, ultimately equipping you to overcome procrastination.
If you are interested in incorporating timeboxing into your everyday life, edPAL, an organizational platform is a perfect choice. An all-encompassing platform, edPAL features a timeboxing function along with document centralization (e-binder) and scheduling (calendar) features. Visit www.edpal.net to learn more! | https://medium.com/@edpal/all-about-timeboxing-5874d0ae04c0 | [] | 2020-11-22 15:40:44.326000+00:00 | ['Students', 'Productivity', 'Organization', 'Technology', 'Edpal'] |
1,507 | Tutorial: Synchronous and Blocking JavaScript | This is the beginning of an introduction into the blocking nature of JavaScript. In the process, you’ll also be doing more exploring of various Node modules, like fs .
This is required reading for the Code Chrysalis Immersive Bootcamp’s precourse — a series of assignments, projects, assessments, and work that all students must successfully complete remotely before embarking on our full-stack software engineering course.
Before You Begin
You will need to have Node installed on your computer. A simple way of describing Node is that it is a program that will allow you to run JavaScript on your computer (rather than just in the browser). This means that you will be able to control, among other things, your file system! If you are new to Node, please check out Node School to get started.
This is a hands-on tutorial, so you will need to follow along and play around with the code yourself.
Learning to look through and read documentation is a very important skill for software engineers. Please practice looking through the NodeJS documentation as you are going through this tutorial. Please be cognizant of the version of Node that you have and the documentation version that you are looking at.
Objectives
Gain familiarity with using fs
Use fs.readFileSync to read files on local system
to read files on local system Use the words synchronous and blocking to describe certain aspects of JavaScript
Synchronous & Blocking
Let’s create a function:
console.log('1');
function hello() {
console.log('hello');
}
console.log('2');
hello();
console.log('3');
Can you predict the output?
Answer:
1
2
hello
3
What if hello was instead a long-running operation, though? Like a database lookup or HTTP request? Don't worry about how it's implemented; just suppose that if it were, then hello would take a lot longer to complete.
Does this mean that we can’t call the last console.log until it's complete? With JavaScript, yes. That's because JavaScript can only do one thing at a time (advanced: single-threaded). This would be, what we call, a synchronous operation.
With a synchronous operation, if something takes a long time, then that function would block the rest of the code from running until the operation is completed.
On a browser, that can mean an unresponsive web page.
On a server, that can mean requests would stop being processed.
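To see blocking in action, here is a small sketch. The busy loop is just a stand-in for a slow synchronous operation like a database lookup; while it spins, nothing else can run.

```javascript
// busyWait stands in for a slow synchronous operation.
// The thread is stuck inside the loop until `ms` milliseconds pass,
// so anything after the call has to wait.
function busyWait(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {} // blocking: nothing else runs here
}

console.log('1');
busyWait(200); // pretend this is a slow database lookup
console.log('2'); // only prints once busyWait returns
```

If you bump the wait up to a few seconds, you can feel the "unresponsive page" problem directly: the whole program freezes until the loop finishes.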
Switching gears a little bit…
Introducing fs
Unlike the browser, where everything is global, in Node there are only two global things: a module object, and a require() function.
We’ll get into module later; for now, let's require() fs:
const fs = require('fs');
console.log(fs);
What is fs? There are two ways to find out:
Check the docs
console.log() it!
We see a lot of file-related methods, so one could deduce that it stands for "file system".
In your current directory, run the following command:
echo 'hello world' >> index.js
Then run the following in Node:
const fs = require('fs');
const result = fs.readFileSync('index.js', 'utf8');
console.log(result);
What is the result? What happens if you don’t include ‘utf8’? What is ‘utf8’?
Blocking Behavior
In our above example, take note that the console.log does not run until we are done reading the file. Try modifying index.js so that it is a huge chunk of code. Does that change the order?
No.
This is because fs.readFileSync is exactly as its name suggests: it is a synchronous method and, therefore, blocking. No matter how large the file, our JavaScript will wait until fs.readFileSync is completely done reading it.
Let’s check this out with regular JavaScript.
Blocking Behavior with Higher Order Functions
Let’s create a higher order function now and provide hello to it as a callback.
console.log('1');
function hello() {
  console.log('hello');
}
console.log('2');
function invokeNow(action) {
  console.log('3');
  action();
}
console.log('4');
invokeNow(hello);
console.log('5');
Can you predict the output?
Answer:
1
2
4
3
hello
5
invokeNow(hello) is still exhibiting blocking behavior. We have to wait until all of that is done before we print 5.
Challenges
1. Can you use fs.readFileSync() to read non-JavaScript files, like a JSON file or a text (.txt) file? How about an HTML file? Try it.
2. What are other options besides 'utf8'?
3. The opposite of synchronous is called asynchronous. What do you think asynchronous means?
4. Explore fs.readFile. This is the asynchronous version of fs.readFileSync. How do you use it? What does it do differently?
Resources
Here are a couple of resources. These are not the only resources out on the internet, so please look around and find other resources to supplement as needed.
What about Asynchronous Javascript? 👉 Read Tutorial: Asynchronous JavaScript — Callbacks, Promises, and Async/Await

Source: https://medium.com/code-chrysalis/tutorial-synchronous-and-blocking-javascript-797a5566b63a (Yan Fan, 2020-11-23)
5G: A Genie at our Fingertips?
A letter to Greta Thunberg — and a dream
by Miguel Coma
Dear Greta,
Never before has my mind been so restless as I imagine our society’s future. Humanity and our ecosystems are in the midst of profound changes. I had so many ‘aha’ moments in 2020. I realize that attempts to maintain a few people’s privileges destroy our civilization; and, on the other hand, I feel extraordinary hope that everyone on Earth can have enough food, proper shelter and a meaningful, sustainable life. In this vision, we need wise choices for our health, environment and technology.
This leads me to 5G, the fifth generation of wireless telecommunication networks. My research shows that 5G is a corporate scheme. It would impair efforts to create a sustainable society. The more I learn about 5G, the more I dream of another kind of Internet. Today I will describe how 5G use would harm our planet — and offer alternatives that would allow us to avoid it.
First, I want to tell a story of how a mobile Internet connection can take life and use energy. Then, I want to show you how to create more sustainable connectivity. The story stars a girl, a cat and a genie. (Geeks, watch out: I’ve got three hidden abbreviations here — with definitions named below[1].)
Last winter, Pash was a kitten, chasing squirrels on an icy porch. One day, Jack, Pash’s companion, captured her ice ballet on his phone and posted it on YouTube. Within hours, Pash was ranked most viewed kitten ever. Two weeks later, seven-and-a-half-year-old Altea sits in her dentist’s waiting room. To relax, this girl is watching cute cat videos on her mum’s smartphone. Meanwhile, NRgee, the genie, slumbers. NRgee can sleep for millions or even billions of years, curled up inside of tiny coal, gas or petroleum molecules. He can hide inside the even tinier nucleus of an atom, where there’s room for lots of genies. Superpowers give NRgee abilities to travel inside vehicles like water, wind, sun rays or batteries; to transform into other forms; to travel at the speed of light and to live eternally. Plenty of clones can assist NRgee when he wants to satisfy his master. Under specific conditions, we mortals can summon NRgee.
Actually, young Altea has the power to summon NRgee at her fingertips. Just by tapping the touchscreen, she can wake the genie! Freed from a molecule in the smartphone’s battery, NRgee transforms into electricity. In the blink of an eye, NRgee and his clones work all over the smartphone to get YouTube to stream “Pash the Kitten” onto Altea’s mother’s screen.
NRgee clones transform again, this time into microwaves from the smartphone’s antenna. They radiate in all directions around Altea. Only a minuscule fraction of the clones make it to the cellular antenna. Most of the NRgee clones are lost. The lucky ones tell the cellular network that they need to reach YouTube’s data centre as soon as possible, using an IP address. To deliver Altea’s order to stream the cute cat video to YouTube, NRgee seeks help from more clones travelling the electric power grid and feeding the Internet’s “highways.” To assist, most genies will first leave their long slumber (from coal, natural gas, uranium, you name it) using an incredible human-made machine called a power plant.
Whenever a molecule containing carbon frees a genie, a reaction makes him fart carbon dioxide (CO2). Power plants using fossil fuels (coal, gas, oil) or biomass use oxygen (O2) and emit CO2. So, these power plants do exactly the opposite of living plants. During photosynthesis, trees and flowers absorb CO2, emit O2 and capture NRgee from light to feed on it.
Back to Altea’s command: when it reaches YouTube’s data centre, other genies fly in, feed digital storage devices and servers, find Pash’s video and send it to Altea’s screen. Lots of NRgee genies are needed to carry the data-heavy video file and play the video for Altea before she goes to her dentist’s chair.
Greta, to show you where NRgee genies are most needed in this story, I’ve got a pie chart. This one shows that when you stream a video using 4G (available today), the mobile access network uses more energy (60%) than the smartphone (30%) or the data centre (8%)[2]. The smartphone also uses significant energy to connect to the mobile network (3.7 times the microprocessor’s energy[3]). The mobile network is composed of many parts and each consumes energy to operate[4,5].
Compared to all data shared or accessed using a mobile connection (for video, music, photos, websites, documents, posts, tweets, emails, texts, apps, etc.), video consumes the most data (63% and growing[6]) and the most energy. Video emits the most greenhouse gases and has the greatest impact on climate change. Then, although 5G is more energy efficient than 4G, it will cause a dramatic increase in energy use and greenhouse gas emissions since it requires additional infrastructure and will increase data traffic. In 5G’s era, video is expected to have a far greater footprint than it already has. (When I talk about video, I mean streaming, video-calling and messaging.)
Video will continue to drive data traffic, energy consumption and greenhouse gas emissions.
Photo credit: Andreas Felske, unsplash.com/photos/oQEdDIMEIlc
I’ve got to admit that I love watching videos on a smartphone, tablet or laptop. But my priority is to reduce my carbon footprint. When I learned how much energy is involved in watching videos, I had to question my habit. Is it possible to reduce videos’ energy demands and environmental impacts — and still watch them?
It is possible! The key to a more sustainable Internet is to limit use of energy-guzzling mobile networks and rely on wired networks whenever possible.
If you love watching videos while traveling, load them onto your device using a fixed Internet connection (wired is better; wireless is possible), or a public Wi-Fi hotspot. Plenty of movies will fit on a smartphone or a cheap memory card. (A microSD card can fit hundreds of hours of high-definition video). If you pay monthly for a video streaming service, you should have a download button in the app. Some platforms offer downloads free of charge. Downloading one hour of video takes from a few seconds with an optical fibre connection, up to a few minutes with a decent 10 megabits per second (Mbps) wired connection.
With pre-loaded videos, you won’t experience drops in mobile service, even in tunnels or rural areas. You will save your battery, since your device will not have to radiate radiofrequencies to the nearest mobile (or Wi-Fi) antenna while your video plays. Your body will be less exposed to electromagnetic radiation. Then, by not using a mobile network (the largest, blue, slice of the pie chart), you will reduce energy use significantly. If it’s alright to be offline for a while and you want to save even more energy and radiation, switch off mobile data and Wi-Fi. Activate plane mode whenever possible. The same benefits apply for music downloads, although music uses less data than video.
If you really need to load a video while you’re on the go, first try connecting to a Wi-Fi hotspot, which uses less energy than a mobile network[7,8]. If possible, download the video and, when done, switch off the Wi-Fi — and enjoy your video. If a hotspot is not available, your last option is a mobile connection. If you decrease the video’s quality, you’ll save energy: open video quality settings by tapping three little dots, three small horizontal bars, or a cogwheel. For example, on YouTube, 360p will use significantly less data and energy than 720p. In Netflix’s app, opt for “mobile data usage” then “save data.”
If you love long video-calls on your mobile device, first try connecting with a cable. I didn’t think I could do it. But there are elegant wired solutions for mobile devices[9]. I’ve switched off our home Wi-Fi, and all family members use superfast cable connections. We’ve all reduced our mobile and energy use. If you really must have mobility, choose Wi-Fi over mobile data. If you really need a mobile network, look for a setting like Google Duo’s “limit mobile data usage.” This could mean upgrading to a newer version of the mobile video app. After upgrading, look for new features that can save data and energy.
Beside reducing energy use and climate impacts, keeping distance from wireless devices is healthier. A Wi-Fi router can be switched off easily at night using a mechanical outlet timer.
Another thing, if you care about reducing your carbon footprint: avoid Fixed-Wireless-Access (FWA) connections. FWA uses inefficient mobile networks, and typically much more data than a mobile device connection. Unless there’s no alternative, it makes little sense for users or the environment. If fibre optics or other wired links are available (TV cable, VDSL, ADSL[10]), they will provide the best experience in reliability, speed, response time, security and energy efficiency.
Video will continue to drive data traffic — and the Internet’s energy consumption and greenhouse gas emissions. In an article (http://www.webbsearch.co.uk/wp-content/uploads/2019/10/Why-the-quoted-applications-for-5G-will-not-raise-MNO-revenues.pdf) about the economic implications of 5G for network operators, William Webb summarizes the industry’s most promising 5G applications. He shows that there is no proof that autonomous cars, virtual reality, the Internet of Things (IoT), industrial IoT, network slicing, remote surgery or smart homes can benefit from national 5G networks.
Why are simple guidelines about reducing our Internet footprint not taught in schools or described in user manuals? Why do we learn to turn off lights and use energy-saving bulbs, to turn off the tap and use eco-shower heads, but not to switch off the Internet connection when we don’t use it, and limit 4G/5G connections? Does this knowledge compromise the industry’s dream of ever-exponential growth in mobile data and 5G?
If significant numbers of people adopted the steps I reported here to reduce mobile video streaming and calls, we would significantly reduce our energy use and greenhouse gas emissions. Mobile traffic itself could decrease! This leads to the crazy idea (is it?) that we could avoid 4G network saturation — as well as large-scale 5G network deployments. Let’s leave 5G to its true beneficiaries — a few large factories (https://www.lightreading.com/iot/industrial-iot/what-might-the-demand-be-for-5g-in-manufacturing/a/d-id/753746). By deploying infrastructure at specific sites, we would not have to build or operate 5G networks. We’d save energy and reduce greenhouse gas emissions dramatically. This is my dream.
Miguel
References
[1] Hidden abbreviations — answers :
Kitten: Pash > HSPA 3.5G (High Speed Packet Access);
Girl: Altea > a-LTE-A > LTE-A 4.5G (Long Term Evolution-Advanced);
Genie: NRgee > NR 5G (New Radio).
[2] Yan M. et al., “Modeling the Total Energy Consumption of Mobile Network Services and Applications“, 2019, fig. 5.(b) (Video Play) — using 4G macrocells https://www.researchgate.net/publication/330201584_Modeling_the_Total_Energy_Consumption_of_Mobile_Network_Services_and_Applications
[3] Ibid., table 1. To stream a video (“video play” scenario), on average, 51.4 J are consumed by the 4G network, while 13.8 J are consumed by the CPU (video play)
[4] In mobile networks, operating energy is consumed mostly in base stations (est. 80%), but also in backhaul connections (between base stations and the mobile core network) and in the mobile core network. https://gsacom.com/paper/5g-network-energy-efficiency-nokia-white-paper
[5] A base station’s operating energy includes (in decreasing order): power amplifiers, transceivers, power supplies, cooling, and others. Humar I. et al., “Rethinking Energy Efficiency Models of Cellular Networks with Embodied Energy”, 2011, fig. 3. https://www.ltfe.org/wp-content/uploads/2011/04/published_paper.pdf
[6] Ericsson Mobility Report June 2020 https://www.ericsson.com/fr/mobility-report/reports/june-2020
[7] On a smartphone, 4G consumes 23 times more energy than WiFi.
https://www.resilience.org/stories/2015-10-21/why-we-need-a-speed-limit-for-the-internet/
[8] Mobile networks (in 2015, 5G not yet included) caused about 60% of total carbon emissions of all telecommunication networks. Mobile networks : 100 Mt CO2e ; total all networks : 169 Mt CO2e. Malmodin J. & Lundén D., “The Energy and Carbon Footprint of the Global ICT and E&M Sectors 2010–2015“, 2018, fig.6. https://www.mdpi.com/2071-1050/10/9/3027
[9] You need up to three things to connect your smartphone, tablet or laptop with a cable. First, you need an ethernet RJ45 adapter for your device (unless your laptop has an RJ45 socket, or you already have an RJ45 adapter). Examples here: https://prises-cpl.info/comparatif-des-meilleurs-adaptateurs-usb-ethernet-en-2018/ Then, you need an ethernet cable. See examples here: https://www.lifewire.com/best-ethernet-cables-4178848, or retractable cables here: https://nextthing.co/retractable-ethernet-cable Last, if your device is too far from your Internet router, you can use a pair of power socket adapters anywhere in your home. Plug one adapter near your device, and the second adapter near the router (connected with a second ethernet cable). See here: https://www.lifewire.com/best-powerline-network-adapters-4141215. Powerline adapters are a higher investment, but they make it possible to switch off your Wi-Fi. My family uses many of these adapters around the house.
[10] VDSL2 and ADSL2 speeds over 10 Mbps require a distance from the exchange no longer than 2.7 km (1.7 miles). https://www.increasebroadbandspeed.co.uk/2012/graph-ADSL-speed-versus-distance;
https://www.increasebroadbandspeed.co.uk/2013/chart-bt-fttc-vdsl2-speed-against-distance
Miguel Coma is an engineer in telecommunications and an Information Technology architect. After a decade in telecommunications (with two mobile operators and an equipment manufacturer), he now works as an enterprise architect in the bank-insurance sector. He believes in technology’s potential to create sustainable progress.
Miguel Coma’s letters are part of a series of letters to Greta Thunberg, written with Katie Singer. The series is available at www.DearGreta.com/letters.
This article was originally published by Wall Street International Magazine at: https://wsimag.com/science-and-technology/64418-5g-a-genie-at-our-fingertips

Source: https://medium.com/@katiesinger/letter-10-44db0fe9b309 (Katie Singer, 2020-12-24)
Type Coercion in JavaScript
Explained with simple and complex examples
Knowingly or unknowingly, you have been dealing a lot with Type Coercion if you frequently code in JavaScript. Type Coercion is just a fancy name for implicit typecasting in JavaScript.
For more articles, visit https://www.knowledgescoops.com
Type Coercion Definition
As per MDN, Type Coercion refers to the automatic or implicit conversion of values from one type to another.
In JavaScript, if we execute the following statement
var val = '10' + 10;
console.log(val);
The string '1010' will get printed in the console. The number 10 is implicitly converted to the string '10' while executing the code. That's what implicit typecasting, or type coercion, is.
No matter if we write the same statement as
var val = 10 + '10';
console.log(val);
The same output ‘1010’ will get printed.
The following are some more examples for which we can see type coercion in action.
'10' - 10        // 0
10 - '10'        // 0
'10' + 10        // '1010'
10 + '10'        // '1010'
null + ''        // 'null'
null + undefined // NaN
Type Coercion with + operator
When adding two operands with the + operator, the JavaScript engine will add the values if both are numbers.
// No Implicit Type casting needed
10 + 10 // 20
But if either value is a string, JavaScript will try to automatically convert the other value to a string so that they can be concatenated.
// With Implicit type casting
10 + '10' // 1010
When we try to add null and undefined, the JavaScript engine tries to convert the values to numbers, resulting in NaN.
null + undefined // NaN
Type Coercion with the - operator
With the - operator, the JS engine tries to subtract the values, implicitly casting them to numbers. So we get the result as
'10' - 10        // 0
10 - '10'        // 0
null - undefined // NaN
'2' - 1          // 1
Here are some of the strange conversions with reasons why they are converted like that.
'' - 1       // -1, as in JS Number('') is 0
false - true // -1, as JS parses true as 1 and false as 0
null + 10    // 10, as Number(null) is 0
Type Coercion with == operator
In JS, the == operator is very commonly used to compare values. It compares them based on their values, ignoring their types.
For example,
10 == '10' // true
This expression evaluates to true: JavaScript internally typecasts the string '10' to a number, and the two values are then equal. Hence, the expression evaluates to true.
Some more complex scenarios with == equality operator
true == 1 // true, as Number(true) is 1
1 == '01' // true, as Number('01') is 1
What if both the values are either not a string or boolean?
null == '' // false: null is loosely equal only to undefined, nothing else
Can you also guess the output of the following expression?
true == 'true'
Make a guess and scroll to the solution…
.
.
.
true == 'true' // false
In JavaScript, true is coerced to 1. JavaScript then tries to cast the string 'true' to a number, which evaluates to NaN. Since 1 is not equal to NaN, the expression evaluates to false.
{} == {} // false as Objects get compared by their references not by their values
Comparison between null and undefined
null == undefined // true
This is because the == comparison treats null and undefined as loosely equal to each other, and to nothing else. Both also happen to be falsy in terms of Booleans:

Boolean(null) // false

Boolean(undefined) // false

Hence, null == undefined yields true, even though other falsy values (like 0 or '') are not loosely equal to null.
Object and String comparison
var obj = {};
console.log(obj.toString()); // "[object Object]"
Output for the following expressions:

Source: https://medium.com/developers-arena/type-coercion-in-javascript-c973b369b272 (Kunal Tandon, 2020-02-16)
Data for Africa: Open data portals listed and reviewed
Tool Review
What open data is out there? Where do you find it and what can you do with it? We review ten South African and African open data portals.
Johannesburg Train Station From Randlords by Paul Saad (Creative Commons)
The Open Government Declaration, signed by 11 African states from 2011, states that ‘people all around the world are demanding more openness in government’. The African Development Bank says on its open data portal that “reliable data constitutes the single most convincing way of getting the people involved in what their leaders and institutions are doing.”
Open data is data that can be freely used and shared by anyone. The theory is that more open data means more transparency, and more transparency means a more accountable and better government. In Africa, some governments have committed to making their data open. But the Open Data Barometer puts African countries at the bottom of their rankings.
In many cases, the issue isn’t just openness — it’s that the data hasn’t been collected in the first place. Questions have also been raised about whether increased transparency is likely to lead to more accountability without more government commitment. As you can see from our review below, looking at what has been done so far, many of the open data portals are still difficult to navigate, especially on mobile phones which is how most people get online. As one African Union report puts it, we still need a data revolution in Africa.
So what South African and African open data is out there? How useful is it and where do you find it? This list is a work in progress. Please contribute and comment: What portals do you use or run? What portals do you “rate”? And which ones are just not worth the pixels/bandwidth? How have you used, or tried to use, open data?
Please share your experience and ideas in the comments below, directly via email to civictech@journalism.co.za, or join the discussions on our Facebook page.
Below you’ll find seven South African and three international open data portals: Municipal Money, Municipal Finance Data, Municipal Barometer, Wazimap, City of Cape Town Open Data Portal, South Africa National Data Portal, The NRF’s SA Data Archive, OpenAFRICA, DataBank from the WorldBank, and African Development Bank Groups’ Open Data for African portal
South Africa
1. Municipal Money
Municipal Money is a National Treasury project that takes municipal finance data and makes it available to the public in an accessible way. The stated goal of the portal is to take complex and extensive spending and budget information, to crunch and reformat it (in graphical ways) so it doesn’t require an economics or statistics degree to interpret, and to let users drill down to information at the individual municipal level.
What data is there?
All the data on the portal is reported information, submitted to National Treasury by individual municipalities. Generally you can access information on an area’s audit status, income sources, cash balance, and spending (by category) including wasteful expenditure, as well as a “resources” section. This includes links to data sources such as financial reports in PDF and Excel format. Depending on which municipality you look at, you may find missing information, which, as the site makes clear, may be an indication that the municipality failed to provide data to the Treasury (which is informative in itself).
What can you do with it?
Find your municipality, or choose one of interest, to see summarised information with supporting graphs and graphical ratings (using smiley and unhappy faces, and a red, orange and green colouring key). Each metric provides the year(s) of data available, an explanation of the metric and outgoing links.
Ease of use
The site is simple and well designed, and the “traffic light” colour code aids understanding. The “Did you know” boxes and linked explanations will help orientate new data users. At the time of visiting (May 2017), the location detection appeared to be non-functioning, but search was simple and accurate. Viewed on a low-end Android smartphone*, the site is generally mobile-friendly with only minor blips in the layout of some graph keys.
2. Municipal Finance Data
The Municipal Money tool (above) draws its raw data from this site, the Municipal Money API website. Here users can access the original datasets that Municipal Money is built on.
What data is there?
There are 12 datasets to access, including audit reports, cash flow and capital acquisition. The site promises four years of data, covering 278 municipalities.
What can you do with it?
Drill down to specific months and years, within a category or dataset, and download the raw data.
Ease of use
The interface is a lot less user-friendly, in comparison to Municipal Money, and is clearly not aimed at a “general” audience. That isn’t to say that it is a big mess (it isn’t your typical government department website) but there has been a lot less attention paid to supporting a non-expert user-experience. The site is clear and readable, but fails the mobile test with many of the dataset blocks breaking or displaying run-over text. Once you click through to the datasets themselves, it’s best to rotate your screen for landscape viewing — or, better yet, if you have access to a PC, give up and access it there.
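For readers comfortable with a little code, the raw data behind Municipal Money can also be pulled programmatically. The sketch below just builds a query URL for the portal's API: the host and the "cubes" path pattern follow the API site as seen at the time of writing, but treat the cube name ("cflow" for cash flow) and the query parameters as assumptions and confirm them against the portal's own documentation before relying on them.

```python
# Sketch: building a query against the Municipal Money data API.
# The cube name ("cflow") and the pagination parameters are assumptions --
# check them against the live API documentation before use.
from urllib.parse import urlencode

API_BASE = "https://municipaldata.treasury.gov.za/api"

def build_facts_url(cube, **params):
    """Return the URL for the facts of a given data cube (e.g. 'cflow')."""
    query = urlencode(sorted(params.items()))
    url = f"{API_BASE}/cubes/{cube}/facts"
    return f"{url}?{query}" if query else url

# Example: page through cash-flow facts, 50 rows at a time.
url = build_facts_url("cflow", pagesize=50, page=1)
print(url)
# To actually fetch the data: urllib.request.urlopen(url), then parse the JSON body.
```

From there, the JSON response can be loaded straight into a spreadsheet tool or a pandas DataFrame for analysis.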
3. Municipal Barometer
Municipal Barometer was developed by the South African Local Government Association (SALGA). It publishes economic, social and governmental performance data down to a ward level. It’s intended audiences are both general interested individuals and municipal staff who can access the information for informed planning purposes.
What data is there?
There are three “tools”: the data bank, a benchmarking tool, and a reporting tool. Information in the data bank is divided by year and area, and includes demographics, financial info, access to social and basic services, and “economic growth and development”, each with their own subcategories. The data itself is largely drawn from the last available census, currently 2011.
What can you do with it?
Once you’ve made your various selections (and you can select multiple categories), you’ll end up at a page that lets you see the data in graph or table format on screen, or download it in PDF and Excel spreadsheet formats.
Ease of use
There is a lot of clicking involved in the data bank section of the site (four steps, each with their own choices) and some users are likely to find that click-path confusing. Two steps and three submenus in, I caught myself wondering “where was I going with this again?”. That means it’s a lot of work for a merely interested citizen, but a good resource for a more confident data spelunker. The mobile experience, on the other hand, is a complete write-off.
4. Wazimap
Wazimap was developed by Media Monitoring Africa with Code for South Africa (now OpenUp), and takes census and election data and links it to a map of South Africa. You can use the clickable map interface or search for a specific area to access the information — at a provincial, city and even ward level.
What data is there?
Wazimap incorporates census 2011 data from Stats SA and election data (national 2014 and municipal 2016) from the IEC. You’ll be able to see how votes were split between parties, as well as demographic and financial information. The census data has some quite revealing insights with metrics such as language, citizenship and the very specific and granular, like “head of household” information or household goods ownership (TV, radio, fridge, etc).
What can you do with it?
With each metric, you can click to see the data on which the graph is based (in table form) and get a handy embed link, so you can slot it into your journalistic stories or external websites as needed. This is why it’s proved such a popular tool with local journalists already.
Ease of use
The site is simple and intuitive, and the graphs simply presented. The mobile experience is smooth and clear — easy to read numbers and responsive graphs even in portrait orientation.
5. City of Cape Town Open Data Portal
In line with the city’s open data policy, to support transparency, the City of Cape Town has created an open data portal for city information.
What data is there?
There are 92 data sets and counting, including information on land administration (aerial photographs), spatial planning, health, finance, and even call centre statistics.
What can you do with it?
Browse and download images, documents and spreadsheets, or visit the “mini-sites” that are built off the back of this data.
Ease of use
The site is surprisingly active and deep, and the three lists on the front page (recently uploaded, most popular and featured sites) a pretty useful navigation “extra” (in a prompting kind of way). It’s not the most modern or attractive site ever but it is clear and “navigable”. A pleasant surprise is that it doesn’t all fall apart when accessed via mobile, despite its reliance on table-format in the data set area. But it can tend towards a bit of a “data dump”, with most data for download only, rather than viewing on the site.
6. South Africa National Data Portal
A project of the Department of Public Service and Administration (DPSA), the South African National Data Portal is part of the country’s commitment to the Open Government Partnership — along with the principles of transparency and accountability.
What data is there?
It hosts departmental data and links to external data sources. There are 409 data sets on offer, with download links, of varying usefulness. For example, if you want to see the Department of Agriculture, Fisheries and Forestry’s PDF list of South African varietal seed crops (from December 2013), then you’ve come to the right spot. Having said that, the Departments of Energy, Environmental Affairs, Higher Education and Labour have all contributed data sets.
What can you do with it?
Download a huge variety of official information from an official government source.
Ease of use
The front page presents 409 datasets grouped into ten themes, such as Human Settlements and Economy and Employment — which is great in theory. But click on any theme and you get zero datasets. This leaves you with the option of clicking the “See all 409 datasets” link and browsing, or searching if you know the name of the set you want. The presentation format is at least a clean and simple table, with document name, theme, file type, and uploaded date rows. The site is pretty responsive, but best viewed in landscape on a mobile.
7. The NRF’s SA Data Archive
The National Research Foundation’s South African Data Archive (SADA) proclaims to act as “a broker between a range of data providers … and the research community”, with a view to preserving info for future use and safeguarding data.
What data is there?
The site promises data sets on labour and business, political studies, social studies and surveys and censuses, but currently all the links to the data sets are broken (as are many other links on the site). We reached out to the NRF and they promised that the site was still operational, but said they were “experiencing problems with our IT”, and promised this would be “resolved soon”.
What can you do with it?
Nothing right now, but the potential for a data archive that pools the huge amount of research that falls under the NRF banner is exciting. So, fingers crossed that they can exorcise their IT demons soon.
Ease of use
Not currently available to review.
Africa
8. OpenAFRICA
openAFRICA is volunteer-driven, run by Code for Africa, and says it "aims to be largest independent repository of open data on the African continent". It is a private, not-for-profit repository of data collated (or "liberated") from public sources.
What data is there?
A large variety — 2,544 data sets, ranging from Western Cape dam levels and local airport locations to World Bank data, a Sustainable Development Goal baseline survey, and greenhouse gas mitigation strategy information.
What can you do with it?
Users can access and download data sets, and even upload their own to store them or make them widely accessible. There are also social sharing buttons on source pages, so you can get the word out.
Ease of use
It's a pleasure to use on a computer, with a strong, uncluttered interface. At first glance, I thought the mobile experience would be just as good. The front page is responsive, but it all falls over once you search or browse the datasets, with none of the table text displaying.
9. DataBank from the World Bank
A searchable repository of World Bank data, gathered from hundreds of countries and 55 databases.
What data is there?
This is probably the world's largest collection of comparable country-level economic and development data, with 63 databases covering 264 countries** and accessible in 1,504 data series dating back 57 years. Of course, not all areas will have all years, or the same availability of metrics, but there is a tonne to peruse, including data on health, poverty, inequality, jobs, economics, governance, and ease of doing business. It's often the case that African data is a lot less current than that from other countries in the global south, an indication of the continental challenges faced by national statistical offices and other data collection agencies.
What can you do with it?
Generate charts, tables, and time-linked data, create reports, visualisations and shareable graphics from a wide variety of metrics and categories, with mapping and pivot table-like functionality added in.
Ease of use
The interface screams "serious" — wordy, with reams of blue and grey text, clickable list items, and next to no images (you can make your own data charts and maps once you get in that far). It promises to be tablet-friendly, but is not responsive on a regular mobile phone screen.
10. African Development Bank Group's Open Data for Africa portal
This platform hosts data from 54 African countries with a strong focus on developmental matters and socio-economic statistics.
What data is there?
Data is gathered from partners and their own sources, and is reported at country and regional levels. Clicking on a country link will open a country-specific subpage of the portal with some top-level stats, like unemployment and GDP figures. Dig into the catalogue page to find themed data, demographics, health statistics, gender splits, birth rates, and so on.
What can you do with it?
Users can see, compare, and analyse data, create visualisations, and share outputs on a wide range of data. Search for country-specific info or browse data catalogues per country.
Ease of use
The site navigation is a bit lacking, and it's not immediately clear if there is a way to do cross-country comparisons on the site. The visualisation options, though, are a nice touch. The site is not mobile-responsive, but the design is clear and simple enough that if you are prepared to do a lot of zooming in and out, mobile users should still be able to get value from their visit. Best viewed on a laptop or desktop, though, if you have one.
*For the mobile-friendliness assessment, the same low-to-mid-range Samsung Galaxy Grand Neo Plus device was used to access all of the sites.
** Yes, there are not 264 countries in the world. We suspect this number may include historical data (countries that have changed) and possibly duplicates. We have reached out for clarity and will update when/if we hear back. | https://medium.com/civictech/data-for-africa-open-data-portals-listed-and-reviewed-277b92134284 | ['Kate Thompson Davy'] | 2017-10-05 17:18:29.322000+00:00 | ['Civic Technology', 'Open Data', 'South Africa', 'Civictech', 'Features'] |
1,511 | Future Thinking of Artificial Intelligence towards society | Future of AI
What does the future hold? It is everyone's favorite question: "What will I be when I grow up?", "Will time machines be available?", "Will a cure for cancer be found?", or "Will we finally set foot on Mars?". There are too many questions to be answered, which is why both leaders quoted below (Gandhi and Lincoln) said that the future depends on us. It is we who will shape what Artificial Intelligence becomes for future generations: will it be an era of Decepticons or Autobots, good or evil? That is the question we need to answer. In this article, I will explain my perspective on the future of AI and its significance for society.
“The future depends on what you do” -Mahatma Gandhi “The best way to predict your future is to create it”-Abraham Lincoln
Artificial Intelligence was coined as a term in 1955, but it has only come into wide use in the 21st century. Demand for AI products is booming as "automation" makes people's lives easier. Artificial intelligence has made significant contributions to human society's development and has altered conventional methods of production and ways of thinking.
From providing translation and image recognition to diagnosing diseases, these intelligent systems are improving our lives. However, despite the fact that Artificial Intelligence is assisting our lives, it is full of loopholes that have the potential to cause a snowball effect. The existing policy system is not sufficient, and there is no supervising system in place. Ultimately, this technology brings threats to society such as leakage of personal privacy, unsupervised AI bots, abuse in warfare, and the destruction of people's jobs. If there are no ethical principles that are compulsory to follow, AI technology will reach its downfall in society.
The evidence already exists, and maybe as you are reading this, someone is manipulating your data for their own selfish interest. Take the breach of user data by Cambridge Analytica, where the company admitted that it broke British data law. The documentary "The Great Hack" portrays how Cambridge Analytica played an important role in campaigning for the Republican party in the USA in 2016.
It used user data to build a demographic picture of users, understand their characteristics, and eventually put them into clusters. Each cluster was targeted with different tactics and strategies to win its vote in the presidential election. Therefore, they had the capability to play god and change voters' minds, as they already knew what the voters of a particular cluster wanted to see, based on their data. Facebook is also partly to blame, as the data of 87 million Facebook users was leaked because of a faulty API. Mark Zuckerberg responded by saying:
“I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there’s more to do, and we need to step up and do it.”-Facebook
The underlying issue is how a big company like Facebook allowed certain unethical parties to gather data for illegal profit. This happened under our noses, yet no one had a clue about the case until a whistleblower came forward. And this is just Facebook we are discussing; what about the small and medium technology enterprises working on AI projects? Are they doing their work in an ethical manner? Only once we have global-scale laws upholding proper AI usage can we start to imagine a bright future with AI.
Fictional character from Avengers 2
Unfortunately, Artificial Intelligence (AI) is also used for war and for making arsenals more powerful. AI is supposed to help society, not to hurt it. However, there are still parties that support and develop autonomous weapons. The concept is cold: an autonomous weapon can select and engage targets without human intervention. Autonomous weapons can be used in war drones, or in any form designed solely to kill and destroy. With this AI war tech, AI will become a ticking bomb for society. Now, just imagine if it falls into the hands of the black market or terrorists. Thus, autonomous weapons need to be completely banned from the face of the earth, because they can cause more harm than good. It is clear that the future of AI really depends on how we design it: ethically or unethically? Ethically is the answer for a better world.
Nevertheless, it cannot be denied that Artificial Intelligence brings contributions to society. AI has even helped in COVID-19 research, predicting the spread of the virus and assisting the search for vaccines. Smart cities are being built to give society a better life. People no longer have to drive themselves: with a Tesla autonomous car, it takes less effort to get anywhere. We can now depend on technology to do our daily tasks. Artificial Intelligence is also the primary driving motivation of today's Industry 4.0, which focuses on automating people's activities and improving productivity. Personally speaking, I think people today love the concept of AI because of its ability to help us. We need to bring this element of "love" into developing AI, and for it to be secure, law and order and ethical principles need to be in place.
In the future, AI can only be good if the new technologies that use it are guided and regulated. It is important for government agencies to take the lead in the production of intelligence, do good work with AI technology, evaluate product ethics, and establish the moral presumptions of artificial intelligence at the technical level. With this, we can live together with more advanced AI in our society. Maybe everyone can have their own Jarvis!
1,512 | The Dynamic Simulation of Pedestrians and Vehicles Makes a Digital Twin City Alive | 51WORLD · Feb 2 · 5 min read
51WORLD believes that a true digital twin city must be alive with people bustling around and vehicles going back and forth. It is the people and vehicles that vitalize a digital twin city.
Therefore, 51WORLD's Earth Clone Institute (ECI) and Dynamic Simulation Institute (DSI) have come up with an extraordinary simulation solution that converges static digital twin scenes with the dynamics of agents.
It is the autonomous decision-making of agents, instead of the mechanical movement of models; it is real-time deductive results, instead of acting according to a "script"; it is real-time traffic-flow simulation accommodating thousands of people and tens of thousands of vehicles, instead of a simple demo animation.
Microsimulation of a subway station
Traffic flow simulation in an airport
Railway transit, ground transportation, venues, airports, docks, parks…
The pedestrian-vehicle dynamic simulation unleashes infinite imagination.
Bustling pedestrian flow simulation
What is the soul of a city?
There is no doubt that it is the people living in this city. The same is true for a digital twin city.
Therefore, pedestrian flow simulation is one with the greatest potentials as an accurate prediction of crowd behavior is required in simulating stadium evacuation exercises, subway commuting, and traffic efficiency of passing an intersection.
Rush hour simulation
On the other hand, crowd simulation is also a recognized conundrum in the industry.
Now, in cooperation with the X Business Unit, 51WORLD DSI delivers a pedestrian-flow simulation solution for various scenarios, built on the all-element scene (AES) and integrating dynamic simulation technology (DST). It can be used in venues such as schools, shopping malls, airports, stadiums, urban blocks, and underground transportation facilities.
Traditional pedestrian-flow simulation is essentially a timeline animation, in which virtual figures can only move mechanically along planned actions and routes.
However, in an intelligent simulation, people can make real-time decisions according to diverse environments, presenting real dynamics in the digital twin scene.
Pedestrian flow simulation when partial turnstiles break down
At present, the 51WORLD pedestrian-flow simulation system is able to adjust the density and speed of pedestrians and configure elevators, escalators, entrances and exits, and obstacles. It also supports layered display and 3D presentation of real-time statistical results.
At the same time, real-time pedestrian flow data can be obtained from 51WORLD’s AI recognition through camera monitoring, which can be used as a reference for parameter configuration in simulation.
In the simulation workflow, 51WORLD first builds a digital twin AES including the subway station, based on real data. Then, it processes the requirements and various IoT data from cameras and turnstiles into standard parameters, builds a pedestrian model, and connects it to the scene.
Pedestrian flow simulation when the exit is closed
Once the environment is constructed, managers can select different emergency evacuation plans, according to which the system animates the agents in the AES, as deduced from the corresponding models, and generates dynamic pedestrian-flow scenes. These are demonstrated in forms like heat maps, holographic scenes, a global visual field, and a roaming camera.
In the process, one agent can interact with other agents and make decisions accordingly, such as choosing a shorter queue or deciding to wait or leave depending on the line ahead.
This intelligent driving process is based on configurable parameters, which can be adjusted locally or globally according to the actual situation, in order to optimize and iterate the agent models.
Pedestrian flow simulation when an elevator breaks down
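The queue-choosing behavior described above can be sketched as a tiny agent-based model. The sketch below is purely illustrative and not 51WORLD's actual system: the gate count, the `patience` threshold, and the join-or-leave rule are all invented for the example.

```python
# Toy agent-based queue model: each arriving agent joins the shortest
# queue, or gives up and leaves if even that queue exceeds its patience.
# (Illustrative only -- not 51WORLD's implementation.)

def simulate(total_agents, num_gates, patience,
             arrivals_per_tick=5, service_rate=1):
    queues = [0] * num_gates      # people currently waiting at each gate
    served = left = 0
    remaining = total_agents
    while remaining > 0 or any(queues):
        # Arrivals: each agent inspects the queues and decides.
        batch = min(arrivals_per_tick, remaining)
        remaining -= batch
        for _ in range(batch):
            g = min(range(num_gates), key=lambda i: queues[i])
            if queues[g] >= patience:
                left += 1         # agent decides to walk away
            else:
                queues[g] += 1    # agent joins the shortest queue
        # Service: each gate processes up to service_rate people per tick.
        for g in range(num_gates):
            done = min(queues[g], service_rate)
            queues[g] -= done
            served += done
    return served, left

served, left = simulate(total_agents=200, num_gates=4, patience=6)
print(served, left)
```

Adjusting `patience`, `arrivals_per_tick`, or `service_rate` here plays the role of the locally or globally configurable parameters mentioned above.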
Hustling traffic flow simulation
If pedestrians can be viewed as the principal players of city traffic at the micro level, vehicle flow is vital to simulating city traffic at the meso and even macro level.
Supported by dynamic simulation technology and 51Sim-One, 51WORLD's self-developed autonomous driving simulation and testing platform, 51WORLD provides a set of vehicle traffic-flow solutions spanning macroscopic road networks to microscopic vehicles, satisfying various needs such as traffic design, planning, and management.
Meso and micro traffic flow simulation
We have simulated and evaluated the traffic conditions of a 15 km stretch of the West Third Ring Road in Beijing according to different demands, with the peak number of vehicles in the scene reaching 2 million.
Pedestrian-vehicle flow at a complex intersection
Microscopic simulation of complicated intersections is a thorny issue in traffic simulation. 51WORLD can simulate conditions such as vehicles turning in different directions, traffic-light responses, interactions among pedestrians, non-motorized vehicles, and motor vehicles, and pedestrians crossing roads.
Simulation of traffic flow at complex intersections
At the same time, seamless connection with the AES allows the platform to quickly simulate traffic flow in line with actual conditions, supporting the evaluation and verification of planned management schemes.
Simulation of traffic flow in an airport
Employing digital twin technology, 51WORLD designs a set of intelligent dispatching systems for aircraft and working vehicles, which can simulate and evaluate the traffic flow and flight statuses in the airport under different managing strategies.
In the digital twin scene, the system offers real-time micro-simulation for airport operators by importing regulations and rules, such as timetables and avoidance rules between working vehicles and aircraft.
At the same time, the system can also calculate relevant operating indicators, facilitating the formulation of relevant design and managing plans. | https://medium.com/51world/the-dynamic-simulation-of-pedestrians-and-vehicles-makes-a-digital-twin-city-alive-2db398f27a97 | [] | 2021-02-02 02:57:16.614000+00:00 | ['Digital Twin Technology', 'Smart Cities', 'Digital Twin'] |
1,513 | 9 Reasons Why You Should Get a Smart Door Lock | 1. More Security
If you're like many people, you might keep a spare key under the planter on your front porch. While it might have helped you a few times, it's not exactly the safest decision. Any thief would start by looking for a hidden spare key, and spots like porch mats and planters are the first places they'd search.
2. A Stylish First Impression
Smart locks can make any door look much smarter and cooler. No wonder they're called "smart" locks. A smart door lock can give new guests and friends a stylish first impression of your home. They are likely to assume you invested quite some money in it, even though it was actually pretty cheap.
Don’t worry about changing your mind, either. Most smart door locks are made to be extremely quick and easy to install. They’re also designed to leave your door’s exterior unaltered.
3. Open to Guests
Every time you make a plan, there's sure to be an unexpected event that will upend it. Sometimes you will need to let friends and neighbors into your home. It might be because they arrived earlier than expected, or because something happened and you need their help to give things a quick check.
Either way, you don't need to go back home or find a way to leave your house keys with them. With a smart door lock, you can hand out digital access codes to your friends and neighbors. You will also get a notification when an access code is used, so it can't be reused without your permission.
4. Perfect for When You’re Away
Most smart door locks available on the market nowadays have a built-in alarm system. This means that if anyone forces entry to your home while you’re away, the app on your phone will alert you, and it will also contact the authorities.
5. Easy to Upgrade
Installing a smart door lock — especially a Smonet smart door lock — is a great way to update your front porch’s look. For example, the Smonet SMUS-AM is equipped with Wi-Fi connectivity. This way, you can make sure your smart door lock is always working and connected to your Amazon Alexa or other smart home devices. Plus, the installation process is so easy! You can install it by yourself in a few minutes.
6. More Peace of Mind
How many times have you left home and then worried if you locked the door? With a smart lock, you can lock the door wherever you are. You don’t even need to worry about losing your keys, as long as you have your smartphone with you. All you need to do then is check the app on your smartphone, and you’re set to get in.
7. Perfect for Families with Kids
If you have kids, you always worry about their safety. To make sure they're getting back home safely and on time, you can set up access codes for them. This way, you can remotely monitor when they come back home after school.
8. Ideal for Home Renovation
When you're renovating your home, you might need to let many people in and out all the time. We're talking about builders and designers, to name a few. A smart door lock allows you to track who is getting in and out. You can also see if they are showing up on time. The best part is that you can monitor all their movements directly from the app on your phone.
9. Best for Rental Properties
If you've ever rented out any of your properties, you are familiar with the anxiety and discomfort you get at the idea of a tenant doing unimaginable things. A not-too-uncommon example is a tenant copying your house keys and giving them to friends without your permission. Sounds scary, doesn't it?
With a smart door lock, you have full control access from your phone. You can provide temporary access to any guests as well as revoke it once they’ve checked out. This is especially useful if you have many short-stay guests, for example, from Airbnb.
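As a concrete illustration, here is a minimal sketch of how time-limited, revocable guest codes could be modeled. This is a hypothetical toy, not how Smonet or any other vendor actually implements it; the class and method names are invented.

```python
from datetime import datetime, timedelta

class GuestCodes:
    """Toy model of temporary lock codes with expiry and revocation."""

    def __init__(self):
        self._codes = {}  # code -> expiry time

    def grant(self, code, hours):
        """Issue a code valid for the next `hours` hours."""
        self._codes[code] = datetime.now() + timedelta(hours=hours)

    def revoke(self, code):
        """Invalidate a code, e.g. after a guest checks out."""
        self._codes.pop(code, None)

    def unlock(self, code):
        """True only for a known, unexpired code."""
        expiry = self._codes.get(code)
        return expiry is not None and datetime.now() < expiry

lock = GuestCodes()
lock.grant("4821", hours=24)   # short-stay guest
print(lock.unlock("4821"))     # True while the code is valid
lock.revoke("4821")            # guest checked out
print(lock.unlock("4821"))     # False
```

The design choice worth noting is that revocation simply deletes the code, so a checked-out guest's code can never work again, while expiry handles guests who never formally check out.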
A smart door lock frees you from the need to make a special trip to hand over a physical key. It also saves you from the worry of your home keys being copied. | https://medium.com/smonet/w9-reasons-why-you-should-get-a-smart-door-lock-ee856d6ab37 | [] | 2020-11-13 01:35:53.333000+00:00 | ['Smart Home', 'Technology', 'Security Camera', 'Smart Door Lock', 'Home Security'] |
1,514 | How Does Your Digital Product Perform When Users Get Angry? | truematter · Aug 5 · 3 min read
When people get frustrated by digital products, they become angry, which impedes task completion. Understanding this common exasperation will help you make better sites, apps, and software.
Poorly designed, frustrating digital experiences make otherwise harmoniously calm people angry — really, really angry. This probably doesn’t come as much of a shock. Interface rage is something we’ve all felt.
Some hurl offending computers from windows. Others write exceedingly long research articles on the subject. We all cope in our own way.
I’m infamous around the office for, shall we say, colorful pronouncements when using maddening apps, sites, and software. My infantile rantings sail right past the PG-13 standard into Rated R land. I’m not proud of it. But at least I’m not alone. Apparently a good many of us confess to verbal or physical abuse of our computers.
The Downward Spiral of Digital Fury
The worst thing about getting worked up over bad digital products is once we surrender to anger, we create a self-reinforcing cycle that makes the problem worse.
Maybe we’re filling out an online form and miss a required field, instruction, or error. Perhaps the form is just plain broken. What should have been insanely easy becomes a time-sucking ordeal. Anger is a natural response, but it makes us harried and mistake-prone. When flustered, we miss obvious things we’d otherwise see. Problems multiply, making us all the more furious.
Even worse, sometimes we come to a digital product already angry, already irrational. Those of us who tackle the Free Application for Federal Student Aid (the dreaded FAFSA) know what it means to start at our wit’s end. In this case, the downward spiral begins immediately.
Death. Taxes. Anger.
User frustration is depressingly common. Online it is the rule, not the exception. The interface rage it generates is a fact of modern life. I get mad. You get mad. I bet even the Dalai Lama has lost his digital cool when trying to order new Warby Parkers. The question is, what exactly can you, a digital product maker, do about it?
Begin by focusing less on the anger itself and more on the commonality of it all. Instead of wondering what to do if someone gets upset when using your product, ask instead what you should do when they inevitably become blind with anger.
A Different Model for UX Success
People are naturally impatient online. Anger flows easily from this. Angry folks exhibit poor judgement, make rash decisions, and are generally irrational. Your software must be truly amazing if it performs well in the face of this emotional tsunami. Your fancy app might be wonderfully usable under calmer, even-keeled circumstances. Perhaps sturdiness under duress is a more legitimate measure of success.
Testing this hypothesis with users would be difficult to say the least. Fury is so dang subjective. And making people break blood vessels on purpose feels a tad unethical. So we’ll have to settle for the next best thing.
Agree with Reality
Assume everyone using your digital product is having an exceedingly bad day. They are upset, irate, and stressed. It doesn't matter why. Maybe your app is the cause. Maybe not. Perhaps they have a hundred things to do and your thing is just one of them. Perhaps they just finished the FAFSA before turning to your app. Life's not fair.
You can rely on people to be frustrated by technology. This is never going to change. Adopt a mindset that assumes perpetual user exasperation. It will revolutionize how you think about, build, and deliver your product. Everyone from the newest employee to the CEO will make better choices as a result.
About truematter
Our team has been doing the real work of user experience since the earliest days of the commercial web. We’re out to make your digital products a whole lot better.
That means ensuring they can withstand the endless onslaught of irrational human behavior.
Author: @ExperienceDean
Graphic: @djosephmachado
Image Source: Engraving by Martin Engelbrecht
Acknowledgement: Cian O’Connor for conceptual inspiration | https://medium.com/@truematter/how-does-your-digital-product-perform-when-users-get-angry-7f5dc4b8dfb6 | [] | 2021-08-05 20:42:54.726000+00:00 | ['Anger', 'UX', 'Technology', 'Users', 'Business'] |
1,515 | TechNews.io Versus Cision: It's All About the Search | When I brought my company's PR function in-house in August of 2018, I realized that I needed a media database.
With hundreds of thousands of reporters, contributors, and bloggers in the market at any given moment, and something close to a 30 percent rate of annual turnover, my focus was on vendors that specialized in maintaining reporter contacts rather than ones that relied on web-scraping to obtain their data.
That requirement led me to Cision, which appeared to be the last and best game in town for this type of database. (Cision had actually acquired and digested two other providers I had worked with in the past, and they seemed to be the last firm that claimed it went through the epic process of contacting reporters to confirm contact details a few times a year.)
The Cision database is fairly complete. If you know who you are looking for, you can usually find their name and contact information there.
It’s a different story for discovery.
Beat designations within Cision seem very rudimentary and "thick" — reporters listed as "Technology," for instance, might cover "artificial intelligence" or "embedded analytics" or "machine learning" or some combination of the three.
Given the fact that an off-beat pitch can easily result in a communications representative being added to a reporter’s spam list, it’s critical that you know exactly what a reporter is writing about, or risk having no way to contact a reporter ever again.
Decent Monitoring
Cision does have monitoring and search functions built in to the product to cope with the challenge of bloated beat titles, and reporter specialization.
The news monitoring function — alerts which track news published online — is outside of the scope of this review, but I will say it does a fairly good job of giving me a heads-up about coverage.
What’s more, if you want to track coverage by source type against particular names or keywords, Cision monitoring will sometimes produce sources that Google alerts will not, and it will usually beat the alerts by a day or so.
As you might expect, this monitoring will often reveal new reporters covering a beat of interest… although this happens incidentally and accidentally, over a long period of time — not helpful when you need to respond quickly with a reactive pitch opportunity.
Difficult Search
For media search, Cision has proved to be more frustrating.
I had cut my teeth in media strategy with a combination of Factiva (now greatly diminished in the number of sources it accesses, unfortunately) and Lexis Nexis (now much more expensive than it was). I learned advanced Boolean search and, with my skills and these platforms, I had a reliable way to get a fairly comprehensive overview of relevant media coverage by any way I wanted to slice and dice it — topic, outlet, subject, etc.
Cision promised that functionality but, without getting too far into the limitations of their archiving, their provider agreements, and their search functionality, I found it a very clunky and inefficient way to search coverage on the fly.
In the end, to find reporters I hadn’t already identified, I was usually left doing it the way that I had before I had contracted with Cision: Google searches.
Enter TechNews.io
It was at this point that I ran across TechNews.io.
TechNews.io does a lot of things well but, for me, the most valuable functionality is providing an easy, quick way to see who is writing about a given set of topics in the English-language technology space.
TechNews.io provides a very easy-to-use search interface — maybe a little basic for Boolean search geeks like myself, but one that seems to be very good at intuiting deeper meaning from coverage without requiring more sophisticated search strings.
If you want to see the reporters who cover “IoT” and “neural network,” for instance, you just type in those two terms. The results will sometimes miss a story, but it’s fairly comprehensive, and it almost never produces an off-topic result.
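To make the idea concrete, this kind of multi-term lookup can be thought of as an all-terms co-occurrence filter over recent stories. The snippet below is my own toy approximation with made-up data; it is not TechNews.io's actual API or algorithm, and the reporter names are invented.

```python
# Hypothetical scraped coverage; the reporters and headlines are invented.
stories = [
    {"reporter": "A. Writer", "text": "How IoT sensors feed a neural network at the edge"},
    {"reporter": "B. Author", "text": "IoT security flaws in smart door locks"},
    {"reporter": "C. Critic", "text": "Training a neural network on IoT telemetry"},
]

def find_reporters(stories, *terms):
    """Return reporters whose stories mention ALL of the query terms."""
    wanted = [t.lower() for t in terms]
    return sorted({s["reporter"] for s in stories
                   if all(t in s["text"].lower() for t in wanted)})

print(find_reporters(stories, "IoT", "neural network"))
# → ['A. Writer', 'C. Critic']
```

Requiring every term to appear is what keeps results on-topic at the cost of occasionally missing a story, which matches the behavior described above.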
A Reasonably Effective Database
What’s more, TechNews.io provides contact information for most of the reporters in the results it produces.
To be sure, these names are obtained through a web-scrape — something that I had hoped to avoid in my initial search for a media database solution.
In the end, this didn’t prove to be a huge handicap, at least when it came to TechNews.io. Reporter information in TechNews.io is driven by recent coverage, and most reporters now include their current contact information in their stories.
There are unusual instances where reporter contact information is missing entirely, requiring me to go online to search or going back to Cision. But, most of the time, the reporter contact is there, and correct.
The Verdict: Search Wins
The two platforms are not true comparables. Cision is a comprehensive database for every reporter, everywhere, which is updated regularly. It also contains robust monitoring functionality that, while not of use for me, is probably very valuable for larger companies.
TechNews.io, on the other hand, covers tech media only, and reporter names are derived from web scraping that is sometimes out of date and seldom includes contact information like phone numbers.
That said, finding the right reporters to pitch your story to is the most important part of the media research process — for me at least. In the rare cases where I can’t obtain a reporter’s email address or phone number through TechNews.io, I can usually track down this information on my own with some detective-work.
On the other hand, having a huge, all encompassing database full of names, and not knowing which ones to reach out to, is a much more difficult challenge to cope with.
The bottom line is that, for tech, TechNews.io provides a reasonably good database of North America- and UK-based reporters, while offering a VAST improvement over Cision in my ability to identify reporters who might cover fairly esoteric combinations of topics.
The powerful search functionality in TechNews.io has been a game-changer for me — allowing me to produce highly targeted media lists in minutes, which could have taken hours or days with Cision.
Not every PR person has the same needs but, for me, it’s all about search and, for this, TechNews.io is the clear winner versus Cision. | https://medium.com/@davidzweifler/technews-io-versus-cision-its-all-about-the-search-9a0afac9b355 | ['David Zweifler'] | 2019-02-07 15:51:19.872000+00:00 | ['Public Relations', 'Media Database', 'Media Relations', 'Product Reviews', 'Technology'] |
1,516 | Mesut Is Disrupting The Scholarship System With a Cutting Edge Technology Platform in Turkey |
👋Did you know that in Turkey🇹🇷 40% of young graduates 👨🎓👩🎓are unemployed? Nowadays, many promising students are working hard ⚒️to make sure they can get a job 👨💼👩💼after graduation. However, being accepted into a tier-one Turkish 🇹🇷or international🌍 university is not enough. They need to pay💰 the university fees, and many of them cannot afford it😟. Every year, hundreds of thousands of Turkish students apply for scholarships.
A few years ago, Mesut was a brilliant 🌠🌠student at Galatasaray University. His dream🚀 was to become a computer💻 science engineer, but he needed a grant💰. After many time-consuming & inefficient applications, he finally obtained one, and he successfully attended great 💪universities such as Galatasaray, La Sorbonne and Bogazici😀. During that process, he noticed every foundation or institution had its own internal process, which was complex and tedious.
After graduating🎓, Mesut decided to digitalize the scholarship system by launching a technology platform called E-Bursum, which seamlessly connects international 🌍donors with the most promising Turkish🌠 students. To make the process more efficient & trustworthy, he integrated artificial intelligence & machine learning. For example, the platform can complete full background checks through governmental databases, and it can calculate precisely how much a grantee really needs…
➡️And it works! 🌠300,000+ students are currently registered on E-Bursum. Since 2015, E-Bursum has distributed more than🌠 $5M+ to thousands of Turkish students. Mesut and his team are now targeting🌠 $10M by the end of the year
🌠🌠🌠At our own pace and scale, everyone can aspire to change the world | https://medium.com/@sionromain/mesut-is-disrupting-the-scholarship-system-with-a-cutting-edge-technology-platform-in-turkey-3891a82bfb24 | ['Romain Sion'] | 2019-06-16 15:01:20.140000+00:00 | ['Social Impact', 'Technology', 'Students', 'Turkey', 'Entrepreneurship'] |
1,517 | Cycloid-O-Matic Draws Fun Shapes with Modded Steppers | Spirographs, which use a series of connected rotating gears to draw interesting curve patterns, have fascinated young and old alike for generations. Now, with the advent of cheap stepper motors and accessible design tools, InventorArtist Darcy Whyte has taken this toy into the 21st century with his Cycloid-O-Matic.
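For context, the curve patterns a spirograph traces are hypotrochoids, generated by two well-known parametric equations. The Python sketch below is purely illustrative; it is not from Whyte's project, and the radii R, r and pen offset d are arbitrary values.

```python
import math

def hypotrochoid(R, r, d, steps=360):
    """Points traced by a pen at distance d from the center of a circle
    of radius r rolling inside a fixed circle of radius R."""
    points = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        x = (R - r) * math.cos(t) + d * math.cos((R - r) / r * t)
        y = (R - r) * math.sin(t) - d * math.sin((R - r) / r * t)
        points.append((x, y))
    return points

curve = hypotrochoid(R=5, r=3, d=5)
print(curve[0])  # (7.0, 0.0)
```

Converting points like these into motion commands is what lets a controller such as GRBL drive the pen and turntable to draw the pattern.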
His project rotates paper using a lazy Susan arrangement, powered by a cheap-o 28BYJ-48 stepper motor. A pen is held in a clothespin overhead via a linkage system, attached to two stepper motors that vary its extension. This lets the pen move in and out/left and right according to the motion of the motors, while the paper itself spins to produce beautiful patterns.
Stepper motion is controlled by an Arduino Uno and GRBL shield. While it typically uses a computer interface, an Arduino-only version is in the works, though there’s a bit of an issue as of this writing in varying the motor speed.
One notable sub-hack here involves modifying the 28BYJ-48 motors for use with the GRBL shield, as the coils are normally center-tapped and would require a different type of driver. Conversion, however, is simply a matter of cutting a trace on the circuit board, and with proper measurement you can even do this through the housing without seeing the circuit board! | https://medium.com/@JeremySCook/cycloid-o-matic-draws-fun-shapes-with-modded-steppers-3cdf793dbb6a | ['Jeremy S. Cook'] | 2019-06-17 17:29:36.787000+00:00 | ['Arduino', 'Spirograph', 'Technology', 'Makers'] |
1,518 | SAP BW vs. Snowflake: End-User Experience | A year after the platform migration, do the business users share BI’s excitement?
Photo by Scott Graham on Unsplash
A year after my company migrated from on-premise SAP BW to Snowflake Data Cloud, I wrote an article comparing the two Data Warehouse (DW) platforms and highlighting the key factors which made the project a success. Given such a niche topic, I never expected the amount of feedback and healthy discussion that the article continues to generate.
However, one crucial detail was missing from that conversation: the end-user experience.
In a DW migration, winning over the business users is just as important as getting the model right. Although our migration was a massive success from a cost and performance perspective, I couldn’t yet speak for the users. At that point, we had already migrated the main reports, but the users were still getting their footing and becoming acquainted with the new platform’s various features.
Now that another year has passed, I would like to extend the analysis beyond the Business Intelligence (BI) team’s perspective and focus on our analysts and business users. What did the transformation mean for them, and are they happier and more productive as a result?
Conclusion
Traditionally, this comes at the end. But anyone who has been following Snowflake already knows that their platform empowers companies to transform their BI in ways they never imagined. Our experience was no different.
The users’ transition to the new platform was rapid, and the feedback was, without exaggeration, 100% positive.
Independent surveys have shown similar findings in other Snowflake customers:
Snowflake customer feedback
So for me to take such a typical outcome and attempt to use it as a conclusion would have been anticlimactic indeed. Especially when the question on everyone’s mind is:
How does Snowflake manage to achieve such results?
That’s what I’d like to explore.
Data Cloud > Cloud Data
Snowflake is not just another cloud-based DW. It’s a platform built to “mobilize data” using scalable architecture, engineered for the cloud. It uses four key pillars to achieve this goal:
Single source of truth (SSoT), many workloads
Unlimited performance and scale
Secure and governed access
Zero-maintenance as a service
To this, I would also add a fifth element which I think is overlooked when it comes to business-user adoption:
Ease of access (Through a web-based portal, using standard SQL)
With these principles in mind, my team saw an opportunity to do more than replicate existing reports in the cloud. We realized that Snowflake had the potential to change the relationship our users had to their data.
Not only that, it meant that the BI team would no longer have to be the bottleneck to getting that data to the users exactly when they need it. Instead, the BI team could become the facilitator of a company-wide analytics strategy that empowered all our business users through a self-service model.
Sounds great, but let’s not lose sight of what we’re up against: a mature, powerful, and robust on-premise SAP HANA database with hundreds of existing reports and user-configured views.
Senior management had not mandated or even authorized the migration. As the users were under no obligation to switch platforms, we knew that we’d have to win their support if we wanted our plan to succeed. That left us with only one viable strategy:
Let data quality and user experience drive the conversion.
Let’s look at how Snowflake made this possible over a pre-existing SAP BW implementation.
Same Reports, Only Better
After having migrated the model, our first objective was to provide the core set of reports in a format that the users were already familiar with. In BW, this meant that the bulk of the analysis was done through tabular (and interactive) WEBI reports, while the graphical (but static) reports were distributed through Business Objects. Content creation in both of these tools came with a steep learning curve and licensing constraints, making it impractical for business users.
No longer confined to the SAP ecosystem, we chose to do our core reporting in Tableau — a Gartner Magic Quadrant leader in BI analytics, which also offers a SaaS cloud-based option that plays well with Snowflake. Tableau combines tabular and graphical reporting in an intuitive interface, capable of connecting to many different data sources.
Tableau not only gave us the possibility to migrate existing reports with all the interactive and visual features that our users had come to expect, but it allowed the users to create their own dashboards, either from scratch or using pre-defined templates.
With reporting taken care of, what else could the BI team do to empower our various business teams?
Business Users in a Data Warehouse?
Beyond the mere learning curve of BW reporting, there was a constant risk of runaway queries bringing down the performance of the entire system. Sure, there is always the QA environment, but its data is perpetually out of sync with PROD and offers little value.
Enter Snowflake with virtual databases that can read or write to one another without copying the physical data. Couple that with the ability to adjust compute resources independently from the storage, and you give users real freedom.
Unlike with our BW system, there are absolutely zero risks in giving users read access to the DW models or even raw data. Since their queries run in a separate and adequately sized reporting warehouse, they never interfere with the one used by BI.
Hold on, how are business users even able to access a data warehouse?
Remember, Snowflake uses native ANSI-standard SQL and can be accessed through a web interface with no additional software to install. Unlike BW, which takes years to master, a day’s worth of prep can allow someone to start writing basic SQL queries and extracting insights from the underlying data.
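To give a flavor of what that day's prep buys, the kind of ad-hoc aggregation a business user might write is plain ANSI SQL. The snippet below runs a query of that shape against an in-memory SQLite database purely for illustration; the sales table and its columns are made up, and on Snowflake the same SQL would be submitted through the web interface rather than through Python.

```python
import sqlite3

# Stand-in for a reporting table; the table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 50.0)],
)

# A basic ANSI-SQL aggregation of the kind a business user might write.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('EMEA', 200.0), ('APAC', 50.0)]
```

The point is not the tooling but the syntax: a query like this is standard SQL, readable after a day of practice, with no proprietary BW concepts in the way.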
The recent release of row-based security puts Snowflake’s security features on par with those of SAP. However, Snowflake can do something that SAP users have never even dreamed of: give individual teams their own database.
What would the business users want with their own database? Quite a lot, it turns out! First, let’s review all the new abilities that our users gained by migrating from BW to Snowflake, and in the next section, we will look at what they were able to achieve as a result.
Snowflake gave our users:
Ability to access an SSoT across the entire business
Ability to access data beyond the reporting layer
Ability to extend existing reports and create new ones (Tableau)
Ability to manage their own warehouses for more complex tasks
Ability to choose their budget and size their warehouse to their needs
Inability to bring down the system or even penalize the performance of other users and processes
Maslow’s Hierarchy of Self-Service
Ok, maybe Maslow had individuals in mind when he introduced the concept, but Snowflake helped us see how it applies to analytics. By continuously empowering our business users to get more out of existing data, we realized that self-service isn’t a singular concept — it has layers.
The layers of organizational self-service
Each department has distinct and creative ways of using data. Instead of providing a one-size-fits-all solution, the BI team should be looking for ways to give these teams as much autonomy as they were willing to take on.
Exploration
For some of our users, the core reports completely fulfilled their everyday needs. However, the new capabilities that the Data Cloud gave us, meant that we could do more for the users and departments who wanted to go beyond the basics — to ascend the self-service pyramid.
Insight
One of the most requested features was direct access to master data and the ability to track ETL logs. Teams who previously never went beyond front-end reporting saw this as the gateway into the SQL world. Suddenly, every master data table became a data source for business consumption. Users could even query metadata (e.g., last refresh date, count, and distinct values). Requests like this used to sit for weeks on our BW backlog — needing dashboards to be built in order to expose simple tables.
The next rung in the self-service pyramid is business users’ ability to load and maintain department-specific data (e.g., groupings, hierarchies, classifications) and join it to existing reporting models. Previously, BI would have needed to build the pipeline, the data model, and then accommodate every change which arose over time. Now that users could operate their own warehouses with shared and secure access to productive data, they are free to perform ad-hoc analyses without depending on BI.
Actualization
Finally, the self-actualization of analytic needs: ad-hoc modeling and data science. Why should technology be a barrier for teams with the technical knowledge and desire to run these analyses?
Whether it’s doing department-specific modeling or running predictive analytics on business metrics, Snowflake’s ease of access makes it possible for teams to move at the pace of their own ideas — without depending on BI.
A department’s ability to manage its own warehouse and take on the associated costs gives our most technical users complete autonomy. Whether the data resides within the warehouse of a specific team or the central DW managed by BI, it can instantly be made available to the entire organization thanks to the Data Cloud.
We Could Have Done Better
Despite having achieved our goal of getting the buy-in of our business users, there were indeed things that we wished we knew in advance.
Before we understood the self-service hierarchy and how the business would eventually come to use Snowflake, it wasn't easy to foresee the importance of governance and automation — both at the database and reporting levels.
Database Governance
Once our business users began to derive value from having database access, knowledge of the database landscape could no longer reside entirely within the BI team. Strict database naming conventions become essential. Unless the users can navigate the DW, they do not have true self-service. Since this wasn’t even a possibility with BW, it took us a while to realize its importance (and find time for the necessary clean up).
Reporting Governance
Beyond the staging layer, as data models get more complex and business logic is introduced, a data dictionary becomes essential. An SSoT is only as good as the user’s ability to make sense of it.
Since Snowflake’s Data Cloud makes sharing data within an organization trivially easy, we could integrate more sources into our reporting model and do so with a much faster release cycle. Unfortunately, this meant that our reporting layer was evolving faster than users' ability to keep up with our memos — a good problem to have, but a problem nonetheless.
Eventually, it became clear that maintaining an up-to-date data dictionary was crucial to the success of our self-service strategy. Although we’ve begun to address this through a manual effort, it’s becoming clear that we will soon have to rely on an automated solution.
We’ve begun looking at tools such as SqlDBM, which comes with an integrated data dictionary that syncs with data model changes. Because SqlDBM also supports visual modeling, enforces naming conventions, and helps visualize relationships between data entities, it would provide tremendous value to BI, as well as business users at every stage along the self-service hierarchy.
Continuous Integration and Deployment
Finally, even the best-made model can suffer from poor data quality. SAP systems do a great job of enforcing data integrity checks (or failing the load when they are violated). With an open ELT approach, additional safeguards must be built in before the data can be trusted in the reporting layer.
Once we understood this, we incorporated duplicate value and threshold checks into our data model. But if we want to scale, we’ll need to implement a CI/CD solution like DBT, which will allow us to automate data integrity checks and ensure a more stable and reliable delivery cycle.
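As a rough illustration of what such safeguards can look like before a tool like DBT takes over, here is a minimal sketch of a duplicate-value check and a row-count threshold check. The data, column names, and logic are hypothetical, not our production code.

```python
def duplicate_keys(rows, key):
    """Return the set of key values that appear more than once."""
    seen, dupes = set(), set()
    for row in rows:
        value = row[key]
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return dupes

def within_threshold(rows, min_rows):
    """Fail-fast check that a load produced at least min_rows records."""
    return len(rows) >= min_rows

staged = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": 7.5},
    {"order_id": 2, "amount": 7.5},  # accidental duplicate from a re-run
]

print(duplicate_keys(staged, "order_id"))    # {2}
print(within_threshold(staged, min_rows=1))  # True
```

A CI/CD tool expresses the same ideas declaratively (uniqueness and volume tests that run on every deployment), which is why automating them is the natural next step.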
Conclusion. (For Real This Time)
In the first year of being introduced to Snowflake, the BI team managed to migrate the entire data model and completely transform our data warehousing strategy.
In the second year, the business users caught on to Snowflake’s potential and wholly abandoned the on-premise BW/HANA system in favor of what the Data Cloud could do for their data needs.
As we enter our third year on Snowflake, we continue to push the frontiers of what the platform can deliver. Using the Data Cloud, BI and business-matter experts from across the business are collaborating on machine learning initiatives and automated apps.
Data scientists, programmers, and business analysts can work across organizational lines along with BI on a single shared platform and reinforce each other’s strengths. For all its boasts of performance and efficiency, an on-premise system like BW, with its traditional limitations, could never have made this collaboration possible.
Admittedly, this is still not much of a conclusion — it’s a promise of bigger things to come on our Snowflake journey. | https://medium.com/sqldbm/sap-bw-vs-snowflake-end-user-experience-d938f9d48fe9 | ['Serge Gershkovich'] | 2020-12-07 11:30:05.670000+00:00 | ['User Experience', 'Sap', 'Tableau', 'Technology', 'Snowflake'] |
1,519 | 7artisans 35mm f1.2 Review: Sonnar on a budget | Overview
7artisans has made quite the name for itself as a manufacturer of third party manual focus glass. While its releases have generally been hit or miss, we have seen some very interesting lenses from them, like the 35mm F2 for M-Mount, the upcoming 28mm f1.4 (also M-Mount), and this lens.
Let’s get something straight right off the bat: this lens won’t win any awards for sharpness. Or CA. Or contrast, or almost any other technical measure of a lens’s optics for that matter.
What this lens is, is a character lens. Featuring an optical design based on that of a Sonnar, it delivers rendering and bokeh with characteristics not unlike those of a Jupiter-3 or Zeiss ZM Sonnar (although I wouldn’t say it performs up to the standards of the latter).
Build Quality
While this lens is cheap for its specifications, it feels solid and dense in the hand. A full-metal build and a smooth focus helicoid and aperture ring make using this lens an enjoyable experience. It balances well on the Fujifilm X-E3 I’m using, which was the body used for all the sample photos in this review. | https://medium.com/@aloysiuschowhc/7artisans-35mm-f1-2-review-sonnar-on-a-budget-abc10717546b | ['Aloysius Chow'] | 2020-05-06 04:49:18.789000+00:00 | ['Review', 'Photography', 'Gear', 'Technology', 'Lenses'] |
1,520 | 5 Reasons why Linux has a low Os market share | Despite the Linux community’s huge fan base, Linux occupies very little of the OS market. It holds only a 1.4% share of the overall desktop operating system market in 2020. However, if you consider the overall market share of operating systems, then Linux is everywhere and plays a significant role. Linux is an operating system that we use daily in our day-to-day lives without taking note of it. Linux may be running on your social app servers, household IoT devices, your smart Android mobile phones, smartwatches, etc.
Though there are many geek Linux users who don’t accept Android as Linux, at the end of the day it was built on the Linux kernel, technically meaning it is Linux.
Linux logo
Though Linux occupies a very small market share, it has 600+ active distributions.
The probable reason for this low market share might be | https://medium.com/@hari-kiran/5-reasons-why-linux-has-a-low-os-market-share-a521a2ca5370 | ['The Cyber Monster'] | 2020-12-07 18:52:45.727000+00:00 | ['Mac', 'Technology', 'Blog', 'Windows 10', 'Linux'] |
1,521 | Your Kit Lens isn’t as bad as you Think | Despite what you may have heard, kit lenses can be amazing tools.
They are amazing for beginners, allowing them to be slowly eased into the world of DSLR photography.

Kit lenses typically come in the box with your first ‘serious’ camera (i.e., a DSLR); they are cheap to produce, but they produce decent images.

This gives the beginner photographer an ideal platform to get started with before they progress onto more challenging lenses.
This is because:
It covers a wide 18–55mm range
They are great wide lenses for beginners, allowing them to capture large buildings and landscapes with ease
While the aperture is much narrower than, say, a 50mm lens, it is still enough for beginners to get some bokeh in their backgrounds
However, it is important that you understand how to get the most out of your kit lens and this article will give you some actionable tips to get you started on your way to taking some amazing photos!
A kit lens covers a wide 18–55mm focal length range, which is more than enough to get you started | https://medium.com/photo-dojo/your-kit-lens-isnt-as-bad-as-you-think-bc4b6d9924a2 | ['Joel Oughton'] | 2020-12-30 08:18:49.251000+00:00 | ['Creativity', 'Technology', 'Art', 'Photography', 'Design'] |
1,522 | KZ AS10 Review | INTRODUCTION/DISCLAIMER:
The KZ AS10 is an in-ear monitor with five balanced armature drivers per side. KZ is the brand that started my Chi-Fi journey years ago with the ATE. Since then I have owned the KZ ED9, which I liked, and the ES4, which I disliked enough to return. I also own the C10 from KZ’s sister company, CCA, which is my go-to recommendation for a sub-$50 hybrid IEM.
The AS10 is the most expensive KZ model I have evaluated so far, retailing for $59.99 on Amazon at the time of this review. The AS10 was provided to me by Linsoul Audio in exchange for a fair and objective review.
SOURCES
I have used the KZ AS10 with the following sources:
Windows 10 PC > JDS Labs The Element > KZ AS10
Pixel 3 > Fiio BTR1K (Bluetooth Apt-X) > KZ AS10
Windows 10 PC > Fiio BTR1K (USB DAC) > KZ AS10
Pixel 3 > Apple USB-C to 3.5mm dongle > KZ AS10
I have tested these headphones with local FLAC and Spotify Premium.
PACKAGING AND ACCESSORIES
The KZ AS10 comes in a black rectangular cardboard box marked with the KZ logo on the front panel. Stickers on the bottom panel indicate the mic and color options as well as the contact information for the manufacturer.
The box has a flip cover which opens to the left, revealing a foam inlay containing the earpieces and a metal plaque. Behind this inlay are two translucent white plastic bags containing the AS10’s removable cable, 3 sets of KZ Starline eartips (S, M, L), a user manual, a QC pass chit, and a warranty card. The AS10 does not come with a carry bag or case.
BUILD QUALITY / DESIGN
The AS10 earpieces have piano black plastic housings with transparent faceplates. The housings are on the larger side with deep nozzles. The AS10’s KZ-branded circuit boards are visible behind the transparent faceplates. Although I love this look, there are many who do not. The model name, “Left/Right,” and “10 Balanced Armature” are printed in silver on the top face of the black plastic housing. “L/R” are also identified on the transparent faceplate above the cable connection.
Each earpiece has a tiny circular vent near the top of the inner face of the housing. The AS10 is an all-BA design, so driver flex is not a concern.
The nozzle does not have a traditional lip for securing eartips, and instead has 3 small protrusions along the edge of the nozzle. This worked just as well as a lip in my experience.
The AS10 has a copper-colored braided 2-pin cable with an L-shaped 3.5mm jack. The KZ logo is printed on the jack housing. The cable has pre-formed plastic ear-guides and “L/R” markings on the 2-pin housings. There is no chin-adjustment choker, and the Y-split is around halfway-down the cable length, roughly 2 feet from the bottom of the 2-pin connections. The cable is not as tangle-prone as the cable included with the CCA-C10, but is still problematic in this regard. Microphonics are minimal.
COMFORT / FIT / ISOLATION
The KZ AS10 is intended to be worn cable-up only. The wide housings and relatively deep insertion depth make the AS10 tolerable at best from a comfort perspective.
Noise isolation is above average relative to dynamic driver or hybrid designs, but not as good as the Tenhz T5, a sealed all-BA design.
The AS10 accepts a wide variety of eartips. The relatively deep insertion depth makes getting a good seal easy. I used the small silicone eartips from the Fiio F1 for my listening.
SOUND
The KZ AS10 has a warm, mildly-V shaped tuning.
The AS10 emphasizes mid-bass slam rather than sub-bass rumble. Sub-bass is present and well-extended but not visceral. Bass articulation is quick and precise. Bass texture is dry and clinical. The mid-bass hump bleeds into the lower mids, thickening deep and growled male vocals and causing distorted electric rhythm guitars to come off as boomy.
The lower mids are slightly recessed and a tad warm. Both male and female vocals are clear and full-bodied. The upper midrange could use a touch more presence. Both male and female vocals, while natural-sounding, come across a bit flat.
The treble, while smooth and inoffensive, has a plastic-sounding timbre. Resolution is adequate for the price, but the AS10 is lacking in sparkle.
Imaging and instrument separation are very good. Soundstage is slightly larger than average for the price point and compares well with more expensive IEMs.
MEASUREMENTS
My measurements were conducted with a Dayton iMM-6 microphone using a vinyl tubing coupler and a calibrated USB sound interface with a resonance point at 8k. The measurements are presented with 1/24th smoothing and without compensation. Measurements above 10k are not reliable.
AMPLIFICATION REQUIREMENTS AND SOURCE PAIRING
With a sensitivity of 105dB and an impedance of 14ohms, the AS10 can be easily driven to adequate listening volumes by a smartphone. I did not notice hiss with any of my sources.
COMPARISONS
KZ AS10 [$62] vs Simgot MT3 [$66]
The Simgot MT3 has slightly more prominent and extended sub-bass. The MT3 has slightly more textured bass. The AS10’s bass is better articulated, with more precise attack and decay. The MT3’s mid-bass hump rolls off earlier and does not bloat the lower midrange as much.
Male vocals are more prominent on the AS10. The MT3’s lower midrange does not exhibit the boominess that can be heard on the AS10. The MT3 has a livelier but more aggressive upper midrange, which makes vocals sound more exciting at the cost of sibilance. The timbre of the MT3 is thinner than the AS10’s. Distorted electric guitars can be too bright on the MT3.
The MT3’s lower treble rolls off earlier than the AS10’s, but is harsher, splashier, and grainier. The MT3 has more air and sparkle. The AS10 has more realistic transients.
The AS10 has better instrument separation, a larger soundstage, and more precise imaging. The MT3 is slightly harder to drive. The MT3 is more comfortable thanks to its smaller housing. The MT3 comes with a wider variety of eartips, a nicer cable, and a mesh carry bag.
CLOSING WORDS
The AS10 errs on the side of caution, presenting a safe tuning that is unlikely to offend. Build quality and technicalities are very good for the price point. | https://medium.com/bedrock-reviews/kz-as10-review-765c2777d683 | [] | 2019-06-01 14:05:29.876000+00:00 | ['Review', 'Music', 'Headphones', 'Technology', 'Audio'] |
1,523 | Samsung Announces Massive 110-Inch MicroLED TV | It has a 99.99% screen-to-body ratio and 5.1 channel sound system without an external speaker.
By Matthew Humphries
LCD, OLED, and Quantum Dots, stand aside: Samsung is rolling out the next generation of display technology for its newest television with news of a 110-inch 4K HDR MicroLED TV arriving next year.
Samsung has been working with MicroLED for a while now, and last year launched its modular luxury TV, The Wall, which can be scaled up to 292 inches. But the company is now using the tech for a much more conventional television—albeit a massive one.
“As consumers rely on their televisions for more functions than ever, we are incredibly excited to bring the 110-inch MicroLED to the commercial market,” said Jonghee Han, President of Visual Display Business at Samsung Electronics. “Samsung MicroLED is going to redefine what premium at-home experiences mean for consumers around the world.” | https://medium.com/pcmag-access/samsung-announces-massive-110-inch-microled-tv-a1a922104428 | [] | 2020-12-10 15:55:19.549000+00:00 | ['Samsung', 'Gadgets', 'Micro Led', 'Television', 'Technology'] |
1,524 | How To Remotely View Security Cameras Using The Internet | Many people want to buy a security camera to monitor their business via mobile, but they don’t know how to remotely view security cameras using the internet. There are many reasons why you’d want remote access to your security camera system. For example, it gives you the ability to keep an eye on your business directly from your laptop or smartphone when you are away. Also, if you want to know what’s happening with a loved one when you aren’t around, you simply set up web access to your security cameras and see what’s going on in your house while miles away.
You physically connect your camera to a local computer (we’ll call it the server) and install the app on both the server machine and the PC (the client) from which you’re going to access the camera remotely. Some wireless security camera systems come with remote viewing options included.
Launch the USB over Internet app on the server computer and open the Local USB Devices tab. Find the camera in the device list and click “Share” next to it.

On the client computer, start the software and go to the tab named Remote USB Devices. Locate the security camera there and click “Connect” next to the device name.

From now on, the camera will be displayed in the Device Manager of your remote computer as if it were plugged directly into that PC. So now you can use any specialized software to remotely control the security camera as if it were your local device.
A different way
Setting up for remote access
Here we’ll go over how to set up your IP security cameras for remote access.
Using a Cat-5 network cable, connect your security camera’s DVR (digital video recorder) to the network router.
Open your browser and type the local IP address of your router in the browser’s address bar to log in. You can ask your provider for the exact IP address of your router, but an example will look like this — http://192.168.1.1.

After you have logged into your router’s configuration panel, you can then create a dedicated or static IP address for your camera within the local area network (LAN) settings of your router. Having a dedicated IP for the camera will allow you to access it directly without causing any conflicts with other devices in the network. You should also take note of the subnet mask number, which is 255.255.255.0 in most cases. You’ll need this information later.
What next?
Next, you need to set up port forwarding on your router for the camera’s IP address. This will allow you to access your camera from a different device. Type the IP address of your camera and set it to forward port 80. If you need to forward more than one port, just repeat the process for all the required ports of your camera.

After port forwarding is complete, power on the camera to access its network menu. Go to the setup settings for networking to choose a static IP address. Enter the IP address and subnet mask number assigned to your camera. Double-check that the ports on the camera match those you forwarded through the router, then save the settings.

From outside your local area network, enter the external IP address you’re using into a web browser. To check your external IP address, you can go to whatismyip.com and copy your listed IP address there. You’ll then be prompted to install an ActiveX control for your camera’s web server. After this, you should be able to access your camera online.
Another way : how to remotely view security cameras using the internet
Step 1: Register for a FlexiHub account. Then choose the subscription plan that’s best for you and start a free trial.
Step 2: Start by physically connecting your security camera to your computer (the server). Then install the FlexiHub software on both the server and the remote computer (the client) that will be accessing the camera remotely.
Step 3: To share the security camera over the internet, simply start the software on both machines using the same login credentials.
Step 4: Click ‘Connect’ on the remote computer to access a security camera.
What to Do If You Can’t Remotely View Security Cameras Using the Internet via Port Forwarding
Make sure your cameras are connected to the network.
Ensure all the ports in the network configuration are mapped to the internet.
Open the firewall in the router to allow internet access to the camera.
If your computer has a firewall, proxy, ad-blocking software, anti-virus software, or the like, try temporarily disabling them and connecting to the server again.
Check your web server settings and ensure that your user account has permission to access the IP cameras.
Make sure the cameras are compatible with the web browser you’re using for remote viewing.
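When working down this checklist, it can help to verify port reachability directly rather than guessing. Below is a minimal sketch using Python’s standard library; the IP address and port are placeholders, so substitute your camera’s static IP and forwarded port:

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return sock.connect_ex((host, port)) == 0

# Placeholder values: use your camera's static IP and forwarded port
# print(port_is_open("192.168.1.64", 80))
```

If the check fails from inside the LAN, the camera or its port settings are the likely culprit; if it succeeds locally but fails from outside, revisit the router’s port forwarding and firewall rules.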
If you have more questions about setting up your IP camera for remote viewing or watching CCTV cameras from anywhere over the internet, feel free to leave a comment below and we would love to help. We hope this post has given you a complete idea of how to remotely view security cameras using the internet.
 | https://medium.com/@getlockers1/how-to-remotely-view-security-cameras-using-the-internet-264d58d97768 | ['Smart Locks'] | 2020-07-20 17:58:30.573000+00:00 | ['Safety', 'Remote Working', 'Security', 'Security Camera', 'Technology'] |
1,525 | Three Analytics Tools and Skills You Can Start Learning Right Now | Data Science / Data Analytics / Education
Three Analytics Tools and Skills You Can Start Learning Right Now
Whether you are in marketing analytics, data analytics, business intelligence, or data science, we as analysts usually have a handful of tools and skills that we all need to be great analysts. When I first started learning analytics, I honestly had no clue where to start. There are so many different skills and tools available that it can be a little overwhelming if you are trying to learn on your own.
After getting into the professional world and seeing the daily flow of analytics as well as talking with many different data professionals, I think I have a good idea as to a starting point for tools and skills. This is not a perfect list or the “right answer”, as every organization is a little different in terms of the skills and tools they use. Without further stalling, let’s get into it!
Disclaimer: I am not currently a full-time data professional, and will not be until May 2021. I am simply relaying my experiences from my summer internships, independent consulting, and personal practice. This is not meant to tell you what exactly you need to know; rather, this is to give you some practical advice from experiences I’ve had. For advanced technical information, please check out Towards Data Science. Any tools or products that are mentioned are not sponsored, they are simply my favorite products I am personally referring. If you are currently learning analytics on your own and need a good starting point, keep reading!
3 Analytical Tools to Help You Start Learning Analytics
There are so many different tools that analysts use every day. Most analysts have to be able to learn efficiently, as there are new tools being developed every month. That being said, there are a few tools that most analysts I have talked to use on a daily basis.
1. Structured Query Language (SQL)
Structured Query Language, or “SQL”, is probably the most common tool that analysts use. I personally use it in marketing analytics, and know that my data analytics and data science friends use it daily as well! While NoSQL and non-relational databases are on the rise, relational databases are still widely used and more common, so SQL is probably your best bet if you are just starting out learning analytics.
While SQL is known for being “old”, it is still very powerful and can do a lot of great things. Part of being an analyst or data scientist means being able to take care of your data and organize it for analytical use. Besides warehousing and storing data, SQL can do a lot of other great things:
Create ETL processes that feed into tools for statistical analysis, such as Python, R, or SAS
Create stored procedures to semi-automate basic maintenance tasks, such as updates
Do basic operational functions, such as multiplication or subtraction, which allows for aggregations
Filter large amounts of data to make analysis easier
While all of these applications are useful, at the end of the day, the true power of SQL comes from modifying the data to be used in analytics. If you have 100 rows of data to analyze, you could probably just clean it in Excel and push it to a tool for statistical analysis. On the other hand, if you have 700,000 rows of data, it is much more efficient to put that data in a database, and write a few queries that clean it, all in less than a minute. I would highly recommend learning SQL because not only does it give you control of your data, it allows you to think through data problems with a different perspective.
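To make this concrete, here is a toy sketch of the filter-and-aggregate idea using Python’s built-in sqlite3 module. The table, column names, and numbers are invented purely for illustration:

```python
import sqlite3

# An in-memory database stands in for a real warehouse
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("East", 120.0), ("East", 90.0), ("West", 200.0), ("West", 50.0)],
)

# Filter and aggregate in SQL rather than in application code
query = """
    SELECT region, COUNT(*) AS n, SUM(amount) AS total
    FROM sales
    WHERE amount > 60
    GROUP BY region
    ORDER BY total DESC
"""
rows = list(conn.execute(query))
conn.close()
print(rows)  # [('East', 2, 210.0), ('West', 1, 200.0)]
```

The point is that the filtering and aggregation happen inside the database, so only the small result set reaches your analysis code.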
One of my favorite SQL courses that I took when I started learning data analytics was “SQL, by 365 Data Science”. I would recommend checking it out if you are starting to learn SQL!
2. Python / R
Now I know what you’re thinking, “Learning programming is hard, and I’ve tried so many times and just can’t get anything to stick”. I’d like to say a couple things on this topic. First, when you are learning Python or R for analytical purposes, the goal should not be to become a full Python web developer or R developer in 3 months. When starting out, you are not learning how to be a developer, you are learning how to be an analyst. I say this because there are a lot of courses out there for Python and R that teach you way more than you need to know when you are starting out.
“When you first start learning analytics, the goal should not be to become a full Python web developer or R developer in 3 months. When starting out, you are not learning how to be a developer, you are learning how to be an analyst.”
My advice for learning these is to spend 40% of your time learning the syntax and 60% of your time in projects applying it. If you just read a Python book or do the course, you won’t know Python from a practical standpoint, and it will be difficult to implement Python on real projects. The goal is not to memorize the syntax and be able to write a full application in your head. The goal should be to understand how to problem solve using Python or R. Now all that being said, here are some great applications for Python or R:
Run basic statistical analysis with minimal code (averages, standard deviations, etc.)
Filter and clean large data sets (deleting duplicates, fixing misspellings, etc.)
Develop applications that automate basic analysis
Run different types of models
Split large files into smaller ones (More useful than you think)
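As a small taste of the first two bullets, here is a self-contained sketch using only the standard library (in practice you would likely reach for pandas); the records and the misspelling fix are made up for the example:

```python
from statistics import mean, stdev

# Toy data containing one misspelling and one exact duplicate
records = [
    {"city": "Boston", "sales": 120.0},
    {"city": "Bostn", "sales": 95.0},    # misspelled
    {"city": "Boston", "sales": 120.0},  # duplicate of the first row
    {"city": "Denver", "sales": 150.0},
]

# Clean: fix a known misspelling, then drop exact duplicates (order kept)
fixes = {"Bostn": "Boston"}
for r in records:
    r["city"] = fixes.get(r["city"], r["city"])

seen, cleaned = set(), []
for r in records:
    key = (r["city"], r["sales"])
    if key not in seen:
        seen.add(key)
        cleaned.append(r)

# Analyze: basic statistics with minimal code
values = [r["sales"] for r in cleaned]
print(len(cleaned), round(mean(values), 2), round(stdev(values), 2))
```

The same few lines scale to far larger lists, which is the practical payoff of scripting your cleaning instead of doing it by hand in a spreadsheet.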
Python and R come up quite a bit in job postings, and they already have large communities of data professionals. There are other options like Julia, SAS, and Go, but I can safely say Python and R have the largest communities and highest demand in the industry.
My personal training is in Python, and if you want to learn Python, before you pick a course I would recommend checking out W3Schools: Python. This gives you a good overview of Python basics, and any training material you use will make more sense once you get the basics down!
3. Tableau (Or Any Data Visualization Tool)
Tableau is one of the most popular data visualization tools on the market right now. While it is pretty expensive, it is personally my visualization tool of choice. If you are a college student, you can get a 1 year subscription for free! If you are in a position where you cannot get a hold of a Tableau license, I would also look into Microsoft Power BI. The free version of Power BI works great, and while it is not nearly as powerful as Tableau, it is still a great tool to have in your toolbox. Here are some things Tableau can do:
Create interactive dashboards with a lot of different functionalities
Create calculated fields to do specific calculations and aggregations of metrics
Do quick table calculations that run more advanced calculations with one click (percent of total, percent difference, etc.)
Automatically update visualizations if connected to a database
Again, Tableau is my personal preference, but any data visualization tool works. If you haven’t noticed already, with these three tools, you can store/maintain data (SQL), transform/analyze data (Python / R), and visualize it for the end user (Tableau). This basic process, also called ETL (Extract, Transform, Load), is a process that many analysts use to get raw data analyzed and sent to the end user.
I personally learned Tableau at my university, but Tableau has their own training material that I think is really strong, which can be found here. I would definitely recommend checking it out!
3 Skills Every Analyst Should Have
While it’s great to have a set of tools you can use, the tool is only as good as the individual using said tools. There are a couple broad skill sets that you can practice, even while learning the above tools, that will help you as an analyst or data scientist!
1. Problem Solving
We all have some sort of problem solving skills and use them every day. Life is full of problems that need to be solved. In my opinion, the most difficult part of problem solving is avoiding “paralysis by analysis”. Paralysis by analysis is when you get stuck trying to find the perfect solution to a problem. It is easy to fall into this “paralysis” because the more broad and complex a problem is, the more paths there are to a solution.
“In my opinion, the most difficult part of problem solving is avoiding “paralysis by analysis”. Paralysis by analysis is when you get stuck trying to find the perfect solution to a problem.”
This is the aspect of problem solving that most analysts are strong in because typically, analysts are given a broad problem that needs solving and receive little direction. It might be something like “We need a subscriber report in a month”, or “I keep getting an error message when I open Tableau, can you take a look at it?”. In order to train your brain to solve problems, I would suggest starting by doing small analytics projects and using this problem solving process:
Step 1: Identify your objective — What specifically is the problem you are trying to solve?
Step 2: Identify your decision variables — What are the things you can change and control?
Step 3: Identify your constraints — What are the things you have to account for in order to reach a solution?
If interested, I can write a separate article on this topic. No matter what skills you use, the ability to take a large problem and break it down into these 3 steps is invaluable. If you can do this, it doesn’t matter what tools you use, because you will be providing value to your organization or clients. You might have your own problem solving process, and by no means do you have to follow the process above. I personally have found that this process is effective and is used by many analysts and data scientists.
2. Adaptability
Being able to adapt is a very important skill for any analyst. Let me give you a scenario. You work hard and spend time learning how to use SQL, Python, and Tableau. You apply for a job as a data analyst at a big company and get an offer! You accept, and start your new position next week. Your first day, you are so excited to put your new tools to work. You walk into your first strategy meeting, only to find that this week, the team is implementing a new tool, Alteryx. Now, you can take one of two approaches in this scenario. With the first, you refuse to learn the new tool, stumble through your position because you don’t know how to use it, and probably end up quitting out of frustration. With the second, you do some independent research, spend some time out of the office messing around with Alteryx, find it isn’t that difficult to use, and are able to start using it the next week. Being flexible and adaptable is a huge skill to have because there are new analytical tools coming out all the time. Being able to learn these new tools efficiently and adapt to unusual situations will lead you down a path of success in the world of data.
3. Communication
When I talk to people about going into analytics, they always seem surprised when I tell them how important communication is, even in analytics. Analytics and data science is a complex field, and as an analyst, your job is to make sense of what is complicated and discover new things. In my opinion, this thrill is what makes this field so interesting. The discoveries you make, however, obviously need to be shared with the end user. It takes solid communication skills to articulate complex ideas and make them digestible for everyone else who is not an analyst, or more frequently, people who are not skilled in understanding numbers and technology. Whenever you send out reports and discoveries by email, or present them at meetings, you will get loads of questions, and sometimes they are not the most fun questions to answer. “I’m pretty sure my team did more sales than that, can you recheck your numbers?”, or “You said your “model” is projecting my team to have a decrease in sales this year, why is that?”. You will get many questions like these, and being able to articulate why a number is what it is will show your professionalism and skill as an analyst.
Conclusion
While this is not the perfect roadmap or everything you need to know, I hope this is a good starting point for anyone trying to get into analytics or data science! You will get frustrated and put in a lot of hours to learn these tools and skills, but I promise you, this field is such a rewarding field. High salaries, job security, and an intellectually stimulating job will all make it worth it! Be well, and stay safe! | https://towardsdatascience.com/3-analytics-tools-and-skills-you-can-start-learning-now-b98d89adeaf0 | ['Brenden Noblitt'] | 2020-12-11 20:39:32.941000+00:00 | ['Technology', 'Education', 'Data Analytics', 'Data', 'Data Science'] |
1,526 | Space exploration — An industry that values design | Photo by Pixabay from Pexels
A design, in its most basic sense, is the offering of a solution to a problem. Its purpose is to reduce complications in the most practical and efficient manner, even in the most vulnerable and dangerous of situations. But finding the inspiration for a design is only the first step of the process. The victory of a well-done design is in its usability.
The metamorphosis of an inspiration into a ‘GOOD DESIGN’ is the primary foundation of human civilization. The design of a bonfire, inspired by a spark between two stones, helped early man stay safe in the wilderness. The perfectly circular stone rolled on to become wheels, saving us the hard labour of walking to places and carrying heavy loads. These designs might seem plain, but it took several trials and errors to reach their prime functional stage.
Importance of good design
One of the best ways to gain perspective for appreciation of good design is through evaluating achievements and losses in the process of space exploration, the largest breakthrough for mankind till date.
The disaster of the Columbia space shuttle is probably the greatest misfortune in the quest of space exploration. A piece of foam lost from the bipod ramp collided with the left wing of the shuttle, allowing hot gas to enter the wing, which destroyed the shuttle as it re-entered earth’s atmosphere, killing its crew of seven astronauts. The loss of foam had already been established as a design setback. NASA focused on strengthening precautionary measures to handle the disastrous malfunction rather than completely mitigating the possibility of its occurrence through a better functional design.
This catastrophe reinforces the importance of good design. And also the danger of not heeding one.
The launch of a vessel into outer space has many challenges. The main objective of a space rocket is to escape the drag of earth’s gravity to reach its destination.
The force required to break free of earth’s gravitational force is massive even for a tiny object. A fully equipped rocket is heavy, weighed down by the essentials required for a crew manned mission. The required force is achieved by fuel propulsion and mass expulsion.
In space shuttles gas is produced by burning fuel in the engines. Controlled escape of this gas provides the thrust which propels the rocket in the opposite direction. The largest mass of the rocket is occupied by the propulsion system (propellant tanks and propellants). This mass is lost as the propellants are fired up by the engine to gain acceleration of the rocket providing the force for it to escape gravity. A functional design for a rocket is to efficiently burn enough propellant to gain the required acceleration.
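The relationship sketched above between propellant mass and achievable speed is captured by the ideal (Tsiolkovsky) rocket equation. The numbers below are illustrative only, not figures for any real vehicle:

```python
import math

def delta_v(isp_s, m0_kg, mf_kg, g0=9.80665):
    """Ideal rocket equation: delta-v = Isp * g0 * ln(m0 / mf)."""
    return isp_s * g0 * math.log(m0_kg / mf_kg)

# Illustrative: a 500 t rocket that burns down to 100 t of dry mass
# with engines of 350 s specific impulse
dv = delta_v(isp_s=350.0, m0_kg=500_000.0, mf_kg=100_000.0)
print(round(dv), "m/s")
```

This works out to roughly 5.5 km/s here; reaching low Earth orbit requires on the order of 9 km/s once gravity and drag losses are included, which is why most of a rocket’s mass is propellant.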
A well designed rocket is able to generate enough force to either get into orbit of the earth or escape earth’s gravity entirely to move to another planet, much the same way that a good design helps gain investors to ‘launch off’ a product successfully into the market.
As the rocket flies through the air of earth’s atmosphere aerodynamic forces of drag and lift kick in.
Lift force occurs due to the turning of the gas flow by the rocket. It upsets the direction of flight by causing rotation and changing the course of the vessel. The nose cone, body tube and the wings of the rocket are used to turn the gas flow and generate lift force to control and stabilize the direction of flight.
Drag force is the friction generated between the atmospheric air and the rocket. The drag force is difficult to determine and requires testing of the rocket model and is entirely dependent on its shape design, thrust setting and gas flow at the base.
Thus the structural design of the rocket dictates its flight ability and is key to the mission of space exploration.
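The drag described above follows the standard drag equation, F = 0.5 * rho * v^2 * Cd * A, where the drag coefficient Cd depends on the shape design. A minimal numeric sketch with made-up values:

```python
def drag_force(rho, v, cd, area):
    """Aerodynamic drag: F = 0.5 * rho * v^2 * Cd * A (newtons)."""
    return 0.5 * rho * v**2 * cd * area

# Illustrative values: sea-level air density, 300 m/s, a blunt body
f = drag_force(rho=1.225, v=300.0, cd=0.5, area=10.0)
print(round(f), "N")
```

Because drag grows with the square of velocity, even small reductions in the drag coefficient from a better shape design pay off enormously at launch speeds.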
Once a product is launched in the market, the competitors and economy begin acting as the opposing forces. Efficient Marketing and Branding design are essential to neutralize these opposing forces.
Turning good design into an asset
A key factor for space exploration is the ability to maneuver the vessel and host the crew.
Once the rocket leaves the luxurious atmosphere of the planet, assistance through communication satellites becomes unreliable and there is no GPS in space.
A sustainable atmosphere with environment controls, life support systems, and highly reliable smart control systems is a vital constituent of a rocket.
The launch of Falcon9 rocket, a combined project of SpaceX and NASA, is the first step towards opening up the barriers of space exploration through smart, reliable and functional design.
The Falcon9 simplified its control panel with large touch screens that enable visualization of extensive real-time data and allow the crew to effectively commandeer the ship. The use of custom foam molds in the capsule seats helps provide a smoother and more comfortable ride for the crew.
Elon Musk’s rethinking of the spacesuit design, with minimal equipment, lightweight materials, and increased efficiency, custom made for the crew, is a positive step towards the future of space missions.
The control panel and cockpit along with a reliable crew are necessities that help the transition of a well designed rocket into a beneficial asset for mankind.
A highly efficient team and environment is required to keep a business and product afloat. Build an efficient cockpit to drive your design to success.
The International Space Station (ISS) is a pitstop in the conquest of space exploration.
ISS functions as self-contained living quarters and a life-support system capable of communicating with ground flight controllers. It is designed to serve as a docking port for space shuttles and crew exploration vehicles, providing a rest stop for rejuvenation. The internal and external research accommodations of the ISS allow experiments to be carried out in order to collect data and test new technologies. It serves as the initial platform for the conquest of space, from which to proceed towards greater achievements in exploring the universe.
Once you build a successful firm, make a pitstop to gather and re-evaluate data. Design a platform to experiment with creativity and test new ideas.
Rely on your design, data, and intuition once you step out to become a venture. Make your product an asset in the market.
Future of Design
Each successful design opens up opportunities for further designs, technologies and achievements.
The Mars Landers have picked up insights about the planet and evaluations of the data have shown a possibility of building a sustainable human colony on it. The curiosity rover was designed to travel over the land of Mars and analyze its environment. Powered by the energy generated through radioisotope decay, the rover is a self-sustainable mission. It has a camera and robotic arms to collect, gather, and store information as it roams on the planet’s surface.
Elon Musk floated the proposition of landing an experimental greenhouse on Mars that would help open up access to space. If the idea seems familiar to you, it is because it is also the storyline of ‘The Martian’ movie.
Falcon9 is a step towards opening up space exploration opportunities with reliable and affordable design, the vision of SpaceX. The successful completion of its mission has restored the enthusiasm to explore space further.
There exists no limit to inspiration and design. Every idea is an opportunistic design to power the future.
If you’re looking for consulting services in design or development for your new or existing digital product or enhance your online presence, please email your requirements to hello@headcanon.co | https://medium.com/@headcanon/space-exploration-a-product-of-valued-design-e4ead33ffb03 | [] | 2021-01-06 12:51:11.007000+00:00 | ['Consulting', 'NASA', 'Emerging Technology', 'Design', 'Space Exploration'] |
1,527 | The Deacon Meets Blockchain: Wake Fintech | Greetings, BEN community!
Though COVID-19 may have put some of our Spring plans on pause, it’s still been a busy year for Wake Fintech. We wanted to take a moment to update the community on what we’ve been up to, as well as some of our plans for the future.
Last April, Wake Fintech hosted an event with Kevin Leffew (WFU ’17 and Fintech Club Founder) and John Quinn (WFU ’95) of Storj Labs. The public talk included a live demo of Storj Labs’ decentralized Cloud Storage platform, Tardigrade.io, which gave students an in-depth look at how blockchain is revolutionizing the way we develop applications and store data.
This past Fall, we held meetings throughout the semester which featured engaging presentations and demos by our very own Fintech club members. We kicked off the semester with an overview of blockchain technology, as well as a presentation teaching students about the history of Bitcoin and how it helped promote the development of future cryptocurrencies. This was followed up with a round-table discussion of how we currently use blockchain in our daily lives, as well as areas where blockchain and cryptocurrencies could be implemented to create value and make life easier. We also discussed some of the issues that cryptocurrencies could face on the path to widespread adoption.
In October, Rob Michele (WFU ’20 and Outgoing Club President), taught members about Bittrex’s REST API through a presentation and live demo. Michele has used the service for various projects and trading algorithms in the past, and his extensive knowledge made for a very eye-opening experience for all — especially members who didn’t have as much development experience and were excited to get started with their first projects.
Our semester was capped-off in November with a presentation by Zach Skubic (WFU ’23), who taught members about the ways Fintech can help fix underbanking in developing economies by promoting financial inclusion. Skubic spoke about cryptocurrencies and blockchain implementation, emphasizing increasing mobile accessibility worldwide and the need for digital security, as well as the need for online banking solutions that cater directly to developing markets. He also discussed the potential for the securitization of microloans to increase institutional access to developing markets while providing social enterprises with additional funding.
Zach Skubic presents “How Fintech Can Fix Underbanking in Developing Economies” — November 2019
Even though we might not be able to meet in-person for events at the moment, we’re still very excited about the future of Wake Fintech. We look forward to continuing our meetings and student-led presentations to provide our community with a look at the developments occurring within the Fintech realm, and we are also excited to expand our educational offerings to students through our partnership with BEN. When the time comes (hopefully next Fall), we hope to extend more invitations to leaders in the blockchain and crypto communities so students can continue learning from experts about cutting-edge projects that are having a real-world impact. | https://medium.com/blockchainedu/the-deacon-meets-blockchain-wake-fintech-bfed9d1833ce | ['Zach Skubic'] | 2020-07-09 18:28:37.086000+00:00 | ['Blockchain', 'Cryptocurrency', 'Fintech', 'Blockchain Technology', 'Education'] |
1,528 | The Technology Fighting Coronavirus | As I’m sure many of you are aware, the global pandemic, COVID-19, known as the Coronavirus, has spread rapidly and many of you are probably at home in quarantine reading this now. Initially I chose not to prepare a response, given how this issue has taken over every media outlet, YouTube channel, and Facebook page. However, after a little research, it became clear that technology is being used as a very good tool. I couldn’t pass up an opportunity to recognize the men, women, and technology that is working to solve this health crisis.
Like I’ve said, technology is a tool and its use as a very good tool couldn’t be made more clear in regard to fighting this virus.
Like I’ve said, technology is a tool and its use as a very good tool couldn’t be made more clear in regard to fighting this virus. There are three primary objectives in this fight:
We must prevent the spread of the virus to those who are healthy. We must treat those who are ill. We must develop a cure for all who may contract this virus.
Technology is being used within all three objectives.
Let’s start with preventing the virus. The first step to preventing the spread of a virus is to limit individual contact. This is of course where social distancing and quarantines are used. Technology is helping us make these efforts much more effective. To more effectively limit social contact, technology is helping officials learn where cases are arising. A Boston-based start-up, BioBot Analytics, is installing technology in sewer systems. These systems are working to detect the virus and, using data analysis, determine where cases of COVID-19 are arising, how many there are, and how they are spreading (Perry, “Startups Unveil…”).
On a slightly more pleasant topic away from the sewage, researchers at the University of Southern California are working to develop an app that could determine who needs to stay home and who is probably safe to go to work or shop (Polakovic, “USC experts..”). The researchers are attempting to find a balance between preventing the spread of the virus and the economic impact we’re already seeing. The app uses anonymous data from positive COVID-19 tests to determine if an individual has been exposed to the Coronavirus and then alerts them with a suggestion to stay home and quarantine.
Of course, a review of the technology being used to prevent the spread wouldn’t be complete without mentioning the countless video platforms that allow us to connect with work, family, friends, churches, and schools. In many ways, although we are stuck in isolation, these technologies have allowed us to remain as connected as ever.
But technology can also be used to treat those who are ill.
But technology can also be used to treat those who are ill. The first step is diagnosis. In order to prevent further contamination, telemedicine is growing in popularity and developers are working to increase the accuracy of and the level of care teledocs provide. An Israeli company is working to develop apps and programs that can detect heart rate, heart rate variability, respiration, and oxygen saturation using only the cameras on a smartphone (Perry, “Startups Unveil…”). Another company is using simple audio recordings to detect the sounds within the lungs, a vital indicator of possible infection (Perry, “Startups Unveil…”).
Unfortunately, as of yet, there is no known cure for COVID-19 and therefore there is little healthcare professionals can do once a diagnosis is made. In the most severe cases, ventilators can however mean the difference between life and death. But as cases rise, equipment is further limited. Companies around the world are quickly building up factories to build more ventilators, but this may not be enough. Some individuals have discovered that 3D printers can make vital pieces for ventilators. Hobby groups with at-home 3D printers in Spain have even produced a ventilator prototype that is being tested by healthcare professionals and the scientific community. If their prototype succeeds, 3D printers across the world can print important components using online templates.
Of course, through this entire crisis, a cure is being developed. Researchers haven’t yet found it, but they are using technology to speed their progress. Doctors and scientists are producing virtual simulations of the virus and possible treatments. The simulations are being run through supercomputers 100 times faster than those used 10 years ago. They are testing current drugs and treatments against the virus, virtually. Big data, artificial intelligence, and machine learning are providing more capabilities to scientists than ever before (Polakovic, “USC experts..”). Also, these computers can work 24/7. A team of researchers has reported that artificial intelligence has helped them find 500 possible antibodies that could fight the Coronavirus (“Technology against…”).
Now, I’ve only scratched the surface of the uses and power of technology in the fight against COVID-19. For more information, please see the resources referenced in the Works Cited section below.
It’s clear that technology is an extremely important and powerful tool in this endeavor.
It’s clear that technology is an extremely important and powerful tool in this endeavor. As so many experts have reported, we will win this fight, it only takes time. When we do conquer this virus it will be by the tool of technology, well, that and washing your hands.
So stay confident, stay inside, and stay connected via the technology at your fingertips. | https://medium.com/tech-is-a-tool/the-technology-fighting-coronavirus-baa0b968625 | ['Benjamin Rhodes'] | 2020-04-23 13:42:49.580000+00:00 | ['Covid 19', 'Virus', 'Quarantine', 'Technology', 'Coronavirus'] |
1,529 | Exploring the US Cars Dataset | The US Cars Dataset contains scraped data from the online North American Car auction. It contains information about 28 car brands for sale in the US. In this post, we will perform exploratory data analysis on the US Cars Dataset. The data can be found here.
Let’s get started!
First, let’s import the Pandas library
import pandas as pd
Next, let’s remove the default display limits for Pandas data frames:
pd.set_option('display.max_columns', None)
Now, let’s read the data into a data frame:
df = pd.read_csv("USA_cars_datasets.csv")
Let’s print the list of columns in the data:
print(list(df.columns))
We can also take a look at the number of rows in the data:
print("Number of rows: ", len(df))
Next, let’s print the first five rows of data:
print(df.head())
We can see that there are several categorical columns. Let’s define a function that takes as input a data frame, column name, and limit. When called, it prints a dictionary of categorical values and how frequently they appear:
from collections import Counter

def return_counter(data_frame, column_name, limit):
    print(dict(Counter(data_frame[column_name].values).most_common(limit)))
Let’s apply our function to the ‘brand’ column and limit our results to the five most common values:
return_counter(df, 'brand', 5)
As we can see, we have 1,235 Fords, 432 Dodges, 312 Nissans, 297 Chevrolets, and 42 GMCs.
Let’s apply our function to the ‘color’ column:
return_counter(df, 'color', 5)
Now, let’s look at the brands of white cars:
df_d1 = df[df['color'] =='white']
print(set(df_d1['brand']))
We can also look at the most common brands for white cars:
print(dict(Counter(df_d1['brand']).most_common(5)))
We see that most of the white cars are Fords, Dodges, and Chevrolets.
We can also look at the most common states where white cars are being sold:
print(dict(Counter(df_d1['state']).most_common(5)))
Next, it would be useful to generate summary statistics from numerical columns like ‘duration’. Let’s define a function that takes a data frame, a categorical column, and a numerical column. The mean and standard deviation of the numerical column for each category is stored in a data frame and the data frame is sorted in descending order according to the mean. This is useful if you want to quickly see if certain categories have higher or lower mean and/or standard deviation values for a particular numerical column.
def return_statistics(data_frame, categorical_column, numerical_column):
    mean = []
    std = []
    field = []
    for i in set(list(data_frame[categorical_column].values)):
        new_data = data_frame[data_frame[categorical_column] == i]
        field.append(i)
        mean.append(new_data[numerical_column].mean())
        std.append(new_data[numerical_column].std())
    df = pd.DataFrame({'{}'.format(categorical_column): field, 'mean {}'.format(numerical_column): mean, 'std in {}'.format(numerical_column): std})
    df.sort_values('mean {}'.format(numerical_column), inplace = True, ascending = False)
    df.dropna(inplace = True)
    return df
Let’s call our function with categorical column ‘brand’ and numerical column ‘price’:
stats = return_statistics(df, 'brand', 'price')
print(stats.head(15))
Next, we will use boxplots to visualize the distribution in numeric values based on the minimum, maximum, median, first quartile, and third quartile.
Similar to the summary statistics function, this function takes a data frame, categorical column, and numerical column and displays boxplots for the most common categories based on the limit:
import matplotlib.pyplot as plt
def get_boxplot_of_categories(data_frame, categorical_column, numerical_column, limit):
    import seaborn as sns
    from collections import Counter
    keys = []
    # Use the data_frame parameter (not the global df) so the function is reusable
    for i in dict(Counter(data_frame[categorical_column].values).most_common(limit)):
        keys.append(i)
    print(keys)
    df_new = data_frame[data_frame[categorical_column].isin(keys)]
    sns.set()
    sns.boxplot(x = df_new[categorical_column], y = df_new[numerical_column])
    plt.show()
Let’s generate boxplots for ‘price’ in the 5 most commonly occurring ‘brand’ categories:
get_boxplot_of_categories(df, 'brand', 'price', 5)
Finally, let’s define a function that takes a data frame and a numerical column as input and displays a histogram:
def get_histogram(data_frame, numerical_column):
    df_new = data_frame
    df_new[numerical_column].hist(bins=100)
    plt.title('{} histogram'.format(numerical_column))
    plt.show()
Let’s call the function with the data frame and generate a histogram from ‘price’:
get_histogram(df, 'price')
I will stop here but please feel free to play around with the data and code yourself.
CONCLUSIONS
To summarize, in this post we went over several methods for analyzing the US Cars Dataset. This included defining functions for generating summary statistics like the mean, standard deviation, and counts for categorical values. We also defined functions for visualizing data with boxplots and histograms. I hope this post was useful/interesting. The code from this post is available on GitHub. Thank you for reading! | https://towardsdatascience.com/exploring-the-us-cars-dataset-dbcebf954e4a | ['Sadrach Pierre'] | 2020-05-09 16:47:54.414000+00:00 | ['Software Development', 'Programming', 'Python', 'Data Science', 'Technology'] |
1,530 | Solutions: The Energy Industry (Part 1) | In the next series of articles, BigBang Core will look into a number of different industries that are benefitting from blockchain technology.
Photo by Max Bender on Unsplash
As always, make sure to follow us on Twitter, Instagram and give us a like on Facebook. You can also join our Telegram Channel for all the latest news and updates.
As the world is moving towards digitalization, the transformation towards blockchain is rapidly increasing. At present, industries are increasingly incorporating blockchain technology into their businesses models since it has the potential to transform them effectively.
Like other sectors, the energy sector refers to the companies involved in the manufacturing and sales of energy such as fuel production, distribution, refining and extraction; boosting the economy and aiding in transportation and production.
Why is there a Need for Blockchain Technology?
In regards to the energy sector, the increasing utilization of renewable energy installations can result in great pressure on electricity grids. Similarly, oil and gas companies are facing issues of privacy. We have highlighted some of the issues faced by the energy industry below:
Difficult Statistics and Lower Authenticity
In the energy industry, collating accurate data has been a recurring problem. Data is often difficult to count in the first place, or it is lost or tampered with during circulation, making it difficult to quantify.
Cost of a Third-Party
The recruitment of a third-party energy testing company can be complicated and expensive. Furthermore, there is a risk that the third-party might leak private data.
Difficulty in the Collection of Data
The degree of digitalization of information in the energy sector is very low. Hence, collecting accurate data on energy consumption and emissions can be challenging.
The Benefits of Blockchain Technology in the Energy Sector
The above mentioned issues can be rectified by employing blockchain technology. Utilization of blockchain technology in the energy sector brings a lot of benefits. These include (but are not limited to):
Minimized Costs
Greater transparency for stakeholders while maintaining security
Sustainability
For instance, the utilization of blockchain technology in an energy company for electricity distribution can facilitate in the connection of end-users with the energy grid. Blockchain together with Internet of Things will allow the direct trade and purchase of energy from the grid, instead of retailers (third-parties). Similarly, employing blockchain in the trading of commodities will result in low-cost and efficient trading as compared to the traditional mechanisms. This will not only enable companies to minimize the costs related to labor but also the costs related to data management, settlement delays etc. Furthermore, it will provide end-users with a higher rate of efficiency, and enable them to have control of their energy sources along with trustworthy and live updates of energy utilization data. The data could comprise of fuel costs, margin costs, market prices etc.
BigBang Core: Transforming the Energy Industry
BigBang Core is on its way to transforming the energy industry. It combines big data, Internet of Things, and blockchain technology for creating an energy ecological system that comprises of “Data collection, data transmission, data storage, and data analysis”. This will facilitate the companies in reducing the time and energy required for man powered statistics and analysis, and also reducing the utilization of resources and energy, resulting in greater savings for everyone.
Enhanced Data Availability and Usability
BigBang processes scattered multi-source data into standards and clean data assets for ensuring data consistency, availability, accuracy, and integrity.
Higher Efficiency, Lower Cost
BigBang Core is proud to solve the issue of secure collection, transmission, and chaining of energy-saving data.
Managing Data Assets
BigBang Core combines big data for managing data assets. It builds data and service platforms which enables managers to understand the data from a high-level situation, giving them a clear understanding of the overall image of data resources.
Results’ Accuracy and Promoting Energy Conservation
BigBang Core serves the users with a data warehouse maturity model for evaluating the degree of intelligent data, providing guidelines for the generation of evaluation reports.
Achieving Cross-Chain Transactions
Each energy company can create a branch of BigBang Core for attaining cross-chain transactions of “energy indicators”.
Jengengbao Hangzhou:
Launched in 2011, Jengengbao is the first free energy-saving service provider. It is concerned with the management and operation of energy-saving emission reduction technologies.
BigBang Core facilitated Jengengbao in transforming and upgrading its product chips for accomplishing data security on the chain with one-third of the previous cost. The combination of blockchain and big data technologies helped the company in reducing the time and energy of man powered analysis and statistics, saving money in the process.
The Future Looks Bright
Blockchain technology is slowly but surely changing industries that make our economy turn. BigBang Core is proud to solve the issues related with the energy sector effectively and efficiently with the aid of blockchain technology and Internet of Things. It will aid in the minimization of the consumption of energy resources, resulting in lower costs. Furthermore, a safe and secure environment is provided to carry out businesses with convenience. Who knows what the future will hold, but we believe it can only improve. | https://medium.com/@bigbang-core/solutions-the-energy-industry-part-1-448e20e7da75 | ['Bigbang Core'] | 2020-12-04 06:00:23.946000+00:00 | ['IoT', 'Industry', 'Blockchain', 'Blockchain Technology', 'Business'] |
1,531 | Past Meets Present: Blockchain’s Impact on Fine Art | ‘White Cube’ as a synonym for ‘Blockchain’?
Being a history of art student as well as a writer focused on blockchain tech doesn’t make sense to many people. The art world preserves a reputation of being traditional, elitist and pretentious, while blockchain technology is still avant-garde enough to deter people from engaging for fear of seeming behind the times.
Ever since Marcel Duchamp’s controversial interventions in the early 20th century, conceptual artists have been addressing the centuries-long problems within the art world including the lack of representation of women and ethnic minorities, the authoritarianism of the gallery and the question of what art actually is. Without exploring messy beds and man-made canyons too much, most seem to agree it’s the ‘idea’ behind the art that really matters.
As the blockchain and crypto industries unfold around us, I’ve come to notice some parallels between the two. Although the intangible aspects of blockchain are obvious in its relation to conceptual art, it holds real potential to solve many persistent problems that have impaired the art industry for centuries.
Redesigning Art Ownership & Provenance
One issue that even the Renaissance giants are not immune to is the notion of provenance. How can we actually say that this piece is by this person at this time? For the older ones it seems we are stuck with determining their origin by the way they drew ears, or x-rays to confirm the approximate age. For conceptual art, which is increasingly intangible, we have an even bigger dilemma as many artists embrace workshops and even rely on community efforts for the piece to materialise.
By implementing blockchain technology, every stage of the artwork’s conception and journey can be recorded on an immutable ledger. We would be able to see who was involved in the creation of the piece and what their exact role was as well as being able to avoid repetition of cases like that of Maria Altmann, who spent a gruelling six years battling the Republic of Austria to retain ownership over a Klimt painting that was rightfully hers.
Initiatives such as Codex are putting the identity of art and collectibles on the blockchain, making tracing provenance and buying and selling items easier, faster and more secure. At Sea Foam we are advising and developing technology for Artified, a Chicago-based company that plans to not only track the provenance of art but offer an interactive mobile app for collectors to engage with one another. Artified will incentivise collectors with token rewards for their knowledge and engagement, further enhancing the art industry and community.
These advancements may not be in the galleries’ interests, however, as disputes persist over artworks and artefacts that were obtained in unsavoury ways, not to mention the shady gallery funding scene and money laundering disguised as cash sales. That being said, as this becomes common knowledge, more institutions want to distance themselves from the alleged “black market” of the art world.
Bringing Transparency to the Dark Side of the Art Market
As we cautiously approach the role of galleries in a contemporary world we must consider the ongoing debate around their authoritarianism and the power collectors and gallery owners have over the market. A good example of this is how Charles Saatchi catapulted the Young British Artists into infamy, very early on in their careers, setting a benchmark for their future value from that moment on.
The mere existence of galleries and museums has relied on the wealth of a few individuals since their genesis. This is mostly beneficial for all parties, but certain cases taint the art industry as a whole, leading to a reputation of money laundering and seedy funding. Hans Haacke often tackles these themes in his work, more often than not targeting galleries directly who are led by unethical characters, or who had obtained their artworks or funding from disreputable sources.
Currently, photographer Nan Goldin is taking on the Sackler family who made a fortune from OxyContin — a painkiller known for its role in the fatal opioid epidemic — despite their name adorning the art galleries of New York and London.
Through blockchain transactions we can guarantee transparency of money and finances for both galleries and collectors, as well as allow artists and artworks to hold their own unique serial number, reinstilling faith in the art industry as a whole by reducing the potential for manipulation of facts. It also creates transparency in areas which are currently opaque, such as how artists are paid, as it’s currently unclear how much of a cut galleries and auction houses take.
Whether institutions embrace blockchain for this reason is still uncertain as increased transparency has the potential to scare away vendors and buyers. However, according to a 2017 Deloitte Art and Finance Report, 75% of art professionals and 64% of collectors want the art market to achieve greater standards of transparency. Blockchain could be the push we need to advance the art market into the 21st century.
Greater Accessibility for Collectors & Enthusiasts
Another pressing theme in the art world is the lack of accessibility for people in non-Western countries. Traditional cultural centers like Paris, London, New York and Tokyo still dominate, despite a movement to embrace art from nations that have, until recently, been excluded from the grander narrative. Other forms of technology, such as Virtual Reality, are also being utilised by galleries to create experiences for art fans around the world, but what about the people who want to collect art and can’t afford the substantial price tag?
The likes of Maecenas make it possible to buy art on the blockchain, allowing people to invest in fractions of artworks to enhance their portfolios in a safe and secure way. Their auctions will be significantly more inclusive for people with humble finances, with the minimum investment amount being equivalent to $5000 in BTC, ETH or ART token. They have partnered with Dadiani Fine Art in London for the world’s first blockchain art auction of Andy Warhol’s ’14 Small Electric Chairs’ (1980) from 25th July 2018.
Exciting New Medium for Artists
Similarly to the backlash against photography in the 1800s, we’re in the midst of a retaliation against the collision of technology and art. If even David Hockney’s iPad drawings can provoke such mixed reactions, how can we expect people to support blockchain artists?
It is difficult to measure or critique digital art in the same way as we did in the 1950s. Blockchain could be the catalyst for pushing art into its next evolution by allowing artists to address complex themes in ways which could excite a 2018 audience in the same ways as Daguerreotypes roused the public in 1839.
The nature of blockchain technology opens doors for artists around the world to address key issues of equality, economy and self-commodification in their work, while also providing a solution to the pressing issues all parties face when interacting with the art industry.
1,532 | WTF?! Study Shows PC Gamers and Dota 2 Players Swear the Most | By breaking down the comments from 40,000 gamers, researchers discovered who has the the foulest mouths, which games keep it clean, and the most popular cuss-word of all.
By Eric Griffith
How do you figure out which gamers are the most trash-mouthed a**holes of all? Ask the folks at OnlineGambling.com. The site usually helps people find the best online gambling sites and sports books, but it took a gamble on a scheme to figure out where all the cursing is happening in today’s video games. (Because, obviously, there’s a s**t-ton of it. Also, “hell” and “damn” don’t count; it’s not 1965.)
To do so, OnlineGambling went deep into gaming subreddits, 14 discussions in all, logging occurrences of English-language curse words across the most recent 100 posts, and came to some foul-mouthed conclusions.
Most important are the words themselves. You can see in the chart at top which word got used the most, followed by several variations on the f-word, with some tamer terminology such as “ass” and “crap” thrown in for good measure. Only one use of “arse” and “wank” apiece indicates not a lot of UK-English speakers are hitting the subreddits.
Perhaps more interesting is how many swears there were by gaming platform. Parents might want to know just what kind of company kids are keeping, depending on their preferred gaming method. It should shock no one that PC gamers are the foulest-mouthed—it’s so easy to type your cussin’!
You’d think the wholesome Nintendo Switch games would indicate their players are clean-living and thus the cleanest in vocabulary. Wrong: Microsoft Xbox users averaged the fewest swears.
1,533 | Organize Your Apps to Improve Your Well-being | Photo by You X Ventures on Unsplash
We can’t get away from technology. Nor do we want to. But, more stories come out every day about how it’s affecting our personal well-being. From disturbing our sleep (scrolling before bed) to increasing stress (checking work email on vacation) to building insecurities (ahh, the social media “perfect life” dilemma) to shopping at our fingertips (a flash sale!), we. are. bombarded. But that doesn’t mean we have to give up our connectivity in order to achieve a better level of personal well-being. What we can do is organize our apps to help us focus on what is important. Let’s make this easy — we’ll segment your well-being into 5 categories: Physical, Emotional, Social, Professional, and Financial.
Physical
Your physical well-being, from exercise to blood pressure, is usually what people think of first when it comes to health. How often are you working out? What are you eating? What medication are you taking? I’m sure you have goals coming out of the wazoo in this category — from fitness to health. Apps that belong in this category: MyFitnessPal, Aaptiv, Strava, Exercise Timer, Nike, Peloton, or the new Apple Fitness+.
Emotional
Your mental well-being is one of the most important things you can focus on. Whether you need to de-stress, want to attempt meditation, or just take a break, put your apps like Headspace, Calm, or Breethe here. My daughter loves Moshi, so I have hers in this category, too.
Social
If this folder is the biggest of all, you know you need a folder. Facebook, Instagram, WhatsApp, Messenger, Pinterest, Twitter, GroupMe, Snapchat, Houseparty, Marco Polo, dating apps, news apps — you get it. If it’s social, it belongs here.
Professional
Maybe you’re looking for a new job, or perhaps hoping to hone your skills in something new. Glassdoor, LinkedIn, Udemy, and Business Insider are a few you can pop into this category. Maybe your job has specific apps, like Workday, Slack, Zoom, Office 365, Trello, or others. Put them all here.
Financial
Online banking is better than ever these days. Putting all of the tools at your fingertips, including any apps you may use to look for deals, like Woot or Retail Me Not. Interested in investing? Put Ellevest, Robinhood, Public, Stash or Diversyfund here.
Putting the things that matter front and center
If your goal is to decrease your social media time, take your “Social” app folder off of your home screen. Replace it with “Emotional” if you are hoping to bring some balance into your life.
Maybe you frequent a good ol’ fashioned website that doesn’t have an app yet but is mobile-responsive (meaning it adjusts to the size of your screen depending on your device). Click on the menu dots on the browser, then select “Add to Home Screen.” It then becomes an icon like an app. Drag that site into the appropriate well-being category.
Doing this work is kind of like cleaning out your overly-stuffed inbox: seemingly overwhelming, but it’s worth the time. Oh, and while you’re at it, turn off unnecessary notifications. Your well-being will thank you. | https://medium.com/@mindylpierce/organize-your-apps-to-improve-your-well-being-5251f05e38df | ['Mindy Pierce'] | 2020-12-18 18:46:51.334000+00:00 | ['Organization', 'Wellness Tips', 'Life Lessons', 'Technology', 'Wellbeing'] |
1,534 | Capable Robot Components Launches Programmable USB Hub | Capable Robot Components, the startup that brought us SenseTemp, is back with another exciting board — the Programmable USB Hub, which doubles as a development board, and a bridge between your computer and I2C (via SparkFun Qwiic connectors), GPIO, and SPI using its a mikroBUS header.
The Programmable USB Hub features a Cortex-M4F MCU, four downstream USB 2.0 ports, two GPIO headers, and more. (📷: Capable Robot Components)
Actually, the board is capable of a myriad of applications, including acting as a power supply, providing 5V/6A with 13mA-resolution current monitoring of downstream connected devices. The Programmable USB Hub can also act as a USB to TTL Serial adapter, as well as a flexible embedded electronics development tool.
The Programmable USB Hub offers an upstream USB connector, USB UART/GPIO connector, I2C USB connector, MCU I2C connector, and MCU USB port for programming. (📷: Capable Robot Components)
The features and specs for the Hub are vast and include a Microchip SAM D51 MCU, four USB 2.0 downstream ports, a USB upstream port, I2C, SPI, UART, and two GPIO headers. It allows you to turn on/off the 5V power on each downstream port, and adjust the current limits between 0.5A to 2.6A as well. As mentioned earlier, the board sports a mikroBUS header, enabling you to add more sensors and connectivity options, and solder jumpers that let the UART and SPI connect to the USB Hub IC or the Cortex-M4F MCU.
The USB Hub controlling and I2C OLED display from the upstream host. (📷: Capable Robot Components)
Capable Robot Components is expected to launch the Programmable USB Hub on Crowd Supply shortly and will ship in an extruded aluminum enclosure with internal status LEDs and cutouts for the connectors and ports. It will also come with open source CircuitPython firmware, which can be updated over the MCU USB connector.
UPDATE: The Programmable USB Hub is now available on Crowd Supply, starting at $140 for a bare PCB, $180 for the PCBA with a custom metal enclosure, light pipes, and rubber feet, and $220 ($200 early bird special) for the kit that throws in a power supply and aux, I2C, and USB cables.
1,535 | From Blockchain to no Chain | Blockchain is a distributed ledger technology (DLT) where transactions are recorded in the form of Blocks that are linked cryptographically to form a chain. It is the underlying technology behind many cryptocurrencies such as Bitcoin. While Blockchain is a transformative technology that can potentially disrupt a wide range of industries, it is not without its drawbacks.
The HelixTangle represents a block-less and chain-less solution to the world of Distributed Ledger Technologies. It is a next-generation DLT that is based on Directed Acyclic Graphs (DAGs), instead of blocks chained together. This article will highlight problems faced by existing DLTs, and how the HelixTangle addresses these issues.
1. Blockchain Consumes Enormous Amounts of Energy
Satoshi Nakamoto designed Bitcoin in a way such that anyone can take part in the process of updating transactions. However, there’s a catch. Updating transactions require users to solve a hashing algorithm called SHA-256 that involves some tough math. The fastest computer that validates a bunch of transactions on the ledger earns Bitcoin. This is known as mining, and the complex hashing process is called proof of work. Mining ensures that the Bitcoin ledger is maintained and updated regularly.
Solving the SHA-256 algorithm requires brute-force computing to speed up the hashing process and eventually generate more rewards for the user. Only expensive specialized computers stand a chance at solving these cryptographic puzzles. This process consumes a lot of energy since computers would have to remain switched on for long periods of time, in order to come up with as many answers as possible.
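To make the brute-force nature of mining concrete, here is a toy proof-of-work loop in Python. This is only an illustration of the idea: real Bitcoin mining hashes an 80-byte block header twice and targets a difficulty far beyond what is shown here, and the block string and difficulty below are made-up example values.

```python
import hashlib

def toy_proof_of_work(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash of (block_data + nonce) starts
    with `difficulty` zero hex digits, by trying nonces one by one."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # every failed attempt is wasted computation (and energy)

nonce = toy_proof_of_work("block #1: Alice pays Bob", difficulty=4)
print(nonce)
```

Each extra zero digit in the target multiplies the expected number of attempts by 16, which is why competitive mining hardware runs flat out around the clock.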
Competition for profit has largely driven the Bitcoin blockchain to consume as much electricity as Denmark, with costs surmounting to almost $3 Billion. Furthermore, coal-based power plants in China fuel the Bitcoin network resulting in an extreme carbon footprint for every transaction.
At a time when the world is grappling with global warming, Helix serves as a sustainable alternative to existing distributed ledgers such as Bitcoin. Unlike Blockchain-based DLTs, Helix adopts Proof of Useful Work (PoUW) to confirm data on its ledger. Peers on the HelixTangle can put useful data into the ledger and allow compute nodes to operate on this data, significantly reducing the system’s wastefulness.
In short, the HelixTangle requires no mining and eliminates the need for miners or expensive hardware. The Proof of Useful Work feature of the HelixTangle results in potential zero waste and prevents wastage of energy resources.
2. Transactions on the Blockchain are Expensive
It is estimated that large mining farms control 51% of the nodes on the Bitcoin network, making the system susceptible to attacks. Miners leave their computers switched on for 24 hours to validate transactions. This comes at a cost. The rewards that miners earn from authenticating transactions on Blockchain-based DLTs comes out of the pocket of users transacting on the network. And boy, aren’t they expensive! In early 2018, people paid a whopping $28 on average to carry out Bitcoin transactions.
Directed Acyclic Graphs (DAGs) ensure that the HelixTangle arrives at a consensus with no need to trust a central authority. The HelixTangle requires no miners, and users can now directly transact with each other for free. The Helix network caters to the needs of the emerging machine economy by being fee-less and requiring no intermediaries to facilitate transactions.
No Miners = No Fees
Furthermore, high transaction fees charged by existing DLTs make it pointless to carry out low-value transactions. Helix, with its ‘Zero cost on the protocol’ policy, allows users to execute quick, any-value transactions between each other.
3. Blockchain Fails to Scale Up Efficiently
The spectacular rise of Bitcoin brought unprecedented interest in Blockchain, with many expecting the technology to disrupt a wide range of industries. Today, people adopting DLTs only make up a small fraction of the world’s population. However, the industry is gaining traction with companies such as Helix working tirelessly to educate the public about distributed ledgers. As adoption increases, can existing DLTs efficiently scale up?
This looks bleak at the moment. The mining award system temporarily disguised costs associated with transacting Bitcoin. Bitcoin has a 1MB size limit on blocks built into the system. As the currency grows and transactions increase, block size limitations impose capacity constraints on the network, thereby escalating fees and delaying processing transactions.
Scalability limits of existing DLTs mean that the confirmation rate does not increase with the transaction rate, while the rate of waste rises with adoption. With Blockchain catching up with this reality, the time has come to portend towards a technology that can efficiently scale up with adoption — HelixTangle. The speed of transaction confirmations in the HelixTangle increases as more and more people and things use it. This feature is made possible by the HelixTangle‘s unique underlying data structure — Directed Acyclic Graphs.
Every transaction on the HelixTangle verifies two previous transactions. As a result, settlement times become much quicker once more users transact on the network. It is estimated that the HelixTangle can practically achieve at least 1000 confirmed transactions per second per node, a significant improvement on the paltry 3 transactions carried out by a traditional Blockchain.
Final Words…
Bitcoin took the world by storm when it’s price skyrocketed in late 2017. However, much of the focus has been on the price of cryptocurrencies rather than actual use case development. In his excellent article, Brian Schuster explains that the price crash of cryptos could have been prevented if adoption rates would have stayed up with the valuation. So why is the rate of adoption slow?
It is a known fact that new technologies in the market are slow to be adopted. However, distributed ledgers are not like social media applications on the internet, where users can quickly make the transition from one service to another. It’s an entirely different beast that requires educating the public from grassroot level. To gain mass adoption, companies in this space need to convince people to believe in their product. One cannot deny that cryptocurrencies did invest a lot of resources in generating hype for their product. However, businesses need to do a lot more than that to create long-term value. That’s where education comes in to play.
The HelixFoundation is responsible for building the HelixEcosystem and understands the need to explain the technology for all ages in the simplest possible way — from how transactions are sent on the HelixTangle to creating decentralized applications on the network. Over the coming months, Helix will release a series of blogs and interactive videos for this purpose. | https://medium.com/helix-foundation/from-blockchain-to-no-chain-fa15b56e6dc6 | ['Raj Hegde'] | 2019-05-24 12:38:02.773000+00:00 | ['Blockchain', 'Technology', 'Distributed Ledgers', 'Cryptocurrency', 'Bitcoin'] |
1,536 | INT Chain 4.0 ‘Titans’ Testnet Mining Competition — Phase 3 | Competition details
Phase 3 of the Testnet mining competition will start at block height of 500,000 and will end at block height of 1,000,000. We have allocated a total reward pool of 500,000 INT for the competition. The rewards will be split between these events:
1. Mining Race
A total of 250,000 INT rewards for this event. The number of INT you receive is determined by the number of blocks you mine. Here is the calculation method:
250,000 INT * (the number of blocks you mine / the total number of blocks mined during the competition period) = your INT reward
Note:
The total number of blocks does not include the blocks generated by the official node.
Any validator node that does not reach consensus in a cycle will be disabled for at least one cycle. The user will need to manually initiate an unblocking operation to become an active validator node again.
2. Lucky Block
We will set 40 lucky block heights and give 5,000 INT to each of the 40 nodes that mine those blocks.
Note: If the lucky block is generated by an official node, the lucky block is determined by the next block mined.
3. INT Pot
All entrants in the competition will receive a share of a pool of 50,000 INT.
Note: Official nodes not included.
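Putting the three reward events above together, a participant's total payout can be sketched as follows (a hypothetical helper, not official INT Chain code; the example node's numbers are made up):

```python
MINING_POOL = 250_000       # 1. Mining Race pool
LUCKY_BLOCK_REWARD = 5_000  # 2. Lucky Block reward per lucky height
POT_POOL = 50_000           # 3. INT Pot, shared by all entrants

def total_reward(my_blocks, total_blocks, lucky_blocks_hit, num_entrants):
    """Estimate one node's payout across the three reward events."""
    mining_race = MINING_POOL * my_blocks / total_blocks
    lucky = LUCKY_BLOCK_REWARD * lucky_blocks_hit
    pot_share = POT_POOL / num_entrants
    return mining_race + lucky + pot_share

# A node that mined 1,000 of 100,000 counted blocks, hit one lucky
# height, among 500 entrants:
print(total_reward(1_000, 100_000, 1, 500))  # 2500 + 5000 + 100 = 7600.0
```

Note that the pools themselves sum to the announced 500,000 INT: 250,000 (Mining Race) + 40 x 5,000 (Lucky Blocks) + 50,000 (INT Pot).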
Activity period
Starting block height: 500,000
Ending block height: 1,000,000
How to participate?
1. Visit the ‘Titans’ Testnet 4.0 technical document website: https://titansdocs.intchain.io
2. Follow the instructions to set up a mining node on Testnet 4.0
3. Feel free to reach out for support from the community in the official social media channels: https://t.me/INTDevelopment
Rules | https://medium.com/int-chain/int-chain-4-0-titans-testnet-mining-competition-phase-3-5890e8f9ff5c | [] | 2020-11-05 07:49:06.319000+00:00 | ['Mainnet', 'Blockchain', 'Technology', 'Mining'] |
1,537 | History of Girls Kode | These days, almost everything can be solved with a few clicks, anytime and anywhere. Technology and digital media are growing faster and more intensely than ever. By January 2020, more than half of the world's population was using the internet, and there were 3.80 billion social media users. This gives an insight into how big the technology and digital world is right now. But unfortunately, there are still not enough women in the industry.
History of Girls Kode
Access for everyone
There are not enough women in tech
Rina, the founder of Girls Kode, heard from many tech company headhunters at tech conferences that women programmers are rare. Some of the reasons why there aren't many women in technology industries are:
Bias of gender
Some people in the industry underestimate women, even when women have the skills and competence.
Role Model
There are few successful women in the technology and digital world to serve as role models.
Academic support
Most university and academy technology curricula are not relevant to the industry's needs.
Positive support system
There are still few support systems in place to help improve skills and knowledge.
Another course after graduating
Have you ever felt the need to take another course to keep up with your job?
Like many young people in big cities working in the technology industry, Rina Kusmalasari, the founder of Girls Kode, faced an obstacle many people encounter: much of the hard skill the industry needs is not taught at school, coding among them. So she had to take another course to enrich her skills, which unfortunately cost extra money. Right at that moment, the idea sparked. She and her friends decided to build a tech women's community where women could learn about the technology and digital world from inspirational speakers. She started with her friend Laurenza Claudia, who loves the digital world and works in it. After a lot of discussion and preparation, Rina and Lauren asked their good friends Adit (from creative) and Naufal (from tech) to help create Girls Kode.
Girls Kode focuses on college students and young workers
Girls Kode's members range from IT and digital workers to fresh graduates and students. To enrich members' knowledge and preparation, Girls Kode invites accomplished people who work in the technology industry as speakers. They share insights about the working environment and the skills community members need. Most of them are employees of big companies or founders of well-known startups. Beyond that, as a community, members are free to ask questions and share their views on technology and digital problems.
By holding classes, seminars, and discussions, Girls Kode focuses not only on members' career development but also on improving members' skills in technology and the digital world.
Girls Kode community as a support system!
The Girls Kode community was formed to help solve the problems women face in their careers.
How Girls Kode tries to solve these challenges.
Everyone is welcome!
Let’s join the movement through linktr.ee/girlskode
Written by Esa Difny | https://medium.com/girls-kode/history-of-girls-kode-4062b8cced09 | ['Girls Kode'] | 2020-12-05 12:24:29.920000+00:00 | ['Career', 'Skills', 'Digital', 'Girlskode', 'Technology'] |
1,538 | The Pulse by EllisX: Tech & Business Trends Worth Writing About — Dec 21 | Here we are in the final days of December. In what is typically a slow month, we’ve seen a whirlwind of activity this year, with exciting new trends catching our eye week after week. We don’t know if this will hold through Christmas and the New Year (who’s ready for 2021?), but we’ll continue monitoring the tech and business landscape for anything interesting. If we don’t spot anything noteworthy, this will be our last Pulse for 2020, and we want to take a moment to wish you a happy and healthy holiday season.
And without further ado, here are the trends that caught our attention during the past week:
Cryptocurrency may be moving into the mainstream
On December 16 Bitcoin’s price rose to more than $20,000 for the first time and is now on its way to $23,000. The cryptocurrency had been deemed worthless multiple times in the past and its price has been very volatile over the last couple of years. In addition, crypto exchange Coinbase filed to go public next year and is expected to be among the first IPOs of 2021. Coinbase was most recently valued at $8 billion and its IPO, together with Bitcoin’s doubling in price this year, signal that investors may finally be ready to embrace crypto, elevating its status as an investment opportunity.
Can the Passion Economy help creators earn sustainable income?
The Creator Economy has been credited as a way to uplift people and offer a path into the middle class for those who aspire to create and cater to specific audiences. We’ve seen many prominent journalists leave established publications to launch their own Substacks, but as Josh Sternberg of The Media Nut wrote a few months ago, the reality is that the majority of them are not able to make a living out of their writing. While this year has seen an uptick in creator tools, the majority of creators still struggle to generate sustainable income. However, according to Li Jin, we’re just scratching the surface of the Passion Economy’s potential, and if we put the right support systems in place, it provides a solid pathway to a primary or secondary income stream that would uplift and empower the majority of creators.
Is Big Tech under a threat?
The idea of breaking up Big Tech has become quite popular with regulators in the last couple of years. This year alone the CEOs of Twitter, Facebook and Google were called to testify in front of Congress. And the EU has been leading its own fight against Big Tech. This trend is set to continue into 2021. Last week, Texas filed a lawsuit against Google, accusing the behemoth of anticompetitive behavior. Google is also under a lawsuit from the DOJ for the same reason. In the meantime, Massachusetts filed a complaint against trading platform Robinhood. While Robinhood is not quite a Big Tech company, this goes to show that large unicorns are not immune to regulators’ scrutiny — something that might become more common next year.
And if you’re writing a story about any of these trends, you can always find qualified experts available to comment on them here. | https://medium.com/ellisx-blog/the-pulse-by-ellisx-tech-business-stories-worth-writing-about-dec-21-7dccd3ed74ce | ['Leia Ruseva'] | 2020-12-21 16:22:15.807000+00:00 | ['Trends', 'Creators', 'Business', 'Cryptocurrency', 'Technology'] |
1,539 | Bad News for Shoplifters: AI can Now Spot You Even Before You Steal | Bad News for Shoplifters: AI can Now Spot You Even Before You Steal
Facial Recognition Software and Artificial Intelligence
According to Gartner research, shoplifting has been on the rise in retail stores in the USA and the UK, where theft cases continue to climb despite installed security cameras. Retail stores continue to suffer losses from shoplifting¹, and artificial intelligence is offering timely assistance.
Working with facial recognition technology, artificial intelligence² uses algorithms to determine shoppers' behavioral patterns in a bid to reduce theft. Vaak⁹, a start-up from Japan, is leading the way: the company recently developed AI-run systems that monitor suspicious behavior among shoppers and alert retail store managers through their smartphones.
Photo by Hanson Lu on Unsplash
While AI is usually envisioned as a smart personal assistant, the technology is accurate at spotting weird behavior. Algorithms³ analyze security-camera footage and alert staff about potential thieves via a smartphone app. The goal is prevention; if the target is approached and asked if they need help, there is a good chance the theft never happens.
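The alert loop described above, scoring footage frame by frame and notifying staff when suspicion stays high, can be caricatured in a few lines. This is purely illustrative; real systems like Vaak's are far more sophisticated, and the threshold, window size, and scores here are invented:

```python
from collections import deque

ALERT_THRESHOLD = 0.8  # invented value
WINDOW = 5             # frames of sustained suspicion before alerting

def watch(frame_scores, threshold=ALERT_THRESHOLD, window=WINDOW):
    """Yield frame indices where the rolling average suspicion score
    over the last `window` frames crosses the threshold."""
    recent = deque(maxlen=window)
    for i, score in enumerate(frame_scores):
        recent.append(score)
        if len(recent) == window and sum(recent) / window >= threshold:
            yield i  # in a real system: push a notification to the staff app

scores = [0.1, 0.2, 0.9, 0.9, 0.85, 0.95, 0.9, 0.2]
print(list(watch(scores)))  # [6]
```

The rolling window is the key design choice: a single noisy high-score frame does not trigger an alert, only sustained suspicious behavior does, which supports the prevention-first approach the article describes.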
Overview
Vaak and Third Eye¹⁰ are new start-ups making news with AI for shoplifting prevention, and in 2020, more retailers are using their technology to bolster security. Based in London, Third Eye uses AI to prevent shoplifting by coordinating with store owners via instant alerts.
Let us first start with Vaak from Japan, which is making progress in theft management in the retail sector.
Vaak has developed #artificialintelligence software that can catch shoplifters in the act by alerting staff members so they can stop thieves before they even leave the store. Vaak used hours of surveillance data⁴ to train the system to detect suspicious activity from many behavioral cues, including how people walk, hand movements, facial expressions, and even clothing choices.
Vaak claims that shoplifting losses dropped by 77 percent during a test period in local convenience stores, demonstrating how this technology could help reduce global retail losses from shoplifting, which hit $30 billion three years ago. Furthermore, implementing AI-based shoplifting detection technology⁵ would not significantly increase costs, because security cameras, which make up most of the required hardware, are usually already in place at retail stores.
AI Working with Facial Recognition Technology
Vaak’s technology demonstrates how artificial intelligence can work with facial recognition software, which scans faceprints: a code unique to an individual, just like fingerprints.
Unlike fingerprints, faceprints can be scanned from a distance, which opens the possibilities of facial recognition’s applications in fields such as security and law enforcement.
Several local public security bureaus in China have started implementing the use of #augmentedreality glasses⁶, created by the Xloong company, which are able to cross-reference faces against the national database to spot criminals.
Human Behavior and AI Evaluation
The ability to detect and analyze unusual human behavior also has other applications. Vaak is developing a video-based self-checkout system, and wants to use the videos to collect information on how consumers interact with items in the store to help shops display products more effectively.
Photo by Lucas Santos on Unsplash
Beyond retail, Tanaka envisions using the video software in public spaces and on train platforms to detect suspicious behavior. Third Eye has been approached by security management companies looking to leverage its AI technology.
This is not the first time AI has been used to fight retail shrinkage. Last summer, another Japanese company, the communications giant NTT East, launched AI Guardsman, a camera that uses similar technology to analyze shoppers’ body language for signs of possible theft. AI Guardsman’s developers said the camera cut shoplifting by 40 percent.
Taming Losses with Artificial Intelligence
Because it involves security, retailers have asked AI-software suppliers such as Vaak and London-based Third Eye not to disclose their use of the anti-shoplifting systems.
The assumption here is that several major store chains in Japan have deployed the technology. Vaak has been approached by the biggest publicly traded convenience store and drugstore chains in Japan.
Big retailers have already been adopting AI technology⁷ to help them do business. Apart from inventory management, delivery optimization and other enterprise needs, AI #algorithms run customer-support #chatbots on websites. Image and video analysis is also being deployed, such as Amazon.com Inc.’s Echo Look, which gives users fashion advice.
The Ethical Questions we need to Ask
There is always an evil side to all technologies especially when it involves Artificial Intelligence, as often criticized. #Technology has always been about adding convenience to and safeguarding human lives, but what it turns in to always depend on who uses it and for what.
AI has always been a favorite subject of critics, even among pioneers of the technology, and Vaakeye was no less of a target. Many fear that the technology will intrude on privacy.
Installing artificial intelligence and facial-recognition software⁸ does raise some questions about the ethics of the technology, especially when it comes to customer consent.
Customers are typically willing to sacrifice some privacy for convenience when they are aware the technology is being used. Most retail stores already post signs about the presence of security cameras, so resolving this concern could be as simple as adding a notice about facial recognition to these signs.
Photo by Rich Smith on Unsplash
Despite how far science has come, AI does not truly think like a human being just yet. This could lead to a bias in a system’s algorithm. However, just as artificial intelligence can be inadvertently given a bias, it has the potential to be less biased than a human being. This is simply a case of auditing the algorithms to root out any potential bias before training the artificial-intelligence system.
Time to Embrace AI in Retail
Artificial intelligence in retail is not a hypothetical anymore. Today, AI algorithms run inventory management, delivery optimization, and customer-support chatbots on websites, which we are all too familiar with.
When paired with #facialrecognition software, artificial intelligence can even eliminate the need of salespeople, best shown in Amazon’s self-service brick-and-mortar stores that use image and video sensors to shape the customer experience.
With artificial intelligence entering the retail loss prevention sphere, we are going to see great change in how our departments catch shoplifters and combat retail shrinkage.
Do you think using AI to monitor shoplifters is a good idea? Share your opinions below to contribute to the discussion on Bad News for Shoplifters: AI can Now Spot You Even Before You Steal
Works Cited
¹Shoplifting, ²Artificial Intelligence, ³Algorithms, ⁴Surveillance Data, ⁵Shoplifting Detection Technology, ⁶Augmented Reality Glasses, ⁷AI Technology, ⁸Facial-Recognition Software
Companies Cited
⁹Vaak, ¹⁰Third Eye
More from David Yakobovitch:
Listen to the HumAIn Podcast | Subscribe to my newsletter | https://medium.com/towards-artificial-intelligence/bad-news-for-shoplifters-ai-can-now-spot-you-even-before-you-steal-1e778ba002ec | ['David Yakobovitch'] | 2020-11-10 20:59:44.226000+00:00 | ['Artificial Intelligence', 'Life', 'Business', 'Technology'] |
1,540 | Advancements in 3D Printing and the Future of Additive Manufacturing | 3D printing, the primary form of additive manufacturing, has revolutionized the way parts are produced and prototyping is done. Manufacturers and product designers have seen a significant improvement in the speed of prototyping, design, and production since vat polymerization was introduced. Many technologies available today provide a wide range of options for both designers and producers. The central question, as the technology improves, is whether 3D printing is ready for production.
What is additive manufacturing?
Additive manufacturing, or 3D printing, is a method in which material is selectively added to create physical parts. This differs from traditional computer numerical control (CNC) processes, which selectively remove material to create desired shapes. Although 3D printing has existed since the 1970s, it was not popularized until the 2000s, when the open-source movement and crowdfunding made affordable printers possible. Some technologies have become accessible to the masses: you can now buy a decent 3D printer for $200 that meets or exceeds makers' standards. The main 3D printing processes are Fused Deposition Modeling, Vat Polymerization (SLA and DLP), Material Jetting, and Powder Bed Fusion; Binder Jetting, Directed Energy Deposition, and Sheet Lamination are also available. Each technology has its advantages and disadvantages, so it is a good idea to prototype with a service bureau that offers a variety of printing technologies in order to get the best part for your application.
Fused Deposition Modeling (FDM): A heated extruder melts and dispenses plastic filament layer by layer onto a build bed.
Vat Polymerization (SLA/DLP): UV light selectively cures liquid resin in a tank. Resin is a photosensitive plastic that cross-links and hardens under UV exposure.
Material Jetting: Material is selectively jetted from a printer head, and then UV light cures the entire bed. The part moves down one layer, and additional material is added.
Powder Bed Fusion: A layer of metal or plastic powder is laid on a flat surface in a heated oven. A laser then sinters the powder to create a solid part. After the layer cools, more powder is added on top.
Binder Jetting: Another powder-based system. It uses a printer head, similar to an inkjet printer's, to deposit a binding solution onto a powder bed. As in SLS, the part lowers a layer while more powder is added on top.
Directed Energy Deposition: As with SLS, the plastic powder is heated in a chamber. However, instead of using a laser for fusion, an inkjet array selectively dispenses a fusing agent, and heat lamps bring the powder to its melting point.
Sheet Lamination: Compressed sheets of material are bonded together to form a 3D part.
Are 3D printers ready for production?
3D printing has revolutionized prototyping and reduced the time required for product development for decades. Rapid advances in additive manufacturing have led to new materials and processes that are more efficient, reliable, and durable. The question remains, however, whether 3D printing services are ready for production. Although the technology will never match the speed and cost-effectiveness of injection-molded parts, it can unlock the constraints of traditional manufacturing processes and allow for new products. Additive manufacturing is happening today, and there are many examples of 3D-manufactured parts at scale.
Design
This has changed the way we think about design. Engineering students had access to 3D printers during their education, and now graduates incorporate additive manufacturing processes into part designs. Sheet metal forming and injection molding are no longer the only options. This new approach allows multiple machined and molded parts to be combined into a single, more complex 3D-printed part, often at lower cost. GE engineers reduced 20 parts of a fuel injection nozzle to one part, and have since mass-produced 30,000 units using metal powder bed fusion machines. Software advancements have made it possible to create complex structures that are both efficient in material usage and precise in their performance characteristics.
Operational software
Operational software has also seen significant improvements that have reduced the cost and improved the performance of additively manufactured parts. Much of the work involved in creating a build file can now be automated: packing parts into a three-dimensional build tray used to take considerable skill and can now be done in a few clicks. The slicing algorithms that break three-dimensional parts down into 2D layers and then create tool paths, the instructions for the printer or laser, have been improved to produce reliable and predictable parts. Service bureaus can now price additive manufacturing work using cost analysis as a standard feature.
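At their core, the slicing algorithms mentioned above reduce a part to a stack of cutting planes. A toy version of that first step might look like this (illustrative only, not a production slicer; the function name and defaults are invented):

```python
import math

def layer_heights(part_height_mm, layer_thickness_mm=0.2):
    """Return the z-height of each 2D slice for a part of the given height.
    A real slicer then intersects the mesh with each plane and converts the
    resulting 2D contours into tool paths for the printer or laser."""
    n_layers = math.ceil(part_height_mm / layer_thickness_mm)
    return [round((i + 1) * layer_thickness_mm, 6) for i in range(n_layers)]

# A 10 mm tall part at 0.2 mm layers needs 50 slices:
zs = layer_heights(10.0)
print(len(zs), zs[0], zs[-1])  # 50 0.2 10.0
```

Layer thickness is the basic quality/speed trade-off: thinner layers mean smoother parts but proportionally more slices, and therefore longer print times.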
Quality Control
Up until recently, quality manufacturing and consistency in additive manufacturing were barriers to mass production. Quality management is essential in the automotive and aerospace industries, where lives are at risk. A new set of tools is available to manage quality. Some of these tools are currently being developed by makenica. | https://medium.com/@makenica/advancements-in-3d-printing-and-the-future-of-additive-manufacturing-bb7577f878a | [] | 2021-12-14 12:46:25.518000+00:00 | ['3D Printing', '3d Printing Service', 'Additive Manufacturing', '3d Printing Technology'] |
1,541 | Top 4 Content Migration Vendors for the SharePoint Platform | SharePoint Online is a collaborative platform that maintains the integrity of your data online. It is a document management and storage platform that lets you store, create, organize, and share information under one roof. Migrating content to SharePoint is becoming increasingly popular because of its numerous benefits. A detailed migration plan and strategy help you migrate your data through various channels and tools. Before choosing a specific vendor, your business should consider the following questions for a successful migration.
What content and data form you wish to migrate (customization and nature of migration)
When do you want to migrate (time-bound constraint)
How do you want to migrate (Vendor and tools)
What are your governance objectives?
What is your migration strategy?
What is your situational analysis of the current environment?
Having answered the concerns mentioned above, you can proceed with the migration. After thorough contemplation on the process’s ins and outs, you can decide on the vendor perfect for migrating your content to the SharePoint platform.
Xavor
Xavor, a brand name for technological wonders, has been around for quite a while now. From consulting services to migrating and maintaining your online data integrity, it offers all sorts of migration services. Xavor relies on functions that allow the smooth movement of data from one platform to another. Through a detailed and sophisticated SharePoint migration architecture, it supports data movement at scale and can migrate large amounts of content. Dealing efficiently with big data, Xavor supports migration in waves or levels, and its ability to manage complex enterprise data makes it a strong choice for SharePoint migration. Its SharePoint migrator tool, known as xspm, supports content migration between the wss 3.0 and moss 2007 platforms.
Codeplex
Codeplex is another vendor known for its open and free space. It provides open-source hosting for big projects and enterprise data, with room for developing and testing SharePoint services. Along with a stable and detailed SharePoint migration framework, its primary services include access to Microsoft Office SharePoint portal server 2007, 2010, and 2013, which work as import or migration tools. Codeplex also has a download area offering various migrators that help with exporting and importing data, and it is open to everyone for use.
Dell
Another trusted vendor for SharePoint migration is the tech company Dell. Having ventured into SharePoint migration architecture recently, Dell offers a fair share of migration services. Thanks to its migration capabilities and useful tools, it has in a short span secured a position among Fortune 500 companies known for SharePoint migration. Dell's SharePoint suite provides options for migration to and from on-premise deployments in Office 365, taking care of complex processes that would otherwise take ample time.
Vamosa
With its growing interest in SharePoint migration, Vamosa has been making waves in online technology solutions for a long time. It can migrate content from multiple sources and sites to the SharePoint platform, and the ability to move content from any location to a targeted platform makes it one of the best migration vendors. Bulk migrations, Notes migrations, and quick migrations are its forte. As one of the oldest vendors in town for migration tools, it has captured a fair market share.
If you are looking for a smooth and brisk migration process to and from any platform, the vendors mentioned above are your way forward. It is always best to leave SharePoint migration in the hands of some professionals to avoid scammers and frauds. | https://medium.com/@adeel-javed/top-4-content-migration-vendors-for-the-sharepoint-platform-e32f8bd7eee1 | ['Adeel Javed'] | 2020-12-08 06:28:33.113000+00:00 | ['Migration', 'Technology', 'Sharepoint Online', 'Sharepoint', 'Writing'] |
1,542 | Artificial Intelligence and It's Danger? | Technology has been improving more and more, and various new and advanced features have been added to human life. One of them is artificial intelligence. According to Euchnar, AI is a computer-controlled robot with functions such as perceiving its surroundings and learning from previous experience (2019, p. 1). In recent decades, AI's place in everyday life has been debated because of developments related to it. Despite the facilities AI offers, it remains quite controversial. Although it has been argued that AI use in everyday life harms people, it provides various benefits: it expands what the health sector can do, and it creates more jobs than it displaces while improving working conditions.
The first argument against AI use in human life is that AI can make mistaken decisions; yet AI tools provide more effective results than humans do. Those who consider AI dangerous point to its lack of practical judgment in the health sector. They maintain that changing circumstances may require improvisation at short notice, and they further argue that the probability of AI applying inaccurate treatments is not remote.
However, contrary to risk of inaccurate treatments, implementation of AI in health sector has good news. PWC network states that, according to the American Cancer Society, there is a considerable proportion of mistaken results in mammograms. It brings about fifty percent of healthy women being informed they have cancer. On the other hand, AI is 30 times quicker and has 1% error rate (2017). It proves that AI gives feed back faster and better. Additionally, AI technology facilities to developing new drugs which could be key for terminal diseases. Moreover, involving AI for improving new drugs decreases the disproportionate wasting of money. Plecher highlights that, discovering a new drug completely costs more than $350 million on average. AI use for drug researching is new, and it has a great potential to cut not only the time to research but also their cost (2017). It is clear that AI use is quite beneficial and provides speed and accuracy.
The second dispute regarding AI use in human life is that AI poses a risk by displacing jobs; nevertheless, AI provides not only new jobs but also improved conditions. People who believe AI is a risk to current jobs say it can displace humans from workplaces. They argue that it may cause chaos in the future because of employment imbalance.
According to a chart on automation and its probability of replacing jobs published by the BBC, AI will replace around 70% of basic-level jobs, while more complicated jobs, such as dental practitioners and senior professionals in education, face a displacement risk of around 20% (2019). Nonetheless, far from being only a risk to jobs, AI can create new areas of work, and working conditions will improve because AI can handle heavy labour. Although AI will displace some jobs, it does not pose a real risk, because it will bring more jobs than it takes. As Hiltbrant states, more than 130 million new jobs will be created by algorithms and machines while 75 million jobs are displaced (2020, p. 4). Additionally, AI will play a big role in reducing work accident rates, since it will handle high-risk jobs while the majority of people move to higher-level jobs. Therefore, AI is not a risk to humans in workplaces.
In conclusion, AI use as an assistive technology clearly facilitates human life by providing more effective results than humans can and by creating numerous, better jobs. As the arguments above show, AI is not a risk to humanity; it is a huge benefit that improves human life, and it should be used across various sectors. | https://medium.com/@salih-abdullah/artificial-intelligence-and-its-danger-30d4ca5363a2 | ['Salih Abdullah Şendil'] | 2021-02-07 15:46:47.242000+00:00 | ['Technology', 'Artificial Intelligence', 'Post Modern Philosophy', 'Risk Management'] |
1,543 | Morpheus Labs | An Update On Token Movement | Since late September 2020, the Morpheus Labs team has committed to publishing a bi-monthly token movement update to our token holders and community members. In order to facilitate transparency and progress of the project, we will also be making use of this opportunity to address queries on the existing and upcoming developments with regard to Morpheus Labs and our flagship product — Morpheus Labs SEED (BPaaS Version 2.0).
Welcome to our December 2020 Token Movement Update.
Overview
Morpheus Labs takes a very long term view with respect to the success of our ecosystem and our role in supporting it. Since our last Token Movement Update, we have made rapid progress in the background with developments that not only strengthen our product, but have also ensured that our Education campaign for the broader community is on track to realization.
Importantly, our immediate goals with our stake are to help ensure the liveness, utility and security of MITx. Our secondary goal is to promote the growth of new applications and use cases on the platform.
Company Token Wallet Updates
Earlier in September, we published our Tokenomics update to provide an insight to the token distribution and our committed token burn schedule until 4th October 2022.
Key Metrics
MITx (ERC-20 )Contract Address: 0x4a527d8fc13c5203ab24ba0944f4cb14658d1db6
Circulating Supply (1st Dec 2020): 391,903,057 MITx
Total Token Supply (1st Dec 2020): 746,999,995 MITx
Total Token Supply (4th Oct 2022): 700,000,000 MITx
Total Token Burn (4th Oct 2022): 300,000,000 MITx
*Burnt tokens split into Burn Address I (Pre-2019) and Burn Address II (Post-2019)
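The supply figures above imply a simple cross-check. A minimal sketch in Python (the originally minted supply of 1,000,000,000 MITx is an inference from the committed 300M total burn leaving 700M, not a figure stated in this update):

```python
# Sanity-check the MITx supply figures quoted in the Key Metrics above.
# Assumption: the originally minted supply was 1,000,000,000 MITx,
# implied by a 300,000,000 total burn leaving 700,000,000.

SUPPLY_DEC_2020 = 746_999_995   # Total Token Supply, 1 Dec 2020
TARGET_SUPPLY   = 700_000_000   # Total Token Supply, 4 Oct 2022
TOTAL_BURN      = 300_000_000   # committed cumulative burn by 4 Oct 2022

def remaining_burn(current: int, target: int) -> int:
    """Tokens still to be burnt to reach the target supply."""
    return current - target

def implied_original_supply(target: int, total_burn: int) -> int:
    """Original supply implied by the target supply plus the full burn."""
    return target + total_burn

print(remaining_burn(SUPPLY_DEC_2020, TARGET_SUPPLY))      # 46999995 MITx still to burn
print(implied_original_supply(TARGET_SUPPLY, TOTAL_BURN))  # 1000000000 MITx
```

This matches the schedule: roughly 47M MITx remain to be burnt between December 2020 and October 2022.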
Company-held wallets (and Balance as of 1st Dec 2020)
Reserve I and Reserve II (7.9% — 79,000,000 MITx)
Team (10% — 100,000,000 MITx)
Foundation (10% — 100,000,000 MITx)
Platform/Treasury (1.28% — 12,818,575 MITx)
Listing (3.4% — 33,999,995.7642 MITx)
Back in October 2020, we initiated the process of splitting our Listing wallet into two wallet addresses. The primary reason for splitting it into two separate cold-storage wallets is to ensure the security of the company-held holdings, as well as to facilitate upcoming usage, as in the case of the Listing wallet.
Today, the original Listing I wallet (3.4% — 33,999,995.7642 MITx) and the new Listing II wallet (1.7% — 16,686,218.33 MITx) are both allocated, with the former reserved for tokens that are earmarked for the upcoming token burn.
With our Token Burn Schedule well under way, 33,999,995.000 MITx of the remaining tokens meant for the token burn have also been moved into the Listing I wallet.
Since September 2020, the company has withdrawn 25,410,722.87 MITx from the Listing wallet allocation for marketing and development expenditures. This amount includes 13,000,000.00 MITx tokens which were sent to the eater address for our 30th September 2020 token burn.
Going forward, we are accelerating the push for the adoption of MITx as well as Morpheus Labs SEED. Tokens from the Foundation and Listing wallets will be used to execute our upcoming platform development, campaign and promotions.
We will use tokens:
To participate in consensus and assist in securing the network, though our tokens will never represent more than 49% of the circulating supply. Tokens are routinely redistributed among nodes to maintain voting at the appropriate level;
To provide incentives to contributors and application developers through token grants, competitions, and investments;
To assist in the development of the financial ecosystem by encouraging growth and liquidity through the support of partner solutions including market-making.
In addition:
We will re-balance our accounts from time to time, creating new addresses or removing old addresses, and update this page as needed.
We anticipate selling some MITx from time to time through third-party-run, structured selling plans to fund development initiatives. Details on any sales will be published in future transparency statements.
These points are important as you may see tokens move and our total stake fluctuate as we provide support to the community. We will publish a periodic summary of token movement in addition to wallet updates to ensure transparency.
Community, Partnerships & Exchange Listings
In an effort to add further customization capabilities for our SEED users, we are excited to work closely with our strategic partners to make it ever easier for users and enterprises to build quickly and effectively. We are also in the final stages of discussion with prospective partners, enterprises and organisations. We are grateful that, as the blockchain ecosystem continues to undergo huge transformation in 2020, more entities have recognised blockchain's potential and are looking to deploy blockchain-based applications for their respective businesses.
Finally, we have completed our marketing planning, assessment and research with a strong focus on our community growth and product awareness for Q4 of 2020 and beyond. We are also in our final stages of shortlisting the next exchange to list MITx for our existing and the broader community. More information will be provided soon.
That sums up our bi-monthly update for December 2020. We are eager to hear feedback and suggestions from our community. Please email info@morpheuslabs.io for general inquiries or comments, or engage with us in our official Telegram group (https://t.me/morpheuslabs).

Source: https://medium.com/morpheus-labs/morpheus-labs-an-update-on-token-movement-f6f0b3e70d52 (Morpheus Labs Team, 2020-12-02; tags: Developer, Cryptocurrency, Technology, Blockchain)
Self-Driving Cars Are Out. Micromobility Is In.

Waymo, a division of Alphabet, has long been a leader in autonomous vehicle technology. Based on the limited data the company has released, its vehicles have driven the most miles in self-driving mode and have the lowest rate of disengagement (moments when humans have to take over).
Waymo CEO John Krafcik. Source: Waymo
But Waymo’s CEO, John Krafcik, has admitted that a self-driving car that can drive in any condition, on any road, without ever needing a human to take control—usually called a “level five” autonomous vehicle—will basically never exist. At the Wall Street Journal’s D.Live conference, Krafcik said that “autonomy will always have constraints.” It will take decades for self-driving cars to become common on roads. Even then, they will not be able to drive at certain times of the year or in all weather conditions. In short, sensors on autonomous vehicles don’t work well in snow or rain—and that may never change.
Such a statement from someone leading a self-driving vehicle company seems surprising. But given what's happened throughout 2018, it shouldn't be. A number of negative stories about self-driving cars permeated the year's coverage, including the deaths of those using Tesla's Autopilot technology. The effect of an Uber self-driving car killing a woman in Tempe, Arizona, cannot be overstated. That singular event broke through the largely uncritical mainstream coverage of autonomous vehicles; it showed us how far the technology really had to go before it could be safe.
No longer does anyone credibly claim that self-driving cars are the future of transportation.
The initial event was bad enough: A self-driving car failed to slow down to avoid hitting a person and a safety driver was too distracted to notice. But as the National Transportation Safety Board investigated the incident, we learned that the autonomous driving system was unable to determine that the object in front of it was a person at all. When it finally did correctly determine that it had to stop—just 1.3 seconds before impact—it couldn’t because emergency braking had been disabled, and there was no way to alert the safety driver.
Leaked information showed that Uber safety drivers had to intervene in their self-driving vehicles every 13 miles (21 km) compared to every 5,600 miles (9,000 km) on average for Waymo’s vehicles, and the team was putting their test vehicles in unsafe situations to try to hit impossible deadlines. It was a complete mess, and eventually blew up future plans among ride-sharing apps that depended, in part, on autonomous vehicles to reduce labor costs.
Source: Navigant Research
Uber had to completely halt its autonomous vehicle testing, and it was already far behind its competitors. It pulled out of Arizona completely, laid off most of its safety drivers, and only reapplied to resume testing in Pittsburgh near the end of 2018—almost eight months after the fatal crash.
But between March and November, everything changed. No longer does anyone credibly claim that self-driving cars are the future of transportation, and Uber has even shifted its focus to scooters, e-bikes, and turning its app into the “Amazon for transportation.”
At the beginning of 2018, it would have been unimaginable for the CEO of Waymo to publicly acknowledge that self-driving cars will never work in all conditions. Now, it's a statement of fact that anyone familiar with the industry already knows. But while the hype about self-driving cars is over, there's a new vision for urban transportation that's much more inspiring—and everyone seems to want in on it.

Source: https://medium.com/s/story/self-driving-cars-will-always-be-limited-even-the-industry-leader-admits-it-c5fe5aa01699 (Paris Marx, 2019-01-15; tags: Technology, Self Driving Cars, Cities, Future, Transportation)
Has Anyone Seen My Private Key?

Restore — my private key
About a year ago, while I was between jobs, I was offered an opportunity to write for an online cryptocurrency portal. I was paid for my articles in BTC, and since the amount wasn't much, I kept all my BTC in Breadwallet on my iPad. At the time I created the new wallet, I was staying at my ex-in-laws' place.
Fast forward a few months: I got a job halfway across the globe. I was excited to go. I packed all my stuff, including my iPad, but I had completely forgotten about my wallet key. I thought I had it stashed somewhere safe in my then brother-in-law's house, so it should have been fine. If my iPad got stolen or crashed, I should be able to recover the key.
Unfortunately, my iPad did crash, and I had it repaired. All the information was lost, and there was no backup. I immediately called my then brother-in-law to help me search for my paper key, which I believed was somewhere in the living room. He couldn't find it, and I thought, well, that's okay. When I got back I could search for it myself; there was no need to worry.
Fast forward a few more months, and family drama ensued. It was no longer possible for me to ever visit his house again. At the time of writing this article, I have accepted that my coins are lost forever.
It can really be disheartening for anyone who ever lost their wallets and keys. For me, it feels like I got run over by a bicycle. I lost merely 0.3 bitcoins which I earned from writing articles. I imagine others who lost more than I did would feel like they got run over by a car or worse, a bus or a truck.
Being the (woeful) one who lost his key and wallet, I couldn't help but think that this key-and-wallet system is inherently flawed. Maybe I am the one to blame for not safeguarding my own key, but for whatever reason, losing or misplacing one's key is very easy. I imagine I'm not the only unfortunate one with a lost key; millions of others may have walked in my shoes.
Once the key is lost, the crypto it controls is lost forever. My 0.3 bitcoin is now floating somewhere in the virtual world, along with James Howell's 7,500 bitcoins, which he accidentally lost when he threw his old hard disk away. A study shows that around 4 million bitcoins have been lost due to lost keys. If we estimate bitcoin's value at USD 10,000 each, the value of lost bitcoins stands at USD 40 billion.
There are many methods we may use to keep our keys safe. The most widespread method is probably writing the key on a piece of paper and keeping that piece of paper somewhere. We call that a paper key. As a matter of fact, a paper key is terribly insecure, because paper is highly perishable. It can easily be misplaced, discarded, burnt, torn or destroyed for a myriad of reasons. I am sure many of us do not take sufficient precautions to keep our keys safe.
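One way to reduce the single-point-of-failure risk of a paper key is to split the secret into several shares stored in different places. Below is a minimal sketch of an n-of-n XOR split; the function names and the 32-byte stand-in key are illustrative, and a real wallet backup should use an audited scheme (such as Shamir's Secret Sharing or a BIP39 mnemonic), since with an XOR split losing any single share loses the key:

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; ALL n shares are needed to recover it.
    The first n-1 shares are random; the last is the XOR of the key with them.
    Trade-off: an attacker needs every share, but so do you."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = bytes(key)
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recover_key(shares: list[bytes]) -> bytes:
    """XOR all shares back together to reconstruct the original key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)        # stand-in for a 256-bit wallet key
shares = split_key(key, 3)           # e.g. home, office, relative's house
assert recover_key(shares) == key    # all three shares reconstruct the key
```

Each share on its own is indistinguishable from random noise, so a burglar finding one piece of paper learns nothing about the key.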
I recounted my misfortune to a close friend of mine, and he helped me search for my paper key, but his efforts were in vain. It was a subject we brought up a lot in our conversations, and that was the inspiration behind the idea of using a blockchain system to keep keys safe. Infinitus is a real solution to a very real problem the crypto community is facing. It's a revolutionary yet simple approach to keeping millions of keys safe.

Source: https://medium.com/infinitustoken/has-anyone-seen-my-private-key-6b0f682e2a26 (Willie Tan, 2018-06-04; tags: ICO, Bitcoin, Blockchain Technology, Cryptocurrency, Blockchain)
Electric Scooters… Really?

What's next, electric roller skates?
In the run-up to the Christmas of the worst year I have ever lived through, I can't help but hear murmurs of one topic: presents. The antidote to the past year of disaster after catastrophe after crisis. But what will be on the list this year? Phones, iPads, AirPods… electric scooters?
So, in an age of increasing reliance on quick fixes (quick transport, quick food, quick convenience) is now the time to start buying into yet another unnecessary convenience product? I think not.
With issues concerning electric scooters ranging from obesity rates to carbon emissions, I want to know how the electric scooter will fit into the transport system. Will they start to replace bikes as the easier, sweat-free way to travel? Will they invade the pavements, creating a new pedestrian terror, or will their users become a new menace for road users? And, most importantly, what are they adding to the commuting mix?
As more and more sugary, energy-dense foods flood our shelves, while physical activity slips further down the list of priorities in our lives, I can't help but feel that another sedentary product will only further exacerbate the strain of obesity on healthcare everywhere. I can't see e-scooters replacing cars or buses (they are currently illegal to ride on public roads, pavements or cycle lanes), so the only thing I can imagine them changing is swapping a quick walk to the corner shop for a quicker scoot to the corner shop. Who would have thought the corner shop was such an emergency? While the immediate impact of this seems positive, the wider implication is one thing: the fall of physical activity. The Office for National Statistics predicts that by 2050 the wider costs to society as a result of obesity will reach £49.9 billion per year, the only remedy for which, to my mind, is a step up in exercise. The Physical Activity through Sustainable Transport Approaches (PASTA) project has shown that a daily commute by bike rather than by sedentary transport can lower BMI, finding that, on average, men who took up cycling lost 0.75 kg, with a decrease of 0.24 in BMI.
Moreover, I can’t help thinking that another dangerous pavement-road hybrid will only dissuade other, greener healthier modes of transport like cyclists and pedestrians from travelling in this way. Especially in london, we are in desperate need of more people to cycle on their commutes, rather than exacerbating the Londoner’s menace- rush hour. In the UK, 16.7 million people commute to work every day; all in all accounting for 18% of UK carbon emmisions. Whilst the number of people depending on privately owned cars is falling (especailly in the capital) it’s no suprise to hear that in a poll conducted by Cycling UK 43% of people were put off cycling by dangerous road use.
The idea of green, eco-friendly electric scooters is a fond argument of scooter supporters everywhere; however, researchers at North Carolina State University discovered that traveling by scooter produces more greenhouse gas emissions per mile than traveling by bus or moped, simply because of the carbon-intensive manufacturing process. Another subtle but nonetheless significant carbon-producing component of electric transport as a whole is the source of the energy. The product is only as carbon-clean as its energy source, and because the UK energy mix is somewhat heavy on fossil fuels (54% in 2016), many electric scooters squander energy (and thereby fill the atmosphere with more CO2) on short, unnecessary journeys.
And yes.. I’ll admit I can spot a glimmer of fun in e-scooters, but really, what are they actually doing better than the options already available to us today? Can you see the unique selling point? If you can, I’m begging you- enlighten me. What happens when the fad is over and all we’re left with is a plethora of e-scooters gathering dust in sheds everywhere? Landfill here we come! In contrast to what you are probably thinking.. I do appreciate a bit of fun, useful; no, ageless; definitely not, but fun??? absolutely. I love fun things like a chick flick i’ll read in a day and leave gathering dust in my cellar for the next century, or another pair of fluffy socks i’ll only wear once. But not £300 machines meant for overaged adolescents… Sorry Ashton Kutcher. | https://medium.com/@h-amiswoods/electric-scooters-really-84e24c068d5e | ['Hetty Amis-Woods'] | 2020-12-24 09:28:25.609000+00:00 | ['Urban', 'Urban Transportation', 'Scooters', 'Technology', 'Transportation'] |
Good news. Editorial update. ILLUMINATION classic look is back.
Dear readers and writers,
After I sent the previous announcement about the situation, one of our editors with a strong technical background (Arthur G. Hernandez) resolved it.
This is great news for us.
Congratulations on getting our publication state back.
We still don’t know the root cause. However, we are happy with the outcome.
I will ask the Medium help-desk to close the case.
I wish you all the best.
Happy writing and reading.

Source: https://medium.com/illumination/good-news-editorial-update-illumination-classic-look-is-back-761e7b67c61f (Dr Mehmet Yildiz, 2020-12-17; tags: Illumination, Writing, Business, Technology, Self Improvement)
Webs of Deceit: Alternative Facts, False Narratives and Toxic Politics
It is becoming harder to distinguish between truth and lies. Soon, we may not be able to, in a future where we will choose our own "facts".

Anurag Mehra · Dec 18, 2019
Latest Post-Publication Notes and Updates added at the end of the article on February 22, 2020.
Alternative Facts
We are being lied to, with impunity, by those in positions of privilege and power. Kellyanne Conway, an advisor to US President Donald Trump invented the phrase “alternative facts” — a euphemism for a lie — and made lying acceptable. These “facts” are being used to construct universes full of fantastical conspiracies, devotion to populist “supreme” leaders and seething hatred for all “enemies” who have to be annihilated. Across countries, this infocalypse of lies and misinformation, is fuelling violent, toxic politics.
Here are some narratives based on alternative facts. They seem so “believable” when read uncritically.
I. Alternative Fact: White People in Europe and America are Being Wiped Out by Immigration: A great French philosopher, Renaud Camus, tells us about “The Great Replacement” in his book You will Not replace Us! White populations are being replaced by non-White (mostly Islamic) people through immigration and violence (and even by encouraging abortion among White populations). This is being done deliberately by the corrupt elites of many nations.
II. Alternative Fact: The Holocaust Never Happened: Let us go back to World War II. They say that the Nazis, under Hitler, murdered 6 million Jews in Europe. This is referred to as the Holocaust. A written account is given by the British Broadcasting Corporation. This paid media outlet has taken pains to write all of this very simply so that even the dumbest human can swallow their propaganda. The holocaust is a gigantic hoax and there is evidence to show that this was a conspiracy to defame the non-Jewish, Aryan races in Europe. “Though six million Jews supposedly died in the gas chambers, not one body has ever been autopsied and found to have died of gas poisoning.” We should wonder if Hitler was such a villainous person that he is made out to be if his only crime was to defend the Fatherland against the rapacious depravities of corrupt, usurious Jews. Does this affable looking man seem dangerous to you? It is chilling that for speaking out this truth you can be imprisoned, in many countries.
III. Alternative Fact: Nuclear Weapons Don’t Exist: We have been fooled into believing that atom bombs were dropped on Hiroshima and Nagasaki in 1945, at the end of World War II (sometimes I wonder if such a war actually happened). The whole idea of nukes was just a massive “psy ops”. Death Object: Exploding The Nuclear Weapons Hoax by Akio Nakatani, a respected academic, lays bare the whole conspiracy. According to the description, “the nuclear trick is the biggest, boldest and baddest-ass scam in all of mankind’s ancient and eternal quest for power and profit through mass slaughter. DEATH OBJECT takes you behind the curtain and reveals the empty sound stage. The science, the history, the misery, the mystery — the full hoax is covered. … Every element of the atomic bomb scam, the founding myth of the technological age, is tied to every other, coalescing into an unanswerable exposé.”
Pretty scary, given that so much money is being poured into nuclear weapons worldwide, all for a fraud. Surely another trick by the corrupt elites and their paid media to keep this away from the public eye.
Real versus Alternative
How do we decipher what is true and what is not?
The first thing that people do when confronted by information is to check how it stands with respect to existing knowledge they hold.
Experts who track the effects of immigration tell us unequivocally that immigration makes nations more productive, increases their wealth through enhanced economic activity, and provides skills not available natively; and that extreme anti-immigrant rhetoric is a strategy to "strike and intensify fear of 'the other' in the hearts of voters."
Those who have studied history will have an idea of what the Nazis did; and also know about the dropping of the atomic bombs on the Japanese cities, Hiroshima and Nagasaki, towards the end of World War II. The grief that they have caused have been described by so many records.
Anyone familiar with physics and space research will know about all the Apollo landings on the moon. Biology and evolutionary theory tell us that dragon myths came from the fear of large predators and the importance of fire in human prehistory.
Therefore, an education that exposes us to science, history and stories about the world is needed to distinguish between truth and lies.
When we are not sure about some information we hunt for relevant books, articles and archives — in the real world and on the internet — in a bid to resolve the truth value of what is being claimed or denied. Generally, we also try to ensure in some way that the resources being used are credible; it is likely that we will trust the information given in a reputed encyclopaedia or the archives maintained by a well known university.
The Collapse of Cognition: Why and How Do We Believe What We Do
However, this structure of how we decide to believe something is breaking down, driven by a complex ecosystem of technology and politics. We list below ten factors that make up this ecosystem.
First, many people now take information and news stories from the internet (websites, social media, blogs) and not so much from newspapers or television. This is true of America as much as it is for developing markets.
Social media outpaces print newspapers in the U.S. as a news source. Source: Pew Research Center.
Second, the general tendency is to assume that what they are reading is true. This is what print culture has taught us. Textbooks are usually edited by someone as are newspapers. These gatekeepers check what is finally published. This is not true on the internet where there is a lot of stuff that has not been checked for authenticity, making it very easy for untrue material to proliferate.
Third, there has been an exponential growth of fake and false information on the net — events that never happened (Hillary Clinton ran a pedophile racket), people saying things they never said (President Trump endorsed PM Modi). Many people are turning skeptical about the quality and truthfulness of what they read. Unfortunately, some of this skepticism is also targeted towards information that is already proven to be authentic (e.g. verified and checked historical records; validated scientific principles).
Fourth, fake information that is being “manufactured” today is of fantastic quality in terms of production values and technical sophistication, and this makes it so believable.
Weapons of Mass Deception
Fifth, false stories are designed with an objective to influence so these are crafted with the right emotional “hooks” to generate rage, despair, disgust, elation, hatred, and so on. Such misinformation is very effective is building or destroying the reputation of people and the significance of issues. Often, they feed into a persistent feeling of paranoia and are effective tools in manufacturing “public mood”.
The internet, especially social media, therefore abounds in a heady mix of morphed images, doctored videos, edited documents, vintage (to convey authenticity) looking scrolls, inside these fake stories. Images of massacres elsewhere are picked up and used in a fake story to show a local leader’s cruelty; images of great things being done somewhere are shown as the creation of infrastructure somewhere else. A report in India Today cheekily called various instances of use of fake news in the recent Lok Sabha elections, as “weapons of mass deception”. A buzzfeed compilation of the best fake stuff that was created during the 2016 US presidential elections includes these alternative facts: the Pope endorsed Trump’s presidency (so he was the favored choice of Christians), and that Obama had banned the pledge of allegiance in schools (he was therefore anti-national).
A thorough and detailed analysis of how anti-immigrant rhetoric is mainstreamed especially by the use of “dark social” platforms, conventional social media (twitter, influencers, memes etc.) has been developed by Davey and Ebner for the Institute of Strategic Dialogue in UK (ISD).
Sixth, in a troubling development, because powerful hardware and software has become so cheap, deepfake videos are coming. In these, you can make public figures say things they never said (abuse, confess). In a video, released in 2018 to demonstrate the power of the technology, Obama can be heard “abusing” Trump. Deepfake videos have already put celebrity faces into porn; it is a matter of time before public figures and leaders are incorporated. The fakeness of these products is becoming harder and costlier to detect.
Seventh, the preponderance of misinformation has now spawned an entire profession of fact checkers and fact-checking websites (1, 2, 3). Typically, they check for telltale signs of editing in an image or a video, hunt for similar resources on the internet to locate where the one in use may have been taken from, search old, trusted archives, and so on. But even this has resulted in debates about who will check the fact checkers, because they themselves may have a bias in selectively certifying facts.
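One of the simplest tools in the fact checker's kit for "hunting for similar resources" is a perceptual hash, which flags near-duplicate images (for example, a massacre photo recycled into a new fake story) even after lossy recompression. A minimal average-hash sketch is below; real pipelines first downscale the actual image with an imaging library such as Pillow, so the tiny pixel grids here are illustrative stand-ins:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Bit i is 1 if pixel i is brighter than the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'probably the same image'."""
    return bin(a ^ b).count("1")

original = [[200, 200, 10, 10],
            [200, 200, 10, 10]]
recompressed = [[198, 201, 12, 9],    # same picture after lossy re-encoding
                [199, 202, 11, 10]]
different = [[10, 200, 10, 200],
             [200, 10, 200, 10]]

print(hamming(average_hash(original), average_hash(recompressed)))  # 0 -> near-duplicate
print(hamming(average_hash(original), average_hash(different)))     # 4 of 8 bits -> unrelated
```

Because the hash depends only on which regions are brighter than average, small recompression noise leaves it unchanged, which is exactly what lets a checker trace a recycled image back to its original context.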
How it Gets Around
Eighth, to make matters worse, MIT research shows that fake information seems to travel faster than facts, and that humans, more than robots, help spread it faster.
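The speed gap that research reports can be pictured with a toy branching model: if each person who sees a story passes it on to b others, the audience after n hops grows as a geometric series, so even a modestly higher "retweet factor" for falsehood compounds dramatically. The branching factors below are made-up illustrative numbers, not the MIT study's measurements:

```python
def cumulative_reach(b: float, hops: int) -> float:
    """Total people reached after `hops` generations, starting from one poster,
    if each reader forwards the story to b new readers (geometric series)."""
    return sum(b ** k for k in range(hops + 1))

# Illustrative (made-up) branching factors for a 10-hop cascade:
true_story = cumulative_reach(1.2, 10)    # ~32 people reached
false_story = cumulative_reach(1.7, 10)   # ~488 people reached

print(round(true_story), round(false_story))
```

A falsehood forwarded only half again as eagerly per hop reaches an order of magnitude more people, which is why small differences in emotional "stickiness" matter so much.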
Ninth, most people believe information toward which they already have a predilection. They try to reject what does not match with their existing beliefs. People will also usually believe whatever their group does. This is a gift of the “lazy brain”. A lie repeated many times becomes familiar and produces an illusion of truth. There is a general tendency to believe in more sensational material. Worst-case, negative scenarios are believed most.
Illegal immigrants are destroying this country! Source: Boris Rasins's Photostream
A predilection could be based on a rational analysis of a situation but more often than not it arises from ideas that get “seeded” — mostly emotionally and uncritically — because of conscious or unconscious background biases. Therefore, a person holding even a mildly racist worldview (who may have been brought up within a racist community) is likely to believe the false claim that Obama was not born in the US; many Germans feeling “national” humiliation after World War I, blamed the Jews for “backstabbing” the nation — this view was anchored on centuries of anti-semitic stereotypes that prevailed in Europe, and finally became an important trope in Nazi propaganda that vilified the Jews.
Tenth, the factor that actually motivates people to create and propagate false information is pervasive social polarization. This is driven by a severe crisis of inequality, unemployment and economic deprivation in classical forms of capitalism. It has spawned varieties of cultural nationalisms in many countries (UK-Brexit, US, Turkey, India, Hungary, Brazil …) that oppose the earlier, “globalized” political order. Cultural nationalisms tend to be majoritarian, and nurture many types of social divides where the majority becomes hostile to people who are “others” — from another race, caste, community, gender or class.
Ideological Silos
Most political narratives use false “facts” and misinformation in a big way to mobilize support. Such false stories are now the most potent weapon to polarize citizens on any issue be it competing nationalisms, race or caste discrimination or falsifying history. Whoever uses this weapon more, and more cleverly, will stay ahead in the winning stakes.
The creation and consumption of false stories have carved out and deepened ideological silos. These stories are driven by polarization and in turn create even more of it. It is a vicious circle.
Trump supporters believe that all mainstream media firms, with mainstream institutions as their allies, report untruthfully, and thus spread “paid news” created by the “corrupt elites”. They question the credibility of CNN or BBC and think that Fox News and Breitbart are credible sources of information. In India, cultural nationalists believe that Republic TV is credible but NDTV is not; that JNU or UoH are controlled by “anti-national”, “anti-people” liberals and leftists who are allies of the “corrupt elites”.
The counter-narrative is that cultural nationalists locate a social group and then falsely hold it responsible for all that is wrong with the “nation”. Typically, these “others” are immigrants, people belonging to another religion, race or caste. So Sanders’ supporters argue that there is no logic in blaming these “others” and that the state of society is determined by the mechanics of capitalism which breeds poverty and inequality. For them Fox News and Breitbart are media outlets that spread only falsehoods.
These two groups now mostly engage with each other in battle mode, by hurling charges and abuses, on television and on social media. As these ideological “bubbles” evolve and harden they have become separate universes. The bigger universe determines who will get the “peoples’ mandate” and what will be the cultural meaning of nationalism. The power of polarization — cultivated intensely through misinformation — in creating an all pervasive, background “buzz” of ideological civil wars should not be underestimated.
The Walls that Liberals have built (as the other side views them). Source: SS&SS Photostream.
The Mess of Trump’s Obsessions and Issues (as the other sees them). Source: Source FolSomNatural Photostream.
Driving Blind
In such an environment, we are spiralling rapidly towards an information chaos where people do not know what to trust, and ultimately will not know how to evaluate the credibility of what they are seeing, hearing or reading.
Imagine these scenarios. It is 2019. A student is trying to learn about the Indian republic’s first leaders, on her own. She watches biographical videos on the life of India’s first prime minister Jawaharlal Nehru (1, 2, 3), but also comes across other videos that suggest that he was a philanderer, a “muslim-lover” and a good-for-nothing who died because of sexually transmitted diseases (1, 2); she also sees exchanges on Twitter and posts on Facebook. In order to resolve these conflicting inputs, she tracks other resources, on the web or stocked in libraries, national archives, old camera footage and films, to confirm what kind of a person Nehru was and whether he died of sexually transmitted diseases or not. She is likely to conclude that Nehru was a tall leader of the freedom movement, and someone who developed a comprehensive idea of India.
Now fast forward to 2035. Another student wants to do a similar exercise. She finds too much material on the internet; genuine information is lost within heaps of misinformation, like a needle in a haystack. Even if located, it is hard to establish its credibility. She is drowned in so much contradictory material, that it is impossible to resolve it in any direction.
Now, what will she do?
We could soon be living in a world where misinformation has engulfed the truth; people cannot distinguish between science and superstition; every explanation is a conspiracy; no one is able to figure out which national statistics are correct and which are fictional; history has become a wasteland of actual happenings, fictional narratives and myths entwined into strands of folklore.
What Does the Future Hold
Misinformation cannot be fought with facts alone. Facts are not sufficient to overcome the emotional bases of beliefs. Calls to be rational, to assess credibility, to fact-check will fall flat when misinformation overwhelms real information. This debate on The Future of Truth and Misinformation Online, enriched with numerous opinions, remained inconclusive two years ago. But I find, based on what is happening in science and politics (1, 2, 3), that things have worsened. Optimism that this problem will be solved by technological fixes or basic human goodness seems to me to be misplaced, in a world full of resentments and rage.
Yet the fight must be waged against a culture based on untruths.
Media consumers must be encouraged to use fact-checking services so that this becomes a mainstream norm. News sites should provide quick access to their primary sources, if possible, so that users can themselves assess the credibility of what they are seeing. Digital literacy courses that teach users how to spot fake news and how to guard against it should be introduced in educational institutions as a part of the curriculum. This will enable the development of critical media consumption habits and help users identify partisan, emotional hooks.
We need to have laws that make it mandatory for news organizations, channels, platforms to perform aggressive fact checking, give warnings about dubious material, and be subject to severe penalties for propagating falsehoods. Even more than that, we need genuinely autonomous regulatory agencies that can shield us from the excesses of state power and the self-serving opportunism of the market. There are deep conundrums here: who will define what is fake and what penalties must be applied? Populist leaders who thrive on misinformation and declare real news to be fake would want to use these laws to rein in a “free” media. Technology companies, on whose platforms this misinformation plays out, are happy with the traffic and advertising revenue these humongous “battles” generate. In any case, should private corporations not have only limited responsibilities of fact checking and following “community” standards, and never the power to censor free speech?
Projects that study the reasons for social resentments must relentlessly expose the emotional manipulation being exploited by the fabricators of fake news. Ultimately, the misinformation menace will reduce significantly in volume in a more just and equal world where the motivation to create false information withers away.
Aldous Huxley thought civilization would be threatened by irrelevant knowledge drowning out the relevant. Little did he imagine that the real would be swallowed by the fictional.
“A society, most of whose members spend a great part of their time, not on the spot, not here and now and in their calculable future, but somewhere else, in the irrelevant other worlds of sport and soap opera, of mythology and metaphysical fantasy, will find it hard to resist the encroachments of those who would manipulate and control it.” (Aldous Huxley in “A Brave New World”)
This is a longer and updated version of an article published earlier (July 26, 2019) in Scroll.in.
Post-Publication Updates & Notes:
1. More political instances of how misinformation overwhelms facts, and confounds attempts to ascertain the truth. Added on Feb 22, 2020.
2. India just got its first election campaign featuring deepfakes. Added on Feb 22, 2020. | https://medium.com/technology-culture-and-politics/webs-of-deceit-manufacturing-culture-with-alternative-facts-acb4e72a883a | ['Anurag Mehra'] | 2020-03-02 07:37:12.046000+00:00 | ['Fake News', 'Technology', 'Fascist', 'Post Truth', 'Conspiracy Theories'] |
1,549 | Robots and Religion: Mediating the Divine. | Some 100,000 years ago, fifteen people, eight of them children, were buried on the flank of Mount Precipice, just outside the southern edge of Nazareth in today’s Israel. One of the boys still held the antlers of a large red deer clasped to his chest, while a teenager lay next to a necklace of seashells painted with ochre and brought from the Mediterranean Sea shore 35 km away. The bodies of Qafzeh are some of the earliest evidence we have of grave offerings, possibly associated with religious practice.
Although some type of belief has likely accompanied us from the beginning, it’s not until 50,000–13,000 BCE that we see clear religious ideas take shape in paintings, offerings, and objects. This is a period filled with Venus figurines, statuettes made of stone, bone, ivory and clay, portraying women with small heads, wide hips, and exaggerated breasts. It is also the home of the beautiful lion man, carved out of mammoth ivory with a flint stone knife, and the oldest-known zoomorphic (animal-shaped) sculpture in the world.
We’ve unearthed such representations of primordial gods, likely our first religious icons, all across Europe and as far as Siberia, and although we’ll never be able to ask their creators why they made them, we somehow still feel a connection with the stories they were trying to tell. | https://medium.com/swlh/robots-and-religion-mediating-the-divine-2bd73220787d | ['Yisela Alvarez Trentini'] | 2019-08-08 03:03:37.012000+00:00 | ['Philosophy', 'Religion', 'Artificial Intelligence', 'Robotics', 'Technology'] |
1,550 | AI in Medicine — Majority decision isn’t always right | Following their major publication in JAMA that marked a major breakthrough in both the AI and healthcare communities, Google made some fine-tuning adjustments to their Deep Learning model and published their new results on arXiv. *This article was then published in Ophthalmology, one of the most important journals in the field, with an impact factor of 6.1.
I had explained in a previous article how Google designed their initial AI model to detect diabetic retinopathy (DR) from fundus photos. Here’s what’s new in Google’s AI 2.0:
1. Redefining the Gold Standard Using “Adjudication”
Garbage in, garbage out. We all know that AI is biased if trained using inaccurate labels; in medicine, that can even be dangerous.
While having high-quality ground-truth labels is critical for training machine learning models, it is easier said than done, since medicine is often subjective. Take diabetic retinopathy, for example: doctors will most often agree when lesions are obvious, i.e. this one has retinopathy and that one does not. However, when asked to grade the disease on a scale of 1–5, disagreements occur. In the image below, Google showed that for the same image of the retina, different ophthalmologists will grade the image differently, consistent only around 60% of the time with themselves and with others.
Each row is an image, each column is an ophthalmologist grader. Colours represent the severity grades given by each ophthalmologist.
That is, for the most part, due to subjective variance in the exact definitions of the grades and the boundaries between the 5 different grades. For example, while mild DR is defined as “having microaneurysms”, image artefacts often resemble microaneurysms and were a common source of disagreement. Moderate DR is defined as “more than microaneurysms but less than severe NPDR”, which is also open to interpretation. Luckily, clinical care accounts for much more than a single image. The treatment plan is personalized for each patient according to his/her age, medical and family history, disease progression, diabetes control, and a much more thorough eye exam using other imaging modalities (OCT) and direct stereoscopic examination of the retina after dilation. Although they do not affect patient care, disagreements in grading scores do make research and the creation of image labels much more difficult.
When doctors disagree, who is right? Whose answer should we label as ground truth?
Traditionally, taking the “majority decision” has been a popular method for defining the reference standard. For example, Google had hired enough ophthalmologists to have each image of their initial dataset (128k images) read by 7–8 different people, independently. They then took the majority decision as the final label for each image. This method is flawed, as it introduces a bias where the algorithm will miss subtle findings that the majority of ophthalmologists might not identify. In their second study, Google suggested a more rigorous way to define the “gold standard”: adjudication. Using this method, instead of taking the majority grading when there is disagreement, doctors decide together, face to face, on a final decision.
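As an illustrative sketch only (not Google’s actual pipeline), the majority-decision labeling described above boils down to taking the modal grade across independent graders:

```javascript
// Pick the modal (majority) grade from several independent grader opinions.
// Grades follow the 5-point ICDR scale: 0 = none ... 4 = proliferative.
function majorityGrade(grades) {
  const counts = new Map();
  for (const g of grades) {
    counts.set(g, (counts.get(g) || 0) + 1);
  }
  let best = null;
  let bestCount = -1;
  for (const [grade, count] of counts) {
    if (count > bestCount) {
      best = grade;
      bestCount = count;
    }
  }
  return best;
}

// Seven graders disagree; the majority label hides the dissenting reads,
// which is exactly the bias that adjudication is meant to remove.
console.log(majorityGrade([2, 2, 1, 2, 3, 1, 2])); // → 2
```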
They tested this adjudication process on a subset of around 6000 images. The images were first evaluated independently by 3 fellowship-trained retina specialists who then discussed face-to-face to resolve disagreements and determined a “final diagnosis”.
3737 images with adjudicated ground-truth labels were used as the tune set; i.e., they were used for tuning the algorithm hyperparameters (e.g. image resolution, learning rate) and making model choices (e.g. network architectures), but not for training the model parameters. The rest of the images with adjudicated grading were used as the validation set.
In this study, they demonstrated that model performance was significantly improved even if only a small subset (0.22%) of the training image grades were adjudicated.
2. Upgrade from Binary Prediction to a 5-class Rating Prediction
Instead of a binary “referable” vs “non-referable diabetic retinopathy” prediction, in this second study the Google team trained a 5-class prediction model that can grade an image’s disease severity: none, mild, moderate, severe, and proliferative. This is in accordance with the most commonly used International Clinical Diabetic Retinopathy (ICDR) disease severity scale. This makes the model more suitable for clinical practice.
3. Bigger Data Set (1.6M Images)
The training portion of the development set was increased from 128 175 images in their first study to over 1.6M images in this study. It contains fundus images from 238 610 patients.
Input resolution in this new model is 779 x 779 pixels, a large increase over the 299 x 299 pixels used in their previous study. The model architecture was also upgraded, from Inception-v3 to Inception-v4.
Overall, for each of the five gradings, the algorithm has AUC values between 0.986 to 0.998. In other words, the algorithm is capable of labelling all five grades of the disease with high sensitivity and specificity!
Future: an Increased Need for Ophthalmologists:
When tele-ophthalmology was first introduced, it allowed patients to be screened remotely using fundus photos, making care much accessible to rural populations. Consequently, it has also allowed more diabetic retinopathy cases to be diagnosed and referred to retina specialists for closer examination, treatment and followups. This has overwhelmed many ophthalmology clinics’ already busy schedule and wait-lists. The AI algorithm, through high productivity and low cost, will make screening even more accessible to patients worldwide.
Diabetic retinopathy is the leading cause of blindness in working-age adults.
DR is an insidious disease that slowly damages the retina, leading to symptoms only in late-stage, when damages have become irreversible. Current guidelines therefore recommend yearly screening to all diabetic patients. In real life however, less than one third of patients are getting this required exam (Ontario, Canada).
By 2040, it is approximated that 600 million people will have diabetes, with one-third expected to have diabetic retinopathy. -Ting et al. JAMA.
With 600 million people all requiring yearly screening, along with increased life expectancy and an aging population, the demand for ophthalmologists will be higher than ever. As a future ophthalmologist, I’m glad that AI algorithms now exist to assist me, freeing me from repetitive pattern-recognition tasks and allowing me to focus on patient care and innovative research.
Read more from Health.AI:
Deep Learning in Ophthalmology — How Google Did It
Machine Learning and OCT Images — the Future of Ophthalmology
Machine Learning and Plastic Surgery
AI & Neural Network for Pediatric Cataract
The Cutting Edge: The Future of Surgery | https://medium.com/health-ai/ai-2-0-in-ophthalmology-googles-second-publication-c3b5390c19ae | ['Susan Ruyu Qi'] | 2019-12-06 09:57:28.247000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'Health Technology', 'Ophthalmology', 'Healthcare'] |
1,551 | Leading Edge Cryptography Research at NTT Research | Ilan Komargodski, CIS Lab, NTT
Amanda Christensen, ideaXme guest contributor, fake news and deep fake researcher and Marketing Manager at Cubaka, interviews Ilan Komargodski, Ph.D., Researcher at the Cryptography & Information Security (CIS) Lab at NTT Research.
Amanda Christensen comments:
Data privacy has long been a topic of contention, and proper security protocols have never been more needed as individuals, businesses, and governments have been forced to rapidly digitise everyday interactions as part of lockdown and COVID-prevention procedures.
Recent events like the large-scale account hacking on Twitter, as well as growing concern over TikTok’s use of data, have heightened these concerns in the public eye.
However, at present, there is little we can do to completely protect ourselves from giving up our personal information — if you want to access a website, flick through an app, get directions, or any number of the hundreds of minute tasks we perform digitally every day, you’re going to be giving up some degree of your personal data to do so.
But what if that didn’t need to be the case?
Ilan Komargodski, Ph.D.
To address this topic, I had the pleasure of speaking with Ilan Komargodski, Ph.D., Researcher at the Cryptography & Information Security (CIS) Lab at NTT Research.
As a part of the CIS Lab and NTT Research, Ilan and the CIS Lab team model various scenarios that are threatening data privacy and information security, and then use these models to design cryptography tools and protocols to protect user privacy.
With data privacy being a key concern across all sectors, the applications of their work is endless, and can help to facilitate technological innovation by reducing, or potentially eliminating, the key concern and barrier of privacy and information security.
Through methods like homomorphic encryption, the CIS Lab and NTT Research are also developing new ways to analyse and compute data without compromising data privacy.
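As a toy illustration of the homomorphic idea only — textbook (unpadded) RSA, which is insecure in practice and not representative of the schemes NTT works on — multiplying two ciphertexts yields an encryption of the product of the plaintexts, so a party holding only ciphertexts can compute on data it cannot read:

```javascript
// Toy textbook-RSA demo of a multiplicative homomorphism.
// All parameters are tiny illustrative values — never use this for real security.
const n = 3233n;  // n = 61 * 53
const e = 17n;    // public exponent
const d = 2753n;  // private exponent (17 * 2753 ≡ 1 mod 3120)

// Modular exponentiation by repeated squaring, using BigInt.
function powmod(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const encrypt = (m) => powmod(m, e, n);
const decrypt = (c) => powmod(c, d, n);

// Multiply the ciphertexts of 7 and 6; decrypting the product gives 7 * 6.
const c = (encrypt(7n) * encrypt(6n)) % n;
console.log(decrypt(c)); // → 42n
```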
Ilan started his career as a Software Engineer in the Israeli Army, leaving to pursue both an M.Sc. and Ph.D. at the Weizmann Institute of Science doing fundamental research in computer science with an emphasis on cryptography. Prior to joining NTT Research, Ilan was a postdoctoral researcher at Cornell Tech.
In this episode we will hear from Ilan about:
1. His background and how he developed an interest in computer science and cryptography.
2. The research being undertaken in the three labs at NTT Research, as well as a deep dive into the Cryptography & Information Security Lab.
3. A top-level introduction to cryptography and homomorphic encryption.
This interview is in British English. Visit ideaXme’s website
Watch ideaXme interviews.
Credits: Amanda Christensen interview video, text, and audio.
If you liked this interview, be sure to check out our interview with Dan Mapes on the Spatial Web: Web 3.0!
Follow ideaXme on Twitter: @ideaxm
On Instagram: @ideaxme
Find ideaXme across the internet including on iTunes, SoundCloud, Radio Public, TuneIn Radio, I Heart Radio, Google Podcasts, Spotify and more.
ideaXme is a global podcast, creator series and mentor programme. Our mission: Move the human story forward!™ ideaXme Ltd. | https://medium.com/@ideaxme/amanda-christensen-ideaxme-guest-contributor-fake-news-and-deep-fake-researcher-andmarketing-77e6c68ebc19 | ['Ideaxme', 'Move The Human Story Forward'] | 2020-08-20 12:39:52.714000+00:00 | ['Ideaxme Ntt Research', 'Exponential Technology', 'Ntt Research', 'Cryptography', 'Data Privacy'] |
1,552 | 8 Smart doorbells to keep an eye on who’s at your door | When someone is looking to transform their home into a smart home, one of the many popular choices is going for home security and the reasons are obvious: We’ve all seen the videos of attempted stolen packages, break-ins, and more. Knowing you can always check in on the front of your house affords an enormous amount of peace of mind.
Enter smart doorbells. They’re similar to a smart security camera but are unique in that they provide a dual purpose. Of course, they work in a traditional sense to let you know when someone is at your door. But, a smart doorbell goes above and beyond to show you a live view of your front door for total peace of mind.
August View Wire-Free Doorbell Camera
This advanced device streams clear, realistic video of your entrance from your smartphone. With a state-of-the-art image sensor and two-way audio, View makes it easy for you to engage with visitors. Likewise, the sensor offers 1440p resolution, which is significantly more than full 1080p HD.
Netatmo Smart Video Doorbell
This wireless system is complete with an intelligent camera to give you full access to your front door from anywhere. Compatible with Apple’s Homekit, the camera captures 160° in 1080p Full HD. Likewise, the wide-angle camera also sports HDR functionality. This means it can intelligently respond to changes in light in real time.
Robin Telecom ProLine HomeKit Enabled Doorbell
This clever system allows you to get a good look at who’s in front of your door through the Apple Home app to see exactly what the Robin ProLine Doorbell sees. Recording in HD, the doorbell provides a wide 130° field of view so you can get the whole picture. You can also communicate with the person on the other side.
SimpliSafe Video Doorbell Pro
Designed to be your eyes and ears, this device lets you know as soon as someone is in front of your door. The Video Doorbell Pro sends alerts to your phone when it detects activity out front. In fact, the person doesn’t even have to ring the bell for the Video Doorbell Pro to detect them.
Nest Hello Video Doorbell
With Nest Hello, you’ll never miss a person or a package at your door. By replacing your existing wired doorbell, Nest Hello provides HD video and crystal clear images. It even works well at night thanks to night vision. Featuring a 160° field of view, the video doorbell allows you to see anywhere from top to bottom. Additionally, it comes with 24/7 streaming, enabling you to check in anytime.
Netvue Belle AI WiFi Doorbell
Aside from regular Wi-Fi doorbell functions, Belle features AI for facial recognition and voice interaction. The device also offers motion sensing and HD live streaming. Using voice interaction, you can give instructions to anyone at the door. Additionally, Belle greets your visitors upon arrival and notifies you with important details.
Ring Video Doorbell Pro
Compatible with both iOS and Android devices, this intelligent doorbell uses Wi-Fi to connect to the app to give you a live view of your front door. Equipped with a wide-angle lens, the camera records in HD so you can stay up to date with anyone (or anything) at your front door.
SkyBell — Wi-Fi Video Doorbell
Upon ringing the bell, the visitor’s image is instantly uploaded to your smartphone with the app, available for both iOS and Android. Using a built-in motion sensor, SkyBell is also awakened even if someone arrives at your door but doesn’t ring the doorbell. | https://medium.com/the-gadget-flow/8-smart-doorbells-to-keep-an-eye-on-whos-at-your-door-616efe393771 | ['Gadget Flow'] | 2019-03-21 23:29:55.420000+00:00 | ['Security', 'Technology', 'Smart Home', 'Internet of Things', 'Home Improvement'] |
1,553 | [Tech] Part 2 — Distributing Bounties and Airdrops (60K TX) | Introduction
The last post was about the preparation of the bounties and airdrops. Since there were too many recipients in the campaign, we focused on reducing gas costs rather than on reusable, versatile contracts. When the airdrop preparation was complete, we tested the airdrop distribution on the Ropsten Testnet.
by anonymous developer
Ropsten TestNet Airdrop
We wanted something that anyone could use to conduct their own airdrop through this program. When a user clicks a button on the client, the node.js server processes the airdrop while showing the client its progress. When the result is out, the server receives the transaction receipt and displays it on screen. So, we built a client page that automated the airdrop process: insert the txIndex, gas limit, and gas price, and click the button. We also added functions to indicate the process status and to pause the process.
There were a few issues with using web3 when signing and transmitting transactions from node.js.
1. Notation
We have to pass each value as a hexadecimal string, because exponential notation is not recognized.
ex) '0x' + (1000000000).toString(16). Note the radix argument: without it, toString() returns a decimal string, not a hex one.
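A minimal sketch of the conversion (the helper name toHex is ours, not web3's):

```javascript
// JavaScript renders very large numbers in exponential notation (e.g. 1e21),
// which web3 could not parse — so values are passed as hex strings instead.
function toHex(value) {
  return '0x' + value.toString(16);
}

console.log((1000000000).toString()); // → "1000000000" (decimal, not hex)
console.log(toHex(1000000000));       // → "0x3b9aca00" (1 gwei)
console.log(toHex(21000));            // → "0x5208" (a typical gas limit)
```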
2. Private Key Format
When creating and signing a new transaction, the private key value must not contain the 0x prefix. If the 0x prefix is present, web3 throws an underpriced gas error.
* When using the private key to derive the public key, however, the 0x prefix is required. Web3’s inconsistency here was problematic.
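A small normalization helper (illustrative; the exact format each web3 beta call expected varied) keeps the two forms straight:

```javascript
// web3's beta signing API wanted the raw 64-hex-char key (no 0x prefix),
// while other helpers expected the 0x-prefixed form — normalize both ways.
function stripHexPrefix(key) {
  return key.startsWith('0x') ? key.slice(2) : key;
}

function addHexPrefix(key) {
  return key.startsWith('0x') ? key : '0x' + key;
}

const pk = '0x' + 'ab'.repeat(32);                    // dummy 32-byte key
console.log(stripHexPrefix(pk).length);               // → 64
console.log(addHexPrefix(stripHexPrefix(pk)) === pk); // → true
```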
3. Timeout
After a transaction completes and the receipt is received, we update the record in the DB’s monitoring table. However, there were cases where the same transaction was recorded in the DB twice. When logging, if node.js does not respond for 2 minutes, it sends a header asking the web browser to send the request again. If there is still no response after another 2 minutes, Chrome treats it as a failure, while Firefox keeps requesting again and again. Since this was a timeout problem, we raised the server’s timeout to 1200 seconds.
Aside from the client page that creates transactions, the operations team used a separate dashboard to monitor the distribution. For this reason, we had to keep the DB updated. Here are a few issues we hit when recording to the database.
- Initially, we hoped to record token amounts as integers or as floating-point decimals. However, we had to find an alternative, since Ethereum works with very large numbers (uint256). We considered splitting the whole and decimal parts and saving each in an integer (bigint) or string (varchar) column. We concluded that we would save the number as a single string and convert it separately, because using two fields just to record a token amount felt wasteful.
- Node.js’s mysql module only accepts callback-style calls. As a result, it was a total disaster: more than 10 pieces of logic overlapped each other, such as checking the log to see whether the code worked, recording each transaction transmission attempt, processing the airdrop, and recording the result of the transaction again. To clean this up, we produced a function that wraps any callback-style function in a Promise.
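The wrapper described above is essentially what Node later shipped as util.promisify; a hand-rolled sketch of the idea, with a stand-in for the mysql module's callback API:

```javascript
// Wrap a callback-style function (err-first convention, as in the mysql
// module) so it can be awaited instead of nested ten callbacks deep.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      fn(...args, (err, result) => {
        if (err) reject(err);
        else resolve(result);
      });
    });
}

// Stand-in for connection.query(sql, cb) from the mysql module.
function fakeQuery(sql, cb) {
  setImmediate(() => cb(null, [{ sql }]));
}

const query = promisify(fakeQuery);
query('SELECT 1').then((rows) => console.log(rows[0].sql)); // → "SELECT 1"
```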
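For the token-amount storage decision above, modern JavaScript's BigInt (which landed after this airdrop ran, so treat this as a sketch rather than the original code) can convert the stored string back losslessly:

```javascript
// On-chain token amounts are uint256 values, far beyond Number.MAX_SAFE_INTEGER,
// so they are stored as strings (varchar) and converted only when math is needed.
const DECIMALS = 18n;

function fromWeiString(s) {
  const wei = BigInt(s);
  return { whole: wei / 10n ** DECIMALS, fraction: wei % 10n ** DECIMALS };
}

// 1500 whole tokens of an 18-decimal ERC20, as stored in the varchar column.
const stored = (1500n * 10n ** DECIMALS).toString();
const amount = fromWeiString(stored);
console.log(amount.whole.toString());    // → "1500"
console.log(amount.fraction.toString()); // → "0"
```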
Ethereum Mainnet
We had initially set the timeout to 1200 seconds (Testnet issue #3), because we could not finish the airdrop distribution in time when the average transaction pending time exceeded 10 minutes. However, gas prices fluctuate significantly: even when we entered the same gas price, transactions sometimes completed in seconds and sometimes took hours. Hence, we increased the timeout indefinitely. (We wrote code that automatically changes the gas price according to pending time, but we could not use it because of a few other problems, such as forks and nonce handling.)
Even though we had fixed node.js’s timeout at 1200 seconds, node.js still died after 750 seconds of pending. No error message was shown, so we traced the source and found that web3 was the problem: web3 raises an error when there is no result within 15 seconds per block and 50 blocks have passed. However, the beta41 release does not surface this error properly, so we increased the pending time limit and continued the distribution.
It may have been because the distribution took place right before the Constantinople hard fork: while we were processing the airdrop on the mainnet, the gas price was higher and fluctuated more than usual. Transactions suddenly stayed pending for several hours even though they were often completed in 30 seconds at the same price. As the delay worsened, MySQL started hitting connection timeouts. We solved the MySQL connection problem by disconnecting before requesting the transaction and, once the receipt arrived, reconnecting to do the DB work.
As pending times grew longer, it became necessary to reduce them by replacing an existing transaction with a new one at a higher gas price. The problem was that, with a number of transactions pending, when one transaction completed, the nonces of the remaining transactions in the pool each advanced by 1 and all of them were sequentially included in blocks. We tried to find the reason, but we could not reproduce it, so we avoided replacing transactions as much as possible.
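For context on replacement transactions: a replacement is re-sent with the same nonce at a higher gas price, and geth by default only accepts it if the price rises by at least 10% (its --txpool.pricebump setting). A sketch of the bump calculation:

```javascript
// Compute the gas price for a replacement transaction with the same nonce.
// minBumpPercent mirrors geth's default --txpool.pricebump of 10%.
function replacementGasPrice(currentWei, minBumpPercent = 10n) {
  const bumped = (currentWei * (100n + minBumpPercent)) / 100n;
  // Guarantee a strict increase even for tiny prices.
  return bumped > currentWei ? bumped : currentWei + 1n;
}

const oneGwei = 1000000000n;
console.log(replacementGasPrice(5n * oneGwei).toString()); // → "5500000000"
```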
We also wanted to verify whether the tokens had been distributed to the allocated wallets successfully after each transaction completed. We chose to check the wallet balances through balanceOf, which is part of ERC20. However, as the airdrop period stretched out, about 5,000 token holders transferred their tokens out of their wallets. Moreover, we could not lock transfers, because we were already listed on an exchange. Next time, we thought, we could use getPastEvents, or lock transfers if necessary.
Join REMIIT’s Community
Telegram: https://t.me/remiit
Website: https://remiit.io/
Twitter: https://twitter.com/remiitplatform | https://medium.com/remiit/tech-part-2-distributing-bounties-and-airdrops-60k-tx-90f2732e6855 | ['Team Remiit'] | 2019-03-26 06:35:30.055000+00:00 | ['Blockchain Development', 'Cryptocurrency', 'Blockchain Technology', 'Blockchain', 'Airdrop'] |
1,554 | How are Vietnamese tech-enabled and tech-driven startups scaling up? | Encounters of innovators: tech-driven and tech-enabled Vietnamese startups [Part 3]
Although all businesses are likely to grow, the possibilities of scaling up vary among startups, based on their industries, growth stages, technology application capacity, and their founders’ management competency.
A 2015 Deloitte study showed that the chance of a new enterprise ascending to scaleup status was around 0.5%, which meant that only 1 out of 200 surviving new enterprises would become a scaleup. “Unicorns” made up an even smaller subset of scaleups.
“The transition from startup to scaleup involves constantly learning from today’s rapidly changing global environment,” affirmed Joe Haslam, Executive Director of the owner scaleup program at IE Business School.
VNG has been the only “unicorn” in Vietnam up until 2020. Photo by Vietnamnet.
Pham Nam Long, founder and CEO of Vietnamese tech-driven startup Abivin, the 2019 Startup World Cup winner, recently declined to answer the question of whether his company could become a “unicorn”.
“We are still at the iterative circle of building, measuring and learning to make bigger leaps”, he admitted, acknowledging that competing in technology advancements required serious calculated decisions on whether a company might make or break at this point.
In their classic book “The Lean Entrepreneur”, Brant Cooper and Patrick Vlaskovits wrote another age-old lesson: even if a startup does have all the elements of a good business model, especially among tech-enabled companies, it does not mean that the business can grow big. Wefit, one of the top 10 startups of Techfest 2016’s national competition, was a prominent example of the fact that a “copycat” business model can still fail.
The company surprisingly filed for bankruptcy this May, even though it followed the model of a successful scaleup in the U.S. — ClassPass, a listed New York-headquartered online fitness subscription platform company. The incident showed that Wefit’s pivot strategies derailed by inefficient management and its current business model was not suitable for scaling up within Vietnam.
“We could call it the first notorious scaleup failure of a third-generation startup in the country,” said Nguyen Viet An, Director of Frontier Technology Development Center at Hanoi’s Hoa Lac Hi-tech Park. “We should appreciate and embrace this failure,” he noted, underlining that the case offered a lot of learnings, especially about pivot strategy management and the mistake of overfocusing on customer acquisition in tech-enabled companies.
COVID-19 was said to have accelerated Wefit’s bankruptcy. Photo by VnExpress/Dat Nguyen.
When a company scales up, it may increase returns but “triggers two new kinds of technological risks,” stated Nicolas Colin, Co-Founder and Director of European accelerator The Family, on his company’s website. He wrote that the infrastructure has to keep up with the speedily scaling business, and that technology has to be developed to process data and manage operations.
“We could not segregate good business models from technology advancements”, said Tran Tri Dung, Program Manager of Swiss Entrepreneurship Program in Hanoi and Central Region of Vietnam.
The line between tech-enabled and tech-driven is more and more blurry in scaleups. Tech giants like Google, Apple, and Amazon, have outgrown their startup notions and have diversified enough to be both tech-driven and tech-enabled.
VNG, the only “unicorn” of Vietnam, is a tech-enabled company with popular products like Zing MP3, Zalo, ZaloPay, competing directly with international giants such as Spotify, Facebook Messenger, or SamsungPay. The company recently announced that it has successfully developed the cloud computing technology owned by Vietnamese engineers to serve hundreds of millions of users.
“Vietnam is still waiting for a next generation of scaleups and unicorns,” said Mandy Nguyen, Director of Ecosystem Development of Startup Vietnam Foundation (SVF) — the co-organizer of Techfest 2020’s startup competition. “We believe that making a playground for the meaningful encounters between tech-driven and innovative business model-oriented startups [tech-enabled] would benefit and accelerate the ecosystem into this scenario.”
The preliminary round of the national startup competition is taking place from October 8 to November 8, divided into two groups of startups: tech-driven and innovative business model-oriented. The organizers will give the top 60 of nearly 500 startups nationwide various training sessions and online pitching stages in front of international and domestic investors and experts. The final round will be held at the biggest annual event for the Vietnamese startup community — Techfest 2020, on November 28 in Hanoi. The winner will represent Vietnam at the Grand Finale of Startup World Cup 2021 in Silicon Valley, competing with prominent startups from more than 40 countries for a $1 million investment prize.
Part 1: Tech-driven startups are leading the game in Vietnam
Part 2: The popularity of tech-enabled startups in Vietnam | https://medium.com/jamilletters/how-are-vietnamese-tech-enabled-and-tech-driven-startups-scaling-up-89a2025fd82b | ['Jamille Tran'] | 2020-11-11 14:16:36.148000+00:00 | ['Technology', 'Startup', 'Vietnam'] |
On Unbecoming a Hotshot
Me, in my former role as a hotshot.
My four-year anniversary at my current job snuck up on me a few weeks back.
It’s not an anniversary I have a lot of experience with — it’s been a while since I worked somewhere for four years in a row. My professional career has taken me on an interesting journey, and I’ve bounced around a lot. Like a lot of people that find their way to 18F, I’m an impact junkie — leapfrogging to new, higher-profile roles can seem like a pathway to deepening and broadening your impact.
At least it once did to me.
When I realized I’d completely spaced out on the day of my four-year anniversary, I reached out to a former colleague who had helped me onboard. We hadn’t stayed in close contact since he left, so I thought a note of hello and thanks for the onboarding experience four years ago would be welcomed. And it was.
But he said something that really made me appreciate what a long four years it has been:
I remember you were like a semi-celebrity coming in so I didn’t know what to expect…
And I completely understood what he meant.
I had worked very hard over the prior 6–8 years to become outspoken on the thing I had become passionate about. A field that has come to be referred to as civic tech.
I had a lot to say, and I wanted to say it. I’ve always written for my own blog, and I took every opportunity I was offered to write for others. I tweeted my fingers into premature arthritis and used the #civictech hashtag with reckless abandon.
I did a lot of public speaking as I was developing ideas about what civic tech was and what I thought it should be. And I took every opportunity to speak in front of people that I was offered. Every single one. It didn’t matter how big of a group, or how tenuous their connection to my idea of what civic tech was, or why it was important.
Me, speaking to a (probably) bewildered audience at a communication technology conference in 2010
I once spoke at a communication technology conference in San Francisco about open data, and how it was changing the world. I’m not sure everyone in attendance understood why I was there or what I was talking about, but I saw it as an opportunity to evangelize so I jumped at it.
I once drove from Philadelphia to Pittsburgh to give a talk on open data to the team that would go on to create the Western Pennsylvania Regional Data Center. I drove back to Philly the same day, after meeting with the newly elected Mayor of Pittsburgh — a roundtrip of about 600 miles in the span of about 16 hours or so.
One year during the week of Thanksgiving, I found myself speaking to a group of Australian government employees in Canberra about open data and civic hacking. I remember asking myself why I was there when my family was celebrating a holiday back home. I never even made it to see Melbourne or Sydney, though that was likely the only trip I’d ever make Down Under. Certainly for a long while.
Who even was I, anyway?
I had become a hotshot. I didn’t set out to do that, but that’s where I found myself nonetheless.
I realize now that this is what made the early months of my transition to 18F difficult. I had to learn how to unbecome a hotshot. The problem space that 18F works in isn’t well suited to hotshots. It’s a space where teams thrive, and have impact that can last, and be expanded, and be replicated.
Over the four years that I’ve been there, I’ve been fortunate enough to have been a part of great teams. I’ve tried hard to be a good teammate, and to become the kind of person that can slide seamlessly into any team and make it better. To learn to listen more. To move out of the way, and let others take the lead when their ideas or skills are stronger. To eschew credit, unless it is directed at the team as a whole. To become agile enough to become whatever the team needs me to be on any given engagement.
None of these are easy things for a hotshot.
I’ve still got a lot to learn, and the work to be an effective teammate never ends. But the greatest benefit of joining the federal government and working at 18F is that it’s given me the freedom to let go of the need to be the one standing in front of the crowd. The one who always has to have the deepest insight, or most quotable remark.
The challenges of delivering high-quality digital services, and bringing modern technology practices to the government, are real, and they are significant.
And these challenges will be overcome by teams, not by hotshots. | https://medium.com/@mheadd/on-unbecoming-a-hotshot-b0a1a8124fd2 | ['Mark Headd'] | 2020-12-06 16:32:24.847000+00:00 | ['Open Data', 'Public Service', 'Civictech', 'Civic Technology', 'Government'] |
Apple’s iPhone 12 mini’s Battery Life is Amazing
Holding Apple’s iPhone 12 mini
I’m the Goldilocks of iPhone customers. I don’t want the iPhone 12 Pro Max because the 6.7-inch screen and body is too much for me and my pants pocket — its clean, curved body always poking out over the top of my pocket edge. Similarly, I’ve steered clear of the smaller phones like the iPhone 12 mini, mostly because I’ve been spoiled by the just-right, mid-sized iPhone 11 Pro and now its successor, the 6.1-inch iPhone 12 Pro.
My fascination with mid-sized phones (a decade ago they would’ve been considered “big-screen” phones) is a relatively new phenomenon. I carried the tiny iPhone 4s with its 3.5-inch screen for years. The extra half-inch I got on the iPhone 5s a little later seemed like a big deal. Now, we think nothing of carrying 6-inch-plus displays and, thanks to edge-to-edge screen technology, it’s possible to carry a near 7-inch iPhone without the body being almost comically large.
The trend is to go big and it was easy for me to dismiss the adorable iPhone 12 mini, especially because it only has two cameras: the wide and 120-degree ultra-wide (you need to go Pro to get the optical zoom lens).
It’s so small
What’s more, I worried that the 5.18 in. x 2.53 in. x 0.29 in. body couldn’t accommodate enough battery to support a day of activity. Apple doesn’t publish battery sizes and instead promises in its tech specs “Up to 15 hours of video playback” (10 hours if I’m streaming). Video run-down tests are fine, but I realized I needed a real-world test to see if the iPhone 12 mini is suitable for my real world.
The night before my test, I made sure to fully charge the iPhone 12 mini on a wireless MagSafe charger. To Apple’s credit, aside from size and cameras, the iPhone 12 mini shares virtually all the same specs with its larger and more expensive counterparts. It has the aforementioned magnetic, wireless charging capability, a Ceramic Shield front covering the Super Retina display, a TrueDepth module that supports Face ID, and a 12 MP camera. It has the same IP68 rating (6 meters for 30 minutes), the powerful A14 Bionic CPU, and, of course, 5G. In fact, there’s never been a smaller and thinner 5G phone (if you’re into that kind of thing).
8:13 AM: The next morning, I snatched the iPhone 12 mini off the charging pad.
The switch from a larger phone to this little one was unsettling. It felt like almost nothing in my hand. At 134 grams, it’s 30 grams lighter than the iPhone 12 and almost 60 grams lighter than the iPhone 12 Pro. I found myself gingerly cradling it with two hands as if I were holding a little baby Yoda.
While not the highest resolution screen in the iPhone 12 family, the mini has, at 476 ppi, the highest pixel density. I stared at the 5.4-inch screen, wondering if I could get used to it. Everything looked sharp but tiny. Almost immediately, I struggled with typing on the small virtual keyboard. The high resolution made me wonder if my fingers had swelled overnight. I was surprised to find that Apple had not disabled Reachability (a sweep motion just above the home bar pulls the screen image halfway down the display for better thumb reachability).
This would be an adjustment.
The trick, I found, was sticking with the iPhone 12 mini until it felt natural. I did my best not to look at or pick up a larger screen phone.
Apple’s iPhone 12 mini’s screen is 5.4-inches, which is larger than what you’ll find on an iPhone 8.
I used the phone to peruse tweets, read news, and plan a quick excursion to a hardware store. I even let Apple Maps guide me to my destination. By this time, I didn’t mind that my on-screen directions were on a smaller screen. It looked sharp and I could hear Siri’s instructions loud and clear.
As I got out of my car, I slipped the iPhone into my pocket and walked into the store. A few minutes later, I needed to check the phone and, in a near panic, started frantically patting all my pockets. I couldn’t feel the phone and thought I’d lost it. In fact, it was simply sitting sideways at the bottom of one pocket, something that would be impossible with the iPhone 12 Pro.
1:40 PM: I arrived back home and noted the iPhone battery power was at 70%.
By this time, I’d stopped thinking about the phone’s size and realized that whatever physical real estate I lost was made up for by the iPhone 12 mini’s impressive screen resolution. My only complaint is that, on a phone of this size, Apple’s TrueDepth notch appears to run almost the full width of the screen.
In the afternoon, I checked and responded to mail, made some edits to a Medium post, posted an Instagram photo, and spent a lot of time on Twitter. I even watched a SpaceX Starship test launch (and crash). The video looked amazing on the little screen. I also took some photos with the front and rear cameras. These two lenses are exactly the same as the ones I’ve tried on the iPhone 12 Pro and Pro Max and the image quality is just as good. | https://medium.com/@LanceUlanoff/apples-iphone-12-mini-s-battery-life-is-amazing-7c8b564badbd | ['Lance Ulanoff'] | 2020-12-14 18:01:02.150000+00:00 | ['iPhone', 'Iphone 12 Mini', 'iOS', 'Apple', 'Technology'] |
“Investigational” Is Payer Code for “Coverage Denied”
Photo Credit: Christian Horz iStock — Getty Images
Right now, state and federal governments are throwing everything at COVID-19 with an urgency so great that the U.S. Food and Drug Administration (FDA) has issued Emergency Use Authorizations (EUAs) that tear down long-standing obstacles to telehealth, digital health, diagnostics and innovative medications that may prove effective against the virus. Payers have been quick to follow, making these innovations — sometimes labeled “investigational” — available on their plans.
With the eyes of the world upon them, payers know that when beneficiaries’ health and corporate reputations are at risk, innovation is welcomed, and the “investigational” label is not a barrier to urgently needed care. But, if you suffer from less-visible ailments, unrelated to COVID-19, you may be in trouble.
Outside the pandemic, “investigational” remains payer code for denying reimbursement. In the case of treatment for chronic conditions, even FDA-approved digital-health and connected devices languish on waitlists for formulary inclusion.
Considering the health-giving and money-saving benefits of digital-health innovations — whether connected devices, smartphone-controlled apps or virtual-reality tools — the idea that coverage is routinely denied by payers using the excuse that these products are “investigational” is outrageous. It’s time for change. When entropy and status quo become the banners to which the system rallies, we need new, bold approaches. We need national standards, rather than the hidden patchwork established by arbitrary payer committees. It’s time that sector leaders such as Apple, Google and Microsoft, trade groups such as HIMSS, private-equity sector leaders like Rock Health and others take up the cause for digital medical products and devices to gain clear paths to payer-formulary inclusion. That does not mean lowering the bar for safety and efficacy — it means making coverage standards clear.
Rock Health, the first venture fund dedicated to digital health, is bullish about the category and reported that U.S. digital health companies raised $5.4B in venture funding during the first six months of 2020 alone:
The strength of 2020 investment to date matches this moment of unparalleled demand and opportunity for digital health. Our optimism for tech-driven transformation is reinforced by the legitimization of business models, demonstration of strong clinical and economic outcomes, increasing FDA clearances and approvals, and industry efforts to develop scalable payment mechanisms for digital health products. However, we are just on the cusp of this momentum.
Yet without payers embracing the possibilities that digital health and connected devices bring to improving patient well-being, these innovations will not reach patients — nor provide return on investment for sponsors. We see innovators and private equity leaders engage regularly on the digital-health speakers’ circuit, but where are the leading-edge payers, talking about how digital therapeutics and devices are integrated into their patient-care vision and available for physicians to use as indicated? An “investigational” medical device requires clinical studies designed to evaluate its effectiveness and safety. In many cases, de novo (unique) devices approved by FDA for patient use require multiple clinical studies to pass muster. These devices are vetted scientifically by the intended Agency gatekeeper — so why do the doors to access remain locked by outside actors?
Eye-to-Eye with Payers
Payers may feel they have sufficient short-term, financial justification to deny medical device coverage. It might be that they have criteria in place that must first be met before these devices are added to formularies. If so, what are those guidelines? Of course, decision-making must take into account efficacy data, physician expertise and common sense; why can’t these standards be shared transparently? Payer decisions must become more transparent and more standardized so that patients and physicians know where they stand. Payers must be held accountable for their processes when it comes to formulary and reimbursement decisions and their impact on patients — both their wellbeing, and their doctors’ ability to diagnose and treat them.
For example: a child, diagnosed with Ehlers-Danlos syndrome with small-fiber neuropathy and hypermobility, who presented with terrible, debilitating stomach pains and was barely able to eat. An academic medical center gastroenterologist in New York City requested use of an FDA-approved “pill camera” to track the child’s digestive function. This so-called capsule endoscopy has been in use for more than a decade and uses a micro wireless camera to take pictures as it journeys through the digestive system. Despite the child’s need, FDA approval and history of safe and successful use, the payer denied the request.
The GI specialist appealed, sending peer-reviewed, published, clinical studies to the payer case officer. Health insurance again refused coverage, calling the smart pill “investigational,” suggesting the formulary pathway of colonoscopy, endoscopy and imaging. With their child in pain, the weary parents paid “out-of-pocket” for the smart pill, which produced an immediate, clear diagnosis. Why was this necessary? And what would have happened to the child, had the family been unable to “go around the system” to access the needed technology? The payer review process is completely opaque — and riddled with inequity.
Overcoming (and Overhauling) the System
Reimbursement for de novo medical devices is an uphill struggle. In the payers’ war of attrition, denying reimbursement by calling something “investigational” works only for payers. It doesn’t work for physician experts seeking options for their patients. The solution to overcome payer obstacles is that physician groups, trade associations, venture capital funds and patient advocates must join forces to support digital- and connected-device access. The Centers for Medicare and Medicaid Services (CMS), Veteran Administration (VA) and FDA must also update direction to better define the payer landscape.
Recently, FDA expanded approval for one remote, electrical neuromodulation device that treats episodic and chronic migraine in children, 12 and older. “Having this drug-free migraine therapy available for the adolescent migraine community could positively impact patient compliance,” said Jennifer McVige, MD, a board-certified physician in pediatric neurology, adult and pediatric headache and neuroimaging at the DENT Neurologic Institute in New York. “Teens do not always want to take pills, and some may be unable to do so due to various drug-to-drug contraindications.”
Dr. McVige adds that as adolescents typically adapt easily to new technology: “[This is] an efficacious smartphone-controlled tech solution that can be worn inconspicuously and is the perfect design for teens who may unfortunately begin to experience migraine attacks.”
However, online parent chat boards cite overwhelming challenges in engaging the payer system to support this approach. “It’s investigational,” is the familiar retort to reimbursement requests. Like the smart pill mentioned above, migraine neuromodulation device options have undergone clinical studies. While some are approved for reimbursement by the VA, for a parent dealing with a child screaming with migraine pain, it’s more than a headache to get a payer to put the product on formulary — even after navigating the hoops of prior authorization through which physicians are willing to jump.
Physicians and Patients Need Greater Clarity
When German officials passed the “Act to Improve Healthcare Provision through Digitalization and Innovation” in 2019, regulators determined that doctors should prescribe health apps and that these should be reimbursed. As a result of the reimbursement policies, the relationship between digital and device manufacturers and the healthcare system ceased being adversarial. No longer pitted against each other, the payer system and device innovators mutually serve patient needs.
But unlike Germany, where statutory insurers follow a single national reimbursement framework, the U.S. insurance network is fragmented, which adds complexity for medical device innovators seeking coverage. In the U.S., each payer sets individual standards for coverage. There is no central group setting policy for device reimbursement, and each mysterious payer meeting has unpredictable outcomes. Health insurance companies can therefore seem deaf to consumer needs when it comes to evaluating new devices and technologies that can address persistent medical concerns. Until the opaque walls thrown up by payers can be torn down, patients and their physicians are forced to expend huge amounts of time and energy to get answers.
“Commercial insurance coverage decisions lack transparency and processes for stakeholder engagement and are not appropriate for inclusion in Medicare’s reasonable and necessary definition.” That’s the response of the Advanced Medical Technology Association (AdvaMed), the leading device trade association, to the insurance lobbying industry’s objection to a proposed CMS rule accelerating access to FDA-approved medical devices for seniors. “CMS (proposed rule) is sending a signal to the entire innovation ecosystem that taking the risk to develop new breakthroughs will be rewarded if those devices receive FDA marketing authorization and improve patient care,” AdvaMed continued.
This adversarial relationship isn’t necessary. “Patients form the basis of the clinical need and, along with patient advocacy groups, can be important drivers of the adoption of new medical treatments,” writes Denise P. Clarke, senior consultant, Boston MedTech Advisors, in “From Approved To Covered — What Medical Device Companies Need to Know.” But even when patients help in clinical-trial design with physician experts, and FDA grants approval to market, payers have too many hidden ways to “just say no.” That shouldn’t be the case.
Innovation is only meaningful when patients have access to it. Checks and balances are essential, but, they must be transparent. Technology that can improve people’s health and lives — reduce their pain and suffering — cannot be kept out of reach forever. We consumers — the payer system’s customers — are becoming smarter and are more informed every day, and seeking ways to take action, together.
It is time for payers to recognize that physicians and patients should not have to scale walls of undetermined height to access medical innovation that can make them healthier — and which may well save payers more money in the long run.
[My appreciation to Finn Partners colleagues Arielle Bernstein Pinsof and John Bianchi for their comments and to Shira Friedman for her review.] | https://medium.com/beingwell/investigational-is-payer-code-for-coverage-denied-ef8ea515f286 | ['Gil Bashe'] | 2021-02-17 20:40:15.956000+00:00 | ['Medical Devices', 'Health Insurance', 'Healthcare Technology', 'Insurance Claim Denial', 'Digital Health'] |
When the Planet Becomes a Computer
A new study makes a surprising claim about the growth of data
Gerd Altman / Pixabay
Everyone knows the Internet is big, and getting bigger all the time.
It’s adding almost 200 million emails a minute, and 18 million text messages, and half a million Tweets, a quarter million Instagram stories and … well, you get the idea.
So here’s a question. With the Internet growing like this every minute of the day, have you ever wondered how much the Internet weighs? I don’t mean the weight of the hardware like computers and cables and servers. I’m talking about the weight of the information itself.
The answer might surprise you. According to a recent study by physicist Melvin Vopson, the total mass of all the data we produce in a year is equal to the mass of just one E. coli bacterium.
I suppose this is a relief. I won’t have to use the spare bedroom to store my daughter’s TikTok videos. And data centers like this one Facebook is currently building in Singapore won’t blanket the planet.
Credit: Facebook
But this is not the end of the story.
Because Dr. Vopson didn’t stop there. He took this finding and crunched some more numbers to see how things play out in the future as the amount of data continues to multiply.
And this is where it gets really interesting.
About 40% of the world’s population still hasn’t come online, and when they do, they will join us at an ever-expanding Internet buffet, where we’re gorging on livestreams, 5G, augmented reality, virtual reality, multi-player gaming, video and new sources of data that are being dreamed up all the time.
According to Dr. Vopson, this tsunami of new information is going to create a problem. If we assume that digital information increases at a rate of 50% a year, it will only take a couple centuries for the mass of all this information to equal half the mass of the entire planet.
In other words, even though all of our new data this year will only weigh as much as a single bacteria, we are actually on track to turn half the planet into digital bits and computer code in roughly 200 years.
“We are literally changing the planet bit by bit.” — Melvin Vopson.
This is a lot to digest. My first thought was, well, maybe information won’t grow 50% a year.
So Google took me to this frequently cited study by the International Data Corp, which says the world’s total data will grow from 33 zettabytes in 2018 to a hefty 175 zettabytes in 2025.
That turns out to be more than a five-fold increase in seven years, or roughly 27% compound growth a year.
Credit: IDC
Okay, so what does this all really mean?
For now, nothing, because things don’t get crazy for a while. At 50% growth a year, Vopson says we will have only 1 kilogram of digital information by the year 2070. The law of accelerating returns will only really start having an impact much later.
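Vopson’s projection is easy to sanity-check with a few lines of code. The sketch below is only a rough illustration, not his actual model: it takes the article’s “1 kilogram by 2070” figure as a starting point, assumes the same 50% annual growth, and supplies a standard value for Earth’s mass (a constant the article does not give) to ask when the running total first exceeds half the planet:

```python
EARTH_MASS_KG = 5.972e24  # standard approximate mass of Earth (not from the article)

def year_information_outweighs(fraction=0.5, start_year=2070,
                               start_mass_kg=1.0, growth=1.5):
    """Year in which the information mass first exceeds the given
    fraction of Earth's mass, under simple compound growth."""
    mass, year = start_mass_kg, start_year
    while mass < fraction * EARTH_MASS_KG:
        mass *= growth
        year += 1
    return year

print(year_information_outweighs())  # → 2209
```

Under those assumptions the crossover lands around the year 2209, consistent with the article’s “roughly 200 years,” even though the absolute numbers are only order-of-magnitude guesses.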
But at some point, humanity will have to address the problem. Perhaps we learn how to store information differently, or we create computers off-world.
Maybe the moon can be turned into a giant computer — we wouldn’t have to worry about keeping it cool.
Or perhaps Elon Musk is right and we can’t stay on Earth forever. If we are to keep pursuing knowledge and ensure the survival of humanity on a planet that is one asteroid strike away from extinction, then our future is among the stars.
There are no good answers now. But it’s probably time to start asking the question. | https://medium.com/descripter/when-the-planet-becomes-a-computer-5eedf09d831d | ['Craig Brett'] | 2020-08-17 11:24:24.985000+00:00 | ['Humanity', 'Internet', 'Data', 'Future Technology', 'Science'] |
Artificial Intelligence: Synergy or Sinnery?
Will the advent of A.I. allow us to embark upon a complete overhaul of traditional labor structures?
This is a question that comes up less frequently than others and one that has an answer that is wholly dependent on whether or not we’d like to take an optimistic or pessimistic view.
To phrase it another way: A.I. can be seen as the harbinger of an age in which humankind can, for the most part, finally unshackle itself from the toils of labor. Conversely, it can also be regarded — and often is — as an enormous threat to employment, set to disrupt almost every industry and cause massive job loss.
Assuming an optimistic perspective, it’s certainly an exciting proposition, one that would have to be supplemented with some measure of universal basic income for everyone, or some completely new way by which members of society can accumulate resources. While a world where humans need no longer work (again, for the most part) and are set free to pursue their individual endeavors seems wholly infeasible, it is nonetheless a tantalizing prospect.
Preparing for a world without work means grappling with the roles work plays in society, and finding potential substitutes. First and foremost, we rely on work to distribute purchasing power: to give us the dough to buy our bread. Eventually, in our distant Star Trek future, we might get rid of money and prices altogether, as soaring productivity allows society to provide people with all they need at near-zero cost. — Ryan Avent, The Guardian
Supposing one were to take a pessimistic perspective, the threat of soaring unemployment rates is all too real. We’ve already seen the loss of jobs brought about by automation in the workforce, and A.I. poses the most menacing danger of all. The darkest estimates project the loss of half of all current jobs to automation and A.I. — and even if that figure is exaggerated, it is certainly drastic enough to warrant considering alternative systems of wage disbursement in their entirety. | https://medium.com/hackernoon/a-i-synergy-or-sinnery-3eeb2a2c8d3 | ['Michael Woronko'] | 2019-02-25 11:41:00.852000+00:00 | ['AI', 'Philosophy', 'Technology', 'Artificial Intelligence', 'Elon Musk'] |
Time Machine Conference
Imagine an international organisation developed from an alliance focused on cooperation and the exchange of technology, science, and cultural heritage. The main basis for achieving this goal? Providing access to a collective memory, supported by technological development, to broaden our knowledge of the past, present, and future of us all. This is the main mission of the Time Machine Organisation.
The European initiative has large-scale research projects and powerful directing agents and partners. This has been pushing the boundaries of scientific research in Information and Communication Technologies, Artificial Intelligence, and Social and Human Sciences, always looking for new ways to develop technologies that help us to have a better and deeper insight into the data that makes up our history, and thereby widens our horizons of possibilities for the days to come.
Seeking to show its results and to stimulate bridge-building between the most diverse market and knowledge sectors (academia and research, civil society, small and medium businesses up to large corporations, specialists in different technologies, etc.), the Time Machine Organisation will hold its 2019 international conference on October 10 and 11 in Dresden, Germany.
Join the conference: https://conference.timemachine.eu/
Throughout the event, we will be able to learn about the innovations developed by the Digital Humanities Lab, as well as the results of the Local Time Machines project, a distributed enterprise that seeks to digitize the history of several European cities. The digitization effort involves not only documents and official records but also everything that social production has created throughout the history of countries and cultures, including objects, artwork, urban evolution, and so on. Such digitization involves the largest computational simulation process ever performed. Keep in mind: the entire collection is available for free access as a database, as well as an interactive simulation in which all these collections can be explored.
Local Time Machine for Venice
Although the project started in Europe and had first access to local data, it’s important to point out that the idea is not to promote Euro-centered work telling a “single history,” or to be bound to a single territory. The idea is precisely to piece together multiple pasts that allow us multiple futures.
Seize the chance to experience first-class speakers from world-leading science, technology and cultural institutions discussing the potential of cultural heritage data and technology for various domains. Take the unique opportunity to connect with internationally renowned representatives from high-profile businesses, Ministries and funding bodies as well as researchers and scientists from academic and cultural institutions.
Source:
https://backdoorbroadcasting.net/2019/01/the-venice-time-machine-project-digitising-heritage-in-time-and-space/
https://www.intotheminds.com/blog/en/visit-to-epfl-artlab-featuring-leading-big-data-projects/ | https://medium.com/torustimelab/time-machine-conference-756480e45f5c | ['Torus Time Lab'] | 2019-10-03 10:54:03.322000+00:00 | ['Heritage', 'Technology', 'Time', 'Time Travel', 'Museums'] |
AI for marketing: how AI will skyrocket your digital performance
AI is now changing the whole world, and the marketing sphere is no exception. Creating a powerful marketing strategy that resonates requires a good understanding of human psychology and some empathy. However, much of the routine marketing we are all used to is just a set of algorithms. And no human can match the success of AI when it comes to managing algorithms. So, let’s dive deeper into the prospects of AI for marketing.
But first, let’s look at what AI is.
For a long time, developers built algorithms they could fully explain, using explicit conditions: “if this → then that”.
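As a toy illustration (the rules and function name here are invented, not taken from any real product), such a fully explainable algorithm might look like this in Python:

```python
# A hand-written classifier: every decision traces back to an explicit,
# human-readable rule. (The rules themselves are made up for illustration.)
def label_message(text: str) -> str:
    text = text.lower()
    if "unsubscribe" in text or "limited offer" in text:
        return "promotional"
    if "invoice" in text or "payment due" in text:
        return "billing"
    return "other"

print(label_message("LIMITED offer just for you!"))  # → promotional
```

You can read off exactly why any message got its label, which is precisely what neural networks give up.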
However, some problems are too difficult for people to process and understand. The more data humanity has been accumulating, the more difficult it was to cope with all this data. Time has come to create neural networks!
How do these networks function? Nobody knows exactly, not even the people who create them.

But here is a very approximate mechanism of how a neural network is created, which can give you some rough understanding of why AI is so powerful.
Let’s imagine that you want to build an ANI (artificial narrow intelligence) that will distinguish different images. Say, pics of a table and those of Trump.
To create such a neural network, you build a builder-bot that constructs learner-bots to distinguish between the images, and a teacher-bot that will test them. Neither the builder-bot nor the teacher-bot can itself differentiate between images of furniture and the president of the US.

The builder-bot builds learner-bots, but it is not even skillful in its own trade and wires them up with random connections.

These randomly built bots then go to the teacher-bot. How does it teach the little learner-bots if it can't understand what is depicted in the photos?
Well, it doesn’t teach, it tests. A human being gives the teacher-bot a bunch of labeled photos, like test questions and answers to them. The special feature of a neural network is that it needs a real lot of information to learn.
If a human child can understand that it is dangerous to stick a hand into the fire, having done it a couple of times, neural networks are much dumber when learning. If the neural network was a child, to get an understanding of such a simple thing, it would have to touch a hot thing like 10 thousand times or even more.
That’s why AI creation was impossible before the era of Big Data. But, wait! We now have much data yet how to label such an amount?
Though there are some tools to make the labeling process easier. this process is still manual. Thousands of teams around the world label data to enable, for instance, autonomous vehicles’ existence. And by the way, you help with this matter too, while choosing the pictures where traffic lights or crossroads are depicted offered you instead of captchas by Google.
So, this labeled data goes to the teacher-bot and tells it what is depicted in the pictures. The learner-bots then try to determine where the table is and where Trump is.
Actually, they rather guess, than determine.
After the teacher-bot has checked the results, the builder-bot keeps the learners with the best results and recycles the others. Then the builder-bot creates new learner-bots similar to the most excellent ones, with some changes, and sends them back to school again. Rinse and repeat.
So, it’s kinda natural selection in action. After every round of testing, the best bots survive and improve, while others are destroyed. The loop is repeated until the results of the best learner-bot are close to perfection.
This excellent learner-bot exceeds all expectations of its creator: it distinguishes between a table and Trump even when a human being fails to do so, and it acts in ways its creator can't understand (if you've ever watched recorded chess games with AlphaZero, you know that humans don't play chess like that). This is how AI is born.

And if it acts better than people in every possible way, let's see what AI can do for marketing.
1. Enhance your content
People often say that content creation is a purely human skill and that AI can't generate great content because it lacks an understanding of aesthetics and psychology. I don't see any problem here and, though I am a writer and it saddens me, I believe AI will do better than humans both in literature/journalism and in content/copywriting.
A famous American writer Kurt Vonnegut wrote in “The Sunday Herald” newspaper a year before his death:
“If I should ever die, God forbid, let this be my epitaph: THE ONLY PROOF HE NEEDED FOR THE EXISTENCE OF GOD WAS MUSIC”.
He was an author but still believed that music, unlike literature, is divine.
When I’ve been telling my dad, who is a big music lover, about the power of the AI, he told me: I will believe in its power only when it can create music like the one of Chopin.
Later, I’ve persuaded my dad to listen to a long-forgotten Chopin’s musical composition, and he was fascinated. What a genius Chopin was! — he exclaimed.
The thing is this “long-forgotten Chopin’s piece” was written not by Chopin but a musical intelligence Emmy created by a composer and scientist David Cope. It just used algorithms it determined in Chopin’s style to make a composition similar to Chopin’s one.
David Cope doesn’t consider that Emmy will threaten the work of human composers. It will just help them to create better music.
Yet, maybe one day Emmy will win an Emmy award, who knows.
Why I am telling you this story?
Because if marvelous music can be of a non-divine origin like Vonnegut thought, but of an algorithmic one, everything else definitely can be too. Including content.
I will elaborate more on AI-generated content in one of my following posts. In any case, AI will optimize content delivery through SEO.
2. Optimize content delivery
Every day we interact with such an SEO AI that tries to organize existing content effectively. I am talking about Google with its RankBrain algorithm, which allows bloggers not to stuff their articles with keywords, as they did before. RankBrain independently decides where exactly results should be ranked in organic search, based on natural language processing.
To make this decision, the RankBrain algorithm analyzes a bunch of data and creates a ton of different criteria that matter to a higher or lower extent. Of course, Google now gives some tips to rank your materials higher such as “be an authoritative trustworthy expert”, but, as you can see, this advice is pretty vague.
That’s because RankBrain is a classical AI, and nobody has the ultimate knowledge about how it works. However, an outstanding SEO specialist Brian Dean figured out quite successfully how to rank for the algorithm to boost your SEO results.
In any case, if you want your content to rank high in Google, you have to spend a bunch of time making this happen. Even more time, than you spend creating the content. And nobody can guarantee that your efforts will pay off.
Here are just a few operations that you have to do for the on-page SEO.
1. To analyze market trends and popular content
2. To analyze top-performing articles and to get insights from them
3. To create the semantic core of your site
4. To brainstorm keywords which align well with your semantic core and to optimize your post for them
5. To find keywords your competitors rank for and you don't
6. To analyze the search volume and the SEO difficulty and to choose the most appropriate keywords among the generated ones
7. To create an outline of the post, preferably including related queries people are looking for
8. To optimize your article for the chosen long-tail keywords and to make your article as relevant as possible to the main one
9. To analyze regularly what keywords your article ranks for and to update the article by adding those keywords to it
And there is also off-page SEO which is more important and difficult than the on-page one.
As you can see, SEO includes A LOT of manual work. Now, imagine that all this work is done by a neural network, and you can concentrate just on writing and be sure that your positioning will be alright.
Let’s take an example of the Alli AI SEO tool and see what it can do for you already now. It creates an SEO strategy for you, based on all the Google algorithm updates, so you won’t have to read a lot of articles on SEO, take many courses, such as the Ahrefs very expensive one, and figure out how to change your strategy to keep up. It also offers you code and content optimizations to generate more organic traffic. Besides, you can track your progress in the same tool.
Given that SEO takes at least half of your time and distracts you from content creation when you're growing your blog, using such an AI-based SEO tool seems to be a great deal.
3. Help generate a bigger volume of higher quality leads
When we’re talking about leads, we mean prospects that can potentially become customers. So, leads are people who will likely be interested in your product or service.
How do you identify potential customers among the people who, say, visited your site or read your company's blog? First, you have to identify your target audience, manually create your target personas, and reach out to them at the right time with the right offer. And, of course, you can make a mistake with your targeting, in which case all your efforts won't give a tangible ROI.
61% of marketers say generating traffic and leads is their top challenge. And sales reps spend as much as 80% of their working time calling and emailing potential customers, and, hence, only 20% of time closing deals.
Probably, this is the reason why everybody hates generating leads.
But in the near future, instead of handling all of these repetitive and routine tasks, you will be able to concentrate on more important activities by assigning lead generation to a neural network.
It will be able to create more specific target personas, based on the problem your company solves, analyze your client database detecting the clients that are ready to make a purchase or will be ready to do it soon, and nurture them the right way.
Let’s take Leadberry — one of the best modern lead generation tools, which is the best rated Google technology partner app in the world. What it can do for you already now?
Not only can Leadberry unveil which executives from which companies visited which pages of your site, but it also provides you with their contact information across various channels, from phone numbers to Twitter accounts (Google has the biggest database, so you can imagine the power this app hands you).
The app also evaluates how serious a visitor’s interest is regarding your services/products based on a set of algorithms, so all the spammers are filtered out automatically.
Besides, the software offers you leads similar to the ones you've already generated.
And finally, when you've gathered your list of prospects in Leadberry, you can connect it with your favorite CRM systems or email automation tools.
4. Expand your audience in social media
Every second person on Earth now has a social media account. And 76% of American consumers have purchased a product after seeing a brand's social post.
So, social media marketing is a great form of customer service as well as a very effective way to spread awareness about your company using a carefully crafted content strategy.
AI can help you craft it based on the insights from big data. Not only will it help you to define what content in your niche gets the most likes, comments, and shares but also analyze the performance of your posts. This will help you to define messages your target personas resonate with and to build your social media strategy around these messages. Hence, you will receive a massive competitive advantage.
You’re used to gathering data and get such insights from the results of your top-performing competitors manually. No need to say that uncertainty is great in this case. So, why not assign these boring tasks to somebody who won’t be bored and will be much more accurate than you are, and spend your time more wisely?
For instance, using the SocialBakers platform, you can easily uncover the interests and content preferences of your target audience, as well as the most authoritative influencers that they follow. The platform also allows you to manage your publications across various social media in one calendar, using just your smartphone. And after your content is published, the platform gives you quite detailed analytics on how it performs.
5. Create influencers of your own
I’ve already been telling you about CGI influencers in this article.
And AI can help you create influencers of your own. They can be not just photos but even popular video bloggers. Remember that video generates more engagement than any other content type on Instagram.

To create your influencer character and its videos, you can make use of AI-driven computer-generated graphics. NVIDIA has already been working for several years to make AI capable of doing the heavy lifting in complex graphics creation instead of human artists.
Besides, now the Deep Fake algorithm allows creating some cool fakes very easily. Guess who’s starring here as Marty McFly and Doc!
So, why not suppose that in the future Deep Fake will be able to generate not only fakes but brand-new characters based on the criteria you prescribe?
For instance, China has already created an AI presenter.
Yes, maybe he is not very emotional, but he still delivers the news pretty comprehensively. I am sure that in a couple of years we'll witness a CGI revolution and a lot of such influencers will appear.
6. Personalize your campaigns
In 2019, members of the Association of National Advertisers selected “Personalization” as the Word of the Year.
There are quite serious reasons for that. 74% of marketers consider targeted personalization the reason for increased customer engagement. And they are right, as half of Millennials and Gen Zers ignore communications from companies that don’t personalize their content.
Research from McKinsey even shows that brands that are good at personalization deliver 5 to 10 times the marketing ROI & boost their sales by as much as 10% as compared to companies that don’t personalize.
To my mind, personalization has several dimensions, and all of them can be optimized with the help of AI.
The first dimension is segmentation. Today, most marketers define several segments when targeting and, ideally, create a targeted sales funnel for every category of users.

This kind of personalization, as you can see from the stats above, can make these efforts worth every penny. But what if the segmentation were much more precise (because a larger dataset is considered) and there were 1,000 segments of the same audience instead of 10? How many more results it would bring!
Now, you probably think that it would be quite difficult and expensive to process all these segments and create email sequences for all of them. But here, we have another dimension of personalization — natural language processing (NLP).
Over time, AI keeps getting better at writing in natural language. It can analyze the patterns of email campaigns that once converted amazingly and create one for your company based on the analyzed data. Fortunately, the texts of some legendary email sequences can easily be found online, so you will be able to gather a great collection for your NLP neural network.

Besides, you can use NLP to localize your content. Localization is probably not such a popular topic to discuss, but it can still make or break your game. For instance, 60% of local consumers in non-English-speaking countries are unlikely to buy from English-only websites.
Will you click to read the article “How to buy a bindaas Porsche for just 755152 rupees”? I mean, even if you’re a fan of Porsche and are currently going to buy a car.
That’s because you’ll just lack a desire to google what means “bindaas” in Hindi slang and to convert rupees into dollars. Nobody wants to read the article to understand the title of which he had to google twice. So, the article about how to buy a luxury Porsche for just $10000 that had a viral potential will remain of-the-radar.
When you localize your marketing campaigns, i.e. their language, including the local slang, use correct date and currency formats, take into account local behavioral patterns, you show respect to your audience. And your prospects will return a thousandfold in gratitude.
You will even be able to move one step further and create personalized digital sales representatives that will engage your clients in dialogue, using both a conversational tone and respect for every little detail you know about your customers. They will also send emails at the right time, craft awesome subject lines to stimulate clients' curiosity, and be there for your clients 24/7. This will result in closing a great number of deals. And this is our near future!
7. Enhance your clients’ user experience with chatbots
All the points about personalization correlate with AI-based chatbots even better than with email sequences.
The thing is that everybody is tired of getting limitless personalized or “personalized” emails from different companies. So, at best, just 20% of your audience will open your email, no matter how much time you’ve spent to make it perfect.
Now, compare this with 88% open rates you can achieve messaging your prospects via a chatbot.
Not only can chatbots achieve high open rates and communicate with your prospects in a personalized way, but they can also quickly answer up to 80% of clients' routine questions, which is important to 55% of Internet users. By 2023, consumers and businesses will have saved over 2.5 billion customer service hours. Hence, businesses will save 30% of the budget they spend on customer service.
While usual rule-based chatbots can respond to 80% of obvious questions with answers prepared in advance, they can't answer any questions outside of the defined rules, and they cannot learn to do so through interactions. They only perform the scenarios you train them for, which can reduce the extent of personalization and turn off some of your clients.
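That limitation is easy to see in code. Here is a minimal sketch of a rule-based bot; the rules, wording, and fallback below are all invented for illustration:

```python
# A minimal rule-based bot: it only performs the scenarios it was given.
RULES = {
    "opening hours": "We are open 9am to 6pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3 to 5 days.",
}

def rule_based_reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # Anything outside the defined rules falls through: the bot cannot
    # learn a new answer from the interaction, it can only give up.
    return "Sorry, I didn't understand that. Let me connect you to a human."

print(rule_based_reply("When do refunds arrive?"))       # matches the "refund" rule
print(rule_based_reply("Can my cat use your product?"))  # no rule: falls through
```

No matter how many conversations this bot has, its answers never improve; extending it means a human writing more rules.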
On the other hand, AI chatbots understand the context and intent of a question before formulating a response. Due to this understanding, they can generate answers to more complex questions and can continuously improve. Besides, they understand many languages and can truly work miracles of personalization.
AI chatbots demand much more data and time to train, but in the long run, an AI chatbot will become an awesome tool for your business. You can base your sales funnel on a chatbot, nurture your client base through it and even turn it into a professional digital sales rep.
To do that you can use a tool like the Conversica system, an already accessible AI sales assistant. So, what can such a sales rep do for your business?

It follows up with leads who showed some interest in your product or service and engages unresponsive leads in natural language, doing it either via email or text exchanges.
Also, Conversica sends invitations to both online and live events to your clients and can do that in multiple languages. After having several interactions with your clients, the system identifies the hot leads, whose contacts are then given to your sales team, and updates your CRM system with new data obtained from the interaction.
Besides, Conversica AI stores all the insights on where leads are getting stuck on its dashboards and in its reports, so that you can understand pretty quickly which aspects of your sales funnel should be improved.
So, using software of this kind, you allow your sales team to focus on closing deals instead of being overwhelmed with tasks that are necessary but not so efficient for your business.
Quantum computers will change the AI game completely
Quantum computers may soon be used for scientific and commercial advantage, possibly even in 2020. And quantum computers are pulling quantum machine learning development along with them.
What’s the cool thing about it?
From the technical point of view, in classical computing, data is stored in physical binary bits. This means that a bit is either in a 0 state or a 1 state and it cannot be both at the same time.
Quantum computing uses properties of subatomic particles that can be in two states at the same time, and quantum bits ("qubits" for short) can be a combination of both the classical 0 and 1 states. This property allows much more information to be stored in a qubit than in a regular bit.
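As a rough illustration of that "both at the same time" property, here is a single qubit simulated with ordinary floats on a classical machine (a sketch of the math, not real quantum hardware):

```python
import math

# A qubit state is a pair of complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, and |b|^2 of measuring 1.
zero = (1 + 0j, 0 + 0j)  # the classical-like state |0>

def hadamard(state):
    # The Hadamard gate turns a basis state into an equal superposition.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

superposed = hadamard(zero)
print(probabilities(superposed))  # roughly (0.5, 0.5): equal chances of 0 and 1
```

Simulating n qubits classically takes 2**n amplitudes, which hints at why quantum hardware can work with state spaces far too large for ordinary machines.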
How will quantum computers revolutionize AI? Remember how, at the beginning of this article, I described the process of machine learning? It takes a bunch of data (the more, the better) and, hence, A LOT of time to process it. Qubit features will allow us to speed up the machine learning process to a high extent and allow AI to solve VERY COMPLEX problems with tiny or no inaccuracy.
For instance, now Google has a quantum computer they claim is 100 million times faster than any of today’s systems. Just imagine the opportunities it creates for machine learning!
So, even though the use of AI in marketing can now seem a luxury, in the near future it will be a necessity if you want to beat your competitors or, at least, be competitive enough. So, it's time to start thinking in this direction to keep up.
What do you think about the prospects of AI for marketing? Have I mentioned all the cases? Or do you have anything to add? If you do, share your thoughts in the comments. | https://medium.com/marketing-and-entrepreneurship/ai-for-marketing-how-ai-will-skyrocket-your-digital-performance-9b1a5174f597 | ['Jane Dolskaya'] | 2020-04-15 11:06:42.823000+00:00 | ['Technology', 'Marketing', 'Digital', 'Digital Marketing', 'AI'] |
1,562 | 8 Web & Mobile Application Development Trends to follow in 2021 | Photo by Sigmund on Unsplash
Newly developing web technologies are ruling the world of web application development, and many more are yet to come.

To make your business win the race in the long run, you need to move ahead with the rising new trends instead of sticking with the older ones.
Web application development trends have led to a drastic change, and almost all businesses now own a stunning website to achieve success in this competitive world.
All the developers are putting in their best effort for web app development as well as mobile application development.
Web and mobile application development is the fastest-growing industry, and it shows no signs of slowing down in the future.
After conducting thorough research, the application development trends that are going to dominate in the year 2021 are listed as follows:
#1 Motion UI
Motion UI is a library that is used for creating CSS transitions and animations.
Motion UI can play a major role in 2021 in making a web application more interactive and eye-catching.

It will help deliver a superior user experience compared to GIFs and flash-enabled websites.
This can be one of the most amazing Web application Development trends to go for in 2021.
Motion UI will enable web developers to add styling and make their site unique among thousands of others with static ones.
#2 Push Notifications
Push notifications are emerging as one of the most powerful tools for any mobile application, website, or PWA.

They have led to higher conversion rates.
A push notification is a quite simple way to inform the user about any updates or when the new version will be introduced without relying on an email campaign or expecting the user to check the site.
#3 Online support through Chatbots
Chatbots are useful computer programs that can interact directly with customers, in the form of a customer service chatbot, without any intervention from a human manager.
These chatbots will be useful to ensure quality in the services provided to the users of the website.
This will make things quite easy for e-commerce developers, who need to build websites with proper online assistance.
#4 Voice search for fast answers
Voice search is one of the best capabilities you can add to your mobile or web application.
Today, people want to save as much time as possible so that it can be invested in staying ahead of potential competitors.

Voice search can act as a boon for them: potential customers will just use voice search to find a product, get a prompt reply, and the job is done.

They can do this easily even if their hands are engaged in some other work.
#5 Virtual Reality and Augmented Reality
These are quick, easy, and interactive development trends that are becoming popular not only in the entertainment and education sector but for all types of websites and apps.
Virtual reality can be used in a mobile application to give a real feel of what will actually be provided upon purchase.
Augmented reality, on the other hand, is gradually emerging by adding a layer of computer-generated information to real life.
It can be used to enhance the user’s digital experience.
#6 Cloud Hosting
The world must take advantage of what the cloud provides in 2021. The cloud will help cut hosting costs, improve loading capacity, and streamline business operations.
With the help of cloud technology in 2021, you would be able to develop powerful apps that would run directly on the cloud.
The cloud is helping to secure mobile application transactions while making them faster and more reliable.
#7 Cybersecurity
Security always has been a primary concern in the case of web development or mobile app development.
Threats of software compromise and data loss are the things that need to be looked at before developing an application in 2021.
It is the responsibility of the organization to ensure that the application is not vulnerable to any type of threat and is safeguarded properly.
#8 Low code development
The low code development platform can become a major trend in 2021.
The industry is starved of the skills and expertise needed to develop web applications as well as mobile applications.

So low code development can actually help a company create a cutting-edge website without any extensive hand-coding.
Developers also would not waste time on reimplementation and continuous testing. This idea can be used to build a quicker and better web or mobile application. | https://medium.com/@hostcodelab/8-web-mobile-application-development-trends-to-follow-in-2021-4368ed1afcb6 | ['Hostcode Lab'] | 2021-02-22 11:50:28.033000+00:00 | ['Application Development', 'Web Application Design', 'Mobile Application Trends', 'Website Development', 'Information Technology'] |
1,563 | Converting HTML into PDF in Java. I recently ran into the need to convert… | Converting HTML into PDF in Java
Using Open-Source Libraries: Jsoup and Flying Saucer
I recently ran into the need to convert an HTML file into a PDF file in Java using free, open-source libraries. In this post, I will walk you through my setup process:
Installing Maven using Homebrew and configuring $JAVA_HOME
Setting up a Maven project in IntelliJ and installing jars needed for our code
Code to convert HTML into PDF
Installing and Configuring Maven
If you haven’t already, install Homebrew. Then we will install Maven with Homebrew.
brew install maven
Add the following line to ~/.bash_profile
export JAVA_HOME=$(/usr/libexec/java_home)
Then in the Terminal, run:
source ~/.bash_profile
To check if $JAVA_HOME is set correctly:
echo $JAVA_HOME
This should give you something like /Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home
Configuring a Maven project in IntelliJ
Create a new project in IntelliJ and select Maven. From this step on, there are two common errors you may encounter, and I will show you how to resolve them.
Error: release version not supported
Create a Java file in src/main/java and have it print out Hello World!
Run the program and you may run into the error Error:java: error: release version 5 not supported if you are using JDK 8+. There is a post on Dev.to about resolving this error.
In the pom.xml file, add the following lines: (1.8 for JDK 8, 1.11 for JDK 11, 1.13 for JDK 13, etc.) I am using JDK 13.
<properties>
<maven.compiler.source>1.13</maven.compiler.source>
<maven.compiler.target>1.13</maven.compiler.target>
</properties>
Shift + Cmd + A (on Mac) or Help > Find Actions to bring up the Actions menu. Type Reimport All Maven Projects.
Now rerun the Java program to ensure that Hello World is printed correctly.
Next, let’s add the jar files our HTML to PDF code depend on in pom.xml. Add the following lines.
Maven Error: invalid target release
In IntelliJ’s Terminal, run this command to install the dependencies.
mvn install
This may fail with an error message saying error: invalid target release: 1.13 . This blog post has a solution for this.
Add the following plugin to pom.xml and do Reimport All Maven Projects. Note that the source and target should be 1.8 for JDK 8, 1.10 for JDK 10, but 11 for JDK 11, 12 for JDK 12, 13 for JDK 13, etc.
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<source>13</source>
<target>13</target>
</configuration>
</plugin>
</plugins>
</build>
Then mvn install should finish without errors.
Converting HTML to PDF
We need two steps: first, convert HTML to XHTML with Jsoup; second, convert XHTML to PDF with Flying Saucer. XHTML is a syntactically stricter version of HTML. For instance, XHTML doesn't allow unclosed tags like <img src=''> ; every element must be explicitly closed, as in <img src='' /> .
HTML to XHTML
Add Jsoup as a dependency to pom.xml:
<dependency>
<groupId>org.jsoup</groupId>
<artifactId>jsoup</artifactId>
<version>1.13.1</version>
</dependency>
Import Jsoup:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
Create a method to convert HTML to XHTML:
private static String htmlToXhtml(String html) {
Document document = Jsoup.parse(html);
document.outputSettings().syntax(Document.OutputSettings.Syntax.xml);
return document.html();
}
XHTML to PDF
Add Flying Saucer as a dependency to pom.xml:
<dependency>
<groupId>org.xhtmlrenderer</groupId>
<artifactId>flying-saucer-core</artifactId>
<version>9.1.20</version>
</dependency>
<dependency>
<groupId>org.xhtmlrenderer</groupId>
<artifactId>flying-saucer-pdf-openpdf</artifactId>
<version>9.1.20</version>
</dependency>
Import Flying Saucer:
import org.xhtmlrenderer.pdf.ITextRenderer;
import java.io.*; // for file I/O
Create a method to convert XHTML to PDF:
private static void xhtmlToPdf(String xhtml, String outFileName) throws IOException {
File output = new File(outFileName);
ITextRenderer iTextRenderer = new ITextRenderer();
iTextRenderer.setDocumentFromString(xhtml);
iTextRenderer.layout();
OutputStream os = new FileOutputStream(output);
iTextRenderer.createPDF(os);
os.close();
}
Optionally, you can register custom fonts used in the HTML. Right after the line ITextRenderer iTextRenderer = new ITextRenderer(); , add:
iTextRenderer.getFontResolver().addFont("MyFont.ttf", true);
Putting Both Methods Together
public static void main(String[] args) throws IOException {
String html = "<h1>hello</h1>";
String xhtml = htmlToXhtml(html);
xhtmlToPdf(xhtml, "output.pdf");
}
I downloaded a font called Butterfly.ttf and used it in my HTML.
<html>
<head>
<style>
@font-face {
    font-family: "Butterfly";
    src: url("Butterfly.ttf");
}
.butterfly {
    font-family: "Butterfly";
}
</style>
</head>
<body>
<h1>Hello world</h1>
<img src="https://www.w3schools.com/w3css/img_lights.jpg">
<p>Regular text</p>
<p style="color:red">Red text</p>
<b>Bold text</b>
<p class="butterfly">Fancy font</p>
</body>
</html> | https://medium.com/javarevisited/html-to-pdf-in-java-using-flying-saucer-f73b70d9014d | ['Lynn Zheng'] | 2020-09-06 10:07:12.374000+00:00 | ['Java', 'Maven', 'Technology', 'Coding', 'Open Source'] |
1,564 | My Road to Blockchain — A Prequel. After taking the college entrance exam (GSAT) in high school, I stopped writing articles (essay writing really gives me a headache), and my writing skills have also faded over time… | Two months. Online. Self-Learning to Mastery. Shortest-path to Enter the Blockchain Era.
| https://medium.com/turing-chain-institute-%E5%9C%96%E9%9D%88%E9%8F%88%E5%AD%B8%E9%99%A2/%E6%88%91%E7%9A%84%E5%8D%80%E5%A1%8A%E9%8F%88%E4%B9%8B%E8%B7%AF-%E5%89%8D%E5%82%B3-36506bf6e923 | ['Hank Huang'] | 2019-09-27 09:37:30.980000+00:00 | ['Technology', 'Cryptocurrency', 'Bitcoin', 'Blockchain', '區塊鏈'] |
1,565 | v.systems / VSYS Staking Guide. Put your VSYS to work today! | v.systems / VSYS Staking Guide
The v.systems mainnet launched on November 27, 2018.
To lease your VSYS to the Staked supernode, please use the following address:
ARNK3hFsaQfGHVqD94wZxwDb6s2YfXWc1Cv
Leasing is non-custodial and supernodes cannot spend your VSYS. Staked pays 90% of the block rewards to stakers, and offers the industry’s only 100% SLA on block production.
Instructions for leasing VSYS are included below.
Summary
v.systems is a next-generation blockchain database for decentralized applications. Introduced in November 2018, it has since been dedicated to delivering an advanced blockchain platform with high stability, scalability and efficient performance in an expanded environment.
Staking Token Economics: (1/9/20)
Rewards are distributed upon participation in minting. Token holders are able to lease VSYS to any supernode candidate via the v.systems official wallet. There are no minimum requirements or lock-up period when leasing VSYS.
Staking Instructions
1. Download the VSYS wallet here.
2. Navigate to the minting section and select the “Start Leasing” button.
3. Enter the Staked Supernode address ARNK3hFsaQfGHVqD94wZxwDb6s2YfXWc1Cv in the Recipient field, the amount of VSYS you want to lease in the Amount field, and then click Continue.
4. On the next screen, click on the Confirm button to broadcast your leasing transaction.
5. Staked will pay rewards every week on Wednesdays at 12 PM EST. We offer a market-leading 10% commission fee and highly reliable service.
Having trouble getting started? Get help or find time to speak with the Staked team by emailing cole@staked.us. | https://medium.com/@staked/v-systems-vsys-staking-guide-4b61a040985d | [] | 2020-04-03 20:10:54.385000+00:00 | ['Blockchain Technology', 'Crypto', 'Proof Of Stake', 'Cryptocurrency', 'Blockchain'] |
1,566 | TAPMYDATA PLATFORM REVIEW | Over the years, we have been privileged to have people, establishments, and organizations there for us, ensuring that while we are busy with the issues of life we do not lose our memorable past. There are companies whose sole duty is to collect, compile, and safeguard information, or data, about people.
Our main emphasis today is how we, as individuals, can come to realize that such establishments exist and be in control of the data they hold about us, and, more interestingly, how possessing that data allows us to earn for ourselves. That is the goal and purpose of this platform, known as TAPMYDATA.
Once we come to know that such an establishment exists, we can ask to be allowed access to it, knowing full well what we stand to gain from owning our data. Having come to an understanding of the benefits this platform has to offer, there is no good reason to delay your enrollment. Use TapMyData to own your data.
EARN CRYPTO FROM YOUR DATA
By requesting your data, you earn what we call TAP and can mint an NFT data license that enables you to become a member of various data pools. In doing so, you earn crypto from your data. The advantages of being in control of your data cannot be overemphasized: you not only have control over your personal information, you also earn as a result of being in control of it. That is what I call a double deal.
DATA REVOLUTION
For years, before people came to know that they could be in control of their personal information, their privacy was invaded and abused many times, and because they did not know the value of their data, they were easily, even seamlessly, sold short. Today, with this platform, that pain is a thing of the past: by owning their personal information, people secure their privacy, use their acquired knowledge to control their data so they are no longer easily sold out, and, through data personalization, bring to book those who violate data rights and access.
The platform also allows its audience to make inquiries; through the answers provided, they can take back data that belongs to them and get to know the value of their data, so that they can no longer be sold short.
SAR OR DSAR
Before the discovery of this platform, companies were in charge of people's personal information and data without their knowledge. The platform makes its audience aware of the possibility of taking control of such information.
Through a Subject Access Request (SAR), also called a Data Subject Access Request (DSAR), a person can ask for access to the personal information or data a company holds about them, leaving the individual in control of his or her information.
Conclusion
Being in control of the information and data about you gives you an edge in making concise, well-articulated decisions on any issue that pertains to you. To me, that is a very big advantage when success is what you are aiming at.
To get extra information about Tapmydata, look up the links below
WEBSITE
WHITEPAPER
TWITTER
TELEGRAM
Authored by Perry
This is “A sponsored article written for a bounty reward.” | https://medium.com/@alaskakobby/tapmydata-platform-review-6817c3ab543e | ['Alaska Kobby'] | 2020-12-17 03:42:03.989000+00:00 | ['Cryptocurrency', 'Finance', 'Altcoins', 'Data', 'Information Technology'] |
1,567 | Building Communities of Practice at FARFETCH | By Hugo Froes, Product Ops Lead
Companies evolve and grow. Practices become more complex and harder to handle. At FARFETCH we noticed like-minded people coming together to discuss or even tackle some of those issues. In some cases we saw how the work done by those communities could feed back into the teams and become part of their day-to-day work.
Taking a cue from Emily Webber on Communities of Practice (CoP), we started exploring how we could bring together more people who share a common interest in a certain topic, and how we could best support them.
This article explores how we built guidelines and resources for anyone interested in starting a community within FARFETCH. How we support those communities when needed, but leave them alone when they’re ready.
We hope that others can take from what we’ve learnt and use/adapt to build similar communities within their own company.
Building a Common Language
One of the first things we felt the need to do was define the categories of CoP so that they may be more easily identifiable for those outside that community.
These definitions are particular to the way we have approached communities and are only a guideline. At any time, as a CoP evolves, they can move into another category or divide into two or more communities.
Communities of Practice (CoP)
Communities of Practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.
The term ‘Community of Practice’ (or CoP) is one of the most common names given to these types of communities, which is why we use CoP as our umbrella term. And so, we’ve established that for us internally a Community of Practice, just by existing, automatically falls into this category.
Guilds
More formal than a CoP, a Guild has a strong fixed objective, which often has repercussions within the company and teams. Often associated with working towards strong standards of practice and/or aligning objectives across business units.
Work Groups
More integrated with work being done within teams, Work Groups have a direct impact on the company. Usually evolving from a CoP or supporting a Guild, the members of the Work Group are looking at the topic on a ground level. They have Objectives and Key Results (OKRs) associated with this work and it’s part of their objectives.
Clinics
The company has grown very quickly in the last few years and there are many challenges in scaling all the teams to their optimal size. How can some of those teams support all those other teams, without having to grow to an unmanageable size?
That’s where we’ve found clinics are useful. Usually, a fixed session that is added to the calendar of everyone that might benefit most from feedback or input from a specific discipline. During these sessions, someone can ask for advice for what to do next or feedback on whether they are on the right track.
Depending on which clinic it is, the advice will be geared towards that subject matter and hopefully helps participants understand what steps to take next.
The User Experience Research clinic, User Interface clinic and Experimentation clinic are examples of some of the existing clinics at Farfetch.
Guiding principles
100% open to everyone
And we mean everyone. Everyone who wants to be part of the conversation may join. If someone feels they can either contribute or be able to extract value from the community for themselves, they may join.
The communities are not exclusive, members only groups.
One of the easiest ways to determine whether you really do have a CoP rather than a different form of regular meeting is to ask whether you’d be happy having anyone from the company drop in and observe if they were interested.
Global first
Our company is spread across various geographical locations, but we don’t want that to limit participation. In fact, we encourage the communities to diversify and gain a richer amount of knowledge contribution.
We still want to improve the way communities interact across time zones as well, thus removing even more barriers.
Everyone has something to contribute
All voices are valid and should be heard. Even the least experienced or people from other areas of expertise have something to contribute.
If someone hasn’t said anything in an entire session, encourage their participation by asking their opinion on something you know they have knowledge about.
Respect each others’ perspective and experience
An open mind is key to achieving great results. We strongly encourage active listening and an openness to try and understand the perspective of others as a way to expand our own ideas.
If someone else’s idea seems strange or out of place, first try and understand the perspective and context. If however, the contribution seems misplaced, try to be encouraging and suggest that it’s a very valid point to remember for later discussion, but it might be a bit off topic in the current discussion.
Share ideas and opinions in confidence
Building on the above principle, one of the most important things about the communities is creating a safe space where everyone feels their opinion can be voiced without judgment and we expect everyone to make an effort to adhere to that ideal.
If someone seems less confident, don’t forget to give some encouraging feedback when they contribute, and if some feedback is needed, that it should be constructive.
Build community knowledge wherever you can.
Communities are incredible and there is nothing better than a group of people sitting down and discussing a topic they love. The big question is what to do with what is discussed? How do you build on ideas until they are mature enough? How does the value of the communities reach those outside the community?
At FARFETCH, we’ve created an open and transparent space on confluence, where every community can both record or access what is being done.
We want the communities to work together. To learn from each other. This space is open to every Farfetcher and we encourage open sharing.
Avoid absolutist positions
Communities are about sharing and learning. We feel that absolutist points of view will be a blocker to a healthy continuous growth.
Assume that there are other ways of looking at an issue. Every idea or solution might make sense in the now or with what the current members in the community know, but in a few months it might change.
Supporting the communities
Guild of Communities
As a support mechanism, we created a Guild of Communities. A small group of people who are passionate about CoPs and who can focus on developing the resources and guidelines for the communities.
The Guild works as facilitators. Helping the communities to get off the ground, connecting to the right people and identifying opportunities.
We also made it clear that the Guild is not meant to run the communities and will not dictate how the communities collaborate and approach the practice they want to improve.
Single hub for everything
One of the first questions we were asked was where do these communities live, since they aren’t owned by any team or business unit?
As mentioned above, we created a Confluence space exclusively for the communities of practice. Within that space, we added all the content and resources available, as well as templates for the CoPs to create their own sub-section.
We now see communities recording everything they discuss and create within those spaces. They may also explore other existing communities to identify opportunities to collaborate or learn from each other.
Where to next?
Much like the way we allow the communities to be created and grow fairly organically, we don't define a success metric for a specific community unless it wants one.
We measure the success of the communities by the number of initiatives they develop that make their way into the practices of the teams, and by how continuously team members extract the value they expect from their community.
When do we suggest ending a community? When all those involved feel the community no longer makes sense. When the vision each person has of that community isn’t aligned with what the community has become.
Participants should always extract value and for that reason we hope to start helping them set personal OKRs attached to the work they do within the community, giving them the time to contribute positively.
More than anything, we want to give the communities the space and support they need to thrive, extract value and evolve in the direction they need to. | https://medium.com/farfetch-tech-blog/building-communities-of-practice-at-farfetch-96359a7fb781 | ['Farfetch Tech'] | 2020-11-19 09:28:28.882000+00:00 | ['Communities Of Practice', 'Collaboration', 'Community', 'Product', 'Technology'] |
1,568 | How To Decouple Data from UI in React | Approach A. Custom Hook
Let’s create a custom hook — useSomeData — that returns the properties someData , loading , and error regardless of the data fetching/management logic. The following are 3 different implementations of useSomeData .
With Fetch API and component state:
With Redux:
With Apollo GraphQL:
The 3 implementations above are interchangeable without having to modify this UI component:
But, as Julius Koronci correctly pointed out, while the data fetching/management logic is decoupled, the SomeComponent UI is still coupled to the useSomeData hook.
In other words, even though we can reuse useSomeData without SomeComponent , we cannot reuse SomeComponent without useSomeData .
Perhaps this is where Render Props and Higher Order Components do a better job at enforcing the separation of concerns (thanks again to Julius for highlighting this). | https://medium.com/javascript-in-plain-english/how-to-decouple-data-from-ui-in-react-d6b1516f4f0b | ['Suhan'] | 2020-12-21 16:47:14.382000+00:00 | ['JavaScript', 'React', 'Technology', 'Software Engineering', 'Programming'] |
1,569 | World Cup sample dapp on MOAC: from start to finish | | https://medium.com/moac/world-cup-sample-dapp-on-moac-from-start-to-finish-13e3ef83617c | [] | 2018-07-31 06:54:48.405000+00:00 | ['Dapp', 'Blockchain', 'Moac Technology', 'Moac'] |
1,570 | Open sourcing Querybook, Pinterest’s collaborative big data hub | An efficient big data solution for an increasingly remote-working world.
Charlie Gu | Tech Lead, Analytics Platform, Lena Ryoo | Software Engineer, Analytics Platform, and Justin Mejorada-Pier | Engineering Manager, Analytics Platform
With more than 300 billion Pins, Pinterest is powering an ever-growing and unique dataset that maps interests, ideas, and intent. As a data-driven company, Pinterest uses data insights and analysis to make product decisions and evaluations to improve the Pinner experience for more than 450 million monthly users. To continuously make these improvements, especially in an increasingly remote environment, it’s more important than ever for teams to be able to compose queries, create analyses, and collaborate with one another. Today we’re taking Querybook, our solution for more efficient and collaborative big data access, and open sourcing it for the community.
The common starting point for any analysis at Pinterest is an ad-hoc query that gets executed on a SparkSQL, Hive, Presto cluster, or any Sqlalchemy compatible engine. We built Querybook to provide a responsive and simple web UI for such analysis so data scientists, product managers, and engineers can discover the right data, compose their queries, and share their findings. In this post, we’ll discuss the motivation to build Querybook, its features, architecture, and our work to open source the project.
The Journey
The proposal to build Querybook started in 2017 as an intern project. During that time, we used a vendor-supplied web application as the query UI. There were often user complaints about that tool regarding its UI, speed & stability, lack of visualizations, as well as difficulty in sharing. Before long, we realized there was a great opportunity to build a better querying interface.
We started to interview data scientists and engineers about their workflows while scoping out technical details. Shortly, we realized most were organizing their queries outside of the official tool, and many used apps like Evernote. Although Jupyter had a notebook user experience, its requirement to use Python/R and the lack of table metadata integration deterred many users. Based on this finding, our team decided Querybook’s query interface would be a document where users can compose queries and write analyses all in one place with the power of collocated metadata and the simplicity of a note-taking app.
Released internally in March 2018, Querybook became the official solution to query big data at Pinterest. Nowadays, Querybook on average has 500 DAUs and 7k daily query runs. With an internal user rating of 8.1/10, it’s one of the highest-rated internal tools at Pinterest.
Feature Highlights
Figure 1. Querybook’s Doc UI
When a user first visits, they‘ll quickly notice its distinctive DataDoc interface. This is the primary place for users to query and analyze. Each DataDoc is composed of a list of cells which can be one of three types: text, query, or chart.
The text cell comes with built-in rich-text support for users to jot down their ideas or insights.
The query cell is used to compose and execute queries.
The chart cell is used to create visualizations based on execution results. Similar to Google Docs, when users are granted access to a DataDoc, they can collaborate with each other in real-time.
With the intuitive charting UI, users can easily turn a DataDoc into an illustrative dashboard. You can choose from different visualization options, such as time-series, pie-charts, scatter plots, and more. You can then connect your visualization to the results of any query on your DataDoc and post-process them with sorting and aggregation as needed. To automatically update these charts, you can use the scheduling options and select your desired cadence. The scheduler can notify users of success or failure. Combined with the templating option powered by Jinja, creating a live updating DataDoc is very quick.
Scheduling and visualization features aren’t intended to replace tools such as Airflow or Superset. Rather, these features provide users a simple and fast way to experiment with their queries and iterate on them. Often, Pinterest engineers use Querybook as the first step to compose queries before creating production-level workflows and dashboards.
Last but not least, Querybook comes with an automated query analytics system. Every query executed gets analyzed to extract metadata such as referenced tables and query runners. Querybook uses this information to automatically update its data schema and search ranking, as well as to show a table’s frequent users and query examples. The more queries, the more documented the tables become.
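The analyzer itself isn't shown in this post, but the core idea, pulling referenced table names out of each executed query, can be sketched with a toy regex-based extractor. (This is an illustrative assumption, not Querybook's actual implementation, which would need a real SQL parser to handle subqueries, CTEs, and quoting.)

```python
import re

# Toy stand-in for Querybook's query analysis: collect identifiers that
# follow FROM/JOIN keywords. A production version would use a SQL parser.
TABLE_REF = re.compile(r"\b(?:FROM|JOIN)\s+([A-Za-z_][\w.]*)", re.IGNORECASE)

def referenced_tables(query: str) -> set:
    """Return the set of table names referenced by FROM/JOIN clauses."""
    return set(TABLE_REF.findall(query))

if __name__ == "__main__":
    q = """
        SELECT u.id, COUNT(*)
        FROM analytics.users u
        JOIN analytics.events e ON e.user_id = u.id
        GROUP BY u.id
    """
    print(sorted(referenced_tables(q)))  # ['analytics.events', 'analytics.users']
```

Feeding every executed query through something like this is what lets frequent users and example queries accumulate on a table's page over time.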
Architecture
Figure 2. Overview of Querybook’s architecture
To understand how Querybook works, we’ll walk through the process of composing and executing a query.
The first step is to create a DataDoc and write the query in a cell. While the user types, the user’s query gets streamed to the server via Socket.IO. The server then pushes the delta to all users reading that DataDoc via Redis. At the same time, the server saves the updated DataDoc in the database and creates an async job for the worker to update the DataDoc content in Elasticsearch. This allows the DataDoc to be searched later. Once the query is written, the user can execute it by clicking the run button. The server then creates a record in the database and inserts a query job into the Redis task queue. The worker receives the task and sends the query to the query engine (Presto, Hive, SparkSQL, or any SQLAlchemy-compatible engine). While the query is running, the worker pushes live updates to the UI via Socket.IO. When the execution is completed, the worker loads the query result and uploads it in batches to a configurable storage service (e.g. S3). Finally, the browser gets notified of the query completion and makes a request to the server to load the query result and display it to the user.
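The tail end of that flow, writing the result out in parts rather than as one blob, can be sketched roughly like this. (The InMemoryStore class, key layout, and function names below are invented stand-ins for a real S3/GCS client, not Querybook's actual code.)

```python
from itertools import islice

def batched(rows, size):
    """Yield successive lists of at most `size` rows."""
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk

class InMemoryStore:
    """Invented stand-in for a storage backend such as S3 or GCS."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data

def upload_result(store, execution_id, rows, batch_size=2):
    """Upload query rows in batches; return the number of parts written."""
    parts = 0
    for i, chunk in enumerate(batched(rows, batch_size)):
        store.put(f"executions/{execution_id}/part-{i:05d}", chunk)
        parts += 1
    return parts

if __name__ == "__main__":
    store = InMemoryStore()
    print(upload_result(store, 42, [(1,), (2,), (3,)]))  # 2
```

Keeping the store behind a small put-style interface is the seam that makes the storage backend swappable.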
For brevity, this section only focused on one user flow of Querybook. However, all the infrastructure used is covered. Querybook allows some of it to be customized. For example, you can choose to upload execution results to either S3, Google Cloud Storage, or a local file. In addition, MySQL can also be swapped with any Sqlalchemy-compatible database such as Postgres.
The Path To Open Source
After noticing the success that Querybook had internally, we decided to open source it. One challenge we bumped into was how to make it generic while preserving some of the Pinterest-specific integrations. For this, we decided to have a two-layer organization through a plugin system and to add an Admin UI.
The Admin UI lets companies configure Querybook’s query engines, table metadata ingestion, and access permissions from a single friendly interface. Previously, these configurations were done inside configuration files and required a code change as well as a deployment to be reflected. With this new UI, admins can make live Querybook changes without going through code or config files.
Figure 3. The Admin UI
The plugin system integrates Querybook with the internal systems at Pinterest by utilizing Python’s importlib. With the plugin system, developers can configure auth, customize query engines, and implement exporters to internal sites. The customized behaviors provided by the plugin system allow Querybook to be optimized for users’ workflows at Pinterest while ensuring the open-source version remains generic for the public.
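A minimal importlib-based loader conveys the idea. (The load_plugin helper and the throwaway exporter module below are illustrative assumptions, not Querybook's actual plugin contract.)

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

def load_plugin(path, name):
    """Import a plugin module from an arbitrary file path."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # register before exec so the plugin can self-import
    spec.loader.exec_module(module)
    return module

if __name__ == "__main__":
    # Write a throwaway plugin file and load it, much as a deployment
    # would load company-specific auth, engine, or exporter plugins.
    with tempfile.TemporaryDirectory() as d:
        plugin_file = Path(d) / "my_exporter.py"
        plugin_file.write_text("def export(doc):\n    return 'exported ' + doc\n")
        plugin = load_plugin(plugin_file, "my_exporter")
        print(plugin.export("datadoc-42"))  # exported datadoc-42
```

Because the plugin is resolved at runtime from a path, company-specific behavior can live outside the open-source tree entirely.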
You can check out more of Querybook’s features and its documentation on Querybook.org, and you can reach us at querybook@pinterest.com.
Acknowledgments: We want to thank the following engineers that have made contributions to Querybook: Lauren Mitchell, Langston Dziko, Mohak Nahta, and Franklin Shiao. And to Chunyan Wang, Dave Burgess, and David Chaiken for their critical advice and support. | https://medium.com/pinterest-engineering/open-sourcing-querybook-pinterests-collaborative-big-data-hub-ba2605558883 | ['Pinterest Engineering'] | 2021-03-30 16:55:47.623000+00:00 | ['Open Source', 'Pinterest', 'Remote Working', 'Technology News', 'Engineering'] |
1,571 | Shopping List for a Maker Space | Shopping List for a Maker Space
Apple iPad Air 2 64GB — $599
It’s always fun to spend fake money. But if you’re going to spend money you don’t have, you might as well spend it well. Right?
Last week’s inventory for the fictional maker space at the fictional Dominican High School included items from the worlds of robotics, 3D printing, movie making, music making, and many more. The inspiration for many of those items came from searching through actual maker spaces in my PLN and from the up-and-coming maker space at the school where I work. And because our assigned budget was a moderate $3000, we really wanted to make it stretch. This required a thorough and comprehensive investigation of each item and any viable alternatives. And as any modern American consumer knows, it’s all about balancing the reviews with the price.
Dremel Idea Builder — $999
This week, my group created a Prezi of our investigation, which we will present to our class next week. Before I take you through the process of investigating each item and choosing which items to present, let me first say a few words on Prezi. I’ve tried. And I’ve tried. But despite my efforts, I cannot get on board with Prezi. I do not like it as a program for creating visually pleasing yet functional presentations. I find that the only benefit of Prezi over MS PowerPoint is that Prezi offers the ability to organize one’s information into a meaningful geographic landscape. The ability to create frames that stem from other frames allows one to organize information in a way that may communicate something visual to the audience. Consider if one wanted to present a family lineage in a nonlinear fashion without the image of a tree. Prezi allows for this. However, my gripe with it is: how often is that actually necessary? I find that section headers and title cards in MS PowerPoint are sufficient for organizing a landscape of information. Proponents of Prezi might argue that it is more visually pleasing, and perhaps that is true for some. Yet, is it that visually pleasing? I would liken Prezi to Smartboards — flashy and cool at first, but eventually, the poor functionality of each renders the flashy factor obsolete.
littleBits Base Kit — $99
Now, onto the heart of the matter — the investigation. As a group, we chose 4 items from last week’s inventory to evaluate and present to the class. We each chose 1 item and we each contributed to the Dremel 3D printer. I chose to contribute the pros for the Dremel, and I began my investigation at the Dremel product page for the particular model we listed on our inventory. I was impressed by the videos on the product page’s website that offered general instructions, ideas for application, and troubleshooting techniques. Next, it was on to Amazon.com to read reviews of the Dremel as well as some competing 3D printers. I found two 3D printers that were comparable to the Dremel — Solidoodle ($600) and Flashforge ($1300). Flashforge had some compelling negative reviews on Amazon.com, and considering the price difference — the Dremel Idea Builder is priced at $1,000 — that eliminated the Flashforge from my considerations. Also, the Solidoodle had several reviews criticizing its customer service. And considering 3D printers are such highly technical devices, I thought good customer service would be a major plus. In addition, I found that the “proprietary” (non-generic; sold by the company) filament, which is the material with which 3D printers print, was cheaper than competitors’. Overall, it seems like a good balance of quality and low price.
Lego Movie Maker app — $0.00
Next, I undertook the Lego Movie Making items. I first got this idea from an Apple conference for librarians and media specialists I attended last year; and it sounded like an excellent project for a library maker space, because no single classroom can invest in all of these materials. Instead, a library can, and it can be made accessible by classes and clubs. The item list for this project included three Lego sets (e.g. figurines, blocks, platforms), an iPad for filming, and the Lego Movie Maker app. The item that shines the most here — the app itself — is free! I’ve played with this app and it is incredibly user-friendly, intelligible, and visually pleasing. It is a nearly perfect app, and it makes stop-motion movie making so easy. I found that the sets are perfect for customization and creative building, yet specialty Lego pieces, such as curved pieces and all those fun pieces that could give a Lego movie personality, are difficult to buy wholesale. Perhaps our maker space could organize a donation drive of students’ and staff’s old Legos.
And, I’m happy to announce that I have a meeting next week with our advancement office to go over some of the maker space ideas from this project. Maybe I’ll be seeing this inventory list come to life very soon. | https://medium.com/@adamapo/shopping-list-for-a-maker-space-ea06059b8440 | ['Adam Apo'] | 2015-10-21 20:32:34.916000+00:00 | ['3D Printing', 'Technology'] |
1,572 | What Retailers Should Consider When Adopting Data Sciences | The future of retail will be built on insights derived from proprietary data, in particular, consumer data. Businesses must act now to reap the rewards of the consumer data by moving from simply collecting consumer data to using it to support and systematize better decision-making.
In recent years, there is a rising importance of leveraging data science as a core capability throughout the organization to drive decision-making, yet it has not been widely adopted by the industry. Analytics is becoming ever more sophisticated and its use within organizations is becoming broader which allows retailers to be more versatile. Advanced analytics drives profits because it provides real-time responses to market shifts and can better inform innovation initiatives.
The primary obstacles preventing retailers from speedily developing these capabilities include the expense of developing an insights-driven organization, a lack of data scientists and the right talent to conduct analytics modelling, and infrastructure gaps that prevent action on the insights generated by predictive analytics.
Moreover, consumer relationships have become more difficult than ever to secure and maintain. This makes it imperative for predictive analytics to be applied throughout a retailer’s end-to-end value chain and to keep pace with industry demands.
Considerations for Retailers Regarding Data Sciences Adoption:
Data Ownership
As traditional consumer touchpoints are disrupted by disintermediation and new players along the supply chain, ownership of consumer data will be the key to success. Yet, due to the disruption, capturing consumer data will be challenging and access to these data could be highly uneven: some players will own disproportionately large volumes of data, while others hold only small fragments. Over the next decade, as it is highly likely that natural monopolies will arise, regulators will have to plan for and address the challenges facing those impacted, both businesses and consumers.
Data Security
As growing numbers of touchpoints, channels and players make the consumer relationship more complex, consumer data privacy will be critical to maintaining trust. Given the increasingly important role that digital plays throughout the retail industry, there is already a growing cybersecurity threat (e.g. data breaches and violations of data law) that could have extremely detrimental short- and long-term impacts for retailers and consumers. Data security will be essential to prevent situations that could compromise consumer relationships. Over the next decade, as different technologies play increasingly prominent roles, investing in data security will be a certainty.
Data-driven Supply Chains
Real-time data will underpin just-in-time (JIT) models in supply chains, which can potentially lead to major efficiency improvements in the availability of materials, delivery schedules, manufacturing capacity and staffing. As fast-changing external factors (e.g. trending fashions) keep shifting consumer preferences, it will be imperative to stay ahead by making smarter decisions. This will require critical data to be shared easily among manufacturing teams, from the factory floor to the executive suite. | https://medium.com/dayta/what-retailers-should-consider-when-adopting-data-sciences-19973a977e58 | ['Jisoo Lee'] | 2020-06-11 09:34:02.871000+00:00 | ['Data Science', 'Retail Technology', 'Retail', 'Retail Analytics'] |
1,573 | 5 Xcode Extensions You Must Have | Swimat
Quickly formatting a block of messy code might be the most common need for developers.
Swimat is an Xcode extension that formats your Swift code. It supports the latest Xcode 11. The Re-Indent command in Xcode works much the same as Swimat, but Swimat is more convenient because it doesn’t require selecting the code first.
To install and enable the extension, we can download the zip file from the GitHub repo and then enable it in the Extensions in System Preferences.
Enable extensions in System Preferences.
After installation, to format the currently active file, click Editor -> Swimat -> Format in the Xcode menu. | https://medium.com/better-programming/5-xcode-extensions-you-must-have-46fb1fd39e7a | ['Eric Yang'] | 2020-08-07 15:20:36.136000+00:00 | ['Technology', 'Xcode', 'iOS', 'Swift', 'Programming'] |
1,574 | What Happens When a Tech Company Makes an Enemy of the President | What Happens When a Tech Company Makes an Enemy of the President
Twitter draws a line in the sand, while Facebook ducks and covers
Photo: NurPhoto/Getty Images
Welcome back to Pattern Matching, OneZero’s weekly newsletter that puts the week’s most compelling tech stories in context.
From the moment Donald Trump was elected, it was obvious he’d present a conundrum for the social platforms that helped fuel his rise to power. After all, he’d broken into politics partly by pushing the racist conspiracy theory that President Obama had been born in Kenya, and insulted and demonized countless individuals and groups on his path to election.
In late November 2016, I asked both Facebook and Twitter if they’d enforce their rules against a sitting president, hypothetically, assuming he were to violate them. Facebook said it wouldn’t; by definition, anything an elected president might say would qualify as mainstream political discourse, and CEO Mark Zuckerberg didn’t want to be in the business of regulating that. Twitter, however, wouldn’t rule it out: “The Twitter Rules apply to all accounts, including verified accounts,” a spokesperson said.
Fast-forward three-and-a-half years, and the confrontation everyone expected long ago is finally here. Just as the companies signaled back in 2016, it was Twitter that ended up drawing the line, while Facebook conspicuously did not. As the week came to a close, the fallout was just beginning — with the legal underpinnings of the modern internet at stake.
The Pattern
Twitter vs. Trump
Undercurrents
Under-the-radar trends, stories, and random anecdotes worth your time
Instagram’s augmented reality filters are becoming more powerful. Thanks to an update in Facebook’s Spark AR platform earlier this week, Instagram filters will now be able to respond to music and be applied to pre-recorded media from your camera roll, TechCrunch reported. As my colleague Drew Costley reported on Monday, Instagram Stories have already become an unexpected and surprisingly robust platform for AR gaming. In a sense, Facebook is accomplishing what companies like Apple have struggled with for some time: getting normal consumers to use AR without clunky headsets. Why? Follow the money.
Japan approved a bill to speed up the development of “super cities.” The plan is to make it easier to change regulations and roll out new technologies — like autonomous vehicles — in cities across the country, the Japan Times reported. “There are concerns about the use of personal information because various kinds of data will be gathered and used in such cities,” the report added.
“It’s like a noisy tech, a fake A.I. that just pretends to safeguard.” Walmart workers are lashing out against a self-checkout service that they say fails constantly, forcing human employees to intervene and needlessly risk exposure to the coronavirus. Wired reports following an earlier piece in HuffPost.
Apple versus Netflix heats up. Only in a week as wild as this one could a deal like this be considered “under the radar,” but here you go: Apple is expected to sign a deal with Paramount to produce Martin Scorsese’s upcoming, $200 million film Killers of the Flower Moon. This will easily be the highest-profile offering from Apple’s original programming service, and it’s an unsubtle power play against Netflix, which previously released Scorsese’s film The Irishman.
Headlines of the Week
Marauding monkeys attack lab technician and steal Covid-19 tests
— Joe Wallen, the Telegraph
‘Feisty Old Polish Grandmother,’ 103, Beats Coronavirus Then Cracks Open A Beer
— Ed Mazza, HuffPost
Masks Work.
— Alexandra Sifferlin, the Medium Coronavirus Blog
Tweet of the Week
“Fun fact: The Constitution gives Congress the right to censure the president but when they don’t have the guts to do so, that responsibility will go to… *checks notes* …Twitter.”
— Sarah Cooper
Thanks for reading. Reach me with tips and feedback by responding to this post on the web, via Twitter direct message at @WillOremus, or by email at oremus@medium.com. | https://onezero.medium.com/when-a-tech-company-makes-an-enemy-of-the-president-50b7260e945b | ['Will Oremus'] | 2020-06-06 12:57:07.015000+00:00 | ['Politics', 'Pattern Matching', 'News', 'Social Media', 'Technology'] |
1,575 | Are You A Good Leader? How Do You Know? | One of my favorite interview questions is, “How do you know that you are succeeding as a new leader? How do you know that you are doing a good job?” Okay, that was two questions, but they are asking the same thing in two different ways. This question is testing the candidate if they have wrapped their heads around what it means to be a leader and that success is very different compared to other roles, like being a Software Engineer or Product Owner.
As a Software Engineer, my success comes in the form of the problems I solve and the speed and quality at which I solve them. As a Product Owner, I can easily point to the metrics of my product and talk to users to understand if I am succeeding or not. As a Leader, I don’t have something tangible that I can point at and say, “I built that!” I can’t feel or see success in the same way as someone who produces something.
As a Leader, I need to update my definition of success. I also need to change how I find out if I am succeeding or not because it’s no longer binary like it was in my previous role when I was contributing to a solution or product.
What is the definition of success as a leader?
One month ago, we were interviewing a potential new software engineering leader. I asked this candidate the question about what their definition of success was and at the end of the interview, they turned the question back on all of us. They asked us, as experienced leaders, how did we know if we were succeeding or not? First, well-played candidate. Second, this is where we see that the answer to this question is not simple nor clear. Here’s what everyone said.
1. The team is growing — This was my answer. I led off by saying that I know I am doing my job well when everyone on the team is growing in their careers and that we are getting better each and every day. It was a little fluffy and lacking details, but that’s what I said. Not my most inspirational answer ever but hey, we’re human, right?
2. The team is happy and fulfilled — This is a much better answer in my opinion compared to what I said. This leader cares deeply about their team and they wanted to emphasize that when everyone on their team feels safe, fulfilled, and enjoys what they are doing and who they are doing it with, she knows that she’s doing her job.
3. We are executing effectively — This answer is what most leaders focus on. We are here to make sure our teams do their jobs, to produce, and to impact the outcome. Yes, without achieving this, we will be out of the job. How we achieve this is something entirely different. We’ll cover this another day.
4. We are making an impact — This answer was great because we can have the happiest team on the planet and we can be the most efficient team ever to exist, but to what end? If our teams are not impacting the outcome by helping our clients and solving our business problems, then we are not doing our jobs. We need to help our teams spend their time and energy on things that drive value. By focusing on this, our teams are more engaged because they know that what they are doing matters. Great answer!
5. We build things the right way — This was the technical version of answer number three listed above. We need to make sure that our teams not only execute effectively but do so in the right way. We cannot take shortcuts or introduce poor quality into our product, otherwise, our client’s experience and our business will suffer. Absolutely true statements and again, as leaders, if we let this happen, we will get fired.
This concluded our round table response to the candidate’s question and upon reflecting, I realized that answering this question is really hard and there isn’t just one answer. Additionally, we could have easily mentioned something related to setting a vision for the team or said that we are growing other leaders as part of our answers. There isn’t a clear answer to this question.
You’ll notice, though, that all of our answers had something in common. Every answer was about the team. The team was growing. The team was happy. The team executed. The team made an impact, and the team solved problems the right way. Not once did anyone say that success was about the leader themselves. There was no mention of being promoted within a number of years or obtaining a bonus as part of their success.
As a leader, your success is attached to the success of the team. Our job as leaders is to help the team succeed. Our job is to help the team grow. Our job is to help the team make a lasting impact. Achieving these outcomes may come in many flavors but ultimately, our success is no longer ours alone, it’s dependent on the success of the team.
Let’s put it to practice
I am going to use the analogy of a soccer team for this example. A team is made up of many different types of players: offense, defense, midfield, and a goalie. This maps well to our teams. We have Software Engineers, Product Owners, Designers, UX Researchers, etc. When I played soccer, I was a forward on offense. My job was to score goals and to help others score goals! I was recognized by my stats, and at the end of the year, I won awards for the goals and assists I made. This is the same as being a Software Engineer. At the end of the year, I was recognized for the problems I solved and how I helped others solve problems.
Now, let’s take a look at the coach of the soccer team. The coach spends most of their energy during practice, where they work with the players one on one, in pairs, and in groups. They’ll stop and restart sessions to focus on skills, thought processes, and behaviors. They’ll push the team’s endurance to the max during practice, all in preparation for the game.
When the game begins, the coach is basically out of the picture. They can do minor things during the game, like sub people in and out, or give an inspirational half-time speech, but their job was to prepare the team to win. The coach can’t step onto the field and score a goal for the team, it’s up to them. This maps directly to leadership. As a Leader, I focus on preparing my team to win the game or to solve problems the right way. To have the skills, mindset, and behaviors to succeed. Similar to a soccer coach, when the team loses, the coach loses, the same goes for leadership. When the team fails, the leader fails.
So, are you a good leader?
This is a hard question, right? If you are focused on building the best team you can and investing in helping them succeed, you’re probably headed in the right direction. Getting feedback from your team, your business, and your data will help you know for sure.
Remember, leaders are coaches. Our job is to help our team learn and develop so that they can win. Our success hinges on theirs.
1,576 | How Pipe is creating revenue as an asset class | How Pipe is creating revenue as an asset class
Harry Hurst is a serial entrepreneur who developed a passion for technology early in life, graduating high school age 12 studying Computer Science. Harry and Pipe co-founder Josh Mangel built their first company Skurt, a financial technology company in the mobility space. After four years, Skurt was successfully acquired by Fair in 2018. Pipe was founded in 2019, to unlock the largest untapped asset class in the world — revenue — to empower entrepreneurs and companies to grow their business on their terms without debt or dilution. Harry has raised over $150 million for his companies and is considered an industry expert on financial technology and emerging asset classes. He is also an angel investor and mentor to a number of companies. Prior to becoming a technology entrepreneur, Harry ran a professional recruitment consultancy, which influenced his approach to strategically and intentionally hiring the right talent, building strong teams, and developing enduring company culture.
Harry is a featured speaker at Wharton Fintech’s upcoming conference. In this guest post, Harry breaks down why he and his co-founders are building Pipe and the value Pipe is unlocking for founders.
Pipe co-founder Harry Hurst
The equity value of a business is a derivative of the revenue it generates.
From that simple observation, we are building Pipe to unlock revenue as an asset class, so that founders can fund their businesses without the need to raise dilutive equity capital or restrictive debt.
Before Pipe we saw too many of our friends pour their blood, sweat, and tears into their companies while successive rounds of equity financing diluted their ownership stakes down to the single digits.
We also saw companies negotiate against themselves — offering discounts of up to 40% to incentivize their customers to prepay upfront for the year — reducing both the amount of cash they had to re-invest into their businesses and their top-line revenue (and therefore enterprise value).
Pipe is addressing what we found were inefficiencies in the way that companies with a high degree of predictability around their revenues were being financed. A more aligned and efficient solution needed to exist connecting these companies directly with the capital markets.
If we were not building Pipe, we believe somebody else eventually would.
How does Pipe actually work?
The simplest way to think about Pipe is as a two-sided trading platform. Companies come to Pipe to trade their recurring revenue streams for as much cash as possible paid upfront, instantly. Investors come to Pipe to bid on these revenue streams and earn yield as an alternative to fixed income products, money market funds and other similar instruments.
Today, companies trade their customer contracts for cash equal to an average of 90–95% of annual contract value (ACV). We expect that terms will get even more favorable to companies — our goal is to move closer to 100% over time, as more institutional investors join the platform and bid on revenue streams.
Pipe is the conduit that sits between companies with recurring revenue and investors with yield seeking capital. We ingest the data and rate the assets, so investors can make their bids and execute their trades, while we work as the servicer and make sure that the money goes where it should.
How is Pipe different?
If equity and traditional debt products are derivatives of revenue, then the closest proxy to revenue are the contractual agreements that guarantee payments from customers. Pipe has unlocked those contracts, or subscriptions and made them fully liquid and tradeable as an asset in and of themselves.
We are not against other ways of financing companies, including venture capital. Far from it. We are VC-backed ourselves, with investments from some of the largest ecosystem players in the space like Shopify, Slack, Okta, HubSpot, Chamath Palihapitiya, Marc Benioff, Michael Dell, and more. I’m also an equity investor in a number of early and late stage companies. But we do find that growth capital can be costly and that founders pay too high of a price for other types of financing options. In particular, nearly all other forms of financing make you a victim of your own success — as your company becomes more successful, the effective cost of your financing increases.
For companies with reasonably predictable revenue, the reason you need financing at all is to close the gap between spending money (e.g., sales, marketing, product development, etc.) and your payback on that spend (i.e., cash in the door). Raising capital is not an end in itself.
We built Pipe to be the most efficient way to close that gap. In particular:
Equity — There is a time and a place for venture capital-style equity financing. For example, I think it makes sense for companies that need to experiment and have not yet demonstrated product-market fit to raise equity. But the cost of dilution is high, especially over successive rounds and when an alternative like Pipe exists. In this respect, early stage VCs and founders are very much aligned.
Debt — If you can access it, venture debt carries interest rates up to the double digits, in addition to potentially onerous terms that limit the way you can invest in your business. Further, venture debt often comes with warrants that significantly increase the effective cost of financing. This is a clear case where the financing structure you choose can make you a victim of your own success.
Revenue-Based Financing / Merchant Cash Advance (MCA) — Pipe is sometimes incorrectly lumped in with MCA, so it’s worth highlighting the differences. We believe our pricing is more efficient than MCA because multiple investors on our platform bid for individual companies’ revenue streams. We want you to get the best price possible so you come back and use our platform. You will get the best price you can because investors bid on your specific revenue streams based on your company metrics, instead of giving you a price based on a fixed cost of capital and an average that includes some potentially bad companies too. Your customers will also never know that you’ve traded their contract on Pipe — you maintain total control of the customer relationship.
We believe the financial benefits of trading your revenue on Pipe are compelling on their own. Having built technology platforms in the past, we also believe in the value created by the design of the product itself.
In particular, Pipe financing will save you and your leadership team time — instead of weeks or months on the road (or Zoom) pitching your company, you can have cash in the bank in 24 hours and spend that time building your company instead. Pipe connects to all of your existing systems to make the process as painless and self-serve as possible.
Pipe’s trading dashboard
Who is using Pipe?
Pipe is built for any company with predictability around their revenues.
Today, bootstrapped companies with a hundred thousand dollars in annual recurring revenue (ARR) sit comfortably alongside public companies that actively trade with hundreds of millions of dollars in ARR on our platform.
Pipe can be a fit for companies of such wildly different sizes because it gets right to the heart of a company’s value. The underlying contracts (cashflows) that underpin the equity value of the company.
To take one example, a public company looking for $100m in financing would pay millions of dollars in legal fees, plus interest, plus dilutive warrant coverage to access a traditional credit facility.
With Pipe the same company can reduce its cost of capital by more than half (which doesn’t even take into account the order of magnitude reduction in cost from the elimination of warrants).
Trader Mentality
As a kid I sold £1.50 government assistance lunch coupons for £1.00 to my classmates, used the proceeds to buy sweets at wholesale prices, and sold them at a 3x markup to double the value of my original asset and get that value back in cash.
Now I want founders to know they have valuable assets that can be traded. I want to empower founders to shift from a borrower mentality to a trader mentality.
Here’s what I mean by trader mentality:
Let’s say you have $100k of ACV that gets paid monthly or quarterly, but you want cash to invest in your company now. You have a few options:
Provide a 20–30% discount to your customers to get them to pay up front. This will reduce your ARR by the amount of the discount, which will also reduce your company’s enterprise value.
Borrow against that revenue, entering a multi-month process with a bank, paying significant legal fees, taking on restrictive covenants, providing warrants to your lenders, and paying up to double digit interest rates.
Trade the contracts on Pipe instantly for a discount of 5–10% to the full ACV.
If you choose to trade on Pipe, you can get $90–95k cash today for your $100k of ACV (using conservative pricing assumptions). You can use that money to invest in your business: sales, marketing, product, engineering — whatever you need to spend on today that will result in getting paid more later.
Let’s say you invest $90k to hire a salesperson. If that salesperson generates just $10k of net new ARR, you are already cash flow breakeven on your trade.
The profit on your trade is not just when that $10k incremental revenue recurs next year, but also the multiple on that net new revenue that is added to the value of your business. At 20x EV/Sales, your trade has generated $200k in enterprise value (20 x $10k net new ARR).
But even that is quite conservative. What company do you know that hires a salesperson, pays them $90k, and expects them to generate $10k in sales?
More realistically, let’s say the new salesperson generates $200k in net new ARR. At 20x EV/Sales your trade has added $4m to the value of your business, not to mention the revenue that will recur in year 2 and year 3 (and year 4 and…)
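As a quick sanity check, the trade economics above can be written out in a few lines of Python. This is an illustrative sketch using the article’s example numbers; the exact discount rate, hire cost, and EV/Sales multiple are assumptions, not Pipe’s actual terms:

```python
# Illustrative economics of trading recurring revenue upfront.
# All figures come from the worked example above; none are real Pipe terms.
acv = 100_000           # annual contract value traded
discount = 0.07         # assumed discount, within the 5-10% range mentioned
upfront_cash = acv * (1 - discount)

hire_cost = 90_000      # invested today in a new salesperson
net_new_arr = 200_000   # ARR that hire generates
ev_multiple = 20        # assumed EV/Sales multiple

added_enterprise_value = net_new_arr * ev_multiple
print(f"Cash received today: ${upfront_cash:,.0f}")               # $93,000
print(f"Enterprise value added: ${added_enterprise_value:,.0f}")  # $4,000,000
```

Even with conservative inputs, the enterprise value created dwarfs the discount paid to get the cash upfront, which is the whole argument for the trader mentality.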
That’s the value we are unlocking at Pipe. We are empowering founders to adopt a trader mentality. We’re unlocking a multi-trillion dollar untapped asset class for investors. We sit in the middle and make sure it all works. | https://medium.com/wharton-fintech/how-pipe-is-creating-revenue-as-an-asset-class-66d273001198 | ['Harry Hurst'] | 2021-03-18 11:03:19.276000+00:00 | ['Technology', 'Fintech', 'VC', 'Startup', 'Tech'] |
1,577 | Of New Mexico, For New Mexico: Providing Actionable Insights to State Government | Of New Mexico, For New Mexico: Providing Actionable Insights to State Government
RS21 wins statewide contract to help the place we call home.
Posted By Charles Rath, RS21 President + CEO
When we opened our doors in Downtown Albuquerque, we made a commitment to contributing to the creative and technical capabilities of New Mexico. RS21 is part of a flourishing tech community and is supported by some of the brightest minds in data science and visualization, UX/UI design, software development, and policy.
RS21 “walks the talk” when it comes to being local. In fact, RS21 is the first New Mexico technology firm to officially adopt a “go-local” approach to sourcing business supplies from more than 60 New Mexico-based vendors. In the past two years alone, we have contributed over $13 million to the local economy, supporting the vitality of our community and the health and well-being of New Mexico families who rely on our business.
And now, we are thrilled to have the opportunity to make an even bigger impact.
IT Professional Services
We were founded as a company to help solve the most challenging issues facing us today. And in line with that mission, we’ve worked with Fortune 500 companies, federal agencies, national laboratories, and non-profit organizations to deliver big data products and services. Now we’re bringing those capabilities home.
Through our IT Professional Services contract with the State of New Mexico, we can work with state agencies, local governments, and public educational entities to tackle our state’s biggest challenges.
RS21 is uniquely positioned to help state agencies harness the power of data. We have a deep understanding of New Mexico’s economies, demographics, and needs. It makes total sense for a company like ours to work with the State to blend our local and domain knowledge with advanced computational capabilities, data integration techniques, and visualization expertise.
What does that mean?
It means we know how to take all the data available to our state agencies, plus data from other relevant sources, to create a coherent data picture of a given situation. And we can make it usable by everyone — from a technical analyst to a director or decision-maker.
Our intuitive visualizations and interactive dashboards make it easy for people to understand how the data is layered and where and when specific situations occur. We take it a step further with predictive modeling so users can create different scenarios and quickly predict occurrences and outcomes.
That’s an extremely powerful capability.
Why? Because data is key to helping people answer their questions and solve problems.
When people have access to easy-to-understand, intuitive data analytics, they can focus on doing what they know best. They can simplify the data journey, work independently from data analysts, and access the insights they need to do their work and fulfill their organizational missions.
Where do we expect to see this technology used?
Everywhere.
Data for New Mexico
“Power comes not from knowledge kept, but from knowledge shared.” — Bill Gates
Pixabay / Amber Avalona
Crime Prevention + Safety
While we are proud to call New Mexico home and raise our families here, crime is a big issue for our communities. We can create better software to help local and state law enforcement more quickly track, manage, and prioritize crime data, such as outstanding warrants.
Education
New Mexicans rank education as one of the top issues we need to address. Using all the data we have available in rigorous, innovative ways will help us better understand which policies are most effective to support our students, teachers, and administrators.
Public Health
Data scientists at RS21 are bringing together disparate data to support improved health outcomes for communities. We are pairing claims data with socioeconomics, nutrition, and other information to learn where services exist and where geographic gaps occur. Armed with more complete information, public agencies, healthcare providers, and health insurance companies can work together to improve health and wellness for New Mexicans. | https://medium.com/rs21/of-new-mexico-for-new-mexico-providing-actionable-insights-to-state-government-e31fe35cdec5 | [] | 2020-02-25 16:36:01.692000+00:00 | ['New Mexico', 'Information Technology', 'Data Science', 'Data Visualization', 'Data Analytics'] |
1,578 | Blockchain API service for Hyperledger Sawtooth | Photo by Marc-Olivier Jodoin on Unsplash
Primechain-API for Hyperledger Sawtooth is a set of RESTful JSON APIs that makes it very simple for developers to add Sawtooth’s power to their code — electronic signatures, encrypted data storage & more. It is developed and maintained by Primechain Technologies.
Using Primechain-API for Hyperledger Sawtooth can save months of development time, reduce the development cycle and vastly increase the security and scalability of your applications.
Currently, it supports the following use cases:
1. Bank Guarantees
2. Charge Registry
3. Contract authentication, verification & storage
4. Digital signatures
5. Employee background verification
6. Encrypted communication
7. Encrypted data storage
8. Know Your Customer (KYC)
9. Letters of Credit
10. Vendor on-boarding & rating
Introduction to Hyperledger Sawtooth
Hyperledger Sawtooth is a blockchain framework that allows developers to create smart contracts in multiple languages including Python, Javascript, Rust, C++, Go, Ethereum Virtual Machine and Java.
Key features:
Sawtooth nodes communicate through messages (containing information about transactions, blocks and peers), serialized using Google’s Protocol Buffers, and sent over TCP.
Sawtooth nodes use access control lists to control who can connect to the network and sync with the current ledger state, who can send consensus messages and participate in the consensus process, and who can submit transactions to the network.
Sawtooth also supports the RAFT consensus protocol. Designed for high throughput, low latency transactions, RAFT is a crash-fault tolerant ‘voting’-style consensus algorithm.
Sawtooth supports the Proof of Elapsed Time (PoET) consensus algorithm. As per official Sawtooth documentation, “At a high-level, PoET stochastically elects individual peers to execute requests at a given target rate. Individual peers sample an exponentially distributed random variable and wait for an amount of time dictated by the sample. The peer with the smallest sample wins the election. Cheating is prevented through the use of a trusted execution environment, identity verification and blacklisting based on asymmetric key cryptography, and an additional set of election policies”.
Sawtooth also supports PoET-Simulator, an implementation of PoET that forgoes Byzantine fault tolerance.
Primechain-API for Hyperledger Sawtooth
Primechain-API for Hyperledger Sawtooth sets up multiple nodes and the API service which supports the following endpoints:
1. Create key pair
2. Publish data to the blockchain
3. Retrieve data from the blockchain
4. Creating a digital signature
5. Verify a digital signature
6. Create, encrypt, sign and publish data to the blockchain
7. Decrypt, verify and retrieve data from the blockchain
1. Create key pair
To create a key pair, use get /api/v1/create_user_key_sawtooth. The output will be:
the private key
the public key
{
"status": 200,
"primechain_sawtooth_private_key": "89b0d0a7ef410e9dc19907b8d9297ec2239222c2b38d56056964495b517fe6c1",
"primechain_sawtooth_public_key": "02b023ef5ce8f6ef4e3419ebc925060ceb3ed9f80c489e34fb93033956581b4941",
"response": "User keys generated"
}
2. Publish data to the blockchain
To publish data to the blockchain, use post /api/v1/upload_data_sawtooth and pass 2 parameters:
the private key
the data
{
"primechain_sawtooth_private_key": "89b0d0a7ef410e9dc19907b8d9297ec2239222c2b38d56056964495b517fe6c1",
"data": "Mistakes are always forgivable, if one has the courage to admit them."
}
The output will be the Sawtooth transaction id for the transaction.
{
"status": 200,
"primechain_sawtooth_tx_id": "a31075ee02e17c73b520c3a379880e936a011a702fe412a5e4da0d02300b90623b442e627774671d542565c642d4aa246545faaedaecde16cfa5d979be231e62"
}
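Put together, a minimal Node.js sketch of this call could look like the following (the base URL is a placeholder for wherever your Primechain-API service is running, and the uncommented `fetch` call assumes Node 18+):

```javascript
// Build the JSON body expected by POST /api/v1/upload_data_sawtooth.
function buildPublishPayload(privateKey, data) {
  return JSON.stringify({
    primechain_sawtooth_private_key: privateKey,
    data: data,
  });
}

// Sending it (the URL is hypothetical; substitute your own deployment):
// const res = await fetch("http://localhost:3000/api/v1/upload_data_sawtooth", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildPublishPayload("89b0d0a7...", "Mistakes are always forgivable..."),
// });
// const { primechain_sawtooth_tx_id } = await res.json();
```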
3. Retrieve data from the blockchain
To retrieve data from the blockchain, use POST /api/v1/get_data_sawtooth and pass the Sawtooth transaction id.
{
"primechain_sawtooth_tx_id": "a31075ee02e17c73b520c3a379880e936a011a702fe412a5e4da0d02300b90623b442e627774671d542565c642d4aa246545faaedaecde16cfa5d979be231e62"
}
The output will be the data.
{
"status": 200,
"response": "Mistakes are always forgivable, if one has the courage to admit them."
}
4. Creating a digital signature
To create a digital signature, use POST /api/v1/create_signature_sawtooth and pass 2 parameters: the private key and the data.
{
"primechain_sawtooth_private_key": "89b0d0a7ef410e9dc19907b8d9297ec2239222c2b38d56056964495b517fe6c1",
"data": "Mistakes are always forgivable, if one has the courage to admit them."
}
The output will be the digital signature:
{
"status": 200,
"signature": "235de04e222dfcfcc0a2cdec32c1a2aaa56b404dc6b0e4e721eef264ce891e1920ff05ee6345635bd017fca97fef2407b57364a287dd27eddc8a82dec61bb87b"
}
5. Verify a digital signature
To verify a digital signature, use POST /api/v1/verify_signature_sawtooth and pass 3 parameters: the public key, the data, and the digital signature.
{
"primechain_sawtooth_public_key": "02b023ef5ce8f6ef4e3419ebc925060ceb3ed9f80c489e34fb93033956581b4941",
"data": "Mistakes are always forgivable, if one has the courage to admit them.",
"signature": "235de04e222dfcfcc0a2cdec32c1a2aaa56b404dc6b0e4e721eef264ce891e1920ff05ee6345635bd017fca97fef2407b57364a287dd27eddc8a82dec61bb87b"
}
The output will be:
{
"status": 200,
"response": true
}
or
{
"status": 200,
"response": false
}
6. Create, encrypt, sign and publish data to the blockchain
To create, encrypt, sign and publish data to the blockchain, use POST /api/v1/encrypt_sign_store_data_sawtooth and pass 2 parameters: the Primechain Sawtooth private key and the data.
Sample input
{
"primechain_sawtooth_private_key": "89b0d0a7ef410e9dc19907b8d9297ec2239222c2b38d56056964495b517fe6c1",
"data":
{
"guarantee_number": "ASDRE76464",
"guarantee_currency": "USD",
"guarantee_amount_words": "Twenty Two million",
"guarantee_amount_figures": "22,000,000.00",
"guarantee_date_of_issue": "17-December-2018",
"guarantee_date_of_maturity": "16-March-2019",
"guarantee_beneficiary": "Nicole Corporation",
"guarantee_details": "On behalf of our client, Nicole Corporation, for value received, we, Global Bank, hereby irrevocably and unconditionally and without protest or notification, promise to pay against this, our irrevocable bank guarantee, to the order of Nicole Corporation, as beneficiary, on the maturity date, the sum of Twenty Two million United States Dollars ($22,000,000.00 USD), upon their first written demand for payment hereunder. Such payment shall be made without set-off, free and clear of any deductions, charges, fees, levies, taxes or withholdings of any nature. This bank guarantee is transferable, without payment of any transfer fees. This bank guarantee is issued in accordance with the uniform customs and practices for bank guarantees, as set forth by the International Chamber of Commerce, Paris, France, latest revision of ICC 500 publication. This bank guarantee shall be governed by and construed in accordance with the laws of the Republic of India and is free and clear of any lien and encumbrances and is of non-criminal origin.",
"authorized_bank_officer_name": "Nicole Kidman",
"authorized_bank_officer_title": "Vice President",
"authorized_bank_officer_telephone": "230950934545",
"authorized_bank_officer_facsimile": "367645455624",
"authorized_bank_officer_email": "nicole@example.com"
}
}
This is what happens:
Step 1: Hash computation
The SHA-512 hash of the data is computed.
Step 2: Signing
The hash is signed using the private key of the signer, using the secp256k1 algorithm.
Step 3: Encrypting the data
The data is encrypted using the AES (Advanced Encryption Standard) algorithm, and the following are generated:
- the encrypted version of the data
- the AES password
- the Initialization Vector (IV): a nonce that is associated with an invocation of authenticated encryption on a particular plaintext and Additional Authenticated Data (AAD)
- the Authentication Tag (tag): a cryptographic checksum on data that is designed to reveal both accidental errors and intentional modification of the data
Step 4: Storing the encrypted data
The encrypted data and the tag are published to the blockchain.
Step 5: Output
The following is the output:
- the relevant Sawtooth transaction id
- the digital signature
- the AES password
- the Initialization Vector (IV)
Sample output
{
"status": 200,
"response": {
"primechain_sawtooth_tx_id": "82cde773d6b25b9a48dcb16f602fbcd5ed3649183d595e19ed545650bdfcc8fc2a9316121281910f43c52d3ba5c9e2d0c0eb6fce3ece54b6a257462bc6d395c1",
"signature": "cb36670bc1e53389473f1fcac881bf8c30e3a8b44ea34a8adb8608ab7dd7dec850073c9d2e6e868b1a4fe94603a2a3fa3187dfd506edf26fc7173a64e9c4e699",
"aes_password": "gkSQEaVPI9bHwGM37NmtjNBUad7mwoHm",
"aes_iv": "RaaizYEJvqxg"
}
}
7. Decrypt, verify and retrieve data from the blockchain
To decrypt, verify, and retrieve data from the blockchain, use POST /api/v1/decrypt_download_data_sawtooth and pass these values:
- the relevant Sawtooth transaction id
- the Sawtooth public key of the signer
- the AES password
- the Initialization Vector (IV)
- the digital signature
Sample input
{
"primechain_sawtooth_tx_id": "82cde773d6b25b9a48dcb16f602fbcd5ed3649183d595e19ed545650bdfcc8fc2a9316121281910f43c52d3ba5c9e2d0c0eb6fce3ece54b6a257462bc6d395c1", "primechain_sawtooth_public_key": "02b023ef5ce8f6ef4e3419ebc925060ceb3ed9f80c489e34fb93033956581b4941",
"signature": "cb36670bc1e53389473f1fcac881bf8c30e3a8b44ea34a8adb8608ab7dd7dec850073c9d2e6e868b1a4fe94603a2a3fa3187dfd506edf26fc7173a64e9c4e699",
"aes_password": "gkSQEaVPI9bHwGM37NmtjNBUad7mwoHm",
"aes_iv": "RaaizYEJvqxg"
}
This is what happens:
Step 1: Retrieval of encrypted data
The encrypted data and tag are retrieved from the blockchain.
Step 2: Decryption
The encrypted data is decrypted.
Step 3: Verification
The digital signature is verified.
Step 4: Output
The output will be the data if the signature is verified and valid.
Sample output
{
"status": 200,
"response": {
"guarantee_number": "ASDRE76464",
"guarantee_currency": "USD",
"guarantee_amount_words": "Twenty Two million",
"guarantee_amount_figures": "22,000,000.00",
"guarantee_date_of_issue": "17-December-2018",
"guarantee_date_of_maturity": "16-March-2019",
"guarantee_beneficiary": "Nicole Corporation",
"guarantee_details": "On behalf of our client, Nicole Corporation, for value received, we, Global Bank, hereby irrevocably and unconditionally and without protest or notification, promise to pay against this, our irrevocable bank guarantee, to the order of Nicole Corporation, as beneficiary, on the maturity date, the sum of Twenty Two million United States Dollars ($22,000,000.00 USD), upon their first written demand for payment hereunder. Such payment shall be made without set-off, free and clear of any deductions, charges, fees, levies, taxes or withholdings of any nature. This bank guarantee is transferable, without payment of any transfer fees. This bank guarantee is issued in accordance with the uniform customs and practices for bank guarantees, as set forth by the International Chamber of Commerce, Paris, France, latest revision of ICC 500 publication. This bank guarantee shall be governed by and construed in accordance with the laws of the Republic of India and is free and clear of any lien and encumbrances and is of non-criminal origin.",
"authorized_bank_officer_name": "Nicole Kidman",
"authorized_bank_officer_title": "Vice President",
"authorized_bank_officer_telephone": "230950934545",
"authorized_bank_officer_facsimile": "367645455624",
"authorized_bank_officer_email": "nicole@example.com"
}
}
Have a query? Email us at info@primechain.in.

Source: https://medium.com/blockchain-blog/blockchain-api-service-for-hyperledger-sawtooth-4be5539432b (Rohas Nagpal, 2018-12-24). Tags: Primechain, Blockchain, Blockchain Technology, Blockchain Application, Hyperledger
The FUTRSPRT Interview Series: Tone Networks Founder & CEO Gemma Toner

Courtesy of Tone Networks
By Matt Bowen
Follow on Twitter: @IsItGameTimeYet/@FutrSprt
Follow on LinkedIn: FUTRSPRT
FUTRSPRT Interview Series presents conversations with some of the sharpest minds in sports tech. Leaving no angle of competition or fan experience unexplored, FUTRSPRT showcases those creating the future of sports right before our eyes.
FUTRSPRT’s Matt Bowen discussed female leadership with Tone Networks Founder & CEO Gemma Toner. The company is beginning to make major inroads in Major League Baseball as FUTRSPRT discovered Tone Networks is just getting started.
Courtesy of Tone Networks
Toner told FUTRSPRT that Tone Networks, a SaaS-based company, is focused on finding a way to get women executive coaching. The company continuously gathers consumer insight and understands that working women want a one-stop-shop for leadership content. Through short videos and applicable “cheat sheets,” users can confidently use the Tone Networks platform when they have time.
Toner stated, “Tone provides access and a way for [women] to participate and progress in the workplace. Through Tone Networks, the ability to find experts in a given field/subject matter is greater. We democratize access to leadership stars.”
Think Tone Networks is gimmicky? Wrong. It’s a transcendent trailblazer in female leadership.
Tone Networks was launched in late 2017 and its recent deal with the Atlanta Braves is evidence that empowerment platform will soon be a staple throughout the front offices of sports.
What Have Some Pleasant Surprises Been?
“The generosity of people to refer Tone Networks has been a great surprise. Our metric-driven approach has led to fresh, innovative ways to promote women in the workplace”, says Toner.
“We’ve had clients range from Coca-Cola to big data to health care companies.”
Coca-Cola. Major League Baseball. Tone Networks is a classic.
“The most exciting thing we’re seeing is the impact we’re making with women. We’re seeing super-high engagement and 89 percent of our users say Tone Networks has made a positive impact.”
“What we’re doing is building a talent pipeline for the future.”
Bullseye. It may take 5/10/15 years to become crystal clear but Tone Networks is a driving force in female leadership. Get used to the name now — your daughter may thank the platform when she becomes CEO of a Fortune 500 company in 2030.
What’s Next for Tone Networks from a Tech Perspective?
“We’ll forever be data-driven and let our customer’s feedback guide improvements”, Toner begins.
“Tone Networks will continue to use video-sharing as a way for companies to share within a community. The Atlanta Braves can have a custom environment where they can build unique coaching sessions geared toward a specific department or individual employee.”
“This also allows the sender of the videos to become a role model, building trust and relationship throughout the organization.”
Toner leaves with a tea leaf pointing toward the future, “Expansion plans are on the roadmap.”
Highly-customizable, metric-based leadership lessons delivered via video that helps individuals and organizations build a security network amongst each other — jackpot. Expansion in many forms should be expected.
In 3 Years Tone Networks Will Be Known for _____?
“The perfect storm is to help grow leadership, courage, and commitment for women.”
“In three years Tone Networks will be known for moving the needle when it comes to promoting women globally.”
……………………………………………………………………………
Tone Networks — don’t forget the name — the platform of the future when it comes to executive leadership.

Source: https://medium.com/futrsprtpodcast/the-futrsprt-interview-series-tone-networks-founder-ceo-gemma-toner-d662644a03ee (2020-04-07). Tags: Leadership, Video Technology, Female Leadership, Platform, Sports Business
JavaScript and the Web — Handling Events

Photo by Waranya Mooldee on Unsplash
JavaScript is one of the most popular programming languages in the world. To use it effectively, we have to know its basics.
In this article, we’ll look at how we can handle events in JavaScript.
Event Handlers
We can add event handlers to listen to user events.
Events include things like inputs and mouse clicks.
For instance, we can write:
window.addEventListener("click", () => {
console.log("clicked");
});
Then we listen for clicks anywhere in the page and log 'clicked' whenever one occurs.
addEventListener is used to listen to events of our choice.
Events and DOM Nodes
We can also call addEventListener to listen to events triggered by a DOM node.
For instance, given the following HTML:
<button>Click me</button>
We can write:
const button = document.querySelector("button");
button.addEventListener("click", () => {
console.log("clicked.");
});
to listen to click events of a button.
Then when we click the Click me button, we see 'clicked' logged.
There’s also a removeEventListener method which takes an event listener as an argument to remove it.
For instance, we can write:
const button = document.querySelector("button");
const clickOnce = () => {
console.log("Done.");
button.removeEventListener("click", clickOnce);
}
button.addEventListener("click", clickOnce);
Then the clickOnce listener will only run once; it won't run again because we called removeEventListener with it as the argument.
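The same one-shot behavior is also available declaratively through the once option of addEventListener. A sketch using the standard EventTarget API (this runs in Node 15+ as well, since Node ships EventTarget, so the browser button is replaced here by a plain event target):

```javascript
const target = new EventTarget();
let calls = 0;

// With { once: true }, the listener is removed automatically after one call.
target.addEventListener("ping", () => { calls++; }, { once: true });

target.dispatchEvent(new Event("ping"));
target.dispatchEvent(new Event("ping"));
console.log(calls); // 1
```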
Event Objects
Event objects are pass into event listeners and we can use it to get more information about an event.
For instance, we can write:
const button = document.querySelector("button");
button.addEventListener("mousedown", event => {
if (event.button === 0) {
console.log("left");
} else if (event.button === 1) {
console.log("middle");
} else if (event.button === 2) {
console.log("right");
}
});
Now we listen to the mousedown event of the button.
It’s triggered when we click the button.
If the button property is 0, then the left button is clicked.
If it’s 1, then the middle button is clicked.
If it’s 2, then the right button is clicked.
So when we left-click, we get 'left' logged and when we right-click, we get 'right' logged.
The type property holds a string that identifies the event, such as 'click' or 'mousedown' .
Propagation
In JavaScript, events propagate outward from the originating element all the way to the browser window.

By default, the event handlers for the parent, grandparent, and so on will all run if they exist.
To stop this, we can call the stopPropagation method on the event object.
For instance, we can write:
const button = document.querySelector("button");
button.addEventListener("mousedown", event => {
event.stopPropagation();
  if (event.button === 0) {
    console.log("left");
  } else if (event.button === 1) {
    console.log("middle");
  } else if (event.button === 2) {
console.log("right");
}
});
Now the event won't propagate to the parent, grandparent, and so on.
We can also use the target property to add a listener that can listen for actions from multiple elements.
For instance, given the following HTML:
<button>foo</button>
<button>bar</button>
<button>baz</button>
We can write:
window.addEventListener("mousedown", event => {
  if (event.target.nodeName === "BUTTON") {
console.log(event.target.textContent)
}
});
Then we get 'foo' , 'bar' , or 'baz' logged depending on which button we click, since textContent holds the content of the button.
Default Actions
Default actions are associated with many events.
To prevent the default action, we can call preventDefault() to stop it from happening.
Event handlers are called before the default behavior takes place, so we can prevent it there.
For example, if we have the following HTML:
<a href='http://example.com'>example</a>
Then we can stop people from clicking the link to go to the page by writing:
const link = document.querySelector("a");
link.addEventListener("click", event => {
console.log("stopped");
event.preventDefault();
});
event.preventDefault() stops the default action, which is navigating to the linked page.
Key Events
The keydown event is triggered when we press a key, and the keyup event when the key is released.
For instance, if we have the following HTML:
<p>
foo
</p>
We can write:
window.addEventListener("keydown", event => {
if (event.key === "g") {
document.body.style.background = "green";
}
});
window.addEventListener("keyup", event => {
if (event.key === "g") {
document.body.style.background = "";
}
});
Then when we press the G key, the background will turn green.
It’ll turn back to white when we release it.
The key property has the key that’s pressed.
Photo by Gustavo Zambelli on Unsplash
Conclusion
We can detect mouse and keyboard events with the event object.
Also, we can add and remove listeners from a DOM object by passing in an event listener.

Source: https://medium.com/dev-genius/javascript-and-the-web-handling-events-47cc526c1793 (John Au-Yeung, 2020-06-18). Tags: Programming, JavaScript, Software Development, Web Development, Technology
Seven Best Tips for Aspiring Software Engineers

Tip #1 — Focus on just one or two verticals, initially
I remember how I wasted (not exactly) my time by jumping from one domain of expertise to another in a very short span of time. I soon realized it led me to nowhere. There are a lot of booming technologies out there and it’s important to initially focus on one or two before switching to others. Try to find that niche that really excites you before you proceed to master it.
Though you could argue that exploring a lot of fields initially is what really paves us the way to research, this might not always be true.
Don’t try to learn Kubernetes, Machine Learning and React JS by scheduling time and acting smart. You’ll only end up having half-baked knowledge in the end.
You might have a sudden urge of excitement to study a topic just because it’s currently hot but it’s significant to realize that even the simplest of applications might contain intricate details under the hood. Constantly switching between areas only leads you to wander rather than focus!
Reap the rewards of learning a new skill by implementing it practically such that it solves a mundane task. After accomplishing this, you'll be fairly confident and can switch to a new technology.

Source: https://medium.com/illumination/seven-best-tips-for-aspiring-software-engineers-c8942ff8d767 (Abishaik Mohan, 2020-11-24). Tags: Self Improvement, Creativity, Productivity, Life, Technology
Cross-Platform vs. Native Mobile Apps

Introduction
Cross-platform apps have become a go-to-market solution for many startups in the tech industry. They promise to be able to easily write one codebase that deploys to both iOS and Android devices. However, many companies still choose to go the traditional route and develop natively despite the potential advantages.
Both solutions are valid — there isn’t a single “right” way to do things (at least not yet!). Choosing the right path forward for your team depends on your goals and constraints, and your choice in technology should come from that more so than a personal belief. This post will draw a comparison between the two options to help you make the best decision for your company.
Cross-Platform
Currently, there are several competing options for cross-platform development: React Native, Flutter, Xamarin, Ionic to name the mainstream ones. In this post, we are going to avoid explaining each one and discuss the pros/cons of cross-platform development as a whole.
Each of these platforms allows your team to write the code for one app, which can be deployed to both the Google Play Store and the App Store. The cost-savings potential of this is obviously huge if done effectively.
Pros to cross-platform
It is possible to deliver to both mobile markets at a budget far less than it would take to develop two separate native apps. About 60–70% of the code can be shared, with the remainder being custom for either platform.
You can have a single mobile development team, simplifying your organization. This also helps bring a cohesion to your Android and iOS apps that is hard to achieve otherwise.
Depending on the chosen technology, you may have developers in house that are already familiar with the languages and development styles (JavaScript, C#/.NET, Dart). You may be able to build a team without having to start hiring from the ground up.
New features are deployed to your iOS and Android users at the same time, rather than having a “feature leader”.
Drawbacks to cross-platform
It is much harder to hire developers experienced in this tech compared to native.
When you do hit a roadblock, it often takes a really highly skilled engineer to solve the problem.
iOS or Android specific features (ex. Health Kit / Apple Watch) are extremely difficult or impossible to work with.
Developing intricate UI/UX takes a lot longer than Native. If there isn’t an out-of-the-box component, a custom one needs to be created for both platforms which is time consuming.
Cross-platform is often not as efficient as Native, sometimes causing some jank that can’t really be solved. This has more impact the more complex your application.
The technologies have less mature ecosystems than native. This means less tools for your team to utilize and less help available when they run into issues.
The apps do not “feel” like iOS or Android apps. They create unfamiliar experiences for the user, which can make the app feel unattractive.
There is risk of the chosen technology dying out and losing community, creator or company support.
When it makes sense to use cross-platform
You want to target both Android and iOS markets for your MVP.
Limited budget for mobile is one of your primary constraints.
Your mobile app is not a core part of your business.
User experience should be standard between your apps and you are not seeking to “feel” like an iOS or Android app.
Your application is straightforward in terms of user experience and features.
Your designs are nearly identical on both platforms.
Our Thoughts
Cross-platform technologies are a great way to get to market quickly to the widest audience possible and validate your idea. If your app isn’t going to require platform-specific technologies, intricate UI/UX, or complex business logic you should strongly consider going with a cross-platform solution. Post-MVP, this can allow you to focus on product iteration entirely, whereas going native might mean you have to consider building a completely new team/app for iOS or Android. Native can always come later if needed.
Native Development
“Native” development refers to the traditional style of developing an entire application using the first party tools supplied by Apple and Google for iOS and Android. The ecosystems have evolved, but today, the industry standard is to use Android Studio & the Kotlin programming language for Android, and Xcode & the Swift programming language for iOS.
This approach necessitates having two separate codebases for each app. Teams will usually have engineers focus on one platform or the other, or sometimes have two completely separate development teams for each platform.
Many times, companies will release their app either to the App Store or Google Play first and focus on iterating on the product. If delivering the app on the other platform is part of their strategy, once they have traction or success and enough funding, they will invest in building a team to handle the other platform.
Pros to native development
Apps can take advantage of all the iOS or Android-specific features.
The user experience can be tailored to a greater or lesser degree to what an iOS or Android user expects.
Custom, intricate UI is a lot easier to develop generally.
If there is complex background processing, that will run smoother than cross-platform due to technical limitations.
Native apps done well “feel better” in a way that cross-platform apps are not capable of right now.
Your following team should be able to catch up to your first team pretty quickly. They will not have to go through the same mistakes and learnings that the first team did.
In terms of job market, there are many more developers experienced in native iOS or Android development. Your hiring will be a lot simpler.
The tools and community support have been developed since the inception of mobile apps and are far more robust and mature.
If you can imagine it, it can often be done 99% of the time.
Drawbacks to native development
It can be very expensive to fund native apps for both iOS and Android.
You essentially need two teams worth of developers to deliver to both.
The following platform is often lagging behind in terms of features and it can be difficult to achieve parity.
It is challenging to achieve cohesion between the iOS and Android apps and teams.
When it makes sense to go native
You care deeply about a fine-tuned, high-quality or impressive user experience.
You want to take advantage of platform features (ex. Health Kit / Apple Watch integration).
You have computationally intensive features in mind, such as video processing or live streaming.
It is not highly important that you deliver to the widest audience possible at first.
You are targeting a specific demographic that tends to use one platform or the other.
When your designs heavily leverage native components and styles.
Our Thoughts
If you want to deliver a very high-quality app that feels the most natural or satisfying to the user, or you need to integrate closely with native APIs and features for either platform, then native is the way to go. Even if you are anticipating this in the future, you should consider doing native. When everything goes smoothly, developing two native apps is a lot more expensive. If you run into a roadblock in terms of what features you can create, often those are solvable in native, while cross-platform might leave you tearing your hair out.
Conclusion
At QuarkWorks, we have experience developing in a variety of cross-platform technologies as well as our extensive background in native development since the inception of the App Store.
There is no “one true way” when it comes to developing mobile apps. For developing lightweight prototypes and MVPs, we like the concept of using cross-platform when possible to get our ideas in the hands of the widest audience quickly. However, when we want to create something to perfection and really deliver that highest quality user experience, we still prefer to go with native development.
We appreciate the apps that just “feel” good and there is no way to recreate that feeling quite the same with cross-platform technology currently. We’re excited about some of the new solutions like Flutter, that are aiming to provide not only a single codebase for mobile, but also being able to share code between mobile and web. It does all this while being as efficient as native apps. Although it’s not perfect, we see that as the metaphorical Holy Grail that the industry could one day achieve.
Right now, many of these technologies are promising that, but fall short in ways that can become really frustrating to work through.
As ever, QuarkWorks is available to help with any software application project — web, mobile, and more! If you are interested in our services you can check out our website. We would love to answer any questions you have! Just reach out to us on our Twitter, Facebook, LinkedIn, or Instagram. | https://medium.com/quark-works/cross-platform-vs-native-mobile-apps-bc9076fa227b | ['Jacob Muchow'] | 2020-10-15 15:02:41.386000+00:00 | ['Technology', 'Software Development', 'Startup', 'Apps', 'Coding'] |
Your potential is directly correlated to how well you know yourself.
Those who know themselves and maximize their strengths are the ones who go where they want to go.
Olv, Nov 19, 2020
Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore.
Good times and bad times. Happy times and sad times.
But always, life is a movement forward.
No matter where you are on the journey, in some way, you are continuing on — and that’s what makes it so magnificent. One day, you’re questioning what on earth will ever make you feel happy and fulfilled. And the next, you’re perfectly in flow, writing the most important book of your entire career.
What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.”
1. Most people are scared of using their imagination.
They’ve disconnected with their inner child.
They don’t feel they are “creative.”
They like things “just the way they are.”
2. Your dream doesn’t really matter to anyone else.
Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you.
3. Friends are relative to where you are in your life.
Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends.
4. Your potential increases with age.
As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You aren’t just “born” that way.
5. Spontaneity is the sister of creativity.
If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment!
6. You forget the value of “touch” later on.
When was the last time you played in the rain?
When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby?
Do that again.
You will feel so connected to the playfulness of life.
7. Most people don’t do what they love.
It’s true.
The “masses” are not the ones who live the lives they dreamed of living. And the reason is because they didn’t fight hard enough. They didn’t make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you’ll end up the same.
Don’t fall for the trap.
8. Many stop reading after college.
Ask anyone you know the last good book they read, and I’ll bet most of them respond with, “Wow, I haven’t read a book in a long time.”
9. People talk more than they listen.
There is nothing more ridiculous to me than hearing two people talk “at” each other, neither one listening, but waiting for the other person to stop talking so they can start up again.
10. Creativity takes practice.
It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable.
If you want to keep your creative muscle pumped and active, you have to practice it on your own.
11. “Success” is a relative term.
As kids, we’re taught to “reach for success.”
What does that really mean? Success to one person could mean the opposite for someone else.
Define your own Success.
12. You can’t change your parents.
A sad and difficult truth to face as you get older: You can’t change your parents.
They are who they are.
Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door.
13. The only person you have to face in the morning is yourself.
When you’re younger, it feels like you have to please the entire world.
You don’t.
Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that.
14. Nothing feels as good as something you do from the heart.
No amount of money or achievement or external validation will ever take the place of what you do out of pure love.
Follow your heart, and the rest will follow.
15. Your potential is directly correlated to how well you know yourself.
Those who know themselves and maximize their strengths are the ones who go where they want to go.
Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future.
16. Everyone who doubts you will always come back around.
That kid who used to bully you will come asking for a job.
The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way.
Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help.
17. You are a reflection of the 5 people you spend the most time with.
Nobody creates themselves, by themselves.
We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them.
18. Beliefs are relative to what you pursue.
Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs.
Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative.
Find what works for you.
19. Anything can be a vice.
Be wary.
Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them.
Never mistakes, always lessons.
As I said, know yourself.
20. Your purpose is to be YOU.
What is the meaning of life?
To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece.
Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish. | https://medium.com/@olv772/your-potential-is-directly-correlated-to-how-well-you-know-yourself-2d74010175bd | [] | 2020-11-19 17:10:44.969000+00:00 | ['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming'] |
1,584 | Morpheus Labs in Top 15 Amongst More Than 500 Startups Globally | We’re excited to be part of the Huawei Spark Program, which aims to incubate and accelerate deep tech startups in the Asia Pacific! This is a great opportunity for us to gain direct access to many respected mentors across various areas of the business.
To learn more, watch this video below.
Organised by HUAWEI CLOUD in partnership with Enterprise Singapore and Startup SG, Huawei Spark was launched in August 2020 to call on deep tech companies in the Asia Pacific to participate in the accelerator programme at the online HUAWEI CLOUD Summit 2020 Singapore. Morpheus Labs went through a rigorous selection process and was chosen by the accelerator programme as one of the top 15 most promising startups amongst more than 500 globally.
The Morpheus Labs team spent the first two years of its business building and developing its low-code development platform, Morpheus Labs SEED, which developers and enterprises alike can use to design, deploy and run blockchain applications and solution implementations. The time has come for Morpheus Labs to seek greater market validation and set foot in the emerging technology market.
Morpheus Labs will receive cloud resources from Huawei, including hardware support such as AI modules and AI-based intelligent industry solutions, along with open-source software support such as AI frameworks, databases and operating systems to develop its own applications, services and hardware appliances, as well as technical, training, external webinar and go-to-market support. Morpheus Labs will also get access to Huawei's wide-ranging global enterprise client portfolio. On top of that, the company will be able to reach over 600 million Huawei mobile users through the Huawei Mobile Services (HMS) system and Huawei App Gallery, and millions of enterprise users across the globe through Huawei's Cloud marketplace.
With Huawei’s additional support and resources, Morpheus Labs will be the go-to platform for blockchain solution implementations, providing the following: | https://medium.com/morpheus-labs/morpheus-labs-in-top-15-amongst-more-than-500-startups-globally-cbe6216369ba | ['Morpheus Labs Team'] | 2020-12-11 09:02:02+00:00 | ['Technology', 'Blockchain', 'Cryptocurrency', 'Accelerator', 'Huawei'] |
1,585 | Minecraft Dungeons Review | Oddly, the game features no mining or crafting. Mining may not always have a place in the action RPG world, but crafting certainly does. I expected to build new weapons and armor throughout my short journey, but no. Instead, you’ll earn every item you use, either by finding it in a chest or random enemy drop, or by spending emeralds in your camp. The merchants in the main hub function a little too close to free-to-play loot box mechanics in mobile games for my personal comfort. Instead of offering a selection of weapons and powerups to buy, they’ll offer you one blind item in exchange for an amount of your money. If you don’t like the item, you can break it down for a fraction of the emeralds you spent on it.
The prices for these blind items slowly increase as you advance in the game, and to be fair, the main missions offer plenty of loot such that you’ll never solely rely on the merchants to get the gear you’re after. Still, it seems like a system ripe for exploitation with microtransactions down the road. The game already has a $10 DLC pass available that includes a number of new cosmetic character options, and gets you two content packs coming in July and September. These will add a few extra stages to the game, which should help with its brevity at least. | https://xander51.medium.com/minecraft-dungeons-review-3e9307a405bf | ['Alex Rowe'] | 2020-05-27 19:25:44.657000+00:00 | ['Videogames', 'Gaming', 'Minecraft', 'Tech', 'Technology'] |
1,586 | Checking Under the Dashboard | Checking Under the Dashboard
This article reveals the often-overlooked consequences of women’s voices not being incorporated into tech, resulting in harms that will affect generations. Details of direct harms and indirect harms can be found in Gender Bias in Data and Tech.
AFP/MARTIN BUREAU/POOL: This is an image of French President Macron at a Tech Conference in 2017.
Data Profiling — How It Works and Why It’s A Problem
Do you know why or how certain movies are suggested to you on Netflix over others, or how your news feed articles are prioritized on Facebook, or why certain Google search results come to the top over others? Movies, articles, and search results are targeted to you based on your “likes”, your friends’ “likes”, your search history, your political views, information you provide to other apps and devices that you use, and many other personal traits.
You probably already knew that, but have you considered that they’re also based on whether or not you’re a woman, whether or not you’re a parent, and whether or not you’re in the paid workforce? The Adtech industry uses your online behavior to build a profile of you — who they think you are. They may look at your behavior and profile you as a male, between 40–50, heterosexual, married, multiple children, and living in an affluent area. Advertising is then targeted to you based on what a typical man of this age and demographic is interested in, which is heavily based on societal norms, the personal biases of the individual(s) creating the algorithms behind this, as well as stereotypes. This type of profiling happens all the time — just think of how many online or digital interactions you have with various apps and websites all day long. These companies are receiving a steady stream of information about you that they use to refine and add to their profiling of you as an individual. Not all of this data will be interpreted correctly.
Now think about LinkedIn, online recruiters, job search sites, banks, financial lenders, healthcare providers, and governments — this is not about whether you see the latest handbag or cat video anymore, it’s about whether or not your profile comes up in the recruiters search for a new candidate, whether a lender finds you credit worthy, or whether you qualify for healthcare. This is where technology has the potential to exponentially exacerbate bias with unchecked AI, but it could also reduce human bias by putting up a virtual curtain to look beyond one’s gender, race, or other profiling traits. The former is the status quo; the latter will only happen if we act.
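To make the mechanics concrete, here is a toy Python sketch of the profiling loop described above. Every signal name, rule, and ad mapping is invented for illustration; real adtech systems use far richer data, but the logic is the same: infer attributes from behavior, then target on the inference, which may be wrong.

```python
# Toy sketch of behavioral profiling: infer coarse demographic buckets
# from observed signals, then pick ads for the inferred profile.
# All signal names, rules, and ad mappings are invented for illustration.

def infer_profile(signals: dict) -> dict:
    """Guess demographic buckets from behavioral signals (guesses may be wrong)."""
    profile = {}
    profile["parent"] = signals.get("searched_strollers", False)
    profile["affluent"] = signals.get("median_basket_value", 0) > 100
    # Crude, stereotype-driven rule -- exactly the kind of inference
    # that mislabels real users.
    profile["gender_guess"] = "female" if signals.get("viewed_fashion", False) else "male"
    return profile

def pick_ads(profile: dict) -> list:
    """Target ads based on the inferred (not actual) profile."""
    ads = []
    if profile["parent"]:
        ads.append("daycare")
    if profile["affluent"]:
        ads.append("luxury_watch")
    ads.append("handbag" if profile["gender_guess"] == "female" else "power_tools")
    return ads

user = {"searched_strollers": True, "median_basket_value": 150}
profile = infer_profile(user)
print(profile["gender_guess"])  # "male" -- inferred from an absent signal, not reality
print(pick_ads(profile))
```

Note how the gender guess falls back to a stereotype-driven default when the expected signal is absent; this is exactly how users who do not fit the assumed behavior get mislabeled and then mis-targeted.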
This is why we need a focus on gender data+tech.
What is Gender Data+Tech?
When you hear “gender data+tech” what comes to mind? If you’re working in international development, you probably think of the gender digital divide or you might think of the more recent focus on tech-facilitated GBV. Technologists’ minds on the other hand could gravitate to digital rights or possibly the “pipeline problem”. If you’re the type to discover the next best market, then the untapped potential in women’s healthcare and Femtech might come to mind. To all the data gurus out there, you’re probably thinking of data rights, data security, and privacy. A few of you will be thinking about gender data, its gaps and access problems. Then, if you are one of those very few people who straddle some combination of data science, gender studies, policy, and practice you will likely think of AI ethics, and all of the different types of data bias that can occur in machine learning.
In order to define gender data+tech, we first examine the consequences of the status quo. In this article, we discuss direct harms and indirect harms associated with data, or lack thereof, and technology solutions. We not only outline the direct harms of getting online today, but also focus on revealing the often-overlooked consequences of women's voices not being incorporated into tech, resulting in indirect harm that will affect generations upon generations.
Three Billion Marginalized Voices — Creating Space for Women and Girls[1]
In order to address the needs of women and girls, we need to ensure that we are consciously putting them first. This might sound obvious to some, but it will be a dividing factor for others.
Most of the work in this space does not actually have a primary focus on women and girls. Some work acknowledges that there are differences in our global population and focuses on human rights (data ethics/ AI ethics, digital rights/ data rights), some focus on challenging the power and purpose of technology (Data&Society), while others have a focus on all marginalized groups (Design Justice Network, Data + Feminism Lab, Algorithmic Justice League, We All Count). These are all separately and together necessary spaces for the betterment of society.
This is a visual mapping of fields of study and a few predominant groups that are close to Gender Data+Tech, organized by primary focus area. It is populated from publicly available information and is by no means comprehensive or static. This visual is meant to provide clarity on gaps as well as identify room for collaboration within the space.
Having a space focusing on women and girls is also separately and together necessary. Without this, conversations (subconsciously) orbit around men as the default, even when led by women. Historically, men/boys raise their hands first, which is a problem when the question relates to the needs of women/girls. Of course, there is room to work together, but this requires a women/ girls focused space that enables the different perspectives to emerge.
United We Stand, Divided We Fall
It’s going to take a cross-cutting gender lens with a truly diverse global perspective to address and create for the futures of women and girls within data and tech. Current coordination and funding efforts lack acknowledgement that this problem cuts across all gender work[2] as well as across all developed and developing countries, Northern and Southern Hemispheres.
No matter what perspective you’re coming from, we can all agree that when looking at issues of gender or women, and technology or data it’s a very minimally explored area that needs to be further developed with frameworks and structures that collectively guide us. The intention of this article is to increase awareness of this issue and galvanize people into action. We need a larger discussion that collectively defines all of the risks for individual women and girls as well as the risk on our economic and social systems that shape gender equality now and into the future.
Direct Harm/ GBV/ VAW
Direct harm against women and girls encompasses intentional violence such as: online harassment, hate speech, stalking, threats, impersonation, hacking, image-based abuse[3], doxing, human trafficking, disinformation and defamation, swatting, astroturfing, IoT-related harassment, and virtual reality harassment and abuse. These types of violence can be easily classified as GBV/ VAW, tech-facilitated GBV/VAW, or online GBV/ VAW because the intent behind the acts is clear, to harm an individual or group. There is a small, but growing body of work on these topics.
Groups that have been focused on this type of tech-facilitated direct harm for some time are the APC and journalists. Although the work was historically not intended to focus on women, the targeting of women is common; thus a coalition of women journalists was recently formed against online violence. The child protection communities have also had a strong presence, as the issue relates specifically to children. Traditional GBV communities are also beginning to take up such topics, since tech-facilitated GBV stems from the same generational problem of power imbalance between men and women.
Indirect Harm/ GBV/ VAW
The way in which a majority of technology, especially digital tech, is being created today is amplifying gender inequalities and discriminating against women and girls. Despite preventing women from accessing jobs, funds, public services, and information, indirect harm against women from the misuse of data and tech is almost entirely overlooked.
Here we are talking about algorithmic bias (i.e. coded bias in artificial intelligence and machine learning), data bias (i.e. missing or mislabeled datasets), data security (i.e. sharing identifiable information), and other gender-blind tech that isn’t incorporating the voices of women and girls (e.g. bots, car crash dummies, and human resources software perpetuating harmful gender norms).
For those who want to dive deeper into indirect harm, a detailed explanation of harms and common terminology can be found in this sister article Gender Bias in Data and Tech.
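As a toy illustration of how skewed data produces biased outcomes, the sketch below computes selection rates by group from a made-up historical hiring dataset and applies the "four-fifths rule" used in US adverse-impact audits, where a ratio below 0.8 is commonly treated as evidence of disparate impact. A model trained on such data simply learns to reproduce the skew; all numbers here are invented.

```python
# Minimal illustration of data bias: a model trained on historical hiring
# data reproduces the skew baked into that data. Numbers are made up.

historical_hires = [
    # (group, hired)
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def selection_rate(group: str) -> float:
    """Fraction of candidates in `group` who were hired."""
    rows = [hired for g, hired in historical_hires if g == group]
    return sum(rows) / len(rows)

men_rate = selection_rate("men")      # 0.75
women_rate = selection_rate("women")  # 0.25

# Four-fifths rule check: a ratio below 0.8 is commonly treated
# as evidence of adverse impact in hiring audits.
impact_ratio = women_rate / men_rate
print(f"selection rates: men={men_rate:.2f}, women={women_rate:.2f}")
print(f"impact ratio: {impact_ratio:.2f}")  # 0.33, well below the 0.8 threshold
```

A classifier fit to this table with no correction would learn the same disparity, which is why auditing training data before deployment matters.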
Why Are There So Few Women in Tech Today?
The tech industry, like almost every industry or sector, is full of harmful gender norms, bias, and sexism. What is unique to the tech industry is the extremely small representation of women combined with the massive and growing power and influence tech has on all of our daily lives.
As Emily Chang so clearly spells out in her book Brotopia: Breaking Up the Boys' Club of Silicon Valley, an important aspect to understand when looking into these problems is that women have been excluded from the technology industry for over 50 years despite being its early developers. To all of those who say that it's a "pipeline problem," she asks: who created the pipeline? The pipeline problem was created by the tech industry itself, from intentionally hiring anti-social men with the Cannon-Perry aptitude test in the 1960s, to Hundred Dollar Joe at Trilogy rewarding impulsive risk takers from strip clubs in Vegas in the early 1990s, to the original founders of PayPal being outwardly anti-feminist and anti-diversity, selectively hiring men like them beginning in the late 90s. The lack of women in the tech industry today is not because there's a lack of interest or intellect. It's because the tech industry pushed out women in the 60s and has been a massive boys' club with exclusionary behavior from the 1990s through the 2010s, only solidifying such behavior today with the largest tech investors coming from the same proud PayPal Mafia.
Now Let’s Move Forward, Together.
When examining why this problem exists it’s easy to get lost in discussions with people who don’t care about the problem, people who don’t see the problem, or people who lack the will to do something. Instead of focusing our finite energy on this, let’s first start with the low hanging fruit. Let’s coordinate the people who care, see, and have the will to do something about the problem, but don’t know how to do it alone or just need their voice amplified.
We are building Gender Data+Tech Community of Practice to bring together thought leaders to create partnerships, collaborative guidance, and develop solutions. Our goal is to build an evidence base with proven data and tech solutions that focuses on tackling gender inequalities for women and girls.
We, as a small but growing global community, are eager to have the critical conversations about how to ensure diverse voices and perspectives of women and girls are reflected and incorporated into technology today (e.g. demystifying black boxes, documenting training data, use AI to fight against GBV) as well as co-create better technology for tomorrow (e.g. participatory tech, inclusion prediction models, NLP tools for gender professionals).
Please reach out if you are passionate about moving the needle for women and girls with data and tech. We are stronger together and now is the time to shape our future.
[1] The terms “woman” and “girl” are social constructs of gender, thus this includes anyone who self-identifies as a woman or girl. This is also meant to include all female persons, indifferent of their gender identity. Language has been intentionally abbreviated not to dilute or distract from the 3 billion majority who are cis women and girls.
[2] Let’s address putting terminology differences aside for the time being. Gender Equality, Gender-Based Violence (GBV), Violence Against Women (VAW), and Violence Against Women and Girls (VAWG) work often overlaps, some would even argue that they are all the same.
[3] Image-based abuse includes: sharing images or videos without consent, taking images or videos without consent, shallow fakes, deepfakes, cyber flashing, sextortion, sextortion scam, child based image abuse. | https://medium.com/@stephanie.mikkelson/checking-under-the-dashboard-6d0acc196fa2 | ['Stephanie Mikkelson'] | 2021-07-01 21:48:14.316000+00:00 | ['Violence Against Women', 'Gender Equality', 'Data', 'Artificial Intelligence', 'Technology'] |
1,587 | How Much Does It Cost To Build A Social Media App (Features, Business Model, Cost) | How much does it cost you to develop a social media app in 2020–21?, it is one of the most often asked questions by businesses. The typical cost to create a social media app like Facebook, Instagram, Tumbler, can be ranging from $35,000 to $50,000+, depending upon the factors like app type, features, complexity, design, software development and more that influence its development cost.
From Facebook to Instagram, Twitter to LinkedIn, Whatsapp to Snapchat, all have become a social media sensation of the town that is continuously booming on the internet these days.
With the projection that global social media users will reach by 4.41 billion in 2025 from 3.6 billion in 2020, it is fair enough to say that social media apps will quickly take the world by storm. In fact, the existing social media applications have already revolutionized the way people communicate with each other and have become an integral part of everyone’s life.
Moreover, the impact of social media apps has reached the point where the majority of people check their social media notifications on their smartphone first thing in the morning. From checking what's new at work to keeping up with friends and family, smartphone users rely on these apps for communication and interaction.
Now, many of you are wondering: how can it be rewarding for your business in 2020–21?
The truth is, a mindfully developed social media app can be helpful for both you and your targeted audience. People spend an average of 2 hours 24 minutes on social media daily, and this figure has been growing since 2012. While there is no way to predict the future, considering the current statistics it is fair to say that global usage of social media apps is likely to keep increasing.
With millions of people engaging on social media apps around countless topics every day, even highly functional websites fail to achieve the level of engagement that social media platforms provide. This is the sole reason why businesses are investing in social media applications to fulfill the needs of a specific audience.
But, before stepping towards developing a social media app for your startup, it is important to understand a few things that help you achieve success in the thriving app development market.
Here are the key highlights of this blog:
Growing Popularity of Social Media Apps: Latest App Trends and Statistics
Type of Social Media Apps You Can Consider to Develop in 2020–21
Main Features and Functionalities that Can Make Your App Successful
Advanced Features To Create a Social Networking App
Development Team and Tech Stack Required to Develop Social Media App
Choosing the Best Social Media App Monetization Model?
How Much Does It Overall Cost to Build a Social Media App in 2020–21?
Let’s discuss each point in detail for better understanding:
1. Growing Popularity of Social Media Apps: Latest App Trends and Statistics
Undoubtedly, social media apps are growing at a rapid pace and can ensure you a great success in future. Still, it is worth analyzing the market and evaluating the statistics to determine the scope of launching your social media app in 2020–21.
Here are the key statistics of the social media market that you need to know:
Wrap Up: Considering these flourishing figures, it is fair to say that social media apps are not going anywhere. This is why progressive startups spring up like mushrooms and work hard to secure their place in this cutting-edge, competitive market.
To achieve success, one thing you need to keep in mind is to bring a unique and innovative app idea that changes the whole concept of social media and offers value that potential users have never received before via existing communication channels.
So now the question is, how to develop a brilliant social media app and how much does it cost you?
To get the answer to this question, you first need to decide which type of social media app you want to create.
2. Type of Social Media Apps You Can Consider to Develop in 2020–21
There are a number of social networking apps developed for different goals and to attract different categories of people. So what’s your type of social media app?
Here are the few categories, from which you can easily choose:
Social Media Sharing Networks
This is one of the most popular types of social network, letting users create and share photo and video content with their friends. Users can create short videos or start a video channel using various photo filters and other interesting features to get the best out of these apps. Perfect examples of this type of social media network are Instagram, YouTube, Snapchat and more.
Relationship Building Networks
Applications like Facebook, Twitter or LinkedIn are the most widely used relationship-building networks, serving millions of users from all around the world. These apps work in three ways: developing personal contact networks, building professional networks, and helping people find suitable dating matches.
Social Content Publishing Platforms
From blogging to micro-blogging, social content publishing apps like Twitter and Tumblr play a great role. Investing in these types of applications is really worthwhile, as they help generate traffic and encourage users to create and share their thoughts with the world.
Discussion Forums
Discussion forums or Q&A service providing social media apps were the first on the internet. The prime goal of these apps is to exchange the knowledge and provide expert advice to the users through the forum.
Online Reviews
Today, in the digitized era, majority of the users are relying on the customer reviews and ratings of the products and services before making an online purchase. This type of social media app is really worth investing in as it helps in pushing forward the quality services of your business.
Conclusion: No matter what kind of social media app you decide to develop, make sure your app:
Has an excellent UX/UI design that attracts the attention of new users.
Delivers a satisfying user experience.
Can be accessed anytime and from any location.
Is integrated with all the latest features to ensure improved functionality.
Apart from all these, you must be wondering, what features are necessary to consider in developing brilliant social media apps in 2020–21?
3. Main Features and Functionalities that Can Make Your App Successful
The success of the app greatly depends upon the features and functionalities you choose to integrate in the app. And when it comes to launching social media apps, the game solely depends upon the features, its structure and UX/UI design of the app.
However, before you prepare a list of features, keep in mind that these are one of the biggest cost driving factors of the app.
Note: These are rough estimates for building features; actual costs can fall below or above these figures depending upon the mobile app development team you hire, the features you choose and the complexity of the app you decide to develop. Moreover, the cost of developing the app's back-end still has to be added to the app development cost.
The list of expenses doesn't end with this table. Every business has different requirements and goals, so social media apps are constructed with different sets of features. Customizing an app, even with basic features, is a complicated task, so it is worth hiring a mobile app development company to develop a brilliant app.
4. Advanced Features To Create a Social Networking App
With emerging technologies and an increasing number of social media applications in the app stores, developing an ordinary app is not a sensible decision. With Facebook, Instagram, Twitter, Tumblr, and more already leading the social media app industry, it's time to incorporate advanced features in your app to redefine the user experience.
Here are a few advanced features, with their development cost/time, that you should consider in 2020–21:
AR Filters: Like Snapchat and Instagram, spark users' curiosity to try new photo filters in your app every day. These apps offer a bunch of AR-based face filters, from enhancing features with beauty masks to adding dog ears to your face. To add these features to your app, you can use ML Kit or ARCore, or simply hire app developers to customize the filters without any hassle.
Pro Photo Editing: Give users access to in-app picture editing features and let them customize their pictures with different photo editing tools. Developers can integrate editing features with the help of FFmpeg, giving users better editing options.
Location Based Content: This can be an exciting feature for your social media app: analyse the user's location and show content posted by others with the same location tag. You can hire app developers to build this feature using Google Places.
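As a hint of how the FFmpeg route mentioned above can work in practice, this sketch builds an ffmpeg command that applies a simple grayscale filter (hue=s=0 is standard FFmpeg filter syntax that removes saturation). The file names are placeholders, and actually running the command requires the ffmpeg binary to be installed on the device or server.

```python
import subprocess  # only needed if you actually run the command

def grayscale_cmd(src: str, dst: str) -> list:
    """Build an ffmpeg command that removes saturation (a grayscale filter).
    -y overwrites the output file if it exists; hue=s=0 zeroes saturation."""
    return ["ffmpeg", "-y", "-i", src, "-vf", "hue=s=0", dst]

cmd = grayscale_cmd("selfie.jpg", "selfie_bw.jpg")
print(" ".join(cmd))
# To actually run it (requires ffmpeg on PATH):
# subprocess.run(cmd, check=True)
```

Other filters follow the same pattern: swap the `-vf` expression and you get brightness, contrast or blur effects without writing any image-processing code yourself.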
5. Development Team and Tech Stack Required to Develop Social Media App
Building a social media app is a complex task. Instead of relying on freelancers or in-house development teams, it is always worth hiring mobile app developers experienced in developing such apps with the latest technologies and proven methodologies.
Since native iOS/Android app development requires various specialists, you need to look for a dedicated team of developers, including:
Project Manager: Capable of understanding the client's requirements and getting them executed by the development team.
UX/UI Designer: To create a beautiful app design that is simple, easy and seamless to use and ensures an excellent user experience.
2 iOS Developers: Experts in developing apps using platform-specific programming languages.
2 Android Developers: Experienced in creating apps using platform-oriented programming languages.
Back-end Developer / Front-end Developer
QA Engineer: To deeply analyse your app and test it across multiple platforms.
6. Choosing the Best Social Media App Monetization Model
There is no use developing a mobile app if it fails to make a profit for your business. While there are various monetization methods, here are some proven strategies to help you make the best profit from your app:
In-App Advertising: Ads are the most common way to generate profit from an app, but keep in mind to show relevant advertisements, and within limits. Excessive ads can irritate your users and push them to uninstall the app.
In-App Purchases: Most social media apps are free to use, but you can offer premium subscriptions that unlock additional features.
The choice of app monetization method depends on the audience you are targeting, the location of your users, and the operating system you choose to launch your app on. Alternatively, you can hire a mobile app development team to explore better monetization options.
7. How Much Does It Cost Overall to Build a Social Media App in 2020–21?
According to survey reports, the average price of developing a simple app starts from $40,000 to $60,000, a medium-complexity app can cost between $61,000 and $69,000, and a complex app can go above $100,000.
There are thousands of mobile app development companies around the world that claim to be the best app developers, promising the cheapest development cost and a social media app built for under $25,000.
But let me tell you, that’s just a marketing trap for you!
The truth is, there is no fixed price for social media app development. But at an average developer rate of $50 per hour, a basic app can be developed for $15,000+, a medium-complexity app will cost between $20,000 and $30,000, and complex apps with all the modern features usually go beyond $50,000.
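To make the hourly-rate arithmetic concrete, here is a small back-of-the-envelope estimator. It is only a sketch: the hour counts per complexity tier are illustrative assumptions chosen to line up with the figures above, not quoted data.

```python
# Rough cost estimate: total = estimated hours x hourly rate.
# Hour ranges per tier are illustrative assumptions.

HOURS_BY_TIER = {
    "basic": (300, 400),      # ~ $15,000+ at $50/h
    "medium": (400, 600),     # ~ $20,000-$30,000 at $50/h
    "complex": (1000, 1400),  # $50,000+ at $50/h
}

def estimate_cost(tier, hourly_rate=50):
    """Return a (low, high) dollar range for the given complexity tier."""
    low_hours, high_hours = HOURS_BY_TIER[tier]
    return low_hours * hourly_rate, high_hours * hourly_rate

for tier in HOURS_BY_TIER:
    low, high = estimate_cost(tier)
    print(f"{tier}: ${low:,} - ${high:,}")
```

The same arithmetic explains why developer location matters so much: doubling the hourly rate doubles every tier’s range.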
If you think that a high developer cost will guarantee high quality, that is just a myth. The actual development cost of a social media app depends greatly on the skills, experience, location, and expertise of the development team you choose to hire. So it is always worth being choosy and calculated when hiring a mobile app developer for your next project.
Conclusion
With the end of this post, hopefully you can now roughly estimate how much a social media app will cost you. When building a complex social media app like LinkedIn, Facebook, Instagram, or Twitter, it is worth using an MVP approach to first test the product concept and clarify what exactly you are trying to achieve through a social media app.
With so many features, functionalities, complexities, and technologies available to choose from, it is recommended to look for a mobile app development company that can evaluate your business requirements and suggest the best solution with an accurate pricing model.
1,588 | Hotmail Helpline Number or Hotmail Customer Service +1–844–832–5538 Technical Support | Hotmail Change Password Use Your Secret Questions
Also, remember that the server will determine which of the secret questions will be made available to you for Hotmail account sign-in.
Contact Hotmail customer service for a password reset. If you have forgotten your Hotmail password and wish to use your security questions, head over to the Hotmail Contact page and select Password and sign-in.
Under sub-topic two, select Forgot Password, under Recommended Option, enter your email address and click on Submit.
Click Next. If prompted, enter the CAPTCHA code.
Select Use my secret questions, followed by Next. Then enter the answers to your security questions.
If entered correctly, Hotmail will prompt you to enter a new password. Enter your new passcode and click Next to finish your password recovery.
Note: If you do not have secret questions on file, that reset option will not be available in the Sign-in Helper.
Finding it hard to recall the answers to your security questions?
Try resetting it using your recovery email or mobile number.
Invalid ID or Password Message
If your attempts to log in result in an Invalid ID or Invalid Password error message, this signals that you’ve entered a password and ID combination that doesn’t match what’s on record.
If you are certain that you are providing the proper sign-in data, there are a few scenarios as to why this is happening.
If your secret code contains numbers or letters, make sure that your Caps Lock and Num Lock keys are set correctly.
Case-sensitive codes are often entered incorrectly due to keystroke sensitivity.
Take a look at your browser’s auto-fill settings. If your browser usually fills in your password automatically and you’ve changed your password recently, you’ll need to enter your new password manually to override your browser settings.
If you’re sure that you’re entering the correct information, this could be an indication that someone else accessed your account and changed your password.
Reset your password immediately using the Password Helper tool. Once you regain access to your account, review the steps to secure a hacked account and revert any changes made without your knowledge.
Important note: If you can’t answer the secret questions either, and you also do not have access to your alternate e-mail address or phone number, there is nothing else you can do from here, as Hotmail is not able to tell that you are the legitimate owner.
Find your Hotmail ID and restore access to your Hotmail account | https://medium.com/@jenny_86082/hotmail-helpline-number-or-hotmail-customer-service-1-844-832-5538-technical-support-fe7b8dcbcc42 | ['Jenny Underwood'] | 2019-07-24 06:41:46.930000+00:00 | ['Tech', 'Support', 'Technology', 'Technology News', 'Technews'] |
Latest PCSIR 2020 Pakistan Council of Scientific and Industrial Research Jobs Advertisement | Latest PCSIR 2020 Pakistan Council of Scientific and Industrial Research Jobs Advertisement Pakjob ·Dec 17, 2020
PCSIR Head Office, 1-Constitution Avenue, Sector-G-5/2, Islamabad
Vacancy 6
Availability CONTRACT
Experience 5–10 years required.
Region Punjab
Education Masters, PHD,
Latest PCSIR 2020 Pakistan Council of Scientific and Industrial Research Jobs Advertisement
Last updated on: 17th December 2020
Offered Salary : PKR 90000–125000
Skills Level: Management
Gender: Male, Female
Age: 30–40 Years
Assignments: Stay updated on
Last Date to Apply Dec 28, 2020
for more information please visit this link
https://www.jobadvertisement.cf/ | https://medium.com/@pakjob114/latest-pcsir-2020-pakistan-council-of-scientific-and-industrial-research-jobs-advertisement-615a4738539b | [] | 2020-12-17 11:32:13.331000+00:00 | ['Information Technology', 'Blog', 'Jobs'] |
1,590 | Tutorial Ansible — #4 Ansible CLI Playbook | Digital publications that discuss the everyday configuration and troubleshoot network devices | https://medium.com/netshoot/belajar-ansible-4-ansible-cli-playbook-d9e3d4497bbc | ['Ghifari Nur'] | 2021-01-18 00:01:56.255000+00:00 | ['Automation', 'Networking', 'Ansible', 'Technology', 'Indonesia'] |
1,591 | Elite; Series 4 — Episode 1 : Full Episode On (Netflix’s) | ⭐A Target Package is short for Target Package of Information. It is a more specialized case of Intel Package of Information or Intel Package.
✌ THE STORY ✌
Jeremy Camp (K.J. Apa) is an aspiring musician who wants only to honor his God through the power of music. Leaving his Indiana home for the warmer climate of California and a college education, Jeremy soon comes across one Melissa Henning (Britt Robertson), a fellow university student he notices in the audience at a local concert. Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship, as she fears it will create an awkward situation between Jeremy and their mutual friend Jean-Luc (Nathan Parsons), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving dating relationship. However, their youthful courtship comes to a halt when the life-threatening news that Melissa has cancer takes center stage. The diagnosis does nothing to deter Jeremy’s love for her, and the couple marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and suffering through her illness, with Jeremy questioning his faith in music, in himself, and in God.
✌ STREAMING MEDIA ✌
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb “to stream” refers to the process of delivering or obtaining media in this manner. Streaming refers to the delivery method of the medium, rather than the medium itself. Distinguishing the delivery method from the media distributed applies especially to telecommunications networks, as most of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content. And users lacking compatible hardware or software systems may be unable to stream certain content.
Streaming is an alternative to file downloading, a process in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user can use their media player to start playing digital video or digital audio content before the entire file has been transmitted. The term “streaming media” can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered “streaming text”.
This brings me around to discussing I Still Believe, a film release of the Christian faith-based variety. As is almost customary, Hollywood usually generates two (maybe three) films of this variety within its yearly theatrical release lineup, with the releases usually arriving around spring and/or fall respectively. I didn’t hear much when this movie was initially announced (it probably got buried underneath all the popular movie news on the newsfeed). My first actual glimpse of the movie was when the film’s trailer premiered, which looked somewhat interesting to me. Yes, it looked like the movie was going to have the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like). Plus, the trailer for I Still Believe ran for quite some time, so I kept seeing it whenever I visited my local cinema. You could say that it was a bit “engrained in my brain”. Thus, I was a bit keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it during its opening night), but, because of work scheduling, I haven’t had the time to do my review for it… until now. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, the film is a little too preachy and unbalanced in its narrative execution and character development. The religious message is plainly there, but it takes too many detours and fails to focus on certain aspects, which weighs down the feature’s presentation.
✌ TELEVISION SHOW AND HISTORY ✌
A television show (often simply TV show) is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. TV shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings.
A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure. A television series is usually released in episodes that follow a narrative, and are usually divided into seasons (US and Canada) or series (UK), yearly or semiannual sets of new episodes. A show with a limited number of episodes may be called a miniseries, serial, or limited series. A one-time show may be called a “special”. A television film (“made-for-TV movie” or “television movie”) is a film that is initially broadcast on television rather than released in theaters or direct-to-video.
Television shows may be viewed as they are broadcast in real time (live), recorded on home video or a digital video recorder for later viewing, or viewed on demand via a set-top box or streamed over the internet.
The first television shows were experimental, sporadic broadcasts viewable only within a very short range from the broadcast tower, starting in the late 1920s. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff’s famous introduction at the 1939 New York World’s Fair in the US spurred growth in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948 the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr Television” and demonstrating that the medium was a stable, modern form of entertainment that could attract advertisers. The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman’s speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets.
✌ FINAL THOUGHTS ✌
The power of faith, love, and affinity take center stage in Jeremy Camp’s life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pinpointing his early life and his relationship with Melissa Henning as they battle hardships and sustain their enduring love for one another through difficulty. While the movie’s intent and thematic message of a person’s faith through trouble is indeed palpable, as are the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, too preachy / cheesy dialogue moments, overused religious overtones, and mismanagement of many of its secondary / supporting characters. If you ask me, this movie was somewhere between okay and “meh”. It was definitely a Christian faith-based movie endeavor (from start to finish) and definitely had its moments, nonetheless it failed to resonate with me, struggling to locate a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best, as some will enjoy it (nothing wrong with that), while others will not and dismiss it altogether. Whatever your stance on religious faith-based flicks, I Still Believe stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can be problematic when translated into a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature.
FIND US:
✔️ https://cutt.ly/RnXpUrx
✔️ Instagram: https://instagram.com
✔️ Twitter: https://twitter.com
✔️ Facebook: https://www.facebook.com | https://medium.com/@elite-s04e01-s-4-episode-1/elite-series-4-episode-1-full-episode-on-netflixs-b7972b6d8285 | ['Elite -', 'Series Episode', 'Full Episode'] | 2021-06-17 07:35:37.428000+00:00 | ['Politics', 'Technology', 'Covid 19'] |
1,592 | Slow and steady won’t win the race in PT | The physical therapy profession traditionally isn’t very progressive.
We are often reactionary and slow to adopt new technology.
Not only does this put us behind the curve, but it allows other, more progressive providers the chance to provide care to our prospective patients.
Instead of being afraid of the future and digging our heels into the old model, my challenge for us is to figure out how we can best use the technology available to reach new populations.
We need to quit focusing so much on the short term busyness of modern physical therapy that we miss out on future opportunities.
And when it comes to technology, it is not a matter of “if” but “when” you will need to adopt new systems that fit the demands and expectations of patients who now are paying more out of pocket and arranging their own care than ever before.
Help has arrived
I (and many others) believe telehealth is the next frontier of medicine.
The old model of waiting in doctor’s offices to manage basic care issues and chronic conditions is a huge burden for patients.
As stated, patients are expecting more from their providers.
Resources such as telehealth can help.
To that end, there is promising early research that suggests telehealth improves the patient experience and satisfaction.
As another example, the armed forces and the VA are already successfully using this approach to provide better care.
Telehealth Primer: Synchronous vs Asynchronous
Telehealth can be synchronous (provider and patient interact in real time on a live feed) or asynchronous (aka “store and forward” where patients and providers access data such as videos, reports, and messages whenever it works best for them).
Synchronous telehealth helps to reach populations that either wouldn’t have access to healthcare (rural populations) or for those that have difficulty getting into a traditional medical office (homebound patients).
The main limitations to this approach is that both patient and provider have to be available at the same time and have to have the system working properly.
Streaming services can have technical glitches and user errors can throw a wrench into your meeting times.
The providers I know utilizing this option are using services like Skype, which are subject to security and connection issues at times.
Can you imagine the frustration of your connection dropping right in the middle of your doctor’s visit?
Asynchronous telehealth cuts down these barriers and allows patients to get the healthcare they need at a time that works best for them.
Looking back on my career in outpatient and home health settings, I can identify many times where this option would have made a huge difference. (I recently wrote about one particular scenario that will likely ring true for you.)
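To make the "store and forward" idea above concrete, here is a minimal sketch of an asynchronous visit queue: the patient submits items whenever convenient, and the provider reviews them on their own schedule. The class and method names are invented for illustration and do not come from any real telehealth product.

```python
import time
from collections import deque

# Minimal store-and-forward sketch: patient submissions are persisted in a
# queue, and the provider drains them whenever it suits their schedule.
# No live connection is needed, unlike synchronous (real-time) telehealth.

class AsyncVisitQueue:
    def __init__(self):
        self._items = deque()

    def submit(self, patient, kind, payload):
        """Patient side: store a message/video/report for later review."""
        self._items.append({
            "patient": patient,
            "kind": kind,          # e.g. "video", "message", "report"
            "payload": payload,
            "submitted_at": time.time(),
        })

    def review_next(self):
        """Provider side: pull the oldest pending item, or None."""
        return self._items.popleft() if self._items else None

q = AsyncVisitQueue()
q.submit("pat-17", "video", "knee_rom_clip.mp4")
q.submit("pat-17", "message", "Pain down to 3/10 after exercises.")
item = q.review_next()
print(item["kind"])  # video
```

A production system would persist the queue in a database and encrypt payloads to meet privacy requirements such as HIPAA.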
Don’t get left behind
I encounter PT business owners who say things like, “That sounds great, but I just don’t have the time to look into that stuff.”
If that sounds like something you’d say, here’s my response: You can’t afford not to look into this.
Those that are slow to adapt get left behind. If you want to be a successful provider in the new digital age, you need to spend the time right now, not in a year or two.
By Sean Hagey (Twitter: @seanhagey)
Try In Hand Health for Free! Fast Sign Up! | https://medium.com/the-in-hand-health-collection/slow-and-steady-wont-win-the-race-in-pt-8d865f9efa75 | ['In Hand Health'] | 2017-01-25 21:54:25.409000+00:00 | ['Healthcare Innovations', 'Healthcare Technology', 'Digital Health', 'Physical Therapy', 'Telehealth'] |
1,593 | The bionic supply chain is already a reality | The bionic supply chain is already a reality
Supply chains have long been linear models linking production, distribution, and consumption. Initially supported by fixed processes and configurations, these supply chains were often essentially internal activities. For 20 years, evolving technologies have opened up supply chains and have made them more dynamic because they have made the integration of internal and external actors possible. Nevertheless, the planning, ordering, and delivery cycle has still remained sequential, and errors made upstream have still been subsequently reflected in issues of breaks, overstocks, and delays. These traditional, standard models are now completely out of date because of the volatile, hyperdynamic, and totally personalized nature of consumer demand.
Anywhere, at any time
New buying behaviour that looks for products and services “anywhere, at any time” is adding a new dimension to this disruption of the supply chain, especially in the retail and consumer goods sector. The originality and uniqueness of a customer demand has given way to a demand that has diversified and multiplied along with the diversification and multiplication of marketing channels.
The issues facing most players in the supply chain, whatever their industry, are essentially the following:
- the permanent increase in e-commerce-based-distribution models;
- an acceleration of demand due this increase in e-commerce; and
- a demand that is still driven by the principle of “same-day delivery”.
The bionic supply chain
The agility needed by a company to adapt to these new issues and their associated challenges — namely, the need for total flexibility — requires the use of all forms of resources available to it: financial capital, human capital, and natural resources, but it also requires new forms of resources — which is what organizations such as Amazon, Alibaba, and Apple are telling us.
These new resources consist of behavioural data on clients (behavioural capital), proprietary knowledge and algorithms (cognitive capital), a network consisting of partners, extended clients, and a social media presence (network capital). The challenge is to digitize the supply chain and make it “bionic” for all of its stakeholders: suppliers, operators, distributors, and customers.
Under the impulse of new purchasing behaviours, some companies, such as those involved in producing films, music, books, and magazines, have largely been able to completely transform their operations and their supply chains. They managed to make these transformations by integrating new distribution channels, dematerializing products, completely changing their catalogue structures, and transitioning from a sales model to a service model by rental and then to a subscription model. Finally, they have a distribution model that is boosted multiple times by their social media (i.e. social network) presence.
Redesigned adaptability
As the supply chain has diversified and been adapted to customers, so too consumption data has multiplied and been refined. Such data can now be obtained faster. These companies were able to first adapt their distribution model to a more refined consumer profile, and then develop with their suppliers dynamic product catalogues that meet the most recent expectations of their customers.
Due to their particular socio-economic contexts, these industries were able, before others, to find solutions to the problem that is the demise of the traditional, standard supply chain. But other distribution, transportation, and consumer product industries are also facing new customer expectations. The convergence of many new technologies (Internet of Things, blockchain, artificial intelligence, etc.) makes these necessary transformations possible today. The challenge is not to add a digital layer to a traditional model, but to rethink the whole model itself by integrating the digital into it.
New types of capital
With these challenges to supply chains come new forms of capital available to businesses that can provide innovative solutions and that can result in successful or future transformations:
● In anticipation of changing demand that generates overstocks and stockouts, behavioural capital provides companies with customer data collected through CRM, loyalty programs, discussion groups, and so on. Once modelled, this data can predict future variations (peaks, falls, changes in demand), inform the changes that need to be made (reallocation of inventory, reduction of production), and make the supply chain a great deal more responsive.
● Under the pressures generated by multichannel and accelerated deliveries, distribution networks have become increasingly complex, increasing the number of warehouses and transport flows, the management of which is now becoming quite difficult. Cognitive capital accompanies this diversification by digitizing operations and thereby creating digital twins (compilation of data models, algorithms, and knowledge of operations). The digital representation of assets and processes can help businesses better understand their operations, predict their effectiveness, and optimize their performance.
● The problem of growth that is limited by the finite circulation of a product catalogue among existing or known customers can be solved by network capital, which can project the company beyond its traditional borders (geographical, cultural, product function). New markets, new populations, new customers, new uses of the same product — all of these boost growth. Blockchain technology now becomes crucial to allowing the company’s information to circulate freely.
The challenge of the “bionic” supply chain can then be simplified into two main questions: How do you operate these new forms of digital capital with the more traditional human forms of capital? and How do you integrate people and machines and thus benefit from greater agility, improved responsiveness, better transparency, and simplified decision-making in order to boost growth and attract new talent?
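As a toy illustration of how modelled behavioural data can flag demand shifts early enough to react, here is a simple exponential-smoothing sketch. The demand figures and the smoothing factor are made-up assumptions.

```python
# Simple exponential smoothing over daily demand: when the smoothed level
# drifts away from its baseline, planning reacts (reallocate stock,
# adjust production). All figures below are illustrative.

def smooth(series, alpha=0.3):
    """Return the exponentially smoothed level after the last observation."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

baseline = smooth([100, 102, 98, 101, 99])   # stable demand
spike = smooth([100, 102, 98, 140, 155])     # demand taking off
print(round(baseline), round(spike))
```

A smoothed level drifting well above its baseline is exactly the kind of signal that would trigger an inventory reallocation or a production change before stockouts appear.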
Building a new ecosystem
The “bionic” supply chain is not the digitization of existing functions, but the creation of a new ecosystem. This new ecosystem requires new talents and a desire on the part of the company for transformation. It is based on the principles of integration, visibility, and reaction time:
● Integration of supplies, suppliers, upstream and downstream logistics, production, warehousing, and the customer, all controlled from a central point, which can possibly be virtual. By integrating and sharing data, planning can become hyper-responsive: raw material lead times are updated in real time by vendors and are integrated into production planning. The expected arrival time is announced and respected, which allows for a reduction of stock, a faster production cycle, and a reduction in lot sizes. The customer’s order is ready faster and can be handled in advance for next-day delivery.
● Visibility of the information collected by integration and sharing between all actors, which allows for collaboration between the strategic, tactical, and operational horizons. The very concept of the silo, which stiffens supply chains by making each link independent, disappears. The information that is visible and shared at each stage of the macro-process raises awareness of client-supplier concepts.
● Integration and visibility allow for an acceleration in the reaction time of each actor in the chain. It is the agility of the entire supply chain that is eventually multiplied. This reaction rate is further accelerated by the following functional transformations:
1. Purchasing and Procurement 4.0 automates transactions between applicants and vendors through shared platforms and catalogues;
2. Robotic warehouses offer unmatched service rates, minimizing inventories; and
3. Maintenance strategies, aligned with the principles of RCM (Reliability-Centered Maintenance) through the generalization of sensors, make full use of 3D printing and allow for machine service rates of nearly 99%.
The availability, sharing, and fast processing of data does not simply describe or even predict the supply chain, it also determines how it is managed. | https://medium.com/pwc-canada/the-bionic-supply-chain-is-already-a-reality-ed595cbda71e | ['Pwc Canada'] | 2018-10-30 20:38:26.311000+00:00 | ['Technology', 'Consulting', 'Business', 'Supply Chain', 'Pwc'] |
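To ground the claim that shared, real-time lead times translate directly into stock reductions, here is a simplified sketch using the textbook reorder-point formula. The demand and lead-time numbers are illustrative assumptions.

```python
import math

# Standard reorder-point calculation: when suppliers share lead times in
# real time (as in the integrated chain described above), the reorder
# point and safety stock can be recomputed on every update instead of
# once per planning cycle. All numbers below are illustrative.

def reorder_point(daily_demand, lead_time_days, demand_std, z=1.65):
    """Units on hand at which to reorder (z=1.65 ~ 95% service level)."""
    safety_stock = z * demand_std * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

# A supplier announces a shorter lead time: the reorder point drops,
# and the gap between the two values is stock that is no longer needed.
before = reorder_point(daily_demand=100, lead_time_days=10, demand_std=20)
after = reorder_point(daily_demand=100, lead_time_days=6, demand_std=20)
print(round(before), round(after))
```

The shorter the update cycle, the sooner this freed-up stock can actually be released, which is the practical payoff of integration and visibility.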
1,594 | Natural Language Processing and Naval: The Art of the Podcast | Podcasts are a harmonic creation.
They are fascinating tools, connecting us to the speakers in a very personal way, almost like we are a part of the conversation too. I laugh out loud at some things, or nod along, reacting emphatically, despite being in a different time and place than both the interviewer and interviewee.
Podcasts are tools of connection. Almost everyone either wants to start a podcast or knows someone who has started a podcast (for better or worse).
Podcast creation is the perfect storm of an open consumer base, an ever-evolving Internet, and accessible recording software and hardware. The barrier to entry is low. In the most basic approach, all you need to have is a cellphone and Internet access.
We also like to hear from online ‘celebrities’ or people that we greatly admire, and use the content for learning and entertainment. It’s easy to go onto Spotify or Apple to get access to a wide variety of podcasts, on an unfathomably wide variety of topics. | https://medium.com/datadriveninvestor/natural-language-processing-and-naval-the-art-of-the-podcast-b7ff7e201b31 | ['Kyla Scanlon'] | 2020-09-26 15:03:19.136000+00:00 | ['Naval Ravikant', 'Data Science', 'Technology', 'Growth', 'Economics'] |
1,595 | How To Create a Social Media App and How much does it costs? | Saying that most people spend their favourite time on social media platforms would not be an exaggeration, especially in today’s world.
Over the time social media applications like Facebook, Instagram, Messenger, Twitter and more have become an integral part of people’s lives and daily routines. People have become so addicted to it, that the first thing they are dying to do after waking up is to check their social media feeds.
With billions of registered accounts and trillions of annual active users, social media apps have emerged as the bright opportunity for businesses to stay connected with their targeted audience.
Today, where the covid-19 pandemic has pushed businesses to doom and people are in isolation, traditional old-fashioned yet in-person communication has become almost impossible. And that’s where social networks and social media applications outshined as the most potential platform for emerging entrepreneurs to generate better business opportunities.
If you also belong to one of them, then you are at the right place. Here we are going to answer all of your burning questions and walk you through every step you need to take to make it happen.
Here’s what we’ll cover:
Market Statistics: The Growing Social Media App Landscape
Why Should Businesses Invest in Social Media Apps?
What Type of Social Media App You Should Develop and Its Cost?
Tech Stacks That You Can Use To Develop A Social Media App
Current Trends That You Can Integrate Into Your Social Media App
Must-Have Features To Consider When You Create a Social Media App
Step-By-Step Guide To Create a Social Media App
How Much Does It Cost To Create a Social Media App?
How To Make Money From a Social Media App?
So before you get straight into the process of hiring the best mobile app development company, it is worth digging into each point and understanding these parameters closely…
Market Statistics: The Growing Social Media App Landscape
Developing an app is like starting another business: it requires the same level of effort and budget. So before building something groundbreaking, it is worth evaluating the market and analyzing the leaders.
Here are a few statistics reflecting the growing landscape of social media apps:
According to studies, the landscape of social networks and social media applications is swiftly expanding: social media users grew by 10% over the past year, taking the global total to 3.96 billion in the first quarter of 2020.
On average, users are spending 2 hours and 33 minutes socializing online mostly on Facebook, YouTube, Messenger, Whatsapp, Instagram, Twitter or Snapchat in 2020.
According to an eMarketer survey report, Facebook Messenger is expected to add 11.8 million US users over the next four years, reaching 138.1 million users in the US alone by 2022.
In a nutshell, it is fair to say that, for the first time, more than half of the world’s population now uses social media, with 99% of social media users accessing these platforms via smartphones at some point.
And this provides a huge opportunity to grow your community and monetize your targeted audience with your own unique social media application. It’s true that social media apps are a great platform for communication and staying connected. But how can they benefit your business?
Why Should Businesses Invest in Social Media Apps?
Initially, social networking services started with the aim of creating a platform that helps users find each other and communicate online. But today, social media apps are no longer just a medium of communication between friends and relatives. Let’s understand the relevance of social media apps to new-era businesses.
Here are the few benefits that emerging bootstrappers can leverage with social media app development:
Easier To Reach Your Targeted Audience!
A custom app enables you to reach your targeted audience precisely: you can apply multiple search filters, such as location, gender, age, category and interest, while posting any content on the app.
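To make the idea concrete, the targeting described above boils down to filtering user profiles by a handful of attributes. Here is a minimal JavaScript sketch; the field names and sample users are invented for illustration and don’t reflect any real platform’s API.

```javascript
// Sketch of audience targeting: filter user profiles by the same kinds of
// criteria a post composer might expose (location, age range, interest).
// All field names and data below are hypothetical.
function matchAudience(users, { location, minAge, maxAge, interest }) {
  return users.filter((u) =>
    (location === undefined || u.location === location) &&
    (minAge === undefined || u.age >= minAge) &&
    (maxAge === undefined || u.age <= maxAge) &&
    (interest === undefined || u.interests.includes(interest))
  );
}

const users = [
  { name: "Ana", location: "Vienna", age: 24, interests: ["tech", "music"] },
  { name: "Ben", location: "Graz", age: 31, interests: ["sports"] },
  { name: "Cem", location: "Vienna", age: 35, interests: ["tech"] },
];

// Target tech-interested users in Vienna aged 18-30.
const reached = matchAudience(users, {
  location: "Vienna", minAge: 18, maxAge: 30, interest: "tech",
});
```

In a real app this filtering would run server-side over the user database, but the logic is the same.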
Build a Direct and Deeper Relationship With Community!
Customised social media apps can help in creating direct yet deeper relationships and interactions with users. Unlike traditional marketing campaigns, app owners can directly interact with the community and make alterations to their services or products as per users’ needs.
Get Access To Full Data And Keep Users Engaged With Your Content!
Get a deeper understanding of your community: existing social media platforms only give access to a fraction of your community’s data. But when you own the app, you have full access to that data and can see which interests and social activities people engage with most, which is a great lead for your business.
Bring Better Monetization Opportunities!
Social media apps can make better profits by providing the opportunity to build digital marketplace options right inside your app.
Say Goodbye to Clever Algorithms!
When promoting services on your own app, you don’t need to chase tricky algorithms or fight for news feed space. It’s simply you and your audience. All you need is to hire an app developer to create a full-featured app. When you promote on existing social media platforms, however, there are thousands of people, creators and businesses to compete with, and algorithms decide which content to show and to whom.
Stay Connected With the Users 24/7!
Social media app development builds a channel for your business to promote a certain product, brand or service and stay in touch with users 24/7.
Create a Private Community of Users!
It’s a unique yet leading opportunity to get brand enthusiasts together at your app and form a circle with like-minded people.
In a nutshell: by creating your own social network, you free yourself from all such hassles and capture users’ attention for your products and services. By hiring a software developer you can easily create a social media app that adds an edge to your business. But the central question is: what type of app should you build for your business?
Let’s answer this question in the next section!
What Type of Social Media App Should You Develop, and What Does It Cost?
Before you head straight into the app development process, let’s first understand the major categories that social media apps are divided into. This will help you decide which app type will better suit your business goals and bring you closer to success.
Here are the common types of social media app that you need to know about:
Media Sharing Networks: This is a type of app or platform that allows users to share all types of media files, including photos, videos, GIF files and more. Perfect examples are Vimeo, YouTube, Snapchat and Instagram. The average cost of development can be $15,000 to $25,000+, depending on the features and functionalities of the app.
Social Network Apps: Applications like Facebook, Twitter or LinkedIn are the perfect examples of personal and professional social networking apps. Facebook’s millions of monthly active users strongly influence businesses to hire mobile app developers to build this type of application. The estimated cost of development starts from $20k+ and can climb much higher.
Networks for Consumer Reviews: These applications let customers check verified reviews and ratings of businesses they’ve had experience with. Yelp is one of the most prominent examples in this category. The average price of creating this type of app starts from $15,000 to $18,000+.
Community and Discussion Forums: This can be a great platform where like-minded people come together to ask questions, receive answers, and share news, ideas, insights and experiences. Quora and Reddit are the titans in this category of social media apps. The starting price to build this type of app ranges from $15,000 to $20,000+.
Blogging and Publishing Platforms: Applications like Medium are a perfect example of this type of social media app, allowing users to create blogs and publish content about technology, services, products or anything else every day. The average budget for this type of app starts from $12,000 to $15,000+.
As you can see, there are many types of social media applications to choose from, and various applications lead in each domain. So no matter which app category you choose, it is important to hire the best app development company, one that can translate your app idea into a robust, scalable and flexible digital solution within your budget.
Tech Stacks That You Can Choose For Cross-Platform Social Media App Development (iOS/Android)
If you are interested in creating a social media application that runs on multiple platforms but behaves like a native app, here are a few tech stacks, including Flutter, React Native and more, that you can use to build a cross-platform application.
Since we are aiming to build a social media app with React Native, here are some tools that you can use to build a full-featured social media app.
React Native is one of the top framework choices, and most IT companies are leaning towards it for mobile app development solutions. RN not only fits the budget easily but also empowers developers with a choice of features that make the development process far faster and cut a significant amount off the budget.
Now the question is, what kind of modern trends can you integrate into the social media application while developing with React Native?
Current Trends That You Can Integrate Into Your Social Media App
With each passing year, social media platforms bring the latest trends to keep their users engaged and addicted to their applications. For some of us this topic may seem threadbare, but it is not: developing an application with tired, boring functionality will only bring failure.
So here are the few trends to watch out for before you hire a cross-platform app developer for building an outstanding social media application:
Artificial Intelligence and Chatbots
AI and chatbots are playing a major role when it comes to delivering excellent customer support on social media apps. Businesses running on social media can use AI-powered chatbots to answer their customers’ queries in no time. AI-integrated chatbots can efficiently conduct conversations with consumers and, by understanding each query, provide a solution right away.
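As a rough illustration of where a chatbot starts (real products use far more sophisticated NLP services), here is a minimal keyword-based responder in JavaScript. The intents and canned replies are entirely made up for this sketch.

```javascript
// Minimal rule-based chatbot sketch: match a customer message against
// keyword lists and return a canned reply, falling back to a human agent.
// Production AI chatbots use NLP services; intents here are invented.
const intents = [
  { keywords: ["price", "cost"], reply: "Our plans start at $9.99/month." },
  { keywords: ["refund", "cancel"], reply: "You can cancel any time in Settings." },
];

function botReply(message) {
  const text = message.toLowerCase();
  const hit = intents.find((i) => i.keywords.some((k) => text.includes(k)));
  return hit ? hit.reply : "Let me connect you with a human agent.";
}
```

Even this crude matching shows the shape of the feature: intercept the common questions instantly, and hand everything else off to support staff.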
AR Powered Face Filters
Like Snapchat and Instagram, you can also be a trendsetter in the field of social media apps. People still enjoy the funny filters and visit the app frequently to check new updates every day.
Live Video Streaming
This could be the most engaging and attractive feature of your social media app as users love watching live videos or reading comments below on social media.
Uploading Video Content
In comparison to images, video content is the most popular way to attract users’ attention and keep them on your post for longer. But since everyone leads a fast-paced life, keep it simple, short and engaging to make your users come back to your posts regularly.
24 Hours Story Post
This trend allows you to post a story that stays live on your feed for 24 hours, like on Instagram, after which it disappears. People widely use this feature to keep their friends updated on their everyday activities.
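The 24-hour rule itself is simple to express in code. Here is a minimal JavaScript sketch, assuming stories carry a hypothetical `postedAt` timestamp in milliseconds:

```javascript
// A 24-hour story is shown only while less than 24 hours have passed
// since it was posted; after that it drops out of the feed.
const STORY_TTL_MS = 24 * 60 * 60 * 1000;

function isStoryLive(postedAt, now = Date.now()) {
  return now - postedAt < STORY_TTL_MS;
}

// Filter a list of story objects down to the ones still visible.
function visibleStories(stories, now = Date.now()) {
  return stories.filter((s) => isStoryLive(s.postedAt, now));
}
```

A real backend would also delete or archive expired stories, but the visibility check is the core of the feature.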
Sharing and Publishing Content On Social Media
More and more people want to share their content with as many users as possible, so it’s good to provide your users with a platform where they can publish content and expand their audience.
Once you have decided upon your niche and learned which trends to follow, it’s time to hire a cross-platform app developer and get a rough estimate of the time and cost your project will take.
By now, you know the trends you can integrate to make your social media app work smoothly. But what basic features should your app have if you are going with an MVP?
So here, we have jotted down the list of must-have features along with the estimated development cost for iOS and Android.
Must-Have Features To Consider When You Create a Social Media App
Building a social media app is no simple task, so you may be wondering which features you should consider for an internet-based network.
So here’s an infographic list of features along with the estimated development cost and time that it will take to develop. To make it develop, all you need is to hire an app developer experienced in customizing social media apps with detailed features.
Note: These are rough estimates based on market survey reports. The real figures can vary according to business needs and the type of application you are interested in building.
How To Build a Social Media App in 2021?
Building a social media app is a complex task that requires professional coding skills, so the coding job is best left to an expert app development company while you focus on the other aspects of building an app for iOS and Android. To make reading this section worth your while, we keep things simple and help you set everything up before the developers get their hands on the project.
Now is the time to find out all the ins and outs of making a social app for Android and iOS:
Phase 1: General Information on Social Media App Development For Android/iOS
Architecture: MVVM (Model-View-ViewModel) architecture pattern.

Programming Language: Earlier, only Java was used to develop native Android apps, but developers now use Kotlin for new projects. For native iOS apps, you can use Swift or Objective-C. To target the web, Android and iOS with the same programming language, you can choose React Native.

Framework: For Android, Google Play Services is mainly used as a framework, as it provides access to a wide choice of Google services including in-app purchases, geolocation, cloud messages, Firebase and more. For iOS, the Dip framework is used as a service locator.

Library: RxJava2 and RxSwift for Android and iOS respectively.
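To illustrate the MVVM idea in a framework-neutral way, here is a tiny JavaScript sketch: the view model owns the state and notifies subscribed views when it changes, so views stay free of business logic. The class and field names are invented for this example.

```javascript
// Framework-neutral MVVM sketch: the view model mutates the model and
// pushes updates to subscribed views; the view only renders.
class LikeButtonViewModel {
  constructor(model) {
    this.model = model;          // e.g. { likes: 0 }
    this.listeners = [];
  }
  subscribe(fn) { this.listeners.push(fn); }
  like() {
    this.model.likes += 1;
    this.listeners.forEach((fn) => fn(this.model.likes));
  }
}

const vm = new LikeButtonViewModel({ likes: 0 });
let rendered = "";
vm.subscribe((likes) => { rendered = "likes: " + likes; }); // a "view"
vm.like();
vm.like();
```

In Android or iOS the same split appears as ViewModel classes bound to Activities/Fragments or SwiftUI views, but the flow of data is the same.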
Phase 2: Technology Stacks to Be Used on UI layer of the App
RecyclerView: a native Android component used to show scrollable content to the user in the form of a list and to allow interaction with items on the list. In iOS apps, DTTableViewManager/DTCollectionViewManager are used for building type-safe table views and collection views.

Fragments: native Android containers that hold other views and widgets and have their own lifecycle. In iOS, LoadableViews are used for creating reusable view components.
Phase 3: Technical Implementations are Required For the Network Layer
For Android Native Apps
Retrofit for network requests
GSON for JSON parsing
Glide for loading images and caching
For iOS Native Apps:
TRON/Alamofire for building network abstraction.
Codable/SwiftyJSON for parsing JSON responses.
AlamofireImage for loading and caching images from the network.
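The parsing libraries above (GSON, Codable/SwiftyJSON) all do the same job: turn raw JSON from the network into typed objects. Here is a minimal JavaScript sketch of that mapping layer, with an invented endpoint and invented field names; it is a shape illustration, not any real API.

```javascript
// Sketch of a tiny network layer: one function maps raw JSON to a plain
// object (the role GSON/Codable play), another builds the request.
// The endpoint and response fields below are hypothetical.
function parseUser(json) {
  const raw = JSON.parse(json);
  return {
    id: raw.id,
    name: raw.display_name,
    followers: raw.followers ?? 0, // default when the field is missing
  };
}

async function fetchUser(baseUrl, id) {
  const res = await fetch(`${baseUrl}/users/${id}`); // hypothetical endpoint
  return parseUser(await res.text());
}

const user = parseUser('{"id": 7, "display_name": "sam", "followers": 120}');
```

Keeping the parse step separate from the request step makes the mapping testable without hitting the network, which is exactly why the native libraries are split the same way.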
How Much Does It Cost To Create a Social Media App?
Now when you understand the important technical things that are required for social media app development, let’s speak about the cost. No matter whether you are a startup or an entrepreneur, the financial side always matters to you!
So, we will provide you with the calculation of three app versions to help you decide which app version will best suit your business needs and budget:
1: Basic Version: the best option for a first app version, launched with basic functions and a core feature set.

2: Full Product: advanced features and additional functionality that make your app fit the growing needs of the market.

3: Modern Large App: a complex structure and feature set that adds a competitive edge to your product.
Note: Before you hire a mobile app development company, it is vital to understand that these price estimates are based on market surveys. They can differ from company to company and developer to developer.
But the main question still remains unanswered: how will you make money from the app?
How To Monetize Your Social Media App in 2021?
If you are wondering how you can monetize your social media app and make a good profit on your investment, here are a few strategies you can try:
Paid Subscriptions: As long as your content delivers real value to the community, members will be happy to pay for a subscription. You can offer a free tier with good content and a paid tier for in-depth, high-value content. In addition, Apple and Google payment systems make subscribing easier and simpler.

Physical Purchases: From innovative face filters to cool merchandise with snappy slogans, you can sell stock directly to users, making the platform a great way to earn money.

Sponsored Content: If your app has a thriving community, you can offer other businesses the opportunity to reach its members through a paid sponsorship program.

Events and Activities: Promote events and online experiences through your community platform with paid ticketing and start earning money.
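As a small illustration of the paid-subscription model, gating premium content can be as simple as a tier check on each post. A JavaScript sketch with invented tier names and sample posts:

```javascript
// Free-vs-paid content gating sketch: a post marked premium is only
// readable by subscribers. Tier names and the rule are invented here.
function canRead(post, user) {
  return !post.premium || user.tier === "paid";
}

const posts = [
  { title: "Weekly digest", premium: false },
  { title: "Deep-dive report", premium: true },
];

const freeUser = { tier: "free" };
const readable = posts.filter((p) => canRead(p, freeUser)).map((p) => p.title);
```

In practice the tier would come from the Apple/Google in-app purchase receipt rather than a plain field, but the gating logic stays this simple.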
Let’s sum up this blog!
Conclusion
So here we are, ending this blog! If you are still thinking that it will take a hefty budget and ages to complete the project, we recommend hiring a mobile app development company that always comes up with the best alternatives. We have covered everything you need to know about building a social media app from scratch: the technicalities, features, functionalities, costs, monetization strategies and more, so you can get started on your digital connection with no strings attached.
If you have any doubts, drop a query in the comment box! Experts will answer with their advice.
More content at plainenglish.io
From CCO to tech intern: My week of work experience

by Jon Slade
In recent years the teams I’m responsible for have worked ever closer with colleagues in Product and Technology. Simply put, we can’t do our jobs in the Commercial teams without strong working relationships and understanding of how the professionals elsewhere in the company do their jobs.
A recent trip to our excellent new development centre in Sofia, Bulgaria reminded me that I still had a lot to learn about Product and Technology if we were going to work in the most collaborative way necessary for success. So with that thought in mind I asked Cait O’Riordan, the FT’s CPIO, if she could put up with my ignorance for a week via a short ‘internship’. I was delighted when Cait agreed, and I spent a week in early July shadowing teams across Cait’s department in London.
It was a terrific learning experience for me, and thanks to the very warm welcome (and patience!) that the teams gave me I feel I came away with just a little bit more insight into the great professionalism going on at the FT and how we might work better together.
At times it felt like I had joined a new company — meeting people I didn’t know worked at the FT, visiting new places at Bracken House (our new London offices), and hearing discussions and plans that are vital to how the FT functions, but which for me were completely (or at least, largely) new.
What follows are some thoughts on the week.
The definition of done is very different between Technology and Commercial
Commercial teams work to regular calendar-driven business rhythms: daily run rates for ad bookings, weekly acquisition targets for subscriptions, monthly team profit goals, quarterly department forecasts and annual operating plans. ‘Done’ is largely determined by whether the financial objective is hit in the time given.
In technology, ‘done’ has a different definition: the code base is ever-expanding (and a desire to keep it in a manageable state is a real driver in thinking), cyber security is an endless vigil, Trello boards never empty, and an agile methodology means projects are completed when objectives are hit, not by looking at the calendar.
We would do well to better understand these differences in rhythm when planning projects and business targets.
Culture and ways of working
I thought Commercial was full on, but communication in P&T is relentless. Slack never, ever stops… and communication is literally 24x7 (unsurprising given our global business, I guess).
Documentation is also relentless, and so important. I was really struck how the rotating teams on OpsCops support emerging issues, about which they might have known nothing when they walked into the office that morning, by tuning into a problem and using the supporting documentation.
Relatedly, I noticed a culture of always iterating and improving together — a ‘hive mentality’ — something embodied by that ever-growing and improving documentation base.
And I noticed a real desire to share knowledge, via blog posts, stand-ups and wikis. I think we could learn a lot from this way of working in Commercial.
Product and Technology have been working to a system of OKRs in 2019 — Objectives and Key Results — as a way of tracking outcomes and progress. It seemed to me that this was a really good way of trying to get everyone rowing in the same direction. There’s work to be done in learning how OKRs are best used, but I definitely see the opportunity for teams working closely with P&T to also adopt the system (for example, the B2C teams) and making sure that the joined up forward momentum is not just in one department, but working across company-wide missions, like subscription growth. At the Board Subs Summit last week we discussed how we might roll the framework out more widely — more to follow.
The work that teams are doing on audio demonstrates there is huge potential in departments working closely together and experimenting rapidly.
Coming from a relative silo, what I saw in UXD, Customer Research and elsewhere was somewhere where it all centralised, with work being undertaken for pretty much every team.
And I could feel and see the value of the open space we have at Bracken House to help with collaboration.
I even joined the annual Product & Technology rounders… and our team won!
A different language
For my sins, I can now tell you what each of these means:
Rubberducking
Bikeshedding
Tab vs space (turns out you do need an opinion on this… I think I’m a tab kinda guy)
Microservices
Mobbing
Pull requests
Run books
If anyone in P&T wants a crash course on DSPs, SSPs, CpH and BUD I’ll be happy to oblige.
Alignment
The FT has a Tech Strategy and also a Product Strategy, and just in my world we have strategies for B2C marketing, customer services, Specialist, Circulation and Advertising.
Given the interdependencies between us all it’s more important now than ever before to ensure that these strategies complement one another (see: OKRs…).
There’s a real risk that they might not, and I do wonder if we have the right forums in place to make sure that they do.
One to work on.
Living with the past
There’s a lot of wrestling with decisions we made in the past and how they impact our ability to do what we want to do now.
Next FT, the ‘new’ FT.com launched in 2017, was a major piece of work for us because the previous publishing stack — Falcon — was a monolith, meaning we couldn’t make any rapid changes without threatening the entire piece. Next FT is based on microservices, meaning one small change doesn’t necessarily crash the remainder. That’s a huge step forward.
But we mustn’t repeat the mistakes of the past: there is a constant trade off between building technology to help us grow, versus just getting stuff completed, maintained and improving quality.
I guess in part it’s a factor of being a business with a history — we are living with technology that made sense in the context of the time we built it.
But a constant refrain in the week was ‘we could move faster if we focussed on fewer things’.
Getting the balance right on growth versus maintenance is going to be our key challenge going into 2020. Longer term thinking and planning around investments will help.
Understanding our customers
At the FT we pride ourselves on understanding both our internal and our external customers. We are right to do so.
Customer Research handles dozens of projects at any one time: hundreds of hours, thousands of customers telling us about our products each week. And there is a consistency of themes — ‘you can hear the picture’, as Caroline, a senior researcher, put it to me.
I really like the mantra in research of ‘Trying to find the best What for the best Who’. That seemed to sum up well the focus of the effort.
And I was really struck by the amount of preparation that goes into design workshops and design thinking. It paid off handsomely in a workshop I attended on B2B Newserve, with some solid thinking and conclusions emerging from the session. I promised, and managed, to keep my mouth shut and just observe…
On the internal product side, I attended a user testing session for Spark, our new CMS, and I was interested to see how the team are seeking to find a balance between keeping the tool light and usable and not being overloaded with all the various needs from each desk.
User testing and customer workshops are a real art form — how far do you guide, how far do you just observe?
Real-time O&R and failing-over…
Spending a morning with Operations and Reliability (O&R) underlined the very real-time nature of the Technology operation — dealing with issues immediately as they emerge.
The issues were all very different — security, failed tech, upgrades — and I was struck by the power of dashboards to help manage 1800 separate services.
But a heart-hammering highlight for me was failing-over FT.com from US servers to those in the EU. I’d like to thank Kev for calmly walking me through the procedure while I turned off our US servers — and I’m pretty confident our US audience didn’t notice a thing. The flatline on the chart below proves we did it…
Finally, and the future
Once again, a huge thank you to the teams for welcoming me into their domain. I’ve come away with a huge sense of pride in the organisation I work for and the professionals who work with us, and a definite sense of enlightenment. I spent a week being the most ignorant person in the room, and I do at least now have a little bit of knowledge.
But you know what they say about a little bit of knowledge… ;-)
I’m delighted that Cait O’Riordan wants to try the same process with the Commercial teams. I look forward to that, and thinking how we might expand this concept to a wider group. Our future success depends on us understanding one another’s business and a joined-up, collaborative way of working. I hope that the week I spent with Product and Technology was a step in the right direction. | https://medium.com/ft-product-technology/from-cco-to-tech-intern-my-week-of-work-experience-6dcb6adba5be | ['Ft Product'] | 2019-07-31 16:09:41.322000+00:00 | ['Technology', 'Learning', 'Chief Commercial Officer', 'Okr', 'Internships'] |
Silent Breach Establishes Security Operating Center (SOC) in Singapore

Singapore City, Singapore, June 11, 2019 -(PR)- Silent Breach today announced that it has established a Security Operating Center (SOC) at their Singapore headquarters in SUNTEC Towers.
“Silent Breach is very excited to offer 24/7/365 cybersecurity monitoring to our APAC clients,” said Marc Castejon, CEO of Silent Breach. “We believe that cybersecurity is a holistic commitment that requires around-the-clock vigilance, and this new SOC will enable us to continue to grow our activities in the region while maintaining industry-leading standards and services.”
The SOC was created to handle increased demand from the APAC market, in particular from Hong Kong and Singapore. Notably, this expansion coincides with Silent Breach’s move to expand operations in the managed cybersecurity services sector, including Continuous Monitoring, Pro-Active Threat Detection, Intrusion Response, Access Management, among others.
“With the high demands and costs of a rigorous in-house IT team, our customers need a cybersecurity solution which is both cost efficient and incredibly responsive,” said Jin Diong, Managing Director APAC. “For many, Silent Breach’s comprehensive suite of Managed Cybersecurity Services is the perfect fit. With our new Singapore-based SOC, our clients can maintain the highest levels of security and responsiveness at a fraction of the price.”
Further information about the Security Operating Center and its activities can be found at: https://silentbreach.com/Managed-security-services.php
When the unicorn went to Disneyland

Working as the only female in an IT-Startup and finding my way in a world of men. And computers. Many computers.
When I saw Emirs’ number on my phone a few months ago, I already knew what the call was going to be about, which is why I didn’t pick up immediately and took some deep breaths (after locking myself in the printer room in the amazing law firm I am also working at). “Do you remember when we asked you about working for timebite a few years ago? I am calling to officially ask you whether you want to work for timebite. Again, and with all seriousness.”
I replied and asked him if he and the team were fully aware of the fact that I do not have a marketing or a social media degree. He replied “I know, but I also know how you work. I have seen it. Most of us don’t have a degree in what we do either. Look where it got us!”
The start of something new
The guys originally wanted me to just reactivate the social media accounts for the student platform timebite.at. However, after a few hours, I already started writing the first emails to press agencies and public institutions that could be interested in what the boys do. I studied other timebite products, just in case the boys needed me to work on them as well. And here I am. Months later, sitting in an office full of pictures of the team that I hung up on the empty, egg-shell walls, while I think of one of the interviews that I got the timebite founders with one of Austria’s biggest newspapers “Die Presse” after only three weeks in my position. Successful campaigns, such as a new feature for our Coronahelp-app “Hilfma” in cooperation with the ministry of education or a podcast with one of our founders, followed shortly after.
When I got the call from “Die Presse”, I was completely hyped. I was on the phone with the interviewer for almost 20 minutes; the call was a “pre-interview”, where the interviewer thoroughly asked me about everything that I knew about the boys and the platform. After she hung up, I jumped around in my flat and screamed like a little child, and then I jumped onto the metro to give the boys the good news. It took me a while to get the lines “Please book the meeting room for an interview with “Die Presse” tomorrow” out. While I was the only one who seemed excited in the beginning, the boys finally started saying “this is really sick” to one another throughout the day. Deep down I knew and still know that this was just the beginning. The boys are full of fruitful ideas and can manage extraordinary amounts of work. I also felt like they might need someone who does not only throw catchy headlines on Instagram but reminds them about how far they have come.
“CPO” stands for Calmness, Peace and “Oh, this is sweet”
In the first meetings I really tried to follow the CPO's and CTO's reports. I am honestly not sure yet if my definitions of these abbreviations are correct. However, I have noticed that my weakness of not knowing what all of the technical stuff means is actually exactly what I needed in order to show our target group who the boys behind timebite are. In other words: this enlightenment of "unenlightenment" was what I needed to do my job.
I am a ray-of-sunshine person. Really annoying sometimes. I like colorful suits, dancing during study breaks and singing songs I do not know the lyrics to. I gave the boys in the office a snack bar as a Christmas present because I like seeing snacks in cute, fancy glasses and I cannot work with hangry people. There is hardly any situation that can actually throw me off track, or a day where I wake up and decide to rather stay in bed. Of all the days where I went in and out of the office, I remember one in particular: I woke up early to jump on my yoga mat and work on some kicks on my punching bag, but because of the lack of sleep I was getting (my roomie was up late), I was totally exhausted and tense. I turned my phone on and read that one of my closest friends had tested Covid-positive and had a very high fever that night. That was just the start of the bad news. I didn't really inhale much peace that day. It was raining buckets when I got out of the flat; I forgot my phone and ran back to get it. Then I was worried that I might be late and rushed to the office. OK. Inhale now.
Here is the funny thing: after only 2 minutes, my boss already knew that something was wrong. Maybe he heard me shouting at my best friend on my walk to the conference room that morning, but nevertheless, he knew. A few minutes later the CTO felt like something was wrong too, which is why he persistently tried to cheer me up. After some hours I eventually cracked. I don't like sharing problems with other people, especially not at my workplace. It makes me feel heavier most of the time. I know that people say sharing makes things better because you don't go through the troubles alone. But sometimes, I do disagree. The problems are still my problems at the end of the day. And now, another person is troubled with them as well.
But it did not really feel like that. I honestly just shared what had been going on because everyone was quite persistent about cheering me up that day. I did not feel better about sharing the whole thing, but I felt better because the boys valued my emotions and me in a way that simply lifted me. And it was no big deal for them at all; they closed their laptops and just sat there and listened. As they read this, they probably do not even remember that day, but it stuck with me.
I almost never leave the office with my mascara on because I burst out laughing all the time. One second the team is tensely discussing sales and what we will do during lockdown 5.0 (God forbid), while you can almost see synapses flying around; the next thing you know, we are sharing a hilarious story right after receiving a call from futurezone.at, a platform that recently nominated us as one of the top 3 apps in Austria in 2020.
Coffee for the crew, not the girl
Frankly speaking, this article has not been much about being a woman in an IT startup yet. So, here is the thing: I work with very competent women and men. But these guys took my own competence to a very different level. I had to learn completely new things, like graphic design, sound adjustment, film editing and creating ads. Most people think that marketing and social media are just about liking photos and making nice comments, but they are not. It is exhausting. Instead of identifying with yourself, you identify with the brand you represent. You put your head in the heads of the people you want on your platforms. I always recommend not judging jobs you have never done yourself. Every job comes with its own challenges.
What I have learned from the boys is that they don't differentiate between the value of my work and their own. While I have absolutely no idea how they deal with all the coding and the development of their products, they respect me as if I had been there from minute 0. Now take that: I have worked in politics, I have worked in law, and wherever you go, no matter what you do, you will meet people who think you are less competent because of your gender or the color of your skin. None of that matters here. They did not hire me because I am a woman. They hired me because I am me. I feel like I have never received more appreciation at any other workplace. Most importantly, I am not getting the appreciation because of my gender but because, for once, my gender and the color of my skin really do not matter to the people I am working with. Which leaves a lot of space for the actual work, development and encouragement.
Do I feel like the boys treat me differently than they would treat a man in my position? Nope. They treat me exactly the way I should be treated, measured by the time, energy and work I invest in our goals. Do they actually treat me differently? I have not even slightly noticed so.
None of them gets a coffee without asking whether someone else wants one too. None of them just leaves a room without holding the door open for someone else. None of them comments on my clothes or hair differently than when they compliment one another. We even have the same hoodies (I still need to figure out how to combine them with colorful suits, though). Believe it or not, I am sure that if I asked the boys whether I could join one of their weekend Playstation sessions, they would be happy to have me.
Of course, I (sadly) know that being this comfortable as a woman, and as a woman of color, at any workplace, not just in the male-dominated IT world, is rare.
Place of unicorns, not toxic masculinity
This is what working with 7 men feels like. It can be incredibly frustrating and stressful at times, but not because of my gender. At the end of the day, the stress is nothing compared to the joy that comes from working with these bright minds. I always feel like a unicorn, no matter where I go. I never really fit in with my energy, my ideas and my visions, which never bothered me too much. I am not a puzzle piece that needs to fit in somewhere. But this time, I actually feel surrounded by other unicorns, almost as if I am not alone with what I believe in and the investments I would be willing to give and take in order to get a few steps closer to achieving what we all believe in. I found my Disneyland. At the end of the day, timebite and the whole team are not here for the money or fancy startup headlines in newspapers. Timebite is a platform that unites students. A platform that helps students find some light in their studies and not feel alone with their troubles, even if sharing them does not simply dissolve study problems. With timebite and these amazing people, sharing is not caring; it is repairing what higher education institutions in Austria and Germany are not able to fix themselves.
1,599 | Nuxt.js — Plugins and Modules | Photo by Morning Brew on Unsplash
Nuxt.js is an app framework that’s based on Vue.js.
We can use it to create server-side rendered apps and static sites.
In this article, we’ll look at how to use plugins on client and server-side environments and create modules.
Client or Server-Side Plugins
We can configure plugins to be only available on client or server-side.
One way to do this is to use the .client.js suffix in the file name to create a client-side only plugin.
Likewise, we can use the .server.js suffix in the file name to create a server-side only plugin.
To do this, in nuxt.config.js , we can write:
export default {
plugins: [
'~/plugins/foo.client.js',
'~/plugins/bar.server.js',
'~/plugins/baz.js'
]
}
If there’s no suffix, then the plugin is available in all environments.
We can do the same thing with the object syntax.
For example, we can write:
export default {
plugins: [
{ src: '~/plugins/both-sides.js' },
{ src: '~/plugins/client-only.js', mode: 'client' },
{ src: '~/plugins/server-only.js', mode: 'server' }
]
}
The mode property can be set to 'client' to make the plugin available on the client-side.
To make a plugin available on server-side, we can set the mode to 'server' .
Inside a plugin that runs in both environments, we can check whether process.server is true before running server-only code.
Similarly, we can check whether process.static is true to run code only during static site generation.
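For instance, a single universal plugin might guard its environment-specific branches like this (a sketch; the log messages are just placeholders):

```javascript
// plugins/env-check.js -- hypothetical universal plugin that branches
// on the environment flags Nuxt exposes as properties of `process`.
export default function envCheckPlugin(context) {
  if (process.server) {
    // only runs during server-side rendering
    console.log('running on the server')
  } else if (process.client) {
    // only runs in the browser
    console.log('running in the browser')
  }
  if (process.static) {
    // true while the site is being generated with `nuxt generate`
    console.log('generating a static page')
  }
}
```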
Nuxt.js Modules
Nuxt.js comes with a few modules that we can use to extend Nuxt’s core functionality.
@nuxt/http is used to make HTTP requests.
@nuxt/content is used to write content and fetch Markdown, JSON, YAML, and CSV files through a MongoDB-like API.
@nuxtjs/axios is a module used for Axios integration to make HTTP requests.
@nuxtjs/pwa is used to create PWAs.
@nuxtjs/auth is used for adding authentication.
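Wiring one of these modules into a project is just a config change. The fragment below assumes @nuxtjs/axios has been installed and uses a made-up API host:

```javascript
// nuxt.config.js -- assumes `@nuxtjs/axios` is installed;
// the baseURL is a hypothetical example host
export default {
  modules: ['@nuxtjs/axios'],
  axios: {
    baseURL: 'https://api.example.com'
  }
}
```

Components can then make requests through the injected client, for example await this.$axios.$get('/users') .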
Write a Module
We can create our own modules.
To add one, we can create a file in the modules folder.
For example, we can create a modules/simple.js file and write:
export default function SimpleModule(moduleOptions) {
// ...
}
Then we can add the module into nuxt.config.js so that we can use it:
modules: [
['~/modules/simple', { token: '123' }]
],
The object in the 2nd entry is passed into the SimpleModule function as its moduleOptions argument.
Modules may be async.
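To sketch what a module body might do with those options (the merging with a nuxt.config.js key and the token check are illustrative assumptions, not something Nuxt requires), note that Nuxt binds this to the module container, so this.options is the resolved nuxt.config.js object:

```javascript
// modules/simple.js -- illustrative module body; the `simple` config key
// and the `token` requirement are made up for this example.
export default function simpleModule(moduleOptions) {
  // merge inline options with a matching `simple` key from nuxt.config.js
  const options = { ...(this.options.simple || {}), ...moduleOptions }
  if (!options.token) {
    throw new Error('simple module: a `token` option is required')
  }
  console.log(`simple module configured with token ${options.token}`)
  return options
}
```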
Build-only Modules
We can create build-only modules and put them in the buildModules array in nuxt.config.js .
For example, we can write:
modules/async.js
import fse from 'fs-extra'

export default async function asyncModule() {
  const pages = await fse.readJson('./pages.json')
  console.log(pages);
}
We added the fs-extra module to read files.
The function is async, so it returns a promise with the resolved value being what we return.
In nuxt.config.js , we add:
buildModules: [
'~/modules/async'
],
to add our module.
The module will be loaded when we run our dev server or at build time.
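A similar async module can be written with nothing but Node's built-in fs module; the file name and route shape below are hypothetical:

```javascript
// modules/load-routes.js -- hypothetical async build module using only
// Node's built-in fs; Nuxt awaits the returned promise before continuing.
import { promises as fs } from 'fs'

export default async function loadRoutes() {
  const raw = await fs.readFile('./pages.json', 'utf8')
  const routes = JSON.parse(raw)
  console.log(`loaded ${routes.length} route(s)`)
  return routes
}
```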
Conclusion
We can create modules and plugins that are available on the client or server-side with Nuxt. | https://medium.com/dev-genius/nuxt-js-plugins-and-modules-b0364dab5611 | ['John Au-Yeung'] | 2020-12-28 20:37:48.769000+00:00 | ['JavaScript', 'Software Development', 'Programming', 'Technology', 'Web Development'] |