Unnamed: 0 | title | text | url | authors | timestamp | tags |
|---|---|---|---|---|---|---|
1,200 | Remote work ‘say what?’ | I wasn’t ever too keen on the idea of employees working remotely. In fact, I was stubbornly convinced that a successful company required a total in-office presence.
For a long time, I was so convinced that remote work doesn’t work that I used to turn down reputable job candidates who wanted to work at Pushnami but were not willing to relocate to Austin to work inside the Pushnami office.
I was adamant about it to the point that we once did not hire a really senior developer we very much needed. He wanted to work for us, but another company offered to let him work remotely, so we didn’t get him. That was a big miss for our company.
Even before the pandemic hit, some employees expressed interest in working from home from time to time. But that just wasn’t the kind of culture I wanted for Pushnami. I used to tell people you’re either in the office or you’re not part of our team. I did not believe you could collaborate or innovate unless you were doing it in person.
So how did the fastest-growing company in Austin evolve from staunchly opposed to firmly supportive of remote work?
A record first quarter in 2021, as well as being named the fastest-growing company in Austin, reinforced that I was wrong.
But it wasn’t so easy getting comfortable with the concept.
Our first company meeting on Zoom was super weird. You can’t look at 50 people at once (although I definitely try to click through to see everyone), and it was a big adjustment. Before the city of Austin shut down due to COVID-19, we prepared for the possibility of forced remote work by implementing Wednesday work-from-home “drills.” In full transparency, I really didn’t believe this was going to be needed. Some employees were very convinced that it was coming, though, so we decided to practice to work out the kinks.
We were only able to get one trial Wednesday in before the city shut down and forced us to carry on business from the safety of our homes.
So what happened once Pushnami went fully remote?
We continued to smash monetary milestones, added new benefits to adapt to the remote culture, and kept hiring. Most of all, employees innovated, collaborated and got their work done — even though they were doing it from their home offices.
In times like these, you can either shake your fists in the air and scream “this is not right, I’m not going to do this, it’s my way or the highway,” or you can adapt and understand that this is the way the world is going to be now. Yes, it’s hard, but everything is hard. Even in the office, things are hard sometimes. You just have to adapt.
That’s not to say there weren’t some drawbacks once the pandemic struck, though. Some employees were feeling lonely or unhappy, not because we weren’t going out of our way to cater to them, but because the impact of the pandemic was taking a toll on the team’s emotional and mental health.
Some people decided they wanted a change over the course of this, so we lost good people. After everything that was going on in the world, some of our team discovered they just wanted something different in their lives. I don’t blame them at all; I think we all hit a wall at some point where we’d just had enough of all of this.
Today, we hire employees outside of Austin and even outside of the state. Some recent hires live in Oklahoma, Florida and Ohio.
I find it freeing that we can hire the best of the best from all over now.
As for going back to a mandatory in-office environment? That’s just not in the cards for us. If I said it was mandatory to be in the office right now, even for the people who live right here in Austin, one-third of my team would quit. There is no going back. They love it.
Things aren’t perfect, though. The team is still learning how to “turn off” when they’re at home, especially since our hires are known for having a superb work ethic. We offer compensation, and the challenges and rewards that come along with that, and because of that it is hard for certain people to turn off, as we are a 24/7 business. The notifications don’t stop. Our clients don’t stop: we’re global. This is a very hard job, but a fun one.
At the end of the day, technology companies have shown the most promise in the face of remote scenarios, as our teams excel at using technology. Businesses that adjust more quickly end up with the competitive advantage when it comes to the candidate talent pool.
During everything that took place in the world, we were able to continue to offer our people opportunities to advance and we kept doing the good work inside our community. Isn’t that what really matters?
Want to join our outstanding team? We’re hiring! | https://medium.com/@emerson-smith/remote-work-say-what-27de2a25934a | ['Emerson Smith'] | 2021-07-23 15:41:32.946000+00:00 | ['Company Culture', 'Technology News', 'Remote Work', 'Leadership', 'Austin'] |
1,201 | Inside the World’s ‘Most Sophisticated’ Surveillance System, With BuzzFeed News’ Megha Rajagopalan | OneZero is partnering with Big Technology, a newsletter and podcast by Alex Kantrowitz, to bring readers exclusive access to interviews with notable figures in and around the tech industry.
This week, Kantrowitz sits down with BuzzFeed News reporter Megha Rajagopalan. This interview has been edited for length and clarity.
To subscribe to the podcast and hear the interview for yourself, you can check it out on Apple Podcasts, Spotify, and Overcast.
China’s mass internment of Muslims in its Xinjiang region is one of the world’s most under-covered stories. The country has detained 1 million people there, putting them through a “reeducation” program meant to erase their language and culture, sometimes through forced labor and sterilization.
Though comprehensive, on-the-ground reporting from Xinjiang is sparse, BuzzFeed News reporter Megha Rajagopalan has been on the story from the beginning. She reported from Xinjiang itself. Then, after China did not renew her visa, she worked with BuzzFeed News contributors to track the internment camps using satellite imagery, finding that they are expanding.
To learn more about what’s happening in Xinjiang, how China treats the press, and the future of the global internet, I sat down with Rajagopalan on this week’s Big Technology podcast. The following discussion is filled with fascinating revelations from her reporting, an absolute clinic for anyone interested in these issues.
Kantrowitz: What is Xinjiang and what’s happening there?
Rajagopalan: Xinjiang is a really large region in Western China that sits on the border of a number of Central Asian countries. You have a population there of some 25 million people. About half are Uighur Muslims and other predominantly Muslim ethnic minorities and the other half are Han Chinese.
The government has had its issues with the minority populations there for a long time — since the Communist Party came to power in 1949. But what I’ve primarily written about is the government’s policies in Xinjiang in the Xi Jinping era. And it’s during that time period that things got significantly worse.
Starting in late 2016, early 2017, the government started to implement this policy of high-tech and pervasive surveillance over Muslim minority populations, and mass internment and incarceration of a portion of that population.
So, they’re detaining millions of people.
Detaining and incarcerating. According to UN officials and some academic estimates, there are upwards of a million people who have been detained in that region since 2017. The numbers involve a lot of extrapolation and they’re sort of hard to get to, but that’s the ballpark figure that everyone takes seriously.
If China is going to incarcerate people like this, what does it say their crime is?
It’s important to distinguish between people who are in extrajudicial camp facilities and people who are in the normal prison population. In some sense, it’s a distinction without a difference, because the government calls the camps vocational training centers or schools. They are not that; they are internment camps. But the things that get you sent to camps versus the things that get you sent to prison are different. For camps, people are being sent there for transgressions that are not even crimes under Chinese law.
I’ve met people who were told that they were sent for having banned apps like WhatsApp on their phones, people who were sent for sending money to family overseas for traveling and living abroad, particularly within the Muslim majority world. There’s all kinds of things like that that can get you sent to camps.
China isn’t really worried about the people downloading WhatsApp. So, what’s going on from a higher level? Are they just interested in making sure there are no Muslims in China?
Well, I think they are concerned about people downloading WhatsApp. You have to factor in that China has probably the most sophisticated internet censorship system in the world, and surveillance system as well. So, of course, it’s quite important to them to first of all control the ways in which people communicate and also to monitor those communications. That’s why they drive people out of systems that they cannot monitor. WhatsApp of course is end-to-end encrypted. It also belongs to Facebook, which is a U.S. company. So, the Chinese government is really limited in the ways that they can monitor it.
But what I’m getting at is — why is it that Muslims in this region are the ones that are taking the brunt of this?
From the government’s own statements, at the heart of this is a desire to sort of forcibly assimilate this group of people into Han Chinese culture. That necessarily involves eradication of their own cultures. So, the Chinese government’s perspective is that Uighurs in particular, who are the biggest by far Muslim ethnic minority group in Xinjiang, have separatist groups that are responsible for terrorism, are responsible for riots that broke out in the city of Ürümqi in 2009. That they’re sort of causing unrest because of an ideology that they perceive to be toxic.
It’s important to note that there have been terrorist incidents in Xinjiang. Although according to the government, there haven’t been any in the past few years. They have punished all of the people that they have found to be responsible in those incidents. You could see this particular campaign as a form of collective punishment for an entire ethnic group for the crimes of a handful of people who have sort of already been punished, but that’s sort of their perspective.
So, this has been going on since around 2016?
Late 2016, early 2017.
We’ve gotten bits and pieces of reports on this. China doesn’t have a free press in the same way that we do in the U.S. There have been a few journalists that have made it inside. But you actually took a pretty different approach, analyzing satellite imagery to examine what was happening on the ground, comparing uncensored maps with blacked-out areas on Baidu Maps. What did you find?
I don’t want to make it seem as if Xinjiang is a really difficult place to access. That’s just not true. There are a lot of journalists that have been to Xinjiang, including myself actually for a story for BuzzFeed in 2017 most recently. You can get on a plane and fly there. It’s not like it’s North Korea. The issues there are a little bit more subtle than that. When you go there, a lot of times you’re monitored by police. You can’t move about freely. Also, there are so many camps that it’s just like logistically not feasible to go to all of them or even a significant number of them, especially with the level of surveillance that exists there.
You also ended up getting kicked out of Beijing, but we’ll get to that in the second segment.
Yeah — so just about this particular project, I worked with Alison Killing, who is a licensed architect and also human rights investigator, and Christo Buschek, who is a developer. Essentially, we started talking all the way back in 2018. Alison and I met in a very strange fashion. We were working on this, like, citizen journalism handbook to help people become better investigators. While we were working on this together, I started kind of talking her ear off about some of these issues in Xinjiang. She got really interested. Then she started playing around with Baidu Maps. We were trying to kind of come up with a methodology for this.
She started to quickly notice that when she looked at places where we already knew that camps existed, when you zoomed in, these funny gray squares would appear. When you zoom further, they would disappear. We were like, “Well, what are these gray squares?” And then, it’s possible that it was because the imagery wasn’t loading or something like that. But we didn’t think that was the case because that’s actually like a different kind of gray tile or blanked-out tile that appears when the image just hasn’t loaded. So, we thought this might be a kind of censorship.
Xinjiang is a huge region. Satellite maps are made up of squares that are called tiles, and Xinjiang has millions of them. There’s no way we could have sifted through all of them. So we used this trail of clues from Baidu to narrow down the areas that we would have to search, and that helped us. We didn’t use the fact that a tile had been blanked out in Baidu as evidence of it being a camp.
Right, you cross-referenced those with satellite imagery.
Yeah, we cross-referenced that mostly with tiles from Google Earth. Alison also had this idea to look near cities and near towns, basically where people had settled rather than these huge open deserts and grasslands that you have in other parts of Xinjiang and mountain ranges. So, that really narrowed the search.
What we ended up finding is that not only was the government accelerating its construction of these really scary facilities that have all the hallmarks of internment camps, they were building them really big. We found several structures, these are like compounds that can house more than 10,000 people. We found at least one that can house more than 30,000.
The comparison was to Central Park, which is enormous, when you’re talking about an internment facility.
Yeah, absolutely.
What happens in these internment facilities?
I interviewed a lot of ex-detainees. I’ve spoken to quite a few over the years and their stories have a lot in common. When people are inside, they’re subject to all kinds of really degrading treatment. The ostensible purpose of these facilities is education in some sense. That’s what the government says. There’s a kernel of truth to that, because people are taught Chinese language and Communist Party dogma. But in practice, what that looks like is people go into these classrooms where there’s a kind of wall, I think probably transparent, between the teachers and the students. There will be guards in the classrooms. The students… I shouldn’t call them students. I’m sorry.
The detainees. Some described being hit with batons if they got a word wrong in Chinese, and described being humiliated in their treatment. Even beyond that, people describe really, really dreadful overcrowding, particularly in the first generation of camps, which were largely repurposed government buildings like old folks’ homes and high schools. Women talked about having their hair cut forcibly to chin length. Lots of people talked about being taken to solitary confinement, being interrogated routinely, being beaten with sticks. There are now lots of reports of women undergoing forced sterilization. So, really all kinds of abuses, really anything you can think of, are happening in these camps.
Once again, I would stress that none of these people have been accused of anything. So, it’s an incredibly Kafkaesque system in that way, because you don’t necessarily know what your transgression even is when you arrive at these places.
This is all just part of China’s attempt to forcibly assimilate the Muslim community inside the country?
Assimilate them and then also control and confine them.
What do you mean control?
Well, if you are afraid of being sent to a camp for writing a tweet or using a VPN, if the cost of those behaviors is that high, if you’re afraid that having a prayer rug in your house is going to get you sent to a camp, then you’re probably not going to pray or tweet, right? So, if control is one of the goals, that probably has been accomplished.
Wow. China. Is there something that the international community can do about this? Also, do you think that this has gotten enough attention?
Yeah, I mean, the “Can anything be done?” question is hard, because it’s almost like we’re coming at this as two Americans thinking about a problem that’s happening on the other side of the world. The implicit thing is “Can the international community do something about this?” when actually the most straightforward answer is that the Chinese government could just stop doing this, right? But that’s not really something that is within the realm of possibility as a consequence of pressure that would come from the media or something like that.
Even now that it’s a big international story, I still think it hasn’t gotten all the attention that it’s due, simply because we don’t know the whole scope of what’s happening there from a lot of different perspectives, ranging from forced labor to forced birth control and other practices like that, all kinds of things. But having said that, I’ve been covering this issue for a long time. I do think that there has been a steady uptick in understanding of this issue.
I would also say the Trump administration has actually taken a few concrete steps. It seems like it fits into their agenda for China. They talk about it frequently. More than other international governments by a lot. They have put sanctions on officials with direct responsibility for the abuses, including Chen Quanguo who’s the top Communist Party official in the region. They’re putting curbs on imports from the region. So, there have definitely been some steps that have been taken already. In terms of what those steps are going to accomplish, I think it’s probably too early to say, but I’m interested to see if it will prove to be any kind of deterrent either for China or for companies with ties there or other actors.
One of the obvious questions I haven’t asked yet is… Do we know why China is doing this? China’s a country of 1.5 billion people. I don’t understand what they’re looking to do, to try to take such a small population and essentially erase their history, erase their culture. What’s the possible benefit?
You shouldn’t underestimate the government’s obsession with what they would call social stability. I think there is an element here of the government just genuinely believing that Islam is the thing that is the problem. The cultures that exist there are the problem. When you listen to what they talk about, they talk about ideology in the terms that people would talk about like a virus. They’ll call it like a virus, this idea of extremism being a virus.
So, if you think about extremism and then you broaden the definition of extremism to include anything like fasting on Ramadan or praying or having really religious texts, if you think that all of those things are extreme actions… The government clearly does, because they have banned things like wearing the hijab, like wearing a beard for men, very, very normal actions that followers of Islam would take up.
If that’s what you believe, then it makes sense to say, “Okay, this whole group of people needs to be brainwashed in these reeducation centers or what have you.” I think that’s sort of where they’re coming from. It takes of course a certain kind of racism to really believe that about a group of people that’s that big, but it’s happened many, many times in history. So, it’s not surprising to me that it would happen again.
And here it is happening again. Let’s talk a little bit about the people. One of the things that I found remarkable about your story is you did speak with a lot of people who have been through the system. How risky is it for them to speak with a reporter?
Yeah. So, when I first started reporting on this, it was really hard to get anyone to talk, because even if you’ve left China, you probably still have a family there. The government does go after people’s families. This is a known thing. They even do it in state media reports for ex-detainees who have spoken out. They’ll go and interview their family and the family will be like, “No, this never happened, and it’s all lies,” stuff like that.
Wow.
Or they’ll say like, “Stop talking or we’ll detain your family member.” I did a piece about a man who is in Scandinavia. He was worried about his teenage son being sent to a camp for this exact reason. So, what happened after that is that people figured out that… they kind of just gave up. They were hoping for lenience by staying quiet. And then they found out that they weren’t going to get any lenience in many cases. So, people started speaking up a lot more. That’s why if you look at Xinjiang news coverage now, you’ll see a lot of named sources in a way that you wouldn’t necessarily have done before.
What calculation do you make as a reporter? Because you realize that by putting these people’s names, there’s a chance there might be retribution.
Yeah, definitely. That’s something I think about a lot. I thought about it more a couple of years ago than I do now, just because the consequences of putting your name out there are a lot better known now in these communities. I always try to ask people about their family’s situation early in the interview. I really do make clear to people: if you have second thoughts, if you get cold feet, if you get some new piece of information about your family that makes you think that if we publish this, they could be detained, then please tell me and we won’t do the story. That’s happened to me twice that I can think of, where we had a long conversation and the person got cold feet and backed out.
As a reporter, I’m always like, “My door’s open even if you just want to talk. I’m here for that.” But I would never want to put pressure on someone to publish. I think there have been other cases where I feel like people have been quoted in the press and I don’t know if consent was given. It really bothers me to see that, especially on video and stuff like that, because people are taking huge risks. The other side of that is that you don’t want to in the name of protecting a source actually stifle someone who really does want to speak out. You want to give them the opportunity to do that, if they have a story that’s true and that’s newsworthy.
There are a lot of people who are very direct who will say like, “I know this could have consequences for my family, but I feel like if I don’t speak out, I’m betraying myself. I’m betraying my conscience,” and all that sort of stuff. So, I try to take that seriously when that does come out. And then the other thing is often, we’ll use first names. Sometimes we won’t put the person’s hometown. We’ll just put their kind of adopted hometown, stuff like that. So, I think there are sort of some ways around it.
I’d like to just speculate a little bit. I know you can’t really say for certain where this goes, but if you had to guess, what do you think happens from here in Xinjiang?
Oh, man, I really hate this question. I don’t know. The thing is you’re asking where do we go from here, but I don’t even know if we have documented what here is. There’s so much that we don’t know about the crisis still. We don’t know about how prevalent forced labor is because of the opacity of supply chains there. We don’t know what’s happening to children of people who are detained in full. There’s evidence that children are being sent to state-run orphanages, but we don’t know long term what happens to these children.
We don’t know for what purpose, ultimately, these super high-security facilities that we documented were built. We don’t know about the possibility of state-controlled violence or rape on a wide scale and other crimes specifically targeting women. There’s just a lot to find out. I think knowing more about that stuff will tell us more about what’s to come. I don’t want to sound like I’m dodging the question.
No, this is great.
There’s been some international pressure, but not a lot. These campaigns take a lot of time, though. I don’t think we’ll know whether the international pressure that is there has worked for a couple of years at least.
Getting kicked out of China
Megha, you had an experience that might have been somewhat rare when it happened to you, which is that you got kicked out of Beijing. But now it seems to be happening much more frequently with international news organizations. From your perspective, what happened when you got kicked out of the country?
You’re right. It’s getting to be a pretty big club. When I lost my visa, it wasn’t that common.
When did it happen?
This was in March 2018. I had written my first big piece on the camps in Xinjiang and the surveillance environment there in the fall of 2017. I had gone to the region. I was the first journalist to actually find one of these internment camps, because I had GPS coordinates, or directions I think from a source who just knew where it was. So, I went. I took some pictures and we did a piece on it. Because it’s BuzzFeed, BuzzFeed doesn’t have a paywall. Of course, a lot of people read it. I didn’t really have much of an inkling that anything was wrong, but I did get a call for a meeting with state security officials in, I think, the fall or early winter of 2017. That was kind of unusual. It wasn’t the most common—
What did you think when you got called into that meeting?
I didn’t know they—
Were you like, “This is the end of my time here?”
No, I wasn’t like that. I mean, I didn’t know who they were. By phone, they said they were government officials with the Beijing city government, and they asked if I would meet. And then once I showed up, they showed me their badges and they were state security officials. They identified themselves as such. So, we had the meeting. It was cordial.
What do they say to you?
They wanted to know about who I was talking to, about my work, and stuff like that. They wanted me to sort of cooperate with them in giving information. Of course, I had no intention of doing that, but I just didn’t give them much of any answers. I think I wrote a little memo about the experience, just so I would have it for my records in case anyone ever asked. Yeah, and that was it. And then I didn’t hear from them after that. So, I had no reason to think anything had gone wrong.
And then at the kind of end of the year, I had my annual meeting with the Ministry of Foreign Affairs, which is the kind of main government ministry that foreign journalists interact with in China. It was a routine meeting. I got to this meeting, which was in a Korean coffee shop across the street from the Foreign Ministry, best known for waffles. We were sitting there, drinking our lattes. They were like, “Listen, we don’t like your reporting on human rights.” I was like, “Why?” They’re like, “Well, we think it’s wrong.” I said, “Well if there are any errors in the story, we can discuss. We can talk about running a correction if there’s something really wrong with the story.” He said, “Well, there are no errors that we could point to specifically per se, but it was wrong nonetheless.” I was like, “Alright.”
Interesting, yeah.
Yeah, great. So, then we talked. Again, it was a polite conversation that touched on a range of subjects after they were done criticizing my work. And then at the end of the conversation, I asked, “My visa’s coming up too. Do you expect any problems with it?” They’re like, “No, no, no, just submit it and it’ll be fine.” So, I was like, “Okay,” but they asked me to wait until after the Chinese New Year holiday, which is a weeklong holiday in China. The country kind of shuts down. It’s a bit like Christmas in terms of its significance. So, I waited until the first working day after the holiday. I submitted all my paperwork and got a read receipt, like an email back saying “got it,” from the Chinese Consulate in New York. And then I just waited. I had to leave the country for a different story.
So, then I ended up abroad, because I had planned to just be away while the visa was renewed, and then I thought I would go back. But unfortunately, it wasn’t renewed. I called to check in on it. They said that they’d lost the application, which couldn’t have been true because I got that receipt. So, at that point, I knew something had really gone wrong. And then finally, I think around June of 2018, I received an email from the Chinese Consulate in New York saying, “We do not approve this visa.”
So, how did you feel when you saw that you weren’t going to get your visa approved? Because you’d invested a lot at that point into reporting on China.
Yeah, I mean, it was devastating. It was absolutely devastating. I had built my career there for the most part. It’s the country that I spent the most time in since graduating college. I spoke Chinese. My whole life was in China. I’d never really lived anywhere else much as an adult. All my friends were there. I had a lease on an apartment there, all of my furniture was there. I was overseas and stuff like that. I just thought, “I don’t know what I’m going to do with my career now.” Also, I just didn’t want to leave in that way. I’m not an anti-China person, at all. I wouldn’t have spent so many years in China if I hated China. I mean, there are so many things that I love very, very, very much about that country. The idea that I wouldn’t be able to go back because of this decision that felt very arbitrary was quite painful.
Can you go back as a tourist, or are you just fully banned?
I don’t know. I’ve never asked. I always assumed that I could go back as a tourist. However, because of some recent developments, the jailing of the two Canadians that have been held there in retaliation for the Huawei case, as well as all of the recent measures that have been taken targeting American journalists and Australian journalists, it’s a little bit scary to think about going back, especially now that I’ve published a new batch of work that’s quite critical of the government.
Right. Okay. So, after you got kicked out, let’s get to the part where everyone seems to be getting kicked out. I mean, the New York Times, Wall Street Journal, Washington Post, Voice of America, and TIME magazine have all either been fully kicked out or had restrictions put on them. Why is China kicking out all these journalists? It’s sort of interesting to me that they even allowed them in in the first place, if they’re so sensitive about this coverage. So, what do you think is behind this recent wave of expulsions?
Yeah, okay. So, two things. First, why do they let them in in the first place? A lot of people ask this, so I want to address it. China is an authoritarian country, but it is not a tin-pot dictatorship. This is a big economy that really, really cares about its international image in the whole world, not just the U.S. and Europe, right? Part of being a big economy is that investors have to trust that you have a functioning system based on rule of law, in terms of the stock market, in terms of property, in terms of monetary policy, all of these things. You can’t really get there if you don’t have international media in your country, right?
That might sound crazy now, because now we’re talking about human rights and diplomacy and stuff like that, but it wasn’t long ago that the only story really making big headlines about China was the economic story. When they were at 10% GDP growth a year, that was the story about China. It was about poverty alleviation and economic development, joining the WTO, and stuff like that. So, it was during that era that a lot of journalists started to come to China, and China started to become a big presence in the international media. So, I guess, what’s happened since then?
So, you asked, “Why are they throwing people out now?” I think part of it is the very obvious: they don’t like critical coverage. But of course, as you pointed out, there was critical coverage before, and all those people kept their visas. So, I think what’s happening now is that a lot of these journalists have essentially become pawns in this kind of grand strategy thing that’s happening between the U.S. and China, where both parties are continually upping the ante. I used to cover diplomacy in China.
One thing that’s a really core part of Chinese diplomatic culture is just reciprocity. It’s like a very, very tit-for-tat kind of driven diplomatic culture. So, for instance, I used to cover state visits a lot. Whenever any U.S. official would visit China, there would be a discussion about how many journalists are allowed in the room from each country, right? As it was told to me by American officials, the Americans would always say, “Well, we want like six journalists,” or however many it was that they want. The Chinese would be like, “No, what for?” Just out of principle, they would want less. And then these discussions would go on all night long. It was so insane.
Fascinating.
Yeah, but this is what it is. So, for them, it’s like you force our state media outlets to register as foreign agents in D.C., we’re going to retaliate against your media outlets. It doesn’t matter that U.S. media generally isn’t state-controlled, right? They see it as more or less analogous.
So, is it the recent forcing of these publications from China to register that’s caused some of the stuff?
I think it was the trigger, but I kind of felt like this was always going to happen at some point. But yeah, I think if you’re looking for a—
Why was that?
Well, because of the direction things were going. I mean, I think China has become less important to multinational companies than it previously was, because its priority is building up its national champions in many industries. Multinationals were one of the core lobbies for greater engagement with China, and that lobby lost a lot of power, and a lot of interest in pushing for engagement, after the Chinese market became less hospitable to them. So, I think it was always moving in a direction where there was going to be a little bit less engagement, a little bit more hostility, but then that sped up a lot in the past couple of years with the Trump administration.
So, I mean, having said that, I don’t want to imply that it’s the Trump administration’s fault that journalists are being thrown out of China. That’s not what I’m saying. What I’m saying is like they did it in response to an immediate event. I think they—
It was…
Yeah, the notion of throwing out journalists as a way to punish them for negative coverage has been around for a long, long time, far predating Trump.
Yeah. I guess one obvious question to end this segment is how much has the coronavirus situation, where the U.S. has blamed China, China has blamed the U.S. for originating the virus. How much has that played into the tensions and then also the ability of reporters to do their job inside China?
I don’t know if I can answer that to be honest, because I just haven’t covered it.
The TikTok “sale”
The U.S. has been flirting with the idea of banning TikTok. Many U.S. tech companies are banned in China. Facebook isn’t in there. Google isn’t in there. What do you make of the whole fight between these two countries over control of the internet? How do you think TikTok plays into that?
Yeah, that’s a good question. I mean, I never thought that we would get to a point where there were serious discussions in the U.S. about banning a particular app because it was made in a country that was hostile to the U.S., but I guess we are there now. It’s interesting. I guess, to me, it’s shown that this vision of the internet that China and countries like China established a long time ago, one that was mocked by the U.S. and other countries that had the fewest restrictions on the internet, is now being adopted by the U.S., in a manner of speaking.
What is that vision?
So, for instance, we used to talk about the balkanization of the internet, right? Meaning that we would move from this state of history that we’re currently in, where there’s free movement of information on the internet between countries, to a version of the internet that is much more regulated by country, or censored by country, what have you. So, China was obviously the pioneer of this with the Golden Shield Project, also known as the Great Firewall, where they have, as you said, cut out lots of internet services that other countries use. Bill Clinton referred to that model of censoring the internet as nailing Jell-O to a wall. Meaning, you can’t nail Jell-O to the wall, right? It’ll slide off.
But now we’re in this situation where the U.S. is actually saying, “We’re going to do exactly what you did. We’re going to cut your app out of our market.” So, I think that’s really interesting, because that’s the vision of what the internet should be, at a high level. The U.S. reasons for banning TikTok are very different from China’s reasons for banning Facebook. To my understanding, the U.S. is not banning TikTok because of a free expression issue. They’re not trying to cut off speech. They’re thinking about things like privacy and national security concerns. So, it’s a little bit different, but the outcome is more or less comparable.
The most convincing argument I’ve heard for banning TikTok is that this is an app that shows you content based on an algorithm, not really based on a follow model. And there are many tech companies in China that have serious connections to the Chinese Communist Party. In fact, the ByteDance CEO has already apologized to the Party for not censoring enough. So it’s possible that maybe one day TikTok will rearrange its algorithm to show people content based on the Chinese government’s wants, and exert some cultural control that way without anyone ever knowing. Is that a serious concern from your perspective?
I’ve heard people say that. Yeah, I have heard people say that. It’s interesting. I don’t think it’s outlandish. However, it is a hypothetical, I think. I think the problem with this argument is that to make this argument, you have to accept that manipulating algorithms is fundamentally a problem. You have to accept that even if it’s not the Chinese government doing it, if it’s some other bad actor that’s doing it. It’s really hard then to say, “Actually, TikTok is the only one that’s vulnerable to this,” right? Because that’s what you’re really saying.
You’re saying TikTok is more vulnerable to this than other social platforms. I’m not completely sold on that argument, because I mean, we all know that Facebook very famously had its algorithm manipulated, right? So, they’re now kind of conscious of that and taking some steps to try to ensure that doesn’t happen again. But I mean, we both know that it’s been touch and go at Facebook, Twitter, and other social platforms in the U.S., right?
So, the question to me is, when you’re talking about tech governance, you don’t necessarily want to apply rules based on the place where the company is based, right? I don’t know if that’s necessarily the best way to approach tech governance. It seems to me that it would make more sense to have one set of standards and then apply them to all these companies, right? So, when you’re talking about algorithms, what we really want is transparency, right? Because we’re talking about social media companies as a vehicle for speech, and these algorithms are essentially regulating speech in a public square.
The larger issue here is that nobody sees these algorithms because the companies consider them proprietary. There’s no kind of independent governing structure that determines which algorithms are just and which are free from influence and all of that sort of stuff. So, all of the tech companies have this problem, right? So, I don’t really see necessarily a strong reason to apply it to TikTok in a way that is different from all of the other tech companies.
The reason would be that TikTok seems more easily influenced by a state power than the others. I mean, that would be the straightforward argument to do it to TikTok. Do you buy that?
Well, I’m more convinced by the argument that it’s a platform that shows you stuff kind of not based on follows, but just based on the algorithm. I think that’s more convincing. But when you’re talking about is it easy for nation-state actors to influence the platform, that was already done at Facebook, right?
That’s true.
Yeah, how can you really make that argument, right? It’s hard.
Yeah, but we found out about Facebook. I guess people would say, “We might never find out about TikTok,” because, like you mentioned, it’s not a follow-based model. So it could just be some subtle shifting of the algorithm to point to one thing, but who knows? Okay, let’s wrap up with another question that I’m sure you will hate. Let’s say you’re in President Trump’s position.
Oh, no.
What would you do with TikTok?
Alex, I’m a journalist. Why would you ask me this?
I don’t know, I have a podcast and I feel like it’s fun to ask questions like this that people hate, so…
What would I do with TikTok? Gosh, I mean, I’m an old-fashioned girl who believes in the free and open internet. I think it’s a bad path to… I mean, I’m not saying any regulation of TikTok is out. I’m not saying that at all. But I think blocking an app based on a decision that is made about its home country is opening the door. Even if it’s legitimate, it’s still opening the door to other actions like this that will be taken by future administrations that we cannot predict, right?
It will set a precedent for internet censorship in the U.S. in a way that has never been done before. I don’t necessarily know if that’s a good thing. I don’t know if we want to set a precedent where everybody in the world is living in their own little app ecosystem and not communicating with each other. I don’t know if that’s necessarily a better world.
Having said that, I think that if the Trump administration were going to say or if anybody was going to say, “Let’s not ban TikTok for the entire U.S. populace, but if you’re in the military, if you’re in a government role, don’t use it,” or certain kinds of people, don’t use it in certain occupations and stuff. That makes sense to me from this national security perspective. It’s also this argument that the Chinese government could use it to create an influence operation, I mean, yes, it’s definitely possible to do, but, like I said, it’s kind of a hypothetical, right? So, it feels like we’re punishing the app for something that hasn’t even happened. | https://onezero.medium.com/inside-the-worlds-most-sophisticated-surveillance-system-with-buzzfeed-news-megha-rajagopalan-64f182f2f25f | ['Alex Kantrowitz'] | 2020-09-29 00:07:49.879000+00:00 | ['Technology', 'BuzzFeed', 'Politics', 'Big Technology', 'China'] |
1,202 | High Efficiency Motor Can Run for Many Thousands of Years | If you buy a Tesla or other electric vehicle, you expect the motor(s) to, perhaps, get you to work and back home, where you dutifully plug it in for the next day’s commute. Imagine, however, having an electric motor that doesn’t just last through the day, but could theoretically run for thousands — many thousands — of years on a single AA cell. YouTuber lasersaber has created just such a device, and while it won’t come close to powering your car, it’s still very interesting.
As you might suspect, the key to this motor’s efficiency is that it works with extremely low friction. This is enabled here by the magnetic levitation of a rotor made with pyrolytic graphite. Four permanent magnets attached to the top of the rotor are moved around by electromagnets that provide power, forming a sort of very low power, high efficiency stepper motor. A number of resistors, a capacitor, and a reed switch are used to control the unit.
A large portion of the video below is spent explaining just how impressive the device’s power consumption is: at a low RPM setting it uses only 0.15 microamps at 33 millivolts, for a grand total power consumption of less than 0.005 microwatts. This means that, theoretically, the device could operate for 89,880 years on a single AA battery. It’s an amazing feat of efficiency, and as ’saber notes, this could possibly be made even more impressive if the device were constructed inside a vacuum chamber! | https://medium.com/@JeremySCook/high-efficiency-motor-can-run-for-many-thousands-of-years-84915e7c4 | ['Jeremy S. Cook'] | 2019-03-26 22:25:49.657000+00:00 | ['Motors', 'Science', 'Technology', 'Batteries'] |
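As a quick sanity check, the arithmetic holds up, assuming a typical alkaline AA capacity of about 2.6 Ah at 1.5 V (the capacity of the cell is not stated, so that figure is an assumption):

// Back-of-the-envelope battery-life estimate.
const power_W = 0.15e-6 * 0.033; // 0.15 microamps at 33 millivolts, about 4.95e-9 W
const energy_J = 2.6 * 3600 * 1.5; // assumed 2.6 Ah AA cell at 1.5 V, about 14,040 J
const years = energy_J / power_W / (365.25 * 24 * 3600);
console.log(years.toFixed(0)); // about 89,879, in line with the quoted figure

In practice, the cell's own self-discharge would empty it within a decade or two, which is why the runtime can only ever be theoretical.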
1,203 | A year as a Storage Provider on Sia and Storj networks |
Having already written articles comparing various Distributed Storage platforms, and a two-month evaluation of Sia from the renting perspective, I decided to experience being a Storage Provider to evaluate that side of both platforms to be able to give a fair overall evaluation.
In mid-2019 I had deployed a Server to handle my own varied storage requirements. In December 2019, I set about using some of the spare capacity to become a host on both Sia and Storj platforms.
I varied storage capacity (and pricing on Sia) throughout 2020, so I will attempt to take this into account and give an average overall picture.
Disclosure: I am an Ambassador for the 0chain project (which is currently in its betanet phase). I plan to use this very server when the mainnet goes live.
Sia
Running since 2015, Sia was really the first working Distributed Storage platform.
Requirements
Sia requires that Storage Providers commit ‘collateral’ in its native SiaCoin to show commitment to their Service Provision. It aims to use spare capacity on Personal Computers and Servers around the world. There is generally a modest hardware (CPU/RAM) requirement; however, the entire Blockchain ‘consensus’ needs to be downloaded before you can start hosting. This process is CPU intensive, as the consensus now exceeds 20GB and can take hours or days to sync depending on connection speed, drive speed and CPU power.
Storage Capacity and Pricing
Initially I set my Storage prices low, a little below what was calculated to be the average, wanting to experience having storage utilized and gaining a good reputation. I also used the recommended 2x collateral.
My Performance on the network
As you can see by the comment from one of the Sia founders, my host has always been one of the best if not the best performing host on the entire network.
After a few months, the first partition I was using for Sia was getting full, so I added additional spare storage in a couple of other partitions.
A month or two later, these were also filling up and by now I had served thousands of contracts with zero failures. I decided to reduce the sizes of these other storage partitions.
At this point, I started getting several failed contracts. I checked in the Sia Discord community, but it seemed to be acknowledged that there were some bugs that were hard to diagnose, and since the amounts were small, I didn’t worry about them. However, after a while, some large contracts started failing, and because my collateral ratio was 2x, I was losing double my potential earnings, so I had to do something about it.
After consulting the Sia community discord, I decided to stop accepting storage contracts, but stayed online in order to fulfill the remaining contracts, otherwise I would lose the collateral from those too.
As you can see from the chart, my used storage declined until early October. I also had some downtime because Sia had crashed and I did not notice.
I then started from scratch with a brand-new node and wallet. (this time choosing an equal collateral). However the IP address is the same so the stats graphs look like a continuation of service.
I had more downtime later in October. Interestingly I didn’t notice any failed contracts because it was a new node, maybe none expired during this downtime.
This actually illustrates a weakness of the Sia protocol, providers can go offline for the bulk of a contract then come back online and collect the storage fee when the contract ends. They lose potential read fees during this time of course, but in the meantime the client will have had to form new contracts with other hosts to maintain the desired EC ratio.
Within a few weeks, my node was getting full again, so I decided to just increase my prices. I expected a sharp drop in new contracts, but they continued to climb. I increased twice more, even without increasing collateral. Amazingly, even after massively increasing prices I still got some new contracts! However, the rate did decline sharply as existing contracts expired and renters chose not to renew.
This illustrates the drastic shortage of decent hosts on the Sia network. My node should not have been selected for any new contracts with such a huge cost and such tiny collateral.
Drawing 2020 to a close
In December I got banned from Sia Discord for expressing my views about an unrelated subject. It looks like I am not the only one though, RBZL, a veteran Sia community member and creator of SiaSetup.info was also banned for expressing his opinions about Sia. Read :- https://siasetup.info/concerns-about-sia-and-skynet for more info.
This ended my interest in the Sia project.
Earning Summary for 2020
Average Storage: 5TB
Total Earnings: 30,000 SC ($100) estimated
Average Earning/TB/mo: $1.70
(As previously stated, the earnings could have been much higher if I had priced more aggressively)
Storj
With Storj, the prices are fixed. The determination of which hosts are used is done by centralised entities, called satellites. I do not know the criteria but assume good performing hosts are favoured.
No collateral is required to host, but a portion of rewards are held back in escrow, decreasing with time, (mentioned later).
For most of the year, my Storj host offered over 5TB of storage. It took a few months for this to get filled.
When a host becomes full, the ingress naturally falls off. In June, I took the opportunity to try the Graceful Exit (GE) procedure. (There are other reasons mentioned later).
As you can see, over 80% of my data was on test satellites (Stefan-Benten, Saltlake and Europe-North).
So, I started with the Stefan Benten satellite in Europe as that was a test satellite that would soon be discontinued anyway. The GE only took a few days and was completely successful.
I then GE’d the Asia East and US Central satellites. The Asia East exit worked without issue, but the US Central one failed even though the node had an audit score of 100%. (I have not investigated this because the amount held is small anyway.)
I then decided to GE from Saltlake in early September. This is another test satellite and used by far the most storage. The GE took over 6 weeks but was eventually successful. For November and December, I reduced the capacity to 2.5TB because I knew the two remaining satellites would take a while to fill the space vacated by Saltlake.
Average Storage used: 4TB
Total earnings: $220 (appx 500 STORJ)
Average Earning/TB/mo: $5
This is actually better than I was expecting. Even though egress is typically only around 10% of stored data, since it is 13.3x more profitable than storage, egress made up the lion’s share of the earnings. But before you rush out and buy hard disks, allow me to elaborate.
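Before getting to the caveats, here is the arithmetic behind that claim. The rates are assumptions based on Storj’s published payout schedule at the time, roughly $1.50 per TB-month stored and $20 per TB egressed (which is where the 13.3x ratio comes from), not figures taken from my payout statements:

// Why egress made up the lion's share despite being only ~10% of stored data.
const storedTB = 4; // average storage from the summary above
const storageIncome = storedTB * 1.5; // $6.00 per month
const egressIncome = storedTB * 0.10 * 20; // $8.00 per month
console.log(20 / 1.5); // 13.33, the per-TB profitability ratio
console.log((storageIncome + egressIncome) / storedTB); // 3.5, i.e. ~$3.50/TB/mo

Those assumed rates land close to the ex-surge average discussed below.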
The first issue..
These figures also include ‘Surge payments’, arbitrary bonus amounts that Storj devised to pay to hosts. These were awarded in January, March, April and November and equated to around a quarter of the earnings. So, excluding these, the average would have been around $3.75/TB/mo.
The big issue..
The vast majority of data on the Storj network still appears to be test data. Although testing is crucial, this level of it is not sustainable. When I started Gracefully Exiting in June, around 80% of my data was on the test satellites.
Since then, I do not have a true picture from my own node, because I am only on some of the satellites. However, judging by the forums and a discussion I had on a Discord group, I believe the ratio is still around the same.
Some additional points
It takes time to get vetted, and you won’t get a lot of data during this period. I am sure this only took me a couple of weeks, but the latest I’ve read is that it currently takes a couple of months.

A percentage of earnings is held in escrow. *Correction* The percentage held back starts at 75% and decreases by 25% every 3 months (half of the total held amount is returned after 15 months; see the sketch below). If it were not for the fact that I Gracefully Exited some of the satellites, there would be significantly more of my earnings still held back.

When you Gracefully Exit, you get your held escrow amount back from that satellite. The default minimum period before you can Gracefully Exit is 15 months; it is temporarily reduced to 6 months, which was another reason I chose to Gracefully Exit some satellites. Additionally, for each new satellite, your count starts from zero until you are able to GE.

One of my Graceful Exits failed. As I stated, I have not investigated, as the amount was small. Others in the forums have had to produce all sorts of logs to try to investigate when this happened to them. The general consensus last time I looked was that Graceful Exit is still not a risk-free procedure.

Storj payments are made monthly via its ERC20 STORJ token. Three times since mid-2020, Storj has changed the rules regarding disbursements, mainly because of high ETH gas fees. Payments falling below the threshold (including my December payment) are rolled forward indefinitely until the threshold is met. This is particularly detrimental to those who committed a relatively small amount of storage.
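To make the escrow schedule concrete, here is a rough sketch of it as I understand it (the exact terms are Storj’s and may have changed since):

// Fraction of a month's earnings held in escrow, by node age on a satellite.
// Encodes the schedule described above: 75% for months 1-3, dropping 25
// percentage points every 3 months, reaching 0% from month 10 onwards.
// Half of the accumulated held amount is returned at month 15.
function heldFraction(month) {
  if (month <= 3) return 0.75;
  if (month <= 6) return 0.5;
  if (month <= 9) return 0.25;
  return 0;
}

[1, 4, 7, 10].forEach(m => console.log(`month ${m}: ${heldFraction(m) * 100}% held`));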
Summary
Although both platforms offer an opportunity for people with spare storage to earn some money from it, neither platform gives me the confidence that it could turn a profit long-term. However, they still remain viable for spare existing storage.
Most would not argue with this about Sia, as its community has always maintained that you should not invest in new hardware, as that would likely never turn a profit.
However, I suspect that some Storj providers may argue that they are happy with the income that their host(s) generate. I would ask them to reconsider in light of the following:-
As described above, test data rate is completely unsustainable.
Surge Payments also give a false impression of real income.
Hard Disks are obviously the most reliable when new. When disks start failing and potentially held escrow funds are forfeited, will it still work out worthwhile? (Storj is very tolerant of downtime but intolerant of Data Integrity Failure.)
Conclusion
These two platforms have failed to realize the full potential of distributed storage. Because neither has sensible measures to ensure and reward high quality hosts, high erasure coding (EC) ratios have been required to offset host churn, and no efficiency has been gained over the triplication of data typical in data centers.
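To illustrate the EC point: the expansion factor of an erasure coding scheme is the total number of shards stored divided by the number of data shards needed to reconstruct a file. The parameters below are my assumptions from each project’s public documentation (Sia’s default 10-of-30 scheme and Storj’s roughly 29-of-80 scheme) and may have changed:

// Raw bytes stored per byte of user data, for each redundancy scheme.
const expansion = (data, total) => total / data;
console.log(expansion(1, 3)); // 3.00x, plain data center triplication
console.log(expansion(10, 30)); // 3.00x, Sia's default erasure coding
console.log(expansion(29, 80)); // ~2.76x, Storj's approximate scheme

High host churn forces high shard totals, so in practice little or nothing is saved over simple triplication.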
Sia, due to longstanding, unfixed errors and Storj due to a lack of transparency have failed their loyal followers, many of whom run hosts much more for the community than for any hope of ever turning a profit.
Fortunately, there is an alternative.
High quality hosts are welcome to consider joining the 0chain network.
Through true decentralization with high efficiency and advanced protocols, hosts can expect a reward that reflects their true performance on the network.
Poor quality hosts need not apply.
Footnote
Please read my previous article comparing various Distributed Storage platforms if you haven’t already for further key differences between these platforms.
No doubt I will be criticised and accused of writing this solely for the benefit of 0chain. I did all this research on my own initiative and was not paid anything for it.
I conclude that 0chain is better for the reasons described. I believe that 0chain’s protocols and tokenomics are far more efficient than either Storj’s or Sia’s, and that’s not even mentioning other superior technology like private sharing through Proxy Re-Encryption.
I welcome feedback and will publish corrections if I have stated anything shown to be inaccurate, but please stick to facts rather than misplaced loyalty to an inferior technology. | https://medium.com/@sculptex/a-year-as-a-storage-provider-on-sia-and-storj-networks-b8e00be49d5a | [] | 2021-02-03 17:29:52.353000+00:00 | ['Distributed Storage', 'Sia', 'Storj', 'Storage', 'Blockchain Technology'] |
1,204 | More Complex Bindings with Svelte | Photo by John Barkiple on Unsplash
Svelte is an up-and-coming front-end framework for developing front end web apps.
It’s simple to use and lets us create results fast.
In this article, we’ll look at how to use the bind directive outside of input elements.
Media Elements
We can use the bind directive with audio or video elements.
For instance, we can bind to various attributes of audio or video elements as follows:
<script>
let time = 0;
let duration = 0;
let paused = true;
</script>

<video
bind:currentTime={time}
bind:duration
bind:paused
controls
src='https://sample-videos.com/video123/mp4/240/big_buck_bunny_240p_30mb.mp4'
>
</video>
<p>{time}</p>
<p>{duration}</p>
<p>{paused}</p>
In the code above, we used the bind directive to bind to the currentTime, duration, and paused properties of the DOM video element.
They’re all updated as the video is being played or paused if applicable.
Six of the bindings for the audio and video elements are read-only; they’re:
duration — length of the video in seconds
buffered — array of {start, end} objects
seekable — array of {start, end} objects
played — array of {start, end} objects
seeking — boolean
ended — boolean
Two-way bindings are available for:
currentTime — current point in the video in seconds
playbackRate — how fast to play the video
paused
volume — value between 0 and 1
Video also has the videoWidth and videoHeight bindings.
Dimensions
We can bind to the dimensions of an HTML element.
For instance, we can write the following to adjust the font size of a piece of text:
<script>
let size = 42;
let text = "hello";
</script>

<input type=range bind:value={size}>

<div>
<span style="font-size: {size}px">{text}</span>
</div>
There are also read-only bindings for the width and height of elements, which we can use as follows:
<script>
let w;
let h;
</script>

<style>
div {
width: 100px;
height: 200px;
}
</style>

<div bind:clientWidth={w} bind:clientHeight={h}>
{w} x {h}
</div>
bind:clientWidth and bind:clientHeight bind the rendered width and height to w and h respectively.
This Binding
We can use the this binding to assign the DOM element object to a variable.
For instance, we can use the this binding to access the canvas element as follows:
<script>
import { onMount } from "svelte";

let canvas;

onMount(() => {
const ctx = canvas.getContext("2d");
ctx.beginPath();
ctx.moveTo(20, 20);
ctx.lineTo(20, 100);
ctx.lineTo(100, 100);
ctx.stroke();
});
</script>

<canvas
bind:this={canvas}
width={500}
height={500}
></canvas>
In the code above, we put our canvas manipulation code in the onMount callback so that the code only runs once the canvas element has been mounted.
We have bind:this={canvas} to bind to the canvas element so that we can access it in the onMount callback.
In the end, we get an L-shaped line drawn on the screen.
Photo by Markus Spiske on Unsplash
Component Bindings
We can bind to component props with bind .
For example, we can write the following code to bind to a child component’s value prop:
App.svelte :
<script>
import Input from "./Input.svelte";
let val;
</script>

<Input bind:value={val} />
<p>{val}</p>
Input.svelte :
<script>
export let value;
</script>

<input bind:value={value} />
In the code above, value is bound to the input element. Since we passed val in as the value of the value prop and used the bind directive, val in App will be updated whenever value in Input is updated.
So when we type into the input box, the value will also be shown in the p element below.
Conclusion
We can bind to properties of media elements by using bind with the supported property names.
Using the this binding, we can get a reference to the DOM element in the script tag and call DOM methods on it. | https://medium.com/swlh/more-complex-bindings-with-svelte-414f6dcba416 | ['John Au-Yeung'] | 2020-05-04 17:08:02.545000+00:00 | ['Programming', 'Web Development', 'JavaScript', 'Technology', 'Software Development'] |
1,205 | 😂🌍🇺🇸 Emojis for diplomats | “Ultimately, whatever the metric, the adoption rate of Emoji is staggering,” Professor Evans writes in his book. “And this provides grist to the mill that Emoji is a truly global form of communication.”
After all — Evans adds — no matter what our mother tongue is, “the smiley face means the same thing in every language.”
We are all, or nearly all, ‘speaking’ emoji now.
EMOJIS FOR DIPLOMACY
According to India Bourke of the New Statesman, “in many ways, emojis are the promised land of diplomatic history: they have the potential to speak across borders to a new, global citizenry. They are the Esperanto of the digital age.”
She adds, however, that “emojis can also tend to the crass and immature” and that their meanings and interpretations can be overly limiting and hazardously slippery.
Despite that, many heads of state or government, foreign ministers, and diplomats use emojis regularly. Even Malala Yousafzai, who debuted on Twitter this month, has started to use emojis.
And former head of the UN Development Programme Helen Clark recently used emojis on Twitter, responding to former United Nations Deputy Secretary-General Jan Eliasson’s first tweet.
Indeed, as with every digital form of communication, some are better than others at embracing it. And when it comes to the foreign policy community, there have been interesting experiments using emojis.
In February 2015, Australian foreign minister Julie Bishop — described by many as ‘emoji enthusiast’ — sat down with Mark Di Stefano of BuzzFeed for “the world’s first political emoji interview,” in which she described Russian president Vladimir Putin with 😡.
During the interview, Bishop also talked about Australia-US relations with 👍 ✔️ 😃; Australia-China relations with 👍 ✔️ 😎; marriage equality 👐; and winding down at home after a long day at work 📚 📀 📺. But everybody seemed to focus on the emoji she used to answer the Putin question.
“Her choice of a red-faced emoticon to describe Putin did not go down well with her country’s Senate: what exactly was she using it to infer, they demanded to know?,” Bourke explains referring to the uproar that the emoji used by Bishop created and how it opened up to many interpretations.
Bishop went even further a few weeks later when, in a nationally televised interview she brought emojis to life and responded to a question: “I’m going to answer in emoji,” she said before pulling a baffled face and shrugging her shoulders.
2015 was also an interesting emoji year for Argentinian president and social media powerhouse Mauricio Macri. In December that year, shortly after his election, Macri posted on Facebook the entire list of his proposed cabinet.
And he used emojis for every cabinet position.
In naming Susana Malcorra as his foreign minister, he used the following emoji: 🚻
Earlier in 2015, the British daily The Guardian posted a translation into emojis of Obama’s State of the Union address — while launching at the same time the Twitter handle @Emojibama.
“Barack Obama said his address to Congress this year was all about ‘finding areas where we agree, so we can deliver for the American people’,” The Guardian wrote. “And if there’s one thing we can all agree upon, it’s emojis,” The Guardian continued.
Now, if we look at the full list of emojis — a total of 2,666 in the Unicode Standard as of June 2017 — there’s at least one that might not look appropriate for diplomacy: what I call the poop emoji or 💩.
And yet, together with 🚽, it was featured quite successfully in the November 2016 #WorldToiletDay campaign by the World Health Organization.
EMOJIS FOR COUNTRY BRANDING
One country has gone the extra mile in the use of emojis: 🇫🇮 Finland.
Finland is in fact the first country in the world to publish its own set of country-themed emojis — launched, also in 2015, by the Finnish ministry for foreign affairs. According to finland.fi, an online portal produced by the foreign ministry and published by the Finland promotion board, “the Finland emoji collection contains 56 tongue-in-cheek emojis, which were created to explain some hard-to-describe Finnish emotions, Finnish words and customs.”
They even include some emojis related to the role of Finland in the international community with the ‘peacemaker’ emoji (inspired by the work of former Finnish president Martti Ahtisaari), and as a pioneer in gender equality, as it is the first country in the world to give women both the right to vote and stand for election in 1906, with the ‘girl power’ emoji. | https://medium.com/digital-diplomacy/emojis-for-diplomats-b6e51e6f1781 | ['Andreas Sandre'] | 2017-07-17 18:29:40.600000+00:00 | ['Foreign Policy', 'Digital Diplomacy', 'Emoji', 'Technology', 'Social Media'] |
1,206 | MIT’s LeakyPhones Help You Interact with Strangers by Sharing Music at a Glance | The Tangible Media Group within MIT’s Media Lab is focused on researching human interaction with the gadgets and gizmos around us, and how that technology affects social dynamics. Their research has led to a number of interesting developments that we’ve featured in the past here on Hackster, but this is one you’re either going to love or hate. Amos Golan’s LeakyPhones are headphones that let you share your music with a stranger at a glance — and that let them share their music with you.
Golan’s motivation for this project is rooted in our current social dynamics, or lack thereof. In the past, starting a conversation with a stranger was a simple matter of saying “hello” when you made eye contact for a moment. These days, many of us spend our time in public with our eyes glued to our smartphones and our ears covered with headphones. For many of you, that’s a good thing — you don’t want strangers talking to you anyway. But, there’s no denying the effect it’s having on spontaneous interactions.
LeakyPhones is intended to open the doors back up to that kind of interaction, but respectfully. The idea is that if two people wearing LeakyPhones come across each other and hold eye contact for a moment, they’ll begin to hear what the other person is listening to. The longer they hold eye contact, the more the stranger’s music starts to overcome their own. Hopefully, that will spark a conversation.
There are, of course, a number of privacy concerns here. Many of you are already cringing at the idea of a stranger hearing what you’re listening to, or you hearing what they’re listening to. That’s why LeakyPhones has four different operating modes. You can either choose to allow transmission of your own music or not, and choose to receive music from strangers or not. It allows for controlled social interactions with strangers, but only when you want it. | https://medium.com/hacksters-blog/mits-leakyphones-help-you-interact-with-strangers-by-sharing-music-at-a-glance-b4c15817719 | ['Cameron Coward'] | 2019-01-29 23:26:00.726000+00:00 | ['MIT', 'Research', 'Social', 'Technology', 'Music'] |
1,207 | Drones and Robots Future of e-Commerce | Drones and Robots Future of e-Commerce
The very idea of a drone, robot, or self-driving vehicle delivering your order to your doorstep is exciting; after all, it gives you a Jetsons-like experience. Like it or not, contact-free delivery is the future of e-commerce. Companies like Amazon, Uber, and UPS have invested heavily to make this technology available for the future. The physical web is in all likelihood going to be the new normal.
One of the most critical costs that companies face is last-mile delivery. For most deliveries, the last-mile delivery cost makes up about 30 percent of the total delivery cost, sometimes even 50 percent. It is the most expensive part of the e-commerce industry, which is why companies are constantly looking for ways to cut these costs. 2019 was the year of e-commerce, with global sales reaching about USD 4.9 trillion. The growing volume of transactions makes the delivery problem even more challenging to tackle.
While most e-commerce retailers have made faster delivery options available at a price, 70 percent of customers are price-sensitive and prefer to wait out the standard delivery period. While an option for faster delivery suits customers willing to pay extra, it only adds to the burden and costs borne by the retailer. Getting a product delivered faster always results in an increase in the last-mile delivery cost as well as the overall delivery cost. This is where drones, robots, and autonomous vehicles come into play.
Nonetheless, the availability of this technology doesn’t mean these are practical options yet. There is still the issue of a lack of infrastructure. Autonomous vehicles are highly dependent on 5G networks and advanced artificial intelligence; the former is yet to become widely available, while the latter doesn’t come cheap. While robots can handle small obstacles like curbs and steps, an awkward obstruction or a pothole can cause trouble. If one tumbles over, who would pick it up? Despite drones and robots being more efficient than humans in some respects, they often take more time to move from one place to another.
Undoubtedly, humans are expensive with regard to last-mile delivery costs. Nonetheless, they also bring a lot of advantages. The issue with drones and autonomous vehicles is that they lack the intuitive judgment that can prove handy when a last-mile delivery problem occurs. For example, if the customer is unavailable at the given address at the time of delivery, what would a drone or a robot do? Would it be able to think outside the box and come up with a suitable solution? Despite all the worries that machines are here to take over humankind, it is unlikely they can get by without human participation. The best way to step into the future would be through proper collaboration between humans and robots, where they complement one another. At the end of the day, the aim should be to improve quality of life and have a societal impact with the help of technological advances. | https://medium.com/@marywmarks6/drones-and-robots-future-of-e-commerce-d01cdc00cf39 | ['Mary W. Marks'] | 2019-11-15 10:49:51.988000+00:00 | ['Technology', 'Robot', 'E Commerce Solution'] |
1,208 | Awesome hacks from TerribleHack 6 | Li Li the Scaredy Cat
The IoT equivalent of Furby. You can’t run away fast enough.
Poober
Ever had your pet poop but you didn’t want to pick it up? Thanks to the sharing economy, you don’t have to! Pay someone else to pick up your shit.
Mouthzart
I don’t think I need to explain this one.
SQML — Structured Query Markdown Language
Disappointed with the high reliability of MongoDB? SQML is the hottest new DB on the block. It stores all the data in an HTML table. HTML has been around for decades and is used by all the Internet giants like Google or Facebook, so it must be reliable. Get it on npm!
Speakthernet
Send messages across two devices, one text-to-speech soundbyte at a time.
GooseScript
Fowl-friendly programming language. Transpiles Perl — I mean — Goose sounds into C++.
Smoke Detector Detector
Detects if there are nearby fire detectors by starting a fire. Genius.
Facecode
It automatically pushes your Git commits as a diff patch to Facebook for all your friends to admire. Not easy to use, but at least you can keep working during those GitHub outages!
Awesome Game Bundle™
It’s the poor man’s Humble Indie Bundle; comes with exciting games such as 2-colour Bejeweled and 1x1 Tetris.
Lemmr
Spice up your math assignments by automagically inserting lemmas taken from the “Mathematical Theorems you had no idea existed, …” Facebook page.
Visual Stupid 2017
Ever need to compile pictures of code? Look no further.
College Freshman Cooking Simulator
Don’t know how to cook? This hot new web-app will randomly generate fun recipes for you to follow!
That’s a wrap for this term’s TerribleHack! A big thank you to everyone who participated for making the event awesome, to our Tilt supporters who took a huge leap of faith and gave us money, and of course to Mary from Shopify for hosting us.
I haven’t had the chance to include every single demo on here, but you can read more about them here!
We also have more information about each of the hacks on our DevPost page.
Want more? Read more about previous TerribleHacks here and here.
If you’re interested in helping out for a future TerribleHack, let me know, we can probably use the help. | https://medium.com/uwaterloo-voice/awesome-hacks-from-terriblehack-6-58a3593153a2 | ['Yu Chen Hou'] | 2019-02-11 21:08:18.920000+00:00 | ['Terrible', 'Comedy', 'Technology', 'Programming', 'Hackathons'] |
1,210 | Key roles of a tech startup and benefit of SkillPal. | A tech startup is a company whose purpose is to bring technology products or services to market. These companies deliver new technology products or services, or deliver existing technology products or services in new ways. Hiring qualified people for your startup isn’t enough, either. 65% of startups fail due to management issues. So, when you’re building a startup, it’s about hiring the right people for the right positions, not just those who look good on paper. So, what startup roles are essential for success? I am going to take a look at nine essential roles in a startup company, and what it takes to make each of them successful. According to a SkillPal expert, the chief executive officer (CEO) of a startup is often referred to as the visionary. The leader of the pack. The decision-maker. Their talent lies in dreaming big and being passionate about what the company could achieve in the future. But that doesn’t mean a CEO is paid more than the rest of the team, nor do they hold more power.
I really want to tell you about the key roles in a startup business. But you may need a mentor first to help you understand what to do and what not to. A mentor is a person who has professional and life experience and who voluntarily agrees to help a mentee develop skills, competencies, or goals. Put another way, a mentor is an advisor and role model who is willing to invest in the mentee’s personal growth and professional development. A mentee is someone who has identified a specific personal or professional goal and who believes that the guidance and help of a mentor, and being held accountable to the mentor, can help them achieve their goal. If you are searching for a mentor, SkillPal is the best place to start.
The SkillPal mentorship is a relationship between two people where the individual with more experience, knowledge, and connections is able to pass along what they have learned to a more junior individual within a certain field. In today’s competitive landscape a mentoring relationship can give you an edge that differentiates you from your peers and/or your competition. You may be ready to make a career change or advance in your present career but something is holding you back. Wouldn’t you benefit from a relationship with someone who may provide knowledge, insight, support, guidance, and open doors for you? It may surprise you that some of the world’s richest and most famous people had mentoring relationships to help them in their quest for excellence.
No startup is built on the exact same structure. This is because startups are, by definition, agile, lean, and adept at evolving based on the company need. However, some general role categories seem to recur everywhere. These include:
· Engineers: out of all the positions at a startup, backend engineers are probably the ones who benefit from the most clarity. Hackers, born coders, and computer scientists are usually technically orientated, focusing on learning the best programming languages, algorithms, and frameworks for the project.
· Product managers: often have engineering backgrounds, but also see the bigger picture. They enjoy analyzing traffic, understand how to prototype and research, and often know their way around various tasks.
· Marketing and sales: the hustlers who will do everything in their power to promote and sell the product to the right audience.
· Business Developers: often lumped together with sales, these positions often become available to more experienced salespeople. There is crossover in the skills, but making deals on a large scale implies strong people skills and an innate ability to network with the right people.
· Legal teams: not always needed for brand-new ventures, but essential for growing startups.
· Human Resources: hiring and firing, but also attracting top talent to fill positions at the company.
Core responsibilities of a startup CTO:
· In the early stages, coding and developing the company’s product
· Developing and fine-tuning the startup’s strategy for using tech resources
· Making sure the tech team is hitting deadlines, and using their time productively and efficiently
· Focusing on ways the backend team can increase product revenue
· Developing and implementing product infrastructure
Kind of obvious from the “visionary” label. A CEO’s vision needs to seep into the foundations of a startup. They need to be constantly looking for opportunities for their product in the market and among their customers, and find it a place in their chosen industry. 44% of startups fail because there isn’t a market need for their product, so finding a pain point (and fixing it) is a crucial part of a CEO’s job. A startup, especially in its early stages, can’t afford to hire consultants and specialists to help make the vision become a reality. That’s why a CEO needs to be a jack of all trades. A startup CTO is a CEO’s right hand and helps them fine-tune strategy, tactics, and business goals to push the company forward. In the first stages of the startup, the CTO will be hands-on in the IT/development side of the company, helping to invent the product before the company progresses out of its early stages.
“A good CEO is constantly questioning whether the right people are in the right places,” says Founder/CEO of Brainscape, Andrew Cohen. “Do any roles need to be re-organized? Are any bad apples affecting the performance of the rest of the team? Do people like their jobs? Do we need to have a corporate team-building retreat at a ropes course? Do we need to create any new vacancies for unmet leadership needs? “Continually optimizing for team fit is a difficult activity, but it pays off exponentially as you put the right people in the right places.”
If I need mentorship or if I need any approval of my work, I will go to SkillPal first to get a mentor. The person will send me a bite-size video or a live video call, and through that I will improve my work. There are many platforms in India and many people who may provide us with knowledge for free, but SkillPal has the best men and women who can mentor us at the right time with the right clues. The purpose of mentoring is to tap into the existing knowledge, skills, and experience of high-performing employees and transfer these skills to newer or less experienced employees in order to advance their careers. | https://medium.com/@chatterjeesayantan62/key-roles-of-a-tech-startup-and-benefit-of-skillpal-86a9e87f4649 | ['Sayantan Chatterjee'] | 2020-12-08 05:08:27.332000+00:00 | ['Skillpal', 'Tech Start Ups', 'Growth', 'CEO', 'Technology'] |
1,210 | Alexa, I am your Father. | Papa didn’t wait for answers. He went to work. Waking up bright and early the next day with the Robo Remote 5.0 in hand and Achinto (our friendly neighbourhood electrician) as backup. They were to use the Sunday to synchronise the three systems involved and break new ground. Mom was naturally a bit skeptical about this entire endeavour (as was Achinto, he’s an old school man) and the tough negotiation eventually settled on one room in the house — the living room (since it had maximum lights and scope for creativity. The bedroom was strictly out of bounds).
Having worked through the day, by evening Dad had a crafty smile on his face. The kind of smile seen after a genuine adrenaline rush. He invited Mom and the rest of us to the living room for a live demo.
Sitting at one end in his favourite chair and pyjamas, Dad casually began the performance, as if a magician in his opening act.
Alexa, Dad said affectionately, switch on the North light. Boom, a light above my head switched on. Dad didn’t respond. Alexa, he said, please switch on the South light. Boom, another light went on above his head on cue. His eyes twinkled. Alexa, let there be light! he bellowed, as if Moses himself were parting the Red Sea. All the lights and bulbs came on in a flash of pretty impressive technical ingenuity. Oh, and one more thing, Dad said, snickering away. Alexa, let’s see Party Mode.
The lights suddenly went off. Then they came on, one by one, triggered by time delays for extra effect. The bulb, then the tube light, then the corner lamp, then the table lamp and finally, with a Shakespearean beat, the AC started humming. Dad leaned back, waiting to take it all in.
This performance soon became a routine affair for new family members over dinner, with Dad, like a professional stand-up comedian, throwing in new bits every time. I noticed that the bulbs of the corner lamp now had 5 different colours synced to the time of day, the garden lights now came on and off on their own, and the Diwali lights went into series and parallel mode based on Dad’s mood and the pollution levels.
Things seemed to be going rather well, and the smart home was finally alive and kicking. Other than the occasional confrontation between my Mom and Alexa (she didn’t really like my Mom’s accent), which would be fine in most situations, except that this debate was about putting on the fan after the morning pocha had been done, which, as you would know, is a non-negotiable item in the domestic scheme of things.
Still, this could all be managed well enough, but then the damn Chinese attacked. | https://medium.com/@yudhishthiragrawal/alexa-i-am-your-father-2295e32ed654 | ['Yudhishthir Agrawal'] | 2020-12-23 14:13:01.209000+00:00 | ['Dads', 'Writing', 'Humour', 'Technology', 'Alexa'] |
1,211 | Are you tired of proprietary IT tools? | Are you tired of proprietary IT tools?
Then, my friend, we are in the same boat. Adopt GNU/Linux and Free and Open Source Software (FOSS) as early as possible. You won’t regret it, I guarantee. I adopted GNU/Linux and FOSS as early as 1999. I seldom (if ever) use a single piece of proprietary software. My computing life has become so smooth and secure. I never experience malware and virus attacks. And I spend zero dollars on software!
Source: lewing@isc.tamu.edu Larry Ewing and The GIMP (GIF animation is by the author)
If you are interested in joining me on this learning journey, please don’t hesitate to write a comment or ask me any questions here, on Twitter or LinkedIn. Cheers, Debesh Choudhury. | https://medium.com/technology-hits/are-you-tired-of-proprietary-it-tools-8f04556ec9ee | ['Debesh Choudhury'] | 2020-12-19 07:20:23.681000+00:00 | ['Open Source Software', 'Software Engineering', 'Technology', 'Self Development', 'Linux'] |
1,212 | Re-thinking the ‘Good’ in ‘Good-Cheap-Fast’ | In physical systems, we tend to have a fair appreciation of what Good-Cheap-Fast trade-offs look like, and they often feel reasonably linear. Our choice of stewing beef cuts range from corner-store beef chuck to oyster blade from an organic shop. Some cuts tenderise in one and a half hours, others need six hours of love. Often in navigating the trade-offs of a system, we are willing to relent on the aesthetics (rendang is definitely an ‘Ugly Delicious’ food), or some other element deemed not fully necessary for the system’s function.
With data, it’s different
Data-driven solutions are the rendang of building tech. So. Many. Ingredients: when we contemplate data-driven solutions, the decisions we make involve not only the construction of technological components, but also their maintainability, scale, reliability, testing, hardening, and ultimately, longevity in use.
A common pattern in data-driven solutioning is to associate ‘function’ (the ‘Good’ in Good-Cheap-Fast) primarily with infrastructure components; everything else is deemed ‘form’. Faced with the need to deliver value promptly, a common mantra is to ‘build lean’ and enable agility to meet changing business demands.
The first elements removed from delivery scope are often testing, documentation and maintainability plans, as the size of averted cost is unseen — the Prophet’s Dilemma is real. So minimum viable solution roadmaps are often focused on building the thin-slice pipelines: get data, transform and land data, sit models on top of data, sit interfaces on top of model outputs → voila, insights! Everyone is (very quickly) a winner.
Up to this point, our linear worldview of Good-Cheap-Fast seems to work just fine. But adopting this view supposes the data world obeys the same rules as the physical world — which in my view is a misplaced assumption. Data changes state and form across time dimensions, ebbing and flowing to the non-linear stochastics of human behaviours applied to technology. A common challenge is the re-mapping of source-to-target transformations as new tools are released — such as the introduction of new customer interfaces necessitating full ETL re-builds to ensure data warehouse tables reflect the same customer information pre- and post-launch.
Replace “algorithms” with “data”… source: https://xkcd.com/1831/
The highly variable, tightly wound nature of such tech-enabled solutions means thin-slices can be brittle in the face of moving business parts. And in the absence of automated testing, documentation or change plans, remediation can be a disproportionately expensive exercise, not to mention also a blocker to scale.
Re-thinking the ‘Good’ in Good-Cheap-Fast
The point is the old adage of “Good, cheap or fast: pick two” isn’t as straightforward when it comes to delivering data-powered solutions. The need for speed in competitive conditions can be a barrier to seeing past point solutions; further, cost-benefit-backed, scalable roadmaps are notoriously difficult to write. But the conversations in my team have me optimistic about where we are headed.
We’re currently talking about what a ‘good’ minimum threshold actually looks like. Not all scope-cost-time options are actually viable in data, because ‘good’ (or quality) isn’t exactly on a linear scale. Data that is 50% accurate probably has no value, data that is 80% accurate might have some value, and data that is 100% accurate may be worth its weight in gold — assuming intelligence and healthy skepticism in its use. The joy of unveiling a beautiful interface quickly devolves into fury if its numbers are wrong; so we’re thinking not only about developing minimum viable components, but also minimum responsible disciplines.
A real challenge in the matter is just how counterintuitive (read: un-fun) data disciplines can feel. Few things seem more obtrusive than the Data Fun Police, who at each business request for a new feature responds with a catalogue of Do-Nots-in-the-Data-Lake. In truth, interests must align on both sides; with every Do-Not, there should be What-About-This-Instead; and working rhythms should facilitate two-way value exchanges. For example, new work should be accompanied by a considered re-ranking against existing commitments; also, if our data engineer says that new feature will break an existing pipeline — it’s probably a good idea to consider alternatives.
At a broader level is the question of what a ‘good’ solution itself means — for example, what hardening in the ‘thin slice’ do we spend time on now, versus other activities that can wait? The motivation here is tech and operational debt, commensurate with digital opportunities — tends to grow exponentially. Small delays in prevention today can lead to much larger missed opportunities in a year — such as Google Research has found in machine learning. Still, it would be naive to put delivery completely on ice to search for some non-existent ‘perfect’ target state; rather, we need to continually refine the way we stage investments in data, disciplines and culture.
As a data team at Reece we are continuously examining not only what we build, but how we choose what to build. I am excited that as we continue shaping what ‘good’ looks like, we’re making better choices about how we build — hopefully towards a hundred years (and then some) more of improving the lives of our customers and people. | https://michellejoylow.medium.com/re-thinking-the-good-in-good-cheap-fast-db414073e9a7 | ['Michelle-Joy Low'] | 2020-12-08 20:40:04.238000+00:00 | ['Data Science', 'Data', 'Technology', 'Project Management'] |
1,214 | How to Mine $TTT-test on TrustNote2 Testnet — Beta 0.1 release | We are so excited that the TrustNote2 Testnet — Beta 0.1 release is launched today. Compared with the TrustNote 2.0 Alpha release, six significant changes/improvements are implemented in this release, such as:
A deposit smart contract must be created after the miner receives his/her $TTT-test;
The mining difficulty is adjusted according to the amount of deposit put down by the miner;
The deposit can be withdrawn when the miner decides to stop mining.
Then the questions are: how do you create the deposit smart contract, and how do you put down a deposit and get it back? Do you get $TTT-test the same way as in the Alpha release?
In this tutorial, we will answer all these questions and walk you through setting up mining on TrustNote2 Beta 0.1 release.
Operating System
Windows 10 (x64), OSX 10.13.6 (Mac), Ubuntu 16.04, Ubuntu 18.04
Start Mining
Setup Your Super Node
Please select and download the mining client from https://github.com/trustnote/trustnote-pow-supernode/releases according to the operating system you are using.
Unzip the downloaded package
Run the executable ‘Start’ file to start the mining service.
On the first run, the mining client will create your Super Node address; please save this address before pressing any key to continue.
Click the ‘Start’ file again to re-run the mining service. Now the mining program starts running on your machine and starts data synchronization. Please keep this window open all the time!
Create Deposit Smart Contract
From the Beta 0.1 release onward, a smart contract used to lock in your mining deposit must be created. When the smart contract is created, the deposit address is generated at the same time.
So how do you create the deposit smart contract and get its address? It is quite easy! After data synchronization is finished, run the executable ‘deposit’ file to create the deposit smart contract, and the deposit address will be printed on the command terminal. Please save this address; you will need it to put down your mining deposit if necessary.
* Congratulations, your mining rig is up and running now!
FAQ
How to Get TTT Test Notes
To start mining TTT-Test Notes, you will need a small amount of TTT-Test Notes to pay for your initial transactions on the TrustNote 2.0 testnet.
Email sunny.liu(at)trustnote.org with your super node’s wallet address, and we will gladly send you 10 MN (million notes) of TTT-Test Notes. That is more than enough to cover the small initial transfer fee for you to start mining.
How to Put a Mining Deposit into Your Deposit Address
If you have TFans, please email sunny.liu(at)trustnote.org and we will help you exchange your TFans for TTT-test and transfer them to your deposit address. For each mining machine, the larger the deposit, the lower the mining difficulty.
How do I know how many $TTT-test I have mined?
When you see ‘Coinbase Reward’, congratulations: this means your node was selected as an attestor and obtained a reward at round ‘current round -2’, Round 159 in this example. ‘Coinbase Reward’ is the reward your node obtains, measured in ‘notes’. ‘Balance’ indicates the total balance of the node: ‘stable’ means it has been confirmed, while ‘pending’ means not yet confirmed. And ‘Accumulated Reward’ is the accumulated gain.
For more details, you can also check the TrustNote2 testnet explorer with your address.
How can I manage the $TTT-test I have mined by using test-wallet?
The mining program itself is also a wallet, and the default Coinbase Reward address is your mining address. From the beta release, we provide a test-wallet to help you easily receive and manage your rewards and deposit.
Download and install TrustNote testnet 2.0 beta1 test-wallet from https://github.com/trustnote/trustnote-pow-wallet/releases according to the operating system you are using.
Change the default Coinbase Reward address to your test-wallet wallet address in the conf.js file.
For example, change ‘exports.coinbase_address = null;’ to ‘exports.coinbase_address = ‘WB2MAANTWOLEWBE6IPSLKMY7AFB6AJ2H’;’
If you put down any deposit during mining, you can withdraw it when you stop mining; the deposit will be returned to your deposit guarantee address. The default deposit guarantee address is your mining address, but you can change it to your test-wallet by changing two parameters, ‘exports.safe_address’ and ‘exports.safe_device_address’, in the conf.js file: ‘exports.safe_address’ is your test-wallet wallet address, and ‘exports.safe_device_address’ is your test-wallet device address. For example:
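(The values below are placeholders of my own, reusing the wallet address from the earlier example; substitute your actual test-wallet wallet address and device address.)

exports.safe_address = 'WB2MAANTWOLEWBE6IPSLKMY7AFB6AJ2H';
exports.safe_device_address = '0YOUR-TEST-WALLET-DEVICE-ADDRESS';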
Please keep in mind that if you change the configuration in the conf.js file, the mining program needs to be restarted. | https://medium.com/ringnetwork/how-to-mine-ttt-test-on-trustnote2-testnet-beta-0-1-release-669ec32001c4 | ['Sunny Liu'] | 2018-12-14 10:45:05.569000+00:00 | ['Cryptocurrency Mining', 'Blockchain Technology', 'Technology'] |
1,214 | How to Become a Great Software Engineer | Value Your Relationships With Others
No matter how hard you work, your relationships are extremely important.
No amount of money can buy the time you spend with your family or loved ones.
Sacrificing that time and instead simply trying to focus your whole life on software engineering will make you less happy, which can also make you feel less connected to others and more miserable. As a result, you will likely get fewer things done.
As the founder of YC Combinator Sam Altman puts this:
“Don’t neglect your family and friends for the sake of productivity — that’s a very stupid tradeoff (and very likely a net productivity loss, because you’ll be less happy).”
In the end, we are human beings, not human doings.
We need time to recharge and connect with our loved ones.
Professional relationships
Whether or not you are the only engineer working on a project, you must not let your ego cause you to think only of yourself.
For engineers, ego is the worst enemy.
Some engineers tend to think win/lose — that others have to lose for someone to win. You think you are going to get promoted or get a raise if your colleagues look bad in front of your manager.
As a result, you will orient your actions toward showing off as much as you can, taking credit without merit, or belittling your colleagues, all to represent yourself as superior.
This will not only give you a superficial type of contentment (as you are continuously trying to belittle your colleagues), it’ll also probably damage your team productivity as a whole, as you and your colleagues will not be willing to work together at all.
If you want to be an effective engineer, you need to set yourself apart from what the majority of people are doing.
In every situation, you should try to reach the best alternative for the whole team.
You should think about mutually beneficial solutions that will ultimately lead to a better long-term resolution, rather than if only one person in the situation gets their way.
Cultivate the habit of asking yourself questions like these: “What’s in it for them that I can also benefit from?” “How can we both get some portions of what we want without damaging our relationship?”
You may sometimes have misunderstandings and short tempers with your colleagues, but this does not mean that the harmony and the relationships you have will be destroyed.
Your ability to deal with people is one of the key areas in your life.
Avoid arguments as much as you can and focus on long-term benefits.
When you begin with the end in mind, you do not want to be alone, even if you reach your biggest goals. Living on a big island with no one nearby is probably an adventure you do not want to experience.
There is no doubt that large projects are developed by a lot of people that work as a team.
Bill Gates started Microsoft with Paul Allen, who even came up with the name Microsoft.
Elon Musk created his first IT company, Zip2, with his brother Kimbal, and he then sold the company to Compaq for $307 million in cash and $34 million in securities. Instagram was initially started by Kevin Systrom and Mike Krieger.
Amazon has more than 341,000 employees. Microsoft has more than 120,000 employees.
Google, after tons of research on building the perfect team, has found that some of the most productive teams were the ones with an environment that cultivated synergy and embraced psychological safety.
Harvard Business School professor Amy Edmondson describes psychological safety as “a sense of confidence that the team will not embarrass, reject, or punish someone for speaking up.”
Everyone has the freedom to contribute, collaborate, and get involved in everything they want. Whenever you find yourself part of an argument with a colleague, try to understand their core interests. Address those interests so that the discussion becomes reciprocally beneficial and productive.
In this way, your team can be a lot more productive by taking advantage of the strengths of every member. Teamwork is greater than the sum of its parts.
If you want to be an effective programmer, try to help your team become more synergetic. That can be done by valuing every teammate’s freedom and helping them feel comfortable expressing different points of view without fear of embarrassment. Allow everyone to contribute to and collaborate on whatever they find interesting and worthwhile.
Be very kind.
Say thank you and please a lot.
Praise other people’s efforts.
Apologize as soon as possible. | https://medium.com/better-programming/how-to-become-a-great-software-engineer-dbb0373ec771 | ['Fatos Morina'] | 2020-10-20 16:08:42.151000+00:00 | ['Machine Learning', 'Software Development', 'Technology', 'Software Engineering', 'Programming'] |
1,215 | I Fired One of My Programmers 15 Days After Hiring Them | Why I Fired Him
The problem started from day one, but our CTO cut him some slack. As he was new, we thought he would need some time to catch up with our project.
After three days, our CTO told me that something was not right about him. He could not do simple tasks. He did not understand the basics of our project. Even our junior programmer could do it easily.
So, we decided to give him more time.
After ten days, he was assigned a simple task with some junior developer. He couldn’t guide them and was getting uncomfortable in the office.
Junior programmers also complained about him to our CTO.
Then we started to realize he lied on his resume. We didn’t tell him this, but I think he understood.
Then after 15 days, I called him into my office and told him we had to let him go for some reason. I sensed that he actually felt relieved!
I felt bad for him and tried to help him
Though he lied and we could easily prove it, we didn’t, because part of the fault lay in our hiring process and we didn’t want to disrespect him!
It was still hard for me to fire him. I tried to help him as much as I could. I offered him a junior developer position, but accepting it was not easy for his self-respect. I understood that.
My CTO discussed his shortcomings with him in detail, along with how he could improve. He also gave him some good resources for improving his skills.
Now we have added another layer to the hiring process to avoid this type of event in the future. | https://betterprogramming.pub/i-fired-one-of-my-programmers-15-days-after-hiring-them-5316e9337ec4 | [] | 2021-05-13 10:14:42.411000+00:00 | ['Data Science', 'Programming', 'Software Development', 'Technology', 'Startup'] |
1,216 | Healthcare is not a right | Americans have a right to life, liberty and the pursuit of happiness.
Senator Bernie Sanders has suspended his campaign for the Democratic nomination for the President of the United States. Even so, he strongly supports having the federal government provide health insurance for every American. He says healthcare is a right. He is wrong.
A right is generally defined as “a moral or legal entitlement to have or obtain something or to act in a certain way.” Healthcare is not an entitlement, nor should it be. According to the Universal Declaration of Human Rights, there are numerous rights humans have, but the right to healthcare is not included.
The Declaration of Independence of the United States says all Americans have a right to life, liberty and the pursuit of happiness.
Nowhere in any definition of rights is there any mention of the right to have health care. The rights mentioned above all deal with the rights that individuals should have in their daily lives and in their pursuit of happiness. There is no mention of a right that requires other citizens to pay for something some citizens can’t afford to pay for themselves.
Most people, in most societies, tend to be compassionate. In the US, compassionate Americans are willing to give some of the income they earn to those, who for whatever reason, have not earned sufficient income to pay for the necessities of life.
Through the government, compassionate Americans will agree to have the government use some of their earnings to ensure all Americans have at least some income. There are specific government programs where income earners have agreed to allow the federal government the ability to take some of their earnings and fund a welfare program, a food stamp program, a housing program and some other programs.
These income transfer programs were never intended to imply that any and all Americans have a right to food stamps or a right to welfare or a right to Section 8 housing allowances. The programs are not rights. They exist because of the compassionate generosity of fellow Americans.
Sanders wants to go beyond that. He says that all Americans have a right to basic income, food and shelter. The truth is these are not rights. These are programs that the majority of compassionate Americans freely choose to support.
Health care is expensive. Ideally, Americans would like to see everyone covered by health insurance. But the price for that is too high. We tried the Affordable Care Act (ACA) which increased the percentage of Americans with health insurance from 85% of the population to 91%. That means 6% of the population, about 20 million people, benefited.
Meanwhile, 275 million Americans had to pay more and received less care. While about half of Americans just accepted this, the other half strongly objected. Now the ACA may be declared unconstitutional by the Supreme Court as early as this summer. That means we will have to debate health care policy again.
If healthcare were given to all Americans who couldn’t afford it, the majority would have to pay even more for their healthcare and receive even less. This is not an ideal situation. Eventually, the quality of healthcare would decline, and those who do pay would object.
President Trump has a better solution. Instead of simply giving healthcare to all Americans, Trump wants to give every able-bodied American the opportunity to earn enough income to pay for their own health insurance. That’s why Trump concentrated on economic growth and providing opportunity.
His policies paid off, since, by the end of 2019, the unemployment rate was at a historical low, meaning more Americans had jobs and could pay for their own health care.
By declaring healthcare coverage to be a right, many people will not be concerned with earning enough income to pay for it. Rather people will know they can get coverage without paying, so many will simply not pay. That creates yet another burden on those of us who do pay for our healthcare.
Americans have a right to life, liberty and the pursuit of happiness. Beyond that, those in need will have to rely on the compassion of their fellow citizens. There is, however, a limit for how compassionate Americans can and should be.
Michael Busler, Ph.D. is a public policy analyst and a Professor of Finance at Stockton University where he teaches undergraduate and graduate courses in Finance and Economics. | https://micbusler.medium.com/healthcare-is-not-a-right-e219aa7a661a | ['Michael Busler'] | 2020-04-15 20:35:33.625000+00:00 | ['Technology', 'Business', 'Government', 'Politics', 'Health'] |
1,218 | Investing In Business Intelligence Software | A wide range of industries are starting to lean into a data-driven culture, focusing on data as one of their most important resources for decision-making.
Photo by Campaign Creators on Unsplash
It is a fact that every company, no matter its size, from start-ups to more established companies, manages different types and volumes of data. Data can talk, but how can we read it?
When it comes to data, we must understand that it starts out raw, so it needs to be cleaned, organized, and analyzed in order to provide us with information we can communicate. All these processes take time, and as we know, time is non-refundable and also represents a cost.
So, what if it were possible to access all of a business’s data in real time? Try to imagine accessing the right information in a matter of seconds and making faster and better decisions. This is what business intelligence software is about. A business intelligence tool can provide much-needed information about a business without relying on IT, in a short period of time and with the right visualizations that will help us digest the data and take insights from it.
The Benefits Of Business Intelligence Software
Photo by Monty Rakusen on Getty Images
Real-time business intelligence software helps companies access huge volumes of varied and sometimes complex data in a fraction of a second, in a much easier way. One of the big benefits of BI tools is that they are highly accessible. These tools allow users to see different types of graphs, dashboards, and visualizations on every kind of device, from any screen you may have in the office to mobile phones. No matter when or where, the data will be available in real time.
Real-time business intelligence also helps improve decision-making by giving a more detailed understanding of all the business numbers, metrics, and key performance indicators. Having all these daily, weekly, and monthly KPIs and contrasting them with previous time periods, short- and long-term trends, historical data for patterns, etc., allows us to dive deep into the data and extract valuable insights from it. This will lead not only to improved metrics performance but also to finding possible gaps and areas for improvement.
Photo by Luke Chesser on Unsplash
Besides uncovering new market opportunities, going deeper into the data also helps identify outliers. These anomalies can hide behavioral trends that would otherwise go unseen, so discovering them makes it easier to find the answers we may be looking for. Information can provide answers that explain the reasons behind issues or problems the business has but hasn’t yet diagnosed.
Another point worth highlighting is that most BI tools are self-service software. This means that everyone in an organization can access crucial business data without requiring deep technical knowledge and without relying on other company towers. This gives the final user much more independence, saving a lot of time and resources.
Why Invest In Business Intelligence?
Not everything is about fast analysis, intuitive visualizations, and data-driven decisions. It is also a matter of costs.
As mentioned before, it is a matter of reducing costs in the short term. Of course, buying and deploying business intelligence software requires an initial investment, but it will be reflected in an increase in business performance and efficiency.
Therefore, the initial payment will be transformed into a faster route to the goals the company is aiming for, which will translate into a significant increase in profits.
How?
By understanding our data better, we will understand our business better. This means we will uncover opportunities, as well as areas with room for improvement, cutting costs and, yet again, earning more money.
When to start investing in Business Intelligence Software?
If there is a need to analyze reports to find points of improvement, grow a certain business and, of course, somebody to consume the data in those reports, it could be a first signal that a business intelligence software is needed. Business intelligence tools can help scale a business.
A second reason could be the volume and variety of data sources a business has. Sometimes data does not come from the same source, so to analyze and show it all together, it has to be integrated in a single place. This is also possible with a BI tool.
Last but not least, spending a lot of money on advertising without analyzing its impact can also be an alarm. Maybe it is time to spend less on advertising and start thinking about investing some of that budget in business intelligence.
Conclusion
Having real-time data will not only improve decision-making but also help you take more proactive actions instead of being reactive and moving more slowly. It is vital for every company to understand its data, because this helps it better understand customer behavior, make better decisions, and learn from mistakes committed in the past in order not to repeat them in the future. With real-time business intelligence software, all this is possible.
There are many different types of business intelligence tools, from cheaper and simpler ones to others more expensive and complex. It is all about finding the one that best fits your business’s possibilities and needs.
Sometimes we feel like we need to make our business more data-driven, but we don’t know where to start.
For me, here is the start…
Let’s take the next step! | https://medium.com/digital-diplomacy/investing-in-business-intelligence-software-c4b489c42edf | ['Martina Burone Risso'] | 2020-10-19 12:18:42.651000+00:00 | ['Investing', 'Technology', 'Business Intelligence', 'Data', 'Data Visualization'] |
1,218 | 5 Programming Projects to Get you Through the Darker Months | Introduction
As Game of Thrones character Jon Snow would put it, “Winter is coming”. As we edge closer to the holiday period, the days are becoming shorter and the nights longer. This presents the perfect opportunity to find ways to keep yourself busy inside. As the COVID-19 pandemic makes it difficult to catch up with friends, I thought it would be a nice idea to investigate ideas for how to stay busy if you’re by yourself.
That’s why I created a list of five programming projects that can be worked on right now to help get you through the dark months. Working on one — or multiple — of these projects is an amazing way to improve yourself as a programmer. You get the opportunity to broaden your skillset by trying out new techniques or programming languages.
The projects:
1. Smart thermometer with Arduino
2. A hobby catalog
3. An idle game as PWA
4. A stock prices predictor
5. Creating a virtual reality world
1. Smart thermometer
Photo by Harrison Broadbent on Unsplash
Let’s imagine that you want to measure the temperature in your room but don’t want to use an old-fashioned thermometer that you can buy at the store. No, you want to build a smart thermometer, one that puts the measurements into a dashboard and allows you to monitor the temperature in your room over time. How might you do this in a simple way?
The simplest way to build this is by making use of an Arduino with a Wi-Fi module attached, measuring the temperature via a thermistor. The resistance of the thermistor is transformed into a temperature, which is then sent via the Wi-Fi module to a server.
A good way to store this data is by making use of InfluxDB, a time series database, which is excellent for these kinds of tasks. InfluxDB also provides an API endpoint that you can send your data from your Arduino to. Then, all you need to do is visualize the data! This is easily done with Grafana. You can host Grafana together with InfluxDB on the same server to make it even more convenient. Make some useful graphs and ta-da! You've got yourself a smart thermometer.
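On the device itself you would typically write this in the Arduino IDE, but the write is ultimately just an HTTP POST. Here is a minimal Python sketch of that request against the InfluxDB 2.x write API (my own illustration; the URL, token, org, and bucket are placeholders):

import time
import requests

# Placeholder connection details; adjust to your own InfluxDB 2.x setup.
INFLUX_URL = "http://localhost:8086/api/v2/write?org=home&bucket=sensors&precision=s"
TOKEN = "my-secret-token"  # hypothetical API token

def write_temperature(celsius):
    # InfluxDB line protocol: measurement,tag=value field=value timestamp
    line = "room_temp,location=bedroom value=%.2f %d" % (celsius, int(time.time()))
    response = requests.post(
        INFLUX_URL,
        headers={"Authorization": "Token " + TOKEN},
        data=line.encode("utf-8"),
    )
    response.raise_for_status()  # InfluxDB replies 204 No Content on success

write_temperature(21.5)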
What you’ll learn:
You get some experience with a form of IoT.
You get some experience with setting up the applications on your server and then creating graphs from the received data.
2. Hobby Catalog
Photo by Niels Kehl on Unsplash
If you’re reading this article, you probably already have a hobby. It’s programming of course! But programming is likely not your only hobby. So why not create a catalog app for one of your hobbies? Take cocktail recipes for example. Many people like to drink cocktails on the weekends, but as you can’t go to bars right now you’ll need to create your own cocktails. An interesting idea would be to create a great catalog for your recipes to quickly find which cocktails you liked and what ingredients you needed for them.
You can make this app as advanced as you like. It’s easiest to start with a static app and some photos, titles, and descriptions of your hobby items. After you’ve finished your basic catalog you can look into uploading your new items or even authentication.
What you’ll learn:
Creating an app in a language of your preference: Swift (iOS), Java (Android), or even Xamarin for cross-platform
Designing mobile screens for a catalog
3. Idle Game
Photo by SJ . on Unsplash
It’s not always necessary to create an actual app when you want to create a game. Most of you will probably know Cookie Clicker. Cookie Clicker is an incremental game — also known as an idle game — where you earn cookies by clicking on a big cookie. A couple of months ago I decided to create an idle game myself. As I was eager to learn a new technique, I decided to create a PWA (Progressive Web App).
This project is perfect for you if you want to experiment with one of the popular JavaScript libraries (React, VueJS, or Angular) and combine it with a PWA.
A tutorial video of creating an application like this in VueJS can be found here.
What you’ll learn:
Working with a reactive website and PWA techniques
Creating your own progressive game with different difficulty levels
4. Stock Prices Predictor
Do you like artificial intelligence and big data? Then this is the perfect project for you! Predict the price of a company's stock using machine learning and Python. Of course, it's nearly impossible to predict the future; even people with a good understanding of statistics and probabilities have a hard time doing this. But this nice little project teaches you hands-on Python with the opportunity to check whether the stock market moved as you expected.
You could follow a video that explains how to create a model for an example company, in this case Netflix, and extend it or swap in the stocks you would like to predict.
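As a minimal sketch of the idea (my own illustration, assuming the yfinance and scikit-learn packages; a toy model, not financial advice), you could fit a linear regression that predicts the next close from the previous five closes:

import yfinance as yf
from sklearn.linear_model import LinearRegression

# Example ticker and date range; swap in the stock you want to study.
closes = yf.download("NFLX", start="2019-01-01")["Close"].dropna()

# Lagged features: predict each close from the five closes before it.
lags = 5
X = [closes.iloc[i - lags:i].values for i in range(lags, len(closes))]
y = closes.iloc[lags:].values

model = LinearRegression().fit(X, y)

# One-step-ahead guess from the most recent window of closes.
print(model.predict([closes.iloc[-lags:].values]))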
What you’ll learn:
Implementing a model with Python to predict stock prices
The basics of AI and reinforcement learning
5. Virtual Reality World
Photo by JESHOOTS.COM on Unsplash
The popularity of virtual reality (VR) is constantly growing and continues to push the boundaries of our imagination. People grew tired of ordinary 2D media content, so virtual reality is being called upon to deliver information in a new way and engage audiences. If you like the idea of creating your own 3D world, and maybe even finding a new passion, then keep on reading.
It's a little harder to get started with a VR project (as you need a VR headset to test it out in real life). But nowadays, virtual reality headsets are a lot more affordable. The newest Oculus Quest 2 starts at $299.00. A big advantage of the Oculus Quest is that you can use it on its own.
There are two main programs that I highly recommend for creating a virtual reality world: Unity and Unreal Engine. Both programs provide enough tutorials to get you started on creating your own amazing creations.
What you’ll learn:
Working with 3D objects to create your own world
Getting a grasp of working with VR programs like Unity and Unreal Engine
Conclusion
I hope that you enjoyed this story and that you found a project that seemed interesting to you. It’s time to get your hands dirty and start today! If you work on something you really like then you’ll improve your skills the most. You could also try to create a project with two different programming languages and see which one you like more.
I’m looking forward to seeing the results.
Happy coding! | https://codeburst.io/5-programming-projects-to-get-you-through-the-darker-months-710c0486a13f | ['Koen Van Zeijl'] | 2021-03-26 19:24:08.179000+00:00 | ['Startup', 'Education', 'Projects', 'Technology', 'Programming'] |
1,219 | #13: Amazon Takes on Fashion; Driverless Cars; Poetry at 37,000 Feet | © Luke Hayes
This week, we visit the London Design Museum’s exhibition California: Designing Freedom, which chronicles Silicon Valley’s influence on our lives. And, in considering how much technology has saturated our day-to-day, we look at how driverless cars will change the landscape of our cities, and if Amazon can conquer high fashion in a digital space. Next, we review both high and low-brow offerings in horror movies to surprising results, and share a personal reflection on celebrated poet John Ashbery’s recent passing.
The show ‘California: Designing Freedom’, now up at the London Design Museum, is a considered examination of the tech capital.
The introduction of autonomous vehicles could change the city in ways we maybe haven’t contemplated.
As the retail giant’s find. fashion brand launches its first ad campaign, the venture may find it has to take more than the usual steps to find an audience.
The new Darren Aronofsky film starring Jennifer Lawrence is full of shocking material that is also hard to turn away from.
There are some chilling moments in the cinematic remake of ‘IT,’ but the film ultimately fails to deliver… here’s why.
A personal reflection on the celebrated poet who passed away on September 3 | https://medium.com/the-omnivore/12-amazon-takes-on-fashion-driverless-cars-poetry-at-37-000-feet-6a05b7d138d1 | ['Culture Trip'] | 2017-09-22 14:20:22.461000+00:00 | ['Poetry', 'Fashion', 'Technology', 'Review', 'Film'] |
1,220 | Your A.I.Sommelier is Here! | If, like me, you have a penchant for wine, then your palate will soon be guided by an A.I. sommelier. Artificial intelligence has crossed the threshold for recommendations when it comes to our tastes. Prior to recognising what delights the tongue, platforms like YouTube, Amazon and Spotify led the way in personalising our likes. They rely on our senses of sight and sound when we select a video, book or song. Our selections build digital fingerprints of what we find appealing. Then predictions are made on the basis of them.
However, wine and all other food and drink are primarily based on our taste buds. Each of us is unique, and our physical tastes are very subjective. For instance, do you prefer wines that are crisp, earthy, dense, oaked, silky, complex, or velvety? These are just a few descriptive examples. Yet, until recently, it would have been extremely difficult to map and distil these nuances into qualities and then use them to make recommendations repeatedly. This was due to the need for an extensive volume of controlled testing and large numbers of people with varied preferences, in addition to taking account of all the chemical components that form part of the winemaking process.
Nonetheless, Katerina Axelsson, a chemist who founded Tastry, has solved the problem. She pierced through the maze of taste bud roulette by teaching a computer to taste wine. Being immersed in the wine industry, she recognised that it lacked coherent decision-making data. Her epiphany occurred when a wine critic reviewed the same wine under two different labels and scored them differently. Katerina said,
“I had a hypothesis that you could objectify sensory characteristics by creating a new flavour and analytical chemistry methodology that would measure products the same way a human palate does. And that this objective data could provide predictive visibility throughout the supply chain.”
Voila! By harnessing A.I., analytics and taste chemistry methodology, Katerina was able to gain insights into the flavour composition of wines. In turn, this helps us as consumers buy wines based on our favourite flavours. Tastry's algorithms have a 93% accuracy rate in predicting how consumers will score a wine.
This taste bud technology is gaining traction in America. It is already outperforming other tried and tested wine-buying methods by 45%. Tastry is currently being trialled in the U.K. and will be rolled out across Europe later this year.
For those of you in the wine hospitality sector, you’ll be able to download an app called Bluebird, to receive wine recommendations. | https://medium.com/@bybreensamuels/your-a-i-sommelier-is-here-b8c7bdb8e56f | ['Bybreen Samuels'] | 2021-06-17 15:05:37.604000+00:00 | ['Supply Chain', 'Wine', 'Artificial Intelligence', 'Machine Learning', 'Retail Technology'] |
1,221 | Colorado lawmakers pass law to use blockchain for water management |
Colorado lawmakers have passed a bill that serves as a foundation for the study and use of blockchain technology in water management. The bill also encourages the exploration of other emerging technologies that may help improve water management in response to the worsening drought. The bill, dubbed House Bill 21-1268 and passed by both chambers of the Colorado legislature, is now awaiting Governor Jared Polis's signature to become law.
Image by Tumisu from Pixabay
Forty-five legislators sponsored the bill, which permits and directs the University of Colorado and the Colorado Water Institute to explore blockchain and other emerging technologies to improve the monitoring and management of Colorado's water systems. Specifically, the institutions are encouraged to use "blockchain-based documentation, communication, and authentication of data regarding water use; fulfillment of obligations under Colorado's system of prior appropriation, including augmentation plans; and water conservation."
Lawmakers hope that using such technologies would help reduce inefficiencies and waste in water management, as well as provide data that are more transparent and reliable. The University of Colorado is expected to provide both a written report and a live testimony on the results of their feasibility studies and pilot deployment on or before July 15, 2022.
Source: CoinGeek | https://medium.com/cryptocurry-official/colorado-lawmakers-pass-law-to-use-blockchain-for-water-management-c8313a9a9f82 | [] | 2021-06-28 01:18:01.780000+00:00 | ['Blockchain', 'Blockchain Technology', 'Blockchain Application'] |
1,222 | Digital Acceleration from the Old Economy to the New Economy in the post Covid era | The current Covid-19 crisis has provided a view into the future: a world that will be primarily driven by digital technologies. Digital will be central to every interaction, whether within the organizational domain or while delivering value to the customer. And organizations will have to align to this new reality overnight. Digital channels will become the primary way to interact and collaborate with other businesses; they will drive customer engagement models and play a pivotal role in defining the value proposition for customers. Most supply chains will be transformed using digital technologies. Customer behaviors will be significantly impacted by the digital experience, and organizations will find new ways of working to respond to these changes.
How will the Old Economy companies respond to these changes?
Traditional business models involving physical, tangible products and touchpoints with customers currently treat digital interfaces as secondary. With legacy processes driving their operations, they seem to be lagging behind the challenges thrown up by ever-changing customer preferences and their digital competitors.
At the moment, the digital transformation efforts within these organizations have been approached piecemeal, in a fragmented and uncoordinated manner. This has led to local optimizations within the organizations, while the larger organization has not seen significant improvements in the absence of a cohesive plan for the whole organization.
These Organizations will be tested to a much greater extent by the organizations that are born with a Digital value proposition.
Born-digital organizations have products and services that are digital in nature. Their sales channels and customer interfaces are primarily digital. Their processes are also digital — apps for their sales force, digital payment gateways, paperless offices, AI and algorithms baked into their DNA from the very beginning.
Even if there are any physical services, they are largely digitally enabled. Ex: Warehouses run by robots requiring hardly any manual intervention, factories primarily driven by automated machines.
These new-age, born-digital organizations are challenging the status quo of the existing players across the entire business landscape, be it manufacturing, retail, financial services, travel and hospitality, or many other business domains.
What does Digital transformation mean for the Old Economy organizations?
The existence of the Old Economy companies depends on their ability to adapt to the Digital model.
They need to rethink the existing value proposition in a digital way.
They need to transform the existing Value proposition using the opportunities Digital technologies offer.
The old economy giants need to pilot new digital business models, centered around the customer value proposition, that can outperform the existing ones and thrive in the new reality. And all this needs to be carried out at a rapid pace before the digital tsunami inundates them.
This digital transformation involves rethinking their value proposition to their customers, not just their operations, supply chain, sales and marketing. They need to strategize how digital technologies can enable them to create and deliver new customer value via existing and new channels.
Organizations need to break down digital transformation into the digital part, which means adopting technologies such as cognitive computing, cloud, blockchain, and analytics systems to drive business value.
Along with this, adopting a lean, product-driven model made up of cross-functional teams from customer experience, operations, design, and technology is a critical part of the journey.
Organizations also need to pay significant attention to the transformation part, which involves changing the organization's culture, people's mindsets, skills, structure, and roles. This part of the transformation is likely to take longer and generate more chaos than adopting new digital technologies.
As George Westerman pointed out, the Old Economy organizations need to leverage the use of digital technology to radically improve their performance and the reach of the organization.
New reality for Organizations in the post Covid era
Within the last decade, digital platforms have disrupted many of the traditional business models across industries; the current crisis has accelerated that disruption and will present a new "ecosystem" driven by digital in the post-Covid era. In the last six months, organizations have started accelerating their digital transformation, enabling them to improve their current offerings while also offering new value propositions.
And as organizations are preparing themselves to weather future storms, digital is driving their resiliency plans to future-proof their operations and align them around the customer. Whether it is about harnessing digital channels to supplement existing ones or further strengthening their relationships with their customers, digital is the way most organizations are trying to succeed in the new reality.
It is an opportunity for organizations to rebuild a future-ready, nimble enterprise using digital technologies that can be a source of competitive advantage. The ongoing challenges and opportunities that the current crisis has presented have given a glimpse of the new reality that organizations will have to be prepared for.
This new reality demands that organizations be bold, nimble, and continuously learning and improving. The pace of experimentation will have to outpace the rate of change around them.
Digital acceleration will provide them with the capability and resiliency to respond to these changes, which will be a constant feature going forward.
1,223 | Q.O.D. Technology and Efficiency of Use | Love this quote by Bill Gates! It’s a reminder for us to NOT get caught up in all tools and apps that can aid in automation and overall efficiency. Rather, let us first focus on fixing and optimizing “Our Processes and Frameworks”. If we don’t identify and address the root cause; then these efforts are wasted and we are worse off then when we started. | https://medium.com/@salazarallen/q-o-d-technology-and-efficiency-of-use-66a71534e478 | ['Allen Salazar'] | 2020-12-07 04:25:05.350000+00:00 | ['Minimalism', 'Design Thinking', 'Simplicity', 'Quote Of The Day', 'Technology'] |
1,224 | Here Is How You Can Apply Software Development Best Practices to Analytics Pipelines | Configuring the Big Query Credentials
You can configure the Big Query credentials using the methods described on the dbt site. Here I am using the service account JSON key method. You need to make sure the service account has following permission —
BigQuery Data Editor
BigQuery Job User
BigQuery User
Then you need to make sure the dbt profile is set up at $HOME/.dbt/profiles.yml and the details for the service account are updated as shown below —
my-bigquery-db:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: service-account
      project: my-project
      dataset: olympics
      threads: 1
      keyfile: /path/to/service-account.json
      timeout_seconds: 300
      priority: interactive
      retries: 1
You can set up profiles for dev and prod separately without any code changes. This is one of the features of dbt that makes sandboxing and environment management easy.
Creating a Project
You can create a dbt project using a simple CLI command
$ dbt init [project-name]
This command creates a sample dbt project with all the required files and folders. The typical structure looks as shown below —
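The original screenshot of the structure did not survive, but a project created by dbt init at the time looked roughly like this (reconstructed from dbt's documented defaults; exact contents vary by version):

my_project
├── dbt_project.yml
├── README.md
├── analysis
├── data
├── macros
├── models
│   └── example
├── snapshots
└── tests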
Looking at the folder structure, you can see software development principles at work: there are separate folders for tests, logs, modules, and data. This improves the overall manageability and readability of the code structure. It is also very version-control friendly and easy to follow.
You can read about the details of the folder structure and their meaning on the dbt site.
In dbt_project.yml, you need to set the profile you configured in the earlier section.
Creating Models
I am creating two simple models to manipulate the data and get some insights.
The models/example/athletes.sql looks like this —
{{ config(materialized='table') }}

with athletes as (
    select * from olympics.atheletesnew
)

select * from athletes
and models/example/players_by_country.sql looks like this —
{{ config(materialized='table') }}

with athletes as (
    select * from {{ ref('athletes') }}
),

players_by_country as (
    select country, count(*) as totalcount
    from athletes
    group by country
)

select * from players_by_country
Here you can see the dbt feature where you can define a model in one file and refer to it in other models. This helps with re-usability. Also, if you need to change anything in the base model, you can do it in one place, just like in any typical software engineering language.
Models have other features like —
Enabling/disabling a model with a flag
Using aliases
Using variables
Using tags
etc.
You can create models inside specific folders. This allows you to achieve modularity, similar to creating packages/namespaces in certain programming languages.
Documentation
dbt allows you to auto-generate documentation for the models, making it easier for the team to understand the project and collaborate better.
You can generate docs by running a command
$ dbt docs generate
You will be able to look at the documentation by running a command —
$ dbt docs serve
You can even look at the lineage information on the documentation website.
Testing
dbt allows you to do some testing of the generated models. By default, you can configure tests like a uniqueness check, a null check, referential integrity, etc. for a model. In this example, I am running sample schema tests to validate the results of the model for null checks. These tests are configured per model column in schema.yml.
You can run tests for this project by running a simple command like
$ dbt test
and see the pass/fail output for each configured test.
You can even run some complex tests as described on the dbt site.
Logs
Every time you compile and run a dbt project, it will generate detailed logs for you. In case you face any issues, these logs are quite useful for tracing back errors.
Integrating with CI/CD Pipelines
All dbt commands give proper exit codes, like 0, 1, and 2. You can use this feature in a CI/CD pipeline so that you know whether a particular step succeeded or failed.
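As a minimal sketch of that idea (my own illustration, not from the original article; it assumes the dbt CLI is on the PATH), a CI script could wrap dbt with subprocess and fail the build on a non-zero exit code:

import subprocess
import sys

def run_dbt(args):
    # dbt exits with 0 on success and non-zero (1 or 2) on failure,
    # so the return code can gate a CI/CD step.
    result = subprocess.run(["dbt"] + args)
    if result.returncode != 0:
        sys.exit(result.returncode)

# Hypothetical pipeline: build the models, then test them.
run_dbt(["run"])
run_dbt(["test"])
| https://towardsdatascience.com/here-is-how-you-can-apply-software-development-best-practices-to-analytics-pipelines-8d65ba43bc9c | ['Tanmay Deshpande'] | 2020-04-09 08:59:46.830000+00:00 | ['Tech', 'Software Development', 'Programming', 'Technology', 'Data'] |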
1,225 | Creating a Variable RSI for Dynamic Trading. A Study in Python. | This is a way to gradually weight the RSI lookback periods. Note that you can select whichever periods you want and optimize them according to your preferences. The default parameters on the Dynamic RSI (according to me) are the ones above.
Let us now see the full function that gives out this indicator before we proceed to the back-testing step. Note that you must use it on an OHLC array with multiple extra columns to be populated by the function.
# rolling_correlation() and rsi() are helper functions assumed to be defined earlier in the article.
def dynamic_rsi(Data, momentum_lookback, corr_lookback, what, where):

    # Step 1: momentum as today's price relative to the price momentum_lookback bars ago
    for i in range(len(Data)):
        Data[i, where] = Data[i, what] / Data[i - momentum_lookback, what] * 100

    # Step 2: rolling correlation between the price and its momentum
    Data = rolling_correlation(Data, what, where, corr_lookback, where + 1)

    # Step 3: map the correlation reading to an RSI lookback period
    for i in range(len(Data)):
        if Data[i, where + 1] >= -1.00 and Data[i, where + 1] <= 0.10:
            Data[i, where + 1] = 14
        if Data[i, where + 1] > 0.10 and Data[i, where + 1] <= 0.20:
            Data[i, where + 1] = 10
        if Data[i, where + 1] > 0.20 and Data[i, where + 1] <= 0.30:
            Data[i, where + 1] = 9
        if Data[i, where + 1] > 0.30 and Data[i, where + 1] <= 0.40:
            Data[i, where + 1] = 8
        if Data[i, where + 1] > 0.40 and Data[i, where + 1] <= 0.50:
            Data[i, where + 1] = 7
        if Data[i, where + 1] > 0.50 and Data[i, where + 1] <= 0.60:
            Data[i, where + 1] = 6
        if Data[i, where + 1] > 0.60 and Data[i, where + 1] <= 0.70:
            Data[i, where + 1] = 5
        if Data[i, where + 1] > 0.70 and Data[i, where + 1] <= 0.80:
            Data[i, where + 1] = 4
        if Data[i, where + 1] > 0.80 and Data[i, where + 1] <= 0.90:
            Data[i, where + 1] = 3
        if Data[i, where + 1] > 0.90 and Data[i, where + 1] <= 1.00:
            Data[i, where + 1] = 2

    # Step 4: compute the ten candidate RSIs
    # (their outputs are assumed to populate columns where + 2 through where + 11)
    Data = rsi(Data, 14, 3, 0)
    Data = rsi(Data, 10, 3, 0)
    Data = rsi(Data, 9, 3, 0)
    Data = rsi(Data, 8, 3, 0)
    Data = rsi(Data, 7, 3, 0)
    Data = rsi(Data, 6, 3, 0)
    Data = rsi(Data, 5, 3, 0)
    Data = rsi(Data, 4, 3, 0)
    Data = rsi(Data, 3, 3, 0)
    Data = rsi(Data, 2, 3, 0)

    # Step 5: pick the RSI whose lookback matches the one selected in step 3
    for i in range(len(Data)):
        if Data[i, where + 1] == 14:
            Data[i, where + 12] = Data[i, where + 2]
        if Data[i, where + 1] == 10:
            Data[i, where + 12] = Data[i, where + 3]
        if Data[i, where + 1] == 9:
            Data[i, where + 12] = Data[i, where + 4]
        if Data[i, where + 1] == 8:
            Data[i, where + 12] = Data[i, where + 5]
        if Data[i, where + 1] == 7:
            Data[i, where + 12] = Data[i, where + 6]
        if Data[i, where + 1] == 6:
            Data[i, where + 12] = Data[i, where + 7]
        if Data[i, where + 1] == 5:
            Data[i, where + 12] = Data[i, where + 8]
        if Data[i, where + 1] == 4:
            Data[i, where + 12] = Data[i, where + 9]
        if Data[i, where + 1] == 3:
            Data[i, where + 12] = Data[i, where + 10]
        if Data[i, where + 1] == 2:
            Data[i, where + 12] = Data[i, where + 11]

    return Data
Now, how do the results of the Dynamic RSI compare to the results of the regular RSI? After all, we need a benchmark or some form of comparison to properly judge our strategy. I will make an exception in this back-test regarding the risk management processes I employ, as I will instead respect the optimal risk-reward ratio of 2.00. This means that I will place my stops at 1x the 50-period ATR and my targets at 2x the 50-period ATR. Another way of saying this is that I will be risking half of what I expect to gain in each trade.
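To make that risk-reward setup concrete, here is a minimal sketch (my own illustration, assuming a long trade and a precomputed 50-period ATR value; the numbers are placeholders):

def long_stop_and_target(entry_price, atr_50):
    # Risk 1x ATR against a 2x ATR target: a 2.00 reward-to-risk ratio.
    stop = entry_price - 1.0 * atr_50
    target = entry_price + 2.0 * atr_50
    return stop, target

stop, target = long_stop_and_target(entry_price=100.0, atr_50=1.5)
print(stop, target)  # 98.5 and 103.0
| https://medium.com/swlh/creating-a-variable-rsi-for-dynamic-trading-a-study-in-python-2af3ff8eaf0c | ['Sofien Kaabar'] | 2020-12-02 20:52:01.650000+00:00 | ['Machine Learning', 'Python', 'Data Science', 'Technology', 'Trading'] |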
1,226 | What Are The Steps To Fix Magellan GPS How To Update? |
Concerned about the Magellan GPS update process? Let me ask you: have you ever wondered why a Magellan GPS update is needed at regular intervals?
I am asking because I used to think like that, and one day I faced a lot of trouble. A year ago, I was traveling to an unfamiliar place and ignored the GPS update notification.
You won't believe it, but I ended up struggling on an unknown path, and I had to ask people for help to reach my destination.
This is why I want you to complete the Magellan map update with the help of the steps given below.
Step By Step Process For Magellan GPS How To Update
The process to complete the Magellan update is really simple; you just need to follow the steps given below.
1. Connect the GPS device to the charger
2. Download the Content Manager
3. Install the Content Manager
4. Connect the GPS device to the computer
5. Launch the Content Manager
6. Complete the Magellan RoadMate map update process
7. Remove the device
Connect GPS Device With The Charger
If you don't want to be interrupted during the Magellan GPS update process, connect the device to the charger.
You need to verify that your device has been charged completely.
Download Content Manager
Once you have connected the device with the charger, download the Content Manager.
To do so, connect the computer with the internet and visit the official website of Magellan Gps.
Hit enter and wait until all the pages will be loaded completely.
On the homepage of the website, click on “Support”.
To start the Magellan Gps Update process, you will get a download link.
Install Magellan Content Manager
Once the file has completely downloaded to your system, install the application by following the on-screen prompts.
Connect The Gps Device With The Computer
Now you have to connect the Magellan GPS to your computer.
You can connect the GPS to the computer with the help of the GPS cable that came along with it.
Connect one end of the cable to the computer and the other to the Gps device. Make sure you have connected them properly.
Once the cables are connected properly, turn on the Gps device.
Launch Content Manager Application
To complete the Magellan GPS update process, turn on the device and open the Content Manager.
To launch the application, you need to tap on the Magellan GPS software icon.
When you open the application, you have to sign in by filling in all the required credentials.
If you are a new user, you have to complete the sign-up on the official Magellan website.
Complete Magellan Gps Update
When the application opens, click on "Check for updates".
Wait for a while, and soon the application will find the appropriate update for your device.
Click on "Download update" and wait until your device has been completely updated.
Remove The Device From The Computer
Once the whole process is complete, remove the device from the computer. Your Magellan GPS device will restart automatically.
This is the process for updating a Magellan GPS. I hope you have understood all the steps properly.
Conclusion
At the end of this article, I just want to mention that it is very necessary to update your Magellan GPS periodically.
In this guide, I have covered all the steps needed to update a Magellan GPS.
So, complete the update and go anywhere you want. | https://medium.com/it-helpdesk-tips-tricks/magellan-gps-how-to-update-9f2ca2c60626 | [] | 2021-03-08 10:27:27.824000+00:00 | ['Tech', 'Services', 'Maps', 'Technology', 'Gps'] |
1,227 | Design is dead, and we killed him. | 2050. It’s a windy December night. Christmas is around the corner, and I feel cozy with my hot cup of coffee and my favorite Lo-Fi playlist. Everything seems perfect. Everything seems in the right place. Then, a voice. A voice whispers: “You have killed it”.
“What have I killed?” I ask perplexed.
A long silence.
“Design.” responds the voice.
I don't understand. Design is great nowadays: websites are wonderful, apps even more so. Why is this creepy voice telling me that I killed design?
I had to investigate further, so I asked the voice for her reasons.
“Usability” whispers calmly.
Well, that word gave me chills. Memories started to flow through my mind while I was paralyzed in front of my computer. There was a time, many years ago, when designers used to utter that term: Usability. It meant a lot in the past and was a fundamental part of our lives. There were a lot of platforms and websites where you could learn about "usability", like Medium or W3C.
The Usability Era.
Those times were indeed strange: designers cared about the user experience and designed websites according to it. I can even remember that people were tested on prototypes before developing platforms. Ages of darkness, in which absurd guidelines like accessible fonts, sizes, and contrast were law. Can you even imagine designing a website with all this tomfoolery?
Can you recall websites where scrolling was set on default, and no script interfered with the user? Horror. Horror to my eyes. But thanks to our two omnipotent Lords, Dribbble and Awwwards, everything changed.
The Aesthetic Era
The voice said that I (or we) killed design. Clearly, this is the speech of a heretic. Design got blessed with the Word of our Lords and, finally, peace flooded the world.
Design brought us ten commandments, and I hope you can recite them with me. You can’t? Well, don’t say that where someone can hear you. Now let me recite them backward for you.
10. Always in the name of our Lords, use left-side drawers in app design.
This is important: most people are right-handed, and their thumb can't reach the left part of new-generation screens. In fact, you don't want the user to access the app menu, do you? | https://uxplanet.org/design-is-dead-and-we-killed-him-70aa4e777777 | ['Lorenzo Doremi'] | 2020-12-12 22:29:16.041000+00:00 | ['Visual Design', 'UX', 'Design', 'Technology', 'Web Development'] |
1,228 | My journey through UCL Computer Science — One of the toughest CS Degrees |
Photo by Nathan Dumlao on Unsplash
If you are not familiar with University College London, it is the 10th best university in the world (according to QS rankings). My 3-year journey there has been quite rough and full of bumps, but I am quite happy to have graduated last summer with First Class Honours.
In this story, I am going to give comprehensive tips and details for university students on how to avoid making the same mistakes that I did. This is going to save you a lot of time, and I think those tips can even be utilized outside of the university.
Maybe not everyone will agree, but I am sure that if you ask people about their time doing the Computer Science BSc at UCL, they would agree that it's a bit difficult, especially if they were trying to get First Class Honours.
I am going to be talking about each of the 3 years not because I want to present a boring timeline, but I think that every year had some different challenges that I want to address separately.
The freshman year
Although the course started from scratch, the pace at which the material was covered was quite quick. I remember being asked to write Python code for solving graph problems using Dijkstra's algorithm, Prim's, dynamic programming, and others during my first few weeks. Don't get me wrong, I know those are standard in a Computer Science curriculum, but they usually come after a few months, once you start getting to grips with Computer Science.
So my first tip for people going into a Bachelor’s degree in Computer Science is:
Try to be prepared as much as possible before starting university
This involves learning the basics of programming languages (like Python, HTML, and others), learning the fundamentals of Computer Science, and brushing up on your Mathematics. This will go a long way in helping you with your first year. Also, try looking up the content from your first year online before starting.
My second tip is:
Read up on effective time management strategies
I am not going to talk about this in detail since there are tons of resources online for this. From my experience, the reason that most university students suffer in their first year is that their time management simply sucks. For instance, although my 3rd-year curriculum was much more difficult than the first year, I was able to handle it in a much better way simply because I was managing my time much better.
Final tip:
Always research your assignments and construct a plan to finish them once you receive them
Life sometimes gets in the way; it's okay to sometimes delay doing your assignments. However, you have to have a plan for finishing them. I have delayed several assignments before that I thought were going to take a few hours to finish, and they ended up taking days simply because I wasn't aware of their complexity. You can avoid this by spending about 30 minutes researching the assignment once you receive it.
The penultimate year
Photo by Damian Zaleski on Unsplash
I always thought the second year would be a bit easier since I would have learned a lot from my first year. It was a bit easier in terms of university modules; however, because you almost have to get an internship before you graduate, the second year was quite difficult.
Of course, I expected the second year to be more difficult than the first year; I just wasn't expecting how difficult it would be to get an internship.
My number 1 advice for this is:
Start preparing from your 1st year, go to career fairs (this is really important), start working on your resume, and talk to seniors at your university.
If you are interested in my internship hunting experience, I am going to be releasing a comprehensive guide soon.
The final year
It's expected that this will be your hardest year. Covid definitely made this year a bit more difficult than it was supposed to be. But anyway, one of the main challenges in successfully graduating is acing your dissertation.
I started researching various dissertation topics during the summer before my third year, and I think this was one of the best moves I made. Not only did I pick a unique dissertation topic, but I also learned a lot about machine learning (since it was part of that topic), which helped me pass several final-year modules.
Your dissertation can either be very tedious or it can be very enjoyable. I have seen both ends of the spectrum with my friends. For me, it was quite enjoyable and I suggest that you check it out here:
Also, one of the main challenges of your final year is securing a graduate job. If you had a summer internship after your penultimate year, this probably wouldn’t be an issue for you. However, if you didn’t, I strongly urge you to start applying to graduate jobs as early as September.
Final thoughts
I think my final tip and probably the most important one is to CHILL OUT! I honestly used to heavily stress over losing a few marks here and there, and it just wasn’t worth it. I ended up getting First Class Honours and I could have enjoyed the journey a lot more. Your university years are probably going to be one of the best years of your life, I am sure a lot of the readers would agree with this. | https://towardsdatascience.com/my-journey-through-ucl-computer-science-one-of-the-toughest-cs-degrees-977a1d1e0bea | ['Mostafa Ibrahim'] | 2021-01-14 10:32:42.388000+00:00 | ['Technology', 'Graduation', 'Computer Science', 'Computers', 'University'] |
1,229 | Transforming HR Operations Through Transparent Network With Blockchain | Photo by Scott Graham on Unsplash
Blockchain technology, the foundation stone of cryptocurrency in 2008, is a head-turner even outside the payments industry, to which it was restricted in the past.
Blockchain will create $3.1 trillion in business value by 2030. — Gartner Forecast
This statistic shows how enterprises are striving to discover ways of embracing blockchain to revolutionize their business operations and exchange value in this digital world. While blockchain has disrupted fashion, logistics, finance, healthcare, and other sectors, it is ready to shake up the talent management, HR, and skill development industry too.
What does it mean to the recruitment industry?
Human resource professionals manage volumes of work, such as determining staffing needs, screening resumes, performing background checks, recruiting professionals, handling their training, and much more. For the seamless execution of these tasks, businesses hire expensive recruitment management agencies. But they are oblivious to the fact that blockchain-enabled business models will bring a seismic shift in how HR operations are conducted in the future.
The questions that businesses struggle with:
What implications does Blockchain have for the future of HR?
How can HR specialists leverage the opportunities that blockchain offers?
What are the related risks?
A glimpse into blockchain's capabilities can help in gaining the understanding needed to align recruitment operations with this evolving technology.
Employee Background inspection
Short-term contracts, freelance work, and on-demand talent acquisition have changed the way people work. This further complicates tracking and verifying the backgrounds of employees.
With blockchain, it is easier to record, verify, track, and manage candidates' addresses, employment history, education documents, and much more. Employee records can thus be stored on the digital ledger by tokenizing the candidate's identity, while reducing the risk of fraudulent applications.
Smart contracts for agreements
The global smart contracts market is expected to reach approximately USD 300 million by the end of 2023, at a 32% CAGR over the forecast period from 2017 to 2023. - Market Research Future
Time-intensive manual contract and agreement generation keeps recruiters from focusing on high-value activities like employer branding and enhancing communication.
Blockchain-powered smart contracts generate immutable rights and obligations verified by all the participants on the network. Smart contracts help manage information like employment contracts, digital signatures, and performance reports, among others.
Payroll and settlements
When employees operate remotely, payment management becomes a lengthy process. Handling payments that involve travel expenses, tax liabilities, currency exchange, etc., makes the process highly complicated.
Blockchain comes to the rescue by enabling seamless cross-border payments while reducing risks and errors in the process. The decentralized and secure ledger verifies and validates the data more securely and efficiently. Besides, blockchain reduces the involvement of banks and third parties, making the overseas payroll process less time-consuming and more cost-effective.
Regulatory compliance
Many times, the legal frameworks of certain countries, for instance the EU's GDPR, favor the restriction or removal of an entity's confidential information from virtual platforms.
In such cases, blockchain ensures that candidates in such countries can exercise these entitlements just by erasing the encryption key, making the information undiscoverable by other entities.
Employee Attendance
Tracking the attendance of employees becomes a monotonous task that eventually affects wage and claim processes.
With blockchain technology, it gets easier for HR specialists to store biometric data such as fingerprint scans, iris scans, ID scans, etc., in real time. Record accuracy and increased visibility reduce errors and disputes and increase the efficiency of operations like payments and necessary settlements, decreasing the frustration of HR officials. | https://medium.com/the-capital/transforming-hr-operations-through-transparent-network-with-blockchain-cd9e41feeea5 | [] | 2020-12-19 03:07:19.319000+00:00 | ['Blockchain', 'Technology', 'HR', 'Management', 'Bitcoin'] |
1,230 | US Sanctions Russian Government Center for Creating ‘Triton’ Malware | The US Treasury Department is also linking the creators behind Triton with scanning and probing at least 20 US electric facilities for vulnerabilities.
By Michael Kan
The US is sanctioning a Russian government research center for allegedly developing Triton, malware capable of disrupting IT systems at factories and power plants.
On Friday, the US Treasury Department sanctioned the Central Scientific Research Institute of Chemistry and Mechanics (also known as CNIIHM or TsNIIKhM) on claims the Russian center had a hand in creating Triton.
The malware grabbed headlines in 2017 for hitting a petrochemical facility in Saudi Arabia in an apparent act of industrial sabotage. According to security researchers, the attack initially arrived via a phishing email. Triton can tamper with a facility’s industrial controls, causing the systems to ignore hazardous conditions or shut down a power plant.
“Researchers who investigated the cyber-attack and the malware reported that Triton was designed to give the attackers complete control of infected systems and had the capability to cause significant physical damage and loss of life,” the Treasury Department added in today’s announcement.
The Russian research institute CNIIHM (Credit: Google Maps)
In 2018, security firm FireEye then released a report connecting Triton to a professor employed at the Russian research institute CNIIHM. An IP address used at the Russian institute was also found monitoring coverage of the Triton malware and scoping out potential targets.
The Treasury Department didn’t elaborate on how US officials are linking the Russian government lab to the malware. But the department also claims the creators behind Triton were scanning and probing at least 20 US electric facilities for vulnerabilities back in 2019.
In response, the Treasury Department is prohibiting all US persons and businesses from engaging in transactions with the Russian research institute. “Moreover, non-US persons who engage in certain transactions with TsNIIKhM may themselves be exposed to sanctions,” the department added.
The US announced the sanctions days after the Justice Department charged six Russian military officers for allegedly unleashing the NotPetya ransomware outbreak in 2017, and for using malware attacks to shut down the power grid in Ukraine. The US is hoping the indictments will cause Russia’s state-sponsored hackers to adopt a different profession. However, the Russian government has denied any involvement with the cyber attacks. | https://medium.com/pcmag-access/us-sanctions-russian-government-center-for-creating-triton-malware-9748e7a94aac | [] | 2020-10-26 19:01:02.236000+00:00 | ['Sanctions', 'Cybersecurity', 'Russia', 'Malware', 'Technology'] |
1,231 | A ‘Google Nexus’ in an iPhone | Hardware
Unlike the OLED panel on the iPhone, the 5.8-inch screen on the Nokia is an IPS LCD at 1080 x 2280 pixels (~432 PPI). The display is sharp, has decent colour representation and wide viewing angles.
The 2.5D Gorilla Glass 3 front display with a 19:9 aspect ratio, held by the aluminium frame, looked spectacular at first glance. The phone felt extremely sophisticated and premium, just like the fit and finish of the iPhone.
The octa-core Qualcomm 636 chipset powers the device: four cores dedicated to powerful computation and four to battery efficiency. It is more than powerful enough for all current mobile tasks. The successor, the Nokia 6.2, also uses the same chipset, so future updates to applications and overall usage should run quite smoothly, as there's plenty of headroom left. It has AC Wi-Fi, and the 3060 mAh battery sufficiently lasts all day.
Software
The reason I'm comparing it to iOS is the regularity of updates it receives, just like the iPhone. The phone, when first shipped in August of 2018, was running Android Oreo 8.1. Later that year it was updated to Android 9 Pie, and last month, in January of 2020, it was updated to Android 10. That's three operating systems in total.
The gesture-based UI and the similarity of most third-party applications diminish the software differences. The quirks of owning a Mac along with an iPhone are diminishing too. With the Your Phone app from Microsoft, one can access notifications, photos and messages, and answer phone calls on the PC. Also in the works is the ability to mirror the whole screen and access the device remotely, and Google is working on a nearby share feature just like AirDrop.
Ownership and Experience
One of my favourite people on the internet, Dieter Bohn, often communicates about the relationship with our tech devices using the 'Instrument' and the 'Tool' metaphor. A phone is not merely a tool that we use to communicate but is a part of the culture and an extension of our outside world. It is more of an instrument, where one has to be attuned to its character and develop an intangible relationship with it. Nokia for me is one such instrument. At a mere 20 per cent of the price of the flagship, it fits me like a glove. I adore everything about it. The pearl white rear glass complements the front black mirror. I am grateful for features like the ability to plug in my headphones, quick charging via USB Type-C and the facility to expand the storage with an SD card. | https://medium.com/@vernewave/a-google-nexus-in-an-iphone-3818ddc2b777 | [] | 2020-02-07 12:33:30.375000+00:00 | ['Nokia', 'Technology', 'Mobile', 'Apple', 'Hardware'] |
1,232 | The bloXroute Scalability Solution | By Prof. Aleksandar Kuzmanovic, Co-Founder and Chief Architect
*This post was updated on October 23, 2020
bloXroute solves the scalability bottleneck for blockchains at its core: the network layer.
In our previous blog post we explained why scalability solutions such as FIBRE and Falcon, Compact and XThin Blocks, and Graphene are insufficient in solving the scalability bottleneck.
TL;DR:
Fibre and Falcon can increase performance of blockchains but at the cost of placing power in the hands of the network owners
Compact and Xthin Blocks increase TPS by reducing the amount of data each block contains, but initial results in a very small, controlled network show a relatively modest increase
Graphene reduces block size through a bloom filter and IBLT, but bloXroute can outperform it.
So, what is different with bloXroute?
bloXroute brings two major novelties: transaction-based neutrality and dramatic scalability advances. bloXroute is a transaction-based neutral network. This means that bloXroute by design cannot stop nodes, wallets, users, transactions, or blocks from utilizing the BDN.
A major aspect of bloXroute’s uniqueness lies in the way it helps propagate information in a blockchain network. Contrary to first-generation relay networks, bloXroute propagates transactions among blockchain nodes. By propagating transactions on behalf of users, bloXroute manages to effectively index these transactions using much shorter IDs. Thus, when blocks are propagated through the network, bloXroute utilizes such IDs, effectively compressing the amount of data that needs to be carried through the network. Additionally, bloXroute optimally streams data through a well-provisioned dedicated network infrastructure and enables the processing of a high volume of on-chain transactions.
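To make the indexing idea concrete, here is a toy sketch (my own illustration; this is not bloXroute's actual wire format or ID scheme) of mapping full transaction hashes to short IDs and compressing a block's transaction list with them:

import hashlib

class ShortIDTable:
    # Toy mapping from 32-byte transaction hashes to 4-byte IDs.
    def __init__(self):
        self.ids = {}
        self.next_id = 0

    def register(self, tx_bytes):
        tx_hash = hashlib.sha256(tx_bytes).digest()
        if tx_hash not in self.ids:
            self.ids[tx_hash] = self.next_id.to_bytes(4, "big")
            self.next_id += 1
        return self.ids[tx_hash]

    def compress_block(self, tx_hashes):
        # Replace each known 32-byte hash with its 4-byte ID.
        return [self.ids.get(h, h) for h in tx_hashes]

table = ShortIDTable()
h = hashlib.sha256(b"raw transaction bytes").digest()
table.register(b"raw transaction bytes")
print(len(table.compress_block([h])[0]))  # 4 bytes instead of 32

Because transactions are propagated first, every participant can build the same table, which is what lets blocks reference transactions by these much shorter IDs.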
bloXroute’s Architecture
bloXroute’s architecture is innovative at multiple levels. First, bloXroute transforms the access plane, i.e., the way blockchain nodes access bloXroute. There are two ways to access the BDN: 1. Users with a full node can run a Gateway (either a Local Gateway or Hosted Gateway) or 2. users who do not have a full node can use bloXroute’s Cloud-API.
A Gateway is a piece of open source code deployed at a blockchain node that quickly transmits transactions and blocks to and from nearby bloXroute servers. To a blockchain node, a Gateway looks no different than another peer node; yet, it enables access to the bloXroute network. Gateways help with performance since they significantly shrink the data transmitted through the access link using the transaction IDs explained above. In the network sense, a Gateway is a high-end server that is paired with a blockchain node. This is similar to the approach taken by Akamai’s CDN, which deploys edge servers close to millions of end users, thus improving their Web experience.
bloXroute’s unique architecture ensures that bloXroute by design cannot stop nodes, wallets, users, transactions, or blocks from utilizing the BDN. bloXroute does this using transaction-based neutrality, meaning it ensures all nodes are kept in sync regarding the transactions waiting in the mempool to be included in a block. bloXroute ensures synchronicity by periodically sending updates to Gateways on new transactions, of which bloXroute cannot prevent or delay from being included in the update.
If for any reason a Gateway realizes its transaction has not been included in the update, it can send its transaction to the p2p network to continue its propagation and thus not be censored. This model removes the need for data to be encrypted, which also alleviates the concern of DDOS attacks sending large amounts of encrypted data and delaying the network. Users can utilize the bloXroute BDN as a router that is optimized for blockchain technology. The protocol runs underneath the blockchain, making it the first layer-0 solution.
Implementation: No protocol changes and gradual deployment
How does a blockchain (or a cryptocurrency) community utilize the opportunity and newfound capacity provided by bloXroute? By simply adjusting the block size and inter-block time interval parameters. bloXroute requires no blockchain protocol change beyond this parameter adjustment to fully utilize bloXroute’s capacity. We will provide guidelines for each individual blockchain. The recommended scaling parameters, i.e., inter-block time and block size, will be based on the experimental results from our global testbed. bloXroute as a transport layer is complementary to the native consensus protocol used, and it is capable of boosting performance, often dramatically, for any blockchain. But again, the protocol itself does not change; the validity requirements remain the same, as is the structure of blocks and transactions, and all the messages among nodes.
Another important question is what fraction of blockchain nodes need to deploy Gateways at their machines in order for the blockchain community to scale-up the protocol parameters? In short, the more nodes that deploy Gateways the better but it is not necessary for all the nodes, or even the majority of the nodes, to deploy Gateways, in order to allow the transition. This is because bloXroute creates value not only for the first individual miner or user, but for the entire network, including those not using the BDN. When a block is mined and sent to the BDN it will reach all miners and users faster and increase the overall network speed for everyone.
The BDN not only enables scalability, but also provides network insights and performance boosting tools for everyday users and blockchain service providers via our DeFi toolsuite and custom integrations.
In the same way Akamai helps millions of Web users around the world with their vigorous server deployment at edge networks, bloXroute’s unique architecture can help every individual blockchain community. Whether they consist of hundreds or ten thousands of nodes, they will greatly benefit from dramatic scalability advances. In our next post, we will dive deeper into the topic of bloXroute’s network. | https://medium.com/bloxroute/the-bloxroute-solution-b293c3ee3dc2 | ['Bloxroute Team'] | 2020-11-19 20:43:33.936000+00:00 | ['Cryptocurrency', 'Ethereum', 'Technology', 'Blockchain', 'Bitcoin'] |
1,233 | How Performance Management System Transform With Technology | “Management with Technology is a powerful combination to revolutionize the performance of your organization.”
In today's story, I highlight the modern way of recruitment and how technology transforms the performance management systems of organizations, as well as the latest technological trends adopted by managers to ease the recruitment process.
The rise of the digital era has revolutionized organizational performance. To improve performance and create effective human resource management, almost all companies and industries are adopting the latest technology trends. In the last few years of the technological era, machine learning and AI were effective mechanisms only for sales operations and organizational performance. However, the latest research shows how HR has integrated the latest technology into operations management, data analytics, talent acquisition, and the development of overall HR results in organizations.
The latest technological trends are smoothing the way for human resource management toward the goals and milestones of organizations. For example, learning and communication through social networks, employee engagement through virtual methods or mobile learning management systems, and gamification for recruiting and for analyzing prospective employees' skills are ongoing trends today.
Technology is as vast as an organization's performance management vision. Technology not only modernizes the structure of organizations but can also provide a wider, more in-depth, and more insightful mission; with up-to-date trends adopted by managers, every individual in an organization has become a more tactful, well-informed, and quick decision-maker.
Today's virtual world helps human resource managers optimize workforce performance. When prospective candidates from broader and more diverse backgrounds become part of organizations, technology helps make the recruitment evaluation process transparent. The whole process begins at recruitment, with uniquely designed job recruitment portals that let recruiters track and evaluate the resumes of interested candidates and ease tedious manual tasks. Technology's involvement in the human resource industry is a blessing for all employees and prospective employees. Gamification now prevails in performance management and helps recruiters choose the right candidate for the right job.
After the whole recruitment process, technology plays a vital role in the internal operations of the organization. For example, updating and tracking payroll, attendance systems, and salary management through various flexible software tools has become common and accessible in the market. Overall, technology's pervasiveness in the performance management system of human resources makes it easier to work effectively and manage time for better policy development. Moreover, technology restructures and transforms an organization's conventional methods in innovative ways that can lead to a better human resource team for the performance management system.
Cloud Software for Human Resource Management (HRM)
This is a well-established and user-friendly way to improve performance management with accurate, error-free work. The biggest benefit of using software is that less paperwork is required, as paperwork can become repetitive and tedious. HR managers can now access organizational data accurately and in real time. Employee security and privacy are crucial, and cloud software eases the overall handling of the organization's data.
Employee self-service & Manager Self-Service
Adopting this system improves overall service, reduces labor costs, secures day-to-day activities, and handles routine transactions among employees, managers, and prospective candidates.
Blockchain in HRM
This innovative technology has reshaped human resource management in organizations. Many researchers quote that "By 2023, blockchain will support the global movement and tracking of $2 trillion of goods and services annually." Blockchain is fully capable of recording, storing, and securing employee data and the organization's mass of information, which is the core need of human resource management.
“Blockchain technology will be integrated directly into the HR function through a multitude of use cases — lending transparency and trust”. — Mercer Editorial Staff
People analytics in HR
People analytics — also referred to as HR analytics or workforce analytics — helps collect, combine, and transform organizational and HR data into pragmatic insights that efficiently improve overall performance.
Real-time performance management System
In human resource management, this latest trend — followed by almost every organization — engages employees through virtual means and produces effective real-time feedback. This helps save money and time across the organization's processes. | https://medium.com/digital-diplomacy/how-performance-management-system-transform-with-technology-4a17242f0a6b | ['U.F.M Techie'] | 2020-11-07 16:42:29.651000+00:00 | ['Technology', 'Blockchain', 'Productivity', 'Business', 'Tech'] |
1,234 | Technology | When we talk about technology, it doesn't seem unusual. Everything is in the hands of technology, and the world now seems unable to exist without it. People earn through it, learn through it, and do lots of other things through technology. Why do we all say that the "world has become a global village"? Because, unlike in earlier ages, people no longer have to communicate by means that were out of reach — something that once created a lot of problems. Now the whole story has changed: people are connected, more like a village where people know about each other, and it's even easier to earn through various websites and sources.
Technology and its negative effects:
While technology has made a lot of positive changes in the world, it has also created some negativity across the globe.
We can see them in point:
· Psychologically and physically, social media has been shown to affect our lives to a great extent.
· Overuse of technology may have a more powerful influence on the welfare of children and adolescents.
· The way handheld devices and machines are used by many people may often lead to incorrect posture. Over time, this can contribute to musculoskeletal problems.
· Using technology too close to bedtime may trigger sleep problems. This phenomenon has to do with the fact that the brain is activated by blue light, including the light from mobile phones, e-readers, and computers.
And many more like these. And several solutions can opt for betterment.
Taking advantage of technology:
Technology has always given a lot to this world. But there isn't much awareness about how we can use it wisely and make our lives much easier. I'm going to tell you through the points given below:
· You can go through several websites and showcase your talent. You can even learn through various channels and websites.
· As a business major, you should probably start with a Microsoft Office tutorial and/or the textbook that your college uses for their “Introduction to Computers” class (Or, “Intro to Technology”, etc..)
· There are several ways where we can find the solution to everything.
· You can even start a small business or anything else that may help you earn; you just need some guidance, which is provided on the internet.
· Mobile phones and all the other technologies are also very helpful if used correctly
Another useful side of technologies:
Modes of transportation have improved ease of access but also increased the volume of emissions. A wide variety of alternatives have become available to people thanks to technology. By supplying them with hearing aids, text scanners, special seats, etc., technology has also helped persons with special needs. Today, they too can experience everyday life without having to worry about their disabilities.
To summarize, technology has several positive implications for our lives, but there are still several downsides. We can't abandon technology yet, so we must make sure we're not dominated by it. | https://medium.com/@atifashah999/technology-f1bf8ec3e55b | [] | 2020-12-23 19:22:06.730000+00:00 | ['Blogging', 'Blogspot', 'Tech', 'Technology', 'Blogger'] |
1,235 | Creative Destruction or Just Destruction? An Analysis of Fortune 100 Companies in 1955 and 2020 | The small number of companies appearing on both the 1955 and 2020 Fortune 500 (or 100) lists is not due to creative destruction, and it does not symbolize the strength of the U.S. economy, as some claim. For instance, the American Enterprise Institute's (AEI) analysis found that only 52 companies on the Fortune 500 in 1955 were still on the list in 2020, a conclusion I do not dispute. It claims that "The fact that nearly nine of every 10 Fortune 500 companies in 1955 are gone, merged, reorganized, or contracted demonstrates that there's been a lot of market disruption, churning, and Schumpeterian creative destruction over the last six decades." It continues: "The constant turnover in the Fortune 500 is a positive sign of the dynamism and innovation that characterizes a vibrant consumer-oriented market economy, and that dynamic turnover is speeding up in today's hyper-competitive global economy[1]."
But is creative destruction the reason for the small number of companies remaining on the Fortune 500, creative destruction that is led by young vibrant American firms introducing highly productive new technologies? I admit that these are my words and not the words of the AEI. Nevertheless, behind the AEI’s claim that the small number of remaining companies is due to “market disruption, churning, and Schumpeterian creative destruction” is the notion that American companies are doing the disrupting and thus Americans are benefiting from this creative destruction through higher productivity, incomes and standard of living.
We know that the last part of the last sentence is not true. Productivity data demonstrate a clear and persistent growth slowdown over the last 80 years, as many readers, particularly followers of Robert Gordon, will know[2]. This trend has continued, and a judging panel noted "that the 2010s were the worst decade for productivity growth since the early 19th century"[3], despite the positive impact of globalization on productivity. The continued slowdown suggests there was also a slowdown in technological and innovative output in recent decades, an issue we can also address using the change in companies in the Fortune 100 between 1955 and 2020.
I categorized each company in the Fortune 100 into sectors, and in some cases industries, in order to see how the list of companies has changed between 1955 and 2020[4]. The reasons for the rise and fall of these industries are then discussed, drawing from many historical sources. From this analysis, this article concludes that the main reason so few companies remain is the evolution of the American economy away from manufacturing to services, an evolution that is due more to foreign competition and financial engineering than to creative destruction by small American companies.
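For transparency, the tabulation behind this categorization is simple enough to reproduce. Here is a minimal sketch in Python — the file name, column names, and sector labels are my illustrative assumptions, not the author's actual dataset:

import pandas as pd

# Hypothetical input: one row per company per list year,
# with a hand-assigned sector and a founding year
df = pd.read_csv("fortune100_1955_2020.csv")  # columns: company, year, sector, founded

# Companies per sector in each year -- the comparison behind Table 1
counts = df.pivot_table(index="sector", columns="year",
                        values="company", aggfunc="count").fillna(0)
print(counts)

# Average company age on each list (63 years in 1955 vs. 100 in 2020)
df["age"] = df["year"] - df["founded"]
print(df.groupby("year")["age"].mean())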
As shown in Table 1, the number of manufacturing and oil companies fell from 74 and 14 to 20 and 9 respectively while the number of financial/insurance, information and communication technology, health care, and retail companies rose from 3 in total to 22, 17, 12, and 11 respectively (total of 62). The ICT companies represent the biggest disruption by new American startups, innovation, venture capital and IPOs[5] while the manufacturing companies declined mostly because of foreign competition and they were replaced by old banks and insurance companies, many of which were founded in the 19th century. In fact, the average age of companies increased from 63 in 1955 to 100 years in 2020, not a sign of newly founded startups disrupting old line companies. M&A were also a big driver of change with the rise and fall of conglomerates, hostile takeovers, and other financial engineering, which reduced the number of oil and manufacturing companies.
Delving into these trends in more detail, the number of oil, tire, auto, steel, food, and chemical companies fell due to foreign competition, consolidation, and some technological change. The number of oil companies dropped from 16 to 6 through consolidation, and not through technological change, despite the rise of fracking. Exxon and Mobil merged, as did Chevron and Texaco, with the latter two firms acquiring Union Oil, Unocal, and Pure Oil along the way. Sinclair Oil and ARCO were acquired by British Petroleum, and BP America was no longer considered an entity for the Fortune 500 in 2020. There were no fracking companies in the Fortune 100 in 2020.
The number of auto and tire companies fell from 9 to 2 from both consolidation and foreign competition, but again not from technological change. Electric vehicles had less than 2% of the market in 2019 and Tesla is still far from being a Fortune 500 much less a Fortune 100 company. Instead, component and tire suppliers were either acquired or driven out of business by Japanese and other competition. Firestone was acquired by Bridgestone, Uniroyal and Goodrich by Michelin, and Goodyear still exists at a much smaller scale. Although there was some innovation by Japanese companies in terms of manufacturing techniques, there were no large product innovations and overall, it was Japanese companies doing the innovation and not American ones.
Steel was also decimated by foreign competition, initially from Japan and later from China, causing the number of steel companies in the Fortune 100 to fall from six to zero. Bethlehem and National Steel went bankrupt and the others (Armco, Youngstown Sheet & Tube) still exist as much smaller companies; Republic Steel and Jones & Laughlin merged to form LTV. Unlike the auto, rubber, and oil industries, however, the basic oxygen furnace and continuous casting were examples of creative destruction, which gave the Japanese and European producers big productivity advantages.
Other metals such as aluminum and lead were also impacted by foreign competition, particularly from China, and by creative destruction from plastics. For instance, plastic bottles have replaced a significant fraction of aluminum cans and glass bottles (and also reduced demand for steel in many assembled products). The result is that the number of other metal producers dropped from four to zero, with Alcoa and Reynolds Aluminum among those leaving the list. As an aside, several glass bottle and canning companies also fell off the list between 1955 and 2020. Despite the growth in plastic usage, however, the number of chemical companies fell from eight to one. Dow and DuPont merged and acquired Union Carbide along the way. Monsanto was acquired by Bayer and thus is no longer an American company.
The number of food companies also decreased dramatically, falling from 20 to four, probably because more meals are eaten out or delivered to homes. Mergers affected General Foods, Kraft, Standard Brands, and Pillsbury, yet one resulting entity, Kraft Heinz, is not in the Fortune 100. The two food companies in the Fortune 100, Tyson Foods and Archer Daniels Midland, might be considered disruptors because of Tyson's innovations in chicken processing using assembly lines and ADM's emphasis on intermediate food products.
Other new companies include retail, healthcare, finance/insurance, drug, and information & communication technology firms. The last two are certainly cases of new technologies disrupting old ones, and retail might also be considered an example of disruption. The number of drug companies rose from zero to four and the number of ICT companies rose from 6 to 17. Their stories are told elsewhere, so there is no need to tell them here. Nevertheless, the increase from 6 to 17 does not tell the whole story, because the 6 on the 1955 list were analog telephone (AT&T), radio/TV (RCA, CBS), and early computer (IBM, Sperry) companies — nothing like the semiconductor, personal computer, software, and Internet companies that were to follow.
Retail companies might also be considered examples of technology disruption because they used information technology to increase product variety and manage increasingly complicated supply chains. Companies such as Walmart, Costco, Walgreens, Kroger, Home Depot, Target, Lowe’s Albertsons, and Best Buy sell us a remarkable number of different products at low prices, courtesy of computers, software and other devices. Retailers such as Albertsons might also be considered food disruptors because they have brought a greater variety of food to consumers.
The healthcare, finance and insurance companies have been the biggest replacements for the manufacturing companies that dominated the 1955 list. They represented more than half the companies in the 2020 list, but most are old companies. Most of the finance and insurance companies can trace their roots back more than 100 years with some going back to before America’s Civil War. These companies have slowly grown over time benefiting from the deregulation that allowed them to cross state lines and thus become huge national banks, investment companies, and insurance providers. Just as healthcare now represents about 18% of GNP[6], and finance and insurance about 8%[7], companies from these industries represent 34% of the Fortune 100.
In summary, the American Enterprise Institute and many others have misinterpreted the reasons for the small number of 1955 companies that remain on the 2020 list. The evolution of the Fortune 100 is not a symbol of creative destruction; it merely reflects the evolution of the American economy from manufacturing to services, driven mostly by foreign competition, hostile takeovers, and the rise and fall of conglomerates. Although some of this was driven by innovation, particularly in the ICT and drug sectors, innovation played a small role in the decline in the number of steel, auto, tire, chemical, and oil companies on the list.
Does this tell us something about the future? It probably tells us that we can expect continued changes in the companies, but little change in products and processes, and thus few improvements in productivity. Analyses of startups lead to the same conclusions[8]. If we want a better future, we need to rethink how R&D and innovation are done.
[1] https://fee.org/articles/comparing-1955s-fortune-500-to-2019s-fortune-500/
[2] The Rise and Fall of American Growth, Robert Gordon, 2016, Princeton University Press
[3] https://www.ft.com/content/8d7ef9b2-24b4-11ea-9a4f-963f0ec7e134
[4] Here are the companies for 1955 (https://archive.fortune.com/magazines/fortune/fortune500_archive/full/1955/) and those for 2020 (https://fortune.com/fortune500/2020/search/). I added Ford to the 1955 list because for some reason Fortune failed to put it on the list.
[5] https://medium.com/@jeffreyleefunk/the-most-valuable-startups-founded-since-1975-none-have-been-founded-since-2004-8bc142b67051
[6] https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical#:~:text=U.S.%20health%20care%20spending%20grew,spending%20accounted%20for%2017.7%20percent.
[7] https://fred.stlouisfed.org/series/VAPGDPFI
[8] https://medium.com/@jeffreyleefunk/what-will-happen-to-todays-privately-held-unicorns-valued-at-1-4-trillion-13f507797487
https://medium.com/@jeffreyleefunk/why-are-todays-startup-unicorns-doing-worse-than-those-of-the-past-1c8ece718ab0 https://medium.com/@jeffreyleefunk/are-there-any-industries-in-which-ex-unicorns-are-profitable-747eca652170
https://medium.com/@jeffreyleefunk/how-successful-are-todays-startup-unicorns-893043f32d24 | https://medium.com/swlh/creative-destruction-or-just-destruction-an-analysis-of-fortune-100-companies-in-1955-and-2020-91a36f60287d | ['Jeffrey Lee Funk'] | 2020-10-28 05:58:16.849000+00:00 | ['Disruption', 'Innovation', 'Startup', 'Venture Capital', 'Technology'] |
1,236 | The Internet of Bodies Will Change Everything, for Better or Worse | Ross Compton was there when a fire ravaged his $400,000 home in Middletown, Ohio, in September 2016. Fortunately, Compton told investigators, he was able to stuff a few bags with several possessions — including the charger for an external heart pump he needed to survive — before shattering a window with his cane and escaping.
But as the smoke cleared, police began to suspect that Compton’s story was a fabrication.
His statements were inconsistent. The rubble smelled of gasoline. And it seemed implausible that someone fleeing a burning house — especially someone with a medical condition like Compton’s — could execute such a complex escape plan.
Eventually, investigators were able to indict Compton on felony charges of aggravated arson and insurance fraud. Their star witness? His pacemaker.
Police obtained a warrant to retrieve data on Compton’s heart activity before, during, and after the fire. After reviewing this information, a cardiologist concluded that it was “highly improbable” Compton would’ve been able to escape the flames so quickly, while lugging so many belongings.
Compton pleaded not guilty. His attorney argued that the pacemaker data should be thrown out; including it would violate doctor-patient privilege and Compton’s constitutional right to privacy, the lawyer said.
The case was strange, arguably sad, and fraught with difficult questions. Regardless of whether Compton really torched his house, should a life-saving device inside someone’s body be part of a case that might put them behind bars?
We may not know the answer for some time. Compton passed away in July at the age of 62, leaving his case — and whatever precedent it might have set — unresolved.
This may seem like a one-of-a-kind chain of events, an aberration. But as industries usher in a new era of devices that track personal information by leveraging the internet and the human body in equal measure, it won’t be the last.
When it comes to regulating the Internet of Bodies, it’s the Wild West.
This type of technology, appropriately dubbed the Internet of Bodies (IoB), has the potential to improve our lives in countless ways. But the risks are just as legion. A new RAND study explores the Internet of Bodies, identifying implications for policy that could help maximize the IoB’s upside while mitigating these risks.
“When it comes to regulating IoB, it’s the Wild West,” said Mary Lee, a mathematician at RAND and lead author of the study.
“There are many benefits to these technologies that some consider too great to be slowed down by policy. But we need to have a larger discussion about what those benefits will cost us — and how we might avoid some of the risk altogether.”
What Is the Internet of Bodies?
Internet-connected devices like smart thermostats, voice-activated assistants, and web-enabled refrigerators have become ubiquitous in American homes. These technologies are part of the Internet of Things (IoT), which has flourished in recent years as consumers and businesses flock to smart devices for convenience, efficiency, and, in many cases, fun.
Internet of Bodies technologies fall under the broader IoT umbrella. But as the name suggests, IoB devices introduce an even more intimate interplay between humans and gadgets. IoB devices monitor the human body, collect health metrics and other personal information, and transmit those data over the internet. Many devices, such as fitness trackers, are already in use.
Torrents of data on everything from diets to social interactions could help improve preventative health care, increase employee productivity, and encourage people to become active participants in their health.
Artificial pancreases could automate insulin dosing for diabetics. Brain-computer interfaces could allow amputees to control prosthetic limbs with their minds. And smart diapers could alert parents via Bluetooth app when their baby needs to be changed.
But despite its potential to revolutionize just about everything in ways that could be helpful, the Internet of Bodies could jeopardize our most intimate personal information.
“There are vast amounts of data being collected, and the regulations about that data are really murky,” Lee said. “There’s not a lot of clarity about who owns the data, how it’s being used, and even who it can be sold to.”
Lee and her colleagues examined the risks that IoB devices could pose across three areas: data privacy, cybersecurity, and ethics. The team also identified recommendations that could help policymakers balance the IoB’s many risks and rewards. | https://medium.com/rand-corporation/the-internet-of-bodies-will-change-everything-for-better-or-worse-44d9d0775656 | ['Rand Corporation'] | 2020-12-01 20:13:56.518000+00:00 | ['Healthcare Technology', 'Biotechnology', 'Emerging Technology', 'Internet Of Bodies', 'Biomedical Research'] |
1,237 | A Free Plugin Every Producer Should Know About |
A free VST along with thousands of free presets online and an active community? If you didn’t already know — thank God you’re here now
Photo by Ryunosuke Kikuno on Unsplash
Synth1 is a completely free open-source analog-modeling subtractive virtual synthesizer developed originally by Ichiro Toda with Daichi Laboratory. This plugin is a dream-come-true for those music producers who love browsing presets for inspiration (sometimes ad nauseam), and a match-made-in-heaven for those of us who are looking to replicate a vintage or lo-fi sound from their synths — think Ariel Pink, Washed Out, Toro Y Moi. If you love the warm and often gritty sounds of analog synths but might not have the cash to buy one, then this free virtual plugin is my Christmas present to you.
image of the VST window kvraudio.com
‘Open-sourceness’
I'll admit Synth1 isn't the prettiest plug-in I've ever seen. Fortunately for us, though, its aesthetic isn't what sets this free virtual synthesizer apart from its peers. The big pull for Synth1 is the vast and varied libraries of free preset banks online. And the pull is big, as Synth1 is one of the most downloaded free VSTs of all time. Surrounding Synth1 is a large community of producers and music-makers alike, who contribute to the increasingly massive online collection of presets for this noteworthy plugin. You can find Synth1 sound banks all over the internet with a quick search, but if you want a one-stop-shop to download a large collection of free presets then click here.
Adding Sound Banks
To add sound banks to Synth1 you’ll need to first make sure you have the latest download of Synth1 installed then start up your DAW of choice (in my case it is Ableton Live 10) then follow the following steps:
1. Click the bar at the bottom of the VST window showing the current patch (next to the "OPT" button).
2. Click an empty slot in the patch window, then find and select your preset folder.
3. Confirm at the top right and press OK.
NOTE: Synth1 preset banks come in zip files — do not unzip them. You'll be selecting the zip files directly when adding them to your VST.
Artificially-Generated Synth Patches
If you're a fan of AI, or you just like inserting some randomness into your productions, you can download banks of procedurally generated random patches from this website by James Skripchuck — click on the floating synthesizer to download a random bank of patches. This is a good time to mention that Synth1 was developed to functionally resemble the Clavia Nord Lead 2 Red synth (the floating synth you'll find on James' website). Inspired by "This DX7 Cartridge Does Not Exist", Skripchuck uses a Generative Adversarial Network (GAN) that pits two neural networks against each other to create synth patches. You can read more about James and his project on his website.
If you enjoyed this article and want more stuff like this, follow me on Medium or if you want to check out some of the lo-fi music I make on my computer, you can check it out at this link… or this one.
Check out the links below to read more about Synth1 and the community that supports it. | https://grsahagian.medium.com/a-free-plugin-every-producer-should-know-about-d06c116db069 | ['Graham Sahagian'] | 2020-12-24 15:19:34.714000+00:00 | ['Self Improvement', 'Technology', 'Education', 'Life', 'Music'] |
1,238 | Making Every Day a Monday: Praveen Varshney & Varshney Capital | Image Courtesy of Praveen Varshney.
Praveen Varshney was born to solve problems. This knack for problem-solving initially pushed him to study Science at UBC, with a hope to work in engineering. However, after failing first-year physics, he felt a bit of a paradigm shift. As he reflected on his love for people and relationships, he saw Commerce as a better option and a natural calling. Today, he has combined that love for people and the ability to solve problems as Director at Varshney Capital Corp, a leading Merchant Banking and Venture Capital firm — in Vancouver. NBR was very fortunate to sit down with Praveen and discuss his vastly interesting philosophy of life and his greatest pieces of advice.
Growing up, Praveen's two most influential role models were his parents. His father, Hari B. Varshney (the Business Career Centre at Sauder is named after him), worked in accounting and had his CPA designation. Praveen decided to follow in his father's footsteps and major in accounting. However, his desire to pursue projects that required problem-solving and thinking outside the box never faltered. He recalls an entrepreneurship course at UBC, his first foray into the field, where you designed a business idea, plan, and pitch. For Praveen, it was very formative in terms of what he wanted to do, but also just fun! However, he did not go on to pursue entrepreneurship immediately; upon graduating he worked at KPMG for five years and earned his CPA designation. A few years later, both Praveen and his father were named CPA Fellows (FCPA), an award for the distinction and honour that they brought to their profession and community.
However, his entrepreneurial instincts continued to grow, so he decided to leave KPMG and create a company with his father and brother. They initially launched a mining company in the Northwest Territories, which went on to become one of the world’s largest and last new diamond mines, Gahcho Kué. The endeavour was taken public and was a tremendous success. This led to starting their next venture: a casino company, Thunderbird Gaming. Once again, the venture was successful, went public and provided capital to fund two more ventures. One of them was Mogo: a fintech company that launched in 2003 and IPO’d on the Toronto Stock Exchange in 2015 with a subsequent listing on NASDAQ. Mogo leverages the increasingly popular blockchain infrastructure to enable users to buy bitcoin, offer personal loans, provide identity fraud protection, mortgages, a Visa Prepaid Card, and free credit score with free updates. These subsequently successful ventures offered enough financial freedom for the Varshney’s to create Varshney Capital, their own family firm.
Varshney Capital has built a portfolio with unique breadth: from crypto to resource exploration to compostable, biodegradable patented single-serve coffee pods and real estate, there was no shortage of interesting innovations to discuss with Praveen. He admitted he is no expert in each and every business sector. However, his experience of building companies has given him the foresight to invest successfully in unknown industries by focusing on criteria like strong financial control, qualitative factors (the team), and timing. Firstly, Praveen noted that if a leadership team doesn’t have a strong financial grasp of their company, it’s effectively like “driving a car without any of the instruments and panels, which will ultimately lead to crashing.” Second, his ability to read qualitative factors such as the founder’s vision, passion, and the subsequent team they’ve built is one of the most important factors. Finally, once they’ve analyzed all this, they look at the timing. Praveen admitted that he doesn’t mind being a bit late to an investment opportunity because they can analyze incumbents and tweak the venture’s offering to be more successful in landing customers and partnerships.
On Praveen’s path from a traditional accountant to a prominent serial entrepreneur to an esteemed angel investor, he has reaffirmed and developed many important values that have led to his successes. He spoke to the importance of integrity. For students, he noted that if you ever want to become a leader, “you must develop integrity.” To help build the future of business, he noted leaders must act according to principle and not simply as a result of what a situation provokes, which Praveen noted as especially important in the ambiguous world of entrepreneurship. Beyond that, he discussed the value of a personal brand. For Praveen, building a brand of clarity, consistency, and constancy is vital to ensure we leave the desired impact on our partners, customers and other external parties — if done right, it leads to opportunities as you’re reputation is one of being reliable and dependable, the “go to person” wants on their team.
Within Vancouver, there are few, if any, individuals who have made as much of a social impact through business or daily life. In early 2000, the Varshney’s played a huge role in Victoria-based Carmanah Technologies becoming the largest solar-powered LED lighting company in Canada. Praveen recounted that this venture was life-changing since it was the first time he had positively affected the planet while simultaneously delivering value to shareholders. That awakening has helped shape the Varshney Capital portfolio as it now holds a number of social impact enterprises such as Little Kitchen Academy, a franchise of cooking schools for kids, and NEXE Innovations, an advanced materials company that has developed a plant-based, fully compostable coffee pod for use in Keurig brewing systems. Aside from the obvious tech ventures that have helped drive momentum in Vancouver’s startup ecosystem, Praveen feels that the continued acceleration of the market is largely thanks to such socially-driven businesses that have helped increase the amount of capital available in the market. Further, beyond his financially invested ventures, Praveen spends a tremendous amount of time helping the community by being a partner with Social Ventures Partners Vancouver, a community of philanthropists that catalyze the growth of non-profits via a unique venture model consisting of capacity-building expertise and funding. He’s also involved with numerous other charities such as Room to Read, The Dalai Lama Centre for Peace & Education, Foundations for Social Change, Covenant House and Instruments Beyond Borders.
Praveen and Varshney Capital Corp have become one of the most significant players in Vancouver's startup ecosystem. They have helped pave a path for many others to follow, particularly by mentoring many entrepreneurs along the way. Ultimately, Praveen has built a career where
“every day is a Monday”: a day full of new ideas, inspiration and opportunity.
He mentioned that while many of his partners are beginning to consider retirement, it is far off for him as he is still having “way too much fun,” which is certainly great news for Vancouver! | https://medium.com/new-business-review/making-every-day-a-monday-praveen-varshney-varshney-capital-c3a643d8cf63 | ['New Business Review'] | 2021-04-15 16:43:53.233000+00:00 | ['Vancouver Startups', 'Social Enterprise', 'Venture Capital', 'Technology'] |
1,239 | My Positive Tech Predictions for 2021! | 1. Rise of a Public Cloud playing Green
Never has it been so easy to provision workloads in the public cloud. It got me wondering: sometimes we use serverless, and other times we run on dedicated compute that's been provisioned.
Unless you are spending time scaling the latter, you will have capacity which isn't doing anything but burning energy (forget cash 😂).
What if your public cloud workload got an energy rating, like your fridge?!
While I free-type, it also makes me think this will inevitably play into a serverless-first mindset where you only pay for what you use. The same would apply to energy. That's not to say you could be ignorant when designing serverless workloads — for instance, you still assign resources to a Lambda, but then, do you even need to call it? Could we see a shift to "green patterns" or green availability zones in the public cloud?
I live in the West of Ireland, and right now AWS is buying wind turbine capacity to offset its data centre power usage. BUT this doesn't necessarily directly power the data centres — it offsets them. Is this good enough? (Maybe I've gone down a rabbit hole here 🙂)
2. Multi Cloud Swings and Roundabouts
I work in technology professional services; personally, I would ask anyone adopting a multi-cloud strategy outside of a vendor product to justify where their business case is for multi-cloud. BUT with the public cloud providers outside of AWS maturing, I can see this opening up more.
3. Drone delivery to your garden
In the West of Ireland, we are seeing drones now trialing delivery services in Oranmore, Galway. Once drone technology can move beyond point-to-point deliveries, I think we will see a rapid rise in drones operating localised delivery services.
4. Global talent pools
I’m privileged to be working for a fortune 100 company that’s trying to go global. I also work with a team that’s distributed across Ireland and Poland.
I've been working remotely now for 5 years, and I believe it doesn't matter where in the world you are if you can get the job done. In a thriving market for technology professionals, companies will start to look outside of the large technology centres and start to think about global talent.
5. Evolution of remote working
As companies dump their commercial footprints in 2021, I think we might see more structured options around remote working. If the new norm for technology working is asynchronous working patterns, might we see a new suite of tooling to support a market for these new ways of working?
Will business travel crash as a result of 2020? That would surely be a good thing for those of us with families. The days of the CEO landing in for the day and everyone having to be at their desks are surely over.
I mean who would have thought the office 2020 Christmas party would be remote!!! | https://medium.com/@belfastnerd/my-positive-tech-predictions-for-2021-97de024304b8 | ['Ben Steele'] | 2020-11-25 08:06:58.800000+00:00 | ['Predictions', 'Future', '2021', 'Tech', 'Technology'] |
1,240 | An overview of digital marketing technologies | Earn more |
If you are working in digital marketing these days, you may be bombarded with all sorts of technologies on a daily basis that can help you do something in one form or another.
You will see countless demos of countless companies.
You've seen the brand-new startups diving into the next big thing for decades, and if you're confused and overwhelmed, don't worry — you're not alone. We are going to break it down. Using the Digital Marketing Technology Stack Framework that we've used over the years at our firm Cardinal Path, we'll focus on the four major components of tools and technology and explain where each one can help you — and where it won't be able to.
Read more by clicking the link: https://www.digitalper.com/2020/12/An-overview-of-digital-marketing-technologies.html | https://medium.com/@haseebalee1472/an-overview-of-digital-marketing-technologies-earn-more-dd3afac75493 | [] | 2020-12-20 09:36:21.241000+00:00 | ['Digitalpercom', 'Digital Marketing', 'Marketing Technology', 'Online Earning', 'Medium'] |
1,241 | Cypher Sleuthing: Dealing with Dates, Part 1 | No matter what database, programming language, or webpage you might be using, dates always seem to cause headaches. Different date formats require calculations between application date pickers in user-friendly formats and system dates in backend devices and data sources. Then, programming languages each have their own libraries and structures for dealing with dates, too.
This concept in the Neo4j ecosystem isn’t any less complex with Cypher (a graph query language) date formats, the APOC library date functions/procedures, and countless possible integration tools/APIs for data import and export. I feel like I’m always looking at documentation and dealing with lots of trial and error in order to format the date just right. You may have heard about “dependency whack-a-mole,” but dates are another aspect of programming that can feel like whack-a-mole, too.
In this post, I will do my best to provide you with the tools for less random whacking and more accurate decision making when it comes to formatting dates with Cypher. You can follow along by launching a blank sandbox (free) and copying the Cypher into the browser or tweaking and running the queries for your own data set. Let’s dive in!
Time Conundrum
The general concept of time is rather confusing, and one that I did not realize was quite so complex. There have been a number of humorous and eye-opening content pieces around time being the programmer’s nightmare. Why is that?
First, standard measures of time aren’t always true. The number of hours in a day can vary depending on daylight savings time (and geographies changing at different points during the year), days in a month can vary by month and leap years, and weeks in a year can vary depending on the day of the week Jan 1st falls on and leap years. Time zones are another matter entirely. Countries change time zones somewhat frequently and different eras in the past had entirely different calendars and time zone structures.
There is a humorous and sobering comprehensive list of one programmer’s experiences of time variance, as well as an entertaining video on time zones from a programmer’s point of view. It was very valuable and educational for me to see how much time can morph, making it exceptionally complicated to calculate and present a consistently accurate measure of time. Also, thank you to my colleagues @rotnroll666 and @mdavidallen for those links. :)
Cypher Dates
Let’s start at the base with Cypher date formats. For this, we can go to the official Cypher manual and take a look at the two different sections that cover dates. The first section is for the date and temporal data types themselves. The second section is for instant and duration calculations using functions. We’ll stick with just the instant today and worry about durations and other details in another post.
The date and temporal data types in Cypher are based on the ISO 8601 date format. It supports three different categories of time: date, time, and timezone. Within those three categories are the instant types Date, Time, Datetime, LocalTime, and LocalDatetime. There are also three ways to specify timezone — 1) with the number of hours offset from UTC (e.g. -06:00), 2) with a named timezone (e.g. [America/Chicago]), 3) with the offset and name (e.g. -0600[America/Chicago]).
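For instance, all three timezone styles are accepted by the datetime() constructor — here is a quick sketch (the instant itself is arbitrary):

RETURN datetime('2021-03-01T08:00:00-06:00') AS offsetOnly,
       datetime('2021-03-01T08:00:00[America/Chicago]') AS namedZone,
       datetime('2021-03-01T08:00:00-06:00[America/Chicago]') AS offsetAndName;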
For this blog post, we won’t explore the LocalTime and LocalDatetime types. These types are the exception to most rules and are very rarely required because they leave valuable timezone information out of the temporal value.
Alright, let’s stop discussing concepts and see Cypher temporal types in action. We will create a few different dates using the instant types, then handle some timezone examples.
Example 1: Setting a node property to current datetime.
MERGE (b:BlogPost)
SET b.publishedDatetime = datetime()
RETURN b.publishedDatetime;
NOTE: You might notice the literal T between the date and time values. This vital little connector is easily forgotten and something we’ll need to keep in mind when we start doing translations and conversions with other formats!
Example 2: Setting a relationship property where date value equals a specific string.
MERGE (e:Employee)-[rel:ASSIGNED]->(p:Project)
SET rel.startDate = date('2021-02-15')
RETURN rel.startDate;
Example 3: Setting a node property to time with time zone.
MERGE (s:Speaker {username: 'jmhreif'})-[rel:PRESENTS]->(p:Presentation)
SET p.time = time('09:30:00-06:00')
RETURN p.time;
Example 4: Setting a node property to full date time (with time zone).
MERGE (c:Conference)
SET c.startDatetime = datetime('2021-03-01T08:00:00-05:00')
RETURN c.startDatetime;
To round out our instant types section, you can specify the date as parameters to the instant, and you can also access individual pieces of the instant. I haven’t run across cases where the parameter-like definition of the date is required, but I’m sure it was built in for a reason!
Here are a couple of examples.
Example 5: Setting date property using parameter-style format.
MERGE (p:Project)
SET p.expectedEndDate = date({year: 2021, month: 9, day: 30})
RETURN p.expectedEndDate;
Example 6: Setting date using date component.
MERGE (c:Conference)
SET c.year = date().year
RETURN c.year
Example 7: Find blog posts published in March using date component.
MATCH (b:BlogPost)
WHERE b.publishedDatetime.month = 3
RETURN b.publishedDatetime;
Example 8: Return date component (dayOfWeek) of created node.
MERGE (b:BlogPost)
SET b.publishedDatetime = datetime()
RETURN b.publishedDatetime.dayOfWeek;
NOTE: dayOfWeek has Monday as the start of the week. Since I’m writing this on Tuesday, these results are accurate. :)
Getting to Neo4j-Supported Date Formats
Now, these are great if you have a date/time value that is already formatted for ISO 8601. But what happens when you don’t? How do you translate a date into something Cypher will understand and Neo4j will store?
In this post, we will stick to what are probably the most common temporal measurements — i.e. year, month, day, hour, minute, second. For weeks, quarters, milliseconds, and so on, check out the docs. Also, recall that a literal T character is required between date and time in a combined value, so we'll have to keep that in mind.
We will look at the following scenarios to get the dates converted to values Neo4j and Cypher can read:
1. Epoch time (formatted in seconds or milliseconds)
2. Other date string formats (yyyy-MM-dd HH:mm:ss and similar)
3. Multi-conversions (one conversion wrapped in another on one line)
Epoch Time
The website epochconverter.com defines epoch time as follows:
"… the Unix epoch (or Unix time or POSIX time or Unix timestamp) is the number of seconds that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap seconds (in ISO 8601: 1970-01-01T00:00:00Z)".
This website is really easy to use, and I visit it quite frequently for ad hoc conversions or example dates to use. As an example of epoch time and other date formats, here is the same date in three formats:
Human-readable: Monday, March 1, 2021 12:00:00 AM
ISO 8601: 2021-03-01T00:00:00Z
Epoch time (seconds): 1614556800
Cypher does have the capability to convert epoch values for certain cases, though the syntax is a bit different than the conventions we’ve seen thus far. For other types of formats, we will go to the APOC library, which is a very popular extension for Neo4j containing procedures and functions for many different utilities.
Ok, let’s see some examples of how to programmatically convert epoch time. We will use our example epoch time from above (1614556800, which is March 1, 2021 12:00:00 AM), just to keep things simple and consistent. We will show the results of the converted value, as well as the final converted Neo4j temporal value next to it.
Example 1: Epoch to datetime using Cypher
WITH 1614556800 as epochTime
RETURN datetime({epochSeconds: epochTime});
Example 2: Epoch to date string using apoc.date.format()
WITH apoc.date.format(1614556800, "s", "yyyy-MM-dd") as converted
RETURN converted, date(converted);
Now, because epoch time is a date and time in a seconds format (time-based), we are unable to convert straight from epoch time to a simple date (without time). However, we could either store it as a datetime and return date portions for queries… or we could use APOC to get our date!
Example 3: Epoch to ISO 8601 format using apoc.date.toISO8601()
WITH apoc.date.toISO8601(1614556800, 's') as converted
RETURN converted, datetime(converted);
Other Date String Formats
Now we know how to convert Unix-based epoch time, but what about strings in all different kinds of formats? How do we translate them to something Cypher will read? Cypher does accept strings and can convert strings in the ISO 8601 format to a temporal value, so we just need to convert a variety of string values to an ISO 8601 string format. We can do that using apoc.date.convertFormat() .
Note: all of the possible formats in the procedure’s third parameter below are listed here.
Example 4: Similar date format to ISO 8601 string
WITH apoc.date.convertFormat('2021-03-01 00:00:00', 'yyyy-MM-dd HH:mm:ss', 'iso_date_time') as converted
RETURN converted, datetime(converted);
Example 5: American date format to ISO 8601 string
WITH apoc.date.convertFormat('03/01/2021', 'MM/dd/yyyy', 'iso_date') as converted
RETURN converted, date(converted);
Finally, there are a few APOC procedures that deal directly with temporal values. Only one goes to a Neo4j date format, though, and it transforms a string to a temporal.
Example 6: Datetime string to Neo4j datetime
WITH apoc.temporal.toZonedTemporal('2021-03-01 00:00:00', 'yyyy-MM-dd HH:mm:ss') as converted
RETURN converted, datetime(converted);
Notice that both the results are the same, showing that the apoc.temporal.toZonedTemporal() function transforms directly to the Cypher datetime() value.
Multi-Conversions
Okay, so we have done several conversions that translate strings or epoch times to strings, but that doesn’t always get us to the Neo4j date. In order to do that, we can wrap our converted value in another conversion function. This isn’t really different from what we’ve seen before, but they can get convoluted and you might think “you can do that?” Yes… yes, you can. :)
Let’s take a look!
Example 7 (from Example 1 above): Convert epoch time to string and then to datetime
RETURN datetime(apoc.date.format(1614556800, "s", "yyyy-MM-dd'T'HH:mm:ss"));
Example 8: Convert date from Twitter API to ISO date time string, then to Neo4j datetime
RETURN datetime(apoc.date.convertFormat('Mon Mar 01 00:00:00 -0000 2021', 'EEE LLL dd HH:mm:ss Z yyyy', 'iso_date_time'));
For a reference to the letters in that date format, the documentation is here (under Patterns for formatting and parsing).
Wrapping Up
In this post, we covered most of the Neo4j-supported temporal instant types — date() , datetime() , time() — for creating the values either from a current instant or from an ISO8601-formatted string. We then saw how to use the utility functions in the APOC library to transform epoch Unix time values and strings in non-ISO8601 formats into strings or temporal values Cypher can work with.
There is so much more to explore on the topic of Neo4j dates. Next time, we will discuss Cypher durations for calculating the time between two instants or for adding/subtracting dates and amounts from temporal values.
Until then, happy coding!
Resources | https://medium.com/neo4j/cypher-sleuthing-dealing-with-dates-part-1-90eff82b113d | ['Jennifer Reif'] | 2021-07-13 03:41:23.328000+00:00 | ['Date', 'Graph', 'Cypher', 'Technology', 'Learning'] |
1,242 | CI/CD Pipeline with Cloud Build and Composer (with Terraform) | Hey
Sometimes I use some Google tutorial to do some training. But I like to automate (yes, I know you know!). So, let's talk about CI/CD for data processing in GCP. I'm going to use this tutorial:
In summary:
This tutorial describes how to set up a continuous integration/continuous deployment (CI/CD) pipeline for processing data by implementing CI/CD methods with managed products on Google Cloud. Data scientists and analysts can adapt the methodologies from CI/CD practices to help to ensure high quality, maintainability, and adaptability of the data processes and workflows.
Looks good, huh? We will use 5 things here:
Terraform — The tutorial it's a hands-on, but I will "transpose" to Terraform. 🤓 Cloud Build — Similar to Jenkins, where we will create the pipelines, triggers, … Cloud Composer — It's a managed Apache Airflow in GCP. We will use to define the steps of the workflow, like start the data processing, test and check results. Dataflow to run a job in Apache Beam as sample. There's also Cloud Source Repositories, that is the "GitHub" from Google (but reeeeeeaaaly far away from GitHub).
All the code can be found here: CI/CD Repository
First thing: we need a user with "Owner" permission on some folder (I will not create this at the root level; there's a way to create it in a specific folder. And I know Owner is not the best way to grant permission, but this is for test purposes). You can get the list of folders in GCP with this command:
gcloud resource-manager folders list --organization=<Your Org ID>
Cool! Now update the terraform.tfvars file (I'm using Terraform version 0.13.6) in the bootstrap folder. The file is really simple!
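It looks roughly like this — a sketch only, since the variable names here are my guesses (check them against the repo's variables.tf) and every value is a placeholder:

org_id          = "123456789012"          # gcloud organizations list
folder_id       = "345678901234"          # from the folders command above
billing_account = "ABCDEF-012345-6789AB"  # gcloud beta billing accounts list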
From here, please note that this is a PAID test. Some resources will charge you, so remember to delete the project when finished. :)
Run the Terraform steps:
terraform init
terraform plan (Good to review, right?)
terraform apply
You should see something like this:
Plan: 54 to add, 0 to change, 0 to destroy.
The apply process should take 30 minutes. Just go get some coffee.
The output should return this:
Take note of Cloudbuild project and csr_repo.id.
You should be ready to go! If you go to Cloud build, you will see 2 triggers:
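Under the hood, each trigger maps to a google_cloudbuild_trigger resource in Terraform. Here is a minimal sketch of what the plan trigger could look like — the repo name matches the CSR repo we clone below, but the variable name and build-config path are my assumptions:

resource "google_cloudbuild_trigger" "plan" {
  project     = var.cloudbuild_project_id # assumed variable name
  description = "Plan/test pipeline for every branch except master"

  trigger_template {
    repo_name    = "gcp-cicd" # the CSR repository
    branch_name  = "master"
    invert_regex = true       # fire on any branch that is NOT master
  }

  filename = "cloudbuild/plan.yaml" # assumed path to the build config
}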
Composer is now created also:
Let's test our plan trigger. Just to understand: every commit to a branch other than "master" will execute the plan trigger. Let's see. First, let's clone the CSR repository (go outside of the code that you cloned before):
gcloud source repos clone gcp-cicd --project=<CloudBuild Project ID>
Now change to a different branch (I will use plan) and copy everything inside source-code from our previous repo into this one (change the command according to your actual path).
git checkout -b plan
cp -rf ../gcp-cicd-terraform/source-code/* .
git add -A
git commit -m "First Commit"
git push --set-upstream origin plan
If you check your Cloudbuild page, you will see the plan started:
If you open you can see all steps and information:
In AirFlow UI, you can see DAG information:
And DataFlow the Job Graph:
So, what happened? This:
1. A developer commits code changes to Cloud Source Repositories.
2. The code changes trigger a test build in Cloud Build.
3. Cloud Build builds the self-executing JAR file and deploys it to the test JAR bucket on Cloud Storage.
4. Cloud Build deploys the test files to the test-file buckets on Cloud Storage.
5. Cloud Build sets the variable in Cloud Composer to reference the newly deployed JAR file.
6. Cloud Build tests the data-processing workflow Directed Acyclic Graph (DAG) and deploys it to the Cloud Composer bucket on Cloud Storage.
7. The workflow DAG file is deployed to Cloud Composer.
8. Cloud Build triggers the newly deployed data-processing workflow to run.
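To make steps 3 and 5 concrete, here is a rough sketch of what such a build config could contain — the builder images are real Cloud Build builders, but the substitution names and paths are my assumptions, not the tutorial's exact file:

steps:
- name: 'gcr.io/cloud-builders/mvn'
  args: ['package'] # build the self-executing WordCount JAR
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'target/*.jar', 'gs://${_TEST_JAR_BUCKET}/'] # step 3
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['composer', 'environments', 'run', '${_COMPOSER_ENV}',
         '--location', '${_COMPOSER_REGION}',
         'variables', '--', '--set',
         'dataflow_jar_file_test', '${_JAR_NAME}'] # step 5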
Cool, our process is now working in plan/test!! Now we can just run the apply pipeline for prod!
For this article, I will do a manual deployment to production by running the Cloud Build production deployment build. The production deployment build follows these steps:
1. Copy the WordCount JAR file from the test bucket to the production bucket.
2. Set the Cloud Composer variables for the production workflow to point to the newly promoted JAR file.
3. Deploy the production workflow DAG definition to the Cloud Composer environment and run the workflow.
There are some ways to automate these steps with a Cloud Function, or even during the plan pipeline, but the idea here is just to understand a simple flow. So, first we need to get the JAR filename to update our trigger. Let's use a gcloud command:
gcloud composer environments run <COMPOSER_ENV_NAME> \
  --location <COMPOSER_REGION> variables -- \
  --get dataflow_jar_file_test 2>&1 | grep -i '.jar'
Now that we have this, let's set this value on the Apply trigger. Go to Cloud Build and edit the apply trigger (change "_DATAFLOW_JAR_FILE_LATEST" to the result from the command above):
Now let's run the trigger (just run):
Let's check:
Now we have the DAG deployed to Composer. You can see it if you go to the Airflow UI:
Let's just run the job. In the Airflow UI, just click on "Trigger DAG":
Now you can go to Dataflow and check the job:
And that's it! You now have a CI/CD pipeline that you can use for data processing, or any other kind of process.
To destroy the resources, simple: just go inside the bootstrap folder and run:
terraform destroy
I hope you liked this! As always, feel free to reach out, provide feedback, anything!!
Stay safe, folks! | https://medium.com/marcelo-marques/ci-cd-pipeline-with-cloud-build-and-composer-with-terraform-379a05a4ca09 | ['Marcelo Marques'] | 2021-04-25 17:30:28.103000+00:00 | ['Gcp', 'Technology', 'Ci Cd Pipeline', 'Google Cloud Platform', 'Terraform'] |
1,243 | LiDAR and cameras: automation and autonomy | Human Eye — Camera — LiDAR
Photo by Amanda Dalbjörn on Unsplash
Short intro to perception
We all have a working understanding of a camera image, as the image representation maps well to how we perceive the world. A camera image is a 2D projection of a 3D scene where each pixel carries color/brightness information. Let's assume an RGB pixel format: 3 values that, combined, represent practically all that we can decipher color-wise. Make it 4K HDR 30 fps and you get very close to a human eye. This is of course an oversimplification for the sake of illustration, as the human eye has very high resolution on the focused object and poor peripheral vision. Humans have a native ability to understand and infer information from a camera stream — we can distinguish distances, surfaces, and 3D shapes based on light and its reflectivity; we can identify objects, estimate their relative sizes, and predict their relative volumes (sort of).

On top of that, we have some very specialized skills, like the ability to immediately spot a person looking at you in a crowd, perform ultrafast facial recognition, or detect a repeating pattern with no training. It will take us a while to get there technologically. Cameras excel at capturing an image, usually aiming to realistically depict a scene to a human. However, cameras can easily beat humans at capture speed. We recognize individual images at about 10 frames per second. In other words, we can't comprehend more than 10 images in a second; if shown, say, 10 images in half a second, we would not be able to tell what we had seen on each of them. Cameras can unlock that frame rate much higher — effortlessly reaching 300 fps. At this point you may wonder whether we can effectively consume 300 frames provided by a camera in a second, and we will get back to this question later.

It's time for the resolution! As mentioned above, 4K (about 8 MP, 2000+ px height) is roughly equivalent to human eyesight. Industrial-grade cameras range from 0.4 MP (500 px height) to 40+ MP (5000 px height). And here comes a LiDAR with its 32 px height image, provided at 20 fps and costing ballpark $4000. A medium-size icon in Windows 10 is 48 pixels high. Quite a rip-off, it seems, for an imaging system that provides icon-height images for $4000.

However, we are no longer in the RGB pixel format, and there are a lot of horizontal measurements to make sense of the space. This time we use 4 to 5 parameters to describe a single point in space using spherical coordinates — radius, elevation, azimuth (think of these as an equivalent of X, Y, Z), reflectivity, and potentially a noise reading (noise is equivalent to a monochromatic camera capture). Reflectivity can tell you about the surface that bounces off the light — a practical use case is street signs or reflective street lane paint markings. Alternatively, if you assume somewhat uniform reflectivity of an object's surface — say, a human — the reflectivity tells you the angle of the surface the light beam reflected from.
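As a quick illustration of that spherical-to-Cartesian mapping, here is a minimal sketch in Python — it assumes elevation is measured from the horizontal plane and both angles are in radians:

import math

def lidar_point_to_xyz(r, elevation, azimuth):
    """Convert one lidar return (range in meters, angles in radians) to X, Y, Z."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z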
This is me holding a large tea cup described with 14 lidar channels. Cheers! Warmer color means higher reflectivity; notice the blue stroke — it implies rounded edge of an object as the beams reflect at a different angle.
Pay attention to the reflectivity values in the image provided. This information allows accurate prediction of the 3D shapes of objects, which happens to be very useful combined with an exact location in space. On top of that, the laser beam works in full sun and in complete darkness with (virtually) no accuracy loss. Driving towards the sun 1 hour before sunset makes a camera's performance less than stellar, as does driving at night. | https://medium.com/@stubbb/lidar-and-cameras-automation-and-autonomy-fa7ea2d34f3a | ['Marcus Gee'] | 2020-10-16 20:02:23.237000+00:00 | ['Technology', 'Tech', 'Autonomous Vehicles', 'Lidar', 'Autonomous Cars'] |
1,244 | Is Nanox ($NNOX) Over or Under Valued? | I have been looking into Nanox Imaging for a while. In fact, first I posted about their IPO. Then I wrote an article about whether Nanox can garner long-term support. I thought I wouldn't need to talk about Nanox for a while afterwards, but then firms shorted it. My last two articles were in response to Citron and Muddy Waters Research. In fact, this draft was slated for the 21st of September, but a lot has already happened: the day I was going to release this post, it was delayed because of the latest short report, which I believe landed on an SSR day. The question now remains: is Nanox over- or undervalued? This paper will offer my few cents (confirmation bias and all) on why I am bullish on Nanox.
First let us start with the total addressable market. Zion Market Research states that by 2025, the global medical imaging market will reach $48.6 billion. You also need to consider that Nanox wants to target the “2/3rds of the world not scanned”. This can set Nanox to become a key player in the medical imaging market, but is speculative. Hence why, I’m excited.
Nanox Imaging also has a whole bunch of intellectual property. In fact, their patents may give them a competitive edge, or a barrier to entry, in comparison to some of the bigger players. One of the top things Nanox has going for it is the opportunity to still be considered early to the market while holding a barrier to entry.
Analysts have also been seemingly positive on the sell side. You can see here that three analysts rated it, with two buys and one hold. If you look at the Zacks Rank scorecard, however, three researchers are giving it a hold. Zacks Rank puts them in the top 31% of their sector, but the bottom 35% of their industry. That isn't too bad considering it just IPO'd and was then shorted. The industry is also quite large. However, with a positive buy side and others indicating hold, a moderate buy isn't bad.
The tool WalletInvestor also seems bullish, but it isn't that reliable given that its targets change a lot and it carries a 70.39% accuracy metric. However, despite the fluctuations, I generally find WalletInvestor quite useful. ChartMill has given it a low setup rating, but for the past two weeks the data that showed up on the ChartMill page was inaccurate; it was only recently updated. It is still too early for a TA rating from them, and I am more excited about what the analysts have been saying.
I also previously came out with a chart on why I might be super long on Nanox. This is something I have mentioned many times before, and many of the reasons still stand.
Source: algowins.com
Looking at the table above, recent short volumes trending around 40% or more show some bearish correlation. However, you have to keep in mind the short-sell reports that came out and the float percentage. With what has been going on with Citron, Empirical Research, and Muddy Waters Research, I wouldn't be surprised if the resistance level pops soon.
Source: Finviz
Also, looking at what analysts are expecting for year-over-year growth, as well as current ratios and debt, I don't see this business being run into the ground any time soon. I see the potential of a financially responsible business with decent market growth, tackling different aspects of the market.
Also, look at the MSaaS model they have (from their presentation):
MSaaS Slide
True, this says potential. However, they have already been able to garner a variety of partnerships and distributors, and they are working extensively as we speak. This includes the SPI Medical agreement, The Gateway Group, and the factory (amongst others).
You can see that they were outperforming the S&P 500 before the short-sell reports, and how much the volumes increased after those reports came out. Even though the short percentage isn't too high when looking at the stock's overall history, it has been rising incrementally. There are reasons to believe that the shorts could potentially be "squeezed" soon, especially looking at the short interest versus the support levels.
Another thing you need to consider is the executive team. The CFO is Itzhak Maayan, who was part of Perrigo, a previous CFO at Cisco Systems in Israel, an early CFO at XTIVIA, and a CFO for Kulicke & Soffa. He also has prior audit experience. It is almost as if the company went out of its way to find the best CFO in Israel.
Even Morningstar, which rarely gives five-star prices, at least rated Nanox as fairly valued. I personally think a market cap of $5 billion for a company around Nanox's current size isn't too far-fetched.
Other sectors that Nanox’s tech can disrupt include:
X-Ray imaging for airports
The manufacturing industry
Self-Driving Cars (AI and Chip design)
and many others. Nanox is currently focused on medical imaging, but who's to say licensing or non-medical deals are off the table? I think people are potentially undervaluing or underestimating Nanox, and the future might be quite surprising. That being said, it seems like anything can happen.
I do engage in research related to the sensory tech and medical device industries. However, I have no relationship with Nanox Imaging and am not working with them. My only current conflict of interest is being long on its stock.
Disclosure: Please keep in mind that everything I say is opinion-based. This is not actionable financial advice. Do your own due diligence; any trades or investments you make are at your own risk. We are not responsible, so proceed with caution; we are voicing an opinion, not a recommendation to act. | https://medium.com/quantportal/is-nanox-nnox-over-or-under-valued-1d3904dc90f5 | ['Andrew Kamal'] | 2020-09-23 03:26:42.552000+00:00 | ['Stock Market', 'Stocks', 'Technology', 'Medical', 'Startup'] |
1,245 | Demystifying Big Data! | Having attended several conferences and fairs this year, I came to realize that a lot of people have quite an arbitrary understanding of what is meant by Big Data. The term has become a buzzword, and lots of people use it in their pitches when explaining their business models.
Some talk about Big Data as all the different customer data that they collect. Others use the term when talking about streaming real-time data into their platforms. In less sophisticated environments, Big Data is reduced to a specific number, like 100 terabytes or 1–2 petabytes. So what, specifically, is Big Data?
Data is creating new jobs and changing our economy. A study by the McKinsey Global Institute predicts that by next year the U.S. alone will fall short by nearly 200,000 jobs in this sector. Hence, the need to understand Big Data is increasing.
This article shares a more scientific definition and understanding of Big Data. In general, we could say that the term Big Data is evolutionary: with the constant increase in computational capabilities, the amount of data that can be handled is also changing.
Historically, a one-terabyte data warehouse used to be big data. Nowadays, however, we have data warehouses that are able to store petabytes of data, and analytic tools that can handle huge amounts of it.
In general, Big Data has always expressed some sort of overwhelming amount of data: something that cannot be handled by common information technologies. During my studies I came across a definition I am more fond of.
The 3–5 Vs of Big Data
The 5 Vs are more accurate in describing this overwhelming amount of data. They characterize Big Data in a way that makes it easier to understand when data becomes big.
Variety, Volume and Velocity
The main three are variety, volume, and velocity. Variety describes the different forms this data can take: unstructured and semi-structured data have become as strategically important as traditional structured data.
Volume refers to the sheer amount of data coming in from all the different sources. All of these streams need to be captured, and they need to be stored over longer periods.
Finally, velocity describes the speed at which this data comes in. If we think about machine data, for example sensor data from cars, we get new inputs every millisecond.
The three Vs are explained by Hugh Watson in his "Tutorial: Big Data Analytics: Concepts, Technologies, and Applications."
The two additional Vs
While the three previous Vs describe the basic characteristics of this data, the following two Vs explain the importance of filtering and handling the data appropriately.
Even if we want to capture everything we can, we still need to ensure that we can trust the data source. Hence, the Veracity of the data needs to be ensured.
Additionally, we need to be able to extract strategic information from this data. Ultimately we need to provide Value.
Please leave a comment with your thoughts and ideas.
My series of posts is about making you think a little deeper about everyday concepts. I look forward to having you follow along and reading what you throw at me. Peace!
Twitter: @tkronsbein
Instagram: @tizian_kronsbein
Website: www.tiziankronsbein.com
References:
Watson, Hugh J. (2014) "Tutorial: Big Data Analytics: Concepts, Technologies, and Applications," Communications of the Association for Information Systems: Vol. 34, Article 65. Available at: http://aisel.aisnet.org/cais/vol34/iss1/65
Manyika, J., M. Chui, B. Brown, J. Bughin, R. Dobbs, C. Roxburgh, and A.H. Byers (2011) “Big Data: The Next Frontier of Innovation, Competition, and Productivity,” McKinsey Global Institute, May. http://www.mckinsey.com/Insights/MGI/Research/Technology_and_Innovation/Big_data_The_next_frontier_for_innovation | https://medium.com/d-lighted/being-on-several-conferences-and-fairs-this-year-i-came-to-realize-that-a-lot-of-people-have-quite-ead94af140d8 | ['Tizian Kronsbein'] | 2018-02-02 09:35:49.119000+00:00 | ['Data Visualization', 'Data Science', 'Technology', 'Big Data', 'Data'] |
1,246 | RealBlocks Supply Partners | RealBlocks is a blockchain-based real estate marketplace for parties to raise and invest capital. Our focus is on providing quality inventory and intriguing opportunities for the investors on our platform. This inventory will come from our partners — brokers, REITs, family offices, and crowdfunding platforms. Whether these inventory partners are looking to increase property exposure, access capital on the blockchain, dilute their stake in a property or portfolio, or raise money for a property or project, RealBlocks is able to deliver value for all of the above participants. By lowering minimums to invest and enabling cryptocurrency holders to convert to USD and transact on our platform, we are significantly expanding the accessible pool of capital.
Supply Partners
RealBlocks Supply Partners
Brokers
While many real estate blockchain companies are aiming to eliminate the need for a broker, we at RealBlocks believe that brokers add definite value to real estate transactions. Therefore, we are working ‘with’ the brokerages instead of ‘against’ them. Firstly, RealBlocks provides brokers a way to access capital on the blockchain that has previously been inaccessible. Brokers can leverage our platform to transact more deals; brokerages in Texas, New York, California, and Florida have already been creating an edge over their competitors by enabling cryptocurrency holders to transact. Secondly, as all the transactions on the platform are carried out in fiat currency, the processes and workflows from the broker's perspective remain the same. Finally, our commission structure sits on top of the broker's commission, thereby avoiding cutting into their margins.
REITs
Real Estate Investment Trusts give individuals the ability to own shares of income-producing real estate without having to purchase entire assets. RealBlocks is helping REITs raise capital through a fund-of-funds approach. This will enable users to invest in micro-shares of REITs, resulting in lower minimums and the ability to build incredibly diversified portfolios across different geographies and asset classes. In general, publicly traded REITs can be liquidated on public exchanges; however, non-traded REITs tend to be longer-term investments with limited ability to liquidate. Non-traded REIT holders will be able to take advantage of liquidating their holdings on our platform.
Family Offices
Traditionally, private placements in real estate are an illiquid market limited only to accredited investors. There are high minimum investments ranging from $50,000 to $500,000 and no secondary market to exchange interest. Family offices have the ability to raise capital on our platform through private placements for individual properties or a portfolio of properties. Owners will be able to dilute their equity stake with the optionality of selling percentages of equity. This grants owners more liquidity of their holdings to an increasing investor base of accredited and non-accredited investors (depending on the security offering) who can participate at lower minimums and exchange on a secondary market.
Online Investing / Crowdfunding Platforms
Developers typically raise capital through debt and equity markets comprising individuals or investment funds. Raising capital from individuals can be a time-consuming process and directs focus away from the project at hand. As a result, developers often opt to raise capital from larger investment funds, as it is timelier. However, developers are then sacrificing a controlling stake in the project, and focus may be forced toward increasing investor returns rather than the original intentions. Online investing/crowdfunding platforms increase exposure for developers and allow investors to deploy capital into projects in an efficient manner. Our value adds another layer on top of this, as we provide a seamless way to unlock capital they don't traditionally have access to. On RealBlocks, developers can leverage the platform to raise capital for projects as well as dilute their equity stake in properties they own and manage. These options provide developers further flexibility and increased exposure to capital.
Bringing It All Together
The United States is the most stable and secure country for real estate investment. However, a majority of the country, and the world for that matter, never considers owning investment properties because of the high capital requirements. Providing lower investment minimums makes it possible for lower- and middle-income earners to invest in real estate at their discretion. Cryptocurrency holders are another underrepresented demographic in the real estate market. On the RealBlocks platform, these investors will be able to seamlessly buy or sell real estate by converting their crypto to USD. These features of RealBlocks give suppliers access to the $400B of capital on the blockchain as well as to lower- and middle-income earners.
RealBlocks Social
Telegram: https://t.me/realblocks
Twitter: https://twitter.com/real_blocks
Reddit: https://www.reddit.com/r/RealBlocks/
Instagram: https://www.instagram.com/realblocks_/
Facebook: https://www.facebook.com/RealBlocks-164531427480664/ | https://medium.com/realblocks-blog/realblocks-supply-partners-7b44cdaac797 | ['Roderick Gill'] | 2018-05-23 16:33:03.119000+00:00 | ['Startup', 'Real Estate Investments', 'Cryptocurrency', 'Blockchain', 'Technology'] |
1,247 | Stop Using Square Bracket Notation to Get a Dictionary’s Value in Python | The Traditional (Bad) Way to Access a Dictionary Value
The traditional way to access a value within a dictionary is to use square bracket notation. This syntax nests the name of the term within square brackets, as seen below.
author = {
    "first_name": "Jonathan",
    "last_name": "Hsu",
    "username": "jhsu98"
}

print(author['username'])        # jhsu98
print(author['middle_initial'])  # KeyError: 'middle_initial'
Notice how trying to reference a term that doesn't exist causes a KeyError. This can cause major headaches, especially when dealing with unpredictable business data.
While we could wrap our statement in a try/except or if statement, this much care for a dictionary term will quickly pile up.
author = {}

try:
    print(author['username'])
except KeyError as e:
    print(e)  # 'username'

if 'username' in author:
    print(author['username'])
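A minimal sketch of the safer alternative the title alludes to: the dictionary's built-in get() method, which returns None (or a default you supply) instead of raising a KeyError:

```python
author = {
    "first_name": "Jonathan",
    "last_name": "Hsu",
    "username": "jhsu98"
}

# get() returns None for a missing key instead of raising KeyError
print(author.get("username"))        # jhsu98
print(author.get("middle_initial"))  # None

# An optional second argument supplies a fallback value
print(author.get("middle_initial", "n/a"))  # n/a
```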
If you come from a JavaScript background, you may be tempted to reference a dictionary value with dot notation. This doesn’t work in Python. | https://medium.com/better-programming/stop-using-square-bracket-notation-to-get-a-dictionarys-value-in-python-c617f6ea15a3 | ['Jonathan Hsu'] | 2020-05-06 01:15:37.387000+00:00 | ['Software Development', 'Programming', 'Data Science', 'Python', 'Technology'] |
1,248 | How Do You Find Malware? Microsoft and Intel Try Converting It Into 2D Images | To pull this off, the companies converted the malware's programming into a one-dimensional stream of digital pixels. As their study explains, each byte in the malware's code can be mapped to a different pixel intensity level.
The researchers then expanded the pixel streams into 2D images by using the malware’s file size after the conversion to determine the width and height. This allowed the Microsoft-Intel antivirus program to see the malware’s characteristics and train itself to discern them.
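As a rough illustration of the idea (this is my own minimal sketch, not Microsoft or Intel's actual STAMINA pipeline, and the width-selection rule here is an assumption):

```python
import math

def bytes_to_image(data: bytes):
    """Turn a raw byte stream into a 2D grid of pixel intensities (0-255)."""
    width = max(1, int(math.sqrt(len(data))))  # real schemes pick width by file size
    pixels = list(data)  # each byte already maps to an intensity in 0-255
    remainder = len(pixels) % width
    if remainder:  # pad the last row with zeros so the grid is rectangular
        pixels.extend([0] * (width - remainder))
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

with open("sample.bin", "rb") as f:  # hypothetical input sample
    image = bytes_to_image(f.read())
print(len(image), "rows of", len(image[0]), "pixels")
```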
The approach, dubbed STAMINA, is showing some promising results. In a test using real-world malware samples, the antivirus program achieved 99.07 percent accuracy with a false-positive rate of 2.87 percent.
The companies developed STAMINA to address drawbacks in today’s antivirus scanning technology. The detection approaches can also involve disassembling a piece of malware into metadata to find trace signs of dangerous behavior. However, hackers are routinely coming up with ways to mask the malicious processes, making computer virus detection akin to a cat-and-mouse game.
STAMINA could potentially add a new tool to ferret out malware. “This joint research is a good starting ground for more collaborative work,” Microsoft said. “For example, the researchers plan to collaborate further on platform acceleration optimizations that can allow deep learning models to be deployed on client machines with minimal performance impact. Stay tuned.”
However, the company notes the approach does have a key limitation: it has trouble dealing with large file sizes. Converting them into a 2D image would require billions of pixels, making the detection method less practical if the malware comes bundled in a big program. | https://medium.com/pcmag-access/how-do-you-find-malware-microsoft-and-intel-try-converting-it-into-2d-images-11952c74363a | [] | 2020-05-12 14:29:35.153000+00:00 | ['Cybersecurity', 'Hacking', 'Malware', 'Technology'] |
1,249 | Automating Labs with DevOps Tools (+ Github) | TL;DR: We set up a small Windows lab with Vagrant.
In the last article we spoke about Infrastructure as Code and DevOps practices to simplify the creation of virtual machines and automate the provisioning process. In this article we’ll actually get our hands dirty and create a test environment using Vagrant, Ansible, and VirtualBox.
So the first thing we need to do is download the required software:
https://www.vagrantup.com/
https://www.virtualbox.org/
(Insert your favorite text editor here; I’ll be using Atom)
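(As a preview of where this setup is headed, a minimal Vagrantfile for a single Windows VM might look like the sketch below; the box name and resource settings are my own assumptions, not part of this walkthrough.)

```ruby
# Minimal Vagrantfile for one Windows VM on VirtualBox
Vagrant.configure("2") do |config|
  config.vm.box = "gusztavvargadr/windows-10"  # assumed community Windows box
  config.vm.communicator = "winrm"             # Windows guests use WinRM, not SSH

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
    vb.cpus = 2
  end
end
```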
Installing Vagrant isn't too difficult: just follow the wizard, and it'll add itself to your path by default. The same goes for VirtualBox; it should all work right out of the box. | https://medium.com/@ldvargas/automating-labs-with-devops-tools-github-66100ef3264 | [] | 2020-12-07 04:27:15.738000+00:00 | ['DevOps', 'Information Technology', 'Infrastructure As Code', 'Automation', 'Windows'] |
1,250 | How to Use Istio to Inject Faults to Troubleshoot Microservices in Kubernetes | How to Use Istio to Inject Faults to Troubleshoot Microservices in Kubernetes
Improve your microservices running on Kubernetes
Photo by Brooke Cagle on Unsplash
Imagine you’ve deployed the reviews-v2 microservice into production, and you have an issue. Users are complaining they can’t see the reviews in your application intermittently.
You’ve run a thorough investigation in your development environment, but you’re not able to find what the problem is.
The only way to investigate the problem is by looking at the traffic flowing through your production services. Someone from your operations team notice timeouts in the chain, and you need to investigate further.
This story is a follow-up to “Locality-Based Load Balancing in Kubernetes Using Istio.” Today, let’s discuss fault injection and troubleshooting. | https://medium.com/better-programming/how-to-use-istio-to-inject-faults-to-troubleshoot-microservices-in-kubernetes-108250a85abc | ['Gaurav Agarwal'] | 2020-06-01 14:14:45.069000+00:00 | ['Technology', 'Programming', 'Kubernetes', 'Microservices', 'DevOps'] |
1,251 | How long will Samsung support Galaxy S9 and Galaxy S9+? — TheContentGenie | Samsung previously promised to provide only two major OS upgrades for its Android smartphones. It meant that two years was as long as the Galaxy S9 and Galaxy S9+ were supported. It came with Android 8.0 out of the box and has since received its Android 9.0 and Android 10 updates.
It is sad but true that the Galaxy S9 and Galaxy S9+ will not receive another significant Android OS upgrade.
However, Samsung can bring additional features to the devices through its One UI updates; One UI is a software overlay. Samsung launched One UI in 2018 for devices running Android Pie and higher. Samsung already launched an upgraded One UI in August this year and will surely do so again in the coming months.
No Android upgrades for Galaxy S9 and Galaxy S9+.
Samsung announced last month that it's going to support select devices with three Android OS upgrades. The eligible devices don't include Galaxy S models released before the Galaxy S10; all earlier models are ineligible.
So this leaves the Galaxy S9 and Galaxy S9+ out of luck. It took everyone by surprise when Samsung announced that One UI 2.1 would be released for these handsets. Samsung eventually released One UI 2.1 for the Galaxy S9 and Galaxy S9+ three months ago. The update brought features like Music Share, Quick Share, Single Take, the new camera zoom interface, and more.
One UI is Samsung’s custom skin, and it’s not beholden to Android. Even though these devices won’t get another significant Android OS upgrade, Samsung is free to release another One UI update if it deems fit. The company might be thinking about that.
Samsung revealed the list of devices that will receive One UI 2.5 last month, including both of these handsets. Remember, this iteration of One UI was introduced with the Galaxy Note 20 series. It would be quite impressive if Samsung releases the update for these 2018 flagships. The company hasn't confirmed precisely when it's going to send out this update.
As for how long the Galaxy S9 will be supported: new Android upgrades are out of the question, and One UI 2.5 might be the last update of its kind for these devices. Samsung will continue to provide security updates for another year. The Galaxy S9 and Galaxy S9+ are currently on the monthly cycle for security updates. They will eventually be demoted to the quarterly cycle, like the Galaxy S8 and Galaxy S8+, before software support completely dries up. | https://medium.com/@nikki-slay/how-long-will-samsung-support-galaxy-s9-and-galaxy-s9-thecontentgenie-be74d209c810 | [] | 2020-09-23 15:49:50.462000+00:00 | ['Galaxy S9', 'Samsung', 'Technology News', 'Galaxy S9 Plus'] |
1,252 | DreamTeam Tokenomics | Over the past six months, DreamTeam has been working hard to add new Analytics and Coaching features as well as update existing ones. The DreamTeam platform is gaining traction and is quickly becoming the place to find players and build teams, recently surpassing 1.2M users worldwide.
After a lengthy period of development, DreamTeam is thrilled to announce the release of the DreamTeam Tokenomics paper. This document aims to provide deeper insight into the DreamTeam ecosystem and various economic factors connected to the DREAM token.
One of the primary goals of DreamTeam is to become the infrastructure platform and payment gateway for esports and gaming. With the DREAM token as the core of the DreamTeam platform and by tailoring our growth and support strategy, we will integrate the DREAM token into the entire esports industry.
You can read the DreamTeam Tokenomics paper below or download it here.
DreamTeam Token
DreamTeam Token (DREAM) is a cryptocurrency issued by DreamTeam, a utility token which is primarily used to drive all DreamTeam services and create an entire decentralized economy for esports. The DreamTeam Token is a highly-secure asset, resistant to any known attacks, and has the following two-factor security guarantee:
The DREAM token is a virtual asset run on the “Ethereum world computer”, the most stable and well-known platform for building decentralized applications to date. The DreamTeam Token smart contract, an underlying program which defines how DREAM tokens are transferred, has been thoroughly audited by Coinfabrik. The audit confirmed that the DreamTeam Token contains no security issues.
All DREAM tokens have been fairly distributed between DreamTeam contributors, advisors, and DreamTeam itself. The number of DREAM tokens issued is fixed. No more tokens can be ever issued, which is guaranteed by the DreamTeam Token smart contract.
DreamTeam tokens are involved, either directly or indirectly, in every transaction settled on the DreamTeam platform, creating demand for DREAM tokens.
Current Financial Landscape
DreamTeam platform users are able to buy DreamTeam subscriptions and services directly with DREAM tokens. Buying subscriptions and services directly with DREAM tokens will always result in a lower price. However, forcing users to use DREAM tokens to pay for subscriptions and services currently isn’t a feasible option for DreamTeam for the following reasons:
Cryptocurrency is a young and not widely understood payment option
Payments in DREAM tokens for users new to cryptocurrency require taking additional steps: signing up on an exchange, opening a wallet, etc.
Wallet maintenance difficulties: securely storing the wallet, keeping the passphrase safe, etc.
As almost every user already has a debit/credit card or PayPal account, DREAM tokens aren’t the only payment method accepted. The DreamTeam platform also accepts fiat currency, which is automatically converted into DREAM tokens. This decision was made in order to make the DreamTeam platform as user friendly as possible and increase the DREAM token volume.
Open Source Strategy
DreamTeam has active and transparent open-source contributions, which include regular smart contract updates and integrations related to the DREAM token and its ecosystem. One of the priorities of DreamTeam's open-source strategy is to provide the community with handy tools for integrating services based on the DREAM token with their applications.
An example of that is an innovative feature developed by DreamTeam: smart contract recurring billing, available for any Ethereum-based token. This feature is already live and is used for DreamTeam premium subscriptions. Recurring billing in DREAM tokens is one of the most secure ways of establishing recurring payments between customers and merchants, as the monetary relations between the parties are regulated by a smart contract. This means that the merchant cannot overcharge the customer, nor can the customer miss a payment. Furthermore, no third parties are involved in the value exchange. This will help revolutionize recurring payments not only for esports, but for any subscription-based service.
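DreamTeam's actual billing contract isn't reproduced here, but as a rough, purely illustrative Python model of the pull-payment rules such a contract enforces:

```python
class RecurringBilling:
    """Toy model of the rules a recurring-billing smart contract enforces.

    Illustration only: DreamTeam's real contract lives on Ethereum and
    differs in detail.
    """

    def __init__(self, charge_amount, period_seconds):
        self.charge_amount = charge_amount
        self.period = period_seconds
        self.last_charged = {}  # customer -> timestamp of the last charge

    def charge(self, customer, merchant, balances, now):
        last = self.last_charged.get(customer, 0)
        # The merchant cannot charge early or more than the agreed amount...
        if now - last < self.period:
            raise PermissionError("billing period has not elapsed yet")
        # ...and the customer cannot "miss" a payment if the funds are there.
        if balances[customer] < self.charge_amount:
            raise ValueError("insufficient token balance")
        balances[customer] -= self.charge_amount
        balances[merchant] += self.charge_amount
        self.last_charged[customer] = now
```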
Another useful feature which makes the DREAM token stand out is delegated token transfers. This feature allows users to transfer DREAM tokens without needing to buy Ether, something that 99.9% of other tokens require. The secure delegated transfer feature of the DREAM token drastically simplifies user experience and allows for the building of true decentralized applications. DreamTeam has the ability to support this feature in all future DREAM token integration tools.
About DreamTeam:
DreamTeam — infrastructure platform and payment gateway for esports and gaming.
Stay in touch: Token Website|Facebook | Twitter | LinkedIn |BitcoinTalk.org
If you have any questions, feel free to contact our support team any time at token@dreamteam.gg. Or you can always get in touch with us via our official Telegram chat. | https://medium.com/dreamteam-gg/dreamteam-tokenomics-7a6366949239 | [] | 2019-04-05 10:54:32.025000+00:00 | ['Blockchain', 'Tokenomy', 'Token', 'Esports', 'Blockchain Technology'] |
1,253 | Non-Fungible Tokens (NFTs): All You Need to Know | The crypto space is one of the most innovative fields that I have come in touch with. It is ever-changing, with new projects popping up and new innovative applications of blockchain technology driving it.
Just to set the context here, the crypto market can be seen as having three broad categories: Bitcoin, Altcoins, and Tokens. I don't think I have to delve much into this, but Bitcoin is the world's very first decentralized digital currency and has been receiving a lot of attention lately given its price surge. Altcoins, short for "alternative coins", are basically any other digital assets that operate on their own native blockchains; Ethereum and Litecoin fall under this category since they have their own independent blockchains. Last but not least, tokens refer to digital assets that run on the blockchain of a specific cryptocurrency.
For this article, I’d like very much to delve into a specific type of token — non-fungible tokens (NFTs). Sounds complicated, but not really.
When something is fungible, it means that it is interchangeable. For example, if you lend your friend a $10 bill, you wouldn’t mind if you get a different $10 bill back. Conversely, when something is non-fungible, it is unique and irreplaceable. For example, artwork like the Mona Lisa is definitely not interchangeable with another piece of art; flight tickets bearing your name are also not interchangeable.
So, applying that to NFTs: these tokens have smart contracts that allow detailed attributes to be recorded, such as the identity of the owner, rich metadata, or secure file links. This allows NFTs to be used as a means of proving ownership or authenticating items online.
Another notable feature of NFTs is that they are not divisible; in the same way that you can't exactly share a piece of artwork you bought by cutting it up, NFTs would be worthless if they were ever "divided".
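The article doesn't name a specific standard, but most NFTs follow ERC-721, whose ownerOf call is what makes on-chain proof of ownership work. A minimal web3.py sketch (the RPC endpoint and contract address are placeholders you'd need to fill in):

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.example/rpc"))  # placeholder endpoint

# The one ERC-721 function needed to prove who owns a given token
ERC721_ABI = [{
    "name": "ownerOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

# Replace with a real, checksummed contract address before running
nft = w3.eth.contract(address="0x0000000000000000000000000000000000000000",
                      abi=ERC721_ABI)
print(nft.functions.ownerOf(1).call())  # the owner's wallet address
```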
Examples of NFTs
One of the first-ever NFTs created was CryptoKitties, launched in 2017: really cute mutant kitties that you can breed, trade, and buy. Each of these blockchain-based cats is unique and cannot be replicated, taken away, or destroyed. Think of the KittyVerse as an upgraded version of Neopets and you've got it.
Today, we also see NFTs being used for digital assets that need to be distinguished from each other to prove their value or scarcity. In the music industry, artists have issued tokens that accord purchasers exclusive listening rights to new tracks; in the sports arena, Binance has partnered with a fintech firm to offer fan tokens for teams like FC Barcelona and Paris Saint-Germain; in the entertainment realm, the BBC has released an NFT-powered digital trading card game that turns characters into NFTs.
With more physical assets being represented by NFTs on the blockchain, I am looking forward to seeing how ownership will be redefined in the future, especially with regard to our finance ecosystem, insurance markets, and social networks.
Today, we already see a few platforms that are aiming to democratize the use of NFTs.
One of my favourite platforms to utilize would be Mintable, a marketplace to buy and sell digital items on the blockchain. Mintable allows you to buy and sell a myriad of products ranging from files, artwork, music, images and documents. Utilizing smart contracts which are already built into the system, these items live on the blockchain and when you buy them, they are deposited into your wallet. You have 100% verifiable proof that you are the owner of whatever you buy.
If you are a creator, Mintable is also a great way to further monetise your work. You can upload your pieces of work on Mintable and leverage the digital platform to provide exclusive benefits to buyers when they purchase through the blockchain. For example, you could award buyers exclusive listening rights or rare album covers that they can download after purchasing your items. For such NFTs, rarity can greatly increase the value of your product because of the collectability factor, where people will want to hold onto that item to resell it in future when its value increases. With its intuitive platform, it is perfect for those looking to make their first foray into the space and earn some crypto.
You can find out more about Mintable here.
Till next time, friends online!
Gale. | https://medium.com/@g-ale/non-fungible-tokens-nfts-all-you-need-to-know-e9c2ef7a3e70 | [] | 2020-12-20 14:13:00.904000+00:00 | ['Technology', 'Blockchain', 'Fintech', 'Tokens'] |
1,254 | Compensation Is Stuck In The Dark Ages | Pave, formerly Trove, is pleased to announce its Series A financing led by a16z with participation from Bessemer Venture Partners, Bezos Expeditions, Y Combinator, and others. Kristina Shen will join Pave’s board of directors, and Marc Andreessen will be a board observer.
I didn’t plan on starting a compensation company.
When I was at Facebook, I figured performance reviews were perfectly calibrated assessments of my abilities. And market data was accurate.
However at most companies, this could not be further from reality.
I’ve spoken with hundreds of HR and Finance leaders on compensation. Nearly every one of them describes it as a hand-wavy process.
Three ever-present problems are consistently raised:
1. Benchmarking data that's uploaded manually and incorrectly.
2. Aggregating hundreds of spreadsheets for every merit cycle.
3. Equity is Monopoly Money for candidates and employees.
Ultimately, however, the most crippling impacts of a broken compensation ecosystem fall on employees:
Employees have no clue why compensation decisions are made.
Pay inequities. Women earn $0.81 for every $1.00 earned by men.
Good people leave.
Status quo benchmarking datasets are manual, which results in labor-intensive, stale data.
Status quo compensation benchmarking surveys are fundamentally broken due to one crucial aspect of how they work — manual, spreadsheet-based upload.
It might be helpful to describe the typical story of how a mid-market company participates in a compensation survey.
First, a compensation survey provider knocks on your door and insists that you fill out a spreadsheet if you would like their data. You ask your team to send you data pulls from your Cap Table and HRIS or Payroll.
You spend late nights merging employee records from different data pulls while making sure to capture the correct values for equity.
Almost there, right? You ship off your first spreadsheet.
Wrong! You inevitably have filled out the spreadsheet incorrectly because you recorded employees’ levels according to the wrong job level schema.
Now, you receive a $100k invoice.
Months later…the data finally arrives. Hooray.
Except, now the labor market is different. Covid-19 and remote work have flipped hiring trends upside down. But, this 2019 slice of data is the best thing that you have until it’s time to do it all over again.
Every. Single. Year.
Manual, spreadsheet-based compensation surveys have been this way since the ’70s, and they have hardly changed since then.
For employees, this means that crucial compensation and career path decisions are defined by finger-in-the-air datasets.
Compensation planning and merit/promotion cycles are cumbersome and subject to manager bias.
Most customers I talk with groan when they hear “merit cycle” or “compensation review.”
For the majority of companies who still run their planning with Google Sheets, the status quo process is cumbersome, error-prone, and scary.
Here’s how it generally works today as an HR leader:
1. You pull reports from multiple disparate systems, such as Carta, BambooHR, and Lattice.
2. Then, you spend days creating formulas based on tenure, performance, location, and level.
3. You pull an all-nighter creating 76 sub-spreadsheets for each of your 76 managers who need to engage in the review.
4. And then, you deal with 76 sets of conversations, all offline. Perhaps in email. Or perhaps in Slack.
5. Then, you splice them into a master file.
6. Finally, it's time to manually upload the decisions one-by-one back into Carta and BambooHR.
7. If one person's compensation changes or if the CFO commands a new budget, you must update all 76 spreadsheets, one-by-one.
You do all of this with no finance background and with a nervous twitch for every keystroke. Because one wrong email to the wrong person means disaster.
The wrong compensation data in the wrong hands could ruin internal culture and cause you to lose your job.
If you are an employee, it’s terrifying to think that the future of your compensation hinges on the success of 76 compensation spreadsheets.
Companies struggle to transparently highlight the upside value of compensation for each candidate and employee.
Equity compensation is like Monopoly Money.
Earlier this year, I talked with hundreds of employees. Bottom line?
“I view my startup stock options as worthless because it feels like my company doesn’t care about helping me understand them.”
The average tech employee receives an offer letter. They start Googling and end up on Investopedia: "What are stock options?"
They see some lengthy articles and think, “I’m not a Finance major. I just want to do my job. Maybe my FAANG offer — 100% cash and liquid shares of a public company — is less stressful.”
If the person is lucky, they reach out to family members or friends asking for guidance about their equity compensation. They’re fortunate to have energetic supporters who all lend their opinions.
However, they hear many opinions that conflict. Exercise windows. AMT obligations. Long-term capital gains strategies. Everyone looks at it differently. It is the blind leading the blind.
The tech employee never makes it to your company. They sign with Facebook.
A five-person company may compete against Fortune 500 companies for talent, and employees are left doing apples-to-oranges comparisons.
A Pave Future of Transparency and Fairness
The common theme behind every problem with compensation today? Manual, stale data and processes.
Pave’s core business revolves around one thing. Real-time integrations with your HRIS, ATS, and cap table software.
Integrate your systems in five minutes, throw away spreadsheets for life.
From there, customers can do many more things.
You can see real-time compensation benchmarking data that isn’t outdated and doesn’t require manual uploads.
You can plan compensation from a centralized visual hub where data is updated instantly instead of managing 76 spreadsheets.
You can eliminate confusion around equity by visually communicating total rewards to employees.
We do all this with the vision of achieving two larger company goals:
1. Transparency. We want companies to have real-time access to 100% valid, factual market data, and we believe that companies should be equipped to demystify compensation.
2. Fairness. Compensation decisions should be rooted in merit. The context behind decisions should be shared and help guide continued career development for employees. Communication should be streamlined and accessible for all stakeholders to ensure that the loudest voice in the room or the person with the most power is not the only one making pivotal compensation decisions.
We invite you to join the movement. | https://medium.com/pave-compensation/compensation-is-stuck-in-the-dark-ages-414b7c3b759c | ['Matthew Schulman'] | 2020-12-03 18:37:46.604000+00:00 | ['Enterprise Technology', 'HR', 'Compensation', 'Entrepreneurship', 'Hr Software'] |
1,255 | The Ethics Of (Mis-)Using Copyright Content | (Iconic anti-piracy campaign by Motion Picture Association)
In the year 2000, Metallica heard one of their unreleased songs, I Disappear, on the radio. The band sought out the origin of the release and found that the source was Napster, a peer-to-peer file-sharing service, which was hosting their entire discography. Metallica claimed millions in damages, along with other musicians who sued the company; Napster filed for bankruptcy in 2002, and the outcome has been a major chapter in copyright disputes that continue to this day.
Today, US Senator Thom Tillis has contributed to the anxiety of Twitch streamers, YouTubers, and internet pirates by inserting a bill amendment. The amendment would allow imprisonment for up to three years, or a fine, for those who use copyrighted content without permission from the copyright owner. The amendment describes three things content creators cannot provide; the two most pertinent to content creators are content that is primarily designed to perform the copyrighted content, and content that has no commercial use except to provide the copyrighted content. The amendment doesn't indicate a specific boundary between fair use and using copyrighted content for commercial distribution, which makes it harder for content creators to know when they are acting within the law.
Ethics of Copyright Laws
(Pirate Bay via Google Images)
I would like to take this ongoing event as an opportunity to discuss the ethics and practical enforcement of online copyright laws. Is the distribution and use of copyrighted material without the owner's permission ethical? The biggest talking point comes from the side that says no: "internet piracy is stealing". To understand where they're coming from, it's best to understand the chain of agreements that led to the product being sold. In a usual transaction, the artist or copyright owner has a deal with a company (e.g., Spotify) to use their products in exchange for payments, and someone who pays for the product also pays into the pocket of the artist. However, in the case of the pirate, the product is taken without payment, therefore not paying the artist. The chain of agreements is broken by the pirate, and the product is "stolen".
This is rebutted by the pro-piracy side by saying there is a misunderstanding. The product is not stolen, at least not like stealing something from a shop. Instead the product is copied so that the pirate has their own and the ‘original’ is still intact. After all music, for example, is stored in such a way to be read by a computer which interprets the data and plays it. So the instructions that make the audio are duplicated rather than the music being stolen. Some argue that this increases the supply of the product which could lower its price affecting the artist.
Following from that last point, people argue that the copyright owner will lose money because of sales they could have made. This is not supported by most findings on the subject, which suggest that most artists benefit from peer-to-peer sites such as Napster. However, findings also show that growing artists may suffer fewer sales. This could be even worse for them because they are at a critical stage where their career could fail, whereas established, financially secure artists don't need to worry.
A Problem of Permission
The major ethical problem not involving money is when anti-piracy proponents say it is wrong in itself to pirate or misuse copyrighted content. When this argument is brought up, I notice it's not usually explained well. People say it's just wrong, as if the argument ends there, but I think there is a line of reasoning behind it. My interpretation is that pirates act without the permission of the copyright owner, and that is the bad in itself. However, we might wonder why permission is ethically necessary. You might say: we are all bound to the state through the law, and the breaking of that chain is unethical; it may be right to use copyrighted music without the owner's permission, but not if it's against the law. In this case, the problem is with breaking the law, not directly with the misuse of copyrighted content.
Whether breaking the law specifically is unethical, rather than the act of piracy, is really outside the scope of my investigation. Is piracy unethical in itself? After all, you are using content someone has made without their permission. For it to be ethical, it has to be okay to use it without their permission, so the question is about permission. I can't speak for everyone, but I think the reason permission matters is that the copyright owner put their labour into the product, or at least legally obtained it from someone who put labour into it. Then we are talking about fairness: is the act fair, or justifiable despite unfairness?
Fairness in trade is where each party gets what is owed to them. A pirate torrents an album on The Pirate Bay, and the artist gains no money, at least in the short term, but the artist didn't lose a sale either. The distributor makes money from advertisements while providing the album, so it is arguable that they owe the artist. They make money through the album, something they don't have permission to use, but again the artist isn't losing money. The only way illegal distributors could make it fair is if they boost sales or profits for the artist; that way the artist gets paid in some form. However, this form isn't guaranteed, and the distributor doesn't pay the artist directly under an agreement.
Permission may be ethically negated altogether: the concept of owning a song, or any copyrighted material, is absurd to some people. I mean specifically owning the instructions which make the song — a song comes to life by being read and played from sheet music by a musician, or read and played from binary data by a computer. It's more understandable when physical media is stolen, but when the instructions that make the media are pirated, it becomes a gray area. We would like to see someone's innovation rewarded, but the concept of owning instructions, or, better put, an idea, can't really be settled by argument. The same thing applies to patents, which benefit the inventor by rewarding their creativity, but carry the same gray-area problem.
Dealing With Misused Content
(US Senator Thom Tillis, ThomTillis.com)
Assuming it is unethical to distribute copyrighted content without permission, there is the small matter of dealing with it. We could recognize that it's unethical and simply shun the activity, or we could continue involving the law. Senator Thom Tillis wants to make illegal distribution and use punishable with a maximum sentence of three years in prison. Is this too harsh, especially for a Twitch streamer or YouTuber using the content in a transformative way, or in a way where the content isn't the main product? Should a streamer be punished for playing copyrighted music quietly in the background while in-game music and commentary detract from it?
In closing, there aren't many financial drawbacks to piracy other than for rising artists; with that in mind, I wouldn't say it's a major issue or something that needs to be policed, especially if it costs even more money to police. The ethical problem of piracy is subjective in judging how bad it is, and from that it's hard to say whether policing is necessary. And beyond piracy, these laws make it harder for streamers and YouTubers to create content. The cherry on top is that Thom Tillis has received money from companies such as Comcast, Sony Pictures, and the Motion Picture Association, which draws his motivation into question.
I am Henry Rudgate, thank you for reading and if you liked this story follow me here on Medium. | https://medium.com/@henryrudgate/the-ethics-of-mis-using-copyright-content-150400e42a15 | ['Henry Rudgate'] | 2020-12-22 17:00:14.970000+00:00 | ['Ethics', 'Law', 'Technology', 'Copyright', 'Politics'] |
1,256 | About Garage Door Maintenance | Garage door maintenance should be worked into your weekend to-do list at least a few times every year. Your garage doors get considerably more use than you may realize: in an ordinary household, a garage door is normally opened and shut 10–20 times every day. Consider the experience of Direct Service Overhead Garage Door Company.
Over a year, that is a huge number of openings and closings. Left unmaintained, garage doors will inevitably fail, causing inconvenience and possibly even damage or harm to your vehicles, your home, and even yourself. It is therefore important to perform garage door maintenance no fewer than several times each year.
Torsion Springs and Rollers
There are various mechanical parts of a garage door that should be checked. Look at the garage door rollers, torsion springs, and the metal sections that attach the garage door to the house. Ensure they are all securely fastened and that the rollers turn smoothly in the garage door tracks.
If the rollers are not turning smoothly, inspect them to determine whether they are broken or simply need lubrication. If they appear damaged, replace them; they are usually easy to remove. If they are merely sticking a bit, apply some silicone-based lubricant to them. | https://medium.com/@jkt6841/about-garage-door-maintenance-80314876055a | ['Md Hifjul Bari'] | 2019-03-27 05:22:05.393000+00:00 | ['Technology', 'Garage', 'Door', 'Services', 'Company'] |
1,257 | How to create Adobe Campaign Standard (ACS) forms in AEM | The creation of Adobe Campaign Standard forms in AEM (Adobe Experience Manager) and getting them to work shouldn’t be so difficult, but unfortunately many people find the process tricky. That’s why in the following post I’ve attempted to make your life easier by setting out steps on how to enable, configure and create ACS forms in AEM and use them to create a profile in ACS.
Why do you need the Adobe Experience Manager?
As mentioned on Adobe's Experience Cloud Documentation page: "AEM lets you create and use forms that interact with Adobe Campaign on your website." Below, I've consolidated all the steps that I learnt from my own experiences and from various community forums (like this), which helped me in some ways but never offered a complete overview. The steps in this guide are what worked best for me, and I hope they help you too!
A step-by-step guide to creating an Adobe Campaign Standard form in AEM
I’ll be modifying an existing We.Retail project to showcase how to get it working. I’m pretty sure that if you follow the same set of steps in your own project (or just in We.Retail to see if really works!), you’ll be able to get things up and running in no time.
In these steps, I will make some modifications to the existing templates and set some default properties in them, as well as create a clientlib, along with the obvious Adobe Campaign cloud configuration, in order to be able to create a profile in Adobe Campaign Standard using the form.
So let’s begin!
1. Modify “page component/structure component” — whichever way you call it!
1.1 Include jQuery JS in your website’s head tag. You can skip this if you already have it loading in your application. This change is usually supposed to be done in customheaderlibs.html
Example:
Path: /apps/weretail/components/structure/page/customheaderlibs.html
Code:
<sly data-sly-call="${clientlib.js @ categories='cq.jquery'}"></sly>
How it looks in CRXDE Lite
1.2 Include context hub and granite utils.
You can include the following snippet in head.html (usually the file where you have the head tag).
This step makes sure that Context Hub loads properly on your website, which is required for the Campaign component's mapping capabilities (used in step 6.6) to work. Without lines 3 and 4 of the code snippet below, you will get "CUI is not defined", "Cannot read property 'externalize' of undefined" and "window.injectCentextHubUI is not a function" errors in the browser console.
NOTE: Just for the sake of example, I am adding this change in the mentioned file path as it’s not overridden in We.Retail. You should not directly make this change in this path and I don’t recommend it. Instead, override it in your project if that has not already been done.
Example:
Path:/apps/core/wcm/components/page/v2/page/head.html
Code:
<sly data-sly-test="${wcmmode.edit}">
    <meta id="campaignContextHub" data-register="true" />
    <sly data-sly-use.clientlib="/libs/granite/sightly/templates/clientlib.html"
         data-sly-call="${clientlib.js @ categories='granite.utils'}"/>
    <sly data-sly-resource="${ @ path='contexthub', resourceType='granite/contexthub/components/contexthub'}"></sly>
</sly>
How it looks in CRXDE Lite
2. The “proxy” page component.
Your project's page component should point to this page component (the one we will create now). It seems AEM uses this specific path name internally to enable ACS integration. This proxy page component should point back to the core page component used in your project.
In case of We.Retail, the page component points to /apps/core/wcm/components/page/v2/page and now we make the proxy page component point to this.
2.1 Creation of proxy page component.
NOTE: Usage of exact same name is important.
Example:
How component structure looks in CRXDE Lite
+----------------+-----------------+------+----------------------------+
| Node           | Property Name   | Type | Property Value             |
+----------------+-----------------+------+----------------------------+
| /apps          | -               | -    | -                          |
| ./mcm          | jcr:primaryType | Name | nt:folder                  |
| ../campaign    | jcr:primaryType | Name | nt:folder                  |
| .../components | jcr:primaryType | Name | nt:folder                  |
| ..../profile   | (Use the component creation wizard)                 |
+----------------+-----------------+------+----------------------------+
Values for Component creation wizard
The highlighted “Super Type” text should point to your project’s root page component. The one in the screenshot is valid for We.Retail.
NOTE: Make sure to delete the profile.jsp file that gets created using this wizard as we don’t need it. Otherwise you will see a blank page when you open a page referring to this page component.
How the component properties should look
2.2 Point the page component that you want to use to the proxy component created above at the path: /apps/mcm/campaign/components/profile
Example:
In case of We.Retail, I have changed the /apps/weretail/components/structure/page node’s sling:resourceSuperType to mcm/campaign/components/profile
3. [OPTIONAL] Create “clientLibs”
NOTE: This step is required in older versions of AEM and if you see context hub is not loading or the mentioned categories of clientlibs are not loading.
Create clientlibs with the properties mentioned below. In the case of older AEM versions, they can live under /etc/designs, but with newer AEM versions you can use /apps/settings/wcm/designs.
+--------------------+--------------------+----------+----------------------------------------------------+
| Node               | Property Name      | Type     | Property Value                                     |
+--------------------+--------------------+----------+----------------------------------------------------+
| ./campaign         | jcr:primaryType    | Name     | cq:Page                                            |
| ../jcr:content     | cq:template        | String   | /libs/wcm/core/templates/designpage                |
| .../client-context | jcr:primaryType    | Name     | nt:unstructured                                    |
| ..../clientcontext | path               | String   | /etc/clientcontext/campaign                        |
|                    | sling:resourceType | String   | cq/personalization/components/clientcontext        |
| ..../config        | path               | String   | /etc/clientcontext/campaign                        |
|                    | sling:resourceType | String   | cq/personalization/components/clientcontext/config |
| ../clientlibs      | jcr:primaryType    | Name     | nt:folder                                          |
| .../author         | jcr:primaryType    | Name     | cq:ClientLibraryFolder                             |
|                    | embed              | String[] | jquery, granite.utils, granite.jquery, cq.jquery,  |
|                    |                    |          | cq.wcm.foundation, cq.wcm.foundation-main          |
+--------------------+--------------------+----------+----------------------------------------------------+
How the clientLibs structure looks
4. The Template
Create a template using the page component modified above (in steps 1 and 2). I won't explain how to create one here; refer to the Adobe Experience Cloud documentation for that. I will use the already existing template at the /conf/we-retail/settings/wcm/templates/hero-page path.
4.1 Embedding required static properties in template
The following properties need to be added manually into the initial configuration of the template that you created or already have.
As I am using We.Retail, I will navigate to /conf/we-retail/settings/wcm/templates/hero-page/initial/jcr:content . In your case, you must navigate to a similar initial node of your project template and add the following properties to it:
+-------------+----------+------------+
| Name | Type | Value |
+-------------+----------+------------+
|acMapping | String | profile |
| | | |
|acTemplateId | String | mail |
+-------------+----------+------------+
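If your template lives in code, the same two properties can be added to the initial node's .content.xml. A minimal sketch, with the node types assumed to match a standard editable template:
<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:cq="http://www.day.com/jcr/cq/1.0"
    xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="cq:Page">
    <jcr:content
        jcr:primaryType="cq:PageContent"
        acMapping="profile"
        acTemplateId="mail"/>
</jcr:root>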
The same should be added into your specific template-type’s “initial” node as well. For this example it’s not necessary, so I’ll skip it here.
4.2 [OPTIONAL] Adding Initial Page properties
Open the template (by going to Tools -> Templates -> We.Retail -> Hero Page), select "Initial Content" from the top right-hand corner and then choose "Initial Page Properties". In there, select "Advanced", choose the clientlib created above in step 3 for the "Design" field and click "Save & Close". In our case it appears as /apps/campaign in the path picker and, once selected, as /apps/settings/wcm/designs/campaign in the "Design" field. Make sure that you choose one level before the "clientlibs" folder in the path picker; the final path should not contain "clientlibs" at the end.
4.3 Adding required components to our template’s allowed components list
Now, again using the template page opened above, select "Structure" from the top right-hand corner. On this page, we can choose which components are allowed. For our example, we need the ones from "Adobe Campaign" and "CTA-Lead-Form".
5. The Cloud config
Now that we're done with our "code" changes and configurations, the last step is to create an Adobe Campaign Standard cloud configuration.
5.1 Navigate to Tools -> Cloud Services -> Legacy Cloud Services. On this page, scroll down to Adobe Campaign and click on “Configure now” to add the new configuration.
After filling in “Title” and choosing the only available template i.e. “Adobe Campaign configuration” in the “Create Configuration” dialog, the following dialog will appear:
5.2 "Username" is usually "aemserver" and "API End Point" is the URL that you use to access Adobe Campaign Standard, ending with campaign.adobe.com. For the password, you need to contact Adobe customer support; only Adobe can create this user and share its password. Once you fill in the details and click "Connect to Adobe Campaign", if all the details are correct, you should see:
If you don't receive this notification, then either the information entered is incorrect or you need to whitelist your IP address on your AEM instance, your ACS instance, or both.
6. The FORM!
Now we are ready to use the template created in step 4 and the cloud config created above: we will create a page in We.Retail, add form fields to it, bind those form fields to the ACS Profile schema, and submit this small form to create a Profile in ACS.
6.1 Create a page using the template that we have created or modified above. I will be using Hero Template as that’s what I have modified above:
Navigate to /content/we-retail/language-masters/en on the "Sites" page, click on "Create", then "Page", choose the template and enter the required details. In my case, I only fill in the "Title" field.
6.2 Open the page, click on "Open Page Properties", navigate to "Cloud Services", choose the cloud config created in step 5 above, and click on "Save and Close".
First choose Adobe Campaign
Choose created cloud config
6.3 Now on the empty “parsys” on the page, first add “Lead Form” component from “CTA-LEAD-FORM” component group.
6.4 Edit the “Form Start” placeholder
and the following dialog will appear:
I would skip to the “Advanced” tab and fill in only the required details — “Post Options”, where I choose “Adobe Campaign: Save Profile” and select “Profile — Create if non-existing” under “Action Configuration”
6.5 Now click the “Form End” placeholder, and click “+” to start adding form fields:
I then choose the “Text Field (Campaign)” component from the “Adobe Campaign” component group
6.6 Now edit the component added above and enter details like "Title" and "Name", for which I will add "First Name" and "firstName" respectively, and then we move to the next tab, "Adobe Campaign". Now comes the moment of truth: can we actually view the Profile Schema fields fetched from Adobe Campaign Standard? When I click on the Adobe Campaign logo next to the "Mapping" input field,
I can see the following dialog:
Here, I will navigate to “Profile” and choose “First Name” and save.
6.7 We repeat steps 6.5 and 6.6 to add "Last Name" and "Email address" to the form too, and my form will now look like:
6.8 Now we add the “Submit” button on the form. For this, we will click on “Form End” placeholder again and now click on “Configure”
Configure button on Form End
And the following dialog will appear:
Here we only need the essential configuration: enable "Show Submit Button" and add a "Submit Title".
Now view the page in publish mode, fill in the required details and submit the form. It should just refresh the page, and if you check Adobe Campaign Standard for the latest profile, you should see a new profile with the submitted information.
How my sample form looks before submission.
Profile preview in ACS
Conclusion
So you've now seen the changes and configurations needed for the form component in AEM to be able to submit information to Adobe Campaign Standard. Now it's up to you to explore this form's other capabilities and how to use them. I personally always extend the Adobe Campaign components and modify the dialogs and HTML to my own needs. However, for me, the most important feature is the binding capability: binding the form fields to the Adobe Campaign Standard Profile Schema fields. Using this feature with my own form-submission logic via the ACS API offers a great set of possibilities for working with AEM and Adobe Campaign Standard. | https://medium.com/rewrite-tech/how-to-create-adobe-campaign-standard-acs-forms-in-aem-a31ffb1d81de | ['Navjot Singh Kler'] | 2020-07-02 10:14:35.308000+00:00 | ['Technology', 'Adobe Campaign', 'Marketing', 'Aem', 'Coding']
1,258 | Slack Clone with React | Semantic UI | GraphQL | PostgresSQL (PART 1) | Photo by Volodymyr Hryshchenko on Unsplash
Introduction
Hey all, this project will be a series. I don't know how long the series will be, since I'm still working on the project as I write these articles. I've wanted to build a chat app for quite some time. I came across an older tutorial (3 years ago) of Ben Awad (awesome YouTuber) doing a Slack clone, which was perfect for me, so I'm following his approaches and making mine an updated version (a lot has changed in 3 years).
I wanted to practice building more complex projects. I'm learning a lot so far, like working with the PostgreSQL database, using Sequelize for the ORM, and connecting it with GraphQL. So I'm hoping you guys can learn something too :) But that's enough of the intro, let's dive into the first part.
Installation for Database
Before we get to the good stuff, we need to install the things we need for this project. I’ll be using a Mac throughout this series.
1. Node.js of course :) (if you haven't already => https://nodejs.org/en/download/)
2. PostgreSQL (for Windows and Mac https://www.postgresql.org/download/)
Installation videos
Mac video: https://www.youtube.com/watch?v=EZAa0LSxPPU
Windows video: https://www.youtube.com/watch?v=RAFZleZYxsc
3. Postico (https://eggerapps.at/postico/) *optional* if you're more visual like me :) this is a GUI for your database. (for Mac)
That is all you need to get the database portion set up using Postgres (not that much). In the next one, we'll work on folder setup and installing the packages we need for the backend. Until then folks :) | https://medium.com/dev-genius/slack-clone-with-react-semantic-ui-graphql-postgressql-part-1-cd40b5d3460 | ['Ajea Smith'] | 2020-09-16 07:03:32.132000+00:00 | ['Postgres', 'React', 'Programming', 'GraphQL', 'Technology']
1,259 | Dynamic value arrays and sets compared with Solidity mappings | Smart Contract Implementation
We are effectively providing functionality in a base class designed to be inherited. This is actually hijacking inheritance as a substitute for object composition.
map/map
Here is a useful import file providing a base contract that provides the add and check functions:
contract BaseAccountRoles { // an account has many roles
    mapping(address => mapping(uint => bool)) accountRoles;

    function accountAddRole(address account, uint role) internal {
        accountRoles[account][role] = true;
    }

    function accountHasRole(address account, uint role) internal view returns (bool) {
        return accountRoles[account][role];
    }
}
This contract uses Solidity’s excellent mapping feature to map from the given address to each given role, which in turn maps to a bool indicating whether the account address provides the role or not.
map/d31
Here is another useful import file providing a base contract that provides the add and check functions:
import "Bytes31fun.sol" contract BaseAccountRoles31 { // an account has many roles
using Bytes31fun for bytes32;
mapping(address => bytes32) accountRoles;
function addAccountRole(address account, uint role) internal {
accountRoles[account] = accountRoles[account].push(role);
} function confirmAccountRole(address account, uint role) internal
view returns (bool) {
return accountRoles[account].find(role);
}
}
This contract uses a library to extend the Solidity bytes32 type to provide push() and find(). The method is described in the article “Dynamic Value Arrays in Solidity” by the author.
Solidity’s mapping feature is used to map from the given address to a bytes32 variable, which holds a compound of each given role that the address provides.
This implementation will throw an exception if the smart contract using it attempts to add more than 31 roles to any one account. That’s impossible in our particular implementation, but please be aware of the limitation if you use this technique.
map/set
Here is the last useful import file providing a base contract that provides the add and check functions:
contract BaseAccountRolesSet { // an account has many roles
    mapping(address => uint) accountRoleSet;

    function addAccountRole(address account, uint role) internal {
        accountRoleSet[account] |= 1 << role;
    }

    function confirmAccountRole(address account, uint role) internal view returns (bool) {
        return (accountRoleSet[account] & (1 << role)) != 0;
    }
}
Solidity's mapping feature is used to map from the given address to a variable, which is used as a set to hold each given role that the address provides.
This implementation will not throw an exception if the smart contract using it attempts to add more than 256 roles to any one account. I’m relaxed about that possibility in this case.
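To make the intended usage concrete, here is a small sketch (my own, not from the original code) of a contract inheriting the set-based base. The role constants and the granting rule are invented for illustration, and it is written in the pre-0.7 constructor style:

contract AdminService is BaseAccountRolesSet {
    uint constant ROLE_ADMIN = 0;
    uint constant ROLE_OPERATOR = 1;

    constructor() public {
        // the deployer starts out as the only admin
        addAccountRole(msg.sender, ROLE_ADMIN);
    }

    function addOperator(address account) public {
        // only admins may grant the operator role
        require(confirmAccountRole(msg.sender, ROLE_ADMIN), "admin only");
        addAccountRole(account, ROLE_OPERATOR);
    }
}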
Enigmatic picture opportunity: | https://medium.com/better-programming/accounts-and-roles-in-solidity-48da6845038d | ['Jules Goddard'] | 2020-09-03 23:04:52.793000+00:00 | ['Smart Contracts', 'Solidity', 'Blockchain Technology', 'Ethereum', 'Programming'] |
1,260 | Blonde Balayage Elk freaks out | Blonde Balayage Elk freaks out
In the deep underground in Russia, Blonde Balayage Elk looked deep into Mamuski's pink eyes and found the text: Kitezj. Dostojevskij could confirm the place and told them to calm themselves down and understand that they have something exciting to discover.
Dostojevskij: I will take you to Kitezj. This is the adult department for sure. You must act like adult Elks there.
Blonde Balayage Elk: Wait! I have seen in my Elk channel one click bait e- expert with pink e-round balls adapted after elk feets. I will contact this icon right now and ask how to get it. They are very electric.
Dostojevskij: Lets go now to Kitezj and you will discover the magic there. I am not interested in pink electric manifestations under my feet.
Elk boss: I am getting tired here. It is boring to listen to things we can not see here. I believe in what is seen. All other things do not exist. I believe very much in my protocol and now please can we go away from this dark hole please?
Blonde Balayage Elk: My channel told me the pink magic feet electric can be found if Mamuski light up the underground with pink light. They know who you are! How can that be possible? Are you the celebrity? Can you write autograph here on the wall?
Mamuski: I e-see something here but its very flat. X numbers of very flat and round pink things here. Come and see!
Dostojevskij: I both doubt and believe now and I guess it must be the sign to take to next station. I do not know who owner is to these pink solutions, but I know God loves sinners so let us take them now and let’s go to Kitezj. We must!
Blonde Balayage Elk presented how to use the pink magic rolling e shoes. For the first time she saw Dostojevskij smile very much.
Dostojevskij: Please Elk boss understand that we need to do this to find out about where your Elk University is. This is the most invisible place you can find and therefore we can discover things to help us come to the soul’s nirvana.
With rolling feet, they all finally came to Kitezj and discovered a lot of walls.
Blonde Balayage Elk: what is this? I love the darkness in here but these walls are talking! It reminds about my channel! Is this my channel? Dostojevskij, please can you explain what this is all about.
Dostojevskij: This is where energies meet! Its very spiritual and this is the most invisible place you can find in the world as I see it. Your view in the darkness have enormous benefits. What do you say Mamuski?
Mamuski: For the first time I feel happiness, but I am still scared because my ears are now also pink, and they hear voices from the walls here. What is going on? Help me Dostojevskij! The walls are talking from front to end and end to front. I feel they are talking about numbers. I think I see the forest night in front of me when the bears took over everything. I feel the trauma right now. The bears have no respect for elks. Are the bears angry here in Russia?
Dostojevskij: Dear Mamuski I know the Ma och Muski in you will take you through all those spiritual walls. You see here in Kitezj we have the chance to plan how this new Elk University will be.
Mamuski: I get comfort if its about numbers because the Ma and Muski in me requires this.
Blonde Balayage Elk: Listen! I hear from one of the walls the code #FF69B4 and there I hear hymns from Kiwi Birds. I think they must be competitors to P Parrot on my channel! OMG! I can see they have rolling eyes! The show is called “K Slogan show” and they tell they are very Godly birds. I must publish this on my channel. Elk Boss! Why are you looking like you are falling asleep here! Wake up!!!
Elk Boss: I only believe in what I can see, and you need to follow my protocol here. The Elk Union in the north have requirements and your speculations are not based on Elk sciences. I am sorry but this is all a joke and you to!
Blonde Balayage Elk: Since you are so boring Elk Boss, I will create the new paradise and your University will be the part of it. It will not be for boring Elk kingdoms! I have the plan here and if you do not watch out, I will let the #Kiwi birds put the worlds biggest egg before you, so you are not coming anywhere! If you are not ready to listen to the walls here, you will miss out every detail on the way to get there. Can you please let go of that everything needs to be proof perfect?
Elk Boss: I believe in Micro and Macro units plus addition and fraction. I am sharing Mamuski views and you are crazy.
Blonde Balayage Elk: I create the new religion here and you can stay in your old school protocol that only can show what you already can see. Its very boring!
Dostojevskij: Blonde Balayage Elk, your channel is very meaningless. It reminds about death. Is this your #ABC for your mind find? Is this where you download energies?
Blonde Balayage Elk: I am so angry now! Are you lost in only one-way street in your thoughts? Imagine, we have 4 different apples. Elise means salad. Rubin star means sweet and juicy. Discovery means sweet and sour and Gloster is creamy. They love low oxygen environments. Listen! It was apple competition in this K slogan show, and I won that part. Can we please discover TOGETHER what these walls are saying now! You are so boring all of you! Watch out because I will create things you will be surprised to see. I will do something with these apples you never seen before and remember my crown collection in my channel! I will create my OWN religions for Elks! Trust me!
Mamuski: Blonde Balayage Elk, you are really meta mystic. At the same time religions are like plus and minus electrifying but they can also be very unfair!
Dostojevskij: Yes sometimes religions reminds about prison and your channel dear Blonde Balayage Elk. Reason, feeling and will… its weird… I both doubt and believe…
Blonde Balayage Elk: dear Dostojevskij I will now baptize you to Dosto D2. You seem to be bored and your __e- brotherhood to. It will also sell very good in my channel! Please accept this offer. You can become Duobipol app and be very transparent transcendent. The clicks will increase. You will be the code opener the University needs. I think God lives in my channel. You will be baptized and renewed there. Dosto D2 channel with some e Elk effects in the background will change your life! Trust me! I suggest your channel will be the competition to K slogan show with the name “Ask Dosto D2.” How about that?
Dosto D2: This is surreal and I start to think I am dead now. Please stop. Dear, I am scared because I hear now from one hole in the wall that one Robot Elk will be born very soon. Who are the parents?
Blonde Balayage Elk screams! I hear two walls talking now. Its Jesmoh and Siddalha! Wow! Daokonfu want to talk with Dosto D2!
Mamuski: This is the e sign! Jesmoh, Siddalha and Daokonfu are my children! It explains why Ma and Muski have been working so hard in me. But wait, they are not 3 but 6!
Blonde Balayage Elk: Wow! Now this must be the revelation and Dosto D2 brought light to this. I believe my channel will see the light now too. I also hear Marifatim in the K slogan show showing the biggest egg in the world. I will show the biggest apples and show how Elks can do it. The kiwi birds in the K slogan show can smell all types of smells, but they know nothing about what my eyes can see! They will soon see me take over and you Elk Boss will have to change your protocol and dare to scroll and e roll.
To be continued by E — — — — — — — — — — — — — #FF69B4 — — — — — -D2 — — — and — — -pick a code — — — —…………. | https://medium.com/@teachertrain/blonde-balayage-elk-freaks-out-1ce4fa3eca0c | ['Emma Hedlund'] | 2020-12-13 16:26:45.426000+00:00 | ['Science', 'Technology News', 'Religion', 'Jack Ma', 'Elon Musk'] |
1,261 | Mixin Network Testnet By Ceo Cedric Fung | Beijing October 31th , 2018. Cedric Fung, the founder of Mixin Network introduced Mixin’s Testnet release, focusing mostly on the kernel and answering the community questions.
"Hey, I am Cedric Fung, the founder and CEO of Mixin Network. As you might know, the Testnet was released, and I'm here to highlight the most important points and address all of the questions you have sent us through our social platforms.
First of all, the Testnet release is the Kernel part of the white paper; people can join as a full node to sync and validate transactions as a snapshot graph.
The domain interfaces and other mining reward details are not finalized in this release yet, and will be updated in upcoming months. And the final main net will be launched early next year.
We are already running these Testnet nodes in our private data center to provide service for the Mixin Messenger API and finally the Mixin Messenger API will be replaced by the public mainnet.
• So here is the first question: Can funds be frozen by the government at any time since we don’t hold private key?
Simply, no. Nobody can freeze the funds, because we actually hold our private keys, presented in the simple form of a 6-digit PIN code.
• Next one: Can we create tokens on the Mixin network in the future, like on Ethereum?
No, because Mixin Network is a transaction network; there are no smart contracts. Our own token XIN is also an Ethereum token and will never be swapped for some native coin on the Mixin Network mainnet.
• The Mixin Messenger is using the phone number and PIN as the private key. In case I am travelling and I lose my phone, and I cannot get a replacement SIM card with the same phone number, is there a way for me to have access to my funds?
If you have lost your phone number, there is no way to access the funds right now. But we will provide a solution to this situation in future Messenger updates.
• Do you have any plan to add more cryptocurrency pairs in Ocean One?
Ocean One has already supported all assets in Mixin Network. No need to add a specific currency to the exchange if the currency is already supported by Mixin Network.
• How does the XIN holder get profit? Could you give us some examples, aside from the XIN token price rising? Thanks.
The XIN holders will get a reward if they join the network as a light node when the mainnet is online.
• What is the benefit of running a Mixin node, other than receiving XIN incentives?
For example, if you withdraw 1 BTC to another exchange, the Mixin Network charges you 0.0005 BTC as a fee, but the actual transaction fee on the Bitcoin network may be only 0.0001 BTC; the 0.0004 difference will also go to the node incentive pool.
Several campaigns are planned for Mixin Network Testnet.
There is a developer bounty: by producing online courses, debugging, and developing dapps based on Mixin Network, developers can get XIN as a reward for their hard work. An upcoming developer competition will be online soon. By submitting high-quality DApps built on Mixin Network, developers can get XIN from Mixin Network. Almost 1000 XIN in total, which is more than 100k USD, will be distributed in this program.
Thank you for tuning in and talk to you later, bye guys.”
Follow Us:
English Mixin Telegram: https://t.me/MixinCommunity
Gitter: https://gitter.im/Mixin-Network
Twitter: https://twitter.com/Mixin_Network
Reddit: https://www.reddit.com/r/mixin/
Medium: https://medium.com/mixinnetwork
Facebook: https://www.facebook.com/MixinNetwork/
GitHub:
Mixin Network: https://github.com/MixinNetwork
OceanONE: https://github.com/MixinNetwork/ocean.one | https://medium.com/mixinnetwork/mixin-network-testnet-by-ceo-cedric-fung-5c9a1f24129d | ['Yasmine Moustatia'] | 2018-12-13 03:42:37.703000+00:00 | ['Technology', 'Blockchain', 'Bitcoin', 'Mixinnetwork', 'Community'] |
1,262 | The 4 Best Online Learning Platforms in 2020 | If you are trying to learn a new skill online it can be difficult to decide which platform is best for you, I have chosen the 4 best platforms that I have used and I will explain why each one is worth a try.
Photo by Avel Chuklanov on Unsplash
Udemy
Udemy has a wide range of courses, but I mainly associate it with business skills and online skills. The courses offer video-based learning, similar to YouTube, and the instructor will guide you through the course content by narrating or showing visuals.
Through my personal experience, Udemy is best used for learning skills that are not so specialised but are used by many people over many industries. Many of the courses have thousands of reviews so it’s very easy to find courses that other people would recommend so most of the top courses are at a good standard.
Most courses will tell you how long it will take to complete the course, the usual time period they suggest ranges from 1 day — 1 month, so if you’re looking to learn a specific skill in a relatively short period of time this ideal for you.
The price of the courses can vary from $10 up to $200. I would recommend never paying over $30, because the website always offers deals on pretty much all courses, so buying at the highest price is an absolute rip off! When you buy a course, it will usually come with lifetime access, which lets you view the content forever; this is mainly because Udemy prices courses individually rather than through a monthly subscription.
Coursera
Coursera offers you courses provided by some of the biggest universities and businesses in the world. Yale, Google, University of London and IBM just to name a few. They let you earn certificates for completing courses and they offer full degrees all online. You can connect your account with your Linkedin account which is great if you are looking to share your newly gained skills to potential employers. A key selling point for them is that they offer you courses to build skills that employers want.
Courses on Coursera are not designed to be completed within a few hours, they are designed to be completed over months. The courses do have deadlines for submitting quizzes but they do not have any real importance, if you miss the deadlines it will not affect your final grade.
Many of Coursera’s courses are offered for free. If you want to gain a certification for completing a course or you want any of your course work to be graded you will need to pay. If you are looking for the latter then you will be looking at a cost of $29 — $100 a month for a specialized course. Coursera does offer Coursera Plus which allows you to have multiple courses under one subscription, this costs $390 for the year.
Codecademy
Codecademy is a great option if you are looking to learn a new programming language. Their courses offer an interactive platform that lets you learn by actually coding, this is great if you’re not a textbook person.
You can either do courses on specific languages like Python or Javascript, or you can choose a career path to learn. If you choose Data science for example you will need to complete SQL, Python 3, NumPy, pandas, matplotlib, scikit-learn courses.
The great thing about Codecademy is that it is all within the browser, you do not need to download any additional software or have to change some settings on your computer, you can get straight into learning without any hassle.
The cost is currently $31.99 paying monthly or $15.99 a month paying for a yearly subscription, they also offer student discounts.
Youtube
This is a more obvious option, but YouTube is an unbelievable source of information purely based on the fact that there will be a video about anything. There are lots of YouTube channels that are created just for offering courses on a specific topic. A quite useful tool on YouTube is playlists: some YouTube videos will be part of a whole series of tutorials, and the playlist lets you use the series as a course in itself.
Another positive of using youtube is that it’s free! You might feel the pain of adverts while watching videos but it is worth the annoyance seeing that it doesn’t cost a penny.
The only issue you could have is that there are so many videos on the platform that it may be hard to filter the good from the bad. After clicking on a video, see how many thumbs-ups or thumbs-downs it has; this can be a quick filter for finding the good videos that other users have enjoyed.
Summary
Whichever platform you decide to go with, they will all provide a great service, and I'm sure you will enjoy your time on all of them. All of the paid platforms offer free trials if you do not feel like risking your money.
If you're looking for a short-time-frame course on a specific skill, Udemy is best. If you're looking for a longer course that gives you more of a university-style education, Coursera is for you. If you are trying to learn a new skill in IT, Codecademy is the right choice. YouTube holds information on all topics you will ever need; once you find a few good channels, it's hard to look anywhere else. | https://medium.com/swlh/the-4-best-online-learning-platforms-in-2020-c86733f5226a | ['Harry Khan'] | 2020-11-22 16:07:51.163000+00:00 | ['Programming', 'Productivity', 'Data Science', 'Buisness', 'Technology']
1,263 | Clinc’s Poised to Break New Ground with Recent Changes in Exec Regime | Clinc, an AI conversational leader based out of Ann Arbor, Michigan has recently announced major changes in their executive suite. On September 23, 2020, Clinc appointed John Newhard as the new Chief Executive Officer for the conversational AI startup. Newhard, previously the CEO of Trafficware, brings with him over 25 years of experience running companies in the IT industry.
With former leadership positions at Cubic Corporation, Kaplan Compliance Solutions, and New World Systems, Newhard is ready to lead Clinc and oversee future development of the company into several new verticals while strengthening Clinc’s positioning as the go-to AI research company for the finance industry and all banking institutions globally. Part of his new appointment strategy is to beef up marketing, engineering, and production teams with talented individuals from Ann Arbor and across the country.
Clinc Conversational AI Platform
With a recent surge in digital communications due to the social distancing requirements of the COVID-19 pandemic, AI is playing a more pivotal role in customer and business transactions than ever before. More and more companies worldwide are relying on Artificial Intelligence to provide their customers with an experience unlike any other in the industry. The Clinc conversational AI platform is a popular technology that allows for enhanced customer experiences while reducing business support costs.
Unlike most other AI platforms on the market, Clinc has been strategically developing its AI virtual assistant into the sophisticated technology it is today since it first opened its doors in 2015. Clinc's conversational AI technology can address customer questions competently and resolve these concerns at a 95 percent success rate. It understands messy language, which allows the technology to follow a human's conversational logic almost without interruption and then supply an equally conversational reply, successfully meeting the customer's needs 95 percent of the time. Clinc's AI technology is practically self-sufficient and has very limited need to pass customers off to a live customer service team at any point while performing the services it has been taught. Clinc's technology allows businesses to scale up their customer support exponentially without driving up costs by a penny.
A Revolutionized Marketing Strategy In Development
Shortly following the appointment of Newhard as CEO, Clinc announced on October 28, 2020, the onboarding of John Lichtenberg. As the new chief marketing officer (CMO), Lichtenberg oversees a revolutionized marketing strategy to drive more sustainable growth to Clinc.
Lichtenberg has announced new initiatives in the areas of brand management, public relations, and digital marketing. As a key player in driving company revenue growth and expansion, Lichtenberg brings a wealth of over 25 years of marketing experience to the AI industry. His long list of previous professional experience includes the roles of Vice President of Marketing for Park West Gallery and CMO for Fuel Leadership, plus leadership roles in the Learning Care Group and New World Systems.
Clinc’s New Chief Customer Officer Announcement Proves Company Brand Reinvention Coming
If the recent announcements of a newly appointed CEO and CMO from the Ann Arbor-based AI research platform Clinc do not signal that a new era is on the horizon, then the most recent news out of the Clinc camp should make you a true believer that change is coming. Shaking up the executive suite and giving a truly fresh trajectory to the conversational AI company, Mahesh Baxi was announced as Clinc's Chief Customer Officer only days after CMO John Lichtenberg joined the team.
This trio of executive hires in a matter of two months' time proves that Clinc is poised to receive a solid branding strategy overhaul to match the new company poster filled with all-new executive-level faces that are ready to take this AI company and its technology to soaring new levels. This new team also seems confident in its ability to expand Clinc's reach across the entirety of the banking and finance sector around the world, and even stretch it into brand new business verticals in the coming years.
Clinc’s new CCO entrepreneur and proven CX leader and author Mahesh Baxi joins the Clinc C-suite with over 25 years of global IT Products and Services experience safely tucked away under his belt. He has built cultures of thought leadership and fostered innovation as the primary differentiator to create value-oriented engagement for the customers of his previous employers. Baxi has spent most of his professional life working directly with the customers and is very passionate about solving critical business problems on the executive level.
A 95 Percent Resolution Rate
Clinc is, without a doubt, a pioneer in the AI industry. As stated above, the company and its technology currently offer a 95 percent contact resolution rate, which is unheard of for the industry. Financial businesses that employ the use of Clinc’s AI platform are taking advantage of this cost-saving technology. This contact resolution rate, to reiterate the point, means that 95 percent of the time, customers who contact a business with a problem have that problem resolved without any human input.
This is excellent for a business’s support costs and needs. With the use of Clinc’s AI technology platform, businesses can substantially reduce their support costs. This allows customer support agents to deal with the remaining 5 percent comprised of more intricate customer problems. The best part is that consumers are getting their questions answered quickly and remain satisfied with their support experience.
A Highly Scalable Business Platform
The use of Clinc's AI platform allows businesses to offer quality customer support 24/7. Restrictions based on office hours no longer apply; holidays, weekends, and other staffing disruptions will not slow down Clinc's technology or stop it from continuously helping clients at any time on any day. These automated solutions require no rest and no food, and will not file a complaint for having to work around the clock to successfully answer customer inquiries.
Businesses that consistently deal with seasonal fluctuations in customer support demand can easily and immediately scale up with Clinc’s AI platform. Companies who implement it are less likely to get overwhelmed when customer support demands are high and don’t have to worry about service gaps associated with seasonal fluctuations.
The Future Of Clinc AI Technology
There’s no denying the fact that Clinc has firmly captured the title as an industry leader in the financial AI world with its release of Finie. This is a voice-operated AI virtual assistant that is specifically designed to handle the financial and banking services industry. Built with a vast, intricate financial knowledge-base and banking vocabulary, Finie rivals any real-life customer service specialist in the banking industry.
Clinc's new C-suite tenants, John Newhard (CEO), John Lichtenberg (CMO), and Mahesh Baxi (CCO), plan to use the proprietary software behind Finie to craft and adapt that AI technology for other industries. The proprietary software is built on a platform based on a unique machine learning system. The system can be easily trained with industry-specific vocabulary, business concepts, and product names.
With the highly versatile ability of this AI platform, it can be deployed in almost all industries. Currently, Clinc has fulfilled AI needs in banking, gaming, healthcare, restaurants, and automotive industries. Right now, the research platform is rapidly expanding its AI technology into many different industries.
New Clinc Leaders Break New Ground and Renovate the Old
With the additions of John Newhard, John Lichtenberg, and Mahesh Baxi, Clinc is poised for more success. Already a rapidly growing AI leader with highly valuable clients like Visa, Barclays, and Ford, Clinc is gearing up to expand its reach into many more industry sectors.
Expect miles of growth ahead for Clinc as even more new team members coming in the future will team with Clinc’s veteran researchers and AI wizards to provide the necessary knowledge to offer so much more cutting-edge AI performance in the months and years ahead. | https://medium.com/@clincai2020/clincs-poised-to-break-new-ground-with-recent-changes-in-exec-regime-62beb78dc97b | [] | 2020-11-03 15:57:38.305000+00:00 | ['Artificial Intelligence', 'Clinc', 'Technology', 'Technology News', 'AI'] |
1,264 | Harvey Labels — How a NYFW Label Rapidly Grew its Dropshipping Sales | Background
In 3 short years, fashion designer and entrepreneur Mim Harvey has built a highly successful online fashion business in Harvey Labels, opened her flagship store, and just recently participated in New York Fashion Week (NYFW). The core ethos of the Harvey brand is beautifully made multi-way styles at affordable prices, with Mim developing all of her own prints. Mim first started with her namesake label, Harvey the Label, and has since launched two other labels, Seeker Brand and AILA (collectively Harvey Labels). Based in Adelaide, Australia, Mim has quickly built a global brand, with Harvey Labels now stocked in over 180 online stores.
Harvey the Label Collection during the Fashion Palette show at NYFW (Photo by Randy Brooke/Getty Images)
Convincing Partner Stores to Adopt New Software is Challenging
Harvey Labels fires on all cylinders — whether it be online or offline, retail, wholesale, and dropship. With such an ambitious strategy to maximise distribution, Mim needed a way to automate her partner store connections, with real-time inventory updating to avoid over-selling.
The need to connect inventory to partner stores in a quick and straightforward manner was critical due to the perceived time and resource cost required to learn inventory software. Online retail businesses are generally extremely time poor and have a million things in the air at a time, so learning new software would be difficult especially if it is complicated.
Syncio has reduced the friction on this onboarding experience. In order for Harvey Labels to connect with a partner store, all they have to do is send an email inviting them to install Syncio. Once done, the stores are connected with Harvey Labels, with their inventory visible and ready to be synced across in just a few clicks. This means that partner stores can start selling Harvey Labels products much faster.
Out-of-date inventory is a huge problem with a large distribution network
If the large distribution network strategy were to work, inventory would have to be accurate across all online stores. This was crucial as outdated inventory could become costly for Harvey Labels to manage if they had to deal with a large volume of refunds and customer complaints from their partner stores. Even inventory updates every 24 hours may be too infrequent for a distribution network as large as Harvey Label’s.
Thankfully, Harvey Labels no longer needs to worry about this issue thanks to Syncio’s ability to sync inventory across all connected stores in real-time. This means that an order that is made on Harvey Label’s store which takes stock out is also reflected in the inventory of all connected stores, and vice-versa. This has saved substantial amounts of time for Harvey Labels, as sales are now generated organically without any time spent on selling to and following up with the connected stores. Time spent on customer refunds and complaints are also avoided due to up-to-date inventory.
Harvey the Label makes beautiful pieces that can be worn in multiple ways.
Inventory sync creates new business opportunities
Operating in a highly competitive market, partner stores were also reluctant to purchase large amounts of stock due to the initial outlay of capital required. This was very challenging for Harvey Labels to overcome, as without an agreement, Harvey Labels couldn’t be sold on these partner stores.
Now, with Syncio making it so easy to dropship across Shopify stores, Harvey Labels can offer their partner stores a quick and easy way to test Harvey styles with their customers without investing much time and money. Harvey Labels can now reach another market easily, with orders coming from the partner store fulfilled through their normal process.
Benefits
Harvey Labels have been able to set up an extremely popular and successfully-run dropship program with their partner stores using Syncio as the engine behind it. The ability to scale their distribution to adopt this program required ease of onboarding, minimal time on managing inventory, and minimal capital outlay.
Managing so many sales channels now is simple as all inventory is updated centrally. This increased efficiency has helped Harvey Labels reach more customers and partner stores, creating a reliable and increasing source of revenue in the process.
Tips on growing and working with a large distribution network | https://medium.com/syncio/harvey-labels-how-a-nyfw-label-rapidly-grew-their-dropshipping-distribution-c7d95003b52b | ['Jimmy Zhong'] | 2019-02-15 00:39:55.036000+00:00 | ['Retail Technology', 'Fashion', 'Dropshipping', 'Ecommerce', 'Marketplaces'] |
1,265 | How Can Content Service Platforms Benefit Businesses? | How Can Content Service Platforms Benefit Businesses?
Content services platforms (CSP) are modular, open, agile, and cloud-ready. A CSP delivers capabilities and advantages that extend beyond legacy enterprise content management (ECM) systems.
The modular, open, and API-led architecture of a content services platform offers the agility to address existing realities, including the migration to the cloud, the digital content explosion, the increase in data-related regulations, tech-savvy users, mobile expectations, and the pressure to deliver quick, iterative innovation in support of digital transformation objectives.
With their decades-old architecture, legacy ECM solutions struggle to keep pace with today's dynamic business and computing environment.
The following is a rundown of some ways through which content services platform can be used to benefit business and IT alike.
1. Advancing Cloud Strategy
cloud computing
Migrating content services workloads to the cloud helps enterprises achieve agility and significant payback. The elastic scalability and cost-effective storage resources of the cloud significantly benefit content services. Enterprises can support mobile, geographically scattered users and use cases that require sharing content with external partners or clients. The cloud also gives enterprises the flexibility to store content in a particular location for performance or compliance reasons.
2. Developing Dedicated Application Experiences
With the content service platform, enterprises can provide experiences tailored to particular use cases, content delivery channels, and applications. Some platforms offer a client-side application development framework along with pre-built content services elements for accelerating solution delivery. CIOs are advised to search for a platform that utilizes the latest UI/UX technologies so that responsive and engaging applications can be built across web, mobile, and desktop.
3. Facilitating Solution Delivery
A modular content services platform that utilizes open APIs and is designed for the cloud can cut application delivery time from several months to a few days. The platform is also well suited to iterative software delivery techniques, such as Agile, so that enterprises can quickly respond to changing business requirements.
4. Building Platform for Digital Business
Many established organizations have consolidated a set of functional capabilities into a digital business platform to power their digital transformation strategy. A content services platform is a core enabling technology, alongside process and governance services. The solutions created on these next-generation platforms improve customer experience, increase employee productivity, and digitize core business processes.
It is evident from the above benefits that integrating CSP in the business will surely take a company to the next level and help CIOs achieve targets which were earlier not possible.
Read More
Digital transformation is one of the significant points most organizations keep in mind, as every enterprise is striving to be the best in the modern business world. As a result, enterprises should leverage digital transformation to harvest their digital assets effectively.
Content enhances productivity and drives efficiency. Content also enables companies to adopt entirely new business models.
Many technologies and solutions help with digital transformation. Content Service Platforms (CSP), in particular, are designed to help organizations transition away from old and inefficient approaches to managing information. With CSPs, companies are led toward a path of digital transformation that will provide greater efficiencies and drive the bottom line. Leveraging and investing in CSP will help unlock the value of digital transformation and bring more value to businesses.
Top Content Service Platform
The term digital transformation is quite often used to cover both optimization and conversion. Regardless of how organizations seek new efficiencies in their current business, content and content management play a significant role in delivering this transformation. Content plays a critical role in enabling an engaging digital customer experience.
In most sectors, like financial services, content-driven interactions can be streamlined, automated, and even eliminated by using an effective content services platform strategy. Content services provide ready access to the right information and enable faster and more accurate decisions. Content often boosts business processes and facilitates knowledge work.
Further, a CSP is built around one simple philosophy, which is, no matter what the content type is or where the data is stored, CSP will help users get access to all content for any app, service, process, or solution that needs it.
See More | https://medium.com/@techmag1/how-can-content-service-platforms-benefit-businesses-60ab14e6fbbe | ['Technology Magazine'] | 2020-11-19 05:03:58.248000+00:00 | ['Platform', 'Technology News', 'Technology', 'News', 'Service Platform'] |
1,266 | Three Ways to Title Case a Sentence in JavaScript | 1. Title Case a Sentence With a FOR Loop
For this solution, we will use the String.prototype.toLowerCase() method, the String.prototype.split() method, the String.prototype.charAt() method, the String.prototype.slice() method and the Array.prototype.join() method.
The toLowerCase() method returns the calling string value converted to lowercase
The split() method splits a String object into an array of strings by separating the string into substrings.
The charAt() method returns the specified character from a string.
The slice() method extracts a section of a string and returns a new string.
The join() method joins all elements of an array into a string.
We will need to add an empty space between the parentheses of the split() method,
var strSplit = "I'm a little tea pot".split(' ');
which will output an array of separated words:
var strSplit = ["I'm", "a", "little", "tea", "pot"];
If you don’t add the space in the parenthesis, you will have this output:
var strSplit =
["I", "'", "m", " ", "a", " ", "l", "i", "t", "t", "l", "e", " ", "t", "e", "a", " ", "p", "o", "t"];
We will concatenate
str[i].charAt(0).toUpperCase()
— which will uppercase the index 0 character of the current string in the FOR loop —
and
str[i].slice(1)
— which will extract from index 1 to the end of the string.
We will set the whole string to lower case for normalization purposes.
With comments:
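The embedded snippets did not survive in this copy of the article, so here is a reconstruction that follows the steps described above exactly (the function name titleCase is an assumption):

function titleCase(str) {
  // Step 1. Lowercase the whole string and split it into an array of words
  var strSplit = str.toLowerCase().split(' ');

  // Step 2. For each word, uppercase the character at index 0 and
  // concatenate the slice from index 1 to the end of the word
  for (var i = 0; i < strSplit.length; i++) {
    strSplit[i] = strSplit[i].charAt(0).toUpperCase() + strSplit[i].slice(1);
  }

  // Step 3. Join all the words back into a single string
  return strSplit.join(' ');
}

titleCase("I'm a little tea pot"); // => "I'm A Little Tea Pot"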
Without comments, it is the same three statements with the annotations stripped out. | https://medium.com/free-code-camp/three-ways-to-title-case-a-sentence-in-javascript-676a9175eb27 | ['Sonya Moisset'] | 2017-02-16 09:31:57.309000+00:00 | ['Technology', 'JavaScript', 'Learning', 'Algorithms', 'Programming']
1,267 | How To Use TypeScript to Avoid Bugs | Type Checks
Also, you can pass any arguments into your JavaScript functions, which brings the same difficulties as dynamic types, since there is no enforcement of what is passed in.
This creates problems: arguments that you're supposed to pass in but didn't will be undefined, and then you get undefined errors. There is also nothing stopping you from passing in too many arguments.
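As a hypothetical illustration (not from the original article), a typed signature lets the compiler reject both a wrong type and a wrong number of arguments before the code ever runs:

function greet(name: string, times: number): string {
  return `Hello, ${name}! `.repeat(times);
}

greet('Ada', 2);   // OK
greet('Ada');      // compile error: expected 2 arguments, but got 1
greet('Ada', '2'); // compile error: 'string' is not assignable to 'number'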
With both checks, TypeScript makes code easier to understand and follow. You don’t have to worry about breaking things when you change code as the compiler will tell you that you got those basic errors.
In addition, TypeScript's decorator and metadata support enables dependency injection frameworks, which means you don't have to resolve dependencies yourself. It also makes mocking dependencies easy for testing.
TypeScript also provides features from upcoming versions of JavaScript (not yet finalized) that might be handy for some developers.
TypeScript adds types to your objects through type declaration files. It works by setting up a module and then including a module.d.ts file with it.
An example of this is in the TypeScript docs.
// Type definitions for [~THE LIBRARY NAME~] [~OPTIONAL VERSION NUMBER~]
// Project: [~THE PROJECT NAME~]
// Definitions by: [~YOUR NAME~] <[~A URL FOR YOU~]> /*~ This is the module template file. You should rename it to index.d.ts
*~ and place it in a folder with the same name as the module.
*~ For example, if you were writing a file for "super-greeter", this
*~ file should be 'super-greeter/index.d.ts'
*/ /*~ If this module is a UMD module that exposes a global variable 'myLib' when
*~ loaded outside a module loader environment, declare that global here.
*~ Otherwise, delete this declaration.
*/
export as namespace myLib; /*~ If this module has methods, declare them as functions like so.
*/
export function myMethod(a: string): string;
export function myOtherMethod(a: number): number; /*~ You can declare types that are available via importing the module */
export interface someType {
name: string;
length: number;
extras?: string[];
} /*~ You can declare properties of the module using const, let, or var */
export const myField: number; /*~ If there are types, properties, or methods inside dotted names
*~ of the module, declare them inside a 'namespace'.
*/
export namespace subProp {
/*~ For example, given this definition, someone could write:
*~ import { subProp } from 'yourModule';
*~ subProp.foo();
*~ or
*~ import * as yourMod from 'yourModule';
*~ yourMod.subProp.foo();
*/
export function foo(): void;
} | https://medium.com/better-programming/how-to-use-typescript-to-avoid-bugs-3d760435e243 | ['John Au-Yeung'] | 2019-11-12 01:25:34.056000+00:00 | ['Software', 'Programming', 'Typescript', 'JavaScript', 'Technology'] |
1,268 | Testing PostgreSQL Applications From Scratch (Almost) | Quickstart
The Dockerfile contains the necessary commands to retrieve and unzip the external pg_tmp library. Here is the Dockerfile and an example test for a Java application. You can find this code in its entirety on Github.
Dockerfile:
FROM maven:3.5.2-jdk-8
ENV PATH="/usr/lib/postgresql/9.6/bin:${PATH}"
RUN apt-get update && \
apt-get install -y python-pip postgresql
COPY src /src
COPY pom.xml /pom.xml
COPY target /target
RUN mvn install -U
ADD http://ephemeralpg.org/code/ephemeralpg-2.5.tar.gz /src
RUN tar -xzf /src/ephemeralpg-2.5.tar.gz -C /src && rm /src/ephemeralpg-2.5.tar.gz
RUN pg_tmp=$(find /src -maxdepth 2 -type d -name '*ephemeralpg*') && make install -C $pg_tmp
RUN chown -R postgres:postgres /src
RUN chmod 777 /src
RUN chmod 777 /target
# Switch USER to non-root to run
USER postgres
CMD [ "sh", "-c", "java -jar /target/name-of-your-jar-with-dependencies.jar" ]
Java:
public static void testInsert() throws Exception {
    try {
        Process p = Runtime.getRuntime().exec("pg_tmp -t");
        p.waitFor();
        BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String s = input.readLine();
        String pg_uri = "jdbc:postgresql://" + s.substring(s.lastIndexOf('@') + 1);
        input.close();

        Connection conn = DriverManager.getConnection(pg_uri);
        Statement stmt = conn.createStatement();
        stmt.execute("CREATE TABLE trees (id VARCHAR (13), " +
                "common_name VARCHAR (50), " +
                "scientific_name VARCHAR (50));");
        stmt.execute("INSERT INTO trees (id, common_name, scientific_name) " +
                "VALUES (1, 'California Redwood', 'Sequoia sempervirens')");
        ResultSet rs = stmt.executeQuery("SELECT * FROM trees;");
        while (rs.next()) {
            assertEquals("Sequoia sempervirens", rs.getString(3));
        }
    } catch (IOException | SQLException e) {
        System.out.println(e);
    }
}
Summary
Lines 14–17 in the Dockerfile retrieve the external library, unzip it, and run make . In more recent versions of Postgres certain commands cannot be run, such as initdb , as the root user. It is suggested to switch users to the postgres user in the Dockerfile in line 24 before executing the file containing any database logic. We grant the necessary permissions beforehand on lines 19–21.
Here are links to this approach implemented in Java and Python.
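For a flavor of the Python side, here is a minimal sketch of the same flow (hypothetical, not the linked code; it assumes psycopg2, which accepts the libpq-style connection URI that pg_tmp -t prints):

import subprocess

import psycopg2

# Start a throwaway PostgreSQL instance and capture its connection URI
uri = subprocess.check_output(['pg_tmp', '-t']).decode().strip()

conn = psycopg2.connect(uri)
cur = conn.cursor()
cur.execute("CREATE TABLE trees (id VARCHAR(13), "
            "common_name VARCHAR(50), scientific_name VARCHAR(50));")
cur.execute("INSERT INTO trees VALUES "
            "('1', 'California Redwood', 'Sequoia sempervirens');")
cur.execute("SELECT * FROM trees;")
assert cur.fetchone()[2] == 'Sequoia sempervirens'
conn.close()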
That’s it! Now you have a lightweight PostgreSQL instance that fits neatly into a Docker CI/CD pipeline for unit and integration testing. | https://medium.com/disney-streaming/testing-postgresql-applications-from-scratch-almost-fb7609cb3be7 | ['Daniel George'] | 2020-03-02 19:10:10.558000+00:00 | ['Technology', 'Postgresql', 'Java', 'Python', 'Docker'] |
1,269 | Trusterras: An Enterprise Blockchain Platform For Product Authenticity | There is a significant number of counterfeit, fake and unsafe products in the world market. This includes not only luxury products but also pharmaceuticals, groceries, alcohol and food products. Counterfeiters 'parasite' on the reputation and quality of brand names that were built and nourished for decades. This can harm the reputation of a company, with severe consequences.
The core issue is that end customers as well as trade partners have no simple and fast way to check if a product is authentic. On the other hand, brand owners do not have an effective way to protect their products that is tamper-proof yet transparent. Bringing them together is ideal for businesses wanting to protect the integrity of their products.
Trusterras provides a solution for (1) end customers, (2) brand owners and (3) trade partners to track and secure product authenticity. Trusterras' enterprise permissioned blockchain-based system helps manufacturers register their products on a blockchain and label them. End customers as well as distributors and retailers can validate within seconds that a product is not a fake, using Android or iOS devices. Customers can use a free app on their phone, while manufacturers, distributors and retailers can use Android or iOS apps on their warehouse scanning devices.
Our goal is to bring trust and transparency among brands, business partners and customers through a federated consortium. As a result, the parties will experience the following benefits:
Customers can be assured to buy safe and authentic products
Manufacturers and brand owners can protect their brands with a tamper proof blockchain
Brands can get a better understanding of customer shopping preferences, having product scan information and analytics at hand
Detecting red flags and fake products will be easier: the platform will inform the parties when suspicious activities take place
Distributors and retailers can also validate product authenticity to ensure they do not trade counterfeit products
Trade secrets are kept private while the facts are recorded in an immutable Distributed Ledger
Partners can rely on efficient infrastructure. With a blockchain-based DLT (Distributed Ledger Technology), all partners receive the information in real time. What one partner sees in their database is exactly what other partners will see as well
A more transparent system builds trust, which is what a consortium is for. No one can manipulate or tamper with the data once it has been recorded and verified on a blockchain. All data recorded must be verified by the members of the consortium through a consensus. This provides a recorded version of the truth, and it cannot be reversed once it has been committed.
Product Authenticity
To protect brands and their reputation, it is important to maintain product authenticity. One of the main problems in retail today is fraud and piracy in the black market. Fake goods that imitate top brands are commonly sold at cheaper prices. In the case of food and pharmaceuticals, they can be a public risk to health and safety because they are not authentic. Many fake products contain hazardous ingredients that can affect consumers’ overall well-being. In retail, there is a loss of quality since fake brands do not have the durability for wear and tear (e.g. clothing, accessories, luxury items).
Counterfeiters know that fake goods can be seized and destroyed. That is why they are resorting to new tactics in which they not only imitate top brands, but also attempt to imitate their logo, tags and certificates. The bigger problem is that this can really fool consumers who would otherwise have no idea what is fake and what is original. That is why we aim to provide a way to verify the authenticity of products using a blockchain.
Federated Consortiums vs Individual Implementation
Brand owners have the flexibility to be part of a consortium of companies or to implement the Product Authenticity platform individually. The benefit of an individual implementation is that all the information recorded on the blockchain is accessible only by the brand and its customers, providing privacy and confidentiality. However, the brand is then solely responsible for the costs associated with platform maintenance and data storage, i.e. cloud or on-premise.
The benefit of a consortium is that it consists of business partners who share some information and share the costs of maintaining the system. The members of a consortium share a common business that is part of the production process in the supply chain. Other consortiums can be linked through a federation, or federated consortiums. All members of these consortiums become part of the whole system that comprises the supply chain. Every product is referred to in the system as an asset, which can be recorded on a distributed ledger database (i.e. blockchain), of which each participating member of the consortium will have a copy. This ensures transparency among the consortium’s members.
In the consortium, every exchange of assets is recorded in order to provide not only transparency but immutability as well. Each transaction must first be agreed upon by consensus among the members. When a consensus has been reached, the transaction can be finalized and committed to the blockchain. Once committed, it cannot be reversed and is used for attestation of a transaction. The blockchain provides a layer of verification that also supports the provenance of transactions that have occurred within the supply chain.
How A Blockchain Provides Verification
A simple example can be used to explain how our system works. Suppose you have a consortium consisting of 3 business partners. You have Company A, which produces branded purses. Company B is the distributor, while Company C is the retailer. They all register on the Trusterras Product Authenticity platform. Once Company A has produced and packaged the purses, they are off to Company B and then to Company C, which will sell them at retail.
The products are proven to be authentic if they meet the criteria we set for the consortium:
All purses originated only from Company A
Each purse is located where the retailer received it
If one or more of these criteria is false, then the product cannot be verified with a certificate of authenticity.
Using a blockchain-based ledger, Company A can track the supply chain from beginning to end. We begin by defining the product (i.e. the purses) as an asset that is manufactured and packaged at Company A. When Company B takes ownership of the asset, the occurrence of the exchange is recorded as a transaction on the blockchain, of which all members of the consortium will have a copy for transparency. When a customer validates the product, they can see its history and authenticity. Finally, the sale is recorded as another transaction.
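To make this concrete, an ownership-transfer record might conceptually look like the following; this is a simplified sketch for illustration only, not Trusterras’ actual data model:
{
  "asset_id": "purse-0001",
  "event": "TRANSFER_OWNERSHIP",
  "from": "Company A (manufacturer)",
  "to": "Company B (distributor)",
  "timestamp": "2021-01-15T10:00:00Z",
  "consensus_signatures": ["companyA", "companyB", "companyC"]
}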
Other products, for example organic apples, food, medicine and alcohol, can be secured by a blockchain system in the same way. Imagine that a customer could scan or tap a label and see if a product is genuine.
Label Copy Scenario
It is not a problem to copy a label’s details and add them to a fake product. Thus, the Trusterras Platform has a solution that counters the label-copying scenario. We can expand on it during a demo session.
Some other ideas on eliminating the possibility of copying label include the following:
Introduce stricter processing of label printing. There is a possibility that authentic printed labels are used to package counterfeit products.
Allow only authorized distributors and retailers to receive branded products.
Trust And Transparency
There are different problems within the supply chain. What our system aims to address and deliver is trust and transparency to solidify business relations. For brands to maintain their reputation it is important that their products are authentic, otherwise their value can go down. A system that builds trust among partners is also honest and open. There is nothing to conceal, but everything to reveal, and that is what makes the system more trustworthy.
We look forward to manufacturers/brand owners becoming part of our Trusterras Product Authenticity platform. Together we can build a system based on trust and transparency. We aim to explore different scenarios where authentic products are a vital part of the business, and provide a solution that works based on the requirements. | https://medium.com/trusterras/trusterras-an-enterprise-blockchain-platform-for-product-authenticity-8617d543c8b7 | ['Vincent Tabora'] | 2021-01-22 20:22:28.322000+00:00 | ['Blockchain', 'Distributed Ledgers', 'Supply Chain', 'Enterprise Technology', 'Product Authentication'] |
1,270 | Social Media — A Vessel of Untruths | The entire globe is facing information wars of an unprecedented nature. Social media is the transporter of misinformation and disinformation. Big Tech has been under fire for not implementing enough measures to uphold the democratic process and increase transparency. These platforms are the primary sources of (fake)news across the globe, and even with efficient fact-checkers, they cannot filter out the untruths, which travel further and faster than facts.
ABC News
Summary of Perils
Human beings are inherently complacent and callous about affairs that do not personally disrupt their lives. With over 2.77 billion users, social media is one such universal disrupter. Digital platforms are the most convenient mode of communication to keep in touch with loved ones, entertainment news, and engaging with like-minded people and brands. However, with the advent of social media, society has been introduced to numerous afflictions like cyberbullying, teen depression, hate speech, false information surrounding democratic processes, the coordination of terrorist groups, and many more. There have been documented lynchings and deaths in many parts of India due to false news spread via social media. These attacks have been aimed at religious minorities in an effort to establish an authoritarian Hindu Nationalist government in India. In a more recent news discovery, these same Hindu Nationalists have been found to be spreading their right-wing agenda through the Indian American diaspora to influence the 2020 US Presidential election. The 2016 US Presidential election has already been tainted due to propaganda spread by external parties on social media. Our sovereignty seems to have been compromised and the 2020 US Presidential election has been described as a billion-dollar disinformation war.
The Coronavirus pandemic response suffered at the hands of misinformation spread via social media as well. Our leaders failed us, media outlets failed us, and misinformation on social media platforms became the nail in the coffin. Some examples of misinformation were bogus treatments and cures, conspiracy theories about the origins of the virus, and inaccurate information on testing facilities.
What Next?
Sinan Aral, author of “The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health — and How We Must Adapt,” aptly draws attention to the need to move past the discussion of whether “Social Media is Good or Bad”. He encourages us to deliberate over how we must adapt and consciously nudge ourselves to be more reflective, leading with values that make us a flourishing society. The ad-based revenue model to capture audience attention, which is a precursor to persuasion, must be re-engineered. It is the 21st century and social media platforms are still allowing heinous and vile content to be published under the garb of “freedom of speech”. The people must take action and ensure these gigantic corporations running information across platforms are regulated and that they align themselves with principles and ethics that hold people accountable for malicious and irresponsible acts. Corporate corruption must cease to exist in the government. Integrity is an unknown characteristic in many global leaders today, who are willing to lie, cheat, and deceive to benefit their agenda and themselves. Therefore, people must gain awareness and protect themselves and their loved ones from these vicious establishments.
Preserve our Empathy
The air of racial tension around the world is burdensome. Empathy heals divides. Empathy’s most avid promoters have strongly felt its absence. The current global climate rewards the pursuit of self-serving goals. That is exactly what the big technology companies, pharmaceuticals, and corporations are doing. The world needs more than just lip-service from these organizations. I urge all of us to be more cognizant of the struggles of people different from us and hold institutions accountable for unjust activities. Parents, educators, teachers must instill future generations with a strong sense of empathy. With the current trend, we are not far from a catastrophic crisis that transcends borders and races. Digital applications do make our life more colorful, but we must be very cautious about the hype surrounding social media platforms. | https://medium.com/marketing-in-the-age-of-digital/social-media-a-vessel-of-untruths-3e6e8a17331a | ['Mehtab Kaur Virk'] | 2020-10-25 20:53:20.206000+00:00 | ['Crisis', 'Misinformation', 'Social Media', 'Technology', 'Empathy'] |
1,271 | Forward Protocol — The Year in Review | Time flies when you are having fun, and when you’re changing the world, too. As far as good years go, 2021 has been an awesome one for Forward Protocol. We were so busy building and distributing the framework to solve the blockchain adoption problem that we barely had time to savor all our progress. So much has happened in 12 months, and some of it can fade into memory easily. Let’s see how much you remember.
Ready? Here we go!
Community
The community is at the center of Forward Protocol’s vision, and we make sure they understand how important they are to us. Our community is spread over Twitter, Discord, Telegram, Medium, Facebook, Instagram, and YouTube, and we do our best to always keep them in the loop. Social media is where it all happens around here!
We crossed the 100k community members milestone in 2021; that was a highlight! We also hosted over 20 AMAs throughout the year. Our AMAs are where you’ll catch our co-founders talking about their vision for Forward Protocol, sometimes in the company of partners and other team members. You should watch out for those, and in case you want to see what all the fuss is about, you can read up on our AMA recaps in our Medium blog!
Partners and Investors
“Success breeds success.” The Forward vision caught on pretty fast in 2021, and it is rightly reflected in our partnerships and collaborations. We made powerful friends in high places who have supported us in achieving a great deal so far.
We are not one to name names for the sake of it. Instead, here is a challenge to find a more exclusive list — Polygon, Reef, Unilend, Metis DAO, Unvest, Coreto, AcknoLedger, ArGO app, Leyline, Shavo Odadjian, KCC, Supra Oracles, HSC, Unified Council, BlockPad and more. We look forward to growing together in the next year.
Forward Protocol also closed investment deals to get our dreams off the ground. Master Ventures, Polygon, CV VC, X21, AU 21, MarketAcross, Magnus capital, Tokenova, NFT Technologies, Ardura, Octopus, Bitcoin.com and more, all contributed to our growth throughout 2021.
Forward Protocol is immensely grateful to our investors who helped us raise $1.25 million and everyone who contributed to a wonderful year.
Impressive Numbers in IDO Rounds
Forward Protocol posted some impressive stats this year too. We got listed on CoinGecko and CoinMarketCap, with more than 778,000 of you adding us to your watchlist.
We also raised $1.25 million in our private round. 20,000 of you also got whitelisted to participate in our IBO event on MahaStarter’s Launchpad, helping us raise $100,000. Our SHO on DAO Maker also reached 29.2 million DAO locked, just 400k short of the all-time record. Way to go Forward!
Kucoin BurningDrop and Gate.io StartUp and Notable Mentions
Forward Protocol also received some love from names that you already know. KuCoin’s BurningDrop knocked it out of the park. Users staked USDT, KCS and ETH assets to mine FORWARD tokens. Gate.io also shared some forward cheer to platform users, giving out 6,000,000 FORWARD tokens during their Free Airdrop Program.
Top7 ICO and ICODrops also shone the spotlight on our work. It felt nice to be recognized by blockchain heavyweights. We also managed to catch the eye of several top media houses such as Cointelegraph, Bitcoin.com, Entrepreneur, Yahoo Finance, CryptoSlate, CryptoDaily, Investing.com, Cryptopolitan and more.
For all of this support and opportunities, Forward Protocol is grateful. We look forward to exceeding expectations once again in 2022. We hope to have all of you here by our side then too!
And to those still on the fence, we have seats for you on the train to a blockchain-inspired future. Looking for a place to start, you can join our social media channels. That’s where most of the magic happens anyway.
| Telegram || Twitter || Facebook || Instagram || YouTube | |Discord| | https://medium.com/@forwardprotocol/forward-protocol-the-year-in-review-5370cb1a88a | ['Forward Protocol'] | 2021-12-31 17:23:41.072000+00:00 | ['Blockchain Technology', 'Web3', 'Blockchain', 'Metaverse'] |
1,272 | How to Create Responsive Layout with Vue Slots | Subscribe to my email list now at http://jauyeung.net/subscribe/ .
Follow me on Twitter at https://twitter.com/AuMayeung
Slots are a useful feature of Vue.js that lets you separate different parts of a component into organized units. With your component compartmentalized into slots, you can reuse components by putting them into the slots you defined. It also makes your code cleaner since it lets you separate the layout from the logic of the app.
Also, if you use slots, you no longer have to compose components with a parent-child relationship since you can put any components into your slots.
A simple example of Vue slots would be the following. You define your slot in Layout.vue file:
<template>
<div class="frame">
<slot name=" frame " ></slot>
</div>
</template>
Then in another file, you can add:
<Layout>
<template v-slot:frame>
<img src="an-image.jpg">
</template>
</Layout>
To use the slot in your Layout component.
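One more detail worth knowing: a slot can declare fallback content that renders only when the parent supplies nothing for it. A minimal standalone illustration, not part of the app we build below:
<slot name="frame">
  <p>No image was provided for this frame</p>
</slot>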
We will clarify the above example by building an example app. To illustrate the use of slots in Vue.js, we will build a responsive app that displays article snippets from the New York Times API and a search page where users can enter a keyword to search the API.
The desktop layout will have a list of section names on the left and the article snippets on the right. The mobile layout will have a drop down for selecting the section to display and the cards displaying the article snippets below it.
The search page will have a search form on top and the article snippets below it regardless of screen size.
To start building the app, we use the Vue CLI. We run:
npx @vue/cli create nyt-app
to create the Vue.js project. When the wizard shows up, we choose ‘Manually select features’. Then we choose to include Vue Router and Babel in our project.
Next we add our own libraries for styling and making HTTP requests. We use BootstrapVue for styling, Axios for making requests, VueFilterDateFormat for formatting dates and Vee-Validate for form validation.
To install all the libraries, we run:
npm i axios bootstrap-vue vee-validate vue-filter-date-format
After all the libraries are installed, we can start building our app.
First we use slots to build the layouts for our pages. Create BaseLayout.vue in the components folder and add:
<template>
<div>
<div class="row">
<div class="col-md-3 d-none d-lg-block d-xl-none d-xl-block">
<slot name="left"></slot>
</div>
<div class="col">
<div class="d-block d-sm-none d-none d-sm-block d-md-block d-lg-none">
<slot name="section-dropdown"></slot>
</div>
<slot name="right"></slot>
</div>
</div>
</div>
</template> <script>
export default {
name: "BaseLayout"
};
</script> <!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
</style>
In this file, we make use of Vue slots to create the responsive layout for the home page. We have the left , right , and section-dropdown slots in this file. The left slot only displays when the screen is large since we added the d-none d-lg-block d-xl-none d-xl-block classes to the left slot. The section-dropdown slot only shows on small screens since we added the d-block d-sm-none d-none d-sm-block d-md-block d-lg-none classes to it. These classes are the responsive utility classes from Bootstrap.
The full list of responsive utility classes is at https://getbootstrap.com/docs/4.0/utilities/display/
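As a quick illustration of how these display utilities combine, here is a standalone snippet (not part of the app):
<!-- Hidden by default, shown only from the lg breakpoint upward -->
<div class="d-none d-lg-block">Desktop-only sidebar</div>
<!-- Shown by default, hidden from the lg breakpoint upward -->
<div class="d-block d-lg-none">Mobile-only dropdown</div>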
Next, create a SearchLayout.vue file in the components folder and add:
<template>
<div class="row">
<div class="col-12">
<slot name="top"></slot>
</div>
<div class="col-12">
<slot name="bottom"></slot>
</div>
</div>
</template> <script>
export default {
name: "SearchLayout"
};
</script>
to create another layout for our search page. We have the top and bottom slots taking up the whole width of the screen.
Then we create a mixins folder and in it, create a requestsMixin.js file and add:
const APIURL = " const axios = require("axios");const APIURL = " https://api.nytimes.com/svc "; export const requestsMixin = {
methods: {
getArticles(section) {
return axios.get(
`${APIURL}/topstories/v2/${section}.json?api-key=${process.env.VUE_APP_API_KEY}`
);
},
searchArticles(keyword) {
return axios.get(
`${APIURL}/search/v2/articlesearch.json?api-key=${process.env.VUE_APP_API_KEY}&q=${keyword}`
);
}
}
};
to create a mixin for making HTTP requests to the New York Times API. process.env.VUE_APP_API_KEY is the API key for the New York Times API, and we get it from the .env file in the project’s root folder, with the key of the environment variable being VUE_APP_API_KEY .
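For reference, the .env file is a simple key-value pair; the value below is a placeholder, so substitute your own New York Times API key:
VUE_APP_API_KEY=your-nyt-api-key-here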
Next in Home.vue , replace the existing code with:
<div class="page">
<h1 class="text-center">Home</h1>
<BaseLayout>
<template v-slot:left>
<b-nav vertical pills>
<b-nav-item
v-for="s in sections"
:key="s"
:active="s == selectedSection"
>{{s}}</b-nav-item>
</b-nav>
</template> Home @click ="selectedSection = s; getAllArticles()">{{s}}
<b-form-select
v-model="selectedSection"
:options="sections"
id="section-dropdown"
></b-form-select>
</template> @change ="getAllArticles()"id="section-dropdown"> <template v-slot:right>
<b-card
v-for="(a, index) in articles"
:key="index"
:title="a.title"
:img-src="(Array.isArray(a.multimedia) && a.multimedia.length > 0 && a.multimedia[a.multimedia.length-1].url) || ''"
img-bottom
>
<b-card-text>
<p>{{a.byline}}</p>
<p>Published on: {{new Date(a.published_date) | dateFormat('YYYY.MM.DD hh:mm a')}}</p>
<p>{{a.abstract}}</p>
</b-card-text> <b-button :href="a.short_url" variant="primary" target="_blank">Go</b-button>
</b-card>
</template>
</BaseLayout>
</div>
</template> <script>
// @ is an alias to /src
import BaseLayout from "@/components/BaseLayout.vue";
import { requestsMixin } from "@/mixins/requestsMixin";

export default {
name: "home",
components: {
BaseLayout
},
mixins: [requestsMixin],
data() {
return {
sections: `arts, automobiles, books, business, fashion,
food, health, home, insider, magazine, movies, national,
nyregion, obituaries, opinion, politics, realestate, science,
sports, sundayreview, technology, theater,
tmagazine, travel, upshot, world`
.split(",")
.map(s => s.trim()),
selectedSection: "arts",
articles: []
};
},
beforeMount() {
this.getAllArticles();
},
methods: {
async getAllArticles() {
const response = await this.getArticles(this.selectedSection);
this.articles = response.data.results;
},
setSection(ev) {
this.getAllArticles();
}
}
};
</script> <style scoped>
#section-dropdown {
margin-bottom: 10px;
}
</style>
We use the slots defined in BaseLayout.vue in this file. In the left slot, we put the list of section names to display the list on the left when we have a desktop-sized screen.
In the section-dropdown slot, we put the drop down that only shows in mobile screens as defined in BaseLayout .
Then in the right slot, we put the Bootstrap cards for displaying the article snippets, also as defined in BaseLayout .
We put all the slot contents inside BaseLayout and we use v-slot outside the items we want to put into the slots to make the items show in the designated slot.
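As a side note, Vue 2.6+ also offers the # shorthand for v-slot, so the following sketch (illustrative only) is equivalent to the <template v-slot:right> form used above:
<BaseLayout>
  <template #right>
    <p>This content renders inside the "right" slot</p>
  </template>
</BaseLayout>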
In the script section, we load the articles for the selected section with the getAllArticles method, which calls getArticles from requestsMixin .
Next create a Search.vue file and add:
<div class="page">
<h1 class="text-center">Search</h1>
<SearchLayout>
<template v-slot:top>
<ValidationObserver ref="observer" v-slot="{ invalid }">
<b-form
<b-form-group label="Keyword" label-for="keyword">
<ValidationProvider name="keyword" rules="required" v-slot="{ errors }">
<b-form-input
:state="errors.length == 0"
v-model="form.keyword"
type="text"
required
placeholder="Keyword"
name="keyword"
></b-form-input>
<b-form-invalid-feedback :state="errors.length == 0">Keyword is required</b-form-invalid-feedback>
</ValidationProvider>
</b-form-group> Search @submit .prevent="onSubmit" novalidate id="form"> Keyword is required <b-button type="submit" variant="primary">Search</b-button>
</b-form>
</ValidationObserver>
</template> <template v-slot:bottom>
<b-card v-for="(a, index) in articles" :key="index" :title="a.headline.main">
<b-card-text>
<p>By: {{a.byline.original}}</p>
<p>Published on: {{new Date(a.pub_date) | dateFormat('YYYY.MM.DD hh:mm a')}}</p>
<p>{{a.abstract}}</p>
</b-card-text> <b-button :href="a.web_url" variant="primary" target="_blank">Go</b-button>
</b-card>
</template>
</SearchLayout>
</div>
</template> <script>
// @ is an alias to /src
import SearchLayout from "@/components/SearchLayout.vue";
import { requestsMixin } from "@/mixins/requestsMixin";

export default {
name: "home",
components: {
SearchLayout
},
mixins: [requestsMixin],
data() {
return {
articles: [],
form: {}
};
},
methods: {
async onSubmit() {
const isValid = await this.$refs.observer.validate();
if (!isValid) {
return;
}
const response = await this.searchArticles(this.form.keyword);
this.articles = response.data.response.docs;
}
}
};
</script> <style scoped>
</style>
It’s very similar to Home.vue . We put the search form in the top slot by putting it inside the SearchLayout , and we put our slot content for the top slot by putting our form inside the <template v-slot:top> element.
We use the ValidationObserver to validate the whole form, and ValidationProvider to validate the keyword input. They are both provided by Vee-Validate.
Once the Search button is clicked, we call this.$refs.observer.validate(); to validate the form. We can access this.$refs.observer because we wrapped the ValidationObserver around the form.
Then once form validation succeeds, by this.$refs.observer.validate() resolving to true , we call searchArticles from requestsMixin to search for articles.
In the bottom slot, we put the cards for displaying the article search results. It works the same way as the other slots.
Next in App.vue , we put:
<template>
<div>
<b-navbar toggleable="lg" type="dark" variant="info">
<b-navbar-brand href="#">New York Times App</b-navbar-brand>
<b-navbar-toggle target="nav-collapse"></b-navbar-toggle>
<b-collapse id="nav-collapse" is-nav>
<b-navbar-nav>
<b-nav-item to="/" :active="path == '/'">Home</b-nav-item>
<b-nav-item to="/search" :active="path == '/search'">Search</b-nav-item>
</b-navbar-nav>
</b-collapse>
</b-navbar>
<router-view />
</div>
</template> <script>
export default {
data() {
return {
path: this.$route && this.$route.path
};
},
watch: {
$route(route) {
this.path = route.path;
}
}
};
</script> <style>
.page {
padding: 20px;
}
</style>
Here we add the BootstrapVue b-navbar and watch the route as it changes so that we can set the active prop on the link of the page the user is currently on.
Next we change main.js ‘s code to:
import Vue from "vue";
import App from "./App.vue";
import router from "./router";
import store from "./store";
import BootstrapVue from "bootstrap-vue";
import "bootstrap/dist/css/bootstrap.css";
import "bootstrap-vue/dist/bootstrap-vue.css";
import VueFilterDateFormat from "vue-filter-date-format";
import { ValidationProvider, extend, ValidationObserver } from "vee-validate";
import { required } from "vee-validate/dist/rules";

Vue.use(VueFilterDateFormat);
Vue.use(BootstrapVue);
extend("required", required);
Vue.component("ValidationProvider", ValidationProvider);
Vue.component("ValidationObserver", ValidationObserver); Vue.config.productionTip = false; new Vue({
router,
store,
render: h => h(App)
}).$mount("#app");
We import all the app-wide packages we use here, like BootstrapVue, Vee-Validate and the date format filter.
The styles are also imported here so we can see them throughout the app.
Next in router.js , replace the existing code with:
import Vue from "vue";
import Router from "vue-router";
import Home from "./views/Home.vue";
import Search from "./views/Search.vue";

Vue.use(Router);

export default new Router({
mode: "history",
base: process.env.BASE_URL,
routes: [
{
path: "/",
name: "home",
component: Home
},
{
path: "/search",
name: "search",
component: Search
}
]
});
to set the routes for our app, so that when users enter the given URL or click on a link with it, they can see our page.
Finally, we replace the code in index.html with:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width,initial-scale=1.0" />
<link rel="icon" href="<%= BASE_URL %>favicon.ico" />
<title>New York Times App</title>
</head>
<body>
<noscript>
<strong
>We're sorry but vue-slots-tutorial-app doesn't work properly without
JavaScript enabled. Please enable it to continue.</strong
>
</noscript>
<div id="app"></div>
<!-- built files will be auto injected -->
</body>
</html>
to change the app’s title.
Finally, we run npm run serve in our app’s project folder to start the app.
After that, you should see: | https://medium.com/dataseries/how-to-create-responsive-layout-with-vue-slots-8ed26edd3201 | ['John Au-Yeung'] | 2019-11-19 18:04:50.525000+00:00 | ['Technology', 'JavaScript', 'Software Development', 'Vuejs', 'Programming'] |
1,273 | Blockchain and the vision beyond the hype | Blockchain and distributed ledger technology
Blockchain is a collaboration-based technology, the benefits of which are best realised when a high number of participants join a public, community-driven, decentralised network, such as the Algorand blockchain solution.
Blockchain is a form of distributed ledger that enables an immutable historical record of transactions between parties who do not have pre-established trust. This removes the need for a central organisation acting as an intermediary. Each block is then stored on a network of computing nodes, and each exchange of value is validated by the decentralised community and stored on a chain. All without the need for a centralised authority.
This should not be confused with distributed ledger technology, a term often used interchangeably with blockchain. Blockchains are a form of distributed ledger but have the added functionality of being able to bundle together historical records of the transactions that cannot be altered. This is an important security feature, especially when participants do not have pre-existing trust in the system. This security is key when moving an asset — especially money — between unknown parties, as you need the value to be available immediately on completion of the transaction. Allowing an immediate and secure exchange of value, anywhere in the globe, could be massively empowering to those who are currently struggling to participate in the global economy. For contrast, imagine the time taken and potential security risks involved in transferring physical cash to a location in another continent.
Distributed ledger technology in simple terms is many copies of a computing database of recorded assets. These copies are kept in sync automatically through a set of centralised agreed rules and distributed on many computers, between parties who have pre-existing trust. (Assets can be anything from currency to supply chain goods through to real estate records.) There are a number of projects working on interoperability between private distributed ledgers as they see the opportunity to develop and allow private ledgers to communicate without the need for an intermediary. As with any technology, you must assess your own organisational needs and pick a solution which is the right fit for you. | https://medium.com/@e-nicleoid/blockchain-and-the-vision-beyond-the-hype-ebb46090ed42 | [] | 2019-05-06 20:56:16.308000+00:00 | ['Distributed Ledgers', 'Emerging Technology', 'Blockchain', 'Algorand', 'Decentralization'] |
1,274 | Designing a Martian Government | Designing a Martian Government
How Mars Can Do Government Better Than Earth
The American democratic experiment was born in the New World where it was free of the vested interests and historical baggage that had slowed progress in Europe. It truly was a government built upon Enlightenment ideals, too far removed from Europe to be governed according to European laws and practices. When we colonize Mars, the Martian government will also be too remote for terrestrial control. Mars gives us an opportunity to experiment with entirely new forms of governance. Here’s how a Martian political structure might improve upon terrestrial ones.
The Need for Government
In the early days of a Martian colony, the governing structure would be based on crew responsibility and Earth-based commands. With terrestrial guidance, a small number of Martian residents would be able to self-govern, much as airplanes can self-govern at 50,000 feet. Prior to flight, rules and responsibilities are laid out, and mission control on Earth provides the ultimate authority.
This works because a Martian colonial base of a few dozen individuals will all know one another on a first name basis. Their social bonds force them into cooperation with one another. As the Martian base grows in population, however, this governance structure will become insufficient.
It is hypothesized that humans are only able to maintain stable relationships with up to 150 people at any given time. Beyond this number, the governing structures required necessarily become more complex to accommodate a growing number of concerns and challenges. It is at this point that a true government is needed to maintain order in the colony.
Separations of Powers
The Enlightenment era that brought us the US Constitution brought us the concept of the “separations of powers.” First put forth in the modern era by the French philosopher Montesquieu, he reasoned that government should be divided into branches with distributed powers that balance each other out. This is perhaps best embodied in the US Constitution, which divides power among three branches of government; the Legislative branch, Executive branch, and Judicial branch.
This structure and concept have proven so robust, that many countries around the world emulate it as gospel. There is no good reason not to carry this concept with us to Mars in a Martian government.
Geometry As Our Guide
But as we design our theoretical Martian government, it begs the question, how many branches are required? Most governments have three main branches, but is this number ideal?
Here, we turn to an unlikely source: geometry. While the modern world doesn’t commingle geometry with politics; the ancient one did. Plato’s Academy of ancient Greece, the pinnacle of early Western education, had the words “Let no one ignorant of geometry enter” engraved at the door. This is because the ancient Greeks understood that the laws of geometry and the natural world have much to teach us about the philosophical one.
I am going to suggest that, yes, three branches is ideal. Think of government as a table upon which society rests. The table must provide a firm, stable platform upon which society grows and develops. A table cannot have only one leg, or even two; it will fall over. Three is the minimum number of legs required for the table to balance.
You might say, but wait, isn’t four better than three, won’t four add stability? A fourth leg can add stability, but it also amplifies imperfections in the lengths of the legs. A four-legged table can wobble; a three-legged one cannot.
And yes, adding an ever greater number of legs (five, six, seven) from there could increase stability while also preventing wobble, but it comes at the cost of parsimony. A government structure comprised of five or six equally powerful branches would be incredibly bureaucratic, complex, and cumbersome.
Also note that the three legs of a table form an equilateral triangle when viewed from overhead. Triangles are the strongest shapes in geometry because to break a triangle necessarily requires changing the length of one of the sides. It is for this reason that triangular shapes are used in engineering where strength is paramount.
In sum, three branches is parsimonious, stable, and strong, but Montesquieu’s division of those branches is flawed.
Flaws in the Current System
The traditional arrangement of three branches, featuring a legislative branch that writes laws, an Executive to implement laws, and a Judicial branch to ensure that no laws violate the Constitution, has glaring weaknesses.
For one, as we can see with the Covid-19 crisis, the government lacks the ability to respond proactively. It takes years, if not decades, for issues or problems to become salient enough (think climate change) for public pressure to build and for enough politicians to rise in Congress to address that issue. In the rapidly changing world of the 21st Century, we need a government that is capable of responding not only to problems that have already materialized; we need government that can respond to problems of the future before they materialize.
A second major flaw of the current system is the relative weakness of the Judicial branch. The Judicial branch has the power only to review whether laws are constitutional; it cannot determine if laws passed by Congress have their intended effect. Further, lacking the pocketbook or the sword, it has no real power to enforce its will. The Judicial branch is weaker, relatively speaking, than the other two.
This brings me to the third flaw with the current system: it is incomplete. It takes too long for issues to bubble up to legislative priority, and when laws are passed, there is no direct feedback mechanism to determine if they actually work as intended. Laws are passed to accomplish things. But today, a law that fails to accomplish its goals has to start all over again, with public recognition of failure bubbling back up into Congress before legal modifications can be passed. This is too slow for the modern world.
My Proposal
My proposal for a Martian Colonial Government is this: A three-branched government structure; the first branch’s sole purpose is to take advice from the public and academia and write legislative proposals meant to solve current and head off future problems. It shouldn’t take years for issues to bubble up to the legislature.
The second branch decides which policy to implement and implements it. There is no logical reason to separate the passing of laws from their implementation. In fact, Conway’s law would suggest that they are best combined together.
The third branch reviews the effectiveness of the legislation and provides recommendations for improvement. It could function much like the current judicial system, but instead of its powers narrowly limited to the review of constitutionality, it could also strike down and remove laws that are not achieving their intended purpose. Additionally, it could make recommendations for improvements and send those recommendations directly back to the first branch where they can be incorporated into new legislation. This provides a feedback loop that doesn’t exists in the current framework.
Such a government would respond faster, be able to eliminate “dead” statutes, and would continually be improving itself and its governing capabilities.
A New Era
The colonization of Mars may not be as far off as most realize and now is the time to begin thinking about how we might govern it. Mars offers us a unique opportunity to bring the best practices learned on Earth, while also giving us a clean sheet upon which we can improve on them. | https://medium.com/curious/designing-a-martian-government-a2d0d55539d5 | ['J. Lund'] | 2020-12-23 07:53:30.779000+00:00 | ['Technology', 'Government', 'Politics', 'Space Exploration', 'Space'] |
1,275 | Spotting Civic Dark Patterns | We’re familiar with dark patterns in UX design. What about in our civic institutions?
In May, a colleague on the Modernist Studio team sent a curious message in Slack: “Is it weird that an economic stimulus payment was sent as a debit card, addressed to a combination of your first name and your wife’s maiden name?” he wrote.
When the stimulus check was announced, the Trump Administration and the IRS framed it as a much-needed boost for struggling families. But for millions of Americans, that value promise failed to deliver. Some, like my colleague, received payments that looked like spam, and threw their stimulus away. Many were promised a direct deposit and instead received a debit card — which was great for spending, but useless for paying rent or saving for medical bills. Others are still waiting, months later, for their money to arrive. Incarcerated people who work and pay taxes were sent stimulus payments and then asked to return them. And more than 1 million Americans who filed taxes jointly with spouses who were not American citizens got no stimulus check at all.
The stimulus was promised as a public service. But it was not designed to be accessible to the people who most needed it, in the ways they needed it.
What are dark patterns?
In UX design, dark patterns are user interfaces that trick users into doing something they don’t intend, want, or need. These mistakes usually cost users money, and always cost them time. Common forms of UX dark patterns include price comparison prevention (making it difficult for users to compare the prices of items), sneak-into-basket (automatically adding products add-ons to a purchase), forced continuity (disguising a recurring subscription under what looks like a one-time payment), and confirmshaming (“Are you sure you don’t need this additional weight loss product?”).
Dark patterns are not necessarily bad design — they are designs that ensure bad outcomes for the user. They are popping up in digital services everywhere, to the point that Congress took note, introducing a bill to prevent companies like Facebook, Google, and Twitter from deploying dark patterns against their users.
As the public sector adopts more from tech, it’s worth asking whether it is also adopting tech’s bad habit for dark patterns, in its own way.
The stimulus fiasco perhaps reflects a form of civic dark pattern — designs in the public sector that purport to deliver a service to the public but instead benefit other stakeholders, often at the cost of the humans interacting with them.
Some of those civic dark patterns may include:
A service that is designed to not be accessible, especially by people most in need of access
The stimulus is a recent example — the IRS sent people in need of money to pay rent a stimulus in a form that instead encouraged supporting businesses. But examples of this civic dark pattern also show up in our elections: from opposing funding for the US Postal Service when millions are relying on the USPS for mail-in voting, to structuring ballots so that critical issues are hidden from view.
A service that is designed to get people to pay, when it is supposed to be offered for free
Intuit made headlines in 2019 for hiding TurboTax’s government-mandated free tax-filing program for low-income users on its website in order to get them to use its paid services. Congressional staff later found that companies like H&R Block had done the same.
A service that reroutes critical public information through private gatekeepers
Throughout the pandemic, the CDC has collected, reported, and indexed data about Covid-19 from hospitals and health centers, providing this as a public and internal service for tracking the virus. In July, it was reported that data collection on Covid-19 would now be routed through a single private contractor in partnership with the HHS, with no word on how the data would be shared or whether it would be made available for participating hospitals and the public.
Dark patterns are remarkably profitable in the short term, but they erode customer trust and credibility over time. Unchecked, dark patterns like these affect the health and relationships of millions of Americans. But when it comes to civic dark patterns, the consequences are systemic. Our wellbeing as a society relies on access to resources, but it also relies on trust — in each other, and in the systems that guide us. Civic dark patterns undermine this trust, weakening the structure and resiliency of our public services — and ultimately our ability to live in peace.
How can service design help?
These emerging patterns will require an extra vigilance on the part of service designers. Service design is human-centered & civic-minded — a service designer works to create the systems, policies, artifacts, and experiences required to deliver the best service possible to end users.
Service-design driven companies like NAVA and digital service agencies like 18F are working with federal agencies to make government services simple, transparent, and accessible for Americans. And volunteer networks like U.S. Digital Response and the Emergency Design Collective are matching service designers with public servants and organizations to help human-centered rapid response to crises. As government, tech, and design continue to intersect, more and more designers will have the opportunity — and responsibility — to ensure that public services work for the people.
A key feature of design work in the future will be spotting civic dark patterns like these — and building for peace and trust instead. | https://medium.com/moderniststudio/spotting-civic-dark-patterns-8167281ad9a3 | ['Catherine Woodiwiss'] | 2020-10-19 21:07:43.341000+00:00 | ['Future', 'UX', 'Design', 'Civictech', 'Technology'] |
1,276 | Waves Launches Explorer 2.0 | The new version is optimised for mobile devices and fits perfectly within Waves’ unified design concept.
With the Christmas holidays in full swing for much of the world, the Waves team is still working hard and is pleased to announce another important launch before the end of the year. The new release of the Waves Explorer 2.0 is a major update, bringing the UI into line with the design of other Waves products. We have included several stability patches and — most importantly — improved the experience for mobile devices.
New UI
Our frontend team has created a much clearer and more user-friendly interface, and the overall experience for the application is now similar to other apps in the Waves ecosystem. | https://medium.com/wavesprotocol/waves-launches-explorer-2-0-e1d4f7ace9e6 | ['Waves Tech'] | 2018-12-26 13:35:32.502000+00:00 | ['Blockchain', 'Blockchain Technology', 'Cryptocurrency', 'Waves', 'Crypto'] |
1,277 | Wasmer 1.0 integrated into CosmWasm | TL;DR: It’s faster and safer and nicer.
Earlier this year we learned that the WebAssembly runtime that powers contract execution in CosmWasm was going to get a major refactoring. At that point, it became clear we could not release a CosmWasm 1.0 that depends on an unmaintained Wasmer 0.x. So we changed plans and started looking into the changes in the upcoming release.
Photo by David Armstrong on Unsplash
Today, after about three months of integration work, we’re happy to announce that CosmWasm successfully integrated the latest beta release of the upcoming Wasmer 1.0. And this has many direct and indirect consequences for all CosmWasm users.
Faster
As part of the transition, we wrote a benchmarking suite directly in the cosmwasm repo. This allows us to compare the performance consequence of a change easily. And the results are amazing (even on my laptop from 2015):
2x Wasm compilation performance (to 50ms)
2x Wasm execution performance (to 50 µs)
6x module loading performance (to 6ms)
Safer
Before the upgrade, a lot of the internal functionality was based on raw pointer management. While this provided a powerful way to manage data during execution, we had to be extremely careful not to create bugs. The new interfaces allow us to benefit from all the safety that comes with the strict Rust type system.
New APIs now allow us to easily set memory limits during contract execution. Before, a contract execution could take up to 4 GiB of memory, which is a result of the 32-bit address space in Wasm. Memory can now be limited to any amount ≤ 4 GiB. Since this memory limit is consensus critical, each blockchain needs to set it as a parameter.
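As a rough illustration of the mechanism only (not the actual cosmwasm-vm code; the helper name is made up), Wasmer 1.0 lets a custom Tunables implementation clamp the memory a module is allowed to declare before instantiation:
use wasmer::{MemoryType, Pages};

// Clamp a module's declared memory to `limit` pages.
// One Wasm page is 64 KiB, so e.g. 8192 pages = 512 MiB.
fn clamp_memory(requested: MemoryType, limit: Pages) -> MemoryType {
    let maximum = match requested.maximum {
        Some(max) => Pages(max.0.min(limit.0)),
        None => limit,
    };
    MemoryType::new(requested.minimum, Some(maximum), requested.shared)
}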
Nicer
This is probably the most developer-visible part: the CosmWasm stack as a whole gets nicer. The two available Wasm backends, Singlepass and Cranelift, now both support gas metering and protection from non-deterministic float operations. Both can now be used with Rust stable. The only remaining caveat is that Singlepass does not yet support Windows, so Cranelift remains the default development backend while Singlepass is used on-chain.
Kudos to the Wasmer team
On behalf of Confio, I’d like to take the opportunity to congratulate the Wasmer team on this major achievement and to say thank you for the effort put into this open source project. When we considered alternative Wasm runtime projects that worked better with our initial timeline, it quickly became clear that we wanted to stick with Wasmer for what we see in the project: technical excellence, great people, and a culture welcoming external contributions. When the collaboration intensified this month, this decision was proven exactly right.
What’s next?
The integration of Wasmer brings us much closer to a CosmWasm 1.0 release. It gives us a solid base to fine tune the Wasm engine with respect to gas metering, resource limiting and performance. We are actively reviewing internal features and changes we need to feel confident of the CosmWasm stack, but externally we are only blocked by a final Wasmer 1.0.0 release as well as a final Cosmos SDK Stargate release. | https://medium.com/cosmwasm/wasmer-1-0-integrated-into-cosmwasm-2fa87437458c | ['Simon Warta'] | 2020-12-24 10:18:44.176000+00:00 | ['Blockchain Technology', 'Webassembly', 'Smart Contracts', 'Cosmos Network', 'Wasmer'] |
1,278 | Status Update: HapRamp’s Production server is up, thanks to Hasura | The Stack
In case you are wondering about HapRamp’s stack, here’s a pictorial representation that best describes it —
A very simple image that shows how data flows in HapRamp
We are using Hasura solely for hosting the backend as of now. Our backend is a Flask / Postgres application. We love using Hasura’s k8s-based approach, which makes deploying our app so easy and so performant.
The Future
Subscribe to HapRamp if you have not done so already. We will be rolling out the first public alpha very soon. You definitely don’t want to miss getting in early on this platform. | https://medium.com/hapramp/status-update-hapramps-production-server-is-up-thanks-to-hasura-2f64058ad3a4 | ['Avi Aryan'] | 2018-06-25 07:26:29.669000+00:00 | ['Press Release', 'Blockchain', 'Technology', 'Steemit', 'Hasura'] |
1,279 | 20 Luxury gadgets that are too cool to be true | Naim Mu-so Bentley Special Edition Wireless Speaker System
If you’re looking for a speaker system to match your Bentley, then the Naim Mu-so Bentley Special Edition Wireless Speaker System is the answer. It’s the brand’s first speaker with a wooden finish. There’s even a copper-threaded speaker grille, and the signature Bentley lattice surrounds the volume dial.
Grobo Premium Smart Indoor Garden
You can grow a garden indoors, on your own terms, with the Grobo Premium Smart Indoor Garden. This high-tech growing box lets you produce high-quality plants with large yields right from the start.
Bang & Olufsen Beovision Contour All-in-One OLED TV
The Bang & Olufsen Beovision Contour All-in-One OLED TV is another of our favorite cool luxury gadgets that are too good to be true. Its minimalist design is a gorgeous addition to any room, and its award-winning sound brings the theater to your home.
Kohler Stillness Bath Smart Bathtub
Enjoy a luxurious, high-tech soak every day with the Kohler Stillness Bath Smart Bathtub. Inspired by Japanese forest bathing, the water fills from the bottom and overflows into the wood moat. Full-spectrum lighting enhances the ambiance.
Caviar Golden Rock Sony PlayStation 5
The Caviar Golden Rock Sony PlayStation 5 is a jaw-dropping way to add luxury to your gaming setup. It’s coated with about 20 kg of gold and features a geode design that amplifies reflected light. And, if that’s not enough, the gamepad uses crocodile leather and 700 more grams of gold.
Caviar AirPods Max Gold Black
Meanwhile, the Caviar AirPods Max Gold Black is a serious upgrade to your headphones. The ear cups are made from 750 grams of pure gold with Caviar inscriptions etched on the headband. Now that’s a cool luxury gadget.
Bugatti & TIDAL Audio Royale Speaker Range
A collaboration between the two brands, the Bugatti & TIDAL Audio Royale Speaker Range combines design and performance. These colossal speakers have features in line with the Bugatti sports cars, and you can customize materials, finishes, and details.
Jupe Flat-Packed Shelter
For your forays in the wilderness, the Jupe Flat-Packed Shelter is designed for an off-the-grid getaway. Inspired by the cosmos, this is one of those luxurious gadgets that’s both functional and a work of art. | https://medium.com/the-gadget-flow/20-luxury-gadgets-that-are-too-cool-to-be-true-afac2a7fd0d5 | ['Gadget Flow'] | 2021-02-25 17:54:14.388000+00:00 | ['Tech', 'Gadgets', 'Luxury', 'Smart Home', 'Technology'] |
1,280 | How i hacked BBC mail servers | BBC
Hey hackers ;)
Today I’ll write about a kind of vulnerability a lot of hunters forget to search for or test, and some of them don’t know how to exploit!
The vulnerability we will talk about today is the mail server takeover, AKA SMTP server broken access, etc. It goes by a lot of names, but now let’s talk about it.
My case is unique though, because it’s a kind of chained vulnerability, as I will explain, so let’s start ….
First of all I did my normal recon, starting by gathering the domains of the BBC, but unfortunately the BBC has a lot of SSL certificates, like:
BBC Studios Distribution Limited
BBC Studios Limited
BBC Worldwide
BBC Worldwide Ltd
So if you want to gather the domains in a correct way, you should use every SSL name and extract the domains from it, by doing some reverse whois operations or the other domain-gathering techniques. So I did all of that; then I wanted a kind of general or unique certificate that has all or most of the domains. Also, sometimes I search for an SSL that has only a limited set of domains. Seems weird, right?
I know :), but check that:
I search for the SSL which has a large number of domains in order to avoid wasting time searching each certificate for domains
Then I check the SSLs with limited domains in order to see the rare or uncommon domains, because that is where the vulnerabilities will exist, like the registers in the processor :) [A lot]
So after some searching I got an idea: what if I asked my homie Shodan to search for any new SSL? Me to me:
But let’s try, why not man; so I entered Shodan and started with the dork:
org:”BBC”
This dork will seek any host, domain, etc. related to an organization named BBC, so if any host related to the BBC organization appears, I can clearly see its SSL. After hitting enter I got this result:
And I did find a new SSL, which is:
British Broadcasting Corporation
So now we have a new SSL. Then, after gathering some domains, I decided to start with the main domain of BBC UK, which is:
bbc.co.uk
I used my automation script — theSubdomainer — in this operation, so i typed the command in my VPS:
nohup subdomainer -d true -g <github_token> -l domains.txt&
This command will simply run the subdomainer in the background and will block any hup signal from reaching the process when I close the SSH session. So I slept that night thinking about how I could hack this company, but with critical or high impact.
So I woke up the next day to check the domains and their subs, especially the bbc.co.uk domain, and found a lot of attractive subs. One of my favorite methodologies is to fuzz the attractive subs first, then the other subs, so I took a sub and fuzzed it with the dirsearch tool; I love this tool like I love the exploitation 🥰
I have a technique for choosing the wordlists; this technique is:
1. Choose the common files wordlist
2. Detect the technology of the web app, then get a wordlist for it
3. Detect the web app server, then use the wordlist for this server
4. Finally, get the raft wordlists
So before I start: I entered the subdomain, but unfortunately it redirected me to another sub with a login page:
Now I confirmed that this sub contains sensitive info, so I started the war 🙂.
I got the common files wordlists and found the /api/ path. GOTCHA! Let’s start the work of my second fuzzing technique:
If you find an /api/ or api subdomain like api-example.com or api.example.com, then you should start your fuzzing with:
1. Start with the common API endpoints wordlists
2. Then the APIs-seen-in-the-wild wordlist
3. After that, try some GraphQL endpoints wordlists
4. Also, you can search for actions / objects in this endpoint
So now, first let’s try entering the API endpoint and see what will happen and how the application will interact with us:
So as you can see, it dumped all the endpoints it has, so first let’s try to enter the /admin/ endpoint; in fact, I love admins a lot 😻.
So after entering the admin endpoint I got the following result:
GG homie, now we are in the admin API, which shouldn’t be accessible by default, so for now we have Unauthorized Access To Admin API, but without impact. So let’s try to access /admin/users so that we can reach some sensitive info:
So i found a lot of emails with permissions & info’s about the mail users, so i searched a lot till i get the System_Admin info’s, so i report it as an Broken Access Control or Unauthorized access to admin endpoint, for the first time they did not accept it or they do not clearly understand what is this in face & they said that they will check for it deeper, so no problem I’m OK
So then i decided to make some network pentesting on it, because i was little sad about the endpoint 🥺, so i opened Shodan and stop for a while, then i though like that:
Hey homie CG, you found emails, right??
- Yeah, right
So why not try to find some kind of vulnerability in the mail servers, then try to use these mails!?
- Man, I love you 🙂. Let's do it
So I started my Shodan recon. I searched a lot using the common certificates for the BBC, but found few or no results. Then I thought of the certificate originally returned by Shodan, which is:
British Broadcasting Corporation
So I decided to do some more precise dorking, using:
ssl:”British Broadcasting Corporation” port:25,587 “Hello”
So now let’s explain this simple dork:
ssl: is for specifying SSL to search about
port: to specify ports to only appear in the result
“Hello” to grap only the connected SMTP connections, because when shodan connects to SMTP server, he always try to send EHLO AKA Hello request to a kind of host to make sure that it can perform commands, so in the response if the SMTP connection is really created and “Hello” word appeared that means that the EHLO request is really sent.
25: the SMTP port
587: the Encrypted SMTP connection port
To explain the ports, simple like the difference between the ports: 80 & 443
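As an aside, the same dork can be run programmatically. Here is a minimal sketch using the official shodan Python library (the API key is a placeholder, and you obviously need your own):

import shodan  # pip install shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

# The same dork as above: match the BBC certificate on SMTP ports
# where Shodan's banner shows an established "Hello" (EHLO) exchange.
query = 'ssl:"British Broadcasting Corporation" port:25,587 "Hello"'

try:
    results = api.search(query)
    print("Results found:", results["total"])
    for match in results["matches"]:
        print(match["ip_str"], match["port"])
except shodan.APIError as exc:
    print("Shodan error:", exc)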
Then I got the result:
Before you ask why I'm hiding anything when anyone could simply type the dork and access all of it: first, if anyone used this info in the wrong way it could cause problems for me, so I'm keeping this report as private as I can; second, don't ask a lot 😊.
Now, when I tried to connect to this SMTP port (25) through my home network, it didn't connect. I did a port scan to make sure the port was really open, because sometimes Shodan caches the response and the port may have since been closed:
As you can see, the rustscan tool didn't confirm that the port was really open. But wait, it's not the end. Look at this tip, homie:
Sometimes your home country has restrictions on accessing hosts / ports like this from your network, so try using a VPS, and do the same for port scanning 😉.
So I switched to my VPS, did the port scan, and got the result:
We did it, my homie. Now let's try connecting to it using the telnet command:
telnet <host> <SMTP port>
As you can see, we connected and could make the EHLO request with the SMTP command:
EHLO <host>
Now let’s play homies 😈, So i get back to the /api/admin/users to choose a mails from it, so i bring two emails like:
mr-High@bbc.co.uk
cyber-guy@bbc.co.uk
Now we have two mails, right? Before continuing, let's go over the workflow of the SMTP commands:
Specifying sender
Specifying recipient
Then knowing the syntax of data sending
Specifying mail subject
Specifying mail body
Sending the mail
And I think now you understand why I picked two mails :). If not: I picked two emails so that one can be the sender and one the recipient. So let's start specifying the parts of this SMTP communication:
MAIL FROM: mr-High@bbc.co.uk
RCPT TO: cyber-guy@bbc.co.uk
Now the SMTP server accepts the sender and the recipient. Let's see the syntax for sending the data:
DATA
As you can see, the syntax is:
Put in all the mail content [headers like the subject, then the body], then enter a "." on a line by itself to end the mail and send it
So let’s continue:
Subject: re-test from Cyber Guy
<hit Enter>
This is a re-test from Cyber Guy
.
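For reference, the whole test above can also be scripted with Python's standard smtplib. This is a minimal sketch (the host is a placeholder since the real one is redacted, and you should only run this against systems you are authorised to test):

import smtplib

HOST = "mail.example.bbc.co.uk"  # placeholder; the real host is redacted

with smtplib.SMTP(HOST, 25, timeout=10) as smtp:
    code, banner = smtp.ehlo()  # the EHLO handshake shown above
    print(code, banner.decode(errors="replace"))
    # sendmail() wraps the MAIL FROM / RCPT TO / DATA steps in one call.
    smtp.sendmail(
        "mr-High@bbc.co.uk",
        ["cyber-guy@bbc.co.uk"],
        "Subject: re-test from Cyber Guy\r\n\r\nThis is a re-test from Cyber Guy\r\n",
    )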
So finally I was able to compromise over 4 BBC mail servers that way, and more as well 😉. In the end they accepted both the first report about the sensitive endpoint and the second report about the mail server takeover, and I became a BBC Hall of Famer. Wait for more BBC write-ups to come, Allah willing | https://medium.com/@cyberguy0xd1/how-i-hacked-bbc-mail-servers-e61bb6faed2d | ['Momen Ali', 'Cyber Guy'] | 2021-09-04 12:40:17.470000+00:00 | ['Information Technology', 'Cybersecurity', 'Information Security', 'Bug Hunting', 'Bug Bounty'] |
1,281 | What is a technological stack and what are the criteria for selecting the right stack | What is front-end and back-end?
Front-end is everything that is visible and clickable on the page (these are often called client-side operations). A well-designed front-end is useful and intuitive.
The most popular technologies for front-end:
HTML/HTML5 (page structuring),
CSS (styling),
JavaScript (programming language),
Angular, React, Vue.js, jQuery etc. (front-end frameworks).
Back-end is everything in the invisible layer of the page that makes the page work (operations performed on the server side of the browser-server connection, e.g. calculations and database queries).
The most popular technologies for back-end:
Nginx, Apache (web servers),
C#, Java, PHP, Python, Ruby (programming languages),
back-end frameworks (Symfony, Node.js, .NET, Django, etc.),
Microsoft SQL Server, MySQL, PostgreSQL, Oracle, Neo4j (databases),
AWS, Microsoft Azure, Google Cloud, Heroku (cloud infrastructures and services).
Selection of the technological stack from scratch
How to choose the right technological stack? It depends. The following criteria will help to clarify the matter.
Functional and non-functional requirements
Functional requirements are requirements relating to how the system operates. For example, the requirement that a user can log into their account by clicking the appropriate button.
However, functional requirements are not the most important. Why?
Because it is the non-functional requirements that dictate the stack composition to a greater extent. For example, quality attributes such as speed, scalability or usability.
For example, what good are a page's 10 important business functions if it works so slowly that the bounce rate is 100%? Likewise, if a page receives 1000% of its average traffic and is not scalable, it will be overloaded and return errors.
Access to specialists
Choosing a technological stack is like buying an exclusive car.
If there is no service centre in your region that specializes in repairing that car brand, there is no point in undertaking the “project”. In case of a repair or overhaul, it would be pointless to travel to a service centre on the other side of Europe.
It is similar in a technology project. While giving the specialists a free hand in choosing the technology stack, also give them your requirements and some constraints. This will help avoid a situation in which there is a lack of specialists in that technology in the target location of the development office.
Project size
The size and purpose of the project are the most important factors in the choice. The bigger the project you plan, the bigger your stack will be.
Using the example of a small project like an MVP (minimum viable product) or SPA (single-page application), you can choose Python + Django or Node.js + React. Why? Python is the third most popular language, used in startups and in companies like Google and Spotify. Django is a framework of ready-made components, which will allow you to build a product pragmatically and quickly. In order not to reinvent the wheel building common components, e.g. user authentication or file uploads, Django will be perfect.
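To illustrate the "ready-made components" point, here is a minimal, hypothetical sketch of wiring up Django's built-in authentication views. The paths and project are placeholders; the point is that login/logout come with the framework:

# urls.py of a hypothetical Django project
from django.contrib import admin
from django.contrib.auth import views as auth_views
from django.urls import path

urlpatterns = [
    path("admin/", admin.site.urls),
    # Ready-made login/logout views: no need to reinvent authentication
    path("login/", auth_views.LoginView.as_view(), name="login"),
    path("logout/", auth_views.LogoutView.as_view(), name="logout"),
]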
For larger projects, such as enterprise applications or large e-commerce, the technology stack will be much wider, in order to properly integrate and maintain application performance. In this case, you need multiple layers of programming languages and frameworks that behave appropriately under large data volumes.
Product Scalability
The product architecture should be scalable — this also helps to further develop and improve the product. Scalability is the ability of a system to maintain performance under higher load, e.g. more visits to the site.
Imagine you are selling socks. For starters, the store’s assortment does not include many items. There is a login panel, available socks, a contact form and a tab about the company.
Horizontal scalability will ensure that the website works great on many devices and can cope with increased traffic (e.g. on Black Friday or before Christmas).
Vertical scalability will work if, as your business grows, you can add new functionality to your website or, for example, create a mobile application.
Consider choosing between Microsoft Azure and Amazon Web Services (AWS). The responsiveness of the cloud platform alone will allow for more flexibility.
Put the choice of technology stack in the hands of specialists
If you’re still confused about the idea of choosing a technology stack, don’t worry. It makes sense to use a more experienced technology company.
By putting your project in the hands of specialists: | https://medium.com/@davincistudio/what-is-a-technological-stack-and-what-are-the-criteria-for-selecting-the-right-stack-90a97432cc58 | ['Da Vinci Studio Software House'] | 2020-12-03 09:50:08.708000+00:00 | ['Programming Languages', 'Technology', 'Backend', 'Development', 'Frontend'] |
1,282 | An Overview of Approaches to Privacy-Preserving Data Sharing | Approach 1: Process and Administrative Controls.
Summary: One fundamental form of ensuring that data stays deidentified is to ensure that anyone who has access to the data credibly agrees to not attempt to reidentify it. This can include internal firewalls (agreements that a party won’t combine data with other parts of the organization), audits, controlled access (for example, access to datasets by only certain individuals or access only in a “clean room”), and contracts.
Pros: In most pragmatic use cases, process controls can ensure that data is only used for legitimate research purposes, and that the privacy risks of data exchange are theoretical rather than practical. Combined with other approaches (such as basic redaction), this can significantly reduce the likelihood of actual re-identification.
Cons: This approach leaves open the risk of bad actors or process failure that jeopardizes patient privacy. On the flipside, overly draconian process controls can reduce research and reduce data liquidity.
Approach 2: Data Redaction.
Summary: This is the most common form of de-identification. Institutions start by redacting explicitly identifying information (e.g. my name and social security number), then continue by creating categorizations (e.g. birth year instead of birthdate), and then remove low-frequency data from a dataset (e.g. removing extremely rare disease patients from a data set).
"K-anonymization" is the extreme form of redaction: an approach that guarantees privacy by requiring that data keep being deleted until every remaining record looks identical to at least K-1 others (so, for 100-anonymization, a record can only be preserved if there are 99 other identical records, and redaction continues until that is the case).
Pros: Basic redaction is simple and has extremely high impact at reducing pragmatic privacy risk; for example, removing name and social security number is an obvious first step on any data set.
Cons: Data redaction can remove valuable information that is useful for analysis. This is especially true for machine learning approaches (where having lots of data is valuable), and this is also very true if data is strictly k-anonymized. For example, a date of service and a diagnosis code can be valuable for computation, but these two data attributes may need to be redacted as “indirect identifiers” depending on the strictness of the deidentification.
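To make the K-anonymization idea concrete, here is a minimal, illustrative sketch, assuming a pandas DataFrame and a chosen set of quasi-identifier columns (both are assumptions for the example):

import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int) -> bool:
    # True if every combination of quasi-identifier values
    # appears in at least k records.
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

def enforce_k_anonymity(df: pd.DataFrame, quasi_identifiers: list, k: int):
    # Suppress (drop) records in groups smaller than k: the
    # "keep deleting until K-anonymous" approach described above.
    return df.groupby(quasi_identifiers).filter(lambda g: len(g) >= k)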
Approach 3: Aggregation.
Summary: A simple form of de-identification is to store an aggregate answer, and then delete all underlying records.
Pros: This is a form of “true anonymization” — where all underlying data is removed.
Cons: This is only feasible for fairly simplistic research — it requires knowing a question upfront, and for the answer to that question to be in a single data set. It also removes data needed for substantiating research studies.
Approach 4: Hashing & Linkable Redaction (or, “Pseudonymization”).
Summary: A cryptographic approach known as hashing can be used to indicate whenever two data attributes are the same, without knowing what the underlying attribute means. For example, "Travis May" could map to a consistent hash value "abc123", which means that whenever that value shows up in multiple data sets, a researcher can know it's the same patient, without being able to reverse engineer that "abc123" corresponds to "Travis May."
Pros: This can solve for redacting identifying information, but while allowing data to be linked across data sets — an important component of any major research study. This method is also specifically approved under HIPAA as a means of de-identified linking (though linked data sets still need to be reviewed for whether the combination of data is itself too identifying).
Cons: Careful key management and process controls must be created to ensure no party can build a “lookup table” or otherwise reverse engineer hashes. Additionally, hashing is only useful for true identity aspects (such as name), but is not useful for de-identifying the attributes about a patient.
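As a minimal sketch of the hashing idea (the key below is a placeholder): using a keyed hash (HMAC) means a "lookup table" cannot be built without the secret key, though careful key management, as noted above, remains the hard part:

import hmac
import hashlib

SECRET_KEY = b"replace-with-a-well-managed-secret"  # placeholder

def pseudonymize(identifier: str) -> str:
    # Deterministically map an identifier (e.g. a name) to a token.
    # The same input always yields the same token, so records can be
    # linked across datasets without revealing the identifier itself.
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:12]

print(pseudonymize("Travis May"))  # an "abc123"-style token
print(pseudonymize("Travis May"))  # identical token on repeat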
Approach 5: Synthetic Data.
Summary: Synthetic data is data that is generated to “look like” real data, but not contain any actual data about individuals. This can be used most frequently as a way to test algorithms and programs.
Pros: No actual data about individuals is in the data set used for analysis, so there is no privacy risk. Meanwhile, most statistics about the data can stay intact (averages, correlations of different variables, etc.)
Cons: While you can synthetically create data that maintains basic summary statistics, nuance in the data can be lost that is relevant to machine learning approaches, targeted quality improvement projects (e.g., identifying at-risk populations for whom you want to improve a particular process), and effective clinical trial recruitment. Synthetic data approaches also don't allow you to combine data across datasets about the same individual (which is often necessary for longitudinal analysis), and they generally require knowing ahead of time what statistical query you want to ask.
Approach 6: Differential Privacy.
Summary: Differential privacy is an approach in which “random noise” is added until it becomes technically impossible to identify any individual in a dataset. For example, within an underlying record, a date might be shifted by a few days, a height might be changed by a few inches, etc.
Pros: Adding random noise can often be less destructive of information than redaction of data to get a similar level of anonymization.
Cons: Differential privacy is computationally complex to do at scale in a way that achieves the desired privacy guarantees while keeping the analysis useful. Some data sets (unstructured data, for example) do not work at all in a differential privacy context. [This is a cutting edge approach which means the cons may be significantly reduced in coming years]
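A minimal sketch of the "random noise" idea, using the classic Laplace mechanism (the epsilon and sensitivity values below are illustrative assumptions):

import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float) -> float:
    # Add Laplace noise scaled to sensitivity/epsilon.
    # Smaller epsilon means more privacy and more noise.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# e.g. a private count query: a count changes by at most 1 per person,
# so the sensitivity is 1.
print(laplace_mechanism(true_value=1000, sensitivity=1.0, epsilon=0.5))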
Approach 7: Multi-Party Computing.
Summary: Multi-party computing is an approach in which an analysis can be done on several data sets that different organizations have, without the organizations needing to share the data sets with each other. You might think of this as bringing the analysis to the data rather than bringing the data together for analysis. For example, if a pharmacy knows which drugs were filled, and a hospital knows what drugs were prescribed, multi-party computing can be used to determine what percentage of patients fill their prescriptions, without either party sending patient data to the other. This uses a variety of cryptographic approaches to ensure that no party gets access to more information, except the aggregated answer.
Pros: Can drive valuable analysis without privacy loss. In theory, any analysis can be done in this form.
Cons: Computationally complex and still emerging as a field, meaning computers struggle to run these algorithms quickly and they are difficult to build. Protects inputs and intermediate values in a computation, but outputs may still reveal identifying information (and thus approach can be augmented with differential privacy or another technique). [This is a cutting edge approach which means the cons may be significantly reduced in coming years]
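To give a flavour of how this works, here is a toy additive secret-sharing sketch. Each party sees only random-looking shares, yet the combined total can still be computed; a real MPC protocol involves far more machinery, so treat this purely as an illustration:

import secrets

PRIME = 2**61 - 1  # working modulo a prime keeps shares uniform

def share(value: int, n_parties: int):
    # Split a value into n random shares that sum to it mod PRIME.
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# The pharmacy and the hospital each secret-share their private counts...
pharmacy_shares = share(120, 3)  # prescriptions filled (kept private)
hospital_shares = share(150, 3)  # prescriptions written (kept private)

# ...each party locally adds the shares it holds...
summed = [(p + h) % PRIME for p, h in zip(pharmacy_shares, hospital_shares)]

# ...and only the aggregate is revealed: 120 + 150 = 270.
print(sum(summed) % PRIME)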
Approach 8: Homomorphic Encryption.
Summary: Homomorphic encryption is an approach in which mathematical operations can be done on top of encrypted data. This means that an algorithm can be performed against data, without actually knowing what the underlying data means. For example, if a hospital knows date of prescription, and a pharmacy knows date the prescription is filled, homomorphic encryption would allow a company to find the time elapsed in between those two dates without knowing the underlying dates.
Pros: Can drive valuable analysis without privacy loss. In theory, any analysis can be done in this form.
Cons: Similar to the cons of multi-party computing, homomorphic encryption is also computationally complex and still emerging as a field, meaning computers struggle to run these algorithms quickly and they are difficult to build (to give a sense, homomorphic encryption is estimated to be >1M times slower than running computations on raw data). Protects inputs and intermediate values in a computation, but outputs may still reveal identifying information (and thus approach can be augmented with differential privacy or another technique). [This is a cutting edge approach which means the cons may be significantly reduced in coming years]
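As a small taste of additively homomorphic encryption, here is a sketch using the python-paillier library (the dates are stand-ins encoded as day numbers, and this is an illustration rather than a production pattern):

from phe import paillier  # pip install phe (python-paillier)

public_key, private_key = paillier.generate_paillier_keypair()

# The hospital encrypts the prescription date and the pharmacy encrypts
# the fill date (both as day numbers); neither reveals its date.
prescribed = public_key.encrypt(18500)
filled = public_key.encrypt(18503)

# Arithmetic happens on the ciphertexts; the operands are never decrypted.
elapsed_encrypted = filled - prescribed

# Only the final answer is decrypted: 3 days elapsed.
print(private_key.decrypt(elapsed_encrypted))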
***
Of course, in addition to privacy-preserving technologies, patient consent can and should often be used to engage the patient in the decision to use data that is initially about them. While getting consent can be administratively challenging, researchers often find that patients are remarkably generous with their data if it can help other patients like them.
In practice, every organization should practice data minimization (i.e. redacting or walling off data that isn’t necessary for analysis (like social security numbers at an extreme!)), smart administrative controls, reasonable patient consent (depending on the use case), and some combination of other methodologies. While there are different methods and approaches to privacy-preserving data sharing, at the end of the day, the most important guiding principle is to do well by the patients who entrust institutions with their data.
As the amount of data continues to explode, as the societal value of data-driven analysis continues to grow, and as individuals become more concerned with privacy, expect lots of continued research in approaches to privacy-preserving data methods. | https://medium.com/datavant/an-overview-of-approaches-to-privacy-preserving-data-sharing-64fc5d4a9b48 | ['Travis May'] | 2019-08-07 06:10:49.264000+00:00 | ['Privacy', 'Health Technology', 'Health Data'] |
1,283 | What’s Holding Augmented Reality Back? | What’s Holding Augmented Reality Back?
AR has been eagerly awaited for decades, but so far, it has failed to truly deliver
Augmented reality demo featured in M. W. Krueger’s book ‘Artificial Reality II,’ 1991.
What you’re looking at above is the state of augmented reality nearly two decades ago. It’s also a clue as to why, today, Magic Leap is reportedly looking for more venture funding after having already raised more than $2.6 billion from Google and other Silicon Valley giants but has little to show for it beyond an expensive AR headset that’s rumored to have unimpressive sales. It’s also a cautionary case study for Apple, which is reportedly planning a launch of its own AR headset line in 2022.
This ’90s demo has a strikingly similar interaction model to Magic Leap’s user interface. But while visually compelling, this approach to interaction will always suffer from the problems associated with ambiguous input from hand gestures. It’s a key challenge that’s continually held AR back as a technology.
The author presenting at Innovem Fest 19 in Spain.
The unacknowledged complexity of gesture control
AR advocates often assume gesture control will be the next iteration in user interfaces since it seems so intuitive and natural to human expression. (And as discussed in my last Modus essay, it has the seeming inevitability of sci-fi.) But hand-gesture models and libraries are not uniform. The ambiguous input produced by humans forces the computer to process far more information than a controller with comparatively limited function, like a touchscreen. This makes the interaction prone to a wide range of errors. The user’s background could be too dark, they could be wearing gloves, or they could have hands smaller than those that the device was tested with. This interaction model also likely requires having to train someone to use gestures they’re not yet familiar with, and not everyone will make the gestures in the same way.
By contrast, physical buttons are incredibly practical. A computer can always interpret the push of a button as a one-to-one interaction. Button-based interfaces are usually colorful and in the right places for your hands. You can quickly pick up the muscle memory to use them regularly. With a button, it doesn’t matter if you’re wearing gloves or if your hands are a certain color or if you’re an adult or a child.
The promise of gesture control technology is that it will significantly improve over time, but in practice, its perceived accuracy basically remains the same. Like Zeno’s paradox of motion, the more our computing power and motion-sensor efficacy improves, the more our expectations for precise gesture recognition also grows. But existing computers can never have an understanding of the full spectrum of edge cases they might encounter in the real world. Even if they could, gesture recognition is cognitively expensive for machines and mostly unnecessary when a simple button would suffice.
HoloLens photo courtesy of Microsoft
Microsoft’s augmented reality HoloLens interface was released in 2016 to great anticipation, but users and developers quickly realized how difficult it was to actually interact with objects using hand gestures. The specified gesture for interacting with the augmented reality displayed by the headset was a clicking motion with their hands in front of them. But these gestures were not always recognized the first time by the headset due to visual noise such as light or the irregularity inherent in a specific gesture being performed by different people. Frequent HoloLens users even coined the term “smashing butterflies” to describe the act of performing the input multiple times in order to get the computer to understand it.
This problem is not unlike how home automation systems like Alexa often have trouble understanding commands the first time, especially with speakers with accents, mumbling, or background noise thrown into the mix. AR devices like Magic Leap and HoloLens struggle with detecting the intersection between hand movements and objects. Awash in the effluvia of reality, the headset cannot always discern that the user is, say, trying to pick up a block, and it forces them to grab it multiple times. (Perhaps as a response to this frustration, Magic Leap belatedly added physical controllers to its product roadmap and made them part of its Creator Edition.)
Most augmented reality headsets were launched to early adopter enthusiasts and content creators, but even these users quickly found using these devices on a daily basis to be difficult. They’re often heavy and hot, and they obscure your vision. And in the end, there are only so many butterflies that even the most passionate of us can smash.
The videogame-like Vive control for VR
This is one core reason why VR has been a relative success compared with AR: Most virtual reality headsets have a one-to-one, button-based user interface in the form of hand controllers. There is no real world to overlay information onto, and physics engines from videogames can be smoothly ported into the world of VR. This shouldn’t be a surprise: VR enables a one-to-one interaction with the virtual world, and we’ve had 30 years of video game development to perfect this interaction.
Here are some other hidden assumptions that hold augmented reality headsets back:
People assume all new tech will replace everything that came before it, but it rarely does
Some Apple executives reportedly think the company’s AR products will eventually supplant the iPhone. But this isn’t usually how technology adoption works. Just because you have the latest choice doesn’t mean it’s the best option for everything. Cash isn’t obsolete simply because credit cards exist. And real reality won’t be taken over entirely by virtual reality. We replace old models of the same fundamental technology with upgraded versions, but when it comes to devices that are categorically different, they become just another option alongside our existing devices and are only adopted if they greatly enhance our existing tech and lifestyle habits. For most of us, an augmented reality headset will be another thing to take care of, fight with, upgrade, and forget to charge.
Which takes me to a related point:
Any product, no matter how compelling, has to be evaluated in the full context that prior technology has already created
VR/AR enthusiasts will point out that smartphones have small displays and limited interaction options and are unable to offer anything like the data immersion of head-mounted displays (HMDs). While this is true, it assumes that this in itself makes HMDs superior to smartphones. What’s missing from that evaluation is not just the incredible convenience of smartphones, which can be used in just about any context, but their social nature. We enjoy our phones with each other, passing them back and forth to share funny videos and other interesting content. An HMD experience threatens to deprive a user of both convenience and that impromptu social interaction.
Social media platforms are already augmented reality
As I noted, the vision for augmented reality has remained more or less unchanged since the 1990s. Since then, the growth of smartphones and social media have unintentionally created an entirely different vision of AR, one where live photos and videos are shared across our networks through the device in our hands and then discussed in the posts’ comment threads — virtual chat rooms sitting on top of our experience of reality.
Facebook might provide merely a two-dimensional interface, but it is our imagination that adds in additional dimensions. For fully immersive interfaces, we must remember that a little technology goes a long way, and our brains can fill in the rest. Simply put: We already have quite a lot of augmented reality in our lives, just not the kind that was originally conceived.
Projected interface created by teamLab Borderless for MORI Building DIGITAL ART MUSEUM
In fact, successful augmented reality interfaces (broadly speaking) have been around for years. They’re democratic, affordable to use, and can provide higher-resolution interaction. And best of all, you don’t need to pay thousands of dollars for them, recharge them, or wear them on your head. They’re simple projected interfaces, and while they’re less sexy than headsets, they are already used in airports, shopping malls, and museums. They can be calibrated to show media content, art, or directions, and they can be used by anyone.
Consider all this in relationship to Apple’s much-rumored move toward launching an AR headset line in 2022. If the company asked my advice, I’d recommend they forbid any further internal discussion about AR replacing their beloved iPhone. If they’re smart, Apple will instead learn from past AR mistakes and start with a minimal device that has a very narrow but powerful set of features. Much the way the simple iPod preceded the iPhone, Apple should start small, very slowly getting people to adopt a whole new way of life. And keep it tightly integrated with the iPhone.
We need to remember that technology is cyclical. We see the same solutions proposed again and again, often with the same results. Instead, if we work within the limitations of technology we’ve already embraced, we have a much better chance of doing things well. | https://modus.medium.com/whats-holding-augmented-reality-back-fb80b21b2f40 | ['Amber Case'] | 2019-12-09 15:22:37.907000+00:00 | ['Virtual Reality', 'Technology', 'Augmented Reality', 'Ideas'] |
1,284 | Optimising your Cloud Infrastructure Spending | The Cloud Engagement Hub blog, here on Medium, talks a lot about different technologies and techniques used by organisations on their respective cloud journeys. Containerisation is clearly the strategic choice that many clients are making, though the reality is that a lot of workloads that you want to move to cloud are complex and many will be best suited to lift as virtual machines (VMs) for the time being. If this applies to you, be very aware: if you don’t change how you manage these VMs, it will cost you much more than it does now. The following article will talk about the patterns you should adopt, and the automation required to change the way you work. Without these “new ways of working”, cloud is never cheaper.
Let us start by revisiting our journey map in Figure 1. The modernisation approaches we are talking about in this article are the top two of those in blue; namely “Virtual > Virtual” and “Physical > Virtual”.
Figure 1 Cloud Application Modernisation Journey Map
Before we dive into the details, let's dispel some myths. Most client journeys start with the consumption of IaaS services despite the fact that more value is realised in the application space than down in the infrastructure. One of the simple reasons for this is that enterprises spend more on application development and maintenance (ADM) than infrastructure — on average, approximately $1.50 is spent on ADM for every $1 on infrastructure.
Secondly, although there is a significant move towards containers and containerisation — see here and here for some related content — many organisations are still consuming virtual machines as their primary cloud infrastructure platform. They expect to see cost savings with this "lift-n-shift" approach (rehosting existing applications to cloud-based VMs with minimal architectural changes), but all too frequently, savings fail to materialise. Even though containerisation has a higher implementation cost, the ROI it provides is higher (310%) than the lift-n-shift approach (250%). But that theoretical lift-n-shift ROI depends upon you changing the way you work. It's all about doing things differently.
That said, even though there is more value to be realised from the application space and through containerisation, there are still many things that can be done to optimise infrastructure spending. So, what are those patterns that can be used to better optimise your infrastructure spend?
Note: throughout this article, example prices are shown (in US dollars) to illustrate the points being made and to quantify potential savings. Though correct at the time of writing, prices change frequently and so these are not able to be guaranteed as correct in the future.
Pattern 1: “Turn it off!”
If your cloud is costing you more than your on-premises infrastructure to do the same thing, then chances are, you’re running it in the same way. Let’s face it, in many cases, cloud is only cheaper when you turn it off. If I create VMs in a cloud and leave them running for years as I might have done on-premises, then it’s almost always going to cost more. Which brings us neatly to the ability to suspend or hibernate a virtual machine. This allows you to temporarily stop using a VM while keeping it intact and without the need to re-provision or reconfigure it. It can be booted up again and you don’t pay for the time it’s not being used. Well, not quite true. It’s always important to understand what you stop paying for and what you will continue to be charged while it’s suspended (typically storage and maybe network addresses). There may be minimum usage charges or limitations on whether a VM can indeed be suspended at all and for how long. Remember also that suspended images are not patched or updated while suspended.
For systems supporting office hours working, use hourly billing and suspend these overnight. If the system has minimal configuration above the standard image or if the configuration activity is automated, it may prove more cost-effective to delete these resources and recreate them the following day.
The relative cost effects can be seen below. For this example, we use an hourly priced VM with SAN storage which costs $0.14/hr.
Table 1. Cost effects of suspension and deletion of VMs when not used
You can see from this that suspending the VM saves 53% compared to just leaving it running. Deleting it saves 76%. Because you are always paying for the storage, a small VM with a lot of storage may not save you as much when suspended as in this example.
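Once automated, the "turn it off" pattern amounts to a few lines. As an illustrative sketch only (this uses AWS's boto3 purely as an example; the region and instance IDs are placeholders, and other clouds have equivalent APIs):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # example region
OFFICE_HOURS_VMS = ["i-0123456789abcdef0"]  # placeholder instance IDs

def stop_for_the_night():
    # Stopped instances stop accruing compute charges, though
    # attached storage is still billed, as noted above.
    ec2.stop_instances(InstanceIds=OFFICE_HOURS_VMS)

def start_for_the_day():
    ec2.start_instances(InstanceIds=OFFICE_HOURS_VMS)

Wired to a scheduler (cron, or a cloud-native equivalent), this is how the savings above become routine rather than something someone has to remember.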
Pattern 2: “Right-size” your VMs
There is an oft-quoted statistic that the majority of x86 systems running within an enterprise are less than 5% utilised. Note that this is not the same as physical server utilisation. The use of virtualisation has tended to drive up average server utilisation as systems are consolidated onto the same physical hardware, but even here, average server utilisation is unlikely to be above 50%. The move to cloud provides an opportunity to “right-size” both your physical and virtual server fleet and reduce the amount of wasted resource and hence, cost.
If we consider a number of different VM sizes, we might see a range that looks like this:
Table 2. Cost effects of different VM sizes
If I have the equivalent of an X-Large server today and this is less than 50% utilised, as would be likely the case based on industry metrics, I should be able to right-size the physical server when moving to the cloud. I could similarly right-size an on-premises VM. This would allow me to reduce costs by about 50%. If I have servers which are less than 25% utilised, then these savings are of the order of 75%.
Doing this well requires you to have a good understanding of not only how the existing VMs or physical servers are being used, but also the constraining factor(s) for the workload. Most cloud providers offer a range of profiles that are optimised for particular workload needs (computation or memory) in addition to a more typical “balanced” option. The relative costs of different VM profiles for the same compute performance can be seen here:
Table 3. Cost effects of different VM profiles
While we can see the potential to save here, many organisations seemingly struggle with both the 't-shirt' sizing approach and standardisation in general because again it's different to what they did before. There are experts in many organisations whose role seems to be to exactly size a server to meet a specific application need: selecting processors, memory, storage, even specific network adapters to precisely fit. In those IT shops where it took weeks or months to procure a physical server, this might have had some merit. Also, companies where there was no sharing of resources or budgets between teams wanted to get this choice just right. But cloud just isn't like this. If I get my sizing wrong, I simply delete the 'incorrect' server and provision a 'better' one. If that isn't perfect, I can repeat the process until I get it optimally sized for my workload's needs. And it doesn't take weeks or months to do this, rather minutes! The 't-shirt' sizing approach frequently means that it's never going to be perfect, but it's good enough. Automation and ruthless standardisation are key to making this work well — but that is a bugbear for another article.
There are other levers that we can pull to further optimise our VM costs. The first of these is to use the right technology generation for your workload needs. Most organisations have some technical debt and see cloud as a way to reduce this by selecting the latest technologies available in the service catalog. The question you need to be asking though is, at what cost? Consider the following example of a bare metal server. In this example, a dual processor / 16 cores device with memory, storage etc remaining the same:
Table 4. Cost effects of different processor generations
Obviously, the latest generation processor on this list has a clock speed almost twice as fast as the earlier generation ones, but if you don’t need, or your applications cannot take advantage of this, then there is potential for a 40% cost saving by not taking this latest device.
Datacentre location can also have an impact too due to the different taxation regimes operating in different countries. Below are the monthly costs for similarly configured VMs in a number of different countries:
Table 5. Cost effects of cloud datacentre location
You may not have the ability to choose the datacentre in which your workloads can run for regulatory or policy reasons, but if you can, it is worth checking the price differential between different locations. As we can see from this example, up to a 20% price differential across different datacentre locations might not be uncommon.
Some IT departments have a good understanding of the utilisation of their existing physical and virtual servers, but many don’t. If you’re in the latter category, figuring out the correct size of a new cloud resource may be problematic. Remember the point above that this doesn’t have to be perfect and you can always remove and re-deploy. The key here is to monitor the new system for a while and then decide whether action is needed. It is all too easy to over-estimate the resource needs or err on the safe side and go larger than you might need. If you then forget about this and hence leave it oversized, you’ll be paying more in the long run. There are many ways to measure resource consumption. The key point here is to actually do it and then decide, based on real data, whether the VM should be resized or not. In most clouds, compute resources double between t-shirt sizes: 2 vCPUs — 4 vCPUs — 8 vCPUs etc. The same is true for memory: 4GB — 8GB — 16GB. If you find your VMs are less than 50% utilised, then downsizing is likely to be possible. And if you do get it wrong, you can always go back again with minimal effort.
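To ground the "measure, then decide" point, here is a sketch of pulling average CPU utilisation, again using boto3/CloudWatch purely as an example (the instance ID is a placeholder):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

def average_cpu(instance_id: str, days: int = 14) -> float:
    # Average CPU utilisation over the last `days` days.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=datetime.utcnow() - timedelta(days=days),
        EndTime=datetime.utcnow(),
        Period=3600,  # hourly datapoints
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

# Sustained utilisation under ~50%? Consider dropping a t-shirt size.
print(average_cpu("i-0123456789abcdef0"))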
Pattern 3: Make use of transient (spot priced) VMs
Transient or spot priced VMs make use of unused cloud capacity and are offered at a lower (spot) price compared to a normal VM. The available capacity will vary based on where you want to provision the resources, the time of day, day of the week etc. As their name suggests however, the downside is that access to these resources is not guaranteed, they typically have no SLAs associated with them and they can be reclaimed by the cloud at very short notice. There may also be limits on how many spot instances you have access to at any one time.
Because they can be interrupted at any time, or indeed may not be able to be provisioned due to a lack of capacity, spot instances are most often used for workloads that can be interrupted or where there are less stringent “time to result” requirements. These are typically, but not uniquely, non-production workloads such as development and testing, but batch processing and also high-performance computation are also good candidates.
If you have workloads that can take advantage of them, the savings can be considerable as shown below:
Table 6. Cost effects of spot pricing on different VM sizes
Some clouds offer options to control when spot priced VMs are reclaimed based on price thresholds. This gives you better budgetary control.
As with “right-sizing”, if you don’t have a particular VM size or locality requirement you may be able to find even cheaper options by shopping around. The key here is to understand which workloads you run may be suitable to run on spot priced VMs.
Consider a compute intensive workload that an organisation runs at the end of each week. It is highly variable in nature, based upon the business completed, and can require 30% additional capacity to complete for a busy week. Let us assume for the basis of this example that this may take 8 hours to run on 100 nodes (VMs). If we assume that this is managed by a workload scheduler which can resubmit units of work on ‘failed’ compute nodes, we can see the implications here:
Table 8. Cost effects of spot pricing on different weekly usage profiles.
If there is no deadline by which a result needs to be calculated, you could use spot instances exclusively and realise savings in the range of 70%. However, it has to be stressed that not every workload can take advantage of these spot prices, so choose carefully.
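To make that comparison concrete, here is a small sketch; all prices and the rework overhead are illustrative assumptions, not the actual figures from the table:

def weekly_batch_cost(nodes: int, hours: float, hourly_price: float,
                      rework_overhead: float = 0.0) -> float:
    # Cost of one weekly run; rework_overhead models the extra
    # node-hours spent resubmitting work from reclaimed spot instances.
    return nodes * hours * hourly_price * (1.0 + rework_overhead)

on_demand = weekly_batch_cost(nodes=100, hours=8, hourly_price=0.50)
spot = weekly_batch_cost(nodes=100, hours=8, hourly_price=0.15,
                         rework_overhead=0.10)  # 10% resubmitted work

print(f"on-demand: ${on_demand:.0f}, spot: ${spot:.0f}, "
      f"saving: {100 * (1 - spot / on_demand):.0f}%")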
Pattern 4: Reserve capacity for things that run 24x7
Spot price instance availability isn’t guaranteed. If you need to guarantee access to VM resources at some later date, enter the “reserved instance”. This guarantees you capacity and by making a longer-term commitment to the cloud platform, you will see lower unit prices. The longer the commitment, the greater the discount.
Reserved instances usually have restrictions or limitations, so you need to understand what you are getting into here. These restrictions might include not being available for every option in the service catalog and limits on the number of reservations you can make. Reservations typically don’t renew automatically so you need to keep on top of when the term ends so that you don’t suddenly find yourself without capacity when you need it or faced with a monthly bill larger than you were expecting.
Remember that you are reserving capacity and that if you don’t use that reservation, you are still paying for it as opposed to the Pay-As-You-Go model where you wouldn’t be. Often your cancellation options are limited, and you may have to pay some penalty (a cancellation fee) to get out of the commitment you previously made.
While the discounted price arising from the longer commitment, may look attractive you need to be sure that you can utilise them effectively. For predictable workloads running all the time, it makes sense to use reserved instances. Where there is variable usage or a lack of predictability, the case is much less clear and the savings you might make by using normal PAYG instances and turning them off when they are unused likely outweighs the reserved instance discounted price. You need to have a good view of your past and future utilisation to make the best choice.
To give some comparison of the “commitment effect” here let us consider what this might mean in practice:
Table 9. Cost effects of increased commitment to the cloud platform
Again, the savings realised through this approach can be considerable, but only where you are able to fully utilise the capacity you have reserved. Even if you are getting a lower unit price, you still run the risk of wasting money if you can’t.
A further ‘flavour’ of increased commitment available in some cloud platforms is the “Dedicated Host”. These are virtualised, single-tenant, dedicated servers into which your VMs can be placed. Mostly, these tend to be used to get isolation from other cloud users, whether that is for security reasons or to avoid ‘noisy neighbour’ problems. They do however also offer financial benefit because the monthly cost of the entire physical server is usually lower than purchasing the same capacity in standard VMs.
Table 10. Cost effects of dedicated servers vs standard VMs
As you can see, the percentage saving in this example is quite low. Some dedicated host offerings also allow you to control resource overcommitment allowing greater savings to be realised by putting more VMs onto the server. As with any “commitment” offering, you need to be sure that you can actually take full advantage of it or you will be wasting money if not.
The key role of automation
Automation and standardisation are fundamental to making these four patterns work well. When dealing with clients, concerns are often expressed about what happens if you forget to turn off VMs at the end of a day or over a weekend. Automation that takes this off your hands and operates according to policies is thus key.
Creation, configuration and deletion of VMs need to be fully automated. Whether you are using Terraform, Ansible, Spinnaker or any of the other offerings in this space doesn't matter; the key thing is that human intervention is not required for any of these tasks. These automation capabilities then need to be triggered either at specific times (e.g. the end of the working day) or when other conditions are true (e.g. there are no users using a particular service).
This then brings us neatly to talk about the use of autoscaling. Rather than pre-configuring a pool of servers to support a workload, consider using autoscaling functionality provided by the cloud to dynamically allocate and deallocate resources as your workload demands change over time. While adding instances as more load is placed on your applications and services may be helpful, more interesting here is the removal of instances as load subsides. This removes unnecessary resources and hence reduces your costs. When working well, this ensures that you don’t have more resources provisioned than you need to, to support business workload requirements. Plus, you don’t need to forget about turning things off as the cloud platform will do it for you. Typically, you set minimum and maximum numbers of VMs to control your costs within autoscaling environments fall within known (pre-determined) limits. For autoscaling to work, however, your application needs to be able to support it — typically stateless or with state held outside of the application itself.
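The scale-in half of autoscaling is where the savings come from. As a provider-neutral sketch, the policy logic amounts to something like this (the thresholds and limits are illustrative):

def desired_instances(current: int, avg_utilisation: float,
                      min_size: int = 2, max_size: int = 20) -> int:
    # Scale out when hot and, crucially for cost, scale back in
    # when load subsides, always staying within pre-agreed limits.
    if avg_utilisation > 0.75:
        target = current + 1  # add capacity under load
    elif avg_utilisation < 0.30:
        target = current - 1  # remove unneeded (and billed) capacity
    else:
        target = current
    return max(min_size, min(max_size, target))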
For the most part, we have been looking at individual VMs or systems, however many systems have complex interactions with others. Your automation capabilities need to handle these cases too. Treating these as an ‘environment’ rather than a ‘server’ is one way to do this. Again, there are many tools out there that can be used to create complete DevTest environments from scratch with all of the tooling that is needed.
The combination of these patterns, automated and deployed into the running environment allows us to drive potentially significant cost benefits, as the following example demonstrates.
Bringing it all together
Consider a global organisation running approximately 9300 applications across 50,000 servers in their own datacentres.
About 400 of these applications are out of scope for this exercise. These are either “desktop” applications running on a standalone device or applications supporting telecommunications. The remainder are what we might term “exotic” in that they run on about 7000 servers (Mainframe, various Unix flavours, Tandem etc). While there is surely potential for a Unix to Linux migration as part of any journey to cloud, we will ignore these applications for the time being though, as we can see from Figure 1 above, they all have a role to play here.
So, let us apply our patterns to this remaining population of about 8900 applications. Of these, 6250 are “production” applications. The other 2650 (approx. 30%) are used for development, test and other non-production use. These are potential targets for suspension or turning off when not in use, but this is not all. Of the 6250 production applications, 1550 (approx. 25%) are only used “9-to-5” so are also potential suspension / turn off targets. Thus, 4200 applications (47%) of the total population are hence potentially able to be turned off or suspended for the majority of any 24-hour period. If we assume that 50% can be suspended and the remainder turned off when not used, we can see from applying Pattern 1 that there is potential to save ~64% of the hosting costs for these applications.
Pattern 2 (right-sizing) is another area where significant benefits should be possible. Traditional x86 servers are frequently under-utilised so migrating workloads to smaller virtual servers gives the potential for reducing costs. If we don’t have actual utilisation data, we might choose to err on the side of caution and estimate 40% of the server population to be under-utilised. Half of this 40% could be reduced in size by 50%. Of the remaining 20%, half can be reduced by 25% and half by 75%.
For the Development & Test population (2650 applications), it may be possible to use spot-priced instances (Pattern 3) for some of these. Even if we assume that only 30% can be provided this way, there is potential to drive further savings here.
Pattern 4 is the use of longer-term commitments to drive down the cost of those machines which are required to be on and available all the time. To keep the sums easy, let’s assume that all of our “7x24 applications” should have a longer term (in this case 3 year) commitment which reduces costs by another 60% for this population.
The results of applying these different optimisations can be seen in the chart below:
Figure 2. Potential impacts of infrastructure optimisation approaches
In this article, we’ve looked at potential ways to optimise infrastructure spending within the cloud at the pure IaaS level. There are many other business benefits realisable through a journey to cloud such as technical debt reduction and the reduction of maintenance costs. We’ve also not considered the potential upside of decommissioning old applications and the infrastructure that they run on in this. Though, as stated at the beginning, there is more financial benefit to be realised by containerising applications rather than undertaking a lift-n-shift to the cloud, this is still the journey that most organisations are on. The next step for these companies is to start to realise the potential cost benefits that cloud promised, but maybe they have yet to realise. Part of this comes down to changing the way that you work, and by adopting the patterns described above, I would hope to have shown a few pointers to show you how to do this. | https://medium.com/cloud-engagement-hub/optimising-your-cloud-infrastructure-spending-9f54aa4a1c96 | ['John Easton'] | 2020-12-21 12:03:19.886000+00:00 | ['Cloud Infrastructure', 'Iaas', 'Cloud Migration', 'Cloud', 'Cloud Technology'] |
1,285 | Gear Monthly Updates: December 2021 | The end of the year has been a very remarkable period for Gear. We finally announced that we had raised $12 million in a private investment round led by Blockchange Ventures. In addition to Blockchange, other top venture capital funds that participated in this round include: Three Arrows Capital, Lemniscap, Distributed Global, LAO, Mechanism Capital, Bitscale, Spartan Group LLC, HashKey, DI Ventures, Elysium Venture Capital, Signum Capital, and P2P Economy led by Konstantin Lomashuk, along with several top executives of Web3 Foundation and Parity Technologies, including its founder Gavin Wood.
Other important milestones reached in December were mainly technical improvements to the Gear platform. The changes are as follows:
We added a program_id() function to gstd, which returns the program's identifier. It can be used where a program wants to store funds (such as fungible tokens) for itself.
New message processing logic with core-processor has been implemented.
This migrates current processing to the new logic of processing messages with a functional approach, and includes message journaling. This allows us to use multiple methods to execute core logic in different environments (collator, validator, and cumulus setups).
We have enabled a mechanism for the network maintainers (a.k.a. validators) to charge fees for the network’s resource usage. In particular, having a message stuck in the Wait List will cost the original sender a certain per-block fee. Furthermore, external users can participate in this game by keeping track of the WaitList state and suggesting those messages which have stayed there longest, thereby increasing the overall efficiency in terms of collected rent per message, in exchange for a portion of the total fee.
We added a submit_code call. This allows actor code to be committed to the chain so that it can be instantiated later from other actors.
In December we saw strong growth of the Gear community around the world. Following our series of educational events in both the US and Russia, we held another workshop at the Bauman Moscow State Technical University and two online workshops for the Chinese community. We outlined the benefits of our technology and demonstrated how to deploy smart contracts on the Gear platform.
According to tradition, before New Years is a good time to reflect on the achievements of the year, so we also would like to share a small summary of what we accomplished during 2021.
Since our GitHub became public in August, we have reduced block time and changed how the message queue is processed at the end of a block. We replaced the message queue handling procedure with something closer to what will be used in production mode. Messages are now handled immediately, in other words in the same block in which they are submitted to the message queue (if the block gas limit is sufficient). Block time was also reduced to one second. As a result, the latency of the network theoretically improves by a factor of 18x.
In September, we made changes to the process of obtaining Metadata. In October, we wrote our custom Asynchronous Mutex, which allows programs to exclusively lock specific data and ensure it is not mutated by other messages while locked. Along with Asynchronous Mutex, we wrote Asynchronous RwLock — a reader-writer lock that allows more fine-grained async data locks.
Tree structures have been evaluated for the (future) gas spending algorithm, which is a step towards a self-consistent gas economy where the gas associated with a message is always preserved. This was one of the milestones from our November report.
The growth of the community has played an essential role in 2021. In October, we launched our website with a new design and user-friendly interface. We held seven workshops and various MeetUps. Three workshops took place in Russia and one in the USA. The other three events were held online: the first was for students from the Computer Science and Engineering Society of the University of California San Diego. Another two were held for the Chinese community. So far, we have held five successful AMAs with our CEO and Founder, Nikolay Volf, hosted by PolkaWarriors, PolkaWorld, the famous Turkish influencer OrientusPrime, and Russian YouTube channels Cryptovo and ProBlockchain.
If you would like to learn more about Gear’s development in 2021, you can keep up with our monthly reports on Medium.
We would like to thank our fantastic audience who participated in all our events during the year, and we hope to see you again in 2022! More great things are coming, and we cannot wait to share them with you! We wish you all a Happy New Year!
Sincerely,
The Gear Team | https://medium.com/@gear_techs/gear-monthly-updates-december-2021-41097fe7748e | ['Gear Technologies'] | 2021-12-30 12:35:53.710000+00:00 | ['Smart Contracts', 'Polkadot', 'Polkadot Network', 'Blockchain', 'Blockchain Technology'] |
1,286 | The New Restaurant Experience: Robot Servers, Cooks And Hostesses | At the new Alibaba restaurant, Robot.he, in Shanghai the human hostesses have been relegated to smiling and pointing at touch screens. The hard work of interacting with guests and serving dinner is done by mobile phones and mobile robots that look oddly similar to Amazon’s warehouse rovers. Cao Haitao of Robot.he explains Alibaba’s rationale, “In Shanghai, a waiter costs up to 10,000 yuan ($1,460) per month. That’s hundreds of thousands in cost every year. And two shifts of people are needed.” Bots, Haitao exclaims, work every day without complaint. Already the machines are receiving glowing reviews from patrons who see huge savings in their bill, “Normally for two to three people, a meal costs about 300–400 yuan ($44-$58), but here, all this table of food is just over 100 yuan ($15),” explains diner Ma Shenpeng.
Alibaba’s competitor, JD.com is following suit by announcing it will open a thousand robot-staffed restaurants by 2020. According to Nikkei Asian Review the first location was suppose to open this August but as of today there have been no grand openings. However, the e-commerce giant has been seen scouting out locations throughout China for its 400 square meter fast food experience. Rumor has it that they already have a menu of more than 30 items. While automated dining has been around since the age of the Automat, which first open its doors in 1895 in Germany, the executions by Alibaba and JD.com are more than just novelties. Rising labor costs and rents worldwide are driving retail establishments toward a mechanical future, potentially leaving 66 million human workers in jeopardy.
Last May, Las Vegas' Culinary Workers Union voted to authorize a strike against casinos that are adopting new technologies. While a tentative deal was reached between the parties, the main sticking point was the effect of robots on its 60,000 members, which include: porters, bellmen, housekeepers, bartenders, cocktail/food servers, cooks and other kitchen staff. According to the Union's website, "As a consequence of this wave of automation, casino resort workers represented by the Culinary Union Local 226 have made it clear they want their voice heard when it comes to robots entering their field of work. Their newly arranged tentative deals and signed 5-year contracts have language dedicated to technology introduction and how it could potentially affect their existing responsibilities." The casinos were quick to halt the possibility of a strike as the economic fallout could cost over $10 million a day. This action comes on the heels of a very well publicized negotiation between the Teamsters Union and United Parcel Service over the use of drones, driverless trucks and other automation technologies. As organized labor grapples to understand its role relative to automation technologies, it is clear that the inevitability of adoption is accelerating past the point of no return.
To digest the impact of unmanned systems on the food service industry, I reached out to John Ha of Bear Robotics. Last year, I had the pleasure of meeting the restaurateur-turned-roboticist at an industry conference. Since then, Ha has been making headlines with his game-changing product: "Penny — The Runner Robot." Ha shared with me the evolution of his whimsically named mechanical waitress. He started his career at Google working long hours and in between coding, the former engineer would eat dinners at a small Korean restaurant minutes from Mountain View. Ha eventually bought the Kang Nam Tofu House with the dream of starting his own chain of casual Korean dining experiences. Instead Ha sighs, "I experienced the challenges faced by many operators in this industry. When my employees would show up late to work — or not show up at all — I had to step in and carry the load. And yes, that meant cooking, dishwashing, serving, bussing, hiring, etc." Then after two years of slaving away, "A light bulb finally flashed over me when I was knee-deep into this. I said to myself: there has to be a better way to run these establishments." After testing several concepts, Ha finally landed on the idea of a "food runner robot." In his experience, "This is a simple task that restaurants can use to take a burden off of servers." He continues, "From a front-of-house perspective, running food from one location to another does not add tremendous value. So, why not automate this?"
When I asked Ha if he thinks Penny could mean the death of the food service profession, he retorted, "This was less about replacing servers with a robot, and more about changing the nature of a server's daily work." Using the Tofu House as his laboratory, Ha found that his servers spent less time running marathons shuttling food and more time with customers. Proving his thesis, Ha boasted that his "servers generated an 18% increase in tips" after Penny took flight. He also thinks that another side benefit of robots will be longer employee retention by making their work more enjoyable. As Ha describes, "Imagine yourself walking or running five to seven miles a day juggling multiple trays of food and drinks in a narrow and crowded environment. If you can picture this, then also think of how physically taxing it is."
After the unfortunate news of Jibo and Kuri, it is refreshing to see entrepreneurs solve real problems for their respective industries. Unlike Pepper's boastful claims of being the be-all-end-all of robots in one adorable package, Penny is specifically attacking hospitality's biggest pain point — employee turnover. According to The National Restaurant Association, waitstaff churn is climbing to more than 72% of all employees, a third higher than the national average for other sectors. That same report estimates that the average establishment loses over $150,000 a year from server discontent, which for a restaurant like the Tofu House is a considerable percentage of their bottom line. If the statistics hold true, Ha's robot will pay for itself in a matter of months.
The market for automated food service is quickly growing with other startups entering the fray to ferry delicacies to hungry patrons, most notably Pudu and Savioke. Chinese manufacturer Suzhou Pangolin Robot Corp., Ltd. has been building fleets of robots since 2004, claiming leadership in the hospitality market for food preparation and service. As Suzhou controls the entire supply chain, it could potentially undersell competitors with its line of robots. Its restaurant product, Amy (shown above), is similar to other Asian iterations that not only bring convenience to the dinner table, but personality through a humanoid package. When I discussed this with Bear Robotics' Chief Operating Officer, Juan Higueros, he said he is not discouraged, as Penny has already successfully served over 25,000 satisfied customers. Higueros states that the real opportunity for his company is to partner with restaurant brands, such as Darden's Olive Garden and Yum's Pizza Hut, to deploy its "Bear Operating System for remote fleet management and on-site operation" on a global scale. In the meantime, the team of Ha and Higueros is busy perfecting Penny's AI-human interface to amplify the value proposition. As the son of a restaurateur whose father worked 25-hour days, cyborg-enabled waitstaff could be a welcome blessing for proprietors and their families.
Reserve today for the next RobotLab Forum on Retail Robotics when we discuss how automation is changing the restaurant industry and more with Pano Anthos of XRC Labs and Ken Pilot, formerly President of Gap on October 17th in New York City. | https://robotrabbi.medium.com/the-new-restaurant-experience-robot-servers-cooks-and-hostesses-ff25c261879e | ['Oliver Mitchell'] | 2018-09-17 12:11:48.727000+00:00 | ['Technology', 'Startup', 'Retail', 'Robotics', 'AI'] |
1,287 | MTI Welcomes Shorter Deadline For UK Electric Vehicle Switch | The MIRA Technology Institute (MTI) has welcomed the UK government’s decision to bring forward the ban on petrol and diesel engines to 2030 as part of what the Prime Minister has hailed a ‘green industrial revolution’. The MTI was created specially to fulfil the skills needs of the automotive sector and its core offer is based on emerging technologies including electric vehicles.
Greg Harris, Global Strategy Lead for Electrification at HORIBA MIRA, the MTI’s industry partner, said that the automotive sector was energised by the latest announcement and ready to take on the challenge of moving to an all-electric future over the next decade.
He said, “The automotive industry is already making significant progress towards this deadline with the rollout of electric passenger and commercial vehicles and there is a widespread willingness to embrace this challenging target.
“To meet the objective, manufacturers will need people with the right skills to understand the technology, develop the powertrains and adapt their lines for the new requirements. There will also be high demand through the supply chain for components including electric motors and lithium-ion batteries. Batteries are heavy to transport, and shipping duties on imports could prove challenging post-Brexit, but to offset this, we are likely to benefit from a rise in domestic production through giga factories like those of Envision AESC and Britishvolt.
“As electric vehicle adoption becomes more widespread, the infrastructure challenge will gather pace. Solutions that deliver faster battery charging at filling stations as well as home-charging options will need to be developed. With support from government investment, there is no doubt that this challenge can and will be met.”
Lisa Bingley, MTI Director of Operations said, “This exciting news provides a boost for the automotive industry during what has been a difficult year. The scale of change that is now required and the pace at which this needs to be achieved represents a challenge that can only be met by skilled individuals who are able to engage with, and progress, in the sector. The MTI is perfectly placed to address the skills needs of the industry and, working with our partners in further and higher education, along with our close links to industry, we stand ready to help the UK to reach this demanding goal.”
The news was further reinforced by transport minister Rachel Maclean MP who spoke at the launch of the Cenex-LCV low carbon vehicle event on the day of the announcement. Rachel Maclean said that the UK was already at the forefront of zero emissions and Connected and Automated Mobility (CAM) technology and that its ambition to ban petrol and diesel engines earlier than originally planned would give further impetus to industry growth and development.
Marion Plant, OBE FCGI, Chair of the MTI Operations Board, and Principal and Chief Executive North Warwickshire and South Leicestershire College said, “This announcement represents a turning point for the automotive industry and puts the MTI at the heart of skills development for the green industrial revolution.” | https://medium.com/@miratechnologyinstitute/mti-welcomes-shorter-deadline-for-uk-electric-vehicle-switch-485212601bd5 | ['Mira Technology Institute'] | 2020-11-25 13:35:44.210000+00:00 | ['Technology News', 'Electric Vehicles', 'Automotive', 'Electrification', 'Engineering'] |
1,288 | Development Update on Verge #5 | As you know we are volunteers, but that doesn’t mean that we aren’t actively working on improvements. I’m sure some of you are interested in what we’ve done since our last update! Let me walk you through it all🤷♀️
To begin, I want to thank the #VergeFam on the positive feedback we have been receiving. Your trust in us is precious and we thank you for it! We are really happy to get your feedback and constructive criticism! As such we are always trying to improve our communication and provide better transparency! 💪
Again if you have something to tell us, we are available on Telegram, Discord, Twitter, Facebook. Our email is contact@vergecurrency.com
Welcoming a New Developer
We are proud to announce a new developer to the Verge Currency core team! If you haven’t met him on twitter or telegram already, please welcome Manuel Cabras originating from Italy, living in Switzerland. Manuel is our new .NET and Java software developer/engineer. His addition to the team has already impacted us positively. We hope you will like him too! If you have twitter, go give him a follow!
Insight API Clients
In our previous development update we mentioned the possible creation of separate packages for the Insight API, to let developers choose their own languages. A few days after mentioning this possibility, we jumped on this challenge like a pack of hungry lions and delivered!
TypeScript Insight API Client: https://github.com/vergecurrency/typescript-insight-client
Marvin delivered the package, tested and ready for beta use, just two days after the article. The documentation is still on the roadmap of this project. We've used the tor-request package to make all requests to the Insight API private.
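For readers curious what talking to an Insight API looks like, here is a tiny sketch in Python (our actual clients are TypeScript, Swift and Java). The host below is a placeholder, and the /addr/<address> path simply follows the common Insight REST layout, so treat both as assumptions rather than documented endpoints:

```python
import requests

# Illustrative only: the host is a placeholder and the path follows the
# typical Insight API layout; check the real deployment's documentation.
BASE_URL = "https://insight.example.com/api"

def get_address_info(address: str) -> dict:
    # The TypeScript client routes calls like this through tor-request
    # for privacy; a plain HTTPS GET is shown here for simplicity.
    response = requests.get(f"{BASE_URL}/addr/{address}", timeout=30)
    response.raise_for_status()
    return response.json()  # typically balance, totalReceived, transactions, ...

info = get_address_info("D5dLnDW9...")  # truncated placeholder address
print(info.get("balance"))
```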
Swift Insight API Client:
https://github.com/vergecurrency/SwiftInsightClient
I delivered the package a day after Marvin, tested and ready for beta use. The documentation is still on the roadmap of this project. This package doesn't rely on a TOR request library because of the way URLSession and Tor.framework work together, so the 'user' can decide how to handle this themselves. All the client needs is an instance of URLSession.
Java Insight API Client:
https://github.com/vergecurrency/JavaInsightClient
Luckily we had Manuel joining the development team and providing the code for the Java version of the clients! The package isn’t fully tested and doesn’t contain a beta release yet. Of course, it is still being worked on.
Wallet Roadmaps
There are multiple wallets that are “in progress” at the moment:
iOS wallet
Electron (not Electrum!)
QT (with the new codebase)
We don’t have any set release dates for these wallets, but we will continue to provide you with progress through these Development Update articles.
New Desktop Wallet
https://github.com/vergecurrency/vWallet
To date, the desktop wallet is our most complete product, as Marvin has been working on it since January 2018. We are waiting for the new codebase since it was built on the current one. Since we developed the Insight API clients, we have successfully integrated them as well. There are a few minor issues that also need to be taken care of, but it's almost done! https://github.com/vergecurrency/vWallet/issues
iOS Wallet
https://github.com/vergecurrency/vIOS
Our most anticipated wallet, for iOS, is being developed smoothly. We are currently focusing on ensuring the design works with various visuals and turning these into code. Once the views are created, we then move to work on the back-end of the app. We are using our Swift Insight API client to work with the blockchain to produce a wallet on your iPhone or iPad.
New Android Wallet
This wallet upgrade is on our roadmap as a project for 2019. Seeing as we currently have a wallet, it is not a priority 🤗
Ledger Progress
Our team has developed a prototype, proving we can integrate Verge into a Ledger Nano S. We are currently considering different possibilities for more security features. We also are relying on Ledger HQ to integrate our code into their product.
Currently done:
Generate an address
Custom menus etc
Test integration into vWallet
To do:
Send and Receive transactions
Fix images (logos etc)
Code-base revision:
Suddenly a wild CryptoRekt appears…
Hello everyone, CR here — We've taken note that our community wants more information about what exactly this "code-base revision" means and how it is going to make Verge better. We have been fairly silent on this for a while now, but I would like to give you a very brief idea of what to expect when we've fully tested and implemented these changes.
Firstly, the code base revision effectively means we are rolling up from our current code, Bitcoin Core v13.0.0 (released in August of 2016), and moving all the way up to the most current version available to date.
What does that mean for Verge?
It is difficult to completely summarize the benefits of moving our core code to the latest revision of Bitcoin Core; however, some of the most notable changes that the end user will benefit from are as follows:
Substantial blockchain performance improvements
Greatly improved chain validation via improved PoW implementation
Revised and improved RPC commands (see the sketch after this list)
Greatly improved chain security
Greatly improved CPU & Memory performance
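To give a concrete picture of what those RPC commands look like in practice: like Bitcoin Core, the daemon exposes a JSON-RPC interface you can call over HTTP. Here is a minimal Python sketch; getblockchaininfo is a standard Bitcoin Core method that derived codebases typically inherit, while the port and credentials are placeholders for your own node rather than documented Verge defaults.

```python
import requests

# Minimal JSON-RPC sketch. `getblockchaininfo` is a standard Bitcoin
# Core method that derived codebases typically inherit; the URL and
# credentials below are placeholders for your own node's settings.
RPC_URL = "http://127.0.0.1:20102"   # placeholder RPC port
AUTH = ("rpcuser", "rpcpassword")    # placeholder credentials

payload = {
    "jsonrpc": "1.0",
    "id": "example",
    "method": "getblockchaininfo",
    "params": [],
}
response = requests.post(RPC_URL, json=payload, auth=AUTH, timeout=30)
response.raise_for_status()
print(response.json()["result"])  # chain, block height, difficulty, ...
```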
We understand that there is going to be a small subset of individuals who would like to read further into the changes coming. For those folks we will be posting change logs that have been modified to accurately reflect the changes we are implementing into our blockchain upon the public release of the new code base.
Those changes will be viewable in a few locations:
Our Wiki on GitHub — Which has already been modified to include the newest RPC commands coming.
Our primary repository’s documents file (doc) will be modified to also house the change logs.
What is going to happen in the future when there are new versions of Bitcoin Core?
It has been a fairly substantial task for our development team to bring our existing code up to par with the latest and greatest offerings provided by the Bitcoin Core development team. However, after this arduous task has been completed and released to the public, we will no longer have to do this in the future.
Moving forward, as changes come down the pipeline from the Bitcoin Core development team, we will be able to quickly and efficiently select from the list of available features implemented in the latest public release candidates and implement them into our own blockchain in a matter of days rather than months. We are very excited about this and you should be as well.
Commonly asked questions
Lastly, I wanted to take a moment and address a few of the most common questions I receive:
Will Verge be implementing SegWit?
We will not be implementing SegWit onto our blockchain nor do we have any plans of ever supporting this feature in the future.
Will Verge be utilizing or implementing the Lightning Network?
We do not have plans to support this feature on our blockchain at this time.
When RSK?
Our development team is planning on tackling the RSK implementation after we’ve successfully released the new code base to the public.
When are audits going to happen/be published?
Audits will be re-conducted after we release the code base revision. We felt the first round of audits was not suitable for release, given the fact that we had already scheduled to overhaul our core code base this year. Additional information about audits will be released in the future.
When Debit cards?
The debit card initiative is under full control of our partners at TokenPay. Verge Currency, the Verge core team, and its associates have no say or influence on the timeline or anything having to do with debit cards. If you would like to know more about the debit card program that is being offered by TokenPay, please contact them here.
-CR
CryptoRekt Vanishes into thin air
Django
It seems one or two people liked the closing statement of the last development update. Well let’s keep it a thing! Meet Django… he’s 4 years old, loves eating, sleeping and cuddling. If you don’t pat him enough he’ll be very sad. 😾 | https://medium.com/vergecurrency/development-update-on-verge-5-faf4dcffcfc | ['Swen Van Zanten'] | 2018-09-17 07:43:02.745000+00:00 | ['Development', 'Technology', 'Vergecurrency', 'Bitcoin', 'Cryptocurrency'] |
1,289 | Benefits of Customer Support Widgets | Advantages of having a support widget
Providing customer service is an important part of your customer support strategy, as is staying in touch with your customers through every step of their buyer journey with your brand. Support widgets offer a few benefits:
>> Enable multiple ways to provide customer support such as:
Website support
Customers are likely to first look up a website to gather contact details. Use a chat widget on your website to make it convenient for customers to reach out to you.
In-app support
In-app support is the most effective way to ensure that you are providing a better experience to customers. Nowadays customers want to raise issues or contact via the app. In-app support is beneficial for users who do not want to go through the hassle of going to the website to get support.
Messaging apps
It is likely your customer or potential customer prefers to use a social media channel like Facebook. You can use messaging apps like Facebook Messenger to offer support. This gives them the freedom to choose the support channel they prefer when getting in touch with you.
Using a combination of the above makes it easy for your customers to reach out to you across multiple customer touchpoints, enhancing their experience with your brand.
>> Resolve issues faster
Customers today no longer have the time or patience to experience a long wait time for a response, whether it is on-call or via email. In such instances, customers prefer the chat widget over other support channels. Since there’s little to no wait-time involved, issues can be resolved faster compared to other methods, making it an extremely appealing alternative to traditional forms of customer support.
>> Offer multilingual assistance
Being able to offer support is great. However, if your customer base is spread all over the world a support widget can help you offer customer service in a local language or a language of their preference. This can result in improved customer satisfaction, aside from quicker resolution of issues.
>> Make it easier and quicker for customers to connect
Traditionally, when customers wanted to reach out to a company, they would go the long way round by gathering details from a company’s website and then get in touch with them.
Offering support on chat, or via customer support widgets on the website helps reduce friction ensuring that they are able to get support easily without having to take too many steps.
>> Expand team capabilities and support multiple conversations
A support widget makes it possible to handle multiple conversations at once, allowing a team to address many individual concerns at the same time.
This is because the customer support team can engage with multiple customers at once. This increases your team’s efficiency and productivity. The result is maximized usage of available resources, and reduced costs for your business.
>> Offer proactive assistance
Offering proactive assistance involves forecasting customer issues ahead of time and keeping them informed while an issue is being processed or addressed.
For example, in the retail business — you can update the user about the delivery method you intend to use and its status, both during and after delivery.
This results in a more personalized customer experience, which can help create positive associations about the product and the company in the customer’s mind.
>> Provide an effective self-service option that augments FAQs & help documentation
Not every issue requires the assistance of the customer support team. For smaller issues, customers prefer to find answers on their own — often through FAQs or help documents and videos.
Today many widgets can be configured to offer automated responses and can surface relevant help documents based on a user’s query. This further extends their capabilities, helping customers find solutions to some issues on their own. | https://medium.com/@dltlabs/benefits-of-customer-support-widgets-b0dd1d7ee549 | ['Dlt Labs'] | 2020-12-15 16:42:17.498000+00:00 | ['Entrepreneurship', 'Business', 'Dltlabs', 'Technology', 'Customer Service'] |
1,290 | The New Startup Visa in Australia— a Guide for Beginners | James Cameron — Partner at Airtree Ventures
The other day a good mate asked me a question — “what’s the number 1 thing that Australia needs to learn from Silicon Valley?” A tonne of things came to mind — but one thing stood out more than anything else: we need to attract more great talent from around the world 🌎🚀
I can’t think of a single factor that has had a bigger impact on the success of Silicon Valley than skilled immigration.
The numbers are staggering:
More than half of the unicorns in the Valley were founded by immigrants to the US
If you look at the senior, non-founding roles at those same companies, the proportion of immigrants jumps to a whopping 71%
Some of our favourite portfolio companies at AirTree have been founded by immigrants to Australia like Manish from Dgraph, Mina from Different or Pieter from Secure Code Warrior.
I reckon if we can only learn one thing from Silicon Valley’s success — it’s that skilled immigration is critical to the success of a startup ecosystem 🇦🇺
The new ‘Startup Visa’
When the government decided to cut the 457 visa program, the startup ecosystem in Australia was rightly up in arms.
Fortunately, they’ve put in place a new Global Talent Scheme (GTS) pilot which is designed to cater for startups hiring needs — what we’re calling the ‘Startup visa’.
This scheme is aimed squarely at addressing Australia’s tech talent shortage.
The scheme is still only in pilot phase, and due to the reshuffle in the Department of Home Affairs last year, it has been a little slow to get off the ground.
However, it was recently announced that Q-CTRL was the first company to be certified as eligible to access the visa scheme under the Startup stream, and SafetyCulture the first under the Established Business stream.
Now that it’s up and running, this could be a game-changer for our ecosystem.
To figure out what the scheme’s all about and (more importantly) how you can get on it, we’ve teamed up with our friends at StartupAus and LegalVision to work through some of the most frequently asked questions.
Here’s our FAQ list with answers below:
1. What is it? 🤔
The ‘Startup visa’ is a brand new type of visa specifically designed for startup companies that operate in a technology-based or STEM-related field.
This visa is part of the Global Talent Scheme (GTS) under the Temporary Skill Shortage (TSS) visa (subclass 482) which has now formally replaced the old 457 visa.
The Startup visa is designed to:
Help you attract top quality global talent, who possess highly specialised skills in their field
Fill niche roles within your business that you couldn’t otherwise fill from within the Australian labour market or the standard TSS visa program
It’s important to note that you can already sponsor people through the TSS visa program if the occupations you seek are available on:
The Short-term Skilled Occupation List (up to 2 year visa); or
The Medium and Long-term Strategic Skills List (up to 4 year visa with the option to apply for a permanent residency pathway)
You can search for occupation via the “eligible skilled occupations list”
However, the most significant benefit of the Startup visa is that you can employ candidates for emerging or niche occupations that are not currently available or appropriately defined under a single occupation on the eligible skilled occupation lists (STSOL or MLTSSL).
This means you can now employ talent:
In emerging sectors such as Quantum Computing, Artificial Intelligence and Virtual Reality — this is really helpful since the technology industry is changing so rapidly, it’s difficult to fit new occupations (such as a quantum engineer) into the eligible skilled occupation lists that were created for more traditional visa programs ~20 years ago
To fill hybrid roles! In the more traditional visa programs you can only choose one occupation, and there are certain qualifications and requirements for that specific occupation that your ideal candidate may not meet exactly.
2. What kind of companies are eligible to apply for a Startup visa?
✅ You must be operating in a technology-based or STEM-related field (we’re hopeful this will cover most tech-focused startups in StartupAus!)
✅ You must be able to demonstrate that your recruitment policy gives first preference to Australian workers. The Labour Market Testing (“LMT”) under the Startup visa has more flexible requirements than the TSS visa (refer to: LMT website for comparisons). However you should be able to:
Provide evidence that you’ve advertised for the role in Australia (e.g. Seek, LinkedIn)
Keep a record of your job postings. And if no local applicants were successful, keep notes on why they were unsuccessful (e.g. not qualified enough etc.)
✅ Your company must be a good corporate citizen with no breaches of workplace law, or immigration law or any other applicable Australian law — though we hope you are doing all of this anyway!
✅ You may need to demonstrate that your employees are paid in accordance with current market salary rates for the occupation, noting:
The total amount can include equity
The minimum salary for the GTS is $80,000, of which at least $53,900 must be cash. The rest of the $80,000 minimum can be equity to the equivalent value (refer to: “Salary and Employment Condition Requirements for Sponsored Skilled Visas”)
Each occupation will need to meet its own industry specific salary benchmark by referring to sources such as joboutlook.gov.au, payscale or industry recruitment websites. If there is no clear benchmark to follow, you may need to demonstrate that you have taken appropriate measures to identify an appropriate salary for the nominated occupation
✅ You must be certified as eligible for the scheme by a ‘start-up authority’. This means you will need to meet at least one of the following requirements:
Received an investment of at least A$50,000 from an investment fund registered as an Early Stage Venture Capital Limited Partnership (“ESVCLP requirement”); or
Received an Accelerating Commercialization Grant at any time (“ACG requirement”)
This requirement is only for the early stages of the pilot program, and as the scheme matures and develops, we would expect this to transition to a points-based test.
The Department of Home Affairs (the “Department”) has also set up an independent GTS Startup Advisory Panel (the “Panel”) to help them decide if you are eligible for a Startup Visa.
Assuming you meet the ESVCLP or ACG requirement above, then the Department will be in touch to seek further evidence for the Panel to make their assessment.
✅ Finally, you must also be able to demonstrate that:
You cannot fill the positions through the traditional TSS visa program (refer to: Question 6 and 7 below); and
Accessing the Startup visa will allow the creation of job opportunities and the transfer of skills to Australians
Once you become certified as an eligible company, you can access the Startup visa scheme and nominate up to 5 positions per year!
However, it’s important to note:
You must still lodge nomination applications for each overseas candidate — the Department will respond within 5–11 business days; and
Each candidate will still need to apply for a TSS visa online — the Department will respond within 5–11 business days (the process will be expedited given the Startup visa agreement is already in place)
3. What are the requirements for the candidates? 🤓
Candidates must:
Meet health, character and security requirements
Have no familial relationship with directors/shareholders of the company
Have qualifications that relate to the role they are applying for
Have at least 3 years’ work experience that is directly relevant to the position, and have the capacity to pass on skills/develop Australians
4. Key terms of the Startup visa
👉 The visa will last for up to 4 years, and if you decide you’d like to make the candidate a permanent employee, they may have access to a permanent residence pathway (“PR”) after 3 years.
👉 There are no age restrictions
👉 If the position ceases while the visa holder is on their temporary visa, they will have 60 days to find a new sponsor, apply for a new visa, or depart Australia.
5. How do I get one? 🙋
The pilot program will run until June 2019.
To get started:
First assess whether you can meet the ESVCLP or ACG requirements (refer to: Question 2 above and the Department’s website)
Then refer to the “Step by step process” on the Department’s website
Once an Expression of Interest has been submitted, the Department will request further info to assist the Panel in their assessment
The Department will continue to refine the process during the course of the pilot to ensure the Startup visa scheme is having the desired impact.
6. What is a Temporary Skill Shortage Visa (TSS)?
The TSS visa has two main streams:
The Short-term stream is for employers to source temporary overseas skilled workers in occupations that are needed to fill short-term skill shortages (occupations listed on the STSOL) — under this stream the visa is valid for a maximum of 2 years (or 4 years if an international trade obligation applies)
The Medium-term stream is for employers to source highly skilled overseas workers in occupations that are needed to fill critical skills shortages (occupations on the MLTSSL) — under this stream, the visa is valid for up to 4 years
The TSS visa is a temporary visa and does not provide a right to permanent residence:
If the candidate’s occupation is on the STSOL there is no option to apply for permanent residence
If the candidate’s occupation is on the MLTSSL, they will have the option to apply for permanent residence after 3 years, provided their occupation remains in need in Australia
Candidates must:
✅ Have at least 2 years’ full time work experience directly relevant to the position and undertaken within the last 5 years
✅ Provide evidence that they meet the English language requirements
There are no age requirements. However, you (as the employer) will need to be an approved business sponsor.
7. Which visa stream should I apply for — the traditional TSS visa or the Startup visa?
8. What other visas can tech-focused startups use to employ international talent? 🧐
The TSS visa is the most common way for employers to sponsor foreign workers temporarily. And, if the candidate meets all the requirements under the Medium-term stream (including having an occupation on the MLTSSL), you can nominate them for PR on subclass 186 through the direct entry stream.
If the candidate is:
✅ Highly skilled;
✅ Has an eligible occupation; and
✅ Meets the points test and age threshold…
… they can apply for a visa under the general skilled migration program independently (subclass 189), or by being nominated by a state or territory government (subclass 190 or 489). Employers do not need to sponsor candidates for these visas. | https://medium.com/airtree-venture/the-new-startup-visa-in-australia-a-guide-for-beginners-b32dbf5e88f2 | [] | 2021-03-22 02:33:41.814000+00:00 | ['Startup', 'Immigration', 'Technology', 'Tech', 'Resources'] |
1,291 | AI, Sensor-Based Analytics, and a Generational Shift: Three Trends to be Aware of in 2017 | AI, Sensor-Based Analytics, and a Generational Shift: Three Trends to be Aware of in 2017
by Daniel Kimmel
Opex’s Mike Watson recently appeared on the Supply Chain Television Channel to discuss three defining trends that he has watched develop in the world of Supply Chain Analytics so far in 2017. Dan Gilmore, editor of Supply Chain Digest, facilitated the interview.
To give you an idea of what you will learn when you view the video:
He discusses what business leaders need to be aware of now that “Artificial Intelligence” has developed into a general umbrella term for “anything that uses data,” and explains the difference between the general buzzword and what AI means among the technical community, as well as a few of AI’s more sophisticated applications. (For example, quality control, which he previously wrote about on our blog here using one of his favorite examples, the Lay’s potato chip).
He notes the growing use of Sensor-Based Analytics to track inventory across the global supply chain, as well as addresses the complicated question concerning the massive amounts of data provided by sensors and determining what data is worth saving and what is not worth saving.
And finally, he discusses the implications of the “generational gap” that is growing between younger data scientists, who increasingly prefer to use open-source tools such as Python and R, and supply chain planners who are accustomed to more traditional modes of operation working with Excel or other off-the-shelf packages.
View the interview as a stand-alone video here.
Also, check out Mike’s follow-up article in the SC Digest, “Supply Chain by Design: Three Things That Supply Chain Managers Should Know about Artificial Intelligence.”
If you liked this blog post, check out more of our work, follow us on social media or join us for our free monthly Academy webinars. | https://medium.com/opex-analytics/ai-sensor-based-analytics-and-a-generational-shift-three-trends-to-be-aware-of-in-2017-1a1ab9cd70c5 | ['Opex Analytics'] | 2019-01-16 21:22:29.568000+00:00 | ['Technology', 'Artificial Intelligence', 'Supply Chain', 'Logistics', 'Analytics'] |
1,292 | Development of Computer Network Technology | Photo by inlytics on Unsplash
In this modern era, we are all familiar with networks. This technology has simplified everything and maximized what we can accomplish with our devices. A network allows devices to communicate with each other by exchanging data to achieve their respective goals. Developments in computer network technology such as the Internet and the World Wide Web (WWW) are now widely used by people around the world who need Internet-based applications, such as e-mail and Web access. As a result, more and more e-commerce and fintech businesses are developing. With the development of computer network technology, data and information can be accessed quickly, easily and accurately.
To this day, computer networking continues to develop rapidly in all fields. In the 1950s, as computers evolved toward the creation of supercomputers, a single computer had to serve multiple terminals. This time-based process distribution concept is known as TSS (Time Sharing System), and it was through time sharing that computer networks were first formed at the application layer. | https://medium.com/@170212068/development-of-computer-network-technology-a615bf4aa0bd | [] | 2020-10-13 16:11:15.185000+00:00 | ['Development', 'Technology', 'Computers', 'Computer Science', 'Network'] |
1,293 | Must-have winter smart gadgets | Colder weather is on its way. So we invite you to prepare for it now with some of these must-have smart gadgets for winter. These devices and appliances will keep you warm and comfy all winter long. So much so that you might not want to emerge from your home come spring. Keep reading to learn more about these great gadgets.
Winter is almost here, and it’s time to prep your home with the best winter gadgets. You can start with a thermostat, follow it with a mind-blowing fireplace, and then try out some smart personal heaters. Either way, the winter of 2020–2021 will be smarter when you get these gadgets for your home.
Related: Premium coffee gadgets you need to see-2020 gift guide
Over the next days and weeks, we’ll also come up with many more winter gadgets that can help you get through the cold with fun and ease. From wearable heated apparel to smart temperature-controlled cups, this year’s winter necessities are truly top-notch.
Smart Gadgets for Winter
Nest Thermostat 2020 Smart Temperature Control
The first item on our roundup of must-have winter smart gadgets is the Nest Thermostat 2020 Smart Temperature Control. The latest Nest thermostat control comes at a more affordable price and is easier to operate. It also works with Google Assistant or Amazon Alexa, and it should help you save between 10% and 12% on your winter heating bills.
GOAT STORY GINA Smart Coffee Instrument
The GOAT STORY GINA Smart Coffee Instrument brews your coffee in three ways: pour over, drip, and French Press. It has a precision valve, which makes sure you get just the right water flow, and the GINA app can give you even more guidance. It even records your brewing history.
tadoº Smart Radiator Thermostat V3+ Home Heating System
The tadoº Smart Radiator Thermostat V3+ Home Heating System is another of our favorite winter smart gadgets. This gadget connects to our phone so that you can change your heating right from the app. You can set schedules for heating or cooling and receive tips for improving your home’s climate. Best of all, you can adjust temperature settings even while you’re on the go.
Eteria Filterless Personal Air Purifier
The Eteria Filterless Personal Air Purifier keeps the air in your home cleaner than ever. That’s because this gadget maps your home to purify the air throughout your entire living space. It comes with one main unit and as many monitoring modules as you want. Best of all, this quiet gadget removes VOCs, smoke, and biocontaminants.
First Alert Onelink Smoke and CO Detector
The First Alert Onelink Smoke and CO Detector is a practical two-in-one gadget that eliminates the need to buy both a smoke detector and a carbon monoxide detector. This smart gadget also sends you mobile and voice alerts so you’ll receive emergency notifications immediately.
Honeywell UberHeat Ceramic Personal Heater
The Honeywell UberHeat Ceramic Personal Heater is another great item on our roundup of must-have winter smart gadgets. That’s because it features a lovely modern design and gives you powerful heat wherever you need it. It provides two constant heat settings to warm your personal space effectively and saves money since it only heats the room you’re in.
Ember Mug 2 Temperature-Controlled Cup
The Ember Mug 2 Temperature-Controlled Cup keeps your hot drinks hot all winter long. This smart coffee cup allows you to set your preferred drink temperature through the connected app. That way, you can enjoy your hot coffee or cocoa for much longer. And if you leave this mug on its coaster, your drink will stay warm all day. Finally, it’s available in both 10- and 14-ounce sizes.
Vornado Glide Vortex Whole Room Heater
The Vornado Glide Vortex Whole Room Heater creates a warm room even when it’s snowing outside. This gorgeous wintertime gadget features natural wooden feet and a steel base, so it easily blends into your decor. It also offers a 20-degree tilt as well as powerful vortex circulation. Most importantly, this room heater comes with advanced safety features like a tip-over switch. It’s also safe to touch at all times.
Nature Remo 3 Smart Remote Control
The Nature Remo 3 Smart Remote Control is another of our favorite winter smart gadgets. This device works with an app that lets you manage your home appliances from anywhere. It has sensors for humidity, motion, temperature, and light so you always know what your home’s climate is like. The Remo 3 also works with Amazon Echo, Google Home, and Apple HomePod.
Winter Wearable Accessories
Polar Seal Heated Zip Top Self-Warming Shirt
The Polar Seal Heated Zip Top Self-Warming Shirt is a great winter smart gadget for keeping warm in icy conditions outdoors. This incredible shirt has two heating zones with three levels of heat. And it’s easy to operate with buttons located on the sleeves. Ski, snowboard, and explore in total warmth.
Beardo Original Detachable Beard Hat
The Beardo Original Detachable Beard Hat keeps your face warm while you ski, hike, and walk outside in winter weather. Featuring 100% acrylic yarn, this winter accessory is soft and comfortable against your skin.
Mujjo 3M Thinsulate Touchscreen Gloves
The Mujjo 3M Thinsulate Touchscreen Gloves are just what you need this winter. These gloves have a great fit and let you use your touchscreen easily with their highly conductive treatment. Naturally wind-resistant, these gloves will keep your hands warm in the coldest weather.
Are you ready for your coziest winter yet with the help of some of these must-have winter smart gadgets? I know I sure am. And I think that a ceramic personal heater is just what I need for my home office. Let us know which of these winter smart gadgets you prefer in the comments.
Want more tech news, reviews, and guides from Gadget Flow? Follow us on Google News, Feedly, and Flipboard. If you use Flipboard, you should definitely check out our Curated Stories. We publish new stories every day, so make sure to follow us to stay updated!
The Gadget Flow Daily Digest highlights and explores the latest in tech trends to keep you informed. Want it straight to your inbox? Subscribe ➜ | https://medium.com/the-gadget-flow/must-have-winter-smart-gadgets-db5e471fb033 | ['Gadget Flow'] | 2020-10-26 19:40:42.842000+00:00 | ['Winter', 'Internet of Things', 'Gadgets', 'Smart Home', 'Technology'] |
1,294 | Why Do Women Have the Upper Hand on Tinder? | Why Do Women Have the Upper Hand on Tinder?
Photo: MartinPrescott/iStock/Getty Images Plus
Co-authored with Kristian Elset Bø
Over the last decade, Tinder has redefined the online dating industry. The app has proven especially popular among young people, with three-quarters of those ages 18 to 24 reporting using the app at one point. Bumble is a distant second, with 31% of people using it.
Tinder differentiated itself through a simple swiping format since copied by numerous competitors. This format stood in stark contrast to early dating sites like eHarmony, which required long, time-consuming questionnaires that matched users based on personality compatibility. However, despite the easy and convenient allure of Tinder, getting a date through the app is notoriously exhausting.
I partnered up with Kristian Bø, who created the site Swipestats.io (which allows users to visualize their own data and compare against others) to analyze data. For this purpose, users downloaded data directly from Tinder and submitted it to us to give insight into the dynamics of the app’s dating market. Most shockingly, it shows two distinct worlds where the typical male user has a radically different experience from the typical female user.
While most women can easily find matches with men they’re interested in, the app presents a much more challenging environment for men. This difference is most evident in swiping patterns. While women swipe more than men overall, they are far more selective when doing so. Women swipe yes to just one in 20 people while the majority of men swipe yes more often than no.
Graph by Brayden Gerrard via Duro.Data and Swipestats.io/CC BY
Ultimately, most women only swipe yes on a handful of men per day while men are more freewheeling with their swipes. This creates a highly competitive environment where many men find it difficult to get matches consistently. Despite the high selectivity of female users, they actually match more often than men. The median female user receives about 2.75 matches per day while the median male user only receives about 1.1 matches.
At that rate, to expect a match, a typical woman would have to like just three men while a man would need to like over 50 women.
Graph by Brayden Gerrard via Duro.Data and Swipestats.io/CC BY
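A quick back-of-the-envelope check ties these numbers together; the figures below are the aggregates quoted in this article, not raw Swipestats records.

```python
# Figures quoted above: likes needed per match for the typical user.
likes_per_match_female = 3    # a typical woman likes ~3 men per match
likes_per_match_male = 50     # a typical man likes ~50 women per match

print(1 / likes_per_match_female)  # ~0.33: roughly a third of her likes match
print(1 / likes_per_match_male)    # 0.02: only ~2% of his likes match

# Daily likes implied by the median match counts reported above:
print(2.75 * likes_per_match_female)  # ~8 likes/day for the median woman
print(1.1 * likes_per_match_male)     # 55 likes/day for the median man
```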
Of course, a match doesn’t always lead to an interaction. After matching, men message more frequently as well. The median female receives about seven messages per day while sending only five.
Why is Tinder so imbalanced?
While it has been common knowledge that women have had an easier time on the app for some time, this data provides important proof to back up the anecdotes. So why do women seem to have the upper hand on Tinder? The simplest explanation is basic supply and demand. Evidence suggests that men use dating apps more prolifically than women — both in the number of users and in the frequency of use. One estimate found that over 70% of active American Tinder users were male, and the ratio of male-to-female users among our data was similar. Taken together, this suggests a deep imbalance in the user pool of Tinder.
The result is a large number of men fighting over a comparably small pool of women, which allows women to choose potential matches very carefully. Meanwhile, men with profiles that are less attractive are left high and dry.
The data also reveals large inequalities for both genders. Men and women in the top 10% (meaning, the 10% of users who received the most matches) typically see a minimum of five matches per day, and a select few can see dozens per day. Some users went on to accumulate thousands or even tens of thousands of matches over time. The inverse of this is that while some people see a large number of matches, others see very few. Men in the bottom 10% see just one match per week at most. Success varies greatly among women as well. However, even those in the 10th percentile (meaning those who receive matches less frequently than 90% of women) can usually match about once per day.
Graph by Brayden Gerrard via Duro.Data and Swipestats.io/CC BY
However, a match is a far cry from a real date. Research has previously found that it requires 57 matches for one meetup and more than five times that for either a relationship or sexual encounter to occur. Accordingly, it would take a typical man almost 6,000 swipes over nearly two months to score a date.
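The arithmetic behind that estimate is straightforward, using the figures already quoted:

```python
# Sanity check on the "almost 6,000 swipes over nearly two months" claim.
matches_per_date = 57        # matches needed per meetup, per the cited study
male_matches_per_day = 1.1   # median male match rate from the Swipestats data

print(matches_per_date / male_matches_per_day)  # ~52 days: nearly two months

swipes_estimate = 6000
print(swipes_estimate / matches_per_date)  # ~105 total swipes per match implied
```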
Despite this, many men and women still find success on Tinder. According to one survey, more than 20% of millennials surveyed reported meeting someone off of Tinder. Men were more likely to use the app as well as more likely to have met someone from Tinder. Excluding those who have never used Tinder, nearly 30% of people have met someone from Tinder.
Graph by Brayden Gerrard via Duro.Data and Swipestats.io/CC BY
Still, the evidence is clear that men and women face much different realities on the app. While most women can get matches easily even when they are highly selective, the majority of men must temper their expectations. The data also tells us that some men will likely struggle to get a date on Tinder even after thousands of swipes and many months trying. | https://thebolditalic.com/the-two-worlds-of-tinder-f1c34e800db4 | ['Brayden Gerrard'] | 2021-03-10 21:26:40.585000+00:00 | ['Pineapple2021', 'Data Science', 'Digital Life', 'Social Media', 'Technology'] |
1,295 | Peer Mountain breaks down how the partnership with Syscoin will benefit users | It seems as though these days every project is partnering for the sake of partnering. More often than not these partnerships are designed to draw attention to the project due to the names. It is a popularity contest more than it is a desire to make partnerships that will see the project flourish over time. Our partnership with Syscoin isn’t one of those partnerships.
Our plans to create a decentralized future will see us work closely with Syscoin to build this future together. The partnership between our companies will see us run our two technologies alongside one another in order to verify a user's trustworthiness and run an on chain service with great infrastructure.
Peer Mountain is a cross-chain protocol that facilitates the exchange of trusted information between peers with secure on chain records of these exchanges. Using Peer Mountain, all of the sharing that needs to take place to establish trust and meet legal requirements can be done digitally.
Syscoin is an infrastructure chain with cryptocurrency that offers near zero-cost financial transactions and provides businesses with the infrastructure to trade goods, assets, digital certificates and data securely. Syscoin has the ability to attract all business types thanks to its native set of features geared towards the financial sector. From eBay traders to High Street shops, Syscoin's decentralized network benefits everyone.
Recently the Blockchain Foundry, Syscoin platform experts, announced Blockmarket; a blockchain based marketplace where users can trade goods. Think of it as a blockchain based Amazon Marketplace or eBay, where you or any merchant can sell items to anyone who wants to buy them.
The Blockmarket shopping client, mobile or web based, will work with Peer Mountain’s Peerchain protocol, meaning that when you have to share information to make a transaction, it can be done within the app. Every document is signed and logged on chain, so you get proof that these documents were exchanged. If you need validation of the documents, you can get that too with Peer Mountain trust providers; you just need to request them. Let’s take a look at an example.
Imagine you live in Luxembourg and want to buy a classic American car. Since you live in Europe, you look for one that has already been imported and you find the perfect car in the UK.
Since this is a big deal, you need to make sure everything is in order and you need to have certain papers in place to register the car. You want to make sure that this all checks out before making your payment of three Bitcoin (BTC).
To do this, you use the Peer Mountain features in the Blockmarket client on your phone. You swipe your ID document to the seller and ask them to provide their identity, since this is required by law. You can browse the available Peer Mountain trust providers in Blockmarket and find one that you trust to hand you everything you need to make sure you’re not getting cheated by the seller.
Step one: you ask an identity trust provider, like KYC3, to validate the UK passport copy and selfie that the seller sent and make sure he's legit. You click "OK to pay KYC3 with 1 Peer Mountain Token (PMTN)", about 10 cents, for this validation.
Step two: the seller sends you the draft terms right in the Blockmarket client. You check the VIN number and ask Carfax in the US to validate the history of the car from purchase up to its export to the UK 3 years ago. The seller provides copies of the current registration and title forms, which you validate with the UK DVLA. Blockmarket, Carfax and the UK DVLA are all enabled using Peer Mountain enabled systems and all are verified using 1 PMTN token.
Step three: you negotiate the terms and then agree with the seller on a three BTC price and pickup from a garage near him. The contract is digitally signed on chain by both of you.
Step four: you find a local garage near the seller on Blockmarket and get a technical check on the car. They also ensure that all the paperwork is in order. They send you the report right in the Blockmarket client and you click ok to send them 100 Syscoin for their service.
Since everything looks good, you send the seller three BTC for the car and have the delivery company, which you also found and hired on Blockmarket, pick it up from the garage that did the check.
Whilst you could send the seller fiat currency like GBP, USD or EUR, it is important to remember that the underlying cryptocurrency is pegged to the traditional currency, making it more secure to use.
This exchange could be settled in a few ways using either a smart contract which ties the two users together or using a hashlock that unlocks the funds once the car has been delivered.
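For readers wondering what a hashlock actually does: the funds are locked against the hash of a secret, and only someone who reveals the matching preimage can claim them. This is the mechanism behind hash time-locked contracts and cross-chain atomic swaps. The Python sketch below illustrates only the verification idea and is not Peer Mountain's or Syscoin's actual implementation.

```python
import hashlib
import os

# Conceptual illustration of a hashlock, not a production implementation.

# The buyer picks a secret and locks the funds against its hash; the
# hash goes on chain, the secret stays private until delivery.
secret = os.urandom(32)
lock_hash = hashlib.sha256(secret).hexdigest()

def can_claim(funds_lock: str, revealed_preimage: bytes) -> bool:
    # Funds are released only if the revealed preimage hashes to the
    # value fixed when the funds were locked.
    return hashlib.sha256(revealed_preimage).hexdigest() == funds_lock

# On delivery of the car, the buyer reveals the secret to the seller,
# who presents it to claim the three BTC.
print(can_claim(lock_hash, secret))          # True: funds released
print(can_claim(lock_hash, os.urandom(32)))  # False: wrong preimage
```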
That’s just one example of how Syscoin and Peer Mountain will work together. Syscoin is the infrastructure for commerce and Peer Mountain is the protocol for trust. Together even the most complex deals can be negotiated and completed in a fully digital, on chain and private peer-to-peer mode.
The successful transaction becomes part of your identity and you can build up a profile, just like on eBay, that you can take anywhere, on or off chain, to prove your integrity in sales transactions! Welcome to the future, where trust and commerce can be frictionless, easy and among peers with no middlemen.
Does this use case trigger some question in your head? Come talk with us on Telegram. Click the banner below! | https://medium.com/peermountain/https-medium-com-peermountain-peer-mountain-breaks-down-how-the-partnership-with-syscoin-will-benefit-users-32e76a80ad34 | ['Peer Mountain'] | 2018-09-14 14:29:35.233000+00:00 | ['Peer To Peer', 'Syscoin', 'Blockchain Technology', 'Partnerships', 'Commerce'] |
1,296 | You’re Not Weird If You Think Trees Have Conversations | More Than What You See
Photo by Rishi Deep on Unsplash
“A forest is much more than what you see…underground there is this other world, a world of infinite biological pathways that connect trees and allow them to communicate…and allow the forest to behave as though it’s a single organism. It might remind you of a sort of intelligence.” — Suzanne Simard, forest ecologist, 2016 Ted Talk
Harris references a book called “What A Plant Knows” by plant geneticist Daniel Chamovitz. In his book, Chamovitz explains that plants can react to outside stimuli in ways that are startling and mostly go unnoticed. In particular, plants can sense touch and also show indications that they have memory.
Chamovitz explains that vines can change the rate and direction of their growth when they sense, by touch, something to grow around. Venus flytraps can also tell the difference between the touch of wind and rain and the pressure from the touch of an insect or animal.
He also explains that plants show some form of memory. For instance, Venus flytraps have trigger hairs that, when touched, cause the leaves to close around an insect. Two hairs need to be touched before the trap will close, so the plant ‘remembers’ that the first hair was touched until the second touch triggers the closing. Similarly, Chamovitz explains that wheat seedlings ‘remember’ they’ve gone through a winter before flowering.
Forest ecologist Suzanne Simard believes trees can communicate with each other. In her 2016 Ted Talk, she details experiments she’s done over 30 years that she says prove this.
In her first study to test her hypothesis, she planted Paper Birch and Douglas Fir trees together and used carbon dioxide labeled with the tracer isotopes carbon-14 and carbon-13. She bagged the trees, giving each species a different gas. After an hour she removed the bags and, analyzing the trees with a Geiger counter, found that the birch and fir were passing the carbon back and forth.
In this particular experiment, the birch sent carbon to the fir, which she had covered with a blanket to shield it from light. In other instances she found the fir sending carbon to the birch trees.
Mycorrhizal Networks — drawing by Nefronus via Wikipedia Creative Commons
Simard explains that this carbon is passed through mycorrhizal networks formed by fungi. The mushroom you see above ground is only part of the fungus; threads called mycelium extend from its base and interconnect with the roots of trees. The fungus and tree exchange nutrients and products of photosynthesis this way.
The network can also be used to trade carbon, and even information, between trees. For instance, trees can alert other trees to harmful insects. The network can be so dense that there can be hundreds of kilometers of mycelium under your feet.
In further experiments, Simard also found that ‘mother trees’ can recognize their ‘offspring’. By monitoring isotope exchanges, she found that a parent tree will give more carbon to trees that are its ‘children’. The parent tree will also reduce its root competition with the related tree. Defense signals were also sent from ‘parent’ to ‘child’.
It appears this mycorrhizal network functions as a plant internet of sorts. Some even call it the wood wide web. | https://medium.com/discourse/youre-not-weird-if-you-think-trees-have-conversations-35c888a2002f | ['Erik Brown'] | 2019-09-23 10:21:01.161000+00:00 | ['Nature', 'Life Lessons', 'Environment', 'Technology', 'Science'] |
1,297 | How Organization Using Kubernetes And Getting Benefited | Hey guys, hope you all are doing well. In today’s article we are going to see how various organizations are using Kubernetes and benefiting from it, and why organizations that are not yet using it are trying to shift towards it.
You all might be thinking: why Kubernetes?
As we all know, organizations are adopting Docker for production. It is important to use an orchestration platform to scale and manage your containers.
Imagine a situation where you have been using Docker for a little while and have deployed it on a few different servers. Your application starts getting massive traffic, and you need to scale up fast; how will you go from 3 servers to the 40 servers you may require? How will you decide which container should go where? How will you monitor all these containers and make sure they are restarted if they die? This is where Kubernetes comes in.
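As a rough sketch of what that looks like in practice (the app name and image below are hypothetical, not from any of the companies discussed here), a Kubernetes Deployment declares the desired replica count once, and scaling becomes a one-line change:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # today's footprint
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

When the traffic spike hits, `kubectl scale deployment/web --replicas=40` handles the jump from 3 to 40; the scheduler decides which node each container lands on, and Kubernetes restarts containers that die.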
Kubernetes Use Cases
1. Airtel
As we all know, Airtel collaborated with IBM and Red Hat to deploy its 5G network. The global software giant IBM announced several multi-cloud offerings that run on Red Hat OpenShift, a leading enterprise Kubernetes platform.
2. New York Times
Challenge
When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. “We started building more and more tools, and at some point, we realized that we were doing a disservice by treating Amazon as another data center,” says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would “design for the abstractions that cloud providers offer us.”
Solution
The team decided to use Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
Impact
Speed of delivery increased. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was “just a few seconds to a couple of minutes,” says Engineering Manager Brian Balser. Adds Li: “Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary.” Adopting Cloud Native Computing Foundation technologies allows for a more unified approach to deployment across the engineering staff, and portability for the company.
Founded in 1851 and known as the newspaper of record, The New York Times is a digital pioneer: its first website launched in 1996, before Google even existed. The company decided a few years ago to move out of its private data centers, including one located in the pricey real estate of Manhattan, and it recently took another step into the future by going cloud-native.
At first, the infrastructure team “managed the virtual machines in the Amazon cloud, and they deployed more critical applications in our data centers and the less critical ones on AWS as an experiment,” says Deep Kapadia, Executive Director, Engineering at The New York Times. “We started building more and more tools, and at some point, we realized that we were doing a disservice by treating Amazon as another data center.”
To get the most out of the cloud, Kapadia was tapped to lead a new Delivery Engineering Team that would “design for the abstractions that cloud providers offer us.” In mid-2016, they began looking at the Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
At the time, says team member Tony Li, a Site Reliability Engineer, “We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?”
In early 2017, the first production application — the nytimes.com mobile homepage — began running on Kubernetes, serving just 1% of the traffic. Today, almost 100% of the nytimes.com site’s end-user facing applications run on GCP, with the majority on Kubernetes.
“We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?”
The team found that the speed of delivery was immediately impacted. “Deploying Docker images versus spinning up VMs was quite a lot faster,” says Engineering Manager Brian Balser. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was “just a few seconds to a couple of minutes.”
The plan is to get as much as possible, not just the website, running on Kubernetes, and beyond that, moving toward serverless deployments. For instance, The New York Times crossword app was built on Google App Engine, which has been the main platform for the company’s experimentation with serverless. “The hardest part was getting the engineers over the hurdle of how little they had to do,” Chief Technology Officer Nick Rockwell recently told The CTO Advisor. “Our experience has been very, very good. We have invested a lot of work into deploying apps on container services, and I’m really excited about experimenting with deploying those on App Engine Flex and AWS Fargate and seeing how that feels because that’s a great migration path.”
There are some exceptions to the move to cloud-native, of course. “We have the print publishing business as well,” says Kapadia. “A lot of that is definitely not going down the cloud-native path because they’re using vendor software and even special machinery that prints the physical paper. But even those teams are looking at things like App Engine and Kubernetes if they can.”
Kapadia acknowledges that there was a steep learning curve for some engineers, but “I think once you get over the initial hump, things get a lot easier and actually a lot faster.”
“Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem,” says Kapadia. “Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward.”
At The New York Times, they did. As teams started sharing their own best practices, “We’re no longer the bottleneck for figuring out certain things,” Kapadia says. “Most of the infrastructure and systems were managed by a centralized function. We’ve sort of blown that up, partly because Google and Amazon have tools that allow us to do that. We provide teams with complete ownership of their Google Cloud Platform projects and give them a set of sensible defaults or standards. We let them know, ‘If this works for you as is, great! If not, come talk to us and we’ll figure out how to make it work for you.’”
As a result, “It’s really allowed teams to move at a much more rapid pace than they were able to in the past,” says Kapadia. Adds Li: “The use of GKE means each team can get their own compute cluster, reducing the number of individual instances they have to care about since developers can treat the cluster as a whole. Because the ticket-based workflow was removed from requesting resources and connections, developers can just call an API to get what they want. Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary.”
Another benefit to adopting Kubernetes: allowing for a more unified approach to deployment across the engineering staff. “Before, many teams were building their own tools for deployment,” says Balser. With Kubernetes — as well as the other CNCF projects The New York Times uses, including Fluentd to collect logs for all of its AWS servers, gRPC for its Publishing Pipeline, Prometheus, and Envoy — “we can benefit from the advances that each of these technologies make, instead of trying to catch up.”
Conclusion:
Kubernetes helped The New York Times cut the time needed to deploy new servers as requirements rise; it also helped reduce costs, which benefits its clients. The New York Times is just one example; several other companies, such as Spotify, Pearson, Squarespace, Pinterest, Bose, Adidas, IBM, and ING, have also benefited by shifting to Kubernetes.
Guys, here we come to an end. This is not much of a technical blog, just a small piece of information I wanted to share with you all. Hope you like it and find it informative.
1,298 | Ageism in Tech and Data Science | Observations on pay and career growth
I pull a lot from my personal observations, having worked in Silicon Valley as an engineer for 12 years, as a Wall Street technologist for another 12, and more recently in MBB consulting. I’ve managed and hired shy of 200 people in my career. That being said, a lot of this is opinion or anecdote, but it has some backing in research [5,6].
1: Demand is limited for highly paid engineers
Engineers get paid well, but you hit a ceiling relatively quickly (a few outliers exist, namely at FANG firms). It is also important to know that demand drops quickly as you move up the pay curve; it is natural to leverage talent in a pyramid structure on most teams.
Common salary/demand for engineers
2: Pay curves for leadership are higher
Engineer pay largely caps out at senior levels, overlapped by leadership pay from tech leads and managers up to CTOs. The lines start and end abruptly because roles simply don’t exist at some levels (anyone hiring junior CTOs?).
3: Senior leaders are overpaid (but are not technical at all!)
I’m sure we all joke about the clueless Sr Leader/CTO/CIO who hasn’t coded or done anything technical in decades. It is all too true in many industries. Their ability to get things done at scale is what distinguishes them and gets them the big $$$.
So what? What is the point?
If you care about keeping your job and making more money, then timing your career moves is key. A 28-year-old manager will face tremendous bias against them and their decisions, while a 48-year-old engineer will have people questioning their abilities. The trap that often catches people is being stuck in middle management: no longer technical, but also not developed as a strong leader.
Presented next is a playbook to help you move up the ranks. Unfortunately, even this playbook exhibits ageism and is intended for younger people!
1,299 | Document Your Existing API’s With (Open API) Specification in ASP.NET Core | API 📗 Swagger
In this article, we will learn how to document our already existing APIs with .NET and .NET Core. Many developers absolutely deplore writing documentation; it’s boring to put all these things together by writing them manually. To overcome this, we have Swagger, an open-source API documentation tool.
Packages required to configure Swagger,
Swashbuckle.AspNetCore (latest version) — this package adds Swagger, Swagger UI, and other libraries to make it easy for us to create API documentation.
Step 1
After installing the package in the respective project, here is a screenshot for your reference if you are new to Swagger in ASP.NET Core.
Fig-1
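If you prefer the command line to the NuGet package manager UI shown above, the equivalent is:

```bash
dotnet add package Swashbuckle.AspNetCore
```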
Step 2 - Setup
We need to enable our project to generate XML documentation. The documentation comes from triple-slash (///) comments throughout the code.
First, in the project properties, check the box labeled “Generate XML Documentation”
Right-click on the solution and click on Properties.
Fig-2
You will probably also want to suppress warning 1591, which will now give warnings about any method, class, or field that doesn’t have triple-slash comments.
Fig-3
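If you would rather edit the .csproj directly than click through the properties UI, the same two settings look like this (this is the standard MSBuild form; your project file will contain other properties as well):

```xml
<PropertyGroup>
  <!-- Emit the XML documentation file that Swagger will read -->
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <!-- Suppress warning 1591 (missing XML comment) where we choose not to document -->
  <NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>
```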
Step 3 - Configure Swagger
Startup.cs
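The original code embed is not shown in this copy, but a typical Startup.cs wiring for Swashbuckle looks roughly like the sketch below. The API title, version, and endpoint strings are placeholders; AddSwaggerGen, IncludeXmlComments, UseSwagger, and UseSwaggerUI are the real Swashbuckle APIs.

```csharp
using System;
using System.IO;
using System.Reflection;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "Demo API", Version = "v1" });

            // Feed the XML comments file generated in Step 2 to Swagger.
            var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
            c.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFile));
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // Serve the generated OpenAPI document and the Swagger UI page.
        app.UseSwagger();
        app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "Demo API v1"));

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```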
launchSettings.json
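Likewise for launchSettings.json, the only Swagger-relevant piece is pointing launchUrl at the Swagger page so the UI opens when you run the project (the profile name and ports below are placeholders):

```json
{
  "profiles": {
    "DemoApi": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:5001;http://localhost:5000"
    }
  }
}
```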
XML Comments
Now let’s add XML comments to our methods.
WeatherController.cs
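The controller gist is also missing from this copy, so here is a hedged reconstruction modeled on the default weather-forecast template, showing where the triple-slash comments go (the endpoint logic itself is made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("[controller]")]
public class WeatherController : ControllerBase
{
    private static readonly string[] Summaries =
        { "Freezing", "Chilly", "Mild", "Warm", "Hot" };

    /// <summary>
    /// Gets a weather forecast for the next few days.
    /// </summary>
    /// <remarks>
    /// The data is randomly generated; this endpoint exists only to show
    /// how triple-slash comments surface in Swagger UI.
    /// </remarks>
    /// <param name="days">How many days of forecast to return.</param>
    /// <returns>A list of daily forecast summaries.</returns>
    [HttpGet]
    public IEnumerable<string> Get(int days = 5)
    {
        var rng = new Random();
        return Enumerable.Range(1, days)
                         .Select(_ => Summaries[rng.Next(Summaries.Length)]);
    }
}
```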
Here are the XML nodes in use:
summary: A high-level summary of what our method/class/field is or does.
remarks: Additional detail about the method/class/field
param: A parameter to the method, and what it represents
returns: A description of what the method returns.
Output — View Swagger
Fig-4
Here is a clear description of each part of the Swagger UI:
Fig-5
Thank you for reading! Please let me know your questions, thoughts, or feedback in the comments section. I appreciate your feedback and encouragement.
Move your project to the next level by hiring skilled and seasoned ASP.NET Core developers.
Keep Learning and Exploring…. !!! | https://medium.com/devtechtoday/document-your-existing-apis-with-open-api-specification-in-asp-net-core-a4a2bf1910d8 | ['Jay Krishna Reddy'] | 2021-06-07 04:57:23.799000+00:00 | ['Startup', 'Tech', 'Software Development', 'Programming', 'Technology'] |