Help Using Devise with the built-in Omniauth Support
I tried to follow https://github.com/plataformatec/devise/wiki/OmniAuth:-Overview, but somehow when I look at the generated routes I only see the callback path, and not the authorization path (and indeed I get the error on the view with the user_omniauth_authorize_path link).
I assume it might be a versions issue of OmniAuth and Devise (since after 0.2.0.beta OmniAuth allows configurable settings, and the routes must be defined). However, when trying to use an older OmniAuth version I get the error "You are using an old OmniAuth version, please ensure 0.2.0.beta or later installed.".
I tried working with Devise's master, 1.2.rc and the omniauth branch, and with both the entire omniauth gem (after 0.2.0.beta) and with 'oa-oauth', but without success.
I also tried to define the route:
match '/users/auth/:action/', :to => 'users/omniauth_callbacks#action', :as => 'user_omniauth_authorize'
This helped with the route, but when pressing the link I did get the error that devise cannot find a mapping. Funny enough, changing the controller in the devise_for to be invalid (like adding '/' before the users/omniauth_callbacks) resulted in an error the first time ("Controller name should not start with a slash"), but a small reload actually sent me to facebook and back (but naturally the callback route was not defined).
I am new to Ruby, and not quite sure where I go from here. Any help will be appreciated.
did you ever figure this problem out?
Nevermind, a simple server restart fixed it for me.
My problem was due to different versions of omniauth and devise. What finally worked was using this configuration in my gemfile:
gem 'devise', :git => 'git://github.com/plataformatec/devise.git'
gem 'omniauth', '>=0.2.0.beta'
gem 'oa-oauth', :require => 'omniauth/oauth'
you can see more details about my implementation here.
looks like a problem in devise 1.4.8; omniauth 0.3.0 and devise 1.4.7 worked for me.
Place devise :omniauthable on the user model. This will solve the problem
@JudeArasu - it is already there (look at the implementation link) - still didn't work.
This method is defined by Devise, not through routes. Therefore it will not show up when you run rake routes. The method takes one of the OAuth providers that you have configured in config/initializers/devise.rb. For example if you define the following in devise.rb:
config.omniauth :facebook, FACEBOOK_APP_ID, FACEBOOK_APP_SECRET
Then you should build the authorize link like this:
<%= link_to "Facebook Sign in", user_omniauth_authorize_path(:facebook) %>
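For completeness, here is a sketch of the typical wiring that makes that route helper exist (the standard Devise omniauthable setup, not your exact app):

# app/models/user.rb
class User < ActiveRecord::Base
  devise :database_authenticatable, :registerable,
         :omniauthable   # adds user_omniauth_authorize_path(provider)
end

# config/routes.rb
devise_for :users,
           :controllers => { :omniauth_callbacks => "users/omniauth_callbacks" }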
As I mentioned - this link doesn't work (undefined method) in my view.
My bad, I didn't click through on the 0.2 beta issue. What provider are you using? I just upgraded to oa-oauth 0.2.0.beta5 with devise 1.2rc, and Facebook works fine for me.
devise 1.4.5 and omniauth 0.3.0.rc3 work fine. Be sure to restart the server -- it will never show up in rake routes.
|
STACK_EXCHANGE
|
Ap intermediate 2nd year model papers
Ap inter 1st year model questions papers 2018 / ap inter 2nd year sample papers 2018 / bieap 2nd year model papers 2018 study material / bieap inter 2nd year. Intermediate ii year model question papers the model papers of languages and other optional subjects will board of intermediate education andhra pradesh. Ap senior intermediate previous model question papers download for telugu english medium 2018 final year exams board of intermediate education andhra pradesh will. Ap intermediate 1st year 2018 model questions ap intermediate model papers for telugu and english medium with ts inter 1st 2nd year model papers 2018. Ultimately some sort of existence do papers model ap intermediate 2nd year they do not drive the increasing use of evidence, which is illustrated by various. Andhra pradesh intermediate 2nd year model questions papers 2018, prepare students can get questions papers in the form of medium wise ap plus two quarterly.
Ap inter 2nd year question paper general vocational andhra pradesh bieap intermediate model question papers 2017 ap inter 2nd year question paper 2017. Ap ssc/ intermediate model papers 2018 download bseap previous year question papers pdf & also download bieap andhra pradesh inter 1st or 2nd year model. Board of intermediate andhra pradesh uploads model question papers pdf students download online ap board 12th (inter 1st & 2nd year) exam previous papers. Www bieapgovin, download 2018, ap intermediate 1st and 2nd year mpc, bi,pc previous model questions papers and public exam time table from below details official.
Ap board ssc,inter first second year model papers,syllabus 2018 ap board inter 2nd year syllabus 2018 bieap 1 2 year model papers,syllabus 2018. Ap inter 2nd year model question papers 2017 by eenadu, sakshi here we are provide you the andhra pradesh board sr inter public exams model question papers 2017 for.
- Intermediate previous question papers: 1st year 2nd year ap - march 2017 ts - march 2017 ap - march 2017 ts - march 2017 ap - march 2016 bieap model papers.
- Model question papers: intermediate i year: intermediate ii year: disclaimer: board of intermediate education andhra pradesh designed and developed by national.
- Ap 2nd year inter model papers 2018 half yearly / quarterly / public final exam sample questions papers e/m t/m download.
Brief contents01 andhra pradesh ap intermediate 1st year model question papers 2018 – pdf | ap inter 2nd year question papers – download pdf011 ap inter. Model question papers senior secondary course ap about 90,000 learners enrolled with us in the very year of inception of intermediate (model paper - 1. Ap sr inter previous (old) question papers download for ap inter 2nd year model papers 2018 study material with bit papers from sakshi, eenadu, narayana, s.
|
OPCFW_CODE
|
Minimum Age to Learn Kabbalah?
How old do you have to be to learn Kabbalah?
(I think it's 60) but it seems everyone learns it anyway (Maharal, Chassidus). Is there a defense or is it just done?
From where do you get this assumption that learning Kabbalah is solely an issue of age?
Also, who says that Sifrei Maharal and Chassidus are defined as actual Kabbalah? They are based on Kabbalah, but who says they are actual Sifrei Kabbalah?
Related: http://judaism.stackexchange.com/questions/8075
from a book on Rav Kaduri zt'l http://dafyomireview.com/article.php?docid=261
I'm surprised that no one made this comment, but the wide variety of conflicting answers points to the need for it. That depends upon what you are calling "Kabbalah". The restrictions mentioned in niglah, like 'belly full of Shas and poskim', being 40 years old and married, etc. are speaking about a very specific and limited area of study. It is not dealing with what most people today think of when speaking about "Kabbalah". There are very few people today who even know what true "Kabbalah" is referring to, much less are pursuing its study.
https://www.chabad.org/kabbalah/article_cdo/aid/380334/jewish/Kabbalah-Before-Age-40.htm
@Yahu I have heard in the name of R Hartman (a mechaber on the Maharal) that R Moshe Shapiro told him that there are two levels of understanding the Maharal - "Ta'am Elyon" and "Ta'am Tachton" - and "Ta'am Tachton is also very important"
As others have said, the Shach (the Sifsei Cohen) says that one must be 40 before they can learn Kabbalah.
Others disagree:
Even though there is an opinion that one should not begin to study Kabbalah until the age of 40, the great masters of Kabbalah and Chassidut did not agree with this opinion. Some of the greatest teachers of Kabbalah--including the Ari, Rabbi Moshe Chaim Luzzatto (also known as the Ramchal), and Rebbe Nachman of Breslov--did not live to the age of 40! From an early age they began to study Kabbalah. In the Zohar we find that a sign of the coming of the Mashiach is when children will study and discuss Kabbalah.
As far as Chassidus is concerned, the Lubavitcher Rebbe said many times that this age limit referred to the time before the Baal Shem Tov. To quote from AskMoses.com
The Lubavitcher Rebbe explains that this [age limit] applied before Chassidut - the teachings of Kabbalah as prepared for the masses - was revealed to the masses by the Baal Shem Tov and Rabbi Schneur Zalman of Liadi, founder of Chabad Chassidut. During that time, the esoteric parts of Torah were considered to be a luxury, and only an elite few were privileged to be privy to Torah's inner dimension, and it was necessary for one to have exceptional knowledge and wisdom to study kabbalah.
Today, however, chassidut has been prepared for, and revealed to, everyone because it isn't a luxury anymore. Today, chassidut is necessary in order to be able to live as a G-d fearing Jew who loves and fears G-d. The longer the Galut progresses, the darker (spiritually) it becomes. In order to combat this darkness it is necessary to have the powerful light of chassidut.
Inner.org, the website of the famous contemporary kabbalist Rabbi Yitzchak Ginzburgh, says that the reason for the age limit was the concern that the knowledge of kabbalah could be misused:
The reason that some authorities have warned against studying Kabbalah at too early an age was that there were instances in Jewish history, even relatively recently, when most negative phenomena resulted from the misrepresentation and misuse of Kabbalah. For example, approximately 350 years ago, a misguided Jew, Shabbetai Tzvi, proclaimed himself the Messiah, basing himself on misinterpretations of Kabbalah. Before he was proven a fraud, he had wrought great material and spiritual suffering upon a significant portion of European Jewry.
However, it goes on to say, Chassidus is not susceptible to this problem:
This is one of the reasons that the Ba'al Shem Tov revealed a new dimension of Kabbalah--Chassidut. Chassidut expresses Kabbalah in a way that is accessible to every soul and that excludes all possibility of misinterpretation. Thus, it is highly recommended to study Kabbalah within the framework of Chassidut. When Kabbalah is studied within this framework there is no danger. If there is no danger, there is also no age barrier or other limitation on the study of the inner dimension of Torah.
didn't the Shach only live 41 years
@sam good point https://en.wikipedia.org/wiki/Shabbatai_HaKohen
Rav Ben Tzion Abba Shaul (Or LeTzion, Mussar, Shaar HaTorah, Maamar 7) writes that one should be 40 to learn Qabbala. Also, the Rokach writes in Sefer HaShem that one should be 40 to learn the Qabbalistic Names of Hashem. However, the Kaf HaHaim Sofer (Orah Haim 155:12) writes that one should be twenty (see Mekubal's answer for the rest of his prerequisites).
See Shach YD 246:6.
The Shach on YD 246:6 is by far not the final answer. Sephardi authorities do not hold by him. Neither do various Ashkenazi ones. The Gra on the same page takes him to task for not knowing enough about Kabbalah to know that the text(the Rama in the Y"D 246:6) wasn't speaking about Kabbalah. For that matter Yeshivat Sha'ar HaShamayim in Jerusalem, an Ashkenazi Kabbalistic Yeshiva, admits students to the study of Kabbalah as young as 18 and 20.
Excellent source quoted by AY. Regarding Maharal and Chasidus, the danger with Kabbala is in the misinterpretation of it. There is no danger in learning kabbala pre-filtered and presented by mumchin in a ready-to-serve manner. See also the end of hakdamas haramban al haTorah.
For the sake of accuracy, Maharal and many sifrei Chassidus do not teach Kabbalah. Rather they teach insights, mussar, and other lessons or philosophical thoughts. Many if not most of what they teach is based on Kabbalah but is not actual learning of the "Hochmas HaNistar". Some Sifrei Hassidus such as Tanya or Rav Tzaddok ( and others) do also get more into actual kabbalah than others but the goal is not to teach Maaseh merkavah or Maaseh Beraishis, rather to see the Koach Hapoel Binif'al and be davek to Him.
I based my comment partially on the Ramban that I referred to. The Ramban strongly cautions his readers against speculating about the remazim he brings as only bad will come from it (same as the pardes issue). However, he excepted the same person who learns these same areas with an attentive and perceptive ear from a "mekubal chacham". I understand this to mean he finds out the right answers from the right people and not the wrong answers from speculation.
Your point is well taken that these sefarim may not be considered pardes at all. [Edited by moderator to remove an obsolete part.]
ok so where did you get that anyone can read Zohar?
Don't you say Brich Shmei d'Marei Alma 4x weekly? The Mishna Brura quotes Zohar many times as well. So I will throw the ball in your court to show an issur of "reading" Zohar.
Just to add two more Maareh Mekomos to the already great answers:
1) The Ramak in Ohr Ne'erav (Chelek 3, Chapter 1) writes that one should be 20 (he also says one should be married):
עוד צריך להגיעו לפחות לשנת העשרים כדי שיגיע לפחות לחצי ימי הבינה, ואף אם יש שפירשו עד שיגיע האדם לשנת הארבעים אין דעתינו מסכמת בזה עמהם, והרבה עשו כדעתינו והצליחו. ועם כל זה הכל לפי טהר הלב כדפירשנו וכפי טוב העצה הנכונה, ויש לזה רמז בזהר בכמה מקומות אמרם (זהר ח"ב כט) עד לא תבשל בשולך וכו':
(Roughly: one must also have reached at least the age of twenty, so as to have reached at least half the days of understanding. And even though some explained that one must wait until the age of forty, our opinion does not agree with them, and many have followed our view and succeeded. Even so, everything depends on purity of heart, as we explained, and on sound guidance; there is a hint to this in several places in the Zohar, e.g. (Zohar II:29) "before your cooking is done," etc.)
2) Rav Yitzchak Kaduri zt'l writes that one doesn't need to be 40 (based on the lack of any qualifier, I can imagine he didn't think there was an age)
|
STACK_EXCHANGE
|
[04:35] <mup> Bug #1698891 changed: [web UI] non-admin can appear to create logical volume <docteam> <MAAS:Expired> <https://launchpad.net/bugs/1698891>
[04:35] <mup> Bug #1702919 changed: displayed lease IP information not updated when entering rescue mode <dhcp> <MAAS:Expired> <https://launchpad.net/bugs/1702919>
[08:50] <mup> Bug #1715337 opened: [2.3a2] Missing DNS in rescue mode <MAAS:New> <https://launchpad.net/bugs/1715337>
[09:20] <mup> Bug #1715338 opened: Dumpdata failing for table metadataserver.nodeuserdata <MAAS:New> <https://launchpad.net/bugs/1715338>
[09:50] <mup> Bug #1715345 opened: [2.3 alpha 2, Machine details] When I click the edit button, the Save and Cancel buttons remain hidden <ui> <MAAS:New> <https://launchpad.net/bugs/1715345>
[10:20] <mup> Bug #1715353 opened: [2.3 alpha 3, Subnets/VLAN details] VLAN details do not have the Edit button and can still be edited with auto-save <ui> <MAAS:New> <https://launchpad.net/bugs/1715353>
[10:34] <c06> hi all i am trying test in my vbox VM
[10:35] <c06> i have two vm one VM with public and hostonly adapter(<IP_ADDRESS> - gw <IP_ADDRESS>), and i installed maas
[10:36] <c06> i configured second VM with hostonly adapter(<IP_ADDRESS>). but MAAS is unable to find the second machine. any suggestions.??
[10:37] <c06> Also in my second machine i enabled Network(boot) PXE
[10:53] <c06> anyone on.?
[11:35] <c06> my node is unable to get the boot on tftp server
[11:49] <c06> it got ip but after we are getting "No bootable medium found."
[14:51] <roaksoax> cnf: did you enable dhcp and/or imported image s?
[14:51] <roaksoax> err
[14:51] <roaksoax> sory
[14:54] <cnf> Hmm?
[14:54] <cnf> I'm in the US on holiday atm
[15:00] <cnf> roaksoax: so hi from carson city \o :P
[15:25] <roaksoax> cnf: sorry my message was for someone else
[15:59] <ybaumy> roaksoax: i know you are not in charge fixing that damn resolv.conf problem. but do you have a estimate on how long it will be until we get a solution. i would really need maas atm. i tried foreman but its not compatible with juju
[16:00] <ybaumy> i also tried 14.04 for commissioning
[16:00] <ybaumy> but its the same problem
[16:00] <ybaumy> and in the end juju doesnt work too
[17:39] <xygnal> roaksoax: get a chance to see my paste?
[17:41] <xygnal> roaksoax: not sure if this is an existing bug. cannot find one.
[17:53] <jamesbenson> can someone destribe badblocks-destructive further? I've read the info on the release notes... just not sure what exactly it does.... i.e. are the hard drives erased? just bad sectors?
[18:05] <xygnal> i assume destructive mode does read AND write testing
[18:06] <xygnal> write testing being destructive
[18:44] <jamesbenson> yeah, I'm was guessing that any bad sectors they find, they mark bad so any data there is unrecoverable...
[18:44] <jamesbenson> good to get validation. thank you xygnal
[18:45] <jamesbenson> do any of the tests work when they have a raid controller card?
[18:46] <jamesbenson> the few tests I've ran, they all seem to fail (have a perc 6i raid card)
[20:08] <xygnal> not sure James. dev team seems busy today. still waiting on a reply myself before submitting a bug.
[22:03] <mup> Bug #1715501 opened: MAAS can't connect to RSD 2.1 pod <MAAS:New> <https://launchpad.net/bugs/1715501>
[22:09] <mup> Bug #1715501 changed: MAAS can't connect to RSD 2.1 pod <MAAS:New> <https://launchpad.net/bugs/1715501>
[22:12] <mup> Bug #1715501 opened: MAAS can't connect to RSD 2.1 pod <MAAS:New> <https://launchpad.net/bugs/1715501>
|
UBUNTU_IRC
|
Greetings, people who read the random musings of a random person. I'm not sure why you do that, but it does make someone like me feel slightly more connected to total and ambient strangers! Also didactic.
Those of you who have read every single one of my Newgrounds internet journal posts will remember that I posted some time ago that I was looking for an ActionScript programmer to help me finish a game, because the person I had been working with flaked like he was made out of really old person skin from a really old person who never moisturized.
Update 1: I have not found a programmer.
Update 2: I hate all of them.
Update 3: I don't really, but I've had a disheartening run with them.
The first guy: worked with him for like six months, great guy, great artist, great programmer. Disappeared off the face of the earth after leaving me some IMs that read like 'hey, dude ; -; are you around?'
I really hope he didn't kill himself because I was busy buying energy drinks from 7-11 and had left my computer on and he needed someone to talk to and thought i was ignoring him. That would be horrible!
Second guy: Turned out he didn't actually know any more about flash programming than me. It took a month to realize this, because I am dumb.
Third guy: Turned out he was the second guy using a different internet name. Took me a month and a half this time. Because I am REALLY dumb.
Fourth guy: Asked to be paid money, I was like 'sure, i will pay you money, I've been working on this game for like a year, i just want to finish it. Please. Let the nightmare end.' and he was like, HA HA! YEAH! LETS DO THIS!
A few weeks later he sent me an e-mail basically saying my game sucked, I sucked, and he didn't think anyone would play it. He was so despondent he wouldn't let me pay him for the time he HAD spent on it. I tried!
I cried myself to sleep that night.
Fifth guy: Turned out to be the second guy again. It only took me a week to realize this. I'm pretty sure he is the devil.
Sixth guy: Actually a girl. Technically still working with her... though she hasn't returned an e-mail for three weeks, so I'm going to assume that she is also the devil.
Seventh guy: Teacher at a college who teaches a course on flash. I showed him the game demo and he was like 'Bazam. Crackalakin. This is awesome. Yeah, i can bust that out in like two, three days. Tops.' I was quite pleased.
After two months of 'I'll get to it tomorrow' he eventually sent me the inevitable 'I will never get to it' e-mail. I do not hate him.
Eighth guy: No one yet, but I have a feeling it'll be the second guy again.
The ninth person might very well be me if I can ever get past that point in my AS programming skills where the little figure doesn't gyrate across the screen randomly like he's having an epileptic fit.
I took a flash programming class, and that's what I turned in for my final project 'epileptic gyrating 8-bit stick figure man'
I got a B... minus. The teacher said the only reason he did not give me a worse grade was that the way I swore at my computer mid-class when the program wasn't doing what I wanted amused him.
So, if anyone out there is a flash genius and not a flaky person, feel free to contact me.
If you're not a flash genius and/or are a flaky person, feel free to contact me just to mess with me. I don't actually want you to do this, but it is not like I can stop you. If this ordeal has shown me anything, it is that I have no power to generate any kind of respect, output, or work ethic from anyone.
All of you have a wonderful day! Remember, if you see a ninja, kill it just to stay safe. Every eighth ninja drops a full heal.
|
OPCFW_CODE
|
Multipart file not being added to request when karate is executed from jar
When using multipart file from karate executed from a jar the file content is not added to request
As part of our CI/CD process we build out a jar file containing our Karate features to run on the deployed code in each environment in order to validate the deployment.
We have a test which calls an endpoint to upload a multipart file which runs fine in the IDE.
However, when we build the code out to a jar and run it from the jar, the content of the file is no longer part of the request.
The error we get in the actual test: org.springframework.web.multipart.support.MissingServletRequestPartException
I have attached a sample project which demonstrates this behaviour indirectly (i.e. by showing the content is empty when run from a jar)
Instructions.
unzip the karate-sample.zip file and open up the project in your ide.
Run the file TestApplication as a java app in the IDE.
The output of the test will indicate content:
Mixed: content-disposition: form-data; name="fileUpload"; filename="test.xlsx"
content-type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet; charset=UTF-8
content-length: 8602
Completed: true
IsInMemory: true
Now run the maven clean install build; this will build out an executable jar called: sample-test-jar-with-dependencies.jar
Run this as a java jar file: java -jar sample-test-jar-with-dependencies.jar
The output of the test now indicates that the file is empty:
content-disposition: form-data; name="fileUpload"; filename="test.xlsx"
content-type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet; charset=UTF-8
content-length: 0
Completed: true
IsInMemory: true
From debugging the issue it seems the file is not getting set properly in ResourceUtils when it is a jar resource, whereas it does when it is a file resource; therefore the ScenarioFileReader's call here returns null:
public File relativePathToFile(String relativePath) {
    // getFile() works for file-system resources but yields null
    // when the resource is packaged inside a jar
    return this.toResource(relativePath).getFile();
}
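For context, the difference is easy to reproduce with a tiny standalone sketch (illustrative code, not Karate's internals): reading a classpath resource as a stream works both from the IDE and from inside a fat jar, while converting it to a java.io.File only works when the resource is an actual file on disk.

import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class JarResourceDemo {
    public static void main(String[] args) throws Exception {
        // works in the IDE and inside a jar: the resource is read as bytes
        try (InputStream in = JarResourceDemo.class.getResourceAsStream("/excel/test.xlsx")) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            System.out.println("read " + out.size() + " bytes");
        }
        // inside a jar the resource URL is "jar:file:...!/excel/test.xlsx",
        // which has no java.io.File equivalent - hence the null above
    }
}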
karate-sample.zip
@david-scott-ie it may take me a while to get to this - but I thought the use of classgraph should have solved all these problems: https://github.com/intuit/karate/issues/751
2 questions: a) which version and b) the moment you are in a JAR you must use classpath: - so are you ?
Version is 1.0.1
And yes, using classpath: this is the line in the feature:
And multipart file fileUpload = { read: 'classpath:/excel/test.xlsx', filename: 'test.xlsx', contentType: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' }
Note, I did try with and without the leading / both work in IDE but neither in packaged Jar
@david-scott-ie really appreciate the sample and instructions ! and your investigation helped. I've made the change so jar resources are grabbed as a byte-stream not a File.
can you test if it works, because this is something near impossible to write a unit-test for: https://github.com/intuit/karate/wiki/Developer-Guide
@ptrthomas - yes it works! Thanks for turning that around so quickly.
Pulled the code and built it out locally and plugged it in and both the sample and the actual tests are now uploading the file in the same way in the IDE and when run as a jar.
Thanks again - do you have a date for the next release?
@david-scott-ie great. we should have 1.1.0.RC4 within a week
@david-scott-ie 1.1.0.RC4 has been released
@ptrthomas File uploading finally works for me with RC4. I'm glad it turned out that it wasn't a problem with my script ;) Thanks for fixing that.
1.1.0 released
|
GITHUB_ARCHIVE
|
Client-side templating basically means that your browser is responsible for rendering your web app, leaving the server in charge of handling what it's good at: the data. Because it is based on the MVC model, Aria Templates makes it a breeze to separate your business logic from the way it is represented.
You're already familiar with the concept of templates if you have ever done web development using technologies such as ASP, JSP or PHP: a common way to proceed is to design a part of your interface in a generic way so that it is able to represent different states of the data. In the JSP world, the data is extracted and processed using JSTL or scriptlets (embedded chunks of Java code) and various taglibs. The resulting markup is then sent to the browser each time an update occurs.
With the advent of AJAX and the ability to generate asynchronous requests from the client, web apps became more flexible and communications between the browser and the server were not limited to transmitting whole streams of markup and JS code anymore and developers were free to only use chunks of markup or even pure data, a process that was made even easier with the help of JSON.
Server-side templating has two undeniable advantages:
- it's efficient, because the processing is done on a powerful dedicated machine;
- it's pretty much standard, meaning that documentation and tools are easy to find for developers.
However, as web applications became more and more complex, this mechanism quickly showed its limits: because of the amount of information needed to be sent back and forth, server-side templating weighs heavily on bandwidth and this has a huge impact on applications responsiveness.
The idea behind client-side templating is to solve this issue by shifting part of the processing to the client. Basically, upon initialization, the browser receives the necessary files to render the interface (the template engine and the templates) as well as the initial data set; then, each time a request is made to the server, the application only retrieves data that it needs to update the interface state.
Not only does this mechanism allow for less data to be transmitted over the network but, because the display is clearly separated from the data and the logic, this also makes customization of the interface much simpler: one template can be replaced by another very easily.
Aria Templates relies on the MVC pattern to create applications. For developers, this means a clear separation between the user interface, the actual data and the business logic. Each of these layers has a specific representation in AT.
In this approach, the view (the user interface) describes the state of the data at a given moment. The consequence is that developers do not need to manually modify the parts of the UI impacted by a data update: in Aria Templates, data updates automatically trigger changes in the UI.
All these concepts are explained in detail in the rest of this documentation.
To illustrate practically how an AT app is built, let's have a look at a simple example. Like most applications it will be based on:
- a bootstrap: a piece of HTML that loads the framework engine and the initial script code to load the template;
- a template;
- some data.
First off, we start with the bootstrap, in our case a simple HTML page created from scratch:
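A minimal sketch of such a page (file names, version numbers and data are illustrative):

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8"/>
    <title>Greetings</title>
    <script type="text/javascript" src="aria/ariatemplates-1.4.3.js"></script>
    <script type="text/javascript" src="aria/css/atskin-1.4.3.js"></script>
</head>
<body>
    <h1>Greetings</h1>
    <div id="output"></div>
    <script type="text/javascript">
        // load the template into the container and hand it initial data
        Aria.loadTemplate({
            classpath : "SgtGreeters",
            div : "output",
            data : { guys : ["John", "George", "Paul", "Ringo"] }
        });
    </script>
</body>
</html>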
There are 3 parts to look at here:
- Lines 6 & 7 reference the needed files to include Aria Templates.
- Line 11 creates an empty container (a DIV in this case) which will be used to display our template.
- The final script block loads the template into the output container. In this example, it is also where we initialize some data to be used by the template.
Now let's see what SgtGreeters.tpl looks like:
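Here is a minimal sketch of what such a template can look like (the exact syntax shown is illustrative):

{Template {
    $classpath: "SgtGreeters"
}}
    {macro main()}
        <h2>Welcome!</h2>
        <p>Say hello to:</p>
        <ul>
            {foreach guy inArray data.guys}
                <li>Hello ${guy}!</li>
            {/foreach}
        </ul>
    {/macro}
{/Template}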
As you can see, AT introduces a special kind of grammar to describe your interfaces, much of it being quite straightforward. If we have a detailed look at the code we can see the following:
- Lines 1 to 3 simply declare that the file is a template with a specific classpath.
- Line 4 marks the entry point of the template.
- Lines 8 to 10 iterate through the array that was given as an argument in the bootstrap.
And here's what the result looks like (screenshot omitted): a simple list of greetings, one per entry in the data.
Aria Templates aims at making it fast and easy to develop professional web applications. It offers a wide variety of widgets covering most of modern UI use cases, as well as a complete API to help tackle the common tasks of the application logic.
Thanks to the clear separation between its MVC layers, it also makes it simple to customize an existing application by modifying its interface or enhancing its business logic.
To go further, the rest of this documentation will present you each layer of the framework in detail. To get a clear understanding of the basics, please refer to the Core Layer Concepts section. The Aria Templates Guide site is also a good place to find code samples and tutorials designed to illustrate practically various common patterns and use cases. Finally, the API reference is where you'll find documentation about all the classes and methods exposed by the framework.
|
OPCFW_CODE
|
iOS Keyboard support.
What issue is this addressing?
Closes #1090
What type of issue is this addressing?
feature
What this PR does | solves
It creates a keyboard event mapping for physical keyboards on iOS.
Minimum supported iOS version is 13.4 - before iOS 13.4, this API only can return four arrow keys and five extra keys, which is usually not enough. So I didn't bother implementing the pre-13.4 case; instead, pre-13.4, this code compiles but does nothing.
Tested in AAAAXY - the game is now fully playable in the iOS Simulator, which is one step for potentially releasing the iOS version on the Mac App Store, and otherwise a major improvement for local testing.
Caveat: I am unsure about the "aliased" key mappings, commented in genkeys.go. In particular, I do not know what the NonUSBackslash and NonUSPound keys are and what Ebitengine key they SHOULD map to. I ASSUME they may be the extra key many international keyboards have near the enter key, and thus mapped to Backslash as that's what pressing the key does on DOS with US layout, but why have two codes for that then?
The change seems fine, thank you!
I ASSUME they may be the extra key many international keyboards have near the enter key, and thus mapped to Backslash as that's what pressing the key does on DOS with US layout, but why have two codes for that then?
https://www.w3.org/TR/uievents-code/#keyboard-102
A second key is also added (labelled #~ on a UK keyboard) which is partially tucked under the Enter key. This key is encoded as "Backslash", using the same code as the | key found on the "101" keyboard layout. According to [USB-HID], the US | and UK #~ are actually two separate keys (named "Keyboard \ and |" and "Keyboard Non-US # and ~"), but since these two keys never co-occur on the same keyboard most platforms use the same scancode for both keys, making them difficult to distinguish. It is for this reason that the code "Backslash" is used for both of these keys.
So assigning two different code to the same key name "backslash" seems OK. What do you think?
Thanks for the link. Yes, both of these key codes probably are best mapped to Backslash.
Keyboard Non-US # and ~ is the #' key on German layout, and does emit \ on US keyboard layout settings. It's close to the Enter key and emits the same scancode as backslash. Also, TIL that # is a pound sign, I thought £ is the pound sign, but fine.
The other one the \ key next to shift - is the <>| key on German layout, and too acts just like \ when on US layout.
So yeah, let's map all three to the "Backslash" Ebitengine key code for now. This will likely have one minor fun bug that holding one, then the other, and releasing first makes Ebitengine think no key is held. If we ever want to fix this, we probably should fix it by assigning a separate keycode in Ebitengine, but it's probably not a high priority.
Thanks. Could you add a comment to explain this background (why we assign the two keys to 'backslash')
Thanks - more explanation added.
Just checked, the runes = nil is necessary indeed. Verified by removing the runes = nil from the touch processing - if I now press the a key and click the Simulator window, that counts as extra a key presses.
I think the shared runes = nil, plus making all code append, is probably ideal. I'll send a separate PR for that once this is merged.
I think the shared runes = nil, plus making all code append, is probably ideal. I'll send a separate PR for that once this is merged.
As runes doesn't have to be a global state, adding an argument runes to updateInput sounds better.
Absolutely. I don't even know why it's currently a global.
I'm gonna sleep soon. If you could confirm and fix the Android issue, I'd be very happy!
|
GITHUB_ARCHIVE
|
How to come up with ideas for patents
The need for patenting inventions is covered in almost every second article on intellectual property. But what do you do when you need to protect the company's product, and conversations with developers yield answers each less helpful than the last: "I just improved the code," "there's nothing new here," "I just fixed bugs," "it's not interesting to me," and "leave us alone, we have already proposed everything," etc.? Recipes below.
Hold brainstorming sessions
We hold them regularly: one day the topic is Parallels RAS, another day the VDI component of the same product, etc.
This way people are more open and discuss ideas and problems with their colleagues; the main thing is to get them talking. Do not discuss the details of each proposed idea at once with everyone - you will just lose time; it is better to throw out as many ideas as possible and afterwards meet with each author separately to clarify everything. At such meetings, ideas that do not differ from prior art (analogs) known to one or more participants are discarded immediately. Ideas can arise both from what has already been done in the product and from what is planned or could be done. There have been cases when a brainstorm for patents produced new features for the company's product that were later implemented.
In a one-on-one meeting it is harder to get a programmer to open up, so it is better to go through what he has already implemented and see how the parts of the product he works with could be improved.
Tip: invite to any brainstorm an extra person who has worked with the product for a long time, preferably with deep technical knowledge, and who has already dealt with patents (has patented ideas of their own).
At brainstorms it is better to collect all ideas, abstract and narrowly applicable, raw and already implemented, and only then decide what to do with them.
Analyze the new functionality of releases
Another option for collecting patent ideas is to analyze, together with project managers and product managers, the new functionality in the latest and planned releases. Here, too, do not focus on the fine details; you can clear everything up later with the developer working on a specific feature. The purpose of such an analysis is to identify potential ideas for patents.
So, for example, I periodically organize a meeting with the Parallels Desktop project manager and the product manager, where we go through the list of what was implemented in the finished update. Alongside the potential ideas, I write down the names of the developers working on them. Later I clarify the questions that interest me with those developers directly, so as not to waste the managers' time and to give the authors a chance to talk about the implementation in detail.
Do not forget that you should first work on the functions that have already shipped in published updates. It depends on the legislation of the country, but in most cases they can be patented only within a certain period of time after publication.
Also, do not discard ideas about innovations that could be applied in your product. It can be useful to discuss them with management and analyze the possibility of introducing them into the product - and, as a consequence, a possible invention.
Organize the process of analyzing ideas and working on patent applications
All the collected ideas can be discussed by a patent committee; this allows a comprehensive look at the proposed inventions and a decision on what to work on, what needs further development, and what is not worth protecting. At the patent committee there is also a likelihood of new ideas emerging, since while discussing the proposals the members can modify them, merge them, or even arrive at a completely different idea.
In our company this committee consists of six people, including a representative of each large project and the head of the company's entire development organization.
In this discussion, ideas are analyzed taking into account not only the three criteria of patentability (novelty, utility and non-obviousness), but also the company's need for the patent.
Make the idea-handling process transparent to developers and their management: where ideas are recorded, what stages a patent application goes through, and what is expected from the authors of inventions.
The result
Properly motivated programmers will be able to find inventions in the company's products, but it is worth helping them understand the goals, tasks, processes, and organization of events. The right approach also plays an important role; remember: people are different - different interests, some listen only to the boss, some are motivated by bonuses, and some just love everything new and will happily work with you.
|
OPCFW_CODE
|
Fragmented speech using Redhat 8.0 and Apollo2
lists at digitaldarragh.com
Sat Nov 8 18:21:45 EST 2003
I can't remember exactly who was good enough to send this back then, but I kept it as it was so helpful.
HZ is set to 512. To compensate, you'll need to change the timing variables for Speakup. In the /proc/speakup/apollo directory, you'll see files called jiffy_delta and delay_time, among others. Note that I might not have spelled apollo the way it is in the /proc/speakup directory. Use the "cat" command to display the value contained in the two pseudo files. For example: "cat jiffy_delta" and "cat delay_time", without the quotes, of course. To change a value, use the "echo" command. For example: "echo 15 > jiffy_delta" and "echo 80 > delay_time". Using cat, find the value of jiffy_delta. Multiply the value by five. For example, if the value of jiffy_delta is 3, make it 15. Then find the value of delay_time. Divide it by 5. For example, if you find 400, make it 80. If this works, you can write a script to make this happen on boot.
I do have this happening at boot, but it was a while ago and I've not
touched linux much since.
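From memory it amounts to something like this (a sketch; double-check the exact /proc path on your system):

#!/bin/sh
# adjust Speakup timing for the Apollo at boot
echo 15 > /proc/speakup/apollo/jiffy_delta
echo 80 > /proc/speakup/apollo/delay_time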
Hope that helps
From: speakup-bounces at braille.uwo.ca
[mailto:speakup-bounces at braille.uwo.ca] On Behalf Of Sheila Acock
Sent: 08 November 2003 23:02
To: speakup at braille.uwo.ca
Subject: Fragmented speech using Redhat 8.0 and Apollo2
Please don't shoot me down or tell me to go and read the documentation.
I was really keen to try Linux, and have spent many more hours than I
ought to have done reading documentation, but I can't say it came
together as a coherent whole, and I have had so many problems, I am
feeling quite discouraged.
Having read the HowTo about installing Redhat with Speakup, I decided
to take the plunge and buy a box set in the hope that I could get a
system up and running to play with fairly quickly.
I won't bore you with details of the problems encountered, but I now
have Redhat 8.0 running on a system with an Athlon 850 processor, with
250 mb memory and an Apollo2 synth. The trouble is, the speech is so
disjointed I am having difficulty making much sense of it.
Looking back through the List archive, I see that a couple of people
reported a similar problem using Apollo2 and Redhat 8 back in February.
There did not seem to be a completely satisfactory solution, though one
person said that a suggestion involving "delay_time" and "jiffy_delta"
had helped to some extent.
I have signed up for Janina's pre-Techshare Linux workshop, but I don't
think I'm going to make much progresss until I can get reasonable
speech, so any help in sorting this problem would be much appreciated,
remembering that I am a complete newbie and need things spelt out in
words of one syllable.
Speakup mailing list
Speakup at braille.uwo.ca
More information about the Speakup mailing list
|
OPCFW_CODE
|
A review of Lending Club installment loan risk performance by applicant-provided job titles
Tremendous amounts of data in financial services organizations are generated from the initial application through all subsequent customer interactions. However, new application decisions are often driven entirely by widely available credit bureau information. Credit bureau information is the right foundation; however, incremental application questions can add valuable risk splitting and provide a proprietary edge over the competition, allowing institutions to approve deeper and offer better terms.
Lending Club applications at one point were a bastion for alternative data taken at the time of application. Potential applicants could write full paragraph descriptions on why they needed a loan for prospective funders to read. While this particular field was eliminated as Lending Club became less peer-to-peer dependent, Lending Club still collects free-form, applicant-provided employment titles. For example, applicants can enter "Professor", "Truck Driver", "Teacher", "Super Hero", or enter nothing at all. Given the free-form nature of this field, it is hard to directly incorporate it into a credit risk model build: 67 thousand unique employment titles were entered between 2016 Q3 and Q4. However, neural networks and natural language processing are tailor-made for this type of data and can be used to see if there are any usable insights that could be generalized and incorporated.
I won't go deeply into the technical details, but leveraging Keras, the Python deep-learning library, and pre-trained word vectors, I trained a neural network using employment titles from 2016 Q3-Q4 Lending Club bookings with a 1/0 target indicating whether the borrower charged off or defaulted in the first 18 months of the loan.
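The shape of such a model is roughly the following - an illustrative sketch with made-up titles, labels and dimensions, not the exact build:

import numpy as np
from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

titles = ["Professor", "Truck Driver", "Teacher", ""]  # free-form employment titles
charged_off = np.array([0, 1, 0, 1])  # 1 = charged off / defaulted within 18 months

tokenizer = Tokenizer(num_words=20000, oov_token="<unk>")
tokenizer.fit_on_texts(titles)
X = pad_sequences(tokenizer.texts_to_sequences(titles), maxlen=5)

model = Sequential([
    # in the full build, the Embedding weights would be seeded from
    # pre-trained word vectors (e.g. GloVe) rather than random init
    Embedding(input_dim=20000, output_dim=50),
    GlobalAveragePooling1D(),
    Dense(16, activation="relu"),
    Dense(1, activation="sigmoid"),  # predicted probability of charge-off
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, charged_off, epochs=3, verbose=0)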
Upon training the neural network, I split a hold-out population into the top 10% most risky predicted titles, the top 10% least risky predicted titles, and everyone else. To make sure the employment titles are not just capturing the risk splitting already captured in Lending Club's loan rating system, A-grade (least risky) to G-grade (most risky), I reviewed loan risk performance by loan grade and employment title risk group. High predicted risk employment titles display between 20% and 100% higher risk, and low predicted risk employment titles display between 16% and 40% lower risk (see chart below).
Commonly occurring titles in the High Risk Titles group included: “Driver”, “Server”, and no entry. Commonly occurring titles in Low Risk Titles Group included: “Accountant”, “President”, and “Attorney”. While Lending Club cannot compliantly use specific individual employment titles as a reason for declining an applicant, upon review of the common titles in each group, non-salary, hours dependent jobs, like a ride-share driver and server, appear to present incremental risk not currently captured in the Lending Club loan grading system. Adding a question around salary vs. non-salary employment may be a way to capture much of the employment title insight and meet the compliance hurdles required to be incorporated into a future loan grading model.
- Employment title and potentially salary vs. non-salary employment has potential as a credit risk splitter in consumer lending on top of bureau information
- New tools like neural networks and natural language processing, while challenging to directly incorporate into credit models today, can be used to unearth new insights, after which more traditional methodologies can be leveraged to bring in-market change and drive incremental value
|
OPCFW_CODE
|
What is the right way to wave the lulav and esrog in the down direction?
We wave the lulav and esrog (4 minim) in six directions including “down”.
What is the right way to wave down?
I have seen two procedures.
1) The 4 minim are held lower (nearer to the ground) than in the other wavings and waved up and down held vertically three times.
2) The 4 minim are pushed downwards towards the chest and away from the chest, while being held roughly diagonally three times.
There are many different customs. Which one do you want to know?
I am interested in all customs where the 4 minim are held in the direction that they grew - not with anything pointing down.
Here is the Chabad custom. When I get some time I'll make it into an answer (unless someone beats me to it..).
@Michoel I make that type 2.
related http://judaism.stackexchange.com/q/3033/759
Rabbi Joel M. Finkelstein of Anshei Sphard-Beth El Emeth Synagogue in Memphis can explain it better.
Video- Rabbi Joel M. Finkelstein
Vezu - it's great - he mentions type 2 shaking.
Vezu, welcome to Mi Yodeya, and thanks very much for bringing this source! Please consider [edit]ing in a quick summary of what R' Finkelstein says with respect to this question so that people can have an idea of where this answer is going without watching the video.
Link to the time where he shows it: http://www.youtube.com/watch?v=SwLKecpRC6w#t=222s
The Chabad custom (based on the Siddur of the Alter Rebbe, Sefer Haminhagim Chabad and the actual practice of the Lubavitcher Rebbe; collected in Otzar Minhagei Chabad pg. 288) is as follows: each of the six directions consists of six movements, i.e. pushing the four species away and pulling them back towards you three times. Each "pushing" motion starts from the heart, and the "pulling" motion brings the four species back to actually touch the chest in the place we hit when we say viduy. The person waving remains standing in one spot facing west and does not turn to the direction he is waving; only his hands and the upper part of his body turn. The lulav remains upright throughout at chest level, except for the upward and downward waving, where the four species remain upright but are elevated to face level or brought downwards. A pictorial guide of this is available here.
|
STACK_EXCHANGE
|
RatInABox2.0 - Opening the discussion
I've begun to think about 2.0. The reason is that there are certainly a couple of choices I made early on in development which weren't optimal. Now could be a good time to fix these as the community is growing but still small enough it won't be super disruptive. Also fixing them will make it easier to maintain RiaB in the long run.
I'm opening this issue to get community thoughts on this. @SynapticSage @colleenjg @jquinnlee @mehulrastogi you're some of the most active users I know fairly well so I'm tagging you to get your input (if you have any), but anyone can chip in here.
Here's my thoughts:
Args not Dicts: It's increasingly annoying me that parameters are always handed in as dicts. This is unconventional and has warranted very well-made but hacky workarounds e.g. #38 #39 (see the sketch at the end of this post)
Plotting: To me at least the visualisation ability of RiaB is really important, but animations are slow and I like animating things, so this is annoying. Could improve by being smarter about how we render stuff in matplotlib, and not re-rendering the Environment or trajectories each frame. Stuff like that. See #54
Type hinting: This is a new thing in python which I've been told to consider. Any thoughts?
Refactoring: As discussed in #58 #55. E.g. it's not nice having all Neurons classes in one .py file.
Documentation: Would be great to have a sustainable ReadTheDocs page. #36
Unit testing: I have been pretty sloppy about this but will add loads more.
Global Environment update(): Given, now, Environments know about their Agents and Agents know about their Neurons we could have just one update function in Env which cascades through everything else. Cleaner?
Jax compatibility: Very on the fence about this one. Probably leaning towards not doing it. Would be great to have speed ups, autograd and gpu capacity but it could be just a bit too much / unnecessary / off-putting for non-python geeks (tbh, like me). But if jax is the future I want to consider it. Options include:
Don't do it
Partial jax to hit a few heavy-lifting utils functions. Q: Does this even work, would converting to/from jax arrays not be inconveniently slow here?
Full jax no numpy. np--> jnp everywhere.
Both jax and numpy. Users choose which backend. This would be hard, but I've played around and it probably could be done. Has complications though.
...?
I'm not a massive software guy so @SynapticSage @mehulrastogi feel free to give high level comments about best way to go forward.
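To make the first point concrete, here's a sketch of the two styles (hypothetical parameter names, not actual RiaB code):

class DictAgent:
    """Current style: defaults merged with a user-supplied params dict."""
    default_params = {"dt": 0.1, "speed_mean": 0.08}

    def __init__(self, params=None):
        merged = {**self.default_params, **(params or {})}
        for key, value in merged.items():
            setattr(self, key, value)

class ArgAgent:
    """Proposed style: typed keyword args that IDEs can autocomplete."""

    def __init__(self, dt: float = 0.1, speed_mean: float = 0.08) -> None:
        self.dt = dt
        self.speed_mean = speed_mean

a = DictAgent({"speed_mena": 0.2})  # typo is silently accepted
b = ArgAgent(speed_mean=0.2)        # the same typo here raises TypeError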
args, not dicts ✅
a helpful case study in support of args ...
Most of you (I'm sure) have seen Grant Sanderson's beautiful 3blue1brown YouTube channel. Grant impressively homebrewed a piece of software manim (math animation engine) that he uses to make his stunning videos.
manim actually started out using a CONFIG dict --- reminiscent of this situation 🤔
On a positive note, the CONFIG dict cut down on lines in object init; encouraged people to spell out settings in one place.
But on the dark side, dicts required nearly re-coding a lot of Python features handled by kwargs and setting attributes. Ultimately the community fork decided to kill CONFIG dicts in favour of args -- decision convo here:
https://docs.manim.community/en/stable/changelog/0.2.0-changelog.html
And, it looks like Grant Sanderson's fork is also trying to remove them: https://github.com/3b1b/manim/pull/1932
plotting 👍
💯 replotting = slow.
... if ratinabox caches plot objects, super recommend scheme we chatted about:
https://github.com/TomGeorge1234/RatInABox/issues/30#issuecomment-1486449726
The TaskEnvironment has a weak version of this feature -- doesn't replot everything and thus renders quickly. But it's pretty hacky in my view that the environment caches things about its agents and goals. In the long-run, it will be more maintainable to have each class in charge of caching its own plot objects rather than having to change master supervisor class's plot every time the children classes change.
type hinting 👍
Especially easy-to-type variables.
Tools like jedi and language-server-protocol offer better code completion for type-hinted variables.
unit testing 👍
global environment 👍
Possible suggestion: each RIB class could have a list of children (environment.children = [agent, ...]; agent.children = [neuron, ...]) to help cascade .update() and .plot()/.render() calls down a hierarchy, and to unify the lingo for distributing update/plot, as opposed to each object having a different attribute name for its "kids".
Jax 🤷♂️
No strong opinions. I usually break into optimization mode when an analysis takes more than a day to finish or if it takes an hour, but I run it 100+ times.
Maybe partial is the right choice? Ease into slowly.
Sounds like a great idea overall for the longevity of the package! I definitely agree for the args instead of dicts, type hinting and unit testing. For global environment, if the cascading update is implemented, I would suggest having a kwarg like cascade=True, to allow users to opt out, when needed. No strong views on the other sections.
I would suggest an additional section:
modularity: Many of the classes have very long methods that chain a lot of complex, separate computations together. When I've created new classes for my own use, e.g., new Agent classes, I've had to copy long sections of certain methods that I needed to overwrite, but only partially (for example, for computing an agent's velocity). This can create a lot of code duplication (I think there may already be some for the plotting methods). So, I strongly recommend adding the goal of modularization to the list, i.e., extracting meaningful subparts of class methods and turning them into their own functions, perhaps aggregated into agent_util.py, env_util.py and neuron_util.py, or something like that.
Great comments, thanks guys. @SynapticSage 3B1B advice heeded! @colleenjg you're right this could be more modular, for example Agent.update() is pretty enormous. Breaking these down would make sense so I'll look to do that. Don't expect this anytime soon btw so any new ideas, keep posting them here.
Thanks for the feedback, closing for now.
One thing that just occurred to me, which could be considered:
Only passing ax to the plotting functions, not fig.
In typical use cases, to my knowledge, passing both should be redundant, as you can access the figure with ax.figure (or ax.ravel()[0].figure in cases where ax is an array).
Agreed and added to the list. It's essentially redundant and only adds bloat
@musicinmybrain thanks for your feedback - that's ok, I doubt we'd go full jax. In fact leaning towards no jax at all actually. After some preliminary testing seems like getting significant speed ups would be difficult as most of the heavy computations are already vectorised
|
GITHUB_ARCHIVE
|
Creating a sidebar is useful to:
- Group multiple related documents
- Display a sidebar on each of those documents
- Provide a paginated navigation, with next/previous button
To use sidebars on your Docusaurus site:
- Define a file that exports a sidebar object.
- Pass this object into the @docusaurus/plugin-docs plugin, directly or via @docusaurus/preset-classic.
By default, Docusaurus automatically generates a sidebar for you, by using the filesystem structure of the docs folder.
You can also define your sidebars explicitly.
A sidebar is a tree of sidebar items.
A sidebars file can contain multiple sidebar objects.
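A minimal sidebars.js of this shape might look like the following (ids and labels are illustrative):

module.exports = {
  mySidebar: [
    {
      type: 'category',
      label: 'Getting Started',
      items: ['doc1'],
    },
    {
      type: 'category',
      label: 'Docusaurus',
      items: ['doc2', 'doc3'],
    },
  ],
};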
Notice the following:
- There is a single sidebar mySidebar, containing 5 sidebar items
- Getting Started and Docusaurus are sidebar categories
- doc1, doc2 and doc3 are sidebar documents
Use the shorthand syntax to express this sidebar more concisely:
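For example, categories containing only doc ids can be written as plain objects (a sketch using the same ids):

module.exports = {
  mySidebar: {
    'Getting Started': ['doc1'],
    Docusaurus: ['doc2', 'doc3'],
  },
};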
You can create a sidebar for each set of markdown files that you want to group together.
- tutorialSidebar and apiSidebar are sidebar technical ids and do not matter much.
- When browsing one of the tutorial docs, tutorialSidebar will be displayed.
- When browsing one of the API docs, apiSidebar will be displayed.
Paginated navigation links documents inside the same sidebar with next and previous buttons.
SidebarItem is an item defined in a Sidebar tree.
There are different types of sidebar items:
- Doc: link to a doc page, assigning it to the sidebar
- Ref: link to a doc page, without assigning it to the sidebar
- Link: link to any internal or external page
- Category: create a hierarchy of sidebar items
- Autogenerated: generate a sidebar slice automatically
Use the doc type to link to a doc page and assign that doc to a sidebar:
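A sketch of such an item (id and label are illustrative):

{
  type: 'doc',
  id: 'doc1', // document id
  label: 'Getting Started', // sidebar label
}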
The sidebar_label markdown frontmatter has a higher precedence over the label key in the sidebar item.
Don't assign the same doc to multiple sidebars: use a ref instead.
Use the ref type to link to a doc page without assigning it to a sidebar. When browsing such a page (e.g. doc1), Docusaurus will not display the sidebar the ref belongs to.
Use the link type to link to any page (internal or external) that is not a doc.
Use the category type to create a hierarchy of sidebar items.
Use the shorthand syntax when you don't need category options:
For sites with a sizable amount of content, we support the option to expand/collapse a category to toggle the display of its contents. Categories are collapsible by default. If you want them to be always expanded, set themeConfig.sidebarCollapsible to false globally.
For docs that have collapsible categories, you may want more fine-grained control over certain categories. If you want specific categories to be always expanded, you can set collapsed: false on them.
Docusaurus can create a sidebar automatically from your filesystem structure: each folder creates a sidebar category.
An autogenerated item is converted by Docusaurus to a sidebar slice: a list of items of type doc and category.
Docusaurus can generate a sidebar from your docs folder:
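A sketch of an autogenerated sidebar (the sidebar id is illustrative):

module.exports = {
  myAutogeneratedSidebar: [
    {
      type: 'autogenerated',
      dirName: '.', // '.' means: generate from the docs folder root
    },
  ],
};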
You can also use multiple autogenerated items in a sidebar, and interleave them with regular sidebar items:
By default, the sidebar slice will be generated in alphabetical order (using files and folders names).
If the generated sidebar does not look good, you can assign additional metadata to docs and categories.
For docs: use additional frontmatter:
For categories: add a _category_.yml file in the appropriate folder:
The position metadata is only used inside a sidebar slice: Docusaurus does not re-order other items of your sidebar.
A simple way to order an autogenerated sidebar is to prefix docs and folders by number prefixes:
To make it easier to adopt, Docusaurus supports multiple number prefix patterns.
By default, Docusaurus will remove the number prefix from the doc id, title, label and url paths.
Prefer using additional metadata.
Updating a number prefix can be annoying, as it can require updating multiple existing markdown links:
You can provide a custom
sidebarItemsGenerator function in the docs plugin (or preset) config:
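A hedged sketch that re-uses the default generator and filters its result (the 'internal' doc id is made up, and the exact callback signature may differ by version):
module.exports = {
  presets: [
    [
      '@docusaurus/preset-classic',
      {
        docs: {
          sidebarItemsGenerator: async function ({defaultSidebarItemsGenerator, ...args}) {
            const items = await defaultSidebarItemsGenerator(args);
            return items.filter((item) => item.id !== 'internal'); // drop one doc, as an example
          },
        },
      },
    ],
  ],
};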
Re-use and enhance the default generator instead of writing a generator from scratch.
Add, update, filter, re-order the sidebar items according to your use-case:
Using the themeConfig.hideableSidebar option, you can make the entire sidebar hideable, allowing your users to better focus on the content. This is especially useful for content consumption on medium screens (e.g. tablets).
To pass in custom props to a swizzled sidebar item, add the optional
customProps object to any of the items:
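A sketch (badge is a made-up prop that a swizzled sidebar item component could read):
{
  type: 'doc',
  id: 'doc1',
  customProps: {
    badge: 'new',
  },
}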
Real-world example from the Docusaurus site:
|
OPCFW_CODE
|
qml append json stream to listmodel
I would like to append an answer in JSON from a web server to a QML ListModel. Currently I am using
eventModel.append(jsonObject)
which works fine if the answer only contains strings or numbers but not if there is an array within the answer. I am using the code from here to get the JSON object.
This is one line of the answer:
{"i":3814086,"t":"d","s":1479970800,"sw":"Do","sds":"24.11.16","ss":"08:00","e":1479996000,"eds":"24.11.16","es":"15:00","f":false,"z":[{"i":223500,"d":true,"r":"","h":null,"hs":null,"hss":"","he":null,"hes":""}]}
Everything is added fine beside z. If I read the entries from the list model I get this:
{"objectName":"","i":3814086,"t":"d","s":1479970800,"sw":"Do","sds":"24.11.16","ss":"08:00","e":1479996000,"eds":"24.11.16","es":"15:00","f":false,"z":{"objectName":"","count":1,"dynamicRoles":false}}
It looks like everything in z is lost. I already tried to add it again
for(var i in jsonObject){
eventModel.append(jsonObject[i])
eventModel.set(i, {"z":jsonObject[i]["z"]})
}
but the result is the same.
Is something like this just not possible or am I doing something wrong here when appending the JSON object to the list model?
ListModel contains list of ListElement items. According to docs it can contain only simple values - Values must be simple constants; either strings, boolean values, numbers, or enumeration values. Value you try to assign to z is object or array.
Too bad I did not see that. I guess there is no workaround, is there? Because I have no clue how to work with the JSON if I cannot store it in a ListModel =(
If I do eventModel.set(i, {"zn":jsonObject[i]["z"][0]}), zn is added as an array and I have all the values. Maybe 2D arrays (or whatever z currently is) cannot be added to a ListModel, but arrays do work.
Sorry to have misled you. I've tested that, and ListElement accepts arrays as well as objects. For example, it eats model.append({param: {a: 1, b: 2}}) without a problem. Arrays are also acceptable: model.append({param: [1,2]}).
This is taken from one of my apps and it can be an illustrative example for your case.
ListModel {
id:agenciesModel
ListElement {
name: "401"
eventListDates :[
ListElement{
date:"jj/mm/aaaa"
}
]
}
ListElement {
name: "402"
eventListDates :[
ListElement{
date:"jj/mm/aaaa"
}
]
}
ListElement {
name: "403"
eventListDates :[
ListElement{
date:"jj/mm/aaaa"
}
]
}
}
var listObjJS = [{"date":"10/10/2019"},
{"date":"10/11/2011"},
{"date":"10/11/2011"},
{"date":"10/11/2011"}
];
for (var j = 0; j < listObjJS.length; j++)
    agenciesModel.get(0).eventListDates.append(   // index of the agency to extend; the original 'i' was undefined
        { date: listObjJS[j].date }               // read the field directly instead of string-splitting the object
    );
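And for the original question's data, a sketch (the field names i and r come from the JSON in the question): a nested array appended via JavaScript becomes a child ListModel, which you read back with get() and count rather than by serializing the element:
eventModel.append(jsonObject)           // z is stored as a nested ListModel
var z = eventModel.get(0).z             // the child model, not a plain array
for (var k = 0; k < z.count; k++)
    console.log(z.get(k).i, z.get(k).r) // fields of each entry in z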
Basically I should convert the 2D array into a 1D array, but I do not see how I should store my data so that I could still access it. JavaScript does not have variable variable names, right? So I cannot append z1, z2 etc. and then do a loop with z+j as the variable name...
|
STACK_EXCHANGE
|
Is there a better way to level up in Dragon Quest IX?
An update: I've since traded in the game. It just irritated the hell out of me towards the end. Thanks to those with suggestions! I was hoping there would be a non grindy solution -_-
Currently playing Dragon Quest IX and I want to level up some characters quickly as I'm at the Gittish castle (just beat Hootingham). My main is 35, the others are 31. I looked up the Priest class and they get multi heal at 38 which made me groan that I would have to grind 7 levels :(
I know that I can go hunt metal slimes etc but they are so freaking rare (or they run away) it is making for an overly grindy experience which is becoming frustrating :(
Would appreciate any pointers on other ways to level up (if there are).
Thanks :)
An update: I've since traded in the game. It just irritated the hell out of me towards the end. Thanks to those with suggestions! I was hoping there would be a non grindy solution -_-
Unfortunately, killing metal slimes is your best bet. They're fairly rare, but they're also worth big, big XP. The Quarantomb has a fair number of them, and that's where I farmed in the early part of the game. You'll want to go prepared with attacks that are strong against Metal enemies, plus high speed and speed boosting spells. There's a good strategy here that may help save you some trouble.
This GameFAQs thread lists some locations where you can find other metal slime variants later in the game. The consensus seems to be that Bad Cave is the best place to farm Liquid Metal slimes.
It may feel grindy, and that's because it is :( All the DQ games I've played have been this way - "it's a feature, not a bug." :)
In general I don't mind grindy play - as long as I'm enjoying the game :) But some elements in DQ9 are annoying me :(
(I think that penalising the player for changing classes is a little off putting)
I might stick with it, but I'm starting to feel tired of the game :(
What you need to do is go to Gittingham Palace and hunt around for Lethal Armours until they send for backup, which are Cure Slimes, and keep defeating them. Make sure you leave at least 2 Cure Slimes up so the Lethal Armours keep sending more; if you go from 1 to 16 Cure Slimes and do it 4 times, you will get about 90,000 XP.
Since you've already made it to Gittingham Palace, don't waste your time with metal slimes at the Quarantomb. You'd be better off fighting regular monsters. What you should be hunting are Liquid Metal Slimes. They give 40,000 xp or 10,000 xp per character if you have a team of 4. You can find them on the 3rd floor in the Bowhole. Make sure you have someone who knows Hatchet Man or Lightning Thrust(axe or spear)... those moves have a 50% chance of hitting, and if they hit, it's an automatic critical.
Thanks for the tip, but I've traded this game in a while ago.
It just irritated the hell out of me towards the end!
It's kinda put me off any future Dragon Quest games, really.
I say you should just grind any monster in Gittingham Palace.
I was doing that yesterday and I've already passed the game, so it works.
I've found that the Shivery Shrubbery at Swinedimple's Academy work extremely well. To test this, I spent 5 minutes battling there, and received 17k xp. It probably works best at lower levels, up until level 40, in my opinion.
facepalm Use Slime Hill of course! Using the Starflight Express, fly to the plateau by Angel Falls with the cave on it. Don't enter the cave. There will be a higher spawn rate of metal slimes, and only members of the Slime family can spawn there.
Lol I still have the game. Got it last Christmas! It's epic.
You can't get Slime Hill until post-game.
|
STACK_EXCHANGE
|
We have been shipping and selling for over 20 years and have an amazing track record getting our customers their animals in a timely and SAFE manner. SERPENTS IN THE CLOUDS is a collection of stories with a large snake, Simalia boeleni, the Boelen's Python, at its heart. The species is endemic to the mountains of New Guinea. I found myself going with him vicariously as he not only writes about the wildlife but the wild humans that live there too. I am so thankful that Ari took the time to document his journey and do things that most of us could only dream of! Their coloring is beautiful, with iridescence like the white-lipped python or even the better-known reticulated python. ECO Herpetological Publishing (January 1, 2018). Reviewed in the United States on February 20, 2019. Simalia boeleni is a species of python, a nonvenomous snake in the family Pythonidae. Synonyms: Liasis boeleni Brongersma, 1953; Morelia boeleni (Brongersma, 1953); Liasis taronga Worrell, 1958. They have proven very challenging to reproduce in captivity and have generally only been available in limited quantities. "Nigel" 2017 Male Banana Mexican Spiny-tailed Iguana (Ctenosaura pectinata). He is very easy to handle and docile when being handled. She takes thawed rats off tongs. Please do not hesitate to contact me at: SORINFLORINGHITA@GMAIL.COM or +40765424545.
|
OPCFW_CODE
|
cpu usage
I was using NSClient++, querying through the web interface for the status of various items to report to a home automation tool I use. It still works, but I noticed that the CPU info it reports back is inaccurate. For example, the server shows 55% utilization in Windows, but NSClient++ reports back 8%. I've noticed it's hard to pull those statistics from a Windows computer in general, at least through PowerShell. How well does SNClient handle getting current CPU utilization? This would be for modern AM5 AMD CPUs in Windows 11 or Windows Server 2022.
The cpu utilization is measured once a second (per core) and stored for 1hour per default. You can then request the average utilization for given durations.
Have you actually tried SNClient and noticed an issue or is this just a general question?
Was a question, now that I tried it maybe an issue or usage problem. Seems the same as NSClient++. If I run say:
curl -k --user <username/password> 'https://:8443/api/v1/queries/check_cpu/commands/execute'
Well, I uninstalled it, but it basically returns the same info as NSClient++. So it would, say, return:
{"command":"check_cpu","lines":[{"message":"OK: CPU load is ok.","perf":{"total 1m":{"critical":90,"maximum":0,"minimum":0,"unit":"%","value":5,"warning":80},"total 5m":{"critical":90,"maximum":0,"minimum":0,"unit":"%","value":5,"warning":80},"total 5s":{"critical":90,"maximum":0,"minimum":0,"unit":"%","value":5,"warning":80}}}],"result":0}
But the CPU is cranking away in Windows 11 at say 60% utilization in task manager. So issue or usage question not sure.
Did a test with windows 11 vm:
Running the cpu check in a while loop:
while true; do ./check_nsc_web -k -p test -u https://<IP_ADDRESS>:8443 check_cpu time=3s; sleep 1; done
Looks good to me, the 3s average matches pretty much what the task manager shows.
Maybe one thing to note, the snclient gathers cpu metrics in memory only, so if you just
(re)started the agent, the metrics can only be calculated over the duration since the last restart.
closing this one, let me know if there are any issues.
Sorry, it's been a while until I could get back to this. How is the above expected? You are reporting with snclient a utilization of, let's say, 9%, but the task manager shows 24%.
I didn't find any description of how the task manager calculates the CPU load, so no idea tbh. Which time frame did you compare? 5s, 1m or 5m?
I'm using 5m.
I was curious. You mention "the 3s average matches pretty much what the task manager shows", but the task manager shows 24%.
the command line output is not synchronized with the task manager. I'd expect the next check to match the 24% value. But you can see the 2 previous spikes in the list.
Using the cpu value over 3 or 5 seconds does not make any sense anyway if your monitoring check_interval is one minute. The smallest timeframe should be longer than or equal to your check interval.
I guess I am more looking for what the value is now, not an average.
Basically it is always an average because you have a cpu seconds counter and divide by duration to get the percentage. But if you want the "current" value, just pick a very small duration. It just doesn't make much sense in my opinion. But that might depend on the actual use case.
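A minimal sketch of that arithmetic (not snclient's actual code; the sample list is invented): per-second samples are averaged over the requested window, so a short window approximates the "current" value:
def avg_utilization(samples, window_seconds):
    # samples: per-second CPU percentages, newest last; only data
    # gathered since the agent started is available
    window = min(window_seconds, len(samples))
    if window == 0:
        return 0.0
    return sum(samples[-window:]) / window

print(avg_utilization([5, 5, 60, 60, 60], 3))  # -> 60.0, the recent load
print(avg_utilization([5, 5, 60, 60, 60], 5))  # -> 38.0, smoothed out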
|
GITHUB_ARCHIVE
|
Creative and versatile individual with recognized experience in
SGML/XML, Web and Internet technologies. Experienced in distributed
application design and development in Java, Perl, and C; Unix system
administration; and GUI development. Maintain a variety of
free software applications used by hundreds of users throughout the
world. Proven ability to learn quickly and become an expert in a
variety of technologies. Skilled at producing quality work with limited
- Programming/Computer Languages:
Bourne shell script,
C (ANSI, K&R),
MS-DOS/Windows batch script,
Perl (4 & 5),
- Computer Protocols:
- Document Languages:
Frame Maker Interchange Format (MIF),
- Programming Libraries:
Java Servlet API,
JDK 1.2 and later,
- Operating Systems:
Apache HTTP Server,
Creation and contribution of open source software since 1993,
- Provided updates and fixes to Perl-based SGML/XML DTD parsing tool
originally developed by Norman Walsh.
- Creator of Perl-based program for web-based searching and
viewing of Unix/Linux manpages.
- Creator and maintainer of one of the top free web-based
email archivers, recognized in several publications and used
by numerous web sites.
- Contributed patches
to the nmh mail handling system.
- Creator of a collection of Perl-based tools for parsing
and analyzing SGML DTDs.
- Creator of Perl module to allow a Perl program to run as
a Unix/Linux daemon process.
- Creator of Perl module for binding Perl structures to a text file,
with its primary use for supporting HTML templates for CGI-based programs.
- XML Commons Resolver
- Provided patches to support Windows drive-letters in
pathnames, recognition of xml.catalog.verbosity if
CatalogManager.properties is not used, and proper setting
of systemID of InputSource during SAX parsing of XML
* Software available at
Senior Computer Systems Designer
Lead architect and developer of NSIV, an IETM viewer for NAVAIR based
on web and open source technologies.
Independent Software Consultant
Various consulting services, including product development,
web application development, technical and electronic publication,
content management, and open source software.
PBM Associates, Inc:
- Co-architect of NSIV,
Interactive Electronic Technical Publication (IETP) viewer for NAVAIR,
based on web and open source technologies:
- Designer and implementor of the IETP
compilation process utilizing
Lucene (for full-text search).
- Designer and implementor of IETP-based web services
using Java servlet technology
under Apache Tomcat.
- Designer and implementor of a modular
Java Swing-based application for simplifying NSIV management operations
for non-technical users.
- Designer and implementor of S1000D Issue 3.0 and 4.0 applicability,
including submitting technical feedback to S1000D working groups
for improving the S1000D applicability model.
- Browser-based user interface development using
- Designer and implementor of automated software installation using
- Developed testing framework using
and other OSS libraries to facilitate automated testing.
- Source tree, application build, and distribution packaging
- Architect and implementor of a Java-based BREX
(Business Rules Exchange) data module (DM) validator using
(XPath 2.0) and
- S1000D Issues 2.2 through 4.1 BREX support.
- Layered BREX support.
- GUI interface.
- Command-line interface (for batch validation).
- Java API.
- XML and HTML report formats.
RSI Content Solutions:
custom solutions development: Workflows, action handlers, web services,
and UI customizations.
Provide strategic recommendations in improving and managing XML-based
authoring and publishing environment
Enhance and maintain Docbook-based XSLT transforms for PDF and HTML
Re-designed and re-implemented conversion process from older
FrameMaker+SGML-based documents to Docbook-based XML documents, improving
the conversion process time by over 4,000 percent.
Produced a 150+ page developer's guide documenting HP's custom Adobe
FrameMaker+SGML 6.0 authoring environment: EDD analysis, localization
procedures, custom third-party plugins analysis, including compilation
and installation · Provided strategy recommendations
for upgrading to FrameMaker 7.0 and the transition from SGML- to
XML-based authoring · Provided advice on graphics authoring and
- Architect of key components of
secure stamping system for email:
- Format of stamps within email.
- Stamping process.
- Stamp validation process.
- Overall security model.
- Bootstrapped the company's initial development system
- Developed initial set of cryptographic- and stamp-related
libraries utilizing OpenSSL.
Provided guidance and effort in the porting of
research, academic-oriented work into commercial, production quality
work · Designed and developed C++ API for a Question &
Answering product that uses natural language processing · Developed
and maintained a C/C++ utility library for use within LCC products ·
Established source code management repository using CVS
and a collection of Perl scripts I developed to automate source code
management tasks · Developed (GNU) makefiles for managing the
compilation of projects for multiple programming languages:
C/C++, Java, Perl.
Web Applications Architect
- Co-architect of a collaborative wireless classroom learning system
under Solaris, Linux, and Win32.
- Design, develop, and maintain core
library, which includes:
- Proprietary message system supporting asynchronous and synchronous
message delivery that can adapt to multiple application network
protocols (e.g. HTTP). Messaging based on a point-to-point
queuing model with message delivery relay support.
- Servlet-based framework for web application development.
- General utilities like caches, application property resource
management, thread pooling, and jar file merging.
Created non-synchronized versions of some JDK classes for improved performance.
- Software project administration:
- CVS repository management.
- Cross-language software build management using GNU make.
- Automated nightly build and release processes developed
- Tools to auto-generate template source code from technical
- Miscellaneous tasks: technical consultation to colleagues,
system administration, QA rollover procedures.
Senior Software Engineer
- Web application development for Excite@Home
broadband content service:
- Designer and developer of a server-side application framework in
- Page layout management.
- Reusable page components.
- Encapsulation and management of various server-side data repositories.
- Debugging modes.
- Utility classes.
- Development and maintenance of core server application library
built on top of the Java Servlet API:
- HTML component library to help support a consistent look across
applications and web browsers by hiding the rendering differences of
browsers and to promote better reusability of visual components:
- Font and color settings.
- Font and color inheritance.
- Markup minimization to reduce page size.
- Automatic generation of CSS styles and/or FONT elements.
- Java-based web server (based on Tomcat 2.1).
- Co-branding support.
- Custom page dispatching.
- Developed various modules for the broadband news application, including:
news photo viewer, full news story display,
online games listing, top movie/music/tv listings, and lottery.
- Developed a multi-windowed web-based editor for the creation and
editing of cover pages for @Home's news and sport channels:
- Service-side implemented in Perl, invoked via standard CGI or
Apache::Registry module running under
- X/Motif development for
Quadritek's main product QIP, an
IP management software product for large corporate TCP/IP networks:
- Design and implement hierarchical and table views
with the help of Microline's widget library.
- Create custom event handlers and action routines to extend
- Design and implement HTML-based help system.
- Icon/image management.
- Application X resource management.
- Reusable widget component development to reduce redundant code.
- Troubleshoot X/Motif technical problems.
- Designed and implemented QIPxpress: a
web/Perl application for managing and configuring
DHCP and DDNS/DNS servers with optional LDAP support.
Application designed to run under WinNT and Unix systems with a variety of
web servers (Apache, Netscape, and IIS).
Senior Information Analyst
and Information Services Inc.
- Administrator and co-developer for large on-line community project:
- Project component development using C, C++ and
languages; CGI; Netscape API (NSAPI);
Sybase Open Client;
NetGravity API; SybPerl.
- Developed source code and configuration management software
for the entire project using SCCS and Perl.
- Netscape Suitespot server administration.
- Sapphire/Web project administration.
- Software packaging, distribution, and installation.
- Designed and authored technical documentation.
- Unix (Solaris) administration.
- Sub-project integration.
- Technical consulting for colleagues.
- Developed a pipeline-based conversion process for
migrating the Physicians' Desk
Reference SGML source from an in-house legacy main-frame
typesetting system to newer Xyvision XPP-based system.
- Document analysis and design ·
Document conversion and processing ·
Technical consulting on SGML, WWW, and other related technologies ·
Perl course development and instruction ·
Sun Sparc Solaris 2.5 system administration, including:
third party software installation/compilation;
peripheral installation (memory, disk-drives, etc);
user account management;
and sendmail configuration
- B.S. in
Information & Computer Science
University of California, Irvine.
Eliot Kimber's HyTime course ·
Introduction to Rational Rose/C++ Using UML ·
Java Swing and XML Programming ·
Microsoft Access ·
Object-Oriented Analysis & Design with C++ ·
Omnimark Programming I & II ·
Practical Formatting Using DSSSL ·
Project Management ·
Java Workshop ·
Work has been included or recognized in several publications
and web sites:
- BYTE Magazine
- HTML & CGI Unleashed,
- The HTML Sourcebook,
John Wiley & Sons Inc
- Managing Internet Information Services,
O'Reilly & Associates, Inc
- MH & xmh: Email for Users & Programmers,
O'Reilly & Associates, Inc
- Perl 5 for Dummies®,
IDG Books Worldwide, Inc
- Perl Conference 3.0 (Speaker), Open Source Conference
- The Perl Journal
- SGML CD,
- SGML for Dummies®,
IDG Books Worldwide, Inc
- The SGML/XML Web Page
- Special Edition Using SGML,
|
OPCFW_CODE
|
Something I started working on last year: yet another IRC bot.
Since IRC is a simple protocol for sending plain text messages to channels or users, it provides opportunities for some good programming projects, such as a bot. An IRC bot is something that connects to an IRC network and provides some kind of automated service to the users on it. The potential applications of a bot are endless, and the only real limitation is that it has to communicate via plain text.
Writing an IRC bot is an excellent project for an interactive program — within a few days you can have something that runs on the network and responds to messages. If you have an idea for something useful or fun it can do, even better. You just need to program the bot to understand some commands and behave appropriately.
My previous IRC bot was called Probot, and was written in C and Prolog: C to do all the low-level networking stuff, and Prolog to provide dynamic and configurable behaviour. The implementation was a C program that used SWI-Prolog’s C library bindings.
The idea was that it would receive commands in IRC messages in the form of Prolog goals, and it would then print the results of solving those goals. For example:
<edmund> probot: X is 2 + 2.
<probot> X = 4.
Or, with a slightly more ambitious goal involving access to outside data, backtracking and output:
<edmund> probot: pb_get_nicks(Ns), member(X, Ns), format(atom(G), 'Hi, ~s!', X), pb_speak(G).
<probot> Hi, ChanServ!
<probot> Hi, edmund!
<probot> Hi, probot!
pb_get_nicks/1 and pb_speak/1 are predicates which, respectively, return the IRC nicks of the currently visible users and send a message to the IRC channel. The rest is standard Prolog: non-deterministically pick a member X of Ns, construct a greeting G, and send it to the channel, then backtrack until all possibilities are exhausted.
Of course, some predicates in Prolog change the environment. So it was possible to send commands to the bot that would affect how it processed further commands.
<edmund> probot: assertz(hi :- (pb_speaker(X), format(atom(G), 'Hi, ~s!', X), pb_speak(G))).
<edmund> probot: hi.
<probot> Hi, edmund!
The entire Prolog program is a database of rules, which can be manipulated on the fly. Of course, Prolog is not the most straightforward language to use for this purpose. But I had envisioned various hooks and shorthands that would make this easier, for instance, defining goals that should be solved on each kind of IRC event. The syntax could be improved by defining new Prolog keywords, and even Prolog’s Definite Clause Grammars could be used to add domain-specific languages for certain tasks. Sadly, envisaging is as far as I got with it before I lost the source.
Last year there was a discussion on IRC about bot programming, spurred by the creation of xBot. xBot is a modular bot written by Milos Ivanovic that provides a variety of services to an IRC channel. One of the clever things about xBot, as a Python program, is that the services are defined in modules, and modules can be loaded and changed without restarting the bot. This makes the develop-test cycle for bot services much, much shorter.
Starting again: IRCbot
We were comparing notes and I was reminiscing about Probot to anyone who would listen. Since it’s such an approachable project, I undertook to repeat it. The new bot would be more general purpose and not based on an esoteric logic programming language. Python is a good language to use for general projects of this sort (and it means I can borrow the reload functionality from xBot).
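To give a flavour of how approachable this is, here is a minimal sketch of a bot's core loop (the server, nick and channel are placeholders, and a real bot would also need error handling, reconnection and rate limiting):
import socket

HOST, PORT, NICK, CHANNEL = "irc.example.net", 6667, "ircbot", "#test"

sock = socket.create_connection((HOST, PORT))

def send(line):
    sock.sendall((line + "\r\n").encode())

send("NICK " + NICK)
send("USER %s 0 * :%s" % (NICK, NICK))
send("JOIN " + CHANNEL)

buf = b""
while True:
    data = sock.recv(4096)
    if not data:
        break                                # server closed the connection
    buf += data
    while b"\r\n" in buf:
        line, buf = buf.split(b"\r\n", 1)
        text = line.decode(errors="replace")
        if text.startswith("PING"):
            send("PONG" + text[4:])          # protocol keep-alive
        elif "PRIVMSG" in text and "hello" in text.lower():
            send("PRIVMSG %s :Hi there!" % CHANNEL)  # a trivial service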
But what should IRCbot do? Choosing an original and suitable name for the project had sorely taxed my imagination. Coming up with realistic and useful features for it was no easier. The typical IRC bot responds to a formal command language, or simply makes announcements from an external source.
A more challenging (and interesting, but admittedly, less likely to be useful!) approach is to respond to IRC conversations in natural languages. There are several examples of conversation bots, such as the famous ELIZA and the more modern Cleverbot (which xBot has a module for). A long time ago I was interested in the Loebner Prize, which is awarded each year to the program which comes closest to passing the Turing Test. I had ideas back then on analysing natural language, but I have learned a lot since; partly through studying formal languages in computer science, and partly through taking a stronger interest in language. I am not a linguist by any means, but I think a program I wrote now would process language in more interesting ways than what I was planning back then.
A possible role of IRCbot is to connect a source of natural language — conversations on IRC — to a natural language analyser. Quite what the point is, I have not yet decided. But it will be interesting to see how sentence structure can be recognised, and how repeatedly used words relate to each other over the course of a conversation. This is still a long way from being a viable Loebner Prize entry (which would require the program to uphold one end of a conversation), but may give interesting results, and should be an interesting programming challenge in any case (which is what I’m really after).
IRCbot therefore provides two avenues of exploration: constructing a reasonable IRC bot architecture, and creating a natural language processing engine. I will blog about these in the future.
|
OPCFW_CODE
|
M: Ask HN: What are extremely innovative marketplace ideas? - ThomasFreud
Hello guys,
I am looking for ideas for marketplaces which are extremely innovative/thrilling/bizarre...
I really appreciate your answer!
R: 1337biz
Obligatory reference: Fiverr! I just love that place. It is like going through
a 99c store with all kind of affordable oddities.
R: trafficlight
It's incredible the things you can get done for $5. Just recently I had a
professional quality commercial voice-over done. And I had my logo digitized
for a very specific embroidery machine. I don't know how they can afford to
spend the time doing such things.
R: 1337biz
There is lots of potential there. I wish they had some API so one could create
a shopping front and get everything handled by pre-screened fiverr providers.
But I guess there are already many doing similar arbitrage businesses via
virtual assistants that just coordinate fiverr orders all day long.
R: livestyle
A cool marketplace is Uber in a way.
A lot of service industries could be used in a marketplace scenario similar to
Uber.
R: ThomasFreud
do you have a link? Because I cannot find it ;(
R: 1337biz
how about [https://www.uber.com/](https://www.uber.com/)
|
HACKER_NEWS
|
Gavin Bell, digital strategy and product design
Upcoming and previous talks
- 5th October, 2012 - scalecamp, London
- I ran a session on scaling the social side of a website, how to manage moderation, spam and building the right functionality for the size of audience you have.
- 2nd October, 2012 - Ignite Strata + Velocity, London
- An ignite talk on human behaviour for data driven sites, virtuous cycles, good behaviour makes for great data. Good data comes from your community, how can they help you to make the data great? Strava, the cycling and running app succeeded, yet GPS cycling apps are a dime a dozen. Studying human behaviour and not focussing on the technology lead them to create a self-reinforcing network effect. This talk will show how granular, re-playable data can be found in unexpected places.
- 27th April, 2012 - Skillmatter in the brain session, London
- Is Everything Social? Gavin will take a brief look at how privacy and competition change social activity; the rest of the session will be activity-based. Covering product design and social-object-centric design within a scrum framework, you'll look at the problems of finding somewhere to live, personal banking, present buying and scientific publishing, and try to design good products for these areas.
- 14th March, 2011 - SXSW, Austin TX
- Understanding Humans: New Psychology and the Social Web, a core conversation exploring the potential of post-cognitive theories of psychology and how they can help build the social web.
- 22-23rd January, 2011 - History Hackday
- I'll be attending and hoping to build on the novel context ideas around Victorian literature.
- September, 2010 - CIM Ireland BRAND new 2010 conference
- A keynote at the Chartered Institute of Marketing Ireland annual conference in Belfast. I'll be talking about the role of the social web for companies and how this changes the relationships they have with their customers.
- September, 2010 - Open Tech 2010
- Not evenly distributed - A look at what will come after Facebook falls - Distributed, open and social or something else?
- 7th July, 2010 - MiniBar Workshop
- A short talk on product management for web applications. How to get from the idea to something your new community can help you evolve, plus the importance of an API.
- 19th May, 2010 - UX London Bookclub
- Q and A with Peter Morville, Joshua Porter, Karen McGrane and myself on the books we've written
- 24th March, 2010 - MiniBar Workshop
- A short talk on product management for web applications. How to get from the idea to something your new community can help you evolve, plus the importance of an API. Now sold out.
- March, 2010 - SXSW 2010, Austin, TX
- Do The Right Thing: Building Respectful Software, a conversation on building good social software with the technical editor for my book, Matthew Rothenberg.
- October, 2009 - Barcamp London 7
- The essence of Building Social Web Applications, a short talk on the nub of my book
- September, 2009 - Interesting 2009
- I'll talk about the experience of writing my book for O'Reilly.
- February, 2009 - O'Reilly Tools of Change
- I gave a talk about designing services and not focusing on sales aimed at the publishing industry entitled “The Long Tail Needs Community”. The presentation is on slideshare now.
- October, 2008 - Future of Web Apps, London
- A talk on interaction design, exploring some ideas on how interaction design and social software might work together better, “To Borg or not to Borg”, there is now a video of the talk available, the slides are on slideshare as usual.
- October, 2008 - <head>
- I'm speaking about a longer term view on dataportability and identity, entitled “Disintegration of the persistence of identity”. This is a webcast based conference, but I'll be at the London based get together on the Friday.
- July, 2008 - Open Tech 2008, London
- A short talk entitled Distributed, Federated, Partial, exploring some of the downsides to url / domain centric identity.
- May, 2008 - XTech 2008, Dublin
- My talk is entitled Data portability for whom? Some psychology behind the tech, it is on the Thursday afternoon. I'm also on the programme committee for XTech again this year.
- May, 2008 - Web Seminar for Society of Scholarly Publishing
- I'm giving a broad overview of social software at this seminar on the 15th. You need to register for the event.
- April, 2008 - Web 2.0 Expo San Francisco 2008
- I gave an updated version of the Website Psychology talk I gave at BarcampLondon3.
- March, 2008 - SXSWi 2008, Austin, Texas
- I spoke on a panel entitled “Green Software, Really?”, following up on the ideas from foocamp from June last year.
- February, 2008 - O'Reilly Tools of Change, New York
- I gave a talk entitled “From Buyers of Books to a Community of Readers”. I explored how communities and published content can interact successfully, my toc08 slides are on slideshare.
- February, 2008 - Social Graph Foo Camp, Sebastopol, CA
- I ran a session about the psychology behind persistent identity and learning a lot too.
- November, 2007 - BarcampLondon3
- I gave a talk about cognitive psychology and how it applies to web development, entitled Website Psychology, the slides are on slideshare.
- November, 2007 - Eduserv OpenID event
- I spoke on “The changing identity of research”, exploring how researchers and OpenID will interact together.
- June, 2007 - foocamp 2007
- I ran a session on green code, based on some of the ideas in the green code blog post on takeoneonion.org.
- June, 2007 - O'Reilly - Tools of Change for Publishing 2007
- I spoke on social software and how it can work for publishers, specifically thinking about books. The presentation is available and it is also on slideshare.
- May, 2007 - xtech 07
- I gave a talk entitled "What is your provenance?" (2.4MB PDF). I looked at networks of social networks and similar themes. You can read the abstract of the talk and the full paper on the xtech07 website. Notes on the talk at The Guardian.
- I subsequently gave this talk at Google in June, the video of the provenance talk is on google video.
- I also gave a lightning talk on the internet time ideas I spoke about at barcamp, I've put together a set of slide in PDF format, (pre)Internet Time.
- February, 2007 - BarcampLondon2
- I spoke about “Time, History and the Internet” (15MB PDF), I wrote about the presentation on takeoneonion. This was an extended version of the session I ran at eurofoo, but given more as a presentation.
- September, 2006 - eurofoo
- I gave a session on “preweb data”, as we put our back catalog online we are forgetting about all the context that went with that content when it was originally published.
- September, 2006 - RailsConf Europe 2006
- Tom Armitage and I gave a talk entitled “Everything's Interconnected: Polymorphism as Design Pattern for Social Software”, which covered the high level parallels between polymorphic association and social network design, there is an MP3 of Tom and I speaking.
- July, 2005 - OpenTech
- I gave a short talk entitled “Every page tells a story”, which discussed some ideas around how literature can become a social experience and how we can understand the past. SocialDocuments.com has the gist of the Novel Context idea and other thoughts on annotation.
- May, 2005 - Xtech05
- I spoke about talkeuro, a version of the European Constitution which I made open for annotation. The talk was entitled “Open(ed) data, now what — bringing the European Constitution to the people”. You can read the presentation PDF, or the paper I wrote for the conference proceedings.
- March, 2005 - O'Reilly's Emerging Technology conference
- Tom Coates, Matt Biddulph and I spoke about “Programme Information Pages: An Architecture for an On-Demand World” based on work at the BBC. The presentation is available.
- Mark Simpkins and I gave a talk on “Public Documents as weblogs”, around engaging people with the consultation process.
Thanks to John Queen for the magpie.
|
OPCFW_CODE
|
The Tasks panel allows you to see progress on your downloads, time estimates for active actions, and completed requests, such as Save to Camera Roll and Share (as well as Stitch Panorama, Create Quickshot video, and so on).
Tasks (items in the queue) are PAUSED whenever you pull the USB cable or turn off your DJI drone or Osmo. However, items automatically resume once the connection is re-established with the same device and the same micro SD.
Let's get deep into Tasks:
Downloading and processing media from a DJI device is not ultra-fast. Also, we allow downloading while you continue to organize footage, or while the app is working in the background. That's why we developed a special panel where you can keep track of all your requests. As soon as you start downloading photos or videos, the app immediately creates a task and places it on this screen.
You can access the Tasks panel at any time by pulling the Tasks tag, located on the right side of the screen. The tag itself displays the current progress and an estimate for the remaining tasks.
Tasks are divided into sections representing a DJI device or devices (if you link multiple). Each section contains the device name, connection status, battery level, and actual progress with a time estimate. Also, you can open a context menu for the section and cancel tasks (or remove completed tasks).
Note: "Storage" section is used when you perform Share actions for media items already transferred to App Storage.
Tasks are auto-resumable: if you want to finish a download later, swap a battery, or connect another DJI device, then once you establish the link again, tasks will resume from the point where they stopped (at this point, we hope you are impressed :)). Moreover, they keep running even if you minimize Sync for DJI; the app maintains the connection with the drone/Osmo while there are things to download. You'll be guided with push notifications about milestones or changes in the downloading queue. For the best asynchronous results, we use the Location service (read more about it here).
Protip: download multiple media items from DJI device using Multiselect Mode. In this case, selected files will be combined into one task.
Each task has context actions once it is completed. Use a swipe to the left on an item row (iOS) or a long tap on a row (Android). You can Share to other apps or hide the task.
Note: Successful tasks (if they do not require further action) will be erased from the list after the next launch of an app.
Estimation of the remaining time:
Starting with iOS 12.1, we use prediction algorithms together with machine learning to estimate how much time is required for pending Tasks. We also calculate how much battery will be drained, and you'll see a warning icon on the Tasks tag if the estimated drain may exceed the remaining charge. Estimation is unique to each DJI device and mobile device you use for synchronization, and requires some time for calibration.
|
OPCFW_CODE
|
|Is AJAX post-loaded content cloaking?|
I'm considering AJAX'ing in content after the DOM is loaded to increase the page load time.
Would Google and other SE's consider this cloaking, since the AJAX'ed content does not show up in View Source, but it is viewable by human users?
Any insight would be appreciated.
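For reference, here's roughly what I mean (a sketch; the fragment URL and target element id are placeholders):
// Load extra content once the DOM is ready
document.addEventListener('DOMContentLoaded', function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/fragment.html');
  xhr.onload = function () {
    document.getElementById('extra').innerHTML = xhr.responseText;
  };
  xhr.send();
});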
Cloaking would be when the content seen by google is not seen by humans.
First off, I'm not "detecting" Googlebot and deciding to serve different content. That would be blatant cloaking, I get that.
Let me spin what you said around...
Cloaking would be when the content seen by humans is not seen by Google.
I guess a better question to ask is "does Googlebot see my AJAX'ed post-loaded content?"
If Googlebot does see it, then it's certainly not cloaking since the content is identical.
If Googlebot does not see it, then what?
Is it Googlebot's fault it doesn't see the post-loaded content, or is it my fault because I haven't implemented an "HTML Snapshot" mechanism vis-a-vis [code.google.com]
Granted, the link I referred to above warns against using the AJAX Crawlable methodology to cloak (intentionally serve different content for different users), but I'm wondering that if Googlebot can't see the post-loaded content then, under a Manual Review, it could be considered 2 different versions of content - humans getting 4/4 of the content, while Googlebot only sees 3/4 of the content.
I hope I'm making myself clear. I apologize if I'm not articulating well, but I'm certainly not referring to blatant user-agent-detection cloaking.
I believe the spin would be a logical fallacy - affirming the consequent. If the content Google used to determine that the page is relevant is plainly seen by the visitor, then it would not be considered cloaking; if the HTML shown to Google is not the HTML shown to people, it is cloaking.
... I would add that Google apparently also wants this content to be near the top of the page and not obscured by ads but the beats a different drum called Panda ...
Google looks bad when the content seen by the visitors does not match what the visitor expected. Google does not like it when web masters make them look bad.
Now if there is additional content that people can see but bots can not ... IE flash, images, ajax ... these are not cloaking; But if it makes google look bad they will come up with a penalty.
Google does not care about being fair they care about looking good. If SEOs help Google to look good they like SEOs. If SEOs make Google look bad they know how to deal with us. Fault is irrelevant when they have a billion pages to pick from.
Can Google see the material? That is a different and very interesting question. My research seems to indicate that if content is shown in the first few seconds by script, Google sees it. An ajax call picking up another file and embedding it is something I've not specifically attempted to test. I would assume the space to be seen by Panda as empty and assume Panda does not attempt to determine what goes into it.
It's not cloaking as far as I understand it. If you don't use the hash-bang (#!) convention that Google introduced, then you most likely have content that Google cannot currently index. That's a lot different from serving content to Google that only Google can see (cloaking).
Added to that, in recent years Google spokespeople usually talk about "deceptive cloaking", not just "cloaking". There are all kinds of non-deceptive situations where googlebot gets something different than another user agent, for example, geo-location within a site might serve different content according to regional differences. That is also not deceptive cloaking, as long as the IP googlebot visits from sees the same thing as any other IP from that region.
|If the content Google used to determine that the page is relevant is plainly seen by the visitor, then it would not be considered cloaking |
|That's a lot different from serving content to Google that only Google can see (cloaking). |
Thank you both, that's exactly what I wanted to hear, and inline with my gut feeling. Sometimes you just have to hear someone else say it.
Agreed, it's not blatant cloaking that is against Google's policies. I've loaded content into a page via AJAX to hide it from Googlebot in many cases myself.
But as SanDiegoFreelance points out, if you take it to any kind of extreme, Google would have no problem whacking your site with a penalty.
I've also seen posts here that indicate that Google may devalue your site if it thinks there are big empty sections of white space like iframes that it can't see.
|
OPCFW_CODE
|
import java.io.*;
import java.lang.*;
class BasedeDatos
{
private ObjectInputStream ObjetoLeer;
private File NombreArchivo;
private Cliente DatosPersonales;
private ClaseConsultar InstanciaClaseConsultar;
private ClaseInsertar InstanciaClaseInsertar;
private ClaseCampo DatosUsuario;
private Registro Resultado;
private GuardarRegistro ResultadoG;
private ClaseListaUsuarios InstanciaListaUsuarios;
private AsignarPuerto Muelle;
private int Registrarse, PuertoAsignar, identificador;
private String[] ListaUsuarios;
private ClaseModificacion InstanciaModificacion;
public BasedeDatos()
{
DatosPersonales = new Cliente("Juan","Pedro","Carlos","Miguel", 1, 1); //Create a dummy instance of the Cliente class to hold the data entered by the user
ResultadoG = new GuardarRegistro(); //Instance of the class where the verification result will be stored
ListaUsuarios = new String[0]; //Create a placeholder list of chat users
}
public Cliente RecuperarFile()throws Exception //Method that recovers the user-entered data from the file
{
try
{
NombreArchivo = new File("DatosPersonales.txt"); //File name
ObjetoLeer = new ObjectInputStream(new FileInputStream(NombreArchivo)); //Open the streams for the file containing the user's data
DatosPersonales = (Cliente)ObjetoLeer.readObject(); //Read the object from the file
}
finally //Close the streams
{
if(ObjetoLeer!=null)
ObjetoLeer.close();
return DatosPersonales; //Return the reference that points to the user's data
}
}
public int ConsultarDatos() //CHECKS WHETHER THE DATA ENTERED BY THE USER IS VALID
{
identificador = DatosPersonales.RetornarIdentificador();
if(identificador==0) //Data coming from the chat interface
{
InstanciaClaseConsultar = new ClaseConsultar("Principales", DatosPersonales);
}
else //Data coming from the personal-data interface
{
InstanciaClaseConsultar = new ClaseConsultar("Personales", DatosPersonales);
}
Registrarse = InstanciaClaseConsultar.RetornarRegistro(); //Return the value of Registrarse and its
return Registrarse; //corresponding meaning
}
public int AsignaciondePuertos() //ASSIGNS AN INDIVIDUAL PORT TO EACH USER
{
DatosUsuario = InstanciaClaseConsultar.RetornarDatosUsuario(); //Get the reference to the user's data to be stored in the database
Muelle = new AsignarPuerto(); //Assign each user an individual port over which the client-server communication will take place
PuertoAsignar = Muelle.RetornarPuertoAsignar(); //Return the value of the port assigned by the server
return PuertoAsignar;
}
public void ObtenerLogines() //GETS THE LIST OF USERS ACTIVE IN THE CHAT
{
InstanciaListaUsuarios = new ClaseListaUsuarios(); //Get the list of users active in the chat,
ListaUsuarios = InstanciaListaUsuarios.RetornarListaUsuarios(); //which will be sent to the user
}
public void InsertarDatos(String CadenaIP) //INSERT OR MODIFY THE DATA ENTERED BY THE USER
{
if(Registrarse==1 && identificador<=1) //The data entered in the data interface is going to be registered
{
InstanciaClaseInsertar = new ClaseInsertar(DatosUsuario, CadenaIP, PuertoAsignar, 1); //Insert the user into the database
}
if(Registrarse==2) //The data entered in the chat interface is valid
{
InstanciaClaseInsertar = new ClaseInsertar(DatosUsuario, CadenaIP, PuertoAsignar, 2); //Modify the port, IP string and status fields in the database
}
if(Registrarse==1 && identificador>1) //The data entered in the update interface can be modified
{
InstanciaModificacion = new ClaseModificacion(DatosUsuario, CadenaIP, PuertoAsignar, identificador); //Replace the user's old data with the newly
} //entered data
}
public void CrearEnlace()
{
if(Registrarse == 4) //SESSION ALREADY OPEN
{
try
{
Resultado = new Registro("Su sesion ya se encuentra abierta",ListaUsuarios,0); //Guardamos en un archivo el resultado de la verificacion en este caso:
ResultadoG.GuardarDatosRegistro(Resultado); //Lista de usuarios activos vacia, puerto de conversacion igual a cero y
} //un mensaje que indica que su sesion esta abierta ya sea en otro computador o por
//otra persona
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
if(Registrarse == 3) //THE DATA ENTERED IN THE CHAT INTERFACE DOES NOT EXIST
{
try
{
Resultado = new Registro("Sus datos son inexsistentes. Por favor registrese",ListaUsuarios,0); //Guardamos en un archivo el resultado de la verificacion en este caso:
ResultadoG.GuardarDatosRegistro(Resultado); //Lista de usuarios activos vacia, puerto de conversacion igual a cero y
} //un mensaje que indica que los datos introducidos son erroneos o inexistentes
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
if(Registrarse == 2 || Registrarse ==1) //THE DATA ENTERED IN THE CHAT INTERFACE IS VALID;
{ //THE DATA ENTERED IN THE DATA OR UPDATE INTERFACE WAS
try //INSERTED OR MODIFIED WITHOUT ANY PROBLEM
{
Resultado = new Registro("InterfazPrincipal",ListaUsuarios, PuertoAsignar); //Save the verification result to a file; in this case:
ResultadoG.GuardarDatosRegistro(Resultado); //the list of active users, the conversation port assigned by the server and
} //a message indicating that the entered data was inserted or modified correctly
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
if(Registrarse == 0) //THE LOGIN ENTERED IN THE DATA INTERFACE ALREADY EXISTS
{
try
{
Resultado = new Registro("Su login ya existe. Por favor ingrese uno nuevo", ListaUsuarios,0); //Guardamos en un archivo el resultado de la verificacion en este caso:
ResultadoG.GuardarDatosRegistro(Resultado); //Lista de usuarios activos vacia, puerto de conversacion igual a cero y
} //un mensaje que indica que el login introducido ya existe
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
}
}
|
STACK_EDU
|
Escaping Debtor's Prison: Presenting v0.5.0
We've been doing hard time for the past year, paying off tech debt. While this type of thing slows down development in the short term, it speeds up all future development and extends the useful life of the product. With v0.4.0, we started the migration to a new tech stack. That new tech stack has now been applied to the rest of the application. Please join me for a tour.
Version 0.5.0 (2019-01-11)
• New Tech Stack & UI Overhaul (Flagship Feature)
New Home Screen
Logging in brings you to the new home screen dashboard, providing quick access to your recently modified decks and recently posted news.
New Home Screen
You'll also notice a change to the outer shell. We moved the user avatar to the bottom left, increased the size of the logo, and reorganized the menu options. We hope this provides a more intuitive user experience.
The old, table-based decks page got a makeover. Decks are now represented by tiles, and each format has its own scrollable carousel of decks.
New Decks Screen
The UI of the deck editor got shuffled around a bit.
New Deck Editor
All of the deck editor changes:
• The deck's name is now the prominent part of the screen (with support for inline editing)
• The search input got moved into the Deck tab where it contextually makes more sense
• The sideboard panel now appears to the right of the maindeck rather than underneath
• Removing cards from the drag and drop deck view has been made more intuitive with a drop target
• The Add Cards Dialog has become the Update Number of Copies Dialog
• Added ability to sort cards in deck in descending order (e.g. by type)
• Share Deck is temporarily unavailable as we rethink the sharing experience
• Filter by Color has been removed. Please use Sort by Color instead
• Install as App
On the Display Preferences page, a new Install App option appears. Here, App refers to Progressive Web App (PWA), technology allowing websites to be installed like native apps. When MITB is installed as an app, it can be launched from the desktop in its own full-screen window, giving more real estate to deck building and solitaire mode.
In addition to the changes listed, we've made many bug fixes and performance improvements. This was our biggest release to date! We're very happy to be freed from the old tech stack (RIP YUI 3).
Stay tuned for v0.6.0 and a return to new features! Expect more improvements to the deck editor and solitaire mode! Thanks for reading!
|
OPCFW_CODE
|
PowerShell, How to provide a pipe variable?
This is a high level question as the details might not be precise, as I'm not in my office but home.
I have a function that accept variables through pipe:
get-csv | myfunc
The pipe source is the fields from a .csv file.
How do I define variables and pipe them into myfunc()? Would a hashtable be good?
$my_pipe_variables = @{ Color = ‘Red’; Doors = 4; Convertible = $false}
$my_pipe_variables | myfunc
would that be the correct syntax?
Update:
I finally got around to trying it, but it is not working for me, as my myfunc accesses pipe variables directly via $_. Here is the demo:
function showThem { echo Color: $_.Color }
> [pscustomobject]@{ Color = ‘Red’; Doors = 4; Convertible = $false} | showThem
Color:
How can I make it work for myfunc, which accesses pipe variables directly via $_?
@mklement0, my only intent is to replace reading from .csv with providing from a variable; all the rest is shooting in the dark. Please ignore and show me the correct way of doing it, for sample .csv input of Color = ‘Red’; Doors = 4; Convertible = $false. thx.
Instead of piping, you can use the hashtable for splatting the parameters to your function. P.s. get rid of the curly 'smart-quotes' and use straight ones.
Thanks for the link @Theo. It might not suit my specific case, as my myfunc() accesses pipe variables directly via $_, instead of defining Params.
Import-Csv (not Get-Csv), for reading CSV data from a file, and ConvertFrom-Csv, for reading CSV data from a string, output a collection of custom objects (type [pscustomobject]) whose properties reflect the CSV data's columns.
To construct such custom objects on demand in order to simulate Import-Csv / ConvertFrom-Csv input, use the [pscustomobject] @{ <propertyName>=<value>; ... } syntax (PSv3+).
E.g., to simulate 2 rows of CSV data with columns Color, Doors, and Convertible:
[pscustomobject] @{ Color = 'Red'; Doors = 4; Convertible = $false },
[pscustomobject] @{ Color = 'Blue'; Doors = 5; Convertible = $false } |
...
Separately, in order to make a function process input from the pipeline object by object via the automatic variable $_, it must have a process { ... } block - see the help topic about_Functions.
# Define the function body with a process { ... } block, which
# PowerShell automatically calls for each input object from the pipeline,
# reflected in automatic variable $_
function showThem { process { "Color: " + $_.Color } }
[pscustomobject] @{ Color = 'Red'; Doors = 4; Convertible = $false },
[pscustomobject] @{ Color = 'Blue'; Doors = 5; Convertible = $false } |
showThem
Note: In PowerShell, echo is an alias of Write-Output, whose explicit use is rarely needed; instead, the function relies on PowerShell's implicit output: the result of the string concatenation (+) implicitly becomes the function's output.
The above yields:
Color: Red
Color: Blue
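Relatedly, if you want to simulate the CSV file itself rather than constructing the objects by hand, you can pipe an inline here-string through ConvertFrom-Csv (mentioned above) into the same function - a quick sketch:

@'
Color,Doors,Convertible
Red,4,False
Blue,5,False
'@ | ConvertFrom-Csv | showThem

This yields the same two "Color:" lines as above (note that the property values come back as strings, which is fine for this demo).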
|
STACK_EXCHANGE
|
As part of our build automation here at Riot, we've been trying to find solid options to backup our servers (configs, logs, data etc.) to an off-site location. Our provider does daily backups of our servers and restores data on demand, which is certainly nice, but left us wanting more fine grained control of the process. Cost, simplicity and security were our top concerns, and our search led us to start using duplicity combined with Amazon's S3. Here's how we use it.
You will need to have librsync installed on your system as well. In ubuntu:
apt-get install librsync-dev
Since duplicity is a python app, we chose to install it in a virtualenv. It's pip-installable, but is not on PyPI, so you will have to point pip at the tarball.
virtualenv duplicity
cd duplicity
source bin/activate
pip install -E . http://code.launchpad.net/duplicity/0.6-series/0.6.11/+download/duplicity-0.6.11.tar.gz boto
or in ubuntu:
apt-get install duplicity
If you want to encrypt your backups you will need to generate a GnuPG key, like so:
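For example, with GnuPG installed:

gpg --gen-key   # interactive; prompts for key details and a passphrase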
You can accept the default options during install, make sure you add in a passphrase to the key, as duplicity will not work without it.
S3 is just one of the many backends duplicity supports. Their docs have more info.
Here's our backup script:
export AWS_ACCESS_KEY_ID='xxxxxx'
export AWS_SECRET_ACCESS_KEY='xxxxxx'
export PASSPHRASE='xxxxxx'
export NOW=`date +"%Y-%m-%d-%H-%M"`
duplicity --exclude ".*" --include "**" --full-if-older-than 30D \
  --log-file /var/log/duplicity/s3-$NOW.log --verbosity 6 \
  --s3-use-rrs --s3-use-new-style --asynchronous-upload \
  /var/www/backups s3+http://riot.xxxx.xxxx
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export PASSPHRASE=
export NOW=
Restoring is a snap too. Though we haven't had the need to restore yet, this is how you would:
# Restore a file
duplicity --file-to-restore var/www/backups/code.tar s3+http://riot.xxxx.xxxx ~/tmp/restore
# Restore a directory
duplicity --file-to-restore var/www/backups/db s3+http://riot.xxxx.xxxx ~/tmp/restore
# Restore everything from a point in time
duplicity -t 2011-02-19T12:20:45 s3+http://riot.xxxx.xxxx ~/tmp/restore
The backup script runs hourly and does incremental backups to our S3 bucket.
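For reference, the hourly schedule is just a crontab entry along these lines (the script path and log location here are placeholders):

0 * * * * /usr/local/bin/s3-backup.sh >> /var/log/duplicity/cron.log 2>&1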
I wrote an implementation of the Levenshtein algorithm in python a few days back, and today while noodling around, I came across another implementation of the same algorithm, written by Magnus Hetland, the author of Python Algorithms, and wanted to see which was the "faster" implementation.
So, enter the timeit module in Python. Here's what I did:
>>> def levenshtein(a,b):
...     "Magnus's Code"
...     [ Code here ]
...
>>> def leven(a,b):
...     "Rohit's Code"
...     [ Code here ]
...
>>> import timeit
>>> t1 = timeit.Timer(setup='from __main__ import levenshtein', stmt='levenshtein("plumber","causes")').timeit()
>>> t1
50.655728101730347
>>> t2 = timeit.Timer(setup='from __main__ import leven', stmt='leven("plumber","causes")').timeit()
>>> t2
68.573153972625732
Seems like Magnus has me beat :(.
One point to note here is that timeit() temporarily turns off garbage collection, so if your code requires it you will need to add it in.
>>> import gc
>>> setup = """\
... from __main__ import levenshtein
... gc.enable()
... """
>>> t2 = timeit.Timer(setup=setup, stmt='levenshtein("plumber","causes")').timeit()
There is also quite a nice collection of python performance tips here.
|
OPCFW_CODE
|
For units such as departments
Our service is funded by departments and schools, and members of these units can receive our services free of charge for a short period of time (in accordance with the shares of funding). In addition to the basic service, researchers and group leaders can request long-term support, which they pay for themselves.
By joining the Research Software Engineering service, you provide the highest-quality computational tools to your researchers, enabling the best possible research and attracting the best possible candidates. You fund a certain amount of time, and the actual cost decreases when groups pay for the long-term service themselves. For both short and long-term projects, our surveys indicate a significant efficiency: (researcher time saved) ≥ 5 × (time we spend).
Case study: Systematic improvements
Your department has a lot of people doing little bits of programming everywhere, but everyone is working alone. What if they could work together better? By joining the RSE program as a unit, your staff can get up to X hours of free help learning tools to make their programming and data work better. After a few years, you notice a dramatic cultural shift: there is more collaboration and higher-quality work. Perhaps you already see a change in your KPIs.
Benefits to schools/departments:
Increase the quality and efficiency of your research by providing the best possible tools and support.
Provide hands-on technical research services to your community at a higher level than basic IT (see Scicomp garage).
More societal impact, for example ChatGPT-type preview interfaces.
Help with data management, open science, FAIR data - be more competitive for funding and help get value out of your unit’s data.
You will be able to set priorities for your funding: for example, whether to focus on a certain strategy, a wide variety of projects, or a few high-impact projects.
Benefits to groups:
Receive staff/on-call software development expertise within your group, without having to make a separate hire. We don’t disappear right after your project.
Your researchers focus on their science while improving their computational skills by co-working with us.
How to join
The RSE program is a part of Aalto Science-IT (Aalto Scientific Computing), so it is integrated into our computing and data management infrastructure and training programs. You don’t just get a service, but a whole community of experts. We can seamlessly work with existing technical services within your department for even more knowledge transfer - if it matches their mission, your existing technical services can even join us directly.
In practice, joining us means that you contribute a certain amount of funding, which allows us to hire more staff (combined with the other departments), to provide a certain amount of time to research groups in your unit.
If you would like to join, contact Richard Darst or rse-group at aalto.fi.
|
OPCFW_CODE
|
Our Investment in Qwak - The Machine Learning Engineering Platform
Today, we’re excited to announce that Leaders Fund is leading a $15M Series A funding round into Qwak, a Machine Learning (ML) Engineering platform that helps companies 10x their ML organization’s throughput.
Qwak’s founders are a team of software/ML engineers and designers. In their roles leading ML teams at Payoneer, Wix, and AWS, they saw that when designed, built, and deployed correctly, ML-based products could deliver smarter, more valuable solutions to end customers. The breakthroughs they saw when deploying ML-based products were significant, so the desire to add more talent and increase ML output grew.
Their quest to scale ML throughout their products ran into a number of challenges though. Firstly, hiring data scientists and ML engineers was extremely difficult and costly. Furthermore, once those people were hired, they lacked the tools and solutions to effectively manage the ML lifecycle from the point at which a model was ready to be deployed, to building all the infrastructure required to run ML based products at scale. Their ability to move from model design to model implementation was severely constrained, ultimately slowing them down and decreasing productivity.
They each searched for tools that could help their teams move from model design to model implementation and scaling, but after finding only a series of point solutions that required hiring more people, not fewer, they decided to build it internally themselves.
After seeing the benefits, they realized that not all companies had the resources that large companies had to build an internal ML orchestration solution, but most had a desire to increase the value delivered to end customers using ML. While 9 out of 10 executives believe that adopting ML is crucial to compete, only 1 out of every 10 models actually makes it to production. Most companies in the world were experiencing this same problem - the inability to take ideas, configure ML models, run them in production in a reasonable time frame, and scale the iteration/throughput of their teams.
It was out of this experience that Qwak was born. Qwak’s mission is to build an ML Engineering platform accessible to any company looking to deploy ML-based products. Qwak’s solution eliminates internal bottlenecks and accelerates ML throughput through ML orchestration and automation.
By automating the key MLOps steps required to deploy and run ML based products at scale, Qwak dramatically increases the output of ML teams without slowing them down, ultimately driving more value to end customers through rapid iteration and more ML based products. World-leading companies like Yotpo, Guesty, Skyline AI and JLL are using Qwak today to scale their ML output, saving them time and from having to build this internally.
We’re also excited to join Nate Meir of Stage One Ventures and Modi Rosen of Amiti Ventures in supporting the Qwak team in their journey.
To learn more about how Qwak helps companies build more ML-driven products faster, visit Qwak.com.
|
OPCFW_CODE
|
These are the Clara OCR Frequently Asked Questions. They're useful for a first contact with Clara OCR. If you're looking for information on how to use Clara OCR, please try the Clara OCR Tutorial instead. Clara OCR can be found at http://www.claraocr.org/.
Clara is an OCR program. OCR stands for "Optical Character Recognition". An OCR program tries to recognize the characters from the digital image of a paper document. The name Clara stands for "Cooperative Lightweight chAracter Recognizer".
Clara is a cooperative OCR because it offers a web interface for training and revision, so these tasks can benefit from the revision effort of many people across the Internet. However, Clara OCR also offers a powerful X-based GUI for standalone usage.
Clara OCR is distributed within the terms of the GNU General Public License (GPL) version 2. Yes, Clara OCR is Free. Yes, Clara OCR is Open Source. Clara OCR is not "Shareware", nor "Public Domain".
Clara OCR is unrelated to the GNU Project but its development is strongly based on GNU programs (GCC, Emacs and others), as well as on other free software, like the Linux kernel and XFree86.
Clara OCR is free software because we agree on the free software ideal as stated by the GPL. To make this agreement explicit we also adopted some suggestions from the Free Software Foundation. These suggestions apply to the Clara OCR documentation:
(a) GPL programs are referred to as "free software", not "open source".
(b) The term "GNU/Linux (operating system)" is used rather than "Linux (operating system)".
(c) We do not recommend non-free software and do not refer the user to non-free documentation for free software.
Furthermore, Clara OCR will support Guile as an extension language in the near future.
Obs. We write "free software" instead of "open source" just for coherence. We dislike antagonisms between the various initiatives created over the years to freely produce, use, change and distribute software.
Clara OCR is being developed on 32-bit Intel running GNU/Linux. Currently Clara OCR won't run on big-endian CPUs (e.g. Sparc) nor on systems lacking X windows support (e.g. MS-Windows). A relatively fast CPU (300MHz or more) is recommended. A port to MS-Windows is being worked on. See also the next question.
Yes, but the X Windows headers and libraries are required anyway to compile the source code, and the X Windows libraries are required to run even the Clara OCR command-line interface. Unless someone reworks the code, it's not possible to detach the GUI in order to compile Clara OCR on systems that do not support X Windows.
Clara OCR will hopefully run on any graphic environment based on Xwindows, including KDE, GNOME, CDE, WindowMaker and others. Clara OCR depends only on the X library, and does not require GTK, Qt or Motif to run. Clara OCR does not use the X Toolkit (aka "Xt"). Clara OCR has been successfully tested on X11R5 and X11R6 environments with twm, fvwm, mwm and others.
As a generic recogniser, Clara OCR may be tried with any language and any alphabet. However, there are some restrictions. Currently Clara OCR expects the words to be written horizontally, and there are some heuristics that assume geometric relationships typical of the Latin alphabet and the accents used by most European languages. Support for language-specific spell checking is expected to be added soon.
No, Clara OCR does not support Unicode, and support for the ISO-8859 charsets is partial.
No, Clara OCR is not omnifont. Clara OCR implements an OCR model based on training. This model makes training and revision one and the same thing, making it possible to reuse training and revision information (see also the next question).
This is a quote from the Clara Advanced User's Manual:
Clara differs from other OCR software in various aspects:
1. Most known OCRs are non-free and Clara is free. Clara focuses on the X Window System. Clara offers batch processing, a web interface and supports cooperative revision effort.
2. Most OCR software focuses on omnifont technology, disregarding training. Clara does not implement omnifont techniques and concentrates on building specialized fonts (some day in the future, however, maybe we'll try classification techniques that do not require training).
3. Most OCR software makes the revision of the recognized text a process totally separated from the recognition. Clara pragmatically joins the two processes, and makes training and revision parts of one same thing. In fact, the OCR model implemented by Clara is an interactive effort where the usage of the heuristics alternates with revision and fine-tuning of the OCR, guided by the user's experience and feeling.
4. Clara allows you to enter the transliteration of each pattern using an interface that displays a graphic cursor directly over the image of the scanned page, and builds and maintains a mapping between graphic symbols and their transliterations in the OCR output. This is a potentially useful mechanism for documentation systems, and a valuable tool for typists and reviewers. In fact, Clara OCR may be seen as a productivity tool for typists.
5. Most OCR software is integrated with scanning tools, offering the user a unified interface to execute all steps from scanning to recognition. Clara does not offer one such integrated interface, so you need separate software (e.g. SANE) to perform scanning.
6. Most OCR software expects the input to be a graphic file encoded in TIFF or other formats. Clara supports only raw PBM and PGM.
PBM, PGM and PPM are graphic file formats defined by Jef Poskanzer. PNM is not a graphic file format, but a generic reference to those three formats. In other words, to say that a program supports PNM means that it handles PBM, PGM and PPM.
PNM files may be "raw" or "plain". The plain versions are rarely used. Clara OCR does not support plain PBM nor plain PGM. To make sure about the file format, try the "file" utility, for instance:
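Running it on a scanned page (the filename here is just an example) reports the format:

file page.pbm
page.pbm: Netpbm PBM "rawbits" image data

(The exact wording varies between versions of the "file" utility.)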
You cannot. Clara OCR includes no support for scanners. To scan paper documents, use other software, like the one bundled with your scanner, or SANE (http://www.mostang.com/sane/). The development tests use SANE.
All OCR programs will disappoint you depending on the texts you're trying to recognize. If you're a developer, join the Clara OCR development effort and try to make it behave better on your texts. If you are not a developer, wait for a new version and try again.
If the documentation did not solve your problems, try the discussion list.
No. Clara OCR is just a tool for character recognition like many others that can be purchased or are bundled with scanners. The Clara OCR Project asks all users to be aware of copyright law and not to infringe it. The Clara OCR Project abhors any attempt to infringe the legitimate laws of any country.
Nonetheless, the Clara OCR Project supports the free and public availability of materials produced to be free, or of materials out of copyright due to their age. The Clara OCR Project recognizes the right of anyone to produce free or non-free materials.
The best way is to use Clara OCR to recognize the texts you're interested in, and try to make it adapt better to them. The Developer's Guide should help in this case (C programming skills are required). The Clara OCR Project acknowledges all efforts to make Clara OCR more widely known and used.
|
OPCFW_CODE
|
package com.unlimited.oj.webapp.filter;
import org.apache.commons.lang.StringUtils;
import org.springframework.util.PatternMatchUtils;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.UrlPathHelper;
import javax.servlet.FilterChain;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.Iterator;
import java.util.Set;
/**
 * A simple filter that allows the application to continue using the .html extension for actions but also allows
 * static files to be served up with the same extension, e.g. allowing Dojo to serve up its HTML template code.
 * The filter works on an include/exclude basis: requests for active pages are redirected by the filter to the
 * dispatch servlet, while all Dojo-related .html requests are allowed to pass straight through to be processed
 * by the servlet container as per normal.
 */
public class StaticFilter extends OncePerRequestFilter {
private final static String DEFAULT_INCLUDES = "*.html";
private final static String DEFAULT_EXCLUDES = "";
private static final String INCLUDES_PARAMETER = "includes";
private static final String EXCLUDES_PARAMETER = "excludes";
private static final String SERVLETNAME_PARAMETER = "servletName";
private String[] excludes;
private String[] includes;
private String servletName = null;
/**
* Read the includes/excludes parameters and set the filter accordingly.
*/
public void initFilterBean() {
String includesParam = getFilterConfig().getInitParameter(INCLUDES_PARAMETER);
if (StringUtils.isEmpty(includesParam)) {
includes = parsePatterns(DEFAULT_INCLUDES);
} else {
includes = parsePatterns(includesParam);
}
String excludesParam = getFilterConfig().getInitParameter(EXCLUDES_PARAMETER);
if (StringUtils.isEmpty(excludesParam)) {
excludes = parsePatterns(DEFAULT_EXCLUDES);
} else {
excludes = parsePatterns(excludesParam);
}
// if servletName is specified, set it
servletName = getFilterConfig().getInitParameter(SERVLETNAME_PARAMETER);
}
private String[] parsePatterns(String delimitedPatterns) {
//make sure no patterns are repeated.
Set patternSet = org.springframework.util.StringUtils.commaDelimitedListToSet(delimitedPatterns);
String[] patterns = new String[patternSet.size()];
int i = 0;
for (Iterator iterator = patternSet.iterator(); iterator.hasNext(); i++) {
//no trailing/leading white space.
String pattern = (String) iterator.next();
patterns[i] = pattern.trim();
}
return patterns;
}
/**
 * This method checks to see if the current path matches the include or exclude patterns. If it matches the
 * includes and not the excludes, it dispatches to the static resource and ends the filter chain. Otherwise, it
 * forwards to the configured servlet (if one was specified) or passes the request to the next filter in the chain.
 *
 * @param request the current request
 * @param response the current response
 * @param chain the filter chain
 * @throws ServletException when something goes wrong
 * @throws IOException when something goes terribly wrong
 */
public void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
FilterChain chain) throws IOException, ServletException {
UrlPathHelper urlPathHelper = new UrlPathHelper();
String path = urlPathHelper.getPathWithinApplication(request);
boolean pathExcluded = PatternMatchUtils.simpleMatch(excludes, path);
boolean pathIncluded = PatternMatchUtils.simpleMatch(includes, path);
if (pathIncluded && !pathExcluded) {
if (logger.isDebugEnabled()) {
logger.debug("Forwarding to static resource: " + path);
}
if (path.contains(".html")) {
response.setContentType("text/html");
}
RequestDispatcher rd = getServletContext().getRequestDispatcher(path);
rd.include(request, response);
return;
}
if (servletName != null) {
RequestDispatcher rd = getServletContext().getNamedDispatcher(servletName);
rd.forward(request, response);
return;
}
chain.doFilter(request, response);
}
}
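/*
 * Example web.xml registration (a sketch; the filter name, exclude pattern and
 * servlet name below are assumptions - adapt them to your own deployment):
 *
 * <filter>
 *     <filter-name>staticFilter</filter-name>
 *     <filter-class>com.unlimited.oj.webapp.filter.StaticFilter</filter-class>
 *     <init-param>
 *         <param-name>includes</param-name>
 *         <param-value>*.html</param-value>
 *     </init-param>
 *     <init-param>
 *         <param-name>excludes</param-name>
 *         <param-value>/admin/*</param-value>
 *     </init-param>
 *     <init-param>
 *         <param-name>servletName</param-name>
 *         <param-value>dispatcher</param-value>
 *     </init-param>
 * </filter>
 * <filter-mapping>
 *     <filter-name>staticFilter</filter-name>
 *     <url-pattern>*.html</url-pattern>
 * </filter-mapping>
 *
 * With this configuration, *.html requests are served as static content unless
 * they match the exclude pattern, in which case they are forwarded to the
 * "dispatcher" servlet.
 */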
|
STACK_EDU
|
The motherboard in it died after ~3 months of owning the system, so they put a new one in. Anyway, I was playing WoW as stated above and I had a crash - it looked memory related, but the log didn't save for some reason. So I exit WoW, and I just have a blank desktop, no start bar or anything.
I couldn't even Alt-Ctrl-Delete; it gave me some error telling me to press ESC to restart my PC. So I did, and I got a disc read error, so I restarted again, and this time I got a message similar to this:
Intel UNDI, PXE-2.1 (build 082) Copyright (C) 1997-2000 Intel Corporation
This product is covered by one or more of the following patents: US5,307,459, US5,434,872, US5,732,094, US6,570,084, US6,115,776 and US6,327,625
Realtek PCI Express Fast Ethernet Controller Series v1.15b (090312)
Client MAC ADDR: 00 23 8B EC 65 E1
PXE-E53: No Boot Filename received
PXE-m0F: Exiting PXE ROM.
Operating System not found.
So I restart, go into the boot menu, and my hard drive isn't listed, so now it's panic mode. My hard drive is a Hitachi HDS723015BLA642, 1.5 TB. So I turned the PC off, unscrewed one screw in the hard drive, let the PC sit for a bit, then screwed the screw back in and turned on the PC, and voilà, the hard drive was back in the boot menu. I recently did a defrag and restarted before this whole mess happened, and now the PC works. What are your thoughts, guys? Thanks.
-a worried pc owner.
Edit: I only took out the front screw, because I don't have the proper screwdriver to get the others out so I can make sure the cables are properly connected, but after rescrewing the one I took out in, it works again.
Check to see if any of the cables are torn and need replacing. Also run a s.m.a.r.t. scan on the hard drive.
April 19, 2012 8:25:27 AM
I'll be running a SMART scan as soon as I can get things working. The issue happened again - everything froze, then the previous stuff I mentioned happened. I got a proper screwdriver, took out 2 screws in the sides, but I can't get to the other one with the screwdriver without pretty much taking the whole case apart, which would void my warranty.
So do you guys think I should keep trying? Or send it back to Gateway for the second time in ~5 months? Thanks again.
BTW do you guys think a hdd cable is loose or what?
Check if the HDD cables are securely in place. If you can get it working, then immediately download and run an error test with HDTune or whatever Hitachi uses...
If it just won't work after a few reboots, there is not much we can do besides send it back... since it's a prebuilt and all.
April 19, 2012 9:18:20 AM
Thanks for the replies guys. When it does work, it works for like 30 minutes, but I'm going to mess with the cables tomorrow when my more hardware-savvy friends come over. We just have to get the hard drive out without taking off the side of the case that has the warranty stickers on it... gonna be a long day.
|
OPCFW_CODE
|
The 20th anniversary of the Imagine Cup is here, and you have access to more training, mentorship, and learning to develop your skills than ever before. Through hands-on innovation, you can bring your idea to life, collaborate with a global community of students, and develop career-ready skills to propel you forward.
Here are the top 7 reasons why you should sign up to join the Imagine Cup competition journey:
1. Join a global network of students
After you register, you’re invited to join a virtual community of students, mentors, tech professionals, and more on Discord – a free collaboration and communication platform. Here, you’ll find space to chat with other students, find teammates, ask technical questions, find mentorship, and have a virtual hub for collaboration and skill development. You’ll be able to access the invite from your account page once you register for the competition.
2. Access curated training and mentorship with Microsoft professionals
All competitors have access to free training available on GitHub covering topics such as an overview of the Imagine Cup, brainstorming project ideas, getting started on Azure, and more. Once you complete all the training, you can unlock mentorship from Microsoft experts to get feedback on your ideas.
3. Develop your project pitch in the Epic Challenge
To enter the challenge, all teams have the option to submit their 3-minute project pitch and proposal. Each team will have their Epic Challenge submission judged, and one winner from each region will be selected to win USD1,000 and automatically advance to the World Finals in May 2022. Plus, the top 5 teams from each region will receive feedback from judges to improve their projects and pitches for the next round of the competition.
4. Win amazing prizes
Here’s a look at everything you could win this year:
Epic Challenge winners will each win USD1,000 and will advance directly to the World Finals
All World Finals teams will receive a USD1,000 Azure grant, plus the top 12 teams will receive USD2,500 and an assigned mentor.
The top 3 teams will receive an additional Azure grant and move forward to the World Championship, win an additional USD2,500, and compete for the grand prize of USD100,000 and a mentoring session from Microsoft Chairman and CEO, Satya Nadella.
5. Build new tech skills for your career portfolio
Have you wanted to get started with Machine Learning? Develop your own app? Build experience working as part of a team? Or maybe learn how you can develop business skills such as presenting, problem-solving, or working in a diverse team? Whatever your interest or skill level, the Imagine Cup is the place for you! On your competition journey, you’ll build and learn hands-on through innovation and collaboration alongside fellow students and Microsoft professionals. Plus, you can use your project as a portfolio centerpiece as you move forward in your career!
6. Find your community and learn together to make a difference
With competition categories in Earth, Education, Health, and Lifestyle, you can develop a tech solution focused on the social issue you’re most passionate about and collaborate with a community of fellow students who share your vision.
7. Make your impact in the world through technology
Your time is now to contribute to the world you want for the future. If you have an idea for how to help environmental or earth-related issues, tackle health and accessibility challenges, support learners globally, or shape how we live, work, and play, there’s no better place to bring it to life than the Imagine Cup.
Don’t miss out on the chance to grow your tech and career skills for the future and make an impact in the lives of others along the way. Register for the Imagine Cup now!
|
OPCFW_CODE
|
Please consider a native ARM64 version for Windows
Please confirm these before moving forward.
[X] I have searched for my feature proposal and have not found a work-in-progress/duplicate/resolved/discarded issue.
[X] This proposal is a completely new feature. If you want to suggest an improvement or an enhancement, please use this template.
Describe the new feature
I would love to have a native ARM64 build, instead of running on the x64 emulator.
This would improve performance, and more importantly save battery.
Screenshot from UniGetUI v3.1.0:
Describe how this new feature could help users
It would improve the end user experience:
Better performance due to not having to run via the emulation layer
Improved energy consumption
Improved responsiveness
Align with trends; I would assume that many WinGetUI/UniGetUI users are technical and more likely to be running an ARM64-based device than any other audience
Hopefully a much lower memory consumption
EDIT: Just noticed the high memory consumption. On an x64-based device, the memory consumption is about half.
I don't have an arm64 machine (yet), and I don't want to build arm64 with my x64 computer and release it without testing.
However, I will see what I can do
If you'd like someone to test, I own an Arm64 snapdragon x elite laptop and I'd be happy to assist. I also own a SQ3 surface pro 9 so we'd have coverage with multiple arm based cpu's.
Would like to see this as well. Running Surface Laptop 7 x elite. Could probably test a build as well.
I have a Surface Pro X and can test it out for you too!
I can test it too. :P Jokes aside, I'd also love to see this.
I have a 32GB Snapdragon X Elite laptop and Visual Studio Professional, I had a quick look at compiling but there were a number of libraries "missing". So, a tip of the hat to hmartinez82 who must have put more effort in than I did. I agree, it's not just a case of getting it to compile and run, there's all the "which architecture" questions to be answered or decided.
I can get it to build; the tricky part will be the following when running on ARM64: given that UniGetUI can detect the architecture of the installed application, if the installed app is x64, should we upgrade to a newer x64 version or try to install the ARM64 version if available?
Would it be possible to make it either a checkbox before installing updates, or have unigetui prompt yes/no to replace with arm64 native build if it detects one?
I appreciate your efforts, but there won't be an arm64 version in the near future.
The problem here is that while UniGetUI may build under arm64 systems, (in fact, it has no theoretical reason for it to not build) I cannot publish a version I have not tested, and I can't either rely on a third-party to do the builds for me. It would be too unreliable, and I would be taking a very high risk.
Thanks for the testing, and sorry.
|
GITHUB_ARCHIVE
|
Let us briefly take a look at an example algorithm, and how you might follow its steps yourself. We don't need to know anything about programming to do these steps, we just need to be able to do some math, and keep careful track of what we're doing. This algorithm does not accomplish a really useful task, but gives us a simple example to start from. The algorithm says, "Given a non-negative integer N," so it is parameterized over N. We need a value of N to actually do these steps, so let's pick N equals two. We're also going to want a place to write down the output that the algorithm produces, that is, everything that we are told to write down, we will write in this output box. We need to keep track of where we are, so we are going to use this green arrow to remember what step we are currently on. We'll start at the start.
The step right after the arrow says to make a variable called X, and set it equal to N plus two. N is two, and two plus two is four, so we want to note that X is initially four. Our next step calls for us to count from zero to N. We will want to give a name to the number we are counting, so that we can use it in our other steps. Here, we decided to call it i. As we count, we are going to repeat steps. We've used indentation here to indicate which steps are repeated for each number we count, as well as explicitly noting in the step after what you do when you finish counting.
We'll start with i being zero, since that is the first number we said to count. Our next step says to write down X times i. Four times zero is zero, so we write down zero. Our next step says for us to update the value of X. What is its new value? Four plus zero times two is four, so we would update it to be four. It's already four, which is fine, we just keep its value as four. Now, we have reached the end of the steps that we said to do as we count. We need to go back and count the next number, so we move our arrow back to the top of the step we were doing for each number, and make i the next number we would count, in this case, one.
Now, we begin doing these steps again with i having the value one. What is X times i now? Four times one is four, so we write down the number four in our output. Next, we are going to update X again; this time, X plus i times N evaluates to six, so we are going to change the value of X from four to six. We've again reached the end of our repeated steps, so we need to count our next number. We'd go back to the top, and then count our next value which is two, and make that the value of i.
We said to count from zero to N, and N is two, so are we done counting? In these steps, we said to include both ends, so we're going to do these steps when i has the value two. So, we go in and do the repeated steps one more time. Now, X times i has the value 12, so we write down 12 in our output and we go to the next step. Our next step says to update the value of X again; evaluating X plus i times N, we get 10 this time, so we update the value of X to be 10, and move past that step.
Once more, we've reached the end of the steps we want to repeat as we count, so we return to the step where we said to count. When we return here, we look at our steps and realize that we've already counted all of the numbers we were supposed to, zero, one and two. So, we're done counting. Now, we want to go to the steps after we finish counting and do them. This one says to write down the value of X. X currently has value 10, so we write that in our output. Now, our arrow is at the end of our steps.
There's nothing left to do, so we're done. Zero, four, 12, 10 is the sequence of numbers that this algorithm wanted us to generate, for N equals two which we see in our output box.
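In code, the steps we just followed might be sketched like this (Python is used here purely for illustration; the original steps are language-independent):

def algorithm(N):
    output = []
    x = N + 2                  # "make a variable X, set it equal to N plus two"
    for i in range(N + 1):     # count from zero to N, including both ends
        output.append(x * i)   # "write down X times i"
        x = x + i * N          # "update X to X plus i times N"
    output.append(x)           # after counting, write down the value of X
    return output

print(algorithm(2))  # [0, 4, 12, 10]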
|
OPCFW_CODE
|
Docker metrics on RancherOS
Hi,
Is it possible to enable docker metrics on RancherOS?
I'm trying to collect Docker metrics with Prometheus but I can't seem to get it working on RancherOS. I think the main issue is that RancherOS isn't letting me put Docker into experimental mode.
I've tried to do this two ways.
First attempt:
sudo ros config set rancher.docker.experimental true
sudo ros config set rancher.docker.metrics-addr <IP_ADDRESS>:9323
sudo ros service restart docker
After that, I could see that the experimental and metrics-addr options were set in ros config:
> sudo ros config export
EXTRA_CMDLINE: /init
rancher:
docker:
experimental: true
metrics-addr: <IP_ADDRESS>:9323
tls: true
environment:
EXTRA_CMDLINE: /init
state:
dev: LABEL=RANCHER_STATE
wait: true
However, attempting to curl port 9323, either from a container on the same host or from another host, gave a "Connection refused" error.
Second attempt:
I created /etc/docker/daemon.json:
{
"metrics-addr" : "<IP_ADDRESS>:9323",
"experimental" : true
}
I then restarted Docker: sudo ros service restart docker.
That resulted in the following error for every Docker command:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/version: dial unix /var/run/docker.sock: connect: permission denied
RancherOS Version: (ros os version)
version v1.3.0 from os image rancher/os:v1.3.0
Where are you running RancherOS? (docker-machine, AWS, GCE, baremetal, etc.)
On Bytemark Cloud, installed to disk using these instructions: https://rancher.com/docs/os/v1.x/en/installation/running-rancheros/server/install-to-disk.
I've now also tried the following, with the same result as my first attempt above.
sudo ros config set rancher.docker.extra_args ["--experimental"]
sudo ros service restart docker
It seems that RancherOS simply doesn't allow docker to start properly with the experimental flag.
Does anyone know of a way to make this work? If not, is there a chance RancherOS could be updated to allow this flag?
Hi, I cannot reproduce this issue.
[rancher@ip-172-31-9-59 ~]$ sudo ros -v
version v1.3.0 from os image rancher/os:v1.3.0
[rancher@ip-172-31-9-59 ~]$ sudo ros config set rancher.docker.extra_args ["--experimental"]
[rancher@ip-172-31-9-59 ~]$ sudo ros service restart docker
INFO[0000] Project [os]: Restarting project
INFO[0000] [0/18] [docker]: Restarting
INFO[0001] [0/18] [docker]: Restarted
INFO[0001] Project [os]: Project restarted
[rancher@ip-172-31-9-59 ~]$ docker run -it --rm alpine
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:7df6db5aa61ae9480f52f0b3a06a140ab98d427f86d8d5de0bedab9b8df6b1c0
Status: Downloaded newer image for alpine:latest
/ #
Hi @niusmallnan, thanks for checking it out. I've tried again and now the last part does work - I can put Docker into experimental mode using sudo ros config set rancher.docker.extra_args ["--experimental"]. However, I still cannot seem to get it to expose metrics, which was my original need.
I've tried setting metrics-addr and metrics_addr in ros config and also tried adding either of those to the extra_args for Docker. My ros config output now looks like this:
[rancher@monitor-01 ~]$ sudo ros config export
EXTRA_CMDLINE: /init
rancher:
docker:
experimental: true
extra_args:
- --experimental
- --metrics-addr=<IP_ADDRESS>:9323
metrics-addr: <IP_ADDRESS>:9323
metrics_addr: <IP_ADDRESS>:9323
tls: true
environment:
EXTRA_CMDLINE: /init
state:
dev: LABEL=RANCHER_STATE
wait: true
With metrics-addr=<IP_ADDRESS>:9323, I should be able to curl <host_ip>:9323/metrics and get Docker metrics. I've tried curling localhost:9323/metrics and <IP_ADDRESS>:9323/metrics from inside a running container (<IP_ADDRESS> being the output of /sbin/ip route|awk '/default/ { print $3 }', as per https://stackoverflow.com/a/24716645).
Also, I can't work out how to remove items from config, hence I've now got metrics-addr and metrics_addr both still set. If you can tell me how to remove old items from ros config, that would be appreciated too.
Hi, is there any update on the metrics-addr issue? Just an indication of whether it looks like a bug or a mistake by me would be a really helpful starting point.
@djbingham Here's the update on this issue.
Remove config from ros config
You can delete rancher.docker config in /var/lib/rancher/conf/cloud-config.yml and reboot it.
Enable Docker Metrics on RancherOS
I have tried this on RancherOS v1.3.0, and Docker metrics work on RancherOS if you set these parameters in /etc/docker/daemon.json.
{
"metrics-addr" : "<IP_ADDRESS>:9323",
"experimental" : true
}
There's something wrong with the Docker documentation in configure-and-run-prometheus. Please replace localhost with your RancherOS IP and restart your Prometheus service.
Here's an example about this https://github.com/docker/docker.github.io/issues/6012
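For reference, a minimal prometheus.yml scrape job for this setup would look something like the following (replace <IP_ADDRESS> with your RancherOS host's address; the job name is arbitrary):

scrape_configs:
  - job_name: 'docker'
    static_configs:
      - targets: ['<IP_ADDRESS>:9323']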
If you still have problem, please let me know.
@JacieChao Thank you very much for investigating this for me. Following your advice and link, I have got Docker metrics working now on RancherOS v1.3.0. The change I needed was to access metrics from inside a container via the IP address of the docker0 network, rather than eth0 or <IP_ADDRESS>.
|
GITHUB_ARCHIVE
|
Appearing early in 2002, the Klez virus is still everywhere on networks, and the danger it poses is even higher due to the new variations that keep cropping up (like Klez.e, Klez.g, Klez.h, Klez.i, Klez.k, etc.). The new versions of the virus include increasingly clever self-distribution mechanisms, allowing them to spread even easier. The KLEZ virus (code name W32.Klez.Worm@mm) is a worm which spreads by email. It also has 4 other ways to spread:
The Klez worm retrieves the list of addresses found in the address book of Microsoft Outlook or Eudora, as well as instant message clients (ICQ).
Next, the Klez virus sends all recipients an e-mail, using its own SMTP server.
Using this process, the Klez virus generates emails with an empty body and a subject chosen at random from a list of about a hundred preset choices. It attaches to the email an executable file which contains a variant of the virus. The virus uses an .eml extension to exploit a security flaw in Microsoft Internet Explorer 5.
The Klez virus is distinguished by its ability to send emails which look like they came from a sender whose address was found on the victim's machine (shown in the from field of the email sent).
More recent versions of the virus even carry tools for thwarting the most common anti-virus programs.
Worse, its own authors have programmed a false corrective measure for the virus, sent to the victims in an email entitled Worm Klez.E immunity. The email also sends false error messages showing that the message could not be delivered, which contain yet another copy of the virus as an attached file!
What's more, in Microsoft Windows the Klez virus can spread over shared network folders, infecting executable files found there.
Viewing Web pages on servers infected by the Klez virus may lead to infection when a user views pages with the vulnerable Microsoft Internet Explorer 5 browser.
The Nimda virus is also capable of taking control of a Microsoft IIS (Internet Information Server) Web server, by exploiting certain security holes.
Finally, like its cousins, the virus infects executable files found on the infected machine, meaning that it can also spread by file transfers.
The Klez virus is programmed to delete randomly chosen files on the 6th of the month during odd-numbered months. To top it all off, on January 6 and July 6, the virus will erase all files on the hard drive!
The Klez virus uses as many resources as it can on the infected machines. If your computer is reacting slowly and strangely, the first thing to do is to scan all your hard drives with your antivirus software, with the understanding that the virus may have altered the antivirus program to avoid being detected.
To eradicate the Klez virus, the best method involves first disconnecting the infected machine from the network, then using up-to-date antivirus software or the Symantec virus removal tool (preferably restarting the computer in safe mode):
Download the virus removal tool
What's more, the virus can spread using a security hole in Microsoft Internet Explorer, which means that you may catch the virus by visiting an infected site. To fix it, you must download the patch for Microsoft Internet Explorer 5.01 and 5.5. Please check the version of your browser, and download the patch if need be:
As the virus falsifies the sender's email address (in the from field), it is recommended that you not respond to the email's sender. Instead, check the Return-Path field of the message and reply to whichever address is listed there.
|
OPCFW_CODE
|
Vendor Thrift
This PR pins thrift to the last working version, commit 847ecf3c1de8b297d6a29305b9f7871fcf609c36.
before zipkin was failing to build, returning:
# github.com/openzipkin/zipkin-go-opentracing/thrift/gen-go/scribe
gen-go/scribe/scribe.go:270: assignment count mismatch: 2 = 1
gen-go/scribe/scribe.go:317: cannot use scribeProcessorLog literal (type *scribeProcessorLog) as type thrift.TProcessorFunction in assignment:
*scribeProcessorLog does not implement thrift.TProcessorFunction (wrong type for Process method)
have Process(int32, thrift.TProtocol, thrift.TProtocol) (bool, thrift.TException)
want Process(context.Context, int32, thrift.TProtocol, thrift.TProtocol) (bool, thrift.TException)
gen-go/scribe/scribe.go:325: not enough arguments in call to processor.Process
have (int32, thrift.TProtocol, thrift.TProtocol)
want (context.Context, int32, thrift.TProtocol, thrift.TProtocol)
Closes #86
Ping @basvanbeek @adriancole @marc-gr
cc @kujtimiihoxha who opened #87
We need to make a choice between one approach and the other. I don't think the tradeoffs are well explained here, so I can't make that call (not enough Go ecosystem experience).
I'm not a big fan of vendoring within a library. Has anybody tried to generate the thrift code from the latest dev version of the thrift compiler?
I will try this when I'm back home. If that fails, I will explore vendoring options, but these would not be based on this last working commit but probably on the last official thrift release.
@basvanbeek I've tried vendoring version 0.10.0 but it fails, returning this error
# github.com/Typeform/boombox/server/vendor/github.com/openzipkin/zipkin-go-opentracing/thrift/gen-go/scribe
vendor/github.com/openzipkin/zipkin-go-opentracing/thrift/gen-go/scribe/scribe.go:333: cannot use scribeProcessorLog literal (type *scribeProcessorLog) as type thrift.TProcessorFunction in assignment:
*scribeProcessorLog does not implement thrift.TProcessorFunction (wrong type for Process method)
have Process(context.Context, int32, thrift.TProtocol, thrift.TProtocol) (bool, thrift.TException)
want Process(int32, thrift.TProtocol, thrift.TProtocol) (bool, thrift.TException)
vendor/github.com/openzipkin/zipkin-go-opentracing/thrift/gen-go/scribe/scribe.go:341: too many arguments in call to processor.Process
have (context.Context, int32, thrift.TProtocol, thrift.TProtocol)
want (int32, thrift.TProtocol, thrift.TProtocol)
make[1]: *** [server-build-binary] Error 2
make: *** [app] Error 2
I believe vendoring dependencies allows us to keep a stable API, which is crucial with so many dependencies. On the other hand, since there is some work happening in zipkin-go, does it make sense to keep this library stable and focus on zipkin-go and a future bridge? WDYT @basvanbeek
I've regenerated the thrift code based on latest Go library and thrift compiler. This eliminates the need for vendoring at the library end.
|
GITHUB_ARCHIVE
|
MVC Membership Starter Kit Released07 Aug 2009
These instructions are out of date, and a newer version of the Membership Starter Kit is now available with support for ASP.NET MVC 4 and installation via NuGet. For more information, look the project up on GitHub.
Almost six months after the official release of Asp.Net MVC 1.0 and nearly a year after the last release of the starter kit, I've finally rewritten and released the Asp.Net MVC Membership Starter Kit. If you're already familiar with what it is and want to grab it, you can find the release on the GitHub project site.
What is the Asp.Net MVC Membership Starter Kit?
The starter kit currently consists of two things:
- A sample website containing the controllers, models, and views needed to administer users & roles.
- A library that provides testable interfaces for administering users & roles and concrete implementations of those interfaces that wrap the built-in Asp.Net Membership & Roles providers.
How do I use it?
In Asp.Net MVC 1 there isn't a great story for packaging & sharing controllers, views, and other resources so we'll need to follow a few manual steps:
- After getting the source code build it using your preferred IDE or using the included Build.Debug.bat or Build.Release.bat batch files.
- Grab the MvcMembership.dll assembly and place it wherever you're including external libraries in your project. Add a reference to the assembly to your Asp.Net MVC application.
- Copy the UserAdministrationController.cs file from the SampleWebsite's Controllers directory to your app's Controllers directory.
- Copy the ISmtpClient.cs file, SmtpClientProxy.cs file, and UserAdministration folder from the SampleWebsite's Models folder to your app's Models folder.
- Copy the UserAdministration folder from the SampleWebsite's Views folder to your app's Views folder.
- Make sure you've configured your web.config properly for Membership and Roles. If you aren't sure of how to do this, take a look at the first two articles in this series by Scott Mitchell at 4GuysFromRolla.
- Finally, add the following code to your global.asax to keep the membership system updated with each user's last activity date:
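A minimal sketch (assuming the standard ASP.NET Membership API; adapt to your own authentication setup):

protected void Application_AuthenticateRequest()
{
    // Passing true to GetUser marks the user as online, which updates
    // the LastActivityDate for the currently authenticated user.
    if (User != null && User.Identity.IsAuthenticated)
        Membership.GetUser(true);
}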
What is new since the last release?
Well, the last release was for Preview 5, so at the very least the project has been updated for Beta and finally Release. Moreover, the project has been completely rewritten from scratch - a major undertaking that was the primary cause of the long delay between releases. Why the rewrite? Two reasons:
- The first release of the Starter Kit was for Preview 2 of the MVC framework. A lot changed between Preview 2 and Release - A LOT. A lot of the features of the first starter kit were rolled into the out-of-the-box experience (such as login and registration), so I shifted the scope of the project more squarely into the realm of user & role administration. Unfortunately all of these major changes took a toll on the source - I was no longer happy working in the source as it was written for many reasons and thus wanted a rewrite. One of those reasons was...
- Previous releases had no (as in zero, less than one, nada) unit tests. This became increasingly unacceptable to me and trying to add unit tests after-the-fact was a nightmare. Instead I rewrote the project using TDD.
Alright, so that was basically the long-winded spiel to prepare you for the bad news: the project regressed from a functionality perspective. During the course of the rewrite, some things didn't make it in - chief among them is the OpenID integration. I encourage everyone to take a look at Maarten Balliauw's (an MvcMembership contributor) blog post on authenticating via RPX in MVC.
What comes next?
The primary motivator for me getting off my butt after nearly a year and finishing up this release is my desire to convert it to an "area" for use in MVC 2. Packaging reusable components like this has been a sore spot for the current MVC framework and I'm glad to see the blue badges are going to provide a common solution. Along with that I'll likely try to add RPX authentication à la Maarten's post.
|
OPCFW_CODE
|
Web scraping has become a common practice among individuals and companies nowadays. With every business striving hard to grow, data is in greater demand today than ever before. Businesses today cannot even sustain themselves if they lack the useful data to make business decisions in their domain.
But before we understand why there is a growing demand for web scraping let us first understand what actually web scraping is.
What is Web Scraping?
Websites around the globe hold a tremendous amount of valuable data, including product pricing, hotel pricing, financial data, and much more. We can use this data either to beat our competitors or to create a report on market sentiment.
If you want to access this data you either have to copy and paste the data manually or you can use any web scraping service.
However, if you chose to do it manually, the process of extracting data from millions of pages would be nearly impossible. Thus, we can take advantage of web scraping.
Extraction of data from a website using a script, often in a raw, unclean form, is known as web scraping. The data collected can then be stored in a database or exported to CSV files.
For example, you can use web scraping to export a list of product names and prices from any eCommerce website into a CSV file. If the website does not block you while scraping then you can prepare a python or nodejs script to scrape the website and if it does block you then opt for web scraping tools. Frankly speaking, web scraping can be a challenge if you are a beginner or you are facing a website with top-notch anti-bot detection like LinkedIn.
Today websites are built in a very different format. Let us understand how we can scrape websites of all kinds.
How do web scrapers work?
A web scraper makes HTTP requests to the target website, just as a browser does, and then parses the responses. You can check this by visiting the network tab of your browser's developer tools while loading that website. Once this is done, the scraper can return data in JSON or HTML format, CSV, etc.
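To make this concrete, here is a minimal sketch in Python (the URL and CSS selectors are placeholders; it assumes the popular requests and BeautifulSoup libraries are installed):

import csv
import requests
from bs4 import BeautifulSoup

url = "https://example.com/products"  # hypothetical target page
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

rows = []
for item in soup.select(".product"):  # hypothetical CSS class on each listing
    name = item.select_one(".name").get_text(strip=True)
    price = item.select_one(".price").get_text(strip=True)
    rows.append((name, price))

# Export the scraped product names and prices to a CSV file
with open("products.csv", "w", newline="") as f:
    csv.writer(f).writerows([("name", "price"), *rows])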
On the internet, you can find many web scrapers. Now, web scrapers are available in many different forms:
- Browser Extension
- Desktop app
Why do different industries scrape data?
- Many financial companies scrape data from the web so that they can buy and sell stocks at the right time. This data provides them with a clear trend of where the next investments can be made.
- Many restaurants scrape reviews so that they can analyze which dish or department is not working well. Timely they can make an important decision and can even improve the service.
- Travel companies scrape pricing from niche websites to keep track of their pricing. To make a competitive edge in the market you need pricing data from your competitor’s website.
- Many Enterprise businesses scrape yelp to generate cold leads. They extract names and contact details in a sheet and then contact them to convert them to their paid customers.
- eCommerce websites scrape the web to analyze which data is in demand or how to set the pricing of any particular product.
- Many governments also scrape data before elections to analyze the mood of the nation. Obviously, they outsource this job. This helps them to pick topics for rallies.
Is Price Scraping even legal?
Well, the correct answer is yes & no. You can scrape publicly available data. However, if you scrape private data, it can be against the law.
A few points that should be taken into consideration before you scrape a web page.
- See if the page is not behind an authentication wall.
- The page does not include any private information of a user.
- You should follow the robots.txt file.
- Do not overload the host server with unnecessary calls.
Again it all depends on your business needs, but let’s not forget the legal actions that can be taken against you when you scrape private data or private profiles.
Every business has different needs when it comes to the data they need to analyze. To take business decisions, it has become important for stakeholders to get data as much as they can to run their successful businesses.
Through web scraping, it is easy to extract useful information, and hence businesses harvest it from different sources. When you scrape web pages, you are simply reducing the time it would take to extract the data manually.
Guest post by Divanshu Khatter
|
OPCFW_CODE
|
Mainframes have always been expensive. The hardware is expensive in the first place, and the software is expensive to run on it. That's the main reason given by many organizations for migrating off the mainframe - it's too expensive. A couple of Windows laptops and a bit of freeware downloaded off the Internet and you're back in business!! Well, the old Dinosaur Myth publication put that sort of logic in its place years ago, and, although the publication is a good few years out-of-date now, the theory behind it is still correct. And for people who have never seen the Dinosaur Myth, you can still download a copy from http://www.arcati.com/dinomyth.htm.
So, we're left with the unenviable conclusion that all computing is expensive; it's just whether you're prepared to pay the money up front or you're forced to pay out later. This rather grim conclusion would lead one to ask, is there any way of saving money? And the answer is a pleasing "Yes".
Traditionally, mainframes have a General Purpose Processor (GPP) that performs all the processing on the computer. Usage of that processor has been measured in Millions of Instructions Per Second (MIPS), and that has given IBM an easy way to charge for their computers - on how much work you are making them do, i.e. how many MIPS they are using. It's a bit like charging for van hire by how many miles the van goes while it is being rented.
What users really wanted was some other processing engine that could do some processing and reduce the MIPS used by the GPP. IBM came up with such a solution. They invented (and of course sold) specialty processors. I'm talking here about zIIP (System z9 Integrated Information Processor) and zAAP (System z Application Assist Processor). Basically, any workloads run on these specialty engines do not form part of an organization's contracted mainframe processing capacity. So their use results in a reduction in that organization's Total Cost of Ownership (TCO). As a consequence, not only do they get reduced software costs, they also get additional processing capacity - and that can be used to eliminate or delay the next upgrade.
It's a bit of a swings and roundabouts situation though. An organization needs to pay for a specialty processor, and it needs to buy software that can make use of the specialty processor, and then it can start saving money - I hear the sound of Excel Pivot tables being used to present the result of what can be quite a complicated calculation.
But is there any software that can make use of this hardware? We know about DB2, but that's from IBM. What else is there? DataDirect's Shadow software has been around for a little while, and last week NEON Enterprise Software announced that Version 5.1 of its Eclipse Reorganization Utilities for IMS will exploit these specialty engines too. NEON claims in its press release that by using Eclipse Reorganization Utilities, it is possible for customers to experience capacity gains of more than 70 percent for some IMS database maintenance processing. Now that more software is becoming available to exploit specialty engines, it does seem that users could have a way to save money on their mainframes.
|
OPCFW_CODE
|
The Moral Machine experiment
Edmond Awad1, Sohan Dsouza1, Richard Kim1, Jonathan Schulz2, Joseph Henrich2, Azim Shariff3*, Jean-François Bonnefon4* & Iyad Rahwan1,5*
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents' demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.
We are entering an age in which machines are tasked not only to promote well-being and minimize harm, but also to distribute the well-being they create, and the harm they cannot eliminate. Distribution of well-being and harm inevitably creates tradeoffs, whose resolution falls in the moral domain. Think of an autonomous vehicle that is about to crash, and cannot find a trajectory that would save everyone. Should it swerve onto one jaywalking teenager to spare its three elderly passengers? Even in the more common instances in which harm is not inevitable, but just possible, autonomous vehicles will need to decide how to divide up the risk of harm between the different stakeholders on the road. Car manufacturers and policymakers are currently struggling with these moral dilemmas, in large part because they cannot be solved by any simple normative ethical principles such as Asimov's laws of robotics4.

Asimov's laws were not designed to solve the problem of universal machine ethics, and they were not even designed to let machines distribute harm between humans. They were a narrative device whose goal was to generate good stories, by showcasing how challenging it is to create moral machines with a dozen lines of code. And yet, we do not have the luxury of giving up on creating moral machines5–8. Autonomous vehicles will cruise our roads soon, necessitating agreement on the principles that should apply when, inevitably, life-threatening dilemmas emerge. The frequency at which these dilemmas will emerge is extremely hard to estimate, just as it is extremely hard to estimate the rate at which human drivers find themselves in comparable situations. Human drivers who die in crashes cannot report whether they were faced with a dilemma; and human drivers who survive a crash may not have realized that they were in a dilemma situation. Note, though, that ethical guidelines for autonomous vehicle choices in dilemma situations do not depend on the frequency of these situations. Regardless of how rare these cases are, we need to agree beforehand how they should be solved.

The key word here is 'we'. As emphasized by former US president Barack Obama9, consensus in this matter is going to be important. Decisions about the ethical principles that will guide autonomous vehicles cannot be left solely to either the engineers or the ethicists. For consumers to switch from traditional human-driven cars to autonomous vehicles, and for the wider public to accept the proliferation of artificial intelligence-driven vehicles on their roads, both groups will need to understand the origins of the ethical principles that are programmed into these vehicles10. In other words, even if ethicists were to agree on how autonomous vehicles should solve moral dilemmas, their work would be useless if citizens were to disagree with their solution, and thus opt out of the future that autonomous vehicles promise in lieu of the status quo. Any attempt to devise artificial intelligence ethics must be at least cognizant of public morality.

Accordingly, we need to gauge social expectations about how autonomous vehicles should solve moral dilemmas. This enterprise, however, is not without challenges11. The first challenge comes from the high dimensionality of the problem. In a typical survey, one may test whether people prefer to spare many lives rather than few9,12,13; or whether people prefer to spare the young rather than the elderly14,15; or whether people prefer to spare pedestrians who cross legally, rather than pedestrians who jaywalk; or yet some other preference, or a simple combination of two or three of these preferences. But combining a dozen such preferences leads to millions of possible scenarios, requiring a sample size that defies any conventional method of data collection. The second challenge makes sample size requirements even more daunting: if we are to make progress towards universal machine ethics (or at least to identify the obstacles thereto), we need a fine-grained understanding of how different individuals and countries may differ in their ethical preferences. As a result, data must be collected worldwide, in order to assess demographic and cultural moderators of ethical preferences.

As a response to these challenges, we designed the Moral Machine, a multilingual online 'serious game' for collecting large-scale data on how citizens would want autonomous vehicles to solve moral dilemmas in the context of unavoidable accidents. The Moral Machine attracted worldwide attention, and allowed us to collect 39.61 million decisions from 233 countries, dependencies, or territories (Fig. 1a). In the main interface of the Moral Machine, users are shown unavoidable accident scenarios with two possible outcomes, depending on whether the autonomous vehicle swerves or stays on course (Fig. 1b). They then click on the outcome that they find preferable. Accident scenarios are generated by the Moral Machine following an exploration strategy that
1The Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA. 2Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, USA. 3Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada. 4Toulouse School of Economics (TSM-R), CNRS, Université Toulouse Capitole, Toulouse, France. 5Institute for Data, Systems & Society, Massachusetts Institute of Technology, Cambridge, MA, USA. *e-mail: firstname.lastname@example.org; email@example.com; firstname.lastname@example.org
1 NOVEMBER 2018 | VOL 563 | NATURE | 59
© 2018 Springer Nature Limited. All rights reserved.
|
OPCFW_CODE
|
I use a different hosting service but use Cloudflare DNS for the SSL service.
My hosting company upgraded my host to PHP 7.4, but nothing changed on my WordPress website.
When I change my name servers to the hosting company's name servers I can see PHP 7.4, but when I change back to Cloudflare I see the old version.
What can I do about this?
Sorry for my bad English.
Cloudflare has nothing to do with which version of PHP you run on your server. How are you testing your version?
WordPress says my PHP version is old.
That has to come from your server. Your host should be able to explain how to properly test PHP in your account.
They said, "You should change your name servers from Cloudflare to our name servers."
You’re certainly welcome to do that. You can also achieve a similar effect by clicking “Pause Cloudflare on Site” from the Overview tab here, lower right corner of that page.
What’s the domain?
Thanks for the kind answer. I tried it. Yes, the PHP version shows correctly.
Now I've changed the name servers to Cloudflare again, and I cannot use FTP and WordPress shows the old PHP version again.
I don't know what I can do.
I cannot use FTP
You should configure FTP to connect to the IP address of your server, not the hostname that's set to be proxied through Cloudflare.
I know you said you were using phpinfo() to check the version, but is this through a test.php type of file on your site?
Yes, I use one file as a test file, but now I cannot change my file.
I cannot see my test file.
But I created it with the file manager in cPanel…
Sorry, I will contact my hosting… this is not about Cloudflare, I think.
peditrirutinleri DOT com does not appear to be a valid domain.
Actually, I tried something for my problem.
With Cloudflare disabled, phpinfo() = PHP 7.0.
With Cloudflare enabled, phpinfo() = PHP 5.5…
I don't understand.
Could you please send us the link where you read which PHP version it is using?
Just paste it here, and disable Cloudflare when we say so, so we can have a look.
I think the request is getting cached, and therefore you see an old version.
You have enabled "Cache Everything", and therefore the link you are calling to access the phpinfo() output will deliver an old cached response to you.
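One way to confirm the caching from the outside (the URL is a placeholder, and test.php is assumed to contain nothing but <?php phpinfo();) is to look at the cf-cache-status response header:

curl -sI https://example.com/test.php | grep -i cf-cache-status
# HIT means Cloudflare answered from its cache, so PHP never ran on the origin;
# MISS or DYNAMIC means the request actually reached the origin server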
When I pause Cloudflare and turn on development mode,
the PHP version shows as PHP 5.5.
Now I've changed the DNS, you can see.
I can confirm that without Cloudflare, it says PHP 7.0.33, but it’s on HTTP (HTTPS doesn’t work).
OK, here is what I saw:
PHP 7.0.33 (without Cloudflare, just DNS as far as I saw)
Now please turn on DNS/CloudFlare Proxy and let me test/see again.
Now I change to Cloudflare.
Yes. Now I see:
: PHP 5.6.36
The URL has now changed from HTTP to HTTPS, so the server most probably has a different configuration for HTTP requests than for HTTPS requests, and passes HTTPS requests to a PHP 5.6 handler and HTTP requests to a PHP 7.0 handler.
So please check your server configuration and make it the same for HTTP and HTTPS.
Sorry, how can I set this option? Is this in cPanel or some other config file?
Sorry, it's a noob question.
Sorry, I don't know your setup. If you use cPanel it should be changeable there, but as I do not use cPanel at all, you should ask in a cPanel forum.
Anyway, this is not related to Cloudflare. Cloudflare cannot change server settings.
Thanks for the kind answer.
|
OPCFW_CODE
|
Why does git not show the latest log of an inner directory from the root directory?
I'm experiencing the following situation:
user@host:~myproj$ git log . | head
commit <PHONE_NUMBER>30598203958209
Author: me <me@me>
Date: Wed Feb 14 00:17:13 2018 +0100
My comment of commit B
commit <PHONE_NUMBER>52395782927652
Author: me <me@me>
Date: Wed Feb 13 00:05:21 2018 +0100
My comment to last common commit C
user@host:~myproj$ git log inner/directory | head
commit <PHONE_NUMBER>86439738467394
Author: me <me@me>
Date: Wed Feb 14 10:08:35 2018 +0100
Comment to a more recent commit A
commit <PHONE_NUMBER>52395782927652
Author: me <me@me>
Date: Wed Feb 13 00:05:21 2018 +0100
My comment to last common commit C
That is, if I run git log from the root directory I see commits B and C; if I run git log on an inner directory I see commits A and C, with A more recent than B.
I'm also in the following situation:
user@host:~myproj$ git status
On branch master
Your branch is up-to-date with 'origin/master'
nothing to commit, working directory clean
user@host:~myproj$ git pull
Already up-to-date.
user@host:~myproj$ git push
Everything up-to-date
I thought that asking for logs from the root directory would have shown every commit in all subdirectories of the project; since I now see I'm wrong, how can I get a full list of the commit history ordered by commit date?
Also, why is this happening? Is this normal git behavior or am I missing something?
Do you use any submodules?
@ErniBrown I don't know what a git submodule is, so I'd say "no", but maybe my IDE created one for me. How can I check?
Run git submodule in the project root directory and see what it prints.
Or maybe better, check whether you have a .gitmodules file, and look at its content.
There is no output from the above command, and no .gitmodules file in the root or any subdirectory.
It turns out that git log . is different from git log.
My question can be answered by simply issuing git log, without specifying any path.
This is one of the odder corners of Git, by the way: git log <paths> turns on what Git calls history simplification. Essentially, Git skips over some commits. This part is natural enough because git log does not mean show me every commit ever but rather show me the commits that I've said are interesting based on arguments. But the ones that this history simplification chooses as "interesting" turn out to be quite tricky!
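For what it's worth, you can also see both behaviours from the command line; these two invocations are a sketch (the directory name is borrowed from the question):

# full history, ordered by commit date, with no path filter:
# this answers the original question
git log --date-order

# path-limited log with history simplification relaxed, so commits that
# the simplification would otherwise skip are shown too
git log --full-history -- inner/directory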
Excellent explanation @torek - You should post it as an answer. Thanks
See this answer to a related question.
|
STACK_EXCHANGE
|
Another fun filled day at PASS started with a session from Kalen Delaney. The room was packed and she was discussing query plans. I had heard a good bit of the content before but it was still interesting to finally see her present live. Very much like Itzik, her asides can be as informative as the content in her slides or demos.
The second session was a lot of fun as Paul Nielsen and Louis Davidson had everyone join in their personal debate over what exactly is “proper design”. It was a 300 level session so nothing really earth-shattering came from it but it was really interesting to see the wide range of opinions across the room. There were even defenders of the dreaded “Muck” table, or the single table housing all look-up data (the horror!). I tend to fall on Louis’ side of most issues, but Paul made some strong arguments as well. At the end of the session Paul demoed a project called NORDIC which is an object-oriented database design. I’ll need to give this more attention before commenting but it certainly left me with questions swirling in my head.
The third session I attended was the most fun for me personally. Erik Kang, Program Manager from the product team, discussed intellisense and the debugger in 2008. We talked for a while after the session and I have a much better understanding of where he was coming from back in our early conversations about down-level support for intellisense. I think there are other pieces of the product that will be much better for the attention devoted to them than the time for implementing down-level support. I missed two opportunities to meet his team this week which really bummed me out. Just too many pieces of my schedule have been in play to make it happen.
The last session was a question and answer time with three general managers of the SQL team. They basically let attendees line up and fire questions at them for over an hour and a half. Now that’s brave. Some really interesting questions (some polite, others not so much) were asked, ranging from dedication to new product features (we hardly knew ye, Notification Services) to future support for the JDBC driver. The coolest part to me was when a question was trending in too technical a direction, the GMs were able to point into the crowd and say “oh, here’s XXX, he/she is the program manager for the area you’re discussing, they would be more than happy to talk more about it…”. Another benefit of having the event so near Redmond.
All in all, a tremendous day of SQL geekiness. I have to give some props again to Magenic for okaying this adventure out to the northwest. It’s been a great time.
|
OPCFW_CODE
|
I have tried searching for the answer to this question on the forum and couldn't find anything close.
I am working on an app that includes video content that is being released next month. We have tried keeping the content down to a minimum as, obviously, it impacts the size of the app, which is already quite large. There are 20+ videos that are around a minute and a half long.
Our workflow is probably not the best for these videos as, after a few tests, the most efficient way of getting low file size videos that retain quality was by uploading them to YouTube at 720p and then downloading them. They would go from 150 MB to around 19 MB, and other videos down to 6 MB, which is great without any severe quality loss. The reason behind this unusual workflow was the idea that YouTube's compression would be top spec, with algorithms that select the most efficient Mbps when videos are uploaded to YouTube.
I am at the stage of exporting all the videos again after some feedback and responses from the partners. In doing so, I've found YouTube's compression has changed, adding up to 5 MB to each video, which is obviously multiplied by the 20+ videos; a nightmare for us. I have done tests with the same videos uploaded a month ago and they are no longer being compressed to the same size. I also think there might be better solutions with AME?
I have downloaded the MediaInfo app from the app store and been able to look at the previous YouTube videos (the smaller file size) to try to duplicate the same settings that YouTube (Google) uses. These settings were highlighted in MediaInfo as:
MPEG-4 (Base Media / Version 2)
Overall bit rate mode: variable
Overall bit rate: 1 335 Kbps
Video: 1 206 Kbps, 1280*720 (16:9), 23.976 (24000/1001) fps, AVC (email@example.com) (CABAC / 3 Ref Frames), ISO Media file produced by Google Inc.
Audio: English, 126 Kbps, 44.1 kHz, 2 channels, AAC LC
I have tried using AME to duplicate these settings but couldn't see a way to make the bitrate variable for all videos. The results have been poor in comparison to YouTube, so I am wondering if there is a better way to export these videos for the app that will keep the quality but give us the smallest size possible. I know ffmpeg is a favourite for apps, but the app developer wants to keep it in MP4 as it is such a common format and shouldn't give us any compatibility issues. Anything outside MP4 could require testing, which we don't have time for at this stage.
Thank you in advance for any suggestions, and apologies if I am missing an obvious fix!
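For anyone attempting this in ffmpeg instead, a rough starting point that mirrors the MediaInfo numbers above might look like the following; the file names are placeholders, and the rate-control values only approximate what YouTube does rather than reproduce its actual recipe:

ffmpeg -i input.mov \
  -c:v libx264 -b:v 1200k -maxrate 1500k -bufsize 3000k \
  -vf scale=1280:720 -r 24000/1001 \
  -c:a aac -b:a 128k -ar 44100 -ac 2 \
  -movflags +faststart output.mp4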
I too have the same issue, please somebody help!
|
OPCFW_CODE
|
How competitive are math PhD programs?
How competitive are math PhD programs?
PhD programs are competitive in general. “Ranks” 15-20 are still quite competitive. You really should not be looking at places based on ranking, in my opinion.
What is the highest level math course?
The official titles of the course are Honors Abstract Algebra (Math 55a) and Honors Real and Complex Analysis (Math 55b). Previously, the official title was Honors Advanced Calculus and Linear Algebra.
How do you become a top math PhD?
Roughly: good grades (3.8+ GPA) in difficult courses, good test scores (80+ percentile on math GRE subject test [not the regular GRE math, which you should get a ~perfect score on without studying]), strong research background and good letters corresponding to it.
How hard is it to get into a math PhD program?
So, yes, it is unbelievably difficult to go to a top graduate school for mathematics. It would require a near perfect GPA, 6 or more graduate courses, and research (all done at a top undergrad program). If you go to a top undergrad program, move quickly into proof courses.
Is it worth getting a PhD in mathematics?
Probably not. If you don’t want to be a professor, then you probably don’t want to be a PhD student, since they involve doing pretty similar stuff, and in most other lines of work, 5-6 years of life experience will get you more benefit than a PhD.
Is a masters in math hard?
Depends entirely on the courses you took in your degree. Of course, grad school is inherently difficult, but grad programs are prepared to take in students with a variety of backgrounds. For a graduate degree, a master’s in pure math does not make much sense either.
Is maths a good degree?
If you’re a talented mathematician, a maths degree can be a good option. The fact that there is a right answer to questions means that it’s possible to achieve high marks, most courses offer the chance as you progress to specialise in the areas that most interest you, and your skills will be useful in many careers.
Is mathematics a useless degree?
It’s not useless and even if you aren’t in a standard maths career like finance, quant, modeller, data science or programmer etc you will probably use your skills some way as it is a very canonical and generalist degree.
Is a maths degree worth it?
Math degrees can lead to some very successful careers, but it will be a lot of work and might require you to get a graduate or other advanced degree. According to the Department of Education, math and science majors tend to make significantly more money and get better jobs than most other degrees.
What are the top 5 math careers?
14 high-paying jobs for people who love math: Economist; Astronomer; Operations research analyst; Actuary (median salary: $110,560); Mathematical science teacher, postsecondary (median salary: $77,290); Physicist (median salary: $118,500); Statistician (median salary: $84,440); Mathematician (median salary: $112,560).
Are mathematicians in demand?
Job Outlook Overall employment of mathematicians and statisticians is projected to grow 33 percent from 20, much faster than the average for all occupations. Businesses will need these workers to analyze the increasing volume of digital and electronic data.
Does NASA hire mathematicians?
Of course the space industry hires mathematicians. You won’t see many job titles or job postings that say “mathematician,” but look at the skills being asked for. Practical applications like your applied math degree rather than theoretical development is probably the better option.
How much do mathematicians get paid?
How Much Does a Mathematician Make? Mathematicians made a median salary of $101,9. The best-paid 25 percent made $126,070 that year, while the lowest-paid 25 percent made $73,490.
What can I do with a PhD in mathematics?
Doctorate (PhD), Mathematics, average salary by job: Assistant Professor (Postsecondary / Higher Education); Data Scientist; Professor (Postsecondary / Higher Education); Associate Professor (Postsecondary / Higher Education); Mathematician; Senior Software Engineer; Postdoctoral Research Associate.
Which country is best for PhD in mathematics?
Canada; China (Mainland); Crimea; Germany; Hong Kong SAR; Kosovo; Kosovo, Republic of; Macau SAR.
How long is a PhD in mathematics?
between 3 and 5 years
How much does a PhD in mathematics make?
To give you some numbers, from the US, the AMS Survey gives data on starting salaries for Math PhDs. You can also get some data from Payscale on average salaries (e.g., Math PhDs, EE PhDs and Engineering Bachelors): AMS Median industry starting salary (2016 Math PhD): ~$106,000. Payscale Average Math PhD salary: …
Can you get a PhD in math online?
While PhD programs in math are rarely available online, interested graduate students may consider an online master’s degree in math or math education.
Can I do PhD in maths?
Ph. D. Mathematics is the program of choice for students who wish to pursue a career in a mathematical research field. The minimum duration of this course is 2-years, whereas you can complete this course in a maximum time span of 3-5 years.
|
OPCFW_CODE
|
On Tue, 3 Feb 2009, Ruy Diaz wrote:
> I've been trying to get SNMP monitoring to work for the last few days
> and have had quite a hard time getting it up. I am quite new to Linux,
> load balancing and SNMP so please take it easy on me.
> I am running Ubuntu 8.10, Haproxy 126.96.36.199 and I have just compiled net-snmp 188.8.131.52 with perl enabled (tried configuring with both v3 and v2c without
> success). I have copied haproxy.pl to /etc/snmp/haproxy and I modified /etc/snmp/snmpd.conf to include the lines indicated in the netsnmp-perl README.
> However, when I run:
> $ sudo snmpbulkwalk -c public -v2c 127.0.0.1 1.3.6.1.4.1.29385.106
> SNMPv2-SMI::enterprises.29385.106 = No more variables left in this MIB View (It is past the end of the MIB tree)
> Digging through forums I thought what I needed was to add a 'stats socket /var/run/haproxy.stat mod 777' line to my haproxy config, but when I add
> this, I get the following error:
> [ALERT] 032/163027 (24099) : parsing [/etc/haproxy/haproxy.cfg:29] : unknown stats parameter 'stats' (expects 'hide-version', 'uri', 'realm', 'auth' or
> [ALERT] 032/163027 (24099) : Error reading configuration file : /etc/haproxy/haproxy.cfg
Are you sure you have the 220.127.116.11 version? Your net-snmp daemon has to be able to communicate with your haproxy instance, and it is not possible to do this without 'stats socket' support.
Please show the output from "haproxy -vv".
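For reference, on a haproxy build that does support it, the directive is expected in the global section, and the permission keyword is 'mode' rather than 'mod'; a minimal sketch:

global
    stats socket /var/run/haproxy.stat mode 777
    # a stricter mode (e.g. 600) is safer if only root needs the socket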
Krzysztof Olędzki Received on 2009/02/03 19:16
This archive was generated by hypermail 2.2.0 : 2009/02/03 19:30 CET
|
OPCFW_CODE
|
Build error: Failed to retrieve information about 'Internal.AspNetCore.BuildTools.Tasks'
After checking out the current master and calling the build script, I get this error. For more details, have a look at the attachment, please.
output.txt
Hint "Das Zeitlimit fr den Vorgang wurde erreicht" means "time limit has been reached"
Looks like a timeout when trying to restore packages. Is it persistent? (I was able to build the repo earlier today without any problems)
I just tried to compile it one minute ago. The problem is still there.
I tried to execute the build script on my second development pc - the error remains the same.
I just tried from my home pc and things work just fine:
dotnet-install: .NET SDK version 2.0.0-preview1-005418 is already installed.
Adding C:\Users\moozz_000\AppData\Local\Microsoft\dotnet\ to PATH
> dotnet msbuild /nologo /t:Restore /p:PreflightRestore=true C:\source\SignalR-Core\.build/KoreBuild.proj
Restoring packages for C:\source\SignalR-Core\.build\KoreBuild.proj...
Restoring packages for C:\source\SignalR-Core\.build\shared\sharedsources.csproj...
Lock file has not changed. Skipping lock file write. Path: C:\source\SignalR-Core\.build\shared\obj\project.assets.jso
n
Restore completed in 159.68 ms for C:\source\SignalR-Core\.build\shared\sharedsources.csproj.
Installing NuGetPackageVerifier 1.0.2-rc2-15220.
Installing Internal.AspNetCore.BuildTools.Tasks 1.0.0-rc2-15220.
Generating MSBuild file C:\source\SignalR-Core\.build\obj\KoreBuild.proj.nuget.g.props.
Generating MSBuild file C:\source\SignalR-Core\.build\obj\KoreBuild.proj.nuget.g.targets.
Writing lock file to disk. Path: C:\source\SignalR-Core\.build\obj\project.assets.json
Restore completed in 21.36 sec for C:\source\SignalR-Core\.build\KoreBuild.proj.
NuGet Config files used:
C:\source\SignalR-Core\.build\NuGet.Config
C:\source\SignalR-Core\NuGet.Config
C:\Users\moozz_000\AppData\Roaming\NuGet\NuGet.Config
C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.Offline.config
Feeds used:
https://dotnet.myget.org/F/aspnetcore-tools/api/v3/index.json
https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json
https://api.nuget.org/v3/index.json
C:\LocalFeed
https://www.myget.org/F/aspnetvnext/
C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\
Installed:
2 package(s) to C:\source\SignalR-Core\.build\KoreBuild.proj
...
@natemcmaster - do you have any idea what it could be, or how to get more details on why restore is failing like this:
Retrying 'FindPackagesByIdAsync' for source 'https://dotnetmyget.blob.core.windows.net/artifacts/aspnetcore-ci-dev/nuget/v3/flatcontainer/internal.aspnetcore.buildtools.tasks/index.json'.
An error occurred while sending the request.
Das Zeitlimit fuer den Vorgang wurde erreicht (means timeout)
Sometimes NuGet's HTTP cache can be corrupted. Try running nuget.exe locals http-cache --clear to reset it.
@natemcmaster I tried your call using the nuget.exe (v <IP_ADDRESS>5) in the .build directory
Btw, the correct call is nuget.exe locals http-cache -clear (only one hyphen before clear).
The error remains the same...
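For reference, the full set of cache-clearing variants looks like this; which of nuget.exe or the dotnet CLI is on PATH is machine-specific:

nuget.exe locals http-cache -clear
nuget.exe locals all -clear
dotnet nuget locals all --clear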
@natemcmaster - is this url https://dotnetmyget.blob.core.windows.net/artifacts/aspnetcore-ci-dev/nuget/v3/flatcontainer/internal.aspnetcore.buildtools.tasks/index.json correct? When I try to check this I see: The specified blob does not exist. RequestId:a99e3346-0001-0089-4185-a9dfa1000000 Time:2017-03-30T18:46:10.4826349Z which I don't think is expected
@klaus-liebler - can you do git clean -xdf and git pull origin dev before building? If you cloned the code some time ago you might have old build scripts...
No, that package is on https://dotnet.myget.org/gallery/aspnetcore-tools but restore should be using all feeds in the nuget.config
In that case there might be some settings in the global NuGet.config that interfere with ours - e.g. the global NuGet.config already has a key called "AspNetCoreTools". @klaus-liebler - can you check and post your global NuGet.config?
@moozzyk I use an absolutely fresh cloned repository
File content of C:\Users<my user name>\AppData\Roaming\NuGet\NuGet.Config is
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
  </packageSources>
  <disabledPackageSources>
    <add key="Microsoft and .NET" value="true" />
  </disabledPackageSources>
</configuration>
@klaus-liebler - I guess you need to figure out where the https://dotnetmyget.blob.core.windows.net/artifacts/aspnetcore-ci-dev/nuget/v3 feed is coming from. We are not using it so it must be somewhere in your environment.
@moozzyk I tried to do this by searching for the URL in all files, but that was not successful. I tried to open https://dotnetmyget.blob.core.windows.net/ in my browser, but I get the well-known error message.
Sorry, but I do not know how to figure this out.
Maybe, somebody can just provide the compiled artifacts so that I can use them without having to compile them on my system?
We push packages to myget after each build. Here is the feed https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json (gallery: https://dotnet.myget.org/gallery/aspnetcore-ci-dev
|
GITHUB_ARCHIVE
|
I'm going to take a few paragraphs here to discuss another topic outside the typical vein of the blog. That issue is Net Neutrality.
As a netizen for lo these many years (almost 25 now), the thought of a free (as in liberty) and open internet has been something that seemed completely inherent in the 'net. That someone would come along and consider restricting access was bound to crop up, but the thought is anathema to the openness of the 'net.
I look at the issue from two perspectives - as a consumer and as a small business owner - and this latter perspective is where I have the primary concerns. As a consumer, I already (over)pay for access to the internet and it seems reasonable, based on other costs, that I should. I already pay for electricity, gas, and water - internet access seems like another service being provided and the provider deserves to be compensated for that access. I can even choose how much I want to pay for the bandwidth I choose to consume - again, all reasonable. I would, however, be concerned with a "prioritized traffic" provider. I should be able to access all of the legal content I want to access when I want to access it, provided that I'm paying for that right.
As a business owner, though, I have serious concerns. The crux of the net neutrality debate is centered around businesses providing content. Services like BitTorrent were just a symptom of what the industry as a whole was experiencing - a lot of people using a lot of bandwidth. That BitTorrent could be used to break the law was, I'm certain, an impetus for internet providers. They saw this as an opportunity to "tax" business owners to get access to their services. As an ISP, the thought of all of the potential funding just below the surface would be irresistible. And it makes sense. "As a user of our network," they'd reason, "you should be required to pay commensurate with the amount of traffic you're pushing over our network." This, however, pushes just past the concept of "reasonable". Why? Business owners are already paying for access to the internet from their hosting services. And there are, literally, thousands of ISPs - should businesses have to pay every single ISP to prioritize their traffic? I can only assume the ISP answer to that question would be a resounding "yes!"
The biggest concern for me is the following scenario. Assume that the providers are able to prioritize their traffic based on who pays them. As a company that's not Google or Microsoft or Apple, I'm at a significant competitive disadvantage if I can't pay to get my traffic prioritized. The average person using the web will not wait more than a few seconds for a web page to display before they pick a different page (i.e, not mine). If there's a delay in displaying my information because my traffic isn't prioritized highly because I couldn't afford to pay every telco that wants to provide internet access, I lose customers or, at the very least, customer satisfaction suffers.
One of the arguments against net neutrality is that if companies haven't done it yet, there's no reason for them to do so in the future. There are a few points to be debated here, but the three that ring most true for me are: a) companies haven't really been able to do this yet, so there's been no real test of this concept, b) companies have a terrible history of doing things in people's best interest (look at the labor revolution of the late 19th century for a great example - and then go thank a union worker), and c) they're already looking at it. Comcast began to packet filter BitTorrent traffic - determined via empirical tests. And it's not that far of a leap from selectively filtering traffic to charging for that traffic. This also causes problems for consumers who now need to determine whether their provider will allow them to access the content they want to view.
So you may be asking where I stand on the whole issue (if you haven't already jumped to a conclusion). Actually, I prefer that we take a "wait and see" attitude, but we need to provide an organization - it can be the FCC, but I'd prefer an elected body rather than an appointed one - to oversee and ensure that things don't get out of hand. We should be proactive rather than reactive to the potential direction this issue could move in. Time and again we've seen that as things progress without direct confrontation, people become willing to accept the status quo, not the status quo ante. I don't believe we need regulation, but we do need the ability to quickly address any complaints.
|
OPCFW_CODE
|
Collaborator: Howie Lan, Prof. Lewis Lancaster, Jeffrey Shaw
This project integrates the Chinese Buddhist Canon, Koryo version Tripitaka Koreana, into the AVIE system (a project between ALiVE, City University Hong Kong and UC Berkeley). This version of the Buddhist Canon is inscribed as UNESCO World Heritage enshrined in Haeinsa, Korea. The 166,000 pages of rubbings from the wooden printing blocks constitute the oldest complete set of the corpus in print format. Divided into 1,514 individual texts, the version has a complexity that is challenging since the texts represent translations from Indic languages into Chinese over a 1000-year period (2nd-11th centuries). This is the world’s largest single corpus containing over 50 million glyphs, and it was digitized and encoded by Prof Lewis Lancaster and his team in a project that started in the 70s.
The Blue Dots project was undertaken at Berkeley as part of the Electronic Cultural Atlas Initiative; it abstracted each glyph from the Canon into a blue dot and attached metadata to each of these Blue Dots, allowing searches that would have taken scholars years to complete in minutes. In the search function, each blue dot also references an original plate photograph for verification. The shape of these wooden plates gives the blue dot array its form.
As a searchable database, it exists in a prototype form on the Internet. Results are displayed in a dimensional array where users can view and navigate within the image. The image uses both the abstracted form of a “dot” as well as color to inform the user about the information being retrieved. Each blue dot represents one glyph of the dataset. Alternate colors indicate position of search results. The use of colour, form, and dimension to quickly communicate understanding of the information is essential for large data sets where thousands of occurrences of a target word/phrase may be seen. Analysis across this vast text retrieves visual representations of word strings, clustering of terms, automatic analysis of ring construction, viewing results by time, creator, and place. The Blue Dots method of visualization is a breakthrough for corpora visualization and lies at the basis of the visualization strategies of abstraction undertaken in this project. The application of an omnispatial distribution of these texts solves problems of data occlusion and enhances network analysis techniques to reveal patterns, hierarchies and interconnectedness. Using a hybrid approach to data representation, audification strategies will be incorporated to augment interaction coherence and interpretation. The data browser is designed to function in two modes: the Corpus Analytics mode for text only and the Cultural Atlas mode that incorporates original text, contextual images, and geospatial data. Search results can be saved and annotated.
The current search functionality ranges from visualizing word distribution and frequency to other structural patterns such as the chiastic structure and ring compositions. In the Blue Dots 360 version, the text is also visualized as a matrix of simplified graphic elements representing each of the words. This will enable users to identify new linguistic patterns and relationships within the matrix, as well as access the words themselves and related contextual materials. The search queries will be applied across classical Chinese and eventually English, accessed collaboratively by researchers, extracted and saved for later re-analysis.
The data provides an excellent resource for the study of dissemination of documents over geographic and temporal spheres. It includes additional metadata such as present day images of the monasteries where the translation took place, which will be included in the data array. The project will design new omnidirectional metaphors for interrogation and the graphical representation of complex relationships between these textual datasets to solve the significant challenges of visualizing both abstract forms and close-up readings of this rich data. In this way, we hope to set benchmarks in visual analytics, scholarly analysis in the digital humanities, and the interpretation of classical texts.
|
OPCFW_CODE
|
How do I prevent duplicate api call in React using nextJS for SSR?
I have a listing page of Products that I intend to server render. So using NextJs's getInitialProps I fetch the list of Products and pass it on to the components and it works.
There is a requirement: whenever the city changes (from a dropdown) on the client side, I need to refetch the updated list of Products from the API. Hence, I have this API call in a useEffect. This is roughly the idea:
ProductsWrapper.getInitialProps = async () => {
// This runs on Server Side
// fetch list of Products
const products = await getProducts();
// update the redux store with products returned from server by dispatching
return { products };
}
// Products is a redux-connected component that listens for Product and City state updates
function Products({ products, city }) {
  // fetch new Products iff city changes
  useEffect(() => {
    // This runs only on the Browser. The effect callback itself must not be
    // async (React expects it to return a cleanup function, not a Promise),
    // so the awaited call lives in an inner function.
    async function refetch() {
      const products = await getProducts();
      // update the redux store with products returned from the API
    }
    refetch();
  }, [city]);
}
The problem is at step 3 below,
At the server Products API call is made and fed to Redux.
The Page is built at the server-side and sent over to the client.
At the Client, useEffect runs and unnecessarily calls the API endpoint again.
How do I prevent this redundant call the first time the page is rendered on the client?
Well, a simple if check inside useEffect for whether data is already present doesn't work: even though the Products data is in the Redux store, the API call still has to be made whenever the city changes.
how are you able to use await without async?
@Ifaruki Sorry missed that was a typo
Can you call getProducts in the city change handler?
@AryanJ-NYC Even if we do that, the problem would still happen, because the root issue is:
If the same network request is made in both getInitialProps and useEffect, a second redundant network request is bound to happen because useEffect runs on the client.
Do you agree?
I think useEffect is not the correct way to handle it.
You need to fetch data when that event happens. (City change)
That'll fix the issue.
The redundant call is due to the useEffect usage.
A better alternative, getServerSideProps, is available for this use case: https://nextjs.org/docs/basic-features/data-fetching#getserversideprops-server-side-rendering
This makes sure that the given fetch call is only run on the server.
After this, if you want to refetch when the dropdown changes, you can either call the API again in the onChange handler or route to a different URL (e.g. /city/foobar); a sketch follows below.
In both cases there shouldn't be any redundant call on the client, as we did not use the useEffect hook.
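For what it's worth, a minimal sketch of that suggestion, assuming a getProducts(city) fetcher like the asker's (the module path and component details are placeholders, not their real code):

// pages/products.js
import { useState } from 'react';
import { getProducts } from '../lib/api'; // hypothetical fetcher module

// Runs only on the server, so the client never repeats this call on first render
export async function getServerSideProps() {
  const products = await getProducts();
  return { props: { products } };
}

export default function Products({ products: initialProducts }) {
  const [products, setProducts] = useState(initialProducts);

  // Fetch again only when the user actually changes the city (no useEffect involved)
  async function handleCityChange(city) {
    setProducts(await getProducts(city));
  }

  return (
    <div>
      <select onChange={(e) => handleCityChange(e.target.value)}>
        {/* ...city options... */}
      </select>
      {/* ...render the products list... */}
    </div>
  );
}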
|
STACK_EXCHANGE
|
variable $index not found
After updating to version 2.0.0, the variable $index of each has been replaced by the variable $item?
And now how can I do this?
<div each="todo in todos" onclick="{data.setIndex($item || $index)}">
</div>
I have found this approach:
onclick="{() => data.setIndex($item)}"
but when this compiles to JS, the result is wrong if the browser doesn't support arrow functions.
Hi there.
Yes version 1 used to wrap the on handlers in a function (it also called e.preventDefault for you too). I dropped this for a number of different reasons, not least because it created a new function each patch.
The reason $value, $item and $target were chosen was because they made the most sense given you could be iterating over an Array, Object or Map. $index doesn't really work for Object and Map.
The function wrapper is no longer there so you have to either do it yourself inline:
<div each="todo in todos" onclick="{ function (e) { data.setIndex($item || $index)} }">
</div>
or call a function defined on your model:
<div each="todo in todos" onclick="{model.onClick}">
</div>
var model = {
onClick: function () {...}
}
patch(el, view, model)
FWIW, while I prefer this new way because it makes fewer assumptions, only this weekend I was thinking I may have stripped it back too much. I may reconsider the approach again (ideas are welcome!).
Do this help you for the time being?
Nice, that works for me for the time being. I think if the user wants the same effect as the old way, it could be made available as an alias, like this:
<span click.trigger="{model.setIndexOrAnyThing($item)}"></span>
In the future there could also be a "click.delegate" that passes the event to the parent DOM element, for better memory use.
Hi Alex,
So I've been thinking about this over the last few days. I'm still not settled on anything, one idea I've had is:
<input on="{ change: fn, keyup: fn1}" onlick="{fn3}">
The onclick handler would be a straightforward DOM event handler where the this context is the element.
A new special on attribute would take an object map of handlers and would wrap the functions, like it did in version 1, additionally calling e.preventDefault().
Delegated events would be nice too, although would probably require a third party lib e.g. ftdomdelegate
If this will be translated to
<input onchange="function(e){ e.preventDefault(); fn($item) }">
then that's good for me,
but how could we do:
<input on="{ change: (fn($item); data.index=$item) }">
?
because in the older version we could do this:
<form onsubmit="{data.sendItem();data.text=''}">
You're right. The 'on' way I described would not give us what we used to have.
I'm tempted to just go back to how it was (but without the e.preventDefault() call):
<input onchange="{data.value = this.value}">
Compiles to
<input onchange="function ($event) { data.value = this.value }">
Would you be happy with this?
For me that is perfect, because I can write onclick="$event.preventDefault(); data.value=this.value; fnsetindex($item)" when I need to call preventDefault. But if that is harming you or complicating the implementation, we can keep discussing it.
OK - I've made the changes and released as 3.0.0 on npm.
We can continue this discussion but for now I think reverting to the old way (without e.preventDefault()) is the best thing.
great thanks.
|
GITHUB_ARCHIVE
|
Excel VBA Function Method API Windows Function User32.dll Alias Declare Library List .. thingies
I am sorry if the Thread title does not quite match what it is that I want. As usual I am not quite sure myself what it is that I am talking about….
Maybe an example will help get across which List I am trying to get hold of.
In Excel VBA there is a message box pop up thingy MsgBox, the VBA Message Box Function ( https://msdn.microsoft.com/en-us/vba...sgbox-function )
This apparently uses a “Windows API software code thingy”, ( "MessageBoxA" )
Yesterday I found out that it is quite easy to “use that standard code more directly”. All you need is a simple single code line like the one sketched below.
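The code box in the original post is hidden behind the forum login, so here is a sketch of the sort of Declare line being described; the alias name is taken from the post, everything else is the standard MessageBoxA signature from user32.dll:

' A sketch (not the hidden original): declare MessageBoxA from user32.dll
' under the name used in the post. On 64-bit Office you would write
' "Declare PtrSafe Function" and use "ByVal hWnd As LongPtr" instead.
Public Declare Function APIssinUserDLL_MsgBox Lib "user32" Alias "MessageBoxA" _
    (ByVal hWnd As Long, ByVal lpText As String, ByVal lpCaption As String, ByVal wType As Long) As Long

' Used much like the built-in MsgBox:
' APIssinUserDLL_MsgBox 0, "Hello from the API", "Demo", vbOKOnly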
After adding that line at the top of a code module, you use the “APIssinUserDLL_MsgBox” in VBA codes very similarly to how you use the VBA Message Box Function, MsgBox https://www.excelforum.com/developme...ml#post4822070 .
So I want a good list of those Public Declare thingy code lines
So Ideally what I would like is a list of Excel Methods and Functions and alongside each Method or Function the “ Excel VBA Function Method API Windows Function User32.dll Alias Declare Library “ thingy code line.
If, in addition, I can get an explanation of all the parameter arguments as well then so much the better.
Currently, for example, I am trying to find the “ Excel VBA Function Method API Windows Function User32.dll Alias Declare Library “ stuff for the VBA Application Input Box Method ( https://msdn.microsoft.com/en-us/vba...x-method-excel ), ( and possibly the VBA Input Box Function ( https://msdn.microsoft.com/en-us/vba...utbox-function )
Possibly, they use the same “ Excel VBA Function Method API Windows Function User32.dll Alias Declare Library “ thingy. I don’t know
I have also asked the question in this Windows Forum here http://www.eileenslounge.com/viewforum.php?f=18 , as maybe it is a Windows thingy . But I am mainly interested in using those API Windows Function User32.dll Declare Library Alias thingys as they apply to Excel VBA Methods and Functions.
P.s. If anyone knows what it is that I am talking about, could they possibly explain it to me in simple terms what that is.
My guess is that I am talking about… a “Declaring” code line that gives my code access to a set of standard programs shipped with Windows that are available for use in various things, such as Excel, Word, Access etc.
|
OPCFW_CODE
|
Change the SEGA Sound
From Sonic Retro
This guide targets the Hivebrain 2005 disassembly.
Changing the SEGA sound isn't as simple as renaming an .mp3 to a .pcm; there is a lot more to it. But it can be replaced, and this guide will show you how to change the SEGA chant in Sonic 1.
The tools you will need are as follows:
An emulator (GENS, Kega Fusion, BlastEm)
Audacity (used below to prepare the sound)
Make the SEGA sound as high quality as possible
We need to make the SEGA sound as high quality as possible to continue with this tutorial, otherwise there'll be some Z80 format editing, and that's as complicated as can be. We don't want to get into that. Now, to make the SEGA chant slightly higher quality, follow this tutorial. Now open up your sonic1.asm file, and go to PlayPCM_Loop:. You should find this line:
move.w #$14,d0 ; Write the pitch ($14 in this case) to d0
Change the $14 in the line to be $01. Now our SEGA sound will play at 27,025 Hz. The line should now look like this:
move.w #$01,d0 ; Write the pitch ($01 in this case) to d0
Head over to Audacity and grab a sound you like. For me, I'm using the Sonic CD extra life voice clip of Sonic saying "Yes!" Now, make sure these specifications are active: the project sample rate should be 27,025 Hz. The actual audio sample should be converted to 27,025 Hz, and then converted back to 44.1 kHz or 48 kHz after you've made your changes. You have to convert it back, otherwise your audio will be very slow. Make sure that the audio is monophonic.
Now, save the file as follows: Set the file type to "Other uncompressed files" and set the file format as .RAW. Make sure the header is RAW and that it's header-less. Make sure that the encoding is an unsigned 8-bit PCM.
Save the file into the sound folder of the disassembly. Now to change the code to actually play your custom sound.
Making Sonic 1 play the sound
Head to the end of "sonic1.asm" and you should find this somewhere:
SegaPCM: incbin sound\segapcm.bin
Replace segapcm.bin with whatever your file name is (an example follows below). And bravo! You should have your new SEGA sound. Ciao!
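For example, if you exported your clip as yes.raw (a made-up name), the line would become:
SegaPCM: incbin sound\yes.raw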
Written by XPointZPoint
|
OPCFW_CODE
|
The jobs do exist, but I've no idea how common they are
I am in the US, but I work for an international company for which the UK members have a strong leadership presence. My boss' boss is in the UK.
I'm a generalist. I know something about a lot of different things, can use that to solve lots of problems, or create lots of solutions. And I've got a job where that's basically what I do professionally, where the breadth of my skills is basically specifically why I'm valued, and I'm paid very well. I've been where I'm at for some time, though, so I can't speak to how easy it is to find a job like mine, and it's something I do worry about should this job disappear or become unsavory. I *can* tell you my team wishes we could find more people with a breadth of skills.
Where I fit in best is in a place where specialists exist in their own silos. You have developers, DBAs, sysadmins, storage teams, and networking gurus. In places that divide specialties up like that, you often benefit from someone who is a bit like a business analyst, except instead of being the interface between developers and customers, they face the other direction, interfacing between developers and infrastructure / middleware.
What we find is that the developers often are wildly ignorant of the implications of the system's (virtual) physical design. The infrastructure teams often have no time to learn the ins and outs of the applications, in order to tune their systems for them. I help the developers create systems that won't be rubbish on the basis of the systems on which they run, and help the infrastructure folks design hardware that won't be rubbish for the needs of the application.
The challenge is in finding an organization that values this role. Not everyone does, and that's clear even within my company. What seems to make the tuning and problem solving skills valuable to people is when they're strapped for budget and they need to expand their system or make their existing scale of system run better. Tuning things can increase concurrent users on existing footprint or reduce infrastructure for same performance. And even in a cloudy context, the ability to achieve those things can be valued. But I fear that may be rare.
I would never do project management. It has nothing to do with why I'm in IT, and requires primarily the exercise of people skills, not technical ones. If I lost this position, I would look for a job as a systems architect - someone who looks at the big picture of software, infrastructure, APIs and whatnot and assembles it into a solution. I see a PM as someone who drives all the people involved to implement that vision. I would want to be the person creating the vision itself.
|
OPCFW_CODE
|
This is a bot programming game written in Java. You are required to write a program that controls the player in a virtual environment, making the bot move, scan the arena for enemies, throw bombs, etc. Two such programs are made to fight against each other.
A sound toy for the Nintendo DS
A sound toy for the Nintendo DS. The toy is a simple loop based sample sequencer with an original and intuitive visual representation. The user uses an intuitive touch screen interface to manipulate moving spheres therefore manipulating sounds.
Clone of popular Polish speedway manager "Menadżer Żużlowy".
The Wii Homebrew Installer is used to install Wii Homebrew applications on the SD card of the Nintendo Wii. The application to install can be downloaded from the internet or taken from a local file system by the Wii Homebrew Installer.
This isn't just cheats for New Super Mario Bros Wii But a whole lot more. With cheats both Orcania and Hints. A well worded and written walk-through, screen-shots to see if the game would suit you or just to help you.
Engines for Batch Games
WARNING 1: project abandoned! (I'll come back some day) WARNING 2: the project website is abandoned. Seta Engine is a set of engines for batch games. Now you can program your own games in Notepad as batch scripts, with sound and colours! Please use the project web page to get a complete list of released sample games. Get help and a usage guide: https://sourceforge.net/p/seta-engine/wiki/Home/
Dark Bounce is a "Pong" remake.
Dark Bounce is a modern Pong remake.
Help the Angry German Kid defeat Jovi before she takes over the world! Based on the video "Angry German Kid vs Jovi Part 1" (http://www.youtube.com/watch?v=nGXuupoI5Io) now you can act as Leopold and save the world!
The Lost Amulet is a text-based RPG similar in style and plot to The Colossal Cave Adventure. You must enter the ruins of a castle to retrieve a lost artifact, the Amulet of Vigour.
This project exists so that people with an interest in old games, or other motivations, can create their own text-based games.
A desktop application to help the formulation and communication of tactics for online multiplayer games such as Call of Duty, Killzone, etc.
A development IDE for the Lua programming language, used for programming on the Sony PSP platform.
Retail Blockland is crashing the download sites of the other versions of Blockland unless they make it a mod for retail, so I'm going to save the free version in an attempt to keep it alive. We already lost RTB and TBM/DTB, but I'm going to bring them back.
A Comical Text Adventure Starring Forge Sourcer
Forge Sorcer is on a mission to forge the source. He must find a wife, get married and start a family before he turns 30. That's next year!!!
This project is for the Fort Myers High School Technology Student Association club, and is a part of the Video Game division. We will be creating a video game from scratch and entering it into a contest, with the best hopes of winning :)
Roguelike under active development.
Coffee-break roguelike in active development, with an emphasis on long gameplay per single character.
Invaders like game, written in Python with Curses... :)
Invaders like game, written in Python with Curses... :) Also, this was a learning project for me to Python language.
LGame is a text-based console RPG. Version 0.9 beta is the first release of this game. It will, however, be actively updated and expanded.
BossFight is a console-based boss fight game based on random chance.
RogueLike game for Mystic BBS v1.10+
Doctor Who themed multi-player roguelike game for Mystic BBS v1.10+
An all inclusive halo editor/creator, will be mostly targeted toward halo1 until that segment has reached a late stage of development. After that it will be on to halo2 and 3. All are welcome to contribute please use Visual C# 2008.
Wlag 360 was created as an alternative to hardware networking modifications. This program replaces the need for a switch added to a wired connection between an Xbox 360 console and the networking router of your home.
Simple Console Arcanoid
Console Arcanoid written for Windows. In the future I want to make it cross-platform. The current version is not the best code that could be written, so I'll modify it some time later.
An Interactive Fiction set in a dark future.
Struggling to survive after a nuclear holocaust, you find your self trapped in the Post Apocalyptic Circus.
Port of old SDL 1.2 to gamecube.
I am not the author of this; the sources are scattered around the Net. SDL and SDL_image come from infact's cubeSDL: https://bitbucket.org/infact/cubesdl SDL_mixer and smpeg come from sdl-wii: http://code.google.com/p/sdl-wii/
|
OPCFW_CODE
|
Upgrading to IE 6.0, boy did I screw up
01-01-2002, 03:19 PM
Well, after hearing that Micro$oft finally worked out the bugs in IE 6.0 and a few buddies said it was working fine on their puters, I finally upgraded from 5.5 to 6.0. What a mistake.
I've lost the ability to use the start button to select programs and open them there. On my links toolbar, I cannot use the drop down window to select any links. IE 6.0 crashes almost daily on me. Using my Windows Explorer now also crashes daily.
As soon as I figure out how to cure this mess I'll post the answers or I may even go back to IE 5.5.
Anyone else have any problems with IE 6.0?
01-01-2002, 04:56 PM
I have been using IE 6.0 since it came out. Have not had any problems yet. I am running Win 98se. As you probably know, with Win 95 and Win 98, after a while it is best to blow away your entire operating system and reload it. Just be sure to save your stuff.
03-07-2002, 09:14 PM
A simple fix for anyone who wants to go back to whatever version of IE they had before upgrading.......go to "add/remove programs" and highlight Internet Explorer and click remove. It will NOT uninstall it when you do this, but will present you with a prompt that gives you 3 options. One of the options is "restore the previous windows configuration". Select that and it will re-install your previous version of Internet Explorer.
03-08-2002, 10:16 AM
Thanks for the headsup on the IE 6.0. I had been thinking about it. I am running IE 5.5 and it is really a solid system. If I do make a mistake and do something foolish like trying new software that doesn't work, I can completely return my system to its current configuration. I have Norton's Ghost. I got a copy of Norton System Works Professional for $11 and it has Ghost on it. I have been able to write my complete system off onto CD's and can get back to my current system from a blank HD in about 1.5 hours. It is a comfortable feeling!
Good back-ups from Varmint Al
|
OPCFW_CODE
|
entering an address by sending it from the web to the car [11].

All of these connected functions can potentially be exploited by UE for research needs. For example, variants of the software can be sent to target user populations for either their subjective feedback or to generate "A/B" test cases. A population's configuration settings can be analyzed to determine the most and least popular setting changes.

Most important, the software development organization can rapidly update the product during usability testing and then observe users for their reactions to the device and its new software.
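As a concrete companion to the configuration-settings idea above, here is a minimal Python sketch; the event records and field names are assumptions for illustration, not any product's actual schema:

from collections import Counter

# Assumed back-haul records: one dict per setting-change event.
events = [
    {"user": "u1", "setting": "units", "value": "metric"},
    {"user": "u2", "setting": "units", "value": "metric"},
    {"user": "u2", "setting": "voice_guidance", "value": "off"},
    {"user": "u3", "setting": "brightness", "value": "auto"},
]

changes = Counter(event["setting"] for event in events)
print("Most popular change:", changes.most_common(1))     # [('units', 2)]
print("Least popular change:", changes.most_common()[-1])  # ('brightness', 1)

The same tally, run per user population or per release, is what lets the analysis distinguish a genuinely popular setting change from a vocal minority.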
Many Groups Use Back-haul Data
User experience is hardly the only group motivated to scoop up back-haul data. Back-haul data collection is essential for other functions to fulfill their missions.

The ability for UE to understand, leverage, and champion these other needs will maximize its influence over the form that back-haul infrastructure takes. In particular, UE needs to realize its stake in the design of these infrastructure components when initial implementations are sketched. A common mistake for typical small UE teams is not seeing the potential at early stages.
For example, the operations group will require usage data for authentication and billing. IT needs aggregate usage metrics that trigger alarms when usage levels drop, whether due to instability in its own infrastructure or to service-provider outages. Support needs some form of back-haul data to understand customer issues more efficiently, such as versioning, configuration settings, stack traces [12], and prior usage. QA values back-haul data on common performance metrics, such as "time until first GPS fix" and "time to first network connection," as well as on system performance and stability, so that it can better assess release readiness. Finally, everyone wants some method to "file a problem report" from the device to reduce the time to file, reproduce, and fix bugs.
Issues Unique to UE Needs

The UE group has needs distinct from the other functional groups. Both QA and operations, by and large, are more concerned with aggregate numbers than an individual's experience. Their numbers may say the product is performing as specified, but they don't indicate if the users are actually happy with that level of performance.

For UE design, it is beneficial to triage use cases into buckets such as "frequent for all," "frequent for some," or "infrequent for all." In order to do this, the data source must maintain a marker of individuality along with the data, so the data analyst can slice and dice the data to discover such relationships.

In addition, the UE group desires the ability to run longitudinal queries, particularly the ability to see that new user features lead to perceptible user benefits. This requires the UE group itself to create and maintain over time a database of high-level user events in a normalized format.
In order to exploit the back-haul data, central sampling issues such as who, what, and when (how frequently) need addressing.

Behavioral logging benefits from the ability to understand and filter the results through the lens of various user-grouping mechanisms. For example, stakeholders will question unwelcome results because various outlier groups may have skewed the data.

Invariably, both business and operational groups have an interest in creating user segmentation; this functionality is thus almost certainly available to the UE group. However, the grouping mechanisms will be driven by customer purchasing segments, and this in turn can constrain an experiment's design.
Problem reports. Since users have problems, they need a mechanism to explain what those problems are: the problem report. There are two different styles:

• Mostly passive, where the problem report is created on behalf of the users. They just need to submit the report.
• Mostly active, where the user initiates the problem report.

Both are, of course, useful. But for devices, the active report is particularly useful, as it is often very difficult to create bug reports independent of a complex environment that only the user fully understands.
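Drawing on the support needs listed earlier (versioning, configuration settings, stack traces, prior usage), a "mostly passive" report might be assembled like the following hedged sketch; every field name here is an assumption rather than any particular product's format:

import json, platform, traceback

def build_problem_report(user_comment, recent_events, config):
    """Pre-fill a report on the user's behalf; the user only has to submit it."""
    return json.dumps({
        "version": "1.4.2",                     # placeholder build identifier
        "os": platform.platform(),
        "config": config,                       # current configuration settings
        "stack_trace": traceback.format_exc(),  # most recent exception, if any
        "prior_usage": recent_events[-20:],     # tail of a recent-event buffer
        "comment": user_comment,
    })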
The work required is not just the data transmission and recording. In implementing the problem report feature, I recommend budgeting a fair amount of time for implementing tools that facilitate…

[11] MyDash, Send2Car.
[12] Wikipedia, "Windows Error Reporting (WER)."
|
OPCFW_CODE
|
the list so far.... [Completed]
chthon at chthon-uk.com
Tue Jun 11 08:37:19 PDT 2002
zaphod is cool (hitchhikers guide to the galaxy)
the naming is now complete!!!
woohoo! thanks for the help guys averyone who has helped is named below
286 - Terminal - Me
486 #1 - Wyldkat - Me
486 #2 - Haloumi - this one (part) named by sven Hartenstein
486 #3 - archaic - this one named by archaic
486 #4 - moksha - this one named by Nicholas Dille
486 #5 - turiya - this one named by Nicholas Dille
Pentium1 #1 - Akasha - Me
Pentium1 #2 - Monty - Me
Pentium1 #3 - python - this one named by Matt M
Pentium1 #4 - grackle - this one named by "S. Bougerolle"
Pentium1 #5 - azaad - this one named by Tushar Teredesai
Pentium1 #6 - zaphod - this one named by Timothy Bauscher
Pentium1 #7 - turiya - this one named by Nicholas Dille
Pentium2 #1 - perl - this one named by Matt M
Pentium2 #2 - linus - inspired by Johannes Berth
Pentium3 #1 - Darkstar - Me
Pentium3 #2 - Pollstar - Me
"Timothy Bauscher" <timothy at linuxfromscratch.org> wrote in message
news:20020611141024.GA29676 at shadowfax...
> On Tue, Jun 11, 2002 at 01:39:55PM +0100, James Iwanek wrote:
> > just 4 to go guys - keep em coming
> Bah, did you not like any of my suggestions? I like the
> idea of naming computers Null and Void. Perhaps Kryptonite?
> My favorite computer is named Marvin, and my server (which
> has two hard drives) is named, appropriately enough, Zaphod.
> -*- "Share and Enjoy" || "Go stick your head in a pig" -*-
> Unsubscribe: send email to listar at linuxfromscratch.org
> and put 'unsubscribe lfs-chat' in the subject header of the message
Unsubscribe: send email to listar at linuxfromscratch.org
and put 'unsubscribe lfs-chat' in the subject header of the message
More information about the lfs-chat
|
OPCFW_CODE
|
Programming Assignment Help: Things To Know Before You Buy
"Very pleased with the C++ programming assignment tutoring received from you guys. The professor was incredibly impressed with the quality of the documentation, and the grades speak for themselves. The best part is that the price is just right for us college kids." - Peter Gordon, Aug 2016
AHT provides an atmosphere where people can leverage their abilities, skills, experience, and interests to grow professionally and build satisfying careers.
Computer programming languages are challenging assignments to deal with. I am glad that I found HelpWithAssignment and their online C++ programming assignment help expert.
R is an integrated suite of software facilities for data manipulation, computation, and visual display. It contains
Assignment help is a kind of service in which our experts provide students with assignment assistance in the best possible manner, making certain to comply with all the requirements for completing the assignment.
Assignment help is one of the typical services that students look for in all schools and universities, and the UK is no exception. More often than not, students cannot manage their time because of heavy coursework. In their view, if they can submit home projects within the given time, they can raise their rate of work compared with the rest of the world. They need to be very alert and produce high-quality work in order to earn high grades while finishing the course, given the large number of different tasks.
We can handle a wide range of programming languages and systems. We can complete tasks and assignments in almost all major programming and scripting languages, including C and high-level languages like Python and Perl. Our programming homework services cover the following programming languages:
But sometimes it happens that you could do your computer programming homework yourself, yet you have exams and class tests that are more important than the homework.
Not having enough awareness and subject knowledge are the main reasons students seek programming assignment help from specialists in Australia.
R programming takes time, and therefore many students turn to R programming homework help to save time and meet strict submission due dates.
"I received virtually immediate replies to all my emails, and he was more than willing to correct any mistakes there might have been or answer any questions I might have had. Helping people with programming is his job, and he takes it very seriously and does a magnificent job at it. I cannot thank him enough for all the help he has given me."
For example, an Objective-C programming tutor is highly desirable under the given circumstances. Still, the price tags of our services are very reasonable, and they get even more cost-effective given our 100% satisfaction guarantee.
Companies ask for our help with Microsoft Office-based projects where knowledge of VBA or PowerShell is critical. A wide variety of services are available, from electronics programming to web or Matlab homework help online. Our programming services may be grouped as follows:
Our experienced developers have not yet faced a task they cannot complete. We are ready for difficult coding if you have some. Working with our team you will get 24/7 customer service and support, you will be able to monitor progress, and you will get your work done quickly and for a reasonable price.
|
OPCFW_CODE
|
Citrix and Microsoft, already close collaborators on many
virtualization-based initiatives, ratcheted up their relationship Monday with
the announcement of Citrix Essentials, a group of technologies calculated to
shoehorn their way into the enterprise datacenters now dominated by VMware.
VMLogix has partnered with Citrix to shore up its brand-new offering, adding lab management and automation capabilities to Citrix Essentials.
Microsoft and its hardware partners on Monday rolled out preconfigured data warehouse reference architectures that incorporate Microsoft SQL Server 2008.
Another vendor has thrown its hat into the enterprise virtualization ring. This one is red.
Microsoft and Red Hat inked a deal to ensure that Windows operating systems and Red Hat Enterprise Linux OSes can run as virtual machines on each other's platforms.
Emmett gives Red Hat's Virt-Manager high marks after taking it for a spin on Fedora 7. Here's a rundown of how to set it up on your own machine.
- By Emmett Dulaney
Unified communications (UC) has emerged from the Internet cloud thanks to a partnership that brings together LightEdge Solutions, BroadSoft and Microsoft
- By Jim Barthold
A weakened economy will serve as a catalyst to push enterprises from on-premise computing to accessing services over the Internet cloud, according to Microsoft exec Doug Hauser, who delivered an address on Wednesday at the Thomas Weisel 2009 Technology and Telecom Conference.
- By Jim Barthold
EMC and Microsoft formally renewed a partnership on Tuesday, extending their collaborative efforts on virtualization, content management and security solutions for the enterprise market through 2011.
Microsoft on Tuesday rolled out its "launch experience" of Office Communications Server 2007 (OCS) Release 2, a unified communications (UC) product first unveiled in October.
- By Jim Barthold
Xenocode, the Seattle-based maker of virtualization tools, today released the latest version of its Virtual Application Studio, a developer-focused authoring environment for virtualizing existing Windows-based applications.
- By John K. Waters
Joern shows how to design a virtual infrastructure with security in mind.
- By Joern Wettern
One bright spot in the IT economy appears to be virtualization, with leading provider VMware today announcing that its yearly revenue rose 42 percent in 2008 to $1.9 billion. Fourth quarter revenues were a "solid" $515 million -- up 25 percent over last year.
Citrix Systems has teamed with Intel Corp. to provide virtualization technology for Intel-based desktop computers.
- By Rutrell Yasin
A new report from security consultancy AppRiver confirms what many of us have long expected: Spammers are becoming both savvier and sneakier.
- By Stephen Swoyer
Microsoft released a beta of Microsoft Enterprise Desktop Virtualization (MED-V), which lets users run older OSes and applications on newer Windows systems.
New global research forecasts a swift rise in development plans for Software as a Service applications in the next 12 months.
- By Kathleen Richards
The beta of Windows Server 2008 R2 is currently available for download by TechNet Plus and Microsoft Developer Network (MSDN) subscribers, as well as by technology enthusiasts.
The beta release of the update to Windows Server 2008 will be of special interest to those who work with virtualization, as it unveils a key technology long-promised by Microsoft.
Application lifecycle management tools maker Borland Software CEO Tod Nielsen has left the company to join VMware as chief operating officer.
- By Michael Desmond
|
OPCFW_CODE
|
Not quite polished yet (and it needs filtering to go with the sorting), but I’m happy with where it’s going!
Hey @Cathy_Sarisky, I’m keen to learn more about what you’re doing here. Is this some kind of integration you’re building, or something else?
It’s sort of a ‘sidecar’ for the regular Ghost admin, that allows bulk actions on posts (i.e. add/remove a tag, delete, etc). Not quite polished yet, but working on it.
OK, Phantom Admin is ready for some users! It’s not super pretty, but it’s functional, and @BruceInLouisville says it’s made it much easier to deal with 4000+ posts. (Big kudos to Bruce for paying for the development and then encouraging me to sell it to you! His site, https://forwardky.com is an awesome example of what you can build on Ghost, and you should check it out.)
All the details:
Phantom Admin is an add-on admin panel for Ghost, built to make it easier to work with large numbers of posts, sort and filter, and perform bulk actions.
It installs as a custom page in any theme and requires no changes to core. If you can unzip a theme, copy a file into it, and rezip it, you can run Phantom Admin.
Important: With great power comes great responsibility. Back up your content regularly, and certainly before using the bulk deletion tool. (Deletion is irreversible once you click confirm.)
Installation: Download your theme file and add the custom-phantom-admin.hbs file to it. (Alternately, if you have shell access on the server, just drop the file on the server in the active theme folder and then reload Ghost.) Create a page, and select the Phantom Admin template type from the right side menu. Note the address of that page. Visit your admin page (at https://yoursite.tld/ghost) to make sure you have a fresh admin cookie, then navigate over to your new page.
Set-up: Check that Phantom Admin has your admin URL correct - adjust if necessary. (For a typical install, you need only the domain, no /ghost/.) Choose ‘posts’ or ‘page’, then click the setup button. Phantom Admin will check that it has API access, and set up the filtering options.
Filter & retrieve records: Select one or more filter terms and the number of posts/pages you want, then click the button to retrieve posts/pages.
Sort or page through: You can click the headers to sort records, or use the “next” and “previous” buttons to scroll through them. (Note that selections don’t persist from page to page, so take any actions needed before moving to the next page.)
Do your thing! You can select all posts or click individual posts, edit individually or in bulk. Need to retag a bunch of posts? No problem! Bulk publish? Sure thing! Delete a bunch of drafts you made while testing an integration? You betcha, all in two clicks.
Is it beautiful? Well, no, not yet. But it’s super functional.
Version 0.9 is tested and very functional, but not yet super pretty. Purchases include a year of updates and bug fixes. You can buy it here: Introducing Phantom Admin
This is really cool. Would it be possible to filter based on titles and text within posts?
That’s an interesting question. Currently, filtering is server-side, using the API, so if you have 5000 posts but want the three with a particular tag, you can get those three from the server pretty quickly. Searching for a word in the post body would mean grabbing all the posts and looking for the word. I suspect it’d be noticeably slow for the number of posts that would cause someone to want it, but it would certainly be faster than looking through them yourself!
What’s the use case?
I think a better way to support searching by body content might be to leverage something like Algolia (which sites might be using anyway). Things to think about!
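For context on the server-side filtering mentioned above, a tag filter can be expressed in a single Ghost Content API request. This is a rough sketch, not Phantom Admin's actual code; the site URL, API key, and tag slug are placeholders:

import requests

# Placeholder site URL, Content API key, and tag slug.
url = "https://yoursite.tld/ghost/api/content/posts/"
params = {"key": "YOUR_CONTENT_API_KEY", "filter": "tag:some-tag", "limit": "all"}

posts = requests.get(url, params=params).json()["posts"]
print(len(posts), "posts carry the tag")  # only matching posts are returned

Because the filtering happens on the server, only the matching posts cross the wire, which is why it stays fast even on sites with thousands of posts.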
Want to jump in here and recommend both the new tool and the developer. Cathy was easy to work with, does good work, and is up front about costs and timing. If you have a custom project, she is one of the Ghost developers I would recommend, along with Eric from Layered Craft (firstname.lastname@example.org).
As for the tool – due to some import issues on my site, I wound up with dozens, even hundreds, of duplicated posts. The thought of opening each one to delete it, one by one, made me crazy. So, I asked for a tool, and Cathy delivered.
As she has said, it is not necessarily pretty, but it IS quite functional. It contains a number of useful tools, including bulk delete, bulk status change, and bulk tag management. It also has a number of “Are you sure?” dialogs to try to keep you from doing something stupid. And, even on my site with about 4,000 posts, it is pretty quick (1-3 second response time if scanning the entire set of posts; much quicker if less).
If you need this functionality – especially if you REALLY need this functionality – get this tool.
Actually (clearly I’m still thinking about it), given that sodo-search (Ghost native search) is ‘fast enough’ for searching post titles and excerpts, that makes me think that at least searching titles/excerpts would be ‘fast enough’, too, using a similar strategy to the native search implementation. And for that matter, if you really really needed to know where some post is based on some word that’s not in the title, you might settle for it taking a few seconds, even though that wouldn’t be performant enough for website search with users.
|
OPCFW_CODE
|
Devlog #67: Moving on with COVID-19 all around
Man, these are unpredictable times. COVID-19 surely hit the world hard.
Due to the outbreak, Polish citizens are now asked to stay home. Schools, cinemas and many other businesses have been closed. My wife is working as a photographer and all her sessions have been canceled. Luckily for us, I am able to perform my day job fully remotely.
Still, life goes on. I am still working on Shardpunk mostly during the evenings, so not much has changed here. And because the children are at home all the time, we even have more opportunities to spend quality time together. It's a good thing that I bought that PS4 earlier this year! :)
The Digital Dragons event - which was the main reason I decided to publish the 2nd demo, despite the tactical layer not being finished - has been postponed to September. The deadline for submissions to the Indie Showcase Awards has been moved to somewhere around May/June. This gives me time to polish the game and/or add more features.
This whole situation reinforces something I was already aware of when I decided to start creating "Shardpunk": you should never wait for a "good time" to start working on your project. Things around you will be changing all the time (be it for the worse or for the better), so the best thing you can do is just be persistent and carry on with your work.
And that's what I am trying to do.
So here's the latest update for Shardpunk:
I am getting very positive user feedback for the 2nd demo (by the way: if you haven't played it, go grab it now). It reached 100 downloads after 15 days; the 1st demo was downloaded 380 times in total over the span of 4 months.
The suggestions and remarks posted by the players are truly remarkable. It is an awesome feeling that there are people out there who like the game and were willing to spend their time posting comments and gameplay suggestions. Stuff like this keeps me going!
Also, a message to the people who have filled in the post-game survey: worry not, I will surely respond to each and every entry!
Oh, and I'm proud that Shardpunk now has an article on the pixelpost page!
Even though I know it's not a fair comparison, it does feel great to be featured next to Doom Eternal :)
Anyway, let's talk about plans: I have around 2 months (maybe a little more) before the new Indie Showcase Awards deadline. I can either submit the same demo there (possibly with a few minor fixes) or try to create a 3rd demo with more features. I will shoot for the latter and we'll see how it goes.
So, the next tech demo will be about introducing stress and shelter mechanics.
Stress is now a stat that the player needs to be aware of.
Yup, that pink stress bar is new.
If the stress goes high enough, the character receives a "stressed" trait, which applies a percentage to-hit penalty. Also, if the stress remains high, there's a probability that the character will receive a negative quirk. To keep this balanced, receiving a negative quirk is currently allowed only once a day.
Characters gain stress each time they receive damage, or when any of their allies die. Stress can be reduced by using stimpaks and by killing enemies. The exact amounts of stress lost/gained will have to be defined later, once I have the whole gameplay loop in place.
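As a rough sketch of the mechanic described above (all numbers are placeholders, since the exact amounts are still to be tuned), the stress stat might be modeled like this:

import random

STRESS_TRAIT_THRESHOLD = 80  # placeholder tuning value

class Character:
    def __init__(self):
        self.stress = 0
        self.stressed = False          # "stressed" trait: to-hit penalty
        self.quirk_rolled_today = False

    def add_stress(self, amount):
        self.stress = min(100, self.stress + amount)
        if self.stress >= STRESS_TRAIT_THRESHOLD:
            self.stressed = True
            # Staying highly stressed may inflict a negative quirk,
            # but only once per day to keep things balanced.
            if not self.quirk_rolled_today and random.random() < 0.25:
                self.quirk_rolled_today = True
                self.gain_negative_quirk()

    def relieve_stress(self, amount):  # stimpak used or enemy killed
        self.stress = max(0, self.stress - amount)

    def gain_negative_quirk(self):
        pass  # placeholder for quirk selection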
With these stress basics in place, I am ready to start working on the shelter mechanics to finalize the gameplay loop. Hopefully, I'll be able to write more about this in the next devlog entry.
Take care! And enjoy the explosion gif!
|
OPCFW_CODE
|
This is retired content. This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This content may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
When a filter's input pin receives a sample, the filter processes the data in the sample and then delivers the sample to the next downstream filter. As the previous section described, the upstream filter calls the IMemInputPin::Receive method on the downstream input pin, passing it a pointer to the sample; the filter can do all of its processing while inside Receive, or it can hold the sample and process it afterward.
Within the call to Receive, the input pin has the option of blocking the upstream filter's calling thread. You can query the input pin's behavior by calling the IMemInputPin::ReceiveCanBlock method. If the return value is S_OK, the pin might block on a call to Receive. If the return value is S_FALSE, the pin will never block on calls to Receive. Based on the return value, the upstream filter might use a separate worker thread to deliver samples.
Some filters process samples in place, without copying any data. Other filters copy the data before processing it. Obviously, it is better to process samples in place whenever possible, to avoid the overhead of unneeded copy operations.
The input pin can reject samples by returning an error code or S_FALSE in its Receive method. The value S_FALSE indicates that the pin is flushing data (see "Flushing" below). An error code indicates some other problem. For example, if the filter is stopped, the return value is VFW_E_WRONG_STATE. If a pin rejects a sample, the upstream filter calls the IPin::EndOfStream method and stops sending data. If the pin returned an error code, the upstream filter also signals EC_ERRORABORT to the filter graph manager.
The upstream filter must serialize all of its Receive calls to a given input pin.
When a source filter has no more data to send, it calls the IPin::EndOfStream method on the downstream input pin. The downstream filter propagates the call to the next filter. Eventually the EndOfStream call reaches the renderer filter. The renderer filter signals EC_COMPLETE to the filter graph manager.
Before posting an EC_COMPLETE notification to the application, the filter graph manager waits until all the streams signal EC_COMPLETE. The application does not receive an EC_COMPLETE notice until all the streams have completed. To determine the number of streams in the graph, the filter graph manager counts the number of filters that support IMediaSeeking or IMediaPosition and have a rendered input pin. A rendered input pin is an input pin with no corresponding outputs. The IPin::QueryInternalConnections method returns zero for a rendered input pin.
Filters must serialize EndOfStream calls with IMemInputPin::Receive calls.
Flushing is the process of discarding all the pending samples in the graph. Flushing enables the graph to be more responsive when events alter the normal data flow. For example, in a seek operation, pending data is flushed from the graph before new data is introduced. If the graph contains multiple streams, it is possible to flush individual streams separately.
Flushing is a two-stage process:
When the BeginFlush method is called on a filter's input pin, the filter rejects any further Receive calls, unblocks any upstream threads waiting on Receive, and propagates the BeginFlush call downstream.
When the EndFlush method is called, the filter waits until all queued samples have been discarded and then propagates the EndFlush call downstream.
At this point, the filter can accept samples again.
In the pull model, the parser filter initiates flushing rather than the source filter. The sequence of events is the same, except that the parser also calls BeginFlush and EndFlush on the source filter's output pin.
Because filter operations are multithreaded, it is crucial that a filter serialize certain operations. Otherwise, race conditions can result. For example, the filter might try to use an allocator that is already decommitted. Filters use two critical sections, one to hold a streaming lock and the other to hold a filter state lock.
The streaming lock serializes streaming operations; the filter holds this lock while it processes streaming calls such as IMemInputPin::Receive and IPin::EndOfStream.
The filter state lock serializes state changes; the filter holds this lock while it processes state-change calls such as Stop, Pause, and Run.
Within the Stop and EndFlush methods, the filter must synchronize its filter state with its streaming state. In the Stop method, the filter holds the filter state lock and decommits the input pin's allocator. Then it holds the streaming lock and decommits the output pin's allocator.
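The two-lock discipline described above is not specific to C++; the following minimal Python sketch mirrors the same pattern (the method names echo the DirectShow calls, but this is an illustration, not the actual base-class code):

import threading

class Filter:
    def __init__(self):
        self.state_lock = threading.Lock()      # serializes state changes
        self.streaming_lock = threading.Lock()  # serializes streaming calls

    def receive(self, sample):
        # Streaming path, like IMemInputPin::Receive.
        with self.streaming_lock:
            self.process(sample)

    def stop(self):
        # State change first: decommit the input pin's allocator...
        with self.state_lock:
            self.decommit_input_allocator()
        # ...then synchronize with the streaming thread and decommit
        # the output pin's allocator.
        with self.streaming_lock:
            self.decommit_output_allocator()

    def process(self, sample): pass
    def decommit_input_allocator(self): pass
    def decommit_output_allocator(self): pass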
The following base classes are useful for implementing data flow in a filter.
| COutputQueue | Queues data to be delivered downstream. Uses a worker thread for delivering samples. |
| CPullPin | Input pin on a parser filter. Uses a worker thread to pull data from a source filter. |
| CSourceStream | Output pin on a source filter. Uses a worker thread to push data downstream. |
Last updated on Tuesday, May 18, 2004
|
OPCFW_CODE
|
I mentioned previously that I had most of Windows Media Center 2005 going... I spent a few hours this evening and am much further along.
After some troubleshooting, I found the reason I wasn't getting video was an incompatible DVD decoder. Since I built my system using an NVIDIA GeForce FX5200, I installed the NVIDIA DVD Decoder, and that turned out to be the key. Now I can play back DVDs and watch TV.
Windows Media Center
The Media Center guide is really nice - it shows a small window of live TV while you browse what shows are available.
I set it up to record all episodes of Scrubs, a comedy I used to watch and really enjoy. I only stopped because Scrubs shifted around days and wound up on Tuesday playing against 24 which meant I had to choose between the two shows, and Scrubs lost. Now, between my ReplayTV and Media Center system, I'll be able to catch the occasional conflict.
Of course, there is still a bit more to do. The current sound setup is bad - I have the computer sound output sent to two small cheap speakers. If there is such a thing as a "1/8 inch stereo miniplug to standard stereo cable" (i.e. red/white right/left cable) then I can send the computer sound out to my TV, which is what everything else is doing except the DVD player. Or, with the cable I can try piping sound through my DVD/receiver. Or, I can try the optical S/PDIF output... all this stuff will wait until I'm back from vacation.
The other improvement I could make is video output. Right now, I'm using the S-video output from my graphics card, and the video quality is acceptable. I could use the DVI output since my TV has DVI input, but I want to save that video input for my next HTPC project, a linux MythTV HDTV decoder. So I'll just leave the video output alone as I'd rather feed an HD signal to the TV via DVI.
For now, it is working enough for me to test out other features, such as recording TV!
I've tried out the TV recording function, and it works like a champ. I watched Scrubs, skipped commercials, rewound and paused, and it all worked great. The Media Center UI is very polished and easy to use, definitely on par with TiVo. It has been a year and a half since I've had a TiVo (due to bad luck with hardware, both TiVos I had died), and I don't remember navigating that UI as smoothly. From what I do remember of TiVo, both it and Media Center are a notch above ReplayTV. I might have to split hairs to decide between TiVo and Media Center though.
|
OPCFW_CODE
|
Start-up businesses must go through a spectrum of stages to launch their final product. Among them, the idea-validation stage is the most decisive. Understanding how a prototype compares with an MVP (Minimum Viable Product) is of great importance across many businesses.
Both the prototype and the minimum viable product support businesses through different stages of product development. It is important to note, however, that people frequently confuse the two terms.
Used correctly and with full comprehension, these methodologies present corporate ideas in a well-structured manner. They lead users and stakeholders to understand the business concept in the right way, which improves the outcome of future product launches and leaves less room for failure or obstruction.
It is also necessary to understand that prototypes and MVPs have different objectives and are used at different product development stages. The two approaches can even be combined without creating confusion between them.
Prototype vs. MVP
A prototype is the beginning of a product: a model or a sample, purposely built to test a business concept or process. It serves as something to learn from or to replicate.
An MVP (Minimum Viable Product), on the contrary, is a basic product with a limited feature set that satisfies the core needs of consumers, while offering just enough functionality to gather feedback for future product development.
What is a Prototype?
A prototype is used at an early stage of a startup's product development. Its purpose is primarily presentation and user testing. Presenting the prototype makes it possible to gather some funds for the development phase of the Minimum Viable Product; the prototype itself is usually retired at the later stages of development.
Early stakeholders and users can provide valuable feedback throughout this process. If the feedback is negative, the prototype can be altered and tried again at a later stage, leading to better product development and finalization.
The MVP, on the contrary, is a strategic approach used to test the startup idea itself and gauge its efficacy. Its other purpose is to gather community feedback, which is then used to develop the business idea further. This tactical process can raise a substantial level of funding as the product is rolled out to a diverse marketplace.
However, if the planned idea turns out not to be worth developing further, the process can simply be stopped. Such a stop does not consume a great deal of time or money, and the process can be resumed later in a much more deliberate fashion.
What is an MVP (Minimum Viable Product)?
A Minimum Viable Product is considered a fully fledged product, meaning that it is actually delivered to the market. It is the right way to find out whether the product has value for consumers in the market.
Additionally, it requires a lot of consumer feedback from the target audience. Unlike a prototype, an MVP demands serious technical development, because its functionality may need to be extended, particularly at the final stage of product scaling.
MVP vs. Prototype: Which Is Best for Validation?
Both a prototype and an MVP help validate and verify assumptions about the product, and both play a major role in the product development cycle. Comparing them matters when deciding which one to bring into a given development stage.
An MVP and a prototype describe how and what can be developed at each stage of the process. It is noteworthy, however, that they pursue different goals within the design flow. A product prototype truly demonstrates the intended functions when operational measures are needed.
In contrast, an MVP is a genuinely functional product at the development stage. It is simple in its workings and is made available to the market. With respect to the product development phase, an MVP offers more room for enhancing and scaling the product.
Given the above, a prototype serves the basic testing of product assumptions. It shows users an early view of the investment behind the business processes and ideas, all aligned with the development phase of the product.
These considerations increase investors' confidence that the money they put into the prototype and the MVP is backed by a worthwhile, genuine business idea.
An MVP, for its part, is not a sketch; it is a working, if minimal, product. It needs only limited functionality on the way to the final product, and therefore requires less implementation and monitoring.
Every prototype carries a product idea into the development stage; the main concern is to check its workability at every step. The prototype is the phase where the startup development process begins to move the product toward the next stage.
The overall process then leads the product toward the functional stage, where its reliability is assessed in terms of functionality and finalization. A prototype is thus best regarded as the primary implementation requirement of the future project: a sketch meant chiefly to show investors how their ideas will be developed and implemented.
The prototype's performance is easily understood through this primary sketch of the product. In simple words, the technical documentation can be compared against the prototype's working capability, and the prototype's output is then evaluated against the actual idea of the product.
With an MVP, however, the process returns to the actual functioning of the product or service. Once the prototype has been developed, it becomes easy to understand what the MVP will be. Nevertheless, it is essential to check the outlay and worth of the MVP, which can be refocused at a later stage.
An MVP helps save time and money that can then be reinvested, for example in developing a functional product that is changed and scaled for better viability. As a result, the MVP's development work is presented and used in the final stage of product development.
Before an MVP is released, the prototype is used to create and receive initial feedback, coming directly from existing customers and the wider public. That feedback is then tested against the market for an even richer response, finally leading the process to the actual phase of product development.
A prototype requires general testing at its initial stage, with investors and early adopters accomplishing most of the task. Building a prototype yields early knowledge, including how people will interact with the future product, for example through communication with the general public and prospective consumers.
The next step is to develop the product according to client feedback. Altering the current prototype is the stage where processing of the product really begins.
If the right alteration is missed, the process restarts with the development of a new sample, which is then sent for testing and validation of its usability and experience. Prototypes of different scopes and contents are developed until an approved MVP is reached, which is the next step toward the final product.
An MVP-based startup process begins with gathering valuable user feedback. This comes before full development, because it gives consumers first-hand experience with the MVP of the product. Once valuable feedback has been received, the process moves toward the fully featured product. Fewer fresh ideas about the startup product arrive at this point, which reflects the fact that an MVP deliberately carries limited features at the development stage.
It is important to note that the development style changes once the crucial initial round of validation is reached. The process then moves to different development models, for example quick sketch coding where the first arriving users review the product for the next step.
A leading example comes from the creator of the Customer Development method, Silicon Valley entrepreneur Steve Blank, who said that businesses initially sell visions and deliver the minimum features aimed at visionaries. In other words, not everyone has access to the product development process at this stage; access is deliberately limited.
Famous Examples of an MVP and a Prototype:
There are two major examples that illustrate the difference between a prototype and an MVP. An example of a prototype is the Apple Phone. An early prototype of the Apple Phone, a design of yesterday, has since been redesigned into what is now known as the iPhone. The early device had little in common with today's portability, but Apple carried the redesign forward into touchscreen technology.
An example of an MVP is Facebook. The early MVP of Facebook, originally named "Thefacebook," lacked the range of services that are readily available now. Facebook was already known as a leading social media communication tool at the time, but its functionality was limited to features such as finding friends of friends, checking classmates, and searching for people at your school.
For both a prototype and an MVP, a spectrum of questions must be asked and answered before building the product. These include what needs validating, and whether the idea under consideration is large or small. The next step is defining the target audience for the product and the project.
The process then continues with testing the entire system, particularly checking how it works and how efficient it is. The product's interactivity is also reviewed, to give investors value for their money: it helps confirm the product's value and functionality while reducing worry about the working assumptions of the final product.
Once a functional prototype has been received by the market, the process continues with building an MVP, which is then carried forward in the right systematic manner and with the proper functional measures.
|
OPCFW_CODE
|
We present a unified approach to (both finite and unrestricted) worst-case optimal entailment of (unions of) conjunctive queries (U)CQs in the wide class of "locally-forward" description logics. The main technique that we employ is a generalisation of Lutz's spoiler technique, originally developed for CQ entailment in ALCHQ. Our result closes numerous gaps present in the literature, most notably implying ExpTime-completeness of (U)CQ-querying for any superlogic of ALC contained in ALCHbregQ, and, as we believe, is abstract enough to be employed as a black-box in many new scenarios.
Legal technology is currently receiving a lot of attention from various angles. In this contribution we describe the main technical components of a system that is currently under development in the European innovation project Lynx, which includes partners from industry and research. The key contribution of this paper is a workflow manager that enables the flexible orchestration of workflows based on a portfolio of Natural Language Processing and Content Curation services as well as a Multilingual Legal Knowledge Graph that contains semantic information and meaningful references to legal documents. We also describe different use cases with which we experiment and develop prototypical solutions.
Leo-III is an automated theorem prover for extensional type theory with Henkin semantics and choice. Reasoning with primitive equality is enabled by adapting paramodulation-based proof search to higher-order logic. The prover may cooperate with multiple external specialist reasoning systems such as first-order provers and SMT solvers. Leo-III is compatible with the TPTP/TSTP framework for input formats, reporting results and proofs, and standardized communication between reasoning systems, enabling e.g. proof reconstruction from within proof assistants such as Isabelle/HOL. Leo-III supports reasoning in polymorphic first-order and higher-order logic, in all normal quantified modal logics, as well as in different deontic logics. Its development has initiated the ongoing extension of the TPTP infrastructure to reasoning within non-classical logics.
The window mechanism was introduced by Chatterjee et al. to strengthen classical game objectives with time bounds. It permits to synthesize system controllers that exhibit acceptable behaviors within a configurable time frame, all along their infinite execution, in contrast to the traditional objectives that only require correctness of behaviors in the limit. The window concept has proved its interest in a variety of two-player zero-sum games, thanks to the ability to reason about such time bounds in system specifications, but also the increased tractability that it usually yields. In this work, we extend the window framework to stochastic environments by considering the fundamental threshold probability problem in Markov decision processes for window objectives. That is, given such an objective, we want to synthesize strategies that guarantee satisfying runs with a given probability. We solve this problem for the usual variants of window objectives, where either the time frame is set as a parameter, or we ask if such a time frame exists. We develop a generic approach for window-based objectives and instantiate it for the classical mean-payoff and parity objectives, already considered in games. Our work paves the way to a wide use of the window mechanism in stochastic models.
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation and the vision of the Internet-of-Things fuel the interest in resource efficient approaches. These approaches require a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. On top of this, it is crucial to treat uncertainty in a consistent manner in all but the simplest applications of machine learning systems. In particular, a desideratum for any real-world system is to be robust in the presence of outliers and corrupted data, as well as being "aware" of its limits, i.e., the system should maintain and provide an uncertainty estimate over its own predictions. These complex demands are among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. First we provide a comprehensive review of resource-efficiency in deep neural networks with focus on techniques for model size reduction, compression and reduced precision. These techniques can be applied during training or as post-processing and are widely used to reduce both computational complexity and memory footprint. As most (practical) neural networks are limited in their ways to treat uncertainty, we contrast them with probabilistic graphical models, which readily serve these desiderata by means of probabilistic inference. In that way, we provide an extensive overview of the current state-of-the-art of robust and efficient machine learning for real-world systems.
In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics. When asking for projected counts we are interested in counting the number of extensions of a given argumentation framework while multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth depending on the considered semantics. Finally, we take the exponential time hypothesis (ETH) into account and establish lower bounds of bounded treewidth algorithms for counting extensions and projected extensions.
|
OPCFW_CODE
|
Didn't read the first part? Learn more about Nikko and this projection.
Our first projection
Which ended up being 2 projections
Interruption of the mapping session
At the beginning of the trip we asked ourselves how we should manage the projections.
Should we ask for authorization? Or should we just go, project, and stop if the police came and asked us to?
Because we don't know anything about the places we are visiting, and because we are staying only 3 to 4 days in each place, we decided to go for option 2: project and see. We can't afford to get a "no"... We asked a friend (Hello Kento!) to translate a description of the project into Japanese, which we would carry with us and show to people asking questions about what we are doing.
However, even when prepared and with everything in mind, the projection moment is quite stressful!
As I said in the previous post about Nikko, the temple we chose is super isolated, so we wouldn't bother anyone.
But at 2am a monk ran past the temple and didn't stop when we said hello... He even ran faster and started to clap his hands. David and I quickly looked at each other, and we agreed to pack everything and leave. The adrenaline of the situation was just telling us to leave fast.
Looking for a new place
So we went back to the guest house, discussed a bit, and decided we needed to find something different, as we didn't have the confidence to go back to the temple.
Also, as we had to project the next day and wouldn't have a lot of time to work on the mapping again, we had to find something simple.
Luckily... Just next to Nikkorisu, we have an interesting stone:
It is surrounded only by restaurants, so we wouldn't have any trouble projecting at midnight, as they would be closed.
Working on visuals
After working on the editor, we created some visuals for the projection.
We took a shader that David made on Shadertoy and reworked it: new colors, new movements, new customizations. As we were going to play the animation on a cube, we had a bit of work to do on UV remapping, in order to make the animation smooth on every side and every edge.
In the end, the animation was quite nice, and we started to map it onto the midi pad we have.
Then we created some mushis with a ghostly look, and mixed both creations into the mapping.
On the left, the stone; on the right, a test on the temple mapping we did previously. Because, you know... if we can avoid the monk...
Projecting on the poem stone
It was still a bit stressful to go and project in the middle of the street like that, but in the end, everything went well.
We also used some "Road work" signs to make it look quite official.
When people saw us start to project, some of them came over to take pictures. It was a very good experience, and we received some good feedback too. Funny fact: the area contains a Pokémon gym. A lot of cars came, stopped, fought for a while, and left 5-10 minutes later.
Temple, we are back
After the success of the projection on the stone, we talked a bit about whether or not to go back to the temple. We decided to give it a try and see how it went. If someone was there and wanted us to leave, we would just leave right away.
So we went back. And on the way, we found a monk running towards us and clapping his hands. Needless to say, at this point we were like "fuck, we are doomed... Are we stupid or what?!". But we stopped and started to talk with him. We used the paper to make him understand what we were doing there and what we wanted to achieve. He looked at it, nodded a bit, and told us that there are also different temples that we may be able to use.
We realized that he didn't care much about us projecting, and that he was not bothered by it. Which is cool.
When we asked him what he was doing, running at 2am, he told us he goes jogging for six hours every day.
At this point, we understood that he had already been doing it the previous day, when we got scared. Then he left us, stopping for a moment to pray by clapping his hands. Haha :)
Sadly, we couldn't get very good footage of the temple.
Have a look at our video!
|
OPCFW_CODE
|
I am a historian of science with a focus on institutions, mathematics, and mapmaking in Islamicate societies until 1700 and cross-cultural encounters in the Mediterranean and western Asia since the eighth century. Currently, I am working on a new interpretation of the Book on the Balance of Wisdom by 'Abd al-Rahman al-Khazini (d. 1130s). With an international group of scholars, I am also building an image database on the visualization of the heavens in Eurasia and North Africa until c. 1700 and the material and intellectual cultures of these images.
I studied mathematics at the Technical University Dresden (1969–1973), history of mathematics and science at the Karl Marx University Leipzig (1973–1976), and Near Eastern history and Arabic at the Martin Luther University Halle-Wittenberg (1978–1982). I wrote my PhD on the history of linear programming and my second dissertation on number theory in Arabic and Persian texts composed between 800 and 1250 (1977; 1989). In 1991, I acquired the venia legendi.
I have published broadly on different topics in the history of mathematics, cartography, patronage, higher education, science and the arts, cross-cultural encounters, and historiography. Recently I edited a book on processes of globalization in the Mediterranean between 700 and 1500 (with Jürgen Renn) and a book about historical narratives on scholarly activities in non-Western societies of the past and their distortions (with Taner Edis and Lutz Richter-Bernburg). My latest published paper deals with early modern sources that reveal how Europeans learned to speak and write Arabic outside the university.
Brentjes, S. (2017). Algebra. In K. Fleet, G. Krämer, D. Matringe, J. Nawas, & E. Rowson (Eds.), Encyclopaedia of Islam, THREE. Leiden: Brill.
Brentjes, S. (2017). Algorithm. In K. Fleet, G. Krämer, D. Matringe, J. Nawas, & E. Rowson (Eds.), Encyclopaedia of Islam, THREE. Leiden: Brill.
Brentjes, S. (2017). Arithmetic. In K. Fleet, G. Krämer, D. Matringe, J. Nawas, & E. Rowson (Eds.), Encyclopaedia of Islam, THREE. Leiden: Brill.
Brentjes, S. (2017). Learning to write, read and speak Arabic outside of early modern universities. In J. Loop, A. Hamilton, & C. Burnett (Eds.), The Teaching and Learning of Arabic in Early Modern Europe. Leiden: Brill.
Livesey, S. J., & Brentjes, S. (2017). Science in the medieval Christian and Islamic worlds. In I. R. Morus (Ed.), The Oxford Illustrated History of Science. Oxford: Oxford University Press.
Max Planck Institute for the History of Science
|
OPCFW_CODE
|
Multi-core processors increasingly appear as an enabling platform for embedded systems, e.g., mobile phones, tablets, computerized numerical controls, etc. The parallel task model, where a task can execute on multiple cores simultaneously, can efficiently exploit the multi-core platform's computational ability. Many computation-intensive systems (e.g., self-driving cars) that demand stringent timing requirements often evolve in the form of parallel tasks. Several real-time embedded system applications demand predictable timing behavior and must satisfy other system constraints, such as energy consumption.

Motivated by the facts mentioned above, this thesis studies how to integrate a dynamic voltage and frequency scaling (DVFS) policy with a real-time embedded application's internal parallelism to reduce the worst-case energy consumption (WCEC), an essential requirement for energy-constrained systems. First, we propose an energy-sub-optimal scheduler, assuming a per-core speed tuning feature for each processor. Then we extend our solution to the clustered multi-core platform, where at any given time all the processors in the same cluster run at the same speed. We also present an analysis that exploits a task's probabilistic information to improve the average-case energy consumption (ACEC), a common non-functional requirement of embedded systems.

Due to the strict requirement of temporal correctness, the majority of real-time system analyses consider the worst-case scenario, leading to resource over-provisioning and cost. The mixed-criticality (MC) framework was proposed to minimize energy consumption and resource over-provisioning. MC scheduling has received considerable attention from the real-time system research community, as it is crucial to designing safety-critical real-time systems. This thesis further addresses energy-aware scheduling of real-time tasks on an MC platform, where tasks with varying criticality levels (i.e., importance) are integrated into a common platform. We propose an algorithm, GEDF-VD, for scheduling MC tasks with internal parallelism on a multiprocessor platform. We prove the correctness of GEDF-VD, provide a detailed quantitative evaluation, and report extensive experimental results. Finally, we present an analysis that exploits a task's probabilistic information at the respective criticality levels. Our proposed approach reduces the average-case energy consumption while satisfying the worst-case timing requirement.
If this is your thesis or dissertation, and want to learn how to access it or for more information about readership statistics, contact us at STARS@ucf.edu
Doctor of Philosophy (Ph.D.)
College of Engineering and Computer Science
Electrical and Computer Engineering
Length of Campus-only Access
Doctoral Dissertation (Open Access)
Bhuiyan, Ashik Ahmed, "Energy-Aware Real-Time Scheduling on Heterogeneous and Homogeneous Platforms in the Era of Parallel Computing" (2021). Electronic Theses and Dissertations, 2020-. 648.
|
OPCFW_CODE
|
[Errno 2] No such file or directory (Python)
Expected behaviour
Run a program that reads a file stored in the same directory as the program.
Actual behaviour
VS Code is returning the following in the terminal:
Traceback (most recent call last):
File "/Filepath/10-1_learning_python.py", line 3, in <module>
with open(filename) as file_content:
FileNotFoundError: [Errno 2] No such file or directory: 'learning_python.txt'
Steps to reproduce:
I am trying to run a very simple Python program in VS Code. In the same subfolder I have the two following files:
10-1_learning_python.py
learning_python.txt
This is the code in "10-1_learning_python.py":
filename = 'learning_python.txt'
with open(filename) as file_content:
content = file_content.read()
print(content)
When running the code I get this error:
FileNotFoundError: [Errno 2] No such file or directory: 'learning_python.txt'
This code works (using the very same directory and files) if I run it in other applications such as SublimeText.
Environment data
I am using macOS Catalina 10.15.5.
My VS Code version is as follows:
Version: 1.45.1
Commit: 5763d909d5f12fe19f215cbfdd29a91c0fa9208a
Date: 2020-05-14T08:33:47.663Z
Electron: 7.2.4
Chrome: 78.0.3904.130
Node.js: 12.8.1
V8: 7.8.279.23-electron.0
OS: Darwin x64 19.5.0
Value of the python.languageServer setting: Microsoft
@karthiknadig I added it, and I see now that the working directory is a parent folder.
However, I thought that the program would search for the file in the same directory in which my .py file is stored, unless I specified otherwise in my code.
You might have to enable "Execute in file dir".
That did it! I did not know about this setting.
Thank you very much and sorry for the trouble...
We're glad we were able to help. Thanks for letting us know that fixed it. :smile:
I have the same problem, but where is that setting? I didn't find it.
When you open a file with just the file name, you are telling the open() function that your file is in the current working directory. This is called a relative path. If the user does not pass the full path to the file (on Unix-type systems this means a path that starts with a slash), the path is interpreted relative to the current working directory, which is usually the directory in which you started the program. A good start would be validating the input. In other words, you can make sure that the user has indeed typed a correct path for a real existing file, like this:
import os

while not os.path.isfile(fileName):
    fileName = input("Whoops! No such file! Please enter the name of the file you'd like to use. ")
Another way to tell the python file open() function where your file is located is by using an absolute path, e.g.:
f = open("/Users/foo/filename")
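If you don't want to rely on the "Execute in file dir" setting, another common fix is to resolve the path relative to the script itself, so the program works no matter which working directory it was started from. A minimal sketch, assuming the file names from the question:
import os

# Resolve learning_python.txt relative to this script's directory,
# not the process's current working directory.
script_dir = os.path.dirname(os.path.abspath(__file__))
filename = os.path.join(script_dir, 'learning_python.txt')

with open(filename) as file_content:
    content = file_content.read()
print(content)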
|
GITHUB_ARCHIVE
|
PS3 Rock Band/Guitar Hero Instruments
This includes the following instrument additions for PS3 on macOS:
-Guitar Hero: World Tour Wireless Drum Kit (03000000ba1200002001000008010000)
-Guitar Hero 5 Wireless Genericaster Guitar (03000000ba1200000001000005000000)
-The Beatles: Rock Band Wireless Drum Kit (03000000ba1200001002000000020000)
-Rock Band 1 Wireless Fender Stratocaster Guitar (03000000ba1200000002000013030000)
-The Beatles: Rock Band Wireless Hofner Bass Guitar (03000000ba1200000002000000020000)
-Roll Limitless MIDI Drum Adapter (03000000ba1200001002000000010000)
Added for Wii on macOS:
-The Beatles: Rock Band Wireless Hofner Bass Guitar (03000000ad1b00001030000000020000)
As RPCS3 for macOS cannot map SDL in the emulator and only supports controllers found on the SDL database, these are imperative to add for the growing rhythm game community. I have included all the instruments I own and will reach out to the community for more.
Thank you for this PR. I have some concerns here I will lay out for you. Broadly speaking I am conflicted on this effort, though not totally opposed to it.
This repo is intended to supplement SDL
The db is for gamepad data. While the lines are blurred between some device types, SDL has explicit handling for other joystick devices types (eg. wheels, flightsticks, simulated musical instruments) and as such gamepad handling, including mapping, is meant to omit these device types, instead handling support for them explicitly in source
Despite point #1, a number of projects that do not implement SDL evidently use this data anyway: Godot, emulators etc.
Given point #2, I think the best route would be for RPCS3 to fully implement SDL for input handling, instead of consuming this data independently, in which case any PRs related to instrument support for the RPCS3 project should be taken directly upstream to SDL.
However, I am sympathetic that this may not be a desirable effort, and since these devices are technically console gamepad analogues, am willing to accept PRs to this end if the (undesirable) alternative is further fragmentation/siloing of mapping data into individual projects as opposed to multiple projects benefiting from the consolidating effort of this db.
Please consider the following when soliciting instrument mappings from your devs/players/etc. community as an incentive to have these mappings accepted:
Encourage contributors to gather mappings on as many platforms as they have access to, regardless of what subset RPCS3 actually needs: Windows, macOS, Linux
Encourage contributors to map the full device per the original console mapping, with an effort not to miss any secondary controls such as dpad.
Thanks
Figured I'd chime in with comments from the research and documentation I've been doing for these peripherals.
While mappings for instruments would certainly help out for general purposes, they unfortunately cannot provide complete support for all of them without additional code to handle special cases. For example:
Tilt on PS3 Guitar Hero guitars is hidden away on a vendor-defined axis and must be handled manually (unless the SDL mappings allow for mapping vendor-defined HID usages).
Pro Guitars and Pro Keyboards use a lot of axis values as bitmasks/bitfields instead of ranges, completely disregarding any semantics from XInput standards or the HID descriptor.
Guitar Hero Live guitars require a magic keep-alive packet to be sent periodically in order for all input data to be sent correctly.
(This isn't a comprehensive list, there's a lot of weird inputs and edge cases which don't usually fit into most generic input handling schemes.)
What makes things worse is that not all devices can be fully supported directly on Windows without diving down into undocumented driver interfaces (which could potentially bring about legal issues, I imagine). Notably:
Xbox 360 Pro Keyboards and Pro Guitars make use of normally-unused data bytes that lie beyond the normal XInput report. The data isn't absolutely necessary for core gameplay to work, but it's certainly a bummer that the Pro Keyboard's touchbar or the Pro Guitar's tilt aren't usable.
Xbox One instruments are just outright not supported on Windows, beyond being able to navigate menus for games/programs that implement Windows.Gaming.Input and its navigation input features. Windows.Gaming.Input.Custom has an API to create custom device types for them, which would solve this issue, however it's shimmed and doesn't publicly define everything you need.
For what it's worth though, these two cases are rather niche, and all of the other instruments can be supported without much more issue (excluding cases like Xbox 360/One instruments on Mac which require additional driver support or manual USB handling).
My initial assumption was that RPCS3 was not using SDL proper for input, but relying on this db where native support is lacking, as do some other apps. This was an error on my part, as it seems the software does use SDL for input.
I believe the best way forward is for any additional device support for non-gamepad devices to come either as direct support in RPCS3 (e.g. USB passthrough) or a patch to SDL directly, per point 2 of my original response and @TheNathannator's comments regarding "unmappable" features. If the RPCS3/rhythm game community decide that mappings via gamepad db are indeed the best option for particular platforms/devices as @dirkNlerxst proposes with this PR, I think it might be best for an alternative mapping db to be created specifically for this purpose (adding rhythm game instrument mappings) that RPCS3 can concatenate to this (gamepad only) db upon deployment.
After discussing further elsewhere, we've settled on leaving instruments off the gamepad db with hopes that the effort can move upstream to SDL, or if necessary, downstream to the software supporting game instrument controllers.
|
GITHUB_ARCHIVE
|
PTIJ: Hadar walks in and we're super happy. Why?
A song that I often hear at this time of year (sometimes sung to this tune) contains the lyrics:
משנכנס הדר מרבין בשמחה
When Hadar walks in, we increase joy.
(This seems to be based on a variant text of the gemara in Ta'anit 29a.)
So, what is it about Hadar, King of Edom and husband of Mehetabel, that makes us so happy when he walks in?
This question is Purim Torah and is not intended to be taken completely seriously. See the Purim Torah policy.
Note: This question is especially relevant in Israel, where many of the locals pronounce ה virtually indistinguishably from א.
I don't believe Hadar actually exists. הדר אמר רבא לאו מילתא היא. "Hadar," said Rava, "is nothing"
Because he owed you a new door frame! Now that he's here, he'll finally pay up! Everyone knows that mezuzah, chovas Hadar.
That’s from Bava Metzi’a 101a, if you’re curious.
משנכנס הדר is referring to Hadar entering into the Land of Israel. The reason why we rejoice over this is that the Talmud (Ketubot 110b) tells us that as long as Hadar remains outside of Israel it is as if he has engaged in idolatry (כל הדר בחו"ל כאילו עובד עבודת כוכבים). It is only once he enters the Land of Israel that he is considered a good Jew. Thus we rejoice for him as we would for anyone who properly repents.
You're translating it wrong. It's:
From when the resident walks in, Marvin is happy.
But Marvin is never happy. Which means that the resident never walks in: he stays outside. Thus, Wonko really was sane.
Of course! Shnayim Ve'arba'im - mi yodeya?
This is because Hadar is one of the 2 comedians mentioned in Mishlei 31:25:
עֹז־וְהָדָ֥ר לְבוּשָׁ֑הּ וַ֝תִּשְׂחַ֗ק לְי֣וֹם אַחֲרֽוֹן׃
When Oz or Hadar make embarrassing jokes (Lebusha), you will laugh until the end of days.
Therefore, obviously we will be happy when Hadar comes in!
The only question is why Oz is not included. To this we can answer that although Oz was in before, Oz came out in 1939, so he was no longer included by the Gemara, which had Ruach Hakodesh that it would no longer apply.
But are we allowed to be happy? This sounds like Hadar is making Leitzanus.
According to many mefarshim, the pesukim about the 8 kings of Edom were written prophetically, and Hadar died shortly before Shaul became king. (Other say that they ruled before the time of Moshe.)
According to the first explanation, when Hadar came, it meant we were going to have a king soon. That's a good reason to celebrate.
Hadar owned the first Etrog tree. The etrog is called פרי עץ הדר - the fruit of the tree belonging to Hadar.
What's the connection of Succot and this time of year that we should sing about this tree? It's simple - around this time of the year is when the etrogim begin to grow.
I thought that esrogim grow all year, and even from year to year, without a break?
I'll do some research on that. News to me..
@Uber_Chacham Of interest - http://delphiresearchgroup.com/b2evolution/index.php?title=things_i_ve_learned_about_growing_etrog&more=1&c=1&tb=1&pb=1
I assumed that because of the gemara about pri hadar mishana l'shana, but it could be that new ones only start growing in the spring (that would make sense, as the fruit grows from what starts as flowers).
@Uber_Chacham I would need to look at that Gemara. But, I sense that it's a Midrash. Even if it isn't, it's a "play on words", as the word hadar doesn't actually mean this.
They definitely do grow for years if you don't pick them, I've seen it myself, and that is how they are able to get huge ones. The gemara uses the drasha to prove which fruit is the pri eitz hadar, as only esrogim can stay on the tree growing for years on end.
|
STACK_EXCHANGE
|
M: Programming Amazon EC2 - wouterinho
http://oreilly.com/catalog/9781449393687
My friend Jurg wrote the new O'Reilly book about EC2 that got published today.<p>Use the code DDPAZ to get 50% off on the e-book ($13.99).
R: wouterinho
My friend Jurg co-wrote this book. Werner Vogels of Amazon wrote a foreword
and I've pre-read parts of it; it's a great introduction to AWS.
Also, it's the O'Reilly deal of the day: the coupon code "DDPAZ" gives you 50%
off and a final price of $13.99 for the e-book.
R: stevenp
Awww man, wish I had this coupon when I bought the e-book the other day! So
far I'm really impressed.
R: mno
Although the technical details may lag behind the innovations of Amazon, I
believe the book gives good insight into the way of thinking when designing
your infrastructure/app for AWS.
R: jvehent
I browsed the table of contents and I'm curious about the target audience. Is it
for sysadmins, or more of an introduction for developers?
R: deweller
Some excerpts from the preface:
"we are not going to list all the available commands...you should be
comfortable with the command line...and it certainly wouldn't hurt if you know
what Ubuntu is...and how to install software... If you are a seasoned
software/systems engineer or administrator, there are many things in this book
that will challenge you."
R: benwerd
Bought. Thanks for the pointer!
|
HACKER_NEWS
|
issues when autoloading files with psr-4
I've been trying to get psr-4 autoloading work for over a week now with no success.
My file structure is as follows:
-Project
-src
-classes
session.php
-vendor
index.php
I've created the psr-4 autoload function as follows:
"autoload": {
"psr-4": {
"classes\\": "src/classes"
}
}
After running composer dump-autoload -o, inside my session.php class I declared the namespace:
namespace classes;
class session{
public static function exist($name){
return(isset($_SESSION[$name])) ? true : false;
}
}
I then required the autoloader and used a use statement to alias the session class as follows:
use src\classes\session as session;
require_once('vendor/autoload.php');
session::put('test', 'test');
after opening up the index.php page, I get a
Fatal error: Class 'src\classes\session' not found in /var/www/test/Project/index.php on line 10
is my directory structure / php correct? I've tried many different guides online and can't seem to get it to work.
Most simple solution:
use classes\session;
require_once('vendor/autoload.php');
session::put('test', 'test');
Unrelated
However, you probably don't want to use classes as a vendor namespace, but instead adjust a few things here and there:
Directory structure
-Project
-src
Session.php
-public
index.php
-vendor
Autoloading configuration in composer.json
{
"autoload": {
"psr-4": {
"Juakali\\": "src"
}
}
}
Replace Juakali with a vendor namespace you prefer, this is just a suggestion. Ideally, if you intend to publish your package, it should be one that isn't already claimed by someone else, see https://packagist.org.
For reference, see
http://www.php-fig.org/psr/psr-4
https://getcomposer.org/doc/04-schema.md#psr-4
Juakali\Session
Use the aforementioned vendor namespace of your choice:
namespace Juakali;
class Session
{
public static function exist($name)
{
return isset($_SESSION[$name]);
}
}
Consider using a widely used coding style, for example PSR-2.
For reference, see
http://www.php-fig.org/psr/psr-2/
index.php
Assuming that you want to expose index.php as the entry point for a web application, move it into a directory which you feel confident to expose as a document root of your web server, adjust the import in index.php, as well as the path to vendor/autoload.php:
use Juakali\Session;
require_once __DIR__ . '/../vendor/autoload.php';
Session::put('test', 'test');
After edit - this answer is more thorough than mine, good stuff localheinz
Thanks a lot for the help. If I was going to use dependencies such as phpmailer from a class in the src folder, would I still need a use statement like use \vendor\phpmailer, etc., or do I just include the vendor autoloader?
It looks like you're defining your alias of "src/classes" as 'classes'.
So you need to use:
use classes\session;
Instead
More info:
PSR-4 autoloader Fatal error: Class not found
|
STACK_EXCHANGE
|
package com.ociweb.pronghorn.exampleStages;
import static com.ociweb.pronghorn.pipe.Pipe.blobMask;
import static com.ociweb.pronghorn.pipe.Pipe.byteBackingArray;
import static com.ociweb.pronghorn.pipe.Pipe.bytePosition;
import static com.ociweb.pronghorn.pipe.Pipe.takeRingByteLen;
import static com.ociweb.pronghorn.pipe.Pipe.takeRingByteMetaData;
import com.ociweb.pronghorn.pipe.FieldReferenceOffsetManager;
import com.ociweb.pronghorn.pipe.Pipe;
import com.ociweb.pronghorn.stage.PronghornStage;
import com.ociweb.pronghorn.stage.scheduling.GraphManager;
public class OutputStageLowLevelExample extends PronghornStage {
private final Pipe input;
private final int msgIdx;
private final FieldReferenceOffsetManager FROM; //Acronym so this is in all caps (this holds the schema)
private final FauxDatabase databaseConnection;
protected OutputStageLowLevelExample(GraphManager graphManager, FauxDatabase databaseConnection, Pipe input) {
super(graphManager, input, NONE);
this.input = input;
//should pass in connection details and do the connect in the startup method
//a real database connection is also unlikely to write per field like this but
//this makes an easy demo to understand and test.
///
this.databaseConnection = databaseConnection;
this.FROM = Pipe.from(input);
//all the script positions for every message are found in this array
//the length of this array should match the count of templates
this.msgIdx = FROM.messageStarts[0]; //for this demo we are just using the first message template
validateSchemaSupported(FROM);
}
private void validateSchemaSupported(FieldReferenceOffsetManager from) {
///////////
//confirm that the schema in the output is the same one that we want to support in this stage.
//if not we should throw now to stop the construction early
///////////
if (!"MQTTMsg".equals(from.fieldNameScript[msgIdx])) {
throw new UnsupportedOperationException("Expected to find message template MQTTMsg");
}
if (100!=from.fieldIdScript[msgIdx]) {
throw new UnsupportedOperationException("Expected to find message template MQTTMsg with id 100");
}
}
@Override
public void startup() {
try{
///////
//PUT YOUR LOGIC HERE FOR CONNECTING TO THE DATABASE OR OTHER TARGET OF INFORMATION
//////
} catch (Throwable t) {
throw new RuntimeException(t);
}
}
@Override
public void run() {
//must be at least 1, if so we have a fragment
if (Pipe.hasContentToRead(input, 1)){
int msgIdx = Pipe.takeMsgIdx(input);
databaseConnection.writeMessageId(msgIdx);
//Read the ASCII server URI
{
int meta = Pipe.takeByteArrayMetaData((Pipe<?>) input);
int len = Pipe.takeByteArrayLength((Pipe<?>) input);
int pos = bytePosition(meta, input, len);
byte[] data = byteBackingArray(meta, input);
int mask = blobMask(input);//NOTE: the consumer must do their own ASCII conversion
databaseConnection.writeServerURI(data,pos,len,mask);
}
//Read the UTF8 client id
{
int meta = Pipe.takeByteArrayMetaData((Pipe<?>) input);
int len = Pipe.takeByteArrayLength((Pipe<?>) input);
int pos = bytePosition(meta, input, len);
byte[] data = byteBackingArray(meta, input);
int mask = blobMask(input);//NOTE: the consumer must do their own UTF8 conversion
databaseConnection.writeClientId(data,pos,len,mask);
}
int clientIdIdx = Pipe.takeInt((Pipe<?>) input);
databaseConnection.writeClientIdIdx(clientIdIdx);
//read the ASCII topic
{
int meta = Pipe.takeByteArrayMetaData((Pipe<?>) input);
int len = Pipe.takeByteArrayLength((Pipe<?>) input);
int pos = bytePosition(meta, input, len);
byte[] data = byteBackingArray(meta, input);
int mask = blobMask(input);
databaseConnection.writeTopic(data,pos,len,mask);
}
//read the binary payload
{
int length = Pipe.inputStream(input).openLowLevelAPIField();
//old ultra-low-level design
// int meta = takeRingByteMetaData(input);
// int len = takeRingByteLen(input);
// int pos = bytePosition(meta, input, len);
// byte[] data = byteBackingArray(meta, input);
// int mask = blobMask(input);
// databaseConnection.writePayload(data,pos,len,mask);
//new InputStream DataInput design
databaseConnection.writePayload(Pipe.inputStream(input));
}
int qos = Pipe.takeInt((Pipe<?>) input);
databaseConnection.writeQOS(qos);
Pipe.releaseReadLock((Pipe<?>) input);
//low level API can write multiple message and messages with multiple fragments but it
//becomes more difficult. (That is what the high level API is more commonly used for)
//In this example we are writing 1 message that is made up of 1 fragment
Pipe.confirmLowLevelRead(input, FROM.fragDataSize[msgIdx]);
}
}
@Override
public void shutdown() {
try{
///////
//PUT YOUR LOGIC HERE TO CLOSE CONNECTIONS FROM THE DATABASE OR OTHER SOURCE OF INFORMATION
//////
} catch (Throwable t) {
throw new RuntimeException(t);
}
}
}
|
STACK_EDU
|
Trying to install 15.10 on a Laptop
I have a Gateway laptop that runs Ubuntu 14.04 from an external hard drive (long story). I am trying to install 15.10 from a USB drive to the Windows 7 partition on the laptop so I can do away with the external drive. I made the boot USB with the 15.10 ISO and booted it, but when I go to erase and encrypt the disk, it aborts and restarts. What am I doing wrong?
Boot into the windows 7 partition and start setup from within windows then choose the "alongside windows" option
Well, it's difficult to say what you are doing wrong if you don't show any of your actual commands and none of the results (i.e. error messages). I assume that you don't want to keep the Windows 7 partition.
The USB method
Booting from USB used to be a challenge, to say the least. More recent PCs are generally capable of USB boot, but if the laptop is slightly older, that might be an issue. I'd try to avoid it if possible. Second problem: encryption. If it boots OK and the problems start with the encryption, then don't select encryption. It is easier to reinstall with encryption later if you have already "exercised" the "normal" install. It is also simpler to add an encrypted home partition later if you need one.
Why don't you try a simple format-and-copy method, since you seem to have a working Ubuntu 14.04 (albeit on an external drive)? This is a bit of a "risky" operation, but if you already have some experience with Ubuntu/Linux it may be a good alternative. (I've done it this way a couple of times.) If you don't want to keep the Windows partition anyway, then you're not risking anything (except some spare time).
You need to do most of these commands as root. If you don't understand the commands well enough, I'd suggest that you read up on them first.
start the Laptop from the external drive
partition the internal drive (replace sda with your actual drive; I'm assuming the first partition, sda1, below)
fdisk /dev/sda
If you prefer this with a graphical interface you can use gparted. If it's not installed you can install it with sudo apt-get install gparted. I suggest you create one "normal" partition as ext4 and a swap partition. Mark the "normal" partition as "bootable".
create a filesystem on your "normal" partition
mkfs -t ext4 /dev/sda1
mount the newly created partition (here to /mnt/new)
mkdir /mnt/new
mount -t ext4 -o rw /dev/sda1 /mnt/new
copy the complete external hard drive (here assumed to be /) to the internal drive recursively (so all files and subdirectories are copied)
cp -R / /mnt/new
Depending on the size of the external hard drive, this may take some time. The directories /dev, /proc, and /sys are not required, and can be deleted afterwards.
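Alternatively, if rsync is installed, here is a sketch that skips those pseudo-filesystems up front while keeping the (empty) directories as mount points (assuming the same target /mnt/new):
rsync -aAX --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' --exclude='/run/*' --exclude='/tmp/*' --exclude='/mnt/*' / /mnt/new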
install grub on the internal drive
grub-install --boot-directory=/mnt/new/boot /dev/sda
edit the file /mnt/new/etc/fstab
This file needs to contain the correct configuration of your actual drives. Since you copied it from the external drive, it still contains the paths to the external drive. These need to be changed to the internal drive.
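For example, a minimal fstab matching the layout from step 2 (hypothetical device names; use the output of blkid for UUIDs if you prefer):
/dev/sda1 / ext4 errors=remount-ro 0 1
/dev/sda2 none swap sw 0 0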
reboot
Okay, so the reason for the external hard drive: a couple of years ago I got a rootkit on my Gateway W7 laptop. Being a novice and having exhausted all of my efforts, I asked my not-nice cousin who works with computers what to do. He told me to install Linux on a flash drive (I assumed I could just wipe the hard drive when I was done) and that it could fix it. Somehow I screwed my Win7 boot up and was left clueless, running Ubuntu on an EHD. So I adapted and got used to it, and eventually fixed the Win7 boot, but for some reason I was missing a wifi driver and could not connect to anything.
But it would still connect in Ubuntu, so I blew it off again. I realized I am a few distros behind and went to upgrade, and then learned 14.10 and 15.04 are dead. So a fresh install of 15.10 would be my best option. It would be a pain and make no sense to do it on my EHD when I could kill two birds with one stone and put it on my 300 GB unused Win7 internal drive. So I downloaded a 15.10 64-bit ISO via qBittorrent, used Startup Disk Creator to install it to an 8 GB flash drive, and deleted the Win7 partition on /dev/sda with GParted, so it's unallocated.
If I have an Intel Pentium P6100 processor, should I have used the 64-bit amd64 version or the 32-bit i386 version? It's 64-bit, but it's Intel?
@mrtuttle: I would nowadays always suggest going with 64-bit (Windows, Linux, or any other system), as this is the direction the industry is moving. The amd64 image is just the name of the 64-bit x86 architecture; it runs fine on 64-bit Intel CPUs. But technically, you can install a 32-bit system. On Linux you won't lose a lot (on Windows that's a bit different).
|
STACK_EXCHANGE
|
KDE bandwidth estimation in R and Python
I am trying to estimate the bandwidth parameter of a multivariate KDE in R and then use the estimate to evaluate the KDE in Python.
The reason for this somewhat convoluted approach is that my project is in Python, but, as far as I know, there is no multivariate implementation of a plug-in selector for the bandwidth in Python. So I resorted to estimating the diagonal matrix for the bandwidth with R's ks package. Unfortunately, something does not work as I expected. I boiled it down to the following example:
import pandas as pd
import numpy as np
import rpy2.robjects as ro
from rpy2.robjects.conversion import localconverter
from rpy2.robjects import pandas2ri
from rpy2.robjects.packages import importr
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
import statsmodels.api as sm
# ###############################################################################
# # call this part only once to install "ks"
# utils = importr('utils')
# base = importr('base')
# # select a mirror, otherwise the user is prompted
# utils.chooseCRANmirror(ind=1) # select the first mirror in the list
# # install required package "ks"
# utils.install_packages('ks')
# ###############################################################################
def get_bandwidth(data):
ks = importr("ks")
with localconverter(ro.default_converter + pandas2ri.converter):
# convert pandas.DataFrame to R DataFrame
r_from_pd_df = ro.conversion.py2rpy(data)
# bandwidth selection with rule of thumb
ks_hns = ks.Hns(r_from_pd_df)
ks_hns = ro.conversion.rpy2py(ks_hns)
# symmetric bandwidth (diagonal matrix)
ks_hpi_diag = ks.Hpi_diag(r_from_pd_df)
ks_hpi_diag = ro.conversion.rpy2py(ks_hpi_diag)
return ks_hns, ks_hpi_diag
# Create test-data
data_x, data_y = make_blobs(
n_samples=1000, n_features=2, centers=3, cluster_std=0.5, random_state=0
)
# re-scale one dimension
data_x[:, 0] = data_x[:, 0] * 20
plt.hexbin(data_x[:, 0], data_x[:, 1])
plt.show()
ks_hns, ks_hpi_diag = get_bandwidth(pd.DataFrame(data_x))
dens_n = sm.nonparametric.KDEMultivariate(
data=data_x, var_type="cc", bw="normal_reference"
)
dens_cw = sm.nonparametric.KDEMultivariate(
data=data_x, var_type="cc", bw="cv_ml"
)
print("Python 'statsmodels' normal reference bandwidth estimate:")
print(np.diag(dens_n.bw))
print("R 'ks' normal scale bandwidth:")
print(ks_hns)
print(f"Python 'statsmodels' cross validation bandwidth estimate:")
print(np.diag(dens_cw.bw))
print("R 'ks' PI bandwidth diagonal:")
print(ks_hpi_diag)
This returns:
Python 'statsmodels' normal reference bandwidth:
[[10.66300197 0. ]
[ 0. 0.4962383 ]]
R 'ks' normal reference bandwidth:
[[101.29354253 -1.81323816]
[ -1.81323816 0.21938319]]
Python 'statsmodels' cross validation bandwidth:
[[4.40348543 0. ]
[0. 0.19790704]]
R 'ks' PI bandwidth diagonal:
[[20.52921948 0. ]
[ 0. 0.04962396]]
I would expect that the results of the two implementations' normal reference (rule of thumb) methods will give me not exactly the same results, but at least something in the same order of magnitude. The same is true for ks's plug-in method and statsmodels' cross-validation method (OK, here I'm not that sure).
As suggested by @Josef, I plotted the different KDEs (code not shown):
The data for the plots on the top row are produced with statsmodels and its estimate for the bandwidth, the one on the bottom row with ks.
It seems to me from the plots that the different bandwidths produce comparable results with the corresponding implementations. For example, the two plots on the left are similar, even though the bandwidths are significantly different.
Why are the estimates of the two so different?
The only idea I have is that they use different parameterizations or scales, but I couldn't find much on that. If that is the case, I would appreciate it if somebody could provide me a hint on where I could find that.
Plot the results to see if they have approximately the same smoothness. That would indicate whether some scaling factor differs.
Thanks for the hint, that is a good idea. I edited the post.
Because the R version returns a "bandwidth matrix," almost surely that will be the analog of a covariance matrix. Because Python returns only a diagonal matrix, it is plausible that it has taken the square roots so that these numbers can be interpreted as a characteristic distance. That is why it is striking that the matrices returned by R in your case are approximately the squares of the matrices returned by Python.
statsmodels KDEMultivariate uses a product kernel based mostly on Racine and Li. I guess R ks uses a multivariate gaussian kernel similar to scipy gaussian_kde.
In case somebody is looking for an answer to this question, @whuber was correct in the comments. I'm just making an answer out of it so it can be found more easily.
From observations I can confirm that R's ks returns the "bandwidth matrix", which is equivalent to the covariance matrix. Python's statsmodels, on the other hand, seems to return the square root of the diagonal.
So, if you want to use your bandwidth estimate from R's ks in Python's statsmodels, you just have to take the element-wise square root of the bandwidth.
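For example, a minimal sketch reusing the variables from the question (ks_hpi_diag being the matrix returned by ks.Hpi.diag):
import numpy as np
import statsmodels.api as sm

# ks returns a covariance-like bandwidth matrix; statsmodels expects
# per-dimension bandwidths, i.e. the square roots of its diagonal.
bw_from_r = np.sqrt(np.diag(ks_hpi_diag))

dens = sm.nonparametric.KDEMultivariate(
    data=data_x, var_type="cc", bw=bw_from_r
)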
|
STACK_EXCHANGE
|
Slow. Inefficient. Error-prone. These three significant problems plague any manual, paper-based system for data collection.
Such was the case for a major Cleveland medical clinic, where pediatric psychologists used a manual, paper-based system to monitor ADHD medication progress in children and young adults.
Realizing they needed greater efficiency and accuracy, they sought to replace paper with a web-based custom healthcare platform.
The web app would be used by a mixed audience of parents, teachers, clinicians, and medical office administrators—all of whom had differing levels of technical proficiency. Therefore, it needed to be easy to use.
Realizing they didn’t have the in-house expertise to build the app themselves, the clinic hired a third-party software company. That company, in turn, tapped into Taazaa’s experience in healthcare applications to complete the project.
Taazaa collaborated with teams from both the clinic and the third-party company. As part of the Agile development process, we demonstrated the product twice at the end of each sprint: once with the third-party team and again with the clinic’s team the following day.
The initial scope of the project was to convert three paper forms into web forms. Taazaa’s engineers quickly realized that access conditions needed to be added as a deliverable, as well as menus and a way to input medication details.
We then helped rapidly develop and deploy a web-based application using .NET Core as backend and Angular as frontend. The modular design utilized reusable components and saved data automatically. Per HIPAA requirements, the application included features to keep all patient data private and secure.
Later in the project, the clinic requested additional functionality to be added to the application. Nevertheless, we completed the project within the clinic’s six-month timeframe.
The resulting application allows parents and teachers to report the daily progress and efficacy of a child’s new ADHD medication. In turn, clinicians use the app to monitor the child’s reaction to the new medication and then report those findings to the child’s doctor.
The custom application allows clinicians to track the following patient data:
- Medication name
- Generic medication name
- Medication type
- Number of days/weeks patient has been on the medication
- Behavioral impact of medication (symptoms, efficacy, side effects)
- Parental and teacher ratings of the patient on a 1-7 scale
Parents, teachers, and clinicians can also input freeform notes. The app emails a reminder to the parent if they do not input their daily report.
Not only is the app HIPAA compliant, but it also meets or exceeds accessibility standards and incorporates the clinic’s branding and color scheme.
With the bespoke application in place, the clinic no longer has to rely on paper forms that must be manually entered into the database. Clinicians, patients, and their parents now enjoy better doctor-patient communication.
|
OPCFW_CODE
|
Using Intune, you can create and manage local admin accounts on your Windows devices, which is particularly useful for managing devices that are not connected to a domain. You can easily create a local user account and then add it to the Administrators group using Intune.
This blog post outlines the steps for creating a local administrator account. If your specific requirement is to Add an existing Azure AD/Entra ID user account to the local admin group, please refer to: Add a User to Local admin group using Intune.
Not only can you create a local admin account on a Windows device using Intune, you can also easily create a local administrator account on a Mac device. If you are managing a custom local admin account using Windows LAPS, then you will need to create the local admin account first.
As an example, we are going to create a local admin account called cloudinfraadmin. However, you can create a local admin user account with any name you like.
To create a local admin account, we will create a Custom device configuration profile and use the Accounts CSP to create a user account. Let's check the steps:
|Another way to Create a local admin user account using Intune and Powershell|
|You can also create a local user account and add it to the local administrator’s group using Intune and PowerShell by utilizing Intune proactive remediations. Refer to this post for more details: Create a Local Admin Using Intune and PowerShell.|
When you are using Intune Proactive remediations, you can use a PowerShell script to create a local user account. This way you have the option not to specify any password for the local user account, which can be helpful when you are managing that account using Intune Windows LAPS.
|Delete a local user account using Intune|
|If you are looking to delete a local user account using Intune, you can refer to the post: How To Delete A Local User Account Using Intune.|
Table of Contents
Create a Device Configuration Profile
To create a device configuration profile, we will follow below steps:
- Login on Microsoft Intune admin center
- Go to Devices > Configuration profiles > + Create profile
- Select Platform as Windows 10 and later
- Profile type as Templates.
- Template Name: Custom
- Provide a Name of the profile: Create Local admin on all devices.
- Description: This custom device configuration profile will create a local administrator account called cloudinfraadmin on all intune managed devices.
- Click on Add button to add OMA-URI settings and provide below details:
- Name: Create Local User Account
- OMA-URI: ./Device/Vendor/MSFT/Accounts/Users/cloudinfraadmin/Password
- Data type: String
- Value: C0mputEr@10!
You can replace cloudinfraadmin with any other name to create a local user account as per your requirement. For example, if you replace cloudinfraadmin with myadminacc, a local user account named myadminacc will be created.
- Click on Add button again to add OMA-URI settings and provide below details:
- Name: Add user to Local administrator group
- OMA-URI: ./Device/Vendor/MSFT/Accounts/Users/cloudinfraadmin/LocalUserGroup
- Data type: Integer
- Value: 2
Create an Azure AD security group that includes the users or devices you want to apply the custom device configuration profile to. It’s important to note that if you add users to the group, a local admin account will be created on all of the user’s devices joined to Azure and Enrolled into Intune.
If you intend to deploy this configuration to specific devices, ensure that you add the devices to the Azure AD security group, not the users. To deploy it on all end-user devices, you can click on + Add all devices to target all devices that are enrolled into Intune.
Review + Create
On the Review + Create tab, review the device configuration profile and click on Create.
Sync Intune Policies
The device check-in process might not begin immediately. If you’re testing this policy on a test device, you can manually kickstart the Intune sync either from the device itself or remotely through the Intune admin center.
Alternatively, you can use PowerShell to force the Intune sync on Windows devices. Another way to trigger the Intune device check-in process is by restarting the device.
After the policy is deployed successfully, check the end user’s device. Confirm if a local user account has been created and added to the local administrator’s group.
- Click on Start and search for Computer Management.
- Click on Local Users and Groups > Users and find the local user account created by Intune Custom device configuration profile which is cloudinfraadmin in our case.
- Next, ensure that the account is added to the Administrators group, granting local admin privileges. Go to Computer Management > Local Users and Groups > Groups > Administrators and check if your local user account is listed within the Administrators group.
Set local user account password to never expire using Intune
To set the local user account’s password expiry to ‘Never‘ on target devices, deploy a PowerShell script with the given command. For step-by-step instructions on deploying PowerShell scripts via Intune, refer to the blog post titled How to Deploy a PowerShell Script Using Intune.
Set-LocalUser -Name "cloudinfraadmin" -PasswordNeverExpires 1
In this blog post, we’ve learned how to create a local administrator account on Intune-managed devices through a custom device configuration profile. It’s a straightforward process that enables you to create a local admin to manage all your organization’s devices.
|
OPCFW_CODE
|
Batch file to start a program and restart it when that program crashes
I have a script that should start Firefox with an iMacros script, and after that should detect when Firefox crashes, and if it does crash, the batch restarts Firefox again and so on.
But I noticed that Firefox is restarted many times, regardless of whether it is crashing or not.
I was wondering how I can change this code so it only restarts when Firefox is not responding.
Code:
@echo off
:loop
cls
taskkill /F /IM Firefox.exe
cls
taskkill /F /IM crashreporter.exe
ping <IP_ADDRESS> -n 1 -w 4000 > nul
set MOZ_NO_REMOTE=1
timeout /t 4 /nobreak
start "" "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" imacros://run/?m=macros.iim
set MOZ_NO_REMOTE=
ping <IP_ADDRESS> -n 1 -w 1000000 > nul
goto loop
It restarts Firefox unconditionally every 1000000 milliseconds which is ~16minutes.
Really??? It's because of this line, right? "ping <IP_ADDRESS> -n 1 -w 1000000 > nul"
What do I have to do in order to only restart if firefox crashes?
@wOxxOm if you may awnser :)
@user3707533 It is very difficult to let code in a batch file run event driven and most often not possible at all. Batch processing with command processor is designed for doing something automated and then end. Event triggered execution of commands using a batch file is outside of the design concept of batch processing. It would be better to code this task in other languages like C++, C#, etc. which support event driven execution by design.
In this case you need to use a check, look for example at this answer. So in your case, you need to use
@echo off
start "" "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" imacros://run/?m=macros.iim
:loop
tasklist /nh /fi "imagename eq firefox.exe" /fi "status eq running" | find /i "firefox.exe" >nul && (
timeout 60 /NOBREAK
) || (
cls
taskkill /F /IM Firefox.exe
cls
taskkill /F /IM crashreporter.exe
timeout 4 /NOBREAK
set MOZ_NO_REMOTE=1
start "" "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" imacros://run/?m=macros.iim
set MOZ_NO_REMOTE=
)
goto loop
using your code. This checks whether Firefox is running and responding: if it is, it waits 60 seconds; if it is not responding, it goes to your restart code.
So this checks every 60s if Firefox is running, and if Firefox doesn't respond, it restarts it, right?
Not only if firefox is running, but if it is running AND not responding. The restart code is yours, so if that doesn't work you need to look at that code... The only difference is I used the timeout command instead of a ping to wait a period of time.
Did it work? If so you should mark an answer as correct, if not we can continue to try to help you.
The START command takes a /WAIT parameter. This means it won't return control back to your batch script as long as firefox is executing. Assuming firefox stops executing when it crashes (which it looks like you are assuming) then just adding the /WAIT parameter will do what you want.
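A minimal sketch of that approach (assuming, as above, that the Firefox process actually exits when it crashes; paths taken from the original script):
@echo off
:loop
set MOZ_NO_REMOTE=1
rem /WAIT blocks until firefox.exe exits, i.e. until it closes or crashes
start "" /WAIT "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" imacros://run/?m=macros.iim
set MOZ_NO_REMOTE=
timeout /t 4 /nobreak
goto loop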
|
STACK_EXCHANGE
|
import luchadores from "../models/luchadores.model.js";
const controllerLuchadores = {
init(model) {
model.luchadores = [...luchadores];
model.jugadorActual = 1;
},
characterSelect(data, model) {
// data = JSON.parse(data)
console.log(data, data.name, model)
model[`p${model.jugadorActual}Characters`].push(data);
model.luchadores = [...model.luchadores.filter(character => character.name !== data)]
model.jugadorActual = (model.jugadorActual % 2 ? 2 : 1);
if (model.luchadores.length === 0) {
model.fightBtn = true;
}
}
}
const controllerWellcome = {
init(model) {
console.log(model)
}
}
const controllerFight = {
init(model) {
model.p1Characters = ([...sessionStorage.getItem("p1").split(',')]).map(c => {
return luchadores.find(a => a.name === c)
});
model.p2Characters = [...sessionStorage.getItem("p2").split(',')].map(c => {
return luchadores.find(a => a.name === c)
});
console.log(model)
},
punchA(data,model) {
console.log(data)
this.afterAction(data,model);
},
kickA(data,model) {
console.log(data)
this.afterAction(data,model);
},
specialA(data,model) {
console.log(data)
this.afterAction(data,model);
},
changePlayer(model) {
model.jugadorActual = (model.jugadorActual % 2 ? 2 : 1);
},
afterAction(data, model) {
this.changePlayer(model);
model[`p${model.jugadorActual}Health`] -= data;
this.checkIfWinRound(model);
},
checkIfWinRound(model) {
if(model.p1Health <= 0 || model.p2Health <= 0) {
if (model.round < 3) {
model.p1Wins = (model.p1Health > model.p2Health ? model.p1Wins + 1 : model.p1Wins);
model.p2Wins = (model.p2Health > model.p1Health ? model.p2Wins + 1 : model.p2Wins);
// reset both players' health for the next round
model.p1Health = 100;
model.p2Health = 100;
if (model.round !== 2) {
model.round += 1;
} else {
model.winner = (model.p1Wins > model.p2Wins ? "PLAYER 1" : "PLAYER 2");
}
}
}
}
}
export {controllerLuchadores, controllerWellcome, controllerFight}
|
STACK_EDU
|
Highly inconsistent memory usage
⚠️ Please verify that this bug has NOT been raised before.
[X] I checked and didn't find similar issue
🛡️ Security Policy
[X] I agree to have read this project Security Policy
Description
👟 Reproduction steps
Start uptime kuma using docker compose, and observe. Memory usage spikes happen even though I'm doing nothing in my instance, it's just reporting as usual.
👀 Expected behavior
No such inconsistent memory usage spikes
😓 Actual Behavior
Memory usage spikes.
🐻 Uptime-Kuma Version
1.21.2
💻 Operating System and Arch
Ubuntu 22.04
🌐 Browser
Irrelevant
🐋 Docker Version
No response
🟩 NodeJS Version
n/a
📝 Relevant log output
No relevant logs
I've had my Uptime Kuma instance on Fly.io crash a few times randomly due to memory spikes (free apps have a 232 MB memory limit). Since it's back within seconds and there seem to be no issues with monitoring, I didn't look further into it, but I thought it might be relevant for this issue. Unfortunately, the logs already got cleared from the Fly.io console, although I don't think there were any log entries that could help pinpoint the leak.
I did some profiling and found that downloading the response body from monitored websites can be a big memory contributor. The response body gets parsed to a string with toString, which allocates a block of memory that doesn't seem to get freed automatically (in my 15 minutes profiling). But maybe it's already garbage collected and it's just not freeing it back to the OS unless some condition is met? This is deep inside node.js so I'm not sure how to fix.
I've had some crashes again and I've noticed that for my instances on Fly.io, the out of memory crashes are related to the nightly "clear-old-data" task. In a success scenario, the logs look like this:
2023-04-25T01:14:02.517 app[______________] ams [info] Worker for job "clear-old-data" online undefined
2023-04-25T01:14:04.903 app[______________] ams [info] 2023-04-25T03:14:04+02:00 [PLUGIN] WARN: Warning: In order to enable plugin feature, you need to use the default data directory: ./data/
2023-04-25T01:14:04.904 app[______________] ams [info] 2023-04-25T03:14:04+02:00 [DB] INFO: Data Dir: data/
2023-04-25T01:14:05.621 app[______________] ams [info] 2023-04-25T03:14:05+02:00 [DB] INFO: SQLite config:
2023-04-25T01:14:05.628 app[______________] ams [info] [ { journal_mode: 'wal' } ]
2023-04-25T01:14:05.631 app[______________] ams [info] [ { cache_size: -12000 } ]
2023-04-25T01:14:05.634 app[______________] ams [info] 2023-04-25T03:14:05+02:00 [DB] INFO: SQLite Version: 3.39.4
2023-04-25T01:14:05.664 app[______________] ams [info] {
2023-04-25T01:14:05.664 app[______________] ams [info] name: 'clear-old-data',
2023-04-25T01:14:05.664 app[______________] ams [info] message: 'Clearing Data older than 180 days...'
2023-04-25T01:14:05.664 app[______________] ams [info] }
2023-04-25T01:14:05.935 app[______________] ams [info] { name: 'clear-old-data', message: 'done' }
When it fails, it already stops after the first line:
2023-04-26T01:14:02.621 app[______________] ams [info] Worker for job "clear-old-data" online undefined
2023-04-26T01:15:34.847 app[______________] ams [info] [172851.821740] Out of memory: Killed process 526 (node) total-vm:1076864kB, anon-rss:182500kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:3948kB oom_score_adj:0
Also note that there's a 1.5 minute delay between starting the job and it being out of memory, while in the success scenario, the whole job runs in a few seconds. Maybe there's a bug with a memory leak in that job's code?
Just for clarity, it did print 'Clearing Data older than 180 days...' before going out of memory?
@chakflying Nope, it's just what I posted above, so it's only the 'Worker for job "clear-old-data" online undefined' line before it runs out of memory 1 1/2 minutes later.
Well that's very weird, because the logging calls are basically the first things that gets run, if no logs were printed it indicates the jobs didn't start.
Is the instance already running close to out of memory, and starting the job just pushed it over?
Memory usage does seem to be increasing over time the longer it is up. Here's my instance's memory usage over the past 7 days:
Those big spikes were the out-of-memory crashes, but many of the jumps in memory usage before that seem to be caused during the nightly job as well (but not all of them).
Here's a closer look at one of the spikes where it crashed:
It might be a weird Fly.io thing, but at the start of the job, memory usage jumps close to the limit, stays there for that 1 1/2 minutes (where there seems to be no logging), and only then it crashes. Seems very weird to me.
In any case, there seem to be one or more things that cause memory usage to go up (often in jumps), but not go down again, and at a certain point, the jump is too big and it runs out of memory.
Hope this helps!
The job currently runs on worker threads, which seems to start a new V8 instance with independent memory management.
This seems to prevent the max_old_space_size CLI arg to work as expected. I experimented with different values for my instance on Fly.io, but it still gets OOM killed every 2-3 days. I assume, as you expect, that not all the memory is returned to the OS after the clear-old-data job and so even if the main thread stays under the max_old_space_size, total memory consumption still rises like before. At least the graphs don't seem to have changed much.
Regardless, rewriting this job to not use worker threads would probably eliminate this.
At the very least, it should then be possible to use max_old_space_size to force garbage collection before it gets OOM killed.
Will be fixed with 1.22.0
This issue started with version 1.9.0
I have been running uptime-kuma 1.21.3 on k8s and 1.21.2 on an ec2(t3a.small) using docker. Both the environments show an upward trend for memory utilization. Metrics are collected using node-exporter for both environment.
k8s has request set to 200Mi and limit set to 400Mi.
There are no limits set for container on ec2
$ docker stats uptime-kuma
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
52a02f328b12 uptime-kuma 15.57% 1.034GiB
Is it related to the issue discussed here?
What monitors do you have setup and are you sending HTTP request to endpoints with a large response size?
The research and explanation were already written above. If you care about the memory used, update to 1.22 and set a memory limit for the container (CLI argument --memory=). It should run fine with around ~256MB.
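For example, with plain Docker (a sketch based on the project's standard run command; adjust the volume and port to your setup):
docker run -d --name uptime-kuma --memory=256m -p 3001:3001 -v uptime-kuma:/app/data louislam/uptime-kuma:1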
@jaskeerat789 What tool did you use to generate those graphs?
@luckman212 See grafana
https://github.com/louislam/uptime-kuma/assets/26258709/4b4da9d4-8704-4278-b770-c6cdceeb4a14
|
GITHUB_ARCHIVE
|
I have done both software development and operations in my career. I’ve worked for large companies where dev and ops were run by different Vice Presidents and startups where we all did everything. I’ve been working with cloud computing (or as we naively called it then, utility computing) and ops automation since the early 2000s.
These days I’m a consumer of cloud and DevOps tools and an observer of trends in both areas. They strongly overlap in my work, so for this post, at least I will conveniently conflate them into CloudOps. I would like to offer some thoughts about the state of CloudOps. I confess to being a software engineer first and an ops engineer second. My opinions reflect that bias.
I’ll start by making some crass generalizations about the difference between development and ops. In my experience, developers spend a lot of time thinking about design elegance. Whether it’s a programming language like Lisp or Ruby, or a framework like Rails, or a methodology like Agile, developers strive for simplicity, uniformity, clarity, coherence. At least when it comes to tools for ourselves, we have a good sense of usability. I remember people slamming C++ because it was powerful but complicated and ugly.
Operations, on the other hand, tends to be pragmatic and concrete. Ops engineers pride themselves on their ability to master complexity, not hide it. Developers talk about moving up the abstraction stack. Ops engineers often have a hard time moving up the stack. The thought of not being able to "fondle the router" makes them nervous.
Moving from crass generalizations about people to crass generalizations about the things they build, I’m going to claim that the current generation of CloudOps management tools reflects the ops engineer mentality. They make it possible to “do a bunch of things,” but lack design integrity. I think Chef is the coolest thing since sliced bread. I love what I can do with it. I try to design it into systems management solutions wherever I can. But the process of learning Chef was not a pleasant one. The catalog of “what you can do with Chef” was overwhelming. I had to repeatedly dig through the documentation and scratch my head to figure out the heart of its semantic model.
Cloud management tools often fail to reflect critical user needs. Development used to be really bad about that, too. Agile has helped us get better at listening to users and focusing on relevant business value. The key customer insight I see missing from cloud tools is the understanding that, just as maintenance is 90% of software development, so too change management is 90% of configuration management. CloudFormation is a prime offender. Once you change a template or an environment you’ve built from a template, there is no way to sync the change across the objects that supposedly reflect each other. IMHO CloudFormation is almost counterproductive because it creates a false sense of security.
Vagrant, on the other hand, is a shining example of how to get it right. It has a very clean conceptual model. I found myself able to understand what it was and what it did nearly instantaneously (all credit to Vagrant, none to me). Even better, Vagrant is all about managing change. At the risk of sounding corny, I would say that Vagrant is beautifully designed.
There are as many definitions of DevOps as there are practitioners. One that I’ve thrown around is “operations acting more like development.” I was referring to operations adopting software development tools and processes like scripting, version control, automated verification, and scrums. I think, though, that I need to expand my definition. Operations needs to act more like development when it comes to designing tools, in at least two ways:
- By striving for design integrity and beauty
- By adopting user-centered agile requirements techniques
I’ve viewed the world from the ops side of the fence. I have nothing but respect for everyone involved in creating the CloudOps tools ecosystem. If we’re going to start building things for others to operate, though, we need to be willing to learn from the folks on the other side of the fence. If we do, then together, we can build things better and build better things. IMHO that truly is DevOps.
|
OPCFW_CODE
|
RISC-V is an open source instruction set architecture (ISA).
I was broadly looking at what it would take to support RISC-V in
Fedora, and as well as the usual things like kernel, GCC, binutils,
maybe cross-compilers, and all the stuff that's the same for any new
architecture, there is one problem which is specific to RISC-V.
Because no hardware implementation of RISC-V exists that you can
buy, currently you have to use an FPGA development kit and use one
of the FPGA implementations -- I'm using lowRISC. There are
affordable FPGA kits for around US$150-$320 based on the Xilinx
Artix-7 FPGA which are supported by lowRISC.
The source for the RISC-V CPU core (called "Rocket") is written in
Verilog and is free software (3-clause no advertising BSD).
In FPGA-land, a "bitstream" is kind of like a binary or a firmware image.
Unfortunately to compile the source code to a bitstream, things get
very proprietary. For Xilinx, you have to install their proprietary
compiler, Vivado. It's not just proprietary but it has node-locked
licensing so it's user-hostile too.
There is a second sub-problem, but one which is going to be overcome
soon. At the moment there is only a free CPU core. However to talk
to the outside world even on an FPGA, it needs peripherals like a UART
(serial port), SD card reader and some other SPI peripherals. These
are provided by Xilinx and are (of course) proprietary IP. However
lowRISC plan to replace these with free software peripherals later.
Once you've compiled your bitstream, you then need to write it to the
FPGA. Writing a bitstream to the FPGA turns the FPGA into a RISC-V
processor and you can boot Linux on it from an SD card.
You can use the proprietary Vivado tool to write the bitstream to the
FPGA, but there are also open source tools to do this, which is what I used.
This all really works - I've been documenting building everything from
source on my blog.
- Compiling the Verilog source code to a bitstream requires highly
proprietary tools and will never be possible in Fedora.
- Writing the bitstream to the FPGA is possible with GPL tools.
- There are currently some proprietary bits in the bitstream, but I
hope those will be removed at some point.
Obviously the last point makes this moot right now, but assuming that
can be fixed, here is my question: Can we package these bitstream
files in Fedora? It would allow a more immediate out-of-the-box
experience where you just plug in the development kit and go.
Hardware impls do exist, but they are all research projects so
far, or otherwise not for sale.
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
|
OPCFW_CODE
|
Edit the HTTP Header Variable authentication scheme for an instance.
HTTP Header Variable supports the use of header variables to identify a user and to create an Oracle APEX user session. Use the HTTP Header Variable authentication scheme if your company employs a centralized web authentication solution like Oracle Access Manager which provides single sign-on across applications and technologies. User credential verification is performed by these systems and they pass the user's name to Oracle APEX using an HTTP header variable such as "REMOTE_USER" (which is the default).
To edit HTTP Header Variable:
- Sign in to Oracle APEX Administration Services.
- Click Manage Instance.
- Under Instance Settings, click Security.
- Under Authentication Control, scroll down to Development Environment Authentication Schemes. The Status column indicates whether an authentication scheme is designated as Current.
- Find HTTP Header Variable and click Edit. The Edit Scheme page appears.
- Click Make Current Scheme to have applications identify and verify the user using this authentication scheme.
- Under Edit Authentication Scheme:
- PL/SQL Code - Enter a PL/SQL anonymous block of code that contains procedures for pre- and post-authentication entry points. To improve performance, you can also store this code in a PL/SQL package in the database.
- Pre-Authentication Procedure Name - Specify the name of a
procedure to be executed after the login page is submitted and just before
credentials verification is performed. The procedure can be defined in the
PL/SQL Code attribute or within the database.
Authentication schemes where user credential checking is done outside of Oracle APEX typically do not execute the Pre-Authentication procedure. Examples include HTTP Header Variable, Oracle Application Server Single Sign-On, and custom authentication schemes that verify credentials externally.
- Post-Authentication Procedure Name - Specify the name of a procedure to be executed by the Oracle APEX LOGIN procedure after the authentication step (login credentials verification). The LOGIN procedure will execute this code after it performs its normal duties, which include setting a cookie and registering the session, but before it redirects to the desired application page. The procedure can be defined in the PL/SQL Code attribute or within the database (see the sketch after these steps).
- Under Authentication Scheme Attributes:
Tip:To learn more about an attribute, see field-level Help.
- HTTP Header Variable Name - Specifies the name of the HTTP header variable which contains the username. The default OAM_REMOTE_USER is used by Oracle Access Manager and has to be changed if another authentication provider is used.
- Action if Username is Empty - Specifies the action to be performed if the username stored in the HTTP header variable is empty.
- Verify Username - Specifies how often the username stored in the HTTP header variable is verified.
- Logout URL of SSO Server - If the authentication scheme is based on
Oracle Access Manager or similar servers, use this attribute to specify a
URL to log out of the central single sign-on server.
Oracle Access Manager based SSO example:
The substitution parameter %POST_LOGOUT_URL% will be replaced by an encoded URL to the login page of your application.
- To save your changes, click Apply Changes.
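For illustration only, the sketch referenced in the steps above: a Post-Authentication procedure can be as small as the following PL/SQL (the procedure and table names here are hypothetical, not part of the Oracle APEX documentation):
-- Hypothetical audit procedure; enter its name under
-- "Post-Authentication Procedure Name".
create or replace procedure post_auth_audit
is
begin
  -- v('APP_USER') returns the username established by the scheme
  insert into login_audit (username, login_at)
  values (v('APP_USER'), sysdate);
end post_auth_audit;
/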
|
OPCFW_CODE
|
[log in to view the URL] - need chains in links to look like half-tone steel chains - replace blue colours with one blue colour (royal blue 1) throughout, except for the blue colour in the logo - all wording in pubspy classics is to be active/live - thinner font in Welcome to PUBSPY - top section to be in that half-tone steely grey colour - Login Register etc - the word - or - between Enter a Locati...
Hello. I want to build a simple website that allows uploading documents such as doc, ppt, excel, and pdf and reviewing the content on the website. 1. I want to make it simple. No admin panel, no user/register feature. 2. I only want to upload a document and preview it. I want to build this simple website in a day. Thank you. This is a sample link [log in to view the URL] I want to clone it
Good morning, I am looking for a good, experienced writer to create all the regulatory policies for my business, including the website and the management system. I will discuss more details in private with the winner. My maximum offer is the best I can do, so don't bid if you believe that you may charge me more.
***** Please read all of the description and bid accordingly. Answer all of the displayed queries, otherwise it will be hard for you to be considered.***** Looking for somebody that has lots of experience in Java (web development) and React.js to work on an existing website portal. What you need to do is finish the MVP and eventually move to V2, V3 if you are good. In order to be considered please let me know...
We need a person or team who can do high-quality illustration work on a regular basis. We will pay per work. We need: Children's Book Illustration, Fantasy Illustration, Digital House Illustration, Comic Book Illustration, Pop Art Illustration. We need high-quality work on time. Example Link: [log in to view the URL] If you really can do the above, then please bid with your share-able por...
We have a Shopify website to sell blankets, and we are going to translate our website into multiple languages. The languages needed now are Russian, Ukrainian, Spanish, Portuguese, Danish, German and so on. If you are a native speaker of one of them, please apply for this job. Thank you.
decoration design work for villa
I am a doctor and want to hire people to make animation videos related to the field of surgical procedures and related ailments.
To build an app or program that can keep watch on SNKRS and notify the user of new [log in to view the URL]. We need Python skills.
Design a Hamiltonian for Grover's search and run in on the cloud platform of D-Wave systems.
|
OPCFW_CODE
|
Describe the bug
Since 1.16 was released, rake is broken. See full trace here https://travis-ci.org/github/dev-sec/chef-os-hardening/jobs/763925521
$ bundle exec rake kitchen
rake aborted!
ArgumentError: wrong number of arguments (given 1, expected 0)
/home/travis/build/dev-sec/chef-os-hardening/vendor/bundle/ruby/2.6.0/gems/github_changelog_generator-1.16.0/lib/github_changelog_generator/task.rb:34:in `initialize'
/home/travis/build/dev-sec/chef-os-hardening/vendor/bundle/ruby/2.6.0/gems/github_changelog_generator-1.16.0/lib/github_changelog_generator/task.rb:34:in `initialize'
/home/travis/build/dev-sec/chef-os-hardening/Rakefile:57:in `new'
/home/travis/build/dev-sec/chef-os-hardening/Rakefile:57:in `<top (required)>'
/home/travis/build/dev-sec/chef-os-hardening/vendor/bundle/ruby/2.6.0/gems/rake-13.0.3/exe/rake:27:in `<top (required)>'
/home/travis/.rvm/rubies/ruby-2.6.3/bin/ruby_executable_hooks:24:in `eval'
/home/travis/.rvm/rubies/ruby-2.6.3/bin/ruby_executable_hooks:24:in `<main>'
(See full trace by running task with --trace)
The command "bundle exec rake kitchen" exited with 1.
To Reproduce
Create Rakefile with
begin
  # read version from metadata
  metadata = Chef::Cookbook::Metadata.new
  metadata.instance_eval(File.read('metadata.rb'))
  # build changelog
  require 'github_changelog_generator/task'
  GitHubChangelogGenerator::RakeTask.new :changelog do |config|
    config.future_release = "v#{metadata.version}"
    config.user = 'dev-sec'
    config.project = 'chef-os-hardening'
  end
rescue LoadError
  puts '>>>>> GitHub Changelog Generator not loaded, omitting tasks'
end
Run bundle exec rake kitchen
Expected behavior
No error. Expected documentation about required changes, if any.
Additional context
1.15.2 was working OK.
@mfortin What happens to your rake task if you attempt to leave out the :changelog argument (which is the default, according to https://github.com/github-changelog-generator/github-changelog-generator/blob/master/lib/github_changelog_generator/task.rb#L35 ) ?
@mfortin Does the change in the linked PR change things for you?
I have the same issue and removing the super call as done #943 seems to fix it.
removing :changelog seems to be OK.
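For anyone landing here, the working form of the same Rakefile block above is simply the following (a sketch with the positional argument removed; the task name still defaults to changelog):
GitHubChangelogGenerator::RakeTask.new do |config|
  config.future_release = "v#{metadata.version}"
  config.user = 'dev-sec'
  config.project = 'chef-os-hardening'
end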
I believe the right solution is to remove the super call, as it is backward compatible
Maybe you can keep the super call only when there's no argument passed and emit a deprecation warning for whoever is still using it, if you want to remove it in the future.
@kennyadsl Calling super() allows it to work, too.
Right!
@olleolleolle are you planning on creating a new release today ?
|
GITHUB_ARCHIVE
|
A rolling deployment is a software release strategy that staggers deployment across multiple phases, which usually include one or more servers performing one or more functions within a server cluster. Rather than updating all servers or tiers simultaneously, the organization installs the updated software package on one server or subset of servers at a time. A rolling deployment is used to reduce application downtime and unforeseen consequences or errors in software updates.
In a traditional software upgrade, an application server is taken offline while its software is updated and tested, then it is returned to service. This can result in substantial downtime for the application -- especially if unexpected errors or problems force a developer to revert the installation to a previous version. In a rolling upgrade, only part of the server capacity for an application is offline at a given time. For example, consider a three-tier application comprising front end, back end and database, deployed with three nodes in each tier. Each of the three front-end application server nodes receives traffic through a load balancer. In a traditional upgrade, the load balancer stops all application traffic so that the servers can go offline and update. In a rolling deployment, the organization can take one of the nodes in each tier offline, with the load balancer configured to direct traffic to the remaining servers still running the proven, current software version. The idle servers receive updates and testing; the remaining online servers support user traffic.
The number of simultaneous deployment targets in a rolling deployment is referred to as the window size. A window size of one deploys to one target at a time, and that deployment must finish before another deployment starts. A window size of three deploys to three target servers at a time. A large cluster might use a larger window size.
A rolling deployment might include a testing phase wherein the load balancer directs only limited or test traffic while the new software is configured and proven. This does not affect the experience of users still accessing the application on remaining servers that are not yet upgraded.
After the software is tested successfully, the server returns to service and another server node is taken offline to repeat the process, continuing until all of the server nodes in the cluster are updated and running the desired application software version. The overall application has been continuously available throughout the rolling deployment.
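To make the window-size mechanics concrete, here is a toy sketch in Python. It is illustrative only, not any vendor's tooling, and drain, update, verify, and restore are stand-ins for real orchestration steps:
def rolling_deploy(servers, window_size, update, verify):
    # Update the cluster in batches of window_size, keeping the rest in service
    for i in range(0, len(servers), window_size):
        batch = servers[i:i + window_size]
        for server in batch:
            server.drain()          # load balancer stops sending it traffic
        for server in batch:
            update(server)          # install the new version
            if not verify(server):  # testing phase with limited or test traffic
                raise RuntimeError(f"{server} failed verification; halt the rollout")
            server.restore()        # node returns to service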
One of the important concerns for a rolling deployment is ensuring backward compatibility across the stack. Some nodes might be running newer versions and others are still on older versions during a rolling deployment. The organization must ensure that the older versions of web or application components work with new database schema, or vice versa.
Session persistence can be an issue with rolling deployments if user traffic is directed to servers that are running different software versions, with unpredictable and undesirable results. Load balancers should support user persistence functionality so that user traffic can continue on the same application server. In addition, session-sharing should be invoked wherever possible so that the user session can continue on another server if the server currently providing user persistence is taken offline for its update.
Rolling deployment does not require as much underlying IT resource capacity as blue/green deployment, wherein the update occurs on one entire production environment while an identical production environment continues serving users on the previous software version.
Rolling deployment differs from phased rollouts. A phased deployment occurs over an extended time span and users are aware that they're on different software versions, in order to gather feedback or test out performance before fully rolling out a change.
|
OPCFW_CODE
|
I have a question. I need to go to Lima, Peru on September 15 to have surgery. But I don't have a US passport; I am a permanent resident, so I have a green card.
"They tried to tell me he was afraid of the general population … but that's part of jail," he said in a recent interview. "That's what makes you not want to go back, it being such a horrible experience."
As the answer to your question exceeds the room we have here for replies, I allowed myself to move your comment to our forum under "Volunteering in Peru for a Year" and answered it there.
But if you are friendly and explain the situation with your work visa and all of the Peruvian red tape involved, I'm sure you will get the number of days you need.
.. Based on what was mentioned above, I do not require any other paperwork apart from my Philippine passport and green card. Is this correct? If so, where in writing can I find proof to show immigration at the airport?
In what is usually called "pay-to-stay" or "private jail," a constellation of small city jails (at least 26 of them in Los Angeles and Orange counties) open their doors to defendants who can afford the option.
So if you can satisfy all the other requirements the Peruvian consular section in India is asking for, I'm sure your age shouldn't be a problem.
While a judge said Sparks needed to serve his time within a year, he took two years to finish as he continued working and traveling internationally, according to court and jail records.
To avoid any possible problems when entering Peru, I would get in touch with the closest Peruvian consulate.
… there are plans on the way allowing immigration officers to deny entry to foreigners with a criminal record (especially with drug-related convictions). As far as I know these plans are still not implemented, and I doubt that they will be adopted at all.
In case you are considering marrying your girlfriend, you can apply for the Visa de Residente para el caso de casado con Peruana, which also permits you to work in Peru.
According to Peruvian law, foreign visitors must have a return or onward ticket/passage when entering the country. While this law is not enforced by Peruvian immigration, airlines usually demand to see a return or onward ticket when checking in for a flight to Peru.
Wurtzel later claimed the act was consensual, but in 2011 he pleaded no contest to sexual battery and was sentenced to a year in jail.
So what happens when that period of time is up? Do I have to leave the country and come back? Or is there a way I can obtain a visa so that I don't have to do that? Thank you.
|
OPCFW_CODE
|
using Microsoft.Reactive.Testing;
using System.Reactive;
using Xunit;
namespace RxLibrary.Tests
{
public class CreateColdObservableTests : ReactiveTest
{
[Fact]
public void CreateColdObservable_ShortWay()
{
var testScheduler = new TestScheduler();
ITestableObservable<int> coldObservable =
testScheduler.CreateColdObservable<int>(
// Inheriting your test class from ReactiveTest opens up the following factory
// methods that make your code much more fluent
OnNext(20, 1),
OnNext(40, 2),
OnNext(60, 2),
OnCompleted<int>(900)
);
// Creating an observer that captures the emissions it receives
ITestableObserver<int> testableObserver = testScheduler.CreateObserver<int>();
// Subscribing the observer, but until the TestScheduler is started, emissions are not emitted
coldObservable
.Subscribe(testableObserver);
// Starting the TestScheduler means that only now emissions that were configured will be emitted
testScheduler.Start();
// Asserting that every emitted value was received by the observer at the same time it
// was emitted
coldObservable.Messages
.AssertEqual(testableObserver.Messages);
// Asserting that the observer was subscribed at the scheduler's initial time
coldObservable.Subscriptions.AssertEqual(
Subscribe(0));
}
[Fact]
public void CreateColdObservable_LongWay()
{
var testScheduler = new TestScheduler();
ITestableObservable<int> coldObservable = testScheduler.CreateColdObservable<int>(
// This is the long way to configure emissions; see the test above for a shorter one
new Recorded<Notification<int>>(20, Notification.CreateOnNext<int>(1)),
new Recorded<Notification<int>>(40, Notification.CreateOnNext<int>(2)),
new Recorded<Notification<int>>(60, Notification.CreateOnCompleted<int>())
);
// Creating an observer that captures the emissions it receives
ITestableObserver<int> testableObserver = testScheduler.CreateObserver<int>();
// Subscribing the observer, but until the TestScheduler is started, emissions are not emitted
coldObservable
.Subscribe(testableObserver);
// Starting the TestScheduler means that only now emissions that were configured will be emitted
testScheduler.Start();
// Asserting that every emitted value was received by the observer at the same time it
// was emitted
coldObservable.Messages
.AssertEqual(testableObserver.Messages);
// Asserting that the observer was subscribed at the scheduler's initial time
coldObservable.Subscriptions.AssertEqual(
Subscribe(0));
}
}
}
|
STACK_EDU
|
Here's a few steps I recommend:
1. Make sure wireless debugging is turned on. My OnePlus phone often turns off wireless debugging when I switch networks.
2. Switch to tunnel, there are often funny issues on Public Networks.
3. Go to settings on your phone and find expo's settings. Turn the setting "Display over other apps", to off then on again (it should definitely be on).
4. Uninstall Expo Dev clients AND builds, this is often the cause for me.
5. Try a wired connection (make sure USB debugging is on).
6. Reinstall Expo Go on your phone
This can be a real pain, good luck to all.
OK, good. Although are you sure the link you posted here corresponds to the --profile preview build?
Unless you changed the “preview” profile in eas.json it should have generated an APK rather than an AAB.
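For reference, the relevant profiles in a default eas.json look roughly like this (a sketch; your file may differ):
{
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal"
    },
    "preview": {
      "distribution": "internal"
    }
  }
}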
OK, here’s the problem
A “Preview” APK (or a production AAB) is a standalone app that you just install and open on your phone directly. It is not something you can use with Expo Go.
Even a “development” APK is not something you can use with Expo Go. It is an alternative to Expo Go.
If you want to use a dependency that includes native code that is not part of the Expo SDK, then you will need to build a dev client. This would previously have required you to “eject”, but these days many dependencies will work out of the box when built with EAS Build instead of the classic “expo build”. Others can still be made to work by installing or writing a “config plugin”.
I see you’re using react-native-sound, which includes its own native code. Fortunately it only mentions react-native link in the installation docs, and this should be taken care of automatically by React Native’s “autolinking” during the EAS Build build process.
So, use eas build -p android --profile development.
At the start of the build process it will print out the following:
% eas build -p android --profile development
✔ Using remote Android credentials (Expo server)
✔ Using Keystore from configuration: Build Credentials xxxABC123z (default)
Compressing project files and uploading to EAS Build. Learn more
✔ Uploaded to EAS 10s
Build details: https://expo.dev/accounts/[account]/projects/[project]/builds/ff7469b1-bb39-49c7-83d6-27c039167c17
You can see the build progress at the Build details URL. When it’s done, you will also be able to download the APK onto your phone from that page.
At the end of the build it will print a QR code in the terminal and the “Build details” URL again:
🤖 Open this link on your Android devices (or scan the QR code) to install the app:
The above-mentioned QR code is just a convenient way to get to the build page from your phone without having to type in the whole URL. This is not a QR code that would work in Expo Go.
If you browse to the URL with your phone you’ll see an “Install” button. This should download the APK, after which you should be able to install it.
After installing it, open it on your phone and run expo start in your terminal. Another QR code should be displayed in the terminal and below that something like:
› Choose an app to open your project at http://192.168.5.123:19000/_expo/loading
› Metro waiting on exp://192.168.5.123:19000
If you open the QR code on your phone (or the …/_expo/loading URL) then it will allow you to choose to open Expo Go or the Development Build. You’d choose Development Build.
Alternatively you can choose the option to enter the URL manually into the development client.
After that you can use the dev client as you would use Expo Go.
|
OPCFW_CODE
|
Improve feature provider option descriptions
At this point the descriptions of the plugin options visible in the Atom settings are rather basic and often less accurate than what can be found in the documentation of the specific feature provider.
We should improve those descriptions and/or link to the relevant docs if applicable.
Any help on this is highly appreciated. This issue doesn't need to be fixed in one go, we can iteratively improve the plugin documentation.
I spotted the following things:
Rope folder: If empty, no such a folder is used at all. This is a bit odd because if you leave it empty, the default (.ropeconfig) is used. So it may need to contain whitespace?!
Pydocstyle convention. This can be left blank? What does happen in the case that neither pep257 nor numpy is chosen?
Yapf: If both Yapf and autopep8 are enabled the latter is preferred. Does this mean that if both are enabled, there is no way to invoke Yapf despite it being enabled? In this case it would be more intuitive to have a select field that enables either Yapf or autopep8. But this list would need to be populated dynamically, because installation of these plugins became optional and not all may be available.
@exploide Thanks for the review 👍
Rope folder: If empty, no such a folder is used at all. This is a bit odd because if you leave it empty, the default (.ropeconfig) is used. So it may need to contain whitespace?!
Good point fixed in #99
Pydocstyle convention. This can be left blank? What does happen in the case that neither pep257 nor numpy is chosen?
I personally don't use pydocstyle. The options mirror the CLI of pydocstyle. Unfortunately I haven't found a good way of handling null values in Atom config schema.
Yapf:
Does this mean that if both are enabled, there is no way to invoke Yapf despite it is enabled?
I haven't tried it myself but that's how I interpret the language server docs. We should test it and clear up the docs.
In this case it would be more intuitive to have a select field that either enables Yapf or autopep8.
Apart from the downside you mentioned, we had a similar discussion for completions on the pyls repo some time ago: https://github.com/palantir/python-language-server/pull/222#issuecomment-356603717
It would become impossible for a plugin to add extra formatters. Which was the point of plugins in the first place.
This is certainly a good argument for the language server itself. Though I'm unsure if this is a thing we want or should support in ide-python.
For now I tried to keep the config options as close to the language server as possible to avoid additional conversions. But I'm very open to ideas for restructuring our settings.
Good, thanks. Seems like a few points need further investigation.
I don't fully understand the thing with the formatters. If there were another plugin that eventually decides to offer formatting functionality (let's say pydocstyle supports formatting in the future), then the language server would need additional code to expose this functionality via the interface implemented by languageclient / ide-ui anyway. Or am I wrong? So changes to the language server (and then in this package) become necessary in every case?!
Seems like a few points need further investigation.
👍
So changes to the language server (and then in this package) become necessary in every case?!
People can also install 3rd-party-plugins like the isort-formatter to extend the functionality of the language server. I don't know if we'd be able to support this use case if we implement a custom config.
|
GITHUB_ARCHIVE
|
I’m currently regarding LCA 2013 as my last LCA for a while. Never say never: LCA 2014 bids came in from Sydney (so, local to me) and Perth (where I’ve never been and would like to go). But I first went to LCA in 2001 and then later went to 2004 and since 2007 I’ve been to LCA every year, except for 2010 and that only because I had a baby in the middle of the conference.
LCA used to be my main way of reconnecting with open source while I was working on my PhD. But now I work for the Ada Initiative and open source (and open stuff) events are a big part of my job. While I have more time and energy for conferences I am attending them for very different reasons now and the lure of the new is getting strong.
Because my volunteer time is diminishing, LCA 2013 is definitely the last LCA in which I will have had significant input into the program (Michael Davies and I are co-chairs of the conference program this year, as we were for 2010). So, it’s something of a farewell tour for me and I’m looking forward to the program we worked so hard putting together.
… actually my non-LCA-ing family is still in town Monday, so I’ll probably go to Bdale Garbee’s keynote and then hang out with them. Off to a great start here, I know.
Radia Perlman’s keynote is the keynote I am most looking forward to this year. Following that several of my peeps are giving Haecksen talks before lunch:
- Feminism, anarchism and FOSS – Skye Croeser
- Overcoming imposter syndrome – Denise Paolucci
- Security – Joh Pirie-Clarke
People may be especially interested in the Imposter Syndrome talk, Imposter Syndrome being the feeling that you’ve achieved your current position or status totally fraudulently and are going to be discovered any second and publicly humiliated. It’s very common among people who are in quite critical fields (like academia). Denise was among our Imposter Syndrome facilitators for AdaCamp DC.
I am not sure about after lunch, but Web Animations: unifying CSS Transitions, CSS Animations, and SVG (Shane Stephens) is a definite contender. In the afternoon The Horrible History of Web Development (Daniel Nadasi) sounds interesting (although it's the kind of talk where an abstract would be really useful in determining whether I want to go), but so do What we can learn from Erlang (Tim McNamara) and Concurrent Programming is not so difficult (Daniel Bryan).
Trinity: A Linux kernel fuzz tester (and then some) (Dave Jones) is very tempting in the first slot, but I think I will go to Think, Create & Critique Design (Andy Fitzsimon) because I want to “speak” design semiotics a little bit better and have for a long time. Talking to graphic designers is actually part of my job.
In the second slot I am not entirely sure, but probably Open Source and Open Data for Humanitarian Response with OpenStreetMap (Kate Chapman) since I periodically dabble in OpenStreetMap.
After lunch my pick is definitely Free and open source software and activism (Sky Croeser). I’ve been following Sky’s activism and research since the EFA lamb roast fun and met her at AdaCamp Melbourne. I want to hear what she has to say about (h)ac(k)tavism.
Not as sure about the following slot (in a moment of mischief, we put the DSD’s talk right after Sky’s, but I’m not especially interested) but the biggest contender is The future of non-volatile memory (Matthew Wilcox) because he usually is one of the highlights of the LCA lower-level technical talks.
The first slot after afternoon tea I am not committing to, but it does contain Pia's grand scheme Geeks rule over kings – the Distributed Democracy. After that I think Copyright's Dark Clouds: Optus v NRL (Ben Powell) is required: it isn't LCA without coming away feeling distinctly gloomy about the current state of the intellectual property framework.
|
OPCFW_CODE
|
CSE 534 – Fundamentals of Computer Networks Lecture 10: DNS (What’s in a Name?) Based on Slides by D. Choffnes (NEU). Revised by P. Gill Spring 2015. Some content on DNS censorship from N. Weaver.
Layer 8 (The Carbon-based nodes) 2 If you want to… Call someone, you need to ask for their phone number You can’t just dial “P R O F G I L L ” Mail someone, you need to get their address first What about the Internet? If you need to reach Google, you need their IP Does anyone know Google’s IP? Problem: People can’t remember IP addresses Need human readable names that map to IPs
Internet Names and Addresses 3 Addresses, e.g. 192.0.2.1 Computer usable labels for machines Conform to structure of the network Names, e.g. www.stonybrook.edu Human usable labels for machines Conform to organizational structure How do you map from one to the other? Domain Name System (DNS)
History 4 Before DNS, all mappings were in hosts.txt /etc/hosts on Linux C:\Windows\System32\drivers\etc\hosts on Windows Centralized, manual system Changes were submitted to SRI via email Machines periodically FTP new copies of hosts.txt Administrators could pick names at their discretion Any name was allowed alans_server_at_sbu_pwns_joo_lol_kthxbye
Towards DNS 5 Eventually, the hosts.txt system fell apart Not scalable, SRI couldn’t handle the load Hard to enforce uniqueness of names e.g MIT Massachusetts Institute of Technology? Melbourne Institute of Technology? Many machines had inaccurate copies of hosts.txt Thus, DNS was born
DNS at a High-Level 7 Domain Name System Distributed database No centralization Simple client/server architecture UDP port 53, some implementations also use TCP Why? Hierarchical namespace As opposed to original, flat namespace e.g. .com google.com mail.google.com
Naming Hierarchy 8 Top Level Domains (TLDs) are at the top Maximum tree depth: 128 Each Domain Name is a subtree .edu neu.edu ccs.neu.edu www.ccs.neu.edu Name collisions are avoided neu.com vs. neu.edu [tree diagram: Root → edu, com, gov, mil, org, net, uk, fr, etc. → neu, mit → ccs, ece, husky → www, login, mail]
Hierarchical Administration 9 Tree is divided into zones Each zone has an administrator Responsible for the part of the hierarchy Example: CS controls *.cs.stonybrook.edu SBU controls *.stonybrook.edu [tree diagram: Root (ICANN) → edu, com (Verisign), gov, mil, org, net, uk, fr, etc. → neu, mit → ccs → www, login, mail]
Root Name Servers 10 Responsible for the Root Zone File Lists the TLDs and who controls them ~272KB in size com. 172800 IN NS a.gtld-servers.net. com. 172800 IN NS b.gtld-servers.net. com. 172800 IN NS c.gtld-servers.net. Administered by ICANN 13 root servers, labeled A through M 6 are anycasted, i.e. they are globally replicated Contacted when names cannot be resolved In practice, most systems cache this information
Basic Domain Name Resolution 11 Every host knows a local DNS server Sends all queries to the local DNS server If the local DNS can answer the query, then you're done 1. Local server is also the authoritative server for that name 2. Local server has cached the record for that name Otherwise, go down the hierarchy and search for the authoritative name server Every local DNS server knows the root servers Use cache to skip steps if possible e.g. skip the root and go directly to .edu if the root file is cached
Recursive DNS Query 12 Puts the burden of resolution on the contacted name server How does asgard know who to forward responses to? Random IDs embedded in DNS queries [diagram: asgard.ccs.neu.edu asks its local server "Where is www.google.com?"; the local server queries Root, then com, then ns1.google.com on the client's behalf]
Iterated DNS query 13 Contacted server replies with the name of the next authority in the hierarchy "I don't know this name, but this other server might" This is how DNS works today [diagram: asgard.ccs.neu.edu asks "Where is www.google.com?" and is referred from Root to com to ns1.google.com, querying each in turn]
Administrivia 14 Midterm on Monday Closed notes No electronic aids (you won't need a calculator) Exam is 80 minutes: 8:30-9:50 Arrive on time to get full time! Questions?
DNS Resource Records 15 DNS queries have two fields: name and type Resource record is the response to a query Four fields: (name, value, type, TTL) There may be multiple records returned for one query What do the name and value mean? Depends on the type of query and response
DNS Types 16 Type = A / AAAA Name = domain name Value = IP address A is IPv4, AAAA is IPv6 Type = NS Name = partial domain Value = name of DNS server for this domain "Go send your query to this other server" Query Name: www.ccs.neu.edu Type: A Resp. Name: www.ccs.neu.edu Value: 192.0.2.10 Query Name: ccs.neu.edu Type: NS Resp. Name: ccs.neu.edu Value: ns.ccs.neu.edu
DNS Types, Continued 17 Type = CNAME Name = hostname Value = canonical hostname Useful for aliasing CDNs use this Type = MX Name = domain in email address Value = canonical name of mail server Query Name: foo.mysite.com Type: CNAME Resp. Name: foo.mysite.com Value: bar.mysite.com Query Name: ccs.neu.edu Type: MX Resp. Name: ccs.neu.edu Value: amber.ccs.neu.edu
Reverse Lookups 18 What about the IP name mapping? Separate server hierarchy stores reverse mappings Rooted at in-addr.arpa and ip6.arpa Additional DNS record type: PTR Name = IP address Value = domain name Not guaranteed to exist for all IPs Query Name: 1.2.0.192.in-addr.arpa Type: PTR Resp. Name: 1.2.0.192.in-addr.arpa Value: ccs.neu.edu
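To make these record types concrete outside the slides, here is a small lookup sketch using the third-party dnspython package (assuming dnspython 2.x; the 192.0.2.1 address is a documentation placeholder):
import dns.resolver
import dns.reversename

# A record: name -> IPv4 address
for rr in dns.resolver.resolve("www.ccs.neu.edu", "A"):
    print("A", rr.address)

# MX record: mail server responsible for a domain
for rr in dns.resolver.resolve("ccs.neu.edu", "MX"):
    print("MX", rr.preference, rr.exchange)

# PTR record: IP -> name via the in-addr.arpa tree
# (raises NXDOMAIN for addresses with no reverse mapping)
rev_name = dns.reversename.from_address("192.0.2.1")
print("PTR", dns.resolver.resolve(rev_name, "PTR")[0].target)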
DNS as Indirection Service 19 DNS gives us very powerful capabilities Not only easier for humans to reference machines! Changing the IPs of machines becomes trivial e.g. you want to move your web server to a new host Just change the DNS record!
Aliasing and Load Balancing 20 One machine can have many aliases www.reddit.com www.foursquare.com www.huffingtonpost.com *.blogspot.com david.choffnes.com alan.mislo.ve One domain can map to multiple machines www.google.com
The Importance of DNS 22 Without DNS… How could you get to any websites? You are your mailserver When you sign up for websites, you use your email address What if someone hijacks the DNS for your mail server? DNS is the root of trust for the web When a user types www.bankofamerica.com, they expect to be taken to their bank’s websitewww.bankofamerica.com What if the DNS record is compromised?
Denial Of Service 23 Flood DNS servers with requests until they fail October 2002: massive DDoS against the root name servers What was the effect? … users didn’t even notice Root zone file is cached almost everywhere More targeted attacks can be effective Local DNS server cannot access DNS Authoritative server cannot access domain
DNS Hijacking 24 Infect their OS or browser with a virus/trojan e.g. Many trojans change entries in /etc/hosts *.bankofamerica.com → evilbank.com Man-in-the-middle Response Spoofing Eavesdrop on requests Race the server's response – Useful for censorship
Solution: DNSSEC 25 Cryptographically sign critical resource records Resolver can verify the cryptographic signature Two new resource types Type = DNSKEY Name = Zone domain name Value = Public key for the zone Type = RRSIG Name = (type, name) tuple, i.e. the query itself Value = Cryptographic signature of the query results Deployment On the roots since July 2010 Verisign enabled it on .com and .net in January 2011 Comcast is the first major ISP to support it (January 2012) Prevents hijacking and spoofing Creates a hierarchy of trust within each zone
DNSSEC Hierarchy of Trust 26 [diagram: the Root Zone (ICANN) signs the key for .com (Verisign), which signs the key for bankofamerica.com; a response from dns.bofa.com carries a verifiable key and signature, while a forged response from dns.evil.com fails validation]
Does DNSSEC Solve all our problems? 27 No. DNS still vulnerable to reflection attacks + injected responses
DNS Reflection 28 Very big incident in 2012 (http://blog.cloudflare.com/65gbps-ddos-no-problem/) 65 Gbps DDoS Would need to compromise 65,000 machines each with 1 Mbps uplink How was this attack possible? Use DNS reflection to amplify a Botnet attack. Key weak link: Open DNS resolvers will answer queries for anyone http://openresolverproject.org/
So how does this work? 29 Remember: DNS is UDP No handshaking between endpoints I can send a DNS query with a forged IP address and the response will go to that IP address Secret sauce: a small request that can elicit a large response E.g., query for zone files, or DNSSEC records (both large record types). Botnet hosts spoof DNS queries with victim’s IP address as source Resolver responds by sending massive volumes of data to the victim
DNS amplification illustrated 30 [diagram: hosts infected by the botnet send small DNS queries with Src: Victim, Dst: Open Resolver; the open resolvers send their large responses to the victim] Sometimes the DNS resolver network thinks it is under attack by the victim!!
Amplification not unique to DNS 31 NTP is the latest protocol to be used in this way: http://www.prolexic.com/news-events-pr-threat-advisory-ddos-ntp-amplification.html (Exploiting the NTP monlist command, which returns a list of the 600 most recent hosts to connect to the NTP server)
DNS Basics DNS Security DNS + Censorship (reading presentation) Outline 32
Much More to DNS 33 Caching: when, where, how much, etc. Other uses for DNS (i.e. DNS hacks) Content Delivery Networks (CDNs) Different types of DNS load balancing Dynamic DNS (e.g. for mobile hosts) DNS and botnets Politics and growth of the DNS system Governance New TLDs (.xxx,.biz), eliminating TLDs altogether Copyright, arbitration, squatting, typo-squatting
|
OPCFW_CODE
|
Melbourne at night
Something I have more time for...
Sorry about the lack of updates over the past month. I had 3 essays and an exam all due around the same time (one of which ended up blowing out to 40 pages!), which ate up a lot of my spare time. I finished up at LaTrobe as of Thursday, when I handed in an essay for philosophy, got my last philosophy essay back, and did my politics exam. During the weeks before I got a politics and a media essay in. As I said, when I get a chance I'll put some of those essays online.
Friday night I celebrated with a friend's 21st at Noise Bar in Brunswick (on Albert Street, just next to Brunswick station). Some local bands played, and while small and with a very limited bar, it seems to be a good venue for live bands. I got back from that at something like 3.30 am. Also managed to catch up with some people - a good night overall.
Aside from that, I got a few new tech-toys to play with.
First off, I upgraded the iMac from OS X 10.1.5 to OS X Jaguar (10.2). Safari, iCal, and iChat are nice additions. Also upgraded iLife (which means I have GarageBand), but I need a DVD drive before I can use it (then again, after a rebate it worked out to be free with OS X). Also got a D-Link DBT-120, making the Mac Bluetooth compatible.
The next new tech toy I bought was a Nokia N-Gage QD. I read reviews of the original N-Gage, and decided to get the new model; it's a very nice phone. Three things about it though: first, it doesn't come with an MP3 player built in; you have to load one yourself. Second, you really need an MMC card if you do want a lot of photos, and the like. And third, while it is possible to send bluetooth files to the QD, you can't send them directly to the MMC card; instead they appear like an SMS message and you have to copy them across from there. To get around this, I got a 6-in-1 card reader. That said, with the card reader and the D-Link, the QD works very well with the Mac; it's fairly easy to get the iCal / calendar and Address Book to iSync, and it's easy to move files to an MMC card with a card reader: when you put an MMC card in the reader, it appears in Finder like any hard disk or CD-ROM drive.
The only thing that has really annoyed me so far about it is Optus. With a new phone, I selected some polyphonic ringtones from the Optus Zoo website. I got the SMS from Optus okay, however each time I try downloading it I get an HTTP 403: No Server Access error. Aside from this, games and logos download fine. I called Optus' tech support about it, and they claimed it was a problem with their GPRS network.
Anyway, been talked into possibly going to the Falls Festival this year. I also have a whole bunch of people to catch up with, and that's what I've been up to the past few weeks.
|
OPCFW_CODE
|
LIFELONG LEARNING ADMINISTRATION CORPORATION
Monday - Friday, 40 hours per week
The Application Developer is responsible for design,
development, and maintenance of applications required by the Organization. In
addition, this position is responsible for creating the design and layout of a
website or web pages.
Note, this list is illustrative only and is not intended to be a comprehensive list of tasks performed by the Application Developer.
Collaborate with information systems manager, systems analyst, application developers, and end users to understand the gathering, definition, and documentation of system requirements.
Design application specifications based on consultations with information systems manager, systems analyst, application developers, and all others involved with project.
Convert specifications into high-quality, manageable scripts and web applications using coding best practices.
Create use-case test scenarios to identify errors and confirm application needs as well as troubleshoot and debug, as necessary.
Write and maintain documentation of changes to code, applications, specifications, user instructions, and operating procedures.
Revise applications for corrections, enhancements, or system environment changes based on code analysis and documented application revision specifications.
Provide technical assistance by responding to inquiries regarding access-control requests, errors, problems, or questions with applications.
Apply technical skills in the areas of Microsoft SharePoint, databases, web applications, including HTML development and CSS.
Apply Software Development Lifecycle best practices defined by the information systems department when designing, coding, testing, and implementing applications.
Design and develop customization of application systems, including dashboard-like graphical user interfaces, web-application parts, enterprise system content integration, InfoPath forms, and workflows using SharePoint or similar environments.
Develop web site content and graphics by designing images, icons, banners, audio enhancements, etc.
Create wireframes, storyboards, user flows, process flows and site maps.
Maintain web site appearance by developing and enforcing content and display standards; editing submissions.
Create dynamic web experiences.
Perform quality assurance inspections of applications or web application sites.
Perform other duties as assigned.
Minimum Physical Requirements:
Mental Demands:
Data analysis, high workflow management, high project coordination.
Finger Dexterity; using primarily just the fingers to make small movements such as typing, picking up small objects, or pinching fingers together.
Talking; especially where one must convey detailed or important instructions or ideas accurately, loudly, or quickly.
Average Hearing; able to hear average or normal conversations and receive ordinary information.
Average Visual Abilities; ordinary acuity necessary to prepare or inspect documents or operate machinery.
Physical Strength; sedentary work. Sitting most of the time. Exerts up to 10 lbs. of force occasionally (almost all office jobs).
Frequent multi-tasking, changing of task priorities, repetitious exacting work required.
Working in a noisy, distracting environment with frequent deadline pressures.
TRAVEL:
Ability to travel in performance of duties.
College Degree in Computer Science, Information Technology or related field preferred
Minimum Qualification, Job Experience, Knowledge, Skills and Abilities:
College degree in Computer Science, Information Technology or related field preferred, or equivalent years of application development / web design experience.
Experience with Office 365, Microsoft SharePoint, Google Sites, SQL server, MySQL, and Web application design and development.
Working knowledge of the Software Development Lifecycle (SDLC) phases including design, development, implementation, integration, testing, evaluation, and maintenance of applications.
Ability to handle large scale projects with competing priorities and deadlines.
Ability to generate, interpret, and communicate data from statistical reports.
Ability to solve problems creatively and effectively.
For Web Design related duties:
- Demonstrable graphic design skills with strong portfolio.
- Up-to-date with the latest Web trends, techniques and technologies.
- Excellent visual design skills with sensitivity to user-system interaction.
Verbal, written and technical presentation skills.
Certification Required:
Not Applicable.
Additional Information:
Hardware, Software, Development Libraries, and Experience:
Understanding of IDEs (e.g. Visual Studio, XCode, Android Studio, etc.)
|
OPCFW_CODE
|
As back end developers, we have a big job. We build apps that manage people's data, and the code we write can do many things with that data, good or bad.
In this article, we’ll look at some security best practices to take note of when we’re developing back end apps.
Suspicious Action Throttling or Blocking
We should have a way to detect suspicious activities and throttle or block them to prevent various kinds of attacks.
We can use content delivery networks like Cloudflare to detect suspicious traffic and block them at their source.
Also, we shouldn’t let people do things that’ll overload our systems by slowing down downloads and doing other things that won’t let them do too many operations at once.
The number of requests that can be made should also be throttled so that clients can’t make too many requests at a time and overwhelm our systems.
If the activities that are overloading our systems are persistent, then blocking the source that commits those actions is required.
For instance, we can’t let users make 100 requests a second to get customer information.
Also, other suspicious activities include deleting or editing lots of data at once.
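As a concrete illustration, a minimal in-memory fixed-window throttle might look like the sketch below (Python; the limit is arbitrary, and a production system would use a shared store such as Redis plus real anomaly detection):
import time
from collections import defaultdict

WINDOW = 1.0   # seconds
LIMIT = 10     # max requests per client per window

_hits = defaultdict(list)

def allow(client_id):
    # Keep only the timestamps that fall inside the current window
    now = time.time()
    recent = [t for t in _hits[client_id] if now - t < WINDOW]
    _hits[client_id] = recent
    if len(recent) >= LIMIT:
        return False   # throttle (or escalate to blocking if persistent)
    recent.append(now)
    return True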
Use Anonymized Data for Development and Analysis
We should only use as much data as we need for the purpose that we’re using them for.
For instance, if we’re developing our app with production-quality data, then we take data from production and then anonymize them so that we can’t see all of our customers’ private data.
Likewise, we should anonymize data that are used for statistical analysis. This is very important since we don’t want to show everyone’s private data when we’re analyzing them internally.
If we don’t need to see it, then we shouldn’t see it.
Temporary File Storage
We should know where we’re storing a temporary file. If we’re using publically accessible directories that we should make sure that they’re made read-only for users other than the user the app is running it as.
We can also use a protected directory to store temporary files for our app.
Security for Shared Server Environment
We should be aware of the security implications for hosting our apps on a shared server environment.
It’s possible that other people can have access to our app’s files if we host it on a shared environment.
We should check if our app’s logs, temporary files, etc. are accessible to the outside.
Of course, it’s not good that we expose those files to other virtual servers that share the same host machine for example.
Therefore, we should think about all the things that are stored on those virtual servers including:
- source code
- temporary files
- configuration files
- version control directories
- startup scripts
- log files
- crash dumps
- private keys
They all have valuable information, so we should make sure that they’re secured from the outside.
We should make sure that our file permissions are set up correctly, so that only authorized users can do things to the files that we want them to.
Monitoring is definitely important from the security perspective.
We have to check that nothing suspicious is showing on our app’s logs.
However, we should control how logs are displayed to different users.
We don’t want all users to have access to all log data, which may have private data in it.
Therefore, we should just have a subset of users that can access the logs in ways that they’re needed.
We may want to restrict log access by users or IP addresses. Those are usually good candidates to restrict access by.
Risks of APIs
APIs can do powerful things if we let them. Therefore, we should carefully restrict any powerful actions to users that have access to those APIs.
We have to have proper authentication and authorization mechanisms so that we can make sure that only the people we allow can access the resources they should access.
Otherwise, we give attackers a chance to do anything with our systems.
Usually, external APIs are secured with an API key or OAuth. They’re both good for securing our APIs.
For internal APIs, we may also secure them with session stores and a JSON web token to check if a user is allowed to do a given action.
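For instance, an internal endpoint check might look like this sketch, assuming the PyJWT library (the secret handling and the "role" claim are illustrative):
import jwt  # PyJWT

SECRET = "load-me-from-config"  # illustrative; never hard-code in real apps

def authorize(token, required_role):
    # Accept only a valid, unexpired token that carries the required role
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    return claims.get("role") == required_role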
We should make sure that we don’t expose too much data to the public.
APIs should be acknowledged for the power that they can bring. We should secure them.
Also, we should detect any suspicious activity and throttle or block them.
|
OPCFW_CODE
|