const typing = require('./typing')
const messages = require('../controllers/messages')
const util = require('../util')

// Telegram doesn't allow messages longer than 4096 characters, make it a little shorter just in case
const MAX_CHARS = 4000
const MAX_TYPING_DELAY = 5000
const MAX_TYPING_LENGTH = 150

async function reply(ctx, text, { asReply = false, mention = false, markdown = false, withTyping = false, keys = undefined } = {}) {
  const { user, bot, msg } = ctx

  // Silently ignore self-mentions
  if (mention && user.id !== bot.id) {
    text = `${util.minDisplay(user)}: ${text}`
  }

  ctx.info('I said:', text)

  if (withTyping) {
    // To make it look better, show a typing indicator and a short delay before sending
    const start = Date.now()
    await typing(ctx)

    // Delay is larger the more text it includes
    // typing() is an actual request that takes time, aim to adhere to the desired delay
    const elapsed = Date.now() - start
    const len = Math.min(MAX_TYPING_LENGTH, text.length)
    const delay = Math.floor((len / MAX_TYPING_LENGTH) * MAX_TYPING_DELAY) - elapsed
    if (delay > 0) {
      await util.delay(delay)
    }
  }

  for (const chunk of split(text)) {
    const replyMsg = await ctx.reply(chunk, {
      disable_notification: true,
      disable_web_page_preview: true,
      reply_to_message_id: asReply ? msg.message_id : '',
      parse_mode: markdown ? 'Markdown' : undefined,
      reply_markup: keys === undefined
        ? keys
        // Pass null to remove any previous keyboard
        : keys
          ? { keyboard: keys, one_time_keyboard: true }
          : { remove_keyboard: true },
    })
    await util.catch(messages.insert(replyMsg))
  }
}

// Just a shortcut
reply.user = (ctx, text, opts = {}) => (
  reply(ctx, text, { ...opts, asReply: true, mention: true })
)

function split(text) {
  const chunks = []
  let buffer = ''

  for (const chunk of text.split('\n')) {
    if (buffer.length + chunk.length < MAX_CHARS) {
      if (buffer) {
        buffer += '\n'
      }
      buffer += chunk
    } else {
      chunks.push(buffer)
      buffer = chunk
      while (buffer.length >= MAX_CHARS) {
        chunks.push(buffer.slice(0, MAX_CHARS))
        buffer = buffer.slice(MAX_CHARS)
      }
    }
  }

  chunks.push(buffer)
  return chunks.filter(chunk => !!chunk)
}

module.exports = reply
STACK_EDU
Building fitted desk for home office

I have a spare bedroom which I'm converting to a home office. It's been stripped out and I'm about to start painting it. I'm looking ahead slightly to the next (I think!) step, which is fitting a desk system. I'm a software developer by trade, and I have a lot of PCs. I also need space for electronics and other physical building, and space for drawing - in short, I need a lot of desk space. On top of this, the room is pretty small, 2.5m x 3.5m ish. I have a plan for this which involves a single desk which runs around 3 walls of the room, with a T-shaped part in the middle of the longest side. Potentially subject to some change - I could live without the second seat and turn it into a simpler C shape.

I am already under the assumption that a modular desk system would be expensive, and difficult to fit precisely into the space, so my current plan is to use kitchen worktop. I was intending to use kitchen units to support the ends of the runs / the corners. The shelf space will likely be handy anyway. I was also thinking of using wall brackets to prevent bowing in between. I know I'm hardly the first person to do this, but I've never tried using worktop as a desk before, and I have several questions:

1) Laminate or real wood? I much prefer the appearance of real wood, but is it hard wearing enough to use for this?

2) Height. My kitchen top is noticeably higher than my current desk. Is there anything I can do to cut that down if I use kitchen units to hold the corner weight? Can I remove the feet instead of using kickboards?

3) Carpet. Do I carpet the room then put this on top, or fit all this and carpet around the units?

I have many more other questions relating to the task itself, i.e. one of the walls has a radiator on it, one of the walls is just a party wall, etc., but I'll save all those for future questions.

Not directly related to your questions, but in my opinion this looks like a very tight fit... you have planned for a lot of furniture in a very small room. I think it will be a tight squeeze to even sit down in one of those chairs, and you will not be able to move at all once you're seated. Only you know how much desk space you really need, but if this were my room I would keep the desk to a single L shape and then add a bookcase for some light storage.

You're quite right, it is an extremely tight fit - believe it or not that's not even the box room in this house! I think I might lose the T part as it's not really adding much usable space, and I'll keep the second chair out of the room most of the time. That should, hopefully, allow me to actually breathe in there..

To answer your questions in reverse order:

3) Install your floor coverings first. It will go faster and easier if you have no obstacles to work around. You can always cover the new floor with tarps if you need to paint, drywall, etc. in the future.

2) If you need to lower the height of the cabinets, as you noted, the toe-kick can be sawed off so as to decrease the cabinets' height by (+/-) 3 inches. If you need the height lowered further you will have to modify the cabinet carcass (think auto body chopping).

1) Laminate or wood? Completely discretionary. Whatever appeals to you. Laminate will take more abuse and is more scratch resistant. Wood, on the other hand, is pure and elemental. Sealed with the right product, wood will endure years before it needs to be re-sealed.
Thanks :) Just to double check, with the carpet - you don't think there's any chance of having problems getting the units properly level if they're sat on top?

It shouldn't be a problem, but if you think the carpet will add to the cabinet height it can be cut out from under the cabinet base. I assumed the carpeting was being installed over padding and kicked onto tackless strips, which would make installing it afterwards much more labor intensive. If it is a glue down it would be a bit easier.

Yeah, underlay and carpet stretched onto tackless strips. As long as it won't make leveling the worktop hard I think I'll do that - the difference between carpeting either a basically square room or some sort of fractal nightmare is just too much to ignore!

This is spot on. I did a similar (but smaller) project in an existing room. We wanted 28 inch counter height, so cutting down cheap cabinets worked well. If you go with the peninsula you show, you probably want some legs at the end. For cheap, water pipes with flanges at the end(s) are hard to beat. Your local supply store will cut the pipes to length and thread them - they are cheap compared to everything else you are doing.

I agree, 1/2" or 3/4" steel pipe threaded into floor flanges on the underside of a countertop makes sturdy support legs. I would also suggest threading an appropriately sized fitting (pipe cap) onto the opposite end of the pipe leg.

Note that nearly all kitchen systems typically also sell desk-height systems, as many kitchens are now built with desk/work areas. So if you go the kitchen route, you may want to look into that option. As for the work surface, sure, wood or laminate are both fine. And both are used widely for this purpose. It really comes down to personal preference as to what you want.

I did not know that! Not having to modify them could be very handy indeed, as there will probably be half a dozen units. I'll look into desk height units, thanks :)
STACK_EXCHANGE
Blinky in Studio Technix

The Blinky program is typically the first program that is written for an embedded system. It is the equivalent of Hello World in the embedded world. It only performs one task: blinking an LED on and off. This article shows how you can create your first Blinky program in Studio Technix. First we'll make an LED blink with simulation only. But the end goal is to test embedded C code with simulation, so in the second part of the article we'll replace one of the simulation components with custom C code. This is an overview article; for a step-by-step tutorial on how to use Studio Technix, go to the tutorials page.

Blinky with simulation only

Open a new project in Studio Technix.

Adding an LED

First we'll add an LED. From the Dashboard Library, drag the Light component onto the diagram. By default, the LED is red. To change the color, click on the component, and in the Properties panel, change the color to a bright green.

Adding an on/off pulse

Next, drag the Pulse component from the Signal Library onto the diagram, and place it to the left of the Light. By default, the Pulse component generates an analog pulsating signal. To change it to a digital signal, click on the component, and in the Properties panel, set DataType to Bool. Finally, connect the output port of the Pulse component with the input port of the LED component. Now the simulation model is ready.

Start the simulation by pressing the Play button. The LED blinks on and off with a period of 1 second. Try playing with the properties of the Pulse component. By changing the period or duration it is possible to make the LED blink faster or slower, or change the duration of the time the LED is on.

Blinky with code and simulation

In this section, we'll replace the Pulse component with an embedded application component. We'll write the typical Blinky program in C code. Then we'll connect it to the simulator so we can blink our virtual LED from code.

Blinky C program

In your favorite code editor or IDE, create a new file main.c and add the following code:

#define LED_PORT 0
#define LED_PIN 1

tx_gpio_pin_set(LED_PORT, LED_PIN, true);
tx_gpio_pin_set(LED_PORT, LED_PIN, false);

Next, add the Studio Technix API header file technix.h and source file technix.c to the project. These files can be downloaded here. Finally, compile the project as a DLL called blinky.dll (all necessary DLL exports are defined in the API files).

Connecting the Blinky program with the simulator

In Studio Technix, remove the Pulse component, and drag a new Application component from the Target Library onto the diagram. Click on it, and in the Properties panel, remove the gpio1 IO definition. Rename the remaining IO definition to Light. Connect the Light output port of the application component to the input of the LED component. Finally, set the Application DLL property to the path of the Blinky C program DLL you just compiled (blinky.dll).

Press Play to start the simulation. The LED starts blinking like before. But this time, the blinking is controlled by the C program. Change the value of the delay in the C code, recompile, and observe the effect on the simulation. Simulating an embedded program with virtual inputs and outputs like this is called software-in-the-loop simulation, and it speeds up development considerably because experimenting is much easier and faster. Download the project files to try out the blinky demo.
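For reference, a complete main.c for this demo would presumably wrap those two calls in a loop with a delay (the article later asks you to change "the value of the delay"). The sketch below is only a guess at that structure: the blinky_task entry point and the tx_delay_ms() helper are assumptions for illustration, since the article itself only shows tx_gpio_pin_set() and technix.h; check the Studio Technix API for the actual entry point and delay call.

/* Hypothetical sketch: only tx_gpio_pin_set() and technix.h come from the article.
   The entry point name and tx_delay_ms() are assumed for illustration. */
#include <stdbool.h>
#include "technix.h"

#define LED_PORT 0
#define LED_PIN  1

void blinky_task(void)
{
    while (true) {
        tx_gpio_pin_set(LED_PORT, LED_PIN, true);   /* LED on */
        tx_delay_ms(500);                           /* assumed delay helper */
        tx_gpio_pin_set(LED_PORT, LED_PIN, false);  /* LED off */
        tx_delay_ms(500);
    }
}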
OPCFW_CODE
War Sovereign Soaring The Heavens – Chapter 3400 – The Law of Life

Li Rou said airily, "Fine, bring me straight back to the Smudge Crow Sect then…"

"I'm done," Li Rou finally said.

Fang Ji's expression changed again. His heart sank; he did not expect Duan Ling Tian to know about this matter. Based on the unconcealed killing intent flashing in Duan Ling Tian's eyes, he knew he was doomed. Even so, he could not help but ask, "My lord, is the remnant of the World of Gods real?"

With just a thought, sword rays appeared out of thin air and slashed at the five men. The sword rays drew blood, but the injuries were not fatal.

Li Rou shook her head and said, "It's unnecessary to talk about it…"

Duan Ling Tian looked at Fang Ji meaningfully as the corners of his mouth lifted in a disdainful smile before he replied, "I'm Li Rou's son…"

The five men could not endure the unbearable pain and pleaded one after another for a quick death. They were completely restrained by the law of space's Restraining Profundity; they could not even mobilize their Celestial Origin Energies to commit suicide.

Duan Ling Tian ignored them.

Whoosh! Whoosh! Whoosh!

Duan Ling Tian had cast a Noise Canceling Formation so the five men's wretched cries would not interrupt his parents' reunion. After he was done saying what he wanted to say, he intended to end their misery. However, just as he was about to make a move, a wave of energy swept toward him, stopping him from killing the five men.

"My lord, I beg you!"

"I'll tell you about it slowly."

At the same time, the expressions of Fang Ji, Fang Chun, and Fang Ji's disciples changed drastically when they looked at the crimson-clad young man who was glaring at them. It was clear to them that the three newcomers were stronger than them.

When the five men regained their senses, they found Li Rou was not alone. Three people had appeared beside her: a handsome green-clad man, a handsome crimson-clad young man, and an old man dressed in a long red robe.

"M-my lords… W-who are you?" Fang Ji asked with a pale face.

"My lord, please spare us!"

Fang Ji pleaded, "M-my lord… Please have mercy! I was only following the orders of the Sect Leader of the Smudge Crow Sect. I have already left the sect. Please look into this matter if you don't believe me. Moreover, I wanted to set your mother free…"

At this moment, a streak of light suddenly swept toward Fang Ji and the others at the speed of light. They instinctively retreated in panic; they could sense a formidable energy from the streak of light. At this moment, none of them cared about Li Rou and seemed to have forgotten about her.

"Young man, all lives are valued by the heavens. Please show mercy…"

"My lord, please give us a quick death!"
OPCFW_CODE
MS Teams just has not taken off where I work. Everyone prefers Slack, myself included. Slack is just cleaner in my opinion. I've heard and casually looked at Slack, but after these two comments I thought I'd take a closer look. I really like the interface, too. Of course, being an IT department head, the first thing I did after looking at the glamour shots was go to the Pricing tab on the site then to the comparison grid to see what features each had. To duplicate what Teams does, it looks like the $12.50/user/month Plus plan is needed but even that (15 users on voice/video/screen share) doesn't match the 300 of even Teams Free. Granted, many may not need all that, but the entire O365 Business Premium suite with the installed Office apps, Teams, SharePoint, Exchange email hosting, and a handful other apps costs $12.50/user/month. At any of the price levels of Slack, I still have to buy some kind of Office productivity suite. The best combo may be O365 Business Essentials at $5/user/month with the $6.67 slack, but O365 BE already has Teams and SharePoint. Even G-Suite costs $6-$12/user/month. I guess you REALLY must like the UI better to warrant the extra cost, or have a small enough team you can get away with free/freemium editions. Actually I implemented the free version of Slack and got away with it just fine. We had other options available to use to for Video Calling and Conferencing. Slack replaced our team using Pidgin as a internal communication device. We got a free month way back when they were doing that and honestly we couldn't tell the difference. The only thing I would've paid for slack for was the unlimited message retention. Which I'm hoping that now that they are a Public Company they'll move that over and let the free people enjoy it. I love squirrels! (Not the vulnerability type, that's serious stuff.) Fun fact about the organic variety of squirrel: did you know squirrels bark and "squee"? I didn't realize this until we moved to a place dense with trees, it's basically Squirrelville here. Those noises we were hearing not birds... they were squirrels... (skip to about 1 minute for a good example) Squirrels are rather ambitious characters and smartly enough rarely run in front of my car when driving. Just don't let Benedict Cumberbatch read that article headline. Ever heard him try to say "squirrel"? LOL I tried to watch a nature series called South Pacific, but had to stop. I had never seen anything with this Cumberbatch fellow in it, but I gather he is fairly famous. He is horrible. Like, make you want to kick the animals bad. I only know he is the narrator since I looked it up, such that I will never again be exposed to that. He was almost worse than Meryl Streep, although that is likely physically impossible. Doctor Strange in the Marvel movies. Sherlock in the BBC series. Khan in the Star Trek reboot/alt-universe movies... Hopefully that puts a face to the voice for ya! oh man, albert & three brain, those were the days. monkey salad and jimmy pee were always my favorites. Meh, it's like any other Microsoft product. Enough people use it (which Teams is taking over for Skype so unless they go with something like Slack they will) and someone will find a hole in it. No software is bulletproof so eventually they are all exploited (if enough people are using it to make it worth the trouble of finding the exploits). Microsoft has put a lot of eggs in the Teams basket so I expect they'll have a patch out for this pretty quickly. 
This and in the news yesterday I read that Teams is now officially more popular than Slack... We are a Teams shop, going to have to figure out how to plug the hole. Seeing as it has to do with some form of Squirrel I figure a bunch of dogs hanging around chasing the squirrels is maybe a reasonable idea, LOL.
OPCFW_CODE
What is @OneToOne mapping in Spring JPA?

I am new to Spring Data JPA and trying to understand @OneToOne mapping. Let's say I have an Employee entity and a Company entity. If I want to map these 2 entities, then I can use one-to-one mapping on the Employee entity, which means one employee can belong to one company only. Is this understanding wrong? If one employee belongs to one company (let's say XYZ), then the company (XYZ) cannot be mapped to a different employee? I have read a few posts but not completely understood.

One-to-one is bi-directional. Company/Employee is one-to-many (everybody works for a single company) or many-to-many (some people have second jobs). May help: https://en.wikipedia.org/wiki/One-to-one_(data_model)

Say an employee can be mapped to a Company: that is one-to-one mapping, keeping Employee as owner of the relationship. Whereas if you view Company as owner of the relationship, then it is one-to-many.

Case 1: Employee as Owner

@Entity
public class Employee {
    @ManyToOne
    private Company company;
    ......
}

@Entity
public class Company {
    // mappedBy is used to say that Employee is the owner and
    // it should match the variable name company
    @OneToMany(mappedBy = "company")
    private List<Employee> employee;
    ......
}

Let's say employee ABC is mapped to a company XYZ; then employee DEF can never be mapped to company XYZ with this @OneToOne mapping?

Employee DEF, PQR, STG - any number of Employees can also be mapped to Company XYZ. It should be @ManyToOne in the Employee class. I have corrected it. Thanks for pointing it out.

I got your point, but what I am trying to figure out is: in the case of OneToOne, one employee XYZ is mapped to one company ABC, and then employee DEF can never be mapped to company ABC as company ABC has already been mapped to employee XYZ, am I correct?

Yes. That's correct only if you put @OneToOne in the Company class for Employee.

@OneToOne represents that there is only one object of an entity related to the other entity. If we have Employee and Passport entities, then only one Passport is related to one Employee and, for sure, only one Employee object is related to one Passport object.

@Entity
public class Employee {
    @OneToOne
    private Passport passport;
}

So from an Employee I can get his Passport.

@Entity
public class Passport {
    @OneToOne
    private Employee employee;
}

And from a Passport I can get the Employee.

Mapping is nothing but defining the relationship between two entities/objects, and it is just like stating 5>4, 10=10, 6<8. The numbers here (5, 4, 10, 6 and 8) are the entities and the symbols (>, = and <) are the relationships/mappings between them. We do the same with mappings in Hibernate. Note down the two entities and put the relationship (mapping) between them in the way that makes the most sense.

Father OneToMany Child (Father can be One, To Many child/children)
Child ManyToOne Father (Child/Children can be Many, To One Father)
Employee ManyToOne Company (Employee can be Many, To One Company)
Company OneToMany Employees (Company can be One, To Many Employee(s))
Address OneToOne Employee (Address can be One, To One Employee)
Employee OneToOne Address (Employee can be One, To One Address)

The relationship should make sense. Which means it should make sense when you look at the relationship from each of the two sides of the relationship (One Child, Many Fathers doesn't make sense, but One Father, many children does).

Actually, in this case, you must use a one-to-many relationship. You can simply use the @ManyToOne annotation in the Company entity that relates to the Employee entity.
Specifies a single-valued association to another entity class that has many-to-one multiplicity. It is not normally necessary to specify the target entity explicitly since it can usually be inferred from the type of the object being referenced. If the relationship is bidirectional, the non-owning OneToMany entity side must use the mappedBy element to specify the relationship field or property of the entity that is the owner of the relationship. Visit https://en.wikibooks.org/wiki/Java_Persistence/ManyToOne for more information and samples.

Yes, you are correct. If you want the Company to have multiple Employees, then you want a ManyToOne relationship between Employee and Company, and a OneToMany relationship between the Company and Employee.
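To make the owning side of the Employee/Passport example above explicit, a minimal sketch of the bidirectional mapping could look like the following. The imports, id fields, and the passport_id join column name are illustrative assumptions, not something stated in the thread:

// In Employee.java (owning side: the employee table holds the foreign key)
import javax.persistence.*;

@Entity
public class Employee {
    @Id @GeneratedValue
    private Long id;

    @OneToOne
    @JoinColumn(name = "passport_id")  // assumed column name
    private Passport passport;
}

// In Passport.java (inverse side: mappedBy names the "passport" field on Employee;
// this file needs the same javax.persistence imports)
@Entity
public class Passport {
    @Id @GeneratedValue
    private Long id;

    @OneToOne(mappedBy = "passport")
    private Employee employee;
}

With this layout, JPA providers typically also generate a unique constraint on the passport_id column, which is what enforces the "one employee per passport" behavior the question asks about.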
STACK_EXCHANGE
What is Actually Happening with What Is ArrayList in Java

Parameterized types form a type hierarchy, just like normal types do. It employs the binary search algorithm to get the elements. More elaborate conversions can be dealt with by combining ArrayList. Duplication is subsequently identified mostly by hashCode(). There's an overhead that accompanies persistent data structures, however. Thus the bucket needs to be a collection of lists. Then you would like to create a bucket. If you prefer to reclaim it, you must use ArrayList. ArrayList is a general list implementation acceptable for most use cases. ArrayList is essentially a dynamic collection. The ArrayList is among the essential classes in the System.Collection namespace. ArrayList can dynamically increase its size when new components are added. An ordinary ArrayList does not have any setSize or setLength method. Next we will discuss the BitArray in the same collection class.

Since System.Object is the base class of the rest of the types, an item in various Objects may have a reference to any other sort of object. Thus an array keeps related data under a predetermined variable name with an index that is also known as a subscript. "Casting" means to change the type of an object in order to conform to a different use. Thus if you're just going through the characters you aren't going to create any extra strings. It is composed of a succession of nodes linked together one right after the other. If needed, initialize each list. Don't use StringBuilder to merge two or three strings together. To sort an ArrayList, use Collections.sort(). Thus simply make a new List with that object. Java collections overview: a summary of all typical JDK collections. Naturally, at times it will be stupid to manage operations character-by-character. A stack, on the other hand, returns the last object added. This example demonstrates how to take advantage of ArrayList. It shows the usage of the java.util.ArrayList.toArray() method. So only use StringBuilder if you're likely to merge strings OFTEN.

It explores these types and their usage in depth as a way to present a deeper comprehension of how generics work. Without generics, the usage of collections requires the programmer to bear in mind the correct element type for each collection. The use of the method ToArray is quite easy and straightforward. Please be aware that the post by Attain Technologies is a direct copy of this article. The interface is a good example. It declares one type variable to represent the type of the keys and one variable to represent the type of the values. In cases like this, it is actually two functions. As you may guess, this is a pricey operation when there are numerous elements. It is a rather costly operation, and therefore you don't wish to do it just to reclaim a small number of bytes. Though the Java collection classes are modified to benefit from generics, you're not required to specify type parameters to use them. In Java 6 it's defined in class. We are going to go over the frequently used classes within this guide. Now it's the opportunity to try out the code on your own.
Don't forget to leave comments on my blog. Thus, to access an element in the center, it must search from the start of the list. The debut of generics does not alter this. By now you should have understood the value of these interfaces. There isn't any difference in regard to performance between Array and ArrayList in Java classes. But should you want to specify the means of comparison, it is wise to use the second bit of code, where you can define your own comparator. Both are a kind of List that return just one object at a certain position. This method removes a single element at a given position. The RemoveAt method is used to delete the element at the designated location in the ArrayList. In addition, the array is used to store objects of type String. If you can use a collection of fixed size, DO so. Once an object is made, it can't be changed. Also, different kinds of objects can be saved in the same ArrayList. If there are several objects with an identical name when you attempt to remove one, it will remove the first occurrence in the ArrayList. An object comprises numerous object references plus many methods for managing that collection. The RemoveRange method is used to remove a range of objects from the ArrayList. On the contrary, it will reuse the existing empty instance.
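The article refers to an example of java.util.ArrayList.toArray() and Collections.sort() but the code itself did not survive extraction; the following is a minimal, hypothetical stand-in (the class name, list contents, and variable names are invented for illustration):

import java.util.ArrayList;
import java.util.Collections;

public class ArrayListDemo {
    public static void main(String[] args) {
        ArrayList<String> names = new ArrayList<>();
        names.add("Charlie");
        names.add("Alice");
        names.add("Bob");

        // Sort the list in natural (alphabetical) order
        Collections.sort(names);

        // Copy the list contents into a plain array
        String[] asArray = names.toArray(new String[0]);

        System.out.println(names);          // [Alice, Bob, Charlie]
        System.out.println(asArray.length); // 3
    }
}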
OPCFW_CODE
Error: Unable to open config file

I get the following error:

Error: Unable to open config file
command terminated with exit code 255

Whole log:

...
Do you wish to proceed with deployment?
[Y]es, [N]o? [Default: Y]:
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
GlusterFS pods ... found.
deploy-heketi pod ... found.
heketi pod ... not found.
gluster-s3 pod ... not found.
Creating initial resources ...
Error from server (AlreadyExists): error when creating "/home/kube/k8s_glusterfs/gluster-kubernetes/gluster-kubernetes/deploy/kube-templates/heketi-service-account.yaml": serviceaccounts "heketi-service-account" already exists
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "heketi-sa-view" already exists
clusterrolebinding.rbac.authorization.k8s.io "heketi-sa-view" not labeled
OK
Flag --show-all has been deprecated, will be removed in an upcoming release
Error: Unable to open config file
command terminated with exit code 255
Error loading the cluster topology.
Please check the failed node or device and rerun this script.

My command: ./gk-deploy -g topology.json

topology.json (in the current directory, alongside the gk-deploy file):

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "server2" ],
              "storage": [ "<IP_ADDRESS>" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        }
      ]
    }
  ]
}

I think the script doesn't see the file topology.json, but I don't know why...

The topology file is not the same as the config file. Try running ./gk-deploy -gv for verbose output and see what you get. What do you mean you copy to /etc/heketi/topology.json? Also, are you running ./gk-deploy -g --abort between installation attempts? You should be doing that after most failures to make sure you start clean. My guess would be that your config secret did not generate properly.

What do you mean you copy to /etc/heketi/topology.json?

I copy the file topology.json from the current directory (where the gk-deploy file is) to /etc/heketi/topology.json.

Also, are you running ./gk-deploy -g --abort between installation attempts?

No, I didn't. Now I'll try it:

call ./gk-deploy -g --abort

kube@server2:~/k8s_glusterfs/gluster-kubernetes/gluster-kubernetes/deploy$ ./gk-deploy -g --abort
Using Kubernetes CLI.
Using namespace "default".
Do you wish to abort the deployment?
[Y]es, [N]o? [Default: N]: Y
pod "deploy-heketi-7df6578-7dq4d" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
serviceaccount "heketi-service-account" deleted
clusterrolebinding.rbac.authorization.k8s.io "heketi-sa-view" deleted
Error from server (NotFound): services "heketi-storage-endpoints" not found
No resources found
node "server2" labeled
daemonset.extensions "glusterfs" deleted

call ./gk-deploy -gv

kube@server2:~/k8s_glusterfs/gluster-kubernetes/gluster-kubernetes/deploy$ ./gk-deploy -gv
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.
...
[Y]es, [N]o? [Default: Y]:
Using Kubernetes CLI.
Checking status of namespace matching 'default':
default   Active   14d
Using namespace "default".
Checking for pre-existing resources...
GlusterFS pods ...
Checking status of pods matching '--selector=glusterfs=pod':
Timed out waiting for pods matching '--selector=glusterfs=pod'.
not found.
deploy-heketi pod ...
Checking status of pods matching '--selector=deploy-heketi=pod':
Timed out waiting for pods matching '--selector=deploy-heketi=pod'.
not found.
heketi pod ...
Checking status of pods matching '--selector=heketi=pod':
Timed out waiting for pods matching '--selector=heketi=pod'.
not found.
gluster-s3 pod ...
Checking status of pods matching '--selector=glusterfs=s3-pod':
Timed out waiting for pods matching '--selector=glusterfs=s3-pod'.
not found.
Creating initial resources ...
/usr/bin/kubectl -n default create -f /home/kube/k8s_glusterfs/gluster-kubernetes/gluster-kubernetes/deploy/kube-templates/heketi-service-account.yaml 2>&1
serviceaccount "heketi-service-account" created
/usr/bin/kubectl -n default create clusterrolebinding heketi-sa-view --clusterrole=edit --serviceaccount=default:heketi-service-account 2>&1
clusterrolebinding.rbac.authorization.k8s.io "heketi-sa-view" created
/usr/bin/kubectl -n default label --overwrite clusterrolebinding heketi-sa-view glusterfs=heketi-sa-view heketi=sa-view
clusterrolebinding.rbac.authorization.k8s.io "heketi-sa-view" labeled
OK
Marking 'server2' as a GlusterFS node.
/usr/bin/kubectl -n default label nodes server2 storagenode=glusterfs --overwrite 2>&1
node "server2" labeled
Deploying GlusterFS pods.
sed -e 's/storagenode\: glusterfs/storagenode\: 'glusterfs'/g' /home/kube/k8s_glusterfs/gluster-kubernetes/gluster-kubernetes/deploy/kube-templates/glusterfs-daemonset.yaml | /usr/bin/kubectl -n default create -f - 2>&1
daemonset.extensions "glusterfs" created
Waiting for GlusterFS pods to start ...
Checking status of pods matching '--selector=glusterfs=pod':
glusterfs-9b4cm   1/1   Running   0   49s
OK
/usr/bin/kubectl -n default create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=topology.json
Error from server (AlreadyExists): secrets "heketi-config-secret" already exists
/usr/bin/kubectl -n default label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
secret "heketi-config-secret" labeled
sed -e 's/\${HEKETI_EXECUTOR}/kubernetes/' -e 's#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#' -e 's/\${HEKETI_ADMIN_KEY}//' -e 's/\${HEKETI_USER_KEY}//' /home/kube/k8s_glusterfs/gluster-kubernetes/gluster-kubernetes/deploy/kube-templates/deploy-heketi-deployment.yaml | /usr/bin/kubectl -n default create -f - 2>&1
service "deploy-heketi" created
deployment.extensions "deploy-heketi" created
Waiting for deploy-heketi pod to start ...
Checking status of pods matching '--selector=deploy-heketi=pod':
deploy-heketi-6c7d48f8b-lcdkx   1/1   Running   0   13s
OK
Determining heketi service URL ...
Flag --show-all has been deprecated, will be removed in an upcoming release
OK
/usr/bin/kubectl -n default exec -i deploy-heketi-6c7d48f8b-lcdkx -- heketi-cli -s http://localhost:8080 --user admin --secret '' topology load --json=/etc/heketi/topology.json 2>&1
Error: Unable to open config file
command terminated with exit code 255
Error loading the cluster topology.
Please check the failed node or device and rerun this script.

You don't need to move the topology file. /etc/heketi/topology.json is the location inside the container where it is expected to be mounted. Just leave it in the same directory as gk-deploy. Abort and try again.

@jarrpa Today I rebooted the host and replayed these steps:

./gk-deploy -g --abort
./gk-deploy -gv

and got to the next step :) I see:

...
/usr/bin/kubectl -n default exec -i deploy-heketi-6c7d48f8b-9qwbc -- heketi-cli -s http://localhost:8080 --user admin --secret '' topology load --json=/etc/heketi/topology.json 2>&1
Creating cluster ... ID: 131c7a88c5cac3255ca3a822878321c6
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node server2 ... ID: f6b2ae697200f54a51bf26de4c4a152c
Adding device /dev/sdb ...
OK
heketi topology loaded.
/usr/bin/kubectl -n default exec -i deploy-heketi-6c7d48f8b-9qwbc -- heketi-cli -s http://localhost:8080 --user admin --secret '' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json 2>&1
Error: Failed to allocate new volume: No space
command terminated with exit code 255
Failed on setup openshift heketi storage
This may indicate that the storage must be wiped and the GlusterFS nodes must be reset.

I found that client version 6 solves this problem (Error: Failed to allocate new volume: No space): https://github.com/heketi/heketi/issues/1046

@jarrpa, should there be a partition on the device (/dev/sdb) used?

Remove any data on the disk using, for example, wipefs -a. If you are not running with at least three storage nodes, also use the --single-node option with gk-deploy.

I removed partition /dev/sdb1 and added topology.json again:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "server2" ],
              "storage": [ "<IP_ADDRESS>" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb1" ]
        }
      ]
    }
  ]
}

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://<IP_ADDRESS>:8080"

Deployment complete!

@jarrpa thank you! I will try to use it.
GITHUB_ARCHIVE
Core UUPS

Resolves #774

All tests passing and deploying successfully.

@Keyrxng Is it ready for review?

@rndquu It is mate aye, I noticed I never called UUPS_init but that's the only thing I can think of. Will sort that out once I'm free later tonight, but other than that aye it's ready for review.

Pls mark the PR as ready for review when it is ready + make sure that CI is passing.

I will do my best to fix CI. Slither is failing because it cannot find a file, which makes me think it's running against the dev branch and not my PR changes; maybe I've missed something else, but I'm unsure why it wouldn't be able to locate a .sol in the tests dir. Compare test coverage also seems to be failing for the same reason. The build & test I can sort no problem.

The Slither workflow is running against a PR branch. The Slither workflow explicitly states the error: Source "test/helpers/UupstestHelper.sol" not found. The UupstestHelper.sol file had never existed in the project before this PR was created.

I realise that I forgot to implement _authorizeUpgrade() behind an onlyAdmin auth in a couple of places, so I've put those in now too. I'm not sure if there is a need for a constructor in the base ERC20 as it's only ever inherited, vs ubiquistick which is deployed using the ERC1155 base directly in tests.

Pls mark this PR as "ready for review" when you're done.

I'd like to take on writing the test suite for the UUPS side of things if that's due to be scoped for bounty after this PR, if it hasn't already been assigned. @rndquu what do you think, am I up for the task?

I don't understand how exactly "the test suite for the UUPS side of things" would look. Could you provide an example of what we need to test regarding the UUPS?

Nice to meet you first of all. All current tests have been refactored and I introduced one demonstrating upgradeability. I think I'm following what you're saying: current tests are refactored but it has lowered coverage, and ideally as part of the issue I'd introduce enough to pass CI at least, is what you mean, right? But in-depth, extensive testing wouldn't be required? As for examples:

- Everything that should be upgradeable is upgradeable only by an auth'd caller.
- The manager contract and other facets alongside the UUPS proxies are interacting as they should (I didn't dive into the manager very much, I am thinking out loud here).
- Everything is initialised correctly and can't be initialised again.
- Depending on how core interacts with faucets, do we have to take anything else into consideration?

Honestly, when asking the questions I thought you'd have a ream of things to test against; maybe it's my first-time contribution nerves with the Solidity stuff 😂 Is the current test suite sufficient for all of this, forgetting the fact coverage has dropped a bit?

Your branch seems to be outdated; the build is passing, but the PR needs more work. Refactor tests for core contracts accordingly - I'm only seeing one test for a contract, and part of issue #774 is to refactor for all the contracts.

The spec says: "Update deployment script accordingly. Refactor tests for core contracts accordingly." The spec had only mentioned refactoring existing tests, which is why I asked the question previously about the test suite for UUPS. If I'm also to write new tests for all of the UUPS contracts, then can it be detailed what I am to test exactly? Or are the three implemented in StakingShare.t.sol to be replicated across core, and that tests the basic UUPS functionality? If they should be more in depth, then I'm asking for help on what exactly I should be testing against in a standalone context, like the three I have written. I assumed that refactoring existing tests would cover working with the protocol context, but you guys may have more things in mind specifically - that would be a separate issue, though. This was a nervous PR for me compared to others; clearly there's a lot of context I have incorrectly assumed, so my apologies for being a pain in the ass with it 🤣 The next one will be smoother for sure.

Upgradeability is the core test here. Make upgradeability testing for all core contracts.

@Keyrxng pls merge https://github.com/Keyrxng/ubiquity-dollar/pull/5

Just hit merge? 😂
GITHUB_ARCHIVE
Robot in 2 DaysTags: journal Our First Ideas: Robot in 2 Days! Our Ideas with Build As a base robot, we started with an 18 inch cubic frame. This was the frame we'd most easily be able to modify and build upon as we progressed further into the year. For an initial brainstorming session, our Robot in Two Days would let us get some ideas into place which we could then bring up to our standard and innovate on. First, we noticed that our motor placement would make it harder to retrieve any of the cones. We knew we would need a mechanism to bring the cones up at varying heights to deposit on top of the poles, so we decided on using a gripper and linear slides. We moved all the motors back to create space in the front for our linear slide and gripper. We also had to clear up any obstructions, such as beams, control hubs, our expansion hub and battery to make some more space. We attached the control hub, expansion hub, and battery to the back of the robot. We fixed inconsistencies in the frame, such as misalignment in beams, to improve the stability and quality of construction. Working on wire management was cumbersome, but necessary, so our robot would be able to get as many points as possible without getting tangled up in wires. System for Intake Because the highest beams are 30 inches, we designed a linear slide which can reach a maximum height of 30 inches. This means it will be able to score on all the poles: low, medium, or high. The linear slide was attached to the middle of the robot, in the space which was just cleared out. We also made a pair of tweezers which would help grab game elements from the top and latch on inside the game elements. These were then attached to an angle control servo, which would make it easier to grab the cones and also increase the reach height slightly. Because we needed more accuracy, we used a flexible material to make a funnel to better intake our game elements. We had to fix and align the funnel a few times for greater efficiency. Next, we got to wiring up the motor for the actual lift. We realized that the linear slides stick out of the sizing cube by a 1/2 inch, but we purposefully ignored the problem because we'll have more time to get the technicalities correct by our first league meet. Finally, we covered all the sharp corners of our robot with gaff tape and added the LED panels we're using for team markers. First Implementation of Code Our team has a lot of new coders this year, so we spent most of this time getting used to the interface and re-examining and interpreting code from past years which we could then use as a template. However, we got all the motors and servos to work, as well as coding the Mecanums to navigate the game field.
OPCFW_CODE
Django Framework Help Services Django Web Framework Django is both a free and open-source web framework based on the Python Programming Language. It follows the model-template-view architectural pattern. It is made to make the process of creating applications go as smoothly as possible for developers. Django takes care of web development. As a result, instead of creating anything new, you may concentrate on writing your app. It's free and open source. Some of the features of Django include: - A small, independent web server for testing and development - A flexible and scalable database routing engine - An extensible template system - A pluggable authentication system - Support for internationalization Would you like more information on Django, or do you have any specific questions? Django Framework Help: Sure! We'd be happy to help with any questions about the Django web framework. What specifically do you need help with? You will soon find out that developers may encounter several common challenges when working with the Django web framework. Here are a few examples: - Handling large amounts of data: Django is well-suited for handling large amounts of data, but working with a large dataset can still be challenging. You may need to use techniques such as database indexing and caching to ensure that your application remains performant. - Debugging: As with any software development project, debugging is at the core of building it. Django provides helpful error messages and a powerful debugging tool, but tracking down bugs in a complex application can still be challenging. - Security: Django takes security seriously, but it is still essential to be aware of potential security risks and initiate the steps to mitigate them. That includes adequately sanitizing user input, protecting against cross-site scripting (XSS) and cross-site request forgery (CSRF), and using secure passwords. - Deployment: Deploying a Django application can be more complex than deploying a simple static website. You'll need to consider things like how to scale the application, how to handle traffic spikes, and how to manage dependencies. - Third-party packages: Django has a large and active community, and many third-party packages are used to add functionality to your application. However, choosing suitable packages and ensuring they are compatible with your application can be challenging. Do any of these challenges currently stand out to you as something you need help with? Don’t hesitate. We are here to help you and make your learning process less painful. Why is our Django Framework Help Superior? For those studying Python, programming help is available when you run into problems. We offer a programming help service that includes Python programming. Python programming help that we provide includes: - As experts in the Django Framework, we can provide quick and accurate assistance to users seeking help with Django Framework tasks and projects. - Additionally, we know a wide range of libraries and Django Framework concepts. It allows us to assist with various issues, topics, and technologies and offer informed and accurate guidance to users. - Our help service is designed to be user-friendly and easy to use. Users can ask questions and receive answers in real time without waiting for a response because of our level of expertise. It makes it convenient and efficient for users to get the assistance they need and allows them to move forward with their projects more quickly. 
- Overall, our help service aims to provide high-quality and reliable assistance to users seeking help with Django Web Framework and other programming tasks and projects. We hope our service can meet users' needs and help them achieve their programming goals.
OPCFW_CODE
Fortran is a programming language that is oriented and adapted to numerical applications and scientific computing. With Fortran, modern programming was born. Through it, concepts such as scientific computing and code compilation, among others, were put into practice.

The origin of this programming language dates back to 1954, and it is attributed to John Backus, an experienced American computer scientist who worked at IBM. His proposal focused on launching a programming language whose objective was to translate mathematical formulas into code that a computer could understand in a simple and accessible way. As a curiosity, this computer specialist had previously worked on a project called the SSEC (Selective Sequence Electronic Calculator), programming it to calculate the positions of the Moon.

At the time of its presentation, there was some reluctance, since everyone was accustomed to its predecessor, the assembly language that emerged in 1949. But the general perception soon changed, given the many advantages Fortran offered. It was considered a high-level programming language, which could translate entire programs without having to do it manually as with its predecessors. In addition, it was simpler to use and not as restrictive as the previously existing programming languages. One of the things that revolutionized the world of programming was the fact that it allowed code to be written faster and did not require such specialized professionals, which made it accessible to more people.

It is a language that has never stopped evolving. It has kept changing over the years, arriving at Fortran 2018, which includes new features and improvements over the original. Fortran has served as inspiration and basis for the creation of other programming languages such as Lisp (1958), COBOL (1959) and ALGOL (1958). Without a doubt, it is one of the languages still taken into account when working in these areas, and it has served as a foundation for other programming concepts derived from it.

Advantages and disadvantages of Fortran

Its advantages include the following:
- Easier to learn than its predecessors.
- It is still one of the most prominent languages for performing numerical calculations.
- It is considered a revolution and the beginning of modern programming.
- Its implementation and years of use have resulted in proven and efficient libraries that confirm its effectiveness as a programming language.

Its disadvantages must also be taken into account when it is used:
- It is a programming language in which there are no classes or structures.
- It does not allow dynamic memory allocation.
- For processing text, lists, and highly complex data structures, it is a somewhat primitive language.
OPCFW_CODE
Copy, cut and paste with context menu

Problem

First, thanks for the awesome work on reviving this! I tested on Ubuntu and it works! There is the Shift + Right Click for Browser Menu entry, but Shift + Right Click does not do anything for me. This means that some actions (copy, paste, cut, copy link address) are not accessible now.

Proposed Solution

Work with upstream JupyterLab to add full self-contained functionality of copy, paste, and cut (which are already available in Text Editor but not in Notebook) and maybe additional actions (copy link) too. Disable the Shift + Right Click for Browser Menu entry in JupyterLab App as it is misleading (this should be possible with the menu schema files).

I explored four approaches:

A) using the webContents API (e.g. webContents.copy) for both interacting with the clipboard and with the page, and communicating via remotes
B) using Electron.clipboard directly for interacting with the clipboard, Web APIs like Selection and the DOM API for interacting with the page, and communicating via remotes
C) using the Clipboard API on the client side only, no communication with the server
D) adding a fallback context menu with an Electron-native implementation of commonly needed actions

The code for A and B would roughly look as follows:

Code

In desktop-extension:

app.commands.addCommand(CommandIDs.copySelection, {
  label: 'Copy Text',
  execute: () => {
    const selection = window.getSelection();
    asyncRemoteRenderer.runRemoteMethod(
      IAppRemoteInterface.writeToClipboard,
      selection.toString()
    ).catch(console.warn);
  },
  isVisible: () => {
    const selection = window.getSelection();
    return selection.toString() !== '';
  },
});

and on the server side (in main/app):

asyncRemoteMain.registerRemoteMethod(
  IAppRemoteInterface.writeToClipboard,
  (data: string): Promise<void> => {
    // approach A:
    // clipboard.writeText(data);
    // approach B:
    this._window.webContents.copy();
    return Promise.resolve();
  }
);

Problems with approaches A and B:
- isVisible() gets called too frequently, leading to performance degradation (rendering it unusable);
- handling of the CodeEditor in Notebook cells is difficult because the custom context menu takes focus away from the editor, hence the text is no longer selected; this necessitates writing custom code as for the Copy command of the File Editor in core JupyterLab, but also aware of multiple cells (which would be a slight adjustment)

Problems with A:
- in order to handle the CodeEditors issue, it requires some clever focus management/closing the JupyterLab context menu first, ensuring focus is back on the selected element and only then executing the action. This is doable but will require a major API change upstream to implement it cleanly. This might be error-prone.

Problems with B:
- in order to handle the CodeEditors issue, it requires getting the selected text directly from the editor
- requires re-implementing a bit of logic to handle rich-MIME copying; this is already somewhat implemented in JupyterLab core and we would be duplicating it here if using approach B
- pasting is difficult (especially into a contentEditable/CodeMirror editor) because we need to manage DOM nodes/events manually; this is not an issue with approach A.
- "Copy link URL" and "Copy Image" might not be trivial to implement well (but still feasible!)
Problems with C:
- it needs to be implemented directly in JupyterLab; JupyterLab postponed implementation while waiting for wider browser support for the Clipboard API
- it might need to be implemented as an opt-in because some browsers do not allow reading from the clipboard in a web context
- actions such as "Copy link URL" and "Copy Image" will need to be implemented (note: we already have "Copy Output to Clipboard" implemented in https://github.com/jupyterlab/jupyterlab/pull/10282)

Problems with D:
- we end up with two menus again; this is detrimental to UX (though at least consistent with the web version)

I think that long term we should pursue approach C by implementing the relevant code in JupyterLab core (4.0 or 4.1?). In the meantime we can adopt approach D.
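For illustration, a client-only command along the lines of approach C might look roughly like the sketch below; the command id and label are invented, `app` is the same application object used in the snippet above, and how JupyterLab core would actually integrate this is exactly the open question:

// Hypothetical sketch of approach C: client side only, using the browser Clipboard API.
app.commands.addCommand('clipboard:copy-selection', {
  label: 'Copy Text',
  isEnabled: () => (window.getSelection()?.toString() ?? '') !== '',
  execute: async () => {
    const text = window.getSelection()?.toString() ?? '';
    if (text) {
      // Requires a secure context; some browsers gate this behind a permission prompt.
      await navigator.clipboard.writeText(text);
    }
  },
});

Reading from the clipboard for paste (navigator.clipboard.readText()) is the part browsers gate most aggressively, which is why the issue suggests treating approach C as an opt-in.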
GITHUB_ARCHIVE
Introduction from BITSS: In addition to being synonymous with research transparency and reproducibility, open science is also about building communities centered around collaboration and the exchange of knowledge. In this post, Catalysts Amanda Domingos and Rodrigo Lins share their experience establishing and leading Métodos em Pauta [Methods on the Agenda], a student-led initiative and podcast that looks to democratize discussions around open science in Brazilian political science. Enjoy the read! What is Métodos em Pauta, and what are your goals? Métodos em Pauta (MeP) was created in November 2018, on a typical warm afternoon in Recife, Brazil, during a conversation in the break between classes. We discussed the need to change teachers’ approach to presenting research methods and techniques in classes, which seemed a little distant from the students’ questions. For example, there seems to be a “vicious cycle,” where students are required to take classes about the same techniques both in undergrad and graduate-level classes. We wanted to break that cycle. That same day, we decided to create an event “from students to students,” which would bring discussions about techniques and topics in which students were interested, and do it in a fluid format and with plenty of space for interaction. On that day, we created the first MeP meeting, which took place in March 2019. Over two days, we discussed topics ranging from causal inference to qualitative data analysis techniques—all through the lenses of research transparency and open science. We had Nicole Janz, Dalson Figueiredo, and Norm Medeiros from Project TIER (via videoconference) in a roundtable discussion on open science. To our happiness (and quite a surprise!), the event was very well accepted by Recife’s social science academic community. We had over 90 participants and, even after the end of the meeting, we continued to receive positive feedback. And that was when the idea of turning it into an initiative came up. We decided to call it “Metodos em Pauta,” which translates roughly to “Methods on the Agenda” because its purpose was just that—to introduce new ideas at our department about research methods. Since then, we have joined forces with other collaborators, including Willber Nascimento, who recently received his PhD in Political Science, and Antônio Fernandes and Danillo Batista, both grad students in Political Science. During the initiative’s two years of existence, we have carried out various activities to democratize the discussion on quality in social research. We have hosted several round table discussions, talks, workshops, and even started a podcast (called PodMétodos, hosted by the two of us) to discuss research, methods, and transparency. Since 2019, we were able to spread MeP to other states in Brazil. E.g., we hosted a one-day meeting at Aracaju (500 km from Recife) to discuss open science with undergraduate students in social sciences and international relations. Since the start of the COVID-19 pandemic, we transitioned all of our activities online. In August 2020, we had a webinar called “Quarantine with Métodos em Pauta,” where we brought together scholars from all over Brazil to discuss social research topics. With this event, we realized that students all over Brazil are interested in discussions about research methodology. That inspired our new project idea entitled “Drops,” where researchers will present research techniques in short videos (~ 20 min). 
The goal is to give space for researchers to share their work with the academic community and catalyze the spread of good analytical techniques. We expect to kick off this project later this year. Our primary focus is undergraduate students. We believe that it is important for them to expose them to open science in their formative years. We hope that this will, in time, make Brazilian political science more open and transparent. However, this is not to say we do not talk to senior researchers. They play an important role in practicing reproducible research and setting an example for younger researchers. How did you and MeP become interested in open science? We were both lucky to be Dalson Figueiredo’s (also a BITSS Catalyst) students, who introduced us to vital discussions in open science. With this background, it couldn’t be different: when we created MeP, we had also to include open science topics. (Fun fact: on the occasion of our first event, we were lucky enough to meet with Prof. Gary King in São Paulo, who kindly recorded the opening video of our first meeting—welcoming the participants and talking about inference and replication). We have a strong commitment to the dissemination of open science topics in all our activities. In the first and second MeP meetings, we held at least one roundtable discussing transparency, replicability, and reproducibility. We also regularly discuss transparency tools in our workshops: over the years, we introduced the Open Science Framework (OSF) and the TIER Protocol, among other things, to undergraduate students in political science and international relations. They received the content very well! We have followed some of them closely and have seen that they continue to use the tools as they write their undergraduate theses. We have also been conducting short courses at different institutions—presenting the problems surrounding the “credibility crisis” and introducing tools to deal with such issues. E.g., in 2019, we presented at a few private universities in Recife; in 2020, we debated with social science students at the Federal University of Alagoas, and this month, we will hold a one-week course on transparency in empirical research at the Federal University of Latin American Integration. We had special episodes to discuss transparency in PodMétodos, and, early this year, we had a webinar where we received Fernando Hoces de la Guardia to discuss Open Policy Analysis. We are also trying to expand beyond Brazilian borders. Besides Fernando’s participation in Spanish, we also had a webinar in English with Carl Henrik Knutsen (V-Dem and the University of Oslo) on different measures of democracy and how to use the V-Dem dataset. What is your vision for MeP going forward? We would like to see MeP become a platform to connect researchers across Brazil (and, perhaps, from other countries in the region) interested in Open Science. We believe that science is a built-in community. We believe that through open science communities, we can overcome regional disparities and make the academic community more diverse. Finally, what have we learned from the MeP experience that could help other initiatives? First, we learned that young scholars are eager to learn about transparency. Second, we’ve noticed that it is important to have a space for open discussion outside of the universities. This is especially true for the Brazilian social sciences since the curriculum rarely includes this topic. 
Third, similar initiatives should not be worried about having institutional ties to get things going. If you have a small group of fellow researchers willing to collaborate, you should start your own initiative. This is how we can populate the academic world with “transparency-minded” researchers. About the authors: - Amanda Domingos is co-director of Métodos em Pauta and a Political Science PhD student at the Federal University of Pernambuco. Her research interests are public policy, subnational and distributive politics, and transparency. She is a BITSS Catalyst and a 2017 Research Transparency and Reproducibility Training (RT2) London participant. - Rodrigo Lins is co-director of Métodos em Pauta and an adjunct professor at the Federal University of Sergipe and Estacio Recife. He holds a PhD in Political Science from the Federal University of Pernambuco. His research interests are comparative politics, democratization and survival of democracies, and transparency. He is also a BITSS Catalyst and a 2016 BITSS Summer Institute participant.
OPCFW_CODE
iOS9 not resolving address from iOS10

I tried out the iOS10 beta recently and noticed that HHServices no longer resolves bluetooth addresses on the iOS9 side. I drilled down and it looks like DNSServiceGetAddrInfo never calls the callback after beginning to resolve an address from getNextAddressInfo. I haven't tried it with two iOS10 devices yet, but I have a hunch it will work fine. I noticed almost the same exact issue when iOS8 came out, where it would only work with other iOS8 versions. With iOS9 it again worked with every version other than 8. Any ideas how to solve this would be great. Pretty worried this is an iOS issue and I won't be able to do anything about it, and using the iOS Multipeer framework is just too slow for my needs.

I haven't tried HHServices with the iOS10 beta yet, but I'm going to try and do that soon. In the meantime, please have a look at the develop branch and see if that works better (it includes some general improvements as well as a few minor bug fixes).

Thanks Tobias, I haven't looked at the develop branch yet, I'll give that a try. I spent some more time looking into the issue and now I'm pretty certain it has to do with iOS enforcing IPv6 addresses for bluetooth. I'm not able to resolve any iOS10 device over bluetooth using HHServices. I tried passing kDNSServiceProtocol_IPv4 | kDNSServiceProtocol_IPv6 into DNSServiceGetAddrInfo in HHService.m to get back an IPv6 address, and resolution did actually return something. The problem was that I couldn't get a usable address out of it. Hopefully that helps.

Hey Tobias, I was able to fix the issue after adding IPv6 support to my local pull of the main branch! I tried the develop branch and it also worked after one change: make sure to pass kDNSServiceProtocol_IPv4 | kDNSServiceProtocol_IPv6 into the protocol field of DNSServiceGetAddrInfo in getNextAddressInfo of HHServices.m. Otherwise it won't resolve IPv6 addresses coming from iOS10 devices over Bluetooth. It should be pretty easy to update the main branch to support IPv6 and close this issue. The develop branch was giving me some latency I wasn't experiencing on the main so I'll stick with that one for now. Thanks!

Hey, yeah, I noticed that was missing as well just the other day :) Have added it locally, will push that change momentarily. What kind of latency did you experience, and in what operation? Thanks, Tobias

Hey help! All of my apps on the app store are gonna be broken, I think because of this exact issue -- the symptom is that iOS 10 won't connect to iOS 9.
@sgosztyla can you please please please share your fork of this repo with the fixes?! @tolo, sorry for the delay, I didn't notice the response. I went back and tested it with the latest develop branch and I'm not seeing any latency. It's possible I left debug logs on, or did something else to cause my slowdowns. Sorry for the false alarm. @tolo ahh bless you, I was stuck for hours trying to get the updated pod to integrate. Gave up and went for a shower, got back, and you'd fixed it (4:28pm local time here). You are the best Haha, thanks @xaphod ;) No worries @sgosztyla - glad it works as expected. Also, the develop branch has now been merged into master, and HHServices is now available as a pod.
GITHUB_ARCHIVE
One thing I want to mention up front: I haven't seen the Merlin TV show or any of the movie adaptations other than Monty Python's. It may be quixotic (ha, ha), but I wanted to approach the characters from as close as I could get to their literary origins without learning any dead languages. Obviously there's going to be a modern lens applied, but I wanted it to be my own lens. Because I am incapable of doing anything the simple way (see: including Arthurian mythos at all), I'm not going with any single source, but synthesizing a few versions of the story into one that suits my needs. I do this in the serene certainty that I am following not so much in others' footsteps as on an actual highway. I'm starting from Malory as readily available and readable, but adding and subtracting freely. Even if I was going with just his version, I would have to do a fair amount of sorting and adapting, as he wasn't paying attention to consistency. Was Mordred Gawaine's cousin or his brother? Was the knight who followed the Questing Beast Pellinore or Palmides? Was Anguish the king of Ireland or Scotland? Really, "Anguish?" How many damn times did Tintagel change owners, since everything important happens there? Don't even get me started on Lancelot's family tree, which grew more cousins and nephews with every chapter. Regardless. My story takes place in (mostly) the real world. Arthur did not exist as depicted in our world. Therefore, I have to come up with a version of the story that is identifiably itself, but which satisfies some basic rules for realism in my book-universe. I had reasons for thinking this was a good idea, I swear. From the outset I discarded the invasion of Rome. That story doesn't make sense as anything but exaggerated propaganda no matter how you slice it. I am on the fence about the Grail quest. It extends the timeline and the scope of the action enormously, and it requires going literal where I prefer to leave things metaphorical. Not to mention that if you were going on a quest in which purity by any definition was important, you could hardly do worse than this lot of characters, no matter which version you read. Also, Galahad's existence bugs me. There is no way to un-squick fathers pimping out their daughters via magical deception -- the goal of which is to make a guy think he's having sex with a woman who's already married to someone else to begin with -- in order to satisfy a prophecy. While there are many interesting elements there that suggest alchemy to my imagination, it's hard to imagine God approving. Slicing out both of those sub-plots simplifies things a lot. The next task is putting some boundaries on the story. When he first becomes king, Arthur is described as "beardless." I'm taking this liberally to mean that he was under twenty, not necessarily too young to shave. By the end of things, he is still a formidable warrior. I have a hard time buying this after age 50, and that's stretching. I don't care how bad-ass you are, that lifestyle takes a toll. This puts an outside limit of 30 years on his reign. This makes for further simplification, in that some of the tales continue into a third generation. Mordred, for instance, is supposed to have an adult son at the end of at least one version. Well, when could that have happened? Out he goes. This leaves me with a skeleton that resembles a single story, rather than the conglomeration of assorted individual tales that were smushed together willy-nilly in the sources. 
My simplified timeline breaks down into the following chunks: - Arthur's accession and the early wars to legitimize his rule. This period is capped off by his marriage to Gwenivir, which spelling I have settled on because it is the shortest. - A stabilization period in which many of the famous individual adventures happened. Room can be made here for Arthur himself to still be doing some solo adventuring, along with the big names from the first generation of knights. - Continental wars, drastically down-scaled from the invasion of Rome. This period includes some notable deaths from the first generation of knights, and Lancelot's early life. I'm using the French version of his story, in which he got kidnapped in infancy by the Lady of the Lake. I'm also putting him in the second generation of knights, because otherwise we have to believe that he carried on an affair with the queen for several decades, during which (per Malory) everyone in the kingdom except for Arthur knew about it. - A second period of stability is a false one, as evil is starting to get its act together and the queen is straying. A lot of the Tristram stuff would end up here. - The affair is discovered, war between Arthur and Lancelot, Mordred makes his move, everything goes to tragedy.
OPCFW_CODE
Java is widely used in web and application development as well as big data. Java can be used on the backend of a number of popular websites, including Google, Amazon, Twitter, and YouTube. New Java frameworks like Spring, Struts, and Hibernate are also very popular. With tens of millions of Java developers worldwide, there are lots of ways to learn Java.

A programming language must encourage recomposition — grabbing elements of other programs, assembling them together, modifying them, building on top of them. This gives creators the initial material they need to create by reacting, instead of facing every new idea with a blank page. It also allows creators to learn from each other, instead of deriving techniques and style in a vacuum. In HyperCard, the program is represented as a stack of cards, with the programmer drawing objects onto each card.

Access to certain personal information that’s collected from our Services and that we maintain may be available to you. For example, if you created a password-protected account within our Service, you can access that account to review the information you provided. We also use the Secure Sockets Layer protocol on your account information and registration pages to protect sensitive personal data. Sensitive data is encrypted on our iD Sites & Services and when stored on the servers. Our iD Sites & Services are operated and managed on servers located within the United States.

- R is a free and open-source programming language for statistical analysis and the creation of nice-looking data visualizations.
- Of these, one in particular — Flash, from Adobe — seems to be becoming a de facto standard.
- Think about how to design your programs to make it simpler for the people who will maintain them after you.
- Unlike most of the puzzle-based coding applications, Alice motivates learning through creative exploration.

Learn To Code For Free

As one of the first programming languages ever developed, C has served as the foundation for writing more modern languages such as Python, Ruby, and PHP. Whether you’re new to programming or looking to brush up on your skills, it helps to know which languages are in high demand. This is an introduction to programming for people who have never programmed before. It may even be useful for people who have programmed a bit and want to improve their style and technique – or simply learn modern C++. If you are a beginner looking to dive into coding, check out these 9 spots on the web where you can learn to code! Learning a new programming language is much like learning a new spoken language. Computer programming is quickly becoming a huge necessity in our lives. CoderDojo is a global community of free and open coding clubs helping young people create cool and fun things with technology. You can find a great coding club for kids or to volunteer at here, or see all our great classes and projects to learn these languages here. It is easy to learn, with a helpful 20-minute quick start guide on the official Ruby website. A simple answer would be, “Programming is the act of instructing computers to carry out tasks.” It is sometimes called coding. If you look at a newspaper, it all just looks overwhelming, like, well, I need to go and learn 2000 characters.
But if you simply look at a single character at a time and you break apart the radicals… So, I believe it’s mu or shu that is the word for wood or tree. And that radical that looks like a tree shows up in a lot of wooden things.

We estimate that students can complete the program in 4 months, working 10 hours per week. If you want to learn to code but have little or no experience, this program provides the perfect starting point.

Clojure is designed to be a hosted language, sharing the JVM type system, GC, threads and so on. Clojure is a superb Java library consumer, offering the dot-target-member notation for calls to Java. Clojure supports the dynamic implementation of Java interfaces and classes. Clojure simplifies multi-threaded programming in several ways.

But even when they do not yield large profits, thousands — and soon hundreds of thousands — of people are beginning to create and share good programs we can all use for free. Successful companies train new programmers, who then generate their own ideas and tools, in addition to the tools their companies build. Smart companies are already searching for young people who can create these new tools — workers who are twenty-first-century literate. Tools have always been important to people; now, intellectual tools are becoming increasingly significant. Until recently, getting an education and becoming a literate person meant learning to use the set of tools considered essential for every subject or discipline.
OPCFW_CODE
Nowadays, it’s a staple of Linux administration, and a powerful tool that you can use to improve your efficiency.

What is Grep?

Grep stands for "globally search for a regular expression and print matching lines". And as the name suggests, it is a tool well suited to sifting through text.

When Should You use Grep

There are two main use cases for grep:
- Search text within files
- Filter output of another program, so you only get what you care about

We’ll go through some examples of grep in action. The text file we will work with will be a pretty trivial one, just so I can show you the absolute basics of what grep is capable of, and some of the options you can run it with. Perhaps in the future we could explore some more realistic applications.

Working With Grep: A Basic Example

Consider the following file, saved as grep-example.txt. It's 17 lines long: 15 contain a word, one rogue line contains a random IP address, and another line contains something that had ambitions of being an IP address once upon a time… Let’s see what we can do with this.

vessel
cemetery
scintillating
Scintillating
shape
204.235.208.222
2049235%208a222
opine
declare
zinc
disagreeable
lizards
mark
Hello
hello
queen
queue

The most basic way to use grep is to try to get it to produce an exact match. We don’t have to wrap our search term in single quotes, but it does help guard against some unpredictable outcomes if you’re new to the tool, so I’d recommend doing so to start. If I type in grep 'cemetery' grep-example.txt the command will return the following:

$ grep 'cemetery' grep-example.txt
cemetery

Nice! What if I want all the entries that contain the letter "c"? I can run grep 'c' grep-example.txt

$ grep 'c' grep-example.txt
cemetery
scintillating
Scintillating
declare
zinc

OK, but this by itself might be pretty useless: how am I going to actually look these up? One way to do this would be to use the --line-number option, which can be abbreviated to -n:

$ grep 'c' -n grep-example.txt
2:cemetery
3:scintillating
4:Scintillating
9:declare
10:zinc

So far so good. But what if we want to grab that IP address?

$ grep '204.235.208.222' grep-example.txt
204.235.208.222
2049235%208a222

Wait! What? We didn’t get the results we were expecting. This didn’t work because Linux interpreted the . character here as standing in for any character whatsoever. As we touched upon earlier, the RE in grep stands for "regular expressions" - the programme checks for these and uses them in your search. Because of this, the following things would also have been returned in the search:

204A235B208C222
204-235-208-222
204323532083222

We wanted to find the IP address though. What can we do to get the results we want? There are two ways of dealing with this situation.

You could escape the characters, so their special meaning in regex is taken away:

$ grep '204\.235\.208\.222' grep-example.txt
204.235.208.222

Which gives us what we want. However, for a large string, this could be quite tedious, and if you were sending this line to a colleague who isn't used to working with regex, they might not find the command you wrote to be intuitive. A cleaner alternative would be to pass the -F flag (which is an abbreviation for --fixed-strings) to grep, or use fgrep. Both of these options will have the same result: they will let you search for a string that is an exact match to one you provide.
$ grep --fixed-strings 204.235.208.222 grep-example.txt
204.235.208.222

$ grep -F 204.235.208.222 grep-example.txt
204.235.208.222

$ fgrep 204.235.208.222 grep-example.txt
204.235.208.222

#All these commands return the very same result

When you do want to use regular expressions in your search term, it can be a good idea to use the --extended-regexp option (-E for short). You can also call egrep instead of grep to achieve the same thing:

$ grep -E '[Hh]ello' grep-example.txt
Hello
hello

A lot of regex features are available without passing this flag, but in case you can't remember which are included and which are not, passing this flag can help you stay on the safe side.

Filtering Out Noise

Imagine we are getting a huge amount of input from a programme we are using which is giving us information we don’t care about right now. Often when we use grep, we use it to filter out things we don’t care about. Imagine that, in the example from earlier, instead of searching for all lines that contained the letter c, what about lines that omit it? To do this, we have the option to use the inversion flag -v, short for --invert-match, which works like this:

$ grep -v 'c' grep-example.txt
vessel
shape
204.235.208.222
2049235%208a222
opine
disagreeable
lizards
mark
Hello
hello
queen
queue

$ grep --invert-match 'c' grep-example.txt
vessel
shape
204.235.208.222
2049235%208a222
opine
disagreeable
lizards
mark
Hello
hello
queen
queue

#Again, both these commands do the very same thing

This command is especially useful when you are using grep to control the flow of data from one application to another (as you might do with a CI pipeline). The next thing we should take a look at is when you are sifting through a large amount of text, and you want to check for all instances of a string, without filtering for caps/lower case. At the moment, we know how to get grep to return only one of the two hello strings in our file:

$ grep 'Hello' grep-example.txt
Hello

$ grep 'hello' grep-example.txt
hello

To do this, grep has a special flag --ignore-case which (as the name suggests) ignores case when you do your searches. The short form of this flag is -i.

$ grep --ignore-case 'H' grep-example.txt
shape
Hello
hello

The abbreviated version of this command, grep -i 'H' grep-example.txt, would have accomplished the very same thing.

These commands so far should get you started, and give you some idea of how grep is used within a file. But what if you have a large number of files distributed over a large number of directories? This is probably how you are going to use grep a fair bit in reality, and fortunately, as we mentioned at the beginning of this article, it is a problem that grep was born to solve. To show how grep works recursively, I've taken our example file from earlier and placed copies of it in several places within the directory structure of a project. I used the tree command (which is a pretty handy tool to get your hands on, in and of itself) to generate the diagram below.

$ tree nested_texts/
nested_texts/
├── grep-example.txt
└── nest
    ├── grep-example.txt
    └── nest
        └── grep-example.txt

Right, so to simply do the job, and run a search recursively through every file in a tree, we can pass grep the -r flag as below (the long form being --recursive).

$ grep -r hello nested_texts/
nested_texts/grep-example.txt:hello
nested_texts/nest/grep-example.txt:hello
nested_texts/nest/nest/grep-example.txt:hello

Pretty handy right?
You can probably already start to see how this could be used to trawl through some of the larger repositories you work with.

A Side Note About Flags

That pretty much concludes everything I wanted to show you; however, there is one last thing worth mentioning that I couldn't find a good way to fit in. When you use flags with grep (and a number of other Linux tools, for that matter), you can combine your flags. A couple of quick examples of this are below.

A command combining recursion with case insensitivity:

$ grep -ri 'H' nested_texts/
nested_texts/grep-example.txt:shape
nested_texts/grep-example.txt:Hello
nested_texts/grep-example.txt:hello
nested_texts/nest/grep-example.txt:shape
nested_texts/nest/grep-example.txt:Hello
nested_texts/nest/grep-example.txt:hello
nested_texts/nest/nest/grep-example.txt:shape
nested_texts/nest/nest/grep-example.txt:Hello
nested_texts/nest/nest/grep-example.txt:hello
#Returns everything containing the letter 'h', caps or not, in every file within the tree.

A command combining recursion and case insensitivity that also prints line numbers; I've seen people use this a lot, and I'm sure you will too:

$ grep -rni 'H' nested_texts/
nested_texts/grep-example.txt:5:shape
nested_texts/grep-example.txt:14:Hello
nested_texts/grep-example.txt:15:hello
nested_texts/nest/grep-example.txt:5:shape
nested_texts/nest/grep-example.txt:14:Hello
nested_texts/nest/grep-example.txt:15:hello
nested_texts/nest/nest/grep-example.txt:5:shape
nested_texts/nest/nest/grep-example.txt:14:Hello
nested_texts/nest/nest/grep-example.txt:15:hello
#Same as before, but with line numbers

So, if you're getting used to using grep for the first time, I hope this has served as a good introduction. Note that if you're working with really large repositories there are faster tools available to you, but grep is still worth getting to know. Thanks for reading.
OPCFW_CODE
Uncaught Exception During Write (GetOverlappedResult): Unknown error code 433

SerialPort Version: 10.4.0
Node Version: 16.14.0
Electron Version: 19.0.11
Platform: Microsoft Windows NT 10.0.19045.0 x64
Architecture: x64
Hardware or chipset of serialport: STM32f103

What steps will reproduce the bug?

When there is a power disruption during write, i.e. EMI or occasionally physical disconnection, there is an uncaught exception. self.port in this context is the SerialPort instance.

async function async_call(self, resolve, reject, command){
  try {
    self.port.once('error', err => reject(err));
    let timer = setTimeout(() => {
      // self.close()
      self.port.removeAllListeners('error');
      reject(
        "Device timed out. CMD: " + command.toString().trim() + " failed. Com: " + self.com
      );
    }, 5500);
    await self.port.write(command, (err) => {
      if(err) {
        self.port.removeAllListeners('error');
        reject(err);
      }
    });
    // resolves on 'data' event from parser
  } catch (err) {
    reject(err)
  }
}

and even this doesn't capture the error:

process.on("unhandledRejection", (reason, p) => {
  Logger.log("error", "Unhandled Rejection at: " + p + " reason: " + reason)
  console.log("Unhandled Rejection at: ", p, "reason:", reason);
  // application specific logging, throwing an error, or other logic here
});

What happens?

Results in a pop-up:

A JavaScript error occurred in the main process
Uncaught Exception: Error: Writing to COM port (GetOverlappedResult): Unknown error code 433

I think I've seen error code 31 as well for a physical disconnect; 433 seems to show up from an EMI event. This exception blocks the entire process and I can't seem to catch it.

What should have happened?

The device is able to reset both on the board and it appears to reset in Windows after an EMI event through the Windows Device Manager, but my application will completely stop until that pop-up is acknowledged, which is a problem when I am trying to collect data from multiple devices overnight. I see in the documentation that errors can occur on the stream with an 'error' event instead of through the callback. I'm going to try again with a catch on the parser instead of the port? The 'error' event does not appear to be present on the port nor in the write callback.

Additional information

I have EMI shielding but there's only so much I can do. I only run into this after the port has already made 1000+ calls after hours of running with no issues. I can occasionally trigger it with unplugging my device at the "right" time since I have an interval checking for the listed devices.

I wonder if this could be similar to the issue in this thread; there seem to be suggestions to disable the "USB Selective Suspend" setting, or roll back a problematic Windows hotfix.

Could be related, I definitely have USB selective suspend disabled in both the device manager as well as the Windows power plan.

I realized that I was not correctly propagating the error higher up the chain, which is why I was not able to recover properly/getting the uncaught exception warning. The solution was fixing my error handling up the chain in my API; serialport was behaving as expected.
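For anyone hitting the same wall: the resolution above boils down to making sure every failure path actually rejects and is handled by the caller, rather than relying on an uncaught 'error' event. Below is a minimal sketch of that error-propagation pattern with serialport 10.x; the port path, baud rate, timeout and command are placeholders, and this is an illustration rather than the poster's exact code.

import { SerialPort } from 'serialport';

// Placeholder settings - adjust for your own device.
const port = new SerialPort({ path: 'COM3', baudRate: 115200 });

// A long-lived 'error' listener catches failures (e.g. a disconnect mid-write)
// that would otherwise surface as an uncaught exception in the main process.
port.on('error', (err) => {
  console.error('serial port error:', err.message);
});

function writeCommand(command: Buffer, timeoutMs = 5500): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error('Device timed out: ' + command.toString().trim())),
      timeoutMs
    );
    port.write(command, (err) => {
      clearTimeout(timer);
      if (err) {
        reject(err); // propagate instead of letting it become an uncaught exception
      } else {
        resolve();
      }
    });
  });
}

// The caller must await (or .catch()) the promise so the rejection is handled:
writeCommand(Buffer.from('STATUS\r\n'))
  .then(() => console.log('write ok'))
  .catch((err) => console.error('write failed:', err.message));

Note that write()'s callback does not wait for a device response, so in a real application the promise would typically be resolved later, on the parser's 'data' event, as in the snippet above.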
GITHUB_ARCHIVE
The Struggles of Going Open Source It's been about a week since my last post, and I do apologize for the delay. I just started a new job recently and with that comes a change in life style (I need to wake up early now). While the pandemic rages on I'll be working a 9 to 5 job at home using the greatest OS known to man, Windows 10 (...sorry, I threw up a little bit in my mouth there). Even though I say Windows 10, the truth is because of a few programs I need to run, I actually need to use a Windows 7 virtual machine in Windows 10. Even on my gigabit internet connection, it's not that fast... During a team meeting I joked that if only we used Linux we might not have this issue, and to my surprise some of my bosses agreed. Open Source just seems to scare the masses away. Not many if any businesses use Linux, BSD, RISC OS Open or any of the other free/open source operating systems out there. Windows has its dirty little fingers on the world and it refuses to let go. What about open source programs? Certainly if you learned how to use something like K-Denlive or GIMP you could do a great many things without the hefty price tag. However, Photoshop and Premier/Final Cut Pro dominate the creative market and come with their own user license agreements that aren't always fair to the end user, yet leave the alternatives behind. Gaming on PC is another issue. All the games comes to Windows and developers don't bother to make a Linux version. Hardware suffers similar issues with new processors and graphics cards being gimped on a Linux/BSD build. I would love to play a few rounds of Destiny 2 on my Linux gaming PC, but at this moment in time, it's just not a possibility, I need to run it in Windows. In the smartphone world, you only get two options, Android or iOS. Some of you may say you can hack an Android phone and put a custom ROM on it. This is true, but what's also true is this will only appeal to a niche group. Even on a hacked Android device, you lose access to certain apps. You know, the really popular ones all your friends are using. In a world where you can tally up the pros vs the cons of going open source vs being closed source it's amazing we chose the latter. Where we agree to EULA's we can't fully read and understand. Have applications never explicitly less us know what they do. And, essentially be slave to a handful of million, billion and trillion dollar companies. Why? It Just Works I can sing up to the mountains how much I love the work that any open source operating system is doing and the unique programs and work flows that they offer. However, as much as I love them I still from time to time need to use Windows for this or that. Why? Because it just works. I still have bad memories of not being able to view PDF's in Linux because Adobe dropped support and the alternatives just didn't work right. I remember when I tried to convince my partner that she should use a Linux laptop. It didn't offer Word or Teams and as a teacher, it just wasn't going to cut it. I remember when I tried to convince her to use a Pinephone, but she would replied that she would lose access to Snapchat, Facebook Messengers, Google Maps, etc. It wasn't going to happen. Nothing works as well as her iPhone 11 Pro Max. The world of open source is always playing second fiddle to these closed source OS's and programs that have no respect for us, steal our data, pry into our lives and offer little in return other than that they're industry standards. 
While my convictions keep me away from these types of things, I am envious. I drool over every new piece of tech whether that be a game consoles, a smartphone, or laptop knowing there isn't an open source alternative available. If I think about it long enough, I can totally see myself getting swept up into an Apple or a Google ecosystem. Enjoying the harmony of all these different things working together. Hardly running into any issues, but if so being able to call a help line to fix it. When I think about these things I wonder if I made the right choice avoiding these luxuries. I don't regret my choices though, and I will continue to use and sing the praises of all things open source. However, sometimes the journey is a struggle.
OPCFW_CODE
In this video, Russ Long talks about some basic best practices for creating a cluster. Discover how to create solid cluster prior to enabling cluster options such as HA or DRS. - [Teacher] Now before we dive into the details of an HA cluster, I want to make sure that we understand that the cluster itself is really important. We have to make sure that it is built in the best manner possible, otherwise we're creating problems before we even bring in the HA technology. So before you ever press the HA button, that HA option to turn high availability on, we have to make sure your cluster is created correctly. Now the linchpin to any cluster is Shared Storage. This is the backbone where all the data is stored. Data will be moved back and forth several times to the Shared Storage option, so we have to make sure that our performance is up to snuff. Now there are many options for storage; iSCSI, Fibre, Fibre Over Etnernet, or NFS. Our choice here will directly affect the cluster at large. So make sure that your performance, your reads and writes, your IOPS, are all perfect before you turn on the HA option. Now another key to building a good cluster is, I want you to make sure that your hosts within a cluster are built as similarly as possible. You want the same processors. You want the same type of memory. You want the same amount of memory to be in each host. This allows virtual machines to move from one host to another within a cluster without experiencing a big difference in the type of resources that are being offered. So a key to building a good cluster is to make sure the hosts are as similar as possible. Now one of those main concerns that we should have when building a cluster is to make sure that we have the same vendor. You don't want to have AMD on one side and Intel on the other side of your cluster, it just won't work. Your migration is going to fail, and that's a bad thing. We have to have Intel-to-Intel migrations; otherwise, we'll have no migration at all. Now a lot of people said, "Well, EVC. "Enhanced V-motion compatibility. "That solves that problem, doesn't it, Russ?" Absolutely not. EVC is for different models within the same family of processors and not different vendors. So it's for AMD-to-AMD processors or Intel-to-Intel processors that just happen to be different models. Even though we have EVC enabled, I would rather you use the same models within the same vendor. That is the best practice. EVC is when you have to shoehorn something in or just try and make it work with what you have. Always be mindful of the processors within your cluster and how you are going to utilize that cluster and how the virtual machines are going to move around inside. Now last but not least on our cluster practices is to avoid placing virtual machines that have very high utilization on the same hosts within a cluster. This causes a contention for resources, such as our CPU, our memory, and even our networking. So we want to make sure that high-utilization virtual machines are kept separate on different hosts, even though they reside in the same cluster. A great example of this is our databases. If I have two servers that are databases within the same cluster, I'm going to try my hardest to make sure that those are separated at all cost. This means when you place virtual machines within a cluster, you're going to plan out where those virtual machines reside on each host, rather than letting the chips fall where they may. 
Note: This course will also help you prepare for the Configure and Administer vSphere Availability Solutions domain of the VMware Certified Professional – Data Center Virtualization exam. View the exam blueprint at https://mylearn.vmware.com/mgrReg/plan.cfm?plan=64180&ui=www_cert. - How vSphere High Availability works - The basics of clusters - Understanding failure types and failure response - Monitoring HA virtual machines and appliances - Using heartbeats - Creating and configuring clusters - Configuring admission control - Best practices: Networking, interoperability, and cluster monitoring
OPCFW_CODE
Learn numbers in Moloko

Knowing numbers in Moloko is probably one of the most useful things you can learn to say, write and understand in Moloko. Learning to count in Moloko may appeal to you just as a simple curiosity or be something you really need. Perhaps you have planned a trip to a country where Moloko is the most widely spoken language, and you want to be able to shop and even bargain with a good knowledge of numbers in Moloko. It's also useful for guiding you through street numbers. You'll be able to better understand the directions to places and everything expressed in numbers, such as the times when public transportation leaves. Can you think of more reasons to learn numbers in Moloko?

The Moloko language (Məlokwo) belongs to the Chadic language family, and more precisely to its Biu–Mandara, or Central Chadic, branch. It is spoken in northern Cameroon, in the Mayo-Sava department. Moloko counts about 8,500 speakers.

List of numbers in Moloko

Here is a list of numbers in Moloko. We have made for you a list with all the numbers in Moloko from 1 to 20. We have also included the tens up to the number 100, so that you know how to count up to 100 in Moloko. We also close the list by showing you what the number 1000 looks like in Moloko.

- 1) bǝlen
- 2) cew
- 3) makar
- 4) mǝfaɗ
- 5) zlom
- 6) mǝko
- 7) sǝsǝre
- 8) slalakar
- 9) holombo
- 10) kǝro
- 11) kǝro hǝr bǝlen
- 12) kǝro hǝr cew
- 13) kǝro hǝr makar
- 14) kǝro hǝr mǝfaɗ
- 15) kǝro hǝr zlom
- 16) kǝro hǝr mǝko
- 17) kǝro hǝr sǝsǝre
- 18) kǝro hǝr slalakar
- 19) kǝro hǝr holombo
- 20) kokǝr cew
- 30) kokǝr makar
- 40) kokǝr mǝfaɗ
- 50) kokǝr zlom
- 60) kokǝr mǝko
- 70) kokǝr sǝsǝre
- 80) kokǝr slalakar
- 90) kokǝr holombo
- 100) sǝkat
- 1,000) dǝbo

Numbers in Moloko: Moloko numbering rules

Each culture has specific peculiarities that are expressed in its language and its way of counting. Moloko is no exception. If you want to learn numbers in Moloko you will have to learn a series of rules that we will explain below. If you apply these rules you will soon find that you will be able to count in Moloko with ease. The way numbers are formed in Moloko is easy to understand if you follow the rules explained here. Surprise everyone by counting in Moloko. Also, learning how to number in Moloko yourself from these simple rules is very beneficial for your brain, as it forces it to work and stay in shape. Working with numbers and a foreign language like Moloko at the same time is one of the best ways to train our little gray cells, so let's see what rules you need to apply to number in Moloko.

Digits from one to nine are rendered by specific words, namely: bǝlen [1], cew [2], makar [3], mǝfaɗ (or ǝwfaɗ) [4], zlom [5], mǝko [6], sǝsǝre [7], slalakar [8], and holombo [9].

Tens are formed starting with the word for ten (singular: kǝro, plural: kokǝr), followed by the multiplier digit separated with spaces, except for ten itself: kǝro [10], kokǝr cew [20], kokǝr makar [30], kokǝr mǝfaɗ [40], kokǝr zlom [50], kokǝr mǝko [60], kokǝr sǝsǝre [70], kokǝr slalakar [80], and kokǝr holombo [90].

Compound numbers are formed starting with the ten, then the word hǝr, and the unit separated with spaces (e.g.: kǝro hǝr slalakar [18], kokǝr zlom hǝr bǝlen [51], kokǝr holombo hǝr makar [93]).

Hundreds are formed starting with the word for hundred (sǝkat), followed by the multiplier digit separated with a space, except for one hundred: sǝkat [100], sǝkat cew [200], sǝkat makar [300], sǝkat mǝfaɗ [400], sǝkat zlom [500], sǝkat mǝko [600], sǝkat sǝsǝre [700], sǝkat slalakar [800], and sǝkat holombo [900].
Thousands are formed starting with the word for thousand (dǝbo), followed by the multiplier digit separated with a space, except for one thousand: dǝbo [1,000], dǝbo cew [2,000], dǝbo makar [3,000], dǝbo mǝfaɗ [4,000], dǝbo zlom [5,000], dǝbo mǝko [6,000], dǝbo sǝsǝre [7,000], dǝbo slalakar [8,000], and dǝbo holombo [9,000].

Between the hundred and the ten or unit, but also between the thousand and the hundred, ten or unit, the word nǝ is used (e.g.: sǝkat nǝ bǝlen [101], sǝkat cew nǝ kokǝr makar hǝr zlom [235], dǝbo nǝ bǝlen [1,001], dǝbo cew nǝ sǝkat mǝfaɗ [2,400]).

The expression for one hundred thousand is dǝbo dǝbo sǝkat [100,000 or 10^5] (literally thousand thousand hundred). The expression for one million is dǝbo dǝbo dǝbo [1 million or 10^6] (literally thousand thousand thousand).

A grammar of Moloko, by Dianne Friesen, Language Science Press (2017)
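To make the arithmetic behind these rules concrete, here is a small TypeScript sketch (not part of the source article) that assembles Moloko numerals for 1 to 9,999 from the rules above. The way nǝ is inserted between groups reflects one reading of the rule and the examples given; treat it as an illustration, not a linguistic reference.

const units = ['', 'bǝlen', 'cew', 'makar', 'mǝfaɗ', 'zlom', 'mǝko', 'sǝsǝre', 'slalakar', 'holombo'];

function tensAndUnits(n: number): string {
  // 1-99: ten is kǝro (10) or kokǝr + digit (20-90); compounds join with "hǝr".
  if (n < 10) return units[n];
  const ten = n < 20 ? 'kǝro' : `kokǝr ${units[Math.floor(n / 10)]}`;
  const unit = n % 10;
  return unit === 0 ? ten : `${ten} hǝr ${units[unit]}`;
}

function toMoloko(n: number): string {
  if (n < 1 || n > 9999) throw new RangeError('sketch only covers 1 to 9,999');
  const parts: string[] = [];
  const thousands = Math.floor(n / 1000);
  const hundreds = Math.floor((n % 1000) / 100);
  const rest = n % 100;
  if (thousands) parts.push(thousands === 1 ? 'dǝbo' : `dǝbo ${units[thousands]}`);
  if (hundreds) parts.push(hundreds === 1 ? 'sǝkat' : `sǝkat ${units[hundreds]}`);
  if (rest) parts.push(tensAndUnits(rest));
  // "nǝ" links each higher group to whatever follows it (my reading of the rule above).
  return parts.join(' nǝ ');
}

// Checked against the article's examples:
// toMoloko(51)   -> "kokǝr zlom hǝr bǝlen"
// toMoloko(235)  -> "sǝkat cew nǝ kokǝr makar hǝr zlom"
// toMoloko(2400) -> "dǝbo cew nǝ sǝkat mǝfaɗ"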
OPCFW_CODE
05 Jun 2022 Academics who code are, I think, somewhat known for their less than stellar creations. I have certainly written code that I would prefer didn’t see the light of day. A recent example that attracted a great deal of attention is Professor Neil Ferguson’s CovidSim. As the name suggests, it is a simulator of COVID-19 transmission that works by creating artificial agents representing people and environments that they interact with, in much the same way as the SimCity series of games but without the funky graphics. It was the basis of a paper that is credited with fundamentally altering the course of the UK’s COVID-19 policy. The paper predicted that even under the optimal ‘mitigation’ strategy that was considered, the peak surge capacity of ICU beds in the UK would be exceeded 8-times over due to the pandemic. As of the date of writing, it has 3,910 citations. In May 2020, The code behind the simulation was released to GitHub, which is a ubiquitously-used online tool for software development. Before it was released to the public, it apparently consisted of a single 15,000 line file written in C. Having one file of source code that long is already a cardinal sin in coding, and there were many other failures to live up to standard software development practices. Ok, but it worked, didn't it? An independent research group was able to reproduce the published results of CovidSim by running it themselves. So, while CovidSim may have been less than desirable from a coding point of view, it did its job as intended. Does it really matter that it wasn’t that pretty? I have a lot of sympathy for the team that worked on this code. Academic coding often doesn’t live up to industry software development standards, for a multitude of reasons. Code that is used to carry out analysis for research is typically ‘one-use’, or close to it. It is not, for example, google.com, which is used by god knows how many millions of people every day and has a large team of people continuously maintaining it. Code written for research is typically intended to be used by a small number of people who authored it, to obtain a specific set of outputs just once. In addition to that, the PhD students, postdocs and other research staff who typically write the code for such projects are usually themselves not trained in industry standard software development pratices. Many of them will come from disciplines that aren’t primarily coding-centred, and will have to pick it up along the way with little or no oversight. That’s a tough position to be in. Third, academia is often a race to publish results, that can end up being a very much winner-take-all proposition. Release your paper a week too late and posterity will not look kindly upon you. These are not circumstances that are conducive to producing nice code. Lastly, an adage comes to mind that goes something like this: “Feel free to break the rules once you know why they exist”. It captures the idea that once you are at a sufficiently high level in a given skill, a lot of what you do consists of knowing when exactly the rules can be bent/broken. Great chess grandmasters, for example, often play with flagrant disregard for well-established principles of the game. They have been at it for long enough that they can often get a competitive edge by going beyond the rules, in a way that is informed by years of study and expertise. 
Likewise, there are circumstances where it is ok to write fairly ‘terrible’ code, that breaks all the rules - sometimes you just need a result quickly, and you can focus on cleaning it up/optimising later, or not at all. Why, for example, spend hours beautifying a piece of code when you have a deadline looming that depends crucially on the output of said code? Do your due diligence in making sure it is correct, and hit enter. That said, it is not exactly confidence-inspiring that this simulation that changed the course of a nation was lacking in robustness. One reason we like pretty code is that it tends to minimise the chances of an error. The stakes could not have been higher in this case - an error would have had massive consequences. Also, the fact that the code has been around in one form or another since 2005, and has been adapted repeatedly for modelling various epidemics, makes me a little less sympathetic. That is not exactly what one would call one-use code.
OPCFW_CODE
RAID is a topic that comes up a lot when discussing servers. If you’ve wondered what RAID is, why you might want it on your server, and whether RAID 10 is the best option for you, look no further. We’re going to be discussing all of that in today’s article. In RAID 0, data is split (Orange and Red) across two disks for better performance — but no redundancy. RAID, or Redundant Array of Independent Disks, is a technology for using multiple physical disk drives to behave as a single logical storage system. RAID is normally used to protect data in case one (or more) of those drives fails. Most types of RAID also improve access speed and storage space compared to a single drive. A collection of drives in a RAID configuration is often referred to as a RAID array, or just an array. A RAID controller, which is piece of software or hardware that manages the RAID array, is responsible for creating a logical volume (the array) out of physical volumes (the drives). The logical volume appears to the operating system like a single drive, allowing any software to use multiple redundant drives without any special programming. Why would my server need RAID? On the internet, people expect the services they use to remain accessible and quick at all times, and they expect their data to remain safe. Lost data, downtime, or poor performance can hurt your businesses reputation and lead to lost revenues. An unfortunate fact is that anywhere from 1% – 10% of hard drives will fail in any given year. By using a type of RAID that supports redundancy, your server can stay online with your data safe, even if a hard drive fails. And though server CPUs have become hundreds of times faster over the last few decades, hard drive speeds are improving much more slowly. By using several disks in one virtual array, performance is often vastly improved. Because of these factors, RAID has become very popular for web servers and other internet connected devices. RAID is a relatively inexpensive way to improve performance and reliability in this demanding environment. Are there different types of RAID? If you found this article from google, you probably already know there are several types of RAID: RAID 0, 1, 5, 6, 10, etc. What do they do? Each of these types of RAID is known as a “RAID level”, and the RAID level chosen will determine the system requirements, speed, reliability, and available disk space of the RAID array you are creating. Picking the right RAID level is one of the most important choices you can make in getting your server ready for RAID. What RAID level should my server use? RAID in servers is a bit different than RAID at home. For home use, the RAID levels you’re most likely to find are RAID 0 and RAID 1. RAID 0 and RAID 1 are inexpensive, because they only require 2 drives, and nearly all hardware and software supports it. This means that even home users can quite easily make use of them. However, RAID 0 provides no redundancy whatsoever, and so is very unreliable. For RAID 1, it does provide redundancy through mirroring, but only provides the same disk space as a single drive, and no improvement to disk write speed. These disadvantages make RAID 0 and 1 best avoided in a business setting. For servers, web hosting, and other business use, RAID 5, RAID 6 and RAID 10 are popular options. RAID 5 requires a minimum of 3 hard drives, and both RAID 6 and RAID 10 require a minimum of 4 hard drives. 
RAID 5, 6 and 10 are more expensive than raid 0 or 1 because they require better quality raid hardware or software, and also require more disk drives. These RAID levels are popular because they provide a good mix of storage space, speed, and reliability. RAID 10 is one of the most popular RAID options for web servers, VPS servers, and other internet facing devices. For that reason, we’ll focus the rest of this article on RAID 10 and how it compares to other popular RAID types. If RAID 10 is so popular, how does it work and why is it so good? RAID 10 has become popular because it offers the benefits of RAID 0 and RAID 1, offering high performance, good reliability, and extra disk space compared to a single drive alone. RAID 10 is what’s known as a “nested” RAID level as it literally “nests” both RAID 0 and RAID 1 together. Before we move on, let’s talk a bit about nested RAID. In RAID 10, data is first split using RAID 0 (Orange and Red), across two RAID 1 arrays. Each RAID 1 array copies their data to two drives for redundancy. What is nested RAID? To understand RAID nesting, you need to understand the three types of RAID: RAID Mirroring, RAID Striping, and Parity RAID. Each of these types is explained in depth in the articles linked above. Very briefly, here is how each works: RAID Mirroring means copying data from one drive to another for redundancy. RAID 1 is an example of mirroring. RAID Striping means having some data on one drive, and some on another, for extra disk space and speed. Raid 0 is an example of striping. Parity RAID requires at least 3 drives, and uses complex math to allow you to lose any single drive and still keep all your data. RAID 5 is an example of RAID parity. Nested RAID uses two of the above raid types in a single array, to get the benefits of both types of RAID. For example, RAID 10 uses both striping and mirroring. This allows you the benefits of striping (extra speed, extra disk space) and the benefits of mirroring (data redundancy). As such, RAID 10 requires a minimum of 4 drives. RAID 50 is another nested RAID type, which combines parity and striping. We’ll be talking about RAID 10 in this article. At a technical level, the order of the numbers used to identify a nested RAID level tells you how the levels are combined from the bottom up (i.e. the first number is the lowest level of the nested arrays). For example, and as illustrated by the below diagram, RAID 10 provides a RAID 0 array of RAID 1 logical volumes. This means that you get the write speed improvements of RAID 0 with the redundancy improvements of RAID 1. In a RAID 10 configuration, you can lose one drive from each raid-1 sub array without losing data. Because of this, the number of drives that can fail without losing data varies depending upon which drives fail. The array will always be operable with one drive failure. Two drive failures will sometimes lose all data, and sometimes not. And if you lose more than half the drives in a raid 10 array, you will always lose all data. Nested RAID levels, although widely supported, are generally less well supported than basic RAID levels. Cheaper “fakeraid” controllers, often included as an inexpensive feature on motherboards, will usually not support nested or parity raid. Some types of software raid will also not support nested or parity raid. Nested raid requires better quality raid hardware, as well as more hard drives, compared to basic RAID levels. 
Cost is the other consideration: the extra drives, in addition to costing more on their own, also require a computer that can support the additional drives, and they take up more space and use more power. All of these factors make nested RAID more expensive and less common for home users, and more common on servers and other business-class and enterprise configurations. Now that we understand nested RAID a bit better, let’s get back to why you might use RAID 10 on your server. Why would I use RAID 10? RAID 10 has a number of important advantages over other RAID levels: RAID 10 has good data redundancy. A RAID 10 array will always stay online if 1 drive fails, and will sometimes stay online even if up to half of your drives fail (if the “correct” drives fail); RAID 0 always fails if any drive fails, and RAID 5 always fails when 2 or more drives fail. Because it supports striping, RAID 10 offers more disk space than RAID 1. RAID 10 is fast: a 4 drive RAID 10 offers twice the read and write speed of a 2 drive RAID 1, twice the read speed of a 2 drive RAID 0, and far superior write speed compared to a 4 drive RAID 5 or RAID 6. RAID 10 is well supported in most software and hardware, which can be a problem with RAID 5, 6, 50, and 60. And unlike RAID 5, 6, 50, and 60, RAID 10 performs well even if you don’t have an expensive accelerated hardware RAID controller. Why doesn’t everyone use RAID 10? Although RAID 10 has a number of upsides, it’s not perfect for all situations. Here are some downsides: RAID 10 requires a minimum of 4 drives, while RAID 0 and 1 only require 2 drives and RAID 5 has a minimum of 3, so it can cost more. RAID 10 might not be supported by inexpensive “fakeraid” controllers, which only work properly with RAID 0 or 1. With RAID 10 you lose half of your disk space to mirroring; RAID 5, 6, 50, and 60 can give you more available disk space with the same number of drives, which matters for uses like data backups where disk space is more important than speed. Finally, failures can be unpredictable: although you can always safely lose 1 drive, losing 2 drives at once can sometimes cause data loss. If you need to always be able to lose 2 drives without losing data, RAID 6 can do that, and RAID 10 can’t. Learning more about RAID: I hope this overview gives you an idea of what RAID is good for, how nested RAID works, and whether RAID 10 is right for you. If you’d like to learn more about RAID, stay tuned for our upcoming articles on the topic. If you’re ready to get started with a server with RAID, one of the easiest ways to do it is with a dedicated server from IOFLOOD.com. Contact us today and we’d be happy to explain your options and help you pick the configuration that’s right for you. Gabriel is the owner and founder of IOFLOOD.com, an unmanaged dedicated server hosting company operating since 2010. Gabriel loves all things servers, bandwidth, and computer programming, and enjoys sharing his experience on these topics with readers of the IOFLOOD blog.
Tonight I baked chicken drumsticks, for foods purposes, in two attempts because the first time I took them out they came out underdone.

In other news, I talked to both Chas and Dean today. It turns out they'd read all the comments on a recent post, which I hadn't expected (to the extent that I thought about it, since I was answering comments while quite upset), and had gathered alarming hints about how I've been doing. Soooo, all my dedicated efforts to keep them from knowing anything about how I've been managing in the last week have come to naught, but on the bright side, we're all fine. And since this means I am no longer carefully dodging the possibility of them finding out what's been happening, I can talk about it here. My reasons for doing so are mixed - partly for my own benefit, since this is my own damn journal, partly for the informational benefit of people who care about me, partly because I know sometimes it will be surprisingly helpful to a stranger to know that other people deal with this kind of thing too.

I've been struggling to hold things together in the last week. All the strains that were there before the wedding are still around, only now my brother and my best friend are far away and out of reach. My dear friend Oliver has been helping, trying to take care of me, and housemate.Dave cares, but it's not the same, and I've been having trouble. My psychologist has been taking the angle of reminding me that this is my great chance to work on being able to deal with things independently, without help, but it turns out I'm not entirely ready for that yet. Oh, I'm better enough not to be totally dependent any more, but... Ideally, I think, even if I moved out and was living alone, say, I would still be in frequent contact, by e-mail/IM/phone/etc, with my family. (By which I mean my brother-out-law Chas and my sister-out-in-law/BFF Dean.) Feeling cut off and isolated is bad for me - my actual, real breakdowns while they've been gone have both taken place when (first time) everyone I tried to call wasn't answering, or (second time) I was feeling like I couldn't call on anyone at all. First time I mostly held it together until Dave came home. Second time ( is cut for the squeamish. )

Linkin Park. I bleed it out digging deeper just to throw it away, just to throw it away, I bleed it out... I've always had this feeling like everything would be okay if I could just get the blood to run. The only thing that seems to have bled out with it is that. Blood won't help. The cut will hurt and the blood will be in sight, and I'll be distracted from my pain by hating myself for the weakness that cutting represents, but it won't make it all better. I know I've been told this, many times, but I could never feel it. But I have enough self-inflicted scars, and I've seen my life pooling on the floor, and I want to believe I can let this all go now. If two sutures is what I needed to be able to put this behind me, I'll take it. I want to be past this. I want to feel like I don't have to be afraid I'll lose myself, like I don't have to be terrified that depression is an illness that will kill me.

Current Music: Cobra Starship - Guilty Pleasure

So, I'm working on building my new website.
It's nowhere near finished, not even ready to be linked, and there are going to be limits to how strongly I want to associate it with this journal, I think, because the website is "official" and this journal, in theory, is fandom.

New website includes my Very Serious Blog, which so far has all of two entries, neither of them serious, but: the VSB is for stuff that can be linked to my "real" name (which will, in time, become my actual legal name). I have kept a bunch of blogs and journals over the years, and I intend, over time, to add the better posts from all of them to the VSB. Going through the archives of one of my older blogs, I came across Fly the Copter, possibly the simplest game in history, in terms of controls: hold down button to go up, release to go down. It's oddly addictive. It's also an amazing demonstration of the effect of ADHD medication. Because playing it requires focus. Seven years ago, when I first found this and was playing it, I couldn't get my score past the 300s, despite a lot of time trying; today, in the course of a few minutes, I've gone past 1000. A game where loss of focus will kill you really shows the difference.

So, I have a lot of work to do. Which I should endeavour to get done during my medicated hours.

The problem? While I have breakfast and just after, before my meds kick in, is when I catch up on LJ and DW. Which often leads to links that are interesting. And sometimes leads to discovering things, like The Yuletide Archive by Fandom which I could easily lose the whole day to. So, as an exercise in willpower, I've delicioused a couple that I already had open in tabs with an "unread" tag, bookmarked the archive, and closed those tabs so I can do my essay work. But I want to read fiiiiiic. (I have today off, due to uni's Accidental Holiday, and have no history reading to do at all this week! The urge to slack off is strong but my essay is due in THREE AND A HALF WEEKS and I have ridiculous amounts of reading to do on it. Later, I may be making a collation of my notes and ideas so far, which I may post to history.)

Things I have discovered: I am weirded out by Bible fanfic. It is the first fanfic I have encountered that truly weirded me out on a fundamental level. I, just... no, okay? Religion is not for fic. And yet, I have no problem with fiction that includes religious figures as, well, religious figures, divine intervention, magic, what have you, even the kind that reinterprets theology and mythology in dark and interesting ways. I'm not sure where the dividing line is. It's partly on the basis of currency of myth - like, Tom Holt's books featuring the Greek/Roman/Norse pantheons, fine, something similar featuring the Hindu pantheon, not fine. Jesus Christ Superstar, fine, Anne Rice's Jesus fanfic novel, not fine. Gaiman style entangled mythologies and reinvention, fine, apostle slash, not fine. It's such an "I know it when I see it" thing and I don't have time to think about it, argh.

Also, I'd need resources. One of the things I've realised recently, in the course of tripping over my word usage and really upsetting someone, is that when it comes to Big Deal Personal Issues, I can only handle thinking about it so long as I can think about it on a theoretical level, and can access a more comprehensive theory and/or philosophy about it. I can make conclusions about feminism and how I relate to it in life because I have resources on that.
I was able to handle defining my sexuality when I was frequently engaging with queer theory, but since I haven't been looking at that in years I'm now unable to do so at all. Apparently on some level I process through abstraction. Which is fine for me, but can cause issues for other people when I say I'm interested on a theoretical level - to me, that means that I want to engage with the theory of it, want to understand the ramifications of it, beyond my own subjective experiences, which are of course suspect for universal applicability. To other people, it seems, this can come across as: "I can talk about this so long as you don't expect me to actually deal with it or anyone like it or anything." Which, you know... ouch, wrong. Not what I meant, just what I said. The thing that is non-obvious here being that I am an academic by mindset, and I have always - since I learned to talk, apparently - sought objective understanding of things, as much as is possible. (Often it isn't, obviously.) Which means that, to me, if I want to understand race, and racism, that means I'm interested in it on a theoretical level - that is, I want to learn the theory, I want to learn how race has been analysed and deconstructed and reconstructed, how it functions, how and why racism is manifested, how race affects individual experience. I want to understand race on a level that cannot be anything but theoretical to me - I cannot know the subjective experience of being other than white, but I can engage with the abstractions of it, the theory. And thereby, I can try to understand why people do what they do, feel what they feel, act as they do. I suspect it's a byproduct of the mechanisms I've developed over a lifetime of undiagnosed ADHD - combined, of course, with the effects of a seriously dysfunctional family. I grew up not understanding how people functioned, why people acted the way they do. I grew up not understanding relationships, or behaviours, that seemed to be obvious to other people. So I analyse. I read theory. I construct a philosophical framework into which things fit, and make sense to me. (This is actually why I'm good at history, I think - I'm good at building conceptual frameworks even when my data is incomplete, with "insufficient data" as an available option that lets me hold a space in my conceptualisation to fill in later.) On the one hand, once I understand something, I'm good at working on those lines on an ongoing basis; on the other hand, until I understand something, I have trouble engaging with it, and "I don't know enough about this to deal with it" can come across as "... and I don't want to and don't care". Plus, even if it's something that applies to my direct subjective experience, I need an abstract conceptual framework to work in, and if I don't have one, I just can't deal. I think this may be an important revelation. Worthy of the half hour of medicated time I just spent writing this post.
What does scrambling mean in digital communication? Sun Jun 10 18:06:34 2001

Anyone who wants to disguise telephone or radio communication can take a variety of paths. The oldest type of device for speech obfuscation is the "scrambler". Scramblers encrypt speech either by messing up the order of what is spoken ("time domain scrambling") or by shifting the frequencies that make up human speech ("frequency domain scrambling"). After scrambling, the message can no longer be understood, but a human voice is still recognizable in the strangely deformed noise. It is also possible to digitize the speech first (convert it to zeros and ones) and encrypt the resulting bit stream. The encrypted message must then be converted back into an audio signal suitable for sending over a telephone line or a transmitter. This procedure may sound awkward and complicated, but it has several advantages: unlike the speech itself, the bits can be processed with far more complex encryption recipes. Scrambling methods only make sense if they work in "real time", i.e. the encrypted messages are sent so quickly that direct conversation is still possible.

Time domain scrambling divides the spoken text into "blocks" of roughly half a second. The device stores each block, divides it into smaller pieces, mixes those pieces up according to a certain pattern, and sends the shuffled block. This is comparable to encryption using the permutation method: the characters/signals themselves are not changed, only their order. The reverse procedure is used for decryption. Because a piece of speech always has to be buffered first, there is a delay of about half a second on both the sending and the receiving side, so the total communication delay is about one second. That is not much, but it does mean that the people communicating need to be patient; talking over each other or at the same time is not advisable. If half a second of speech is subdivided into 15 pieces, the number of possible mixing patterns is mathematically quite large, in any case far too large to simply try every permutation to put the original message back together. However, as noted, the sequence of bleeps and speech snippets still reveals, for example, whether a man or a woman is speaking, and persistent eavesdroppers can learn to recognize the individual speakers over time.

Speech consists of sound waves of different frequencies. In frequency domain scrambling, the frequencies that make up the speech are processed: each frequency is shifted to another. In somewhat older systems this was always done according to a fixed "key" (conversion frequency), but that turned out to be easy to crack. More modern systems use a different key for each distinguishable frequency of the speech; lower frequencies are converted to higher ones and higher to lower. Current systems mainly use constantly changing keys, and the greater the number of keys in the device, the harder the system is to crack. Another great advantage is that there is no delay during communication. The principle is common for both telephone and radio links. There are also devices that combine both methods; such encryption is correspondingly harder to crack, but these devices also inherit the disadvantage of the time domain method: a delay of about one second.
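As an illustration of the time domain approach described above, here is a small sketch. It is not any real scrambler's algorithm; the number of pieces, the permutation, and the "audio as numeric samples" representation are assumptions chosen purely for illustration. It chops a block of samples into pieces and reorders them with a fixed permutation; applying the inverse permutation restores the original order.

```typescript
// Illustrative time-domain scrambler sketch. Real devices work on roughly
// half-second blocks of speech; here a "block" is just an array of samples,
// the permutation is a hard-coded example key, and block length is assumed
// to be divisible by the number of pieces.
const PERMUTATION = [3, 0, 4, 1, 2]; // example "key": which piece is sent in each slot

function scramble(block: number[], pieces = PERMUTATION.length): number[] {
  const size = block.length / pieces;
  const parts = Array.from({ length: pieces }, (_, i) =>
    block.slice(i * size, (i + 1) * size)
  );
  // Reorder the pieces according to the permutation (the "mixing pattern")
  const out: number[] = [];
  for (const src of PERMUTATION) out.push(...parts[src]);
  return out;
}

function unscramble(block: number[], pieces = PERMUTATION.length): number[] {
  const size = block.length / pieces;
  const parts = Array.from({ length: pieces }, (_, i) =>
    block.slice(i * size, (i + 1) * size)
  );
  // Invert the permutation: the piece received in slot dst goes back to position src
  const restored: number[][] = new Array(pieces);
  PERMUTATION.forEach((src, dst) => { restored[src] = parts[dst]; });
  return restored.flat();
}

// Round trip: the scrambled block is unintelligible in order, but fully reversible.
const original = Array.from({ length: 20 }, (_, i) => i);
console.log(unscramble(scramble(original)).every((v, i) => v === original[i])); // true
```

A real device would do this continuously on successive half-second blocks of audio, which is where the roughly one-second round-trip delay described above comes from.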
Until recently, private individuals could only buy a limited number of simple scramblers, and these do not provide real security for long. More modern scramblers and the first digital voice encryption systems have recently become commercially available, however. With the most modern speech encryption techniques, what is sent after encryption is not deformed speech but a signal that carries bits: zeros and ones represented by distinguishable beeps or tones. Until a few years ago this whole process, digitizing, encrypting, and converting the bits into a signal suitable for a modem, still led to problems: digitization produced too many bits, and it was not possible to send them in real time.[11.1] There are now audio digitization methods that generate far fewer bits, modem transmission techniques have been optimized, and transmission speeds have increased enormously.[11.2] These developments open the way to digital speech obfuscation. In principle the same recipes can be used for encryption as described earlier: DES, IDEA, a pseudo-random key, or an XOR operation.[11.3] Devices that use pseudo-random keys or DES are the most common. After digital voice obfuscation only noise can be heard, and no conversation can be recognized. The American company Motorola was one of the first to bring a system suitable for (mobile) radio communication onto the market ("Digital Voice Protection", or DVP).[11.4] Other companies such as Marconi, Ascom, and Philips now also supply digital voice encryption systems at various technical levels. In addition, digital encryption systems are now also available for communication by telephone, fax, or modem. Of course, with every new encryption system the question arises whether a back door has been built in somewhere that lets the manufacturer (or state authorities) listen in; anyone who buys a finished product never really knows what it contains. If you decide to purchase such a system, the British Ascom devices are probably still the best choice. We haven't mentioned the price yet, so hold on: two encryption units (one for each side) quickly cost 12,000 DM, and that still doesn't include any device for generating keys; a simple PC program with a cable to the crypto phone costs around 5,000 DM. The better voice encryption devices are expensive and, moreover, difficult to obtain commercially. However, you don't need to be a child prodigy to assemble an encryption system for a normal telephone from devices that are usually already in the house. You will need a PC (for the encryption), a sound card (for recording and playing back the sound), a modem (for communicating with the other side), and of course enough expertise and patience to connect it all. Unfortunately this does not (yet) give you a practical, mobile solution. Nevertheless, here are some first tips for this area. The first step is to convert the speech into as few bits as possible. In recent years great progress has been made in digitized sound and image processing (multimedia), and all kinds of sound cards (audio cards) have come onto the market that convert bits into sound. These cards can be built into the PC, and the required software is usually supplied with the card. A well-known example of such a sound card is the Sound Blaster. Modern cards employ fairly effective compression techniques, and such cards are available from a few hundred marks.
When buying, make sure that the compression is done in hardware. Some cards on the market state in their documentation that compression is possible, but it is then sometimes done in software, which makes the process too slow. There are various techniques for converting speech into bits: pulse code modulation (PCM), delta modulation (DM) or delta-sigma modulation, subband coders/vocoders (e.g. the MPEG audio coder), and linear predictive coding (e.g. LPC-CELP). The last two produce the lowest number of bits per second of speech, LPC reportedly managing as little as 740 bit/s, but these techniques are unfortunately not yet standard in conventional audio cards and are correspondingly expensive. Readers with electrical engineering skills can find circuit diagrams in specialist magazines and possibly solder something together themselves. The (cheaper) chip that is often built into audio cards as standard and provides compression is called a DSP. The compression methods this chip supports are called ADPCM, mu-law, and A-law. Cards that support compression with the DSP chip are, for example, the Sound Blaster 16 MultiCD (approx. 500 DM) and the Microsoft Sound System 2.0 (approx. 460 DM). Once the speech signals have been converted into as few bits as possible, those bits have to be encrypted. For digital voice encryption, in principle the same encryption recipes can be used that we have already described, but the speed of the algorithm now matters much more, and it also depends partly on the performance of the computer used. IDEA block encryption is the most suitable for experimental purposes: it is on average twice as fast as DES and appears to be (more) secure. The software version is freely available in source form, but it would have to be substantially adapted, since the 128-bit key that IDEA uses in that version is based on (part of) the message, which is of course not a good starting point for real-time voice encryption. Because encryption also requires a lot of memory, at least a 386 DX with 4 MB of RAM is recommended. A modem is required to send the encrypted voice. The most modern, but for now still very expensive, modems already achieve transmission speeds of around 24,000 bps (real throughput). A modem with a speed of 14,000 bps and an integrated error correction mechanism is available for about 300 DM. Such a modem is in principle suitable for real-time transmission, provided that the digitization has not produced too many bits; encrypting those bits now takes up most of the time. If you build a system yourself with the current technical means, you will likely end up with a somewhat impractical setup that can only be used from a fixed location: a modem connected to a car phone cannot operate at very high speeds, and the audio cards available for laptop computers are not fast enough because they are mostly external devices.[11.5]
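To make the "digitize, then encrypt, then modulate" pipeline concrete, here is a deliberately simplified sketch of the encryption step only: a pseudo-random keystream XORed over digitized audio frames. It is not IDEA or DES and it is not cryptographically secure; the frame contents and the tiny seeded generator are assumptions made purely for illustration of the data flow.

```typescript
// Toy keystream XOR over digitized audio frames. The systems described in the
// text would use DES or IDEA here; this only illustrates the data flow:
// same seed on both sides -> same keystream -> XORing twice restores the frame.
function* keystream(seed: number): Generator<number> {
  // Tiny linear congruential generator; NOT secure, illustration only.
  let state = seed >>> 0;
  while (true) {
    state = (Math.imul(1103515245, state) + 12345) >>> 0;
    yield state & 0xff;
  }
}

function xorFrame(frame: Uint8Array, ks: Generator<number>): Uint8Array {
  const out = new Uint8Array(frame.length);
  for (let i = 0; i < frame.length; i++) {
    out[i] = frame[i] ^ (ks.next().value as number);
  }
  return out;
}

// Sender and receiver agree on a seed out of band (the shared "key").
const seed = 0xc0ffee;
const frame = Uint8Array.from([12, 240, 7, 99, 180, 33, 0, 255]); // one digitized audio frame
const encrypted = xorFrame(frame, keystream(seed));
const decrypted = xorFrame(encrypted, keystream(seed));
console.log(decrypted); // same samples as `frame`
```

Swapping the toy generator for a real cipher such as DES or IDEA in a stream mode is what separates this sketch from the commercial systems discussed above.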
Pocketbase

I'm exploring using PocketBase (PB) with xtemplate. It's sort of like a DB and a realtime system in one, with auth all the way through. It throws events over WebSockets. Wish it was SSE, but such is life :) So you can have a record change and then update all affected GUI using htmx and SSE, so all users stay up to date. You can call it over HTTP, so I currently use it with Benthos to do any JSON mapping on the way in and out. It also has an FS system for local and remote storage. PB is realtime, so when any record changes it tells you. xtemplate has the htmx stuff I need, and Benthos is where you can do custom logic / validation. The xtemplate providers are still nice and useful. One thing I find hard with xtemplate is taking it all in and understanding it. More examples would definitely help.

IMHO, PocketBase is an interesting piece of software. It took me a while to figure out what you had in mind with this feature request. I think you would benefit from explaining your thoughts and giving some use cases with some imaginary code. If I understand you well, PB would be a backend interface to implement the main business logic, and xtemplate would implement the front end logic. xtemplate would make requests to PB in a way or another, making it able to get collections defined in PB, something like: {{range .PocketBase.get 'pages' 'slug' 'home' }} to get the object from the "pages" collection with the "slug" field equal to "home". Am I right? xtemplate would generate entire pages with some htmx that would request some html fragments, also generated by xtemplate. Am I getting it right? In this case, it would be an elegant way to provide RAD web development. But: I'm reluctant to allow users to build their own business logic in a no-code way, as it seems hard to enable versioning in this case; and it seems even more complicated with a mixed solution (at least, there would be some xtemplate code, right?). PB is SQLite based; I don't think xtemplate should impose a DB solution; it is "SQL agnostic" at this point, and I think it should remain like that. I did not dive into how xtemplate is (or isn't) extensible but, I think that, if possible, PB-xtemplate integration should be available as a plugin or a module.

Well, my bad, this is not exactly what I mean by use cases. On the one hand, we can say that we can assemble an engine with wheels and have as a use case a trip from Paris to Berlin or Rome, but not New York. This doesn't tell us much about the desired experience. These use cases would be just as suitable for a bike as a car, autonomous or even a taxi. I want to know what experience you want to get as a developer once you have assembled all the tools you propose. This is what I call a use case. And that includes some desired code samples that show concretely how you envision things. Without that, it's hard to know not only if our visions are compatible, or if what you want fits well into the scope of the project, but also how to achieve it. That's why I proposed my own code sample: to see if I understood your intention correctly. BTW, I googled for a while searching for what Benthos is. I found a lot of stuff, but not at all web or even computing related. At least, I found something named Redpanda, and, googling for both, I found that Redpanda acquired Benthos in May. As I didn't know either of them before, I wonder how Benthos was used (or how, concretely, you use it), and what is the impact of this acquisition on Benthos. And I don't really understand what it has to do with xtemplate.
To put it in one sentence: I'm very pleased and grateful to see your enthusiasm for the same projects as me, but I have a hard time understanding where and how you want to go, as you are often abstract and talk about mixing tools I've never heard of before. It makes me think I should provide a roadmap for my own project (LLW) and, maybe, you should do so too, so we can compare notes.
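For readers following the thread: the kind of call being discussed, a front-end layer asking PocketBase for one record over HTTP, would look roughly like the sketch below. The host, collection name, and field names are invented for illustration; the list-records endpoint and the filter parameter are PocketBase's documented REST API as best I recall, so verify them against the docs for the version you run.

```typescript
// Rough sketch of fetching a single "pages" record by slug from PocketBase's
// REST API. Host, collection name, and field names are assumptions; the
// /api/collections/<name>/records endpoint and `filter` parameter are from
// the PocketBase docs (verify against the version you run).
async function getPageBySlug(slug: string) {
  const url = new URL("http://127.0.0.1:8090/api/collections/pages/records");
  url.searchParams.set("filter", `(slug='${slug}')`);
  url.searchParams.set("perPage", "1");

  const res = await fetch(url);
  if (!res.ok) throw new Error(`PocketBase request failed: ${res.status}`);
  const body = await res.json();
  return body.items?.[0]; // first matching record, or undefined
}

// A template layer (xtemplate or anything else) could call this and render the
// returned fields; realtime updates would come from PocketBase's subscription
// API rather than polling.
getPageBySlug("home").then(page => console.log(page));
```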
Crossbar.io authentication failed and connection dropped when trying to handshake with Autobahn

I got these errors while implementing login (with WAMP-CRA) using AutobahnJS against Crossbar.io:

2018-04-13T09:04:34-0300 [Router 6948] failing WebSocket opening handshake ('This server only speaks WebSocket subprotocols wamp.2.cbor.batched, wamp.2.cbor, wamp.2.msgpack.batched, wamp.2.msgpack, wamp.2.ubjson.batched, wamp.2.ubjson, wamp.2.json.batched, wamp.2.json')

and

2018-04-13T09:04:34-0300 [Router 6948] dropping connection to peer tcp4:<IP_ADDRESS>:53586 with abort=False: This server only speaks WebSocket subprotocols wamp.2.cbor.batched, wamp.2.cbor, wamp.2.msgpack.batched, wamp.2.msgpack, wamp.2.ubjson.batched, wamp.2.ubjson, wamp.2.json.batched, wamp.2.json

I think this is an AutobahnJS version issue. Version: Crossbar.io COMMUNITY 17.11.1

As I had suspected, the installed AutobahnJS was outdated. Solution: reinstall Autobahn.

Remove your AutobahnJS: npm un autobahn -S

Install the latest AutobahnJS: npm i autobahn -S

$ npm i autobahn -S
+<EMAIL_ADDRESS>updated 1 package in 8.259s

And try to connect to Crossbar again:

2018-04-13T09:04:02-0300 [Controller 6943] __ __ __ __ __ __ __ __
2018-04-13T09:04:02-0300 [Controller 6943] / `|__)/ \/__`/__`|__) /\ |__) |/ \
2018-04-13T09:04:02-0300 [Controller 6943] \__,| \\__/.__/.__/|__)/~~\| \. |\__/
2018-04-13T09:04:02-0300 [Controller 6943]
2018-04-13T09:04:02-0300 [Controller 6943] Version: Crossbar.io COMMUNITY 17.11.1
2018-04-13T09:04:02-0300 [Controller 6943] Public Key: xxxxxxxxx
2018-04-13T09:04:02-0300 [Controller 6943] ...
2018-04-13T09:04:36-0300 [Router 6948] failing WebSocket opening handshake ('This server only speaks WebSocket subprotocols wamp.2.cbor.batched, wamp.2.cbor, wamp.2.msgpack.batched, wamp.2.msgpack, wamp.2.ubjson.batched, wamp.2.ubjson, wamp.2.json.batched, wamp.2.json')
2018-04-13T09:04:36-0300 [Router 6948] dropping connection to peer tcp4:<IP_ADDRESS>:53610 with abort=False: This server only speaks WebSocket subprotocols wamp.2.cbor.batched, wamp.2.cbor, wamp.2.msgpack.batched, wamp.2.msgpack, wamp.2.ubjson.batched, wamp.2.ubjson, wamp.2.json.batched, wamp.2.json
....
2018-04-13T09:06:04-0300 [Router 6948] session "6143323932507538" joined realm "realm1"
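For reference, a WAMP-CRA login with a current AutobahnJS typically looks like the sketch below. The URL, realm, authid, and secret are placeholders; the Connection options and the autobahn.auth_cra.sign helper are the AutobahnJS API as I recall it, so check them against the version you installed.

```typescript
// Hedged sketch of an AutobahnJS connection using WAMP-CRA. URL, realm,
// authid and secret are placeholders; confirm the option names against the
// AutobahnJS docs for your installed version.
const autobahn = require("autobahn");

const secret = "my-secret"; // shared secret configured on the Crossbar side

const connection = new autobahn.Connection({
  url: "ws://localhost:8080/ws",
  realm: "realm1",
  authmethods: ["wampcra"],
  authid: "joe",
  onchallenge: (session, method, extra) => {
    if (method === "wampcra") {
      // Sign the router's challenge with the shared secret
      return autobahn.auth_cra.sign(secret, extra.challenge);
    }
    throw new Error("unexpected authmethod: " + method);
  },
});

connection.onopen = (session, details) => {
  console.log("connected, authrole:", details.authrole);
};

connection.open();
```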
In addition to this, there are a lot of advantages to writing applications in this manner. However, there are some things you might lose: web crawlers may not be able to index the app properly, and initial loading performance can be slower. Server-Side Rendering (SSR) is used to bridge this gap.

What Is Server-Side Rendering (SSR)?
In server-side rendering, the server returns a static web page that has already been compiled with the dynamic data. This page is ready to be displayed in the browser and comes with the client-side scripts needed to make it dynamic afterwards. To fetch the dynamic data, the developer has to write server-side scripts in a server-side language. This is how web pages were rendered in the old days with technologies like PHP, Perl, and CGI, but it has recently gained traction again with technologies like Angular and Express. Server-side rendering is SEO-friendly and well suited to low-powered devices.

What Is Client-Side Rendering (CSR)?
In client-side rendering, the server returns a partial web page without the dynamic data, together with the client-side scripts that fetch the data on demand, asynchronously. Here the client alone is responsible for fetching data when loading a new page or when the user interacts with it, so there are many separate calls to the server. Client-side rendering is also not SEO-friendly, as the content is always produced dynamically.

Why Do We Need Server-Side Rendering?
As we know, Angular apps are client-side apps that execute in the browser: they are rendered on the client, not on the server. But with the help of Angular Universal, we can add server-side rendering to any Angular application. Why would you want to do that? There are two main reasons for creating a server-side version of the application:
- Performance: By rendering the Angular app on the server, you can improve performance, particularly on low-powered and mobile devices, since those browsers will not need extra time to render the content. This helps reduce the time to First Contentful Paint.
- SEO: Server rendering lets search engines easily crawl the web application, which helps with SEO.
A quick illustration: if you share URLs from a site without SSR on Facebook, the preview info-box shows the same content for two different URLs of that site, because the server returns the same static shell for every page. If you share URLs from a site with SSR, the info-box shows different content for different URLs, because the server returns content specific to each page. Therefore, the developer needs to make sure that every social network and search engine can recognize the content of the Angular web app; for that, they need to create universal apps or fall back to old-style server-side rendering.

Now, let us create a standard Angular app using some Angular best practices; by default it will be set up for client-side rendering. After that, we will use the new Angular schematic to configure the app for server-side rendering.

Create a Standard Angular App
Check whether you have the latest Angular CLI, which should be version 9 or greater. If your CLI version is older, upgrade it:
npm i -g @angular/cli
Create a new Angular app.
ng new angular-SSR
The Angular CLI creates a new project and installs the packages. After the installation finishes, run the app and look at the web page's content. The terminal shows that the development server is running. If you view the HTML source served by the app, you will not see static HTML for the page's content: most of it is loaded dynamically by client-side scripts. In other words, the app created by the CLI is set up for client-side rendering, and with the help of Angular Universal we can configure it for server-side rendering with ease.

Configure for Server-Side Rendering (SSR)
We will use the latest Angular Universal schematic from @nguniversal; it adds an Express server to the project:
ng add @nguniversal/express-engine
All needed files are created, and the following scripts are added to package.json:
"dev:ssr": "ng run angular-SSR:serve-ssr",
"serve:ssr": "node dist/angular-SSR/server/main.js",
"build:ssr": "ng build --prod && ng run angular-SSR:server:production",
"prerender": "ng run angular-SSR:prerender"
Run the app and check the web page content:
npm run dev:ssr
Now the server delivers the entire page as static HTML with all elements already rendered: pure server-side rendering. The client-side script is only needed for user interaction; it is no longer responsible for fetching the data needed to show the page. (Angular Universal also supports hybrid approaches, such as prerendering with universal templates.)

Because a Universal app runs on the server and not only in the browser, there are a few things to watch out for in the code:
- Watch for usage of browser-specific objects like document, window, or location: these are not available on the server. Prefer an injectable Angular abstraction such as the Location service or the DOCUMENT token. If you really need the browser globals, wrap their use in a conditional so they are only touched in the browser. You can do this by importing isPlatformServer and isPlatformBrowser from @angular/common, injecting the PLATFORM_ID token into the component, and calling those functions to check whether you are on the browser or on the server (see the sketch after this list).
- If you use ElementRef to get a reference to an HTML element, don't manipulate attributes on the element through nativeElement directly; inject Renderer2 and use one of its methods instead.
- Browser event handling doesn't work on the server: the app will not respond to click events or other browser events while it runs there. Links generated from a routerLink, however, will still navigate.
- Use setTimeout sparingly and only where necessary.
- Build absolute URLs for server requests: requests for data from relative URLs will fail when made from the server, even if the server could handle those relative paths.
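Here is a minimal sketch of the platform check mentioned in the first point of the list above. The component name and the localStorage usage are invented for illustration; PLATFORM_ID and isPlatformBrowser are standard Angular APIs from @angular/core and @angular/common.

```typescript
// Minimal sketch: only touch browser globals when actually running in the
// browser. Component name and the localStorage usage are illustrative only.
import { Component, Inject, OnInit, PLATFORM_ID } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';

@Component({
  selector: 'app-theme-banner',
  template: `<p>Theme: {{ theme }}</p>`,
})
export class ThemeBannerComponent implements OnInit {
  theme = 'light';

  constructor(@Inject(PLATFORM_ID) private platformId: Object) {}

  ngOnInit(): void {
    // On the server there is no window/localStorage, so keep the default value.
    if (isPlatformBrowser(this.platformId)) {
      this.theme = window.localStorage.getItem('theme') ?? 'light';
    }
  }
}
```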
(A quick solution for handling relative-path requests, and for allowing only GET requests on the server side, is shown below.)

Avoiding Duplicate HTTP Calls in Angular Universal
Working with Angular Universal brings a unique challenge: HTTP calls are made by both the server app and the client app, so they get duplicated. There are several ways to solve this, depending on your scenario. If your application uses Angular's HttpClient for its HTTP calls, the solution is straightforward: use Angular Universal's TransferHttpCacheModule in your app.module and ServerTransferStateModule in your app.server.module. Once the modules are imported, HTTP calls made with HttpClient will not be repeated when the Universal app boots up in the browser.

Handle relative-path requests, and allow only GET requests on the server side
You may run into problems when you use relative paths in requests made on the server. One solution is to give the app the full URL while it runs on the server: write an interceptor that retrieves the origin and rewrites the request URL. If you are using ngExpressEngine, as in the example in this guide, you are halfway there, because the Express Request object carrying the absolute URL is available for injection. We also add a condition for the GET method, because only GET requests are cached. Start by creating an HttpInterceptor, then provide the interceptor in the providers of the server AppModule. The interceptor then fires on every HTTP request made on the server and replaces the request URL with an absolute one; a sketch is shown after the tips below.

Additional Tips to Optimize server.ts
- Universal apps run on the server rather than in the browser, so watch out for browser-specific objects like document, window, or location in app code. If any of these are needed, we recommend domino for server-side DOM abstraction; Domino is a server-side DOM implementation based on Mozilla's dom.js. Install the domino npm package and configure it in server.ts.
- Gzip compression can greatly decrease the size of response bodies and speed up a web application. You can use the compression middleware in your Express application.
- Preventing your pages from being embedded in frames restricts attackers' access to your content. The DENY option is the most secure choice for preventing the current page from being used in a frame.
- Hackers can exploit known vulnerabilities in Express/Node more easily if they can see that your site is powered by Express. By default, Express sends an X-Powered-By: Express header with every response, and it is worth removing it.
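The interceptor described above might look roughly like the sketch below. It assumes ngExpressEngine is providing Express's Request object through the REQUEST token from @nguniversal/express-engine/tokens (check the import path against your @nguniversal version); the class name is invented for illustration.

```typescript
// Sketch of a server-side interceptor that turns relative GET URLs into
// absolute ones. Assumes ngExpressEngine provides the Express Request via the
// REQUEST token; class and provider wiring are illustrative.
import { Injectable, Inject, Optional } from '@angular/core';
import { HttpInterceptor, HttpRequest, HttpHandler, HttpEvent } from '@angular/common/http';
import { REQUEST } from '@nguniversal/express-engine/tokens';
import { Request } from 'express';
import { Observable } from 'rxjs';

@Injectable()
export class AbsoluteUrlInterceptor implements HttpInterceptor {
  constructor(@Optional() @Inject(REQUEST) private request: Request | null) {}

  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    // Only rewrite relative GET requests, and only when running under Express.
    if (this.request && req.method === 'GET' && req.url.startsWith('/')) {
      const origin = `${this.request.protocol}://${this.request.get('host')}`;
      req = req.clone({ url: origin + req.url });
    }
    return next.handle(req);
  }
}

// Registered only in the *server* module so the browser build is untouched:
// providers: [{ provide: HTTP_INTERCEPTORS, useClass: AbsoluteUrlInterceptor, multi: true }]
```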
Hiding that header does not provide security by itself, but it helps in small ways. The last tip: if you want to skip server-side rendering for specific URLs, you can send the index.html file directly as the response for them. As seen in this blog, Angular Universal is a pre-render builder that lets developers render the application on the server side; it pre-renders the application on the first hit on the website from the user. Angular Universal and server-side rendering benefit the accessibility, performance, and search engine optimization of the web pages. To sum it up, server-side rendering with Angular Universal can boost the app's performance and make it SEO-friendly.
Cannot complete a command line RIT-Agent installation on AIX

Attempts to install IBM Rational Integration Tester Agent (RIT-Agent) using imcl on AIX fail. You are using the IBM Installation Manager (IM) imcl program to install RIT-Agent without a GUI. You have installed IM and downloaded and extracted the necessary RIT-Agent package. When you proceed through the text menus past configuring the languages, you get stuck at the following point:

=====> IBM Installation Manager> Install> Licenses> Shared Directory> Location> Translations> Features
IBM Rational Integration Tester Agent
1. [ ] service.install.feature
B. Back, C. Cancel

You are unable to enter option 1. This is caused by a defect and will be corrected in a future release.

Resolving the problem
You can install RIT-Agent successfully by running the installation without any user input. You can also install a fix release in the same operation; for this you need to download both the main version and the fix version. In the following example, RIT-Agent version 8.6.0.3 is installed. Pass the necessary parameters to the imcl program as follows:
- Download and extract the packages, such as RIT-Agent 8.6.0 and 8.6.0.3.
- Navigate to the IM tools folder. Typically this is
- Use the IM imcl command to verify that you have extracted your installation packages correctly:
./imcl listAvailablePackages -repositories /tmp/RITA_SETUP86/disk1/diskTag.inf,/tmp/8603/RITA_SETUP/disk1/diskTag.inf
The packages and versions are listed. Note that the main installation and fix packages will look the same apart from timestamps.
- Decide on the parameters to use for the installation, such as the RTCP URL and the license option for com.ibm.rational.rita.offering. The license option is one of:
  - Rational Test Virtualization Server (PVU mode)
  - Rational Performance Test Server (PVU mode)
  - Rational Test Virtualization Server (Agent mode)
  - Rational Performance Test Server (Agent mode)
  - Probes only
- Perform the installation. Here is an example:
./imcl install com.ibm.rational.rita.offering_8.6.0.I20141217_1502 -repositories /tmp/RITA_SETUP86/disk1/diskTag.inf,/tmp/8603/RITA_SETUP/disk1/diskTag.inf -installationDirectory /opt/IBM/RIT-Agent -acceptLicense -showVerboseProgress -properties user.licenseOption,,com.ibm.rational.rita.offering=rtvs,user.RTCP_url=http://localhost:7819/RTCP
- Verify the installation.

See also Command-line arguments for the imcl command in the IBM Installation Manager Knowledge Center.
In this blog, we will discuss why Next.js is best suited for a WooCommerce multi-vendor marketplace. If you want to develop a WooCommerce marketplace with a headless frontend, the first question is: which technology should the frontend be built on? Next.js is the answer for headless frontend development for a marketplace.

Next.js is an open-source React framework used to build server-side rendered apps and generate static websites. It supports dynamic websites, which means it can be deployed on any platform that can run Node. Next.js is built on top of React and React DOM, with all their advantages, and shares core features with React such as pre-rendering, code splitting, routing, and webpack support. React lets developers build things the way they want and is backed by a strong community; Next.js is likewise widely used and has a large community of its own. Any React developer can pick up Next.js easily, which also makes it easy to find resources and developers.

A WooCommerce multi-vendor marketplace turns your website into a platform for multiple vendors. Each vendor can add products from a separate profile and has a dedicated seller dashboard for managing orders and products, editing their profile, and viewing transactions and order history. The admin can likewise manage products, sellers, and commissions.

Having an e-commerce business with an excellent marketplace website is essential for merchants nowadays, and keeping up with modern web and e-commerce trends matters: your website has to evolve to survive in a competitive digital market. It is therefore important to build the website in a cost-effective and user-friendly way.

The chief feature that puts Next.js in such demand is its server-side rendering. Every e-commerce site needs to become interactive quickly after fetching data from the server. Next.js provides server-side rendering out of the box: it builds the HTML page ahead of time and then hydrates it on the client. This is the key element that makes Next.js one of the most popular frameworks.

Next.js works well with a headless content management system for the marketplace content, which means no coupled front-end updates are required every time the backend changes, and website maintenance stays cost-effective. Headless commerce also makes it easy to integrate other web services, including PIM (Product Information Management) and accounting software.

Headless WooCommerce development works on a decoupled system in which the frontend is completely separated from the backend, and the two are connected through APIs. Development goes smoothly because the frontend does not need to change every time the backend does: a change in the backend will not impact the frontend.

Google offers Web Vitals as quality signals for delivering a good web user experience, and Core Web Vitals are the subset of Web Vitals that apply to all web pages.
Every site owner should pay attention to Core Web Vitals, because they audit the user experience directly. Web Vitals measure the user experience in terms of loading, interactivity, and visual stability, via LCP (Largest Contentful Paint), FID (First Input Delay), and CLS (Cumulative Layout Shift). The targets are an LCP of 2.5 seconds or less, an FID of 100 milliseconds or less, and a CLS of 0.1 or less. Next.js is well placed to deliver a good Google page experience against these Core Web Vitals.

Next.js also helps with one of the most prominent concerns in website development: search ranking. SEO plays a key role in promoting a website on search engines, and Next.js has several features that make it SEO-friendly. Next.js uses a file-system-based router built on the concept of pages: every file inside the pages directory automatically becomes a page. Before managing metadata, organize the project: create a styles.css file at the root of the project and add the global styles by overriding the app.js file (Next.js uses an App component to initialize all pages). Logic can be abstracted out of index.js into its own component: create a src folder at the project root, inside it a components folder, add a new file called Home.js, and import that component into the index page (index.js). Metadata can then be managed per page with the built-in Head component, which lets you add title and meta tags inside any page and change them depending on the current page. For navigation, add a Nav component and use the Link component exported by next/link, which enables client-side transitions between routes; keeping the Nav component in app.js makes it available across the entire application so users can move between pages. Next.js makes it easy to create isolated components that are reused across the whole project. A page component can carry default metadata and Open Graph metadata, which lets crawlers such as Google, Facebook, and Twitter read it. Google Lighthouse, which measures the quality of web pages and audits SEO, performance, and accessibility, can be used to verify the results. A minimal page pulling these pieces together is sketched after this section.

Of course, Next.js also gives a better experience to vendors and customers alike. Vendors get a modern site developed quickly, and thanks to the SEO-friendly setup the marketplace ranks better in search results; you can keep a separate vendor dashboard and customer panel. Customers, in turn, get an interactive, user-friendly website that is easy to use.

WooCommerce updates frequently whenever something new lands on the development side, and every store owner wants the marketplace to keep working through those updates. With a headless frontend there is no need to touch the frontend every time WooCommerce is updated, because front-end development is a one-time effort. The developer also does not need full-stack knowledge, and resources are easy to find and cost-effective thanks to the large community.
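As a rough sketch of how these pieces (file-based routing, per-page metadata, Link navigation, and pre-rendering) could come together for a headless WooCommerce storefront, the page below fetches products from the WooCommerce REST API at build time and sets its own metadata. The store URL, environment variable names, and product fields are assumptions made for illustration; /wp-json/wc/v3/products is WooCommerce's standard REST route, but authentication details depend on your store setup, and on older Next.js versions the Link child must be wrapped in an anchor element.

```tsx
// pages/index.tsx - illustrative sketch only. Store URL, env var names and
// product fields are assumptions; /wp-json/wc/v3/products is WooCommerce's
// standard REST endpoint (authentication simplified here).
import Head from 'next/head';
import Link from 'next/link';
import type { GetStaticProps } from 'next';

type Product = { id: number; name: string; price: string };

export const getStaticProps: GetStaticProps<{ products: Product[] }> = async () => {
  const res = await fetch(
    `${process.env.WOO_STORE_URL}/wp-json/wc/v3/products` +
      `?consumer_key=${process.env.WOO_KEY}&consumer_secret=${process.env.WOO_SECRET}`
  );
  const products: Product[] = await res.json();
  return { props: { products }, revalidate: 60 }; // re-generate at most once a minute
};

export default function Home({ products }: { products: Product[] }) {
  return (
    <>
      <Head>
        <title>Marketplace Home</title>
        <meta name="description" content="Multi-vendor marketplace storefront" />
        <meta property="og:title" content="Marketplace Home" />
      </Head>
      <nav>
        <Link href="/vendors">Vendors</Link>
      </nav>
      <ul>
        {products.map((p) => (
          <li key={p.id}>
            {p.name} - {p.price}
          </li>
        ))}
      </ul>
    </>
  );
}
```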
In the end, every business merchant wants an economical, budget-friendly marketplace website, and with Next.js merchants get a cost-effective solution. Next.js delivers better performance and tends to achieve higher rankings on search engines such as Google. Considering all of the above, Next.js is our suggested framework for a WooCommerce multi-vendor marketplace. That's all for why Next.js is best suited for a WooCommerce multi-vendor marketplace. If you still have any issues, feel free to add a ticket and let us know your views on how to make the module better; contact us at our Webkul Support System.
Features of the C programming language

C is a structured, general-purpose programming language developed in the early 1970s by Dennis Ritchie at Bell Laboratories. It is one of the oldest programming languages still in wide use, and it is still taught in colleges and universities around the world to introduce students to computer programming. The Stanford CS Education notes on the language summarize its basic features - variables, int types, floating point types, promotion, truncation, operators, control structures (if, while, for), functions and value parameters - and the coverage is quick enough that it works best as a review, or for someone with some programming background in another language.

C is a comparatively small language: ANSI C has only 32 keywords, and much of its strength lies in its built-in functions and operators, with many standard functions available for developing programs that solve complex problems. Modularity is one of its important characteristics. C also provides low-level features that are generally the territory of lower-level languages; it is closely related to assembly language, and it is easier to write assembly-level code in C than in most high-level languages. This is one reason C is widely used to build applications for operating systems. Embedded C is a generic term for C written against a particular hardware architecture: it extends the language with additional header files, and those header files may change from controller to controller. A number of C99 features, such as variable-length arrays, were introduced partly to make upgrading Fortran programs to C more attractive, since Fortran was still fairly widespread in the 1990s.

On the names: C is called C because it is based on the B programming language, and B in turn was meant to be a stripped-down version of BCPL. C++ carries the double '+' because one of its new additions was the ++ operator (shorthand for x = x + 1).

Because nearly every C-family language - C++, Java, C# - borrows its basic syntax from C, learning C first makes the others easier to pick up. C++ is a multi-paradigm, compiled, free-form, general-purpose, statically typed language that contains essentially all the features of C; it is often described as a middle-level language because it combines low-level and high-level features. Object-oriented programming can feel unnatural to a programmer with a lot of procedural experience: classic procedural languages such as C focus on the question "what should the program do next?" and structure a program by splitting it into tasks and subtasks, whereas OOP organizes a program around objects. Encapsulation - the wrapping of data and functions into a single unit called a class - is central to this style, and an encapsulated object is often called an abstract data type. C++ was first designed with a focus on systems programming, but its features also make it attractive for end-user applications, especially those with resource constraints or that require very high performance. For a long time there was no C++ standard at all; C++98 was slightly tweaked in 2003, and C++17, the most recent revision of the ISO/IEC 14882 standard at the time of writing, reached the Draft International Standard stage in March 2017 and was published in December 2017 after hundreds of proposals for new and removed features were considered.

Two related languages are often mentioned alongside C and C++. C# is described as the first "component-oriented" language in the C/C++ family; its big idea is that everything is an object, and it directly reflects the underlying Common Language Infrastructure. Java aims to be simple and secure, with a concise, cohesive set of features that makes it easy to learn and use, most of its concepts being drawn from C++.
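To make the C features described above concrete, here is a minimal sketch (not taken from any of the books or PDFs referenced here) showing value parameters, control structures, pointer arithmetic, and a call into the standard library:

```c
#include <stdio.h>

/* Sum an array through a pointer - C exposes this kind of low-level
   memory access directly, which higher-level languages usually hide. */
static int sum(const int *values, int count) {
    int total = 0;
    for (int i = 0; i < count; i++) {
        total += *(values + i);          /* pointer arithmetic */
    }
    return total;
}

int main(void) {
    int numbers[] = {1, 2, 3, 4, 5};
    int total = sum(numbers, 5);         /* count is passed by value */

    if (total > 10) {
        printf("total = %d\n", total);   /* printf comes from the standard library */
    }
    return 0;
}
```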
OPCFW_CODE
The Allegro Wiki is migrating to github at https://github.com/liballeg/allegro_wiki/wiki

Building Allegro 4.9/Installing 4.9 Windows

To build Allegro 5 and all its add-on libs, you'll need to install the following libs for your compilers. Open a console and make sure the MING_DIR and MSVCDir environment variables are set.

zlib and libpng (optional)

zlib is required if you want PNG and/or TTF support.

copy zlib1.dll \windows\system32
copy lib\zdll.exp %MSVCDir%\lib
copy lib\zdll.lib %MSVCDir%\lib
copy include\*.h %MSVCDir%\include
copy lib/zdll.lib %MING_DIR%\lib
ren %MING_DIR%\lib\zdll.lib libz.a
copy include/*.h %MING_DIR%\include

libpng (mingw & msvc)

Gnuwin32 have good precompiled binaries of libpng 1.2.
Download http://heanet.dl.sourceforge.net/sourceforge/gnuwin32/libpng-1.2.7-bin.zip - this contains bin/libpng.dll, which you copy to \windows\system32.
Download http://heanet.dl.sourceforge.net/sourceforge/gnuwin32/libpng-1.2.7-lib.zip, which contains both mingw and vc libs.

copy include\* %MSVCDir%\include
copy include\* %MING_DIR%\include
copy lib\lib* %MING_DIR%\lib
copy lib\pkgconfig\libpng.lib %MSVCDir%\lib

freetype

freetype provides support for the al_ttf & al_font add-ons. Requires zlib.
Gnuwin32 website: http://gnuwin32.sourceforge.net/packages/freetype.htm
Download http://heanet.dl.sourceforge.net/sourceforge/gnuwin32/freetype-2.3.5-1-setup.exe and run it. Is this supposed to install into the compiler automatically? Well, it didn't for me. So, in C:\Program Files\GnuWin32:

copy bin\freetype6.dll \WINDOWS\System32
copy -r include\* %MSVCDir%\include    (ft2build.h and the freetype dir)
copy lib\freetype.lib %MSVCDir%\lib
copy -r include\* %MING_DIR%\include   (ft2build.h and the freetype dir)
copy lib\libfreetype.dll.a %MING_DIR%\lib
copy lib\libfreetype6.def %MING_DIR%\lib

DirectX SDK (M$ DX SDK)

If you build Allegro without this, you will not get the d3d9 graphics driver. You cannot build the DLL without this, as the ABI specifies its presence.
Download page:
- http://www.microsoft.com/downloads/details.aspx?FamilyID=77960733-06e9-47ba-914a-844575031b81&DisplayLang=en - 150MB - needs WGA - 9.0c
- http://www.microsoft.com/downloads/details.aspx?familyid=519AAE99-B701-4CA1-8495-39DDDE9D7030&displaylang=en - enormous 450MB but no WGA and includes DX10

CMake

Web page: http://www.cmake.org/
Download page: http://www.cmake.org/cmake/resources/software.html

Building with Visual Studio

Open a console. Use either the Start-menu VS shortcut or manually run VCVAR32.BAT. In your allegro dir, run cmake's help (e.g. cmake --help) to see the list of generators, which should include your version of Visual Studio, and then (e.g.)

cmake -G "Visual Studio 8 2005" -DSTATIC=ON -DSHARED=OFF

Use -DSHARED=OFF if you do not have both OpenGL and D3D9 libs installed, because they are both needed to build the DLL. Now you should have 2 workspace/solution files in the root, e.g. ALLEGRO.sln and BUILDALL.sln. Open your dir in Explorer and open ALLEGRO.sln. This will start VS with all the Allegro libs and examples as projects. Build All. Enjoy the examples.

Now quit and install allegro+plugins for other projects:

copy lib\debug\*.lib %MSVCDir%\lib\
mkdir %MSVCDir%\include\allegro5
xcopy include\allegro5\* %MSVCDir%\include\allegro5 /E

Now, write a game.
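As a starting point for that game, here is a hedged minimal example written against the stable Allegro 5 API (function names may differ slightly in the 4.9 work-in-progress builds described above): it initialises the library, opens a window, clears it to black, and waits a couple of seconds.

```c
#include <allegro5/allegro.h>

int main(void) {
    if (!al_init())                          /* start the Allegro system */
        return 1;

    ALLEGRO_DISPLAY *display = al_create_display(640, 480);
    if (!display)
        return 1;

    al_clear_to_color(al_map_rgb(0, 0, 0));  /* paint the backbuffer black */
    al_flip_display();                       /* show it */
    al_rest(2.0);                            /* keep the window up for two seconds */

    al_destroy_display(display);
    return 0;
}
```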
OPCFW_CODE
Keeping up with Airflow releases Since 2019, Apache Airflow has been releasing over 10 times a year from its GitHub repository. When maintaining a stable working environment from development to production, keeping up with all these releases can be challenging and time-consuming. Based on our experience, it is essential to not delay updates in order to access the latest features and improve workflows, and avoid falling into deprecation traps when releases are far apart from each other. Finding a suitable balance for updating the Apache Airflow environment is crucial in staying up-to-date with the latest innovations, features, libraries, and security updates. I have been working with Apache Airflow since its incubation period within the Apache Organisation. The project has grown substantially since then, with considerable community support and contributions. As a consequence of the additional features and methodologies, the core of Airflow is quite different compared to its early days. The community’s efforts have resulted in a significant amount of code, both from new lines and a substantial amount of churn. This code needs to be released at a certain cadence, necessitating careful management. However, not everything always functions as intended, which makes keeping up with Airflow releases/tags challenging. In the past, some releases were only days apart due to severe bugs in the core that necessitated a quick fix release version. Nonetheless, the community collaborates effectively, testing, improving, and raising issues that are taken seriously by all members. The old experiences I still remember the early days of using Airflow when the documentation was not as comprehensive as it is today; back then, code was everything. On certain occasions, we had to use a specific commit hash for our releases to overcome issues with the release at that time. There were also instances when I had to patch a few lines of code to ensure our deployment worked in a production environment. All of this was because, during the period when code standardizing and best practices of test coverage were being applied to Apache Airflow in its incubation period, it took considerable time for some GitHub PRs in the Apache repository to be reviewed, approved, and merged into the codebase. We decided to patch the code, which was both a fun and technically challenging task. However, keeping track of future changes proved to be difficult while maintaining Airflow across multiple environments. Some of our engineering team members disagreed with this approach, but in the end, we had to keep moving to ensure an operational Airflow for processing data in the pipeline. All of these experiences led us to a final question: which version of Python should we use? On the one hand, there was Python 2.7, which some libraries depended on and had not been ported to Python 3. On the other hand, there were libraries that only worked on Python 3. Resolving this issue at the time was complex, but we managed to isolate the Python 2.7 workload under a shell script and within the beloved Python Virtual Environment. We decided to use Python 3 from that point forward as our target version, patiently waiting for Google and others to release some of their libraries in Python 3 — a process that took some time. Transitioning from Airflow version 1 to 2 was a substantial task. Certain features that functioned in one version did not work seamlessly in the other, causing a few hiccups along the way. 
We chose to skip the database schema upgrade and start afresh, disregarding the logs and any history, which we viewed as mere dead weight. The improved code structure greatly facilitated release management, and the constraints on the Python version proved to be a lifesaver. It made handling Python packages in Airflow much easier for us. The new core and its features made version 2 a welcomed addition to the data processing stack in its early stages. However, subsequent database upgrades and numerous deprecations within the version 2 releases presented some challenges. Every Airflow release inevitably brings bug fixes and new features. While the current release version may resolve an issue or bug, it can also potentially introduce a new bug or instability in certain areas of the product. During one of our upgrades, the ‘airflow tasks test’ command line wasn’t functioning at all. This issue was only addressed in the second or third minor release that followed. As a result, we had to tolerate a buggy feature for a while. Fortunately, it didn’t impact production, just development. Airflow as services Airflow as a service, provided by major cloud data centers and other companies, assists in managing Airflow versions, libraries, and packages. But (yep, there’s always a but) we often find ourselves needing something that isn’t present in their standard platform, be it a different version of a library or an additional feature. At Playground XYZ, we encountered an intriguing problem with a specific version of Google Composer, which did not function as expected with a required Google Airflow Operator. Fortunately, their product allowed us to switch the operator provider in question to another version. This adjustment proved effective, enabling us to use the version we needed for our workload and to utilize Airflow Task Mapping. We installed extra packages to extend the usage of Airflow for various workloads. This kind of flexibility proved to be advantageous in overcoming challenges related to library management. Apache Airflow releases occur approximately every five weeks and can include minor releases and patches. Each release comes with numerous changes, and updating Apache Airflow for each release can be time-consuming, almost amounting to a full-time job when dealing with side effects, deprecated libraries, and other issues. However, it’s crucial to find a rhythm that maintains a healthy and happy team that uses and develops the workflows. It’s not feasible to indefinitely lock a specific version of Apache Airflow in a data platform. The introduction of new features, stability improvements, bug fixes, and enhancements are highly beneficial for developers, as these updates can enhance their efficiency in developing workflows. They’re also advantageous for infrastructure and operations, as they provide better monitoring and process management. Updates and upgrades are also crucial for maintaining security. Outdated libraries can harbor significant issues, potentially leading to exploitable vulnerabilities. For security reasons alone, upgrading your Apache Airflow should be on the roadmap, just like any other system, to prevent potential problems. A good cadence for updating/upgrading your platforms will depend on a blend of factors including opportunities, development requirements, business workload, DevOps workload, roadmap priorities, team size, technical capabilities, and so on. It can be tricky to establish such a cadence, as it is often time-consuming and challenging. 
However, updating 2 to 4 times a year might be feasible for some, or at the very least, once or twice a year would be ideal to avoid a significant upheaval in platform changes, configuration, and feature deprecation. Like any product, Apache Airflow has undergone an extensive process to enhance development workflows, standardize the code, and establish an architecture that is flexible, stable, scalable, and configurable. Plenty of exceptional features have been developed for widespread use across a range of workloads and platforms. However, there have been numerous bug fixes and library version mismatches over time that required attention, and such issues will continue to occur as the product evolves. The most challenging aspect of dealing with Apache Airflow is managing Python library versions. Given that Airflow is intertwined with numerous libraries, dependency issues can crop up. However, Apache Airflow’s current design and architecture is truly a winner due to its flexibility, which allows for basic overwriting of original release setup version numbers to deal with such occurrences. This flexibility, when extended to Airflow as a Service, makes updating/upgrading the platform even easier. Managing Python libraries is already a headache, let alone the additional complexities of Kubernetes, databases, instances, and OS versions depending on where Apache Airflow is running. Its flexibility is so extensive that it operates across a wide range of platforms. I certainly don’t miss the old days of patching code or using specific commit hashes to deploy a platform that served as a core data processing layer.
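For readers who want the concrete mechanics behind the constraints mentioned above, a minimal sketch of a pinned install looks like the following; the version numbers are placeholders for illustration, not a recommendation:

```bash
# Pick the Airflow version and the Python version of your environment.
AIRFLOW_VERSION=2.6.3
PYTHON_VERSION=3.10

# The community-published constraints file pins every transitive dependency
# to a combination that was tested for this release.
pip install "apache-airflow==${AIRFLOW_VERSION}" \
  --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
```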
OPCFW_CODE
The "usual" advice if you can't directly write to DVD is to create a folder and use different software to write to the DVD (see below) BUT... you get no errors, yet you can't even create a folder... which is VERY odd A couple ideas... 1- post a screen shot of your output settings 2- are you able to export (share) ANY kind of output file... such as a new DV AVI file? Even though you do not receive any error messages, a "long shot" idea Long File Names or odd characters cause problems And This Message Thread http://forums.adobe.com/thread/665641 And... the "usual" advice Start --> http://forums.adobe.com/thread/608660 #2 has WHY Explained http://forums.adobe.com/thread/607390 Create an ISO (Encore) or folder on your hard drive (Encore or Premiere Elements) and then use the FREE http://www.imgburn.com/index.php?act=download to write files or folders or ISO to disc for DVD or BluRay (send the author a PayPal donation if you like his program) Thanks for e-mail, it is very much appreciated. Previously I was using Premier Element 7 and had no problem writing to Disc. With version 10, I can’t output to disc, folder or DV tape. The project setting is NTSC and DV-Avi, standard 48 Khz. The video is recorded from a Sony Mini DV Camcorder. The video is standard definition. There is no fancy effects, transitions or still pictures. When Share is selected and I click on Disc and select DVD, the DVD burner is recognized. The program goes through the routine of encoding the menus, and media which takes about 20 minutes for a 39 minute project(I have a fast computer) and then switches to burning disc which takes no time and instantly ejects the disc with the message “The project burned to disc successfully” but the disc is blank. Outputting to folder is the same, ie, after encoding the message states the folder is created but none exists. Outputting to Mini DV tape takes 39 minutes, the VCR is activated and recording is done but the tape is blank. I have read the links but none of them apply to my problem. As I mentioned before, I have completely removed the program and cleaned the Registry, reinstalled the program with no success. I can create and edit a project, create menus and titles but can not output to any media. There is no provision in Element to create an ISO file. Anybody has other suggestions? Thanks everybody. The problem was solved. The default project setting was avchd instead of DV-AVI.
OPCFW_CODE
It's Done! (or is it?) - 10-02-2008

I have a hobby. Sign Language. There is really no reason for me to have this hobby, but I have long been curious about using signs as a means of communication. There are a lot of challenges in learning a sign language, one of which is the speed at which native speakers can sign — I suppose that's not that different from learning a spoken language, actually. The finger alphabet is not the same thing as the sign language. Sign languages are a much more efficient way of communicating ideas than simply spelling out words. The alphabet is still important and is used mostly for names and places, but also for words that don't have a dedicated or well-known sign. Not finding a good tool for learning to finger spell, I wrote one. It was a double learning experience: I have gotten much better at finger spelling and I got to cut my teeth on AJAX. After showing it to colleagues in class, I was really motivated by their reactions to polish the program and put it on the web or even commercialize it. So it needed a better visual design. It needed to address a wider community, so it needed to be multilingual - not just the text on the web site, but it needed to support American Sign Language (ASL), French Sign Language (LSF), German and Swiss German Sign Languages (DGS and DSGS). Each has its own alphabet. The list of wonderful features got longer and longer, and it seemed like the program would never be finished. And then came jp's email. Eureka! Focus on getting it done! This 'done' means more than just getting the features done. Done means the product is out there producing a return on investment. And there were plenty of cool new features to implement. So getting to done meant shortening the list. So I thought – prioritize. What does it need to be releasable, what can be postponed to a later release? Here is the list I came up with:
- A user can choose between ASL and DSGS signing languages
- An ASL user can display words from a database of US names and places
- A DSGS user can display words from a database of Swiss names and places
- A user sees the pretty new pictures Hans Peter, Norbert and I created
- *A user understands the licensing conditions (Creative Commons)
- *A user who appreciates the service can make a paypal contribution
- *A user can enter a word to be fingerspelled from the keyboard
- *A Windows user can see the screen properly with Internet Explorer
The first three points have been working for a while; the rest, those with a '*', while not a lot of work, needed to be done, and with this feature set I would be willing to tell the world about it. Postponed to a later release:
- A user can use the site in German
- A user can use the DGS alphabet
- A DGS user can display words from a database of German names and places.
- A user can use the LSF alphabet
- An LSF user can display words from a database of French names and places.
- User contributions go to a new paypal account
So I was able to push a lot of work into the future and today, version 1.0 is done. Thanks to the joys of Internet Explorer, I am not sure if I got all rendering issues fixed (css patches would be most welcome), but it's working well enough that the time has come to set it free. So where's the ROI on a project like this? Well, mostly this is about the satisfaction of having a cool tool out there, learning some interesting things along the way, and hopefully helping a lot of people learn to sign.
I have no idea whether the donation model is a viable way of financing software, but shareware has a long established tradition, so we’ll see. But if it weren’t out there, I’d never find out. So here goes! Version 1.0 is officially live.
OPCFW_CODE
The SimplytheBest software directory offers you several main software categories, each with its subject related sub categories. No spyware, no adware, no popups, no nonsense. 3D graphics animation apps audio automation board games budget business plan construction design editor file compare financial financial plan flanger FLV functions games graphics guitar guitar lessons images karaoke macro recorder macros mastering merge movie multimedia music non profit office suite personal budget player programming recording resize shell sound sound editor spreadsheet strategy games studio utilities video VST plugin web designer word processor 3D Asian Bold Calligraphic Cartoon Classic Comic Computer Crazy Decorative Dingbats Drippy Entertainment Famous Fancy Foreign Funky Futuristic Gothic Graffiti Handwriting Heavy Hi-tech Holiday Hollow Horror Icon Industrial Kids Liquid Messy Modern Movie Musical Narrow Old fashioned Outdoor Retro Round Sans serif Sci-fi Script Serif Stencil Stylish Texture Thick TV show Typewriter Wavy Webfonts Western Wide Wild Wood Latest News Biology Biometrics Biotech Cloud Earth Electronics Energy Engineering Medical Mobile Nanotechnology Open Source Photo Physics Robotics Science Security Software Space Web development alarm alligator ambient animals answering machine applause baby bark bat bears beats bee beep bell birds blue martin boat boing brake bubbles bullet car cat cats chainsaw chime chimpanzee chord riffs clap classical clock computer cork cows coyote cricket crow crush crying cuckoo dance dog door drums duck elephant elk events explosion farm fire frog fusion gallop girl gorilla guitar gunshot harp hawk horn horse house howl hurricane hyena insects instrument jazz jungle killdeer kingfisher laughing loon loops machinery male monkey music NASA nature owl panther people phone pop rain rattlesnake ring ringtones robot rock sheep shuttle siren snake snakes solo riffs song sound effects space swan techno thunder tiger toddler touchtone train trance vireo voice weather willet wind wolf 3D accordion admin AngularJS animation Apache API audio gallery autocomplete autoheight background boilerplate Bootstrap browser cache calendar carousel CGI characters charts classes clone CMS CoffeeScript color picker controls cookies count countdown countto CRUD table CSS CSV customizer D3 dashboard database datagrid datepicker datetime disable display Django DOM drag drop easing edit editor effects email embed encryption Excel export extract records feeds file upload firewall Flash flipbook fonts form Foundation framework FreeBSD generator graphs grid Grunt hashchange header height highlight hovercard htaccess icons image gallery image load image rotation image zoom infinite scroll input JOIN Joomla jPlayer jQuery jQuery Mobile jQuery UI JS loader JSON keyboard events language layout lazy load Less library lightbox listbox load more maps menu menu tree Meteor mobile modal mouseover mousewheel music MVC MVVM MySQL newsletter Node notification operating system pagination parallax parser password strength PDF PHP player playlist plugin preload progress bar push menu Python rating repository resize responsive retina display review router RSS SASS scrollbars scroller scrollto search search engine security select sendform SEO share sitemap slide slide menu slider slideshow social network sparklines spinner spreadsheet sticky style switcher suggest SVG swipe switch tabs tags task manager templates testing textarea themes TOC toggle tooltip touch tour transitions tween Twitter typography UI 
unpack validation video viewport visibility visualizations web server window wizard Wolf WordPress WYSIWYG XML YouTube Zepto zoom
OPCFW_CODE
The user can choose one option from a list by using radio button code in C#, leaving all other radio buttons in the same group unchecked. With the help of this article, you can easily learn how to write radio button code in C#.

What is an example of a radio button?
Radio buttons are typically presented on screen in groups of two or more as a list of circular holes, each showing either a dot (selected) or white space (unselected). Every radio button typically has a label next to it that describes the option it represents.

Height, width, and size of radio buttons
The Location attribute accepts a Point as an argument, which determines the starting position of a RadioButton on a Form. You can also use the Left and Top attributes to describe the placement of the control from the Form's top-left corner. The Size attribute determines the control's size. Instead of the Size property, we may alternatively use the Width and Height properties. The code below configures the Location, Width, and Height attributes of a radio button control.
1. dynamicRadioButton.Location = new Point(20, 150);
2. dynamicRadioButton.Height = 40;
3. dynamicRadioButton.Width = 300;

Background, Foreground, and Border Style of a Radio Button
The BackColor and ForeColor attributes are used to specify the background and foreground colors of a RadioButton. The Color dialog appears when you click on these properties in the Properties window. You may also change the background and foreground colors at runtime. The code below sets the BackColor and ForeColor attributes.
1. dynamicRadioButton.BackColor = Color.Red;
2. dynamicRadioButton.ForeColor = Color.Blue;

The current text of a RadioButton control is represented by the Text property of the RadioButton. The Left, Center, or Right text alignments are represented by the TextAlign property. The next piece of code sets the Text and TextAlign attributes of a RadioButton control.
1. dynamicRadioButton.Text = "I am a Dynamic RadioButton";
2. dynamicRadioButton.TextAlign = ContentAlignment.MiddleCenter;

The text font of a RadioButton control is represented by its Font property. Font name, size, and other font settings are visible if you click the Font property in the Properties box. The Font property is set at run time in the following line of code.
1. dynamicRadioButton.Font = new Font("Georgia", 16);

Browse RadioButton Contents
Using the Text property is the simplest approach to reading a RadioButton control's contents. The contents of a RadioButton are read into a string in the following line of code.
1. string radioButtonContents = dynamicRadioButton.Text;

The appearance of the RadioButton
The RadioButton's Appearance property may be used to change the look of a RadioButton to that of a Button. There is no round choice option with the Button look. The following property transforms a radio button into a Button control.
1. dynamicRadioButton.Appearance = Appearance.Button;

The following code snippet uses an image as the backdrop of a radio button.
1. dynamicRadioButton.Image = Image.FromFile(@"C:\Images\Dock.jpg");
2. dynamicRadioButton.ImageAlign = ContentAlignment.MiddleRight;
3. dynamicRadioButton.FlatStyle = FlatStyle.Flat;

A common radio button control can have two states: checked and unchecked. When the radio button is checked, it shows a mark, and when it is not checked, it is empty. To check or uncheck a radio button, we often use a mouse. When a radio button is checked, the Checked attribute is true.
1. dynamicRadioButton.Checked = true;

The AutoCheck attribute indicates whether the Checked or CheckState values, as well as the look of the RadioButton, are altered automatically when the RadioButton is clicked. This attribute is set to true by default but can be adjusted to false.
1. dynamicRadioButton.AutoCheck = false;

Checked RadioButton Event Handler
When the Checked property's value changes, the CheckedChanged event is triggered. To add this event handler, navigate to the Events pane and double-click on the CheckedChanged event, as shown in Figure 6. The code snippet below declares the event handler and attaches it; you may use this code to wire up the CheckedChanged event dynamically.
1. dynamicRadioButton.CheckedChanged += new System.EventHandler(RadioButtonCheckedChanged);
2. private void RadioButtonCheckedChanged(object sender, EventArgs e)
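Pulling the fragments above together, here is a hedged, self-contained sketch of a form that creates a RadioButton at runtime and reacts to the CheckedChanged event. The control and handler names (dynamicRadioButton, RadioButtonCheckedChanged) follow the article; everything else is illustrative.

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

public class RadioButtonForm : Form
{
    private readonly RadioButton dynamicRadioButton = new RadioButton();

    public RadioButtonForm()
    {
        // Position and size (Location, Width, Height)
        dynamicRadioButton.Location = new Point(20, 150);
        dynamicRadioButton.Height = 40;
        dynamicRadioButton.Width = 300;

        // Colours, text and font
        dynamicRadioButton.BackColor = Color.Red;
        dynamicRadioButton.ForeColor = Color.Blue;
        dynamicRadioButton.Text = "I am a Dynamic RadioButton";
        dynamicRadioButton.TextAlign = ContentAlignment.MiddleCenter;
        dynamicRadioButton.Font = new Font("Georgia", 16);

        // State and event handling
        dynamicRadioButton.Checked = true;
        dynamicRadioButton.CheckedChanged += RadioButtonCheckedChanged;

        Controls.Add(dynamicRadioButton);
    }

    private void RadioButtonCheckedChanged(object sender, EventArgs e)
    {
        // Read the control's current text and state when the selection changes.
        string contents = dynamicRadioButton.Text;
        MessageBox.Show($"{contents} checked: {dynamicRadioButton.Checked}");
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new RadioButtonForm());
    }
}
```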
OPCFW_CODE
Undefined currentUser on refresh @Urigo , I've been noticing something quite odd and was hoping for some clarification or input on the matter. $rootScope.currentUser is often returned 'undefined' when refreshing the page and is very reproducible. I'm leveraging, angular-meteor, meteor-ionic, and meteor-angular-ui-router. The associate error message in the console: TypeError: Cannot read property 'then' of undefined at Object.ngIfWatchAction [as fn] (http://localhost:3000/packages/urigo_angular.js?bb1ce5cfe93e9dc2a14be7be84b4994234f5c846:23116:47) at Scope.$digest (http://localhost:3000/packages/urigo_angular.js?bb1ce5cfe93e9dc2a14be7be84b4994234f5c846:13984:29) at Scope.$apply (http://localhost:3000/packages/urigo_angular.js?bb1ce5cfe93e9dc2a14be7be84b4994234f5c846:14246:24) at http://localhost:3000/packages/urigo_angular.js?bb1ce5cfe93e9dc2a14be7be84b4994234f5c846:27149:43 at Tracker.Computation._compute (http://localhost:3000/packages/tracker.js?192a05cc46b867dadbe8bf90dd961f6f8fd1574f:288:36) at Tracker.Computation._recompute (http://localhost:3000/packages/tracker.js?192a05cc46b867dadbe8bf90dd961f6f8fd1574f:302:14) at Tracker.flush (http://localhost:3000/packages/tracker.js?192a05cc46b867dadbe8bf90dd961f6f8fd1574f:430:14) angular.js:11339 However, $rootScope.currentUser is always defined if I go back then forward. Just on refreshing it often reverts to undefined as if it just doesn't set it. This is incredibly frustrating. I don't think I am using the $subscribe functionality, but I do notice that on refresh on the browser currentUser sometimes becomes undefined. It may be a race condition somewhere between loading up and getting the current user. Actually one workaround for me is to not use Meteor.user() or currentUser but use Meteor.userId() to determine if the user is logged in, with that approach it never gets the "disappearing currentUser" issue on refresh no matter how many times I do it. It is likely faster too, since there's no lookup. If I need the data (usually for showing some data about the current user profile) I would still use the currentUser reactive value and even if it is not there immediately it should be fine.
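A hedged sketch of the workaround described above (all module, controller, and field names here are illustrative, not taken from the reporter's code): use Meteor.userId() for the logged-in check, and treat the reactive user document as optional profile data.

```javascript
// Hypothetical Angular 1.x controller sketch - names are illustrative.
angular.module('myApp').controller('ProfileCtrl', ['$scope', function ($scope) {
  // Meteor.userId() is restored synchronously from the login token,
  // so it is safe to use right after a page refresh.
  $scope.loggedIn = !!Meteor.userId();

  // Meteor.user() is reactive and may briefly be undefined while the
  // users subscription loads; only bind to it for displaying profile data.
  // (In a real app, stop this computation on $destroy.)
  Tracker.autorun(function () {
    var user = Meteor.user();
    $scope.$applyAsync(function () {
      $scope.profileName = user && user.profile ? user.profile.name : null;
    });
  });
}]);
```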
GITHUB_ARCHIVE
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using Microsoft.WindowsAzure.Storage; using Microsoft.WindowsAzure.Storage.Table; using Microsoft.Azure; using ConsoleApp1.Entites; namespace ConsoleApp1 { class Program { static void Main(string[] args) { var cloudStorage = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnection")); CloudTableClient tableClient = cloudStorage.CreateCloudTableClient(); CloudTable table = tableClient.GetTableReference("customers"); table.CreateIfNotExists(); /*createCustomer(table, new Customer("Cust1", "cust1@localhost.local")); createCustomer(table, new Customer("Cust2", "cust2@localhost.local")); createCustomer(table, new Customer("Cust3", "cust3@localhost.local")); createCustomer(table, new Customer("Delete", "delete@localhost.local")); getCustomer(table, "USA", "cust4@localhost.local"); //getAllCustomer(table); var update = returnCustomer(table,"USA","cust1@localhost.local"); update.Name = "Customer1"; updateCustomer(table, update) var delete = returnCustomer(table, "USA", "delete@localhost.local"); deleteCustomer(table, delete);*/ TableBatchOperation batch = new TableBatchOperation(); var cus10 = new Customer("Cust10", "cust10@localhost.local"); var cus11 = new Customer("Cust11", "cust11@localhost.local"); var cus12 = new Customer("Cust12", "cust12@localhost.local"); var cus13 = new Customer("Cust13", "cust13@localhost.local"); var cus14 = new Customer("Cust14", "cust14@localhost.local"); var cus15 = new Customer("Cust15", "cust15@localhost.local"); batch.Insert(cus10); batch.Insert(cus11); batch.Insert(cus12); batch.Insert(cus13); batch.Insert(cus14); batch.Insert(cus15); table.ExecuteBatch(batch); getAllCustomer(table); Console.ReadKey(); } /* * Inserting to storage table */ static void createCustomer(CloudTable table, Customer customer) { TableOperation insert = TableOperation.Insert(customer); table.Execute(insert); } /* * Data retrieval for a single record from storage Table */ static void getCustomer(CloudTable table, string partitionKey, string rowKey) { TableOperation retrive = TableOperation.Retrieve<Customer>(partitionKey, rowKey); var res = table.Execute(retrive); Console.WriteLine(((Customer)res.Result).Name); } /* * Retrive all records from storage table */ static void getAllCustomer(CloudTable table) { TableQuery<Customer> query = new TableQuery<Customer>(). Where(TableQuery.GenerateFilterCondition("PartitionKey",QueryComparisons.Equal,"USA")); foreach(Customer customer in table.ExecuteQuery(query)) { Console.WriteLine(customer.Name); } } /* * Return the customer */ static Customer returnCustomer(CloudTable table, string partitionKey, string rowKey) { TableOperation retrive = TableOperation.Retrieve<Customer>(partitionKey, rowKey); var res = table.Execute(retrive); return (Customer)res.Result; } /* * Update Customer * */ static void updateCustomer(CloudTable table,Customer customer) { TableOperation update = TableOperation.Replace(customer); table.Execute(update); } /* * Delete Customer * */ static void deleteCustomer(CloudTable table,Customer customer) { TableOperation delete = TableOperation.Delete(customer); table.Execute(delete); } } }
STACK_EDU
Investigate Instant Startup of the Engine This task is automatically imported from the old Task Issue Board and it was originally created by jaroslavtulach. Original issue is here. Why As Enso User I want the engine start and work fast **So that opening a project doesn't take five seconds or more ** Acceptance Criteria Scenario: Given newly created project When it is opened Then the engine shall work immediatelly without typical _java-like_ warm up. Notes: GraalVM offers AOT - native image - compilation which is the recommended way for Truffle languages to offer almost instant startup and better user experience. Tasks: [X] Investigate what it would take to compile engine with native image [X] Modify build.sbt to invoke native-image compilation for the engine [X] Investigate why engine+runner combo crashes the compilation [X] How to load libraries in NI mode? Espresso - yes, it can work: #183260380 [X] Make sbt engine-runner/buildNativeImage goal part of the CI to verify proper @TruffleBoundary annotations: #183136313 Blockers: #183260380 resolved #183136313 resolved #183374932 resolved #183802194 resolved #184256209 blocked Comments: @hubertplociniczak , if you can take a look at `build.sbt` and modify it to create a new task to invoke native image with proper engine (probably `runtime.jar` and `runner.jar`) class path, that'd be great. Compilation may fail, but it'll be easier for me to take over when we have the `sbt` task. (jaroslavtulach - Aug 5, 2022) I tried to use Hubert's `sbt` changes, but the NI [compilation was crashing](https://graalvm.slack.com/archives/CN9KSFB40/p1660912345522119?thread_ts=1660539227.687399&cid=CN9KSFB40). As such I had to create something simpler. Branch [SimpleLauncher](https://github.com/JaroslavTulach/enso/tree/jtulach/SimpleLauncher) shows the _instant startup benefits_ of using _Native Image_ for compilation & execution. Follow the [readme](https://graalvm.slack.com/archives/CN9KSFB40/p1660912345522119?thread_ts=1660539227.687399&cid=CN9KSFB40) and you will see that executing simple helloworld takes `40ms`: ```bash time ./target/simplelauncher fac.enso 3628800 real 0m0,038s user 0m0,021s sys 0m0,017s which is **hundred times faster** than traditional `4s` needed to boot the Enso engine in JVM mode. (jaroslavtulach - Aug 19, 2022) <hr /> With the [help and guidance of Chris Seaton](https://graalvm.slack.com/archives/CN9KSFB40/p1661057653619979?thread_ts=1660539227.687399&cid=CN9KSFB40) I managed to compile the launcher with `sbt engine-runner/buildNativeImage` when I gave the process enough memory (22GB was enough). E.g. the next milestone has been reached. Next goal: make the `sbt engine-runner/buildNativeImage` goal part of the CI run to verify all the necessary `@TruffleBoundary` annotations are in place. (jaroslavtulach - Aug 22, 2022) <hr /> Status summary as of Aug 23, 2022: https://docs.google.com/document/d/1yZmj4y-mOswTUHFzYkV7FM6Vh9P9LLXj-plAW3bVzbI + will be discussed at 15:00 CET (jaroslavtulach - Aug 23, 2022) <hr /> #4877
GITHUB_ARCHIVE
<?php namespace Aerys; class VhostContainer implements \Countable, Monitor { private $vhosts = []; private $cachedVhostCount = 0; private $defaultHost; private $httpDrivers = []; private $defaultHttpDriver; private $setupHttpDrivers = []; private $setupArgs; public function __construct(HttpDriver $driver) { $this->defaultHttpDriver = $driver; } /** * Add a virtual host to the collection * * @param \Aerys\Vhost $vhost * @return void */ public function use(Vhost $vhost) { $vhost = clone $vhost; // do not allow change of state after use() $this->preventCryptoSocketConflict($vhost); foreach ($vhost->getIds() as $id) { if (isset($this->vhosts[$id])) { throw new \LogicException( $vhost->getName() == "" ? "Cannot have two default hosts on the same `$id` interface" : "Cannot have two hosts with the same `$id` name" ); } $this->vhosts[$id] = $vhost; } $this->addHttpDriver($vhost); $this->cachedVhostCount++; } private function preventCryptoSocketConflict(Vhost $new) { foreach ($this->vhosts as $old) { // If both hosts are encrypted or both unencrypted there is no conflict if ($new->isEncrypted() == $old->isEncrypted()) { continue; } foreach ($old->getInterfaces() as list($address, $port)) { if (in_array($port, $new->getPorts($address))) { throw new \LogicException( sprintf( "Cannot register encrypted host `%s`; unencrypted " . "host `%s` registered on conflicting port `%s`", ($new->IsEncrypted() ? $new->getName() : $old->getName()) ?: "*", ($new->IsEncrypted() ? $old->getName() : $new->getName()) ?: "*", "$address:$port" ) ); } } } } private function addHttpDriver(Vhost $vhost) { $driver = $vhost->getHttpDriver() ?? $this->defaultHttpDriver; foreach ($vhost->getInterfaces() as list($address, $port)) { $generic = $this->httpDrivers[$port][\strlen(inet_pton($address)) === 4 ? "0.0.0.0" : "::"] ?? $driver; if (($this->httpDrivers[$port][$address] ?? $generic) !== $driver) { throw new \LogicException( "Cannot use two different HttpDriver instances on an equivalent address-port pair" ); } if ($address == "0.0.0.0" || $address == "::") { foreach ($this->httpDrivers[$port] ?? [] as $oldAddr => $oldDriver) { if ($oldDriver !== $driver && (\strlen(inet_pton($address)) === 4) == ($address == "0.0.0.0")) { throw new \LogicException( "Cannot use two different HttpDriver instances on an equivalent address-port pair" ); } } } $this->httpDrivers[$port][$address] = $driver; } $hash = spl_object_hash($driver); if ($this->setupArgs && $this->setupHttpDrivers[$hash] ?? false) { $driver->setup(...$this->setupArgs); $this->setupHttpDrivers[$hash] = true; } } public function setupHttpDrivers(...$args) { if ($this->setupHttpDrivers) { throw new \LogicException("Can setup http drivers only once"); } $this->setupArgs = $args; foreach ($this->httpDrivers as $drivers) { foreach ($drivers as $driver) { $hash = spl_object_hash($driver); if ($this->setupHttpDrivers[$hash] ?? false) { continue; } $this->setupHttpDrivers[$hash] = true; $driver->setup(...$args); } } } /** * Select the suited HttpDriver instance, filtered by address and port pair */ public function selectHttpDriver($address, $port) { return $this->httpDrivers[$port][$address] ?? $this->httpDrivers[$port][\strlen(inet_pton($address)) === 4 ? 
"0.0.0.0" : "::"]; } /** * Select a virtual host match for the specified request according to RFC 7230 criteria * * @param \Aerys\InternalRequest $ireq * @return Vhost|null Returns a Vhost object and boolean TRUE if a valid host selected, FALSE otherwise * @link http://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html#sec5.2 * @link http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.6.1.1 */ public function selectHost(InternalRequest $ireq) { if (isset($ireq->uriHost)) { return $this->selectHostByAuthority($ireq); } else { return null; } // If null is returned a stream must return 400 for HTTP/1.1 requests and use the default // host for HTTP/1.0 requests. } /** * Retrieve the group's default host * * @return \Aerys\Vhost */ public function getDefaultHost(): Vhost { if ($this->defaultHost) { return $this->defaultHost; } elseif ($this->cachedVhostCount) { return current($this->vhosts); } else { throw new \LogicException( "Cannot retrieve default host; no Vhost instances added to the group" ); } } private function selectHostByAuthority(InternalRequest $ireq) { $explicitHostId = "{$ireq->uriHost}:{$ireq->uriPort}"; $wildcardHost = "0.0.0.0:{$ireq->uriPort}"; $ipv6WildcardHost = "[::]:{$ireq->uriPort}"; if (isset($this->vhosts[$explicitHostId])) { $vhost = $this->vhosts[$explicitHostId]; } elseif (isset($this->vhosts[$wildcardHost])) { $vhost = $this->vhosts[$wildcardHost]; } elseif (isset($this->vhosts[$ipv6WildcardHost])) { $vhost = $this->vhosts[$ipv6WildcardHost]; } elseif ($this->cachedVhostCount !== 1) { return null; } else { $ipComparison = $ireq->uriHost; if (!@inet_pton($ipComparison)) { $ipComparison = substr($ipComparison, 1, -1); // IPv6 braces if (!@inet_pton($ipComparison)) { return null; } } if (!(($vhost = $this->getDefaultHost()) && in_array($ireq->uriPort, $vhost->getPorts($ipComparison)))) { return null; } } // IMPORTANT: Wildcard IP hosts without names that are running both encrypted and plaintext // apps on the same interface (via separate ports) must be checked for encryption to avoid // displaying unencrypted data as a result of carefully crafted Host headers. This is an // extreme edge case but it's potentially exploitable without this check. // DO NOT REMOVE THIS UNLESS YOU'RE SURE YOU KNOW WHAT YOU'RE DOING. if ($vhost->isEncrypted() != $ireq->client->isEncrypted) { return null; } return $vhost; } /** * Retrieve an array of unique socket addresses on which hosts should listen * * @return array Returns an array of unique host addresses in the form: tcp://ip:port */ public function getBindableAddresses(): array { return array_unique(array_merge(...array_values(array_map(function($vhost) { return $vhost->getBindableAddresses(); }, $this->vhosts)))); } /** * Retrieve stream encryption settings by bind address * * @return array */ public function getTlsBindingsByAddress(): array { $bindMap = []; $sniNameMap = []; foreach ($this->vhosts as $vhost) { if (!$vhost->isEncrypted()) { continue; } foreach ($vhost->getBindableAddresses() as $bindAddress) { $contextArr = $vhost->getTlsContextArr(); $bindMap[$bindAddress] = $contextArr; if ($vhost->hasName()) { $sniNameMap[$bindAddress][$vhost->getName()] = $contextArr["local_cert"]; } } } // If we have multiple different TLS certs on the same bind address we need to assign // the "SNI_server_name" key to enable the SNI extension. 
foreach (array_keys($bindMap) as $bindAddress) { if (isset($sniNameMap[$bindAddress]) && count($sniNameMap[$bindAddress]) > 1) { $bindMap[$bindAddress]["SNI_server_name"] = $sniNameMap[$bindAddress]; } } return $bindMap; } public function count() { return $this->cachedVhostCount; } public function __debugInfo() { return [ "vhosts" => $this->vhosts, "defaultHost" => $this->defaultHost, ]; } public function monitor(): array { return array_map(function ($vhost) { return $vhost->monitor(); }, $this->vhosts); } }
STACK_EDU
// // AppStrings.swift // doordeck-sdk-swift // // Copyright © 2019 Doordeck. All rights reserved. // import UIKit enum PrintChannel { case constraints case lock case sites case temp case error case debug case token case beacons case url case cells case pushNotifications case widget case watch case keychain case deeplinking case share case NFC case GPS case DoordeckSDK } fileprivate func debug () -> Bool { #if os(iOS) return UIApplication.debug() #else return true #endif } /// print function to replace apples built in ones, channels allow you to silence certain aspects of the print /// on anything but debug all the print is disabled. /// /// - Parameters: /// - channel: specify a print channel to all print only important output /// - object: the object you would like to print to the console func print(_ channel: PrintChannel, object: Any) { if debug() { var printOut: Bool = false var channelPre: String = "" switch channel { case .constraints: channelPre = "😩 Constraints" printOut = false case .error: channelPre = "❗❎❌😫😰😱😲😡❌❎❗Error" printOut = true case .debug: channelPre = "✅😍😈😎✅ Debug" printOut = true case .token: channelPre = "😋 Token" printOut = true case .url: channelPre = "😜 URL" printOut = false case .beacons: channelPre = "😜😈😍 Beacons found" printOut = false case .cells: channelPre = "😳 Cells" printOut = false case .pushNotifications: channelPre = "😎 PushNotifications" printOut = false case .lock: channelPre = "😎 Lock" printOut = false case .sites: channelPre = "✅✅ site" printOut = false case .widget: channelPre = "😩 widget" printOut = false case .watch: channelPre = "✅✅ watch ✅✅" printOut = false case .keychain: channelPre = "😱😱 Keychain 😱😱" printOut = false case .deeplinking: channelPre = "😱✅ Deeplink ✅😱" printOut = false case .share: channelPre = "😈😈😈 share 😈😈😈" printOut = false case .temp: channelPre = "😎😎😎 temp 😎😎😎" printOut = false case .NFC: channelPre = "😱😱😱 NFC 😱😱😱" printOut = false case .GPS: channelPre = "😱😈😱 GPS 😱😈😱" printOut = false case .DoordeckSDK: channelPre = "😍😈😍😈😍😈😍😈 DoordeckSDK 😍😈😍😈😍😈😍😈😍😈" printOut = true } if printOut { print("\(channelPre) \n \(object)") } } }
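A quick usage sketch for the channelled print function above (the messages are illustrative): because printOut is only true for the error, debug, token, and DoordeckSDK channels, only those calls produce console output in a debug build.

```swift
// Prints: the .debug channel has printOut = true.
print(.debug, object: "Lock list refreshed")

// Silent: the .constraints channel has printOut = false.
print(.constraints, object: "Layout pass finished")
```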
STACK_EDU
When you are doing a second round review, do you read other reviewers' comments? Why? It seems to me that all I need to do is seeing whether the author/s satisfied my comments and whether I find any more problems in the current version of their manuscript. I do read the other reports, and other reviewers should too. A few reasons: - If the editor has an author's response letter and two reviews, and the three documents say different things, then the editor will be very confused. If I see the other reviewer and the author disagree, then I try to be a tie-breaker to help the editor. - Authors are not in a place where they can convincingly say the other reviewer is wrong. If I see that the other reviewer has made a mistake, I can point it out much more effectively than the authors because I am a neutral party. Ideally, editors would be able to do this, but they have limited time. - By reading other reviewers' comments, I can learn to be a better reviewer. It is probably best to be as informed as possible at that stage. There is likely no need to repeat comments of others. It might also inform you of things you missed. However, it does reduce the independence of reviews. If that is important to you, the journal, or the field, you should probably avoid it and deal exclusively with the paper itself. So, the answer is, it depends. But if you have been sent those reviews by the editor, they may expect you to read them. Tangential, but I'm guessing that you'll be interested: it is pretty rare for reviewers to explicitly refer to another reviewer's report in their comments. "The authors have not addressed my comments #1, #5 and #9" is very common; "the authors have not addressed the comments #2 and #4 by the other reviewer" is not. That said, this does not mean the reviewer is not looking at another reviewer's report - there are reasons to do so, e.g. if the paper changes in a way the first reviewer did not like but was requested by a second reviewer, or they might just be curious what others thought about the paper. However, if the reviewers are doing this, they don't indicate it explicitly. Different people have different ways of looking at things. In my experience, the issues that different reviewers raise are rarely overlapping completely, and sometimes even not at all. I think this is one of the key reasons why there are multiple reviewers in the first place. Looking at the comments of the other reviewers will give you the opportunity to look at the paper through someone elses eyes, someone with a different education, different experiences and different methodology. By observing and analyzing what shortcomings others have found that yourself have "overlooked" or better, not perceived as such, you will learn a more holistic approach to reviewing and a better idea of what others - and not only yourself - need to find a paper well structured, interestig and understandable. I always found it very beneficial to look at the other's commments for that reason, and I strongly believe doing so will make you both a better reviewer and will also enhance your own writing and research skills.
OPCFW_CODE
So far we have only worked with predefined content types, but Scratchpads also let you define custom content types. This allows you to create content in which the data are entered and saved in specific fields.

Adding content type
- From the Admin menu go to Structure > Content types
- Click the + Add content type link at the top
- Enter a NAME ("Literature mining") and DESCRIPTION
- Under the Submission form settings tab change the TITLE FIELD LABEL to Taxonomic name as cited
- Under the Display settings tab choose the View display and select Display on species pages
- Choose on which tab of the species pages the new content type should be visible. Select Own tab so that the new content type appears on a tab of its own
- Click the Save and add fields button. This will lead to the Manage fields tab

By default every new content type has a title and a body field. In this example the body field is not needed, so click on delete for this field.

Term reference fields
First we want to add a field that links to the biological classification, so that we can tag our new literature mining content to one or more taxonomic names. Like with other content types, the Taxonomic name field should be an autocomplete field. A field like this is already present in several content types, so we don't need to create a new one; we can use the existing one. Because it links to taxonomy terms, a field like this is called a Term reference.
- Go to Add existing field. As Label enter "Taxonomic name"
- In the Field to share drop down menu select Term reference: field_taxonomic_name (Taxonomic name) and in the Form element to edit the data drop down menu select Autocomplete term widget (tagging)
- Click Save
NOTE: These existing fields are locked, so you will not be able to edit their settings (make them required, for example).

Node reference fields
Next we want to add a field that links to the biblio content type (References), so that we can select a biblio node. This field should be a dropdown menu. Again, a field like this is already present in other content types, so we can use the existing one. Because it links to a node in a different content type, a field like this is called a Node reference.
- Go to Add existing field. As Label enter "Reference", in the Field to share drop down menu select Node reference: field_reference (Reference) and in the Form element to edit the data drop down menu select Select list
- Click Save

Next we want to add a field for the page number on which the taxon is cited in the reference. A page field doesn't exist yet, so we need to create a new field. Since pages are numbers, we could use Integer as the data type. However, in some cases we might want to add a range of pages and this would not be possible with "Integer", so instead it is better to use the "Text" data type, which is for text that is up to 255 characters long.
- Go to Add new field. As Label enter "Page", in the Type of data to store drop down menu select Text and in the Form element to edit the data drop down menu select Text field
- Click Save
- Under Field settings enter "20" as MAXIMUM LENGTH. This should give plenty of space for adding the page number.

There are various options for adding keywords to our new content type. We can just link to the existing keywords on the site that are, for example, used for images. To do this we would add the existing "Term reference: field_keywords (Keywords)" field.
We could also create a new non-biological vocabulary for our literature mining keywords and create a new term reference field linking to this vocabulary. With the right settings, new keywords can be added to this vocabulary by adding them to the literature mining node. Another option would be to create a list of literature mining categories to choose from and then enter additional information or keywords into a text field. We will do the latter now:
- Go to Add new field. As Label enter "Literature mining category", in the Type of data to store drop down menu select List (text) and in the Form element to edit the data drop down menu select Select list
- In the ALLOWED VALUES LIST enter a few categories, e.g. distribution, original name, type information, one line each
- Click Save, and Save on the next page also

Long text fields
For adding extracts of the cited paper, we need a text field that can hold more information than just 255 characters. So we will use a 'Long text' field.
- Go to 'Add new field'. As 'Label' enter "Text", in the 'Type of data to store' drop down menu select "Long text" and in the 'Form element to edit the data' drop down menu select "Text area". Save and save again.
- As HELP TEXT enter "Enter keywords or text extracts from the mined paper" and under TEXT PROCESSING select "Filtered text", so that it is possible to use italics and other formatting. Save settings.

Field groups
To facilitate the entering and viewing of fields, they can be sorted into groups. Groups can be shown as boxes around the fields (Fieldset) or, for example, as horizontal tabs in the view and edit mode. For the few fields we have in this content type groups are not really necessary, but we will add two anyway to demonstrate horizontal tabs. Each horizontal tab is a group, and all tabs together also form a group (the horizontal tabs group). First we need to create a horizontal tab group, to which we then add each horizontal tab.
- Go to Add new group. As Label enter "Horizontal tabs" and as Group name enter "horizontal_tabs". In the drop down menu select Horizontal tabs group
- Go to Add new group. As Label enter "Reference data" and as Group name enter "reference_data". In the drop down menu select Horizontal tab
- Drag and drop the Reference and the Page fields into the Reference data group
- Create another horizontal tab called "Text mining" and move the Literature mining category and Text fields into it
- Drag the Reference data and the Text mining groups into the Horizontal tabs group

Create a new node
Check out how your new content type looks by adding a literature mining node. From the Admin menu go to Content > Literature mining > Add. If you do this in a separate browser tab you can play around with changing the settings and seeing how this affects the view. The way the node is displayed can be changed under the Manage display tab. Two things can be changed: the position and presence of the label, and the format of the field. The field format largely depends on the field type. Change all the labels to "Inline" to save space and have a look at the format options for different field types, but don't change any. Once a new content type has been saved, a menu item is added to the Main menu. By default this page just lists the titles of literature mining nodes with a link to the respective node. To improve this page and change it into, for example, a matrix, you need to edit the view. See Adding and editing views for more info.
OPCFW_CODE
These are two important header files used in C programming. While "<stdio.h>" is the header file for Standard Input Output, "<stdlib.h>" is the header file for the Standard Library. One easy way to differentiate these two header files is that "<stdio.h>" contains the declarations of printf() and scanf(), while "<stdlib.h>" contains the declarations of malloc() and free(). In that sense, the main difference between these two header files can be considered to be that, while "<stdio.h>" contains header information for file-related input/output functions, "<stdlib.h>" contains header information for memory allocation/freeing functions. Wait a minute, you said "<stdio.h>" is for file-related IO, but printf() and scanf() don't deal with files… or do they? As a basic principle, in C (due to its association with UNIX history), the keyboard and display are also treated as 'files'! In fact, keyboard input is the default stdin file stream while display output is the default stdout file stream. Also, please note that although "<stdlib.h>" contains declarations of other functions as well that aren't related to memory, such as atoi(), exit(), rand() etc., for our purpose and simplicity we can remember malloc() and free() for "<stdlib.h>". It should be noted that a header file can contain not only function declarations but definitions of constants and variables as well. Even macros and definitions of new data types can also be added in a header file.
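A minimal sketch tying the two headers together - printf/scanf declared in <stdio.h>, malloc/free declared in <stdlib.h>:

```c
#include <stdio.h>    /* printf, scanf  - standard input/output */
#include <stdlib.h>   /* malloc, free   - memory allocation     */

int main(void) {
    int count = 0;
    printf("How many integers? ");
    if (scanf("%d", &count) != 1 || count <= 0) {
        return 1;
    }

    int *values = malloc(count * sizeof *values);    /* from <stdlib.h> */
    if (values == NULL) {
        return 1;
    }

    for (int i = 0; i < count; i++) {
        values[i] = i * i;
    }
    printf("last value = %d\n", values[count - 1]);  /* from <stdio.h> */

    free(values);                                    /* give the memory back */
    return 0;
}
```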
This article explains how the OAuth 2.0 authorization framework authenticates a user on a third-party HTTP website, and how this kind of social identity provider based authentication makes use of what is called authorization code grant flow. For some websites, when logging in–whether to receive a service or make a purchase–you may have noticed a new type of login that prompts the user with options to login with Google and other third-party social identity provider services. An example of such a login is the Coursera.org login shown in Figure 1. Figure 1. Login with third-party service providers Firstly, why is the user being prompted to Continue with the login using social identity providers? Simply, to avoid having a user create an account, or register with the Coursera.org website with Sign up, which is still an option. Modern applications make use of the smooth login authentication using social identity providers. Secondly, how is a third-party social identity provider based login set-up? Third-party login is an authentication layer on top of an authorization layer. The identity authentication is performed by an authorization server on the chosen social identity provider using OAuth 2.0 protocol. OAuth 2.0 is designed for use with HTTP only. What Is OAuth 2.0? The OAuth 2.0 is an authorization framework that enables a third-party application to obtain a limited access to an HTTP service. OAuth 2.0 defines some terms, and roles that are used. A protected resource is a resource such as user information. A resource server is the server on which a protected resource is stored. A resource owner is the one who owns a protected resource and is able to grant access to it. A client is one who wants to access a protected resource. An authorization server is a server that obtains authorization on behalf of the client. It returns access tokens to the client with which a client may access a protected resource. The access tokens are limited in scope such as they may be valid for a limited time and for a limited subset of resources. The authorization server could be the same as the authentication server. Login Authentication Example For a login, a client website, using Coursera.org as an example, is interested in authenticating a user using one of the user’s social identity provider accounts, (if the user has one) and obtaining user information such as name and email from the social identity provider account. For Coursera to be able to authenticate its users using a Google, or another identity provider account, it must have registered as a client with the identity providers. In the example login illustrated in Figure 1, if the user selects Continue with Google, the request is sent to Coursera’s backend server, and the server sends a redirect URL directed at Google’s authorization server to the user’s browser, which acts as a user agent. The redirect URL includes oauth2/v2/auth, which implies that OAuth 2.0 is used for authentication. The URL also includes as request parameters the Coursera’s client id registered with Google, and a URL directed at Coursera’s backend server that the Google’s authorization server can use to construct a redirect URL to send back to the user’s browser. The user is prompted to provide a Google account email as shown in Figure 2. The user then provides an account email and clicks on Next. Figure 2. Sign in to Google to continue to Coursera Continuing with the example login, the user provides the password for its Google account and clicks on Next as shown in Figure 3. 
Figure 3. User provides password to authenticate with Google The authentication request is sent to Google’s authorization server. The server authenticates the user, and if the email & password are valid, the server constructs a redirect URL directed at Coursera’s backend server and includes as a request parameter an authorization code that Coursera can use to access Google’s authorization server. The redirect URL is sent to the user’s browser. A message at the bottom of the redirect URL window indicates that Google will share the name, email, and language preference with Coursera.org as shown in Figure 4. Figure 4. Sign in message The authorization code exchange happens transparently to the user using the redirect URL. Once the authorization code has been sent to Coursera, the user’s input is not needed any more. When Google, or another social identity provider is used for proxying a login, the user’s credentials registered with the social identity providers such as Google account password are not shared with, or made available to, the client website; Coursera.org in the example. Coursera, as Google’s client, sends a request to Google’s authorization server and includes the authorization code in the request. Google’s authorization server authorizes Coursera, its client, and returns an access token to Coursera; an access token that Coursera can use to access user’s information, the protected resource in the example. Coursera uses the access token to connect with the resource server and access the user’s information. The exchanges between Coursera’s server and Google’s servers happen transparently to the user. If all authorizations get validated, the user gets logged in to Coursera’s website as shown in Figure 5. Figure 5. User logged in to Coursera Using the Cousera.org login as an example, the sequence used by authentication is as follows: - The user clicks on the Log in link on Coursera.org website. - The user is directed to a Login page or window. - The user chooses a social identity provider, as an example Google. The user clicks on Continue with Google, which is shown in Figure 1. - The user is directed to the Sign in window shown in Figure 2. - The user provides a Google account email and clicks on Next. - The user is prompted to provide the Google account password, which the user does and clicks on Next as shown in Figure 3. If the user is already logged in to the Google account that is to be used for authenticating the user, the account email is displayed to the user, and the user only needs to select the account. - If the user authenticates with Google’s authorization server, the server returns a redirect URL to the user’s browser directed at Coursera’s server. The redirect URL includes an authorization code. User has no more role to play in the authentication process. - The client (Coursera.org) makes an authorization request to Google’s authorization server and includes the authorization code in the request. - Google’s authorization server authenticates the authorization code and returns an access token to Coursera. - Coursera connects with Google’s resource server using the access token to get user’s information. - The resource server validates the access token, and returns the user information to the client, Coursera. - The user gets logged in to Coursera. In this article, we discussed using the OAuth 2.0 authorization framework for authenticating a user on a third-party HTTP website. 
Such social identity provider based authentication makes use of what is called the authorization code grant flow.
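To make the flow above concrete, the following hedged TypeScript sketch shows the two pieces a client site's backend typically implements: building the redirect URL that sends the browser to the authorization server, and exchanging the returned authorization code for an access token. The endpoint URLs and scopes are placeholders, not Google's or Coursera's actual values; the parameter names follow the OAuth 2.0 specification (RFC 6749), and fetch is assumed to be available (Node 18+ or a browser).

interface TokenResponse {
  access_token: string;
  token_type: string;
  expires_in: number;
}

// Step 1: build the redirect URL that sends the user's browser to the
// identity provider's authorization server.
function buildAuthorizationUrl(clientId: string, redirectUri: string, state: string): string {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    response_type: "code",        // asks for an authorization code
    scope: "openid email profile",
    state,                        // protects against CSRF
  });
  return `https://accounts.example-provider.com/oauth2/v2/auth?${params}`;
}

// Step 2: after the provider redirects back with ?code=..., the client's
// backend exchanges the code for an access token (server-to-server).
async function exchangeCodeForToken(
  code: string,
  clientId: string,
  clientSecret: string,
  redirectUri: string
): Promise<TokenResponse> {
  const body = new URLSearchParams({
    grant_type: "authorization_code",
    code,
    client_id: clientId,
    client_secret: clientSecret,
    redirect_uri: redirectUri,
  });
  const res = await fetch("https://accounts.example-provider.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });
  if (!res.ok) throw new Error(`Token exchange failed: ${res.status}`);
  return (await res.json()) as TokenResponse;
}

With the access token returned by exchangeCodeForToken, the backend can then call the resource server for the user's name and email, as described above; the user's provider password never reaches the client site.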
PANDAI, A MALAY ADJECTIVE, WHEN TRANSLATED TO ENGLISH MEANS: ADJECTIVE 1. SHOWING SKILL AND QUICK AT LEARNING AND UNDERSTANDING THINGS.

Headquarters Region: Asia-Pacific (APAC), Association of Southeast Asian Nations (ASEAN), Southeast Asia
Founded Date: Jul 2016
Operating Status: Active
Number of Employees: 1-10

The Art: Conversational UX

Conversational UX refers to the user experience that one has when interacting with an automated agent over chat interfaces, i.e. chatbots. Like a good human communicator, good conversational UX is a combination of high-quality substance (professional expertise), the choice of words, empathy with the audience, some doses of humor, and extensive A/B testing, amongst others. A good conversational UX design elicits a Call-to-Action (CTA) that meets your business objectives.

The Science: Deep NLP

Deep NLP is the use of deep-learning algorithms for natural language processing, i.e. it attempts to learn multiple levels of representation, in this case word representations, of increasing complexity or abstraction; and unlike traditional machine learning, it can do so without human intervention. Deep NLP works best when we can perform uniform parallel operations on dense vectors, which is at the core of what we do here. Pand.ai is a member of the NVIDIA Inception Program, which supports deep learning AI startups.

Natural Language Understanding

The core engine behind the AI is deep learning for natural language processing (Deep NLP). Unlike the traditional keyword-based approach, our engine is able to extract the semantics of the entire sentence through Pand.ai’s proprietary model. This technology allows grammatically imperfect messaging, including much of the internet lingo and slang as well as the hybrid or mixed languages that are prevalent in many parts of the world.

The embedded in-state memory retention technology enables the bot to remember what your customer said previously without needing them to always repeat themselves. This makes conversation with the chatbot more intuitive and natural.

Chatbots typically have difficulty understanding complex requests that consist of multiple parts. Instead of doing multiple sequencing or rule-based filtering, the Pand.ai chatbot is able to instantly parse, and thus comprehend, on a more sophisticated level. This multi-dimensional capacity allows anyone to initiate complex queries within a sentence.

We pick up clues from all the interactions that your customers have with the chatbot, and automatically calculate the most appropriate content/message to serve each of them, individually.

Real-Time Analytics through Data Visualisation

Real-time data visualisations are richly informative. Especially important and useful in a fast-paced market or environment, they help to put the magnitude of data into perspective.
Customizing the delimiter line for DelimInstancesQueryHandler not working IGUANA_VERSION="3.3.3" Ubuntu 22.04.3 LTS openjdk 21.0.1 2023-10-17 http://iguana-benchmark.eu/docs/3.3/usage/queries/#multiple-line-plain-text-queries Setting the delimiter line as described in the documentation seems not to be working. I get the following WARNs in the log output: ... 2024-02-07 15:12:46,755 [main] INFO [org.aksw.iguana.cc.config.IguanaConfig] - <Executing Task [1/null: lindas-cc-hidden-data, public, Stresstest]> 2024-02-07 15:12:46,866 [main] WARN [org.aksw.iguana.cc.lang.impl.SPARQLLanguageProcessor] - <Query statistics could not be created. Not using SPARQL?> 2024-02-07 15:12:46,876 [main] INFO [org.aksw.iguana.rp.experiment.ExperimentManager] - <Got start flag for experiment task ID<PHONE_NUMBER>/1/1> ... 024-02-07 15:12:51,885 [pool-4-thread-1] INFO [org.aksw.iguana.cc.worker.AbstractWorker] - <Worker executed 42.0 queryMixes> 2024-02-07 15:12:51,907 [pool-4-thread-1] WARN [org.aksw.iguana.cc.worker.AbstractWorker] - <Worker[SPARQLWorker : 0]: Socket closed on query (ID sparql0) SELECT * {?s ?p ?o} LIMIT 10 ### SELECT * { ?s ?p ?o } LIMIT 1 > 2024-02-07 15:12:51,908 [pool-4-thread-1] INFO [org.aksw.iguana.cc.worker.AbstractWorker] - <Stopping Worker[{SPARQLWorker} : {0}].> Config: ... # The benchmark task tasks: - className: "Stresstest" configuration: # 1 minute (time Limit is in ms) timeLimit: 5000 # we are using plain text queries queryHandler: className: "DelimInstancesQueryHandler" delim: "###" ... Only if I use an empty line (the default), then the DelimInstancesQueryHandler seems to work fine. I don't get warnings that way. Hi, I think the configuration should look like this: tasks: - className: "Stresstest" configuration: timeLimit: 5000 queryHandler: className: "DelimInstancesQueryHandler" # configuration is missing configuration: delim: "###" The delim attribute need to be inside the configuration object. Sorry for the late response, I didn't find the cause of the problem at first either and we're currently rewriting the whole program.
The SCons wiki has moved to https://github.com/SCons/scons/wiki/APLSConscript01-gsc

Line 3: Imports the env from the parent script.
Line 5&7: Sets up the C compiler to look in both the current directory and the include directory underneath the main directory in which SCons was started. Note the "#" in the "#include". This is important. That include directory contains a "config.h" file that almost all the C code references and it is not copied over to the build/etc/etc hierarchy.
Line 9: Gambit Scheme input files (sources).
Line 12: Extra input source files. This is here because these files have extra dependencies.
Line 16&17: Converts the Gambit Scheme input files into C output files (targets) via the GambitCompiler Builder defined in the top-level SConstruct, i.e. _gsi.scm becomes _gsi.c. Note that I am saving the targets here for later reference.
Line 23,24,25,26,27: Lots of dependencies. Make the targets dependent upon libgambc. As the comment says, only the gsc executable is, strictly speaking, dependent upon libgambc. However, the easiest way to pick up the dependencies from lots of the source files to the lib files is to make the targets depend on the whole library. Yes, it's a bit of a hack which forces libgambc to build first.
Line 29: Gambit requires some extra link information to produce a stand-alone executable. This uses the GambitLinker Builder that was defined in the top-level SConstruct to produce that file. It demonstrates multiple sources compiling into one target.
Line 31: Invoke the normal SCons Program Builder to produce an executable and remember it.
Line 33: Not used yet, but I send the reference to the executable back up to the SConstruct for later use, if required.

#!python
# -*-python-*-
Import(["env", "libgambc"])

cpppath = [".", "#include", "#lib"]

env.Replace(CPPPATH=cpppath)

gambitSourceFiles = ["_back.scm", "_env.scm", "_front.scm", "_gvm.scm",
                     "_host.scm", "_parms.scm", "_prims.scm", "_ptree1.scm",
                     "_ptree2.scm", "_source.scm", "_t-c-1.scm", "_t-c-2.scm", "_utils.scm"]
gambitGenericDependentSourceFiles = ["_t-c-3.scm", "_gsc.scm"]

# This "GenericDependent" contortion is because source files can't have dependencies

gambitTargetFiles = env.GambitCompiler(gambitSourceFiles)
gambitGenericDependentTargetFiles = env.GambitCompiler(gambitGenericDependentSourceFiles)

# The target files are not strictly dependent upon the entire
# libgambc but on some of the files generated during its build

env.Depends(gambitTargetFiles, libgambc)
env.Depends(gambitGenericDependentTargetFiles, libgambc)
env.Depends(gambitTargetFiles, ["fixnum.scm", "_envadt.scm", "_gvmadt.scm", "_ptreeadt.scm", "_sourceadt.scm"])
env.Depends(gambitGenericDependentTargetFiles, ["fixnum.scm", "_envadt.scm", "_gvmadt.scm", "_ptreeadt.scm", "_sourceadt.scm"])
env.Depends(gambitGenericDependentTargetFiles, ["generic.scm"])

gscLinkerFile = env.GambitLinker([gambitTargetFiles, gambitGenericDependentTargetFiles])

gsc = env.Program("gsc", [gambitTargetFiles, gambitGenericDependentTargetFiles, gscLinkerFile, libgambc])

Return("gsc")
Command [create-react-app project-name --template=typescript] do not finish! Describe the bug Whenever I have to create a new project the command does not end and also does not return an error log. Any of the commands that I have to execute generate the same result .. create-react-app project-name create-react-app project-name --template=typescript npx create-react-app project-name --template typescript yarn create react-app project-name --template typescript The process starts and when it arrives at a certain moment (Done in ..s) it always hangs, print follows. It does not return any error but it also does not end. Stuck process Then I need to cancel the process with the command (Ctrl + C). And the only files created are: * node_modules * package.json * yarn.lock Created files I've tried to follow some steps I saw on the forums: Run the command npm config set cache C: \ tmp \ nodejs \ npm-cache --global. Remove create-react-app globally and add it again right after: yarn global remove create-react-app > yarn global add create-react-app. Did you try recovering your dependencies? Yes, I already removed the files created in half and tried to create again in other directories several times. npm version: 6.14.5 yarn version: 1.21.1 Which terms did you search for in User Guide? cannot create a new react project using create-react-app create-react-app failing when create a new project why create-react-app doens't create a new app i can't create a new project with create-react-app infinite loop when i try create a new project with create-reac-app command create-react-app on infinite loop create-react-app fails to create a new project on windows 10 create-react-app do not finished create-react-app not working why running command line create-react-app, this don't finish the proccess? initializing a new project with the commando create-react-app, fails. Environment Environment Info: current version of create-react-app: 3.4.1 running from C:\Users\caior\AppData\Roaming\nvm\v12.13.0\node_modules\create-react-app System: OS: Windows 10 10.0.18363 CPU: (8) x64 Intel(R) Core(TM) i5-8300H CPU @ 2.30GHz Binaries: Node: 12.17.0 - C:\Program Files\nodejs\node.EXE Yarn: 1.21.1 - C:\Program Files (x86)\Yarn\bin\yarn.CMD npm: 6.14.5 - C:\Program Files\nodejs\npm.CMD Browsers: Edge: 44.18362.449.0 Internet Explorer: 11.0.18362.1 npmPackages: react: 16.13.1 => 16.13.1 react-dom: 16.13.1 => 16.13.1 react-scripts: 3.4.1 => 3.4.1 npmGlobalPackages: create-react-app: Not Found Steps to reproduce Open PowerShell as Administrator on Windows. Or any other command prompt; Run the command npx create-react-app project-name --template typescript; At this stage where the process stopped; Expected behavior May the command successfully finish creating my project. Actual behavior The process simply hangs without generating an error log. Stuck process Reproducible demo I don't have the project created! Half created project I have the same problem. But instead as tried above I fire "npx create-react-app my-app" command. The process of installing is stuck. I tried to uninstall create-react-app globally as given in documents. Then I uninstall node and reinstall it. But nothing seems to be right. Please help me with this situation. I have the same problem. But instead as tried above I fire "npx create-react-app my-app" command. The process of installing is stuck. I tried to uninstall create-react-app globally as given in documents. Then I uninstall node and reinstall it. But nothing seems to be right. 
Please help me with this situation. Yes, I know, I already did the same steps, but the problem persists. I think that if you installed NodeJS with the installer.exe file, some files persist in your OS when you uninstall it, and that causes this problem. I am facing the same problem. Has anyone found a solution to this problem? I am facing the same problem. Has anyone found a solution to this problem? Man, I had to format my computer.
Use Cases: Neo4j for Graph Data Science Today’s businesses are faced with extremely complex challenges and opportunities that require more flexible, intelligent approaches. That’s why Neo4j created the first enterprise graph framework for data scientists – to improve predictions that drive better decisions and innovation. Neo4j for Graph Data Science™ incorporates the predictive power of relationships and network structures in existing data to answer previously intractable questions and increase prediction accuracy. Neo4j for Enterprise Graph Data Science From pointers to patterns to predictions, only Neo4j offers such breadth and depth of advanced graph analytics and data science capabilities in an integrated enterprise environment. Our efficient property graph model stores nodes and their corresponding relationships together, so you just follow the pointers for real-time queries. The Neo4j graph algorithms inspect global structures to find important patterns and now, with graph embeddings and graph database machine learning training inside of the analytics workspace, we can make predictions about your graph. Neo4j for Graph Data Science is comprised of the following products: A toolkit with a flexible data structure for analytics and a library with five varieties of powerful graph algorithms. A highly scalable, native graph database, purpose built to persist and protect relationships. A graph visualization and exploration tool that allows users to visualize algorithm results and find patterns using codeless search. Graph Data Science helps businesses across industries leverage highly predictive, yet largely underutilized relationships and network structures to answer unwieldy problems. Examples include user disambiguation across multiple platforms and contacts for more personalized services and marketing, identifying early interventions for complicated patient journeys to improve outcomes, and predicting fraud through sequences of seemingly innocuous behavior. To accomplish these goals, organizations explore the results of graph algorithms and then use predictive features for further analysis, machine learning or to support AI systems. With this approach, Neo4j customers are demonstrating that graphs bring tremendous value to advanced analytics, machine learning and AI. Read the white paper, Artificial Intelligence & Graph Technology: Enhancing AI with Context & Connections, on how graph technology enhances machine learning and AI projects by providing context and connections within the underlying data. 
Graph Data Science For Dummies Learn the foundations of graph data science and dive into graph analytics and algorithms that solves real-world problems using machine learning and more.Get the free book Case Study: NYP Advances Analysis to Track Infections with Neo4j Learn how NYP Hospital's analytics team used graph data science to relate all their event data, enabling them to track infections and take strategic action to contain them.Read the case study Neo4j Graph Data Science Sandbox Test drive Neo4j Bloom and the GDS Library together with our graph data science sandbox – the fastest way to experiment since there's nothing to install or data to load.Try the sandbox How Graphs Enhance Artificial Intelligence, with Neo4j's Amy Hodler Amy Hodler, Analytics & AI Program Manager at Neo4j, speaks at GraphTour on how graph technology enhances AI, with tactical steps in how to move forward in graph data science.Watch video Incorporating the predictive power of relationship in advanced analytics and machine learning enables you to continually improve predictive accuracy. Answer Intractable Questions Graph algorithms are a subset of data science algorithms created to analyze network structures so you can better understand complex systems and answer more complicated questions. Using an industry leader to add graph based features to existing data science pipelines is a low-risk way to put more accurate models into production faster. Analytics and machine learning requires a lot of data to increase accuracy but most models today aren’t using their existing data about relationships and network structures. Data science is inherently iterative so it’s essential to use a framework that brings in highly predictive relationships while streamlining the process of moving from data to analysis to visualization and back. Lack of Scale and Support Data scientists need enterprise scale, productions features and dedicated data science support that includes packaged and tested algorithms. Scalable Graph Analytics Neo4j Graph Data Science library creates a friendly analytics workspace with powerful graph algorithms that can operate over 10’s of billions of nodes and relationships. Integrated Native Graph Store Neo4j graph database natively stores interconnected data for persistence and automates data reshaping for analytics. Intuitive Graph Visualization Neo4j Bloom enables graph novices and experts to explore results visually, quickly prototype concepts and collaborate with different groups. Digitate's ignio AI system enables organizations to optimize their most complex business areas like IT, batch manufacturing and enterprise resource planning. Learn how to leverage knowledge graphs, which are connected data graphs combined with iterative machine learning, to solve many enterprise challenges. Learn More About AI & ML Use Cases Datanami: Why Knowledge Graphs Are Foundational to Artificial Intelligence Computer Business Review: Creating The Most Sophisticated Recommendations Using Native Graphs Neo4j & Expero, Inc.: Thwart Fraud Using Graph-Enhanced Machine Learning & AI
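As a hedged illustration of how the pieces above fit together in practice, the sketch below uses the neo4j-driver npm package from TypeScript to project an in-memory graph and run a Graph Data Science algorithm over it. The connection details, node label, relationship type, and graph name are placeholders, and the exact GDS procedure names depend on the library version installed (older releases use gds.graph.create instead of gds.graph.project).

import neo4j from "neo4j-driver";

// Connection details are placeholders; adjust for your deployment.
const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "secret"));

async function rankPeople(): Promise<void> {
  const session = driver.session();
  try {
    // Project an in-memory graph of Person nodes and KNOWS relationships
    // into the analytics workspace used by the GDS library.
    await session.run("CALL gds.graph.project('people', 'Person', 'KNOWS')");

    // Run PageRank over the projection and stream the most central people.
    const result = await session.run(
      `CALL gds.pageRank.stream('people')
       YIELD nodeId, score
       RETURN gds.util.asNode(nodeId).name AS name, score
       ORDER BY score DESC LIMIT 10`
    );
    for (const record of result.records) {
      console.log(record.get("name"), record.get("score"));
    }
  } finally {
    await session.close();
    await driver.close();
  }
}

rankPeople().catch(console.error);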
connect using URI connection string support As mentioned in #73. Typescript doesn't allow spreading arguments into function https://github.com/Microsoft/TypeScript/issues/4130 and doesn't allow conditional super() call https://github.com/Microsoft/TypeScript/issues/945. One way to achieve this is to instantiate Sequelize in a static factory function. Old constructor must be removed to allow arguments to be exactly relayed to OriginSequelize constructor. Thus, this change will break backward compatability. Developers currently using "new Sequelize(config)" have to change the code to "Sequelize.init(config)" or "Sequelize.init(URI, config)". Should this be merged into next major version ? Codecov Report Merging #79 into master will decrease coverage by 0.18%. The diff coverage is 87.5%. @@ Coverage Diff @@ ## master #79 +/- ## ========================================== - Coverage 95.6% 95.41% -0.19% ========================================== Files 65 65 Lines 750 763 +13 Branches 103 105 +2 ========================================== + Hits 717 728 +11 - Misses 11 12 +1 - Partials 22 23 +1 Impacted Files Coverage Δ lib/models/BaseSequelize.ts 90% <100%> (ø) :arrow_up: lib/models/v3/Sequelize.ts 95.16% <86.66%> (-2.8%) :arrow_down: Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update c3f8b3f...6b18fa1. Read the comment docs. Hey @kukoo1 thanks for contributing :) Why is spreading necessary here, from your point of view? Did you try to resolve this with overloading the constructor like: export class Example { constructor(connection: string); constructor(options: Options); constructor(optionsOrConnection: Options|string) { /*...*/ } } ? I've tried this. constructor(config: ISequelizeConfig, uri?: string) { // Sequelize constructor signature: //// new (database: string, username: string, password: string, options?: Options): Sequelize; //// new (database: string, username: string, options?: Options): Sequelize; //// new (uri: string, options?: Options): Sequelize; if (uri) { super(uri, BaseSequelize.prepareConfig(config)); } else { super( (preparedConfig = BaseSequelize.prepareConfig(config), preparedConfig.name), preparedConfig.username, preparedConfig.password, preparedConfig ); } this.init(config); } The compiler complains A 'super' call must be the first statement in the constructor when a class contains initialized properties So I tried solving with this. constructor(config: ISequelizeConfig, uri?: string) { super( uri || (preparedConfig = BaseSequelize.prepareConfig(config), preparedConfig.name), uri ? BaseSequelize.prepareConfig(config) : preparedConfig.username, uri ? undefined : preparedConfig.password, uri ? undefined : preparedConfig ); this.init(config); } Now it works with traditional connection options as an object. But by using connection string, it won't work. Because originalSequelize will recognize that super(uri, config, undefined, undefined) contains 4 arguments, so "uri" would be db name, "config" would be username, "undefined" would be password and so on. Any idea to this? :) @kukoo1 Thank you for your input :) Or much simpler: new Sequelize({uri: '...'}) constructor(config: ISequelizeConfig | ISequelizeUriConfig) {} Since sequelize-typescript already uses one object literal for options instead of multiple parameters, I think this will be the most consistent approach. What do you think? 
hmm it's just a string, do we really need to make it object literal? @BruceHem I think it is most likely, that the configuration will look like this new Sequelize({ uri: '...', modelPaths: ['...'] }) But in addition, we could overload the constructor like: constructor(connection: string); constructor(options: ISequelizeConfig | ISequelizeUriConfig); constructor(optionsOrConnection: ISequelizeConfig | ISequelizeUriConfig | string) { /*...*/ } I found the root of this mess. I've just looked into Sequelize code. It accepts new Sequelize({ ... options }) new Sequelize(URI, { ... options }) new Sequelize(database, username, password, { ... options }) So we could call super(uriOrOption) properly. At first, when I coded this PR, I only looked at Sequelize.d.ts definition (@types/sequelize). The definition accepts only new (uri: string, options?: Options) new (database: string, username: string, options?: Options) new (database: string, username: string, password: string, options?: Options) So we've to open PR in DefinitelyTyped to add this beforehand new (options: Options): Sequelize; Now I get into another trouble. I can't pass both URI and options at the same time. current constructor looks like this constructor(configOrUri: ISequelizeConfig | ISequelizeUriConfig | string) { super( (typeof configOrUri === "string") ? configOrUri : // URI connection string BaseSequelize.prepareConfig(configOrUri) // config object ); } Sequelize checks arguments using this code if (arguments.length === 1 && typeof database === 'object') { // new Sequelize({ ... options }) <-- in order THIS to WORKS, we must ONLY pass 1 argument !! // ... } else if (arguments.length === 1 && typeof database === 'string' || arguments.length === 2 && typeof username === 'object') { // new Sequelize(URI, { ... options }) // ... } else { // <-- with this, URI string will NOT WORKING !! // new Sequelize(database, username, password, { ... options }) // ... } So we couldn't do this. super( (typeof configOrUri === "string") ? configOrUri : // URI connection string BaseSequelize.prepareConfig(configOrUri), // config object (configOrUri.uri) ? configOrUri : // ISequelizeUriConfig undefined // <-- THIS will BREAK arguments.length === 1 ); There're 2 options. ignore config when URI is passed (ISequelizeUriConfig will be a junk and we'll lose flexibility when use URI connection string) open PR in Sequelize to add const argLength = arguments.filter(val => val !== undefined).length and use if (argLength === 1) instead. @kukoo1 Ahh ok, I see. I didn't get it in the first place. From my understanding, typescript makes it impossible to use overloads in constructors in combination with inheritance in general. Cause it is very common to check arguments length. Or have we overseen something? waiting for review ... https://github.com/DefinitelyTyped/DefinitelyTyped/pull/18896 @RobinBuschmann I committed the new one. constructor(config: ISequelizeConfig | ISequelizeUriConfig | ISequelizeDbNameConfig); constructor(uri: string); I suggest to deprecate "name" property in options, and use "database" instead. Because, in Sequelize documentation, "database" is used. In order to use the same config when switching from ordinary sequelize, we should use "database". What's your opinion ? Looks good to me. Deprecating "name" is reasonable as well. But instead of adding "@Todo", you could write "@deprecated" - this will be noticed by many IDEs and therefore be notice the user that "name" is deprecated. Fixes #73
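For readers following the API discussion above, here is a hypothetical, simplified TypeScript sketch of the constructor-overload pattern the thread converges on. It does not extend the real Sequelize class, and the interface shapes are illustrative only; it just shows how two public overloads (URI string, or config object) can share a single implementation signature.

interface ISequelizeUriConfig {
  uri: string;
  modelPaths?: string[];
}

interface ISequelizeDbConfig {
  database: string;
  username: string;
  password?: string;
  modelPaths?: string[];
}

class SequelizeWrapper {
  constructor(uri: string);
  constructor(config: ISequelizeUriConfig | ISequelizeDbConfig);
  constructor(configOrUri: string | ISequelizeUriConfig | ISequelizeDbConfig) {
    if (typeof configOrUri === "string" || "uri" in configOrUri) {
      const uri = typeof configOrUri === "string" ? configOrUri : configOrUri.uri;
      // the real implementation would call super(uri, options) here
      console.log("connecting via URI", uri);
    } else {
      // the real implementation would call
      // super(database, username, password, options) here
      console.log("connecting via discrete options", configOrUri.database);
    }
  }
}

// usage
new SequelizeWrapper("postgres://user:pass@localhost:5432/mydb");
new SequelizeWrapper({ database: "mydb", username: "user", password: "pass" });

Funnelling both overloads through one implementation signature keeps the public API clean while leaving a single place where the underlying Sequelize constructor variants would be selected.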
Stack Overflow Documentation for Microsoft Developers This post was written by Jeff Sandquist, General Manager in the Cloud + Enterprise Division. Today we are announcing a partnership with Stack Overflow to support Stack Overflow Documentation for Microsoft developers. As part of this announcement, we are announcing that Stack Overflow Documentation content will be integrated into docs.microsoft.com API reference content in the future. Microsoft has long been a partner with Stack Overflow in the form of sponsored tags, and the launch of Stack Overflow Documentation enables the community to have an easy way to create and vote on code samples using the .NET Framework, Xamarin, and other Microsoft products and technologies. Both docs.microsoft.com and Stack Overflow had shared goals - we want to make it easy and simple for the community to contribute great documentation for using products and services. Both sets of content have an open license, use markdown as the content format, and easily enable community contributions (Click Edit from any docs.microsoft.com page). A Quick Tour of Stack Overflow for .NET Developers When you first arrive at StackOverflow Documentation you'll find the familiar tag-based categorization of the topics available. At first glance you'll see all the tags previous created by the StackOverflow community. Once you filter by tag, a list of each appears with a few important metrics shown right up front - the count of topic requests, proposed changes, and improvement requests. Request a Topic Few developers have solved difficult problems without the help of the StackOverflow audience. Together with our colleagues, peers, and legends from the community who monitor StackOverflow for questions related to their own products, we've built a network of information organized by tags. Documentation capitalizes on the audience commitment to collectively building knowledge by giving us a place to ask for help - the "Requested Topics" section of each tag gives the community a place to identify common problems. By clicking "Create Topic," you can create content for the topic, provide your own code samples and documentation. The experience provides guidance on where to provide each section of content, which can be edited using Markdown syntax if you prefer. As you create topics, your progress is saved in the right navigation, where you can see all of the draft content you've not yet completed or sent for review. Familiar Code-viewing Experience As with the StackOverflow Q&A site, the code for individual topics is presented in clear, concise format so you can copy-edit-use or learn from simply reading the code. Developers can up/down vote code samples just like answers on Stack Overflow. In the future, docs.microsoft.com will integrate a curated list of Stack Overflow documentation samples directly into Microsoft API documentation. Developers looking at Microsoft API reference will see both samples created by Microsoft as well as Stack Overflow's samples created by the community. The samples will be curated to ensure that code samples use established .NET coding guidelines and best practices. We will release more details on this integration in the future.
Beginning with the Hub Services August 2020 Cloud and Workspace ONE UEM 2008 releases, management of the Workspace ONE UEM Hub catalog settings moves to the Hub Services console. You can migrate your Workspace ONE UEM Hub catalog settings to the Hub catalog settings in the Hub Services console. Users' access to their Hub catalog through the Workspace ONE Intelligent Hub is uninterrupted. During migration, your customer OG UEM app catalog settings are migrated to Hub Services and become the global level settings for the Hub app catalog. Hub templates are created for any child OGs with different settings from your customer OG settings. The Hub templates are assigned to UEM smart groups based on the user assignment in the OG. If you do not want to migrate your Workspace ONE UEM Hub catalog settings, you can select to discard the migration option in the Hub Services console. If you select to discard the migration option, you cannot migrate your Workspace ONE UEM catalog. You can create Hub templates in the Hub services console, configure the Hub catalog settings, and assign Workspace ONE UEM smart groups to the templates. When you migrate the Workspace ONE UEM Hub catalog settings to Hub Services, Hub templates are created based on your organization's deployment of the Hub catalog. - The default global settings that are configured in Hub Services, including Branding, are applied to each template. If you do not want to use the global settings, you can customize the template. - A Hub template is created during migration when child OGs is different from your customer OG. When the child OGs have the same configuration, but different users, only one template is created. For example, the customer OG hierarchy includes two child OGs, C1 and C2. C1 and C2 are configured with the same settings. The customer OG is configured with different settings. During the migration, Hub Services creates one template for C1 and C2 because they have the same configuration and assigns smart groups to the template . - Smart groups are created based on the Workspace ONE UEM OG's assignment groups. The smart group settings mimic the OG settings. - The platform settings that are enabled to access the Workspace ONE UEM Hub catalog are migrated to the Hub Services Hub catalog settings. You manage the platform settings for iOS, Android, Mac, or Windows in the template Hub catalog settings. - Templates are prioritized in the Template list based on the OG hierarchy. The lower a child OG is in the OG hierarchy, the higher the associated template is listed in the Template list. To learn more about Hub templates, see Using Hub Templates to Customize the Workspace ONE Intelligent Hub Experience for Different Users. Migrate Hub Catalog to Hub Services When Workspace ONE UEM is using Hub Services, you can migrate your app catalog settings to Hub Services. - To start the migration, log into the Hub Services console. - If your organization can migrate, you see the Hub Template Migration Experience screen. - In the Migrate all App Catalog settings section, click MIGRATE. - When the migration is complete, in the Next Steps section, click FINISH. Go to the Templates page to see the prioritized list of templates that were created from the migration. You can re-prioritize templates, edit templates, assign different smart groups, and delete templates. Your user access is uninterrupted. If you made changes to the branding or app catalog features in Hub Services, users see the changes when they sign in to Workspace ONE Intelligent Hub. 
The catalog settings are removed from the Workspace ONE UEM console Groups & Settings > All Settings > Apps > Workspace ONE > AirWatch Catalog > General > Authentication tab or are displayed as read-only. The read-only settings in the Platform section that are listed on the page apply to older versions of Workspace ONE Intelligent Hub.
Wake lock Allows using PowerManager WakeLocks to keep processor from sleeping or screen from dimming. Grand Theft Auto Apk Moda huge urban sprawl ranging from the beach to the swamps and the glitz to the ghetto, was one of the most varied, complete and alive digital cities ever created. In this game, you will find many more game modes such as free-roaming role-playing, which gives you great freedom and can be played on the basis of the given missions. The current series has ten independent versions and four extensions. Offline Welcome back to Vice City. The game is well optimized and runs smoother on most mobile devices. Rockstar Games has released a high-quality game that is perfectly optimized for mobile devices, tablets and so on. Moreover, when you playing the game you will get 100 new rewards with a value of 25000 V-Bucks. It is in the top five best video games ever. For example, when you run and collide, the car is damaged and may catch fire, or when you fire on others, or effects when the environment is destroyed. Vice Metropolis, an enormous city sprawl starting from the seaside to the swamps and the glitz to the ghetto, was one of the vital different, full and alive digital cities ever created. The characters in the game become more lively thank to the realistic dubbed voices.Next All players play tight your chisel to play this game. The story of the game is based on this happen. After that in 2017 you will get General 2 and General 3 and in 2018 they update Field research, training, friends, Alolan Forms and Trading in General 4. With the launch of this game, it was very fame that led developers to play for the play station and re-launching it in Europe, North America. In this mode, you also have to select different characters and it can also be changed later. Combining open-world gameplay with a personality pushed narrative, you arrive in a city brimming with delights and degradation and given the chance to take it over as you select. Gameplay — The gameplay of this game is very charming and easy to play.Next You were interested in it, right? In case your machine will not be listed, please test assist. You will be more flexible if you choose a motorbike. But you must follow the mission a linear, overlapping plot. As we know, the Grand Theft Auto series is one of the best gaming series in which millions of people all over the world are commendable. Like a common citizen, you have to keep your environment such as adhere to traffic signals, there is no discrepancy for pedestrians. The latest version of Vice City is updated with many new maps and vehicles. Grand Theft Auto: Vice City — a famous computer game is now released on the Android, thanks to Rockstar studio, which played everything. Grand Theft Auto: Vice City — the famous computer game is now released and the android in which everyone played! It retained the classic look of the original in 2002 but was rebuilt with the better graphics and sound. If your device is not listed, please check support. Combining open world gameplay with a character driven narrative you arrive in a town brimming with delights and degradation and given the opportunity to take it over as you choose. I believe this game will run perfectly on your mobile, there will be no problem.Next The Epic Seven is one of the roles paying game, which is equipped with hypnotic graphics and ultimate skill. This game developed and published by the Rockstar Games. You can open the game and enjoy it now. 
The game also requires agility and intelligence in carrying out missions. Vice City takes players into an open world, including a lot of car action, third-person shooters, helicopter battles. But you are easy to be attacked or pursued by the police.Next Operation Systems Min Sdk 24 Min Sdk Txt Android 7. Thanks to the Rockstar studio this game appeared on your android device. From the decade of big hair, excess and pastel suits comes a story of one man's rise to the top of the criminal pile. If you play the game long enough, you will notice that the effects in Vice City are very well done. If you think it is not used, it can also make adjustments in the settings, the location of the virtual rocker and mode settings to facilitate their own operating habits. Pokémon Go Latest update November 2018 Hello Friends, Here you will get the Pokémon Go Latest update November 2018.Next Below we have mention Season 6 video. After the launching of the game, they had updated it continuously after in a certain period of time. You can buy weapons, vehicles or anything in the game easily. Grand Theft Auto: Vice City - and now on Android. All the things you will get with the latest updates. Controlling the car in the game is difficult, but when you get to know the controls, you will find this a great way to travel to explore the city.Next All of them are absolutely safe, as they are checked for viruses and for workability. And to play Android phones differ slightly when you play. Here we will provide you Fortnite v5. Vibrate Allows access to the vibrator. Of course, a detailed guide will help you install this game easily in just a few steps. In the game, you will be involved in chase, gunfight and car hijacking… in a fictional fantasy city in Vice City, is based in Miami with many buildings, vehicles and people as in real life Play in your way Vice City is a sandbox game, in which you will be free to do anything that you want. Welcome again to Vice Metropolis.Next The more you do, the more experience, fun and powerful weapons you will have. Vice City is a highly entertaining game that helps players release stress. Vice City, a huge urban sprawl ranging from the beach to the swamps and the glitz to the ghetto, was one of the most varied, complete and alive digital cities ever created. The Character of the game Armed with a decent dexterity which makes you habituated of this game. Please guarantee you have got a minimum of 1. Combining open-world gameplay with a character driven narrative, you arrive in a town brimming with delights and degradation and given the opportunity to take it over as you choose.Next
The Samsung 470 Series Solid State Drive is this company’s first entry into the consumer SSD market. As a producer of many of the components used in other SSDs, as well as providing OEM SSD solutions for various computer manufacturers, we were surprised that they haven’t entered this market earlier, having introduced this drive in November 2010. The Samsung 470 Series SSD comes in three capacities, 64 GB, 128 GB and 256 GB. What follows is our review of the 256 GB model. The 470 Series comes in a clear plastic container, the drive contained in a foam enclosure. The drive itself looks stylish, the top being brushed metal with raised SAMSUNG lettering, and orange plastic in the corner of the drive indicating the capacity. The insert lists several features of the drive, such as Samsung 32 nm MLC NAND Flash Memory, Samsung S1MAX SSD Controller, 470 MB/s Read + Write Speed, and a 2.5 inch form factor with SATA II Interface. The 470 MB/s statement was also in larger lettering, with smaller lettering detailing that this was arrived at by combining a Read speed of 250 MB/s and a Write speed of 220 MB/s, which we think is somewhat misleading. Samsung 470 Series 256 GB SSD MLC is Multi Level Cell, which is one type of DRAM used in SSDs, the other being SLC, or Single Level Cell. MLC is typically lower performing, but less expensive, than SLC. The 32 nm is the memory technology used in this drive, and it was state of the art when the drive was introduced. The smaller this number, the more memory can fit in a space. The latest reported technology is a 19 nm process from Toshiba and Sandisk. When we removed the drive from the packaging, at first we thought we had received a mockup, because the drive was so light compared to what we’re used to with a 2.5 inch drive. Whereas the Hitachi drive we’ll be benchmarking against weighs about 4 ounces, the 470 series weighs in at a mere 2.4 ounces. The bottom of the unit, which has screw holes in standard locations on both the side and bottom of the drive, but appears to be plastic, raised concerns about durability and the possibility of stripping it if you’re a bit too enthusiastic with your screwdriver. This drive was tested in a MacBook Pro (Early 2008) with 6 GB of RAM, running Mac OS X 10.7.2, and a Mac mini (Mid 2010) with 8 GB of RAM running Mac OS X 10.6.8. Note that the SATA bus on the MacBook Pro is limited to 1.5 Gb/s, or 192 MB/s, whereas the SATA bus on the Mac mini can achieve 3.0 Gb/s, or 384 MB/s. The rotational drive in the MacBook Pro is a Hitachi HTS725050A9A364 and the rotational drive in the Mac mini is a Toshiba MK3255GSXF. Our first measure of performance is boot time, which is the time from the boot chime (on the Mac mini) or spinning progress wheel (on the MacBook Pro) to the time the desktop is presented, and a drive read activity indicator indicates no activity. This is a good test for how the drive deals with a large number of small files. Our second measure of performance is the transfer a single large file between the SSD and another drive on the system. On both the MacBook Pro and Mac mini, we transferred files with an Iomega eGO drive connected via FireWire 800. Our third measure of performance is synthetic benchmark, using Drive Genius 3.1 from Prosoft Engineering. It performs sustained read, sustained write, random read and random write tests, using a block size ranging from 32K to 16M in size. We also enabled TRIM mode on our Lion machine, to see if we could get increased write performance. 
TRIM is a method of making sure SSD memory cells are clean before they are written to, resulting in maximum throughput. For all tests, items in bold are better. Startup Time (seconds) |MacBook Pro Rotational||175| |MacBook Pro SSD||32| |Mac mini Rotational||154| |Mac mini SSD||22| Large File Transfer (seconds) |MacBook Pro Rotational to FW||55| |MacBook Pro Rotational from FW||47| |MacBook Pro SSD to FW||39| |MacBook Pro SSD from FW||38| |Mac mini Rotational to FW||52| |Mac mini Rotational from FW||56| |Mac mini SSD to FW||29| |Mac mini SSD from FW||33| Battery Life (minutes) |MacBook Pro SSD||196| |MacBook Pro Rotational||166| MacBook Pro - Random Tests (MB/s) |Block Size ||Write SSD ||Write SSD TRIM ||Write Rotational ||Read SSD ||Read Rotational| |32K ||61 ||83 ||58 ||26 ||1| |64K ||79 ||97 ||76 ||61 ||3| |128K ||93 ||106 ||94 ||79 ||6| |256K ||106 ||119 ||100 ||86 ||12| |512K ||117 ||122 ||118 ||115 ||22| |1M ||121 ||125 ||120 ||122 ||32| |2M ||119 ||122 ||118 ||120 ||47| |4M ||117 ||123 ||67 ||120 ||55| |8M ||50 ||51 ||83 ||125 ||73| |16M ||122 ||121 ||95 ||126 ||79| MacBook Pro - Sustained Tests (MB/s) |Block Size ||Write SSD ||Write SSD Trim ||Write Rotational ||Read SSD ||Read Rotational| |32K ||36 ||57 ||56 ||60 ||58| |64K ||56 ||76 ||77 ||84 ||78| |128K ||88 ||103 ||92 ||104 ||73| |256K ||99 ||117 ||106 ||116 ||67| |512K ||118 ||125 ||116 ||125 ||55| |1M ||120 ||127 ||120 ||127 ||85| |2M ||119 ||126 ||63 ||126 ||88| |4M ||120 ||123 ||99 ||126 ||71| |8M ||119 ||125 ||99 ||126 ||60| |16M ||120 ||127 ||79 ||127 ||95| Mac mini - Random Tests (MB/s) |Block Size ||Write SSD ||Write Rotational ||Read SSD ||Read Rotational| |32K ||104 ||21 ||13 ||1| |64K ||129 ||5 ||21 ||3| |128K ||161 ||73 ||23 ||5| |256K ||181 ||86 ||79 ||10| |512K ||195 ||22 ||146 ||16| |1M ||202 ||99 ||232 ||24| |2M ||202 ||99 ||230 ||27| |4M ||201 ||80 ||240 ||34| |8M ||61 ||42 ||245 ||31| |16M ||95 ||46 ||249 ||38| Mac Mini - Sustained Tests (MB/s) |Block Size ||Write SSD ||Write Rotational ||Read SSD ||Read Rotational| |32K ||73 ||47 ||137 ||45| |64K ||107 ||66 ||195 ||41| |128K ||142 ||80 ||218 ||39| |256K ||175 ||91 ||219 ||45| |512K ||193 ||99 ||240 ||30| |1M ||202 ||101 ||251 ||45| |2M ||201 ||101 ||249 ||45| |4M ||203 ||50 ||252 ||52| |8M ||203 ||42 ||250 ||53| |16M ||203 ||49 ||251 ||37| The startup time for both systems is a clear indication of the advantage of these drives when reading a large number of relatively small files. The time went from hundreds of seconds to tens of seconds. Transferring a single large file showed a less dramatic benefit, but in all cases, the SSD was faster than the rotational drive. For the synthetic benchmarks on the MacBook Pro, the tests showed little difference in write performance between the SSD and our rotational drive, until the block size got beyond 4 MB, whereas the read speeds showed a clear advantage from the start. The maximum speed achieved of 127 MB/s when writing is quite a bit below the theoretical maximum of 192 MB/s for the SATA I bus in this machine. We also noticed an odd reduction in performance with an 8 MB block size write performance, but we’ll attribute this to shortcomings in the SATA implementation in this machine, since we didn’t see this on the mini. Enabling TRIM showed a measurable increase in write performance for small block sizes on the benchmarks, but we didn’t find it significant in day-to-day use. 
For the synthetic benchmarks on the Mac mini, we saw more dramatic results, partly due to the SATA II bus on this machine, allowing us to achieve full bandwidth, and partly due to the Toshiba drive in this machine, which we found would only negotiate a SATA I speed (1.5 Gb/s) with the mini. The SSD nearly reached its advertised maximum write speed of 220 MB/s, hitting 203 MB/s in our benchmark, and achieved the advertised maximum read speed of 250 MB/s, both when test block sizes were 1 MB or greater. The Samsung 470 Series SSD shows a clear performance advantage on our Macs, even the one that doesn't have the latest SATA II bus supported by this SSD. Due to the relatively low power draw compared to our rotational hard drive, we noted a significant increase in battery life, always a concern for those on a portable computer. The only downside is the price; for the same capacity, one could get a rotational hard drive for $100 or less. However, the pricing for this SSD is comparable to what we found for other SSDs of similar capacity. For those that can afford it, the Samsung 470 256 GB SSD is a worthy upgrade.
During the event, a total of 11 sessions were held and recorded for all those C++ developers who want to create apps using C++, DirectX and XAML. Here are the links to the recordings, together with a description of what you will find in each of the sessions.

Keynote: Visual C++ for Windows 8
Speaker: Herb Sutter
This talk will begin with an overview of how the WinRT type system is projected in Visual C++, then delve into how easy it is to use fast and portable C++, UIs built using XAML or DirectX or both, and powerful parallel computation from std::async and PPL to automatic vectorization and C++ AMP to harness powerful mobile GPUs.

Building Metro style apps with XAML and C++
Speaker: Tim Heuer
With the introduction of Metro style apps for C++ developers, Microsoft now brings the XAML UI platform to native code! I will take you through a lap around creating a Metro style app in XAML and C++. I'll introduce the fundamentals of the XAML platform in WinRT and how C++ developers can easily write applications with a consistent, touch-friendly UI framework.

Designing Metro style apps using XAML Designer in VS and Blend
Speaker: Navit Saxena
If you want your Metro style app to delight users, you'll want to start with a great UX design. In this session, I will show you some of the key features of the XAML designer in Visual Studio and Blend to design and build a C++ Metro style app that is both visually appealing and easy to use.

Porting a desktop app to a Metro style app
Speaker: Sridhar Madhugiri
What does it take to port a desktop app to Metro? Learn about the common issues when porting and techniques to help address them.

Building Windows Runtime components with C++
Speaker: Harry Pierson

Introduction to Casablanca
Speaker: Niklas Gustafsson
Casablanca is an incubation project available on MSDN DevLabs. Its primary purpose is to explore how to access and author REST services in modern C++ using best practices such as asynchronous operations to achieve responsiveness in clients and scalability in service code.

C++ and DirectX for Metro style games
Speaker: Chas Boyd & Matt Sandy
DirectX, the most popular 3-D game API, is directly accessible by Windows 8 Metro-style applications in C++. If you have a C++/Direct3D codebase, or want to create a 3-D game, this talk will show you how to use C++ and DirectX to build Metro-style apps.

Combining XAML and DirectX in Metro style apps
Speaker: Jesse Bishop
Windows 8 introduces the ability to use both XAML and DirectX in the same C++ Metro-style app, allowing you to combine the rich UI and interactivity of XAML with the power and control of DirectX graphics. Learn about the different mechanisms provided and the advantages of each approach.

Getting the most out of the compiler: auto-vectorization
Speaker: Jim Hogg
The C++ compiler in Visual Studio 11 includes a new feature called auto-vectorization. It analyses the loops in C++ code and tries to make them run faster by using the vector registers and instructions inside the processor. This short talk explains what's going on.

Async made simple with C++ PPL
Speaker: Rahul Patel and Genevieve Fernandes
The new Windows Runtime is adopting a heavily asynchronous programming model to ensure the responsiveness of Windows 8 client apps. This makes it more critical than ever to have great support for async programming in C++. Learn about the PPL async library innovations and how these features come together with new Windows Runtime APIs to simplify async programming.
Introducing the Windows Run-time Library
Speaker: Sridhar Madhugiri, Lukasz Chodorski
What is WRL and how does it help you write Metro apps? Learn what is involved in consuming and authoring WinRT objects with WRL.

I hope these sessions seem as interesting to you as they do to us.
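As a small, generic illustration of the std::async pattern mentioned in the keynote and PPL session descriptions above (this is not code from the recordings), the following C++11 snippet offloads a computation to another thread and collects the result with a future:

// Minimal std::async illustration; values and workload are arbitrary.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> data(1000000, 1);

    // Launch the summation on another thread so the main thread stays free.
    std::future<long long> total = std::async(std::launch::async, [&data] {
        return std::accumulate(data.begin(), data.end(), 0LL);
    });

    std::cout << "doing other work while the sum is computed...\n";
    std::cout << "sum = " << total.get() << '\n';   // get() waits for the result
    return 0;
}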
The hosts file on your computer allows you to override DNS and manually map hostnames (domains) to IP addresses. This can come in handy during migrations, as you might want to see how the website looks on a different server, but perhaps DNS hasn't been pointed to the new server or propagated yet. Modifying your hosts file causes your local machine to look directly at the Internet Protocol (IP) address that you specify. This involves adding two entries to it. Each entry contains the IP address to which you want the site to resolve and a version of the Internet address. Now let's look at accessing the hosts file in the different operating systems.

Step 1: Open Notepad as an Administrator
You'll need administrator privileges for this operation.
- Click the Windows button and type "notepad". Let the search feature find the Notepad application.
- Right-click the Notepad app, then click Run as administrator.
- Windows User Account Control should pop up asking, "Do you want to allow this app to make changes to your device?" Click Yes.

Step 2: Open the Windows Hosts File
- In Notepad, click File > Open
- Navigate to c:\windows\system32\drivers\etc
- In the lower-right corner, just above the Open button, click the drop-down menu to change the file type to All Files
- Select "hosts" and click Open

Step 3: Edit the File
Add the IP address and host name in the following format, where 0.0.0.0 is the IP of the server where the website is hosted (see the example entries at the end of this section). Once you're finished making your changes, save the file (File > Save) and exit. If you make an edit to the hosts file and something stops working, you can tell Windows to ignore any line by putting a # sign at the beginning of that line.

To access the hosts file in Windows 7 you can use the following command in the Run line to open Notepad and the file. Once Notepad is open you can edit the file per the above instructions.

Step 1: Open the Mac Terminal
Open the Finder, and go to Applications > Utilities > Terminal. Type the following in the terminal window:
sudo nano /etc/hosts
The system will prompt you to enter your password – this is the same password you use to log in to the system. Type it in, and hit Enter.

Step 2: Edit the Mac Hosts File
The IP address is first, and the server name comes second. Comments are indicated with a '#' sign. Enter the IP address you want to refer to first, hit tab, and then the server name (or domain name) that you want to associate with it. Save your changes by pressing Control + O, then exit by pressing Control + X.
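For illustration only - the IP and domain below are placeholders rather than values from this article - a hosts entry puts the IP address first and the hostname second, one mapping per line, and the format is the same on Windows and Mac:

# lines beginning with # are comments and are ignored
203.0.113.10    example.com
203.0.113.10    www.example.com

If the site still resolves to the old server after saving, flushing the local DNS cache or restarting the browser may help.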
Additionally, Cypress provides a visual interface that shows whether tests are running, passing, or failing, alongside the test script runner. As part of a test, we can manipulate the DOM, assert that certain elements are present on the screen, read or write data into and from fields, submit forms, and even redirect a page without actually modifying your code. This allows us to test highly interactive applications.

Cypress is built and optimized for local development. It offers a platform for quickly debugging and maintaining your code, so after using it for some time you may find yourself doing much of your development inside it. To know more about Cypress automation testing and how you can set it up for your organization, read this blog!

You can use these features to capture screenshots, record videos, travel back in time, debug programs more easily, and so on. Cross-browser testing is also possible with Cypress on Edge, Firefox, and Chrome. Testers can use assertions from the Mocha and Chai libraries by default in the Cypress framework. There is no doubt that reporting is one of the most used features in the automation world. Specs can be configured to use either the Mocha reporter or the Spec reporter, because Cypress uses the Mocha reporter internally. CI/CD tools are integrated with Cypress tests via the Cypress Command Line Interface, aka the Cypress CLI.

Let us now have a look at how you can utilize Cypress for automation testing.

- Architecture and introduction: Cypress is a next-generation front-end testing tool for modern web applications. Typical testing tools (like Selenium) run outside the web browser and execute remote commands over a network. The Cypress engine, by contrast, is integrated directly into the browser. This lets Cypress alter network requests and responses at runtime and listen to browser behavior.
- Installing and configuring Node:
- Installing VS Code:
- Installation and setup of Cypress: Cypress can be downloaded directly from the Cypress CDN. The direct download will always fetch the latest version relevant to your platform. Once the zip file has been downloaded, it can be extracted. Cypress can also be installed with npm, which sets up a basic project with a package.json and Cypress installed.
- Tests performed with Cypress:
- The Cypress Test Runner: A unique feature of Cypress is its Test Runner, which lets us see commands being executed live while the application under test also runs in real time. With the help of the Cypress Test Runner, we will develop our first automated test case using Cypress and execute it. The terminal will also show us how to operate the different components the Test Runner has.
- Locators in Cypress: Web-based applications require locators as the foundation of their automation frameworks. An automation tool uses locators to determine which GUI elements to operate on (such as text boxes, buttons, check boxes, etc.). Cypress uses a similar approach to locate user interface elements when testing an application.
- Commands for get and find: In order to search for web elements based on locators, Cypress provides two essential methods, get() and find(). There are almost no differences between the results of these two methods; there is, however, a place and an importance for each.
- Asynchronous nature of Cypress: In asynchronous programming, a unit of work runs separately from the application's main thread and notifies the calling thread when the task is completed, has failed, or is still in progress. A blocking program executes synchronously: it waits for each step to complete before proceeding to the next one. The asynchronous approach, on the other hand, allows you to move on to another task before the previous one is finished.
- Async promises outside Cypress:
- Assertions in Cypress: An assertion is a validation step that determines whether the specified step of an automated test case succeeded. The purpose of assertions is to validate the desired state of your elements, objects, or application under test. For example, assertions are useful for verifying scenarios such as whether elements are visible or have particular attributes, CSS classes, or states. Assertion steps are always recommended in automated test cases; otherwise there is no check that the application reached the expected state.
- Interacting with DOM elements: Every UI automation tool exposes APIs or methods for interacting with web elements in order to perform the designated operation on the UI element; these methods also make it possible to simulate user journeys. Cypress likewise offers multiple commands that simulate the interaction between the user and the application.

The installation of Cypress is straightforward. The steps are as follows:
- Step 01: Download Node and npm: Visit the official node.js website to download it.
- Step 02: Configure the NODE_HOME environment variable: Add the node.js path to the system variables via your computer's "Advanced System Settings" screen.
- Step 03: Generate package.json in the Cypress folder: Using the command prompt, create a package.json for Cypress automation in the 'user' folder.
- Step 04: Install Cypress: Cypress can be installed by running the command 'npm install cypress --save-dev' after step 3.
- Step 05: Run a test after installing the IDE: Start by executing any predefined tests, or create your own, using any preferred IDE such as Visual Studio Code.

In Cypress, folders are created by default. There are several subfolders within the cypress folder:
- Plugins: The Plugins folder contains special files that are used to execute code before the project is loaded. Any preprocessors your project requires should be included in this folder. The index.js file found in the plugins folder can be customized to create tasks of your own.
- Integration: The test scripts are contained in this folder.
- Fixtures: If you are using external data in your tests, you can organize it inside the Fixtures folder.
- Support: Utility files, global commands, and frequently used code can be found in the Support folder. Commands.js and index.js are the default files in this folder. If necessary, you can add more files and folders.
- Assets: Screenshots, videos, and other downloads from a test run are stored in a folder called downloads.

When you use Cypress to write new tests from scratch, you'll have more power and an easier time reading and writing your tests. Because Cypress runs directly inside the browser rather than driving it remotely from a separate machine, the tests also execute faster. A minimal spec illustrating the get(), find(), and assertion commands discussed above follows.
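The spec below is hypothetical; the URL, selectors, and file name are illustrative assumptions rather than anything taken from the original article:

// cypress/integration/login.spec.js -- hypothetical example; selectors and URL are placeholders
describe('login form', () => {
  it('submits credentials and shows a welcome message', () => {
    cy.visit('https://example.com/login')            // open the page under test
    cy.get('form#login')                             // get() searches the whole document
      .find('input[name="email"]')                   // find() searches within the previous subject
      .type('user@example.com')
    cy.get('input[name="password"]').type('s3cret')
    cy.get('button[type="submit"]').click()
    cy.contains('Welcome').should('be.visible')      // Chai-style assertion bundled with Cypress
  })
})

Running it through the Test Runner (for example with npx cypress open) shows each command executing live against the application.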
When running tests on a local setup, it is not possible to achieve optimal browser coverage with the Cypress testing framework alone. A cloud-based test infrastructure like LambdaTest can provide that coverage, since it lets you run Cypress UI tests on many different operating systems and browsers while still leveraging Cypress' framework; LambdaTest can resolve many of these underlying issues, and testing Cypress online with it can also reduce test cycles. A Cypress tutorial explains the basics of the framework and how its modern architecture addresses the problems encountered in modern web automation testing.

In addition to its core functionality, Cypress has a wide range of community-built plugins, which can add capabilities such as screenshots and video recordings. The Cypress website lists the available plugins.

Let's wrap it up! We have provided a basic overview of the Cypress test automation tool in this article. Cypress has been installed, and a test has been created. As well as the basic features, we've also looked at some of the more advanced features Cypress offers.
M: A Linux botnet is launching crippling DDoS attacks in excess of 150Gbps - testrun
http://www.pcworld.com/article/2987580/security/a-linux-botnet-is-launching-crippling-ddos-attacks-at-more-than-150gbps.html#tk.rss_all

R: pja
Rate limiting on password logins ought to be the default, not some optional extra that only the security savvy know to install. Defaults matter: not implementing rate limiting by default in sshd has left an open door for these people to walk through & attack the net. Sure, you can blame people for choosing poor passwords but, given the reality of the millions of machines out there, as programmers we _know_ that some of them will have poor passwords because people are people. Failing to code with that expectation in mind is doing our users a disservice.

R: conceit
To make matters worse, the debian openssh .deb, that i installed to get the client, configured an autostart for sshd, without asking.

R: dredmorbius
Debian's policy is that if you install a service, you intend for it to run by default. Since services _aren't_ installed by default, this is a fairly defensible position, though it's been commented on many times. It's possible and fairly straightforward to disable autostart of services through either old-school SysV init or, I suspect, systemd. Upshot: running complex systems _isn't_ entirely trivial.

R: conceit
Yes, a desktop computer is a complex system, that's why I leave these not entirely trivial plumbings to the system programmers. The downside of this is the exploited systems that potentially everyone has to suffer.

R: Animats
_"Attackers install it on Linux systems, including embedded devices such as WiFi routers and network-attached storage devices, by guessing SSH (Secure Shell) login credentials using brute-force attacks."_
I'm seeing that - endless attempts to log in as root over SSH. It's apparently aimed at random IP addresses - it was hitting a newly installed server that wasn't even in DNS yet. Here's what the attack looks like:
Sep 30 00:06:39 s3 sshd[29144]: Failed password for root from 43.229.53.44 port 44450 ssh2
Sep 30 00:06:42 s3 sshd[29185]: Failed password for root from 43.229.53.44 port 11229 ssh2
Sep 30 00:06:42 s3 sshd[29188]: Failed password for root from 43.229.53.44 port 12106 ssh2
Sep 30 00:06:44 s3 sshd[29185]: Failed password for root from 43.229.53.44 port 11229 ssh2
Sep 30 00:06:44 s3 sshd[29188]: Failed password for root from 43.229.53.44 port 12106 ssh2
Sep 30 00:06:46 s3 sshd[29185]: Failed password for root from 43.229.53.44 port 11229 ssh2
Sep 30 00:06:46 s3 sshd[29188]: Failed password for root from 43.229.53.44 port 12106 ssh2
Sep 30 00:06:48 s3 sshd[29201]: Failed password for root from 43.229.53.44 port 33446 ssh2
Sep 30 00:06:49 s3 sshd[29212]: Failed password for root from 43.229.53.44 port 34570 ssh2
Sep 30 00:06:50 s3 sshd[29201]: Failed password for root from 43.229.53.44 port 33446 ssh2
Sep 30 00:06:51 s3 sshd[29212]: Failed password for root from 43.229.53.44 port 34570 ssh2
Sep 30 00:06:52 s3 sshd[29201]: Failed password for root from 43.229.53.44 port 33446 ssh2
Sep 30 00:06:53 s3 sshd[29212]: Failed password for root from 43.229.53.44 port 34570 ssh2

R: jasonoliveira
so fail2ban, disabling root logins, and key-based authentication are the answer?

R: dredmorbius
A large portion of it, yes. Limiting SSH to specific IPs or netblocks, and/or specifically _excluding_ those you're likely to never use, would also help cut down on the attack surface.
Not that hosts _within_ your perimeter don't get compromised, but there are far fewer of them. 2FA including keyfobs is yet another option.

R: illumen
Get a range of IP addresses where you will log in from (your DSL address ranges from home, for example), and block all other IP addresses on that port. You may want to add some other hosts you control to that whitelist as a backup in case your ISP changes addresses. If you are the only one that is supposed to access some ports, then block everyone else. (Use last -a to see all the places you've logged in from recently to start making your whitelist.) You can also rate limit new connections to ssh depending on your use cases, so even if your whitelist is beaten you limit the number of attempts. Now add fail2ban, so that even at 10 new rate-limited connections per minute they get banned after X failed attempts. Change the default port of ssh to make it harder to find. Remove password authentication, and make sure you have good secure keys. You can also disable sudo, and perhaps root logins. As well, you could try two-factor auth, and perhaps move your keys to a secure (encrypted) USB drive. But really, whitelisting who can connect to ssh in the first place will stop 99% of the automated brute force attacks, with all the rest stopping the remaining 0.99%.

R: greyfade
If you configure your SSH server for a limited, secure set of ciphers and HMACs, these automated attacks won't even get to the point of attempting authentication.
https://stribika.github.io/2015/01/04/secure-secure-shell.html
Since following the above guide, my auth log has been filled with nothing but this:
Sep 30 09:46:00 myserver sshd[74033]: fatal: no matching mac found: client hmac-sha1,hmac-sha1-96,hmac-md5,hmac-md5-96,hmac-ripemd160, server , [preauth]
Of course, I can't use old SSH clients to connect, but it's a good tradeoff, IMO.

R: illumen
That's a very good hint, thanks. There have only been a couple of remote ssh exploits (that I'm aware of) and both of them were stopped by whitelisting. If you can figure out your address ranges, I think it still makes sense to whitelist. I guess the bots will also catch up with modern ciphers.

R: nchelluri
To what end would people do this?
> The most frequent targets have been companies from the online gaming sector, followed by educational institutions, the Akamai team said in an advisory...
Doesn't seem like you'd ever make any money doing that. I suppose one competitor could target another. Not sure if it's a valuable thing to do. I just dunno what the point of DDoS is besides ransom-type stuff.

R: tomtoise
Speaking from experience (of getting hit, not running one of these services!) - gaming is a particularly juicy target; you can set up a basic front-end that allows people to 'hit' certain IP addresses offline for x amount of time in exchange for money. Its use is quite common in certain games like League of Legends, CS:GO etc.

R: twoodfin
I assume to gain an advantage in PvP? If so, how does the attacker identify the IP address they want to hit?

R: oddevan
OK, DDoSing and lagswitching have been a big deal in the Destiny community recently, so here's what I know: In Destiny, there's often a weekend-long PvP showdown with top-tier in-game rewards. The game type is 3v3 with no matchmaking, so you must bring your own team; the best rewards are for winning 9 in a row. If your team disconnects, it counts as a loss (no cowards here!).
The way Destiny is set up, one player's console is considered the "host" and all the other players connect to it. So if you or someone on your team (a 50-50 chance) is the host, you can see the IPs of everyone in the game by monitoring network traffic. If not, you can still see the host's IP. Find out IP of any opposing player, aim the botnet, fire. Boom. You've instantly got a one-player advantage. Repeat ad nauseam. Suddenly going 9-0 isn't so daunting. The worst part? Unlike other network manipulations, this one is a lot harder for Bungie to prove. But you can bet they're working on it, and they aren't shy about wielding the banhammer if they find cheating. R: SixSigma Calling it a Linux security issue is a bit erroneous, a weak Ssh password on BSD will get you hijacked too. Or Osx, or even Windows! R: ibmthrowaway218 Sure, but if the payload in this example is a Linux binary then it won't do anything to an OSX/BSD/Windows machine. If the payload is an interpreted script, or something that's compiled (if it can find a compiler, etc) then it may be able to infect different OSes. R: SixSigma I wonder if Linux emulation on BSD / plan9 / cygwin will run it? R: ChuckMcM This is, in my experience, another interesting and unintentional side effect of a free OS. When the OS is free, the engineers I've talked to who have been tasked with using it to implement their product are generally less experienced (which is to say less expensive). And the entire impression I get is that there is some cost minimization going on. Some sort of psychological trick that if its an expensive OS you need some expert engineer to bend it to your will, but if its free and seems to be everywhere, well you can hire an intern to do all your integration, development, and testing. R: drzaiusapelord Maybe, but out of the box a typical linux distro is fairly unsecure. There's no rate limiting of password guessing, no default password policy, no default fail2ban behvaiors, usually no firewall enabled by default, no auto-updating service on by default (or any at all), etc. Some popular linux services are scary unsecure. Samba runs as root, which is a security nightmare considering its that poorly implemented reverse engineered crapfest with a long history of security problems. Popular FOSS projects hosted on linux are bad too, like the recent massive hole in Drupal and the endless Wordpress holes (I bet this recent one is WP related). Not to mention heartbleed, shellshock, etc. I don't think there's anything wrong with a cheap and common OS for junior people to use, its just the OS needs to be shipped with sane defaults. This will never happen in the typical world of FOSS for fear of breaking things and "keeping things simple" and "you should know what you're doing." The problem is many people don't know what they're doing, at least in terms of security. An authoritarian attitude regarding security that would tie developer's hands is the antithesis of FOSS culture. I think the status quo of endless hacking is just the way its going to be until everyone gets serious about security. What that means exactly is hard to say, but shifting to some type of memory managed language (Rust perhaps?) and sacrificing some performance for security are probably what its going to look like. Even then, botnets aren't going to go away, but might be small enough where they aren't able to do DDOS's like this regularly. 
R: thatusertwo My VPS had a default username/password called testuser, I never actually noticed it existed till my server got taken over by some chinese bruteforce attack. Every time I rebuilt the server that user account was created, apparently it is part of the image used by my ISP. My point is that many people probably have their servers taken over in a similar way without even realizing it. R: ryanlol I'm pretty curious, why is this particular botnet noteworthy? There's a plenty of kids on efnet capable of pushing that R: elsjaako What's the best way to make sure that your system isn't part of this? R: Tobu `PermitRootLogin = without-password` in /etc/ssh/sshd_config Same thing for non-root: `AuthenticationMethods = publickey` And when buying a router, buy something that will get regular security updates, or where you can put OpenWRT. R: bkor From openssh 7.0 release notes: * PermitRootLogin=without-password/prohibit-password now bans all interactive authentication methods, allowing only public-key, hostbased and GSSAPI authentication (previously it permitted keyboard-interactive and password-less authentication if those were enabled). It mentions that previously without-password it would still allow keyboard- interactive logins. Should be fairly easy to fake for a botnet! R: mirimir I still prefer "PermitRootLogin=no". R: lordnacho How do the botnets come to be named? Is it named by whichever security firm finds it? R: ryanlol There's no official rules on this, usually the botnets are named by their developers and renamed by AV developers for marketing purposes. This results in stupid shit like kaiten.c which is a *nix bot from 20 years ago now being known (and detected) as "OSX/Tsunami". This is actually harmful as it makes researching whatever infection you got harder when every AV vendor decides to give a very well known malware a different name. R: z3t4 The biggest attack surface is probably cgi scripts (php etc).
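For reference, the sshd_config hardening that several commenters above converge on (key-only authentication, no interactive root login) looks roughly like the sketch below; the exact defaults depend on your OpenSSH version:

# only allow key-based root login; "without-password" is the older spelling of prohibit-password
PermitRootLogin prohibit-password
# disable password logins entirely
PasswordAuthentication no
# accept only public-key authentication
AuthenticationMethods publickey
# limit authentication attempts per connection
MaxAuthTries 3

Combined with fail2ban or an IP whitelist on the SSH port, this removes most of the brute-force surface described in the thread.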
How to accomplish secure authentication in a JavaScript API

We are trying to develop an API for our service and we are unsure how to accomplish the authentication process. Our clients have to be able to include a .js file which connects to our Node.js server. The key point here is that our backend must track the use of the API, so our clients are going to be charged according to their use. Our intention is to design the API to be as simple as possible for the users, as well as making it secure. We have thought of:

Creating an API_KEY for each user and matching it with their domains in every request. The problem here could be that the domain is not the most secure option, is it? We understand that the domain may be spoofed in an HTTP request.

Using an SDK with an API_KEY and SECRET_KEY to generate a token for a given session and user. I don't dislike this option at all, but we would prefer a simpler solution for the developers, one which would not involve using several APIs.

Do you have any ideas/suggestions/considerations/whatever? Thanks in advance.

If by 'the domain' you mean the value passed in the Origin header, this is not spoofable on newer browsers that support CORS. It can still be spoofed on older browsers (notably IE) but this might be acceptable because a single newer-browser request could expose the subterfuge - they could only get away with it if all their users were on IE.

I like your second option best. In addition to an API_KEY and SECRET_KEY you can do a number of other things. First of all, make sure all requests are done over HTTPS. It is the single most important security feature you can add... and it's easy to do. Second, if you want to make things super secure, send a timestamp from the client. This timestamp can be used to hash the SECRET_KEY, providing you protection against someone recreating the request data. Your client would send the following with every request:

1) timestamp - you would store this in your database and reject any new requests with a smaller number
2) API_KEY - essentially a userID
3) signature - this is a hash of the SECRET_KEY, timestamp, and API_KEY. (The hashing algorithm and order of parameters is generally unimportant. SHA1 is pretty decent.)

Your server can calculate this hash to validate that this client actually knows the SECRET_KEY. At no time should your client ever disclose the SECRET_KEY to anyone.

You can also look into the OAuth standard. I believe Node.js and JavaScript in general both have libraries for it. NodeJS OAuth Provider

Is it possible to avoid disclosing the SECRET_KEY to any users (including the actual user) when dealing only in JavaScript?

@Sti Just wondering / hoping you now know the answer! sincerely Mr Curious
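As a rough sketch of the timestamp-plus-signature scheme described in the answer above - using an HMAC rather than a bare hash, with illustrative names rather than a prescribed API:

const crypto = require('crypto')

// Client side: runs on the customer's own server, never in the browser,
// because SECRET_KEY must stay private.
function signRequest(apiKey, secretKey) {
  const timestamp = Date.now()
  const signature = crypto
    .createHmac('sha1', secretKey)          // HMAC-SHA1 over the public request fields
    .update(`${apiKey}:${timestamp}`)
    .digest('hex')
  return { api_key: apiKey, timestamp, signature }
}

// Server side: recompute the signature and reject stale or replayed timestamps.
function verifyRequest({ api_key, timestamp, signature }, secretKey, lastSeenTimestamp) {
  if (timestamp <= lastSeenTimestamp) {
    return false                            // replay protection: timestamps must increase
  }
  const expected = crypto
    .createHmac('sha1', secretKey)
    .update(`${api_key}:${timestamp}`)
    .digest('hex')
  return expected === signature
}

This also speaks to the last comment: if the integration runs purely in the browser, the SECRET_KEY cannot be kept from users, so the signing step has to live on the customer's backend (or you fall back to the Origin/API_KEY check and accept its weaker guarantees).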
Office 2013 was released for consumer preview in July, but one un-hyped detail was the new development features Microsoft added to Office and SharePoint. We highlighted Microsoft's bold move to offer its Office software suite in the cloud some months ago. However, in July, news of Office 365 picked up again when the company released the customer preview of Office 2013. Most of the coverage focused on how it was completely revamped to work better with Microsoft's focus on the tablet market, but there was another interesting part of the announcement that got overlooked by most. The development model of Office and SharePoint has been completely reworked, which creates a wide new market for developers.

One director in Microsoft's Office division, Richard Riley, is in charge of the SharePoint marketing team. He was quoted as saying, "This is the most significant change to Office and SharePoint in the last 15 years." Microsoft's Office user base is over one billion people, and the company is hoping that that equates to the success that Apple has had with its app store and Google Play, but in the form of an Office App Store.

Brian Jones, group manager of the Office Solutions Framework Team, said, "The number of people that actually have Office is a huge addressable market for developers. If you are already an Office or SharePoint developer, you're going to love what we've done with the new model, while we continue to support your existing solutions. If you aren't yet an Office developer, but you build web solutions, you're going to want to give us a big hug, as we're bringing you a huge set of potential customers."

Riley went on to explain: "It's similar to the model you see with Facebook. Facebook isn't hosting the app you just added to your page, Zynga is. Facebook just asks for the right bits when you engage the app."

Microsoft has even launched a new Office developer center to act as a repository for documentation, forum discussions, tutorials, and samples.

Note: Microsoft saw that it's more beneficial for customers to be able to access their data from anywhere. Start cloud hosting with NetHosting today to get incredible server flexibility and the ability to access your data from anywhere.

Jones has already begun envisioning the kinds of apps that developers could come out with that the general public would be interested in. One already-developed app is named Olympic Medal Tracker; it retrieves real-time online data about London Olympic events and then puts that data into an Excel spreadsheet. Another app named Bing Maps takes the Olympic Medal Tracker data from the spreadsheet and puts it on a world map, with other interactive features also available on the map. There's even an Outlook app that lets users take notes on top of emails they've sent or received. The note then gets attached to the contact and will come up whenever you interact with that particular contact. Essentially, this app creates a lightweight CRM function. Other ideas include SharePoint apps like a workflow add-on or a room reservation system.

To read more about apps in the cloud, check out our blog post about Microsoft Lync, the office communications application that Microsoft launched that allows users to chat from their office email accounts.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import unittest

import psycopg2
import psycopg2.extras

from pgextras import PgExtras, sql_constants as sql


class TestPgextras(unittest.TestCase):

    def setUp(self):
        self.dsn = 'dbname=pgextras_unittest'
        self.conn = psycopg2.connect(
            self.dsn, cursor_factory=psycopg2.extras.NamedTupleCursor
        )
        self.cursor = self.conn.cursor()

    def drop_pg_stat_statement(self):
        if self.is_pg_stat_statement_installed():
            statement = "DROP EXTENSION pg_stat_statements"
            self.cursor.execute(statement)
            self.conn.commit()

    def create_pg_stat_statement(self):
        if not self.is_pg_stat_statement_installed():
            statement = "CREATE EXTENSION pg_stat_statements"
            self.cursor.execute(statement)
            self.conn.commit()

    def is_pg_stat_statement_installed(self):
        self.cursor.execute(sql.PG_STAT_STATEMENT)
        results = self.cursor.fetchall()
        return results[0].available

    def test_that_pg_stat_statement_is_installed(self):
        self.create_pg_stat_statement()

        with PgExtras(dsn=self.dsn) as pg:
            self.assertTrue(pg.pg_stat_statement)

    def test_that_pg_stat_statement_is_not_installed(self):
        self.drop_pg_stat_statement()

        with PgExtras(dsn=self.dsn) as pg:
            self.assertRaises(Exception, pg.pg_stat_statement)

    def test_cache_hit(self):
        with PgExtras(dsn=self.dsn) as pg:
            results = pg.cache_hit()

        self.assertEqual(results[0].name, 'index hit rate')
        self.assertEqual(results[1].name, 'table hit rate')

    def test_version(self):
        with PgExtras(dsn=self.dsn) as pg:
            results = pg.version()

        self.assertTrue(results[0].version)

    def tearDown(self):
        self.cursor.close()
        self.conn.close()


if __name__ == '__main__':
    unittest.main()
You save about 15% by purchasing the bundle instead of the individual libraries.
- Already own one of the libraries? Please contact me – here – for an affordable upgrade path.
- Purchases made on A Sound Effect are applicable with proof of purchase.

- SD06 Wind Harp – West Texas (Stereo)
- AMB71 Iceland: Underwater Snow
- AMB70 Iceland: Wind 3 (Stereo + Quad)
- AMB69 Iceland: Wind 2 (Stereo)
- AMB68 Norway: Rain & Sleet (Stereo)
- AMB65 Norway Wind (Stereo + Quad)
- AMB62 New Mexico: Falling Snow (Stereo + Quad)
- AMB61 New Mexico: Wind (Stereo + Quad)
- AMB60 Greenland: Rain & Sleet (Stereo)
- AMB52 Tennessee: Rain (Stereo)
- AMB51 Tennessee: Wind (Stereo + Quad)
- AMB49 Colorado: Wind (Stereo + Quad)
- AMB47 Colorado: Thunderstorms (Stereo)
- AMB45 Iceland: Wind (Stereo)
- AMB41 Iceland: Rain (Stereo)
- AMB39 Colorado: Falling Snow (Stereo)
- AMB38 Maine: Wind (Stereo + Quad)
- AMB35 Maine: Rain (Stereo + Quad)
- AMB32 Pacific Northwest: Wind – Eastern Washington (Stereo + Quad)
- AMB31 Pacific Northwest: Falling Snow – Eastern Washington (Stereo + Quad)
- AMB25 Alaska: Rain (Stereo + Quad)
- AMB24 Alaska: Wind (Stereo + Quad)
- AMB04 High Desert Thunderstorms – West Texas (Stereo)
- AMB03 High Desert Winds 2 – West Texas (Stereo)
- AMB02 High Desert Winds 1 – West Texas (Stereo)
- Use a desktop computer for easier spreadsheet viewing.

Specs: 88 hours total. See full specs in the spreadsheet above.
Metadata: CSV, Soundminer, BWAV, Text Markers
Categories: Environments, Weather
Location: Various – see spreadsheet above
Mastering: read my Field Recording Mastering Rules for more info.
Delivery: Instant - blazingly fast - digital download
License type: Single user, royalty-free - for a multi-user license, click here
Sound Library Guarantee: If you're unhappy with my field recordings in any way, I'll give you store credit equal to the cost of the sound library. Read the full details – here.
"""
Module with processing functions
"""

import typing

import cv2
import imgaug
import numpy as np


def pad_to_size(image: np.ndarray, size: int, color: typing.Tuple[int, int, int]) -> np.ndarray:
    """
    Given an image, center-pad it with the selected color to the given size in both dimensions.

    Args:
        image (np.ndarray): 3D numpy array
        size (int): size to which image should be padded in both directions
        color (typing.Tuple[int, int, int]): color that should be used for padding

    Returns:
        np.array: padded images
    """

    # Compute paddings
    total_vertical_padding = size - image.shape[0]
    upper_padding = total_vertical_padding // 2
    lower_padding = total_vertical_padding - upper_padding

    total_horizontal_padding = size - image.shape[1]
    left_padding = total_horizontal_padding // 2
    right_padding = total_horizontal_padding - left_padding

    # Create canvas with desired shape and background image, paste image on top of it
    canvas = np.ones(shape=(size, size, 3)) * color
    canvas[upper_padding:size - lower_padding, left_padding:size - right_padding, :] = image

    # Return canvas
    return canvas


def remove_borders(image: np.ndarray, target_size: typing.Tuple[int, int]) -> np.ndarray:
    """
    Remove borders from around the image so the output is of target_size.

    Args:
        image (np.ndarray): input image
        target_size (typing.Tuple[int, int]): tuple (height, width) representing target image size

    Returns:
        np.ndarray: output image
    """

    # Compute paddings
    total_vertical_padding = image.shape[0] - target_size[0]
    upper_padding = total_vertical_padding // 2
    lower_padding = total_vertical_padding - upper_padding

    total_horizontal_padding = image.shape[1] - target_size[1]
    left_padding = total_horizontal_padding // 2
    right_padding = total_horizontal_padding - left_padding

    return image.copy()[
        upper_padding: image.shape[0] - lower_padding,
        left_padding: image.shape[1] - right_padding,
        :]


def get_sparse_segmentation_labels_image(
        segmentation_image: np.ndarray,
        indices_to_colors_map: typing.Dict[int, typing.Tuple[int, int, int]]) -> np.ndarray:
    """
    Creates a segmentation labels image that translates segmentation color to index value.
    For each pixel without a reference color provided in indices_to_colors_map value 0 is used.

    Args:
        segmentation_image (np.ndarray): 3 channel (blue, green, red) segmentation image
        indices_to_colors_map (typing.Dict[int, typing.Tuple[int, int, int]]): dictionary mapping
            categories indices to colors

    Returns:
        np.ndarray: 2D numpy array of sparse segmentation labels
    """

    segmentation_labels_image = np.zeros(segmentation_image.shape[:2])

    for index, color in indices_to_colors_map.items():
        color_pixels = np.all(segmentation_image == color, axis=2)
        segmentation_labels_image[color_pixels] = index

    return segmentation_labels_image


def get_dense_segmentation_labels_image(
        segmentation_image: np.ndarray,
        indices_to_colors_map: typing.Dict[int, typing.Tuple[int, int, int]]) -> np.ndarray:
    """
    Given sparse encoded segmentations image, convert it to bgr segmentations image

    Args:
        segmentation_image (np.ndarray): 2D array, sparse encoded segmentation image
        indices_to_colors_map (dict): dictionary mapping categories indices to bgr colors

    Returns:
        [np.ndarray]: 3D BGR segmentation image
    """

    bgr_segmentation_image = np.zeros(
        shape=(segmentation_image.shape[0], segmentation_image.shape[1], 3),
        dtype=np.uint8)

    for index, color in indices_to_colors_map.items():
        mask = segmentation_image == index
        bgr_segmentation_image[mask] = color

    return bgr_segmentation_image


def get_augmentation_pipepline() -> imgaug.augmenters.Augmenter:
    """
    Get augmentation pipeline
    """

    return imgaug.augmenters.Sequential([
        imgaug.augmenters.Fliplr(p=0.5),
        imgaug.augmenters.SomeOf(
            n=(0, 3),
            children=[
                imgaug.augmenters.Affine(rotate=(-10, 10)),
                imgaug.augmenters.Affine(scale=(0.5, 1.5)),
                imgaug.augmenters.Affine(shear={"x": (-20, 20)}),
                imgaug.augmenters.Affine(shear={"y": (-20, 20)}),
            ])
    ])


def are_any_target_colors_present_in_image(
        image: np.ndarray,
        colors: typing.List[typing.Tuple[int, int, int]]) -> bool:
    """
    Check if image contains any of target colors

    Args:
        image (np.ndarray): 3D array that should be examined for target colors
        colors (typing.List[typing.Tuple[int, int, int]]): list of colors to search for

    Returns:
        bool: True if any of target colors is found in image, False otherwise
    """

    for color in colors:
        # inner np.all checks that for given pixel in image all three components of a color are correct,
        # then outer any checks that there was any pixel with given color
        if any(np.all(image.reshape(-1, 3) == color, axis=-1)) is True:
            return True

    # No target color found in image
    return False


def get_segmentation_overlay(
        image: np.ndarray,
        segmentation: np.ndarray,
        background_color: typing.Tuple[int, int, int]) -> np.ndarray:
    """
    Get image with segmentation overlaid over it.

    Args:
        image (np.ndarray): 3-channel image
        segmentation (np.ndarray): 3-channel segmentation
        background_color (typing.Tuple[int, int, int]): background color, segmentation pixels
            that match background color won't be overlaid over image

    Returns:
        np.ndarray: Image with segmentation overlaid over it
    """

    blended_image = cv2.addWeighted(image, 0.2, segmentation, 0.8, 0)

    overlay = image.copy()

    mask = np.logical_not(np.all(segmentation == background_color, axis=-1))
    overlay[mask] = blended_image[mask]

    return overlay
How to solve complex problems using systems thinking

So you have what appears to be an unsolvable problem on your hands. It's an important issue that's proven to be chronic, its recurrence has made it familiar enough to be identified with a known history, and many have unsuccessfully tried to solve it before. What you have is a complex problem. Fortunately, a tested strategic approach already exists for solving complex problems - systems thinking.

What is Systems Thinking?

Founded in 1956 by MIT professor Jay Forrester, systems thinking is an approach to solving complex problems by understanding the systems that allow the problems to exist. You have a complex problem when:
- There's no clear-cut agreement on what the problem really is, because the context it depends on evolves over time.
- It's difficult to assess the real causes behind the problem, due to many factors and feedback loops influencing each other.
- It's not certain what the best steps are to solve the problem, because there are many potential and/or partial solutions that may require incompatible and even conflicting steps.
- It's hard to pinpoint who has sufficient ownership, accountability, and authority to solve the problem, or whether any single individual suits the criteria, and it's challenging to keep various stakeholders from getting in each other's way.

Where traditional analysis zooms into a smaller piece of a whole, systems thinking zooms out to view not just the whole, but other wholes that are affecting each other. Through this approach, systems thinking formalizes methods, tools, and patterns that allow practitioners to understand and manage complex settings and environments. This is why systems thinking is important - and effective - in solving complex problems.

3 Unique Systems Thinking Benefits

Like other established approaches to solving different kinds of problems, systems thinking can prove insightful and effective when used properly. Beyond those general benefits, systems thinking also presents some unique advantages:

Systems Thinking Allows Meaningful Failure
Failure is a discovery mechanism in properly applied systems thinking. It allows you to learn and improve the design or implementation of your solution. Failure in systems thinking can:
- Allow you to learn and adapt from small missteps quickly.
- Show you the right option, or at least rule out wrong ones, when it comes time to test hypotheses.
- Only temporarily hamper a system, rather than completely jeopardize it, in exchange for meaningful input.

Systems Thinking is Inclusive and Collaborative
Because of the holistic viewpoint taken in systems thinking, it inherently opens up levers for collaboration across involved parties. It isn't just nice to gain input from diverse stakeholders with dynamically interrelated roles and interests - it's required. Implemented properly, systems thinking encourages a culture of inclusiveness and collaboration to fix systemic problems that in turn benefit multiple stakeholder teams simultaneously.

Systems Thinking Provides Actionable Foresight
Part of why complex problems are hard to solve is that each involved party only ever sees their portion of the issue. Therefore, they typically execute solutions that resolve parts of the constantly evolving problem, which in the holistic view may even lead to other issues or complications. Systems thinking allows you to predict how systems change and how steps within parts of the system will impact the whole.
In applying systems thinking, you analyze causal structure and system dynamics, assess policies and scenarios, and test action steps and hypotheses to foresee consequences in order to synthesize long-term strategies. Solving Complex Problems with System Thinking Frameworks and Methodologies So how do you use systems thinking and its frameworks and methodologies in your organization? Systems thinking is not an instant panacea. Implementing its methods and frameworks isn’t like applying smart charts to raw data on spreadsheets. Those aren’t complex problems. The implementation of systems thinking involves the application of frameworks that illustrate levels of thinking, and the use of tools to allow people to better understand the behaviors of systems. The Iceberg Framework At a primary level, systems thinking takes a holistic view to try and understand the connectedness and interactions of various system components, which themselves could be sub-systems. You can start by focusing on points that people gloss over, and attempt to explore these issues by focusing on aspects you don’t understand. The iceberg framework in systems thinking can guide you through this. The iceberg framework illustrates four levels of thinking about a problem, arranged thus: - “Events” - Events form the tip of the iceberg. Events that characterize a complex problem are the most visible, and therefore also the ones that appear to require being addressed in an immediate, reactionary way. This level of thinking is the “shallowest,” as typically events are only symptoms of underlying issues. - “Patterns and trends” - Directly below the tip of the iceberg, the Patterns level is the first one hidden from view. Thinking deeper about events can lead problem solvers to more insight into patterns and trends that lead to them. Any approaches to solving patterns and trends will more effectively resolve events. - “Underlying structure” - Even deeper below the surface, you’ll find there are underlying structures that influence the patterns and trends that lead to the visible symptoms of complex problems. This is where the interaction between system components produces the problematic patterns that in turn cause the visible events. - “Mental models” - Finally, the bottom of the iceberg that props everything up are the assumptions, beliefs, and values held about a system culminating in the inadvertent creation and maintenance of underlying structures that result in unfavorable patterns within systems, which in turn bubble up to the surface as symptomatic events. Once systems thinking practitioners understand this framework, they can employ tools and technology that allow human perception to genuinely digest the behavior of complex systems. At this level of systems thinking, qualitative tools generate knowledge to unravel complex problems. Causal Loop Diagrams and System Archetypes Some of the most common and flexible tools in systems thinking are causal loop diagrams that demonstrate system feedback structures. They show causal links between system components with directional cause and effect. Causal loop diagrams display the interconnectedness of system components to serve as a starting point for further discussion and policy formulation. Naturally, these diagrams can also help problem solvers identify in which parts of the system they can assert a positive influence to impact the entire loop favorably. In effect, these diagrams can help prevent poor decisions such as quick fixes. 
Another important tool in systems thinking is the set of system archetypes, which describe in general terms how complex systems work. They are generic models or templates representing broad situations to provide a high-level map of complex system behavior. Because they have been well studied and mapped, these models can identify valuable areas where steps can be taken to resolve complex problems through interventions that are called leverages. In general, there are two basic feedback loops (reinforcing and balancing) that underlie nine system archetypes (or eight or ten, depending on who you ask):
- Balancing loops with delays
- Drifting goals
- Fixes that fail
- Growth and underinvestment
- Limits to success
- Shifting the burden
- Success to the successful
- Tragedy of the commons

Each of these archetypes is rarely a sufficient model on its own - they merely offer insight into possible, common underlying problems. They can of course also be used as a basic structure upon which you can develop a more detailed model specific to your complex systems.

Adding Advanced Tools into Your Systems Thinking Toolbox

There are several dynamic and structural thinking tools in the systems thinking repertoire. Causal loop diagrams and system archetypes are dynamic thinking tools. Graphical function diagrams and policy structure diagrams are structural thinking tools. All of these can be mapped or used in computer-based tools like a management flight simulator or learning lab.

Of course, there are limits to what you can achieve with your toolbox. Causal loop diagrams, for example, are static: they cannot describe the evolving properties of a system over time. To overcome such limitations, you need to simulate management issues quantitatively through system dynamics modeling. Computer models of system dynamics allow you to explore time-dependent complex system behavior under different states. They essentially enable you to simulate how a causal loop diagram evolves as it is affected by different assumptions over time.

Solving Complex Problems in Project Management

So should you start learning about causal loop diagrams and begin shopping for the best system dynamics computer modeling tools on the market as soon as you find a project management problem you can't seem to solve? Don't jump the gun. You can implement systems thinking in inquiry and problem diagnosis to great effect without needing diagrams and computer models. Apply the concept of the iceberg model and you might already find you're asking better questions than before, or you're catching common quick-fix solutions - like needing more budget or hiring more people - that don't address deeper problems.

Once you realize that you've got a complex problem that requires an in-depth systems thinking approach, you can then explore your options with your team. The important part is to embrace the mental models that make systems thinking invaluable for understanding complex systems and resolving the complex problems that arise from them.
Enhance support for real time workloads

What this PR does / why we need it:

This PR aims to improve the support for real-time workloads and tune libvirt's XML for low CPU latency. The motivation for this change is to make kubevirt aware of realtime workloads and tune libvirt's XML to reduce CPU latency, following libvirt's recommendation. It achieves this goal by implementing the following changes:

Extends the VMI schema to add a new knob structure inside spec.domain.cpu named realtime. This new knob is itself a structure that so far only contains the field mask. The mask field allows the user to define the range of vcpus to pin in the vcpusched XML element, with scheduler type fifo and priority 1. If this field is left undefined, the logic will pin all allocated vcpus for the real-time workload.

Example when enabling the realtime knob without defining the mask:

spec:
  domain:
    cpu:
      realtime: {}

When the realtime knob is enabled, the virt-launcher will add the following libvirt XML elements:

- When huge pages are supported: <nosharepages/> (virt-handler will set the memory lock limits for the qemu-kvm process).
- Disable the Performance Monitoring Unit in the Features section: <pmu state="off"/>
- Add the vcpusched element in the CPUTune section: <vcpusched scheduler="fifo" priority="1" vcpus="<pinned vcpus>">

Nodes that support realtime workloads (kernel setting kernel.sched_rt_runtime_us=-1 to allow unlimited runtime of processes running with realtime scheduling) will be labeled with kubevirt.io/realtime. When deploying a VMI that has the realtime knob enabled in its manifest, the generated pod manifest will be mutated to include the node label selector kubevirt.io/realtime, so that the pod is scheduled to run in a node that supports realtime workloads. In short, a VMI with the realtime knob enabled will require a node that is capable of running realtime workloads (kubevirt.io/realtime label), is able to isolate CPUs (cpumanager=true label) and has been configured with hugepages.

Example of a complete VM manifest (indentation reconstructed from the flattened description, preserving the original key order):

---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-realtime
  name: vm-realtime
  namespace: poc
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-realtime
    spec:
      domain:
        devices:
          autoattachSerialConsole: true
          autoattachMemBalloon: false
          autoattachGraphicsDevice: false
          disks:
          - disk:
              bus: virtio
            name: containerdisk
        machine:
          type: ""
        resources:
          requests:
            memory: 4Gi
            cpu: 2
          limits:
            memory: 4Gi
            cpu: 2
        cpu:
          model: host-passthrough
          cores: 2
          sockets: 1
          threads: 1
          dedicatedCpuPlacement: true
          isolateEmulatorThread: true
          ioThreadsPolicy: auto
          features:
          - name: tsc-deadline
            policy: require
          numa:
            guestMappingPassthrough: {}
          realtime: {}
        memory:
          hugepages:
            pageSize: 1Gi
          guest: 3Gi
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/jordigilh/centos8-realtime:latest
        name: containerdisk

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): TBD

Special notes for your reviewer:
/cc @rmohr @vladikr @davidvossel
/cc @pkliczewski @fabiand

Release note: adds support for real time workloads

@vladikr @davidvossel @rmohr I've finished implementing the changes for the real-time enhancements. Can you take a look whenever you have time? Thanks! /Jordi

any thoughts on how to approach functional testing? I don't think it's realistic to actually test realtime consistency or anything with the way our CI is set up.
However, there should be some basic validations we can test to ensure realtime guests bootup properly. We had an initial discussion to use a centos image and install the RT packages during CI execution, and use that image to run against a predefined template, like the one I used in my tests. Would that work? /retest /retest /test pull-kubevirt-e2e-kind-1.19-sriov /retest /retest /retest /test pull-kubevirt-build /retest /retest /test pull-kubevirt-e2e-kind-1.19-sriov /test pull-kubevirt-e2e-k8s-1.20-sig-network /retest /test pull-kubevirt-check-tests-for-flakes /retest @jordigilh you have tests Realtime should start the realtime VM when realtime mask is specified and Realtime should start the realtime VM when no mask is specified failing. @jordigilh you have tests Realtime should start the realtime VM when realtime mask is specified and Realtime should start the realtime VM when no mask is specified failing. Yes, looking into that. It's a privilege issue that the virt-launcher is unable to set the scheduling priority for qemu-kvm. These tests pass in my local environment. /test pull-kubevirt-e2e-k8s-1.19-sig-compute /retest /retest /retest /retest /test pull-kubevirt-goveralls /retest /test pull-kubevirt-e2e-kind-1.19-sriov /retest /retest /test pull-kubevirt-e2e-kind-1.19-sriov /test pull-kubevirt-e2e-kind-1.19-sriov /test pull-kubevirt-e2e-kind-1.19-sriov /test pull-kubevirt-e2e-kind-1.19-sriov /test pull-kubevirt-e2e-kind-1.19-sriov /test pull-kubevirt-e2e-kind-1.19-sriov /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime any thoughts on how to approach functional testing? I don't think it's realistic to actually test realtime consistency or anything with the way our CI is set up. However, there should be some basic validations we can test to ensure realtime guests bootup properly. What we can tests is to validate that the number of vcpus have the expected scheduling (FIFO) and priority (1), and the qemu-kvm process has the proper memory lock limits. After this is just boot the VM and validate that it is possible to login. What we can tests is to validate that the number of vcpus have the expected scheduling (FIFO) and priority (1), and the qemu-kvm process has the proper memory lock limits. After this is just boot the VM and validate that it is possible to login. yup, that sounds like the most practical approach given the CI constraints we have with shared infrastructure. /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /retest /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-k8s-1.19-sig-compute /retest /retest PR pending of https://github.com/kubevirt/kubevirt/pull/6375 to be merged first as it requires changes to how the CDI operator is deployed. 
/retest /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-unit-test /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /test pull-kubevirt-e2e-kind-1.19-sriov @davidvossel @rmohr @vladikr @fabiand Development for this PR is completed. When you have time, can you take a look and let me know what you think? Thanks! /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime /retest /retest /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime I can just second Vladik - Thank you very much for this contribution! And all the work that specifically went into testing it.
[Lldb-commits] Question about IRMemoryMap Malloc matt.kopec at intel.com Wed Apr 24 14:02:24 PDT 2013 Can you explain what is being achieved with this line in IRMemoryMap::Malloc? 239 size_t allocation_size = (size ? size : 1) + alignment - 1; If this is attempting size alignment, it's incorrect. It looks like additional bytes are being set for the allocation size for some reason? This is causing problems on Linux and some expressions are exhibiting strange behaviour, for instance: Current executable set to 'a.out' (x86_64). (lldb) b main Breakpoint 1: where = a.out`main + 30 at main.cpp:14, address = 0x000000000040065e Process 21544 launched: '/home/mkopec1/dev/llvm/tools/lldb/test/expression_command/test/a.out' (x86_64) Process 21544 stopped * thread #1: tid = 0x5428, 0x000000000040065e a.out`main(argc=1, argv=0x00007fff914a0fe8) + 30 at main.cpp:14, stop reason = breakpoint 1.1 frame #0: 0x000000000040065e a.out`main(argc=1, argv=0x00007fff914a0fe8) + 30 at main.cpp:14 12 int main (int argc, char const *argv) -> 14 printf ("Hello world!\n"); 15 puts ("hello"); 16 // Please test many expressions while stopped at this line: 17 #if 0 (lldb) expression (int*)argv (int *) $0 = 0x00007fff914a0fe8 (lldb) expression ((char**)environ) (char *) $1 = 0x00007fff914a13b9 "SSH_AGENT_PID=1921" (lldb) expression int i = 5; i (int) $2 = 5 (lldb) expression $2 + 1 (int) $3 = 32531 The value of $3 is wrong. I did a little debugging and it looks like some allocated data is getting overwritten incorrectly during execution. However, if I align the size requested in Malloc, it works fine on Linux. It just so happens this case I've tested, the sizes were already aligned. More information about the lldb-commits
OPCFW_CODE
It can be a web (HTTP) server if you want to access the file with a web browser. I would like to download a text file stored on the SD card from a local ... File myfile, char httpreq reqbufsz 0, buffered http ... Download a zip archive of the entire current repository snapshot or run git clone http ... This library is recommended to control MOVI from the Arduino IDE. In this tutorial I use an Arduino Uno and an Ethernet shield. Rate above is the estimate for a phrase, respectively, and the file is accessible online as plain text. Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware. Usually the downloaded file is saved under the Downloads folder. Device in a data center and they do not allow external connections (HTTP, GSM) from the Arduino. It allows your Arduino to be a full-fledged ROS node which can directly ... Note: if you do not already have an Arduino IDE installed, download it from the Arduino website. Instead of interpreting and rendering the file, the browser will download it and save it locally. This example for a Yún device shows how to create a basic HTTP client that connects to the internet and downloads content. I'm trying to add the ESP8266 board to the Arduino IDE and I'm getting the following ... The open-source Arduino Software (IDE) makes it easy to write code and upload it to the board. The environment is written in Java and based on Processing and other open-source software. An Arduino and Ethernet shield are used as a client to fetch a web file. The first sketch saves the HTTP header and requested file to the SD card. Since March 2015, the Arduino IDE has been downloaded so many times. (Impressive!) No longer just for Arduino and Genuino boards, hundreds of companies around the world are using the IDE to program their devices, including compatibles, clones, and even counterfeits. In the Arduino IDE go to File > Examples > SODAQxxxx > example. A library for Arduino that allows you to control the GPRS Shield from Amperka. In our example the ESP8266 is the client and the server that is hosting our website is the server. How to post an HTTP request with Arduino ESP8266 AT, Jul 16, 2018 ... The UIPEthernet library can be downloaded from the GitHub website or from tweaking4all. In the File > Preferences menu, you can check "show verbose output" to see which parameters ... Arduino library for SIM800 for GPRS/HTTP communication. My attempts to download the file from the HTTP server were a success, but I am stuck ... Arduino is an open-source hardware and software company, project and user community that designs and manufactures single-board microcontrollers and microcontroller kits for building digital devices. Even though the file system is stored on the same flash chip as the program ... Installing Arduino libraries can be done in three different ways: manually installing the files, importing a ZIP file, and using the Library Manager. Once downloaded, go to the Arduino IDE and click Sketch > Include Library > Add .ZIP Library. In the .h configuration file, enable the line #define NODEMCU and disable the other boards.
OPCFW_CODE
A few weeks ago, I rebooted my computer and the monitor failed to turn on with the reboot. I manually turned the PC off and back on again, and the monitor powered up normally. Since then it has progressively gotten worse... every time I reboot/power off the PC, I have to turn the PC off and back on 5-10 times before the monitor will come on. After troubleshooting, I feel like this is more of a PC problem than a video card/monitor problem... but not sure what it is.

Here is more detail of what's going on:

The problem: Power off the computer... both the PC and monitor turn off, and the monitor light turns from green to amber. Press power on the PC - the keyboard lights immediately (and briefly) flash green and the PC fans/drives start running, but the monitor stays off with an amber light. Eventually, after 5-10 power cycles, the monitor will turn on and give a green light... when this happens, the keyboard lights flash green a 2nd time at the same time the monitor turns on.

I have tried using another monitor, and the same problem happens. I am using the integrated graphics VGA port... I tried installing a video card, and the same problem happened. I disconnected all CD/HD drives, and the same problem happens. I removed all RAM -- leaving only the monitor and keyboard/mouse connected. I powered up the PC to see what happens... only every 5-10 power cycles would I receive the BIOS beep indicating (I'm guessing) a RAM problem (1 short, 1 long repeating). Every other time, the motherboard fans would just power up and run without any beeping.

Even though the hard drive powers up, it doesn't appear to do anything past the initial power up when the monitor fails to activate. Only when the monitor turns on do I hear the hard drive load the OS. Each time I power the system up and the monitor doesn't come on, I do not hear any hard drive activity during the time it would normally be loading the OS.

I'm not sure if this is related, but I also recently developed an intermittent problem where my screen suddenly displays nothing but unchanging digital garbage... when this happens, my entire PC locks up and I am unable to do anything except manually power off the system.

Seeing that when I remove all the RAM, I only get the system error beeps at the same frequency that the monitor activates when everything is connected, it tells me that it probably isn't the RAM, monitor, or hard drives. So what does that leave? PSU and motherboard? Any ideas?

HP a1600n, 2 GHz, 1 GB RAM, Windows XP SP3

Thanks for any help!
OPCFW_CODE
Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Feb 15, 2016. I couldn't have said it better than yourself. Microsoft dropped the ball on this one. Keep in mind that these games they are bringing to Windows 10 store exclusivity are games that were/are Xbox One exclusive games and they are only bringing them to PC to promote their store and increase its library. Without Windows 10 store they would remain on Xbox One and go nowhere else. This doesn't mean they shouldn't clean up and make Windows 10 store a better experience for the user but it does mean anyone thinking this should be on Steam is way way out to lunch on this one. Not if they fundamentally change how every application run off windows stores without sandbox mode. Specially with how every devices running windows 10 OS need to work seamless, with MS's universal app platform on their one windows vision. Aside from Vsync issues, Borderless Windowed mode is pretty nice. I usually run most if not try to run all games that way. Other than that, awful restrictions being in the Windows Store. The sad part is the Windows Store could be awesome if Microsoft took it seriously. MS should improve their store. But then again we are going to need proper competition vs steam that is almost monopolizing AAA games on their platform. Like MS is trying to make Quantum Break only on their platform not that different from steam in really basic sense. I don't get what you guys are talking about. Microsoft has been constantly improving the Windows Store and listening to feedback. If you want Microsoft store items to be exposed to steam, just suggest it, they actually listen. Like yeah I get it, it may not be up to par with steam currently, but at least they are working on. Phil Spencer from Xbox division seems to be pretty supportive of PC gaming in general. I think they are moving in the right direction. Tbh Phil is rather awesome guy considering pc and console gaming. To me windows store is decent, not that far behind of origin but way ahead of uplay. Last time I looked at the Windows store it's just full of complete crap, it's one of the first things I disable and uninstall. I had high hopes for this but it seems they want to make us play tablet games on gaming pcs with the odd actual pc game which happens to be a shoddy console port restricted as hell, whats the point then? I would rather play it on xbox if that's how its gonna be they are kind of announcing forza something and more xbox exclusives for pc though, who knows maybe they listen to feedback and make this thing usable, it shows great potential from what they claim they want it to be like How is it restricted? I play Gigantic through the Windows Store, feels like any other PC game to me. Is Gigantic out already or is it a beta? read my post on page 1. Don't think it'll be shoddy.PC graphics rendering is still far superior to consoles even on the lowest settings.You literally have to hack game files in order to bring the IQ down to console level.Bugs and patches I've accepted as a natural part of it for some time now. No Sli, no way to turn off Vsync, no way to install the game to a different drive, the list goes on. I have no problem with other platforms, but when Uplay is much better then you know how bad it is. Fair enough -- his tablet comment made me think he was speaking from a game design/controls perspective, not like third party stuff. 
I think you can install on any drive since windows lets you decide which drives are for what apps and such the issues here are common pc annoyances, like the odd game which has this weird forced vsync mode where it decides arbitrarily to use 30 or 60fps from time to time, things you can simply edit a .ini file to fix, or annoying intro screens that you can simply delete from the game folder, devs almost never fix those issues, what hope is there that Microsoft will be different? How stupid one can be? And here we have Microsoft... Their stupid decisions almost killed xbox one and now this... F You M$. I won't buy this game on either pc or xbone. lol yeah let's not buy a good game because we have to click the buy button on a page we are not used to ! It's a bit more than that. You can actually move the damn ms store apps to different drive because microsoft has added that feature for ms apps. And it will work just as usual.
OPCFW_CODE
Checking devices should have a clear command python -m sounddevice is used to see the list of devices. I think that's a bit awkward, because the command is not descriptive. I know that it's the only command that sounddevice needs to offer, but I do think it'll be more elegant as something like python -m sounddevice list_devices. That's already implemented! You can also use something like: python3 -m sounddevice gimme_my_devices --and-hurry-up-please Therefore, supporting a special command (like list_devices) explicitly doesn't make too much sense, or does it? What should happen if a unsupported command is used? What if no command is used? Yeah, I'd understand the resistance to this issue, it might be OCD on my part. The issue is that once you support calling python -m sounddevice to show the list of devices, it becomes difficult to change it in the future because of backward compatibility. You might want to change it in the future to support more commands, or for example --version to get the versions of both sounddevice and portaudio. So changing it as soon as possible is beneficial. To your questions: What should happen if a unsupported command is used? An error message, same as if you tried an unsupported command on any other CLI tool: $ pip floof ERROR: unknown command "floof" What if no command is used? Showing the list of commands would be reasonable, like pip does. I'd understand if you'd close this issue. The issue is that once you support calling python -m sounddevice to show the list of devices, it becomes difficult to change it in the future because of backward compatibility. That is true, but now it would be too late anyway. This functionality has been there since the initial commit, now slightly more than 5 years ago. I knew that it was a risk not to use sub-commands, but I was willing to take it. You might want to change it in the future to support more commands, or for example --version to get the versions of both sounddevice and portaudio. That's a good idea. But why not just print the version information unconditionally every time, before or after the device list? I'm not considering the actual text content of the output to be part of the API, so I would feel free to add some text any time. This is meant for human consumption, I don't think adding some version information would hurt. On the contrary, it might be very helpful information for debugging. If we just add version information to the output, we don't need any additional command line options (at least for now). Alternatively, we could just check for --version and make a separate version output, I don't see any conflict with the current behavior. What if no command is used? Showing the list of commands would be reasonable, like pip does. I think this would be annoying, since we would have at most two options. So why not show everything immediately? It sure makes sense with pip to have sub-commands, because it actually has several sub-commands. To give a counter-example, this module doesn't have sub-commands (but it takes some optional arguments): python3 -m http.server We could of course make a sub-module like sounddevice.devicelist which could be used like this: python3 -m sounddevice.devicelist ... but I don't think this is actually an improvement in ergonomics and I don't think it would be worth the implementation effort (re-structuring the module/package). I'd understand if you'd close this issue. I'm very much open to suggestions like this. 
I've mentioned my concerns and some counter-suggestions, probably we can find some middle ground? Hmm, this issue is small enough that I don't see the point in a compromise. If you're happy with the current situation that's fine, if you'd like to change it that's fine too. I really don't see any reason for artificially creating an obstacle for users by requiring an explicit sub-command (especially since there will be only a single sub-command in the foreseeable future). I would be fine with optional sub-commands (or flags), though, if there is need for them. I do like your idea of providing version information (be it in addition to the device list or separately). And I'm open for further suggestions!
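To make the suggestion concrete, here is a minimal sketch of an opt-in --version flag that keeps the current no-argument behaviour untouched. This is illustrative only, not the module's actual __main__ code; it assumes the sounddevice module exposes __version__, get_portaudio_version() and query_devices(), which is how I understand current releases expose this information:

    import argparse
    import sounddevice as sd

    parser = argparse.ArgumentParser(
        prog="python3 -m sounddevice",
        description="Show available audio devices.")
    parser.add_argument("--version", action="store_true",
                        help="show sounddevice and PortAudio versions and exit")
    args = parser.parse_args()

    if args.version:
        print("sounddevice", sd.__version__)
        print("PortAudio", sd.get_portaudio_version()[1])
    else:
        # Default behaviour stays exactly as before: print the device list.
        print(sd.query_devices())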
GITHUB_ARCHIVE
I’m very pleased to welcome you all to The Book of Trogool, a brand-new blog about e-research. My name is Dorothea Salo, I’m an academic librarian, and I am fascinated with the changes that computers have wrought in the academic-research enterprise. I hope to explore those changes, and particularly library responses to them, in the company of the wonderful ScienceBlogs community. My thanks to John, Christina, and Walt for paving the way, and to Erin for welcoming me here.

I hope to tell stories about e-research projects (because narrative is how humans come to grips with novelty), pass on tidbits of e-research-related news, demystify jargon, ask and answer questions; in toto, I hope to bridge the science, library, and IT communities as we all work to understand, accommodate, and make the most of computers in research.

One small note: Though this is ScienceBlogs, I by no means plan to limit my remarks to the sciences. This is a tremendously exciting time for the so-called "digital humanities" as well, and as I am a humanist by training, I pay close attention to developments in those disciplines.

Right. About that blog name? I am an earnest devotee of Lord Dunsany’s wry, half-parodic Pegāna stories (do take a look at the Project Gutenberg version) and their quarrelsome, none-too-bright, easily-offended pantheon of deities. On a rereading some time ago, I noticed a rather curious and delightful passage in "Of the Thing that is Neither God nor Beast":

Trogool is the Thing that is neither god nor beast, who neither howls nor breathes, only It turns over the leaves of a great book, black and white, black and white for ever until THE END. And all that is to be is written in the book is also all that was. When It turneth a black page it is night, and when It turneth a white page it is day... Trogool is the Thing that men in many countries have called by many names, It is the Thing that sits behind the gods, whose book is the Scheme of Things.

Are researchers and those of us who serve them not all trying, in our own ways, to write the book of the Scheme of Things? And has that book not become binary (black and white, one and zero) in the last several years? And is not the computer "neither god nor beast?" I believe so, as I believe that Trogool is a fitting symbol for the e-research enterprise, which just like Trogool has many names.

Behold a picture of Trogool by the utterly marvelous Sidney Sime (don’t worry; Dunsany and Sime’s Pegāna-related works are in the public domain):

Rather an intimidating chap, isn’t he? I hope I can help him seem less so.
OPCFW_CODE
An "active" error handler is an enabled handler that is in the process of handling an error. As a result, the conditional statement on line 13 evaluates to True, and a second error dialog is displayed. In this case the script doesn't do anything with the return value of TerminateProcess, but it could branch and perform different operations depending on that value. If the value of the error code is nonzero, an Alert box opens that displays the error code and its corresponding description. Notice that after displaying the error information, we call the Clear method of the Err object. Again, this is purely a function of how the host handles any errors that occur. Within any particular procedure, an error is not necessarily fatal as long as error handling is enabled somewhere. If an error occurs while an error handler is active (between the occurrence of the error and a Resume, Exit Sub, Exit Function, or Exit Property statement), the current procedure's error handler cannot handle the error.

The techniques for doing this are explained in some detail in "Automating TCP/IP Networking on Clients - Part 3: Scripting Remote Network Management." With the Win32_PingStatus class, WMI provides a way ... However, you can assign a value to the Source property in your own error handling routines to indicate the name of the function or procedure in which an error occurred.

Sub Work
    On Error Resume Next
    Dim objExcelApp
    Dim wb
    Dim ws
    Set objExcelApp = CreateObject("Excel.Application")
    Set wb = objExcelApp.Workbooks.Add(True)
    Set ws = wb.Sheets(1)
    ws.Cells(1,1).Value = "Hello"
    ws.Cells(1,2).Value = "World"
    wb.SaveAs("c:\test.xls")
End Sub

On Error Resume Next
DoStep1
If Err.Number <> 0 Then
    WScript.Echo "Error in DoStep1: " & Err.Description
    Err.Clear
End If
DoStep2
If Err.Number <> 0 Then
    WScript.Echo "Error in DoStep2: " & Err.Description
    Err.Clear
End If

Just remember, scripting without mysteries would be insipid and boring.

ERROR: Unable to retrieve state of Alerte service.

Instead, use error handling techniques to allow your program to continue executing even though a potentially fatal error has occurred. The routine should test or save relevant property values in the Err object before any other error can occur or before a procedure that might cause an error is called (see https://msdn.microsoft.com/en-us/library/5hsw66as.aspx). If you don't believe us, check out this free movie: Hey, Scripting Guy!

But they do show how to build effective scripts from reusable code modules, handle errors and return codes, get input and output from different sources, run against multiple machines ... You can be sure which object placed the error code in Err.Number, as well as which object originally generated the error (the object specified in Err.Source). On Error GoTo 0 ... The only downside appears to be that in case of failure they don't return detailed error codes, as the Err object can. The Err object's Number property returns a decimal integer, but the WMI SDK generally uses hexadecimal values, so these scripts take a bilingual approach. The property values in the Err object reflect only the most recent error. Bu shi? (No, this is not what you're thinking: it's actually Chinese for "Not so.") OK, so maybe Doctor Scripto needs to work on his calligraphy, but handling errors does present ... To Err Is VBScript - Part 1, by The Microsoft Scripting Guys. Doctor Scripto's Script Shop welds simple scripting examples together into more complex scripts to solve practical system administration scripting problems. You need to test for an error after every possible statement. The TerminateProcess function calls the Terminate method of Win32_Process on the object reference passed to it. Here's the output if the computer is not found:

C:\scripts>eh-sub-displaycustomerror.vbs
ERROR: Unable to bind to WMI provider on sea-wks-5.

The next script, Listing 7, terminates a process by using a process object passed as a parameter.

Listing 4: Subroutine - Handle Basic VBScript Errors with Custom Error Messages

On Error Resume Next
strComputer = "."   'Change to a non-existent host to create a binding error.

On Error Goto ErrHandler
statement1   ' this is the line having an error
statement2
. . .

Its syntax is: ... where ErrorNumber is the numeric code for the error you'd like to generate. After calling ExecQuery to request any instance of Win32_Process whose Name property is the value of strTargetProc, the script checks whether colProcesses.Count = 0. However, there are times, particularly when you are creating large, complex scripts, that you need to test the effect a particular error will have on your script. However, the host running the code determines the exact behavior.

Do you know where your processes are? - The Sequel. Metering Application Usage with Asynchronous Event Monitoring. Out of Sync: The Return of Asynchronous Event Monitoring. To Err Is VBScript (Herong Yang, VBScript Tutorials - Herong's Tutorial Examples): Error Handling Flag and the "Err" Object; "On Error GoTo 0" - Turning off Error Handling. This section provides a tutorial example on ...
OPCFW_CODE
package org.six11.flatcad.geom; import org.apache.commons.math.linear.RealMatrix; import java.util.Stack; import java.nio.DoubleBuffer; import org.six11.flatcad.gl.GLApp; import org.six11.util.Debug; import org.junit.Test; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; /** * A MatrixStack is simply a java.util.Stack that works with * RealMatrix objects. At any point you can ask for the 'current * matrix' which is just all the matrixes on the stack multiplied together in order of **/ public class MatrixStack extends Stack<RealMatrix> { /** * The current result of multiplying the matrices together. It is * nulled out whenever a push/pop is exectued, so it must be * recalculated by the getCurrent() method. */ protected RealMatrix current; /** * This is a pre-allocated DoubleBuffer for use with getting the * matrix stack value in a format that glMultMatrix can use. */ protected DoubleBuffer doubleBuffer; public MatrixStack() { this(true); } public MatrixStack(boolean ident) { if (ident) { // push(MathUtils.getIdentityMatrix()); current = MathUtils.getIdentityMatrix(); } } private void forget() { current = null; doubleBuffer = null; } /** * Pop the most recent entry of the stack, removing and returning * it. */ public RealMatrix pop() { forget(); return super.pop(); } /** * Pushes a new entry onto the stack, returning argument (for some * reason... I'm just doing what the Java API does). */ public RealMatrix push(RealMatrix rm) { // forget(); if (rm == null) { Debug.out("MatrixStack", "Warning: you are adding a null matrix to the stack. " + "Probably not what you want."); new RuntimeException("hi").printStackTrace(); } RealMatrix ret = super.push(rm); current = current.multiply(rm); return ret; } /** * Return the matrix that is the result of multiplying together all * the matrices in the stack in reverse order. In other words, if * there are three matrices in the stack, the current matrix is the * result of performing: 3 * 2 * 1. * * MatrixStack caches your result, so the 'current value' is only * calculated one time and used until you pop or push a value. NOTE: * if you use any of the List methods such as add/remove, you are * boned because this implementation only notices pushes and pops. */ public RealMatrix getCurrent() { if (current == null) { current = MathUtils.getIdentityMatrix(); for (RealMatrix rm : this) { current = current.multiply(rm); // current = current * rm } } return current; } /** * Returns the same as getCurrent() but in a format that can be used * with glMultMatrix(). Note that the return value is shared among * all callers of this MatrixStack, and is not thread-safe. For best * results, make a copy if you need to hang on to it for a while. */ public DoubleBuffer getCurrentDoubleBuffer() { if (doubleBuffer == null) { doubleBuffer = GLApp.allocDoubles(16); RealMatrix rm = getCurrent(); MathUtils.fillDoubleBuffer(rm, doubleBuffer); } return doubleBuffer; } /* ------------------------------ Test functions. 
------- */ @Test public void testBasic() { // MatrixStack mats = new MatrixStack(); // Direction dir = new Direction(0,0,1); // z axis // double angle = MathUtils.degToRad(-90); // Vertex first = new Vertex(-3, 10, 7); // Vertex afterTrans, afterRot, afterBack; // RealMatrix trans = MathUtils.getTranslationMatrix(new Vertex(4, -10, -7)); // mats.push(trans); // afterTrans = first.getTransformed(mats.getCurrent()); // assertEquals(1d, afterTrans.x()); // assertEquals(0d, afterTrans.y()); // assertEquals(0d, afterTrans.z()); // RealMatrix rot = MathUtils.getRotationMatrix(dir, angle); // mats.push(rot); // afterRot = first.getTransformed(mats.getCurrent()); // assertEquals(0d, afterRot.x()); // assertEquals(1d, afterRot.y()); // assertEquals(0d, afterRot.z()); // RealMatrix back = MathUtils.getTranslationMatrix(new Vertex(-4, 10, 7)); // mats.push(back); // afterBack = first.getTransformed(mats.getCurrent()); // assertEquals(-4d, afterBack.x()); // assertEquals(11d, afterBack.y()); // assertEquals(7d, afterBack.z()); // mats.pop(); // should remove 'back' // Vertex sameAsAfterRot = first.getTransformed(mats.getCurrent()); // assertEquals(0d, sameAsAfterRot.x()); // assertEquals(1d, sameAsAfterRot.y()); // assertEquals(0d, sameAsAfterRot.z()); } }
STACK_EDU
I set up the configuration, make the project active, do not use teams or price lists. Added work logs to projects (ON.timesheet produces a report), specify a valid reporting period with all components, versions and activity types (which are defined and assigned to all projects), but no report data is fetched from the database. Am I missing something? Running the latest version of Jira and installed the current version of ictime.

There has been a bug in ictime version 2.3.2+ that caused reports and timesheets not to deliver results when using Oracle, PostgreSQL or MS SQL databases for your JIRA installation. This should be fixed in ictime release 2.3.5 from yesterday late afternoon for Oracle and PostgreSQL; for MS SQL we need some more tests to be sure that the fix also solves the problem for MS SQL, but probably it already does. Are you using ictime 2.3.5, and if yes, which database are you using? If you are not using ictime 2.3.5 already, please update.

I am using an Oracle database and have the newest version 2.3.5 of ictime installed. No results are produced. How can I check this issue? Is there a way to trace the SQL statement?

The report query still seems to have problems with the Oracle database. We are aware of it and are currently working on a permanent fix. This is one example of a report query (the "w.*" usually comes in first place, but w.id is enough for testing):

SELECT DISTINCT
    -- w.*
    w.id,
    p.id as projectId,
    t.rounding_rule_id,
    t.start_time,
    t.end_time,
    t.billed,
    t.chargeable,
    t.calc_time,
    t.worklog_id,
    t.project_id as timeEntryProjectId,
    t.price_list_id,
    at.name,
    at.id as activityTypeId,
    ex.excluded,
    cu.value
from project p
    INNER JOIN jiraissue i on (p.id = i.project)
    left outer join AO_9B23C2_PROJECT_STATUS s on (p.id = s.project_id)
    inner join worklog w on (w.issueid = i.id)
    left outer join AO_9B23C2_TIME_ENTRY t on (t.worklog_id = w.id)
    left outer join AO_9B23C2_ACTIVITY_TYPE at on (at.id = t.activity_type_id)
    left outer join AO_9B23C2_EXCLUDE_WORKLOG ex on (ex.worklog_id = w.id)
    left outer join AO_9B23C2_CURRENCY cu on (cu.project_id = p.id)
where t.billed = 0
    and (s.value is null or s.value = 0 or s.value = 1)

Maybe you could try to run this query directly in your database and tell me if there are any errors. That would help us a lot for finding a quick solution.

We have released ictime version 2.3.6 today; this version has been specifically tested with Oracle (and MS SQL) and should now properly work. Please give it a try.
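If it helps with tracing, the report query above can be run directly against the JIRA schema with a few lines of Python and the cx_Oracle driver. The credentials and DSN below are placeholders, and the query itself should be pasted in from the previous comment:

    import cx_Oracle  # pip install cx_Oracle

    # Placeholders: use the credentials and DSN of your JIRA database.
    connection = cx_Oracle.connect("jira_user", "secret", "dbhost:1521/ORCL")

    report_sql = """
    -- paste the report query from the comment above here, unchanged
    """

    cursor = connection.cursor()
    cursor.execute(report_sql)   # any ORA-xxxxx error is raised here
    rows = cursor.fetchall()
    print(len(rows), "rows returned")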
OPCFW_CODE
M: Ask HN: What is a good site to post remote/heavy travel programmer positions? - tluyben2 I know HN is good but that is only once a month and even a few days later the thread is not visited anymore. So what would be a better place? In our case it is a full time mostly remote position with a lot of travel potentially. Recruiters only get us people who do not want to travel... R: mtmail [https://wearehirable.com/](https://wearehirable.com/) for adding yourself, [https://remotive.io/](https://remotive.io/) for searching job offers from my bookmarks. [https://github.com/lukasz-madon/awesome-remote- job](https://github.com/lukasz-madon/awesome-remote-job) for a much longer list. R: Mz I have no idea how good any of these are, but it is a list of remote job boards I compiled at some point: [http://gigworks.blogspot.com/2017/01/remote-job- boards.html](http://gigworks.blogspot.com/2017/01/remote-job-boards.html) R: starbuxman [https://www.SkipTheDrive.com](https://www.SkipTheDrive.com) R: daily_foods1726 how about [https://www.stackoverflowbusiness.com/talent/platform/source...](https://www.stackoverflowbusiness.com/talent/platform/source/job- listings#) ? R: aszantu there is a subreddit for digital nomads R: tluyben2 It is more remote than digital Nomad but let me check it! I am surprised there are not more remote working sites. R: PaulHoule Tell us about the job. R: tluyben2 Senior developer, preferably in the payments space, preferably with payment processor integration experience. Both not mandatory though. Tech is .NET/C#, Python/Django, HTML5, ARM asm and C. There is a team that does most of the work but the job involves taking high level dev decisions, solving issues the team gets stuck on. It is not very easy to explain but it is a very diverse position which brings you to China, US, Thailand, Aus, Hong Kong, South Korea, Japan, India and Europe. Next to that we are looking for team members who work remote but will do not mind to reside and work in Thailand 3-4 months of the year. Edit; much easier to fill but we also are looking for a front end app/web dev with a lot of feel for design for the UK, Gatwick area. R: FlopV What type of citizenship can apply? US? R: tluyben2 Yes, that would be no problem.
HACKER_NEWS
Print.PrintSupport.Source.dll Exception(1) Element Not Found Calling CDC::StartDoc

I'm hoping someone can help point me in the right direction here. I have a VS2008 application that I've been porting over to VS2022. It has been a mostly painless exercise. But printing crashes. Previews work fine but actual printing crashes in the CDC::StartDoc() call. This code works just fine in the VS2008 code and hasn't been changed in the VS2022 code. I'm guessing I'm missing an include or library or something but don't know what I'm missing? This is the debugger output:

onecoreuap\printscan\print\workflow\printsupport\dll\printsupportutil.cpp(573)\Print.PrintSupport.Source.dll!7AC3BA60: (caller: 7AC26A2B) Exception(1) tid(7094) 80070490 Element not found.
Exception thrown at 0x75A37402 in IPC2000.exe: Microsoft C++ exception: wil::ResultException at memory location 0x06B6E4AC.
Exception thrown at 0x75A37402 in IPC2000.exe: Microsoft C++ exception: wil::ResultException at memory location 0x06B6DA50.
Exception thrown at 0x75A37402 in IPC2000.exe: Microsoft C++ exception: wil::ResultException at memory location 0x0018BA70.
Exception thrown at 0x75A37402 in IPC2000.exe: Microsoft C++ exception: [rethrow] at memory location 0x00000000.
Print.PrintSupport.Source.dll!7AC19ED8: ReturnHr(1) tid(6770) 80070490 Element not found. Msg:[onecoreuap\printscan\print\workflow\printsupport\dll\printsupportutil.cpp(573)\Print.PrintSupport.Source.dll!7AC3BA60: (caller: 7AC26A2B) Exception(1) tid(6770) 80070490 Element not found. ]
Print.PrintSupport.Source.dll!7AC151F7: LogHr(1) tid(6770) 80070490 Element not found.
Exception thrown at 0x75A37402 (KernelBase.dll) in IPC2000.exe: WinRT originate error - 0x80070490 : 'Element not found.'.
Exception thrown at 0x75A37402 in IPC2000.exe: Microsoft C++ exception: winrt::hresult_error at memory location 0x0018BBC8.

I finally found a clue on this after struggling for hours. It turns out that it is a permissions issue. The VS2008 version of the application was run AsAdministrator. With the VS2022 version, I'm moving to AsInvoker and working through getting rid of the assorted UAC issues we were getting. If I run the VS2022 application as admin, it prints fine. Otherwise it crashes on CDC::StartDoc(). I'm going to go digging, but if anyone has any ideas, I'd sure love to hear them.

If anyone's watching this: it looks like it might be a VS2022 environment thing. I just discovered that if I run the application from a shortcut, the printing is fine no matter the "run as administrator" setting. In the debugger, it looks like it gets this error whether I run it as admin or not. I tried running VS2022 as admin and setting the manifest to requireAdministrator and it still crashes in the debugger.

I'm getting this exact issue. It only happens while debugging in Visual Studio. If I run the application outside VS, printing works fine. My "fix" is to ignore that exception: while debugging in VS, go to Debug -> Windows -> Exception Settings, expand "C++ Exceptions", and disable "winrt::hresult_error".

I have the same problem in both VS2019 and VS2022 with existing code. The shipped product built with VS2019 works, but a recompile does not. In VS2022, it throws the exception in CDC::StartDoc. Please update and I will do the same.

I have the same problem with VS2022 17.9.5. It's a pain in the neck that this is not resolved. I developed this function a few years ago and there was no problem at that time. However, when I wrote the same code recently, the problem occurred.
I think it's a first chance exception in Windows 11. Just ignore it, the program will still work.
STACK_EXCHANGE
#!/usr/bin/env python3
# General purpose
import time
import numpy as np
import random
import math

# ROS related
import rclpy
from rclpy.node import Node
from rclpy.logging import get_logger
from std_msgs.msg import String
from std_srvs.srv import Empty
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import Odometry
from rclpy.qos import QoSProfile
from rclpy.qos import qos_profile_sensor_data

# others
from pic4rl.sensors.pic4rl_sensors import pose_2_xyyaw
from pic4rl.sensors.pic4rl_sensors import clean_laserscan, laserscan_2_list, laserscan_2_n_points_list
import collections


class MobileRobotState():

    def __init__(self):
        # This class is not a Node, so it keeps its own rclpy logger
        self.logger = get_logger('mobile_robot_state')
        # These store the last values received by the sensor callbacks
        self.odometry_msg_data = None
        self.laser_scan_msg_data = None
        # These store transition values
        self.odometry_data = collections.deque(maxlen=2)
        self.laser_scan_data = collections.deque(maxlen=2)
        self.observation = collections.deque(maxlen=2)
        # Goal: a point to reach (set by the caller before updating state)
        self.goal_pos_x = None
        self.goal_pos_y = None
        self.done = None
        self.goal_distance = collections.deque(maxlen=2)
        self.goal_angle = collections.deque(maxlen=2)

    def update_state(self):
        self.odometry_data.append(self.odometry_msg_data)
        self.laser_scan_data.append(self.laser_scan_msg_data)
        x, y, yaw = pose_2_xyyaw(self.odometry_data[-1])
        self.goal_distance.append(
            goal_pose_to_distance(x, y, self.goal_pos_x, self.goal_pos_y))
        self.goal_angle.append(
            goal_pose_to_angle(x, y, yaw, self.goal_pos_x, self.goal_pos_y))
        self.check_done()

    def update_observation(self):
        processed_lidar = laserscan_2_n_points_list(
            clean_laserscan(self.laser_scan_data[-1]))
        x, y, yaw = pose_2_xyyaw(self.odometry_data[-1])
        self.observation.append(
            # processed_lidar +
            # [x] +
            # [y] +
            [self.goal_distance[-1]] +
            [yaw]
        )

    def compute_reward(self):
        self.reward = reward_simple_distance(self.goal_distance, self.goal_angle)

    def check_done(self):
        self.done = False
        min_collision_range = 0.25  # m
        for measure in laserscan_2_list(clean_laserscan(self.laser_scan_data[-1])):
            if 0.05 < measure < min_collision_range:
                self.logger.info('Collision!!')
                self.done = True
                return
        if self.goal_distance[-1] < 0.1:
            self.logger.info('GOAL REACHED')
            self.done = True
            return


def reward_simple_distance(goal_distance, goal_angle):
    # This function expects the goal-distance history (two entries);
    # before two samples exist, the reward is 0.
    if len(goal_distance) < 2:
        return 0.0
    return goal_distance[0] - goal_distance[1]


def goal_pose_to_distance(pos_x, pos_y, goal_pos_x, goal_pos_y):
    return math.sqrt((goal_pos_x - pos_x) ** 2 + (goal_pos_y - pos_y) ** 2)


def goal_pose_to_angle(pos_x, pos_y, yaw, goal_pos_x, goal_pos_y):
    # atan2(dy, dx): the second argument must be the x difference
    path_theta = math.atan2(goal_pos_y - pos_y, goal_pos_x - pos_x)
    goal_angle = path_theta - yaw
    if goal_angle > math.pi:
        goal_angle -= 2 * math.pi
    elif goal_angle < -math.pi:
        goal_angle += 2 * math.pi
    return goal_angle
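A minimal usage sketch (not part of the original file) showing how a ROS 2 node could feed its odometry and laser callbacks into MobileRobotState; the topic names and the goal coordinates below are assumptions:

    import rclpy
    from rclpy.node import Node
    from nav_msgs.msg import Odometry
    from sensor_msgs.msg import LaserScan
    from rclpy.qos import qos_profile_sensor_data


    class StateCollector(Node):
        def __init__(self):
            super().__init__('state_collector')
            self.state = MobileRobotState()
            self.state.goal_pos_x, self.state.goal_pos_y = 2.0, 1.0  # assumed goal
            self.create_subscription(Odometry, '/odom', self.on_odom, 10)
            self.create_subscription(LaserScan, '/scan', self.on_scan,
                                     qos_profile_sensor_data)

        def on_odom(self, msg):
            self.state.odometry_msg_data = msg

        def on_scan(self, msg):
            self.state.laser_scan_msg_data = msg
            # Refresh state, observation and reward once both sensors reported.
            if self.state.odometry_msg_data is not None:
                self.state.update_state()
                self.state.update_observation()
                self.state.compute_reward()  # zero until two distance samples exist


    def main():
        rclpy.init()
        rclpy.spin(StateCollector())


    if __name__ == '__main__':
        main()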
STACK_EDU
Last Updated on November 8, 2019 by GrahamWalsh

Whilst I was out at Microsoft Ignite working on the Crestron booth, I was of course seeing the Room Scheduling panels and how simple they are to connect directly to Microsoft Exchange, giving a simple way of reserving a meeting room there and then. It then got me thinking: wouldn't it be great if, once you are in the meeting room, the Microsoft Teams Room (MTR) system had a Join button when you get to the console. So I remembered I did some testing a while back with Microsoft Flow, which got a new identity at Microsoft Ignite 2019 and is now known as Microsoft Power Automate. I also previously used Microsoft Exchange Transport Rules to modify email messages; that could be another possible way too. I'll research that another time.

Whilst I was in the boarding queue at the airport, I headed into https://flow.microsoft.com and realised the Skype for Business feature was still there. Bingo!! Fast forward 12 hours and I'm home now and had to test out my experiment.

Creating the Power Automate flow

So first off, log into Flow and search for "schedule meeting" as you can see below. Now just populate a few details in the form. Whenever a user books an ad-hoc meeting from the Room Panel, it will have the name of "Walk up meeting", so you can use that to ensure the flow only applies to these meetings. If a user decided to call the subject of their meeting the same thing, then it would create them a Skype Meeting. You can check your flow to ensure there are no errors and we can now test it out.

Booking the Meeting with the Room Scheduling Panel

When the Room Panel is idle, it looks like this. Panels can be customised, so the background and logo could be different, etc. Now we touch the screen to wake it up and then press the + icon in the middle. You would see other meetings in the day if the room was busy. My home office is quite quiet today. We can now reserve a meeting. Depending on the policy defined, you might have set blocks of 30 minutes or up to every 30 minutes, etc. The panel is quite configurable. The Room Panel is now scheduling the meeting into the Exchange Room Resource mailbox. This is what happens on the Exchange Calendar side. A simple entry, blocking the room out. Now the Room Panel shows that the room has been booked out until the top of the hour. All is good from here.

Microsoft Teams Room Console and Meeting

Once you are in the room, you have a few options. You can create a Meet Now in your Teams or Skype client and then bring the room into the meeting with, say, Proximity Join or by searching for the room and adding it in. However, that's no fun!!! Below is what the MTR console shows when the Walk up meeting is sent to the panel. There is the meeting Title and who booked it (the Resource Account). When we press the … we can see that it is not a Teams or Skype Meeting. I now introduce you to Power Automate. It will then look to see that a meeting was created and update the meeting invite. If we look in Outlook for the room calendar, we can see the invite now looks like this. On the MTR console, we now see a Join button. I simply press the Join button and I'm in the meeting.

And that is it. A nice and simple way to automate a meeting and make it a Skype meeting. I just need to work out the same with a Teams Meeting. Feel free to ask any questions below.
OPCFW_CODE
I don’t know how other professionals behave, but if you email a researcher at 11pm or even 2am, there is a good chance he will get back to you within two minutes. Many of them are workaholics, at least as far as email goes. According to my Google activity report, not counting spam, I receive 5000 emails a month and I send back about 1000 emails.

To preserve my mental health, I have decided to set some time aside every day for my family and for relaxing. I play video games, I read novels, and I watch TV shows (on a tablet). For this unwinding to happen, I need to avoid email in the evening. Otherwise, the work never ends. The net result is that I have accumulated emails in the morning. So the first thing I do at 9am is reading my emails. As I write this blog post, it is 10am and I am still grinding through emails that were sent last night. As I process these emails, more tend to appear. The net result is that I often finish reading my morning emails at noon.

The time I spend on email tends to increase, year after year. It is not as bad as it sounds: most of my work involves writing emails anyhow. For example, much of my research is coordinated through email. Also, the emails I spend time on are truly important. They come from students, research collaborators and key colleagues. Nevertheless, I often feel guilty. After all, giant scholars often avoid email altogether. Maybe I would produce brilliant work if only I did not spend so much time with my email.

To cope, I try to process emails in batch. That is, if you email me, you can expect long delays before I answer. It can take a day or a week. This seems to violate some social convention, as I routinely find that people decide that I have ignored them because I haven’t gotten back to them immediately. But if I did try to get back to everyone at once, I would morph into some kind of email robot. I would never get to write code or research papers anymore.

This being said, as the amount of interesting email I receive keeps on increasing, and people’s expectations increase, I worry. It seems that there is intense pressure to get back to more people faster. I might soon have to declare email bankruptcy. I will simply receive too many worthy emails to process them all.

Yes, we have fantastic spam filters. And these help a lot. And my email client (Gmail) tries its best to identify the most relevant emails and put them in a priority inbox. However, it is quite clear that we are going to need even more help. CEOs and important folks have human assistants to sort through their mail and answer common queries. Yet I am not an important person, I am just a regular Joe. Still, I too need assistants.

The answer seems obvious: we need clever AIs that can process the bulk of our email. How could this work? Let me run through some examples…

- A lot of my students email the same queries. Very often, they are asking for more time to complete their assignments. I tend to grant these requests. A software assistant could recognize this pattern and help me process these emails with less effort.
- When I work with a student or a collaborator on a research project, we send a lot of emails back and forth. I find that we routinely “forget” about past discussions and turn in circles. Moreover, it is hard to track ongoing tasks. Who is doing what and when? If I want to cleanly archive a discussion, I need to do it manually. I get no help in organizing the discussions from my email client.
If I want to be reminded of a deadline, or of the need to follow up on what a student is doing, I need to set reminders myself. Overall, email discussions are much more effort than they could be.

It seems clear that there is a huge need for more email-related AI. These problems are absurdly difficult given the current state of our knowledge, but I am depressed at how little effort seems to be put on these important practical problems.

6 thoughts on “We need more than spam filters: we need bona fide assistants!”

There has been work on related topics in the AI community, for example by Eric Horvitz at MSR. e.g.,

Though I was not aware specifically of MSR work on this topic, I do know that there has been work related to this problem… I have a friend who has worked on helping people write better emails through AI. My specific point is that it may not receive as much attention as it should. Not just from the AI community, but also from engineers.

Instead of studying the use of artificial intelligence for exchanges with students and collaborators, I would probably try to use a wiki. Wikis allow asynchronous communication, just like email. However, it is possible to steer the exchanges toward building a useful knowledge base, which can help minimize redundant communication. The ideal, I think, is not just to minimize the work from your point of view, but from the point of view of all collaborators. Such an approach could often reduce the delays for people who are trying to get information from you. Obviously, there are several obstacles to such an approach. A comparative analysis of the two approaches would surely be a nice research topic.

Maintaining a collaborative wiki is hard work. The dominant characteristic of all wikis I have ever seen is that they are out of date. Then you end up emailing the maintainer/author to ask your question anyway.

I agree it would be helpful. One thing that I found out for myself is that I tend to write more in the morning. So I avoid responding to emails in the morning. I would rather use that time to draft papers or other things. If I answer the same email at the end of the day (say a bit before 17h), I will spend much, much less time on it than at 8h in the morning. In short, WHEN I deal with an email impacts the time I spend on it.
OPCFW_CODE
A one-level (singleplayer / splitscreen multiplayer) racing game project done for educational purposes.

How to make input keys for two players on the same screen (for the splitscreen multiplayer mode). Posted by feillyne on Oct 17th, 2010.

After starting a Unity project (or booting the original Unity car-racing one):

1. Select Edit -> Project Settings -> Input
2. In CarEdu, the Fire axes (Fire1, Fire2, Fire3) were removed, and only 11 were left: Horizontal, Vertical, Handbrake, Mouse X, Mouse Y, Mouse ScrollWheel, Window Shake X, Window Shake Y, Horizontal2, Vertical2, Handbrake2

As you can see, the Jump axes were also renamed to Handbrake axes, and instead of having two Horizontal axes (Horizontal, Horizontal), there is Horizontal and Horizontal2. The number 2 points out which players own which "axes", i.e. keys. In the Horizontal axis, the Alt Negative Button and Alt Positive Button settings were erased. In the Horizontal2 axis, the alt Negative and Positive buttons were defined ("a" and "d" keyboard keys), and other options (such as Gravity, Dead, Sensitivity, Snap, etc.) were made after Horizontal. Also notice the change in the Joy Num property - it was changed to Joystick 1 and Joystick 2, respectively. The same was done for the Vertical (Vertical, Vertical2) axes. Also, the Handbrakes were set appropriately. You can compare them all with the input settings of the original car-racing tutorial by Unity. See the pictures below to see everything done. The Mouse X, Mouse Y, Mouse ScrollWheel, Window Shake X, Window Shake Y axes were left untouched.

The entire class Wheel was removed from the Car2 script, since only one class with such a name can exist. Then look at the code itself of Car2.js. What has changed? Can you spot it? ;-)

As you can see, we need to allow both players to control their respective cars (or characters, if you make your own game with splitscreen multiplayer). So we scroll down to this function: In this function, there are two different axes for each player, see below.

throttle = Input.GetAxis("Vertical");
steer = Input.GetAxis("Horizontal");

throttle = Input.GetAxis("Vertical2");
steer = Input.GetAxis("Horizontal2");

Vertical2 and Horizontal2 are the names of the axes you set earlier in the Input Manager. If you renamed your second Horizontal axis to e.g. Player2Horizontal, and your second Vertical axis to Player2Vertical, you would have to set such axes:

throttle = Input.GetAxis("Player2Vertical");
steer = Input.GetAxis("Player2Horizontal");

So proper keys are mapped to proper players. You need to set the handbrakes right, too. You need to find the CheckHandbrake() function in Car.js. Below its code and structure: There's a line that will interest us most: As you can figure out, there is another button set for the second player. The "r" keyboard key will be used by the second player to brake. Also remember to attach the Car2.js script to the Car2 object itself, by clicking the Car2 object and dragging Car2.js onto the car, as below: All done. :-)
OPCFW_CODE
Use ready to deploy Visits App to automate visits and deliveries without having to spend time building your own app. Invite users one at a time or in bulk by using either CSV import or API. Once unique links are generated, your app users will be able to install Visits App without having to perform login and immediately start tracking. Visits app users can perform visits at destinations shown in the app. Each destination in the app can be created either With Trips or Geofences or both. In addition, Visits App supports Geotags where each checkin and checkout triggered by the app user gets delivered to HyperTrack as a geotag. Distribute Visits App to your app users HyperTrack provides an Generate invitation links API for you to generate personalized deep links for your user to install Visits App. Use your own app user unique identifier to generate a unique invitation link payload. For example, to generate a unique invitation for the user 0053t000007wsZPAAY you create deep link URLs with the following payload: with a result like this: After you obtain the link, you can choose your preferred communication method. You can send this link to your mobile app user via email, SMS, or WhatsApp, for example. Once your user installs Visits App with https://hypertrack-logistics.app.link/1peqNytrWab URL from above, the user's new device_id becomes automatically associated with the unique user record identifier that is submitted in the payload above. Once the app installs and loads, it will process deep link data from the invitation and connect your user's identifier as primary identity to the device in your HyperTrack account. In some cases, this may take time before screens below are loaded. Seamlessly onboard app users Your mobile app user will go through some of the following screens to get started tracking with HyperTrack: Once your app user grants location and motion permissions in the app, the app is ready to use. Tracking during the work day At the start of work day, your app user is presented with this screen as shown below: Once "Clock in" button is pressed, Visits App starts tracking and will track distance from the location where the app user currently is to the first visit destination. Visits App will only start tracking once your app user starts the work shift. Once the work shift is completed, your app user can stop tracking location for the day. Create visit notes when checking in Visits App uses Geotags to create visit notes. Once the app user arrives at the site, "Check in" button is pressed. This generates a geotag record that can be observed in HyperTrack dashboard. Upon the completion of the customer site visit, your app user can press "Check out" button to mark the completion of the visit. At the end of the work day, the app user can end the work shift by pressing the "Clock out" button. At this point, the app will stop tracking. Create geofences for expected visit destinations Use Geofences API to create expected visits destinations for your Visits App user. Each expected destination will be shown to you app user in Visits App screen. Every visit destination entry and exit is automatically captured due to destination geofences you previously set up with Geofences API. Each destination entry and exit can be observed in HyperTrack dashboard At the same time, once the app user arrives at the site, optionally, "Check in" button is pressed. This generates a geotag record that can be observed in HyperTrack dashboard. 
Upon the completion of the site visit, your app user can press "Check out" button to mark the completion of the visit. At the end of the work day, app user can end the work shift by pressing the "Clock out" button. At this point, the app will stop tracking. Every visit destination entry and exit is automatically captured due to destination geofences you previously set up with Geofences API. Your app users can run Visits App in the background, and HyperTrack will do the rest to help you automatically see when your users enter and exit visit destination geofences. Create trips for scheduled visits Use Trips API to create destinations with scheduled visit times. Once the trip is created, you generate a share URL that can be used to communicate app user's location and ETA to the customer awaiting at the destination. Visits App will only start tracking once you create a trip for the app user. Once the trip is completed, tracking will stop. You may create a trip for an app user, for example, 2 hours before scheduled appointment. Once a trip is created, Visits App starts tracking on your app user's device. Every trip created for this user will be shown in the Visit App screen for the day of the scheduled trip. For questions or comments, please do not hesitate to contact us.
|Valkyrie/Hyrax work state| Tom Johnson, Lynnette, Josh Gum, Linda Sato, Collin Brittle, Anna Headley, LaRita Robinson (plus SVT) |Hyrax dev work - since last meeting| Lightweight - main thing worth noting, tickets from metadata group. Rights work field. Title as a singular property (still multiple on the backend) - make it singular through the form. Still a few tickets from the metadata WG output, while 3.0 work continues. 3.0 beta released - another beta before RC. Issues found in testing that haven't been dealt with. While Valkyrie goes on. Testing coordinator - needed. No one there to run the testing process. Notch8 can put some resources to that. (Kelly from Notch8) |Hyrax dev work - upcoming| Immediate priority to close out - priority to clean up bugs. Quiet but important change - all metadata changes get in as well. Short term - closing out 3.0 items and a release. Continuation of Valkyrie work, metadata work and permissions. Valkyrie and permissions work will have their own resources. Metadata WG driven work. Ruby 2.6 support added over past few days. Chris Colvard has been doing the work to get Rails 5.2 support in anticipation of 6. James Griffin from Princeton - Hyrax CI/CD process. Circle work. Technical facing point, agenda item in Hydra Tech tomorrow to remove Travis Build from CI process in favor of Circle Matrix. |Users Work/ Permissions (upcoming work related)| Immediate term Permissions: - met with interested parties last week, identified places for immediate work. 3 big items: IP-based access controls. Happen for a Group rather than just institutional access. Collections or works with an IP-based Group authentication. Start dates in the future for leases - use cases, starts in the future and has a sunset date. Deriving Group Membership from an authentication system (LDAP or Shib groups hooking into Hyrax Groups). Document how that is set up rather than a ton of code work. Some high-level audit to make sure what permissions are set up now is pretty consistent, maybe open UI-facing tickets. Collin - Permissions 2nd Phase - wait for permissions to be finished in Hyrax first. Waiting for a production release beyond 3.0. Manager is different from an editor - Ability to do bulk changes to access control - maybe another way to do it. Inheritance of collection permissions happens at creation. Collections extensions makes it more robust. Once the item is created, understanding where permissions came from. If a collection changes, the item retains its permissions. ACLs on an item, don't have a way to track where they came from. Beyond deposit - very hard. Develop out stories and use cases for how the bulk change would occur. Adequately staffed, schedule a sprint. Who will be tech lead? When sprint is scheduled - will determine tech lead. Tom has conflicts with Valkyrie. Moira: earliest dates - week of the 15th of April. Julie got an IIIF implementation - extracted from Hyku into Hyrax. New release - bumping version of Hyrax. May go forward with HyKu 3.0. HyKu Up is at hykuup.com. Consolidate places where you can learn about Hyku - Hydra in a Box sites, dead Hyku Direct sites. Make sure all Hyku-related releases get seen. At Partners or similar - get notices for significant releases up on main Samvera website. Documentation and announcement piece of Hyku site needs rejuvenation from Stanford heyday.
Ubiquity may be moving toward Partner - working on getting their service up |Roadmap Council Check In| Roadmap Council - whitepaper, state of technical community, acts as a clarifier about a perception of turmoil in Samvera Land. Should be printed before LDCX. Next agenda - what's next for the WG How we can help Steve with next step of resourcing? - |Prepping Hyrax for Partners|
BEIJING — June 26, 2013 — Perched in a high-rise above the 3rd ring road in Zhongguancun, Beijing’s technology innovation hub, Joshua Xiang and his team are hard at work. Among the typical desktop tchotchkes are scattered fragments of automotive electronics: speakers, car stereos and touch screens. As a group program manager for the Windows Embedded Automotive team, Xiang leads a team of program managers, along with developers and testers, in building the next release of Ford SYNC, which was launched earlier this year and is on display this week at the Mobile Asia Expo. Since Ford SYNC was first released in 2007, Ford Motor Co. and Microsoft have led the way in developing an updateable in-car technology solution that helps drivers stay connected and safe while behind the wheel. More than 5 million vehicles have since been sold with Ford SYNC, with support for more than 20 languages. For this release, Ford wanted a universal code base that could be used across different countries, and regionalized as necessary. This is also the first release of Ford SYNC that was developed and installed in China, and which supports the use of voice commands in Mandarin, making the technology a more feasible solution in the Chinese market. Unlike many other groups within Microsoft, the Windows Embedded Automotive team works directly with the hardware they’re coding for and interacts closely with the customers — in this case, a team of technical experts at the Ford R&D center in Nanjing that oversees development of all China-specific features. As an extension of the team in Redmond, Wash., the Beijing branch is part of Microsoft Asia Pacific R&D Group (ARD), Microsoft’s largest R&D center outside the U.S. In addition to developing code and helping globalize the product for use in different markets, the Beijing team was largely responsible for developing the first version of Ford SYNC for battery-powered and hybrid vehicles, which Ford launched globally last summer. And with this next release of Ford SYNC, they are developing China-specific features, getting the code ready for rollout and helping finalize the overall user experience for the Chinese market. “The China team’s work provides great examples of ARD’s ability to respond to the unique opportunities in the local market while contributing to global products,” said Ya-Qin Zhang, corporate vice president and chairman of ARD. The Mandarin version of Ford SYNC The most notable of the new features is Ford SYNC’s support for Mandarin based on research from six different regions. As Xiang points out, developing support for Mandarin isn’t as simple as making a direct translation. With their subtle differences in pitch and accent, tonal languages like Mandarin are an unprecedented challenge for speech recognition engines. Add to that the built-in learning curve needed for speech recognition engines to generate conversational language. “The challenge with most translation software is that the outcome, while technically accurate, isn’t captured in the same way as a native speaker would put it,” said Xiang. “Our goal was to create a voice recognition system that not only understands the driver’s commands, but also enables a user experience that is seamless and free of distractions.” Extensive testing was conducted by Ford in 13 cities throughout China. In addition, the Windows Embedded Automotive team worked with Nuance to adjust its speech recognition engine so the user experience would feel more natural to drivers in China. 
The team also made significant changes to the Ford SYNC user interface to create a translation that flowed well with Mandarin’s distinctive syntax. In addition to developing the speech and voice recognition support for Mandarin, Xiang’s team also created features that would adapt the navigation system in MyFord Touch to some of the cultural nuances. For example, people in China prefer using landmarks as their point of reference, and street addresses are listed in reverse order from what is common in the U.S. Furthermore, under Chinese law it’s illegal for GPS to provide emergency responders with a person’s exact location. With these things in mind, the team redesigned and developed a navigation interface that would abide by government regulations while providing drivers with an interface that was easy to use. The perfect laboratory for imperfect driving conditions As Ford’s primary development partner for Ford SYNC, Microsoft is also responsible for integrating the technologies, which include navigation technology from Telenav and Nuance’s speech recognition engine. The team is split up into small groups that focus on developing specific Ford SYNC features. In addition, they work closely with colleagues at the main base of Windows Embedded operations, located at Microsoft’s Redmond headquarters. The result is a global, collaborative work environment that blends code development with hands-on troubleshooting to address the requirements and challenges of driving in China. In Beijing, perpetual construction, street beautification and new drivers on the roads have all merged to create a situation in which the normal rules of the road don’t apply. Drivers routinely back up or stop midstream if they’ve missed a turn, pedestrians pay little heed to traffic, and gridlock is the rule of the day. Collectively, these factors are the perfect laboratory environment for recreating the imperfect conditions that drivers face around the world. And the break-neck speeds at which new buildings and roads populate Beijing’s cityscape provide a challenge all their own for GPS signals, generating anomalies such as cars floating out to the ocean or disappearing from the navigation screen altogether. Fine-tuning the best driving experience In response, the Windows Embedded team worked side-by-side with Telenav engineers to run field tests and fine-tune the navigation system’s dead reckoning algorithm. The team received software code from Telenav, which it integrated into Ford SYNC for a battery of tests that team members run either at their desks, on the “bench” (a complete mockup of Ford SYNC located in the office), or in one of two test vehicles that the team has at their disposal. Team members work with Ford to develop a routing plan and identify which components of Ford SYNC to focus on. Typically, the plan includes several rounds of weeklong testing, with a blend of driving through the countryside and in different parts of the city where traffic is especially challenging. And a team of Xiang’s colleagues are constantly testing the latest smartphones to ensure compatibility. The combination of multiple rounds of testing and dog-fooding on the weekends has created a solid code base for the evolution of Ford SYNC. “Working directly with Ford, the equipment and other technology providers has given us a better sense of the challenges and helped us address a lot of issues,” said Xiang. 
“The result is a dependable technology solution that provides the foundation for a connected driving experience in practically any driving environment. And as demand for in-car technologies expands, Microsoft stands ready around the world to help carmakers create driving experiences that suit the needs of the local driver.”
Generative AI: What Is It, Tools, Models, Applications and Use Cases One significant application of generative AI in healthcare is in medical image analysis. AI models are trained to detect patterns and abnormalities in medical images, such as X-rays and CT scans. This allows for quicker and more accurate diagnoses, ultimately leading to better treatment outcomes. - Complex math and enormous computing power are required to create these trained models, but they are, in essence, prediction algorithms. - After an initial response, you can also customize the results with feedback about the style, tone and other elements you want the generated content to reflect. - Gaming companies can use generative AI to create new games and allow players to build avatars. - I opted to compose an additional prompt that would get ChatGPT to do a Chain of Thought approach on this answer. - Automotive companies can use generative AI for a multitude of use cases, from engineering to in-vehicle experiences and customer service. Through machine learning, practitioners develop artificial intelligence through models that can "learn" from data patterns without human direction. The unmanageably huge volume and complexity of data (unmanageable by humans, anyway) that is now being generated has increased the potential of machine learning, as well as the need for it. Some examples of foundation models include LLMs, GANs, VAEs, and multimodal models, which power tools like ChatGPT, DALL-E, and more. ChatGPT draws data from GPT-3 and enables users to generate a story based on a prompt. Another foundation model, Stable Diffusion, enables users to generate realistic images based on text input. Generative AI can learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but don't repeat it. Generative Adversarial Networks What is new is that the latest crop of generative AI apps sounds more coherent on the surface. But this combination of humanlike language and coherence is not synonymous with human intelligence, and there currently is great debate about whether generative AI models can be trained to have reasoning ability. One Google engineer was even fired after publicly declaring the company's generative AI app, Language Models for Dialog Applications (LaMDA), was sentient. Now, pioneers in generative AI are developing better user experiences that let you describe a request in plain language. After an initial response, you can also customize the results with feedback about the style, tone and other elements you want the generated content to reflect. Generative AI raises several ethical concerns, including copyright infringement and the creation of fake content. Bias can also be introduced into the model if the training data is not diverse enough, leading to discriminatory outputs. Artists can use these generated pieces as a starting point for their own creative process, manipulating and editing the pieces to fit their vision. Amazon debuts generative AI tools that help sellers write product descriptions The technology uses machine learning algorithms that analyze large datasets, identify patterns and generate new output based on this learned knowledge. The process of training generative AI models involves exposing a machine learning algorithm to large volumes of data, then training it to recognize and replicate patterns, which can then be used to generate new content.
Generative AI is a new buzzword that emerged with the fast growth of ChatGPT. Generative AI leverages AI and machine learning algorithms to enable machines to generate artificial content such as text, images, audio and video based on its training data. This is the basis for tools like Dall-E that automatically create images from a text description or generate text captions from images. But it was not until 2014, with the introduction of generative adversarial networks, or GANs (a type of machine learning algorithm), that generative AI could create convincingly authentic images, videos and audio of real people. Examples of foundation models include GPT-3 and Stable Diffusion, which allow users to leverage the power of language. For example, popular applications like ChatGPT, which draws from GPT-3, allow users to generate an essay based on a short text request. On the other hand, Stable Diffusion allows users to generate photorealistic images given a text input. You can then use the computer to store the data and search the data by exploiting a tree-like structure. It turns out that trees, or at least the conceptualization of trees, are an important underpinning for the latest innovation in prompt engineering and generative AI. Prompt engineering is gaining on generative AI via the emergence of the Tree of Thoughts (ToT) … In this article, we explore what generative AI is, how it works, its pros, cons and applications, and the steps to take to leverage it to its full potential. It's only the beginning of this tech, so it can be hard to make sense of what exactly it is capable of or how it could impact our lives, but so far, it's impressive. We're committed to answering the biggest questions surrounding it, and sharing what we know. By examining those two lines of thought, hopefully a decision can be made about which of the two is most meritorious. In general, you might want to somehow compare and contrast each of the distinctive lines of thought. For example, you could try to use numeric weights and mentally calculate the winning potential of each line of thought. Another approach could be to directly compare the lines of thought side by side. Generative AI enables early identification of potential disease to create effective treatments while the disease is still in an initial stage. For instance, AI computes different angles of an X-ray image to visualize the possible expansion of a tumor. We cannot say for sure what goes on in the human mind when thinking about things such as which chess move to make. In any case, we have all agreed to refer to those human ponderings as thoughts. Generative AI can also disrupt the software development industry by automating manual coding work. Instead of coding the entirety of a piece of software, people (including professionals outside IT) can develop a solution by giving the AI the context of what they need. A low-resolution, bad-quality picture can be turned into a decent-resolution image thanks to some generative AI tools.
If we take a particular video frame from a video game, GANs can be used to predict what the next frame in the sequence will look like and generate it. This approach implies producing various images (realistic, painting-like, etc.) from textual descriptions of simple objects. The most popular programs that are based on generative AI models are the aforementioned Midjourney, Dall-e from OpenAI, and Stable Diffusion. But still, there is a wide class of problems where generative modeling allows you to get impressive results.
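Since GANs come up repeatedly here (image synthesis, frame prediction), a tiny self-contained sketch may help show what "adversarial" means in practice. This is a toy example on one-dimensional data written in PyTorch; the network sizes, learning rates and the fake "dataset" are all illustrative choices, not any production model.

import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples drawn from N(3, 1).
latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0            # "real" samples
    fake = G(torch.randn(64, latent_dim))      # generated samples

    # Discriminator: push real towards 1 and fake towards 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 for fakes
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, latent_dim)).mean().item())

The two loss terms are the whole trick: the discriminator is rewarded for telling real and generated samples apart, while the generator is rewarded for fooling it, and the same idea scales up to images and video frames.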
So this blog post is something different: it's about my own research. The main thing I do is make computer models of stars in binaries and then compare them to stars, galaxies and other fun things and events in the Universe. I also make as many models and predictions as possible public so other astronomers can use my models in their own studies. Over the past year I have created my latest set of models which, while not perfect,are still a big improvement on my first go. I was in the process of writing this all up for what I call ”the instrument paper” when I started finding lots of exciting results that I thought I should publish as quickly as possible... which has led to some pretty cool papers by myself and others. Out of these, the coolest started with the rumour that the LIGO gravitational wave detectors had found something. Gravitational waves come from many sources, but one of the most common is from the merger of binary star systems… of exactly the kind my models can simulate. For a long time I ignored the rumours, but papers by other people in the same field started to appear which suggested the rumours might be worth investigating further. It wasn't until someone posted a comment on Facebook, quoting an email from someone in the US stating that LIGO had detected the merger of two black-holes, each of around 30 times the mass of the Sun, merging into a single black-hole that I got interested. While a lot of scientists were excited as gravitational waves had been detected for the first time I was excited because black holes are created in stars and we had a new way to determine what the masses of the black holes that are born in stars could be. It seemed that a lot of researchers thought that these black holes were unusually massive. That is true, and they are more massive than most black holes we have seen in the past, but they’re not too much more massive. Anyway, I like observational data and I like nothing better than comparing it to what my binary models predict. So I started to write a code and made some predictions in the two weeks before the announcement. The cool thing was that we already made the black hole binaries in existing models; we only needed to write a new code to calculate how long it took for the two black holes to merge and this would allow us to see which of our models could produce the observed merger within the age of the Universe. This work was done in close collaboration with Elizabeth Stanway from the University of Warwick. As she was in the UK and I was in Australia at the time we could take turns working on an interpretation paper with one of us sleeping while the other was hard at work! At the time it was kind of a shot in the dark - the rumours might have been untrue, and we’d have had to write off our hard work as a useful but ultimately futile exercise. Then the announcement was made and we confirmed the masses of the black holes and submitted our paper. As always we had to go through the peer-review process and we were asked to considerably expand our predictions to include neutron star-neutron star systems as well describe in more detail our code, which had been discussed elsewhere and it will all be in the instrument paper but it was best to also include a description of all the important stuff in this paper. So after a considerable amount of extra work and clarifications to our much improved paper it was finally accepted. During this time others published very similar papers but with different views and some interesting extensions. 
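The "new code" mentioned above, which works out how long two black holes take to spiral together, is not reproduced in this post, but the core of any such calculation is the classic Peters (1964) coalescence timescale. Below is a rough back-of-the-envelope version for a circular binary, written as a small Python sketch rather than the actual BPASS post-processing code; the example masses and separation are made up.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_sun = 1.989e30     # kg
AU = 1.496e11        # m
yr = 3.156e7         # s

def merger_time_circular(m1_msun, m2_msun, a_au):
    # Peters (1964) coalescence time for a circular binary, returned in years:
    # t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
    m1, m2 = m1_msun * M_sun, m2_msun * M_sun
    a = a_au * AU
    t = 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))
    return t / yr

# Two ~30 solar-mass black holes (GW150914-like) separated by 0.2 AU
print(merger_time_circular(30, 30, 0.2))   # ~1e10 years, comparable to the age of the Universe

The steep dependence on separation (a to the fourth power) is why the final orbit left behind by the binary's earlier evolution decides whether a merger happens within the age of the Universe at all.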
Our paper was always about showing that our BPASS code could predict roughly the right rate of black hole mergers and could reproduce GW150914, as well as all the other observations it can predict, where some other similar codes only consider single stars so can’t produce GW150914 type events. However one really cool and exciting new bit of science we put in was in response to trying to explain why our computer models predicted more massive black holes than some other codes. This is actually a key uncertainty of stellar evolution, how massive is the final remnant formed by the death of a star? The merger of black holes gives us a way to measure this, but it only tells us about the black holes in binary stars. What about single black holes that might been ejected from a binary by a ‘kick’ they received in their natal supernova? At the same time as GW150914 was being announced, another really quite awesome study (Wyrzykowski et al., 2016) has used gravitational microlensing to find candidate single black holes drifting through our own Galaxy. This technique is widely used to discover planets around other stars, but they also found many possible black holes. What was interesting about the study is they found a number of low mass black holes near the minimum mass that can be created in stars of 3 times the mass of the Sun. Others had suggested these didn’t exist so this was a surprise. In our models we have a range of possible black hole masses from 3 times the mass of the Sun up to and beyond the masses of GW150914. A really fun thing though is that these single black-holes that were discovered are on average lower masses than those we see in binaries in our Galaxy and those we saw from GW150914 and the other recent similar detections. The figure below shows the results of the black hole masses. Predictions from our code for single stars are shown in red, black holes in binaries in blue with the solid line representing the mean black hole masses with the dashed lines represent the boundaries within which about 68% of black hole masses must be within. The vertical axis is the black hole masses while the horizontal axis is the “metallicity” of the stellar models - a measure of which generation of stars they are: less metals is an earlier star. The grey shaded region represents the metallicity range of our Galaxy. The black horizontal lines are the masses of the black holes in GW150914. The asterisks are known black holes in binaries while the red and blue points are the masses of the single star and binary star observations. What this plot tells us is that more massive black holes are more likely to be seen in binaries and lower mass single star black holes are more likely to be unbound in their forming supernova and so seen in isolation. This is quite an interesting finding: more gravitational wave sources and more single black hole detections by gravitational microlensing will tell us a surprising amount about black hole formation. My favourite plot those is the next one, it is a quite colourful and dramatic plot. Each panel shows the predictions for a different generation of stars in the Universe and the brightest colours indicate where the most common black hole mergers should occur. They range from some of the earliest stars in the Universe (in the upper left panel) to those similar to the ones forming in our own Galaxy (in the lower right). 
On top of this are the 3 detected black-hole mergers to date of GW150914 (dark blue, top right) LVT151012 (green, middle) and GW151226 (cyan, lower right). Each system is plotted twice as we don’t know which black hole came from which star: the initially more or less massive. The contours and shading then represent where the masses of the most likely black hole mergers. We can see that GW150914 is only possible at the lower metallicities, while the others are possible in all populations and GW151226 is closer to the typical mass of black hole mergers expected in all the populations. The one interesting thing is GW151014 is a typical merger at the lowest metallicities which means it might have come from some of the earliest stars to form in the Universe. Although we can’t be certain. To show this we need to do a similar study to some of our fellow astronomers, Belczynski et al., who also modelled in how many stars of different generations were formed at different times through the Universe and how long they would take to merge and so whether they would be observed today. They found either the binary merger was relatively recent or again closer to the formation of the Universe. While this requires lots of assumptions about unknowns in cosmic history, we may try to calculate this with our own models in future. The key result we wanted to show, and why we wrote the paper, was just that in BPASS our models naturally have these black-hole mergers in. The code does both population and spectral synthesis and it is one of very few spectral synthesis codes that can predict black-hole mergers alongside all our predictions of stellar clusters and galaxy populations. Why? Well the other most common ones assume all stars are single! Only a binary code like ours can get close to the correct black-hole merger rate inferred from LIGO after its first observing run. We’re starting to find that including binaries makes a key difference to our understanding of the Universe - all the way from distant galaxies to individual stars. Combining the LIGO and lensing results with our BPASS code has added another piece to the jigsaw.
Getting Error 26 in SSMS when trying to connect to my Docker instance I'm trying to spin up a SQL instance of which I'm an admin so that I can follow the exercises in a Microsoft T-SQL book. I'm trying to connect to my Docker SQL using the password that I set up. I'm using what I believe is a default username ("sa"). I've tried putting the server name with and without the port number. I have provided the error below that I receive when I attempt to connect to the server using the credentials of the screenshot above. I have also attempted to add a rule to my firewall to resolve this error but was unsuccessful thus far. Let me know if there is any other information I should provide. Thank you all for your time. Please help to share the error in the content rather than the image @Ashok I have edited the question to state when I get the error. are you connecting from Windows machine A to Windows machine B? @Ashok I am connecting to a Docker container with a SQL image on the same Windows machine I am using SSMS on. Change your .\SQLEXPRESS,and add your SQL express name only and it works for me Basically, you've got the wrong server\instance in the connection string @ashok Thank you for your assistance. However, I am not sure what .\SQLEXPRESS is or how to obtain a SQL express name. Can you please share the connection string code snippet Or else try to start SQL server manually if not automatically started @Ashok Can you clarify how I can obtain the connection string code snippet? First, try to start SQL server manually if not automatically started @Ashok OK. I have clicked "stop" on the sqltest docker container. Then I clicked "run." It is currently running. @MeowMeow Have you tried just using localhost,1401? If that doesn't work, change your -p argument on your container to use -p 1433:1433 and then connect with localhost @dfundako I think most people on here are assuming I have a lot of experience I do not have. I do not know what a -p argument is aside from it probably has something to do with a PS command which I am not using. Are you saying I should type in localhost, 1401 into the SSMS login screen? @MeowMeow Ahh ok. I'll add an entire answer. Hold please. When you make your docker container, try and use the following, but use your own password, container name, image name, etc. But try and leave the port mapping, or the -p arg as is. This comes straight from the MS docs: docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong@Passw0rd>" \ -p 1433:1433 --name sql1 -h sql1 \ -d mcr.microsoft.com/mssql/server:2019-latest Or in one line: docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong@Passw0rd>" -p 1433:1433 --name sql1 -h sql1 -d mcr.microsoft.com/mssql/server:2019-latest MS Docs Once that container is up, open SSMS and in the connection prompt, use the following: By just using localhost, that assumes you're trying to connect to port 1433. If you change your port mapping in your container command, you use the comma and port number after localhost, like this: localhost, 1456 I ran that command in Powershell and got the error that "-p" is not recognized as a cmdlet. https://ibb.co/tpV6qKk @MeowMeow Try putting it all on the same line and remove the backslash values. Currently it is downloading an image from the internet even though I already had an image on docker. It claimed it was unable to find that image locally. It is still downloading. Followed directions and got the same error. - https://ibb.co/njxQyzm
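Pulling the thread above together, the following is roughly the sequence that should produce a connectable instance. The password, container name and image tag are the placeholders from the Microsoft docs quoted in the answer, and the sqlcmd check is optional (it ships with SSMS and the mssql-tools package).

# Run from PowerShell or CMD; keep the docker run command on a single line
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong@Passw0rd>" -p 1433:1433 --name sql1 -h sql1 -d mcr.microsoft.com/mssql/server:2019-latest

# The PORTS column should show 0.0.0.0:1433->1433/tcp
docker ps

# Optional sanity check from the host before trying SSMS
sqlcmd -S localhost,1433 -U sa -P "<YourStrong@Passw0rd>" -Q "SELECT @@VERSION"

If sqlcmd answers, SSMS should connect with server name localhost,1433 (or just localhost, since 1433 is the default port), login sa and the same password. Error 26 generally points at the server name or instance being wrong rather than the credentials: a Docker container is reached by host and port, not by a named instance such as .\SQLEXPRESS.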
Didn’t like windows 8, Excited about a newer and a better version of windows. Now you can now try out Office for Windows 10 yourself, according to a post on the Office Blog. new users signed up for the Windows 10 Technical Preview can now download the apps – including Word, Powerpoint, and Excel – from the Windows Store Beta. Download links for Windows 10 Preview Get windows 10 Technical preview iso [ Here ] This includes Microsoft Word , Microsoft excel and Microsoft PowerPoint A universal version of OneNote already comes with Windows 10, while Outlook and Calendar are expected to be released at a later date. It is not know yet what type of version would be released, including enterprise, Pro , Build , Version of this release and the prices for this release are also not known. FREE upgrade for all WIndows 7 and 8 Users At the recent Microsoft’s Windows 10 event, it was announced that all Windows 7, 8 and 8.1 users will be able to upgrade to Windows 10, for free. People who want to upgrade after that first year have to purchase it. Only people who have a legitimate copy of Windows 7 or 8.1 will receive a license, as long as you claim it within a year after Windows 10’s launch. Microsoft said that the free upgrade will be valid for the first year immediately after the release; if you upgrade during that time, you’ll get Windows 10 for free, for the “lifetime” of your device. The Windows Phone 8.1 will also get the update to Windows 10 on their mobiles. So why does windows want you to have the free technical preview download ? Here is what they have to say Help shape the future of Windows Download the Windows 10 Technical Preview and try out the new features, give us your feedback, and get the chance to win cool stuff. What Is A Technical Preview? The Windows 10 Technical Preview (TP) is an evaluation copy for enterprise users. Basically, it’s an early test version. Businesses are given the chance to try it out, see how it fits into their routines and provide Microsoft with feedback. Ideally, Microsoft will integrate the collected data into a final product that meets the needs of its customers. Get early access to releases Experience Windows 10 in its earliest stages. Download the Windows 10 Technical Preview to get the latest build and see the progress as it’s happening. Terry Myerson the VP of Windows at Microsoft said that the company wanted to make this version of windows a “seamless upgrade” for those users when it is released. Microsoft usually charge around £100 for new releases of their highly popular OS The Windows 10 Trial Will Expire [April 15, 2015.] Another reason why you shouldn’t get too comfortable with using Windows 10 Technical Preview as your main OS, is that the preview build will expire on April 15, 2015. You can sign up for said program by visiting the Insider website and logging in using an appropriate Microsoft account. After accepting the terms for the program, you’re walked through the download process for the Windows 10 preview. We will be back with a detailed review and update on windows 10 when it is released in Australia or the USA If you haven’t yet signed up for the Windows 10 Technical Preview, you can do so here.
CMS & WordPress Hi. I am creating a lyrics website in Wordpress. I am using a custom post type called "lyrics" with artist, and album as taxonomies. And I have run into a problem. The artist taxonomy includes for now 6,000 records, which makes working with the add new (custom) post type (new lyric) extremely impractical because every time I want to add a new lyric, the page loads the entire artist taxonomy. Selecting a checkbox out of that taxonomy is essentially impossible. Is there a workaround? Something like adding a different taxonomy metabox that doesn't load all of them? There is a plugin which adds autosuggest to the taxonomy box (like google search) which I have installed, but the page is still sluggish, probably because even with that plugin, the taxonomy still gets loaded in full. This is already impractical, and it will get completely impossible to work with adding new records when I add items to the "album" taxonomy. Surely having several taxonomies with lots of items is not as uncommon. How do you deal with that when it exists? Or, if possible, how would you do it differently? As in, maybe use artist and album as custom post types instead of taxonomies, but this leaves me with the question of how to associate them and do things like "display all songs from this album" or "display all songs by this artist", which I can easily do with the current structure where "album" and "artist" are taxonomies. I wouldn't put it past Wordpress to be running over 6K queries. I don't really think Wordpress is the right tool for the job considering the amount of data you have already. You will either need to tune the hardware and/or add excessive server side caching. Data is stored very poorly in Wordpress which causes it to be extreme sluggish when scaling. There probably are some work-arounds but the better work-around is using the right tool for the job. Not to mention wordpress developers typically don't think about scalability. So the more and more plugins you add the worse and worse the problem is likely to become. Actually, the website is blazingly fast without any caching on the front end and on every other page except on the particular page for adding a new lyric post, so you're wrong about it being a Wordpress problem. It's just that the browser doesn't deal very well with loading a <select> element with 6,000 items in it (the page source is 1.6 MB due to loading all those taxonomies at once). Scrolling the page is very slow and jerky. Kind of like it happens when running a page with a badly designed flash script. That's why I was looking for an alternative to the taxonomy metabox. I did find one http://wordpress.org/plugins/admin-category-filter/screenshots/, but since the metabox still loads all of the items and the only thing it changes is adding the autocomplete metabox, the problem isn't really resolved. If I could get it to retrieve the taxonomies from the database via Ajax without loading the whole <select> element, the problem would be solved. This topic is now closed. New replies are no longer allowed.
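One concrete workaround is sketched below, under the assumption that the post type is "lyrics" and the taxonomy is "artist" (those names come from the post; everything else is illustrative). Registering the artist taxonomy as non-hierarchical makes WordPress swap the checkbox metabox for its tag-style box, which looks terms up over Ajax instead of printing all 6,000 of them into the edit screen; the meta_box_cb argument is the alternative if the taxonomy has to stay hierarchical.

<?php
add_action( 'init', function () {
    register_taxonomy( 'artist', 'lyrics', array(
        'label'             => 'Artists',
        'hierarchical'      => false,   // tag-style UI with Ajax autocomplete, no checkbox wall
        'show_ui'           => true,
        'show_admin_column' => true,
        // Alternatively, keep it hierarchical and supply your own metabox:
        // 'meta_box_cb' => 'my_artist_ajax_metabox',
    ) );
} );

// Front-end queries such as "all lyrics by this artist" keep working unchanged:
$songs = new WP_Query( array(
    'post_type' => 'lyrics',
    'tax_query' => array(
        array( 'taxonomy' => 'artist', 'field' => 'slug', 'terms' => 'some-artist' ),
    ),
) );

The trade-off is purely in the admin UI: term relationships, archives and tax_query behaviour are the same whether the taxonomy is hierarchical or not.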
10 Bad Project Warning Signs One of the great things about being an agency owner is the ability to turn down projects. I've come across a few projects recently that sounded interesting but made me feel nervous. It wasn't any one specific thing; rather a series of small little things that set my internal alarm bells ringing. As such I've written up a list of bad project warning signs. Individually none of these signs should be deal breakers. However put a few of them together and it may be worth thinking twice about taking on that project. - The project needs to be done in an incredibly short space of time, due to a fixed deadline. In these situations the potential client has often known about the deadline for a while. However it's taken them longer to plan the project than initially anticipated so they expect the developer to make up the time. - The potential client says that they have no idea about budget. This could indicate they haven't done their home work and aren't very serious about the project. - The potential client says they have a budget but won't tell you what it is. This is often an indication that the client doesn't trust you and feels that if they let you know their budget, you'll simply charge them more for the same solution. - The client says they want the site to be as cheap as possible, or they have an extremely low budget. This usually means the client doesn't value their web presence much, preferring cheaper over better. In this situation potential clients are often spending their own money, can be extremely demanding and expect more for less. - The client expects much more from the project than their budget will allow. In these situations it can be difficult to manage the client's expectations. - You are expected to come up with design ideas for the pitch. This is often problematic as you won't know enough about the project at such an early stage. These type of pitches can turn into a beauty contest where participants are solely judged on the visuals they create rather than their ability and track record. There is a strong risk that elements of your design will be used even if you don't win the pitch. - The potential client won't tell you how many agencies they have contacted about this project. This could indicate that they have emailed a large number of agencies and are shopping for the lowest quote. - You will be pitching against a large number of other agencies. This often means the client hasn't done their homework researching potential suppliers. If the pitch involves lots of preparation this puts a financial burden on the agency, while limiting the chance of success. - There is no central point of contact. Projects for large companies often involve many stakeholders. If there is nobody managing the project at the client's end you'll having to do it for them. This vastly complicates the project and increases your overheads. Being a supplier you'll have no power in the organisation, making your task extremely difficult. You also run the risk of getting sucked into company politics. - The potential client hasn't provided you with a request for proposal and doesn't have the time to fill in your design questionnaire fully. If the client isn't willing to put the required time into the project it could indicate they aren't going to take the project seriously. It could also indicate that they have contacted lots of agencies and just don't have the time or are simply window shopping. What are your bad project warning signs? 
Are there any projects you've taken on and wished you hadn't? Conversely, were there any projects you were nervous about taking on only to find those concerns were unfounded?
Flappy Bird is a popular mobile game that gained immense popularity upon its release in 2013. The simple yet addictive gameplay, where players control a bird by tapping to make it fly and navigate through a series of pipes, quickly captivated millions of players. Overview of the GitHub Repository The key features of this game implementation include: - Responsive gameplay: The game is designed to be played on both desktop and mobile devices, ensuring a seamless experience for all users. - Simple controls: The player controls the bird's movement by pressing the spacebar or tapping the screen, making it easy to play for both casual and experienced gamers. - Score tracking: The game keeps track of the player's score, incrementing it each time the bird passes through a set of pipes. The score is displayed on the screen, adding a competitive element to the gameplay. - Collision detection: The game checks for collisions between the bird and the pipes, ending the game if a collision occurs. This adds a challenging aspect to the gameplay, requiring the player to navigate the bird through narrow openings. - Randomized pipe generation: The pipes in the game are generated randomly, creating a different level layout each time the game is played. This adds variety and replay value to the game. Clone the Repository: Start by cloning the GitHub repository to your local machine. You can do this by running the following command in your terminal: git clone https://github.com/username/flappy-bird.git Install Dependencies: Navigate to the project directory and install the required dependencies using a package manager like npm or yarn. Run the following command: This will install all the necessary packages and libraries required for the game. Run the Game: Once the dependencies are installed, you can start the game by running the following command: This will start a local development server and open the game in your default web browser. Play the Game: Use the spacebar or mouse click to control the bird's flight and navigate through the obstacles. Try to achieve the highest score by avoiding collision. By following these steps, you can set up the Flappy Bird game locally on your machine and start playing. Make sure you have the latest version of Node.js installed before proceeding with the setup. Understanding the Code The main code structure of the Flappy Bird game consists of several files that work together to create the game experience. Here are the key files and their functions: index.html: This file serves as the entry point of the game and contains the HTML structure for the game canvas. It also includes the necessary CSS styles for the game elements. script.js: This file is where the majority of the game logic is implemented. It handles the game initialization, rendering, and updating of the game elements. It also manages user input and collision detection. assets.js: This file is responsible for loading and storing the game assets such as images and sounds. It ensures that all required assets are loaded before the game starts. bird.js: This file defines the Bird class, which represents the player-controlled character in the game. It contains methods for updating the bird's position, handling user input to control the bird's movement, and detecting collisions with pipes. pipe.js: This file defines the Pipe class, which represents the obstacles in the game. It contains methods for updating the pipe's position, rendering it on the screen, and detecting collisions with the bird. 
Now let's discuss the game mechanics and logic. In Flappy Bird, the player controls a bird that must navigate through a series of pipes without colliding with them. The bird constantly falls due to gravity, but the player can make it flap its wings to gain height. The goal is to achieve the highest score possible by successfully passing through as many pipes as possible. The code also includes logic for generating random pipe positions and calculating the score based on the number of successfully passed pipes. It also handles user input to control the bird's movement by listening for keyboard events. Understanding the code structure, the different files, and the game mechanics and logic will help you navigate the Flappy Bird game implementation and make any necessary modifications or improvements. Contributing to the Project Fork the repository: Click on the "Fork" button at the top right corner of the repository page. This will create a copy of the repository under your GitHub account. Clone the repository: On your local machine, navigate to the directory where you want to clone the repository. Use the following command to clone the repository: git clone https://github.com/your-username/flappy-bird-game.git Create a new branch: Before making any changes, create a new branch to work on. Use the following command to create a new branch: git checkout -b my-feature Make your changes: Open the project in your preferred code editor and make the necessary modifications or additions. Test your changes: Before submitting your changes, ensure that the game is still functioning correctly. Test the game in a web browser to verify that your changes have not introduced any bugs or errors. Commit your changes: Once you are satisfied with your modifications, commit your changes to your local repository using the following commands: git add . git commit -m "Brief description of your changes" Push your changes: Push your changes to your forked repository on GitHub using the following command: git push origin my-feature Submit a pull request: Go to the original repository page and click on the "New pull request" button. Fill in the necessary details, including a brief description of your changes, and submit the pull request. The project maintainers will review your changes and provide feedback if necessary. Once your pull request is approved, your changes will be merged into the main repository. Please follow the development guidelines specified in the repository's CONTRIBUTING.md file to ensure a smooth contribution process. We provided step-by-step instructions for setting up the project locally and explained the main code structure, different files, and their functions. We also delved into the game mechanics and logic behind Flappy Bird. To encourage readers to get involved, we discussed how to contribute to the open-source GitHub repository and mentioned the development guidelines and the process for submitting pull requests.
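As a rough illustration of the mechanics just described (gravity, flapping and collision with pipes), here is a stripped-down sketch in JavaScript. It is not the repository's actual bird.js or pipe.js code; the constants, object shapes and the gameOver stub are made up for the example.

// Illustrative only: not the repository's actual implementation.
const GRAVITY = 0.4;   // constant downward acceleration per frame
const FLAP = -7;       // upward velocity applied on spacebar/tap

const bird = { x: 50, y: 150, w: 24, h: 24, vy: 0 };
const pipes = [];      // each pipe: { x, gapY, gapH, w }

function flap() { bird.vy = FLAP; }

function collides(bird, pipe) {
  const inX = bird.x + bird.w > pipe.x && bird.x < pipe.x + pipe.w;
  const inGap = bird.y > pipe.gapY && bird.y + bird.h < pipe.gapY + pipe.gapH;
  return inX && !inGap;              // overlapping horizontally but outside the gap = hit
}

function gameOver() { /* stop the loop, show the score */ }

function update() {
  bird.vy += GRAVITY;                // gravity pulls the bird down every frame
  bird.y += bird.vy;
  for (const pipe of pipes) {
    pipe.x -= 2;                     // pipes scroll left
    if (collides(bird, pipe)) gameOver();
  }
}

document.addEventListener('keydown', e => { if (e.code === 'Space') flap(); });
function loop() { update(); requestAnimationFrame(loop); }
loop();

Scoring then reduces to counting pipes whose right edge has passed the bird's x position, which is the same check the real implementation describes.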
You have launched your PHP website, but now have to take care of its security? PHP is a lightweight yet extremely powerful backend programming language. Nearly eight out of ten websites are powered by PHP, which makes it one of the most widely used languages for web development. It is popular because it is easy to code in and offers developer-friendly functions. Plenty of CMSs and frameworks are built on PHP, are maintained by well-known developers from all around the world, and have become a regular part of the community. WordPress is one such example. - When PHP applications are deployed on live servers, they can face hacking attempts and online attacks that leave site data vulnerable. How to build a secure application, while keeping all the central goals of the project intact, is one of the most discussed subjects in the community. - Despite their best efforts, developers often remain unaware of hidden loopholes that go unnoticed while building an application. These loopholes can seriously compromise the security of vital site data on any PHP/MySQL web hosting, leaving it exposed to hacking attempts. This article therefore collects some useful PHP security tips that you can apply sensibly to your projects. Using these small tips, you can make sure your application always stands up to security checks and never gets compromised by attacks. One can also choose to get PHP website development services to keep a website safe. XSS Or Cross-Site Scripting - Cross-Site Scripting is perhaps the riskiest attack, performed by injecting malicious code or scripts into the site. It can affect the core of your application, because an attacker can inject any sort of code without giving you a clue. The attack mostly targets sites that accept and submit user input. - In an XSS attack, the injected code replaces the original code of your site yet runs as if it were legitimate, disrupting site behaviour and often stealing data. Attackers bypass your application's access control, gaining access to your cookies, sessions, history, and other vital functions. - You can counter this attack by using htmlspecialchars() with ENT_QUOTES in your application code. With ENT_QUOTES, both single and double quotes are escaped, which rules out most chances of a cross-site scripting attack. CSRF Or Cross-Site Request Forgery - CSRF hands complete application control to attackers, letting them perform any unwanted action. With that control, attackers can carry out malicious activity by pushing infected code to your site, resulting in data theft, functional changes, and so on. The attack forces users' normal requests to be turned into altered, destructive ones, such as transferring funds unknowingly or deleting the entire database without any warning. - A CSRF attack can only start once you click a hidden malicious link sent by the attacker. This means that if you are alert enough to spot the infected hidden scripts, you can easily rule out a potential CSRF attack.
In the meantime, you can also use two defensive measures to strengthen your application's security: use GET requests only in your URLs, and make sure non-GET requests are generated only from your client-side code. - Session hijacking is an attack in which the attacker steals your session ID to gain access to the targeted account. Using that session ID, the attacker can validate your session by sending a request to the server, where the $_SESSION array confirms it is still active, all without your knowledge. It can be carried out through an XSS attack or by getting access to the storage where the session data is kept. - To prevent session hijacking, always tie your sessions to the client's actual IP address. This lets you invalidate sessions whenever an unknown violation occurs, immediately telling you that someone is trying to bypass your session to take control of the application. And remember never to expose session IDs under any circumstances, as they can later be used to compromise identity in another attack on your PHP application. Avoid SQL Injection Attacks - The database is one of the critical parts of an application, and it is the part most often targeted by attackers using SQL injection. It is a type of attack in which the attacker uses crafted URL parameters to gain access to the database. - The attack can also be performed through web form fields, where the attacker can alter data that you pass into queries. By changing those fields and queries, the attacker can take control of your database and perform unwanted manipulations, including deleting the entire application database. - To prevent SQL injection, it is advised to always use parameterized queries. These PDO queries properly substitute the arguments before running the SQL statement, effectively ruling out any chance of SQL injection. This practice not only helps you secure your SQL queries but also structures them for efficient processing. PHP application developers should take note of this or hire professionals for these services. Make Sure To Use SSL Certificates - To transmit data online with encryption in transit, always use SSL certificates in your applications. They enable the globally recognised standard protocol, Hypertext Transfer Protocol Secure (HTTPS), for sending information between servers safely. With an SSL certificate, your application gets a secure data-transfer path, which makes it nearly impossible for attackers to interfere with your servers. - All the major browsers, such as Google Chrome, Safari, Firefox and Opera, recommend using an SSL certificate, as it provides an encrypted protocol to transmit, receive and decrypt data over the web. PHP applications and websites are vulnerable to cyber-attacks, and people should hire professionals who provide PHP web development services. On WeblinkIndia.net one can find all the necessary details about PHP applications and their safety tips along with web development services.
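To make the tips above concrete, here is a small illustrative PHP snippet combining the main points: escaping output with htmlspecialchars() and ENT_QUOTES, a PDO prepared statement, a per-session CSRF token, and session-ID regeneration. The table, column and field names are invented for the example.

<?php
// XSS: escape user-supplied output with htmlspecialchars() and ENT_QUOTES
echo htmlspecialchars($_GET['comment'] ?? '', ENT_QUOTES, 'UTF-8');

// SQL injection: use PDO prepared (parameterized) statements
$pdo  = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'app_user', 'secret');
$stmt = $pdo->prepare('SELECT id, title FROM articles WHERE author_id = :author');
$stmt->execute([':author' => $_GET['author'] ?? 0]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

// CSRF: issue a random token per session and verify it on every state-changing request
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
if ($_SERVER['REQUEST_METHOD'] === 'POST'
    && !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'] ?? '')) {
    http_response_code(403);
    exit('Invalid CSRF token');
}

// Session hijacking: regenerate the session ID after login and note the client IP
session_regenerate_id(true);
$_SESSION['client_ip'] = $_SERVER['REMOTE_ADDR'];

None of this replaces HTTPS; the SSL certificate protects the data in transit, while these checks protect what the application does with it once it arrives.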
To be able to transfer money or value Peer to Peer (P2P) without any central authorization has been the dream of cypher punk since the 60s. However, the concept of decentralized digital money possessed a long unsolvable problem called The Byzantine Generals’ Problem. It questions the possibility of forming a consensus in a computer network. In 2008, an unidentified person using the pseudonym Satoshi Nakamoto published a paper “Bitcoin: A Peer-to-Peer Electronic Cash System”. Satoshi proposed a distributed ledger system encrypted by cryptographic and run automatically by algorithm. All the nodes (the computer connected to the network) will always get an updated ledger of all transactions in the network. New multiple transactions will be pooled together as a ‘block’. The algorithm will verify the block using a consensus mechanism called Proof of Work. The confirmed block is added in a linear & chronological order to the chain. The technology of these chained blocks will create an auditable and transparent record of transactions which later is known as the blockchain. Most of the cryptocurrency today uses the technology blockchain as their infrastructure. However, each blockchain has a different consensus. Consensus is the heart of decentralized blockchain because, without any central authority, the participants have to agree on rules on how to operate the blockchain. Throughout the years, people try to create better consensus algorithms. The Byzantine Generals’ Problem To understand the seriousness of consensus, we need to understand the Byzantine Generals’ Problem. Imagine a group of generals, commanding Byzantine armies, surrounds an enemy city and can only communicate by messenger. To conquer the city, the generals have to agree on a battle plan. However, one or more generals might be traitors and sabotage the message plan. How many traitorous generals can the army have to still be able function as one? The analogy depicts the problem with digital currency where there is no central authority to be the custodian of assets and no central authority to verify assets and transactions. In distributed ledgers, the different nodes act like generals. How many transactions can be malicious without the system having to refuse a transaction? Proof of Work Proof of Work means participants (nodes) must proofread works (using participants’ computing power to verify & add transactions to the public ledger) in order to earn Bitcoin as rewards. In permission less blockchain, the nodes do not know each other (just like the generals). How can Bitcoin blockchain maintain a decentralized network if there are traitors? In order to add new data entries (block) to the chain, nodes need to solve a hard computational challenge which consumes high computing power and processing time. There is a small chance any single node can generate the required proof-of-work without high cost of computing power. Thus, minimize the spamming attack. Every 10 minutes, a valid Proof-of-Work (PoW) is produced. If there are two blocks created at the same time, the one with the longest chain is accepted as valid. Proof of Work does not have any central authority, but systems assume that the honest nodes (the longest chain) control the majority of computing power. However, there are several problems regarding PoW: - Energy Consumption: Every year the mathematical problems continuously become more difficult to solve which require more amount of electricity. 
- Centralization: PoW creates an unfair system because those who own powerful, expensive hardware have a greater chance of winning the mining rewards.
- 51% attack: a group that controls more than 51% of the network's computing power can alter blocks for its own gain.

Proof of Stake
Proof of Stake was created in 2012 to address PoW's problems. Where PoW rewards miners for solving computational problems, PoS validators earn the transaction fees for creating the next block, based on how much they have 'staked'. Validators are people who lock up (stake) a certain amount of the blockchain's coins. The network randomly chooses a validator to propose a new block and selects several other validators to attest to the proposed block. The validators who proposed and attested the block earn the transaction fees. Validators who are offline or who make incorrect attestations receive a penalty (part of their stake is slashed), and validators who try to attack the network can lose their entire stake.

Earning transaction fees in PoS requires no specialized hardware and consumes far less energy. More ordinary people can therefore become validators in PoS than in PoW, which allows a more decentralised network. PoS also punishes nodes that do not follow the consensus mechanism, which reduces the possibility of a 51% attack.
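To make the "hard computational puzzle" behind Proof of Work concrete, here is a toy sketch in Python. It is not part of the original article and is far simpler than real Bitcoin mining: the difficulty is just a fixed number of leading hex zeros and the "block" is a plain string.

import hashlib

def proof_of_work(block_data, difficulty=4):
    """Find a nonce so that sha256(block_data + nonce) starts with
    `difficulty` hex zeros. Finding it is costly; checking it is cheap."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("block #1: Alice pays Bob 5 BTC")
print(nonce, digest)  # any other node can verify this result with a single hash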
PostgreSQL provides two main types of replication: Physical Streaming Replication and Logical Replication. In this blog post, we explore the details of Logical Replication in PostgreSQL. We will compare it with Physical Streaming Replication and discuss various aspects such as how it works, use cases, when it's useful, its limitations, and key points to keep in mind.

What is Logical Replication and how does it work?
Logical Replication in PostgreSQL is designed for replicating specific tables, rows, or columns between database servers. It uses a publisher-subscriber model where the publisher sends changes and the subscriber applies them. This is different from Physical Replication, which replicates the entire database at the block/page level using WAL records.

Key components in Logical Replication include:
- Logical Replication Worker: Manages replication tasks and checks worker state on the subscriber side. When a new subscription is created or enabled, it spawns a walsender process on the publisher side.
- Walsender: Decodes WAL contents and reassembles transaction changes, sending them to subscribers or discarding them if a transaction aborts.
- Decoder: Uses the standard PostgreSQL output plugin (pgoutput) for decoding. Note that, by default, a transaction is fully decoded on the publisher and only then sent to the subscriber as a whole; this behavior is controlled by the streaming option when creating a subscription. Check more: Streaming option for Subscription.
- Initial Synchronization Worker: Synchronizes initial/existing data from the publisher by creating a temporary replication slot and running a COPY command.
- Apply Worker: Applies the incremental changes on the subscriber side.

The replication ensures transactional consistency by applying changes in commit order on the subscriber side. Each subscription receives changes through one replication slot, and there can be multiple table synchronization workers to speed up the process, but only one per table. After the initial data copy, real-time changes are sent and applied.

When should Logical Replication be used instead of Physical Streaming Replication, and what distinguishes it from Streaming Replication?
Here are some key reasons why logical replication is needed in PostgreSQL:
- Logical replication allows for the replication of chosen tables or specific rows and columns, rather than replicating the entire database as physical replication does. This is particularly useful when only certain parts of the data need to be replicated. It is also essential for complying with legal regulations in different regions. For instance, you can replicate non-sensitive data to a subscriber outside the US, while keeping all sensitive data replicated only to subscribers located in the US.
- Unlike streaming replication, which requires the same major version, logical replication supports data replication across different major versions of PostgreSQL. This is beneficial for executing major version upgrades with minimal downtime.
- Logical replication supports the real-time consolidation of data from various sources into a single, centralized reporting or analytical database.
- Subscribers in logical replication setups can perform write operations, unlike streaming replication where the replica is read-only. It also doesn't require the same system configuration on the publisher and subscriber.
- Logical replication in PostgreSQL can be used to set up a bi-directional replication system where each node can accept write operations and replicate these changes to other nodes. However, in such a scenario, it's essential to prepare for write-level conflicts and avoid circular replication. From PostgreSQL 16 onwards, a new ORIGIN option has been added to the subscription settings. It tells the publisher to send only changes that have no replication origin (i.e. only writes performed on the publisher itself) or to send all changes, which includes both local changes and those replicated from other sources.
- Logical replication utilizes WAL data but optimizes it by filtering and transmitting only the required data. This leads to reduced bandwidth and storage needs compared to physical replication, which replicates all WAL data.

What are the limitations of Logical Replication?
- Tables being replicated logically must have a primary key or a replica identity set. We will discuss this further below.
- Logical replication does not replicate DDL changes. For instance, changes like index creation, tablespace alterations, vacuum, or altering the data type of a column are not replicated.
- Logical replication is restricted to table data. It does not replicate other database objects like roles, sequences, or schema changes.
- Logical replication does not resolve conflicts that may arise from concurrent writes on the primary and the replica. Conflict management has to be handled externally.
- Logical replication can introduce additional load on the primary database because it needs to transform WAL records into logical change records, which can be resource-intensive.
- Subscriber downtime can lead to increased disk-space usage on the primary server. The primary uses replication slots to keep subscribers in sync, which means it must retain WAL until it is confirmed to have been received by all subscribers.

What is Replica Identity?
For logical replication of UPDATE and DELETE operations in PostgreSQL, identifying the correct rows on the subscriber side requires one of the following:
- Primary Key (Default Replica Identity): When updating/deleting rows with a primary key, the system publishes the old primary key values and all new column values to the WAL on the publisher side, which are then sent and applied to the subscriber.
- Unique Index with Not Null Columns (Replica Identity Index): Updates/deletes on tables with a unique index result in the publication of the old index values (if the indexed column is updated) and all new column values to the WAL, which are then transmitted to and applied on the subscriber.
- All Columns (Full Replica Identity): This method treats all columns as a single key, publishing both old and new values of all columns to the WAL. This approach can lead to excessive logging, increased data transfer, and unnecessary disk I/O.

How to add or remove columns in tables that are involved in logical replication?
When implementing schema or DDL changes in a system using logical replication, the order in which these changes are applied is crucial. For adding columns, start by making the change on the subscriber side and then proceed to the publisher. Conversely, when dropping columns, remove them from the publisher first, followed by the subscribers. Not following this sequence can stop logical replication, which then requires manual intervention.

How to speed up the initial data syncing process on the subscriber side?
First, understand the following GUC parameters (these apply on the subscriber side only):
- max_logical_replication_workers: specifies the maximum number of logical replication workers. This includes both apply workers (on the subscriber side) and table synchronization workers.
- max_sync_workers_per_subscription: increasing max_sync_workers_per_subscription only affects the number of tables that are synchronized in parallel, not the number of workers per table.

- To enhance the initial synchronization speed of tables in logical replication, increase the values of max_logical_replication_workers and max_sync_workers_per_subscription on the subscriber side. Keep in mind that max_logical_replication_workers should not exceed max_worker_processes, and max_sync_workers_per_subscription should be less than or equal to max_logical_replication_workers.
- If dealing with large tables, consider dividing your tables, for example putting large tables in one publication and small tables in another.
- If dealing with large indexes, consider removing them during the initial sync and then recreating them using the CREATE INDEX CONCURRENTLY command to avoid blocking reads/writes.
- Always monitor disk and CPU usage on the subscriber side to ensure that there is no performance issue.

PostgreSQL 16 has introduced several enhancements to its logical replication capabilities. One of the key features is the ability to copy initial data using a binary format, which marks a significant improvement over the previous text format. See the binary format option of the COPY command: https://www.postgresql.org/docs/16/sql-copy.html

How to find tables with primary keys?

select tab.table_schema,
       tab.table_name
from information_schema.tables tab
left join information_schema.table_constraints tco
       on tab.table_schema = tco.table_schema
      and tab.table_name = tco.table_name
      and tco.constraint_type = 'PRIMARY KEY'
where tab.table_type = 'BASE TABLE'
  and tab.table_schema in (ADD SCHEMA NAMES HERE)
  and tco.constraint_name is not null
order by table_schema, table_name;

What factors lead to Logical Replication lag?
- Replication lag can occur when data transmission between the publisher and the subscriber is slowed by unstable network connections. This is especially true in environments where network reliability is an issue.
- If hardware resources such as CPU, memory, or disk I/O are insufficient on either the publisher or the subscriber, it can negatively affect the efficiency of the replication process.
- Large transactions on the publisher can also cause replication delay, because transactions are applied in commit order. In such scenarios, smaller committed transactions may end up lagging behind.

How to monitor Logical Replication?
Run the following query on the publisher side:

SELECT slot_name,
       active,
       confirmed_flush_lsn,
       pg_current_wal_lsn(),
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))         AS retained_walsize,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS subscriber_lag
FROM pg_replication_slots;

 slot_name | active | confirmed_flush_lsn | pg_current_wal_lsn | retained_walsize | subscriber_lag
-----------+--------+---------------------+--------------------+------------------+----------------
 mart_sub  | t      | 0/DC29108           | 0/DC29108          | 56 bytes         | 0 bytes

slot_name is the name of the subscriber. active is the state of logical replication; 't' means it is running without errors. confirmed_flush_lsn is the WAL LSN record replayed on the subscriber side.
pg_current_wal_lsn is the current WAL record number on the publisher. retained_walsize is the size of the WAL retained by the publisher for the slot; the subscriber will restart from the restart_lsn point after a disconnection. subscriber_lag is the overall replication delay between publisher and subscriber.

Make sure that the active column shows 't'. If it shows 'f', check for errors in the log file on the publisher side.

Another query to run on the publisher side is:

select pid,
       application_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn))   sending_lag,
       pg_size_pretty(pg_wal_lsn_diff(sent_lsn, flush_lsn))              receiving_lag,
       pg_size_pretty(pg_wal_lsn_diff(flush_lsn, replay_lsn))            replaying_lag,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) total_lag
from pg_stat_replication;

application_name shows the name of the subscriber. sending_lag could indicate heavy load on the primary. receiving_lag could indicate network issues or a replica under heavy load. replaying_lag could indicate that the replica is under heavy load. total_lag is the overall replication delay between publisher and subscriber.

How to add new tables to existing logical replication?
Creating a publication using the FOR ALL TABLES option ensures that any tables added to the database in the future are automatically included in the publication. Similarly, when a publication is created with the FOR TABLES IN SCHEMA option, it automatically incorporates any future tables created within that specific schema. In cases where neither of these options is used, you must follow these steps:
- Include the table in the publication:
  ALTER PUBLICATION publication_name ADD TABLE tablename;
- Update the subscription to incorporate data from the newly added tables. This step is necessary in all scenarios:
  ALTER SUBSCRIPTION subscription_name REFRESH PUBLICATION;
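To tie the pieces above together, here is a minimal end-to-end sketch. The database, publication, subscription, and table names (salesdb, mart_pub, mart_sub, orders, invoices) are placeholders rather than names from this post; only the statements themselves are standard PostgreSQL.

-- On the publisher (requires wal_level = logical), e.g. database salesdb:
CREATE TABLE orders (id bigint PRIMARY KEY, amount numeric, created_at timestamptz);
CREATE PUBLICATION mart_pub FOR TABLE orders;

-- If a table has no primary key, give it a replica identity so that
-- UPDATE/DELETE can be replicated (FULL is the heavy-handed fallback):
-- ALTER TABLE orders REPLICA IDENTITY FULL;

-- On the subscriber (the same table definition must already exist there):
CREATE TABLE orders (id bigint PRIMARY KEY, amount numeric, created_at timestamptz);
CREATE SUBSCRIPTION mart_sub
    CONNECTION 'host=publisher.example.com dbname=salesdb user=repl password=secret'
    PUBLICATION mart_pub;

-- Later, after adding a new table on the publisher:
-- ALTER PUBLICATION mart_pub ADD TABLE invoices;
-- ALTER SUBSCRIPTION mart_sub REFRESH PUBLICATION;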
Matthew Hurst (@ Data Mining) recently posted about the concept of the “entity web” to describe companies involved in web-based information retrieval that are evolving into more than search engines for retrieving textual documents. Hurst speculates about the corporate skill set that will be needed to deliver on this concept, which he terms the three competencies: understanding (1) the Web (e.g., HTML, CSS, AJAX, and other web technologies); (2) the world (i.e., the real-world relationships between data points, such as that a song has an artist); and (3) Web presence (e.g., how entities appear and interact on the web). Of course, competencies (2) and (3) include the ability to record and use this knowledge in some structured model.

I characterize what Hurst is discussing as integrating semantic data into existing textual search services. I also think the term “entity” is a bit limited because it implies the data is focused only on the actors (people, organizations, websites, document sources, etc.) when the user's information needs may not be focused on entities at all (e.g., asking a system how photosynthesis functions or the answer to 1 + 1). Whatever you label it, Hurst is right about the direction in which we seem to be headed, and when you think about how the traditional legal information industry measures up on these competencies, things do not look very good.

the Web.— Hurst comments that this is an area that the broad market players (Google, Facebook, etc.) have largely mastered (but have room to improve). On the legal side, I would say that large legal publishers have suffered from many of the same problems as other older companies when it comes to embracing web technologies. Namely, they tend to lag too far behind in adopting the newest web technologies. They also have a hard time building institutional knowledge in this area because they often outsource this type of work to vendors and let some departments have too much influence (e.g., marketing and communication, public relations). Overall, I would say the legal information industry is obviously not as competent as the big tech companies in this area, but it generally does well at deploying established web technologies and is on par with other older companies when it comes to adopting the newest ones.

the World.— This is probably the area in which the traditional legal information industry is the most competent, but even here I think there are many reasons to worry about the future. There is a high degree of competency in this area because traditional legal publishers have spent a long time developing institutional knowledge related to all the intricacies of government data and distribution. Other than perhaps law librarians, there are very few places that foster this kind of knowledge. I think this institutional knowledge is, however, at risk because many legal publishers have increasingly outsourced or automated the very functions which gave rise to this knowledge-building.

Web Presence.— This is probably the area in which traditional legal publishers are the weakest. In the legal field, a complete understanding of web presence would involve understanding how all the various actors interact on the web (e.g., legislators, courts, state and federal agencies, lawyers, etc.).
Although traditional legal publishers are most familiar with the official entities involved in issuing documents (legislatures, courts, etc.), they are much less familiar with the entities that discuss or debate legal content (blogs/blawgs, social networking sites, law firms, political and legal discussions by non-professionals, etc.). A future entity-web information retrieval system might need to track these sources to know that “Obamacare” refers to the Affordable Care Act, or that while a particular judge has not ruled on an issue, his wife belongs to a Facebook group opposing it.
add parallelproj cuda to build

Checklist
[ ] Used a personal fork of the feedstock to propose changes
[ ] Bumped the build number (if the version is unchanged)
[ ] Reset the build number to 0 (if the version changed)
[ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
[ ] Ensured the license file is being packaged.

Error: Failed to render jinja template in /home/conda/recipe_root/meta.yaml: 'cuda_compiler_version' is undefined
No idea why. I think I will have to leave this to you for a bit. Ping me if you want my input.

> Error: Failed to render jinja template in /home/conda/recipe_root/meta.yaml: 'cuda_compiler_version' is undefined
> No idea why. I think I will have to leave this to you for a bit. Ping me if you want my input.

I think this is because we have to define it in the .ci_support/*.yaml files that configure the build environment. For parallelproj, those yaml files are auto-generated by conda-smithy. An example for a linux cuda config is here and an osx non-cuda config is here. The README says that we have to create/change conda-smithy's input recipe/conda_build_config.yaml and re-render the recipe, rather than editing these files directly.

Should I re-test with the conda_build_config.yaml from parallelproj?

@conda-forge-admin please rerender

@conda-forge-admin please rerender

Thanks all! Indeed building with cuda when appropriate, see e.g. a Win job:
- Found parallelproj 1.2.14 (will use its CUDA support)
-- Parallelproj projector support enabled.
Non-cuda jobs work. test_OSMAPOSL_parallelproj fails. Any idea why that is? Moreover, test_blocks_on_cylindrical_projectors also fails. These are the only 2 tests that use parallelproj. In the logs I see the following in Linux cuda jobs:
cudaMalloc returned error no CUDA-capable device is detected (code 100), line(57)
and for Windows:
cudaMalloc returned error CUDA driver version is insufficient for CUDA runtime version (code 35), line(57)
Possibly we need a check on cross-compilation like you have. However, I don't think that's it, as the parallelproj-feedstock is running the ctests for the one job I checked.

We don't have CI jobs that have GPUs available currently. So it is not that surprising that running anything that requires a GPU would have issues. We are looking into how to improve this, but it is a longer term project. Really what we try to do on CI is make sure all the build tools are available so we can build the executables. So probably the best thing to do is skip these tests or at least check whether a GPU is present before trying to run them. In the latter case one could download the package and test locally.

That makes sense to me. However, I don't understand how the parallelproj feedstock succeeds its tests. We'll leave @gschramm to comment on that. Is there an easy way we can make the build scripts aware of whether this is a cuda build or not? Also, is it possible to download the build output of a PR (such that someone with a GPU can run the test)?

> That makes sense to me. However, I don't understand how the parallelproj feedstock succeeds its tests. We'll leave @gschramm to comment on that. Is there an easy way we can make the build scripts aware of whether this is a cuda build or not? Also, is it possible to download the build output of a PR (such that someone with a GPU can run the test)?

@KrisThielemans in the CMakeLists.txt of the parallelproj cuda test, I detect whether a physical GPU is present or not.
If there is none, I just skip the test. I mentioned this briefly in your issue on parallelproj.

Makes sense to me. I guess we need to modify this line here and add more -E. Can you tell me which tests to exclude? I guess I'd do
if [[ ${cuda_compiler_version:-None} != "None" ]]; then
  CTEST_EXCLUDES="test_OSMAPOSL_parallelproj test_blocks_on_cylindrical_projectors"
fi
and then use it below. (It'd be more future proof to exclude *_parallelproj I guess, but that'll be fun with wildcards and escaping.) And similar in bld.bat. Thanks!

if [[ ${cuda_compiler_version:-None} != "None" ]]; then
  EXTRA_CTEST_EXCLUDES="test_OSMAPOSL_parallelproj test_blocks_on_cylindrical_projectors"
  echo "Excluding GPU run-time tests $EXTRA_CTEST_EXCLUDES"
fi
is nicer I guess.

Still struggling to get the -E syntax of ctest correct. I will run some local tests.

Is there a reason why we are not using --parallel in cmake --build to speed up the builds?

> Is there a reason why we are not using --parallel in cmake --build to speed up the builds?

Not sure what conda-forge recommends. On GitHub Actions, you only get 2 cores anyway. I tend to find that --parallel without specifying a number of cores goes crazy as it overloads the system (when using make; it is probably alright for ninja).

@KrisThielemans some progress :) some of the linux cuda builds work. but:
- in the linux cuda 10.2 builds we get linking errors for test_datetime and test_radionuclide
- all windows cuda builds fail when installing the project - see e.g. here. no clue what is causing this. maybe @carterbox or @jakirkham can help

> in the linux cuda 10.2 builds we get linking errors for test_datetime and test_radionuclide

I don't know really. These do not depend on much, certainly not on parallelproj nor CUDA. I wouldn't know why they'd link in the non-CUDA build and not here, unless it's a bug in the linker (e.g. too many files to link with). I wonder if it'll disappear when re-running the job...

> all windows cuda builds fail when installing the project - see e.g. here. no clue what is causing this.

This line says
-- Installing: D:/bld/stir_1677093359775/_h_env/Library/bin/dumpSiemensDicomInfo.sh
[0/1] Install the project...
'parallelproj' is not recognized as an internal or external command, operable program or batch file.
It seems to vary between jobs where this line appears, indicating it's doing something in parallel (even though we're not asking it). I don't know where this comes from.

> Is there a reason why we are not using --parallel in cmake --build to speed up the builds?
> Not sure what conda-forge recommends. On GitHub Actions, you only get 2 cores anyway. I tend to find that --parallel without specifying a number of cores goes crazy as it overloads the system (when using make; it is probably alright for ninja).

Conda build defines the CPU_COUNT environment variable. Use it to prevent oversubscription of processors.

@conda-forge-admin please rerender

Windows builds all fine now. One Linux job failed due to unrelated https://github.com/UCL/STIR/issues/1164. This seems to occur very infrequently, so at the moment I'd just go ahead here without extra mods. Hopefully it won't break the next runs... Only remains disabling the gcc 7 builds.

> in the linux cuda 10.2 builds we get linking errors for test_datetime and test_radionuclide
> I don't know really. These do not depend on much, certainly not on parallelproj nor CUDA. I wouldn't know why they'd link in the non-CUDA build and not here, unless it's a bug in the linker (e.g. too many files to link with).
All GNU 7.5.0 toolchain + CUDA jobs are affected by this. I vote for excluding that from our build list, but don't know how to do that. @carterbox @jakirkham what do you think?

I guess that should be possible by including a conda_build_config.yaml next to meta.yaml, see here. The only thing I don't see is how to exclude compiler versions instead of explicitly including them. Since the c_compiler version is determined by the cuda_compiler version, I'd say it would make sense to simply exclude the 10.2 cuda version.

@conda-forge-admin, please rerender

Nice and simple! I'm optimistic...

> Nice and simple! I'm optimistic...

If the current builds work, should we try --parallel ${CPU_COUNT} before merging?

OK to skip CUDA 10 if you no longer support it. AFAIK, 'gcc7' is not a valid selector. https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#preprocessing-selectors

> OK to skip CUDA 10 if you no longer support it. AFAIK, 'gcc7' is not a valid selector. https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#preprocessing-selectors

I tried
skip: True  # [linux and c_compiler_version == "7"]
but that did not exclude the cuda10.2 builds that use gcc7.

> If the current builds work, should we try --parallel ${CPU_COUNT} before merging?

I'd rather not. Yes, it makes the build faster, which is handy when doing PRs like this, but it also makes understanding problems usually a lot harder. As this feedstock should only run occasionally, I think we can live with a bit of delay. Maybe we'll change it later...

The job failure is due to https://github.com/UCL/STIR/issues/1164 again. sigh. I'll merge this and see what happens. If necessary I can create a bug-fix version of STIR.

oops. pressed wrong button!

> Is there a reason why we are not using --parallel in cmake --build to speed up the builds?
> Not sure what conda-forge recommends. On GitHub Actions, you only get 2 cores anyway. I tend to find that --parallel without specifying a number of cores goes crazy as it overloads the system (when using make; it is probably alright for ninja).
> Conda build defines the CPU_COUNT environment variable. Use it to prevent oversubscription of processors.

@gschramm I forgot that Ninja does parallel builds (optimised for current load etc) automatically, so we don't need this flag for faster builds (and of course our build-log is confusing).
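For reference (not part of the original thread), one way the GPU-test exclusion discussed above could look in build.sh, assuming the script invokes ctest directly; the variable name and regex here are illustrative only:

# Skip GPU run-time tests when this is a CUDA-enabled build,
# since the conda-forge CI machines have no physical GPU.
if [[ ${cuda_compiler_version:-None} != "None" ]]; then
    # ctest -E takes a regular expression; alternation excludes both tests.
    EXTRA_CTEST_EXCLUDES="test_OSMAPOSL_parallelproj|test_blocks_on_cylindrical_projectors"
    echo "Excluding GPU run-time tests: $EXTRA_CTEST_EXCLUDES"
    ctest --output-on-failure -E "$EXTRA_CTEST_EXCLUDES"
else
    ctest --output-on-failure
fi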
Announcing Missions 2.0: SCN Gamification Redesign

This post is to inform you about changes in our mission design, effective today. As you know, we have had extensive game mechanics on SCN for almost a year. When we launched in April 2013 the community liked the “new experience” and enjoyed completing challenges and earning badges. These SCN badges represent interest, activity and topic expertise, and they can even help give you a sense of people’s personalities. Some of them are serious while others are just for fun – although it’s true that we have more of the serious badges, being a professional community. Badges can be a way to say “congrats” or “thank you”, but most of the time they provide motivation to do certain things: contribute original, valuable content and demonstrate behaviors beneficial to the community. You may want to refer to the list of SCN missions for a better understanding of the details that follow.

The changes are based on 10 months of monitoring, observing and listening to community feedback. When we launched last year we expected to continuously improve our initial design. Iteration is a key aspect of gamification. The goals of these changes are the following:
- Continue to reward (and hopefully encourage) quality participation as judged by the community
- Encourage behaviors that are beneficial to the community, such as answering questions correctly, participating in discussions and writing thought-provoking blogs
- Discourage cheating such as plagiarism and point cheating

Please refer to the SCN Rules of Engagement to understand appropriate community behaviors.

We are removing the prerequisites of some of the onboarding missions and changing the points awarded for certain missions. We are also making small changes here and there, such as removing the repeatability of the “Pay It Forward” mission. We want to make people accountable for producing original, high-quality content. We also want to discourage any point cheating or copyright infringement. Therefore we are introducing penalty points that result in overall point reductions when blogs and documents are rejected as part of an abuse report (submitted via the “alert moderator” link). Removing points from members who repeatedly ignore moderators’ advice should encourage them to try to improve. Additional measures will be taken for anyone having more than 6 pieces of content (blogs and/or documents) rejected.

Blog, document, and discussion mission adjustments:
The feedback we received was that the quality requirements were not high enough. We are changing that for the progression missions “I Blogged!” through “Super Storyteller” and “I shared knowledge” through “Super Tutor”. We also decided to make certain descriptions more vague so that people spend less time pursuing points and badges, and more time engaging with the community and providing helpful knowledge on SCN. Super Answer Hero, Super Storyteller and Super Tutor are now hidden missions. The badge is more of a “reward” badge than a motivation badge, meaning that members should not be worrying about how difficult it would be to achieve. Also, and this will interest a lot of the discussion forum users, we are introducing a new mission beyond “Super Answer Hero” which will be a surprise for now.
Let’s see who gets it first 😉

We felt it was necessary to add a mission to the progression to harmonize our point economy across all our asset types and to recognize the amount of effort needed to answer a lot of questions, sometimes without getting feedback from the community. By doing so, we give discussion forum contributors the opportunity to earn just as many points as a blogger or document contributor would.

We hope you find that these changes provide a better experience in the community. As always, we will observe and make adjustments as needed. We will listen to your feedback and measure the impact of these changes in terms of mission completion, quality of content, and overall community satisfaction.
Re: How to retrieve next record?
- Date: Thu, 11 Dec 2014 23:44:10 +0100 (CET)
- From: Johan De Meersman <vegivamp@xxxxxxxxx>
- Subject: Re: How to retrieve next record?

----- Original Message -----
> From: "Wm Mussatto" <mussatto@xxxxxxx>
> Subject: Re: How to retrieve next record?
> Related: what is the form of the primary key? If it's numeric, something like
> $sDBQuery1 = "SELECT * FROM kentekenlogtest WHERE kenteken < '$sActueelkenteken' limit 1"
> might work.

No, kenteken is Dutch for license plate, so not numeric, although greater/less comparisons do work on strings, too. My guess, from the sample queries, would be that this is processing for some form of automated number plate recognition system :-)

Now, Hans, besides pointing you in the right direction, I'm going to be whining a bit about some pet peeves of mine. I'm waiting for the start of a midnight intervention, anyway :-p

That query, as pointed out already, is only asking for a single kenteken. I'll stick to the Dutch column names for clarity for other readers, btw - although one of the aforementioned pet peeves is non-English variable names. Makes code an absolute bitch to maintain for someone who doesn't speak that language. That's from experience; I've had to debug crap in French and Spanish, among other languages.

Your code (or, more precisely, the DB driver) is only going to make those records available to your program that you have explicitly asked for, so that query will only ever make the one record available. You will need to build a query that returns all the records you want to access, or, alternatively, make repeated queries. The former is more efficient by far; the latter is useful if the next set depends on what you find in the previous set.

Another pet peeve: don't use select *. Explicitly select the columns you're looking for. It a) saves network bandwidth; b) guards against later table structure changes; c) potentially allows the use of covering indexes and d) reduces the server memory footprint required for sorting etc.

Once you've built the correct query, you'll need to have a cursor to loop through it. Your DB driver will probably refer to it as a resultset or a similar denomination. The typical buildup for a database connection (bar advanced abstraction layers) is db_connect (returns a database handle); dbh->execute(sql) (returns a resultset handle); loop using rs->fetch_next (probably returns an array or hash with the data). See your language's db class documentation for the gritty details there. You may also find a fetch_all or similar which returns you the entire resultset in a single call. Can be useful, but remember that that means allocating memory clientside for the entire dataset in one go, instead of reusing the same variables row for row.

A further pet peeve: don't just dump variables into your sql string, use bind variables. The "easy" method opens you up for little Bobby Tables. Google that, if you're unfamiliar with it. Then weep in despair :-p

The idea of bind variables is fairly simple: you stick placeholders in your sql string where you would otherwise use string interpolation; then tell the statement handle the variables that should go in there. The database is actually aware of this method, so there is no chance that the variables might get interpreted as part of the SQL - it KNOWS they're variables, not keywords.

Additionally, if you're going to be executing the same statement repeatedly, use prepared statements instead of regular executes.
On MySQL the benefit is marginal (but still noticeable); on other databases it might be considerable - sometimes orders of magnitude faster. Oracle, for instance, has an execution plan cache; so if you use prepared statements, it can skip the whole parse - analyze - pick plan bit and skip straight to the next execution round with the new values you provided. On fast statements (like primary key lookups) that can sometimes save 80% and more of the roundtrip time.

The abovementioned where-clause with limit is probably also going to work; but then you'll need to re-query time after time; and limit does not always work quite intuitively - although in this simple case, it does. If you *must* re-query time after time, do a speed comparison with and without prepared statements; otherwise do go for the fetch_next loop.

Now, you've got documentation to read, I believe. Off you go :-) Unhappiness is discouraged and will be corrected with kitten pictures.

MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
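[Illustrative addendum, not part of the original thread.] A minimal PHP/PDO sketch of the pattern described above - one bound placeholder instead of string interpolation, then a fetch loop over the resultset. The connection details are assumptions; only $sActueelkenteken and the kentekenlogtest table come from the thread:

<?php
// Hypothetical connection; adjust host/db/credentials to your setup.
$dbh = new PDO('mysql:host=localhost;dbname=anpr', 'anpr_user', 'secret');

// Bind variable instead of interpolation: the driver knows this is a value,
// never SQL, so little Bobby Tables stays out.
$sth = $dbh->prepare(
    'SELECT kenteken FROM kentekenlogtest WHERE kenteken < :kenteken ORDER BY kenteken DESC'
);
$sth->execute([':kenteken' => $sActueelkenteken]);

// Loop over the resultset row by row instead of fetching everything at once.
while ($row = $sth->fetch(PDO::FETCH_ASSOC)) {
    echo $row['kenteken'], "\n";
}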