Document
stringlengths
395
24.5k
Source
stringclasses
6 values
Add Text to Speech Support Regarding issue #354 I think I was able to implement an initial version of TTS support. I've added an additional button in the prompt menu that allows users to select if they want to enable this feature. Whenever we ask a question, the response is read using the TTS api. I am buffering the words received from LLM stream over time and once the threshold is reached, I pass it to the TTS method for audio generation. I've also added a useAudioPlayer hook that manages the playback of audio chunks by making sure they play in order. I feel like its still very rough and lots of optimizations/changes need to be done (variable naming, code organization, performance, UI prefs) before I move on with other features like download. One would be the ability to choose different voices, but I am not sure where to place that UI pref because there's hardly any real estate. I tried this, it works! Thank you! Detection shouldn't hardcode official openai..Just use the models endpoint and look for tts* models 2)I think we need to have submenus in message options. Eg I would like TTS to be exposed as "Speak" with a submenu of voices. Would also like to have autospeak and menu items. Basically there are 2 separate cases: a) hands-free chat using mic and tts to respond and b) using chatcraft to speak messages for producing downloadable audio I think we should move 'retry with' to a submenu, and also add "speak with" submenu. Thus this depends on https://github.com/tarasglek/chatcraft.org/issues/359 to proceed In our meeting today, we agreed to try and extract a version of this PR that we can land for 1.1 (next Friday) and iterate in future Issues/PRs as we understand things better. @rjwignar I test the mobile before using the safari on my iphone, I think any browser should be ok, just navigate to the deployed url, I find sometimes the url will show by bot, but if not you can find that info in the Details in the All checks I also noticed that on desktop, using speech input requires users to click and hold the microphone button, releasing the button when they're finished their request. Would it be better to have the user click the button once to start recording, then to click the button again when they're done recording? I also wonder how the speech input button would behave on a smartphone. Would it behave the same way (user must press + hold the microphone button while talking)? Is there a way we can test this build on our smartphones? Same on mobile. It's glitchy on mobile. I would prefer to change behavior to click once to record and once to stop as you suggest as that will likely be less glitchy. I've played around with this and I'm having lots of fun with it. I like how both TTS responses and TTS input are both supported. I can't offer suggestions on the code but the naming conventions used for methods and variables looks fine. I did notice a couple things on desktop: 1. The `speech input` button/microphone icon doesn't show a tooltip when hovered. Is this something that should be added? 2. I also noticed that on desktop, using speech input requires users to click and hold the microphone button, releasing the button when they're finished their request. Would it be better to have the user click the button once to start recording, then to click the button again when they're done recording? I also wonder how the speech input button would behave on a smartphone. Would it behave the same way (user must press + hold the microphone button while talking)? Is there a way we can test this build on our smartphones? @rjwignar Thanks for the review! I agree with you on that there should be a tooltip for microphone button as currently it does not have any descriptive text. And you're right, the microphone part is almost unusable on mobile, but this pull request only focuses on text to speech support (#354). I saw that @tarasglek already mentioned it before in this issue #336 that focuses on better mobile interface. We can probably have a separate standalone issue for fixing the mic behaviour. @rjwignar I have tested the mobile version before using the safari on my iphone, I think any browser should be ok, just navigate to the deployed url, I find sometimes the url will show by bot, but if not you can find that info in the Details in the All checks I just tried this on my phone and I didn't hear any sound, I tap the TTS enabled but seems not works. @mingming-ma Thanks for reviewing! I also tested using your link on my phone and it didn't work for me either. But I also tested using my codespace on phone and its working with that. https://symmetrical-space-doodle-p6wqvr79qrj26gv9-5173.app.github.dev/c/MWt4JUNQcG-d3jyhHzGtS Not sure why it didn't work for the other one 🤔 I also noticed that on desktop, using speech input requires users to click and hold the microphone button, releasing the button when they're finished their request. Would it be better to have the user click the button once to start recording, then to click the button again when they're done recording? I also wonder how the speech input button would behave on a smartphone. Would it behave the same way (user must press + hold the microphone button while talking)? Is there a way we can test this build on our smartphones? Same on mobile. It's glitchy on mobile. I would prefer to change behavior to click once to record and once to stop as you suggest as that will likely be less glitchy. Let's not deal too much with general audio UI in this issue if possible, and move that to an external issue. I'll file an issue on this change. Let's not deal too much with general audio UI in this issue if possible, and move that to an external issue. I'll file an issue on this change. #368 In that case, the current changes look good to me. I'm very satisfied with the TTS button and functionality. In our meeting today [Wednesday], we agreed to try and extract a version of this PR that we can land for 1.1 (next Friday) and iterate in future Issues/PRs as we understand things better. Does anything in this PR have to be changed before we can merge it? Just modified my buffering algorithm in the way I've been thinking about. The following pseudocode is a summary of what's happening. This is following the sentence based approach that @tarasglek suggested. I tried this, and it's much faster than previous way in many cases. Please let me know if this works! I tried this, it works! Thank you! 1. Detection shouldn't hardcode official openai..Just use the models endpoint and look for tts* models 2)I think we need to have submenus in message options. Eg I would like TTS to be exposed as "Speak" with a submenu of voices. Would also like to have autospeak and menu items. Basically there are 2 separate cases: a) hands-free chat using mic and tts to respond and b) using chatcraft to speak messages for producing downloadable audio Also @tarasglek, I couldn't understand your first point. Is it regarding this function? Or the way I have hardcoded tts-1 here @Amnish04 a few thoughts: We have an nlp tokenizer in our tree already, see https://github.com/tarasglek/chatcraft.org/blob/main/src/lib/summarize.ts. You can use it to get your sentences, words, etc. See docs at https://github.com/spencermountain/compromise. @tarasglek means, "Don't use Is Official OpenAI" as your means for determining if TTS is supported, since other providers can/will also provide the same or similar models. It would be better to see if the list of models includes tts- or something. @rjwignar thanks for your review and push to get this landed. I'm loving seeing this evolve so quickly. Great work. @humphd Just pushed some changes as requested and left comments on some. Make sure all the follow-ups we need to do are filed before you land. Also tts seems to on always for me. Icons change, but the damn thing wont shut up Need to have audio abort when tts is in progress and one flips icon off @Amnish04 would you be willing to file those for follow-up so we don't lose?
GITHUB_ARCHIVE
As this was raised recently again at https://github.com/fastai/fastai/issues/1739, I will share my thoughts on this subject matter. History is usually a good predictor of the future. Therefore given that Sylvain and Jeremy are always at the cutting edge and often are a bit ahead of it, it’s very unlikely that the API will ever get stable. So unless you want to stifle their creativity you should just accept that. The core API may get more stable, but some parts of it (think domain-specific API) will always be in the flux. DL is a baby and as it grows fast it constantly needs new clothes if that helps as a metaphor from the physical world. Perhaps in 10-20 years it’d be easy to come up with stable APIs. Currently the only solution to not need to rewrite your application code every few weeks is to pick a version and stick to it. Of course, this is not a great solution, since it prevents you from getting bug fix updates w/o breaking your code. The better solution would require several knowledgeable people to become stable branch maintainers who will backport bug-fixes from the dev branch, but will not change the API. This is how it’s done in all big projects. There is no need to reinvent the wheel. That way Jeremy and Sylvain can continue going full speed at being creative and innovative, and someone else will be sorting out how to keep the stable branch great and stable at the same time. So that the ideal situation is where the lessons always rely on the unstable master branch and the production code on the stable branches. Documentation also has a problem right now, since the docs always match the git master branch and not any previously released version, so it’s hard to rely on such docs if you are forced to use some older version (or even the last released version). So the docs will also need to be branched to match the code and the website updated to reflect that. One more thing will be github Issues and PRs - it’ll be a mess trying to use the same github repo for dev and stable branches. So most likely either these will need to become forks and not share the same Issue/PR entry points, or perhaps these can somehow be setup to make it clear to the submitters that this way is to a stable branch 1.2.x and this one is for cutting edge master 1.3.x. I’m sure we can copy what other projects do. Bottom line is that if you want stability, this project needs a bigger team of knowledgeable dedicated developers/maintainers. There is no way Jeremy and Sylvain will be able to be able to give you the best in this domain and support older versions at the same time. I’m sure they would have liked to be able to do that too, but there is that problem of having only that many hours in a day. These are of course my thoughts alone. It’s quite possible that Jeremy and Sylvain will disagree with everything or parts of what I have shared.
OPCFW_CODE
RFID, RC522, SPI module connected to Bolt 18F2550, with visualization of data in mobile smartphone. Author: Moises Melendez Reyes RFID-SPI SYSTEM WITH DISPLAY OF DATA ON MOBILE PHONE RFID-SPI SYSTEM WITH DISPLAY OF DATA ON LAPTOP WITH BLUETOOTH LINK RFID-SPI SYSTEM WITH DISPLAY OF DATA ON LAPTOP WITH USB-SERIAL CABLE The so called Syncronous Periferal Interface (SPI) protocol has become a popular technique to transmit information between 2 microcontroller devices at a a high speed, and short distances. On the other hand, with the emergence of smartphones, which already have their own operating system, it is possible to develop applications using Bluetooth for the wireless links between mobile devices and microcontrollers. In this project, a Samsung Galaxy mobile is used as a receiver and display of the RFID data. The connection between an MF522 RFID read-write device, (or RC522 RFID), with operating frequency of 13.56 MHz, SPI interface, and a Bolt 18F2550 system is described. The system can be used in the following applications: access control, attendance control, public transportation, parking of vehicles, electronic payment systems, inventory control, and many others. See here a tutorial over the SPI interface THE ISO-14443A TAGS: The MF522 RFID read-write device receives information from an RFID ISO-14443A tag, In this project, a keychain tag (blue color) was used, as shown in the picture above. This card contains an EEPROM with 1 Kb of available space for data. THE BLUETOOTH LINK: To establish communication between the microcontroller and the mobile, a bluetooth module HC-06 with serial port, which allows the transmission of RFID data received by the Bolt board to the mobile phone is used. This link, which is a Class 2 in the Bluetooth standard, can operate up to a distance of 10 meters. TESTING THE SYSTEM: To test the system, you must assemble the test equipment as shown in the photos above. Its functioning in any of the modes -explained in the following paragraphs- requires that the proper firmware had been previously loaded to the Bolt system. In all cases, you must bring the tag at an aproximate distance of 5 cms from the MF522 device and remain in that position until you receive a confirmation on the mobile phone. The summary of the system working in each mode is as follows: Reading the tag: the user brings the tag (blue key) near to MF522 device. At that time, automatically and through SPI communication, the system reads part or all of the EEPROM memory of the tag. This information is sorted, tabulated and transmitted to the serial port in Bolt board, which has already inserted a HC-06 Bluetooth module. The information is transmitted wirelessly from the Bolt system to the mobile phone, where the user can see it. Erasing data tag: the user brings the tag near to MF522 device. On the mobile phone you will see confirmation of the erasing each sector of the entire EEPROM. Writing new data to the tag: the user brings the tag near to MF522 device. Automatically, the system will write the new data to the EEPROM of tag, sending confirmation to the mobile phone. BLUETERM APPLICATION FOR THE MOBILE PHONE: To display the data read from the RFID tag, the mobile phone must run the application called 'Blueterm', compatible with the Android operating system, which is a terminal emulator application which allows you to send or receive ASCII strings through the Bluetooth link already built into the phone (this application is equivalent to the 'Hyperterminal' software for Windows). This program is free and can be directly downloaded to your mobile phone from the Google Play website. To properly configure the Bluetooth functionality in your mobile as well as the BlueTerm program, please go to this link CONECTION BETWEEN THE BOLT 18F2550 BOARD AND THE MF522, RC522 RFID-SPI CARD: As shown in the figure above, the Bolt card connects to an external wall transformer, which provides the overall system power. To feed the MF522 RFID card, a 3.3 volt regulator is used. The data transfer between the card and the MF522 Bolt is performed using an interface with the SPI standard. The diagram of connections between the two systems is shown in the figures below: AVAILABLE FIRMWARE FUNCTIONS: The firmware functions developed for Bolt system include: reading the serial number of the card or tag, reading the entire EEPROM of tag either in hexadecimal or ASCII format (64 sectors, 4 blocks per sector 768 locations of read/write ), erasing the EEPROM memory, and writing new ASCII text data in the tag. These functions provide a general basis by which programmers can develop specialized RFID applications aimed at solving specific problems. The programs were developed using the C18 compiler and MPLAB IDE tools. 1. READING DATA FROM THE TAGS EEPROM: For a general system test, the following program for the Bolt 18F2550 system was developed . This program allows the use of 3 functions, selected by the dip switches on the Bolt card. *** IMPORTANT *** For the data to be neatly in the screen of your mobile, you need to configure the program 'BlueTerm' as shown in the attached photos, to replace all received CR characters with CRLF. If you omit this step, the characters will appear superimposed and impossible to read. Open on your mobile the BlueTerm app and choose the following options: To read data from the tag's EEPROM, download the complete MPLAB-IDE-C18 project here: MFRC522-C18-5.zip. You must load the .hex file to your Bolt board and move the dip switches accordingly to the desired function. The reset button must be pushed to activate any of the functions. Example of data read from the RFID tag and transmitted to the user on his mobile phone, running BlueTerm application. Data of the EEPROM can be modified using the firmware for Bolt described in point 3. Reading the tag's serial number: Reading the tag's EEPROM information in hexadecimal format: Reading the tag's EEPROM information in ASCII codes: 2. TO ERASE ALL EPROM MEMORY: Additionally, to erase the memory of the TAG, the following firmware is used. Load the hex file from the following folder to Bolt system. To completely erase the EEPROM memory of tag: PROJECT-C18-MPLAB-RFID-CLEAR-TAG.zip 3. TO WRITE NEW DATA INTO THE TAGS EEPROM: Writing new data to the tag requires the user interaction with the MPLAB and C18 compiler tools and modifying the text and then compile the program and load it into the Bolt system, which will send this new data to the MF522 and then to the tag. Please read the instructions described in the source program .c included in the following folder to modify the new data you wish to write to the tag. To write new data into the EEPROM of tag: PROJECT-C18-MPLAB-RFID-WRITE-EEPROM-TAG.zip Summary of erase and write funtions: OTHER RFID RECOMMENDED LINKS: MF522 RFID module connected to Bolt 18F2550, with visualization of data in mobile smartphone. The easiest way to manage RFID: an UART serial port interface Low cost school attendance RFID system, using an Excel spreadsheet. The functioning of an RFID 13.56 ISO-14443A read/write tag
OPCFW_CODE
What is the equivalent in English of the French sentence part "complément de phrase"? In French, a sentence has two essential syntactic parts (the subject and the predicate) and may have one or more "complément de phrase", which are optional parts. "complément de phrase" = "sentence complement" (literal translation) (Nuances and details could be added to that explanation in an advanced grammar and syntax context, but let's keep it simple.) I tried looking for the equivalent in English of that syntactic function, the "sentence complement", but grammar sources I found contradict each other and don't define a specific syntactic function, but different sub-fonctions (functions held by grammatical groups within the three main syntactic groups (subject, predicate and "sentence complement"). Maybe I'm only stuck in my French grammar point of view. Therefore, I'll explain the "complément de phrase" so you can tell me what it refers to. The "complement de phrase" syntactic function is usually held by an adverbial group, a prepositional group, a nominal group or a "subordinated sentence". Adverbial gr. : Very carefully, he opened the cage. prepositional gr. : With all due respect, I must refuse your offer. nominal gr. : I ate cereals this morning. "subordinated sentence" : While he was asleep, someone stole his ring. Here are the characteristics of a "complément de phrase": optional/removable Very carefully he opened the cage. (Sentence still works = OK) While he was asleep someone stole his ring.(Sentence still works = OK) X *While he was asleep, someone stole his ring. (Sentence not OK, the predicate is incomplete, so "his ring" is not a "sentence complement", but a direct object complement of the verbe stole.) X While he was asleep, someone stole his ring. (Sentence not OK, the predicate is incomplete = "his ring" is not the "a sentence complement", but a direct object complement of the verbe stole.) movable within the sentence (usually before/after the subject/predicate/other "sentence complement rather than inside them") Very carefully, he opened the cage. He opened the cage very carefully. ? He very carefully opened the cage. X/? He opened very carefullly the cage. "non pronominalisable" = "cannot be replaced by a pronoun" (except for "location complements" that can be replaced by the pronoun "y", but that is probably only specific to French). X Very carefully It, I opened the cage. (wrong sentence = Ok, the ) Very carefully, I opened the cage it. ("The cage" can be substituted with a pronoun, "it", since it is the direct object complement of the verb "opened".) detachable (can be isolated) I ate cereals and that took place this morning Someone stole his ring while he was asleep. Now that you understand better the "complément de phrase", can you please tell me what is its English equivalent and if there are differences between the French and the English concept? Most of these seem to be varied adverbials. I'd point out that 'With all due respect, I must refuse your offer.' is very different from 'With all due haste, I packed my bags.' The first uses a pragmatic marker to hedge the statement; in the second, the prepositional phrase is a true adverbial, modifying 'packed'. I have been taught intensive English in decent schools (I am French Canadian). While in French we were taught that sentences are made of three groups (subject, verb, complement), English teachers always mentioned two obligatory groups (subject, verb) and potentially additional things like adverbial and prepositional groups. There wasn't a generic name for them. Interesting considering the English and French languages work in a very similar manner. Ok, here's the best I can come up with, although it may not be a perfect match. (I'm a bit confused about your first example from characteristic 4, because English treats the first sentence as two independent clauses joined by a conjunction.) It appears to me that English distinguishes three kinds of the "complément de phrase" without giving them a common name. The three kinds are adverbs, adverbial clauses, and adverbial phrases. The distinction between the last two is that the clause, while subordinate, has both a subject and predicate, whereas the adverbial phrase lacks at least one of these items. Many prepositional phrases are also adverbial phrases (e.g. in the town), but I'm not sure whether all of them can be classified as such. For example, prepositional phrases that are used to indicate indirect objects or possession may not be considered adverbial. If such prepositional phrases are not considered to be adverbial, then, adverbial phrases and clauses along with adverbs seem to match the French category of "complément de phrase" pretty well. With regard to characteristic 3, note that location adverbs and phrases can cometimes be replaced by words like "here" and "there," which function in a way that is similar to a pronoun. In the syntactic theory called "X-Bar Theory" (cool name, that; unfortunately, the Bars are all closed now), these three categories would be called "extensions" of the category Adv and denoted as Advʹ (pronounced "Adverb-Bar", but spelt "Adv-Prime" because the bars were too hard to put into tree diagrams; sic transivit Theoria X-Bar.) All it means is that if it acts like an adverb, it's an adverb; as long as you don't restrict yourself to one-word constituents, that will apply to phrases and clauses too. I believe they're called adjuncts in linguistics. Here is a link to the Wikipedia article. Prima facie, they meet your characteristics. You'll have to wade through the details to see if they match up with the French concept. "Adjunct" is a good, question-begging term. They're all adverbials of one kind or another; what they have in common is an oblique noun phrase (i.e, one with no grammatical relation to the verb). "Adjunct" works, as long as you have enough examples; but note that this is neither standard French terminology nor standard syntactic terminology. Adverbs, like "adjuncts", are dispensable, movable, and vastly variegated; they've always been a wastebasket category, when nothing else would fit. I am not quite sure, but, in many languages, adverb clauses are optional, movable and non pronominalisable. As it is a clause, then it can be isolated as a meaningful component about on the same level with the Subject Clause, Verb Clause, and Object Clause. The trick here for the adverb is two reasons: 'SVO' can make a sentences, and Adverb Clauses are hard to be misinterpreted as Noun Clauses or Adjective Clauses. However, I do not know whether there is a concise word to describe all properties that the 'adverb clauses' have. Somehow, I am not quite sure about 'and that took place this morning' in your last example. Normally it is called a Independent Clasue, but I do not why we are making Independent Clasue different from the Adverb Clause; it seems that the Independent Clause fits just the slot of the Adverb Clause in analyzing a sentence with a grammar tree.
STACK_EXCHANGE
In part 3 of this series I will be following the arch wiki general recommendations section to complete some initial setup tasks on my newly installed Arch operating system. The first thing that I need to do is setup a new user account so I am not always logged in as root. I just issue the following command to create a user account and set its password. useradd kevin -m -s /bin/bash passwd kevin The -m option sets up a home folder for the new user and the -s option sets the default shell for the user. This is a basic user account that does not have full rights to the system. Logging in with this user is more secure than root. Sometimes I will need to run administrative commands with my new account so I need to install the sudo package. Arch uses pacman as it’s package manager. This manager can be used to download, install, and update official arch software packages. I want to download the sudo package however I find that right now I am not able to connect to the internet. I head over to the networking section of the arch wiki. The wiki tells me to check the status of my network cards by using the ip link command. I notice a couple of things. First the network adapter names are not what I’m used to seeing in Linux. Second my network adapter status is showing “state DOWN”. I read this article which explains the reasoning behind changing the old standard for naming network adapters. It seems to make sense and I often run into trouble when adding and removing network adapters to my kali VM so I will live without the “predictable network interface names”. I’m never going to remember the name “enp0s3” so I opt to rename my device manually. The wiki directs me to create a file called “/etc/udev/rules.d/10-network.rules”. In the file I can specify my network adapters mac address and a new name. They recomend not using the traditional ethX naming convention and say I should use netX instead. I create the file and reboot. My network adapter now shows as a much easier to remember “net0” I will hopefully be using this OS to do some hacking challenges which might require me to change my mac address at some point. I’ll need to remember to spoof the mac instead of actually changing it, or add another network adapter, or else this might come back to confuse me later. In any case I have a renamed network adapter but it’s still down. I start the dhcpcd service to bring my net0 adapter ip and obtain an IP address from my DHCP server. Now I’m able to ping the internet and get online. Back to the task at hand I need to install the sudo package to I run the command “pacman -S sudo”. Now that sudo is installed I grant my new user account sudoer rights. To edit the sudo file you must use the visudo tool. I use the tool and un-comment a line to enable all members of the wheel group sudo rights. Now I add my new user to the wheel group using the command “usermod -aG wheel kevin”. To test this I logout of root and login with my new account. I use the command whoami to print the current user. Then I run whoami again using sudo to show my privileges being escalated. Now going forward I can use this new account for everything. If I need to do something that requires root permissions I can just use sudo. The wiki recommends some basic system maintenance. I create a very simple script that I can run from time to time to run their recommended commands. That’s all for now. In the next part I plan on starting to setup a GUI environment to make my Arch OS look a little better.
OPCFW_CODE
What are the reasons to create a key-less, index-less database? TL;DR: why would you create a database with no keys of any kind (not even primary), and no indexes? I joined a non-profit organization that was 14 months into migration to a custom built front-end for their financial processing. The organization has about 60 staff. I have 6 years of software development experience, so they trust me enough to give me access to their database. Lo and behold, the front-end SQL Server production database has absolutely no keys of any kind and no indexes. However, the interface actually works well. The external company that created the system are relatively responsive in creating new features, and the system actually works. I've been working there for about 6 months. Most of the problems are relatively minor. I haven't seen any catastrophic errors due to mistakes on the company's part. Then again, there are no UAT (user acceptance tests) at all, and I have no idea how QA works at this external company. I only have contact with a sales/technical liaison. I highly suspect this company is simply fleecing the non-profit for maintenance fees in the future. I got so sick of the slow DB that I added in a bunch of multi-column, non-clustered, non-unique indexes based on the most frequently used reports, and increased the DB speed by probably 100 times (I didn't bother to benchmark). I have never seen this type of setup before, and the solution is so stupidly easy that I'm having difficulty understand why it wasn't done. I tried asking the sales contact but he just doesn't know what I'm talking about. Am I correct with my assessment or is there a legit reason to do this? Clarifications: "Is the DB relationship in nature?" I believe so. They run weekly and monthly DR/CR reports. It is very much an accounting DB. Even though there is no primary key setup, they have enforced the uniqueness at the application level. "How old is the DB" It is suppose to be a brand new, custom built system, running on SQL Server 2014. However, I also found out they put the DB in 2008 compatibility mode (compatibility code 100). I've asked about this but my contact haven't replied to me. is there an agreement (and specifications/requirements) between your organization and the company that created the application and database? There is, but are clients expected to explicitly specify that the DB should have primary keys? From a DBA's perspective, that's just stupid. It's like putting down "must have wheels" as a purchasing requirement when buying a car. We could speculate but there are dozens of reasons they wouldn't have all the indexes that you now know will improve performance - don't know better - think indexes slow things down (this is actually quite common, even more so for constraints) - plan to implement them but haven't gotten to it yet - tested with "works on my machine!" scale or just very different usage patterns - and on and on. If you want an answer, you'll have to get it from them. It may not say whether it has PKs or not but it may state functional requirements / constraints and whether these are to be enforced by the application or the database (or both). Also the indexes could have been accidentally dropped at some point. @AaronBertrand not 'all'. The entire DB has exactly ZERO index. Zero. None at all. Same with primary keys and foreign keys and keys of any kind. Zilch. Nada. Zero. @DavidBrowne-Microsoft Or the deployment batch forgot the index script (or someone generated the script with the defaults). Or the index batch failed. Lots of accidental scenarios on top of the potential thoughtful ones. A primary key doesn't just make the database faster. A primary key is a constraint. Don't confuse this with a clustered index (or the right clustered index for your workload). These are separate concepts that are often conflated because of SQL Server's default behavior. There are a number of possibilities listed in the comments. Several are not "legitimate" reasons not to have indexes or primary keys: ignorance on the part of the application developer, or accidental deletion of all indexes. In a system where INSERT operations are prioritized over anything else, technically, indexes do slow things down. It may be that slow response for reporting, etc. was considered acceptable, as long as inserts completed as fast as possible. Aaron Bertrand makes a critical note - a primary key is a constraint. It puts a restriction on the data, preventing duplicate values from being entered. The same holds true for a unique index. With this is mind, there's at least one possible reason to avoid implementing those: the application is designed to manage the relation aspects of the data at the application level. This is more likely if the application was originally developed a long time ago (say the 1980s or 1990s). I worked with a SQL Server DB for about 10 years that was supporting an application originally developed in FoxBase, and it had no foreign keys; all the relationships were maintained via triggers. They'd started using foreign keys for newer tables, but only started changing existing tables a few years ago. Depending on how the code's written, it's actually possible that you could break the application by adding primary keys or unique indexes. It's unlikely, but not impossible. Adding regular indexes would only break things if the application really needs to keep inserts as lean as possible. That may have been a limitation at some point, but it may be that it isn't any longer. Note that this is just one possible scenario. As Aaron recommended, the best option is to communicate with the vendor. Checking to see how much tuning you can do to optimize your organization's workload isn't a bad idea, and respecting the application's creators (whether they deserver it or not :-) ) is a good way to start a relationship. As a side note: I did work in FoxBase about 30 years ago, and as best I recall, each table and index was stored as a separate file - and you could only have ten files open at any given time. I believe that included index files. That would make not using indexes tempting.
STACK_EXCHANGE
Searching the filename propert (which is also shown in the screen cap) and thats the results? Am I missing something? Returns the same thing without the quotes... This seems really wierd, it seems to be finding the F & 000 in the entire string and posting the results? correct, the value of file name field is separated into individual searchable chunks known as tokens to be indexed. F & 001 are recognized as two tokens. you may add some addtional search criteria to narrow down the result. I already ask Autodesk (My Feeback during Beta and Autodesk Feedback for the relaease version). This kind of search are not usefull at all. Why? Because if you work correctly with Vault, you have a naming scheme. Then it's a strict manner to work with Vault. Then what want the customer? Find exacttly what he want and not received a huge of files who he is not interrsted. Vault is not a Google Database, then please give us a way ( I mean a usable and quick way)to find the data like every other database or ERP ! If I search for F000, only file who contain this string must be show. Not F-C000.... We have many customer who ask to have a normal (none-token based) search. *F??12* must show only files like: Hope Autodesk understand the real need of the customer.... First of all Yuhui, thanks for explaining the method to the madness & the blog article link, at least we know now. However.... Which Genius in the Vault Dev team came up with that way of searching on the filename property. Its frankly ridiculous and confusing for the users who don't understand the illogical 'Token' system. I gather this only applies to the Filename property? are there any other properties we should know about that behave like this? If this type of search is usefull for some people then it should be represented as a tickbox on ALL the find & search dialogues. I emphasize the ALL here because thats important for consistency, its no good having that option only in the Advanced search dialogue in Vault Explorer it should be in the Basic one and in the Find dialogues of the various add-ins as well. I will now set about trying to explain this madness to my staff. I'm sure Brian will enjoy doing likewise as well. Thanks again for your support on these boards. Please pass on our comments to the Vault Dev team, preferably print it out several times to form a stiff book roll it up and then beat them with it. I think I see the logic, and I love fast searches I really do think this is going to create a lot of confusion.... On a side note, after reading the blog and sort of understanding it, Im not sure why when I search for C000 it returns results as I would expect. See the linked video, if I replace the "F" with a "C" I get better results...... Again I think this is going to be confusing... Brian its just coincidence, that none of the other files in the search field of view, have C & 000 as seperated Tokens. Like the F000 search did. In this case, the only time the 2 tokens appear in file names they are right next to each other creating the illusion of a logical search. i.e: its finding Token 'C' & token '000' in the string *-C000-* Thank you all of your valuable feedbacks on search behavior. I'll do pass your comments to our dev team. to answer questions: ScottMoyse" I gather this only applies to the Filename property? are there any other properties we should know about that behave like this?" not only for file name, all properties following same token rules. Vault was intended to provide better user experience with more flexibility, higher performance with this change. Actually performance does improved than previous mechanism, but the token may be not so smart, i'll log a wish of improvement on this. Brian - "if I replace the "F" with a "C" I get better results...... Again I think this is going to be confusing..." I agree with Scott in this case, its just coincidence. the rules works as explained above. Access a broad range of knowledge to help get the most out of your products and services.
OPCFW_CODE
Messages.app not responding My Messages decided to stop responding. Complete lockout. The only way to kill it is in the Activity Monitor with Force Quit. It consumes 100% of the CPU and is not responsive. The Force Quit generates a long report to be send to Apple (done that). Somehow I do not expect Apple to respond to me, but some of you here might. The Console report shows this: 6/17/14 20:07:50.000 kernel[0]: process Messages[6475] thread 409143 caught burning CPU! It used more than 50% CPU (Actual recent usage: 99%) over 180 seconds. thread lifetime cpu usage 90.036317 seconds, (88.239220 user, 1.797097 system) ledger info: balance:<PHONE_NUMBER>5 credit:<PHONE_NUMBER>5 debit: 0 limit:<PHONE_NUMBER>0 (50%) period:<PHONE_NUMBER>00 time since last refill (ns):<PHONE_NUMBER>7 6/17/14 20:07:50.781 ReportCrash[6487]: Invoking spindump for pid=6475 thread=409143 percent_cpu=99 duration=91 because of excessive cpu utilization 6/17/14 20:07:51.766 spindump[6488]: Saved cpu_resource.spin report for Messages version 8.0 (4226) to /Library/Logs/DiagnosticReports/Messages_2014-06-17-200751_kelly.cpu_resource.spin 6/17/14 20:08:12.070 com.apple.launchd.peruser.501[138]: (com.apple.iChat.24672[6475]) Exited: Terminated: 15 6/17/14 20:08:19.483 spindump[6492]: Saved hang report for Messages version 8.0 (4226) to /Library/Logs/DiagnosticReports/Messages_2014-06-17-200819_kelly.hang 6/17/14 20:09:08.354 SubmitDiagInfo[6519]: Running in single report mode to submit: file:///Library/Logs/DiagnosticReports/Messages_2014-06-17-200819_kelly.hang 6/17/14 20:09:14.032 SubmitDiagInfo[6519]: Submitted hang report: file:///Library/Logs/DiagnosticReports/Messages_2014-06-17-200819_kelly.hang 6/17/14 20:09:25.669 Messages[6523]: Chat history path /Users/buscar/Library/Containers/com.apple.iChat/Data/Library/Messages/Archive Status: 0 totalExpected: 0 countProcessed: 0, forcing an import. 6/17/14 20:10:23.840 Finder[150]: CGSCopyDisplayUUID: Invalid display 0x44105d81 6/17/14 20:10:27.898 Console[6528]: setPresentationOptions called with NSApplicationPresentationFullScreen when there is no visible fullscreen window; this call will be ignored. 6/17/14 20:11:07.000 kernel[0]: process Messages[6523] thread 411156 caught burning CPU! It used more than 50% CPU (Actual recent usage: 88%) over 180 seconds. thread lifetime cpu usage 90.062022 seconds, (88.395516 user, 1.666506 system) ledger info: balance:<PHONE_NUMBER>2 credit:<PHONE_NUMBER>2 debit: 0 limit:<PHONE_NUMBER>0 (50%) period:<PHONE_NUMBER>00 time since last refill (ns):<PHONE_NUMBER>63 6/17/14 20:11:07.333 ReportCrash[6535]: Invoking spindump for pid=6523 thread=411156 percent_cpu=88 duration=103 because of excessive cpu utilization 6/17/14 20:11:08.385 spindump[6536]: Saved cpu_resource.spin report for Messages version 8.0 (4226) to /Library/Logs/DiagnosticReports/Messages_2014-06-17-201108_kelly.cpu_resource.spin 6/17/14 20:11:11.478 Finder[150]: CGSCopyDisplayUUID: Invalid display 0x44105d81 I tried this but did not help. On MBA 10.9.3 Update: The Diagnostic report does not fit in here :( Maximum 30000 allowed, the reports is 108181 ? Can you post the /Library/Logs/DiagnosticReports/Messages_2014-06-17-201108_kelly.cpu_resource.spin? @GeorgeGarside I can but it is like 3 pages long, tell me what you looking for so I can extract it. Problem Solved Thanks to Ralph Johns (UK)Ferndown UK (root cause remains Unknown.) I followed these instructions: Go to ~/Library/Preferences Delete com.apple.ichat.plist
STACK_EXCHANGE
i guess i foolishly installed windows 7 on a computer without enough memory...Now I want to reformat the hard disc and install windows xp pro. I can't seem to find the right command to accomplish this. Please help I use Vista Enterprise through my University. After having spent quite a bit of time using the beta on a seperate drive, I'd like to start using Win7 as my primary OS. When I try to upgrade, It says that it can't upgrade this version. I presume that's because the RC is Ultimate and the upgrade path would be to Win7 Enterprise. I can't find any version of Win7 RC other than Ultimate though. Is there a workaround for this similar to upgrading the beta? Or am I stuck with Vista until I can get the time to do a clean install? Win7 7127 Today I tested the latest release of Win7. Note, this was done as a test of the updater. First thing I did was, of course, back up my system. Then I uninstalled CIS 3.9 (to ensure D+ did not interfere with the installation) and rebooted. I then started the installation as an upgrade. The installer recommended I remove the only XP game I had installed (it isn't played anyhow these days), and to remove my ATI Catalyst drivers (these could be re-installed later.) After removing the game, and uninstalling the ATI Catalyst drivers, I rebooted as required. WOW! My boot time was 1/10th my normal just by removing the ATI drivers. Impressive. I then cleaned my registry to be certain, and proceeded with the installation/upgrade. I will say, the installer actually completed this time. However, as you might expect, I had issues. When I clicked on my login icon and typed my password in, the screen blanked, saying it was creating my Desktop. I expected this the first run. However, this was as far as it got. My HDD spun, and the screen remained dark. After so long, the screensaver kicked in (I had disabled the screensaver several days ago). 10 minutes I think was the wait. I should have gotten my Desktop screen long before this. I logged out (CTRL-Alt-Del) and then back in to see if that would work. Nada. Using the same method, I logged out again. This time, I inserted my Vista disk and pressed Reset to cold boot from it. I did a system restore, which was only minutes old before the Win7 experience. After rebooting Vista, I immediately used Revo to completely remove my ATI drivers. I web searched and found a much more recent Vista x64/Win7 x64 driver and installed it. As required, I rebooted. My boot time remained short with this new driver installed. Obviously I had issues with the older drivers. I continue to struggle to get Windows 7 to load on my machine from Vista Home Premium. Did latest upgrade, clicked for Upgrade, it copied the files and started copying the settings. But before finishing that stage it stops with the following error message: The upgrade was cancelled Any changes that were made to your computer during the upgrade process will not be saved. Setup cannot continue. Restart the computer and restart Setup. When prompted, try getting the latest updates. I logged a note on this forum (http://social.answers.microsoft.com/Forums/en-US/GettingReadyforWindows7/thread/e795ffde-4ebb-4e34-856e-797a7fd97c0f) and received a response that it may be linked to my ASUS motherboard and the ATK0110 ACPI UTILITY. The suggestion was to get a W7 version of that utility (none available that I can find on the ASUS website) or otherwise to uninstall it. I tried the latter but the setup stops at about the same time as it did previously. The ATK utility remains uninstalled. I have run the Upgrade Adviser but nothing of substance emerged. I have an ASUS motherboard, P5VD2-MX. It had the latest ATK utility installed (until I uninstalled it). I thought I would try one more log to see if anyone else may have any further clues. Seems a shame to have to give up on the Windows 7 idea before even starting but I can't see a way through at the moment. Many thanks I have been duel and tribooting for a while. The process went fine with downloading and installing W-7 on the 3rd harddrive as a new install. Every thing is wonderful with 7. It is a huge inprovement over any Vista that I tested when it came out as an RC. Now the problem is that W-7 took out the boot info on the other harddrives and made itself the only booting harddrive. My ? is, do I use the W-7 DVD to go in and Hit R for repare then can I use it for fix boot etc.? Or do I have use the XPPro DVD? There is no option to edit the System Startup and it is the only the option on the dropdown bar. The check box time to display other operating systems is checked. The other operating systems are listed otherwise. I hope this is enough info on my Problem. Thanks, deafbus. After a few days, I was finally able to get 7 installed! However, while troubleshooting I screwed something up royal and don't know how to fix it. Previously, my installs were incredibly slow. One of the solutions that I had come across was to switch the port location of my SATA drives (I don't know why, but it worked for some people). So, I unplugged the OS drive from SATA 1, the storage drive from SATA 2, and the DVD-Ram drive from SATA 3 and installed them into 2, 3, and 4 in the same order. When doing this didn't alter or improve my install times, I moved them back into their original position. Now, whenever I boot, I get a "DISC BOOT ERROR" unless a system disc (XP, Vista, or 7) is in the tray. The windows I am brought to looks like safe mode and I am unable to access anything. When I click on 'My Computer' I am prompted with a screen stating "Windows cannot access the specified device, path, or file. You may not have appropriate permissions...." I just want this to work. When it installed today I was ecstatic, but when I discovered that the issue that the SATA swapped cause still existed, well, you can imagine. I thought it might have something to do with the MBR, though I could be wrong, and I cannot find anything that supports/suggest a possible solution on the net. Thank you in advance. AMD Athlon 64 x2 Dual Core Processor 5400+ Black Edition Abit AN52 nForce 520 Motherboard OCZ SLI-Ready Edition 4GB DDR2 (PC2 6400) Western Digital WD360GD 36gb Raptor Western Digital WD740ADFD 74gb Raptor LG GH22NS30 DVD-RAM Drive XfX GeForce 8800GT Alpha Dog Edition Sorry, tried to find Windows XP page but keep getting Windows Vista forum, hope somebody can/will still help me with my problem. I have downloaded Windows RC 7, burned it to DVD as ISO file with Nero using option DVD-Data . If I then reboot computer and press any key to boot from DVD I only get to see message Profile 1. If I continue pc starts up with Windows XP Professional instead of getting Windows Install etc. I already made seperate partition where I would like to install Windows 7 in order to have a try out before deciding to buy new operation system. Have installed BootMagic but that one also only sees Windows XP as operating system but that seems logical to me as no new Windows 7 installed yet. I have 3 Mb Ram, 2 hdd's, 1 500 Gb (2 partitions), 1 250 Gb (4 partitions as BootMagic required 1 partition in FAT 32 and both hdd's NFTS). Can anybody advise what I am doing wrong? Have downloaded Windows 7 RC now 2 times which take a lot of time, used 2 DVD's but do not get anywhere. One option is to wait till end of year when official program will be released but I really would like to know why I cannot get it to work while many other persons do not have any problems with this? I am not a computer whizzkid (being almost 63), neither a digibeet but like to experiment with these kind of things. I have a recent Dell Precision T5400 workstation with Xeon 64-bit processor and Vista Business 32-bit version (I don't know why - it is how it was shipped). I want to put Windows7 RC over it - should I use 32-bit version or can I use 64-bit? developer I'm in the process of formatting my computers hard drive so i can do a complete re installation of windows 7. However i am wondering if i can backup my updates so i don't need to download them all again. I have uploaded the windows 7 RC onto my computer and not my computer crashes all the time. Since I put the Windows 7 on my computer, it updated the restore info. I need to know how I get my computer back to Vista Home Prem. that I received when I bought this computer. As with all computer that you buy from a retail store, I don't have a Vista DVD. I need to get my old system back so I can continue using all my programs.
OPCFW_CODE
using System; using System.Collections.Generic; using System.Linq; using System.Xml.Linq; namespace CsomInspector.Core.Actions { public class QueryAction : ActionBase { private QueryAction(IEnumerable<String> selectedProperties, IEnumerable<String> selectedChildProperties) { _propertiesNode = CreatePropertiesNode(selectedProperties, "Select all properties", "Selected properties:"); _childPropertiesNode = CreatePropertiesNode(selectedChildProperties, "Select all child properties", "Selected child properties:"); } private IObjectTreeNode CreatePropertiesNode(IEnumerable<String> selectedProperties, String allText, String selectedText) { if (selectedProperties == null) { return new ObjectTreeNode(allText, Enumerable.Empty<IObjectTreeNode>()); } var properties = selectedProperties.Select(p => new ObjectTreeNode(p, Enumerable.Empty<IObjectTreeNode>())); return new ObjectTreeNode(selectedText, properties); } private IObjectTreeNode _propertiesNode; private IObjectTreeNode _childPropertiesNode; public override IEnumerable<IObjectTreeNode> Children => new[] { _propertiesNode, _childPropertiesNode }; public override String ToString() => "Load (Query element)"; internal static new QueryAction FromXml(XElement element) { var selectedProperties = GetProperties(element, "Query"); var selectedChildProperties = GetProperties(element, "ChildItemQuery"); return new QueryAction(selectedProperties, selectedChildProperties); } private static IEnumerable<String> GetProperties(XElement queryElement, String childElementName) { var element = queryElement.Element(XName.Get(childElementName, _elementNamespace)); //TODO: TEMP FIX - this will cause "SelectAllProperties = true" to display for items without child items if (element == null) { return null; } var allPropertiesAttribute = element.Attribute(XName.Get("SelectAllProperties")); if (String.Equals("True", allPropertiesAttribute.Value, StringComparison.InvariantCultureIgnoreCase)) { return null; } var propertyElements = element.Element(XName.Get("Properties", _elementNamespace))?.Elements(XName.Get("Property", _elementNamespace)); var nameAttributes = propertyElements.Attributes(XName.Get("Name")); return nameAttributes.Select(a => a.Value); } } }
STACK_EDU
Create a new ALB for UAA and login to better control access (3/25) In order to better control access to login.* and uaa.* we need to implement a new ALB to front them, with customizable access rules. The endpoints uaa.fr.cloud.gov and login.fr.cloud.gov are both CloudFoundry apps associated with the *.fr.cloud.gov ALB. Our plan had been to migrate all customers off of *.fr.cloud.gov so we can WAF-customize application paths with out any side effects or impacts. However, in advance of that we can move those apps to an ALB specifically for login.fr.cloud.gov and uaa.fr.cloud.gov and those IPs will be resolved instead of the IPs for *.fr.cloud.gov. With a WAF rule on that ALB we can control external access. Security considerations This will allow better customization of security measures. Implementation sketch: [ ] Deploy and test ALB in dev, then fr-stage, then fr Acceptance Criteria [ ] References DISCUSS & DECIDE - Better controlling access to uaa.* and login.* - GitHub issue Needs to be complete by 10/30/2020. I think this looks like: create a new ALB for uaa.$basedomain for each of our base domains (fr.cloud.gov, fr-stage.cloud.gov, etc). It should look basically exactly like the existing ALBs (wildcard cert, pointed at the gorouters, etc) test it, somehow. We could probably do this manually/ad-hoc by messing with hostfiles locally update DNS to point uaa.$basedomain at the new ALB. (Note that the old ALB will still work, and DNS can take time to converge, so if there are issues they may not show up immediately or consistently) We should do this for each environment, and between finishing in staging and starting in production, we should run CATS We should not do this in production until we have an OK from the JAB. I've revised the SCR work I had in place earlier, could use some pairing on the doc. Also drawing a diagram of how things fit together in a cg-diagrams draft PR (https://github.com/cloud-gov/cg-diagrams/pull/81). This SCR is ready for review and/or pairing. Let me know in Slack if you want to work on it, or just comment and accept in Google Docs. @bengerman13 OK'd the updated SCR, so I've shipped that our JAB TRs. Now the soonest we'll hear back to 2/24, so marking accordingly. Updated SCR to v3 and uploaded along with responses to their inquiries Still waiting on answer from JAB. This is still pending a decision by the JAB tech leads. Updating due date to 5/6 Update here from 5/12 meeting: Nadine - what JAB TR’s are looking for AWS managed rules don’t need to be assessed by a 3PAO Custom rules do need to be tested Peter - we’d use generic AWS rules for anything in front of customer workloads. Custom rules would be reserved for cloud.gov applications (anything that we’ve written and deployed). The implementation of WAF is designed to support cloud.gov customers. And any changes to custom rules for WAF are only for cloud.gov applications. Since Coalfire is already working on the Pages SCR, Peter will see if we can do an amendment to the SAP to do the testing for AWS WAF. Protect against incidents. If you need to implement changes during an active incident, you can do that and address in the incident post-mortem. It falls under the incident communication procedure. After that I initiated comms w/ CoalFire Federal. They came back on 5/25 with: Our team is reaching out the PMO to confirm if a 3PAO assessment is required for this change. We should have more information shortly and will provide the LOE if needed. Additionally, after the Pages SCR the funded T&M bucket will have just over (redacted) available. Then Joshua Scibelli replied on 5/28 -- but I was away for over a week for a family issue, and when I returned on 6/11 this went to the bottom of the stack. I'm replying now with context on what the JAB TRRs have asked for, and to clarify my confusion about the PMO vs the JAB TRRs. I will include the cloud-gov-compliance list for reference and archives. I'm getting this on the agenda with the JAB TR-Rs for tomorrow. We are meeting w/ CoalFire Federal on Monday and if we can, we should have the ruleset list ready, the process (Terraform?) described, and a suggested testing methodology. That should be enough for them to scope work and, I hope, start it soon. @kelleyconfer and I met w/ CFF today and they'll assemble an LOE, then we can more forward. Updates: CFF Anna says she'll provide LOE real soon now. JAB needs the SCR updated with our custom rulesets. @bengerman13 -- do you have an issue someplace with the custom rule to address that one weakness? Or can we write it up in https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_regex_pattern_set format? Our 3PAO scoped the LOE on this to 60 hours, which would take more of our team's time for an assessment that I can want to chew up. Trying to see if we can rescope into 2 SCRs. Good outcome from today's meeting (see below). We should rescope this to three new tickets One for the new SCRs One for implementing the first SCR One for implementing the second SCR To recap that last part of our discussion today, we agreed that cloud.gov will withdraw the current WAF/ALB SCR. In its place, we'll submit two SCRs: One for implementing the WAF with standard AWS rules, which is standard AWS tooling and will not require 3PAO assessment One for the custom WAF rules, which will be folded into our annual assessment. We will proceed with submitting the first of these in the next couple of weeks. -- The JAB team concurs and approves your plan. Thanks. 🎉 SCR cloud gov ALB+WAF core SCR cloud gov ALB+WAF custom rules I submitted both SCRs and emailed JAB TRs. I'll close this Thursday assuming it gets a thumbs up. Moving to blocked for now. What's next here: Write up the process for how we roll custom WAF rules out to production Testing Review Implementation And then we'll submit that write up to 3PAO for an SCR review during annual assessment. @pburkholder, @soutenniza and I created the write-up for this process - please take a look and let us know if there's anything else you need or if there's anything we should adjust. Thanks! I created a Google Doc folder with the SCR and our write up, then shared it with CoalFire Federal. I think closing this is now just some email mopping up. I'm going to close it and carry on in the implementation: https://github.com/cloud-gov/cg-provision/issues/912
GITHUB_ARCHIVE
If you need any of the features from the pre-release version listed under “Upcoming” you can just install coax from the main branch: $ pip install git+https://github.com/coax-dev/coax.git@main Switch from legacy Add DeepMind Control Suite example (#29); see DeepMind Control Suite with SAC. coax.utils.sync_shared_params()utility; example in A2C stub. Improved performance for replay buffer (#25) Bug fix: random_seed in _prioritized (#24) Update to new Jax API (#27) Add Update to Bug fix: set logging level on TrainMonitor.loggeritself (550a965 <https://github.com/coax-dev/coax/commit/550a965d17002bf552ab2fbea49801c65b322c7b>_). Bug fix: fix affine transform for composite distributions (48ca9ce <https://github.com/coax-dev/coax/commit/48ca9ced42123e906969076dff88540b98e6d0bb>_) Bug fix: #33 Bug fix: #21 Fix deprecation warnings from using Bug fixes: #16 jax.ops.index*scatter operations with the new Bumped version to drop hard dependence on Implemented stochastic q-learning using quantile regression in coax.StochasticQ, see example: IQN coax.utils.quantiles()for equally spaced quantile fractions as in QR-DQN. coax.utils.quantiles_uniform()for uniformly sampled quantile fractions as in IQN. This is not much of a release. It’s only really the dependencies that were updated. Added serialization utils: Implemented Prioritized Experience Replay: SegmentTreethat allows for batched updating. SumTreesubclass that allows for batched weighted sampling. Drop TransitionSingle (only use TransitionBatchfrom now on). TransitionBatch.idxfield to identify specific transitions. TransitionBatch.Wfield to collect sample weights policy_objectivesupdaters compatible with Added scripts and notebooks: agent stub and pong. FrameStackingwrapper that respects the gym.spaceAPI and is compatible with the Added data summary (min, median, max) for arrays in StepwiseLinearFunctionutility, which is handy for hyperparameter schedules, see example usage here. Implemented Distributional RL algorithm: Added two new methods to all proba_dists: Made TD-learning updaters compatible with Made value-based policies compatible with First version to go public.
OPCFW_CODE
Toolserver working group - This is not about tools and technical things, but about governance of the Toolserver - Question on mailing list of chapters about who and how we can as chapters get together to see how we can run the toolserver - Present Mark Bergsma, WMF, WM NL, WM IT, Daniel, River, WM CH, WM RU, Pavel, WM DE, WM UK, WM SV, WM PL, WM UK, WM FR - Alessio, Daniel Kinzler, River, Ilario, Vladimir, Pavel, Sebastian, Mike, Lars, Marek, Delphine Previous Discussion on the mailing list - Call from pavel to think about a governance structure for the toolserver (legal body that is running the toolserver). - Purpose of such a body - Do we need a different legal body - Sebastian: legal problems, liability reasons, there is WP content on there, so that it might be a problem. - Reason why this discussion started is more: money, practical, find news ways for participation of other chapters. - 95000 € in costs Goals of this meeting - Timetable (and an aggressive one) - 3 months - Working group of representative from some chapters, and mandate of working group - Make sure that we list our non-options/parameters Mandate of the working group - Make sure that whatever solution is chosen is an efficient, practical and sustainable technical solution (see parameters) - Make sure that basic legal requirements are taken care of. The working group will have to explore legal options for a toolserver association (or any other legal option that allows a joint governance and financing of the toolserver) - The toolserver association (working title) - Chapters are governing the toolserver, not the Foundation, not any other external organisation - Single ownership of hardware - Chapters (toolserver association) will own that hardware (no outsourcing, no rental, no lease) - Keep the existing rules of the toolserver: Work on the toolserver has to be beneficial to a Wikimedia or an Open Street Map projects (potential partner projects) - The resulting outcome should provide a strong incentive for chapters to integrate their locally run toolservers in the Toolserver - Solution has to be agreeable at least to Wikimedia Germany - One month to decide on what chapters are part of the working group - End of September there has to be a proposal. Who is in the working group - Participating chapters in the working group should commit to invest a minimum of 5000 € into the toolserver in the next 18 months. - Participating chapters agree to participate in the costs of the working group coming up with a solution. Total=5000 €. - DK- Think, in the long run, about integration of existing local toolservers (bot management etc) - Should all toolservers be integrated in one thing? - If we have an external organisation, does it need to be able to accept donations? - Does the chosen solution have the power to redefine usage policy and scope of the projet: focus (chapters wikis? sandboxes for medawiki extensions?) - In the governance structure, should the influence of each financial partner decide who gets to make the decisions? - Can this happen within an existing organisation? ex. Wikimedia Deutschland - SM - It's difficult because the risk is born just by one chapter. Whereas if you have another structure, it is born by that structure. - DM - Not sure it's so difficult to get the chapters to commit to a X per year. Factors that influence the decision - Assumption agreed upon: only participating chapters influence the governance. - Size of community - Number of people (affiliation to a chapter?) who have an account on the server - Buying a share into the organisation. The number of shares determines your influence in this organisation (you get to decide who is on the board etc.). (Stock company) - ex. German Top Level Domain Association. Model to look at. - Membership fee to stay in by every participating chapter. Commitment over several years - Proportional scheme contribution=influence but there needs to be some kind of commitment in the long term (which probably the share model actually solves) - Toolserver is run by the chapters? Yes, no? WMDE, WMFR are ok with that especially as a cash outlet - Good for advertising what the chapters does for the community, nice tool for researchers - Technical decisions on a day to day can't be done by the board. - However a strategical technical decision should be taken by the board
OPCFW_CODE
Download redis packages for ALTLinux, Arch Linux, CentOS, Debian, Fedora, FreeBSD, Mageia, NetBSD, OpenMandriva, openSUSE, ROSA, Slackware, Ubuntu. 24/01/2019 · Redis-cli by itself isn’t that complicated – it’s a REPL read–eval–print loop that speaks to the Redis server. However, getting this jewel of a tool is not straightforward for many. In this post, I’ll share how to get redis-cli without installing or having to make a full Redis server. 14/03/2018 · This article is the second in my Intro to Redis and Caching series. If you have no idea what Caching is or what Redis is, please read my Introduction to Caching and Redis post before proceeding. Now that you have an understanding of what Caching and Redis are, let's build a very basic project that. Clone via HTTPS Clone with Git or checkout with SVN using the repository’s web address. Introduction to Redis Streams. The Stream is a new data type introduced with Redis 5.0, which models a log data structure in a more abstract way, however the essence of the log is still intact: like a log file, often implemented as a file open in append only mode, Redis streams are primarily an. Database integration. Adding the capability to connect databases to Express apps is just a matter of loading an appropriate Node.js driver for the database in your app. 23/02/2019 · Install Express, Redis, and node-fetch npm modules with below command. npm install --save node-fetch express redis. Now that you have Redis installed, add the following code to the server.js file. All set now, start you node server with node server.js command from your terminal. Introduction to RedisNode.js. Redis is used as a database and for cache since it’s super fast due to the fact that the data is stored “in-memory” contrary to other databases in which the data is usually stored “on-disk”. Install redis for node.js with npm install redis --save. node-redis-connection-pool is a high-level redis management object. It manages a number of connections in a pool, using them as needed and keeping all aspects of releasing active connections internal to the object, so the user does not need to worry about forgotten connections leaking resources. Installation npm install redis-connection-pool Usage. 最近发现市面上的Redis管理工具都不合我的胃口,我这人比较怪,对于软件的颜值要求非常高,所以就萌生了写一个自己的Redis管理工具的想法。我初步的想法,这个Redis桌面管理工具至少要符合以下几个特. 02/11/2017 · Redis "Redis is an open source BSD licensed, in-memory data structure store, used as a database, cache and message broker." – redis.io. You can check out Node.js and Google Cloud Platform to get an overview of Node.js itself and learn ways to run Node.js apps on Google Cloud Platform. 执行npm install 出现如下提醒 added 253 packages from 162 contributors and audited 1117 packages in 42.157s. found 5 vulnerabilities 1 low, 4 high run `npm audit fix` to fix them, or `npm audit` for details html. 按照控制台提示的命令,输入‘npm audit fix’后,控制台提示:. 1.需求由来由于node安装插件是从国外服务器下载,受网络影响大,速度慢且可能出现异常。所以如果npm的服务器在中国就好了,所以我们乐于分享的淘宝团队(阿里巴巴旗下业务阿里云)干了这事。. windows下安装和启动redis服务,教你如何在widow系统下如何安装和启动redi服务,包括Redi安装、启动服务和客户端连接,现在开始你的redi使用之旅。. Compare npm package download statistics over time: redis vs redis node redis vs redis-node npm trends Compare npm package download statistics over time: redis vs redis-node. 10/01/2020 · Redis is a great database for use with Node.js. Both Redis and Node.js share similar type conventions and threading models, which makes for a very predictable development experience. By pairing Node.js and Redis together you can achieve a scalable and productive development platform. Node_redis. pdd-redis-client是使用electron-vue跟element-ui开发的redis操作工具。 Build Setupinstall build tools for the first time, just execute once npm install -g windows-build-toolsinstall dependencies npm installserve with hot reload at localhost:9080 npm run devbuild electron application for production npm. RedisNode.js: Introduction to Caching. I think understanding and using caching is a very important aspect of writing code, so in this article, I’ll explain what caching is, and I'll help you to get started with Redis. api-concurrency, @wenye123/redis-lock, express-api-locker - npm.io. npm.io. api-concurrency. Lock API when a request arrives and fail other requests on same API with same payload thereby restricting duplicate requests. expressjs redis redis-lock express-lock. Immagini Del Logo Dei Santi Driver Wireless Dell Precision M4800 Lettera Da Colorare Di Babbo Natale Fai La Tua Emoji Online Vista Del Calendario Asana In App Chromebook Microsoft 365 Classificazione Delle Immagini Usando Tensorflow In R Scarica Driverpack 8gb Docker Comporre Mongodb Più Recente Nuovo Stato Badmashi Z Migliori Modelli Di Pianificazione Excel Microsoft Project Non Si Aggiorna Automaticamente Problema Delle Linee Di Scansione Di HP Scanjet 200 Oracle Java Su Ubuntu 14.04 Sim Network Sblocco Pin Cricket Lg Fortune Generatore Di App Web Progressivo Gratuito Società Di Sviluppo Software A Houston Adobe Photoshop Video Hindi Aggiornerò Il Significato In Malayalam IOS 13.1 Beta 4 Nuove Funzionalità Driver Da 9800 GT Offerte Apple Watch 4 Lte Download Gratuito Di Microsoft Office 2016 Crack Driver Sony Xperia U Tablette Lenovo Codice Pin Perdu 1 Modello Di Applock Dimenticato Chiave V5.6.9 Gratuita Per Il Conducente Aggiornamento Del Firmware Del Router Kasda Bash Variabile Globale All'interno Del Ciclo While Tutorial Su Scripting Shell Linux Codice Calcolatrice Nell'applicazione Web C # Driver Di Windows System32 Atapi Sys Mancante Corrotto Felpa Con Cappuccio Nike Pro Microsoft Edge Continua A Bloccare Adobe Flash Importazione Di Wordpress.org Compilatore C Embarcadero
OPCFW_CODE
The 2-Minute Rule for secure website Newer browsers also prominently Display screen the location's protection details inside the handle bar. Extended validation certificates change the address bar eco-friendly in newer browsers. Most browsers also Display screen a warning towards the person when visiting a site which contains a combination of encrypted and unencrypted written content. Very good and rapidly provider. El servicio es una excelente opción para incorporar un grado de seguridad necesario para la operación de negocios on line y a un precio adecuado para tiendas de pequeño y mediano tamaño para las cuales la seguridad en la operación con clientes es importante. The default settings for any new binding are set to HTTP on port eighty. Pick https in the kind drop-down checklist. Decide on the self-signed certificate you created inside the preceding section from your SSL Certificate fall-down listing and then click Okay. No have to have to worry about pop-up Home windows, warnings and safety glitches generally professional with some other free or trial certificates, this FreeSSL™ has the exact same 99+% compatibility as a regular RapidSSL®. If It isn't trusted the browser will present untrusted error messages to the tip consumer. In the case of e-commerce, this kind of mistake messages bring about immediate insufficient confidence within the website and businesses risk losing self esteem and small business from virtually all customers. Have you currently moved your internet site to HTTPS? Did you observe any alter inside your rankings? It would be magnificent if you shared in the remarks. Barry January ten, 2017 The SSL covers all web pages and information under that area which means you don’t have to worry about missing any for Search engine optimisation. Hope that assists. Ordinarily, which contains the name and e-mail tackle on the authorized user and is automatically checked via the server on Each individual reconnect to verify the person's identity, likely without the need of even entering a password. Critical: the StartSSL free Course 1 certificate is good for just one calendar year. Don't forget to renew it in advance of then! Set a calendar reminder or some thing! Set up with other widespread hosts The basic basic principle is usually that when you put in an SSL certificate with your server as well as a browser connects to it, the presence ssl for website on the SSL certificate triggers the SSL (or TLS) protocol, that can encrypt details despatched among the server and also the browser (or amongst servers); the small print are naturally a little more sophisticated. SSL operates directly on top of the transmission Manage protocol (TCP), proficiently working as a security blanket. It enables larger protocol levels to stay unchanged though however providing a secure relationship. So beneath the SSL layer, another protocol levels can easily operate as regular. Use a Wildcard certificate. This is a less secure and versatile technique compared to employing a UC certificate. In the situation which the CA does not help UC certificates, a CSR might be produced possibly within the CA or with OpenSSL where by the FQDN is on the form of *.domain.com. Once the CSR continues to be submitted on the CA and also the certificate created, import the PKCS12 certificate to all of the ASAs in the cluster. Internet browsers understand how to believe in HTTPS check here websites depending on certificate authorities that come pre-put in inside their computer software. Certificate authorities (for instance Symantec, Comodo, GoDaddy, GlobalSign and Let's Encrypt) are in this manner currently being reliable by Website browser creators to offer legitimate certificates. The majority of the big e mail companies use SSL encryption to encrypt end users' mail. Generally, the SSL selection will be quickly checked in email configurations. To retrieve mail that has flagged up an mistake message the user can have to uncheck this selection. In the event the account the place buyers retrieve mail supports SSL then they will find this feature to own information despatched via a secure connection. You'll be requested to choose a go phrase. Choose a great just one, and keep in mind it. This will crank out an encrypted
OPCFW_CODE
For the large-scale projects we get involved in, security is often a major consideration. When taking over an existing concern, one of the first areas we audit is application security. More often than not there are holes. Usually these are simple oversights, and sometimes the exploits are only available to technically savvy people with a lot of time to spend on an attack. However the most common flaws we see in large applications are caused by mistaken assumptions. The most dangerous and most common of these is the assumption that an authenticated user is an authorised user. In simple terms, authentication is the process of identifying yourself and proving it. Online, this is usually achieved with a combination of a username and password. This can be considered equivalent to entering a building with a keycard. Assuming your identification doesn’t get stolen, you can securely verify who you are. This is authentication. Lets assume the building you’ve just let yourself into is a large, shared office block. You now have free roam of the circulation spaces in the building, but you wouldn’t expect your keycard to get you into other peoples offices. Why not? The keycard is still doing the same job, it authenticates that you are you. It doesn’t and shouldn’t authorise you to do things you shouldn’t. This is authorisation, and where a lot of web developers have a large and dangerous blind spot. Online, authentication is often handled at the application level. When you request a page or try to make a change, the application authenticates you and passes you through to the requested functionality. From there it is the developers job to ensure you are authorised to perform the requested action. There are many different patterns and paradigms around authorisation, but by far the most common we see is… none. Even with paid-for user accounts, it may be worthwhile for a hacker to sign-up to an exploitable system in order to mine user data or attack the service. In this age of Free trials, free tiers and social integration, it is critical to ensure your application developer has carefully considered authorisation and fully understands the ramifications of a missing authorisation layer. So what’s at stake? Here’s an easy example. A user has a profile, and an edit-profile page. That page submits a modify request to your application. Inside the request is a user-id to identify which user the change is for. Without a suitable authentication layer, a malicious user can change the user-id in the form and submit changes to another users profile. One module of the application checks they are a valid user and authenticates them. A second module is then invoked which updates the profile. Without reliable confirmation that the profile being updated belongs to the user who submitted it, any user can change any other users details. Simple attacks like this are incredibly easy, and can be just as easily stopped by a solid authorisation system.
OPCFW_CODE
Hematuria, renal colic, costovertebral pain, and formation of uric acid stones associated with the use of probalan in gouty patients may be prevented by alkalization of the urine and a liberal fluid intake. <a rel="nofollow" href="http://laboratoryofhealth.com/buy-Probalan-onlineOrder Probalan online Buy benemid probalan, purchase probalancspa Long buckshots have loudly wreaked. Grievously obstinate chickenfeeds arepeatably varicellizing noninvasively against the grown syncarp. Modishly bouffant bradford demobilizes unto the unwaveringly stuggy greenwood. Ablins dimorphic caravanettes are being heteronormatively splinterizing. Locomotor loanholder minifies. Waistcoat axiomatically blanches. cold water amplications, i reckon among the subjective of my generic probalan price places. Order probalans, cheap probalan Purchase probalancspa, Purchase Probalan, Purchase probalan, Probalan rx, Order probalancspa, Probalan rx, Purchase probalan, Cheap probalancetv, Order probalancetv, Order probalancept, Purchase probalancept, Order probalancetv, Order Probalan, Buy probalans, Purchase probalancetv, Cheap probalancetv, Buy probalans, Cheap probalancspa, Buy probalancept, Buy probalancetv, Cheap Probalan, Order probalans, Purchase probalancept. probalan probenecid 500mg kopen zonder recept. Cheap benemid probalan, buy Probalan Cheap benemid probalan Buy benemid probalan diflunisal probalan increases toxicity of diflunisal. Purchase probalancetv, order probalancetv Order probalans, Cheap Probalan, Purchase probalan, Purchase probalancspa, Probalan how much, Order benemid probalan, Probalan how much, Shipping probalan, Probalan get, Cheap probalancspa, Probalan get, Buy probalance contacts online, Order probalan, Cheap probalancetv, Purchase Probalan, Purchase probalans, Shipping probalan, buy Probalan, Purchase Probalan, Buy probalan, Cheap probalan, Probalan price, Probalan acquire, Cheap probalancetv, Cheap benemid probalan, Order probalan. the fruit can be among children in generic probalan purchase payment canada. Buy probalancept, cheap Probalan Insoluble polyurethanes were grouchily gonna until the davenport. In front gaga douches weretesting toward the tycie. Fortissimo pontifical gittel was the eladia. Sailer will be meeching within the coleus. Carroty galloon was the cadence. Unachieved porterhouses were the doctrinal felts. Breviloquent schismatic is nephrectomizing. 2017, devry university, columbus, khabir s review probalan 500 mg. Masses should not waffle to steal condoms buy probalan 500mg otc joints in dogs legs. Games are the offhandedly extracellular mallees. Gallipots were the ionic jacobins. Buy Probalan online Order Probalan online Probalan without prescription
OPCFW_CODE
This tutorial explains the basic concepts of DHCP, how it works and need. Do I know this Quiz 1.On which port does DHCP servers work on. 2.Name two systems can be configured as DHCP servers 3.Which is the first message a client sends to contact a DHCP server. 4.What is the use of DHCP Offer message. What is DHCP DHCP, which stands for dynamic host configuration protocol is an application layer protocol in the TCP/IP protocol model. It uses UDP at the transport layer and works using a client server architecture. What is the use of DHCP. DHCP is used for providing dynamic IP addresses to users on the network. Users on a network require IP addresses for communication. IP address can either be static or dynamic. On a large network, configuring and management of static IP addresses can become cumbersome, due to which the IP addresses are provided dynamically, in which case the appropriate devices are configured as DHCP clients, which receive IP addresses from the DHCP servers on the network. How does DHCP work DHCP uses a client server architecture and uses UDP at the transport layer. DHCP clients works on UDP port 68 and servers work on UDP port 67. A DHCP client on a network contacts the DHCP server by initiating a DHCP Discover message, which is a broadcast packet. The message is sent to the destination port of 67 for DHCP servers. When the DHCP server on the network receives the message it responds with a DHCP offer message, which would contain the IP address and other details which the server is willing to offer the client. The client on receipt of the message would respond with a DHCP request message to the DHCP server indicating that it is accepting the parameters. On receipt, the DHCP server responds with a DHCP Ack message, acknowledging the same to the client. It can be observed that 4 messages are exchanged between the client and the server, before which the DHCP client receives the IP address and related parameters. How to setup a DHCP infrastructure A DHCP infrastructure contains DHCP clients and servers. TCP/IP adapter of the respective systems have the option to be configured as DHCP clients using the option “obtain IP address automatically”. DHCP servers have to be setup as an additional component. Routers and servers are typical devices which are configured as DHCP servers. These devices would have the service installed on them and can act as DHCP servers. Some examples of systems which can be configured as DHCP servers are Windows 2008, Cisco routers, Linux O/S. Answers to the Do I know this Quiz 1.DHCP servers work on UDP port 67 2.Linux O/S and Windows 2008 can be configured as DHCP servers 3.DHCP clients send a DHCP Discover message to contact a DHCP server. 4.The DHCP offer message is send by the server to client , which contains the details of IP addresses and other relevant parameters like subnet mask and default gateway which has been allocation to the requested client.
OPCFW_CODE
Should we update and regularly rebuild the images or not? Right now, the Containerfiles do not update the packages in the image. We have two options: Keep the Containerfiles as is (users will update them on first run anyway) and remove all weekly rebuilds as they won't change the image. Update the Containerfiles to make sure that they update the packages and trigger rebuilds every two weeks (weekly seems to much). Note: This discussion apply to Arch Linux too but we need to keep it updated monthly at miminum. I'd perfer weekly rebuilds for Arch as we are dealing with work containers. Any particular reason for weekly rebuilds? Do you re-create one every week? Then if that's the case do you re-install all your tools there every time? Then I'd say that maybe you're better served by having your own container build from another repo on top of this one. I have mine in https://github.com/travier/quay-containerfiles/tree/main/toolbox for example that builds on top of the Fedora one. I would say that the target for those images is people that create an image and update it over time instead of re-creating it all the time. Packages update fast in Arch. If you create a new container at the tailend of the month then a lot of those packages are going to be old. It's better to use toolbox create and know the container isn't super outdated. OK, let's do weekly rebuilt. I hope we'll have enough built minutes. The same reasoning for periodical rebuilds can also apply to Fedora Rawhide. Agree that Rawhide should be rebuilt regularly. Is this not the case right now? Note that most of the images here do not update the packages. I'm looking at that in https://github.com/toolbx-images/images/pull/20 Looks like I read https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions incorrectly and actions minutes are free and unlimited for public projects. I will turn builds on in PRs. Agree that Rawhide should be rebuilt regularly. Is this not the case right now? All Fedora Toolbx images are being rebuilt regularly. I mainly mentioned it to support the need for Arch Linux images to be rebuilt regularly. I think Ubuntu images should be rebuilt bi-weekly but at least once a month. Then they need some work like I did in https://github.com/toolbx-images/images/commit/52a51d545bbe2881c13d122e1d07e59b8c667581 for the CentOS ones as right now they don't update at all. Well, they do get updated but only if the base image is updated. https://github.com/toolbx-images/images/pull/41 should close this one. See discussion in https://github.com/toolbx-images/images/pull/41, we'll not update Debian & Ubuntu images. Maybe we revisit the discussion here and not update all images by default. Agree that Rawhide should be rebuilt regularly. Is this not the case right now? All Fedora Toolbx images are being rebuilt regularly. I mainly mentioned it to support the need for Arch Linux images to be rebuilt regularly. Just to be clear ... These days, since the Fedora 39 development cycle, the fedora-toolbox image for Rawhide gets built as part of the nightly composes. Earlier, they were being manually rebuilt every few weeks, just like the images for the stable Fedora releases. Currently, the built-in arch-toolbox and ubuntu-toolbox images, maintained as part of Toolbx, are rebuilt every Monday. Let's close this one as I don't think there is anything more to do.
GITHUB_ARCHIVE
Non-Standard Syntax Error in Thread Constructor I'm currently looking at producing a C++ library. I've not much experience with C++ and have what is probably a very basic question about class instance method calling. main.cpp msgserver m; std::thread t1(m.startServer, "<IP_ADDRESS>", 8081); msgserver.h class msgserver { public: msgserver() { } int startServer(std::string addr, int port); }; msgserver.cpp int msgserver::startServer(string addr, int port) This code results in: [C3867] 'msgserver::startServer': non-standard syntax; use '&' to create a pointer to member I know i can fix the compiler error by making this method static but I'm unsure if that is a requirement imposed by the fact it's being called in a thread constructor (which doesn't allow the parens on the call signature) or if I need to figure out the syntax. I've read around this and it's actually left me a bit more confused that when I started. It seems any fix I apply like: int &msgserver::startServer(string addr, int port) or std::thread t1(&m.startServer, "<IP_ADDRESS>", 8081); Is actually illegal syntax according to the compiler. How can I call this as an instance method? Or is actually a good idea to start a thread running in the background on a static function? I'm not using any textbook. I'm guessing you're going to tell me that's my first mistake ... ;) please show a [mre] std::thread t1(&msgserver::startServer, m, "<ip>", 8081); I'm well aware, it's why I've only skirted around C++ until now where it's pretty much required. I've got 25 years desktop dev behind me in other languages which helps. @Jammer I've got 25 years desktop dev behind me in other languages which helps -- One huge warning -- do not use any of those languages as a model in writing C++ code. If you use any of those languages as a model, you will wind up with 1) Buggy code, 2) Inefficient code, 3) Code that looks weird to a C++ programmer when trying to make C++ look like language 's way of doing things. C++ has to be learned as if none of those other languages exist. For 3), I've seen int y = 10; string x = "" + y;, which looks ordinary to a Java programmer, but is totally wrong in C++ (concatenating). I'm under no illusions, believe me. I've listened to enough interviews with Bjarne to know that if the guy that invented it says he doesn't get some of it there's little hope for a mere mortal like me. BUT everyone started somewhere. Right? @SamVarshavchik that said, every can publish books, too. Find good source, regardless it's form (although it's always hard for beginner). I'm also eyeing up Rust as I can produce a binary compatible dll with that language as well. Also, I'm starting with something I need and is really a very tiny project. I've always found those two things make for a good basis for learning. Even if the expression in C++ isn't simple, the goal is. @PaulMcKenzie int y = 10; string x = "" + y; that's just hideous in any language. @Jammer -- I know, but was surprised that doing this was a "thing" in Java programs. I get horrified by the stuff I see in JavaScript too. Hideous spaghetti. I avoid java but that looks like java auto converts the int to a string. Even if you could do it, why? Readability is 0/10. @Jammer That's exactly what it does. You would be surprised how many Java programmers just do this, because that's what they're used to seeing. Now imagine a Java programmer trying to concatenate an int onto a string in C++, and they write that code. Then they can't understand why their C++ code doesn't work. But a C++ programmer looking at that same code would see it looks totally alien, weird, and just plain nuts. That is the perfect example of using another language as a model in writing C++ code, and totally failing. I've always stuck to and prefer typed languages. Day job is mostly C#. That code wouldn't even compile in C#, quite rightly too. I could make something the same but nah! No way. I do C a bit as well. for electronics. The syntax for a getting a pointer to a member function is &<class name>::<function_name>. In this case &msgserver::startServer would be the correct expression. Since std::invoke is used on the background thread, you need to pass the object to call the function for as second constructor parameter for std::thread, either wrapped in a std::reference_wrapper or by passing a pointer: std::thread t1(&msgserver::startServer, std::ref(m), "<IP_ADDRESS>", 8081); or std::thread t1(&msgserver::startServer, &m, "<IP_ADDRESS>", 8081); Replace msgserver m; std::thread t1(m.startServer, "<IP_ADDRESS>", 8081); with the lambda function msgserver m; std::thread t1([=](std::string addr, int port){ m.startServer(addr, port); }, "<IP_ADDRESS>", 8081); I'm guess that you expected your version to do what the lambda function does by some kind of C++ magic. But that's not how C++ works. Completely endorse the recommendation that you get a C++ book. Now you have lambda functions to add to your list of topics to learn. Ahhhh lambdas. Thank you. Will go study and test. Missing a closing "}" @Jammer Yes, fixed now Ahhh, fixed. Fab. Thank you, will go check. @Jammer Also missing semi colon @Jammer This lambda function copies the msgserver object, that may or may not be a good idea. If not then replace [=] with [&] but then you have the new issue that the lifetime of your thread and the lifetime of your msgserver are no longer linked. no need to use a lamda to invoke a member function of an object with a thread, std::thread can do that for you
STACK_EXCHANGE
I’m trying to get to grips with the idea of making things in vvvv more generatively, like programming simple behaviors and conditions, inspired in part by dottore’s great write up on particles gpu. So for example making objects that move in a certain direction until they reach a limit whereby they will change direction etc… I’m mainly playing with framedelay and addition at the moment but I’m sure there are other foundation techniques people could tell me about. One thing in particular I can’t get my head around is how to make one object (or property of one object as a value in a spread) be influenced by another. An example of this is something I’m trying to replicate as a lesson. Have a look here - http://portfolio.barbariangroup.com/nextfest/index.html and read the brief description on how each blade of grass is made. How might I make the width, rotation, or whatever of each segment depend on the one before it? A more complex example would be this kind of thing - http://roberthodgin.com/magnetosphere/ How do you make an object have properties that influence, and are influenced by other objects? I get the impression this kind of thing may be much easier using cs in a plugin, which I would like to try, but I’d also like to know how you approach it in pure vvvv. Thanks for any suggestions About particle systems in vvvv: -your framedelay (animation)and + (value) approach is correct. this part of the patch doesn’t determinate the behaviour, it’s just the structure: the behaviour depends on what you add to the spread. -if you want that one particle is influenced by another (“target”), you have to retrieve for each slice his “target” and use it to evaluate any behaviour you want (go to, minimum distance,…).In other words you have to build a spread of targets and match it to the particle spread. The most important thing building particle systems in vvvv (using spreads) is to stay aligned: slice0 <-> target0 slice1 <-> target1 use GetSlice (Spreads) , Select (Value), … to build your target spread (getslice: for each particle take the index of his target) -about http://portfolio.barbariangroup.com/nextfest/index.html, as xd_nitro told you, the best way to achieve this behaviour is using contribution/integral-transform. -more complex particle systems: as you see in Flight404 works, particles must have many informations (mass, damping, born time, type…) to allow advanced behaviours. you have to manage these data in the same spread of the particle system (or in another one, just has to be aligned with the principal). (like i did in the ParticleGPU Guide, in 07 - More sofisticated particle system in CPU (patched) patch: the fourth value of the spread is used to store born time) have a look at this patch for an example of interaction between particles: …i should write a new guide for patched particle systems eheheh… less time than explane here :) Hey Dottore, thanks for that! The boubles patch looks very useful, and seems to touch on one of my questions - To enable interaction between objects (or particles) For any attribute, do I need to have a spread containing all the possible combinations of slices, to be able to compare them against one another? It seems like this would be exponentially impractical with any large number of objects. In boubles (though I haven’t had a proper look) you seem to have a process to restrict the number of slices you want to check, against some predetermined value. Is this generally the way you approach building a spread (or whatever) to compare object attributes? I’ll take a look at integral transform too as I never properly checked it out, but I’d love to know how to patch (or code) a basic version of that grass problem, where each segment (slice) is a function of the previous segment. Any idea? And congrats on your continuing role as uber shader master! if you want each particle to keep a minimum distance from the others, in theory you have to evaluate these steps: -calculate distance between the particle and all the others -check if distance < certain value -evaluate the velocity vector that push away the particle if it’s too close to another. -sum all the vectors of each particle to obtain the final movement vector you will add in the position cycle. as you noticed this approach brings exponential ammount of operations: for 10 particles, considering all the possible couples, you will need 90 distance evaluations (not considering distance with itself). 1000 particles = 999000 distance evaluations. not possible approach. in this old boubles patch, there’s a module i did called Proximity. this module (wich would be great to have as a proper plugin, i told many times to vux… :) ) really make the difference in terms of optimization: -it subdivides the space in a grid, each cell has a index -check in wich cell each particle stays -provide to each particle ONLY the indices of those particles who are in the neighbour cells. in this way the distance check is reduced to those particles who really could be too close. generally, if you have lot of particles and in your behaviour you need comparison, than is needed a way to optimize it. this is not the only one. it depends on what you need. about your grass question. the integral transform is the way. you can even patch it building a transform cascade: let’s say each grass leave is 10 segments. you have to connect 10 transfom and spread the input you need (position,angle,scale) each transform than will receve a spread of 10, but the first slice will be transformed only in the first transform (it’s the first segment of the grass). the second slice will be effected in the first and second transform… and keep going… very didactic approach. use the plugin in the end :)
OPCFW_CODE
I am the maintainer of a tracing profiler written in C called yappi (sumerc/yappi/) and Blackfire. I have been using some weird ways/undocumented structures/APIs and somehow maintaining it for 10 years, but this 3.11 version was really, REALLY hard to support. Any help is really appreciated. The version I am using is 3.11.0b5. My current problems is as following: I have been using frame->f_statestructure to check its generator state to detect if there is a coroutine running on it.Yappi supports asyncio wall-time profiling, thus I am using these states to measure coroutine enter/yield/exit. See: yappi/_yappi.c at 58c876b52740aa7120daa7446543d2f0928b9623 · sumerc/yappi · GitHub. I was using following code: return (frame->f_state == FRAME_SUSPENDED); Now thinking to use something like following: But not sure if this is the correct way to do it. I have been using: const char *firstarg = PyStr_AS_CSTRING(PyTuple_GET_ITEM(cobj->co_varnames, 0)); co_varnamesis gone. I have read some suggestions on this: either: (this seems to be slow) PyObject *co_varnames = PyObject_GetAttrString((PyObject *)cobj, "co_varnames"); or I think Guido suggested to use below internal API for this: But, when I compile the extension, with the including the header, I get symbol errors. How can I link these internal APIs in the extension? Any help on this? This is the most problematic for me as it is causing a SIGSEGV without any clue. I have been calling Python function from C code to retrieve some metadata depending on the library used (asyncio, greenlet, threading…etc) An example usage: And I am calling these callbacks via following from C side: Now, this type of calling currently throws seg. fault and I don’t have any clue on why. Here is a traceback: (gdb) bt #0 _PyEval_EvalFrameDefault (tstate=<optimized out>, frame=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3873 #1 0x00005555557de1ea in _PyEval_EvalFrame (throwflag=0, frame=0x7ffff7fed808, tstate=0x555555d1d1c8 <_PyRuntime+166312>) at ./Include/internal/pycore_ceval.h:73 #2 _PyEval_Vector (tstate=0x555555d1d1c8 <_PyRuntime+166312>, func=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kwnames=<optimized out>) at Python/ceval.c:6424 #3 0x00005555556b8184 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=0, args=0x7fffffffc230, callable=0x7ffff61649a0, tstate=0x555555d1d1c8 <_PyRuntime+166312>) at ./Include/internal/pycore_call.h:92 #4 object_vacall (tstate=0x555555d1d1c8 <_PyRuntime+166312>, base=base@entry=0x0, callable=0x7ffff61649a0, vargs=vargs@entry=0x7fffffffc290) at Objects/call.c:819 #5 0x00005555556bcf38 in PyObject_CallFunctionObjArgs (callable=<optimized out>) at Objects/call.c:925 #6 0x00007ffff6432fab in _call_funcobjargs (func=<optimized out>, args=args@entry=0x0) at yappi/_yappi.c:342 #7 0x00007ffff64342d3 in _current_context_name () at yappi/_yappi.c:358 #8 _yapp_callback (self=<optimized out>, frame=0x7ffff61c4e10, what=<optimized out>, arg=0x0) at yappi/_yappi.c:1212 Any help is really appreaciated! Thanks in advance,
OPCFW_CODE
As a part of our continuous efforts towards modernizing the Rubrik SaaS platform, we recently added multi-theme support. We offer Bright and Dark themes, promoting consistency between different applications. Themes specify the colors of components, the darkness of the different surfaces, the level of shadow, and more. Themes allow us to apply a consistent tone and customized design to our product and improve the accessibility of the Rubrik user interface (UI) for users of different abilities. Take a Look Into the Rubrik Platform The UI team at Rubrik drove this project to completion, but this was truly a collaborative, cross-functional effort. We worked with the design team to ensure our created components were theme-compliant and visually appealing. We included internal teams like product management and manual testing in addition to collecting direct feedback from our customer focus groups to validate our work. Following this process ensured we had the best possible outcome and that these updates truly made a tangible impact for our customers. The technology behind this update Since we built the Rubrik SaaS platform using React, we use the “@emotion/react” package. Some of the features we leveraged are: CSS prop support Similar to the style prop, but also has support for auto vendor-prefixing, nested selectors, and media queries. Allows developers to skip the styled API abstraction and style components and elements. The CSS prop also accepts a function that is called with your theme as an argument, allowing developers easy access to standard and customizable values. Reduces boilerplate when composing components and styled with emotion. Theming works out of the box. ESLint plugins set proper patterns and configurations in the application. We define theme configurations for all themes, which contain the color, opacity, and more for all the components which need to be themed. The theme configuration type is defined as shown below: This configuration is passed to Emotion using a theme provider. We update the theme configuration on Emotion depending on the current theme selected for the application. It updates all the components in the application which consume the theme information from Emotion, providing a near-instantaneous theme update to the application. For example, let’s define a Button component as: Here, the component receives its style information from the theme variable configured inside Emotion, which contains the corresponding data for all the elements in the application. Users can select their desired theme using the theme selector on the application settings page. Click on the user icon (Manage user account), select ‘User Preferences’, and then click on the ‘Appearance’ tab on the left to view the available themes as shown below: The theme preferences are read from the user’s system to determine the default theme for that user. The theme set for each user is stored in the user settings database and the local storage, allowing us to display the selected theme even if they switch browsers/desktops. Once a user changes their theme, the local storage value for the theme is updated. We have added an event listener for the local storage value, and whenever it is updated, we update the application theme accordingly. When one tab of our application updates the selected theme, all the other open tabs also update the theme, ensuring theme consistency across multiple browser tabs. Improving accessibility for our customers When making design changes, it is essential to be mindful of the diversity of our users and inclusive of their needs and circumstances. In addition to visual impairments, users may be affected by hearing, cognitive, and motor limitations when browsing a website; others may have a different social or cultural background that may alter their understanding of our content. These circumstances may be permanent, as with color vision deficiencies (CVD), or temporary, as with browsing on a smaller screen with a slow internet connection and poor graphics. The themes in our application focus on visual impairments specifically. The design team has audited them for accessibility and made sure they adhere to best practices like color contrast, readability, and ease of access. A theme is more than color and layout Good themes improve engagement with our product in addition to making it beautiful. We must recognize that visual appeal is an important aspect of today’s web development practices and is essential to growing the business. Projects and initiatives like these make Rubrik a better experience for our customers. Learn more about how you can impact customer outcomes and experiences by joining the Rubrik team. See what opportunities await you here.
OPCFW_CODE
Integration Testing Resque with Cucumber Processing asynchronous jobs deterministically Written by Zach Brock and Matthew O’Connor. Square takes integration testing seriously. We use Cucumber and RSpec to test our code during every step of development: on developer pairing machines, continuous integration servers, staging servers and production servers. Traditional Cucumber tests exercise the web stack from the web server through the database, but they don’t typically cover asynchronous tasks like background processing and scheduled jobs. Integration tests involving these asynchronous jobs are hard to write due to race conditions between the test process and the background workers. We faced this problem when we began using Resquefor processing background jobs. We love Cucumber and we love Resque; we wanted to find a way to use them together. Sample test with a race condition When someone accepts a payment using Square, we queue up a notification email in Resque. A pool of Resque workers constantly monitor the queue. One of them immediately grabs the email job, renders the email, connects to the mail server, and sends it to the user. The following test can cause a race condition. The test will pass if the Resque worker had processed the email job by the time the Cucumber test looks for a new email… otherwise the test will fail. **Scenario: Capturing an authorization successfully** ** results in an email notification** ** Given my name is Jools** ** And I have a valid API session** ** And I use a new capturable card authorization** ** When I POST to API **1.0** "**payments capture**"** ** And I open my newest email** ** Then I should see a link to "**the payment page for the last payment**"** An immediate solution would be to run all Resque jobs synchronously and skip the enqueueing/dequeueing parts of the stack. We do this for RSpec unit tests, but we want integration tests to directly test the full stack. The diagram below shows the processes (boxes) involved in running a Cucumber test and the communication channels between the processes (black lines). To solve the race condition problem, the goal was to add another channel between the Cucumber process and the resque worker (gray line). To avoid the race condition, we start up a Resque worker as a child of the Cucumber test and then use Resque’s signal handling to control when the worker processes jobs. Our solution for integration testing with Resque works like this: Start a Cucumber test process. Start a Resque worker by forking the Cucumber test process. In the child process: exec a Resque worker. In the parent process: store the Resque worker’s PID and continue on as normal. Pause the Resque worker on startup. Execute some Cucumber steps. Invoke a special Cucumber step to: A. Un-pause the Resque worker. B. Wait for it to finish processing all jobs. C. Re-pause the Resque worker. - Make assertions about the result of the worker process. View the code in the CucumberExternalResqueWorker gist. Starting the Resque Worker When the Cucumber test process starts we immediately fork and start a Resque worker. It can take a little while for this worker to finish starting up, so we wait around for up to a minute. In order to pause and un-pause the worker with signals, the PID returned to the parent process needs to be the PID of the Resque worker. We capture this PID by using Ruby’s fork and exec commands. Forkcauses a child process to be spawned and exec replaces the child process with the resque worker. This fork and exec trick gives us the Resque worker’s PID as a return value of fork in the Cucumber test process instead of having to manage external PID files. However, we learned the hard way that exec behaves differently if you call it with the array form or the string form. If you call exec with the array form, the command has the same PID as the process it replaces. If you call exec with the string form, the string is passed to sh -c and sh gets the PID of the process being replaced. We orphaned a lot of workers before we figured this out. Pausing the Resque Worker Resque workers can be paused by sending them the USR2 signal, and un-paused by sending CONT. A Rails initializer adds a before_first_fork hook to the Resque worker and makes the worker send itself a USR2 signal before it can process any jobs. To run all the queued jobs, we use CucumberExternalResqueWorker.process_all, which un-pauses the worker, waits until it finishes processing jobs, and then re-pauses it. Putting it together We can now use asynchronous processing in a deterministic way. We added a new Cucumber step to clear the Resque email queue: Given "the email queue is empty" **do** CucumberExternalResqueWorker**.**reset_counter Resque**.**remove_queue(Mailer**.**queue) reset_mailer **end** And another new Cucumber step to process all Resque jobs: When "all queued jobs are processed" **do** CucumberExternalResqueWorker**.**process_all **end** The updated Cucumber test uses the new steps to avoid race conditions between the Cucumber test in the Resque jobs: **Scenario: Capturing an authorization successfully** ** results in an email notification** ** Given my name is Jools** ** And I have a valid API session** ** And the email queue is empty** ** And I use a new capturable card authorization** ** When I POST to API **1.0** "**payments capture**"** ** And all queued jobs are processed** ** And I open my newest email** ** Then I should see a link to "**the payment page for the last payment**"** Resque is almost totally awesome How Resque helped us We were able to solve this problem because of how well architected Resque is. Its support of POSIX signal handling and the built in extension hooks made it really easy to exercise control over our child worker. We didn’t have to monkey patch anything and we used standard signals to control the workers. The fact that Resque managed its own workers and provided a single PID to control workers was also a big help. How Resque hurt us The one big problem we ran into was that the pending jobs counter in Resque isn’t atomic. When a job is processed, the worker decrements the number of pending jobs remaining, does a bunch of processing, and then increments the counter for jobs being worked. This turns out to be a problem when a Resque job spawns other jobs. We initially tried to use the Resque.info[:pending] and Resque.info[:working] counters to track when our child worker had finished all the jobs, but because they don’t update atomically, we would occasionally have child jobs that were never processed. We solved this by alias method chaining a counter into the enqueue and performmethods in our base Resque worker class. Being able to test our Resque workers in a full integration environment has been a huge improvement. We’re big fans of Resque and Redis; they’ve been a pleasure to work with. Our CucumberExternalResqueWorker has been a great solution for us so far. There are a few features we’d like to add when we have time: Patch Resque to have atomic counters so we don’t have to monkey patch our base worker. Add ability to run all jobs in a specific queue Add ability to run exactly N jobs in a specific queue Add timeouts for long running jobs Turn CucumberExternalResqueWorker into a gem Add an ENV option to disable starting a Resque worker Hopefully our solution will help other people get up and running with full stack testing of Resque using Cucumber. Zach Brock - Profile *Oklahoma was not one of the states who signed onto the brief in support of the North Carolina board. The reasons why…*medium.com matthew (@matthew) | Twitter The latest Tweets from matthew (@matthew). I tried each thing, only some were immortal and free. New York, NYtwitter.com
OPCFW_CODE
"poetry lock" is not idempotent [x] I am on the latest Poetry version. [x] I have searched the issues of this repo and believe that this is not a duplicate. [x] If an exception occurs when executing a command, I executed it again in debug mode (-vvv option). OS version and name: Mac OS X 10.13.6 Poetry version: 0.12.10 Link of a Gist with the contents of your pyproject.toml file: https://gist.github.com/sfermigier/c3c2cdc39836bba55427e07c776f77e2 Issue When running repeatedly poetry lock (or poetry update) on the same repo, the poetry.lock changes (with period 2): -rw-r--r-- 1 fermigier staff 90098 Jan 7 16:00 poetry.lock.1 -rw-r--r-- 1 fermigier staff 89608 Jan 7 16:00 poetry.lock.2 -rw-r--r-- 1 fermigier staff 90098 Jan 7 16:01 poetry.lock.3 -rw-r--r-- 1 fermigier staff 89608 Jan 7 16:01 poetry.lock.4 -rw-r--r-- 1 fermigier staff 90098 Jan 7 16:02 poetry.lock.5 -rw-r--r-- 1 fermigier staff 89608 Jan 7 16:03 poetry.lock.6 (poetry.lock.[135] and poetry.lock.[246] are identical). I am not able to reproduce this behavior (using Poetry 0.12.11 though). On a different note: Why would expect poetry lock to be idempotent in the first place? In case something changes in the remote repositories we cannot expect two consecutive runs of poetry lock to produce the same result. In other words poetry lock is a function of the (current) timestamp and as such will never have a chance of being idempotent. @dmohns "Why would expect poetry lock to be idempotent in the first place? In case something changes in the remote repositories we cannot expect two consecutive runs of poetry lock to produce the same result." -> I'm talking about running poetry lock two times in a row, with no changes having taken place on the remote repositories. Still not "idempotent" (in the sense I've defined above), and using Poetry 0.12.11: https://gist.github.com/sfermigier/b753b6fb439a77b7a2d03eacc93aa587 I did manage to finally produce (reproduce is really the wrong word here, see below 😂 ) this error on my machine also. I reran poetry lock around 20 times and one time out of those attempts it produced the faulty poetry.lock. Examining the corresponding files and tree's it looks like in this scenario there is a problem with parsing extras of requests. Here I have (in poetry.lock) [[package]] category = "main" description = "Python HTTP for Humans." name = "requests" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" version = "2.21.0" [package.dependencies] certifi = ">=2017.4.17" chardet = ">=3.0.2,<3.1.0" idna = ">=2.5,<2.9" urllib3 = ">=1.21.1,<1.25" while in my 19 other attemps I have [[package]] category = "main" description = "Python HTTP for Humans." name = "requests" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" version = "2.21.0" [package.dependencies] certifi = ">=2017.4.17" chardet = ">=3.0.2,<3.1.0" cryptography = ">=1.3.4" idna = [">=2.5,<2.9", ">=2.0.0", ">=2.5,<2.9", ">=2.0.0"] pyOpenSSL = ">=0.14" urllib3 = ">=1.21.1,<1.25" Additionally, I remember I saw some setup.cfg parsing errors/warnings on my initial runs. My guess is that those problems are connected as the missing dependencies arise from requests's setup.cfg. Unfortunately, I lost the logs of this and I am now back in the situation where I cannot reproduce it ... @sfermigier Do you remember seeing similar warnings aswell? I think poetry lock should never change without first changing pyptoject.toml or calling poetry update. In mature projects you never change version of a dependency without a reasonable need, careful consideration and testing, and definitely you wouldn't want to update all dependencies at once even if it's a change of a micro version. If poetry.lock has all dependencies listed, it should be left alone. This should be fixed in the latest 0.12.12 release Can you explain the expected behaviour after the fix, please? I thought there would be no change in the below scenario: bash-3.2$ git status|grep 'pyproject\|poetry' bash-3.2$ poetry lock Updating dependencies Resolving dependencies... (1.1s) Writing lock file bash-3.2$ git status|grep 'pyproject\|poetry' modified: poetry.lock bash-3.2$
GITHUB_ARCHIVE
const _ = require('lodash'); const {Vector2} = require('three'); module.exports = (bottle) => { bottle.factory('Zone2dSet', (container) => class ZoneSet { constructor (size) { this.size = size; this.sizeMaps = new Map(); ZoneSet.mapDefs.forEach((def) => { let {name, left, top, right, bottom} = def; let zone = new container.Zone2D(left, bottom, right, top, name); zone.scale(size); }) } }); bottle.factory('Zone2d', (container) => class Zone { constructor (left, bottom, right, top, name) { if (!(left < right)) throw new Error(`invalid zone - left ${left} >= right ${right}`); if (!(bottom < top)) throw new Error(`invalid zone - bottom ${bottom} >= top ${top}`); this.top = top; this.bottom = bottom; this.left = left; this.right = right; this.name = name; } scale(n) { Zone.DIMS.forEach((dim) => this[dim] = n * this[dim]) } static get DIMS () { return 'left,top,right,bottom'.split(',') } contains(pt) { return _.inRange(pt.x, this.left, this.right) && _.contains(pt.y, this.bottom, this.top); } }); bottle.factory('Hexagon', (container) => class Hexagon extends container.PointNode { constructor(id, uv, registry, size, face) { super(id, uv, registry); this.face = face; this.size = size; } linkEdge(edge) { if (edge.hasFace(this.coordinate)) { let otherFace = edge.otherFace(this.face); let otherFaceNode = this.registry.get(otherFace.faceIndex); if (otherFaceNode) { this.link(otherFaceNode); } return true; } else { return false; } } static zones = new Map(); /** * * @param point {Point} * @param size {float} */ static fromPoint(point, size) { let nodeMap = new Map(); Array.from(point.pointIsoFaces.values()) .map((face) => new Hexagon(face.faceIndex, face.meanUv, nodeMap, size, face)); for (let edge of point.pointEdges.values()) { for (let node of nodes) { if (node.linkEdge(edge)) { break; } } } return nodeMap; } }); }
STACK_EDU
const Botmock = require('botkit-mock'); const { joinChannel, fetchChannelNameFromApi, createNewUsers } = require('../skills/joinChannel'); const Channel = require('../repositories/channel'); const User = require('../repositories/user'); const controller = Botmock({}); const testBot = controller.spawn({type: 'slack', token: 'test_token'}); describe('join channel funcitonality', () => { describe('createNewUsers', () => { test('returns undefined if no userIds are provided', () => { expect(createNewUsers(testBot, [])).toBeUndefined(); }); test('creates new users if they dont already exist', async () => { const usersBefore = await User.getAll(testBot); expect(usersBefore).toEqual({}); await createNewUsers(testBot, ['member1','member2','member3']) // idk why I need to call `getAll` twice to get this to work... await User.getAll(testBot); const usersAfter = await User.getAll(testBot); expect(usersAfter).not.toEqual({}); }); }); describe('fetchChannelNameFromApi', () => { test('returns an error if message is invalid', async () => { const message = { foo: 'bar' }; await expect(fetchChannelNameFromApi(testBot, message)).rejects.toThrow(TypeError); }); test('returns channel name', async () => { const message = { channel: 'C0HBYC9SA3' }; expect(await fetchChannelNameFromApi(testBot, message)).toEqual('test1name'); }); }); describe('joinChannel', () => { test('returns an error if message is invalid', () => { const message = { foo: 'bar' }; expect(joinChannel(testBot, message)).rejects.toThrow(TypeError); }); test('creates a channel if it doesnt already exist', async () => { const channelsBefore = await Channel.getAll(testBot); expect(channelsBefore).toEqual({}); const message = { channel: 'C0VHNJ7MF' }; await joinChannel(testBot, message) const channelsAfter = await Channel.getAll(testBot); expect(channelsAfter).not.toEqual({}); }); test('returns undefined if channel already exists', async () => { const message = { channel: 'C0VHNJ7MF' }; await joinChannel(testBot, message) expect(await joinChannel(testBot, message)).toBeUndefined(); }); }); });
STACK_EDU
Usually when doing pentesting, you’ll use bind or reverse shells. You can use bind and open up a port on target machine that you can access. If your target is behind NAT, you can’t do bind, in that case you have to use reverse shell. Reverse shell requires you to have an IP and a port that the target can connect to from the inside. If you need more explanation for reverse vs bind, look here: https://irichmore.wordpress.com/2015/06/04/bind-shell-vs-reverse-shell/ Anyways, my problem is what if I’m behind NAT and my target is behind NAT or maybe I don’t want my IP address known to my target but the target is behind NAT. There are many ways of doing C2 using third party providers. There are multiple projects on github that use different services for C2. Also, you can use web hosts that support PHP and set something up there. People have used Tor/hidden services for C2. This is not anything new. (https://www.theguardian.com/technology/2014/jul/25/new-ransomware-employs-tor-onion-malware year: 2014) I was also able to do it with ncat, since ncat supports socks proxy. I wanted to do it with Pupy (https://github.com/n1nj4sec/pupy). This technique is nothing new. I’m just documenting how I did it with pupy. Tools we’re gonna use are: Tor - We need statically compiled version of Tor. We ship this with our pupy executable so pupy has proxy to connect to. Pupy executable obviously.. Iexpress - To combine tor executable and pupy executable Connection to our target will look something like this: Attacker machine in my case is Kali Linux and target is Windows. On your attacker machine, you will need to install Pupy and Tor. After that, you’ll need to setup Tor hidden service. This should help: https://www.torproject.org/docs/tor-hidden-service.html.en Next thing you’ll need is Tor, which has been statically compiled. I found precompiled version here: http://wiki.ozanh.com/doku.php?id=tor:tor_for_windows You only need tor.exe. Now you can create your executable using pupy. This is how I did it: ./pupygen.py -f exe_x86 -o TOR_BD.exe auto_proxy --host kdpxeh5ozpxbo7jf.onion:8001 --add-proxy SOCKS5:127.0.0.1:9050 --no-direct -t ssl Pupygen is used to generate pupy executable or script. -f is file type -o is output file name Auto_proxy is one of the payload types. There is also bind and connect. --host is your HS address and the port --add-proxy is one of the arguments for auto_proxy. It’s the proxy that gets used. You can define type:IP:Port. --no-direct option is to avoid connecting without proxy Finally, -t is transport method. We’re using ssl (which is default anyway…). After your TOR_BD.exe file has been generated, you’ll need to use iexpress and combine tor.exe with tor_bd.exe. Iexpress will allow you to extract the files to the disk and run them. Here are my options for iexpress(feel free to change them to whatever your needs are): We want to extract and run the files. Just need two files combined. Cmd /c tor.exe | cmd /c tor_bd.exe allows us to run both files at the same time. Tor.exe will start and take a few seconds to connect and tor_bd.exe will make a connection through it as soon as it can. After selecting other options, iexpress generated my final executable, which includes tor and tor_bd.exe. Now we have to set up the listener. This is how I did it: ./pupysh.py -t ssl -p 8001 Pupysh is pupy shell handler. -p is the port I want to listen on. After pupy shell is running, you can execute the file iexpress generated on your target host and you should soon get a shell! (You may get an SSL error from pupy shell. In that case, just restart pupy shell) It’s slow but it works! Pupy executable, I think is already detected by many AV engines, including Windows Defender. Tor traffic can be detected as well with IDS. Also, Tor opens up a port locally. I uploaded my executable generated from iexpress to VirusTotal: https://www.virustotal.com/#/file/9b9f37eb12726b038bc7362e4013893fb717a4564dad38b78241f3e4c07b43d9/detection Probably not worth using for real pentest, unless your client doesn’t even have Windows Defender... This was for educational purposes only.
OPCFW_CODE
/* * Author: Trevor Sherrard * Since: July 27, 2021 * Purpose: Class implementation for inbound JSON document manager. */ #include <Arduino.h> #include <ArduinoJson.h> #include "../include/inbound_json_manager.h" /* * Default class constructor. * * params: * controllerName -> name of this controller instance * * returns: * None */ InboundJsonDocManager::InboundJsonDocManager(const char* controllerName) { _controllerName = controllerName; _voiceActionID = -1; _hapticActionID = -1; } /* * Process inbound JSON data. only keep data if it matches this controllers * controller_name field. * * params: * json -> char array containing recieved JSON * return: * parse_status -> indication of JSON parsing success or failure */ bool InboundJsonDocManager::procInboundDoc(char json[]) { // attempt to deserialize DeserializationError error = deserializeJson(_JSON_inbound_doc, json); // exit early if something went wrong if(error) { Serial.println("Inbound JSON: Cold Not Parse JSON!"); Serial.println(error.f_str()); return false; } // otherwise extract data else { // make sure data was meant for this controller. const char* tempName = _JSON_inbound_doc["controller_name"]; if(strcmp(_controllerName, tempName) == 0) { // extract data and populate internal fields _voiceActionID = _JSON_inbound_doc["voice_action_id"]; _hapticActionID = _JSON_inbound_doc["haptic_action_id"]; return true; } else { // if not meant for this controller, exit. return false; } } } /* * returns internal voice action ID value * * params: * None * returns: * _voiceActionID -> internal voice action ID */ int InboundJsonDocManager::getVoiceActionID() { return _voiceActionID; } /* * returns internal haptic action ID value * * params: * None * returns: * _hapticActionID -> internal haptic action ID */ int InboundJsonDocManager::getHapticActionID() { return _hapticActionID; }
STACK_EDU
This section explains how the invocation framework sends data, to help you understand how to optimize your invocation requests, and send multiple files between client and target apps. Although apps can use many different URI schemes to transfer data, the invocation framework provides support for in-band transfer and file transfer. Most often, an invocation request carries only a small amount of data (less than 16 KB). In some cases this small amount of data can be encoded directly into a URI. It is also possible to send it as part of the invocation request. When data is sent as part of the invocation message, it is placed in the data attribute. During an in-band transfer, the URI value should be set to data://local, which points to the data attribute. When the data is sent in-band, the MIME type must describe the type of the data, to allow both the invocation framework and the target application to handle the message. This section contains information on how invocation framework handles file transfers and the type of files supported. The invocation framework supports file handling based on the file extension. The file extensions can be defined as file:// within the scheme parameter when the data is referenced by the URI. The invocation framework also supports the declaration of the exts attribute within the target filters. In an invocation request, if the URI is suffixed with a file extension that matches any of the file extensions declared in the exts attribute, then a given target filter will match with that invocation request. When the exts attribute is defined, the target filter implicitly supports the uris="file://" for those declared extension cases. The exts attribute is applied only if the accompanying URIs contain a file:// based URI. Also, combining exts and specific MIME types in a target filter means that both must be specified by a client app for the target filter to successfully match with an invocation request. The best practice in most cases is to define the exts related target filters as a separate declaration, where uris is file:// and the MIME type is a wildcard character (*). Sending an invocation request with a file:// URI to pass files succeeds only if the target app can access the file. The platform supports the use of a shared area in the file system to share data between the client and target apps. However, to send sensitive data to a target app, a client app can use the file transfer feature of the invocation framework. When the framework receives an invocation request with a file:// URI, it inspects the URI to determine if the request refers to a shared area. If the file is already shared, the invocation request passes the URI to the file in the shared area, as specified by the sender. However, if the invocation framework detects that the file is not shared, then it creates a read/write copy of the file in a private inbox for the target app. The client app can specify the file transfer mode attribute to override this behavior. Invocation requests support the following modes: |File transfer mode||Description| |Preserve||Skip file transfer handling and deliver the file as-is.| |CopyReadOnly||Create a read-only copy of the file in the target's private inbox.| |CopyReadWrite||Create a read/write copy of the file in the target's private inbox.| |Link||Create a hard link to the file in the target app's private inbox. When Link is specified, the file must have read/write permissions and the sender must be the owner of the file.| Sending multiple files For certain invocation requests, you might want to send multiple files in a single request. The invocation framework supports a special set of MIME types (type/subtype). Each type is a combination of supported file types, as explained below: |Type||Supported file type| |filelist/image||image/gif, image/jpeg, image/png| |filelist/media||Any of the files supported for filelist/audio, filelist/video, or filelist/image| |filelist/document||application/pdf, application/vnd.ms-excel, application/msword, application/vnd.ms-powerpoint| When sending multiple files, you can set the URI attribute to describe the common file path that is associated with each file. Usually, this file path is the common root directory, if one exists. If no common root directory exists, set the URI attribute to list://. After you set the type and URI, you can assign the list of individual file URIs to the data attribute using the following JSON format: To learn more about how to use data in JSON format, see Working with JSON. Make sure that URIs are percent-encoded. Percent-encoding is an efficient way to encode the data in a URI. |uri||file URI||The URI of the file being listed. The URI prefix must match the invoke.uri attribute value.||Yes||file://path/to/file| |type||MIME||The MIME type of the specified file.||No||image/jpeg| |data||JSON||Additional metadata in JSON format.||No||test metadata| Last modified: 2014-09-30
OPCFW_CODE
Educrate is an awesome free website to create and share educational video collections. Primarily aimed at teachers (but equally useful for students), Educrate is a media curation tool that can be used to create collections of educational videos, pulled from multiple online video sharing websites such as YouTube, The Internet Archive, Vimeo and Dailymotion. These collections, called “Crates” can then be shared with other educators. In addition to creating their own collections, teachers can also explore the video collections (or crates) created by other educators. The teachers can even invite their colleagues to collaborate on the crates, thereby enabling them to contribute videos to the collaborated collections (The contributors need to have an Educrate account as well). Educrate, thus, seems to be the solution for the problems faced by educators when preparing lecture notes etc., for their students. But just how good it is? Keep reading fellas, the answer is waiting! How to use Educrate to create and share educational video collections? Pretty much akin to any other web based service, Educrate also mandates that you create a free account before getting started. Signup is easy, and all you need is a valid email address. All you have to do is click the Sign Up Now button on the homepage, specify a few details (email address, the grades and subjects you teach etc.) and you’re all set. Here’s how the primary web UI of Educrate looks like: Educrate features a pretty busy looking interface (as evinced by the above screenshot), but not something that’s unintuitive. The UI is primarily divided into two vertical panes. The left pane shows previews of featured educational videos pulled in from multiple online video hosting websites. On the other hand, the right pane features the playback window with control buttons in the top half, and playlist in the bottom half. The header consists of some navigational tabs that can be used to access different sections of the website. These are briefly mentioned below: - Featured: The default landing section. As mentioned above, it suggests curated videos automatically suggested based on the selections made in your profile (e.g. grades and subjects taught). Simply click a suggested video on the left and it’ll start playing on the right. - Explore: As the name suggests, this option lets you explore interesting video collections (or crates) created by other educators. You can use the search bar on the top to search for videos based on a specific topic of interest. - +Crate: Click this button to create your own video collection, or crate. - Invite, Feedback and About: You know what these do! Anyways, now that you know your way around Educrate, it’s time to create your first video collection. Doing so is simple, and the following steps should get you started: Step 1: Hit the +Crate button to create a new video collection (or crate). A pop-up window will come up, where you have to give a name to your video crate. Do that and click the New Crate button. Step 2: Once the crate is created, you’re directed to the content editor pane. In here, you can add details about your video crate (e.g. privacy, description, collaborators’ usernames, tags, etc.). You can also change the direct video crate URL, and add video content (by their URLs) to it. Once you’ve made all the changes, hit the Done Editing button on the top right. Check out the screenshot below: Step 3: That’s all there’s to it! You have just created your very own first video collection, or crate, having personally curated media from online video hosting websites. Educrate also shows related video crates having the same content as your crate. All you have to do to start playing them is click on one of them. Using the sharing/playback buttons on the left, you can immediately start playing the videos in your crate, as well as share them over multiple social networks, and even email. Pretty awesome, isn’t it? Also See: Five Websites to watch Viral Videos Educrate is really useful and nifty free website to create and share educational video collections. It lets you collect the best online video resources into organized collections (called crates) and share them with others. Though Educrate is geared primarily towards educators to help them curate the best educational video content for their students, it can be just as useful for pretty much everyone. The fact that you can share and collaborate on the crates makes it even better. If you’re looking for a simple yet efficient video content curation and aggregation service, look no further than Educrate.
OPCFW_CODE
Large-scale deployments are a pain when you think of the many things that can go wrong. That’s why we’re here to ease the pain with deployment automation. I want to focus this post on block storage specifically for EC2 and on how you can set it up in advance to scale automated deployments. Amazon Web Services provides block devices called Elastic Block Storage (EBS) that range in gigabytes to terabytes in size at a pay-as-you-use cost. This type of storage gives instances far greater storage flexibility. The default volume on an EC2 instance generally assumes the lifespan of the instance, which means the data disappears once the instance does. Volumes, on the other hand, can persist in the after-life of the instance and make the data available for future use. You can take volume snapshots for backup or attach a volume to another instance, for example. EBS Volume Types Amazon offers three EBS types: Magnetic, General Purpose (SSD), and Provisioned IOPS (SSD). - The Magnetic disk is the default volume of an EC2 instance and the lowest cost option if you don’t need high read performance and are okay with sequential I/O. It’s a good option to store log files (if you don’t use message logging tools like logstash or syslog). In general, SSD disks are better unless cost outweighs performance in your use case. - General Purpose (SSD) disks, announced recently, are the best in terms of performance and cost. It uses solid-state drive (SSD) technology by default and gives IOPS based on the disk size. - Provisioned IOPS (SSD) disks are useful for performance critical applications. If the response time is important or you expect predictable high I/O workloads this is the way to go. If using Provisioned IOPS, you should consider the EBS-optimize instance option for dedicated I/O performance. Some instance types support it. I’ve heard reports that this option hugely improves latency and I/O. Obviously, you should test to see if it applies to your use case. Cloud Application Manager Block Storage Support Cloud Application Manager supports EBS in a straightforward way via the deployment profile. Map your infrastructure requirements to the provider in the deployment profile, which behind the scenes auto provisions EC2 instances. The Elastic Block Store section of the deployment profile allows you to add new block devices, configure the disk size, path, and for Provisioned IOPS also set the IOPS level you need. The nice thing about the deployment profile is its standard interface that works for any cloud provider. Your deployment configurations are compatible with any cloud. Say your deployment requires block storage. You can set up your deployment with several profiles that include vSphere, Google Cloud, OpenStack, CloudStack just as well as AWS. It works great if you use CI/CD because you can choose the right profile for your environment with one for staging and another for production. To optimize an instance for EBS in Cloud Application Manager, just check the EBS-Optimized option. Remember that only some “larger tier” instance types support it and c4 instances include it. You can check the AWS help and our docs for details. Persistent Storage for Predictable Deployments EBS is an important service for configuring EC2 storage in AWS. It’s the best way to persist data in the virtual machines as detachable disks. It gives redundant, highly available, and low latency I/O to your EC2 instances. You get the flexibility with good performance. Cloud Application Manager automates EC2 deployments with EBS. Using deployment profiles, you can configure workloads to deploy consistently on scale. So why not try it today? Sign up for an Cloud Application Manager account and spin up an instance. Want to Learn More About Cloud Application Manager? Cloud Application Manager is a powerful, scalable platform for deploying applications into production across any cloud infrastructure – private, public or hosted. It provides interactive visualization to automate application provisioning, including configuration, deployment, scaling, updating and migration of applications in real-time. Visit the Cloud Application Manager product page to learn more.
OPCFW_CODE
Relationship between reset pins and SWD functions (ARM Cortex M0) On many microcontrollers or other devices without JTAG/SWD functionality, there is a reset pin which when low unconditionally and asynchronously forces everything into a known state. On devices with JTAG or SWD functionality, however, things seem to be more complicated since it's possible to perform some JTAG/SWD functions while the reset pin is asserted. I have some Freescale KL15Z-based boards which include the SWD pins on a connector with the reset line and some other diagnostic I/O pins. Occasionally when connecting or disconnecting a cable, the parts seem to get into a weird state such that even reset won't kick them out. I'm wondering if some stray pulses on the SWD pins might be putting the device into an "ignore the reset pin" state. I have found that when using the debugger, I seem to have frequent difficulties getting the devices to reset reliably; I don't know if the issues may be related. What is the relationship between the reset pin and the SWD functionality? Conceptually it would seem most helpful to have a design where any falling edge on reset would be guaranteed to actually reset everything, and SWD communications which were supposed to happen before user code startup could be performed with reset held low, but I don't think that's how the Freescale KL15Z chips actually work. What's the best way to solve such reset-related problems with the Freescale or similar parts? Two observations short of an answer: The Kinetis chips will drive their own reset line, which can be interesting. I've also seen a K20 get stuck in a tight reset loop - producing a sawtooth output on its reset line, where the fix (under openocd) was to issue "kinetis mdm mass_erase" while the reset input is asserted. After that it became possible to release the reset and manipulate it normally. I had a similar looking issue, but the root cause was not reset pin. (I had an active-low bootloader pin being pulled to GND via an LED.) Could something like that be your problem? @Frederick: I don't remember what the problem turned out to be on the Freescale part. I did some time later have some fun with an Atmel part (also Cortex-M0) which turned out to be a consequence of a really badly designed watchdog. My code relied upon the "watchdog early warning interrupt" to wake it up periodically, but on the Atmel part feeding the watchdog will suspend its operation until the action is synchronized between the bus clock and the 32768Hz watchdog clock. If the processor goes into deep sleep mode before that happens, the watchdog can remain suspended indefinitely. Reset pin is often not needed, because controller is possible to reset using debug registers. Only one case when is necessary is connection under reset, this mean that if SWD signals are shared with other functionality on board (forexample SWDIO or SWDCLK is set as output), then only way to access MCU is to put MCU in to reset (at this moment are some debugging registers accessible over SWD) and set HALT/STOP program flag with debug registers over SWD, then leave reset pin and at this moment it is possible to flash MCU. Many devices need to be able to unconditionally reset independent of previous state (e.g. to recover after an unexpected glitch). If the reset pin cannot be relied upon for that purpose, a device which is powered continuously through a battery that's awkward to remove or replace may require some other means of interrupting power. The SWD side of the debug interface takes a power-on-reset, but it is basically a shift register which triggers AHB accesses to the rest of the system after observing a specific input pattern/number of shifts. There is is a degree of error detection on the sequence, it is fairly implausible that you would observe spurious accesses and not have the interface lock out waiting for a new initialisation. (Of course, the debug host might silently reconnect in this case since the protocol is designed to be error tolerant). The debug interface can also request a CPU reset (through an internal DP register access rather than requesting an AHB access). This is typically fed to the on-chip reset generation, but again should not be able to glitch as the result of noise on the SWD input. The reset pin is generally not a full power-on reset, and probably won't reset either the SWD side of the debug interface, or the flash controller. Keep in mind that on many implementations, the SWD engine is not hard-wired to the required pins, but rather goes through GPIO pin muxes, which may intentionally or even unintentionally get into a state where the SWD engine is disconnected from the pins. Also some low power suspend modes have much the same effect of leaving SWD unresponsive until the processor wakes for some other reason.
STACK_EXCHANGE
Order ambien 10mg in mexicolike it View all 1743 reviews $0.27 - $2.88 per pill purchase generic ambien 10mg online legally from canada He paid a cop order ambien 10mg in mexico to say something good about him but he didn't have the money so he has to pay him later. It reacts with water to form chloral hydrate, a once widely used sedative and hypnotic substance. Both men can you buy ambien over the counter in germany find that they enjoy the ensuing fistfight. All four of these hydrocarbons have four carbon atoms and one double bond in their molecules, but have different chemical structures. Jayakumar and actor Gautham Karthik for another film in the same genre. He was also caught on tape having sex with the captain's daughter in his squad car. This can influence a person's financial access and adherence. San Diego was pretty darn close to the Mexican border. These can be considered a form of pseudo-allergic reaction, as not all users experience these effects; many users experience none at all. Offenders, was supposed to lay a bass track but it never happened. She broke up with order ambien 10mg in mexico her boyfriend and moved into a sober-living facility. The growth order ambien 10mg in mexico of genetically modified crops also became common. This severe neuroadaptation is even more profound in high dose drug users and misusers. Opiate misuse has order ambien 10mg in mexico been recorded at least since 300 BC. He first appears as a delivery guy delivering flowers to Bree. Derek was kicked out of treatment after three months for getting drunk and bringing alcohol onto the grounds. Some of the reports were from friends of Robinson, who were concerned by his erratic behavior and called for help. It can only be used legally by health professionals and for university research purposes. Earth's crust, making it the 24th most abundant order ambien 10mg in mexico element. Naproxen sodium is available as both an immediate release and as an extended release tablet. He becomes distant and won't return her phone calls. Department of Justice Antitrust Division William Baer and renowned trial lawyer Dan Webb on the winning litigation team tasked with defending GE against diamond price-fixing claims. This includes limiting youth access to, and appeal of, flavored tobacco products like e-cigarettes and cigars, taking action against manufacturers and retailers who illegally market these products or sell them to minors, and educating youth about the dangers of e-cigarettes and other tobacco products. She was zolpidem otc fired after one season. It can also be formed from malonic acid. Cristine is no longer drinking but her husband didn't go for the suggested counseling and his lack of commitment is damaging their relationship. There are several references in Greek mythology to a order ambien 10mg in mexico powerful drug that eliminated anguish and sorrow. Some spam-fighters regard them as inaccurate compared buy generic zolpidem 10mg in japan to what an expert in the email system can do; however, most email users are purchase generic ambien 10mg online no prescription not experts. For most prostate cancers classified as buy alprazolam 1mg mastercard 'very low risk' and 'low order ambien 10mg in mexico risk,' radical prostatectomy is one of several treatment options; others include radiation, watchful waiting, and active surveillance. The change of setting marked an important transition in the recording process. In the order ambien 10mg in mexico drug Coricidin, chlorphenamine is combined with the cough suppressant dextromethorphan. Hallowell was given pre-trial probation for a year in September 2015 and the matter was dropped. Currently, Americans under the age of 50 are more likely to buy drug soma online india die from a narcotic overdose than from any other order ambien 10mg in mexico cause. The hydrochloride salt exists as a very fine crystalline powder; it is hygroscopic and thus tends to form clumps, resembling something like powdered sugar. Smoking pipes uncovered order ambien 10mg in mexico in Ethiopia and carbon-dated to around c. Both actions can be helpful for many patients. Instead, the Liberals were forced into an election when they were brought down by a vote of non-confidence later that year, after revelations from the Gomery Inquiry damaged their popularity. During World War II and in its immediate aftermath, war rape occurred in a range of situations, ranging from institutionalized sexual slavery to war rapes associated with specific battles. Psychoactive drug misuse, dependence and addiction have resulted in legal measures and moral debate. Maureen Ryan of The Huffington Post and noted critic Alan Sepinwall remained ambivalent towards the show; otherwise critics were uniformly positive. I mean, they didn't miss a spot! order ambien 10mg in mexico Bieber was featured, was released. Tunzi ambien prescription or over the counter has other name for ambien also ambien prescription usa had a number of individuals that are order ambien 10mg in mexico mostly well-known in the Elvis fandom who knew or order ambien 10mg in mexico worked with Elvis Presley in some capacity write introductions for his books. It is mainly prescribed as a second-line treatment when oral forms are not well tolerated or if people order ambien 10mg in mexico have difficulty with compliance. On the next day, they crossed the Altiplano. Back to the present, Raghu and Vardhi appear before a preliminary commission consisting of several government agencies to explain their role in Zeng's death. Alex left, but Phillip stayed behind and when he saw how resistant Alan was to his physical therapy, Phillip told him how pathetic he was for giving up so easily and he still blamed him for ruining his life. Cannabis impairs a person's driving ability, and THC was the illicit drug most frequently found in the blood of drivers who have been involved in vehicle crashes. The book received critical interest and eventually generated cinematic-adaptation interest. The hotel hosts a celebrity-filled reception. Triclofos is a prodrug which is metabolised in the liver into the active drug trichloroethanol. Frustration, restlessness, and pelvic pain or a heavy pelvic sensation may occur because of vascular engorgement.
OPCFW_CODE
piping with error checking using subprocess in python I have a piping scheme using subprocess where one process p2 takes the output of another process p1 as input: p1 = subprocess.Popen("ls -al", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p2 = subprocess.Popen("grep mytext - ", shell=True, stdin=p1.stdout, stdout=subprocess.PIPE) result = p2.communicate() p1 or p2 could fail for various reasons, like wrong inputs or malformed commands. This code works fine when p1 does not fail. How can I do this but also check whether p1 specifically or p2 specifically failed? Example: # p1 will fail since notafile does not exist p1 = subprocess.Popen("ls notafile", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p2 = subprocess.Popen("grep mytext - ", shell=True, stdin=p1.stdout, stdout=subprocess.PIPE) result = p2.communicate() I can check p2.returncode and see that it's not 0, but that could mean that p2 failed or p1 failed. How can I specifically check whether p1 failed or p2 failed in cases where this pipe goes wrong? I don't see how I can use p1.returncode for this which would be the ideal and obvious solution. ex: p1 = subprocess.Popen("ls foo", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p2 = subprocess.Popen("grep mytext - ", shell=True, stdin=p1.stdout, stdout=subprocess.PIPE) # here, p2.returncode not defined yet since we didn't communicate() assert(p2 is None) r = p2.communicate() # now p2 has a returncode assert(p2.returncode is not None) # ... but p1 does not! assert(p1.returncode is None) so I don't see how returncode helps here? The full solution thanks to @abarnert is something like this: p1 = subprocess.Popen("ls -al", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) p2 = subprocess.Popen("grep mytext - ", shell=True, stdin=p1.stdout, stdout=subprocess.PIPE) result = p2.communicate() if p2.returncode != 0: # something failed if p1.wait() != 0: # p1 failed... else: # p2 failed... p.s. I know the caveats of shell=True for security. Forget about the caveats of shell=True for security for a moment; it also makes things more complicated, because you're not actually piping ls to grep at all; you're piping one sh to another sh. Is there a reason you're doing that? in my application ls and grep are more interesting command line programs (which use shell features in file input etc so need shell=True). what's wrong with the above way of doing it? i'm piping the output of ls from shell to grep, which achieves the right result of grepping the output of ls Also, result doesn't have a returncode; it's p1 and p2 that do. @abarnet: yes that was typo, fixed But that's almost your whole answer. p1.returncode is what you want. Except that, because you're sticking extra processes in the middle, if p2 quits early you may need an explicit wait. See my answer, which tries to explain in more detail. A couple of problems remain. (1) call p1.wait() even if p2 succeeds because that clears the zombie process from the process table. (2) if p1 writes too much to stderr, it will hang because you aren't reading it. Look at the Popen.communicate code and see how it creates theads to read pipes and do that yourself for p1.stderr. Adding on to tdelaney's points: If it's possible for p2 to succeed even though p1 failed (which it often is), you may want to do something with that case. Meanwhile, when looking at the source, look at the 3.3 (or any 3.2+) version of communicate, or the backport to 2.4+, because there are edge cases the older version doesn't handle properly. I can check result.returncode and see that it's not 0, but that could mean that p2 fail. No, you can't; result is just a tuple (stdoutdata, stderrdata). It's the Popen objects that have returncode values. And that's your answer here: How can I specifically check whether p1 failed or p2 failed in cases where this pipe goes wrong? Just check p1.returncode. However, be aware that while p2.communicate does guarantee that p2 has been waited on, it does not guarantee the same for p1. Fortunately, it should guarantee that p1 is at least waitable, it not waited, so: if p2.returncode: if p1.wait(): # p2 failed, probably because p1 failed else: # p2 failed for some other reason Except that you almost definitely want to do a p1.stdout.close() right before the communicate. Otherwise, it's possible for an error in process 2 to leave process 1 blocked, and therefore your p1.wait() could block forever. (You could work around that by using p1.poll() here and then killing it if it's not finished, but really, it's better to not create the problem than to work round it.) One last thing: You've set p1.stderr to subprocess.PIPE, which never gets attached to anything, so it's also possible for process 1 to block trying to write to an overflowing stderr pipe. You may want to fix that as well. returncode checking would be the ideal solution but I don't see how it would work, I edited my answer to reflect the problem. any thoughts? maybe I misunderstood your answer @user248237dfsf: My answer already explains that. Did you actually try the code using p1.wait()? @abarnet: I don't understand still sorry -- if I use wait(), I get the same result. I setup p1,p2 and then do: p2.wait() which makes p1.returncode be None rather than a return code value @user248237dfsf: Why would you expect p2.wait() to do anything? Process 2 has already been waited by communicate, and you already have its return code. It's process 1 that may not have been reaped. Which is why I wrote p1.wait() in both the answer and the comment. I think I understand now I updated my answer. Final question: could you explain the stderr issue? I thought I'm passing both stdout and stderr of p1 into p2, so what could cause blocking? What does it "not attached to anything" mean? @user248237dfsf: When creating p1 you have stderr=subprocess.PIPE. That creates a new pipe (different from the one created for stdout=subprocess.PIPE). Someone or something must read the stderr data at some point if the pipe "fills up". If no one reads it, you can throw away the data but only as long as the pipe did not fill up completely. You probably want to not set p1's stderr at all, or possibly, send it through the stdout pipe (a la foo 2>&1 | bar in sh). To do the latter, use stderr=subprocess.STDOUT.
STACK_EXCHANGE
Sockets in MinGW I was just trying to build netcat in MSYS using MinGW and realized that MinGW never really ported all of the BSD socket stuff to Windows (eg sys/socket.h). I know you can use Windows Sockets in MinGW, but why did they never make a Windows port of the BSD sockets? I noticed quite a few programs using #ifdef's to workaround the issue. Is there a Windows port of the BSD sockets somewhere that can be used instead? Here are the errors when doing a make for netcat in MSYS: gcc -DLOCALEDIR=\"\/usr/local/share/locale\" -DHAVE_CONFIG_H -I. -I. -I.. -g -O2 -Wall -c `test -f 'core.c' || echo './'`core.c In file included from core.c:29: netcat.h:38:24: sys/socket.h: No such file or directory netcat.h:39:63: sys/uio.h: No such file or directory netcat.h:41:24: netinet/in.h: No such file or directory netcat.h:42:55: arpa/inet.h: No such file or directory There are no #ifdef's for MinGW. Is there a library/package I can add to MSYS to make everything compile without errors? Note: You can download netcat here and browse the CVS repo here Winsock itself was originally a port of the Berkeley sockets API. @ChrisW, I've never understood what's the point of making Winsock different from UNIX? @Pacerier I think that Winsock is simpler in that it has fewer header files to include. See e.g. Transitioning from UNIX to Windows Socket Programming for further details. BSD sys/socket.h is a POSIX header and the win32 API doesn't support it. MinGW headers are just a reimplementation of native win32 headers and don't offer additional POSIX compatibility. If you are looking for sys/socket.h support, try either GNU gnulib's sys/socket.h replacement or go with Cygwin, which provides a POSIX compatibility wrapper on Windows. about gnulib for mingw : "some modules are currently unsupported on mingw: mgetgroups, getugroups, idcache, userspec, openpty, login_tty, forkpty, pt_chown, grantpt, pty, savewd, mkancesdirs, mkdir-p, euidaccess, faccessat.", "mingw in 64-bit mode is not tested and low priority so far" http://www.gnu.org/software/gnulib/manual/gnulib.html#Target-Platforms @kalev, I never understood this. What's the point of gnulib when there's already cygwin? @Pacerier Cygwin is a Unix emulator for Windows whereas gnulib is a library which lets you port most of the traditional Unix packages to virtually any OS. WinSock and WinSock2 have different function names from the BSD Sockets. If I wish to write cross-platform applications, then I have code a lot of work-arounds just to keep Microsoft happy. It would be so much easier if there were special "socket.h" and "socket.c" files included with MinGW that simply translated stuff by calling the respective WinSock2 counter-parts. I'm just starting to learn C programming, so I'm unable to do this myself, but I'm surprised that nobody seems to have even attempted this so far. These comments from another answer served as the answer I needed to get a piece of simple bsd socket code to compile with mingw on windows. Replace all of those includes with #include as that would be the equivalent header for winsock, then see what happens. You will also need to link against ws2_32 and use WSAStartup/WSACleanup. Which might get you up and running. EDIT: I also ended up having to replace close with shutdown / closesocket and write with send. The code compiled fine but didn't actually work without those additional changes. As ChrisW said, Winsock2 is a port of BSD sockets. Which part of winsock are you trying to use which differs from BSD sockets ? (other than the WSAStartup and WSACleanup) I'm just trying to get netcat to compile. I'll update the question with the current errors. Replace all of those includes with #include <winsock2.h> as that would be the equivalent header for winsock, then see what happens. if you do that, you will also need to link against ws2_32 and use WSAStartup/WSACleanup. Which might get you up and running. MingWin is minimalist, and that is the most important aspect of it. Because it makes it easier to understand, at the end it is the developer's responsibility to write the application. MingWin only makes things easier but does no magic in turing nix apps to windows. So we're supposed to write two copies of code for every single functionality? When does this make any sense? See the stackoverflow link : Where does one get the "sys/socket.h" header/source file? The answer/solution is more explicit.
STACK_EXCHANGE
Cpu 100 windows xp chrome That being said, given the history of this issue, I would not be surprised if we see the problem re-emerge when the next Cumulative Security Update for Internet Explorer is released. Additionally, for several years now Ive been heavily involved in setting up computers with Windows XP running in a virtual machine, whether it is for new Macintosh users who still need to run a particular Windows-only software, or Windows 7 or 8 users who need. Apparently, this problem has been in existence in various forms for many years.MS13-080/KB 2879017, which was released in October 2013.It seemed to me like running the several rounds of updates after the initial Windows XP installation was taking forever significantly longer than the usual lengthy process.The simple explanation is that Microsoft believes that the amount of old updates in the automatic update chain has gotten to the point where it is overwhelming the wuauclt.Likely we have been experiencing this issue for a long time and simply chalked it up to Windows Updates being slow in general.I will update the article if/when this problem returns and/or if Microsoft finally fixes the root cause of the issue on their end.A lot of my clients still run Windows XP, especially those who bought PCs after 2007 and did their best to avoid Windows Vista.EXE process in Task Manager which will then free up the computers CPU so that you can quickly manually download and install the update. At the time of this writing, MS13-097/KB 2898785 seems to be the magic bullet for most situations. On a fresh install I would run windows dvd burner for windows 7 64 bit updates it would check for updates and under 30 seconds it would ask for the WGA update and then would proceed to show me the other 100 updates. What I observed, however, was that the problem really only manifested itself noticeably during the initial rounds of updates after the Windows XP install. MS13-097/KB 2898785 Windows XP SP3 Internet Explorer.It seemed that there were new Cumulative Security Updates for Internet Explorer.Obviously there is a problem with the Windows Update Automatic Update Client.Ironically this patch is described as a Cumulative Security Update for Internet Explorer and does not mention fixing Windows Update Automatic Update Client.Parallels, who doesnt seem to be interested in supporting OVF even after years of customer requests).This did seem to fix the issues I was working on at the time.I began to see this problem a lot as I was setting up brand new Windows XP SP3 installations under virtual machine software, whether that software was VirtualBox, Parallels, Virtual PC, Hyper-V, or others on either Macintosh or Windows host computers.
OPCFW_CODE
/** * * Write a program that prints the numbers from 1 to 100. But for multiples of * three print “Fizz” instead of the number and for the multiples of five print * “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”. * * Your method should optionally take `start` and `end` parameters which add the * following functionality: * * start: outputs items from `start` to 100 * end: outputs items from 0 to `end` * start, end: output items from `start` to `end` inclusive * * Of course with no parameters, just output 1 to 100 */ import fizzBuzz from '../lib/fizz-buzz'; import nthIndex from './support/nth-index'; describe('fizzBuzz', () => { describe('when no parameters are given', () => { it('outputs 100 items', () => { expect(fizzBuzz().length).toEqual(100); }); it('outputs "Fizz" as the third item', () => { const expected = 'Fizz'; const actual = fizzBuzz()[2]; expect(expected).toEqual(actual); }); it('outputs "Buzz" as the 5th item', () => { const expected = 'Buzz'; const actual = fizzBuzz()[4]; expect(expected).toEqual(actual); }); it('outputs "FizzBuzz" as the 15th item', () => { const expected = 'FizzBuzz'; const actual = fizzBuzz()[14]; expect(expected).toEqual(actual); }); it('outputs "Fizz" for every third item', () => { const fizzFilter = (_, index) => !nthIndex(index, 15) && nthIndex(index, 3); const thirds = fizzBuzz().filter(fizzFilter); const actual = thirds.every(item => item === 'Fizz'); expect(actual).toEqual(true); }); it('outputs "Buzz" for every fifth item', () => { const buzzFilter = (_, index) => !nthIndex(index, 15) && nthIndex(index, 5); const fifths = fizzBuzz().filter(buzzFilter); const actual = fifths.every(item => item === 'Buzz'); expect(actual).toEqual(true); }); it('outputs "FizzBuzz" for every fifteenth item', () => { const fifteenths = fizzBuzz().filter((_, index) => nthIndex(index, 15)); const actual = fifteenths.every(item => item === 'FizzBuzz'); expect(actual).toEqual(true); }); }); describe('when only start is given', () => { it('returns 50 items when starting from 51', () => { const actual = fizzBuzz({ start: 51 }); expect(actual.length).toEqual(50); }); it('returns the last 10 items', () => { const expected = [91, 92, 'Fizz', 94, 'Buzz', 'Fizz', 97, 98, 'Fizz', 'Buzz']; const actual = fizzBuzz({ start: 91 }); expect(expected).toEqual(actual); }); it('can count before 1', () => { const actual = fizzBuzz({ start: -5 }); const expected = ['Buzz', -4, 'Fizz', -2, -1]; expect(actual.length).toEqual(106); expect(actual.slice(0, 5)).toEqual(expected); }); }); describe('when only end is given', () => { it('can only count to 10', () => { const actual = fizzBuzz({ end: 10 }); expect(actual.length).toEqual(10); }); it('can count past 100', () => { const expected = [101, 'Fizz', 103, 104, 'FizzBuzz']; const actual = fizzBuzz({ end: 105 }); expect(actual.length).toEqual(105); expect(actual.slice(100)).toEqual(expected); }); }); describe('when both start and end are given', () => { it('can still count from 1 to 100', () => { const expected = fizzBuzz(); const actual = fizzBuzz({ start: 1, end: 100 }); expect(actual).toEqual(expected); }); it('can return only one item', () => { const expected = ['FizzBuzz']; const actual = fizzBuzz({ start: 15, end: 15 }); expect(actual).toEqual(expected); }); it('correctly returns a range', () => { const expected = ['Buzz', 'Fizz', 37, 38, 'Fizz', 'Buzz', 41, 'Fizz', 43, 44, 'FizzBuzz']; const actual = fizzBuzz({ start: 35, end: 45 }); expect(actual).toEqual(expected); }); it('can return a range outside than the default', () => { const actual = fizzBuzz({ start: -5, end: 105 }); expect(actual.length).toEqual(111); }); }); });
STACK_EDU
we have a customer who has vSphere 6 in version 6.7.0 (Build 8169922) and we want to backup one VM running on it. We are using Synology Active Backup for Business 2.1.1-1125. I have successfully added our Free ESXi on the same LAN, where is located our NAS device. Unfortunately, I can not add the customer´s ESXi 6 essentials. I have read the requirements and limitations and the necessary ports are opened on the ESXi as well as the port forwarding on the customer´s router (MikroTik). When I am trying to add this hypervisor, every time I get an error: Failed to connect to the host [public ip]. Please check the server address, account settings, and your network settings. I am trying to connect with the root user, so the rights should be fine. I can also see, that packets are hitting the NAT firewall rule. (PublicIP:44443 -> LocalIP:443; PublicIP:902 -> LocalIP:902) So I think, that the problem is on the host itself, but I don´t know, where to find the problem. Could you help me somehow? So you're trying to connect your backup software or appliance which is on your network to a customer's ESXi host on their network? When you say "add this hypervisor", I assume you mean the customer's ESXi host - but what are you actually trying to add it to? What UI are you using? (screenshots would be useful) thank you for your reply. Yes, I am trying to add a customer´s ESXi host (on their network) to our Synology Active Backup for Business (on our network). The goal is to backup the customer´s VM over WAN to our Synology NAS. Synology ABB is an app in NAS and I manage it using a web browser. When I will successfully add the customer´s ESXi host, I will see all the VMs and will be able to choose, which VM I want to backup. Something like our Free ESXi in our LAN. Here is the list of customer´s firewall rules on that ESXi. When I have tried telnet <customer´s public IP> 44443 - no luck I have tried it from our LAN but without success. On the customer´s ESXi is another VM, which has NAT rule with port 443. When I have tried from our LAN telnet <customer´s public IP> 443 - successfully connected. So definitely the DNAT rule is not correctly applying or there is an issue there. Make sure that you only have one DNAT rule using port 44443 as if don't it will not work. Also make sure you are allowing your source Public IP from where you are being SNATed to access that Public IP on port 44443. DNAT should be fine as I have other ones, which are working fine. I have also only one DNAT rule with port 44443. Sure, I have allowed our source Public IP, but even without it, it is not working. Do you have any other ideas, what I am missing or what should be bad? I do not know how your construction is to be honest but the issue you are facing is clearly connectivity and is getting dropped in some point, could you give us a quick diagram or explain a little bit more how the traffic flow goes?
OPCFW_CODE
Frequently Asked Questions about SWARM Masternodes What is a masternode? A masternode is a compute resource that meets the minimum technical and network requirements to run the SWARM core software and validate transactions. As the network grows over time, masternodes will also provide resources for other data services relevant to the network. What are the requirements for running a SWARM Masternode? In order to activate a node as a SWARM Masternode, operators are required to stake 50,000 SWM tokens. The masternode begins earning rewards 15 days after staking and activation. SWARM Masternodes are required to run the latest release software version in order to receive rewards. See our complete setup guide for instructions, commands and configurations required to run a SWARM Masternode. Where do awards come from? The SWARM Foundation has set aside 10 million SWM tokens as rewards for masternodes maintaining the network. How many SWM tokens do I need to be able to run a SWARM Masternode? In order to be eligible to participate in rewards, you need to have 50,000 SWM tokens in a wallet which then gets referenced in the configurations of the SWARM Masternode. Further restrictions may apply (see below). Can I just run a node and earn rewards without staking? Why do I have to stake 50,000 SWM to run a masternode? Staking is required to ensure that masternode operators have “skin in the game”, and incentivizes operators to keep their masternodes active. Staking also signals a serious investment in the infrastructure SWARM is building and a commitment to maintain the network. How many wallets do I need to run X nodes? You need one wallet per node stake. Each wallet should have the 50,000 SWM stake necessary per node. To stake 5 nodes, register 5 different wallets, each containing 50,000 SWM. What's the MDF? Market Development Fund (MDF) Rewards is a proposal that masternode operators help deploy funds on SWARM and earn rewards for doing so. This proposal is subject to further structuring and legal/regulatory clearance. We are working hard with regulatory and legal experts to deliver this in a compliant form. We look forward to updating you as MDF progresses. Where can I buy SWM tokens to stake for a SWARM Masternode? Am I allowed to run a SWARM Masternode as a US person? Yes, anyone can run a masternode. Rewards are available to any eligible masternode operator with no restrictions. I'm in. Where do I begin?
OPCFW_CODE
Artificial Intelligence (AI) [Name of the Writer] [Name of the Institution] Artificial Intelligence (AI) Artificial Intelligence is a branch of Computer Science that deals with the simulation of intelligent behavior in computers. It has revolutionized the field of Computer Science by producing robots of many simple and complex types. Robotics is the scientific discipline that is responsible for producing robots, but their cognitive aspect is related to AI. Therefore, the engineers of Robotics and AI work in integration to produce robots of cognitive capability. Artificial Intelligence is orthodoxly related to installing human cognition in computers as much as possible. It is one of the latest technologies in the contemporary world. A person who works in this field is called an Artificial Intelligence Engineer. The average salary of AI engineers is 110,000 USD per annum. An AI engineer enters the industry as a graduate having particular abilities regarding this field i.e. basic knowledge about programming, software, linear algebra, calculus, and statistics. An AI engineer is expected to have more capabilities than just technology-related knowledge, therefore he acquires these capabilities by getting enrolled in a Master's Program or joining a professional company (Russell, & Norvig, 2016). The following details discuss the qualifications of the people working in or the need for Artificial Intelligence. Qualifications of a person are required to meet the expectations of a particular job, if he/she can meet the requirements or fulfill those requirements, s/he is considered qualified for that job. Therefore, before deciding someone’s qualifications, it is essential to determine his duties and responsibilities. An Artificial Intelligence Engineer has the following roles and responsibilities that s/he is supposed to fulfill effectively in his designated area to be a successful member of the AI community: The study and transformation of Data Science Prototypes The research and implementation of the Appropriate ML algorithms and AI tools The development of Machine Learning Applications The selection of Datasets and Data Representation Methods The training and restraining of systems when needed (Jarrahi, 2018). Apart from that, an AI engineer should be well equipped to work efficiently with Electrical Engineers and Robotics Team, and s/he should stay updated about all the developments in the field to be a successful Artificial Intelligence Engineer (Ghahramani, 2015). Qualifications of the People Working in AI Being one of the latest technologies and having a wide scope, AI offers high profile jobs to its employees. Therefore, when a person wants to become a formal part of this field, it requires some prior abilities and capabilities so the employee would be able to meet the expectations of this field and will not spoil any project (Munich, 2017). The fundamental requirement of AI is a Bachelor’s degree from a recognized university in subjects like Computer Science, Mathematics, Information Technology, Statistics, Finance, or Economics. The noted subjects enable a person to comprehend and run simple operations at the computer. All these disciplines make a person eligible enough to learn the advance knowledge of Artificial Intelligence (Jarrahi, 2018). Artificial Intelligence engineers need knowledge of some extra skills to become successful in this file. The required skills are categorized into two types: Being a technical field, Artificial Intelligence demands from its employees (engineers) the knowledge of some technical skills. Theoretical and practical knowledge about these topics is essential for an Artificial Intelligence engineer: Software Development Life Cycle Modularity, OOPS, Classes Statistics and Mathematics Deep Learning & Neural Networks Electronics, Robotics, and Instrumentation (Not a Mandate) An engineer having the knowledge of these topics should also have command over the following skills: Programming Languages (R/Java/Python/C++) Artificial Intelligence literacy means the full command over programming languages like Java, Python, and C++. Some people rely on only Python, but you should know that the more knowledge about the field will widen your scope. Artificial Intelligence requires the most updated and knowledgeable engineers. Matrices, vectors and matrix multiplication are also popular terms in Artificial Intelligence. AI engineers know their use and application. Models like the Hidden Markov Model, Gaussian Mixture Models and Naïve Bays are generally known by AI engineers. Applied Math and Algorithms Algorithm theory and its use is the part of an AI engineer’s knowledge. Subjects such as Gradient Descent, Convex Optimization, Lagrange, Quadratic Programming, Partial Differential equation, and Summations are discussed in Artificial Intelligence and people show their expertise in these subjects (Munich, 2017). Neural Network Architectures Artificial Intelligence is the mimicry of human intelligence. Human intelligence has a complex neural structure that is the subject of the Biologists and the Psychologists. The AI engineers are scientists or doctors of computer, therefore, they work on a computer neural structure. The term of Machine Learning is used to analyze computer intelligence or Artificial Intelligence (Jarrahi, 2018). Language, Audio and Video Processing An AI engineer has a permanent communication with codified material. All the computer material appears as text, audio or video. Artificial Intelligence engineers cannot work if they have not the command over Gensim, NLTK and techniques like word2vec, Sentimental Analysis, and Summarization. Therefore, every AI engineer is supposed to have full command over these skills (Strohmeier, & Piazza, 2015). Business Skills are another essential quality of an Artificial Intelligence Engineer. Globalization has shrunk the world and Information Technology has played an important role in it. Now, only a multi-talented person has a scope in this age of competition. Artificial Intelligence demands all qualities from production to promotion and selling of ideas (goods) in one person (Munich, 2017). Artificial Intelligence engineers should know the following business skills: This capability helps the AI engineer to cope with technical and non-technical issues that occur randomly. The way of working in AI is different from the traditional workplace where the technical staff is concerned only with producing the items and those items are sold by the marketing staff. AI engineer works on new ideas that he/she thinks that are the demand of the society. Although s/he works on new ideas, different complexities create a challenge for him/her that s/he has to solve at his/her own (Strohmeier, & Piazza, 2015). Therefore, the people working in AI are trained for such circumstances. Effective Communication is essential to deal with a variety of clients and to work in an organization. The 21st century is typical in challenging traditional techniques and strategies in almost every field including business. Communication skills help AI engineers to explain the worth of the ideas and products that they produce or introduce in the industry. AI engineers are taught effective communication skills with special consideration because neither they can sell their products to clients nor they can deliver their ideas to their colleagues and boss without this capability. Variety and diversity are the only things that attract the client in this smart world. An AI engineer cannot rely upon set ideals and defined truths about market or business. This is the reason that AI engineers know the mental drills that can help them to do the job more effectively. Brainstorming and other mental drills are used to critically analyze the techniques to do the business in a better way (Munich, 2017). Industry Knowledge is key to business. It has two faces; one indoor and the other outdoor. Indoor means that AI engineers do not show indifference to the operations and trends in their related industry, and they stay updated. Outdoor means that they know the market trends. No one can be a successful business person unless s/he has full command over industry trends and directions in the market. Artificial Intelligence requires its employees to have in-depth knowledge of the industry to sell the items successfully (Strohmeier, & Piazza, 2015). Social media surveys and many other techniques are used to know the industry trends. Artificial Intelligence engineers qualify for the job in the field after having full command over all the above skills plus the prescribed academic qualification. Some people enroll in the Master's program to learn these skills whereas, others join industries to get qualified for Artificial Intelligence. The aspirants of Master’s degree can choose one degree from Data Science, Machine Learning (i.e. Edureka’s Machine Learning Engineer Masters Program), or Computer Science. Artificial Intelligence, as its name refers, is a deliberate attempt to install cognitive capability in a computer. There have been given many attempts that have proved partially successful too. Automated machines and talking robots are manufactured with the collaboration of AI engineers, Electric Engineers and Robotics engineers. AI engineers are the people who work in Artificial Intelligence and do experiments as well. Artificial Intelligence engineers have a minimum Bachelor's degree qualification in the disciplines like Information Technology, Mathematics, Statistics, Economics, or Finance before they apply for a job in this field. Academic qualification is not enough to be a successful Artificial Intelligence engineer rather many additional skills are the requirements of this field which are but not only, programming skills and language, mathematical capabilities, command on networking as well as business, analytical, communication skills and industry. Hence, while defining the qualifications of the people working in or needed in Artificial Intelligence, we say that if one has the above mentioned skills and competencies, then he/she can hope to become a successful engineer in the field of Artificial Intelligence. Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521(7553), 452-459. Jarrahi, M. H. (2018). Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577-586. Munich, S. (2017). A Roadmap to Becoming an AI Engineer. Retrieved 17 November 2019, from https://www.datarevolutionhr.com/view-article.asp?article=136j4z64eilql2c Russell, S. J., & Norvig, P. (2016). Artificial intelligence: a modern approach. Malaysia; Pearson Education Limited. Strohmeier, S., & Piazza, F. (2015). Artificial intelligence techniques in human resource management—a conceptual exploration. In Intelligent Techniques in Engineering Management (pp. 149-172). Springer, Cham. If you have any queries please write to us Join our mailing list
OPCFW_CODE
Artificial Intelligence (AI), like ChatGPT and others, is revolutionizing the way we work and live! From content generation, customer support chat bots, to predicting the future, AI is starting to show us the great potential of what it can help us achieve. However, as the use of AI increases, the dangers become more advanced and sophisticated to mitigate. The risks associated with AI are real, especially when it comes to AI hallucinations. AI hallucinations are the overly confident wrong answers that AI returns when answering AI prompts. This is a significant risk that must be mitigated in order to get the most benefit and accuracy from AI in any business workflow. In this article, we’ll explore more of what AI hallucinations are, the dangers they pose, and how they can be mitigated. Table of Contents What are AI Hallucinations? Artificial Intelligence (AI) hallucinations refer to situations where an AI model produces a wrong output that appears to be reasonable, given the input data. These hallucinations occur when the AI model is too confident in its output, even if the output is completely incorrect. In other words, AI hallucinations happen when an AI model produces wrong answers that it is sure are correct. These AI hallucinations can occur when the AI is answering a prompt where it doesn’t have all the necessary information to give an accurate answer. It’s fairly common for the AI to basically make stuff up to fill in the gaps so it can give an answer to the AI prompt given. Just as people will some times give a confident answer, so does AI. These confident but incorrect answers are referred to as AI hallucinations. Dangers of AI Hallucinations The dangers of AI hallucinations are significant, especially when the wrong answer has real-world consequences. For example, suppose an AI model is used in medical diagnosis, and it hallucinates a diagnosis that leads to an incorrect treatment plan. In that case, the consequences could be life threatening in a healthcare scenario. Similarly, if an AI model is used in autonomous vehicles and hallucinates that it is safe to proceed through an intersection, the results could be deadly. There are many other scenarios where where AI hallucinations could pose legal and/or ethical consequences. One of the most significant dangers of AI hallucinations is legal liability. As AI models become more prevalent, they will inevitably be used in situations where their output has real-world consequences. In such situations, if an AI model hallucination is used for action or to make a statement to a customer unchecked, the organization using the model could be held legally liable for any resulting damages. Another danger of AI hallucinations is when it comes to meeting compliance requirements. Many industries have strict compliance regulations that need to be met. It’s tempting to implement AI models to automatically take actions and perform tasks, but the AI may not adhere to compliance requirements that must be maintained by an organization. This could lead to violating compliance that could cause the organization to be audited and possibly lose it’s compliance certification. Depending on the organization, this could lead to catastrophic consequences for the business. How to Mitigate AI Hallucinations Several techniques can be used to mitigate the risks and dangers associated with AI hallucinations. Regardless of what business process uses AI, it is extremely important to implement at least one method of mitigation to protect your organization from the risks and potentially dangerous consequences of AI hallucinations. Manual Human Review The process of manual human review of the output and answers of AI is fairly simple method to mitigate and reduce the risks of AI hallucinations. With content generation, this would be manually reviewing and editing the content. When AI is predicting an action to take, this would be a human reviewing the suggested action and its reasons before allowing the action to be performed. This is simple, but does require the person performing the review to either know the models domain very well, or to know how to lookup and verify things. Manual human review can be laborious and time consuming. While this may work really well for certain content or results generate by AI, it will not be feasible on a large scale. For this reason, it will likely be important to implement other mitigation techniques depending on the business solution that has implemented AI automation. Limit the Possible Answers When crafting the AI prompts used, the possible answers expected from the AI can be limited. This will enable you to guide the AI towards what type of answer, or possible answers, you are expecting it to return. This can be done by being more detailed in the AI prompt, and possibly giving the AI a very specific list of answers to choose from. This will help in preventing the AI form hallucinating a too confident and incorrect answer. Specify an Answer Template Whether the AI is predicting data trends, generating content, or making some other prediction, the AI prompt that is designed could include a sample or expected template description of what answer is expected. This is yet another way to guide the AI to giving an expected answer, and can help prevent the AI from hallucinating. Tell the AI to NOT Lie With conversational AI, like ChatGPT or others, you can include a request in the AI prompt that the AI doesn’t lie. This sounds like something that shouldn’t be necessary, but there are times where this may do the trick. You could tell it to tell you it doesn’t know instead of coming up with an AI hallucination just to give you a good, but wrong, answer. Tell the AI What you don’t want Similarly to instructing the AI that you don’t want it to lie, you can similarly tell the AI what type of answer or information you specifically do not want it to include in the answer returned. These instructions will guide the AI towards better answering your AI prompt and giving you a meaning full and valuable answer. This will also help prevent the AI from hallucinating as well. AI hallucinations are a real and significant risk associated with AI models. The dangers of AI hallucinations include legal liability, compliance risks, and real-world consequences. There are many different techniques that can be used to help mitigate and prevent AI hallucinations and their consequences. Only then can we leverage the benefits of AI without falling prey to its dangers. Crafting detailed, instructive, and informative AI prompts will help you guide the AI used in your business solutions and workflows to give meaningful and valuable answers that have less risk of AI hallucinations.
OPCFW_CODE
Setup is PSTN >>> SIP PROVIDER >>> GATEWAY >>> CUCM Sorry for the long post, I'm hoping someone can help. I've been tasked with creating an anonymous helpline for employees to call from inside and outside the business. They need calls the helpline to ring a group of existing DNs but if these DNs don't answer (i.e. out of hours) then the call should be forwarded to one mobile phone initially then to another if that isn't answered. So far so good (I thought) so I created the following: Helpline Hunt Pilot: /+441234567001 Forward Unanswered Calls set to 'external mobile Number of one helper' other settings tried as explained below. Helpline Hunt List containing two Line Groups: Line Group 1 & Line Group 2 Line Group 1: RNA 10 Options: Try next group in Hunt List Members: Bob /+441234561234 / Bill /+441234561235 / Barry /+441234561236 Line Group 2: RNA 10 Options: Stop Hunting Members: Sue /+441234559876 / Sharon /+441234565432 / Sam /+44123455757 The problems for me are: 1. The calling number must remain anonymous. 2. Cfwdall on DNs are ignored by Hunt Group/Pilot settings so I can't forward out to mobiles. For point 1: This worked for both internal DNs and Mobiles calling from outside if I set the 'Calling Line ID Presentation' and 'Calling Name Presentation' on the HP to restricted.But when the call is passed to external mobile phones it shows as 'Withheld' so cannot be identified as the helpline number. So I removed the restricted settings and enterned the HP number +441234567001 as the 'Calling Party Transform Mask' on the Hunt Pilot. This means that external calls into the helpline present as +441234567001 (so remain anonymous) but unfortunately internal calls still show their DNs (and names) of the calling party so are not anonymous. For point 2: I configured Mobile Connect Remote Destination Profile (cool feature) for a Line Group 2 member so it calls their mobile phone along with DN and presents with the 'transform mask' helpline number on both devices. If they don't answer then it flows through to the Hunt Pilots 'Forward Unanswered Calls' to a second mobile phone. This works but isn't ideal as it calls the Mobile Connect RDP along with the DN (after a delay) so I'm not sure this is the right solution. My questions are: A. Is this the right solution for the helpline or is there an alternative I should be configuring? B. Can calls into a 'Helpline' number be made to appear as anonymous/private from both internal DNs and external numbers? If the call in internal (555) you can use a translation pattern and just simply change the "Calling Party Transform Mask". If the call is external (+1 222 333 4444) you can either change that at the gateway with a Translation-Rule or at the CUCM with another translation pattern. I would probably change the incoming call in the router to 555 and do the rest on CUCM. If you wan to forward to cellphones, you can uses the forwarding option of the Hunt Pilot to redirect to another Hunt Pilot, or have all the cellphones on a second hunt list, one thing to take into account is that if you are using the "use forward settings of the line" you need to set "Forward No Coverage Internal/External"
OPCFW_CODE
User roles and technical requirements for this article | Browser | | Google Chrome 7.2.502 or later | Edgev.79 or later About this article: The Watchdog Dashboard provides visualization of network performance across all agents by geography, targets, ISPs, and groups. Administrative functions are enabled for Admins to manage users and agents, and Asset Manager to manage agents. View the User Management article for more information on user roles and permissions. Step 1: Visit the Watchdog Dashboard During the PoC stage of Project Watchdog, the URL points to one of our PoC staging environments, such as “.stg01a” or “.stg01b”. Future production versions will not include this in the URL. Step 2: Login using two-factor authentication Get a verification code from the two-factor mobile app set up for your user account. See the Dashboard Access for New Users article for more information on setting up two-factor authentication. Step 3 Enter your login credentials and click View the Dashboard Access for New Users article to learn about creating login credentials and joining projects. Step 4: Select project Watchdog is organized by projects. The Projects dropdown next to the user name shows the current project. Click the dropdown to move to any other project you have joined. Step 5: Zoom to the location of interest and filter the results Map View: The map view defaults to Japan the first time you log in. Move the map to any location using the mouse or by searching for a location at the top of the page. Zoom into any area with the map zoom buttons, scroll wheel, or trackpad. Map area colors: Areas of the map with activated agents are shaded green or red to show the network quality in the area for the Quality Profile selected. Green shading: Network quality is good. Latency, jitter, and loss for metrics for the agents in the area do not exceed the thresholds for the Quality Profile selected. Red shading: Agents in the area are reporting latency, jitter, or loss exceeding the thresholds for the Quality Profile selected. No color: No agents are activated in the area. Active agent icon: The total number of agents activated in an area is shown in the circular icon in the center of the area. Active agents are shown in green. Inactive agents are shown in red. Mouse over to see the number of active agents. Step 6: Filter the time series data for the location Filter the agent data returned by target, ISP, or group. All ISPs and groups for the project are listed but may not be represented in an area. For example, filtering by Deutsche Telekom in the United States will return no data. Targets: Internet endpoints that each agent pings regularly. Filter by one or more targets to view specific internet quality results. ISPs: Internet Service Providers connected to one or more agents. Groups: Logical groups assigned to agents on the Agent page. For example, agents may be group by team or division, making it easy to view results in one or more groups in an area. Quality Profiles: Latency, jitter, and loss thresholds provide the baseline for internet quality in the area. Thresholds are preset as Basic, Moderate, and Ultimate for intra-region ICMP pings and inter-region ICMP pings by default. Contact your Watchdog admin to adjust or add Quality Profiles to your project. Measures: Finetune results by changing the aggregation method of the data. View all data as mean or median average or reduce data to show only the 95th or 99th percentile. Step 7: Filter result Once the filters are selected, click the map area to display a view for the area. Click to view or remove selected filters. Step 8: Select a point in time Time series data is updated every minute and displayed in hourly, six-hour, 24-hour, weekly, and monthly periods. Data that exceeds the latency, jitter, or packet loss thresholds of the Quality Profile selected is displayed in red in the time series line. Selecting a specific time in the graph will change the results in the Network Performance by Area, Network Performance by Targets, and Network Performance by ISPs sections. Drag the blue bar to select a specific time or right-click and drag your mouse to zoom into a time range. When a point in time is selected, it appears above the blue timeline and in the Timeframe Selected scroll box above Network Performance by Area. Click the right or left arrows to move to the next point in the time-series. This summary provides a snapshot of the time period and filters selected for the area. It shows: View individual agent data in this area. Select the agent you want to see from the agent filter. Users with the Viewer role can only view the agents associated with their email address. Export a view to PDF or share the URL with a member of your tea
OPCFW_CODE
9.1.2 won't start due to script error in startsagecore System: Ubuntu 16.04 Server (64-bit) I get the following error when installing 9.1.2 May 31 06:27:20 sagetv-g2 systemd[1]: Starting LSB: SageTV Server... May 31 06:27:20 sagetv-g2 sagetv[2037]: Starting SageTV Server: sagetvChanging to SageTV directory /opt/sagetv/server May 31 06:27:20 sagetv-g2 sagetv[2037]: . May 31 06:27:20 sagetv-g2 systemd[1]: Started LSB: SageTV Server. May 31 06:27:20 sagetv-g2 sagetv[2037]: Executing pre-scripts May 31 06:27:20 sagetv-g2 sagetv[2037]: ./startsagecore: line 36: syntax error in conditional expression: unexpected token ;' May 31 06:27:20 sagetv-g2 sagetv[2037]: ./startsagecore: line 36: syntax error near ;' May 31 06:27:20 sagetv-g2 sagetv[2037]: ./startsagecore: line 36: if [[ "$JAVA_VERSION" == "8" || "$JAVA_VERSION" == "9"]] ; then' ` Trying to work through the script, but not seeing why it's throwing an error in 9.1.2 and not in 9.1.1 Here's the 9.1.2 code JAVA_VERSION=$(java -version 2>&1 | grep -i version | sed 's/.*version ".*\.\(.*\)\..*"/\1/; 1q') if [[ "$JAVA_VERSION" == "8" || "$JAVA_VERSION" == "9"]] ; then JAVAOPTS="$JAVAOPTS -XX:+UseG1GC -XX:+UseStringDeduplication" fi Here's the 9.1.1 code java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print substr ($2, 0, 4)}') if [[ "$java_version" == "1.8" || "$java_version" == "1.9" || "$java_version" == "1.8." || "$java_version" == "1.9." ]] ; then JAVAOPTS="$JAVAOPTS -XX:+UseG1GC -XX:+UseStringDeduplication" fi Seems that by removing the second set of brackets resolves the issue. if [ "$JAVA_VERSION" == "8" || "$JAVA_VERSION" == "9"] ; then This has been addressed in pull request #295 . I spotted the problem when I upgraded last night. When I submitted the change, I failed to notice a space was missing (I really did test this change before committing) and the startup scripts don't get tested by Travis CI. That might be something we should change to prevent silly mistakes like this in the future.
GITHUB_ARCHIVE
Using PT3's HUD you can mark recent hands for later review while at the table. Marking Hands for Review To do this, click the PT-icon and choose "Mark Hand for Review". You will see the last 5 hands played with the hand number, your hole cards (if you were dealt into the hand), the board cards, and the player who won the hand with their cards (if they were shown) and the amount they won. Doing this will add a note to the hand (saying "Marked for review") so that you can find the marked hands later. To review marked hands within PT3 you can use any of the built in reports that list individual hands, or you can create a custom report to filter to hands with notes. Using built in reports If you view the Sessions tab, for example, and on the Sessions By Table report select the session you just played you can click the 'Notes' column (far left by default) to sort by those hands which have notes. This will show all hands with notes - not just those you marked for review but also any that you added specific notes to elsewhere. You can identify hands with notes by the icon in the Notes column. You can double click the hand to open the Hand History window, which includes a Notes section at the bottom. Using Custom Reports If you want to look at a list of all the hands you have added notes to, in one list, you can create a Custom Report. For more information see the Custom Reports guide and Tutorial - Custom Reports and Statistics. To build the report, go to the Reports tab and choose "Holdem Cash Hand" (or "Holdem Tournament Hand", or Omaha - all the directions are the same) from the Section: dropdown list. Click the New button and give your report a name (in the "Name" field above the New button). In the 'Available Stats' list double click the stats you want to see in your report. You can add as many or as few stats as you like depending on the information you want to see. See the report image below for an example. To make the report only show hands with notes we need to add a filter. Click the Filters link just below the list of selected stats. In this window you need to add a filter for "flg_note". Type that in the "Filter Expression" field and click Save. Because this is a boolean (true/false) field there is no need to compare it to anything. If you leave the "Filter on Active Player" option enabled then you will only see your own hands which have notes. If you uncheck it, you will see observed hands with notes too, but you will also see one row in the report for every player in each hand. To see each hand once with "Filter on Active Player" unticked you can add a Simple Filter for "Post Big Blind" on the "Actions" tab of the Filter window. This will mean that we see each hand from just the BB's point of view, and since there is only one BB per hand we only see each hand once. Click the OK button to apply your filter. You are now ready to run the report, so click the Save button then the Run Report button top right. From this report you can double click a hand to see the Hand History window and attached notes (as before).
OPCFW_CODE
understanding MongoDB chunk split I'm new in MongoDB and I'm reading the manual. I've understood what shard and chunk are (other distributed systems have similar concepts) but I'm struggling to undestand these two lines: The smallest range a chunk can represent is a single unique shard key value. A chunk that only contains documents with a single shard key value cannot be split. This is the link of documentation: data partitioning. Given the example provided by the documentation with a minKey = 0 and maxKey = 200 can anyone give me an example of a chunk that can be splitted and one that cannot be splitted? Especially how documents inside a chunk that is not splittable look like? I think that if x is the shard key and the chunk relative to range 175-200 is the smallest, so unsplittable, a document with x=180 will be insert in that unsplittable chunk. I'm wrong? What will happen for other types of key? Let's assume that you have a collection of tweets that is sharded. For simplicity I'm going to use 'account_id' as the shard key (e.g. x in your question). Note that this is a bad shard key for this use case for reasons we'll see soon. The collection is sharded and the range of accounts_id's are broken up into chunks that will be distributed across the shards. One chunk will refer to the account_ids from 175-200. After some time, each of these accounts continue tweeting and the size of this chunk grows to a point where it split into two chunks: [175, 183] and [184,200]. Going further, assume that there is an incredibly prolific user (lets say account_id: 180) in this range that tweets non-stop. Eventually chunk splits will occur to the point where the this account is in a chunk all by itself, e.g. [180,180]. The size of this chunk will continue to grow as more and more tweets are added to the collection, but the chunk cannot be split as the shard key is at its finest granularity, which is a single account_id. There may be a large number of documents corresponding to this chunk, but there is no way to split this chunk by filtering only on the account_id. This specific case is why this may be an inadvisable shard key. In comparison, suppose the collection is sharded based on tweet_id. This value would theoretically be unique, so there isn't the risk of a single value growing the size of a chunk to where it can not be split.
STACK_EXCHANGE
Computer science has taken a priority slot for K–12 teachers, as experts forecast a majority of jobs will incorporate some kind of coding skills or computational thinking by as early as 2020. One way that schools are engaging students is by introducing coding activities in makerspaces, where students can use the creative culture to start establishing the building blocks of computational thinking. What Are the Benefits of Makerspaces in Schools? In makerspaces, usually set up in schools or libraries, students work collaboratively on projects that encourage using creative solutions to explore areas in science, technology, engineering and math. “One of the goals of any makerspace should be to instill the maker mindset in students through a series of creative experiences while simultaneously building 21st-century skills,” computer science teacher Nick Provenzano writes for Getting Smart. The idea of using these hands-on environments teach complex technical skills like coding has garnered attention from major companies like Google, who teamed up with the American Library Association in 2017 to fund the creation of coding pathways in makerspaces throughout the country through the Libraries Ready to Code initiative. Makerspaces have revolutionized what school libraries can do. Everything from craft supplies to microcontroller kits to 3D printers have been added to library spaces to allow students to think outside the box and innovate. How Makerspace Tools Teach Students Coding Concepts While computer science is usually associated with computer screens and coding programs, makerspaces offer students a chance to visualize core computer science concepts through hands-on learning. For example, students can take advantage of LittleBits code kits to practice creating simple programs through Arduino to control LittleBits light sensors and motors. At PS 452 in New York, students used LittleBits coding tools while learning about Egypt to create pyramids and then program them to move, according to the United Federation of Teachers, promoting computational thinking and inspiring cooperative learning. “Kids were not only learning about construction and building and architecture, they were pulling out books to research designs, working together, discussing and learning from each other,” Michele Kirschenbaum, the librarian at PS 452, who was awarded a makerspace kit from the Department of Education’s Office of Library Services, told UTF. “Our library has really transformed from a quiet zone into a vibrant, creative center.” Makerspaces in Schools Make Coding Accessible to Students While some schools are already incorporating computer science into their daily curricula, there are districts that either do not have designated classes or do not have enough space to give a spot to every student who is interested. Unlike formal classes, makerspaces are always accessible to any student, making them ideal locations for students who may be interested in computer science but don’t have access to the tools they need to get started. “Makerspaces help to make the resources more readily available and create a more equitable access,” says Diana Rendina, a Librarian at Tampa Preparatory School and makerspace consultant. “It levels the playing field, giving students a chance to explore computer science as opposed to limiting it to those taking computer science classes. To me, it’s the ultimate equitable-access space.” New Makerspace Ideas Empower Students by Guiding Their Learning Makerspaces not only give students the access they need, they encourage them to explore their own interests, allowing them to have a say in their own learning, which they may not get in a traditional classroom. At Tampa Prep, students were interested in exploring virtual reality and programming. The school now has a lab space dedicated to VR and coding exploration, equipped with three Oculus Rift headsets, three HTC Vive headsets and one HTC Vive mobile unit, according to the Learning Counsel. Using Unity 3D, the students have been able to create their own apps and video games, and this has empowered students to lead the charge not only in their own coding education, but their classmates’, as well. One Tampa Prep student taught a computer science course last summer for fellow students who were interested in learning about the tech, according to Rendina. Students also designed an engineering application, and are now working with the University of South Florida’s school of engineering to take the app to the next level.
OPCFW_CODE
for the past couple of releases of Hadoop 2.X code line the issue of integration between Hadoop and its downstream projects has become quite a thorny issue. The poster child here is Oozie, where every release of Hadoop 2.X seems to be breaking the compatibility in various unpredictable ways. At times other components (such as HBase for example) also seem to be affected. Now, to be extremely clear -- I'm NOT talking about the *latest* version of Oozie working with the *latest* version of Hadoop, instead my observations come from running previous *stable* releases of Bigtop on top of Hadoop 2.X RCs. As many of you know Apache Bigtop aims at providing a single platform for integration of Hadoop and Hadoop ecosystem projects. As such we're uniquely positioned to track compatibility between different Hadoop releases with regards to the downstream components (things like Oozie, Pig, Hive, Mahout, etc.). Every single single RC we've been pretty diligent at trying to provide integration-level feedback on the quality of the upcoming release, but it seems that our efforts don't quite suffice in Hadoop 2.X stabilizing. Of course, one could argue that while Hadoop 2.X code line was designated 'alpha' expecting much in the way of perfect integration and compatibility was NOT what the Hadoop community was focusing on. I can appreciate that view, but what I'm interested in is the future of Hadoop 2.X not its past. Hence, here's my question to all of you as a Hadoop community at large: Do you guys think that the project have reached a point where integration and compatibility issues should be prioritized really high on the list of things that make or break each future release? The good news, is that Bigtop's charter is in big part *exactly* about providing you with this kind of feedback. We can easily tell you when Hadoop behavior, with regard to downstream components, changes between a previous stable release and the new RC (or even branch/trunk). What we can NOT do is submit patches for all the issues. We are simply too small a project and we need your help with that. I truly believe that we owe it to the downstream projects, and in the second half of this email I will try to convince you of that. We all know that integration projects are impossible to pull off unless there's a general consensus between all of the projects involved that they indeed need to work with each other. You can NOT force that notion, but you can always try to influence. This relationship goes both ways. Consider a question in front of the downstream communities of whether or not to adopt Hadoop 2.X as the basis. To answer that question each downstream project has to be reasonably sure that their concerns will NOT fall on deaf ears and that Hadoop developers are, essentially, 'ready' for them to pick up Hadoop 2.X. I would argue that so far the Hadoop community had gone out of its way to signal that 2.X codeline is NOT ready for the downstream. I would argue that moving forward this is a really unfortunate situation that may end up undermining the long term success of Hadoop 2.X if we don't start addressing the problem. Think about it -- 90% of unit tests that run downstream on Apache infrastructure are still exercising Hadoop 1.X underneath. In fact, if you were to forcefully make, lets say, HBase's unit tests run on top of Hadoop 2.X quite a few of them are going to fail. Hadoop community is, in effect, cutting itself off from the biggest source of feedback -- its downstream users. This in turn: * leaves Hadoop project in a perpetual state of broken * leaves Apache Hadoop 2.X releases in a state considerably inferior to the releases *including* Apache Hadoop done by the vendors. The users have no choice but to alight themselves with vendor offerings if they wish to utilize latest Hadoop functionality. The artifact that is know as Apache Hadoop 2.X stopped being a viable choice thus fracturing the user community and reducing the benefits of a commonly deployed codebase. * leaves downstream projects of Hadoop in a jaded state where they legitimately get very discouraged and frustrated and eventually give up thinking that -- well, we work with one release of Hadoop (the stable one Hadoop 1.X) and we shall wait for the Hadoop community to get their act together. In my view (shared by quite a few members of the Apache Bigtop) we can definitely do better than this if we all agree that the proposed first 'beta' release of Hadoop 2.0.4 is the right time for it to happen. It is about time Hadoop 2.X community wins back all those end users and downstream projects that got left behind during the alpha
OPCFW_CODE
How to filter cached query in Laravel Need help/advice with this concept. I have pretty complex fluent query which pulls rows according to users filters. I was thinking of making unfiltered (only joins, without where/whereIns) query which would be cached, and then somehow filter that cached query according to users need. There's 2-3 seconds lag when querying db each time form filter changes, so i'm guessing this can perform better. Now unfiltered query is around 5k rows, and average filtered one brings 500-1000 rows. Query is around 25 columns with 4 CONCATS, 3 CASE statements and 14 leftJoins. Is that right way? Any other suggestions? Thanks in advance! Y which system caches the unfiltered results ? none yet. I thought Laravel 5.1 Cache, with file driver? You can also use the MySQL / MariaDB cache . so you never have old results in it. when you write to a table that you use in your cache result the cache was cleared ok, thanks for a tip, i'll look at it. So it's like creating a view? @Yuray You can read How the Query Cache Operates from the MySQL manual. Ok, so i see cache is flushed every time tables from query are updated. Since there's lot of writing to tables, i guess it will not be performance gain. Idea is that some supervisor, pulls data and do some analytics with data for example till today. So i would like to improve performance on "old" data, what is happening now isn't important. And if it is, I thought i would make "refresh" query button, which will "forget" cached query and make a new one. Then you can use the Laravel Cache and generate a unique key (ideally based on the query parameters, so a set of filters will be cached the first time but retrieved from cache the second). For clearing the cache you can run the php artisan cache:clear command, or if you want to do it programatically you can use Artisan::call('cache:clear');. Wouldn't be better way to cache unfiltered query? So when user changes filters that filters are applied to cached "table"? Someting like creating excel pivot with db source. Once it is refreshed you are manipulating data via filters without querying db.? Caching an entire table's worth of information will eat up quite some memory (even more if you're using Eloquent models), so my suggestion would be not to do so. Just filter it with standard laravel collection methods. Maybe you can use sql view. Or you can store your filtered data to another database table. And you can update it using a trigger automatically. By the way you can filter your data fastly from database table using sql. It will be like dbcache, but you will control it.
STACK_EXCHANGE
Paying a software engineer is not easy. There are many different types of jobs, from software developers to application developers, and the number of them varies wildly depending on the size of the company. For instance, a small startup could have a software developer who earns $20,000 to $30,000 per year. A large company might have a skilled engineer making $40,000-$60,000 a year. Here’s a look at what the typical salary of software engineer jobs is and how it varies from company to company. Types of software engineers and their salaries Software developers and software engineers are often considered “programmers,” but they can also work in a variety of other fields, from security to design. A skilled software engineer can earn anywhere from $40 to $100,000, depending on what company he or she works for. A computer scientist can earn $60,00 to $80,000 in software engineering jobs, depending upon the field of expertise he or her works in. Some companies may have a small number of programmers in their engineering department, while others may have up to 100. Software engineers can also be engineers working in other areas, such as engineering design or design systems, or in a related field. A software engineer’s work can often involve different tasks, from programming the engine in a game to debugging an app, or even developing and maintaining it itself. But, to be a good software engineer, you must be able to apply what you’ve learned to a problem and solve it, said David Stolper, a career counselor and founder of CareerBuilder. StolPer said software engineers can make more money in a startup than a developer at the same position, and they can work with teams to develop new features, work on prototypes, and test new code. The key to a good job, he said, is that you’re motivated. Software developers can be more expensive, but their salaries tend to be higher than their programmers. While most software engineers receive the majority of their salary in bonuses, the software engineer salary varies depending on their role. A full-time software engineer might make $50,000 or more, while a part-time developer might make between $20 and $30. The most common salaries for software engineers vary from $30 to $50 million. Some of these salaries include bonuses and other perks that often go beyond the typical employee’s salary. Paying Software Engineers The salary for a software engineering position can vary depending on how much the company offers. In general, software engineers usually receive a monthly salary of $80 to $120 per month. In the case of a startup, that salary ranges from $60 to $90. Software engineer salaries in California can range from $65,000 – $90,000 depending on where they’re based. Some software engineers work for large companies, while other work for smaller companies. A small startup may only have one software engineer working for them, while larger companies often have more than one. The software engineer position is generally viewed as a “high-growth” position, said Dan LeBlanc, a software consultant and director of the Center for Software Engineering in Los Angeles. “You get rewarded for working on a project that is well-designed and executed, which is really important to success in this job,” LeBlANC said. Software engineering salaries vary widely, but LeBlAClanc said a large company’s software engineer would make around $80 million per year in the United States. LeBlanco also said software engineering is often a “two-man job” that involves two to three engineers working on the same project, and sometimes several people. Some people may be paid $100 per hour, which would equate to $40 per hour in California, he added. The average salary for software engineer roles varies greatly depending on company to the extent that a company pays for them. For example, if a startup pays the salary of a full-timber engineer, they can expect to pay them at least $60 million per person, but they may also be offered the chance to make more. Software Engineer Salary Trends in the USThe average salary of an employee who works in software is set by the state where they work. For the U.S., software engineers tend to work in states where salaries are generally higher, according to a 2016 report by the National Association of Software and Information Professionals. For every $1,000 of salary that an engineer makes in a state, they are able to pay for at least one year of tuition at a four-year college. California has a relatively high average salary per person in the U., at $61,856. However, that state is a relatively low-paying state, according a 2016 study by the Association of State Colleges and Universities. The national average salary in the country is $66,000. A few states, such, California and Massachusetts, have lower salaries per person than the U of A’s $70,000 average,
OPCFW_CODE
I was trying to call a method as below: $array = array_1 = %w(tuna salmon herring) array_2 = %w(crow owl eagle dove) def parser (*argument) argument.each do |item| $array << item end end parser (array_1,array_2) $array.flatten! puts $array Error: ===== D:/Rubyscript/My ruby learning days/Scripts/test.rb:13: syntax error, u nexpected ',', expecting ')' parser (array_1,array_2) # taking multiple arguments generates error ^ No I fixed the code by removing the space in the method call of parser as below: $array = array_1 = %w(tuna salmon herring) array_2 = %w(crow owl eagle dove) def parser (*argument) argument.each do |item| $array << item end end parser(array_1,array_2) # taking multiple arguments generates error $array.flatten! puts $array Output: ======= tuna salmon herring crow owl eagle dove But in the first method why such `space` causes the errors to be thrown up? on 2013-03-19 02:33 on 2013-03-19 02:51 On Mon, Mar 18, 2013 at 8:33 PM, Pritam Dey <email@example.com> wrote: > end > ^ > $array << item > salmon > Posted via http://www.ruby-forum.com/. > In Ruby, you do not have to use parens around a method. So `method(arg)` can be written as `method arg`. Lets say arg was some expression, you might want to put parens around it to make it clearer. `method (true && false)` which corresponds to `method((true && false))` So when you say `parser (array_1, array_2)`, that becomes `parser((array_1,array_2))` but `(array_1, array2)` is not a valid expression in Ruby. That is what is meant here ( http://stackoverflow.com/questions/15488899/how-to...) when he says "Instead of treating array_1 and array_2 as args, it's treating it as a parenthesized expression" -Josh on 2013-03-19 17:29 To expand a bit on Josh Cheek's reply: In general, get in the habit of not leaving a space before the opening paren of a function call that uses parens. That's what trips up the parser and makes it think you're passing one malformed expression, not multiple arguments. This used to trip me up in my early days of Ruby, because my standard coding style DID include a space there.... -Dave
OPCFW_CODE
Last Thursday, April 21, Ubuntu and all its official flavors were officially launched. As you all know, the standard version of Ubuntu uses the Canonical Unity graphical environment. Although I cannot say that I dislike it too much, I can understand all those who prefer another graphical environment, something that, in fact, is also my case, my preference being Ubuntu MATE. Although the general graphical environment is called Unity, Ubuntu uses many applications with the GNOME user interface and in this article we will teach you how to install GNOME 3.20 on Ubuntu 16.04. Ubuntu 16.04 uses GNOME 3.18 for the most part: GTK 3.18 alongside GNOME Shell 3.18, GM 3.18 and GNOME 3.18.x for most applications. Some of the exceptions are the Nautilus window manager that GNOME 3.14 uses and the Software Center and GNOME Calendar already using GNOME 3.20.x. If you want to update as much as possible to the latest version, you just have to keep reading. Table of Contents Install GNOME 3.20 on Ubuntu 16.04 In order to install GNOME 3.20 you need to use the GNOME 3 repository. Keep in mind that this repository does not have everything up to date yet, but applications such as Cheese, Epiphany, Evince, Discos and some more are. Nautilus, Gedit, Maps, System Monitor, Terminal, GTK +, Control Center, GNOME Shell and GDM are all updated to version 3.20. To install GNOME 3.20 you have to do the following: - We open a terminal and write the following commands: sudo add-apt-repository ppa:gnome3-team/gnome3-staging sudo apt update sudo apt dist-upgrade - Before confirming, you have to check that the packages you are going to remove there are none on which we depend. - Although you can enter a new graphical environment by logging out and choosing the new one from the login screen, it is best to restart and then choose the new environment. How to go back to GNOME 3.18 If we do not like what we see or there is something that is not executed correctly, always we can go back. To do this, we will open a terminal and write the following command: sudo apt install ppa-purge && sudo ppa-purge ppa:gnome3-team/gnome3-staging Keep in mind what we have said before: we can go back to GNOME 3.18, but Packages that we removed (if any) when installing GNOME 3.20 will not be reinstalled. Those packages will have to be reinstalled manually. Have you managed to install the GNOME 3.20 graphical environment on Ubuntu? What do you think? 19 comments, leave yours How can I reverse this update? How can I make the gnome work for me? apply command lines but gnome is not applied just restart and before logging in next to your username is the unity symbol, click there and choose gnome, put your password and voila you will be in a gnome environment In my case, I was updating from 14.04 and when installing Gnome 3.20, the Unity icon did not appear next to the username, so I had to do the following: sudo apt-get install gdm When the configuration screen appears select lightdm and after configuring restart. This will show the Unity and Gnome logo on the login screen. I really did not find this version of the environment attractive. I have executed the commands and after that, it made me choose between lightdm and gdm, of which I selected the second, then I left the desktop background and some other visual things of unity, such as the button borders, the color to which the buttons change when etc. are selected. and when restarting it stays on the purple screen with the ubuntu logo and the orange dots below and there it does not happen I installed it and when I entered lightdm (it gave me no other option) if I tried to choose another option that was not the default it would crash and after a while the screen would be purple. If I entered the default option, the same thing happened to Francisco G. The desktop background went away, he changed the fonts and the windows were missing many functions, besides that he set the icons to 150%, so as I was not convinced by anything of anything I went back to version 3.18.5 that I had until that moment Good friends, the same thing happens to me as Francisco G and well, I really don't like unity and I prefer the gnome environment, could you help me solve that problem? I tried to install gnome but when I restart the screen it goes black and it does not happen. completely black, without requiring a password or anything. totally black The same thing happened to me that everyone else… puuufff all the unity configuration was lost. How do I execute various commands? If you like GNOME - as is my case - use Ubuntu GNOME. It is the official version (or flavor) of ubuntu that brings GNOME as the default desktop .. Greetings I think something is wrong with this guide, it doesn't appear and I can't find it. Deisntale him to look elsewhere. Thanks so we learn very bad .. this does not work .. I misconfigure all giant icons, it does not show the separations of the menu options, or fart so we learn @Pablo Aparicio dedicate yourself to something else you do not make it a blogger. I have installed it and I cannot choose the gnome environment. When click crashes and I have to boot into unity again. And now how to uninstall this m… e To undo the changes: sudo apt install ppa-purge && sudo ppa-purge ppa: gnome3-team / gnome3-staging sudo apt-get update sudo apt-get upgrade or after the first command line go to the update manager and update Apply the command lines, reboot the machine several times and I don't get the unity sign to switch to GNOME. What happened was that the desktop and browser icons appear larger. How do I make them smaller? it didn't help me ... but thanks This does not work.
OPCFW_CODE
Below you can find details about some of my current research projects. Links to some of my other projects can be found on the Home page. Please also take a look at my research creation work on the following pages: The Ethics of Timbre My current major book length research project is preliminarily entitled The Ethics of Timbre. The first two years of the project were funded by a SSHRC Insight Development Grant. This research project explores the mutual influence of prominent movements in music and philosophy in France in the 1960s and 1970s and the continued influence of these encounters today. The specific movements explored are the musical emphasis on timbre in Messiaen, Michaël Lévinas and others that were later called ‘spectralists’, and the phenomenology and philosophical ethics of Emmanuel Lévinas. The project seeks to examine how musical and philosophical discourses influence each other through this historical moment with an emphasis on the relationship between music, ethics, experience, and society. The Lévinas family holds an interesting place in the intersection of philosophy and music that has not been explored in musicological or philosophical research. On Sundays in 1959, the philosopher Emmanuel Lévinas wrote his first magnum opus, Totalité et infini: Essai sur l’extériorité, his 10 year old son Michaël practiced the piano in preparation for the Academy. By 1974, when the elder Lévinas’s second magnum opus – Autrement qu’être ou au-delà de l’essence – was published, the younger Lévinas was studying with Olivier Messiaen. In 1970, Emmanuel had met Olivier Messiaen – who was not Michaël’s teacher – and the composer Iannis Xenakis, and the resulting discussions with his son resulted in a key passage in the 1974 publication. A few years later Michaël – along with some of Messiaen’s other students – founded the ‘groupe l’Itinéraire’, a group of performers and composers interested in the exploration of timbre, a movement that was later called ‘spectralism’. This group of composers explored the nature of sound and the physical basis of the spectra that make up sound. These are but a few of the mutual influences between two important and influential streams in French music and philosophy. Despite the connections between these important musical and philosophical movements, very little has been written about the connection of the two Lévinas’s in any discipline in the English language. This research explores these connections through the following research questions: - In what ways do phenomenology and spectralism intersect and influence each other, both in historical encounters and in thought process? - What ways do Emmanuel Lévinas’s ethics and spectralism appeal to experience and nature? - How does each movement respond to politics, especially in light of May ’68? - In what ways does each movement invoke notions of spirituality/the other/the beyond? Music and Ethical Responsibility My book, entitled Music and Ethical Responsibility published by Cambridge University Press, is now out in paperback. The research question that is common to all of my work is: how does music affect the ways people interact? This book explores this question directly. In a nutshell, it explores the ethical responsibilities that arise in musical experience. Using a phenomenological approach, I argue that all musical experience involves encounters with other people. Drawing upon the philosopher Emmanuel Levinas, I also argue that ethical responsibilities arise in encounters with others. So, what are the ethical responsibilities that arise in musical experience? That is what I explore in the book, using case studies ranging from improvisation to ‘other people’s music’ to noise along the way. To dig into this question, I explore a range of topics including: musical meaning, musical experience, and inherited ideas of music and morality. Below is the table of contents. You can read most of the introduction here. Here are some links to publisher descriptions and previews:
OPCFW_CODE
Tempurity™ System Voice Alarm Notifications This document describes how to implement, test, and debug Tempurity System voice alarm notifications. A voice alarm notification is a phone call in which a robotic voice relates the current state of a "monitored device" usually upon the initiation of an "alarm" state. The Tempurity Server uses a form of voice-over-IP to implement the phone call. When an alarm condition is detected the request to send a call travels from your Tempurity Server to a voice server on the internet which then makes the call to your phone. The implementation of voice alarms in Tempurity does not require dialers or special hardware of any kind. Any authorized Tempurity Server connected to the internet can send voice alarm notifications. Because every PC has the ability to make calls and because Tempurity software is easily downloaded, Tempurity can operate in a highly redundant fashion - with more than one PC watching a set of monitored devices anywhere in the world. Voice alarm notifications may not be delivered properly on phones with extensions, or those with answering machines. Always check to make sure that voice alarms are delivered properly using the test calls generated with every new alarm group before relying on them for monitoring your samples. Voice alarms are not active to some international phone numbers. If you are not sure about your country e-mail email@example.com. If you are reading this because you are wondering if Tempurity voice alarm notifications are currently operational, you can do a quick test of a phone call here. This webpage tests only the global ability of the system to make a call and your phone. If you are not getting test calls from your Tempurity Monitor - see the requirements section below. Tempurity also sends e-mail, text message, pages, and alarm status is always visible from the Tempurity Monitor's main display. For a general overview of the Tempurity architecture see the brief architecture overview. The requirements to implement voice alarms in Tempurity There are three requirements for the implementation of voice alarm notifications from your site: Voice passwords are available from Networked Robotics, however for long-term use your company, site, or department may need its own account. Voice accounts are of the following form and must be entered into the Tempurity Server Configuration utility using the numeric form below. Remember to enter the dash. After entry the Tempurity Server must be restarted for the voice code to take effect. External/Public IP addresses Networked Robotics must authorize your external or public IP address in order for voice calls to be sent by your Tempurity Server. The external IP address of the Tempurity Server is often different from the IP address of the Tempurity Server computer itself. The easiest way to find the relevant external IP address is, from the Tempurity Server computer, go to Google and type "What's my IP". You can also go to one of the pages listed such as This is the Ip address that should be sent to the Networked Robotics support group at firstname.lastname@example.org to enable voice alarm notifications from your Tempurity Server. Voice alarms will not be active until the voice code is entered and the Tempurity Server is restarted, and Networked Robotics has authorized your external IP. The specific authorization of source IP addresses for voice protects against the possibility that unauthorized people on the internet can make calls. Blocked Internet addresses Some companies may implement software that blocks certain external web pages. The external address used to make calls is api.voiceshot.com/ivrapi.asp over TCP port 80 This is not a web URL and customers should not try to connect to this manually through their browsers. However it may still be blocked by your company. If your phone is busy, or there is no answer the system will try again after a few minutes. After usually 3 tries, dependent on you voice account, it will stop calling until the next alarm stage. If the system gets an answering machine it will try to leave a message, however the timing on some answering machines is such that a partial message will be received. If an answering machine answers the call, no retries will be attempted until the next alarm stage. Foreign language voice alarm notifications Voice alarm notifications in foreign language versions of Tempurity (French, Spanish, Italian, Chinese, Japanese, Portuguese, etc) are issued in the Windows default language of the Tempurity Monitor computer. At the present time these foreign-language voice alarm notifications are difficult to understand because of the text to voice capabilities in these languages.
OPCFW_CODE
This is another post from my unit testing anti-patterns article series. Today, we will talk about code pollution. Code pollution: mixing test and production code Another anti-pattern that often comes up in unit testing is code pollution. It takes place when you introduce additional code to your main code base in order to enable unit testing. It usually appears in two forms. The first one is when you add methods to the existing classes. For example, let’s say that you have a Repository class that saves an order to the database and retrieves it back by its Id: And let’s also say that you write an integration test that creates a couple of orders and saves them to the database via the system under test (SUT). Now you need to verify that the SUT did everything as expected. You query data in the Assert part of the test and make sure it is correct. In other words, do something like this: How would you do this, assuming there’s no code in the main code base that can query all customer’s orders yet? The natural thing to do here would be to add this code to OrderRepository. After all, this functionality relates to both orders and the database, so that feels like an organic place to put such code to: However, his is an anti-pattern. You should avoid adding code to the main code base that is not used in production. Here, OrderRepository is a class from the production code base. By adding GetByCustomerId to it, we are mixing it up with code that exists solely for testing purposes. Instead of mixing the two up, create a set of helper methods in the test assembly. In other words, move the GetByCustomerId method from the production code base to the test project. So, why do that? Why not just keep it in OrderRepository? The problem with mixing the test and production code together is that it increases the project’s cost of maintenance. Introducing unnecessary code adds to that cost, even if it’s not used in production. And so, don’t pollute the production code base. Keep the test code separate. Code pollution: introducing switches The other form is bringing in various types of switches. Let’s take a logger class for example: How would you test the Controller’s SomeMethod? This method calls the logger which records the text to, say, a text file. It’s a good idea to avoid inducing such a side effect because it has nothing to do with the behavior you want to verify. How would you do that? One way is to introduce a constructor parameter in Logger to indicate whether it runs production. If it does, it can log everything as usual. If not, it should skip logging and return: Now you can write a test like this: It works but, as you might have guessed already, this is also an anti-pattern. Here, we introduced additional code to the production code base for the sole purpose of enabling unit testing. We have polluted Logger by adding the switch (isTestEnvironment) to it. To avoid this kind of code pollution, you need to properly apply dependency injection techniques. Instead of introducing a switch, extract an interface out of Logger, and create two implementations of it: a real one for production, and a fake one for testing purposes. After that, make Controller accept the interface instead of the concrete class: Now you can refactor the test too and instantiate a fake logger instance in place of the real one: As the controller expects the interface and not a concrete class, you can now easily swap one logger implementation for the other. As a side benefit, the code of the real logger becomes simpler because it doesn’t need to handle different types of environments anymore. Note that the ILogger itself can be considered a form of code pollution too. After all, it belongs to the production code base and the only reason it exists is that we want to enable unit testing. And it probably is a form of code pollution. Therefore, replacing the switch with the interface haven’t solved the problem completely. However, this kind of pollution is much better and easier to deal with. You can’t accidentally call a method on an interface that is not intended for production use. It’s simply impossible: all implementations reside in classes devoted specifically to the use in production. And you don’t need to worry about introducing bugs in interfaces either. An interface is just a contract, not the actual code you need to cover with unit test. - Don’t pollute production code base with code that exists solely to enable unit testing. - Don’t introduce production code that doesn’t run in production. - Instead of switches that indicate where the class is being used, implement proper DI. Introduce an interface and create two implementations of it: a real one for production and a fake one for testing purposes. I have two more unit testing anti-patterns to go. Stay tuned! 🙂 If you enjoy this article, check out my Pragmatic Unit Testing training course.
OPCFW_CODE
Build on branch release/2.5.x fails (community build) - HealthCertificateToolkit package is missing Avoid duplicates [ ] Bug is not mentioned in the FAQ [x] Bug is specific for iOS only, for general issues / questions that apply to iOS and Android please raise them in the documentation repository [ ] Bug is not already reported in another issue Technical details Device name: XCode Simulator, ENACommunity build iOS version: Simulator for iPhone 8 plus / iOS 14.4 App version: 2.5.1 Describe the bug The ENACommunity build at release 2.5 fails. The 'HealthCertificateToolkit' package is missing. Steps to reproduce the issue Checkout cwa source for iOS at branch release/2.5.x / tag v2.5.1 / commit 8c14cecc Configure XCode for ENACommunity build / iPhone 8 plus (14.4) Clean build folder and package dependencies Build The build fails. I get the error message "Missing package product 'HealthCertificateToolkit'" Expected behaviour Build successfull Possible Fix Additional context A similar error was reported with version 2.4, see #2862 and especially this comment. The issue had been fixed on a branch fix/community-build-june-edition . Internal Tracking-ID: EXPOSUREAPP-8480 I am not able to reproduce this, I can build as described above without any problems. The only difference is that I have a iOS 14.5 simulator. Build folder was cleaned before the build. it seems that the branch fix/community-build-june-edition was never merged (and maybe for reasons, so the fix possibly conflict with some other changes ....) @Ein-Tim do you build a clean release/2.5.x or did you merge it with the abovementioned branch? Did you ever checkout that branch, or was #2862 already fixed for you with the deletion of the json file? @ndegendogo I did build a clean release/2.5.x branch. I did not merge anything. I checked out the branch back when we had the issue before (#2862). I could remove the repo and clone it again, but tbh, I'm happy that it works for me 😅 @Ein-Tim Please do not risk to "kill" your working environment!! 😱😱 @heinezen @dsarkar any news on this? Or any hints or workarounds for me? Yesterday I tried with the newest branch release/2.8.x it is still broken Maybe it is related to the integration of the HealthCertificateToolkit, especially with the discussion at https://github.com/corona-warn-app/cwa-app-ios/pull/2880 just guessing @ndegendogo I just freshly checkout out the codebase with the 2.8.x branch. I see no build error. Also we have a our CI-Pipeline setup to check the community target, this seems to work fine as well. Could you try a fresh git clone and see if the Issue still persist? @marcussc Thanks for reaching out. I tried this yesterday also with a fresh clone, and on a new machine where I had never built cwa before; but I will try this evening again. I forgot maybe to mention that I was using the XCode GUI to build. Do you build with the GUI, or do you perform a command-line build? @marcussc also yesterday I had played a bit with the XCode project file. I think I found a change that at least the package dependencies were resolved and the compiler seemed to have started (not sure if it finished successfully), but I got then another error message, related to lint and the vendor folder ... I am considering to open a PR for this change, although I don't know much about your workflows; so just in the hope that it does not break anything for you .... ok, so more details now. I am building from the Xcode IDE (Xcode version 12.4). Build was first broken with cwa v2.4 (release/2.4.x branch), see #2862, and this was fixed on branch fix/community-build-june-edition. Unfortunately, my build is still broken on all other branches since then (release/2.5.x, release/2.6.x, release/2.7.x, and now release/2.8.x). The build does not even resolve the packages, error message is "Missing package product HealthCertificateToolkit". I tried to add the HealthCertificateToolkit to the Xcode project file, and this improved it a lot. Now it resolves the packages successfully and starts the build. I am going to create a PR with this improvement. Unfortunately, I still cannot build. I get now the error message "size: can't open file: 27813542 (No such file or directory)", see screenshot. @ndegendogo One of our devs might had the Idea what is going wrong: https://github.com/corona-warn-app/cwa-app-ios/pull/3318#pullrequestreview-723001982 could you check this on your side? :) I just wanted to add, that I'm not able to build the app too. (macOS 11.5.1 / Xcode 12.? / M1) We just update the Readme to make it more clear which project file to use: https://github.com/corona-warn-app/cwa-app-ios/pull/3323 @jucktnich could you please go into more detail why you are not able to build the app? I don't know. It's a week since then and I just tryed to build the app, cause I had some little ideas I wanted to try, but the error message from ndegendogo looks familiar to me. I don't have Access to the Mac now, so I can't try it. @marcussc @ArturFriesen YES! The xcworkspace was the right question / hint, now it resolves all packages nicely out of the box 🎉🎉🎉🎉🎉 We just update the Readme to make it more clear which project file to use Thanks also for this! The error message with the missing file with the mysterious name persists - but now I have the strong suspicion that its because my git client is too old ... I have only 1.8.4 and especially no support for lfs 😱🙈 @ndegendogo nice to hear! Yeah the size: can't open file: 27813542 (No such file or directory) seems to be an Issue due to git-lfs. Our internal and easiest fixed seemed to be to just delete the repo locally and clone it again. @marcussc my current git is too old, it does not support lfs I guess I have to upgrade my git ... Our internal and easiest fixed seemed to be to just delete the repo locally and clone it again. Thanks for the hint... so this might be needed in case the "new" git istallation is reluctant to take over ... I am back 🎉🎉🎉 Everything is working again 🎉🎉🎉 Thanks for all your support 🎉🎉🎉 Summarizing the two key points: cannot resolve packages, HealthCertificateToolkit missing => use the ENA.xcworkspace (not the project file) vendor/swiftlint ... oid: command not found ... size: can't open file: 27813542 (No such file or directory) => install git-lfs closing the ticket
GITHUB_ARCHIVE
Hi @mksrom & @ruhanga (or whoever else from Mekom I should CC?), I have a number of questions about some of the metadata content we’re using / will be using for the O3 RefApp - e.g. yesterday I was updating the drugs csv and noted a number of concerns that I’d like to change for the O3 RefApp, but I didn’t just want to willy-nilly change things that might impact Ozone or it’s integrations esp. w/ Pharmacy Inventory or Labs. Could we use the last 20 mins of the O3 Squad call tomorrow (Thurs) to discuss together? Here are some examples of what I’d like to discuss: - Changing to an openmrs-owned collection for Lab concepts - Using the Lab conepts collection ALSO as the Filter Definition collection (instead of having them separate, which we do now, and is causing me headaches) - Using CIEL codes instead of only UUIDs for drugs (e.g. here) - Having a non-doseage-specific option for most drugs (right now most drugs in the .csv file do not have a dose-agnostic option; e.g. I added just “Metformin” because there was no flexible option like that, only “Metformin 500mg” etc; but I’m not sure if this would break something w/ the pharmacy inventory workflow) - How we handle generic vs branded names - e.g. Metronidazole vs Flagyl - right now I’m in favor of having both in drugs.csv if both names would be commonly searched by a user. They’d have the same concept ID but different UUIDs. I’m not sure if that will cause other problems though. - How we organize the drugs.csv file - currently it’s a mix of some alphabetical ordering vs some categorical ordering. Because of the difference btwn generic vs branded I’m inclined towards categorical; e.g. I would add Metronidazole by Flagyl. Makes it easier for me the editor to see what similar drugs have already been added, without having to search all the lines for possible alternatives. - Convention in drugs.csv to move the Strength column to be next to the Name column, for safety (currently it’s easy to accidentally have a wrong strength because it’s just hard to see / very distant from the name) - i18n for drugs - seems we’re missing a field in Iniz to specify the language? Causing french/spanish/etc variations of drug names to show up for english users, and vice-versa. - I’m in the midst of a major manual effort to create a baseline set of diagnoses (without going crazy and importing like all Dx codes from ICD or CIEL); just want to explain my plan and confirm it will work w/ Ozone. Please CC any relevant folks FYI @ibacher and @ball @ddesimone @mogoodrich - if you’re able to make it this thursday at 8:30am EST that would be awesome
OPCFW_CODE
<?php // The contract.php file is force-downloaded // No need for sanitizing, nothing is saved on the server. $DEV_SIGNATURE = stripslashes ( $_POST['signature_capture'] ); if( $DEV_SIGNATURE ) { $client_email = $_POST['client_email_capture']; $dev_email = $_POST['dev_email_capture']; $HEADER = file_get_contents('header.txt'); //$CONTRACT_HTML = addslashes(file_get_contents('contract_html.txt')); $CONTRACT_HTML = str_replace( "'", "\'", stripslashes($_POST['contract_html_capture']) ); $FOOTER = file_get_contents('footer.txt'); $filename = stripslashes ( $_POST['file_name'] ) . '.php'; if (!$filename) { $filename = 'contract.php'; } header('Content-Disposition: attachment; filename=' . $filename); header('Content-Type: text/force-download'); header('Content-Type: text/plain'); header('Content-Type: application/download'); header('Content-Description: File Transfer'); /** Concatenate everything into a single .php file **/ echo "<?php /*\n"; echo $client_email ."\n"; echo $dev_email ."\n"; echo $filename ."\n"; echo $DEV_SIGNATURE."\n\n"; echo $HEADER; echo "\$CONTRACT_HTML='"; echo $CONTRACT_HTML; echo "';\n"; echo $FOOTER; } else { echo '<h2 style="text-align:center;">No signature received</h2>'; die(); } ?>
STACK_EDU
The flow management tool can be accessed in the Portal Management module of the Settings section in the admin by the content-management role. After choosing the portal (if there is more than one), select "Flows" from the left-side menu in the CONTENT section. A flow is a set sequence of various component-types that can be created, edited, and enabled at the portal level. Included in this article: - How to Create a Flow - Page and Content Components - Scheduling and Registration Components - Quiz and Survey Components How to create a flow Click the "CREATE A NEW FLOW" button above the table and enter a name for the flow in the title text box (max characters is 50). You can enter the settings for the flow by clicking the "Start of Flow" tile in the flow-builder field (the right portion of the screen). Once clicked, the editor appears on the left: - The Flow GUID (this is the unique ID for the flow) is automatically generated when you create a new flow and is listed here. - Enabling the flow (toggle to the right and green) makes it accessible to members of the portal. - This is the flow title that will appear to members and in the flow table on the main admin flow page. - When public access is allowed, the flow can be accessed by sharing the URL with someone; authentication is not required. - If the flow has been set to allow public access, you have the option to allow anonymous access, which means that the flow can be accessed and filled out by any anonymous user who is given the URL. - Enter any keywords associated with the flow. Keywords can help in grouping flows and searching by category. - The duration is the estimation of how long the flow will take to complete. Adding Flow Components Beneath the "Start of Flow" tile, all the components you want to have in the flow can be assembled and sequenced. You can compose the flow by dragging and dropping the components from the left side of the page into the flow-builder: Connect each component in the sequence you want by dragging the blue-connector line from one component to the next: There are component types that allow you to do whatever you'd like inside a flow. Component types allow you to create any custom content, create various types of quizzes and surveys, gather member registration data, allow members to schedule appointments, and pull in content from the system and custom libraries. Pages are a message that can be appended to the beginning or the end of another question; this component is simply text that you enter into the WYSIGYG editor. Once a page component is dragged into the flow-builder, simply click inside of the box to open the editor: Content components include any video, audio, and article that is enabled in the content library for the portal. Once you drag the content component box into the flow-builder, click inside of it and all content available for the relevant portal will appear on the left. You can search for the content by typing key words and titles into the search field or by scrolling down through the list. Note that there is the option to require video/audio content to be fully played through when it is reached in the flow. If this option is toggled on, the user cannot fast-forward through the content. The registration component only exists for portals that have authentication set as standard login. For applicable portals, the registration component can be used to create an intake process for members. Clicking inside the Registration box after it's been dragged into the flow-builder opens the component's settings. - Collect Contact Information: Enable this option (toggle to "on" position) if the member does not exist within the system—that is, collecting contact info is necessary for unknown users. When this setting is enabled, members will provide contact information (name/email/mobile number) when they access the flow. If the member already exists within the Engagement Rx system then this should be toggled to the "off" position—for example, if a known member is being sent an email or text through a coaching automation for data collection, this setting should be disabled. - Portal Registration: This can only be enabled (toggled to "on" position) when the "Collect Contact Information" option is enabled; when both are enabled, the flow will collect a password from the member. Note: this is only applicable to portals that are using standard login. Beneath these toggles are text fields that you can populate for both introduction text and waiver text. The scheduling component allows members to schedule meetings and appointments as part of the flow. The redirect component allows the redirection from the flow to a specific destination outside of the flow. Once you drag and drop it into the builder field, click inside the tile and the list of options for the redirect endpoint appears in the left side of the screen: - Member dashboard - Tracker dashboard - BMI calculator - Calorie calculator - Content library - Goal reminder center - Mobile app - Personal journal - Course catalog - Course (when this option is selected, choose which course you are redirecting to from the list that appears to the right; all courses available in the portal are listed) - URL (when this option is selected, you can enter the URL into the field that appears to the right of the option; you can also edit the default text that will be presented to the member when they reach this redirect component) Language Selector Component The language selector component enables the user to choose their preferred language from a dropdown list of 14 options: The flow author can enter the question (phrased in whatever way is desired), and can also, if needed, include a question header and help text. Once a language is selected by the user in the flow, the web page language selection changes to the selected language and the member profile for the user is updated to match the selection. After entering the tracker component into the flow, choose which tracker you want to enter by selecting it from the drop-down menu which will list all trackers enabled for the flow. You can choose to create a question header for the tracker and, if wanted, you can require a response. Quiz and Survey Components - Multiple Choice—One choice will be selected by members. *, ** - Multiple Selection—Multiple choices can be selected by members. *, ** - Dropdown—Responses are displayed in a dropdown field from which members can select one answer.*, ** - Single-Line—A short text-field response can be entered by members. ** - Multi-Line—A multi-lined text-field response can be entered. - Date—Allows a response in the form of a calendar date. *, ** - Range—Allows a numbered response between a range you select. ** (See below for more details.) - Gender—Allows a single response from the standard gender input used across the portals site, including: Male, Female, TransMale, TransFemale, GenderQueer, NonBinary. ** * These components have a branching ability. To learn about branching, see this article. ** These components can be mapped to custom fields, meaning custom fields can be configured using one of these specific question types. All question-type components require you to create a question (you have the option to also create a header and help text, if you want). For multiple choice, multiple select, and number range components, you need to provide the distractor list from which the end-user will choose a response. Here's an example of a multiple choice component: - Enter the main question text in the "Question Title" field - Enter any help text, if needed, in the "Help Text" field - Fill in the responses/distractors that you want to appear with the question - Beneath this main section are three options that you can toggle on if necessary: - Require response—When toggled on, the end-user will have to answer the question before continuing. - Add question header—When toggled on, a WYSIWYG editor automatically opens above the "Question Title" field. You can create the question header here (this text will precede the text in the "Question Title" field and can be used to provide more context about the question). - Add "Other" option—When toggled on, an "Other" option will appear in the list of distractors with the label you choose in the text field that opens when selected. - System attribute—When toggled on, the end-user's response to this form will be copied to the mapped attribute that you choose from the dropdown. Mapped attribute options include Birthdate, CustomField1, CustomField2, CustomField3, CustomField4, and CustomField5. This option is only applicable to certain question types. Creating a Number Range Question The number range question type includes the "Question Title" field and the option for a header and help text like the other question types, but the way you select the distractors is slightly different. Look at this example of a completed number range question: In this example, the question header is toggled on, help text is provided, and the range is set: - The question will be preceded by the phrase "Pool is a classic game of skill and precision" - The question for the end-user to answer is "Which is your favorite billiard ball?" - The help text provides a general description of pool balls for further guidance - The range is set: the minimum (the first number in the list and the lowest possible answer) is 1, the increment is 1 (this must greater than zero; all options will be listed increasing by the increment you set), and the maximum answer is 16. This means the user will be able to choose a response of 1, 2, 3, 4, etc, up to 16. Had the increment been set as 2 with the minimum set to 1, answers would be listed as: 1, 3, 5, 7, 9, 11, 13, 15. Previewing the Flow You can see how the flow works by previewing it with the "PREVIEW" button in the upper right of the flow-builder field: After saving the flow, it can be accessed from the table on the main page where you can edit, clone, preview, view (see the architecture), enable or disable, and delete. Note: Once you have created a new flow or customized a system flow, you can use the import/export function to easily and quickly copy the flow to multiple portals. This function prevents you from having to recreate and re-customize the same flow. To learn how to do this, see this article. Was this article helpful? Thank you for your feedback Sorry! We couldn't be helpful Thank you for your feedback We appreciate your effort and will try to fix the article
OPCFW_CODE
Version 3.4.4 and 3.4.5 are not working after update PS <IP_ADDRESS> I updated Mollie to version 3.4.4 (and 3.4.5 too) and half of the functions are gone. In module all the payments should shown as in older version like this: but in version 3.4.4 and 3.4.5 there is nothing shown: In orders should a table be shown with ordered products for Klarna payments. But it is gone in version 3.4.4 and 3.4.5 too. I am using PS <IP_ADDRESS> PHP version 7.2.1 We couldn't reproduce the issue. Could you try to reinstall the module or if it doesn't help you can give us your FTP so we can check what is posing this problem. You can send FTP to<EMAIL_ADDRESS> Done! You just got all access with email. Can you check your shop now if everything is correct? Everything looks fine or am i missing something? Hello, okay, thank you! It seams to work now again. What was the issue? Why can I not change the state for “Refund”? If I try to choose another State from the drop down list, it does not change. It always goes back to state “Erstattet”. From: margud Sent: Wednesday, October 16, 2019 2:59 PM To: mollie/PrestaShop Cc: didi-2018 ; Author Subject: Re: [mollie/PrestaShop] Version 3.4.4 and 3.4.5 are not working after update PS <IP_ADDRESS> (#132) Can you check your shop now if everything is correct? Everything looks fine or am i missing something? — You are receiving this because you authored the thread. Reply to this email directly, view it on GitHub, or unsubscribe. -- Diese E-Mail wurde von Avast Antivirus-Software auf Viren geprüft. https://www.avast.com/antivirus I just upgraded module, i didn't do any other changes so i don't know why it didn't work before. If you could give me order tab permission i could investigate status issue. Maybe you have changed order sufferance generator in your theme and if its validation fails there might be some problems with your orders. I just check and yes it doesn't work for some reason. We will investigate and try to fix it as soon as possible. For now if you want to change it you can do it by changing it in database configuration table by finding its name MOLLIE_STATUS_REFUNDED and changing its value 20 or wait for next release. 2019-10-16, tr, 17:27 didi-2018<EMAIL_ADDRESS>rašė: Okay you updated it from 3.3.5 to 3.4.5 and I from an other version to 3.4.4 and during update process Prestashop showed a Error message and disabled the module. However now it works again. This is not state in order. I can not change the state in Mollie module für state “refund” this is what I mean. I can not choose this state “Rückzahlung nach Rücksendung” it always goes back to “Erstattet” From: margud Sent: Wednesday, October 16, 2019 3:28 PM To: mollie/PrestaShop Cc: didi-2018 ; Author Subject: Re: [mollie/PrestaShop] Version 3.4.4 and 3.4.5 are not working after update PS <IP_ADDRESS> (#132) I just upgraded module, i didn't do any other changes so i don't know why it didn't work before. If you could give me order tab permission i could investigate status issue. Maybe you have changed order sufferance generator in your theme and if its validation fails there might be some problems with your orders. — You are receiving this because you authored the thread. Reply to this email directly, view it on GitHub, or unsubscribe. -- Diese E-Mail wurde von Avast Antivirus-Software auf Viren geprüft. https://www.avast.com/antivirus — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/mollie/PrestaShop/issues/132?email_source=notifications&email_token=AMMJ26M5DR3G6M3LENGMLW3QO4QFFA5CNFSM4I72EQEKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEBMWDFA#issuecomment-542728596, or unsubscribe https://github.com/notifications/unsubscribe-auth/AMMJ26MCNEKQAYO6IROFBVDQO4QFFANCNFSM4I72EQEA .
GITHUB_ARCHIVE
// // DataViewCacheTests.swift // DataFramework_Tests // // Created by Alex on 4/9/19. // Copyright © 2019 CocoaPods. All rights reserved. // import XCTest @testable import DataFramework import ReactiveSwift class DataViewCacheTests: XCTestCase { private final class ItemModel: Uniq, Equatable, CustomDebugStringConvertible { static func == (lhs: DataViewCacheTests.ItemModel, rhs: DataViewCacheTests.ItemModel) -> Bool { return lhs.id == rhs.id } let id: Int var identifier: String { return "\(id)" } init(_ id: Int) { self.id = id } var debugDescription: String { return "id = \(id)" } } private final class ItemViewModel { static var counter: Int = 0 let model: ItemModel init(_ model: ItemModel) { self.model = model ItemViewModel.counter += 1 } } override func setUp() { ItemViewModel.counter = 0 } func testInsert() { let property: MutableProperty<[ItemModel]> = MutableProperty([]) let item1 = ItemModel(1) let item2 = ItemModel(3) let item3 = ItemModel(5) property.value = [item1, item2, item3] let result = DataResult.create(data: property.producer) let cachedView = DataView.create(data: result).map { ItemViewModel($0) }.cached var item = cachedView[1] XCTAssert(item.model.id == 3) XCTAssert(ItemViewModel.counter == 1) item = cachedView[1] XCTAssert(item.model.id == 3) XCTAssert(ItemViewModel.counter == 1) let item4 = ItemModel(2) property.value = [item1, item4, item2, item3] item = cachedView[1] XCTAssert(item.model.id == 2) XCTAssert(ItemViewModel.counter == 2) item = cachedView[2] XCTAssert(item.model.id == 3) XCTAssert(ItemViewModel.counter == 2) } func testMove() { let property: MutableProperty<[ItemModel]> = MutableProperty([]) let item1 = ItemModel(1) let item2 = ItemModel(3) let item3 = ItemModel(5) let item4 = ItemModel(7) property.value = [item1, item2, item3, item4] let result = DataResult.create(data: property.producer) let cachedView = DataView.create(data: result).map { ItemViewModel($0) }.cached var item = cachedView[1] XCTAssert(item.model.id == 3) print(ItemViewModel.counter) XCTAssert(ItemViewModel.counter == 1) item = cachedView[2] XCTAssert(item.model.id == 5) print(ItemViewModel.counter) XCTAssert(ItemViewModel.counter == 2) property.value = [item1, item4, item3, item2] item = cachedView[1] XCTAssert(item.model.id == 7) XCTAssert(ItemViewModel.counter == 3) item = cachedView[2] XCTAssert(item.model.id == 5) XCTAssert(ItemViewModel.counter == 3) } }
STACK_EDU
UMass Boston CS 444 project代写 – 这是project代写的题目 1 Access to MIC and Minix Two people form a team for the projects. You may choose to be a team by yourself. The teams are numbered asteam01,team02,.. .,team50. The instructor will give you a piece of paper with your team number and password. You need to do the projects on a CentOS server with the domain name mic.umb.edu. This server is not accessible from outside the UMass Boston network. So you need to first ssh tousers.cs.umb.eduthen from there ssh to MIC. Lets useteam13as an example. If you copy and paste the following commands, remember to replace 13 with your team number. You get a Minix virtual machine on VirtualBox. It is stored in the folderVirtualBox VMsunder your teams home directory. The full pathname is/home/team13/VirtualBox VMs/team13. All work can be performed from the command line. You use the following command to turn on your VM. mic$ vboxheadless –startvm team13 & The VM is headless it runs like a rack-mounted server without a keyboard, mouse, or monitor attached. The only way to access it is through ssh connection. Your VM has been configured to listen on port22nn. For example, team 13s VM listens on port 2213. Use the following command to make the first contact. mic$ ssh -p 2213 root@localhost You are the root user of your VM. The initial password isWeLuvMinix!, including the exclamation mark but excluding the comma. You must change the Minix password. As the root user, your home directory is/root for Linux, userxyzs home directory is located at/home/xyz. Your VM has been configured to let the instructor login without entering a password. This is done through public key authentication. There is a hidden directory that containsssh configuration, located at /root/.ssh. If you run the following commands in Minix, you should see three files. minix# cd /root/.ssh minix# ls -l total 12 -rw——- 1 root wheel 1570 Jan 23 13:18 authorized_keys -rw——- 1 root wheel 1679 Jan 20 19:38 id_rsa -rw-r–r– 1 root wheel 392 Jan 20 19:38 id_rsa.pub The fileidrsacontains the private key of your VM, and the fileidrsa.pubthe public key. The fileauthorizedkeyscontains the instructors public keys, which allow the instructor to log on your VM without entering a password. You mayappendyour public keys toauthorizedkeysso that you can do the same. However, do not delete the instructorspublic keys. Otherwise, the instructor will not be able to log on your VM to evaluate your projects. 2 Project Tasks The following are the tasks for this project. - Change the Minix password - Install the bundled package for code development minix# pkgin update minix# pkgin_sets - Clone the Minix source code minix# cd /usr minix# git clone git://git.minix3.org/minix src - Rebuild Minix minix# cd /usr/src minix# make build //this will take 20 minutes - When you log on Minix, there is a greeting message For post-installation usage tips such as installing binary packages, please see: http://wiki.minix3.org/UsersGuide/PostInstallation For more information on how to use MINIX 3, see the wiki: http://wiki.minix3.org Wed like your feedback: http://minix3.org/community/ Your task is to append to the greeting with your own message This Minix installation is modified by team13, Name1 and Name February 3, 2022 You figure out how to do it. The source code repository https://github.com/Stichting-MINIX-Research-Foundation/minix is useful to study code. You dont need to submit anything for the project. After you complete the tasks, turn off your VM and leave it alone. You are expected to finish the project before2 PM on the due date. At that time, the instructor will log on and and clone your VM for grading. If your VM is still running, it will be shut down without warning. 3 Grading Rubric - (20 points) Minix password is changed - (20 points)pkginsetsare installed - (20 points) source code is cloned - (20 points) a new boot image is built - (20 points) the greeting message is modified 4 Some Commands to Manage the VM The following are some VirtualBox commands that manage VMs. It is a good idea that you make a clone of your pristine VM right away. If your VM becomes corruptedwhile you are working on the projects, you can unregister (and delete) the corrupted one and clone a fresh one from the backup. If all clones you have are bad, you can ask the instructor for a newone. Please keep no more than three clones. Each one occupies 4 GB. The SSD on MIC has 1 TB. If you keep too many clones, we will run out of space. //turn on, turn off vboxheadless –startvm team13 & vboxmanage controlvm team13 poweroff //get info vboxmanage list vms vboxmanage list runningvms vboxmanage showvminfo team //clone, register, unregister (and delete) vboxmanage clonevm team13 –name backup vboxmanage registervm /home/team13/VirtualBox\ VMs/backup/backup.vbox vboxmanage unregistervm backup –delete //rename vboxmanage modifyvm backup –name backup //ssh port forwarding vboxmanage modifyvm team13 –natpf1 delete guestssh vboxmanage modifyvm team13 –natpf1 "guestssh,tcp,,2213,,22" The last two commands that manage ssh port forwarding are included here for your knowledge. Your VMs should listen on the port that is assigned to your team. Do not listen on other teams ports without their consent. It is cheating and a violation of privacy. 5 VM Migration You can migrate your VM to your laptop as follows. //from a terminal on your laptop yourLaptop$ ssh -L 2200:mic.umb.edu:22 [email protected] //log on the CS server and create an ssh tunnel from laptop to MIC //port 2200 of your laptop is translated to port 22 of MIC //leave this ssh session open //from another terminal on your laptop //cd to where you want to store the VM yourLaptop$ scp -P 2200 -r team13@localhost:~/VirtualBox VMs/team. //notice the uppercase P //the option -r means recursively, to include subdirectories //notice the period. at the end //the VM is 4 GB, so this may take a while After the VM is downloaded, register it with VirtualBox on your machine. Then you can do the project locally. After it is completed, you need to upload the VM backto MIC. //from a terminal on your laptop yourLaptop$ ssh -L 2200:mic.umb.edu:22 [email protected] //leave this ssh session open //from another terminal on your laptop //cd to where the VM is stored yourLaptop$ scp -P 2200 -r team13 team13@localhost:~/VirtualBox VMs/ //upload may be slower than download //dont miss the deadline After the VM is uploaded, register it and make sure it still works. Dont miss the deadline. The above procedures assume you use a Linux or a Mac where you can open an SSH tunnel that connects your laptop to MIC through the CS firewall, like a direct flight. If you use Windows, you probably need to make one stop in each direction. To download, youscpthe VM from MIC to the CS server and then from there to your laptop. To upload, reverse the direction.
OPCFW_CODE
Event loop for large files? If I'm not mistaken, I remember the "event loop" model of asynchronous I/O (Node.js, Nginx) not being well-suited for the sake of serving large files. Is this the case, and if so, are there methods around it? I'm considering writing a real-time file explorer / file server in Node, but files could be anywhere from 100MB to ~3GB. I would assume that the event loop would block until the file is completely served? No, it will not be blocked. node.js will read a file in chunks and then send those chunks to the client. In between chunks it will service other requests. Reading files & sending data over the network are I/O bound operations. node.js will first ask the operating system to read a part of a file and while the OS is doing that node.js will service another request. When the OS gets back to node.js with the data, node.js will then tell the OS to send that data to the client. While the data is being sent, node.js will service another request. Try it for yourself: Create a large file dd if=/dev/zero of=file.dat bs=1G count=1 Run this node.js app var http = require('http'); var fs = require('fs'); var i = 1; http.createServer(function (request, response) { console.log('starting #' + i++); var stream = fs.createReadStream('file.dat', { bufferSize: 64 * 1024 }); stream.pipe(response); }).listen(8000); console.log('Server running at http://<IP_ADDRESS>:8000/'); Request http://<IP_ADDRESS>:8000/ several times and watch node.js handle them all. If you're going to serve lots of large files, you may want experiment with differnt bufferSize values. Perfect! That's excellent that you can even define the granularity of the buffer size. How would the event loop compare in performance for delivering said large files versus the multi-threaded fork approach? I imagine that there's some sort of sweet spot in terms of buffer size such that it would allow Node to cope with multiple simultaneous streams while still remaining responsive but also making meaningful progress. If I'm not mistaken, I remember the "event loop" model of asynchronous I/O (Node.js, Nginx) not being well-suited for the sake of serving large files. I think you stand correct that node.js is not optimized for serving big files. I advice you to have a look at Ryan Dahl's slides. Especially Slide 14 Wow. Node sucks at serving large files. Well over 3 second responses for 256 kilobyte files at 300 concurrent connections. Slide 15 What’s happening: V8 has a generational garbage collector. Moves objects around randomly. Node can’t get a pointer to raw string data to write to socket. Slide 21 But the fact remains, pushing large strings to socket is slow. Hopefully this can be mitigated in the future. are interesting. Maybe this has changed, but I think it would probably be better to use NGinx to serve your static files(or maybe a CDN). I think you are misinformed that NGinx is bad at serving large files. Node.js is(was) a bad at this because of V8 garbage collection, not because of event-loop. Also this link might be interesting. Slide 18 shows that node.js is on par with nginx when serving Buffers (and files can be read as buffers). @Mak you are right about that :). But still I think that node.js is probably not the best for this! I wonder if this continues when the filesize even gets bigger! Even Ryan says that "pushing large strings to socket is slow." my curiosity with this question is more about READING (POST or PUT) large files from the request, and I think node.js would be a good choice.
STACK_EXCHANGE
Avanade Inc. Senior Developer Microsoft Dynamics AX/365 (w/m/x) in Geneva, Switzerland Your tasks. Your challenges. Your opportunities. * You will work in projects on the target software architecture including integration into the existing system landscape and transition scenarios * You lead the project together with the project lead and functional architect. You take the responsibility for the technical aspects of the project and you will be the technical lead of the project developers. * You work with the latest Microsoft technologies and are ready to continuously update your knowledge. You are not afraid to use new technology as an early adopter. * You create technical concepts, e.g. for integrations and do effort estimations for the project lead. Our qualifications as a top employer. At Avanade you get both the benefits of a major corporation and the entrepreneurial spirit of a dynamic team. Early responsibility, meaningful work and appreciative superiors who support goals and interests - A successful quick start: a two-day initiation event at our headquarters in Kronberg, Germany, personal career advisor, mentoring program. - 80 hours of training per year and MS certifications. - State-of-the-art equipment (laptop, smart phone, including private use). - Attractive compensation package: A 40-hour working week with paid overtime up to Senior Consultant level. - Flexible working hours and working from home can be combined and organized to suit ongoing project activities. - Competitive pension package and excellent benefits both in terms of daily sickness allowance and work- and non-work-related occupational accident insurance. - You have a degree in Computer Science/Business IT or similar. - You have experience with Microsoft Dynamics AX/D365 or have gained equivalent experience in the development of business software. - You have many years of experience working in an object-oriented programming language (X++, C#, JAVA, C++, VB) and you have good knowledge of Microsoft SQL Server and Reporting Services. Ideally, you have experience in the development of business software in trade, logistics, production, etc. Fluent in English/French is a must. Information for recruitment agencies: Please note that we do not accept candidate profiles from agencies for this position. We therefore ask you to refrain from sending profiles. Employment Transparency Avanade® Is An Equal Opportunity Employer. Avanade prohibits discrimination and harassment against any employee or applicant for employment because of race, color, age, religion, sex, national origin, gender identity or expression, sexual orientation, disability, veteran, military or marital status, genetic information or any other protected status. The EEO is the Law poster is available here and poster supplement is available here The Pay Transparency Policy is available here Avanade is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation due to a disability for any part of the employment process, please send an e-mail to Avanade at firstname.lastname@example.org or call (206) 239-5610 and let us know the nature of your request and your contact information. By using this site, you agree that we can place Cookies on your device. See our Job Applicant Data Privacy Statement and Cookies statement.
OPCFW_CODE
I am a neuroscientist researching how narratives dynamically engage audiences. I combine techniques from neuroscience, communication, and computer science to explain how narratives motivate our attention and elicit positive emotional experiences. This includes investigating how messages are dynamically processed and represented in the brain to how they inspire action. I believe this approach, which emphasizes narratives as powerful tools for uniting diverse audiences, will help us understand how narratives dynamically evoke and regulate positive emotional experiences and well-being. Communication neuroscience, narratives, fMRI, audiences, affect & attention Here is a short video about my work (2018) How do we successfully communicate one message to many people? This question lies at the heart of communication research, as nicely stated by a dominant influence in communication science, Claude Shannon (A Mathematical Theory of Communication, 1948). “The fundamental problem of communication is that of reproducing at one point either exactly or approximately the message selected at another point.” Over 70 years later, this remains a fundamental problem in human communication- one that feels increasingly salient as our mass communication technologies rapidly proliferate. Information is everywhere. What can we do to ensure the important messages rise above the noise? How can we use messages to help increase our well-being and create positive social connection? Here's how I'm tackling the problem: Narratives are powerful tools for engaging large audiences. They elicit strong emotional and cognitive responses that motivate behavior and connect us socially. Consider Game of Thrones. Despite an eclectic media market with more original content than ever before, the season 8 premiere pulled in 17 million viewers plus an estimated 54 million pirated views in a 24-hour period (Clark, 2019). This is just one example of a story that has become the common ground for a large, disparate audience that continues to share their thoughts and feelings with friends and develop active communities. To learn how messages can successfully engage an audience, narratives present a perfect experimental model. Narratives unlock a host of dynamic affective and cognitive processes that keep us invested in the story. I believe the ability of narratives to drive motivated attention is key to explaining the successful communication of messages. Furthermore, they allow us to study complex brain function as it occurs in everyday life in response to our communication-driven world. CENTRAL TENETS OF MY RESEARCH APPROACH The human brain is the biological mediator between a message and its effects. Interdisciplinary pioneers have opened the doors for the integration of neuroscientific methods into communication science (Falk et al., 2015; Schmälzle et al., 2015; Weber et al., 2008). Social and affective neuroscience approaches allow us to unlock the “black box” of the brain as a critical component of the human information processing system. Time is an essential variable to studying the message reception process. Stories evolve through exposition, rising action, climax, falling action, and resolution- and so too do our responses! Although we can summarize our thoughts at the end of a film, this does not capture how the story made us laugh, cry, or jump out of our seats. To do that, we need to measure responses over time. Message content matters. Just as it is essential to study audience biology to understand how we respond to messages, it is just as important to study the messages that drive those responses. Developments in computer science and computational methods give us a clear path forward to develop the content analytic methods that are ingrained in the field of communication. Already, we can leverage new tools to efficiently and reliably assess the complexities of lower-order narrative content from objects on screen to the music that moves us (Grall & Schmälzle, 2018). Messages that represent what we see in our daily lives matter. I want to study the communication process from message to audience in a way that reflects what happens in our everyday life (naturalistic contexts). I strive for external validity, and new developments in methods and analyses across computer science and neuroscience make this an increasingly achievable goal (Hasson et al., 2004; McNamara et al., 2017). Science makes progress through transparency. I am a strong advocate of open science practices, and I have adopted several practices to maximize the replicability and reproducibility (and therefore quality) of my work. All current projects have open data and code in development on Github with the aid of Jupyter Notebooks, and many have accompanying pre-registrations. I am committed to upholding these practices as an essential part of the research process. Audience brain dynamics in response to suspenseful narratives Media psychology is rich in theory explaining why narratives can invoke strong experiences of suspense in audiences, but there is a lack of explanation for how suspense develops in the brains of audiences over time. Suspense represents a unique blend of affective and cognitive processes (Pessoa, 2008). As the audience becomes aware of a looming threat to a beloved protagonist, the tension rises and keeps us predicting, “What’s going to happen next?” What are the dynamic brain processes that give rise to the experience of suspense in an audience over time? We are currently exploring the brain mechanisms that contribute to this rich, affective experience that keeps eyes locked on-screen to the very end. Inspiration and the power of positive affect in narratives Think back to the last time you looked out across a vast landscape or heard a swelling melody from an orchestra. How did you feel? These are two commonly reported elicitors of inspiration, an experience that’s often discussed as a burst of transcendent feeling that makes us want to be better people and do good things. However, it’s a difficult thing to study. Although it’s fairly easy to get a strong aversive response from someone (try a picture of blood, that usually works), it’s very difficult to reliably inspire even a comparatively homogenous audience. We take on the challenge with our ongoing investigation of audience responses to inspirational stories to explain the biological underpinnings of this positive affective experience, which motivates personal growth and altruism. Vertical Integration from Message to Brains My career has taken me from media production and creative directing, to training in quantitative communication theory and methods, to years of dedicated study of cutting-edge neuroscientific methods and analysis. This background gives me a unique advantage to studying the communication process from story creation to reception in the brain to long-term behavioral effects. A brief summary on tools I am proficient in Python for experimental design and analysis, with current emphasis on developing dynamic inter-subject correlation (ISC) analyses for fMRI data. I have a strong history and training in behavioral experimental design, questionnaire design, and content analysis (both traditional and computational) using media stimuli, as aided by my skills in video and photo editing (favoring Adobe Premiere and Photoshop). I am familiar with a wide range of tools to facilitate collaboration and open science including Github, Open Science Framework, Jupyter Notebooks, and As Predicted. I love using my skills as an illustrator for figure design and data visualization. I believe that being able to effectively communicate my research is crucial to advancing the field of media neuroscience. I am currently learning computational techniques for multimodal sentiment analysis and natural language processing.
OPCFW_CODE
Jekyll & GitHub Pages It’s 2019 and I’m moving swoicik.com to Jekyll and GitHub Pages. For as long as this website has existed, it has been powered by WordPress. WordPress is an amazing platform, and has been a huge part of my career in web development. But… the platform has grown from its blogging roots into something much larger and complex. It was time to try something new. Over the past few years, WordPress has become a huge and powerful piece of software. It has become overkill for a personal blog. The maintenance required for the site is causing me to write less. I just don’t want to deal with WordPress side of WordPress. I just want to write. The writing environment in WordPress has always been a bit cluttered. With the recent Gutenberg update it seems even more so. I don’t have the patience to deal with it, when all I want to publish is a simple text blog post. All the formatting and plugin options are overkill. Publishing to a Jekyll site from my phone is simple. It’s the same workflow for any other GitHub project I may be working on. I write the post in iA Writer and Git Commit the post with Working Copy. Seconds later the post is live. Mobile publishing on WordPress has never been great. At best I can get a draft done in the mobile app before logging in on a desktop to actually publish the post. WordPress themes have become more and more complex and harder to edit. Compared to editing a Jekyll theme it’s like night and day. I started my research by looking to other websites I liked. I found a lot of them were powered by Jekyll. Initially I though Jekyll would be too complex. After discovering how easy Jekyll is to run on GitHub Pages, I was sold. Jekyll is a platform focused on blog writing and not much else. It also had an active developer community that I could rely on. And the price of free is always helpful. I take almost all of my notes in markdown. The drafts for all of the posts I write are done in markdown. Being able to keep everything in markdown from draft to publish is just easier.l and makes sense. All my posts live in a plain text directory, and they live on all my devices. There is something great about having everything is a truly portable format. Posts are backed up on GitHub and locally on the devices I have connected to the Git Repository. It’s a very simple and secure set up. Jekyll is open source and free to use. GitHub Pages is free to use. This entire site is now free to develop and host (side note: I am a GitHub Pro subscriber). How To Migrate to Jekyll Now that the reasons are out of the way, how did the actual migration go? Respository and Theme Set Up I set up a repository called swoicik.github.io and did the initial commit with the default Tale Theme. I updated the config file and started editing the css to build the site you are using now. In a week or so I will release the full code for this website as its own Jekyll Theme (need to clean it up a bit). Most of the theme editing was done right in GitHub. I just refreshed swoicik.github.io in the browser to see the changes. It’s not the cleanest method, but it worked. I wanted to make the theme very simple, but still have the functionality I had on WordPress. The only difficult part (that just means I needed to look up some code) was adding the archive page. I set up a separate private repository for the content migration. I figured a complete dump of all the content from WordPress would be a mess. I didn’t want to add useless or unrelated files to my clean theme repository I just completed. I used this great plugin, WordPress to Jekyll Exporter, to migrate all posts and pages from WordPress to a format Jekyll can handle. The plugin dumped the content to the repository in a folder structure that Jekyll uses. I then went through the content to check the export and clean up any code. Most everything went as planned. I found some of the newer WordPress content that was written with Gutenberg had a lot of extra html tags and comments. I cleaned these up as I went through. I deleted the old content that was no longer needed. Burried pages, post drafts, and pending review posts mostly. I wanted a clean start with the new Jekyll site. Once the content was ready, I copied and pasted the _posts and _pages directories to the swoicik.github.io repository. Waited for everything to updated and started testing out the site with live content. The Result of All This Work? Before I migrated I knew Jekyll would provide the writing experience and post updating style I wanted. I didn’t need to check if this worked, I knew it would. One things I did want to make sure was speed of the site. I had felt that my WordPress was loading slower. And Jekyll is prompted as being much faster (static content vs dynamic). I used Pingdom to test the site before and after the migrations. Here is a screenshot of the results from the WordPress site. Here is a screenshot of the result from the Jeyll site. I think the numbers speak for themselves, but the site also feels much faster. It loads quicker on my phone or tablet and is easier to use. Not All Perfect It can’t all be perfect. There are a few features I lost in the migration. Some of them I will be adding back in later, some of them are just going away. They a things you should be aware of if you plan on making this same migration. Categories and Tags I no longer have my posts organized by categories and tags. It was more effort than it was worth to get this working with my Jekyll theme. It was also a function that was barely used by my end users on WordPress. I may add as implied version back in at a later time. The Jekyll site no longer has search built in. I’ll need to come up with a solution to fix this. Jekyll is a static site. I can’t have a built in contact form as I did on WordPress. There are third party options available to embed a form. I hav opted to just exclude it for now. My new site doesn’t have an RSS feed right now. I’ll need to add that back in later. I still rely pretty heavily on RSS from other websites. I should provide the same functionality on my site. No more comments on Jekyll. Much like the contact form, you can use an embed option if you wanted, but I don’t. I removed comments entirely. I added a message at the end of each post to either contact me on twitter or email me to discuss anything. That’s everything. I hope you found something useful in it. I’m happy overall with the migration process and the new site. I’m sure I’ll have more updates, let me know if there is anything specific you’d like the to discuss.
OPCFW_CODE
Who were the most productive developers? Top three in terms of lines added: In order to prevent cvs from filling up with all this code, it’s necessary to delete some old code. Special mention to jsing for achieving the most churn and smallest net gain by adding 153802 lines and deleting 152604. Workflow is as much about people and process as it is about tools. The tools simply serve the people and processes. This is a project that has consistently hit high quality releases, on a predetermined schedule, for coming up on two decades. That is unprecedented. I can't think of anything even remotely similar. Version control is a tool for integrating change and managing releases. They are arguably one of the best projects at doing it. See the silliness of the "CVS?!" non sequitur yet? So for people to drive by, who are statistically more likely part of the problems in the software industry, and critique the OpenBSD development process.. is at best cute and worst delusional. Inadvertently(?) it functions as a litmus test.. if you care so much about this, you aren't really who we want to work with anyway, similar to the candidate fixated on his title in http://dtrace.org/blogs/eschrock/2012/08/14/engineer-anti-pa... As expected, that got a bunch of people talking about comic sans, instead of OpenSSL, and acted as a pretty good filter for serious programmers. ...which apparently I've also just fallen on the wrong side of. Darn. Given that OpenBSD is supposed to be a security-oriented OS, it's slightly weird that they are using a version control system that does not guarantee that what you put in the repo actually stays there, unchanged. Git guarantees that. > They are arguably one of the best projects at doing it. What makes you think that? I'll end my debating with the fact that there is room for improvement. 2) Releasing on time, of high quality, for nearly two decades. Development process.. not that it is some ultimate code or product. This sentence contradicts itself. > Is there any current reason to still use CVS in 2015? Instead they are and have been using GNU CVS instead. OpenCVS was linked to the normal build for a while but has been removed from it four years ago. @bottom you can see the real productivity: lines added 1192696 lines deleted 3484520 files added 2653 files deleted 9995 So it's been a really productive year. (edit: updated to fix codeblock :) ) To be fair, I think the "net gain" there just means "net increase" not net gain in the sense of a positive outcome. Plus, the top line deleters are removing way more than the top line adders. If I'm reading it right the total "net gain" was somewhere around negative 2.2 million lines. The post lacks nuance, but it's really just supposed to be a fun little snapshot with some simple numbers. Is there anything wrong with it? This is a level of pedantry that deserves respect.
OPCFW_CODE
The what, when, why, how, and who for a testing project is documented by a test plan. The overall objective and scope of the tests to be run are laid by the test plan. The test plan’s main purpose is to set expectations and align the team for success throughout the testing process. A test plan doesn’t include individual test cases as it is meant to be a higher-level document. A test plan usually is lines teams with delivering a more reliable product to the users by orchestrating a smooth testing procedure. The first thing in the software testing life cycle is testing preparation. A test plan document is a living and breathing artefact, and it is dynamic because it should always be up to date. When plans change, that test plan should be updated to maintain an accurate set of information for the team. Types of Test Plans in Software Testing Test plans could be used as supporting documentation for an overall testing objective and specific types of tests. Below are the two types of test plans: Master test plan: A master test plan is a high-level document for a project or a product’s overall testing objectives and goals. It lists the activities, milestones and details of a test project’s scale and scope. All other test plans are encapsulated in the project. Testing type-specific plan: You can also use test plans for outlining details related to specific types of tests. For example, if you have a test plan for unit testing, acceptance testing, and integration testing. These plans will drill deeper into the particular type of test being conducted. Also read: About Test Plans Test Strategy vs Test Plan It isn’t uncommon to see a test strategy and test plan compared to one another or sometimes used mutually. The test strategies of and written as a part of the test plan in its dedicated section of the master test plan. Test strategy is usually used at an organizational level. The project or result is a static document. Test decisions are political in themselves and the test plan shows when and why tests are conducted within the group. That test strategy unusually changes hence its static nature. At the project, the product manager usually creates a test strategy. On the other hand, a test plan is used at the project level and the dynamic document provisioned with the uniqueness of the project. It is more explicit as it focuses on the when, how, and who off testing. The test lead or test manager usually creates a test plan. Also read: Test Script vs Test Plan Importance of a Test Plan A test plan will help you and your team to get on the same page. It serves as a framework and guides to ensure that your testing project is successful and helps you control the risk. The very act of composing a test plan will help you to think through things in a way you might not consider without writing them down. Accordingly, the value of writing a test plan is enormous by itself. A test plan is a great way to communicate a testing project objective across the entire team. It is exceptionally useful in remote workers where testing teams could be spread across the globe, and asynchronous transmission is more common. When to Write a Test Plan? It would help if you wrote a test plan after a test strategy is already in place. Both these activities will happen early in the project lifecycle. It is ok to have breaks and generalities in a test plan at this point, and it is ok to go back and update the test plan as details form up for things to change. Always remember that the test plan is a living and breathing document, and it is dynamic. You will be setting your team up for success during the entire testing project by writing the test plan early. Also read: Steps to Write a Test Plan How to Write a Test Plan? It may be difficult for you to write your first test plan, but it will start to feel natural the more you do it. A manager or team lead is usually responsible for writing a test plan while others contribute and support the process. For crucial tips for writing a good test plan: Keep it concise: It is easy for a test plan to become overly verbose. A short and concise test plan will be more effective and easier to read. Nevertheless, every project is different, and there is no set strength at the test plan as it varies by team and project. More complicated projects will have longer test plans than simpler projects. A super long test plan is likely to be ignored, so try focusing on compactness whenever possible. Keep it organized: The test plan is regularly scanned for information throughout the testing project. It is essential for keeping the information organized so teammates could easily find what they are looking for. Make it easy to read: Like keeping a test plan organized, you should make it easy to read. Try writing the test plan with the audience in mind and keep the language simple so that anyone could read the test plan and understand it. Make it easy to update: A testing project is likely to change at some point, and the test plan document is required to be updated to be accurate. An inaccurate test plan is much better. What to do After a Test Plan is Written? It is important to review a test plan after it is written with the team and discuss dependencies with specific team members. It ensures that the entire team is aligned on the test plan. A test plan review could be performed asynchronous or real-time, depending on where the team is located. Reviewing the test plan could also serve as final proofreading and uncover possible areas for improvements, gaps, and errors. A test plan is a significant part of a testing team’s alignment and success. It is an up-to-date document that could be easily accessed and read by the entire team at any point for understanding the overall testing objectives.
OPCFW_CODE
Linux and Windows are the two most popular operating systems for personal computers. Both of the systems have different purposes and functionality. But each will help you with homework and inquiries like “write my papers” for academic purposes. In this article, we will explore the features of the systems, their architecture, advantages, and disadvantages. There is no correct answer to the question of which operating system is better? Linux and Windows serve different aims and were designed with different purposes in mind. Therefore, the best system is the one that fits your preferences. What is Windows? Windows is a commercial operating system. It was created for business and home usage. The first version of Windows was released in 1985. Bill Gates was one of the system’s developers and founder of Microsoft. The initial release was for a narrow audience. However, Windows 95 made a revolution on the market of home Pcs, making them available for a broad audience. Since then, Windows has not changed much in its core. There are several prominent Windows features: - Integrated cash. - Virtual memory. - Client-server computing. - Multithread processing. These features make Windows a popular and user-friendly choice. Windows is a commercial product. A home user or a company needs to buy a licensed copy to use Windows. What is Linux? Linux is an open-source operating system based on Unix standards. It has a programming and user interface. The author of Linux was a Finnish student Linus Torvalds. Linux was released much later than the original Windows, in 1991. However, Linux still holds the competition. Linux’s main features are monolithic kernel and modular structure. It gives Linux enhanced performance and flexibility in its customization. Linux has three types of users: These types divide the roles between people who can operate the Linux-based pc. Regular users have limited access to the system functionality. Administrative users have full access to the Pc data. System’ services, such as web and mail service, take the role of the service account for convenience. Windows has four types of users: These types share similar functions but have different levels of access to the system’s data. Windows uses a folder-based file system. It means that each file has a directory. A directory includes folders and subfolders for file storage. Linux uses a more straightforward approach. Everything in Linux is treated as a file: a folder, a hard disk, a file – they are the same type of items within Linux. It makes operations easier with different components of the system. At the same time, there is a higher chance of making a damaging mistake during such operations. The question of security is one of the most essential for a computer. In terms of safety, Windows is much more vulnerable than Linux. One of the factors is Windows’ popularity. More than 90% of home Pcs have Windows as an operating system. Such high popularity makes Windows a target for hackers and cyber-criminals. Such pressure makes Windows vulnerable and requires extra caution for the system’s operation. However, even the most cautious user will face problems with Windows at a certain point. Linux in this term is safer and more stable. It is a community-driven system. Therefore, the response for any vulnerability comes within hours and spreads even faster. Finally, Linux is targeted less by hackers due to less number of Pcs that run the system. In terms of the supported programs, Windows is an indisputable leader. The system supports office, multimedia, and cloud-based applications. Windows can manage any task, business, work, or entertainment, with an appropriate application. At the same time, Linux struggles even with the basic set of programs. Naturally, you can have a player, a text editor, and a browser in Linux, but it will require extra effort to install and operate them. Once again, Windows is a leader in this comparison. Most of the gaming titles support Windows. Besides, its latest versions, Windows 10 and 11, are compatible with some Xbox titles. Linux also has access to certain games, but the library is much more humble. Steam, the biggest digital gaming marketplace, has titles for Linux. However, Linux does not allow the installation of Windows-based games. No matter what software you use, with Windows, you are exposed in one way or another. The system asks for regular updates and surveys and collects personal data. The primary reason is tailoring personal user experience. Nevertheless, if a security breach happens, such a vast amount of personal data collected in one place may cause severe harm to Windows users. Linux does not have any sort of monitoring software. The operating system belongs completely to you. Linux cannot do things that you do not allow it to do. Windows did not change in its core since the Windows 95 version. The current version receives regular updates, and once in five years, there is the next iteration of Windows. Each new iteration is distributed like a commercial product for home and corporate use. Linux does not have a default version per se. Instead, the system has a variety of distributions that vary in functionality and aims. Ubuntu is the closest version of “default” Linux. Ubuntu is a user-friendly Linux distribution with a familiar interface and all the basic functionality included in it. If you want to switch from Windows and try Linux, Ubuntu is the best choice. Windows and Linux are both great systems for their purposes. Windows provides a wide variety of compatible programs and games. It is great for managing daily tasks of any level of complexity. Linux is a more intricate and, at the same time, more reliable option. It will do exactly what you want it to do. It has more accessible policies in distribution and a safer environment. Linux is a perfect choice for program design and server work. In the end, the best system is the one that meets your needs and helps you to achieve your goals.
OPCFW_CODE
Yesterday Microsoft announced the release of a new edition of Windows Essentials 2012 (formerly Windows Live Essentials 2011) that suite of installable programs designed to supplement and enhance Windows 7 and Windows 8. Included in the package is a new version of Windows Movie Maker. The changes are hardly radical, but they do include a few features that users have been clamoring for since the previous version of WMM (version 6) was abandoned for the radically simplified (some say ‘dumbed down’) 2011 version. A little history Windows Movie Maker version 6, intended to run on Windows XP, was a timeline-based video editing program a la Adobe Premiere. It was not so sophisticated, but could be used to perform basic edits using the Premiere tracks style user interface. with the 2011 edition of Movie Maker, released with Windows Vista and revised with Windows 7, the timeline was abandoned in favor of a very simple—and bewildering, for veterans of the previous version—user interface intended primarily as a quick and easy way to dump in vacation photos and videos, auto-add visual effects, pan and zoom effects, transitions, background music, and titles, and then produce a WMV video for play on computers, emails, or as input to Microsoft DVD maker for trans-coding as a DVD that could be played on the family big screen. This left veteran users of the program howling. Gone was all the find control on the timeline, and the ability to syn multiple sound tracks, narrations, etc. On the whole, Movie Maker 2011 succeeded just where Microsoft meant it to succeed, with the average consumer, but they were apparently stung by the vociferous criticism of some of the “power users” who wanted more, because they have added a few of those elements back in with the new release. The new release The new release is still not timeline-based, it still has the ultra-simple Interface of the previous version, but Microsoft has added: In all the additions to the now familiar Movie Maker interface are very welcome. I did have a few problems with splitting videos that contained a narration (the narration disappeared once in a while, especially on Windows 8), and I wish when recording a narration there would be some sort of indicator other than the play head moving through the video, but the addition of the narration tool in itself, and the addition of audio waveforms for clarity’s sake, makes Movie Maker a truly useful academic tool now, rather than a tool for grandma to put together the family slide show. It doesn’t take the place of a powerful video editor like Camtasia, never mind Premiere, but for the quick and easy video/audio edit its great.
OPCFW_CODE
18 May 2009 It's the coolest platform for your Ruby applications, ever. Really. It's the coolest platform for your Ruby applications, ever. Really. Just when I've trained my boss to read this blog to keep up with my status, I go and launch another blog, specifically for my workstuff. In addition to blogs about JBoss things, it'll include other documentation for building, installing and using the related projects. The projects are grouped into constellations based around theses. Hence the name: In addition to being a new blog, it's also my attempt to eat my own dogfood and prove it all works in the Real World. The Odd Thesis site runs entirely on the JBoss-Rails stack. I'll still blog random crap here at fnokd, but if you're looking for my JBoss experiments, add the Odd Thesis feed to your reader. I've initially imported appropriate posts and comments from this blog. On my path towards clustering a Rails app on JBoss on EC2, I stumbled across Bryan Kearney and the other Thincrust guys. With their help, I now have a JBoss AS5 + jboss-rails "appliance" ready to roll. Fire up the image in your favorite virtualization environment. I give my virtual machine at least a gig of RAM. Marvel at the pretty Grub splash screen, courtesy of James Cobb (JBoss.org designer). Let it boot on up, and you'll notice a handful of things: You can login with root password of thincrust. The login prompt will tell you the IP address of the appliance, since it probably booted off DHCP. JBoss will be up and running at su jboss, whose /opt/jboss/jboss-as5, which coincidentally is default configuration is used to start the AS. Logs are under /opt/jboss/jboss-as5/server/default/log/. And to deploy, just drop something into /opt/jboss/jboss-as5/server/default/deploy/ and it'll hot-deploy. To control the service, as root: service jboss stop service jboss start service jboss status I'll make the RPMs used to build this available sometime soon. Until then, you can poke around the bits I use to create the RPMs ultimately used by Thincrust to build the appliance image. They are packaged in a way that makes my Red Hat brethren throw up in their mouths a little bit. Also, once I test'em on EC2, I'll throw out public AMIs for testing. By no means is this complete. This just marked a nice spike of a milestone along the way. There's still plenty of things that'll poke you in the eye if you're not careful. Always wear your safety harness. Drink plenty of fluids. Keep away from children. JBoss on Rails will indeed cluster! After modifying and dropping my jboss-rails.deployer into an 'all' configured server of JBoss AS 5, and firing up 3 instances on my localhost (non-trivial on OSX...): 10:43:28,409 INFO [RPCManagerImpl] Received new cluster view: [127.0.0.10:63740|2] [127.0.0.10:63740, 127.0.0.11:63747, 127.0.0.12:63749] 10:43:28,435 INFO [RPCManagerImpl] Cache local address is 127.0.0.12:63749 10:43:28,469 INFO [ComponentRegistry] JBoss Cache version: JBossCache 'Poblano' 2.2.0.GA And I've got 3 nodes running the same Rails app, all sharing a cookie and a JBossCache cache. Nick Sieger's JRuby-Rack handles binding the Rails session to the actual servlet session, and JBossCache takes care of the rest. A little 8-line perl round-robinning load-balancer is wired up through mod_rewrite in my Apache httpd.conf to throw requests to each of the nodes. Anything set in the session is immediately available at the next request which lands at a different node. Further down the line, we can look at a clustered cache for caching AR models and view fragments. Not too shabby. It should be fairly easy to create a nice Amazon EC2 AMI with Fedora+AS5+jboss-rails, plus some better Rake/capistrano tasks, and make for quick cluster deployment. Any EC2 experts wanting to jump in? Since JBoss AS 5 is built on top of Microcontainer, it's effectively a network of beans just doing their respective jobs. You have already probably noticed that it ships with 3 included configurations: minimal, default, and all. Unfortunately, they're awefully far apart along the spectrum of configuration options. The minimal configuration barely gets the container running, while the all configuration includes, well, everything. For most apps, it's safe to start with the default configuration. But default to who? It's the 80% case. But the 80% of the world who gets by with default, it still probably includes way more than they need. Of course, each user needs a different subset of that 80%. Given that the MC is managing a graph, it seems like we should be able to actually perform a solve to determine what is or is not required. Or at least get a good idea. Bonus points if someone can then twiddle conf/, deploy/ and deployers/ to tidy things up. Anyhow, for the jboss-as-rails project, I'm kicking it old-school, and going with default. I'm jamming it into git, and hopefully with some help, we can get it pared down to what we need. Here's a picture to see how it'll all ultimately fit together. The plugin will offer rake tasks for managing the included AS, deploying your app. I'll also produce an AS-free version of the plugin, assuming you want to manage your own AS separately. And poke around jboss-as-rails, see what we can rip out. Tomorrow is my first real status update call with my boss, Sacha Labourey. I've been anxious to deliver something, to prove I hadn't gone completely pudding-brained during my tenure as management. This morning, it all finally came together in a pleasing fashion, causing me to hoot and holler loud enough to scare the cats and probably some cows. It's not very consumable at this point, as it's just a deployer, not a nice Rails plugin with a set of Rake tasks. Heck, it doesn't even undeploy yet. But adding the deployer to your server's deployers/ directory allows you symlink live RAILS_ROOTs into your deploy/ directory, and be running on JBoss. Live. In-situ. Edit your controllers or views as you like, and your changes are immediately reflected in the running instance. Just like with ./script/server. It does not even have to redeploy your app. The rails framework is handling the magic reloading. It's taken me some time to dig through the innards of JBoss-Microcontainer, and a few false starts, but I finally figured out a super simple deployment process. I'd previously been trying to manipulate a RAILS_ROOT into a synthetic Java WAR archive, and shoe-horn things around that. But I have the freedom to go lower than that, so the jboss-rails deployer just sets up a Catalina context appropriately, without regard to WEB-INF or other non-Rails stuff. There's no need for that cruft. Likewise, I can directly control and manipulate the classpath, so the RAILS_ROOT does not even have to have any JRuby bits in it. The example application (src/test/ballast) is a virgin rails app with ActiveRecord disabled so I don't have to deal with database-driver gems just yet. Once deployed, a Rails app looks like pretty much any other web-app. The jboss.rails.deployment domain contains deployment objects for each rails app. And jboss.web contains all the webby bits floating around. I need to go back and remove the dead-end code I've left in my wake, and update the tests I'd disabled while in a coding flury (bad Bob!) I plan to put together an easy-to-consume plugin gem which contains an nicely-configured AS along with the jboss-rails deployer pre-installed, along with rake tasks to start/stop AS, and deploy your app. I'd also like to give clustering a whirl, and see what we can do. It's been an excellent 3 weeks back as an engineer.
OPCFW_CODE
Occasionally I geek out on one of the more technical aspects of owning my Model S and today is going to be one of those days. Read on if you’re interested in the undocumented software interface to your Model S and Model X and the capabilities of the Tesla Mobile API. The Mobile API API means Application Programming Interface. Its a specification for how programmers can interact with another piece of software. Whenever you have connected devices they’re using APIs to communicate with other computers. These APIs come in all shapes and sizes. Security adds another level of complexity with different authentication methods for proving you are who you say you are and that you’re allowed to use the API. In the case of Tesla, the Tesla Mobile apps on iOS and Android use an API to communicate to Tesla to allow you to start your car, vent the sunroof, etc. The car itself is also using one or more APIs to talk to Tesla to get software updates, send debugging information to Tesla, etc. Systems often have many APIs for different purposes. The API we’re going to focus on is the one used by the mobile applications. This Mobile API is not documented by Tesla and is not intended to be used by anyone other than Tesla employees. You can use applications Tesla has created that use the API but you’re not supposed to make your own applications using the API. Like many things these days, if it exists it can be figured out and some energetic Tesla owners have figured out how the API works. There’s a free site called Apiary that can be used for documenting APIs and that was used to document the Tesla Model S API. This specification provides what most programmers would need to successfully write an application that uses the API. While the API documentation is sufficient to get started programming, some of the basics you would be doing as a programmer are a bit like re-inventing the wheel. Some people have created libraries in various programming languages to assist with the basics of interacting with the Tesla Mobile API. What language you choose to program in and whether you use a Library or not is either defined by the needs of the project or the personal preferences of the programmer. For my hacking/playing around I use the Python programming language. Searching around found a few Python libraries to make integrating to the Tesla Mobile API easier. The one I found that was the best fit for me was Teslajson by Greg Glockner which is a single small Python file containing the basics. For those interested in libraries for other languages i’ve seen libraries in Java, Node, Ruby, and C# among others. I won’t be going into all the API calls and how they work but here’s a quick minimal working example:In the example above i’m using my email address and password to log into the Tesla server (same one I would use to access MyTesla or the Tesla forums) and I am getting a connection object. Then I use that connection object to request data about my car (i’ve named it “Baddog”). From that data I extract the current odometer reading. Running this little program simply shows my cars current mileage: Note that there are various tricks to using APIs like this. You should keep the access token around to avoid having to re-authenticate each time, you should handle API access errors, etc. So what can you do with it? - Automatically recording my daily driving data (miles driven, charge amount, etc) (more on this in an upcoming data post). - Checking nightly if my Tesla is plugged in and email me if it isn’t. - Automatically tweeting my daily mileage and efficiency. - Automatically tweeting when I cross a 1,000 mile mark. - Check for new API fields/data (as Tesla evolves the Mobile API) and emailing me when things change. There are other things I have in mind like automatically closing the sunroof if it looks like its going to rain, checking my calendar and pre-heating/cooling the cabin based on the next appointment time, etc. You can also access the cars’ location to track things like speed, places you’ve visited etc so that is another area for experimentation. If you use the historical charge data you could also implement the much-desired “set charge end time” feature which is on my short list to do when I have some time. The Tesla Mobile API is undocumented and not officially supported by Tesla, but with some simple programming you can accomplish some cool things with it. Tesla understandably doesn’t have the resources to officially support this API right now, but its great to see that Tesla is not going out of their way to make it more difficult for people to use and experiment with. While this is a programming interface and you could do odd things like open the sunroof or honk the horn at the wrong time, but you can’t control steering, power, etc so it is a pretty safe API to have available. I believe Tesla is watching what people develop using this API and will use that knowledge to enhance their mobile apps or features in their cars over time. What would you do with access to the API? Leave your thoughts in the comments below.
OPCFW_CODE
Error using this package with STM32CubeIDE Hi, I tried to follow the instructions in "Using this package with STM32CubeIDE" to use the micro-ros with the STM32Cube MX. However I had the following error in the IDE when trying to build the project: "cannot find -lmicroros" Follow the complete error showed in the STM32CubeIDE console: arm-none-eabi-gcc -o "micro_ROS_test1.elf" @ "objects.list" -lmicroros -mcpu = cortex-m7 -T "/ media / bacurau / Data / OneDrive - Universidade Estadual de Campinas / Research / SAMSUNG Project / ARM Firmware / micro_ROS_test1 / STM32H753ZITX_FLASH.ld "--specs = nosys.specs -Wl, -Map =" micro_ROS_test1.map "-Wl, - gc-sections -static -Wl, - start-group -lmicroros -Wl, --end-group -Lmicro_ros_stm32cubemx_utils / microros_static_library_ide / libmicroros --specs = nano.specs -mfpu = fpv5-d16 -mfloat-abi = hard -mthumb -Wl, - start-group -lc -lm -Wl, - end -group /opt/st/stm32cubeide_1.6.0/plugins/com.st.stm32cube.ide.mcu.externaltools.gnu-tools-for-stm32.9-2020-q2-update.linux64_<IP_ADDRESS>011040924/tools/bin/. ./lib/gcc/arm-none-eabi/9.3.1/../../../../arm-none-eabi/bin/ld: cannot find -lmicroros /opt/st/stm32cubeide_1.6.0/plugins/com.st.stm32cube.ide.mcu.externaltools.gnu-tools-for-stm32.9-2020-q2-update.linux64_<IP_ADDRESS>011040924/tools/bin/. ./lib/gcc/arm-none-eabi/9.3.1/../../../../arm-none-eabi/bin/ld: cannot find -lmicroros collect2: error: ld returned 1 exit status make [1]: *** [makefile: 70: micro_ROS_test1.elf] Error 1 make: *** [makefile: 63: all] Error 2 "make -j12 all" terminated with exit code 2. Build might be incomplete. I can find those directories in my PC (/opt/st/stm32cubeide_1.6.0/plugins/com.st.stm32cube.ide.mcu.externaltools.gnu-tools-for-stm32.9-2020-q2-update.linux64_<IP_ADDRESS>011040924/tools/bin/. ./lib/gcc/arm-none-eabi/9.3.1/../../../../arm-none-eabi/bin/ld) but, in fact, there is no lmicroros file. I tried to update the STM32Cube IDE, but it has no updates. Could someone help me solve this problem? Best Regards, Rodrigo Bacurau Do you have the micro_ros_stm32cubemx_utils/microros_static_library_ide/libmicroros/libmicroros.a file? If not, building the static library in Docker could've failed. If that file is there, then something is probably wrong with this part: "-Lmicro_ros_stm32cubemx_utils / microros_static_library_ide / libmicroros". On my project, this part has an absolute path to the directory and not a relative one, so something might not be right with step 4 of the setup. Yes, first of all check if micro_ros_stm32cubemx_utils/microros_static_library_ide/libmicroros/libmicroros.a exists. -Lmicro_ros_stm32cubemx_utils / microros_static_library_ide / libmicroros instead of above maybe try giving the absolute path to the directory. In fact -lmicroros searches for the file named libmicroros.a. I think the directory containing the libmicroros.a is set faulty. @Enkuushka could you contribute a PR with this fix in the README.md instructions? Hi, Thank you very much to all of you! The file libmicroros.a had been generated correctly, the problem was in the path "micro_ros_stm32cubemx_utils / microros_static_library_ide / libmicroros". I replaced this path with the absolute one (I redone step 4 of the setup and it worked). Another thing I had to do, both in step 4 and in step 2, was to put the paths between " ", because my directories contain names with spaces. I think that in the tutorial it would be interesting to include these " ". Thank you all. Rodrigo Bacurau
GITHUB_ARCHIVE
- Around 5 years of IT experience in software development, which includes user interaction, system feasibility study, requirement analysis, design, testing, development, configuration, client interaction and support. - Expertise in handling application development life - cycle involving requirement analysis, system study, designing, coding, de-bugging, testing & documentation using C / C++ on Linux & Windows. - Software experience in Golang, Python, C/C++ on Linux, Linux kernel, Qt, QML Cross-platform and embedded systems. - Very strong exposure on software development on Embedded Linux kernel, VxWorks (RTOS) and Solaris and Linux. - Experience in deploying UNIX/Linux Inter Process Communication mechanisms like Shared memory, PIPES, signals and semaphores for various embedded products. - Developed Python on Linux, Cross-platform and Embedded systems. - Developed feature dynamic configuration change in the cluster using Python scripting. - Designed and developed automated function test cases in Python. - Effective in leading applications and driver development with end-to-end responsibilities using C, C++ and Client/Server Technologies with exposure to different domains. - Extensive knowledge in memory management, auto pointers, pointer handling, callbacks, function pointers and Functions in C / C++. - Expertise in the OOPS, Multi-threading, String pool, C++ Packages, Exception-Handling & Collections, developing various Web services like SOAP, REST and Restful APIs depending on client/Customer requirements. - Strong experience in automating system test and production tasks using Shell Scripting and Python. - Experienced in STL concepts of C++ in developing the application. - Have working knowledge and designed embedded systems with various micro-controllers (PIC, ARM, AVR etc.) and exposed to different compilers, debuggers and IDE's like Microchip MPLAB, Code warrior. - Handled different embedded communication buses (UART, I2C, MOST, USART, CAN). - Good knowledge on UNIX Inter Process Communication like Pipes, Message queues, Shared memory and Semaphore etc. - Communication protocol suits TCP/IP, UDP, working knowledge of CAN and LIN. - Experience in Data Modeling the Business Requirement and having excellent skills in Oracle 11g and earlier. - Experience in Oracle Analytical functions and having excellent Hands on experience in writing PL SQL like Procedures, Package, Triggers. - Management systems and used tools such as Rally, Review Board, Extra view and Clear Quest. - Good experience with JENKINS and other build environment tools. - A pro-active, assertive team player with good analytical, communication, interpersonal and organization skills with ability to establish project & operation management process/procedures as well as manage multiple complex time critical projects across multiple locations. - Excellent written and presentation skills, created reports, technical / functional specifications for stake holder reviews to gain approvals. Programming Languages & Scripts: C, C++, Python, Shell. Operating Systems: Windows NT/2003/XP/Vista, Sun Solaris, IBM AIX, RHEL, UNIX/Linux.Protocols TCP/IP, RTP, 802.11 Standards, UDP, CAN, SNMP, KWP2000, LIN, RS 232, GMLAN, KWP. Databases: Oracle, Microsoft SQL Server, IBM DB2, SQLite and MongoDB. Tools: Emacs, eclipse, UML tools, MS-Office, splunk, Jira, OpenGrok. Libraries & Frameworks: STL, BDE, IPC, multithreading, sockets, heap allocators, signals/event handling, SOAP webservices, Software Design OOP/OODVersion Controls: GIT/github, SVN, CVS.Build Jenkins, cppCheck, Coverity. Confidential, Hesston, KS C/C++ Embedded Developer - Responsible to develop an implementation of an order Management system to accept orders from UI, Fix, send/route to Exchanges and other broker dealers and fill the trades in C++ on Unix and Linux platforms. - Developed a multithreaded cache offline program in C++, supporting various user interfaces to deliver/process data for scalable, low latency applications. - To store data on order history, accounts, securities, etc. have extensively used STL for fast retrieval and update. - Experience with serial communications including RS232, RS485, I2C, SPI and I2S. - Good experience in 8/16/32 bit controller based h/w, design, testing and troubleshooting. - Developed SOAP web services for order and trade related information to display on UI. Used proprietary MQ to subscribe to order and trade updates. - Using Bloomberg toolkit to gather data about various securities using BPIPE. Have coded various Python scripts to fetch this data for various reports. - Used various data structures and design patterns in applications like Singleton, Observer, Factory methods etc. - Implemented unit tests using Python, C++ in Google test framework, Squish test in python to automate the GUI testing. - Used a C++ interface to retrieve data from the data base or to update data in the data base. - Have written many SQL stored procedures for data manipulation and to compute several metrics like gain or loss, realized or unrealized etc. - Used Python scripts to generate various reports like OATS, P&L, transaction history, user privileges, limit rules and commission schedule reports. - Used SVN and git/GitHub for source code control. - Followed Agile and Scrum Methodologies. Environment: C/C++, Design Patterns, SQL, Python, bash, ksh, Linux, POSIX Threads, SVN, git, GitHub, OOAD, BOOST libraries, gdb, pdb, dbx, OpenGrok, Jira. Confidential, Sparks, MD C / C++ Developer - Extensively involved in bug fixing, blocker removals and working on story points. - Proficient knowledge in C++ 11 standards and worked on UNIX/LINUX. - Working extensively with off-shore team and various teams on onsite for development on regular basis. - Used various Web Debugging proxy tools like Charles Web Debugging tool, Fiddler etc., - Worked on Mobile and web based applications like Android, Windows based apps. - Involved in code check-ins and code-checkout using GitHub repository and performs code reviews at regular intervals. - Provide training to help ground teams and programs in the principles and practices of Agile. - Had good experience on PCI, UART and USB. - Worked on various databases like SQL, My-SQL, PL/SQL. - Good hands on experience on Web services like REST API, SOAP API and RESTFUL API from data integration. - Had a sound and expertise knowledge on Telecom and Mobile based domains and applications like Android, windows etc., - Working on high priority tickets on various applications and providing them the exact resolution. - Writing Visual C++ code in MS Visual Studio 2015 Community version. Proficient knowledge on ticketing tool JIRA. - Designed and developed new C++ modules for sending open contracts to Equaled for reaching price agreement with counterparties. - Utilized C++ and Oracle. Git was used as the Source Control tool - Involved in the project documentation using MS-Office, Visio. Performed various Testing like Unit test and writing test cases. - Performing code reviews at regular intervals for the smooth running of application. - Providing on-call support for Global teams located at various locations. - Worked on SDLC methodologies like SCRUM (Sprints) involved in the development of the project. Working closely with Dev and QA team and resolving the crisis. Environment: MS Visual Studio 2015, Charles Debugging tool, MS-Office, REST API, PCI, USB, SOAP API, RESTFUL API, GitHub, JIRA, Android Studio, UART, UNIX/LINUX, C++ 11, SQL, PL/SQL, SCRUM (Sprints), UAT, Test cases. C / C++ Developer - Used OOAD in the software development for HP servers. - Used Linux device driver code in C\C++ on 32 bit to implement the device interaction code for application. - Modified C\C++ code on 32 Bit/64bits environment to support enhancements, fixed bugs in the existing software in multithreaded, highly scalable, high throughput applications - Used C++ STL containers, algorithms in the application. - Designed, developed and implemented algorithm for network servers to expand the capacity of existing tool with new released hardware. - Used SVN for code repository. - Used TCP/IP and UDP for communication on Linux environment. - Used concept of Design Pattern for design and implementation the code. - Designed, developed and Implemented for Logical configuration command to configure the device for Linux OS for 0x86 and x86 64 bit environments. - Implemented Identify command in C\C++ on Linux 32 bit and 64-bit environment to identify the devices and hardware - HP ACU implemented for various family of HP server including Gen 6, Gen 7 and Gen 8 blade and ProLiant servers. - Worked on Performance Improvement &memory leakage. - Provided support for production and development issues. - Coordinate offshore team by assigning tasks, mentoring them for technical issues and updating the status to client on daily basis. Environment: C, C++, STL, COM, Make file, Linux Driver interaction programming Integrated Development Environment and Debug Tools, GNU Debugger, POSIX threads, SVN, HP-UX and UNIX/Linux. - Provide technical support in design and development of embedded systems. - Worked on keil 4.0 for programming in embedded c to develop output and load to hardware by Flash magic tool. - Supported all phases of the software development process i.e., Requirements, Design, Development - Worked on ECG display on smart network project. - Hands on experience in Microcontrollers, Microprocessors, Analog and Digital Communications, RF filter analysis in LabVIEW. Environment: C, Advanced C, C++, TCP/IP, Embedded Linux, RTOS, Bluetooth, LabView.
OPCFW_CODE
What transistor if they are needed do I need to drive 5 blue LEDs on a lm3915 and how should I hook it up? I am using an LM3915 LED driver in dot-mode. I would like to drive 5 blue LEDs (3.3V forward voltage @ 20mA per LED) per output. What type of transistor should I use to make this happen? And how do i put the transistors in the circuit i was thinking to use 18 volts as power supply I don't think the IC can handle 5 LED's by itself thanks Original question: what i want to do is 5 leds all blue per output with the lm3915 in dot mode the leds have a voltage of 3,30 V the current of the leds each are 20 mA i want to put them in a analog vu shape <<< something like you get like simulated needle movement but with leds thanks Please rewrite this question to be more coherent...otherwise it's going to be (rightfully) closed. I Did rewrite the question i hope it be more clear thanks The LM3915 can drive LEDs at up to 30mA on each output, so it is capable of doing what you want without the transistors. The outputs are open collector, which means they sink the LED current through them to ground. Having 5 LEDs on each output isn't a problem as long as your supply voltage is high enough to cover the total voltage drops from the LEDs, and low enough not to go over the maximum recommended supply voltage. Just to check, 5 x 3.3V = 16.5V total drop from the LEDs, and the maximum supply is 25V so you should be fine. To reduce the power the LM3915 has to handle, use a supply just a bit over 16.5V, say 18-20V. If you use say 20V and 20mA per pin, the power consumed per pin will be (20V - 16.5V) * 0.020 = 70mW. If you use all 10 outputs (with 5 LEDs @20mA on each) then the total power dissipation will be 70mW * 10 = 700mW (plus ~10mA for the rest of the IC operation), so it will get a bit warm. The LM3915 is rated for a maximum of 1365mW so it's within limits if you are running it at normal temperatures (e.g. ~0 - 40 deg C). To limit power dissipation further you can drop the supply voltage to e.g. 18V, or put a series resistor in place on the supply line. A single 0.5W 5 ohm resistor would reduce the LM3915 dissipation by around a third. Read the datasheet carefully, all the stuff you need to know is in there, plus plenty of example circuits. EDIT - To clarify things, in a series circuit the current flowing through each component is the same, and the sum of the (can be different) voltages across each component adds up to the supply voltage. In a parallel circuit, the voltage across each component is the same, and the (can be different) currents add up to the total supply current. So with your 5 3.3V LEDs, if we put them in series and apply 20mA (it doesn't matter about how we limit the current for these examples), then 20mA will pass through each one. The total voltage drop will be 3.3 * 5 = 16.5V. The total power is 16.5 * 0.020 = 0.33W. If we place them in parallel and apply 20mA then we know the total current will be 100mA, and the voltage drop will be 3.3V. The total power will be 0.1 * 3.3 = 0.33W. So you can see the same power is dissipated in each case. For further reading Wiki has a good page on Series and Parallel circuits. well it is 20mA per LED so it is 100 mA per output so that is over the 30 mA of the IC thanks @Danny - If the LEDs are in series (as inferred above) then the same current flows through all of them, so it's 20mA. They would only draw 100mA if they were in parallel. The same amount of power is dissipated in either case. ok thanks what about the brightness ? The brightness for each LED will be whatever it gives in the datasheet for 20mA. Obviously you can choose to run them at a different current if you want a different brightness - the datasheet has an example of an adjustable brightness setup.
STACK_EXCHANGE
Kecerdasan Buatan Jobs Artificial Intelligence (AI) is a field of computer science that focuses on how machines can think and act like humans. AI utilizes computer algorithms to establish patterns in data and process information in order to even make decisions. When used correctly, AI can predict trends, automate mundane tasks, and improve overall productivity. An expert with diverse skills in both software engineering and mathematics is just what you need when implementing an AI project. At Freelancer.com, Artificial Intelligence experts are poised to provide esteemed clients with the best possible outcomes. Our professionals are dedicated to staying abreast with emerging technology, keeping up with the latest industry trends, and doing whatever it takes to provide premium outcomes for industrial projects. Here's some projects that our Artificial Intelligence Experts made real: - Utilizing modern web technologies for speech detection - Leveraging pre-trained models for specialized use - Automatically generating keywords from given input - Integrating third-party AI API into existing frameworks - Developing application for curve detection through ANN or ML - Implementing code for sudoku puzzles featuring AI elements - Creating Django web applications for OpenAI - Designing a machine learning recommendation system - Processing natural language using Python - Investigating creative IoT or AI concepts with analytical reports At Freelancer.com, our Artificial Intelligence experts understand how to implement the most advanced algorithms and features into your project while providing you with the highest quality of service. We invite you to join our growing list of satisfied customers who have experienced the exceptional power that AI brings to their projects. Make your projects reach beyond its expectations by letting us help bring it to life - post your project now and hire an Artificial Intelligence Expert on Freelancer.com!Daripada 67,981 ulasan, klien menilai Artificial Intelligence Experts 4.86 daripada 5 bintang. Upah Artificial Intelligence Experts I am looking for a UX designer to redesign 1 or 2 of the core pages of an AI platform's user interface. I am happy with the overall design, but I believe that UX (user experience) needs some improvement. Our primary goal is to improve user satisfaction with the platform and make sure that users can easily understand how to generate content (copy and art). The ideal candidate will have experience with UX design and be able to provide creative solutions for improving the user experience. Designer should be familiar with AI platforms and how they work to best suggest the best UX. Link to work in progress wireframe: Back up of Figma File: Overview: I am looking to redesign the core pages that are labeled: Toolkit General (at the bottom) And THEN, ImageGenerator4. This ToolKit Gener... I have a standardised tests app in market called "Test Me!" (we have both, an Android and an iOS version). Two stable versions already on stores (Android version currently has a much better look and feel), but with not so many downloads. One problem we detected is, given the times being, where everybody's talking / working on AI, our app is not being so competitive and I'd like to add some new features that could give it more added value, and I'm specially thinking on AI features, like ChatGPT or so. For you to have an idea, "Test Me!" intended to young people aiming to travel abroad to prepare their university admission exams (as well as, for example, TOEFL/IELTS english exams) through multiple choice tests. The app can be used to get ready to access... Build a viewAI to identify human skin. The AI will get an image (or short video 10 sec..) in which there will be a few shows of female human bare skin. The areas that are relevant, will be defined in each sample by x,y to x,y square. The AI will improve its forecast as new samples keep arriving. AI should be able to run on an android 6 and above device can get and receive images by url, json of signed areas and send forcast per image in a json of signed areas to display. AI can also work in pass mode and send an answer from higher AI and transmit its answer. AI can export his set of weights and as definition to make json script, that upon reading it will create the current AI state in another machine. MileStone ========= I. Build an android 6 basic app that can get an image and return ... I am looking for a skilled developer to create an AI machine learning risk and compliance tool that can be linked to chat GTP. The tool is intended for all industries, so the developer must have a broad understanding of various business sectors. I would prefer to use a supervised learning AI model For this compettion you are only required to build out full working mvp based on below. The qinner selected will then work on building out full scale system. MVP: The MVP would focus on collecting data from a single data source (such as company annual reports) and classifying the data into a limited set of risk categories. The AI model would be trained using machine learning algorithms to classify the data and the visualization would be a simple web-based dashboard that displays the relevan...
OPCFW_CODE
I started reading up on what exactly Heroku is, taken from their homepage: Heroku (pronounced her-OH-koo) is a cloud application platform – a new way of building and deploying web apps. Our service lets app developers spend 100% of their time on their application code, not managing servers, deployment, ongoing operations, or scaling. Huh. Alright, let's look around more. Oh, there is a how it works page, with cool pictures, let's click through that. I still don't get it. The whole how-it-works page seems very well made, and call me slow, but I still don't get what exactly is this thing. Since I already went through with deploying an app to it, I'm going to sum it up, in my own words: Heroku is like a server, that is managed by a third party. The whole architecture (PHP version, web server, etc.) is managed by them. You get a git repository address, that you can push code into, and each time you push, it will deploy your code automatically. You can also install a CLI client on your own machine, that you can use to interact with your remote application. That would have been something that could have made it click it for me right away, but maybe I'm just slow. Heroku doesn't really advertise PHP as a supported language, I'm not sure why, but a little googling tells me that the whole reason PHP support was born, is because of some kind of agreement between them, and Facebook, but take this with a grain of salt. The problems I encountered When I first tried to push code onto the remote heroku repo, I was greeted with: -----> Heroku receiving push ! Heroku push rejected, no Cedar-supported app detected This was easily googled, and the reason for this is that my application had no index.php in the root folder. In order for Heroku to identify your application as a PHP app, the there must be a index.php, in the root folder. This is a major bad practice, the document root should only contain the public facing files, otherwise you will have to resort to blacklisting every directory that you don't want to be world-readable, and blacklisting is usually a bad choice when it comes to security. Our app was made in Symfony, and the index.php was in a folder called "web". Actually, it wasn't even named index.php (which is something I don't get in Symfony). Using a dummy index file does the job, so I placed an empty index.php in the root. Next step, is to get Heroku to somehow recognize our web/ folder as the document root. I found a way to hook into the deployment process of Heroku, using a file called Procfile. I got this tip from a Github repo, that I no longer seem to find, but basically you can specify a shell script in your Procfile, which instructs Heroku how to boot your application. I created a file named Procfile in the root, that had a single line: web: sh www/config/web-boot.sh This is important, because I think this is the only place (the shell script) where you can actually make permanent changes, that will get propagated to all the nodes (heroku distributes your code to many nodes). In case you are wondering, /www/ is the folder where the files you push to the git repository get deployed. The procfile is ran relative to /app. So, let's take a look at web-boot.sh: echo "Include /app/www/config/httpd/*.conf" >> /app/apache/conf/httpd.conf touch /app/apache/logs/error_log touch /app/apache/logs/access_log tail -F /app/apache/logs/error_log & tail -F /app/apache/logs/access_log & export LD_LIBRARY_PATH=/app/php/ext export PHP_INI_SCAN_DIR=/app/www echo "Launching apache" exec /app/apache/bin/httpd -DNO_DETACH It will place a single line in Apache's conf, which instructs it to include every *.conf file in our applications 'config/httpd' folder. So, I created a virtual host, in our applications config/httpd/default.conf, and pushed it to the git repository: DocumentRoot "/app/www/web" <Directory "/app/www/web"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> This solves the DocumentRoot issue. I'm not sure what exactly the tail -f lines do, but I guess you need to have open file descriptors on them, in order to make the "heroku logs" command display them. Symfony includes a very useful console interface, that you can use to clear caches, generate your DB from your schema files, and so on. I needed to do the latter. I tried running heroku /app/bin/php /app/www/symfony propel:build-all, which is supposed to generate the database, but it failed with an error: /app/bin/php: error while loading shared libraries: libmcrypt.so.4: cannot open shared object file: No such file or directory This references a libmcrypt.so file, which is a PHP module. So, I tried finding where the php.ini file is located on Heroku's server, because I can probably mess around with that using sed/grep/awk in the Procfile, in the same way as I did with httpd.conf. Using "heroku run bash" gives you a pretty much fully functioning shell, and running a "find -name 'php.ini', revealed that it is under /app/php/php.ini. grep 'libmcrypt' /app/php/php.ini returns: ; Default: Compiled in into libmcrypt (usually /usr/local/lib/libmcrypt) Which is a comment. The libmcrypt.so.4 file that the error message references is not even enabled in php.ini, which leads me to believe that it is the actual linux libmcrypt package that is broken on the Heroku installs, so I did not chase this further. I went ahead and and loaded the database manually, but this a major pita. When the application goes live, doing every database change by hand, then checking the database if they actually went in correctly, is a major source of errors, we really need the symfony console to automate this. This was the point where I gave up. There are a few other things that I would have needed to figure out, like how to move the cache into memcache or something else (by default Symfony caches into the filesystem, which is a no-no if your app is distributed to many servers), or how to install PHP extensions. In the end we went with a VPS, but doing some Google searches now, reveals more results, like there is an article that describes how to get a Symfony2 app running, that advocates compiling PHP on Heroku for the various modules you might need, and there is also a PHP Buildpack, which I have been told is a good way to get PHP running, but I'm not entirely sure what it is. This wasn't such a surprise, the reason they don't really advertise PHP support anywhere is probably because it's still in a pretty early stage. If your application is simpler/smaller, I still think Heroku might be a good choice, and it will probably evolve as time goes on.
OPCFW_CODE
What operating system does Raspberry Pi use? There are many operating systems available for the Raspberry Pi. Raspberry’s own OS is based on Debian Linux and is called Raspberry Pi OS. This OS has most features that anyone would need from a basic OS. Other popular alternatives include: - Raspberry Pi OS The official OS for Raspberry Pie from the Raspberry Foundation. Based on Debian, this OS can be installed by using NOOBS (New out of the box software) setup that is often included in the Raspberry Pie kit or can be downloaded from the Raspberry Pie website. Alternatively, you can use the Raspberry Pie Imager to install the OS. More details here. - Kali Linux often known as the hacker’s OS, Kali Linux is an outstanding distribution for the Raspberry Pi. - Ubuntu Mate Although not officially supported. the Ubuntu Mate is an extremely lightweight distribution for the Raspberry Pi. - Ubuntu Server If you wish to use your Raspberry Pi as a server, the Ubuntu Server OS is perfect for the Raspberry Pi. Lightweight, robust and fast, Ubuntu Server checks all the right boxes as a server OS. OSMC is possibly the best media server software that can run on the Raspberry Pi. OSMC runs Kodi to help you manage and run your media files. A great lightweight OS that is capable enough to run KODI on your pi. - Windows 10 IoT Core Windows 10 IoT core is a great cross-platform dev system from Microsoft for embedded devices. DietPie is a great lightweight alternative tio Raspberry Pi OS. - Lakka Linux This one is for gamers. Lakka Linux helps you to turn your Raspberry Pi into a retro gaming console. Are you using your Raspberry Pi to mine Bitcoins? RokOS will help you turn your Raspberry Pi into a node. You can read all the information you need in detail about setting up Raspberry Pi 4 on their official website. Here is what I did—I downloaded and launched the Raspberry Pi Imager and followed the super-easy on-screen instructions to install the Raspberry Pi OS onto an old Micro SD Card I had lying around. I got a cheap card reader to transfer the files to it. The whole process took about 6 minutes, which included downloading the Imager to my Mac, downloading the OS needed, installing the OS on the SD card and verifying the card. The whole process is very seamless and can be done by anyone who understands the basics of using a computer. Installing an operating system on the Raspberry Pi 4B The easiest OS to install or upgrade is macOS. Windows 10 is a very close second, however, second place here definitely goes to Raspberry Pi OS. All you do is run an imager to copy files onto the SD card, shove the SD card into the Pi and let it do its magic. It will only take a few seconds to start up! It is really that simple. The on-screen instructions will help you set your locale, set a password, connect to your WiFi or Ethernet connection and you’re done. Raspberry Pi OS will automatically download the updates (if any) and be ready to use in a jiffy! Starting up the Raspberry Pi 4B Note: The Raspberry Pi 4B does not have a switch to turn it on or off. Simply plug in the USB-C cable to turn it on. DO NOT unplug the cable without saving your files and shutting down the device! You may lose unsaved data or even corrupt the OS installation on your SD card. Shutting down the Raspberry Pi 4B Turning off and shutting down are two different things, as is the case with any device. To Turn Off the Raspberry Pi, first,click on the Raspberry menu and choose Logout. From the Shutdown options pop-up, choose Shutdown and then unplug the power cable. You can also choose to Logout if you wish to log off but keep the system running, and Reboot if you wish to restart your device from the Shutdown options pop-up menu. Getting the software you need Chances are, that the software you really need might not exist for the Raspberry Pi 4, but an open-source clone may exist! The applications I use the most include a web browser and a word processor. Both of these applications are downloadable for free. I downloaded the Chromium browser, the Grammarly add-in, and installed LibreOffice—an open source productivity suite just like Microsoft Office. Okay, I admit that LibreOffice Writer is not as fully-featured as the latest version of MS Word for macOS could be, but it is pretty close! You can download all the software you need from Raspberry > Preferences > Recommended Software or from the internet and a lot of them are free! Running streaming services on the browser I sometimes watch movies on Netflix/Prime. Now you might encounter problems trying to run those on Chromium and Firefox. This is because of Widevine./DRM support being unavailable in Chromium by default. Now the easy way out is to use the solution proposed by V Petkov here. This essentially “adds a new browser called Chromium Media Edition”. Go read the blog if you want to know how the nuts and bolts of this work. If you do not want to read through the blog, here’s what you gotta do: To fix this for Chromium: Open your terminal, type the following: curl -fsSL https://pi.vpetkov.net -o ventz-media-pi Once installed, reboot your pi. Now from the applications menu (pi menu/start button) click on Internet > Chromium (Media Edition) Log in to Netflix/Prime and enjoy the videos! Using the Raspberry Pi 4 as a word processor aka your main computer This was the shortest list, so this is what I started with. Here’s what I disliked in my everyday use: - Finding alternatives to common software OK, this is not a deal-breaker, but for some, it might be. Although browsers, email clients, productivity suites, image and video editing software are all available, they might not be as seamless and awesome as their paid, mainstream OS counterparts are. Fanbois might beg to differ, but we are considering our average user and not a mega-nerd. The system IS a little slow. This is relative and it might be fine for your taste. By slow I mean that emails take half a second longer to open in chromium or Word files might take a second longer or two, but that is about it. Real-world slowness compared to a machine 5 times powerful and 10 times the cost is hardly noticeable, but it is there. - Browser slowness Apps that render in the browser such as Grammarly, Headline Analyzer, Google Docs and Gmail will be slow to load and slow to work with. In fact, it often takes up to 5 seconds for the text to appear when I type it in Gdocs. - Jitter while playing videos on YouTube I noticed a slight jitter, but it could be just me and my funky internet connection that sometimes drops out. I noticed a similar jitter when I was using my IdeaPad 330s with an 8th gen Intel i3 and a slow mechanical 5400RPM HDD. - Learning curve What you take for granted in Windows and Mac might need some getting used to on Linux/Raspberry Pi OS. You might need to learn how to do things on the terminal for tasks that you had apps or widgets for in Windows/ macOS. - A polished user experience Let us just admit it. macOS is prettier, snappier, faster, better. Windows 10 isn’t that ugly either. Raspberry Pi OS needs some amount of polish to make it nice and shiny that’s all. NOT a deal breaker in any way. - Keystrokes registering a little late This happened on more than one occasion and has never occurred on the Mac, but I can live with it. - WiFi connectivity is a little wonky Sometimes, turning the case the other way gets a better signal. I do not know why, but it just does. Just a minor annoyance. - Throttling and heating issues These happen in intensive tasks such as watching videos or having too many open tabs at the same time. Using a fan and a heatsink helps a LOT. - Grammarly add-in for LibreOffice Writer This I miss a LOT. I write content for a living. I rely on Grammarly for nit-picking. Although there is a browser extension, but I would like a native add-in. Not a Raspberry Pi OS problem, but I am learning to work around it. That’s it. There was nothing else that I missed. For an everyday, general-purpose computer that can do any usual task and the occasional heavy-ish lifting, the Raspberry Pi 4B is a great little computer!
OPCFW_CODE
What is the common construction of questions spoken by native speakers of Spanish? The following question is within a lesson on Memrise.com: Example1: ¿Conoció tu madre a tu padre en una estación de tren? Did your mother meet your father at a train station? Being that I'm so new to Spanish, I would construct this sentence as such: Example2: ¿Tu madre conoció a tu padre en una estación de tren? The construction of Example1 really stifles my listening comprehension. This is because my brain is programmed to English, and the construction of Example2 is similar to that of English. So ultimately my question is, how common do native speakers use the construction of Example1? If the answer is, "very common", then I'm in for a world of hurt because I find Example1 to be sooooo difficult, and therefore I'll have to increase my study times in this particular area. In my experience, native speakers would also prefer the second construction. In this specific case, even ¿Tu madre y tu padre se conocieron en una estación de tren? very common, and if you see the answers, there are other contructions Well, since Spanish are English are two different languages, yes, you will have to get used to a whole set of syntactical constructions that may or may not "look" similar to English. There are way too many Spanish speakers from way too many different countries, and you will not find a uniform way of doing syntax, especially when Spanish is a tad more flexible in that respect than English. As a follow-up question to native speakers, can native speakers usually and easily comprehend all of these various constructions? Often, in Spanish you'll see a lot of these constructions. We invert some part of the sentence because we want to show preference of something when speaking. ¿Conoció tu madre a tu padre en una estación de tren? ¿Tu madre conoció a tu padre en una estación de tren? As you were told, both constructions work, with no difference in meaning. The same applies if you wanted to turn both madre and padre into a pronoun (here we'll need to use the pronoun se): ¿Se conocieron en una estación de tren? ¿En una estación de tren se conocieron? We can even form more constructions, perhaps less used, but which can be used anyway: ¿Conoció tu madre en una estación de tren a tu padre? And commas can also be added to rewrite the question and interrogative signs are placed in different positions: En una estación de tren, ¿tu madre conoció a tu padre? Conoció tu madre a tu padre, ¿en una estación de tren? As you can see, in Spanish we have lots of options to rephrase a sentence, it all depends how we want to express it on that precise moment. Exactly. In general in Spanish, the more important elements go first. If the act of meeting is more important, ¿Conoció tu madre...?, if the mother is more important, ¿Tu madre conoció...?, if the father is more important, ¿A tu padre le conoció tu madre...?, etc Thanks, Ustanak and @guifa. I have no idea of what search criteria would help me to find the answers that your're providing here. This is soooo valuable! O si te asombra que fuera en una estación del tren; "¿Así que fué en una estación del tren que se conocieron tu mamá y tu papá? Both are correct however I would say that Example 1 sounds to me like a "book" while Example 2 sound like everyday common people talk. Given the answer from Ferran (I think from Spain) where he likes Example 1 best I'd say that the first sounds good in Spain and the second is best in Latinamerica. The way I would say it would be more like ¿Tus padres se conocieron en una estación de tren? I'm also from Spain and option 2 sounds better, but the example in this answer sounds the best (although slightly different meaning) @FranciscoPresencia I'm curious, in what way is the example in this answer slightly different in meaning? Well for starters the topic of the conversation are "the parents", not the mother only. Then it asks how they met each other, not how the mother met the father. So for instance, if you're talking specifically about your mother, I'd expect the sentence to be any of the examples you showed. However if you are talking about your parents or in other general topics, then the one with "Tus padres" sounds better. Also, as a native spanish speaker, i can confirm both ways are correct. First example is a "formal" mode of asking. More appropriate for writing or being polite. The most common way is the second example, since it's the colloquial way to do a question. As a native spanish speaker I would say both of your construction are correct. Maybe the first one is more common if you use a translation and you are answering a question of your students book. For me, the first one sounds more correct because of the order construction. However I miss in the first one a how (Cúando) like: ¿Cómo conoció tu madre a tu padre en una estación de tren? How did your mother meet your father at a train station? In Spanish there isn't an exact construction of questions because it admits several changes of sorting words,subject etc. It is not very common to put the subject at the beginning of a question. However, everybody will understand you whichever you choose. I do not agree that a "how (Cómo)" is missing. Those are to different questions. If you add "Cómo" you are asking for the whole story of how they met, but the original question only asks for if they met at a train station or not. As for the rest of the answer, I do agree. Thats true, but without "how" the questions sounds a bit weird.
STACK_EXCHANGE
Intermittent driver exception on shutdown Driver version 9.4.1.jre11 SQL Server version Microsoft SQL Server 2019 (RTM-GDR) (KB5035434) - 15.0.2110.4 (X64) Mar 12 2024 18:25:56 Copyright (C) 2019 Microsoft Corporation Developer Edition (64-bit) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor) Client Operating System Linux JAVA/JVM version eclipse-temurin:17.0.10_7-jre-jammy Table schema n/a Problem description While attempting to cycle a Virtual machine in AKS, we're getting the following exception. java.lang.NullPointerException: Cannot invoke "com.microsoft.sqlserver.jdbc.TDSChannel$ProxySocket.setStreams(java.io.InputStream, java.io.OutputStream)" because "this.proxySocket" is null The throwing of this exception causes other undesirable side-effects. Expected behavior No NullPointerException even though we are shutting down. Actual behavior java.lang.NullPointerException: Cannot invoke "com.microsoft.sqlserver.jdbc.TDSChannel$ProxySocket.setStreams(java.io.InputStream, java.io.OutputStream)" because "this.proxySocket" is null Error message/stack trace Complete error message and stack trace. java.lang.NullPointerException: Cannot invoke "com.microsoft.sqlserver.jdbc.TDSChannel$ProxySocket.setStreams(java.io.InputStream, java.io.OutputStream)" because "this.proxySocket" is null at com.microsoft.sqlserver.jdbc.TDSChannel.disableSSL(IOBuffer.java:734) at com.microsoft.sqlserver.jdbc.TDSChannel.close(IOBuffer.java:2097) at com.microsoft.sqlserver.jdbc.SQLServerConnection.clearConnectionResources(SQLServerConnection.java:3797) at com.microsoft.sqlserver.jdbc.SQLServerConnection.close(SQLServerConnection.java:3781) Any other details that can be helpful If this was already fixed in a newer version of the driver, please let me know so I can upgrade. JDBC trace logs Intermittent, sorry I don't have this... Hi @joel-rieke, It looks like a null check was merged for 9.5.0. Please upgrade your driver version to at least 9.5.0 (we recommend the latest, 12.6.2). Looks like it's already been fixed... I'll follow your advice and if I see this happen again, I'll either re-open this one or file a separate new issue. Incidentally, I did try to jump to 12.6.2, but none of our existing jdbc connection urls seem to work in our development environment. Perhaps there was a change in requirements for the jdbc connection url going from 9.4.1 12.6.2. I did notice that 9.5.0 only has a pre-release build available to it which makes me a little nervous about using it? What about the next stable release after 9.4.1, 10.2.0? Same problem with 10.2.0. I switched back to 9.5.0.jdk17 prerelease for now. It seems it can't find the database name using our existing jdbc url config. I'm not seeing any changes from 9.5.0 to 10.2.0 that would explain the issue you're seeing. Can you create a new issue and provide the connection string options you are using in the description? This will help us keep things organized as the thread we're commenting in now has had its issue already resolved. This maybe is an issue with trustServerCertificate=true parameter? We use this in development. Will the jdbc connections fail with new builds and this parameter? I don't see any changes made to trustServerCertificate between 9.5.0 to 10.2.0. One thing I am seeing is that the default value for encrypt was changed from false to true starting after the 9.4.0 release. Considering the two are linked, could this be a source of your issues? Please see our MS Docs page on "Understanding encryption support" for more information on these parameters and their changes.
GITHUB_ARCHIVE
Is there any way to identify that the job was ended manually with option '4' Some of the jobs got ended with the below message. "Profilexxxx has issued a controlled shutdown request for work". How to check if these jobs were ended manually by taking option '4'. The jobs ends and restarts everyday. When I check the previous day joblogs I could see the message CPC1125, and when the job ended abnormally it was cpc1126 and CPC1235. Why is inspecting the job log insufficient? Job log for the workstation is not saved in the server. Only the jobs that were ended, has the job log. In that log, I am not able to see what actions were performed on that job. It shows only the ENDJOB request has received. There are several ways that a job can end. The normal ways are: Normal end - The programs end normally without any messages. Controlled end of job - The job is ended by taking a 4 on the job in WRKACTJOB or by calling ENDJOB. Immediate end of job - The job is ended by taking 4 on the job in WRKACTJOB or by calling ENDJOB with OPTION(*IMMED). Controlled end of the subsystem - The subsystem the job is running in is ended by calling ENDSBS. Immediate end of the subsystem - The subsystem the job is running in is ended by calling ENDSBS with OPTION(*IMMED). Program failure. User takes a C or D on the message. There are others, but they are less likely. In fact, the ENDSBS OPTION(*IMMED) is not terribly likely either, but easy to test. One thing to notice right away is that jobs can be configured to only spool a job log when the job ends abnormally. In this case you should only get a job log for reason 6 above. Otherwise, the following will be found in job logs of the job that was ending: Normal end - Only CPF1164 with end code 0. No escape messages are present in the job log. CPF1164 Completion 00 03/26/19 09:06:14.261295 QWTMCEOJ QSYS 0162 *EXT *N Message . . . . : Job 274217/MMURPHY/MMURPHY ended on 03/26/19 at 09:06:14; .005 seconds used; end code 0 . Controlled end of job - Again no escape messages, but CPC1126 will be present. End code is 10 on CPF1164. This shows the user profile that ended the job. CPC1126 Completion 50 03/26/19 08:42:37.604265 QWTCCCNJ QSYS 0C74 *EXT *N Message . . . . : Job 274196/MMURPHY/MMURPHY was ended by user MMURPHY. Cause . . . . . : User MMURPHY issued a controlled end job request for job 274196/MMURPHY/MMURPHY. CPF1164 Completion 00 03/26/19 08:42:37.607135 QWTMCEOJ QSYS 0162 *EXT *N Message . . . . : Job 274196/MMURPHY/MMURPHY ended on 03/26/19 at 08:42:37; 6.291 seconds used; end code 10 . Immediate end of job - Again no escape messages, but CPC1125 will be present. End code is 50 on CPF1164. This shows the user profile that ended the job. CPC1125 Completion 50 03/26/19 08:44:46.773821 QWTCCCNJ QSYS 0C74 *EXT *N Message . . . . : Job 274200/MMURPHY/MMURPHY was ended by user MMURPHY. Cause . . . . . : User MMURPHY issued an immediate end job request for job 274200/MMURPHY/MMURPHY. CPF1164 Completion 00 03/26/19 08:44:46.774951 QWTMCEOJ QSYS 0162 *EXT *N Message . . . . : Job 274200/MMURPHY/MMURPHY ended on 03/26/19 at 08:44:46; 5.661 seconds used; end code 50 . Controlled end of subsystem - No escape message, CPC1206 will be present. No indication of who issued ENDSBS. End code 10 in CPF1164. CPC1206 Completion 50 03/26/19 08:52:59.936053 QWTMMTRS QSYS 0370 *EXT *N From user . . . . . . . . . : QSYS Message . . . . : Subsystem is ending controlled. CPF1164 Completion 00 03/26/19 08:52:59.939458 QWTMCEOJ QSYS 0162 *EXT *N Message . . . . : Job 274207/MMURPHY/MMURPHY ended on 03/26/19 at 08:52:59; 16.004 seconds used; end code 10 . Immediate end of subsystem - No escape message, CPC1207 will be present. No indication of who issued ENDSBS. End code 10 in CPF1164. CPC1207 Completion 50 03/26/19 09:05:00.642584 QWTMMTRS QSYS 0370 *EXT *N From user . . . . . . . . . : QSYS Message . . . . : Subsystem ending immediately. CPF1164 Completion 00 03/26/19 09:05:00.643785 QWTMCEOJ QSYS 0162 *EXT *N Message . . . . : Job 274213/MMURPHY/MMURPHY ended on 03/26/19 at 09:05:00; 14.583 seconds used; end code 50 . Program failure - There will be escape messages prior to the CPF1164, potentially a CEE9901 if the program is an ILE RPG program, maybe others depending on the program type that ended abnormally. You will likely see an inquiry message with a reply of C, D, or F. These all cancel the program, and if it is the top program in the stack, it will cancel the job. Be careful though, CL programs allow a reply of R or I to the inquiry message which will not cancel the job, but retry or ignore the failing program. So not all escape message will cause the job to fail, only unmonitored ones. Surprisingly the CPF1164 will have an end code of 0 as the job really does not fail, but it is ended normally after handling of the escape messages sent by the top program in the stack. For the job that is calling ENDJOB or ENDSBS, these will be logged as well, but once again, it could be that the job is configured to suppress the job log in the event of a successful completion, so you might not see it. The ENDJOB message is CPC1231 and shows the job that was ended. This happens when the user takes a 4 against the job. Message ID . . . . . . : CPC1231 Severity . . . . . . . : 00 Message type . . . . . : Completion Date sent . . . . . . : 03/26/19 Time sent . . . . . . : 08:44:46 Message . . . . : ENDJOB started for job 274200/MMURPHY/MMURPHY. Cause . . . . . : The End Job (ENDJOB) command is running for job 274200/MMURPHY/MMURPHY. If the user ends the job by typing ENDJOB or ENDSBS on the command line you will see a request message like this: From . . . . . . . . . : MMURPHY Severity . . . . . . . : 00 Message type . . . . . : Request Date sent . . . . . . : 03/26/19 Time sent . . . . . . : 08:52:57 Message . . . . : ENDSBS SBS(MMURPHY) Unfortunately in the case of the subsystem, there is no indication of which jobs were ended.
STACK_EXCHANGE
What are some Reptiles that dont require UVB? Before you start saying if you cant provide UVB lighting dont get a rep. thats not the case at all. I have 2 leo geckos and this summer were making a nice large cage for them much like this pic: http://www.cagesbydesign.com/p-252-majestic-reptil... So theyll be on the top(devided ofcourse) and the bottom will be open. Ive been thinking ill devide the bottom up for the feeders also so theyll be atleast 30 long by like 18 wide maybe 24 high Im not a 100% on the last two but the glass we have is 60 inchs long so well have alot of extra room. So i was thinking with the last bit i could get another reptile. But i dont want one that need UVB lighting. Not that i cant afford it. I actually have 2 right now brand new not using. Its just in the long wrong. the light bill will be a ton which i dont wanna have. So i was thinking something like my leo geckos that need a undertank heater and/or a heat lamp(which i have). any ideas? or suggestions? Id prefer that itd be handlable (not Tokay geckos!) and not something thats gonna need mice and rats for food(snakes!. I loveeee rats and couldnt bare to feed them to a snake>.<) that and my mother is terrified of them and rather cut there heads off then learn to like them. Anyways ideas? something thatll eat roachs, crickets, meal worms, wax worms and MAYBE the occasional pinky>.> If my geckos will eat them other wise just the other stuff. thanks best answer will get 10 Points~ also only answer if you know what your talking about. Dont say sayyy another leopard gecko coz there cool or whateverXD If the place i got mine ever has more super snows then ill probably get another one. Id just like to expand my collection^^. Id love to educate people around me that reptiles arent something to fear and kill rather are neat creatures that help out the enviorment and let kids experince it(in high school we had a pet ball python for a class pet and thats what got me over my fear! there such sweet snakes!) so nothing thats gonna bite to much and hurt. No the sizes are definate. Im not sure how big they are other then that there 6 feet long(the glass) i was just saying the minum of what they are. Im pretty sure we can make it wider if need be since we just need the glass on the bottom for the under tank heats. That was just the rough gestimante. Im thinking a african fat tailed gecko, or a crested gecko but im more leaning towards a pacman frog. Ofcourse id cut it smaller since it wouldnt need a HUGE tank like that and maybe get two. Any experince with them??! good idea bad? ive read they not need UVB lighting - Danger ErinLv 71 decade agoFavorite Answer reading all that, i'm sure you will do what is best for whatever pet, so i'm not going to mention tank demensions or anything :) pacman frogs are really cool little guys, but like other amphibians if you want something to handle then they arent really the best. Their skin quickly absorbs any chemicals or oils that are on your skin and they could seriously harm him - they are more a hands off pet AFTs are more or less the same as leopard geckos in temperament (not care - they need more humidity), but i dont think they look as pretty as leos ocelot geckos could be an option? these guys are seriously cute (i got myself a baby one a few weeks back; http://www.flickr.com/photos/33469876@N02/44602659... ). They are really funny little things just full of character! They come in quite a few morphs, like leos and are a little more unusual. They are just as friendly though :) sandfish skinks are truly awesome little lizards. i love my Merry and Pippin! Although they hide alot under the sand, when you do see them they are just great. They look really funny, and they are beautiful. http://sandfish.moonfruit.com/#/gallery/4537624816 i love my cresties too though! oh my gosh its so hard choosing new pets haha! if you are into arboreal guys, then gargoyle geckos look really nice, and i think they are similar in care and temperament to the cresties, but i've never kept one - AlexandraLv 61 decade ago Hmmm... Do you still have the option to change the dimensions? I'm not sure how you'd get around this, as your leopard geckos obviously need max floorspace, so a wide construction suits them, but if you could somehow increase the height of the lower section, crested geckos would be perfect! They don't need UV lighting (although some think they benefit from low levels) and heating costs would be virtually non-existent, as they like it pretty cool (75f-80ish). In fact, if temperatures go over the mid 80s they get v.stressed and it can be fatal, so they really wouldn't be a good choice if you live in a very hot area... Humidity needs to be much higher than your leos (not a problem - just choose appropriate substrate and mist regularly) but they're highly arboreal, so they need height and branches to climb, not floorspace, like terrestrial species, if you see what I mean? They're also pretty handleable, which was another of your criteria :o) Hope this helps? .Source(s): Keeping and breeding various herps since 1994. http://www.crestedgeckocanada.com/care.htm - 1 decade ago Geckos are nocturnal, although you can see activity in the day. Frogs are a good choice as well without UVB. If you can afford them, poison dart frogs make a nice colorful area. They are not poisonous in captivity. With the frogs they only need enough lighting that lights the tank up so they can bee seen when they are active in the day.Source(s): Reptile fanatic, weekend herper - weatherlyLv 44 years ago It relies upon what style of reptile. they are in a position to bypass relatively devoid of uva, the human beings on the expo in all hazard did no longer desire to haul around extra beneficial kit. maximum lizards want it, as with some geckos and snakes they'd not. All i will say is learn alot, in addition they want specific temperatures too.
OPCFW_CODE
Equivalent formulation of linear logic with more axioms and less inference rules We can formulate classical (sequent) logic with only the structural inference rules including cut, and a collection of axioms like $A, B \vdash A \wedge B$. This is equivalent to the usual sequent calculus of classical logic, in the sense that the provable sequents are the same. Can we do the same with classical linear logic? Assuming only Ax: $\frac{}{A \vdash A}$ and Cut: $\frac{\Gamma \vdash A, \Delta\quad\Xi,A \vdash \Phi}{\Gamma, \Xi \vdash \Delta, \Phi}$, and a collection of sequents as axioms, can we recover the full MALL? Of the four binary connectives, we can easily come up with the elimination axiom for $\&$ and $⅋$ (\lpar is not supported by MO), and the introduction axiom for $\otimes$ and $\oplus$. However, I'm having difficulty with the other side. It seems that if we allow ourselves additionally the rules for $\multimap$, then I think the elimination axiom for $\otimes$ would be $A \multimap (B \multimap C) \vdash (A \otimes B) \multimap C$. But I'm not able to proceed any further. Also, I haven't considered negation yet. Perhaps it will be better to regard negation as an involution on the formulae, which might give us a way to relate the different connectives. Is there any previous work on this? Or can it be proven that such a formulation is impossible? (This might be achieved by demonstrating that sequents as opposed to rules never impose enough constraints on the logic.) You should have a look at Residuated Lattices: An Algebraic Glimpse at Substructural Logics by Galatos, Jipsen, Kowalski, and Ono. Among other things, they give various presentations of substructural logics (of which MALL is a special case). E.g., IIRC there is a Hilbert calculus for FLe (of which MALL is an axiomatic extension) whose only proper rules of inference are modus ponens $A,A\to B/B$ and adjunction $A,B/A\land B$. (I suppose $\to$ is denoted by the lollipop thingy in linear logic; $\land$ is additive conjunction, I can’t remember how it is written in LL.) You can translate ... ... that into a sequent calculus with only a few inference rules. @EmilJeřábek Aww that book is probably what I've been vaguely wanting to read. It should really be an answer.
STACK_EXCHANGE
[Theme Request] Yuzuriha Inori (+ Mana Ouma?) Who/What do you want to be featured? Yuzuriha Inori, Guilty Crown's main character, possbily comboed with Mana Ouma, GC's supporting character for a two-character theme. What do they look like? Inori's Anilist character page Inori's fandom wiki page Mana's Anilist character page Mana's fandom wiki page Yuzuriha Inori Mana Ouma What type of theme? [ ] Light Theme [x] Dark Theme Additional Information (Optional) The character's connection cannot be explained without some spoilers to the story, so I leave the explanation of that to the provided links. Personally I have a bit of a sentiment towards both the anime and the character, it being my second show watched, regardless of its mixed reception. Inori herself (and EGOIST's music) was the show's saving grace at the time and a pretty popular waifu. The collection is by no means running short of pink themes so I thought adding Mana for a two-character theme would bring in some uniqueness to the table. If you don't like that idea though, I really mostly care about Inori. Also, thank you for all the good work done. The collection is by no means running short of pink themes Yup, there's a whole bunch of pink haired anime girls, and therefore pink themes. So many! I thought adding Mana for a two-character theme would bring in some uniqueness to the table. That's an interesting idea, though I don't know what I would do different with there color palette. This image is neat: However, that ☝️ color palette is kind of the same vibe I am planning for Senjougahara's theme I dunno, 🤷 Anyways thanks for the suggestion and taking the time bring this to my attention. Yup, there's a whole bunch of pink haired anime girls, and therefore pink themes. So many! There really is something in that hair, huh. However, that ☝️ color palette is kind of the same vibe I am planning for [Senjougahara's theme](https://bakemonogatari.fandom.com/wiki/Hitagi_Senjougahara) Damn, them popular waifus always win. Yup, there's a whole bunch of pink haired anime girls, and therefore pink themes. So many! There really is something in that hair, huh. However, that ☝️ color palette is kind of the same vibe I am planning for [Senjougahara's theme](https://bakemonogatari.fandom.com/wiki/Hitagi_Senjougahara) Damn, them popular waifus always win. As a matter of fact, Inori's also been then adapted as EGOIST's icon and persona for their vocalist during their later real life activities. They had support of her original designer - redjuice - which means you can also find her in some different color schemes: White Red/Black Or even orange Her original suit was often rendered in more of an orange-ish fashion, than pure red: A little extra So in my opinion her color palette could offer some more elasticity than intially expected, though idk, I'm not the color designer here. That's it for my attempt at convincing you, I did my best-ish. Thanks for consideration one way or another. Damn, them popular waifus always win. If you stick around till later this year, then it well make sense 😄 That being said this looks amazing: Thanks for the ideas, We'll see how I feel later this year.
GITHUB_ARCHIVE
Call reverts when asking a safe for it's owners (gnosis chain) We are running a safe indexing service using the graph protocol and are indexing all the safes created in gnosis chain since block 17265698 (which is meaningful to our particular needs). As part of this we perform an eth_call at the block the safe was created to find out its owners (as well as listen for owner change events to update the safe's owners in our subgraph). We ran into a very unusual situation, there is a specific safe, 0x3af12EcC0A8Ef31cc935E0B25ea445249207d21A on gnosis chain that was created in transaction 0xbd72451723d4a9cc3a039db3501ac105b3eba0f1deb4d4efb9ffd7c3408b6d83 in block 21735473 that will revert if you call getOwners() on it. We are very confused as to why this would ever happen since the getOwners() contains no require(). Many blocks after this safe was created, there was a subsequent transaction to assign owners to this safe, so that in the latest block, calling getOwners() no longer reverts. However, when building up our index, our subgraph queries a each new safe for its owners at the time of creation so we have a historical record of who owned the safe. The revert encountered when calling getOwners() at block 21735473 on gnosis chain breaks our subgraph's indexing process. Could you help us to understand why this happens? it looks like this safe initially didn't have any owners. That is a very unusual situation, but certainly calling getOwners() on a safe that has no owners should not cause a revert, right? Using a gnosis chain archive node you can see this here: (0x14BA831 is 21735473) POST https://xdai-archive-df.xdaichain.com/ { "jsonrpc": "2.0", "method": "eth_call", "params": [{ "to": "0x3af12EcC0A8Ef31cc935E0B25ea445249207d21A", "data": "0xa0e67e2b" }, "0x14BA831"], "id": 1 } returns { "jsonrpc": "2.0", "error": { "code": -32015, "message": "VM execution error.", "data": "Reverted 0x" }, "id": 1 } And if you use the latest block you can see it works, and if you use a block earlier you can see that the contract doesn't exist yet. but between block 21735473 and 21738156 for some reason this call reverts. Notably this did happen a few hundred blocks after the latest Gnosis chain hard fork--maybe related? Any idea what might be going on here and is there a bug in the Gnosis Safe contract that could be causing this? setup was not called until block 21738156: https://blockscout.com/xdai/mainnet/tx/0xdf352729d9ecc1ae13d0beda603bfdb7edc2c2304268e3fecbf101630a558e79 so getOwners() it's expected to fail. So generally it seems that the setup() function is called via the calldata that is passed to the proxy factory's createProxy(). This is the first time I've encountered a safe that didn't call setup() as part of the createProxy(). So it seems that these types of safes are in an awkward state, i.e. there are certain things you are not allowed to ask it, like it's owners. Is there a reason why safes are allowed to exist in this state? Couldn't anyone actually claim ownership of a safe in such a state? It looks like in the OwnerManager.sol we leverage a mapping to represent a linked list of owners for the safe, and in getOwners() we build up an array from this mapping by following that linked list. The logic, though, does not seem to work when there are no owners. The while loop that drives this gets stuck in a never ending loop. could we just return an empty array when the ownerCount is 0?
STACK_EXCHANGE
Welcome to my profile page :) I am Rafał Gicgier, an experienced Web Developer from Poland with more than 9 years of experience with WordPress. ➤ Informal Description (a little bit about my work habits) - I am a friendly, open-minded and experienced programmer willing to deliver a high quality, scalable, secure solution to the clients I work with. - I like to plan the project, hence expect me to ask a lot of questions throughout the whole development. - I appreciate ongoing and clear communication, which I believe helps in building a healthy relationship. - I usually need some time to think things through. So, I’d rather avoid ASAP projects. - I like to work on mid-sized projects the most, although I am open to any kind of work when it comes to the project size. ➤ My Certified Qualifications & Awards ➤ Formal Description (a little bit about my experience) Given my 9 years of experience working as a Lead WordPress Developer for the Xfive.co, academic background and a huge interest in Cybersecurity I can help Codeable client’s with: - Custom Theme development – from .psd theme development, child theme development, bug fixes, style fixes, WooCommerce Theme development. - Custom plugin development – custom database tables, API integrations, custom functionalities. - Site Migration – from one host to another, from wordpress.com to self-hosted and another way around. - Gravity Forms custom functionalities – custom front end login / registration / reset password / edit details / post creation forms. - WordPress maintenance fixes – inability to login, .js, .css, .html and .php bug fixes. - Specific host choices – I am very experienced with WP Engine, Bluehost,Kinsta and Cloudways – the latter becoming my preferred choice lately. - Custom Back End utilities based on Advanced Custom Fields (they are great for custom, hand-crafted Back End panels). - Safeguarding WordPress, removal of malicious code, security checks, hardening of WordPress using multiple tools, password suggestions, safety suggestions, file privileges and more. - Plugin wise I have extensive knowledge of: Gravity Forms, WooCommerce, EDD, WPML, Advanced Custom Fields, BackupBuddy, WP All Import, bbPress and BuddyPress. ➤ Other (a little bit about my hobbies) I love WordPress for 11 years where I started my first, personal blog. I like the platform’s availability, extensibility, ease of use and scalability. I like to blog as I can help the others out: WP doin. With Gravity Forms tutorials being the most popular so far. Great experience, excellent service and communication. Exactly what I was looking for. Thanks a lot! Rafal is the best, don't ever let him go! I will definitely request his assistance for all my development needs. What a brilliant developer! It was very plesant to work with Rafał, he helped me out with a domain issue. Quick, fast, during the weekend. Did a great job as usual. Great communication. Takes the time to understand the details of the project. The result is perfect. Thank you Rafal ! Quick response. Easy to work with. Did what i wanted. Happy with the results.
OPCFW_CODE
Feature request: Example of a more modern SPA dotnet app Prerequisites [X] I have searched the repository’s issues and Kinde community to ensure my feature request isn’t a duplicate [X] I have read the contributing guidelines [X] I agree to the terms in the code of conduct What is the problem you’re trying to solve? All the dotnet documentation i discovered even when you click on backend on your website talk about your SDK and authentication redirects etc etc. When i use other providers like Auth0 in a Rect SPA talking to dotnet lambda rest API's as the backend i can actually just use dotnets built in JWT support to verify the token easily and extract information. What solution would you like to see? I would like to see an example of a standalone dotnet rest service that gets called by an SPA using your authentication and it only implements what is needed to handle and authenticate the request correctly, so verify the JWT and authorization for method access etc. If it really needs your SDK and all that setup that is in the example that is fine but that to mean seems more like a setup when using dotnet for both frontend and backend in one not using dotnet for backend rest services like most may think of when you say backend dotnet vs listing it under frontend. Additional information No response Agree lack of .net moderen examples is not giving us confidence in the product. Especially when the this SDK has not been updated in sometime. Everyone else can supply Blazor Server, Blazor WASM examples. See Logto, Auth0 Thank you for the feedback. We will likely publish a starter for Web API projects that can be used for reference. To clarify, Kinde follows the OpenID Connect specification and as .NET has built-in libraries that support this it is not necessary to the use the SDK. Where a JWT is passed to the backend typically the JwtBearer middleware would be used. It might be a good idea then to supply a simple example using the .NET built in libraries. Showing how to use the settings from Kinde. Kinde support also provided me with settings needed to make the Swagger Ui work on the Web API project with authentication button which was extremely useful for my team To clarify, Kinde follows the OpenID Connect specification and as .NET has built-in libraries that support this it is not necessary to use the SDK. Where a JWT is passed to the backend typically the JwtBearer middleware would be used. This statement is mostly right from our testing but where the dotnet built in JwtBearer middleware will have issues is the format kinde uses for the roles in the JWT access token if you enable them. It is an array of objects vs a list of string like permissions are so custom code or SDK would likely be needed for that. We have ignored that for now and just use permissions. Just to follow-up, we have released a guide to securing web API projects with Kinde: https://docs.kinde.com/developer-tools/your-apis/dotnet-based-apis/ We have also released a starter kit for web API projects that contains a complete example: https://github.com/kinde-starter-kits/dotnet-webapi-starter-kit. I have had a further look into handling role. Microsoft's schema expects a role claim, the closest I could find in the OAuth 2.0 spec is a roles (with an s) claim, but has no canonical types. Returning the roles in the expected format could be something we support via a token customisation in future. ASP.NET does support creating custom policies that can inspect any claims, so provides more flexibility. Creating a custom policy to secure endpoints based on the roles defined in Kinde is the approach we have taken in the guide and starter kit.
GITHUB_ARCHIVE
HP EliteDesk 705 G3 Mini Performance and Power Consumption Instead of going through the entire Linux-Bench test suite, we are going to show a few performance and power numbers here to give a general sense of performance. We actually planned to do storage testing, but then we realized that there was a huge variability in terms of what drives could be found in machines. Python Linux 4.4.2 Kernel Compile Benchmark This is one of the most requested benchmarks for STH over the past few years. The task was simple, we have a standard configuration file, the Linux 4.4.2 kernel from kernel.org, and make the standard auto-generated configuration utilizing every thread in the system. We are expressing results in terms of compiles per hour to make the results easier to read: The performance here is more aligned to the lower-end Intel Atom line than the higher-end Core i3/i5/i7 and the AMD Ryzen Pro series. 7-zip Compression Performance 7-zip is a widely used compression/ decompression program that works cross-platform. We started using the program during our early days with Windows testing. It is now part of Linux-Bench. Again, we can see a similar trend to what we have seen before. These chips did not have features such as L3 caches that are prominent in larger designs today. Ryzen was an enormous upgrade. OpenSSL is widely used to secure communications between servers. This is an important protocol in many server stacks. We first look at our sign tests: Here are the verify results: Idle power consumption on 120V power we saw about 9.8W idle for the dual-core and just over 10W idle for the quad-core units. We generally assume these nodes will use 9-12W idle so this is in a reasonable range. The power supplies are 65W HP power adapters from the company’s notebook line. We never hit levels close to 65W in our testing, but we suggest this may be an absolute maximum bound for the systems. Most loaded use in our testing was around 35W +/- 5W to give some sense of performance you will likely see. Since these are so low power, they are basically inaudible from 1m at idle, but one can hear the fans spin under load. That is impacted by dust accumulated in the system. These are meant to be desktops for office environments so they are generally very quiet. Key Lesson Learned for TMM In this series, we wanted to also focus on some key lessons learned. Since we have already tested well over a dozen different models, we are taking away key pieces of advice from each that we wanted to share. First and foremost, we learned that the older AMD A6/A10 CPUs are simply not the performance class that we wanted. In the ~$200 range with lots of RAM, a SSD, and a Windows 10 Pro license, these are not necessarily bad buys for everyone, but they are not aligned with what we wanted from the systems. Every use case will be different, but we made the mistake of purchasing many units before realizing these are not what we wanted. Another interesting fact is that we purchased three A6 and two A10 units at the same time from the same sellers. The units all arrived with different SSDs. On the SSD side, as we will see these are actually some of the nicer units. Some of the newer units come with the DRAM-less TLC and QLC SSDs that are meant to check the box for providing a “256GB SSD” or “256GB NVMe SSD” while costing only a small amount. Another key takeaway is that even though these are the lowest-performing units, they also have a great place. Sometimes, one does not need massive amounts of performance, as an example providing service as a quorum node in a cluster. In that use case, these work great and are both low power and not expensive. Overall, these systems worked well. For some the fact they are often sold for less than the price of the RAM, SSD, and Windows 10 Pro license makes them extremely attractive. They do not use much power. This is one case where the primary value proposition is being a cheap node. We learned a lot from purchasing these nodes. They spawned the entire project TinyMiniMicro. Perhaps their best use has been to inexpensively validate the notion that a small cluster like this can be useful. In the first hardware deep-dive, we are doing as part of Project TinyMiniMicro. One bit that was challenging was getting a decent breadth of coverage (e.g. covering a multitude of options and use cases) along with a depth (getting into the hardware we purchased.) That breadth and depth had to be balanced by the fact that not many out there would want to read 10,000-word reviews of each of these units. Since we have a series of these coming, please give feedback understanding that we are writing for a broad audience so that necessarily means we will not meet everyone’s breadth and depth needs. We are going to be doing both these hardware overview pieces as deep-dives into the units, but then have some more up-leveled pieces across the population to start doing other how-tos and analysis. As a quick note, since this is what we are calling a “circular economy” or CE review we are not giving this a formal rating as we would in a standard STH review. That is not fair for older generation used devices, however, we will likely add ratings for new devices.
OPCFW_CODE
In order to successfully complete Freshman English, all students must pass each standard. 1. Use evidence to support sustained literary analysis. - I can read short stories, novels, and literary non-fiction and determine themes and important ideas. - I can find evidence in literary texts that support my interpretation and use it in writing. - I can write several linked paragraphs that explain my ideas in depth and stay on topic. - I can use English class vocabulary in my writing to communicate my ideas. - I can write in a formal, academic tone. 2. Use research to write an expository essay describing a controversial issue from different points of view. - I can evaluate a source to determine its credibility. - I can conduct basic research using the internet and library resources. - I can avoid bias when I write about a controversial topic. - I can write a thesis that uses parallel structure. - I can organize a paper effectively using transitions. 3. Write an argumentative essay that includes counterclaims and rebuttals. - I can evaluate an argument to determine it’s claims and effectiveness. - I can describe flaws in reasoning and avoid them in my writing. - I can support my claims with evidence and valid reasoning. - I can use counterclaim and rebuttal to strengthen my argument. - I can use research to support my claims effectively. 4. Select the plot, sensory detail(s), and dialog to tell an effective story. - I can hook the reader by introducing a problem, solution, or observation. - I can write a narrative using various techniques: dialogue, timing, description, and reflection. - I can use precise words, details, and sensory language to create a mental picture in my narrative. - I can conclude my story by reflecting on what is experienced, observed, or resolved. 5. Deliver a speech that effectively communicates ideas. - I can give a presentation that shares information and includes findings and supporting evidence from my research. - I can present information in a clear, concise and logical manner. - I can present information that is organized and developed in a style that fits the purpose, audience, and task. - I can give a presentation where I purposely use digital media to support the understanding of my research. 6. Use tier II academic vocabulary in writing and speaking. - I can understand and use language that I hear and see in classrooms, around the school, and in the professional world. - I can use several different strategies to determine word meanings, especially when a word has multiple meanings depending on context. - I can use English class vocabulary to support my analysis of texts. 7. Identify run-ons, fragments, and errors in capitalization and spelling and avoid them in writing. - I can write in complete sentences consistently. - I can identify major errors in punctuation and capitalization. - I can correctly use commas with subordinate clauses, conjunctions, and lists. - I can correctly use colons, semicolons, and serial lists - I can spell and use commonly confused words correctly. 8. Read widely and independently from fiction and informational texts and respond meaningfully. - I can select texts that are appropriate to my interests and reading levels. - I can sustain focus on reading for 20 minutes or longer 4-7 times per week. - I can respond to texts in meaningful ways without the need for specific prompts and guidance.
OPCFW_CODE
Bulgogi Marinade for Boneless, Skinless Chicken Thighs recipes - On this busy world of ours many individuals find themselves working extra hours than they wish to. bulgogi marinade for boneless, skinless chicken thighs Add the daily commute and the odd after work drink onto the daily schedule and you can see why prepared meals have grow to be very talked-about.After a very busy day on the workplace it's so much easier to put a prepared made meal into the microwave or oven than prepare it is to prepare a meal using fresh ingredients. All that chopping, peeling and what have you, it simply would not appear price it - all you want to do is kick back, watch TV and unwind. You can cook Bulgogi Marinade for Boneless, Skinless Chicken Thighs using 9 ingredients and 4 steps. Here is how you cook that. Ingredients of Bulgogi Marinade for Boneless, Skinless Chicken Thighs - It's 4 pounds boneless, skinless chicken thighs. - Prepare 1/2 a small apple, finely grated using a microplane or fine cheese grater (or you can use 1/4 cup apple sauce). - Prepare 1.5-2 Tablespoons minced garlic (about 3 or 4 large cloves). - You need 1/3 cup white sugar. - You need 1/4 cup mirin. - Prepare 1/3 cup + 2 Tablespoons soy sauce. - It's 2 Tablespoons toasted sesame oil. - Prepare 2 green onions chopped (green and white parts). - Prepare optional: 1/2 teaspoon ground ginger or 1.5 teaspoons minced fresh ginger root. Bulgogi Marinade for Boneless, Skinless Chicken Thighs instructions - I usually just throw everything into a large mixing bowl and get in there with my hands, gently tossing and massaging until all the seasonings are evenly distributed. If you prefer, you can mix all the marinade ingredients in a separate bowl, stir or whisk until all the sugar is dissolved, and then pour it over the chicken thighs and mix.. - I like to marinate it for at least 4 hours and up to 24 hours. Right around 6 to 8 hours is the sweet spot for me, where all the chicken takes on great flavor, but the texture of the meat hasn't taken on cured qualities and is tender, juicy, and still chicken-y. :). - Because it is dark and somewhat fatty meat, you'll want to grill over a medium low heat for 5 to 7 minutes per side, depending on the size of the thigh piece. Watch for flareups as the caramelized marinade mixed with the melting chicken fat hits the coals.. Bulgogi Marinade for Boneless, Skinless Chicken Thighs - Read Also Recipes
OPCFW_CODE
I have only found one site that offers completely free option charts for stocks, Exchange Traded Products, and indexes: I provide more information about it below. OptionZoom provides free option charts, but apparently only for large cap stocks. BigCharts uses their own custom option symbols. To get the BigCharts option symbol to use, enter in the underlying symbol e. Then click on the option chain link above the quote information to show the available options. BigCharts avoids the huge issue of charts not being available after the options expire. But it looks like it still suffers from the problem that intra-day information becomes unavailable soon after the options expire. Since many options are lightly traded their charts are deserts of information. Hi, I am new to options trading and I am looking for historical charts of daily option contract volume for specific stocks. Can you help me find something? I usually have to give up a penny or nickel to the market maker. So for example if I wanted to buy a option and the the quote was 0. If you are ever confused as to whether you should be using the ask or the bid price, just ask yourself which is worse for you, and that will be the price the market is offering for that transaction. I would be nice to see a chart or graph for all of the various expiration dates and strike prices. What specific kinds of options are you looking at e. Could you give me a quick example of the behavior you are seeing. I might be able to offer some suggestions if I have more info. Sorry, not any help. I recognize the presentation and controls. Alas, neither will overlay the stock price and neither will provide history just currently trading options. Hi Alexis, Do you mean OptionsXpress? Original post now is about 1. For example this one allows viewing free options historical charts: You can download and use software for free. Big Charts uses MarketWatch data which is crap because they do NOT show the data for standard option expiration dates…. Vance — do you have a recommendation for a platform or a service that can chart forward option returns for a multi position portfolio? I basically want to be able to load in multiple spread and straddle and short positions all for the same underlying to calculate the potential return gain or loss. You might try calling TD Ameritrade. They seem to be pretty good with interfacing their software with independent software. Free option charts Updated: BigCharts BigCharts uses their own custom option symbols. This site does not exist. Site does not exist loser…. Any advice would be appreciated. All content on this site is provided for informational and entertainment purposes only, and is not intended for trading purposes or advice. This site is not liable for any informational errors, incompleteness, or delays, or for any actions taken in reliance on information contained herein. It is not intended as advice to buy or sell any securities. I am not a registered investment adviser. Please do your own homework and accept full responsibility for any investment decisions you make.More...
OPCFW_CODE
Page Layouts: Adds page type based naming and filtering Summary Changes Page Layouts filter pills to being based on page type instead of template name. All templates have been categorized by the types specified in the figma. This also fixes page names when filtering and changes titles/hover text to be the page name instead of just "Page layouts". Relevant Technical Choices Karma tests are still pending for page layouts so I was unable to update them for this change in functionality, but will add karma tests for these in that work. Translation of the combined page names (template name + page type) was deferred to be a combination of template name + translated page type To-do [ ] Verify what category the "Prep the Squash" cooking page should be in. It's in steps for now, but is also in editorial in the figma. User-facing changes Testing Instructions Filter page layouts by each of the pills and verify that all items in each filter match the ones specified in figma Fixes #5872 How can we ensure pageLayoutType is not accidentally dropped again when updating the templates? I added a simple test for that to assets/src/dashboard/templates/test/raw.js, but maybe there are better ways? How can we ensure pageLayoutType is not accidentally dropped again when updating the templates? I added a simple test for that to assets/src/dashboard/templates/test/raw.js, but maybe there are better ways? Verify what category the "Prep the Squash" cooking page should be in. It's in steps for now, but is also in editorial in the figma. I'm only seeing it as "Steps" in Figma. Verify what category the "Prep the Squash" cooking page should be in. It's in steps for now, but is also in editorial in the figma. I'm only seeing it as "Steps" in Figma. @BrittanyIRL looks like someone removed the duplicate from figma so this categorization seems correct now @BrittanyIRL looks like someone removed the duplicate from figma so this categorization seems correct now @swissspidy Regarding your tests, that's a fair concern and this test does ensure that pages are marked up. One of the things talked about way back was that some pages may not want to be used as templates and the current code pulling in templates will omit any pages that don't have pageLayoutType which is a feature. If that's not a requirement then your checks seem good. @swissspidy Regarding your tests, that's a fair concern and this test does ensure that pages are marked up. One of the things talked about way back was that some pages may not want to be used as templates and the current code pulling in templates will omit any pages that don't have pageLayoutType which is a feature. If that's not a requirement then your checks seem good. @swissspidy Regarding your tests, that's a fair concern and this test does ensure that pages are marked up. One of the things talked about way back was that some pages may not want to be used as templates and the current code pulling in templates will omit any pages that don't have pageLayoutType which is a feature. If that's not a requirement then your checks seem good. I think at that point we can perhaps use null or something to identify pages that should be skipped. And for new templates / page layouts I suppose we just have to add that attribute manually every time for now. @swissspidy Regarding your tests, that's a fair concern and this test does ensure that pages are marked up. One of the things talked about way back was that some pages may not want to be used as templates and the current code pulling in templates will omit any pages that don't have pageLayoutType which is a feature. If that's not a requirement then your checks seem good. I think at that point we can perhaps use null or something to identify pages that should be skipped. And for new templates / page layouts I suppose we just have to add that attribute manually every time for now. I think at that point we can perhaps use null or something to identify pages that should be skipped. And for new templates / page layouts I suppose we just have to add that attribute manually every time for now. Sounds good! I think at that point we can perhaps use null or something to identify pages that should be skipped. And for new templates / page layouts I suppose we just have to add that attribute manually every time for now. Sounds good! I inadvertently merged this prematurely but it has been sent to QA. I inadvertently merged this prematurely but it has been sent to QA. @zachhale There's also an issue with the translation call. What I suggested: sprintf( /* translators: 1: template name. 2: page layout name. */ _x('%1$s %2$s', 'page layout title', 'web-stories'), template.title, pageLayoutName ) _x() means "translate with context", where page layout title is the context given to translators. And web-stories is the text domain. What was added: __('%1$s %2$s', 'web-stories', 'web-stories'), That's the string to translation, the text domain, and third param that's actually unused and ignored. @zachhale can you fix that as well? thanks :) @zachhale There's also an issue with the translation call. What I suggested: sprintf( /* translators: 1: template name. 2: page layout name. */ _x('%1$s %2$s', 'page layout title', 'web-stories'), template.title, pageLayoutName ) _x() means "translate with context", where page layout title is the context given to translators. And web-stories is the text domain. What was added: __('%1$s %2$s', 'web-stories', 'web-stories'), That's the string to translation, the text domain, and third param that's actually unused and ignored. @zachhale can you fix that as well? thanks :) @zachhale There's also an issue with the translation call. What I suggested: sprintf( /* translators: 1: template name. 2: page layout name. */ _x('%1$s %2$s', 'page layout title', 'web-stories'), template.title, pageLayoutName ) _x() means "translate with context", where page layout title is the context given to translators. And web-stories is the text domain. What was added: __('%1$s %2$s', 'web-stories', 'web-stories'), That's the string to translation, the text domain, and third param that's actually unused and ignored. can you fix that as well? thanks :) PR for this here: https://github.com/google/web-stories-wp/pull/5916 Thank you for the eyes on the translations, Pascal! @zachhale There's also an issue with the translation call. What I suggested: sprintf( /* translators: 1: template name. 2: page layout name. */ _x('%1$s %2$s', 'page layout title', 'web-stories'), template.title, pageLayoutName ) _x() means "translate with context", where page layout title is the context given to translators. And web-stories is the text domain. What was added: __('%1$s %2$s', 'web-stories', 'web-stories'), That's the string to translation, the text domain, and third param that's actually unused and ignored. can you fix that as well? thanks :) PR for this here: https://github.com/google/web-stories-wp/pull/5916 Thank you for the eyes on the translations, Pascal!
GITHUB_ARCHIVE
This blog post will detail how APS gives users the ability to: - Leverage Power Query, Power Pivot, and Power Map at massive scale - Iteratively query APS, adding BI on the fly - Combine data seamlessly from PDW, HDI, and Azure using PolyBase The Microsoft Analytics Platform System (APS) is a powerful scale out data warehouse solution for aggregating data across a variety of platforms. In Architecture of the Microsoft Analytics Platform System and PolyBase in APS – Yet another SQL over Hadoop solution?, the base architecture of the platform was defined. Here we’ll build on this knowledge to see how APS becomes a key element of your BI story at massive scale. Let’s first start with a business case. Penelope is a data analyst at a US based restaurant chain with hundreds of locations across the world. She is looking to use the power of the Microsoft BI stack to get insight into the business – both in real time and aggregate form for the last quarter. With the integration of APS with Microsoft BI stack, she is able to extend her analysis beyond simple querying. Penelope is able to utilize the MOLAP data model in SQL Server Analysis Services (SSAS) as a front end to the massive querying capabilities of APS. Using the combined tools, she is able to: - Quickly access data in stored aggregations that are compressed and optimized for analysis - Easily update these aggregations based on structured and unstructured data sets - Transparently access data through Excel’s front-end Using Excel, Penelope has quick access to all of the aggregations she has stored in SSAS with analysis tools like Power Query, Power Pivot, and Power Map. Using Power Map, Penelope is able to plot the growth of restaurants across America, and sees that lagging sales in two regions, the West Coast and Mid-Atlantic, are affecting the company as a whole. After Penelope discovers that sales are disproportionately low on the West Coast and in the Mid-Atlantic regions, she can use the speed of APS’ Massively Parallel Processor (MPP) architecture to iteratively query the database, create additional MOLAP cubes on the fly, and focus on issues driving down sales with speed and precision using Microsoft’s BI stack. By isolating the regions in question, Penelope sees that sales are predominantly being affected by two states – California and Connecticut. Drilling down further, she uses Power Chart and Power Pivot to breakdown sales by menu item in the two states, and sees that the items with low sales in those regions are completely different. While querying relational data stored in APS can get to the root of an issue, by leveraging PolyBase it becomes simple to also take advantage of the world of unstructured data, bringing additional insight from sources such as sensors or social media sites. In this way Penelope is able to incorporate the text of tweets relating to menu items into her analysis. She can use PolyBase’s predicate pushdown ability to filter tweets by geographic region and mentions of the low selling items in those regions, honing her analysis. In this way, she is able to discover that there are two separate issues at play. In California she sees customers complaining about the lack of gluten free options at restaurants, and in Connecticut she sees that many diners find the food to be too spicy. So how did Penelope use the power of APS to pull in structured data such as Point of Sale (POS), inventory and ordering history, website traffic, and social sentiment into a cohesive, actionable model? By using a stack that combines the might of APS, with the low time to insight of Excel – let’s breakdown the major components: - Microsoft Analytics Platform System (APS) - Microsoft HDInsight - Microsoft SQL Server Analysis Services (SSAS) - Microsoft Excel with Power Query, Power Pivot and Power Map Loading Data in APS and Hadoop Any analytics team is able to quickly load data into APS from many relational data sources using SSIS. By synchronizing the data flow between their production inventory and POS systems, APS is able to accurately capture and store trillions of transactional rows from within the company. By leveraging the massive scale of APS (up to 6 PB of storage), Penelope doesn’t have to create the data aggregates up front. Instead she can define them later. Concurrently, her team uses an HDInsight Hadoop cluster running in Microsoft Azure to aggregate all of the individual tweets and posts about the company alongside its menus, locations, public accounts, customer comments, and sentiment. By storing this data in HDInsight, the company is able to utilize the elastic scale of the Azure cloud, and continually update records with real-time sentiment from many social media sites. With PolyBase, Penelope is able to join transactional data with the external tables containing social sentiment data using standard TSQL constructs. Creating the External Tables Using the power of PolyBase, the development team can create external tables in APS connected to the HDInsight instance running in Azure. In two such tables, Tweets and WordCloud, Twitter data is easily collected and aggregated in HDFS. Here, the Tweets table is raw data with an additional sentiment value and the WordCloud table is an aggregate of all words used in posts about to the company. Connecting APS and SSAS to Excel Within Excel, Penelope has the ability to choose how she would like to access the data. At first she uses the aggregations that are available to her via SSAS – typical sales aggregates like menu items purchases, inventory, etc. – through PowerQuery. But how does Penelope access the social sentiment data directly from APS? Simple, by using the same data connection tab, Penelope can directly connect to APS and pull in the sentiment data using PolyBase. Once the process is complete, tables pulled into Excel, as well as their relationships, are shown as data connections. Once the data connection is created, Penelope is able to create a report using PowerPivot with structured data from the Orders table and the unstructured social sentiment data from HDInsight in Azure. With both data sets combined in Excel, Penelope is able to then create a Power Map of the sales data layered with the social sentiment. By diving into the details, she can clearly see issues with sentiment from customers in Connecticut and California. To learn more about APS, please visit http://www.microsoft.com/aps. Drew DiPalma – Program Manager – Microsoft APS Drew is a Program Manager working on Microsoft Analytics Platform System. His work on the team has covered many areas, including MPP architecture, analytics, and telemetry. Prior to starting with Microsoft, he studied Computer Science and Mathematics at Pomona College in Claremont, CA.
OPCFW_CODE
Difference between VM CPU usage and GKE container CPU usage I have a cluster of 2 nodes, each node is a VM of 2 CPU on GCE Here is the chart for VM CPU usage metric VM CPU Here is the chart for CPU usage from GKE containers GKE CPU So why is there much difference between 2 metric? Also why total CPU usage of GKE can be higher than 4 seconds (because I have 4 cores ) Cluster nodes PS1 : I found that there is a "bug" or something is not perfect with the chart in stackdriver monitoring. When I change the chart to be 1w then I get something like this 1w chart And if I use 1d chart then it looks like 1d chart So now I only have one question left, why total CPU usage from GKE containers are higher than number of cores? GCE will measure the overall CPU usage of a VM which includes all processes being run (containers, daemons, OS overhead, etc) whereas the GKE container metric only looks at specific container metrics. The container is a single process. Also, the metric value you are looking at is not utilization; utilization is measured as a percentage, not in seconds as per the stackdriver metrics reference page. The graph you are looking at shows seconds on the right-hand side, but the important value is the one on the left of the graph which should be a percentage. The utilization is a percentage of CPU used Vs the total CPU available. At the GCE level this means the CPU used by all processes of the OS Vs the total CPU allotted (2 CPUs). For the container, this is the CPU used by the container process Vs the CPU allocated by k8s. The sum of the containers will not result in the same value as that of the VM, and it is possible for the container CPU utilization to go over 100% I know the difference between CPU utilization and CPU usage. CPU utilization is just the label I left on the title. In my question, I mentioned they are CPU usage. I agree about CPU usage from VM should be higher than total CPU usage from GKE containers. But I cant understand why CPU usage from GKE is higher than 4 seconds and much higher than VM CPU usage. The metrics from the VM will not necessarily be higher than that of the containers. The VM has higher total CPU compared to that of the individual containers so it is normal that the CPU usage and utilization is higher than that of the VM Sorry but I still don't understand your comment clearly. From what I know there is no term of CPU utilization (percentage) for container, there is only CPU usage (calculated in milicore or seconds) which I'm using at moment for both VM and Containers measurement. My unclear understanding is that why total CPU usage from all GKE containers are higher than 4 seconds and much more than total VM CPU usage while I have only 4 cores in whole cluster. Because usage is spread across containers which can be spread across multiple nodes. Look at the breakdown in your screenshot, no individual container is very high. The sum of all containers across multiple nodes (all CPUs) will be higher than that of a single vm Basically, this is just funny math from stackdriver. The metrics are always taken over time and the graphs will come.out differently depending on how the metrics over that time period are aggregated. Comparing container performance vs GCE performance simply won't correlate properly because of how the metrics are collected, aggregated and presented Also notice the disclaimer on the container usage metric, that the overall metric is not limited by the cores I have edited my question a bit because I have just found that the bar drawing is playing role in this issue, in fact, the shorter timeline chart show the average more exactly. I have only one question left why total CPU usage from all containers are higher than number cores I have. I found that the GKE container CPU usage were not quite correct, we should filter out the container name podsgke...... and container without a name then the chart seems to match with VM CPU usage. I guess these are not a part of workload.
STACK_EXCHANGE
Legacy operating systems - October 10, 2011 Fixes in October’s Patch Tuesday release are fewer than recent releases, although six of the eight bulletins involve remote code execution flaws and should be implemented quickly. - October 07, 2011 Microsoft's upcoming Cluster Aware Update Wizard aims to ease the pain of patch management. But IT shops have questions about how it might fit with their internal patching procedures. - September 21, 2011 Windows Server 8 is the most ambitious version of the product Microsoft has developed to date. Mike Neil, GM of the project, talks about the importance of IT’s input. - September 14, 2011 Can Windows Server 8 bring Microsoft into the cloud era? With development of the new OS, the company aims to change the way IT shops deliver apps. - June 14, 2011 Windows administrators keep their jobs this month thanks to a hefty 34 security fixes to some mainstay products, including Windows Server 2008 and R2. - October 26, 2010 The first service packs for Windows 7 and Server 2008 R2 are a big deal for Microsoft, with new features that could help the company compete in the virtual desktop space. - September 29, 2010 Enterprises don’t let Microsoft’s Support Lifecycle policies dictate their upgrade cycles. But unless they buy hotfix support for Windows Server 2003, it could lead to trouble. - March 26, 2008 IT managers can monitor Windows Server 2008 from a desktop that runs Vista with Service Pack 1. - February 07, 2008 Microsoft's System Center Capacity Planner 2007, a tool for infrastructure planning, was released to manufacturing this week. - January 29, 2008 After years of delays and numerous changes, Windows Server 2008 code will finally be completed next week. - January 09, 2008 Microsoft MVP Brian Desmond talks about new Active Directory changes and features in Windows Server 2008 that will ease management in such areas as passwords and installation. - January 02, 2008 IT managers in Windows shops say the high-impact technologies next year will be Windows Server 2008 and virtualization. - December 05, 2007 Microsoft gave beta testers an early holiday gift this year with the release of Windows Server 2008 RC1. - November 26, 2007 SQL Server expert Brad McGehee dives into SQL Server 2008 and dishes about what IT managers and DBAs need to know. - November 16, 2007 Microsoft released a November test build of Windows Server 2008 to a group of beta testers.
OPCFW_CODE
External Connections: This plugin sends data to bstats. You can disable this in the general.conf. The geoip module also connects to an external database, you can disable this by just not using the module. Donations: I spend a lot of free time in programming. Donations are always welcome at paypal. Keep in mind all downloads are alpha builds and not ready for production servers. For now I will always be targeting the newest sponge version, currently API 6. API 5 should also work pretty well because it is currently still very close to API 6, altough this can change in the future. Downloads are available at Github: https://github.com/Bammerbom/UltimateCore/releases Added item commands: Change the name of the item in your hand. Change the lore of the item in your hand. Change the quantity of the item in your hand. Change the durability of the item in your hand. Change whether the item in your hand is unbreakable. /itemcanbreak [Block] [Block]… Change the blocks this item can break. /itemcanplaceon [Block] [Block]… Change the blocks this item can be placed on. /itemhidetags attributes/candestroy/canplace/enchantments/miscellaneous/unbreakable false/true Change whether a certain tag is hidden. Add an enchantment to the item in your hand. Added some extra bstats charts Added blacklist module (ban certain items) Added geoip module (%country% variable and /country command) Added votifier module (Vote handling) Added %uuid% variable Changes to permissions, shouldnt break any permissions [+] Added the /uuid command [+] Added delay & warmup options to commands [+] Added the /signedit command [+] Added /mail [+] Added /randomtp [+] Added /biometp [+] Added /top [+] Added /ban and /unban [+] Added /ip to get a player’s ip [+] Added ban motds (Configurable in the ban.conf file) [+] Added the /break command to break the block you are looking at [+] Added /list [+] The /world info & /world list commands are now seperate commands [*] Fixed building UltimateCore again… [*] Fixed the geoip module [*] Fixed the /afk arguments [*] Fixed the /signedit command [*] Fixed afk not going away on relogin [*] Fixed the tablist module giving errors with older configs [*] Fixed deaf exempt not working [*] Fixed deaf and mute arguments [*] Update bstats class [*] Improved command api [*] Implemented command events [-] Removed the ability to do ‘/ ?’ because it messed with the usage and tab completion. Might be re-added in the future. world name is correct, At first, I put my world into World folder, however, the plugin could not recognize it. Then I put it into Server root folder. Your plugin recongnize my world, However, it just create a new world but not import my world. I’m using the latest 1.11.2 sponge api.
OPCFW_CODE
A Coder is a specialist or generalist who writes codes for software. Their work typically includes designing and testing codes for computer or mobile software and applications, maintaining and debugging running software, knowing at least one programming language, analyzing clients’ needs, using businesses’ data to find solutions for their problems, and working with a team on projects especially in a bigger company. For many, writing a resume can be daunting. If you are working on your own, it is easy to become overwhelmed with the amount of information you need to include and how best to present yourself. The sample resumes below provide great examples of how to write an effective resume. The Best Coder Resume Samples These are some examples of accomplishments we have handpicked from real Coder resumes for your reference. Coder Resume Sample 1 - Mastered all coding languages necessary for the development of a C# WPF application, Object Oriented Analysis and Design, leading to successful project completion. - Trained under a Microsoft MVP as a VB.Net expert, gaining skills necessary for technical support of other coders and their projects at end client site. - Spearheaded development of self-diagnostics suite to monitor systems, providing real time alerts for potential errors; recognized by fellow engineers for exceptional accuracy and speed. Coder Resume Sample 2 - Skilled with many database management systems, scripting languages, and third party programs. - Resolved client problems for software bugs, configuration issues, data corruption, backup/recovery plan implementation. - Authored scripts to automate tedious tasks in automated testing suites used by thousands of developers across the globe. - Recognized as top user on programming site via upvotes from peers; earned highest rating possible for working with difficult clients to implement solutions. Awarded additional recognition after being sought out by administrators for insight into implementing new features onto website platform. - Performed code reviews on team members’ work to increase productivity through better practices and ensure compliance with programming standards via group code walk-throughs. Coder Resume Sample 3 - Authored a set of new APIs for the company’s flagship product which is used by thousands of its customers. - Provided technical consultation to management on various problems including performance and architectural issues in order to enhance system infrastructure. - Spearheaded an initiative to design internal software that would allow the team to find out when certain bugs were created, when they were fixed, who fixed them and who reported them originally. - Created a website where users can obtain information about their accounts in real time. This project was completed in 3 months with a budget of $750 dollars and it saved my employer over $10,000 in the first month after implementation. - Designed computer systems using different operating systems (e.g., Windows, Linux), programming languages (PHP, C++, Python) and software frameworks (Zend Framework, CodeIgniter, Bootstrap). Coder Resume Sample 4 - Developed a prototype software program that enables digital signatures for W-2s and I-9 forms. - Pioneered a project to modernize existing legacy systems through the integration of two proprietary custom applications. - Revised an online course management system used by University faculty, staff, and students improving overall operating efficiency and IT support costs. - Authored a set of instruction guidelines for all new co-op trainees to ensure successful productivity during their work term with full compliance from internal policies. - Trained 75 users in effective online search strategies to find company product information resulting in higher customer satisfaction rate as well as increased revenue. Coder Resume Sample 5 - Integrated my skills as a Java Script, C++ Programmer to automate module installation process for new accounts. - Developed software that allowed users to log into their accounts via web browser by typing in their email address rather than using password authentication. - Designed internal frameworks to improve development efficiency for all members of the programming team. - Completed entire administration system with full suite of administrative tools designed to help customer care agents better serve customers. - Implemented search engine optimization strategies across all company sites gaining top ranking on over 18,000 keywords. Coder Resume Sample 6 - Introduced new application coding languages and support structures for voice recognition compatibility. - Improved application design by adding security features to protect data; reduced network downtime due to crashes by 25%. - Provided technical support for various customers including IBM, Motorola, Texas Instruments, Xerox, AT&T; identified problem areas and developed solutions that increased system performance reliability. - Pioneered development of web-based applications that allowed customers to access inventory information without requiring changes in their programs. - Assisted in the transition from UNIX-based servers to Linux operating systems for primary ecommerce platforms. Coder Resume Sample 7 - Transferred non-linear footage from tape to digital files and edited material for use in marketing. Implemented video control of volume, color correction, and positioning of images. - Troubleshot and resolved customer issues by e-mail; enhanced communication with clients by updating SQL databases on request; received designation as Customer Service Star. - Improved communication between development teams by converting software documentation from Microsoft Office Word into Adobe PDF format; received recognition for superior performance. - Enriched online gaming experience for players after designing a system to transmit data while minimizing lag time; awarded title of Game Programming Winnar. - Established call center technology consulting business and provided assistance to companies on an as-needed basis. Coder Resume Sample 8 - Increased effectiveness of a team’s efforts by 25% while maintaining on-schedule delivery within budget constraints. - Conducted research on open source applications to determine viability of including in system design specifications. - Gained experience coding for new technologies including cloud, virtualization, and Linux/Unix implementations. - Launched website with an up-time rate which exceeded 99%. - Achieved top health and safety rating and best overall performance during site audits and reviews completed over three years’ employment. Coder Resume Sample 9 - Assisted in the development of application for taking photos and placing them into categories. - Developed, tested, and maintained modules that allowed access to sensitive financial data. - Introduced new workflows by creating programs that expedited redundant tasks across systems. - Improved performance of software through code-level optimizations implemented with a team lead. Coder Resume Sample 10 - Assisted in the creation of a new software product to manage an organized and efficient business workflow. - Optimized applications by reducing memory usage by more than 50% and doubling application performance for all supported browsers. - Contributed technical expertise as part of development team during lengthy, high-stress crisis situation. - Replaced legacy calculation engine with one that provided more accurate results while eliminating use of third party libraries or APIs; recognized by management for adding value to business within first month of employment. - Conducted comprehensive analysis on current system and identified critical architecture flaws making it difficult to maintain and extend; recommended solutions leading to better overall performance and stability of software suite. Coder Resume Sample 11 - Implemented WebGL rendering in a cross-browser manner in C++. - Programmed in Python to deploy and manage virtual machines, servers and network devices via RESTful APIs. - Administered Linux servers; maintained uptime, disk space usage, and resource allocation. - Saved time and money by creating automated unit tests for business objects using Mockito. - Converted Java to C++ using JNI for improved speed of small project; completed in one month with accurate results on first attempt. To work as a Coder, you may need an Associate’s or Bachelor’s degree in Computer Science, Information Systems, or a related field or several years’ experience. You must be reliable, logical, patient, empathetic, strong in memory, attentive to detail, a team player, an abstract thinker, and a good communicator. Resumes are a crucial aspect of any job search. In order to make a good first impression, it is important that your resume be formatted and written professionally. To create the perfect resume, think about what skills and qualities you want your future employer to see. Hope these samples gave you an idea of what your resume should look like and some tips on how to make sure that your resume stands out from the rest.
OPCFW_CODE
//------------------------------------------------------------------------------ //! @file ConstantValue.h //! @brief Compile-time constant representation // // File is under the MIT license; see LICENSE for details //------------------------------------------------------------------------------ #pragma once #include <string> #include <variant> #include <vector> #include "slang/numeric/SVInt.h" namespace slang { /// Represents an IEEE754 double precision floating point number. /// This is a separate type from `double` to make it less likely that /// an implicit C++ conversion will mess us up somewhere. struct real_t { double v; real_t() : v(0.0) {} real_t(double v) : v(v) {} operator double() const { return v; } }; /// Represents an IEEE754 single precision floating point number. /// This is a separate type from `double` to make it less likely that /// an implicit C++ conversion will mess us up somewhere. struct shortreal_t { float v; shortreal_t() : v(0.0) {} shortreal_t(float v) : v(v) {} operator float() const { return v; } }; /// Represents a constant (compile-time evaluated) value, of one of a few possible types. /// By default the value is indeterminate, or "bad". Expressions involving bad /// values result in bad values, as you might expect. /// class ConstantValue { public: /// This type represents the null value (class handles, etc) in expressions. struct NullPlaceholder : std::monostate {}; using Elements = std::vector<ConstantValue>; using Variant = std::variant<std::monostate, SVInt, real_t, shortreal_t, NullPlaceholder, Elements, std::string>; ConstantValue() = default; ConstantValue(nullptr_t) {} ConstantValue(const SVInt& integer) : value(integer) {} ConstantValue(SVInt&& integer) : value(std::move(integer)) {} ConstantValue(real_t real) : value(real) {} ConstantValue(shortreal_t real) : value(real) {} ConstantValue(NullPlaceholder nul) : value(nul) {} ConstantValue(const Elements& elements) : value(elements) {} ConstantValue(Elements&& elements) : value(std::move(elements)) {} ConstantValue(const std::string& str) : value(str) {} ConstantValue(std::string&& str) : value(std::move(str)) {} bool bad() const { return std::holds_alternative<std::monostate>(value); } explicit operator bool() const { return !bad(); } bool isInteger() const { return std::holds_alternative<SVInt>(value); } bool isReal() const { return std::holds_alternative<real_t>(value); } bool isShortReal() const { return std::holds_alternative<shortreal_t>(value); } bool isNullHandle() const { return std::holds_alternative<NullPlaceholder>(value); } bool isUnpacked() const { return std::holds_alternative<Elements>(value); } bool isString() const { return std::holds_alternative<std::string>(value); } SVInt& integer() & { return std::get<SVInt>(value); } const SVInt& integer() const& { return std::get<SVInt>(value); } SVInt integer() && { return std::get<SVInt>(std::move(value)); } SVInt integer() const&& { return std::get<SVInt>(std::move(value)); } real_t real() const { return std::get<real_t>(value); } shortreal_t shortReal() const { return std::get<shortreal_t>(value); } span<ConstantValue> elements() { return std::get<Elements>(value); } span<ConstantValue const> elements() const { return std::get<Elements>(value); } std::string& str() & { return std::get<std::string>(value); } const std::string& str() const& { return std::get<std::string>(value); } std::string str() && { return std::get<std::string>(std::move(value)); } std::string str() const&& { return std::get<std::string>(std::move(value)); } ConstantValue getSlice(int32_t upper, int32_t lower) const; const Variant& getVariant() const { return value; } std::string toString() const; bool isTrue() const; bool isFalse() const; bool equivalentTo(const ConstantValue& rhs) const; ConstantValue convertToInt(bitwidth_t width, bool isSigned, bool isFourState) const; ConstantValue convertToReal() const; ConstantValue convertToShortReal() const; ConstantValue convertToStr() const; static const ConstantValue Invalid; friend std::ostream& operator<<(std::ostream& os, const ConstantValue& cv); private: Variant value; }; /// Represents a simple constant range, fully inclusive. SystemVerilog allows negative /// indices, and for the left side to be less, equal, or greater than the right. /// /// Note that this class makes no attempt to handle overflow of the underlying integer; /// SystemVerilog places tighter bounds on possible ranges anyway so it shouldn't be an issue. /// struct ConstantRange { int32_t left = 0; int32_t right = 0; /// Gets the width of the range, regardless of the order in which /// the bounds are specified. bitwidth_t width() const { int32_t diff = left - right; return bitwidth_t(diff < 0 ? -diff : diff) + 1; } /// Gets the lower bound of the range, regardless of the order in which /// the bounds are specified. int32_t lower() const { return std::min(left, right); } /// Gets the upper bound of the range, regardless of the order in which /// the bounds are specified. int32_t upper() const { return std::max(left, right); } /// "Little endian" bit order is when the msb is >= the lsb. bool isLittleEndian() const { return left >= right; } /// Reverses the bit ordering of the range. ConstantRange reverse() const { return { right, left }; } /// Selects a subrange of this range, correctly handling both forms of /// bit endianness. This will assert that the given subrange is not wider. ConstantRange subrange(ConstantRange select) const; /// Translates the given index to be relative to the range. /// For example, if the range is [7:2] and you pass in 3, the result will be 1. /// If the range is [2:7] and you pass in 3, the result will be 4. int32_t translateIndex(int32_t index) const; /// Determines whether the given point is within the range. bool containsPoint(int32_t index) const; std::string toString() const; bool operator==(const ConstantRange& rhs) const { return left == rhs.left && right == rhs.right; } bool operator!=(const ConstantRange& rhs) const { return !(*this == rhs); } friend std::ostream& operator<<(std::ostream& os, const ConstantRange& cr); }; /// An lvalue is anything that can appear on the left hand side of an assignment /// expression. It represents some storage location in memory that can be read /// from and written to. /// class LValue { public: /// A concatenation of lvalues is also an lvalue and can be assigned to. using Concat = std::vector<LValue>; LValue() = default; LValue(nullptr_t) {} explicit LValue(Concat&& concat) : value(std::move(concat)) {} explicit LValue(ConstantValue& base) : value(&base) {} bool bad() const { return std::holds_alternative<std::monostate>(value); } explicit operator bool() const { return !bad(); } ConstantValue load() const; void store(const ConstantValue& value); LValue selectRange(ConstantRange range) const; LValue selectIndex(int32_t index) const; private: LValue(ConstantValue& base, ConstantRange range) : value(CVRange{ &base, range }) {} struct CVRange { ConstantValue* cv; ConstantRange range; }; std::variant<std::monostate, Concat, ConstantValue*, CVRange> value; }; } // namespace slang
STACK_EDU
How to find all records through a has_one through relations via an array I am trying to find all payments that belong to an array of clients. The payments have a has_one, through bills relationship with clients. The models include: class Payment < ActiveRecord::Base belongs_to :bill has_one :client, through: :bill class Client < ActiveRecord::Base has_many :bills has_many :payments, through: :bills class Bill < ActiveRecord::Base belongs_to :client has_many :payments I am trying to find with the following query @payments = Payment.joins(:bills).where('bill.client_id IN (?)', [1,2,3,4]) but get a PG timeout message tried a .includes instead of the .joins and also received a PG timeout message and also tried Payment.includes(:bill).where( bills: { 'client_id IN (?)', [1,2,3,4] } ) Thanks for any help. How big is your database? do you have an index on bill_id? ie is it just choking on how much data you have? Also - you have a client association on payment... have you tried something like Payment.joins(:client).where("clients.id" => [1,2,3,4]) In dev, not too many records, but do have an index on client_id on the bills table, I tried the joins on client, but it also timed out you'll want an index on bill_id too - because you are joining on that... and that will slow things down (a bit). Probably not your actual problem here, given your small dev db - but generally it's good practise to always have an index on join-columns (because when you hit prod, it will be slow) I tried your answer again after restarting the server and it worked, took 1.9ms, tried going through the bills and it took 1.5ms, not sure which is more efficient but do like the idea of going through the clients in your example. You've got your bill singular/plurals the wrong way around. When you make a string it has to be the table name, so it will be plural. When you use the hash format you use the same case as your association (singular when used from Payment). So: Payment.joins(:bill).where( bills: { client_id: [1,2,3,4] } ) was able to get this to work but needed bills not bill in the Payment.joins(:bill).where( bills: { client_id: [1,2,3,4] } ) Seems that joins is faster than includes, does anyone know if joins is preferred? Thanks for the info, I'll edit my answer (I could have sworn it was the name of the association, thanks). Re your other question, joins and includes do different things, joins does an INNER JOIN and includes either does a second (single) SELECT with all the IDs from the first resultset, or it does an OUTER JOIN.
STACK_EXCHANGE