Ionic React opens up the Ionic Framework to a whole new audience and marks a significant change in the framework's history. It combines the core Ionic experience with APIs tailored to React developers. With Ionic React, you can import the core Ionic components directly into your React project: Ionic React acts as a thin wrapper around those components, exporting them as native React components, so Ionic works naturally inside React and supports React paradigms that were not available in the core components. Web developers across the world use Ionic React as the official React version of Ionic, which powers mission-critical apps for companies like Amtrak, AAA, Burger King, and Home Depot. It lets app development teams build top-quality apps using their existing web development skills. While earlier versions of Ionic supported Cordova, the React version of the Ionic Framework runs on an all-new cross-platform engine called Capacitor. Capacitor provides a consistent, web-focused set of APIs that lets an app stay as close to web standards as possible while still accessing rich native device features on the platforms that support them. Ionic also provides supported native functionality of its own, including secure authentication and identity management using the advanced encryption APIs available on iOS and Android, high-performance offline encrypted data storage, and a library of supported and maintained native features. Ionic React is an open-source software development kit powering millions of applications, and it can be customized to fit any brand.

Ionic React leverages React DOM, one of the most popular rendering libraries for React. Because React DOM is so widely used, Ionic React has better compatibility with the React ecosystem and can work with virtually any React library. React Native, by contrast, uses its own renderer, with components that map directly to the platform's native UI building blocks. With the React Native framework, one can render UI for both iOS and Android, and it is considered one of the most trusted frameworks for cross-platform application development. Facebook, along with companies such as Instagram, Airbnb, Skype, Tesla, Walmart, and Discord, uses React Native in most of its mobile applications. It is less time-consuming than building separate native apps, so many companies and even individual developers prefer React Native, often alongside Node.js, for creating mobile applications. Applications built with React Native are GPU-oriented, which helps them perform well, and working with React Native can drastically reduce the resources required to build mobile applications: any developer who can write React code can target the web, iOS, and Android with the same skill set. React Native mostly uses standard iOS and Android controls under the hood, so it is a truly native approach to cross-platform development; because the underlying components are all native, the user gets a smooth experience. Ionic React, on the contrary, implements native iOS and Android UI patterns using cross-platform, standards-based web technology rather than accessing platform UI controls directly. Developing with Ionic React is almost the same as traditional React web app development.
This means Ionic React gives web developers a fast, familiar development experience directly in Chrome or their browser of choice, and a large portion of many apps can be built right in the browser. Ionic React can also readily be added to any existing web-based React app. React Native, however, does not use traditional web development tools directly. It has some custom support for integrating with a Chrome debugger, but that is not a true browser debugging experience. Progressive Web Apps are a newer standard for creating reliable, native-quality mobile applications that are distributed through a web browser instead of an app store. Progressive Web Apps have advantages for user engagement, search engine optimization, and shareability, and they are becoming popular for both consumer and enterprise apps. Ionic React has first-class Progressive Web App support, whereas React Native does not support Progressive Web Apps. Because Ionic React is based on web technology and the entire web platform, it doesn't require betting on a new platform. React Native, on the other hand, is a self-sufficient platform and ecosystem and must remain self-contained in order to succeed in the long term. React Native supports iOS and Android app-store apps only; Ionic React officially supports iOS, Android, and the web using Progressive Web App technology. With React Native, developers build separate UI screens for iOS and Android. In contrast, Ionic React apps run the same UI on all platforms, using responsive web design, CSS, and platform-detection utilities that let developers customize an app for the specific platforms they choose. In addition, Ionic React can use Adaptive Styling to map core UI concepts like navigation, tabs, toolbars, and buttons to platform expectations, while still allowing full designer customizability, whereas React Native requires developers to build screens specifically for each platform. Ionic React is also unusual in the mobile ecosystem in being backed by a business dedicated to helping teams build and improve mission-critical apps; React Native began as an internal project built to improve Facebook's own development process, with no commercial products or projects behind it. This is why Ionic has been successful with startups and teams that have a web development history, while React Native is popular with consumer app startups that have a history of native app development. Looking at both platforms, we can conclude that Ionic React and React Native fill different needs in the ecosystem and can therefore co-exist.
OPCFW_CODE
You're reading this because you just know it's out there on the web, and you can't bloody well find it! Here's a simple approach, in three easy steps. It's so easy, you're bound to say something like "That's facile. Give me a more convoluted approach"! This tutorial not unreasonably assumes you have a connection to the Internet, that you have the ability to open up a browser such as Firefox, and that you can access a search engine like Google. If you're using Internet Explorer, then the pop-ups we create to aid your task probably won't work well, if at all. Start by writing out, as a full sentence, exactly what you want to find. The benefit of this approach is that it forces you to think 'around' the topic, and this enhances the quality of your web search. In addition, you will have already planned out other search strategies, should the information you desire not actually be on the Web. So if your sentence is: I want to find out about the haemoglobin oxygen dissociation curve ... you'll soon work out that words like I, want, to, find, out, about, and the are pretty useless in most searches. There are some non-obvious things about the 'obvious' search. Most important are the following points: The last point is the most important. If your search gives you ten million hits, don't clutch your head in despair. Look down the first ten on the list, and learn from them. You can learn better words to search for, you can learn important associated words, but most important of all, you can learn what not to search for! To exclude a search term in Google, put a minus sign (-) immediately in front of the word, but often it's better to change your strategy to avoid particular search words! (To encourage Google to use precisely that word, prefix it with a plus sign (+). Contrariwise, if you want to include synonyms, you can now prefix the word with a tilde (~).) The simple approach is best for straightforward questions which many people ask. Google is particularly well-tuned for such questions. Even here, a little bit of insight goes a long way. Let's say you want to google: molecular weight lead. Try this search, and then contrast it with: molecular weight Pb. Interesting, isn't it? Then simply search for periodic table, and see what you get! Which was most useful? Okay, let's assume that you have faithfully adhered to all of the above, and you still can't find the mysterious thing you're searching for. You need to try a more devious approach. For example, let's say that you are interested in making stone tools. Typing these three words as a Google query may well result in three million hits. If you get smart, and include the search term in quotes, thus: "making stone tools" ... then you'll cut the number down to about three thousand, but remember that you've now probably excluded many interesting and useful pages which contain the three words, but not in that precise order. In addition, authors may use words like manufacturing or manufacture, mightn't they? There must be a better way! Go back to the first ten of the three million hits you found with your first search. Look through them. You'll soon spot a rather interesting word: flintknapping. Any reasonable discussion of stone tool manufacture will likely include this magic word. Google this single word, and see what you come up with. You'll probably get about twenty thousand hits, well down from three million, and you can be pretty certain that at least these hits are relevant! (See how you missed at least seventeen thousand hits with the simplistic strategy of "making stone tools"!)
Even better, the first few Google hits will take you to sites like flintknapping.com, which will give you a general introduction to the field, and point you to other sites. Within these sites, you'll recover a whole lot of new magic words that describe what the flintknapper does and what he uses --- hafting, knapping, flint, chert, knappable, flakable, obsidian and so on. Really magic words. Often there is no single unusual magic word that will produce the results on its own. With most topics, however, you will quickly find a combination of words which make a search both sensitive (finding nearly all of the relevant sites) and specific (excluding most of the irrelevant sites). Let's try a few examples. Let's say you want a list of two letter country codes. With this search term, you get perhaps 3.7 million hits, but Google will have worked its usual magic, and the first few entries will provide what you want. Just for fun, let's pursue some alternative strategies. Glance down the list, and you'll see the acronym "ISO". Try ISO country codes and your hit count is down to 640 000, but even more interesting is the new term "ISO 3166" which pops up. Next, search for: ... and you're down to 300 000 hits, and if you try: "ISO 3166" country codes, you're down to 70 000 hits. This is an improvement, but now let's think content. What about the following search? Andorra bv Kiribati qa You can be pretty sure that each of the 110 000 hits is a web page that contains 'unusual' countries and codes which must be in any comprehensive list! Our previous strategy made no such guarantee. Now combine the two: Andorra bv Kiribati qa "ISO 3166" and you're down to just 5000 fairly authoritative and content-full pages. You can refine things further, if you wish. Let's try a biomedical search. Assume you're interested in the fine details of how oxygen binds to the blood pigment haemoglobin. Googling hemoglobin oxygen dissociation curve yields just 17 000 hits. Glancing through some of these is very fruitful in suggesting additional terms (pH, phosphate, DPG, temperature), but then ... oops ... we notice that we used the American spelling of haemoglobin. Using the British spelling gives us just 6500 hits. First, let's combine the two: oxygen dissociation curve (haemoglobin OR hemoglobin) Nearly 19 000 hits. See how, although Google usually just combines search terms with an implied AND between terms, we can use trickery to OR things together. Now, try the following: relaxed tense (haemoglobin OR hemoglobin) Not only is the hit count down to about a thousand, but we can be pretty sure that each of these web pages contains pretty detailed information about oxygen binding to haemoglobin. The key that allowed us to refine our search strategy was the knowledge that haemoglobin exists in two different states --- 'relaxed' and 'tense'. When searching, there is no substitute for a detailed knowledge of the subject! Adding the search term "dissociation curve" to our most recent strategy gives us just 74 hits, and we're away! The above examples are by no means perfect, but they do illustrate a new approach, and how to join this approach with your basic search strategy. You should now have enough information to allow you, with practice, to search more effectively. Finally, let's put our new-found skills into action. Taking a piece of paper and scribbling a bit (perhaps helped with a quick initial googling!) we might come up with something along the lines of: Now let's search a bit more diligently...
We also discover the interesting magic word googledork! You may wish to play with it. "Johnny Long" also seems to be a good search term. (Unfortunately we know of no Google method for searching by file size, as this would be invaluable). Okay, using the above, let's search for the obvious: +site filetype +link cache intitle inurl Our first hit is, of course, Google's own page on advanced operators, which we might have found using less devious methods, but wottehell. We encounter useful operators such as intext: allintitle: allintext: allinurl: allinanchor: and so on. Putting intext: before a word limits the search to the actual displayed text of the document, rather than other parts. Using allintext: is the same as putting intext: before each and every word. The allinanchor: modifier limits the search to text that is contained within references to other pages. This is powerful, as often such text emphasises the content of the page. We'll leave you to combine the above search with the terms search strategy or even "search strategy". By now, you're nearing the ten-word limit that Google imposes, but you're doing so creatively. And did you know that in Google you can use a star as a wildcard in phrases like "agony * * ecstasy", where each star will match any single word! So you can look for phrases like "agony and the ecstasy" and Google won't count the stars as words! You can effectively increase your word limit. You may even be able to show to your own satisfaction that the object of your desire isn't indexed on Google or other search engines. If the truly magical words don't yield results, you can be fairly certain that either (a) it isn't there or (b) you haven't thought enough about the subject. Date of First Publication: 2005/1/9 | Date of Last Update: 2006/11/03
OPCFW_CODE
Think You're Cut Out for Doing "How Do I Stitch on TikTok"? Take This Quiz. Tiktok is a simple, yet important tool used by many professional artists. It is often used to stitch together pieces of artwork, as well as to create a finished garment. While the use of tiktok is widespread, it can be a daunting task to learn how to use it. In this article, I'll answer some of the most common questions, as well as give you the necessary information to get started. I like to use tiktok, but I feel it can be intimidating to do so. I have to be careful that I don't overuse it, so I tend to only use it for one application at a time. As you can imagine, it's a lot of work to learn, so I am still learning how to use it, but it is a skill that I feel is really important and worth learning. Tiktok is a very popular and powerful file format for the web. It is used to embed images and links into webpages. It is a very powerful and versatile technology that is highly compatible with HTML and CSS. It is also very easy to install and use, so you don't have to worry about messing up your site. The first thing you need to do is learn to use Tiktok. The easiest way to do that is to watch the developer's tutorial videos on the web. Then you can use the links in the web browser to quickly learn how to use Tiktok. Once you have that down, you can start to use it on your own site. Tiktok is a very powerful but very simple to use web technology that is very compatible with HTML and CSS. It is also very easy to install and use, so you don't have to worry about messing up your site. I would recommend installing it from the main page of your site, but if you have another site you want to use it on, it will be in the same directory. I know that it is very easy to install, so if you have a server or a web-hosting provider, it should allow you to install it. You are the first to notice that this is an issue with my site. Since I am using it on a local machine, it is a bit hard to spot. I would prefer to see it on my website, but if there is a problem with the site, please tell me about it. The best way to install TikTok on your own site is to use the site-builder tool that comes pre-installed with your site. Once you have the site-builder tool installed on your site, simply drag and drop the tiktok.html file into your page layout directory and it will add TikTok to your page and show you a preview of what it looks like. This is super easy and does not require you to do anything. Once you have it set up, you can change the code to hide links from the rest of your site. There are some features of TikTok that are really annoying. For example, if you want to add more than one video at a time, you have to use the URL that you want to add the video to. This means that you can't just paste the URL of a video you want to add into the code. This prevents users from linking to videos they may not want to link to from their own website.
OPCFW_CODE
Disclaimer: I'm not a market expert, or economist. I'm purely speculating. Bitcoin is at an interesting junction. It recently reached a new high after stabilising at around $4-$10/btc for most of last year. I was involved in the initial rise and burst of 2011. I got a few bitcoins in hand (reasonably below that bubble of $30/btc). When it dropped, it dropped to something like $2-3/btc from $30. It didn't drop to zero, which, to my mind, meant that bitcoin still had an inherent value besides the speculators who gamed the market. In other words, its value as a quick, boundary-less p2p cryptocurrency. After that crash, Bitcoin underwent lots of growing pains. Large exchanges got hacked (note: not bitcoin itself, only the exchanges), but it still survived and kept growing. There are a few reasons why I think we are seeing the new rise in prices: 1) SatoshiDice. A gambling site. You send bitcoins to a specific address. When it receives the transaction, it rolls a dice. Depending on the address, you bet below a certain number. If the rolled number is below the limit, you win returns. The higher the number, the lower the returns, and vice versa. This is big. When it launched, it was off to a slow start, but slowly gained massive market share. It is responsible for more than 50% of bitcoin traffic. Watch the transactions here. It's ridiculously easy to play. 2) Bitcoin reward per block halved at the end of 2012. Here's a handy guide on what it means. To explain in short: Bitcoin rewards miners who help verify the block-chain with bitcoins. This is how bitcoins come into the system. It started with 50 btc. Over time, this decreases to stop run-away inflation (one of the ideologies of bitcoin). More than 50% of bitcoins are already in circulation. By 2140 all of it will be mined. At the end of 2012 the reward halved to 25, decreasing supply. With basic economics 101: if demand stays the same and the supply decreases, the price increases. 3) Big names starting to support BTC (Reddit/Mega). Both Reddit and Mega subscriptions can be bought with bitcoins. It gives more credibility to the ecosystem and increases demand. 4) More real-world use. Bitcoin has been synonymous with 'illegal' activity such as the Silk Road deep web drug exchange. It was used primarily online, and for digital goods. It's starting to interface more with the 'real' world. Stuff like small ATMs to turn dollars into bitcoins, and bitcoinin (the 'Amazon' of bitcoin). In other words, you can be paid in bitcoins and shop with them. Namecheap is considering bitcoin support as well. So, basically, in the short term, demand has increased and supply has decreased. This will push the price up. Speculators will jump along with this trend as usual. I suspect in the near short term, it will drop again, but not to the previous low. As for the long-term future, I'm hedging my money on its success. The biggest hurdle I think is still that it is a very difficult concept to understand. I mean, if people like Steven Levitt (author of Freakonomics) don't understand it, then how are most people supposed to understand it? But, as with most new technology, people don't have to understand it, they only need to be convinced of its usefulness and trustworthiness. Few people understand how the internet works, but they still use it. Bitcoin still needs a killer service on top of it that allows people to use it without knowledge of stuff like using hashes as addresses, why you need to download a massive blockchain before you can start, etc. 
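To make the halving arithmetic above concrete, here is a quick back-of-the-envelope sketch. The 210,000-block halving interval and the 50 BTC starting reward are standard Bitcoin parameters that the post does not spell out, so treat this purely as an illustration of why the supply tops out around 21 million coins.

```python
# Rough check of the supply schedule: the block reward starts at 50 BTC and
# halves every 210,000 blocks (assumed standard parameters, not from the post).
reward = 50.0
blocks_per_halving = 210_000
total = 0.0
while reward >= 1e-8:                 # 1 satoshi is the smallest unit
    total += reward * blocks_per_halving
    reward /= 2

print(f"Approximate total supply: {total:,.0f} BTC")   # ~21,000,000 BTC
```

Halving the reward while demand holds steady is exactly the supply squeeze described above.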
It's going to be an interesting marketing/user-experience exercise! Another interesting thing that will happen, sooner rather than later, is that governments are going to try and clamp down on it. Unfortunately for them, stopping it completely would mean killing off the internet. Not possible. Governments can discourage its use, but it will still be used. In fact, if they do start clamping down, it will lead to a Streisand effect. It will only spur its usage on. Bitcoin is still in its infancy, though. The whole money supply is only supposed to be fully in circulation in 100+ years (although it's logarithmic). The price of bitcoins in the future will only rise (even when governments clamp down). The current price of around $30/btc won't be the highest it gets. The eventual 'lowest' point in the future will be higher than $30 (in current value). This is probably the most unsubstantiated part of the whole post, but it just seems inevitable. Either way, it's going to be an interesting ride. Given how far it's come and matured in the past 2 years, the ride feels a lot safer. I won't be surprised if it eventually hits upwards of $1000/btc in the next 5 years.
OPCFW_CODE
The Guanella Pass Proposed Road Improvements Project CD-ROM is an early example of a multimedia CD-ROM created to present a visual comparison between an existing highway and proposed alternatives. Created for Windows 95, this project was developed before many of the newer technologies discussed in this guide were fully mature, but it is nevertheless a good example of strong interactive design and effective use of digital media. In addition to many of the qualities described in the Beartooth Highway case study, this project includes: Concise, easily understandable choices The main screen of this project offers the user only four choices: a project introduction, a CD introduction, an interactive map, and Quit. By establishing a flowchart of the project's navigation before creating the project, the creators condensed the possible options to only those that are most useful. Guanella Pass Road CD-ROM - Main menu (click image for high resolution view - 68KB) In each main section of the CD-ROM, voice narration is provided to complement the visual data offered. This feature provides an additional tool to help the user understand the presented information, as well as providing enhanced accessibility for those with visual challenges. Guanella Pass Road CD-ROM - Audio Narration Example (click image to hear file - 293KB) While visual maps are often the most effective method to orient a user to their location in an interactive environment, textual maps can be an important addition if the many areas within the environment are best known by their names. The Guanella Pass CD-ROM incorporates both kinds of maps, and allows the user to toggle between them. Guanella Pass Road CD-ROM - Text map (click image for high resolution view - 57KB) Easily understandable media controllers Offering users the ability to control the various kinds of digital media in a multimedia presentation is an important part of helping them make the most of what's available. For both video and audio clips, the ability to pause and rewind is important, as is clearly notifying users that these functions are available. The media controllers in the Guanella Pass CD-ROM are presented clearly, with little possibility of confusion about their function. Guanella Pass Road CD-ROM -- Video controller (click image for high resolution view - 62KB) The animations on this CD-ROM are interesting examples of highway design visualization in that they start with an elevated camera viewpoint and slowly drop down to a driver's view. They show the layout of the proposed design as well as the view future users will have in one continuous animation. They were produced using a terrain model of the region built from USGS data mixed with road design data for future conditions. An aerial image was draped over the surfaces to add texture and realism to the terrain, then trees were placed along the roadway to enclose the driver's-level views more effectively. Click on the image below to play the animation, or right click to save it to disk (5.5MB).
OPCFW_CODE
Frequently Asked Questions Why were these documents selected? Documents were chosen for the National Archives Transcription Pilot Project based on quality of scan and type of document. Selection was limited to documents already available as digital copies in our online catalog. We did not choose the most famous documents because it is likely a transcription for these documents would already be available from another source. We are aware of existing transcriptions for a few of the documents we have selected, and we intend to compare the new transcriptions with those already available. How is the difficulty level determined? The difficulty level (beginner, intermediate, and advanced) is assigned based on the following: - quality of the original document - quality of the digitized copy - legibility of the handwriting - length of document For example, a document with no cursive writing or only a few lines of handwriting is considered a beginner level document. A 10-page document or a handwritten note with severely faded ink would be in the advanced category. Where can I learn more about the document I'm working on? Select the link to the National Archives Identifier to see the document in our online catalog. In the catalog, you will find information for each document selected, including record group, series, scope and content, and physical location of the document. Do I need to log in to transcribe? No, the system will save your work every time you click on the save button. Currently there are no login features for the National Archives Transcription Pilot. Who do I contact if I have difficulties transcribing the document? We encourage you to utilize the "comment" feature to discuss any concern or issue you may have with the document. If you would like to contact a National Archives staff member directly, please email email@example.com and we will get back to you as soon as we can. Can I suggest a document? Yes, you can suggest documents for transcription from the records held at the National Archives. We encourage you to look in our online catalog to suggest documents. Please send us your suggestions by email to firstname.lastname@example.org and provide information such as the title of the document and the National Archives Identifier (also known as the ARC ID). What do I do if I find a mistake in the transcription? You can correct any error you see in a transcript with the following steps: - Click on the Transcription tab under the document image - Edit the portion of transcript you want - Save the transcript depending on whether you think it's a completed work or not - The change is displayed and visible to everyone else Can I save an incomplete transcription? Yes, you can choose to Save as Incomplete if you think the transcription is not finished. The system will then display the transcription as a work in progress. You and other transcribers can go back to complete the transcription at any time. Can I restore a previous version of a transcript? If you accidentally deleted some transcription text and did not save it, don't worry. Simply refresh the transcription page and you will get the text from the last time it was saved. Otherwise, please comment on the page beginning with the notation [NARA-Request] and let us know why the transcription text should be reverted to the previous version. What do I do if I see spam included in the Transcription Field? If you see a Transcription page with spam text, please make a comment beginning with [Spam] to notify us there is a spam problem. 
We will restore the transcription back to the version before it got spammed. Where can I learn more about reading handwriting? There are various resources about handwriting reading skills available online. A few of the web sites that may be useful are: Since this is a pilot project, are other projects in the works? Yes, the National Archives is working on creating online opportunities for the public to get involved as citizen archivists. If you have any suggestions, please email us at email@example.com . We would love to hear your ideas for improving the transcription process as well as other citizen archivist tools.
OPCFW_CODE
Solution: Check that the AxsunOCTControl.dll and LibUsbDotNet.dll files are present in the same directory as the OCTHost.exe application. If these libraries were inadvertently moved or deleted, reinstall OCT Host. Problem: Fonts on the Hardware Control Tool or Image Capture Tool GUIs are rendering incorrectly and causing text to be clipped, for example: Hardware Control Tool with clipped text due to incorrect font rendering. Solution: Adjust your Windows 10 Display settings as shown: Set "Make text bigger" to 100% for correct rendering of fonts in the Axsun GUI Tools. Axsun GUI tools such as the Hardware Control Tool or the Image Capture Tool will present an error message to the user on application launch if a software dependency is missing or has not been installed correctly. Repeat the instructions in the Integrated Engine Getting Started Guide for Installing Software. Dependencies for the AxsunOCTCapture.dll library are installed during the Image Capture Tool installation process. Dependencies for the AxsunOCTControl_LW.dll library (and the Hardware Control Tool when configured to use AxsunOCTControl_LW.dll) are described here. Problem: When launching the Image Capture Tool, an error states: axStartSessionPCIe Call Library Function Node in axStartSessionPCIe_wrapper.vi->Advanced Image Capture Application.vi Solution: PCIe DAQ drivers have not been installed correctly. Repeat these instructions for installing the PCIe device drivers. NOTE: Dependency Walker or Dependencies can be useful tools for identifying missing dependencies on Windows OS. Run one of these tools on a binary file like AxsunOCTCapture.dll to help identify if dependencies are not present in the appropriate search path due to incomplete installation. Problem: The installAxsunPCIeDAQwd____.bat batch file will not run, or it briefly flashes a cmd prompt window and immediately closes without performing any installation as described in the Installing the PCIe Device Driver instructions. Solution #1: You may not be logged into Windows with Administrator privileges. To overcome this, right-click on the batch file's icon and select "Run as administrator" rather than simply double-clicking on the batch file icon: Solution #2: You may not have previously run the OCT Host installer, which includes the Axsun USB driver and creates an "Axsun OCT Devices" class hierarchy in the Device Manager. Problem: Windows OS complains that the drivers are not digitally signed and the drivers are subsequently disabled in Device Manager after restarting the computer. Solution: Some versions of Windows are configured to require digitally signed drivers. Change the Windows startup settings to disable Windows 10 Driver Signature Enforcement according to one of the following links, or by searching online for "disabling windows driver signature enforcement": Problem: An Axsun DAQ or Laser device is powered-on and physically connected via USB cable, but a successful device connection is not indicated in OCT Host (or in client code based on the AxsunOCTControl API): Successful OCT Engine USB connection indicated in footer of OCT Host window. Solution: Check if the connected Axsun USB device is correctly listed in Device Manager as an Axsun OCT Engine in the "Axsun OCT Devices" class hierarchy (note that a USB-connected Axsun DAQ board will also be listed as Axsun OCT Engine in this list): ...If the USB device IS NOT listed as shown above, try the following until the problem is solved: - 1. Check the physical cable connection (e.g. 
is the USB cable damaged or partially plugged? does the same cable work with other USB devices? does the USB port on the PC work with other USB devices?). - 2. Cycle power to the Axsun device. Wait for it to restart and then show up in Device Manager and in OCT Host. - 3. Uninstall and then reinstall the USB device driver according to the OCT Host installation instructions. ...If the USB device IS listed in Device Manager as shown above, this indicates the problem is not likely related to the USB device driver or Axsun hardware but rather to OCT Host or the operating system. Try the following until the problem is solved: - 1. Quit and then relaunch OCT Host. - 2. Unplug and then re-plug the USB cable connecting the Axsun device. - 3. Turn off the Axsun device and shut down the PC. Reboot the PC and wait for the OS to load. Launch OCT Host and then power on the Axsun device and wait for it to successfully connect. Problem: An Axsun DAQ or Laser device is powered-on and physically connected via USB cable, and a successful device connection is indicated in OCT Host, but is not indicated in the Hardware Control Tool devices list (or in client code based on the AxsunOCTControl_LW API). Solution: The Hardware Control Tool can use either AxsunOCTControl or AxsunOCTControl_LW for USB communication with an Axsun device. If the USB-connected Axsun device is not successfully connected to the Hardware Control Tool when using AxsunOCTControl_LW, ensure that the correct USB device driver is installed by following the instructions here. Problem: An Axsun DAQ has been powered-on and physically connected via Ethernet cable for at least 30 seconds, but a successful device connection is not indicated in OCT Host or in the Hardware Control Tool (or in client code based on the AxsunOCTControl or AxsunOCTControl_LW APIs): Success connecting devices in OCT Host's Devices tab. Success connecting devices in the Hardware Control Tool's DEVICES list. Solution: Check if the DAQ will respond to ICMP pings by launching a command prompt or terminal window and executing ping 192.168.10.2 (this is the static IP address of the DAQ board). ...If the ping was unsuccessful ('Request timed out.' or '100% loss' or similar message): this indicates a problem with the physical connection, with the DAQ device or its firmware, or with the network adapter configuration. Try the following until the problem is solved: - 1. Ensure any VPNs or other network routing or firewall software which might interfere with a network connection is disabled. - 2. Check the physical cable connection (e.g. is the Ethernet cable damaged or partially plugged? does the same cable and Ethernet port on the PC work with other devices like a network router?). - 3. Check the status of the large green LED on the corner of the DAQ board. If this LED is not steadily blinking, contact Axsun technical support. - 4. Cycle the power to the Axsun DAQ board off and then back on via the DC power cable. Wait for the DAQ to reboot and then attempt the ping operation again about 30 seconds later. - 6. (Windows OS) Flush your DNS by executing ipconfig /flushdns in a command prompt. Also, confirm the correct IP address for the network adapter by executing ipconfig /all to list the settings of all active network adapters. 
- 7. (Windows OS) Disable and then re-enable the network adapter via its icon in the Network Connections control panel: ...If the ping was successful (packets received with '0% loss') but the DAQ will not connect successfully in OCT Host or the Hardware Control Tool, this indicates a problem with the GUI software configuration (or with AxsunOCTControl or AxsunOCTControl_LW API calls in a client application). Try the following until the problem is solved: - 1. If using OCT Host, go to the Devices tab and confirm that the setting Scan for Network Devices is checked ON: - 2. If using the Hardware Control Tool, go to the Miscellaneous tab and confirm that the setting Listen for Network Devices is checked ON: - 3. Quit and then relaunch the relevant GUI tool (OCT Host or the Hardware Control Tool). - 4. If using the AxsunOCTControl API, make sure your client code calls the StartNetworkControlInterface() method before polling for connected devices. - 5. If using the AxsunOCTControl_LW API, make sure your client code calls the function axNetworkInterfaceOpen() before polling for connected devices. The Ethernet DAQ uses the UDP/IP protocol for transmission of digitized image data from the FPGA to the AxsunOCTCapture library (and thus the Image Capture Tool). UDP is efficient and enables real-world bandwidths in the range of 850 Mbps on a wired 1 Gbps Ethernet connection, but packet delivery is not guaranteed by the UDP protocol and therefore packet drops (data loss) are possible in some pathological circumstances. A cumulative count of dropped packets since the DAQ's last transition from Imaging-Off to Imaging-On mode can be determined using the axGetStatus() function in the AxsunOCTCapture library, or the Buffer tab on the Image Capture Tool: The AxsunOCTCapture library has been designed to optimize the data throughput and minimize or eliminate packet loss on modern PC hardware, but OS and other third-party processes can contend for network and processor resources in a detrimental fashion. Problem: You are experiencing Dropped Packets during image transmission from the DAQ. Solution: Try the following steps: - Quit unrelated applications which are unnecessary to run the Axsun OCT system, especially those which are constantly accessing network resources in the background. Some applications which are known to be particularly problematic include: - Microsoft Teams (main app as well as all background daemons) - Google Chrome - Video Conferencing or Remote Desktop Software - Dropbox, OneDrive, or other cloud-based file syncing utilities - Disable or reconfigure anti-malware software, especially that which intercepts and scans incoming network traffic. - Disable your Sound Card or Sound Output Device at the device driver level. This seems unrelated and is counterintuitive, but Windows OS has a strange interaction (bug?) wherein playing sounds temporarily interrupts network traffic. If you experience a burst of dropped packets when a Windows alert chime occurs, disabling the sound card is the likely solution. - In the Windows Control Panel item for your Ethernet adapter, you will see a checklist of allowed protocols. Selectively disable items which are unrelated to the Axsun OCT operation, such as Windows file/printer sharing and network discovery. Be sure to leave enabled items like IPv4 and anything related to the installed packet capture library like Npcap. - Ensure your Gigabit Ethernet network adapter driver is up to date. 
- Your network adapter hardware may provide some configurability which can be adjusted in the Advanced Properties tab in the driver's properties dialog (via Device Manager). For recommended properties to adjust, review these recommendations: NOTE: when the AxsunOCTControl or AxsunOCTControl_LW libraries communicate with the DAQ via the Ethernet interface, this is a TCP/IP connection where packet delivery is guaranteed by the protocol, and therefore the low-bandwidth command and control messages cannot be dropped during transmission.
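As a convenience, the ping check described in the Ethernet troubleshooting steps above could be scripted so that it points you at the right branch of this guide. This is only an illustrative sketch, not an Axsun-provided tool; the static IP 192.168.10.2 comes from the instructions above, and the ping flags assume a Windows command prompt.

```python
# Hypothetical helper: ping the DAQ's documented static IP and report which
# troubleshooting branch (ping failed vs. ping succeeded) applies.
import subprocess

DAQ_IP = "192.168.10.2"   # static IP of the DAQ board, per the guide above

def daq_responds_to_ping(ip: str = DAQ_IP) -> bool:
    """Return True if the DAQ answers a single ICMP echo request (Windows ping)."""
    result = subprocess.run(["ping", "-n", "1", ip], capture_output=True, text=True)
    return result.returncode == 0 and "TTL=" in result.stdout

if __name__ == "__main__":
    if daq_responds_to_ping():
        print("Ping OK: check the GUI/API configuration (Scan/Listen for Network Devices).")
    else:
        print("Ping failed: check cabling, DAQ power and LED, VPNs, and adapter settings.")
```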
OPCFW_CODE
Contact Info / Websites I recently entered a competition to earn a spot in a local music festival with people such as Datsik, Lil Jon, Felguk, Hyphy Crunk, Yoji, etc... I made it past round 1, but now round 2 is a Facebook popularity competition [you basically need to like 2 things to vote for me]. I need all the help I can get, so I am asking you Newgrounders to vote for me! Here's how it works: The game will be an archer-styled game with a story line as well as upgrades, achievements, medals (hopefully), with some fun elements that will make it different from just a shoot-your-arrow type of game. You can check out a small demo to see that this isn't a waste of time here. : Please note, all it is now is just the physics of shooting, basically the hard part - but everything else is already planned out. This game will be set around medieval times, or more like the movie Gladiator. It should have a fairly cartoony style, but not totally a joke, and also not super serious. The artist will have to draw menus, a map, various bows/arrows, backdrops/scenery, various dummies, a few upgrades for the main character, as well as other minor things that shouldn't take too long. Since this is not a HUGE project, I am aiming at a 2-3 week deadline, which is easily achievable. I have previous experience with sponsorship and I am almost positive this game will earn some good cash; it will be split 50/50. Post here, or PM me if you are interested... NOTE - I will not just be taking the first person to contact me, I will be evaluating people on their previous works, AND examples that I ask that you post! : It would definitely be a good idea to draw up a quick archer and post a picture here, that will help with the evaluating ;) My MSN: firstname.lastname@example.org

Check out this new community/flash website! Sign up, add your flashes, start customizing your avatar!
- Badge/Award system that everyone can participate in
- API for all developers: implement badges in your own games!
- Customizable avatars with tons of items added weekly!
- Upload and share files other than flash submissions (similar to spamtheweb, but with a 50 MB upload limit!)
- Awesome forum!
- *NEW* Collab system that makes it super easy and user friendly to host/operate collabs!
- Plus some untold features that will come out soon ;)
This site is super user friendly, so sign up, and start referring your friends!

Some of you may know about my game, Defend the Caravan - probably not, though, because it wasn't as big as I wanted it to be... but that doesn't matter - what matters is that me and CaiWengi are starting on Defend the Caravan 2:
- Many towns and environments
- Different carts that can be bought
- HUGE achievement system that will fit into Newgrounds' achievement system
- Unique boss weapons
- Tons of upgrades (defensive and offensive), weapons, enemies, and even hireable mercenaries!
- Strategic skill formulas (mixing skill combos)
How can you help? Since the game is still in its alpha stages, we need YOU to come up with ideas or suggestions for the following things:
- Weapons for the mouse-clickin'
- Offensive or defensive cart upgrades
- Unique ideas for mercenaries
- Bosses, as well as enemies
- Stuff for the achievement system
If you need a better idea of the game, play the original here. .: Edit 0.0.1 :. - Screenshot of the gameplay screen; give me some suggestions for weapons, etc! * NOTE: This is mostly a rough draft of the gameplay screen so far, some of the art will change but that's a general idea of what it'll look like! 
WILL BE RELEASED ON THE 29TH OF AUGUST!!!!! Here's a sneak peek of a game I am making; the artwork is all done by me! Yeah, I know I used to be a coder, but I want to try out my art skills :D Tell me what you think! I'll update, BTW
OPCFW_CODE
PORTLAND, Ore. — Startup Aquifi Inc., of Palo Alto, Calif., claims to be poised to render the 3D gesture interfaces that use custom sensors obsolete -- including Microsoft's Kinect, Apple's PrimeSense, Leap Motion's standalone controller, and Google's Flutter -- with its "Fluid Experience Technology." By combining computer vision, machine learning, and cloud services, Aquifi claims to have developed a superior software-only gesture recognition system that uses the high-definition cameras already in smartphones, tablets, PCs, and smart TVs. The system can also be embedded in no-screen devices that add dual HD cameras, like Google's Nest thermostat, cars, and wearables. Tony Zuccarino, vice president for sales and marketing at Aquifi, told EE Times: Our main purpose was to create a user interface that understands the environment and the user so that it can react to you seamlessly, instinctively, and fluidly. We have blended together computer vision and machine learning in the local device, for adapting to a specific user, then access smart apps in the cloud where it accumulates knowledge from all users then pushes that knowledge back down into every user's device. In that sense, it is constantly learning from our user base, so the apps get smarter the more users we get -- sort of how Google Voice gets better over time. Instead of having to use touch to center in on a GPS map display while driving, Fluid Experience Technology tracks where you are looking on the screen. Aquifi was started by the founders of Canesta (now owned by Microsoft), inventors of the time-of-flight 3D sensor at the heart of Microsoft's Xbox Kinect. It was funded, starting in 2011, with $9 million from Benchmark Capital and private investors including Blake Krikorian (founder of Sling Media) and Mike Farmwald (co-founder of Rambus). The vision that Aquifi's founders saw was the ability to do 3D tracking and gesture recognition that adapts to the user with nothing more than software and the HD cameras already in the user's device. While Aquifi's Fluid Experience Technology can perform some functions with a single HD camera, its 3D tracking capabilities require that devices have two HD cameras. Aquifi's software runs on the user's device to track his or her face, hand, and fingers, then accesses smart gesture recognition apps online in the cloud. Zuccarino told us: The crux of our vision is using existing commodity HD cameras for a human interface that adapts to the user, rather than making the user adapt to a machine's interface. Gesture interfaces today are very inflexible, built with custom ICs, which makes them expensive, and the gestures they recognize are static and can only be used in specialized applications. But today's HD cameras provide a full-color, high-resolution image of the user and their environment. By making our solution 100 percent software using the data from already existing sensors, we hope to eventually obsolete all the custom hardware solutions -- in terms of capability and certainly in terms of cost, form factor, and power consumption. Aquifi software can interpret user movements over a wide area, instead of requiring the user to be right in front of the device as custom sensor solutions do. Not only does it locate the user's head, hands, and fingers, it also tracks hand gestures and body positions, identifying whose face it is viewing and the direction that user's eyes are looking. 
"The user does not have to be centered in front of the devices, because our software tracks where the user is located, adapts to the user so they can control their devices from their current position, using real-time machine learning to locate the user's head, hand, and fingers," says Zuccarino. The company claims to have multiple major original equipment manufactures (OEMs) on board and will start doing public demonstrations later this year, with the first commercial Aquifi-enabled devices to appear in the first half of 2015. Aquifi has filed more than 35 patents for its Fluid Experience Technology, four of which have already been granted. — R. Colin Johnson, Advanced Technology Editor, EE Times
OPCFW_CODE
Friday, December 4, 2009 Windows Server AppFabric One of the announcements at the PDC was the release of beta 1 of a new Windows Server technology called AppFabric. What is AppFabric? From the AppFabric site: "Windows Server AppFabric is a set of integrated technologies that make it easier to build, scale and manage Web and composite applications that run on IIS." AppFabric is a collection of technologies in one consolidated...

Friday, November 13, 2009 Velocity Cache Notifications I've been asked by a friend how to use cache notifications in Velocity. If you don't know, Velocity, Microsoft's distributed cache, offers a cache notification mechanism that can help you get notified when cache operations occur. This post will help you get started with Velocity cache notifications. As written earlier, Velocity has a cache notifications feature. That feature enables us to get notified when cache operations occur in our cache cluster. What happens when this feature is enabled is that we get asynchronous cache notifications for many aspects of the cluster including the cache,...

Tuesday, October 6, 2009 Replacing ASP.NET Session with Velocity Session Provider One nice feature of Microsoft Distributed Cache, aka Velocity, is a custom session provider that can replace the ASP.NET default session provider. In this post I'll explain how to replace the ASP.NET session with the Velocity session provider that ships with Velocity. Why Replace the ASP.NET Session with the Velocity Session? Sometimes we want to share a session across servers in a server farm. The ways to do so are to use a State Server or a database. When Velocity came out it was released (currently in CTP) with a custom session provider. The use...

Monday, July 27, 2009 How to Create a Simple Enterprise Library Cache Manager Provider for Velocity In the previous post I promised to give the recipe for creating a simple Velocity cache manager provider using the Application Block Software Factory. In this post I'll keep my promise. Creating the Project The first thing to do is to create the project. If you don't have the Application Block Software Factory installed on your computer, you can read an old post that I wrote about installing it. In VS2008, choose the Guidance Packages –> Application Block Software Factory project type and...

Sunday, July 26, 2009 Creating a Simple Enterprise Library Cache Provider for Velocity I decided to write a simple cache manager provider for Velocity (Microsoft Distributed Cache) using the Enterprise Library Application Block Software Factory. You can download the solution from here. If you put the two DLLs I provided (Microsoft.Practices.EnterpriseLibrary.Caching.Velocity.dll and Microsoft.Practices.EnterpriseLibrary.Caching.Velocity.Configuration.Design.dll) in the directory of EntLibConfig...

Thursday, July 16, 2009 Quick Tip – How to Enable Local Cache in the Velocity (Microsoft Distributed Cache) Client Since I got this question twice this week, I'm writing this post. One of the features of Velocity (Microsoft Distributed Cache) is called local cache. In this post I'll show how to enable that feature. Velocity Client Local Cache Local cache is a Velocity feature that can help speed up access on Velocity clients. When enabled, a de-serialized...
OPCFW_CODE
'For RRJha': Do you have an update on these 2 assignments, my friend? Now I have 2 assignments due by tomorrow: only the final paper outline and the Problem Set Week Three. The final paper will be due in 3 weeks. Ashford 4: - Week 3 - Assignment Problem Set Week Three Complete the problems below and submit your work in an Excel document. Be sure to show all of your work and clearly label all calculations. All statistical calculations will use the Employee Salary Data Set. Final Paper Outline Post an outline of your final paper by day 7. This outline should show the topic and basic structure of the paper. References are not needed at this point. Ashford 6: - Week 5 - Final Paper Identify an issue in your life (work place, home, social organization, etc.) where a statistical analysis could be used to help make a managerial decision. Develop a sampling plan, an appropriate set of hypotheses, and an inferential statistical procedure to test them. You do not need to collect any data on this issue, but you will discuss what a significant statistical test would mean and how you would relate this result to the real-world issue you identified. Your paper should be three to five pages in length (excluding the cover and reference pages). In addition to the text, utilize at least three sources to support your points. No abstract is required. Use the following research plan format to structure the paper: Step 1: Identification of the problem Describe what is known about the situation, why it is a concern, and what we do not know. Step 2: Research Question What exactly do we want our study to find out? This should not be phrased as a yes/no question. Step 3: Data collection What data is needed to answer the question, how will we collect it, and how will we decide how much we need? Step 4: Data Analysis Describe how you would analyze the data. Provide at least one hypothesis test (null and alternate) and an associated statistical test. Step 5: Results and Conclusions Describe how you would interpret the results. For example, what would you recommend if your null hypothesis was rejected, and what would you do if the null was not rejected? A quick example: Concern: whether gender is impacting employees' pay. H0: Gender is not related to pay. H1: Gender is related to pay. Approach: Multiple regression equation to see if gender impacts pay after considering the legal factors of grade, appraisal, education, etc. If the regression coefficient for gender is significant, we will need to create a residual list to see which employees show excessive variation from predicted salaries when gender is not considered. Writing the Final Paper The Final Paper: 'For RRJha' hello my friend, are you there? I need the final paper outline only; it was due like 2 days ago
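As a purely illustrative aside (this is not part of the assignment, and the data below is made up rather than taken from the Employee Salary Data Set), the gender-and-pay approach sketched in the quick example above could be run in Python roughly like this:

```python
# Hypothetical sketch of the multiple-regression approach described above.
import pandas as pd
import statsmodels.formula.api as smf

# Stand-in data; the real analysis would use the Employee Salary Data Set.
df = pd.DataFrame({
    "salary":    [52, 58, 61, 49, 75, 68, 55, 80, 47, 66],
    "grade":     [1, 2, 2, 1, 4, 3, 2, 4, 1, 3],
    "appraisal": [3.1, 3.5, 3.2, 2.9, 4.0, 3.8, 3.0, 4.2, 2.8, 3.6],
    "gender":    [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # arbitrary 0/1 coding
})

# H0: gender is not related to pay; H1: gender is related to pay,
# after controlling for grade and appraisal.
model = smf.ols("salary ~ grade + appraisal + gender", data=df).fit()
print(model.summary())
print("p-value for the gender coefficient:", model.pvalues["gender"])
# Reject H0 at alpha = 0.05 if this p-value is below 0.05; otherwise fail to reject.
```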
OPCFW_CODE
Design of the system To memorize a dynamically changing process of one signal or a series of signals (Fig. 1), we have proposed the idea of constructing a unique memory device with a sequential memory structure, called MemOrderY, to record one changing signal over multiple time periods. Besides the signal we want to record, the Target signal, an oscillating Clock signal is brought into the system to provide the circuit with information about time. We have chosen serine integrases to build the main parts because of their diversity, their orthogonality to one another, and their high efficiency. Here, we are going to illustrate how we designed the simplest version of this memory device, the two-signal system, which records one changing signal over two time periods. Overview of the whole circuit As shown in Fig. 2, the whole circuit of the two-signal system is made up of three parts: logic gates, sequential memory structures, and basic orthogonal memory modules. The logic gates are where the Target and Clock signals are processed. The sequential memory structures control the whole memory schedule and are the core part of this project. Integrases labeled with numbers belong to this part, while those labeled with letters belong to the next part, the basic orthogonal memory modules. IntA and IntB are the true executors of signal memorization, designed to record the signal in the early and late time periods respectively. The inputs to the logic gates are the Target and Clock signals (Fig. 3a). The Target signal is obviously the one we want to record. Although the Clock signal seems unrelated, it provides the circuit with time information: it serves as a time license. When the Clock signal is on for the first time, the downstream parts start to work and record the present state of the Target signal. After that, the Clock signal turns off, stopping the recording and preparing for the second record in the next time period. Logic gates (Fig. 3b) are employed in this project to process the raw Target and Clock signals and to control three promoters, PC, P-C, and PC&T. As a result, this part acts as a bridge between the signals and the sequential memory structures, which means both signals can be highly customized to record various targets of interest with appropriate clocks. Sequential memory structures The key step in recording the changing process of a signal is distinguishing between signals recorded at different time points. We accomplish this with our sequential memory structure. The structure consists of segments of orthogonal memory modules, each of which is capable of responding to the Target signal and executing the memory process on its own. The structure is under the strict control of the Clock signal and the logic gates mentioned above, so that only one memory module is activated during each recording period. The structure is designed so that different memory modules are turned on and off in a sequential order corresponding to their spatial arrangement on the plasmid. By answering the same signal at different time points with different memory modules, the circuit can distinguish these signals and finally put them in the proper sequential order. This structure is the most important part of our sequential memory device, and it is the only difference between the two-signal system and the multi-signal system. The main idea of this part is to make the memory modules follow the order given by the Clock signal. 
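To make the intended behaviour of the two-signal system easier to follow, here is a small conceptual sketch in Python. It is not part of the wet-lab design itself: the signal names and the record-once-per-Clock-pulse rule come from the description above, while the discrete sampling and the data structures are illustrative assumptions.

```python
# Conceptual simulation of the two-signal sequential memory: the Target level is
# recorded once per Clock pulse, first by the IntA module and then by the IntB module.
def simulate_two_signal_memory(clock, target):
    """Return the Target states captured during the first and second Clock pulses."""
    memory = []            # memory[0] ~ IntA record, memory[1] ~ IntB record
    prev_clock = 0
    for c, t in zip(clock, target):
        rising_edge = (c == 1 and prev_clock == 0)
        if rising_edge and len(memory) < 2:   # only two memory modules available
            memory.append(t)                  # analogous to flipping an attB/attP pair
        prev_clock = c
    return memory

# Example: Target is ON during the first Clock pulse and OFF during the second.
clock  = [0, 1, 1, 0, 0, 1, 1, 0]
target = [1, 1, 1, 0, 0, 0, 0, 0]
print(simulate_two_signal_memory(clock, target))   # -> [1, 0]
```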
Here we have made an interactive animation below to help you better understand these important structures. The animation shows how our system works when it meets the Target signal shown below.

Basic orthogonal memory modules

This part is a combination of existing memory devices. In any one time period, only one of these orthogonal devices is ready to work, and each device corresponds only to its own time period and responds to the inducer only during that period. We chose integrases to build this part. In the above animation, apart from Time 1 and Time 3, which serve as recording intervals, Time 2 and Time 4 correspond to IntA and IntB respectively. Of course, any orthogonal memory devices can be used here with a similar design.

The orthogonality of the integrases

We have to ensure the orthogonality of our integrases because it is required for building the sequential memory structures and the basic orthogonal memory modules. Five serine integrases were obtained at the start of our project: phiBT1, Bxb1, phiC31, phiRv1 and TG1. We designed and conducted a PCR-based experiment, which implied good orthogonality between each of them, except for crosstalk between phiRv1 and the attB/P sites of phiC31. Two plasmids are used in this test. Before conducting our experiment, we made competent TOP10 cells expressing transcriptional factors, which had already been transformed with the pUC19-LacI-AraC high-copy plasmid (Fig.4 a). On the testing plasmid, all five attB and attP pairs are aligned together (Fig.4 b). Some notes on this alignment are listed here: 1. The attB/P sites of one integrase are in opposite directions; 2. There is no nucleotide between the former attP and the latter attB; 3. Primer pairs (Fig.4 c) are designed to determine by PCR whether each attB/P pair has been inverted by the integrase. One of the five integrases is also expressed using PBAD controlled by L-arabinose. We tested each site by PCR after inducing, or not inducing, with 0.27% (w/v) L-arabinose for 16 hours. As shown in Fig.5, bands on lanes A and B indicate the existence of the original and recombined attB/P site templates respectively. The result implies good orthogonality between most integrases, in that only the band of the corresponding recombined attB/P site appears after induction. However, we find that integrase phiRv1 can invert the sequence between the attB and attP sites of integrase phiC31. In addition, the leakage of integrases Bxb1 and TG1 is obvious; we discuss this leakage in the diffusion model part.

The efficiency of the integrases

If we want to build meaningful sequential memory modules, the integrases we use in our system should have high efficiency, which means low leakage and a high percentage of recombinants after induction. So, we designed an experiment to compare original sites against recombinants. Another type of testing plasmid was constructed (Fig.6). Sites of phiBT1, Bxb1, phiC31, phiRv1 and TG1 were tested using qPCR, quantitatively measuring the percentage of recombinants. All of the integrases are able to catalyze the recombination reaction in our system. Among the 5 integrases, phiBT1 shows the best behaviour, staying at a low and steady recombined attL/R rate before induction and reaching a high rate after induction. phiBT1 is good enough for building sequential memory structures and for serving as a basic memory device. Although phiC31 achieves only a low recombination rate, induction still makes an obvious difference. As a result, phiC31 is good enough for us to use as a memory device.
Other integrases are difficult to control by induction due to high leakage or other reasons. However, some integrases such as Bxb1 show good performance in other teams' systems, so we think the result may be affected by the system we designed (the two-plasmid system) in this experiment.

Validation of the sequential memory structures

The key part of our project is the sequential memory structures, which makes validating this part the most important work. Unfortunately, we are still working on this. In Fig.8, we show the plasmid we are working on. Oops! We are still constructing the plasmid and continuing to work on this.
ER Modeling, which stands for Entity-Relationship Modeling, is a database (and software) design technique that uses data modeling to illustrate information by representing real-world entities and the relationships between them in terms of conceptual data models. In this article, we will discuss the following concepts of ER Modeling:
- What is ER Modeling?
- What is ER Diagram?
- ER Model Example
- Why use ER Modeling (ERM)?
- ER Modeling vs Dimensional Modeling
- ER Modeling Advantages

What is ER Modeling?

The ER Model, or Entity-Relationship Model, is a data modeling technique. It is used to describe business data requirements in terms of entities, attributes, and relationships. It was first proposed by Peter Chen in the mid-1970s. There are many ER modeling tools; ERwin is probably the most popular commercial one, and several open-source alternatives exist as well. ER Modeling is a process in database design where the database is modeled in terms of entities, relationships between entities, and attributes of entities. The entity-relationship diagram, or ER diagram, is a graphical representation used in database design to model the entities, relationships, cardinality, and attributes of a database. There are many ER Modeling techniques that can be used to build a conceptual model.

What is ER Diagram?

The ER Diagram (entity relationship diagram, ERD) is a tool for visualizing the various components of a database. The first step in the ERD process is to create an entity list, which describes what data will be stored in the system. An "entity" is any individual item of data about which information will be stored. There are two main ER modeling diagrams: the basic Entity-Relationship diagram, and the Entity-Relationship diagram with attributes. The basic Entity-Relationship diagram shows all entities within a system with their relationships. The Entity-Relationship diagram with attributes includes additional information about each entity, such as attributes, primary keys, foreign keys, etc. In the ER model, there are three main concepts: Entity, Relationship, and Cardinality.
- Entity - Represents a real-world object whose attributes are part of the database's data domain.
- Relationship - Represents the connection or association between two entities, or between an entity and itself. It defines how data is arranged in tables and how data can be related to other data.
- Cardinality - Defines how many instances of one entity can be associated with instances of another through a relationship: one-to-one, one-to-many, or many-to-many.

Entity Relationship Diagram Symbols

ER Model Example

Following is an ER Model example that describes how it works. The ER diagram shows two entities, Student and College, and their relationship. Student and College have a many-to-one relationship: a college may have a number of students, but a student cannot study in different colleges at the same time.
Student has Stu_Id, Stu_Name, and Stu_Addr attributes, while College has Col_ID and Col_Name attributes. (A relational sketch of this example appears at the end of this article.) Given below are the geometric shapes and their purpose in an Entity-Relationship diagram:
- Rectangle: Entity sets
- Ellipse: Attributes
- Diamond: Relationship sets
- Lines: Connect entity sets to relationship sets, and attributes to entity sets
- Double Ellipse: Multivalued attributes
- Dashed Ellipse: Derived attributes
- Double Rectangle: Weak entity sets
- Double Lines: Total participation of an entity in a relationship set

Why use ER Modeling (ERM)?
- The Enhanced Entity-Relationship Model is used to overcome the limitations of the basic entity-relationship model. It provides a more comprehensible way of representing the data being stored in the database.
- The Enhanced Entity-Relationship Model contains relationships between entities or between attributes of entities.
- It is an enhanced version of the entity-relationship model that allows relationships between entities or attributes to be represented.
- It also makes it possible to store data about unique occurrences of an entity rather than just unique entities.

ER Modeling vs Dimensional Modeling
- ER modeling, also known as entity-relationship modeling, is a technique for analyzing and conceptualizing data. It is commonly used in software development, but has been applied to other domains, including the description of hardware systems.
- Dimensional modeling is a technique for designing, analyzing, and implementing information systems. It focuses on data and information that vary over time or space.
- Dimensional modeling provides insights into the ways data are used. It helps you to analyze business processes and to design better information systems. It can also help you to identify redundant data and assess their effect on the structure of the information system.
- The dimensional database model is based on the following constructs: dimensions, facts, and measures.
- The ER model, by contrast, is based on entities, relationships, and cardinality.

ER Modeling Advantages
- Shows the data from both logical and physical perspectives
- Can be used to solve a variety of design problems, including analyzing existing databases
- Captures the structural independence of entities and relations
- Provides a view of data that is readily understood and related
- Has the advantage of visually conveying the nature of the information to be recorded
- Can reduce or eliminate table-joining problems in the resulting design
- Provides a uniform notation for designers to use across diverse modeling applications and databases
- ER modeling is a way of representing a database: through ER modeling techniques we specify a database in a formal, diagram-based notation.
- The ER model consists of entities, relationships, attributes, and keys.
- Entities are objects that exist in the real world, for example a person, place, thing, or event.
- Relationships connect entities together. There can be many relationships associated with an entity. Relationships can also be classified into various types, such as one-to-one relationships, one-to-many relationships, etc.
- Attributes are characteristics of entities and relationships. Each attribute has a type and a domain which indicates what values the attribute may contain.
- Keys are used to uniquely identify each entity in a relationship or relationship type. - ER diagramming is the process of creating ER models by drawing diagrams using ER modeling tools.
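To make the Student-College example above concrete, here is a minimal sketch of how that many-to-one relationship could be mapped to relational tables. The column names follow the example; the SQL types and the use of Python's built-in sqlite3 module are illustrative assumptions:

```python
import sqlite3

# Map the Student-College ER example to tables: the foreign key on Student
# captures the many-to-one relationship (many students, one college).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE College (
    Col_ID   INTEGER PRIMARY KEY,
    Col_Name TEXT NOT NULL
);
CREATE TABLE Student (
    Stu_Id   INTEGER PRIMARY KEY,
    Stu_Name TEXT NOT NULL,
    Stu_Addr TEXT,
    Col_ID   INTEGER NOT NULL REFERENCES College(Col_ID)
);
""")
conn.execute("INSERT INTO College VALUES (1, 'Example College')")
conn.execute("INSERT INTO Student VALUES (101, 'Alice', '12 Main St', 1)")
conn.execute("INSERT INTO Student VALUES (102, 'Bob', '34 Oak Ave', 1)")

# Each student row points at exactly one college; a college can have many students.
for row in conn.execute(
    "SELECT Stu_Name, Col_Name FROM Student JOIN College USING (Col_ID)"
):
    print(row)
```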
Code signing is done to assure end users that the software they're downloading is legitimate and has not been altered by an attacker trying to breach their privacy. A Windows code signing certificate is specifically for coders or software developers who create and publish executable programs for Microsoft platforms. If you require certification for Microsoft Windows® Logo programs, regardless of whether you develop software for Windows Phone, Microsoft Windows, Xbox 360, Microsoft Office or Azure, Sectigo offers a Windows code signing solution that guarantees a secure experience for your customers. But before you purchase anything, it's likely that you want to know what the certificate is and how it works. No worries, we've got you covered.

Shop for Code Signing Certificates: save 53% on Sectigo Code Signing Certificates, which ensure software integrity with a 2048-bit RSA signature key.

Windows Code Signing

A Windows code signing certificate is a digital certificate used to authenticate executable programs specifically designed for Microsoft platforms. The certificate establishes the authenticity of the programmer and assures the user that the program has not been tampered with. The Windows code signing certificate from SectigoStore.com provides certification for all Microsoft Windows® Logo programs.

What Is a Code Signing Certificate and How Does It Work?

Code signing is a method of adding a digital signature to a program, application, or executable in a way that its authenticity and integrity can be proved before installation and execution of the software on the customer's system. A code signing certificate is the tool that helps you do that. A code signing certificate is a data file, issued by a certificate authority (CA), that places a digital signature on an application or an executable to verify the identity of the publisher and validate the program's integrity before its installation and execution.

Once you place an order for a Windows code signing certificate, the CA performs its due diligence and issues the certificate. Next, you generate a one-way hash of your executable and encrypt it using the private key. This hash is then bundled with the certificate and the application, and the final package is shipped to end users. A Windows code signing certificate comes in handy when you're developing applications for any Microsoft environment. It's essentially a digital signature that verifies the security and integrity of the executable. For the file to be considered safe in Windows, it needs to be signed by a trusted third-party certificate authority. If the software publisher distributes malware under a valid certificate, the publisher is held legally accountable.

For your end users to trust your software, a code signing certificate builds trust in two ways: 1) by authenticating the publisher, and 2) by verifying the integrity of the software. Why do we need to verify the software? Any malicious script you run on your system can do a number of things, including deleting or stealing data, installing backdoors, etc. A Windows code signing certificate can also help you get rid of the particularly inconvenient Windows SmartScreen "Unknown Publisher" warning! On the customer's end, the legitimacy of the program is verified by decrypting the hash using the public key and creating a new hash of the downloaded file. These hash values are compared, and if they match, it means the software has not been altered by an attacker.
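The hash-sign-verify flow described above can be sketched in a few lines of Python. This is a conceptual illustration only: it uses the third-party cryptography package and a bare RSA signature, not the actual Authenticode package format or a CA-issued certificate, and the executable bytes are a placeholder.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Publisher side: in practice the key pair comes with a CA-issued certificate;
# here we simply generate one for illustration.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

executable = b"...bytes of the program being shipped..."  # placeholder payload

# Sign: hash the executable and sign that hash with the private key.
signature = private_key.sign(executable, padding.PKCS1v15(), hashes.SHA256())

# Customer side: recompute the hash and check it against the signature using
# the publisher's public key; any tampering raises InvalidSignature.
try:
    public_key.verify(signature, executable, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: the file has not been altered.")
except InvalidSignature:
    print("Signature invalid: the file was modified or the key does not match.")
```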
What Types of Code Signing Certificates Are Available?

If you opt for the standard version, the Microsoft SmartScreen warnings will continue to display to your customers right up until you establish a reputation. Your reputation is assessed in terms of the number of downloads and bug report submissions you receive: more downloads and fewer bug reports mean a higher reputation. Extended Validation code signing certificates get rid of these tedious warnings from the get-go, after a meticulous validation process of the developer's identity. Depending on your needs and finances, you can choose between these two options.

What Is a Microsoft EV Code Signing Certificate?
Lost my tokens - need help in figuring out what I did wrong

Can someone give me some advice on a transaction where I seem to have lost some of my tokens? I'm new to this crypto world and now I'm wondering if I got myself into something I can't manage. I had several tokens being held on exchanges, so I purchased a Ledger wallet and transferred most of them on October 8th from Kucoin to my Ledger wallet. Two of my tokens could not be withdrawn from Kucoin since Kucoin was hacked and they suspended withdrawals on these two tokens. They were ZRX and CRPT (ERC20) tokens. The other 3 tokens I had on Kucoin could be withdrawn. Two of them were ERC20 tokens, transferred correctly, and went into my Ethereum account on Ledger as sub (token) accounts. The other transfer was Bitcoin and it went into my Ledger Bitcoin account without any problems.

Finally, on October 23rd the suspension was lifted and withdrawals were allowed again for ZRX and CRPT. I proceeded to do the transfer and thought everything went okay until I noticed that CRPT didn't show up in my Ledger wallet but ZRX did. When I looked at my Kucoin account it showed that both ERC20 token transactions were successful and the tokens were all gone. The Bitcoin transaction was also successful. I now have a zero balance in my Kucoin account, which is what I wanted. The transaction details on Kucoin show that the "send to" address for both ERC20 tokens is the same, and they match the address presented to me by my Ledger for my Ethereum account on the device. Since they are both ERC20 tokens, they both should have gone into the Ethereum account like all the many other tokens I sent to my Ledger back on October 8th (including transfers from other exchanges). Both had the correct Ethereum address, but ZRX made it and CRPT did not. I've waited a couple of days to see if they would show up, but they haven't. I don't know if I made a mistake in the transaction, but if I did, I can't figure out what it was. There's not a lot of money involved, and if I've lost them I can chalk it up to a lesson learned, if only I knew what I did wrong. I now hesitate to make any transactions for fear of a repeat of this episode, especially if it's a large transaction. Does anyone have an idea of what might have happened and whether there's any way I can get my tokens back? If there is a way, can you help walk me through the process?

If the transaction went through properly at the correct address, it is very unlikely that you lost your tokens. Ledger wallet does support CRPT, so it should show up in your balance, though. Can you give the tx id of the transfer between Kucoin and your Ledger?

Xavier59 - is this what you are asking for? Transaction Hash: 0x5ea02c60ddf4a3dde4c4e42fc8bc8c7dec715ff5eb7efd6bb3b36b56555c6522 I'm not sure what I'm doing. Is it safe to post all of the information on the transaction page into this forum? I can copy and paste that information in here if you think it's safe to do that. Please let me know.

@MickG Enter your address on Etherscan; it will display the amount of ether and all the tokens you own. If these amounts are correct, it could be a sync problem with Ledger. I advise you to reach out directly to their customer support.

Thanks clement - I did as you suggested and pasted the "send to" address into Etherscan. Yes, it does show all of the tokens that should be in that account, including the CRPT I'm looking for.
However, all the other tokens are showing brightly on the left with token amounts and on the right with the current token prices and total dollar amount in the account for that token. The CRPT is showing the correct number of tokens on the left but everything is grayed out and no amounts are showing on the right. Do you have any idea why this might have happened and how to correct the problem?
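For anyone who wants to double-check an ERC-20 balance directly from the chain rather than through Etherscan's interface, a short web3.py sketch like the one below can help. The RPC endpoint, wallet address, and token contract address are placeholders you would substitute yourself, and this assumes web3.py v6:

```python
from web3 import Web3

# Placeholders - substitute your own values.
RPC_URL = "https://mainnet.infura.io/v3/<your-project-id>"
WALLET = "0xYourLedgerEthereumAddress"
TOKEN_CONTRACT = "0xTokenContractAddress"  # e.g. the CRPT contract address from Etherscan

# Minimal ERC-20 ABI: just balanceOf and decimals.
ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
token = w3.eth.contract(address=Web3.to_checksum_address(TOKEN_CONTRACT), abi=ERC20_ABI)

raw = token.functions.balanceOf(Web3.to_checksum_address(WALLET)).call()
decimals = token.functions.decimals().call()
print("Token balance:", raw / 10 ** decimals)
```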
[DOC] - 404 Not found pages

Preliminary Checks

- [X] This issue is not a question, feature request, RFC, or anything other than a bug report. Please post those things in GitHub Discussions: https://github.com/nebari-dev/nebari/discussions

Summary

These pages are returning a 404 Not Found status code:

- https://www.nebari.dev/docs/docs/troubleshooting#handle-access-to-restricted-namespaces
- https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry
- https://www.nebari.dev/docs/how-tos/docs/troubleshooting
- https://www.nebari.dev/how-tos/nebari-gcp#deploying-nebari
- https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari
- https://www.nebari.dev/docs/how-tos/docs/tutorials
- https://www.nebari.dev/docs/how-tos/docs/how-tos/nebari-azure#nebari-initialize
- https://www.nebari.dev/docs/how-tos/docs/how-tos/nebari-do#nebari-initialize
- https://www.nebari.dev/docs/get-started/docs/how-tos/nebari-local
- https://www.nebari.dev/docs/get-started/docs/get-started/cloud-providers
- https://www.nebari.dev/docs/how-tos/docs/how-tos/nebari-aws#nebari-initialize
- https://www.nebari.dev/docs/how-tos/docs/how-tos/nebari-gcp#nebari-initialize

It seems something is adding redundant /docs/ or /docs/how-tos/ path segments somehow. These broken links appear on the following pages:

|    | url | not found links |
|----|-----|-----------------|
| 0  | https://www.nebari.dev/docs/troubleshooting/ | https://www.nebari.dev/docs/docs/troubleshooting#handle-access-to-restricted-namespaces |
| 1  | https://www.nebari.dev/docs/how-tos/domain-registry/ | https://www.nebari.dev/how-tos/nebari-gcp#deploying-nebari |
| 2  | https://www.nebari.dev/docs/how-tos/domain-registry/ | https://www.nebari.dev/how-tos/nebari-gcp#deploying-nebari |
| 3  | https://www.nebari.dev/docs/how-tos/nebari-azure/ | https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari |
| 4  | https://www.nebari.dev/docs/how-tos/nebari-azure/ | https://www.nebari.dev/docs/how-tos/docs/troubleshooting |
| 5  | https://www.nebari.dev/docs/how-tos/nebari-azure/ | https://www.nebari.dev/docs/how-tos/docs/tutorials |
| 6  | https://www.nebari.dev/docs/how-tos/nebari-azure/ | https://www.nebari.dev/docs/how-tos/docs/how-tos/nebari-azure#nebari-initialize |
| 7  | https://www.nebari.dev/docs/how-tos/nebari-azure/ | https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry |
| 8  | https://www.nebari.dev/docs/how-tos/nebari-do/ | https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari |
| 9  | https://www.nebari.dev/docs/how-tos/nebari-do/ | https://www.nebari.dev/docs/how-tos/docs/troubleshooting |
| 10 | https://www.nebari.dev/docs/how-tos/nebari-do/ | https://www.nebari.dev/docs/how-tos/docs/tutorials |
| 11 | https://www.nebari.dev/docs/how-tos/nebari-do/ | https://www.nebari.dev/docs/how-tos/docs/how-tos/nebari-do#nebari-initialize |
| 12 | https://www.nebari.dev/docs/how-tos/nebari-do/ | https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry |
| 13 | https://www.nebari.dev/docs/get-started/deploy/ | https://www.nebari.dev/docs/get-started/docs/get-started/cloud-providers |
| 14 | https://www.nebari.dev/docs/get-started/deploy/ | https://www.nebari.dev/docs/get-started/docs/how-tos/nebari-local |
| 15 | https://www.nebari.dev/docs/how-tos/nebari-aws/ | https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari |
| 16 | https://www.nebari.dev/docs/how-tos/nebari-aws/ | https://www.nebari.dev/docs/how-tos/docs/troubleshooting |
| 17 | https://www.nebari.dev/docs/how-tos/nebari-aws/ | https://www.nebari.dev/docs/how-tos/docs/tutorials |
| 18 | https://www.nebari.dev/docs/how-tos/nebari-aws/ | https://www.nebari.dev/docs/how-tos/docs/how-tos/nebari-aws#nebari-initialize |
| 19 | https://www.nebari.dev/docs/how-tos/nebari-aws/ | https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry |
| 20 | https://www.nebari.dev/docs/how-tos/nebari-gcp/ | https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari |
| 21 | https://www.nebari.dev/docs/how-tos/nebari-gcp/ | https://www.nebari.dev/docs/how-tos/docs/troubleshooting |
| 22 | https://www.nebari.dev/docs/how-tos/nebari-gcp/ | https://www.nebari.dev/docs/how-tos/docs/tutorials |
| 23 | https://www.nebari.dev/docs/how-tos/nebari-gcp/ | https://www.nebari.dev/docs/how-tos/docs/how-tos/nebari-gcp#nebari-initialize |
| 24 | https://www.nebari.dev/docs/how-tos/nebari-gcp/ | https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry |
| 25 | https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry | https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry |
| 26 | https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry | https://www.nebari.dev/docs/how-tos/docs/how-tos/domain-registry |
| 27 | https://www.nebari.dev/docs/how-tos/docs/troubleshooting | https://www.nebari.dev/docs/how-tos/docs/troubleshooting |
| 28 | https://www.nebari.dev/docs/how-tos/docs/troubleshooting | https://www.nebari.dev/docs/how-tos/docs/troubleshooting |
| 29 | https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari | https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari |
| 30 | https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari | https://www.nebari.dev/docs/how-tos/docs/get-started/installing-nebari |
| 31 | https://www.nebari.dev/docs/how-tos/docs/tutorials | https://www.nebari.dev/docs/how-tos/docs/tutorials |
| 32 | https://www.nebari.dev/docs/how-tos/docs/tutorials | https://www.nebari.dev/docs/how-tos/docs/tutorials |
| 33 | https://www.nebari.dev/docs/get-started/docs/how-tos/nebari-local | https://www.nebari.dev/docs/get-started/docs/how-tos/nebari-local |
| 34 | https://www.nebari.dev/docs/get-started/docs/how-tos/nebari-local | https://www.nebari.dev/docs/get-started/docs/how-tos/nebari-local |
| 35 | https://www.nebari.dev/docs/get-started/docs/get-started/cloud-providers | https://www.nebari.dev/docs/get-started/docs/get-started/cloud-providers |
| 36 | https://www.nebari.dev/docs/get-started/docs/get-started/cloud-providers | https://www.nebari.dev/docs/get-started/docs/get-started/cloud-providers |

Steps to Resolve this Issue

- Check how the wrong URL path parts are being added (script, manually, etc.)
- Remove it

Hi @eliasdabbas, thanks for reporting this! We are still in the process of migrating our docs from qhub.dev to nebari.dev, but it's great to know exactly where we need to focus our attention, much appreciated :)

Thanks for reporting this issue. It seems the recent changes made to add the landing page messed up internal cross-references. Our current config should have caught broken links and thrown an error, but it seems it needs debugging. Will fix this straight away!
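A report like the table above can be reproduced with a small, generic link-checking sketch in Python using requests and BeautifulSoup. This is not the project's actual docs tooling, and the start pages listed are just examples taken from the table:

```python
import urllib.parse

import requests
from bs4 import BeautifulSoup

# Generic sketch: fetch each page, extract its internal links, and report any
# that return 404, paired with the page they were found on.
SITE = "https://www.nebari.dev"
START_PAGES = [
    f"{SITE}/docs/troubleshooting/",
    f"{SITE}/docs/how-tos/nebari-gcp/",
]

broken = []
for page in START_PAGES:
    html = requests.get(page, timeout=10).text
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        url = urllib.parse.urljoin(page, a["href"])
        if not url.startswith(SITE):
            continue  # only check internal links
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        if status == 404:
            broken.append((page, url))

for source, target in broken:
    print(f"{source} -> {target} (404)")
```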
Configuring GRUB or Installing LILO
Philip.R.Schaffner at NASA.gov
Fri Jul 16 15:13:57 UTC 2004

On Fri, 2004-07-16 at 11:40 -0300, zaca1 at click21.com.br wrote:
> Hello everybody!!!
> My name is Flavio, I'm from Brazil and I installed Fedora Core 2
> yesterday on my computer [Motherboard ASUS A7V8X-X, AthlonXP 2700+,
> 256MB DDR333, 1 hard disk 15GB (Fedora), 1 hard disk 40GB (Windows XP),
> SiS AG315P (video) and DVD/CDRW (LG)]. Everything is working fine in
> Fedora Core 2, but I'm having problems with GRUB. When GRUB
> starts, it works fine to boot Fedora, but when I try to boot
> Windows XP, it doesn't work! The screen shows the following message:
> rootnoverify hd(1,0)
> chainloader +1
> and Windows XP doesn't start. The devices of my computer are:
> The partitions of Fedora are: boot, swap and Fedora, but I don't know
> the order.

Shouldn't matter for this discussion.

> I would like to know if anybody could please help me to configure
> GRUB so I can start Windows XP (or send me the link to a tutorial
> on the internet), or help me to install LILO (I think it's much
> easier to configure...)

Matter of opinion - won't get into religious issues. ;^)

You did not supply enough information to provide definitive help.
(Check out http://www.catb.org/~esr/faqs/smart-questions.html)

Did you perhaps add the disk on which you installed Fedora as master and
move the original XP disk to be the secondary disk? If so, you will need
to fool XP into thinking it is still on the first disk. A grub stanza
along these lines should work:

title Windows XP
map (hd1) (hd0)
map (hd0) (hd1)
rootnoverify (hd1,0)
chainloader +1

> I also can't see the Windows XP partition (hdb1) when I'm using Fedora.
> I tried a command to mount it, but it didn't work.

Would help to say what you tried.

> I used a bootable CD I have here (Kurumin 3.0, a variant of
> Knoppix), and all the files from Windows XP and Fedora can be seen. I
> would like to access it from Fedora (the partition hdb1, with
> WinXP, is FAT32)

(Hmmm - XP native format is NTFS; however...) Fedora should be able to
mount a FAT32 partition. You should be able to find lots of references in
the list archives, but as root try:

# mkdir /dos_c

then add the following to /etc/fstab

/dev/hdb1  /dos_c  vfat  user,noauto,rw,uid=500,gid=500  0 0

(The uid= and gid= are optional - if used, change 500 to match the user
and group you want to have access. This can allow a user [i.e. the user with
uid 500], or users [in gid 500], to write to the disk.)

# mount /dos_c
The following is a guest post by Cristiano Ghersi, CEO of Snip2Code, a web service for software developers to share, collect and organize code snippets.

Nowadays, the fate of a software company may be decided by small adjustments in the production life cycle. A faster time-to-market and an improvement in development life cycle efficiency can mean the difference between reaching the next step of a business and its death. In his masterpiece "The New Science" (1725), philosopher Giambattista Vico maintained that human history was cyclical and characterized by shifting emphasis on gods, heroes and reason. The same can be applied to software engineering, which is embracing a new generation of tools as it enters the next step of its evolution.

Until 1990, the "Age of Gods" in software development brought us several examples of masters able to produce frameworks, mainframes and technologies with a set of resources probably available on $2 hardware now. The code produced during that age still runs nowadays in some places! Then, in the next two decades, companies like Microsoft provided turnkey solutions to help programmers do their jobs in a more managed and controlled way. Integrated development environments, deployment frameworks, and automatic test suites made the life of poor coders far better, as did the large increase in available hardware resources. The focus shifted toward quality.

We have just entered the next phase, the Age of Reason, where quality is a prerequisite and competition is centered on differentiation, user experience and efficiency. Now we need tools that:
- Help developers save time while writing code.
- Allow them to reuse the knowledge and code acquired in past projects as well as from colleagues and world gurus.
- Provide feedback and advice via social discussion on code snippets.
- Speed up the introduction of new members or the replacement of old ones in a development team (the overall expertise of the members becomes a perpetual asset of the company).

A knowledge management system, together with a source control manager, a wiki-like documentation platform and an activity tracker: this is the minimal, yet powerful, set of tools that distinguishes the winning software enterprises of the new era. Microsoft started providing such tools with the Visual Studio Online suite, and its adoption by enterprises has been terrific, thanks also to full integration with the Microsoft Azure platform. Visual Studio Online provides powerful source control, perfectly integrated with the activity tracker; MSDN provides accurate and exhaustive documentation. Now, with the introduction of Snip2Code in the Azure Marketplace as the knowledge management system, that minimal set of tools is ready.

The knowledge management system is usually neglected by enterprises, leaving them without a common base of software knowledge that is the invaluable intellectual property of the companies themselves. Moreover, the departure of senior developers from the company can lead to painful, long and costly handovers. The adoption of a structured tool like Snip2Code to handle and preserve common internal knowledge instead produces a terrific boost in time-to-market (estimates are around 20%) and near-zero "downtime" of dev teams when facing turnover. The Age of Reason starts when knowledge is shared among coworkers and preserved for future reuse: let's enter this age and see if we can fool Vico, producing a fourth era!
Hi there… I have a multiline label which dynamically changes depending on what tool the user hovers over. Because of the discrepancies between varying monitor sizes, this label sometimes gets chopped off before the end of the text is reached. Is there a way to find out if the label text exceeds the height of the label box? If I can find this, I could show a scrollbar to view all the text. Is there a piece of code for this? Has anybody used anything similar? I know that a TextArea control automatically has a scroll bar, but it's ugly, and besides, I want the background to show through the text.

off the top of my head:

function getStringHeight(ta as TextArea) as Integer
dim g as graphics

function toobig(ta as TextArea) as boolean

like I said, off the top of my head, so you may have to check the lang ref to tweak it a bit

Sorry - I should have made this clear - it's for a multiline label… I want the background to be transparent… I tried to create a scrolling canvas with the picture being drawn, the text and transparent=1. It looks horrible. The best result would be a multiline label that I could scroll if the text was bigger than the label size. If the text fits, the scroll bar would be invisible.

Why not remove the border of the TextArea and set its BackColor to the same as the window?

ok… and how does what I posted not help you with that?

[quote=291926:@Sean Clancy]Is there a way to find out if the label text exceeds the height of the label box? If I can find this, I could show a scrollbar to view all the text. Is there a piece of code for this?[/quote]

Dave gave you a way to estimate the height of the text. There are two other possible strategies: using Dave's code, when you see the text reaching the maximum size, simply reduce the point size, or use a narrow typeface, such as Arial Narrow. The other way is to place a much larger label on a ContainerControl, together with a Scrollbar. That will enable you to scroll the label. You can even mix this with Dave's method to detect when the text overflows, and display the scrollbar only then. Note that a multiline label does not naturally scroll, so unless you delete the upper part of the text, adding a scrollbar alone would not suffice. Now it's on you to get to work.

I get an error on these lines: ta has no member named font or font size. The docs say it should be

Copying and pasting code from the forum is not recommended. It's a good way to share ideas, but there's no code-checker.
use libc;
pub type __uint32_t = libc::c_uint;
pub type uint32_t = __uint32_t;
pub type size_t = libc::c_ulong;

// https://www.geeksforgeeks.org/move-zeroes-end-array/
// Compact all non-zero elements to the front of `arr`, fill the tail with
// zeroes, and return the number of non-zero elements.
pub unsafe fn push_zeroes_to_end(mut arr: *mut uint32_t, mut n: size_t) -> size_t {
    let mut count: size_t = 0i32 as size_t;
    let mut i: size_t = 0i32 as size_t;
    // First pass: copy every non-zero element into the next free slot.
    while i < n {
        if *arr.offset(i as isize) != 0i32 as libc::c_uint {
            let fresh0 = count;
            count = count.wrapping_add(1);
            *arr.offset(fresh0 as isize) = *arr.offset(i as isize)
        }
        i = i.wrapping_add(1)
    }
    let mut ret: size_t = count;
    // Second pass: zero out the remaining slots.
    while count < n {
        let fresh1 = count;
        count = count.wrapping_add(1);
        *arr.offset(fresh1 as isize) = 0i32 as uint32_t
    }
    return ret;
}

/* *
 * Add `target` to `values` if it doesn't exist
 * "set"s should only be modified with set_* functions
 * Values MUST be greater than 0
 */
pub unsafe fn set_add(mut values: *mut uint32_t, mut len: *mut size_t,
                      mut cap: size_t, mut target: uint32_t) -> bool {
    // Reject the insert if the backing storage is already full.
    if *len == cap {
        return 0i32 != 0
    }
    // Reject the insert if the value is already present.
    let mut i: uint32_t = 0i32 as uint32_t;
    while (i as libc::c_ulong) < *len {
        if *values.offset(i as isize) == target {
            return 0i32 != 0
        }
        i = i.wrapping_add(1)
    }
    // Append the value and report a successful insertion.
    let fresh2 = *len;
    *len = (*len).wrapping_add(1);
    *values.offset(fresh2 as isize) = target;
    return 1i32 != 0;
}

/* *
 * Remove `target` from `values` if it exists
 * "set"s should only be modified with set_* functions
 * Values MUST be greater than 0
 */
pub unsafe fn set_remove(mut values: *mut uint32_t, mut len: *mut size_t,
                         mut cap: size_t, mut target: uint32_t) -> bool {
    let mut i: uint32_t = 0i32 as uint32_t;
    while (i as libc::c_ulong) < *len {
        if *values.offset(i as isize) == target {
            // Set to 0 and swap with the end element so that
            // zeroes exist only after all the values.
            *len = (*len).wrapping_sub(1);
            let mut last_elem_pos: size_t = *len;
            *values.offset(i as isize) = *values.offset(last_elem_pos as isize);
            *values.offset(last_elem_pos as isize) = 0i32 as uint32_t;
            return 1i32 != 0
        }
        i = i.wrapping_add(1)
    }
    return 0i32 != 0;
}
Offers example-based coverage for various high availability solutions.

Case Study: Realtime Trader Seeks 64-Bit Realtime Database
SQL Server Kicks Oracle's Butt
Making Your SQL Server Apps Highly Available: First, Do The Assessment

Who Is This Book's Intended Audience? How This Book Is Organized. Conventions Used in This Book. Setting Your Goals High!

I. UNDERSTANDING HIGH AVAILABILITY.

1. Essential Elements of High Availability.
Overview of High Availability. Availability Example-A 24/7/365 Application. General Design Approach for Achieving High Availability. Development Methodology with High Availability "Built In". Assessing Existing Applications. Service Level Agreement. High Availability Business Scenarios (Applications). Application Service Provider. Worldwide Sales and Marketing-Brand Promotion. Investment Portfolio Management. Call Before You Dig. Microsoft Technologies that Yield High Availability.

II. CHOOSING THE RIGHT HIGH AVAILABILITY APPROACHES.

2. Microsoft High Availability Options.
What High Availability Options Are There? Fundamental Areas to Start With. Fault Tolerant Disk: RAID and Mirroring. Redundant Array of Independent Disks (RAID). Mitigate Risk by Spreading Out Server Instances. Building Your HA Solution with One or More of These Options. Microsoft Cluster Services (MSCS).

3. Choosing High Availability.
Moving Toward High Availability. Step 1-Launching a Phase 0 (Zero) HA Assessment. Resources for a Phase 0 HA Assessment. The Phase 0 HA Assessment Tasks. Step 2-HA Primary Variables Gauge. Step 3-Determining the Optimal HA Solution. A Hybrid High Availability Selection Method. Cost Justification of a Selected High Availability Solution. Adding HA Elements to Your Development Methodology.

III. IMPLEMENTING HIGH AVAILABILITY.

4. Microsoft Cluster Services.
Understanding Microsoft Cluster Services. Hardware/Network/OS Requirements for MSCS. How Clustering Actually Works. The Disk Controller Configuration. The Disk Configuration. Considerations at the Operating System Level. Installing MSCS-Step 1. Installing MSCS for the Next Node: Step 2. Extending Clustering with Network Load Balancing (NLB). Windows 2003 Options for Quorum Disks and Fail-over. 4-node and 8-node Clustering Topologies.

5. Microsoft SQL Server Clustering.
Microsoft SQL Clustering Core Capabilities. SQL Clustering Is Built on MSCS. Configuring MS DTC for Use with SQL Clustering. Laying Out a SQL Cluster Configuration. Installing SQL Clustering. Failure of a Node. Removing SQL Clustering. Client Test Program for a SQL Cluster. A Node Recovery. Application Service Provider-Scenario #1 with SQL Clustering.

6. Microsoft SQL Server Log Shipping.
Microsoft Log Shipping Overview. Data Latency and Log Shipping. Design and Administration Implications of Log Shipping. Setting Up Log Shipping. Before Creating the Log Shipping DB Maintenance Plan. Using the DB Maintenance Plan Wizard to Create. Viewing Log Shipping Properties. Changing the Primary Role. Log Shipping System Stored Procedures. Call Before You Dig-Scenario #4 with Log Shipping.

7. Microsoft SQL Server Data Replication.
Microsoft SQL Server Data Replication Overview. What Is Data Replication? The Publisher, Distributor, and Subscriber Metaphor. Publications and Articles. Central Publisher with Remote Distributor. Multiple Publishers or Multiple Subscribers. Anonymous Subscriptions (Pull Subscriptions). The Distribution Database. The Snapshot Agent. The Log Reader Agent.
The Distribution Agent. The Merge Agent. The Miscellaneous Agents. Planning for SQL Server Data Replication. Timing, Latency, and Autonomy of Data. Methods of Data Distribution. SQL Server Replication Types. User Requirements Drive the Replication Design. Setting Up Replication. Enable a Distributor. Enable Publishing/Configure the Publisher. Creating a Publication. Switching Over to a Warm Standby (Subscriber). Scenarios That Will Dictate Switching to the Warm Standby. Switching Over to a Warm Standby (Subscription). Turning the Subscriber into a Publisher (if Needed). Insulate the Client Using an NLB Cluster Configuration. SQL Enterprise Manager. The Performance Monitor. Backup and Recovery in a Replication Configuration. Alternate Synchronization Partners. Worldwide Sales and Marketing-Scenario #2 with Data Replication.

8. Other Ways to Distribute Data for High Availability.
Alternate Ways to Achieve High Availability. A Distributed Data Approach from the Outset. Setting Up Access to Remote SQL Servers. Querying a Linked Server. Transact-SQL with Linked Servers. MS DTC Architecture. Two-Phase Commit Protocol. COM+ Applications for HA.

9. High Availability Pieced Together.
Achieving Five 9s. Assemble Your HA Assessment Team. Set the HA Assessment Project Schedule/Timeline. Doing a Phase 0 High Availability Assessment. Step 1-HA Assessment. Step 2-Primary Variable Gauge Specification. High Availability Tasks Integrated into Your Development Life Cycle. Selecting the HA Solution. Is the HA Solution Cost Effective?

10. High Availability Design Issues and Considerations.
Things to Consider for High Availability. Hardware/OS/Network Design Considerations. Microsoft Cluster Services Design Considerations. SQL Server Clustering Design Considerations. SQL Server Data Replication Design Considerations. SQL Server Log Shipping Design Considerations. Distributed Transaction Processing Design Considerations. General SQL Server File/Device Placement Recommendations. Database Backup Strategies in Support of High Availability. Two Backup Approaches for High Availability. Parallel Striped Backup. Split-Mirror Backups (Server-less Backups). Volume Shadow Copy Service (VSS). Disaster Recovery Planning. The Overall Disaster Recovery Approach. The Focus for Disaster Recovery. Documenting Environmental Details Using SQLDIAG.EXE. Plan and Execute a Complete Disaster Recovery Test. Software Upgrade Considerations. High Availability and MS Analysis Services/OLAP. OLAP Cubes Variations. Recommended MSAS Implementation for High Availability. Alternative Techniques in Support of High Availability. Data Transformation Service (DTS) Packages Used to Achieve HA. Have You Detached a Database Recently? Third-party Alternatives to High Availability. IBM/DB2 High Availability Example.

11. High Availability and Security.
Security Breakdowns' Effect on High Availability. Using an Object Permissions and Roles Method. Object Protection Using Schema-Bound Views. Proper Security in Place for HA Options. MSCS Security Considerations. SQL Clustering Security Considerations. Log Shipping Security Considerations. Data Replication Security Considerations. General Thoughts on Database Backup/Restore. Isolating SQL Roles, and Disaster Recovery Security Considerations.

12. Future Directions of High Availability.
Microsoft Stepping Up to the Plate. What's Coming in Yukon for High Availability? Enhancements in Fail-over Clustering (SQL Clustering). Database Mirroring for Fail-over. Combining Fail-over and Scale Out Options.
Data Access Enhancements for Higher Availability. High Availability from the Windows Server Family Side. Microsoft Virtual Server 2005. Virtual Server 2005 and Disaster Recovery. Other Industry Trends in High Availability.
Designing an effective interface goes way beyond functionality. You have to be able to meet the needs and expectations of a wide range of users, and you have to do it while juggling the limitations of the device for which you're designing. Some of the common interface issues you'll be facing as you design for devices include these: Most computer-literate folks have, on their PCs, the icons reduced to next to nothing and the resolutions cranked up. Most people who are not computer-savvy have their desktop PCs set to whatever resolution, icon size, and color depth with which the computer came installed. Hence, the still lingering nastiness that is the web 216-color palette (which I hope we can soon regard as quaint, much in the same way that we regard 2400-baud modems these days). An additional issue is that we interface designers are digitally "tuned-in," sometimes to our own detriment. A good example of this is the current trend of design-oriented web sites to use smaller fonts. Sure, it looks cool, but it is often just plain unreadable. Sometimes, the text written in that small size doesn't mean anything; it's just there for design purposes. However, even then, the small size is detrimental because it's nearly impossible for a literate human being to look at text in this language and not try to read it.

This brings us to a very important point that interface designers can utilize: human beings are, for all intents and purposes, pattern-matching machines. We can pick out a single conversation from a crowded room and know instantly the face of a friend we last saw 20 years ago. Our brains are wired to match patterns, so let's use that to our advantage. A good example of this is what happened when I first started to use my Pocket PC. The interface looked and usually acted like the Windows I was used to, and because of this, I could quickly jump in and start working with it. However, years of working with Windows taught me certain patterns that at the time simply didn't exist in the Pocket PC world, namely closing applications. I must have spent 30 minutes, on and off, looking for the little X to close Pocket Word when I was done writing. I knew and realized that the paradigm of using a Pocket PC was much different from a desktop one, but the continuation of my use patterns between the devices led me to look for a Close button when it just wasn't there. Patterns are powerful things, and often, even though there might be a more efficient and usable way of creating an interface, it's best to stick with what people know. Yes, a Dvorak keyboard is better than a QWERTY one, but on a Dvorak I sure can't come close to the 70 wpm I currently type. You'll notice that this is the reason that the interface widgets that I talk about (and provide later in this chapter) aren't all that different from their desktop counterparts. It is important to know, though, that they are different so that they can deal with the restrictions of devices. Just plopping a regular desktop widget on a device will rarely work well; you almost always need to make some sort of modification. Many, if not all, of the modifications that were made to the widgets were in the name of simplicity. Even if we weren't talking about devices, simplicity would be the name of the game. However, simplicity is more important than ever when you are developing for devices (if for no other reason than that people use devices much differently from PCs). Devices need to be quick and simple to use or people won't use them.

A good example of simplicity in interface design is the venerable Palm Pilot. It took Windows CE devices nearly three generations to come even close to being as accepted as Palm-based devices. Anyone who has owned or used a Palm Pilot knows how simple the devices are to use. There is simply little to no learning curve. The fact of the matter is, if technology is going to try to replace a pen- and paper-based solution, it has to be just as easy, if not easier, to use than pen and paper. A lot of what has been mentioned so far might have seemed like common sense to you. In fact, that's the point; a good deal of interface design is separating the "cool" from the "makes sense." Here's where a very important problem arises, though. As the person who has designed an interface, you know the program inside out and backward. It will nearly always seem usable to you! However, what you need to do is test your usability on a group of impartial, representative users as early as possible in your application development process. If your application will be used mostly by, say, middle-age housewives, have a group of them test the application rather than a group of hardcore UNIX sysadmins. When you are doing your testing, watch your users closely. If they get hung up on a particular element, ask them what they were expecting and why. (You'll almost always get a good answer for the "what" part, but don't expect to get too much for the "why" part.) Look for patterns in your users: where they get hung up, where they whiz through, and where they just seem confused. More than likely, you will be able to extrapolate from the results of such testing what needs to be done to your application.
Which publications actually count for tenure?

I have been told in no uncertain terms that papers published with your PhD advisor simply Do. Not. Count. when it comes to tenure. Quote: "Publications with your advisor will be crossed off the list." An assertion that strong makes me insecure about collaborating with any senior researcher. Will these publications also be tossed out? How senior is senior? For instance, suppose I collaborate with junior faculty at another institution who was hired at roughly the same time. Will outcomes from that collaboration be deemed "more significant" than with, say, a full professor from that same institution? And what about the field? Am I "safer" publishing with senior people from a different field, because there will be an easier perception that I am "carrying my weight?" Or should I simply avoid collaboration altogether, and publish exclusively with my own grad students and postdocs? How do the real discussions go, from those who have actually been in the trenches? Honestly, I would much rather just collaborate with the people I do the best work with. (Isn't that the best thing for the field anyway?) But I am terrified of having years of good work "crossed off the list" because I did not pick the right dance partner.

Honestly, standards differ so much that I don't think you'll be able to get useful advice here. Certainly not all institutions discount papers published with a PhD advisor as severely as you describe (or even at all). You need to put this question to people in your own department who have experience with tenure decisions.

(1) Surely it varies widely depending on field and even within a field. So Nate is right: ask in your department. (2) "Isn't that the best thing for the field?" ... maybe, maybe not. But worry about that only after you are tenured.

Collaborating with senior people is a good thing. The case of your advisor is different in that he or she has an interest in seeing you succeed, and may end up contributing much more than the usual share of a joint paper. A collaboration with other senior people is seen quite differently, as it means that your research quality is good enough to have attracted them.

I strongly endorse @NateEldredge's comment. That said, I believe a common attitude is that some of your papers (including some of your best papers) should be solo or joint with only junior people.

I can confirm that this (unofficial?) policy does exist in the field of life sciences in several institutions. I was also shocked when I heard about this the first time.

You haven't told us (a) what field you are in (which is critical as the answer may depend heavily on this), (b) what country or area your university is in (in my experience norms sometimes vary by country or region of the world), or (c) what stage of career you are in (are you an assistant professor? I'm guessing you are, but this should probably be explicit). Consequently, I'm not sure whether your question is answerable in a useful fashion in its current form. You might want to edit the question to provide additional information. Also, you haven't told us what research you have done on your own. On StackExchange sites we generally expect you to do significant research on your own before asking -- such as, for example, asking a senior mentor you trust -- and to tell us in the question what research you've done.

papers published with your PhD advisor simply Do. Not. Count. when it comes to tenure — In some computer science departments, at least in some subfields, papers published with your advisor don't count even for hiring assistant professors. How do the real discussions go, from those who have actually been in the trenches? — In my experience, the discussion is different for every candidate, even within the same department and the same subfield.

I can't speak for your field or institution, so take anything I say with that in mind. That being said, in my experience (in computer science, at a well-ranked private university) what matters is not so much who you collaborate with, but that you have a research agenda that is strongly identified with you (as opposed to your senior coauthors). If all of your papers are coauthored with the same senior researcher, this can look bad, because it can be difficult to disentangle your research agenda from his or hers. But if your papers have a cohesive theme, and are coauthored with a variety of other people (even if many are senior), this is great. So in summary, in the parts of academia I have seen, you should collaborate with whoever you want to, but make sure you have your own research problems and are not just working on your coauthor's problems.

As others have mentioned, the details vary by field and department. But essentially, your publication record as a faculty member needs to demonstrate that you are an intellectually independent PI making your own unique and novel contributions to your discipline. If your collaborations with previous advisors create the appearance that your work is merely an intellectual extension of your advisor's work, or that your research program is significantly dependent on that of your advisor (or others), then it will not be looked on favorably, whether or not those papers are "officially" counted.
What is Sqlite3.dll?

The sqlite3.dll module is associated with the SQLite Database Library from sqlite.org.

Common Sqlite3.dll errors

The majority of sqlite3.dll errors occur due to removal or corruption of the sqlite3.dll file. Examples of common sqlite3.dll error messages that may appear on your screen are listed below:
- "The Sqlite3.dll File Not Found"
- "Sqlite3.dll is missing"
- "This application failed to load. An important component, Sqlite3.dll, is missing. Reinstalling the application may fix the error"

Generally, DLL files like sqlite3.dll go missing or become corrupt due to the following reasons:
- Accidental deletion of the DLL file
- Malware infection
- Registry issues

Depending on the cause of the missing sqlite3.dll error, you may perform any of the steps mentioned below to fix the error:

Reinstall Any Recently Uninstalled Application

In case the missing sqlite3.dll error started appearing soon after you uninstalled an application, reinstall this application to rectify the error. Incorrect uninstallation of programs may lead to the removal of shared DLLs and cause recurring DLL errors. To correctly uninstall unwanted programs, use either the Add or Remove Programs utility, or an efficient third-party uninstaller tool, such as Perfect Uninstaller.

Download the Missing Sqlite3.dll From the Internet

If incorrect uninstallation of a program does not seem to be the cause of the missing sqlite3.dll file, download the DLL file from a reliable online DLL directory.

Perform a Malware Scan of Your Entire System

Often, malware programs such as viruses and trojans deliberately delete or modify the contents of DLL files and cause missing or not found DLL errors. If the first two steps fail to rectify the sqlite3.dll error, perform a malware scan on your PC using reliable and advanced antimalware software, such as STOPzilla Antivirus and Spyware Cease.

Perform a Registry Scan

Last but not least, perform regular registry scans using an efficient registry cleaning tool, such as RegServe, to ensure the good health of your system registry. A corrupt registry is often recorded as the root cause of various recurring system errors, including DLL errors.

Is Sqlite3.dll a Safe File?

The true sqlite3.dll is a safe file. However, instances of a rogue version of sqlite3.dll have also been recorded. The rogue sqlite3.dll is known to be associated with the following malware:
- Application.KGB_Spy [PC Tools]
- Mal/Packer [Sophos]
- Trojan Horse [Symantec]
- Trojan.Crypt [Ikarus]
- Win-Trojan/Xema.variant [AhnLab]

Sqlite3.dll is known to be located in folders referenced by the following variables:
- The %Windir% variable points to the folder where Windows is installed. By default, this is C:\Windows or C:\Winnt.
- The %Temp% variable points to the temporary folder. By default, this is C:\Documents and Settings\[UserName]\Local Settings\Temp\.
- The %System% variable refers to the System folder. By default, this is C:\Windows\System32 for Windows XP/Vista, C:\Winnt\System32 for Windows NT/2000, and C:\Windows\System for Windows 95/98/ME.
- The %ProgramFiles% variable points to the Program Files folder. This is generally C:\Program Files.
- %AppData% points to the folder that stores application-specific data. Generally, the path is C:\Documents and Settings\[UserName]\Application Data.
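As an illustration, the short Python sketch below looks for sqlite3.dll in a few of the folders referenced by these variables and prints each copy's SHA-256 hash, so it can be compared against a hash obtained from a trusted source. The folder list is an assumption based on the environment variables explained above, not an exhaustive search:

```python
import hashlib
import os

# Candidate folders derived from common Windows environment variables.
candidates = [
    os.path.join(os.environ.get("SystemRoot", r"C:\Windows"), "System32"),
    os.environ.get("ProgramFiles", r"C:\Program Files"),
    os.environ.get("APPDATA", ""),
    os.environ.get("TEMP", ""),
]

for folder in filter(None, candidates):
    path = os.path.join(folder, "sqlite3.dll")
    if os.path.isfile(path):
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        print(f"{path}\n  SHA-256: {digest}")
```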
How to prevent rogue sqlite3.dll from entering your system
Follow safe system security habits to protect your system from the malicious sqlite3.dll file and its associated malware.
- Install advanced protective tools, such as Antivirus and Spyware Cease, on your PC and keep your antimalware tools up-to-date with the latest virus definitions and security updates.
- Install a firewall on your PC to monitor the incoming and outgoing traffic.
- Stay away from dubious websites and do not download software from them.
- Read the End User License Agreement (EULA) before installing any new software that you have downloaded from the Internet.
OPCFW_CODE
Wanted to build it yourself? Yes. These learning paths help you eliminate the workload you would otherwise face on your own. The number of resources can be overwhelming when you first enter data science, and a learning path gives you a route through them that the community has already validated. We have a complete learning path for becoming a data scientist in 2019, divided into the following stages.

Begin your data science journey
This is the biggest step: understanding what data science actually is. It is also the stage where you choose your programming language and tools; our recommendation is Python. With it you can code up everything you will learn in the upcoming stages.

Basic maths and statistics learning
There are core concepts every data scientist must be aware of: mathematics and statistics. Learning these lets you perform the calculations that generate sound results, and a grasp of statistical methods, both descriptive and inferential, is a must if you want to become a data scientist. In this year's learning path the main focus is on these two fields.

Concepts of machine learning and applying them
Once the basics are in place, you start learning the machine learning concepts properly. Machine learning is not just theory; learning by doing is essential, so the path includes several hands-on projects that let you experience the day-to-day life of a data scientist.

Other applications of machine learning
A good grasp of the basic techniques pays off, and there are more advanced topics such as ensemble learning, random forests, and time-series methods. Machine learning is not just about an algorithm; you also need the tricks that improve a model, which is where validation strategies and feature engineering play an effective role (a minimal example of such a validation workflow is sketched at the end of this article). Industry applications are also covered, with project recommendations in the learning path.

Introduction to deep learning
Once the machine learning concepts are clear, the next step is deep learning, which in today's world is becoming an essential part of a data scientist's toolkit. At this point the path leans towards understanding neural networks.

Deep learning architectures such as RNN and CNN
You should follow that up with a deep dive into the advanced neural network architectures: recurrent and convolutional neural networks. These concepts are heavy, and it may take a few weeks to work through them from scratch.

Natural language processing
A data scientist's learning path is not complete without NLP. Focus on the basics first, including text preprocessing and text classification. Exploring how deep learning is applied to NLP is genuinely exciting, and anyone who wants to work in this field should go through it.

These are the steps that will help you follow the learning path, and hopefully, through this path, you will step into the role of a data scientist before the end of the year.
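As referenced above, here is a minimal sketch of the kind of validation workflow the path describes, using scikit-learn. The dataset and parameters are illustrative only and assume scikit-learn is installed.

# A minimal sketch of a train/validation split, a random forest model,
# and cross-validation -- illustrative only, not part of the learning path itself.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Hold out a validation set so the model is judged on data it has not seen.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_valid, y_valid))

# 5-fold cross-validation gives a more stable estimate than a single split.
scores = cross_val_score(model, X_train, y_train, cv=5)
print("cross-validation accuracy:", scores.mean())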
OPCFW_CODE
Issue with container's /etc/hosts file (it is empty for the container but not for the host) when you use VOLUME /etc in your Dockerfile

My Dockerfile:

FROM centos:centos6
VOLUME /etc
CMD /bin/bash

I build my Docker image:

$ docker build -t bugdemo .

I start the db container, in order to link bugdemo's container to it:

$ docker run -ti -e MYSQL_ROOT_PASSWORD=pass -d --name db mysql

I start bugdemo's container:

$ docker run -ti -d --link db:db --name bugdemo bugdemo /bin/bash
$ docker attach bugdemo
$ cat /etc/hosts
$ (empty)

But.... CTRL+P+Q

$ docker inspect -f {{.HostsPath}} bugdemo
/var/lib/docker/containers/cd3b0857959a25a1eb05ba5939f071cfda6c4744897217e7eb63101af58d7124/hosts
$ cat /var/lib/docker/containers/cd3b0857959a25a1eb05ba5939f071cfda6c4744897217e7eb63101af58d7124/hosts
<IP_ADDRESS> cd3b0857959a
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
<IP_ADDRESS> localhost
<IP_ADDRESS> db

I'm not quite sure whether this is a bug or whether I shot myself in the foot with the 'VOLUME /etc' line, but I reported it anyway :-)

Maybe it would be helpful to know what you were wanting to achieve with the VOLUME /etc directive in your Dockerfile. Because the /etc/{hosts,resolv.conf,hostname} files are bind-mounted into your container at runtime, there is now a conflict between your bind-mounted VOLUME /etc and the single file mounts. I didn't spend enough time to figure out if that could be corrected, but again, in a sense you are right that you shot yourself in the foot :) So, maybe start with what the VOLUME declaration was for and then we can figure out what the right next step is.

Oh... the buggy situation is a result of our experimenting with Docker. I needed a quick way to check/change realtime configuration files in /etc in a PHP + Apache container. When I started to use "VOLUME /etc" the container became unable to connect to the linked mysql container, with the error "Host not found". When I made a quick check of the /etc/hosts file from the host with "docker inspect -f {{.HostsPath}} bugdemo" and 'cat' on the hosts file, the content was fine and there was a line for the linked mysql container. I started checking my application for errors, but none were found. Then I checked the /etc/hosts inside the PHP+Apache container and it was empty. I decided to report it as a bug because checking the /etc/hosts file from the host and from within the container gives different content, which can mislead someone who is debugging a problem.

@estesp is right: Docker will mount /etc/{hosts,resolv.conf,hostname} into the container first, and then mount the user-specified volumes. In this case, VOLUME /etc will cover the /etc/{hosts,resolv.conf,hostname} mounts that were made first. I think creating a volume at /etc is not a good choice.

@coolljt0725 I agree with you. Yes, creating a volume at /etc is not a good choice.

@ctmnz At some point, creating an /etc volume may be reasonable, just as you said "I needed a quick way to check/change realtime configuration files in /etc", so I created PR #10615.
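If the goal is only quick access to a few configuration files from the host, a narrower bind mount avoids shadowing the files Docker manages under /etc. The following is an illustrative sketch; the directory and path names are examples only and are not taken from the original report.

# Illustrative alternative: bind-mount only the configuration you actually
# want to edit, instead of declaring VOLUME /etc and hiding Docker's own
# /etc/hosts, /etc/resolv.conf and /etc/hostname mounts.
$ docker run -ti -d --link db:db --name bugdemo \
    -v "$PWD/apache-conf:/etc/httpd/conf.d" \
    bugdemo /bin/bash

# Files under ./apache-conf can now be edited from the host and are visible
# inside the container, while /etc/hosts keeps the entry for "db"
# (docker exec requires Docker 1.3 or later).
$ docker exec bugdemo cat /etc/hosts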
GITHUB_ARCHIVE
this.fireUploadStart is NOT triggered in FileUploader.js -> FileUploader.prototype.upload = function()

OpenUI5 version: 1.28
Browser/version (+device/version): any
Any other tested browsers/devices (OK/FAIL): all failed
URL (minimal example if possible): n/a, the issue is in FileUploader.js
User/password (if required and possible - do not post any confidential information here): n/a

Steps to reproduce the problem: When the UploadCollection control and multiple file selection are used in version 1.28, the event BeforeUploadStarts is not triggered for each file. The reason is that FileUploader.js does not fire the event BeforeUploadStarts; please have a look here: https://github.com/SAP/openui5/blob/rel-1.28/src/sap.ui.unified/src/sap/ui/unified/FileUploader.js

function :: this.fireUploadStart is triggered in FileUploader.js -> FileUploader.prototype.upload = function()

before block ::

aXhr[j].xhr.open("POST", this.getUploadUrl(), true);
if (this.getHeaderParameters()) {
    var oHeaderParams = this.getHeaderParameters();
    for (var i = 0; i < oHeaderParams.length; i++) {
        var sHeader = oHeaderParams[i].getName();
        var sValue = oHeaderParams[i].getValue();
        aXhr[j].xhr.setRequestHeader(sHeader, sValue);
        aXhr[j].requestHeaders.push({name: sHeader, value: sValue});
    }
}

after block :: NULL

expected block :: (taken from release 1.30)

var sFilename = aFiles[j].name;
var aRequestHeaders = this._aXhr[j].requestHeaders;
this.fireUploadStart({
    "fileName": sFilename,
    "requestHeaders": aRequestHeaders
});
for (var i = 0; i < aRequestHeaders.length; i++) {
    // Check if request is still open in case abort() was called.
    if (this._aXhr[j].xhr.readyState === 0) {
        break;
    }
    var sHeader = aRequestHeaders[i].name;
    var sValue = aRequestHeaders[i].value;
    this._aXhr[j].xhr.setRequestHeader(sHeader, sValue);
}

We can also see that this part is covered in release 1.30 and higher.

What is the expected result? The event this.fireUploadStart is triggered in FileUploader.js -> FileUploader.prototype.upload = function().

What happens instead? The event this.fireUploadStart is NOT triggered in FileUploader.js -> FileUploader.prototype.upload = function(), so in the case of a multiple file upload all files have the same name as the first one, because there is no way to update the file name in the request header parameter set.

Any other information? (attach screenshot if possible) Adjustments to the UploadCollection control are currently being made by us for the Fiori application My Travel and Expenses (MTE).

Hi Siarhei, when working for SAP, please report issues using the normal internal BCP system. I have opened ticket <PHONE_NUMBER> there for this issue. Regards Andreas

Hi Andreas, Thank you! I have used this thread this time because it relates to the implementation we are currently doing, but next time I will use the BCP system. Kind Regards, Siarhei

Hi Siarhei, the mentioned UploadStart event was introduced with 1.30 and is therefore not available with 1.28. Unfortunately it is not possible to introduce it in a compatible way by making a fix. The recommendation from my side would be to upgrade to SAPUI5 version 1.36, as a lot of fixes and features needed by the upload collection have been made since then. Best regards Stanley

Hello Stanley, Thank you for your reply! Actually, as it has been a go-live blocker for us, we have made the decision to go to 1.36, even though it pushed us to upgrade SAP_BASIS. Kind Regards, Siarhei

Closing, as this is not a bug, but a not-yet-available feature in an older version, which unfortunately cannot be downported. Sorry for the inconvenience.
Regards Andreas

Hi Andreas, I have encountered a peculiar problem. My Upload Collection works perfectly fine when I upload a file the first time (single file upload, multiple = false). But the next time I upload a different file, the fireUploadStart event throws the following error: File upload failed: Cannot instantiate object: "new" is missing!. I am using SAPUI5 version 1.44.1. Attaching a screenshot for your reference.

I also faced this problem when I was uploading a file. I want to get the file name before the upload, but the FileUploader didn't trigger the function in the controller, so I don't know how to solve the problem.
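For readers on 1.30 or later who, like the last commenter, need the file name before the upload starts: with sap.m.UploadCollection the usual approach is the beforeUploadStarts event. A minimal controller sketch follows; the handler name and the "slug" header name are illustrative and should be adapted to your own view wiring and back-end service.

// Handler for the UploadCollection beforeUploadStarts event (attached in the view).
onBeforeUploadStarts: function (oEvent) {
    var sFileName = oEvent.getParameter("fileName");
    // Pass the file name to the back end as a header parameter; "slug" is a
    // commonly used header name for this, but adjust it to your service.
    oEvent.getParameter("addHeaderParameter")(
        new sap.m.UploadCollectionParameter({
            name: "slug",
            value: sFileName
        })
    );
}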
GITHUB_ARCHIVE
Note: The Microsoft Forms web part is not available in SharePoint Server 2019. For example, you can connect a Document library web part to a File viewer web part. Adding SharePoint Modern Web Parts to a Page. The out-of-the-box web parts are accessible via Site Contents > Add an App. If you do not see the site page that you want, click Site contents on the Quick Launch bar, then in the list of contents click Site Pages, and then click the page that you want. Document library. Web parts can help customize intranet content, layout, and a set of adjustable scripts on certain pages via the web interface. Step 3: To add Contact Details, click the option "Click here to add or modify a contact". We will go through all the filter web parts in this tutorial. As per the SharePoint Roadmap Pitstop: April 2019 release, we can connect various types of web parts to each other on a SharePoint modern page; in this article I will focus on how to show list or library data in various ways. The SharePoint List Filter web part gives end users the ability to search a term via SharePoint list values and filter the web part results. Prerequisites. Just one thing I want to add here is that you might find it beneficial to configure it with metadata instead of folders. Create a web part page in SharePoint. The Quick Links web part "pins" items to your page for easy access. Divider. If your page is not already in edit mode, click Edit at the top right of the page. After adding the web part, you can type directly into the web part to add a title. A web part is used by SharePoint users to build up their web page in a visually appealing way that fits the needs of them and their teams. You will see many different SharePoint Web Parts (Design Elements) you can use. If the page is not already in edit mode, click Edit at the top right of the page. Microsoft SharePoint can, among other things, be utilized as an incredibly accessible tool for the creation of websites consisting of publications based on templates and dedicated, styled web parts… So that was the concept of SharePoint Sites, SharePoint Pages and SharePoint Web Parts. You can also set the scope of the search to a site or site collection, and sort the results. A number of Web Parts ship right out of the box with the different editions of SharePoint, and you can also purchase third-party Web Parts. Step 4: Here you can see the Web Part Zone and what is inside it. The modern SPFx web parts … Office 365 Video is being replaced by Microsoft Stream. The Divider web part inserts a line between other web parts to help break up your page and make it easier to read. Go to the page where you want to add a web part. The reason is that, especially on projects, you will find a lot of standardization in terms of types of documents (i.e. Web Parts are reusable components that display content on web pages in SharePoint 2016. VirtoSoftware is a professional SharePoint and Office 365-oriented software development company that designs and develops innovative and modern SharePoint web parts and Office 365 apps, and provides consulting on SharePoint … This is part of our ongoing series on modern SharePoint web parts, as we deep-dive into each web part, its configuration options, and usage. Here are our latest updates, starting targeted release in summer 2019. Drag the web part where you want it on your page. This can be useful when working with web parts taller than the screen height.
Part of SharePoint 2013 For Dummies Cheat Sheet. Most developers start by applying custom logic to filter the data. SharePoint Pages and Web Parts: SharePoint Pages. Users can view the list, or go to the full list by clicking See all. The List web part displays a list that you can customize with your own title, view, and even size. Try using web part maintenance mode to help troubleshoot the issue. You can add a title, set the date format, add a description, and a call-to-action button with a link. The Spacer web part allows you to control vertical space on your page. File types you can insert include Excel, Word, PowerPoint, Visio, PDFs, 3D models and more.
OPCFW_CODE
Interested in knowing who you can vote for in DC’s upcoming election? Check out Greater Greater Washington to search by your address and learn more about candidates and their positions on important topics. Where are We? DC is a fundamentally geographical city. The boundaries are (originally) a 10-mile by 10-mile diamond that was located to balance preferences of northern & southern colonies and also be a close distance to George Washington’s home at Mount Vernon. Our roads are organized in a British-style grid of incrementing letter + number names with crisscrossing French-style roads named for each of the current U.S. States. Within the City there are 8 Wards, each with approximately 100,000 residents. The Wards are then subdivided into 46 Advisory Neighborhood Commissions (ANC). And finally, each ANC is subdivided into 345 Single Member Districts (SMD) of approximately 2,000 residents. Combined together every resident has a local neighborhood code like 8C02 (Ward 8, ANC 8C, SMD 8C02). One aspect that I find remarkable is that while the 8 Ward Councilmembers (and a few At-Large council members) are paid positions, the other 345 SMD representative commissioners are unpaid volunteers! more on that later… In 2023, thanks to the U.S. decennial census, the neighborhood boundaries are redrawn to improve equitable representation by balancing population and demographics. Every resident might be assigned a new Ward, ANC, and SMD. (Find yours here!) Within these new boundaries are the people who will represent them on broad city-wide policies to hyperlocal residential building permits, business licenses, and individual road changes. While I’d love to believe that every city resident has been involved with their local SMD representative and ANC committees, it is more likely that people are unaware of their current, and upcoming, districts and therefore new representatives. Seeing is Understanding We all know where we live. We should be able to easily find out information about our government and neighborhoods with just this information. I’m passionate about the potential for everyone to learn and be engaged on topics they care about. This is what I’ve been working on for the past few decades but I’m often surprised with how difficult it is to find out “what’s going on near me?” What if you could just type in your address and see what’s going on and who represents me? Who are my Candidates? DC is again a particularly unique part of America. While we are the capital city for the United States, we don’t have U.S. Senators or Congressional Representatives. Residents of the city have a Mayor, a Ward Councilmember, and then our volunteer ANC representative based on our SMD (Single Member District, like 6C04). Because our ANC representatives are unpaid volunteers they have little budget or ability to campaign other than walking door-to-door to meet over 2,000 constituents. I appreciate and applaud the efforts of my neighbors to take on significant and important work to represent our neighborhood. For the past year I have been a (volunteer) member of my neighborhood Transportation & Public Space Committee (yes, we have TPS reports), so I’ve become more involved in local issues but I’m not ready to run for office. So, I took the opportunity to collaborate with Greater Greater Washington, a local news organization, to create informative, interactive, and accessible visualizations of their comprehensive candidate surveys. 
We’ve created a few versions through the summer, including the Mayor’s Primary, Ward Council, and the At-Large Council members. Each of these races are city-wide, so location (within the city) isn’t very relevant. That’s very different with the hyper-local SMD candidates. You can now search by address or neighborhood district to find local candidate responses. Since these candidates are volunteers they only have their personal budget and time for any campaigning. Creating an accessible tool like this supports all candidates by making it easy for voters to find out more about the candidates. Unfortunately, not all candidates responded to the survey. That was their decision and it leaves their potential constituents without any information on their positions and priorities. Maybe candidates will still respond and the tool will automatically update with their new responses. How was it built? This was a fun project to build as it uses several interesting and new technology standards. The visualization is a web component where software developers can create new HTML elements that work in any webpage or web application framework. - Stencil.js tools from Ionic which simplify the development + builds. - ArcGIS JSAPI for geographic maps - Data from OpenData DC and DC Board of Elections - TypeScript, HTML, + CSS The web component makes it very easy to embed into a website like GGWash – or any other site built with common website editors (WordPress, Drupal, Wix, etc.). You would just add this code to your website: <dc-election-survey id="anc" filename="https://ajturner.github.io/dc-elections/assets/2022_anc.csv" candidates-files="https://ajturner.github.io/dc-elections/assets/2022_anc_candidates.csv" format="surveymonkey" show-filter=true filter="" ></dc-election-survey> <script type="module" src="https://ajturner.github.io/dc-elections/build/dc-election.esm.js"></script> <script nomodule src="https://ajturner.github.io/dc-elections/build/dc-election.js"></script> <link rel='stylesheet' type='text/css' href='https://cdn.jsdelivr.net/npm/@firstname.lastname@example.org/dist/calcite/calcite.css' /> <script type='module' src='https://cdn.jsdelivr.net/npm/@email@example.com/dist/hub-components/hub-components.esm.js'></script> The web component are like building blocks. They can be used independently or put together into a composition that integrates several components together. You can see an overview of the dc-election visualization components and how they load data: The entire code project is open-source at https://github.com/ajturner/dc-elections. Feel free to check it out and re-use it. There aren’t currently any tests, and quite a bit of complex logic that emerged as we progressively built the visualizations over time. In future iterations I plan to refactor the project and settle early on a standard format which would simplify a lot of the code.
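For readers unfamiliar with Stencil, a stripped-down component with props matching the attributes in the embed snippet above might look roughly like this. This is an illustrative sketch only; the real dc-election-survey component in the linked repository is considerably more involved (maps, filtering, survey parsing), and the prop types and parsing below are assumptions.

import { Component, Prop, State, h } from '@stencil/core';

@Component({ tag: 'dc-election-survey', shadow: true })
export class DcElectionSurvey {
  // Attribute names mirror the embed snippet; types and defaults are assumptions.
  @Prop() filename: string;
  @Prop() format: string = 'surveymonkey';
  @Prop() showFilter: boolean = false;

  @State() rows: string[][] = [];

  async componentWillLoad() {
    // Fetch the published CSV of candidate responses and parse it naively.
    const text = await (await fetch(this.filename)).text();
    this.rows = text.trim().split('\n').map(line => line.split(','));
  }

  render() {
    return (
      <ul>
        {this.rows.map(row => <li>{row.join(' | ')}</li>)}
      </ul>
    );
  }
}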
OPCFW_CODE
Necessary tools or caveats for working on R12 AC After having the shop that repaired my R12 AC system last year mildly to moderately under-fill it (before it was ice-cold; now it cuts the humidity ok but only gets cool only after 10-20 minutes of highway driving) I got the necessary certification and am planning to top the system off to get it up to the level it should have been filled to. In order to avoid doing something stupid and releasing a bunch into the atmosphere (and wasting money), I want to make sure I'm prepared with the right tools before I start. It looks like all the small R12 cans you can get are sealed and require a tap (as opposed to DIY R134a cans with valves on them). I've seen top- and side-tap devices advertised for this; some examples: https://www.amazon.com/dp/B01JYTOA5I https://www.amazon.com/dp/B0009XT7NY http://www.ebay.com/itm/R12-REFRIGERANT-R-12-HEAVY-DUTY-PRO-CAN-TAPER-With-Pressure-Gauge-/172692646221 How do they work? Do you latch the device onto the can first then twist the tap to puncture the can, and loosen it again to allow it to flow? How do you seal it to prevent leftover coolant from slowly escaping? Do you need a secondary shutoff valve of some sort attached? Aside from this, are there other tools I need or should have on hand? I do have a set of hoses/gauges already, but in the past (R134a systems) I've just gone by feel of the air blowing out to judge when it's suitably full. Why not just convert it over to R134a? I'd suspect the kit to do this would be less than purchasing one can of R12. I can get R12 cans for $25 plus shipping. Conversion (done right) is a huge ordeal that involves thoroughly evacuating and flushing the system to make sure no oils incompatible with R134a remain; it may also involve replacement of old gaskets that will fail after the switch. Plus R134a is a significantly less efficient refrigerant. Less efficient, yes; Significantly less efficient I wouldn't say. No clue where you're getting cans of R12 for $25. Most I've seen are in excess of $100 each. Just a suggestion. On ebay individual 12oz cans are almost always under $60, and you can often find bundles of multiple cans that come out to $35 or less per can. These are trends I've observed by considering doing R12 AC work myself over several years, not just the current prices. Maybe at one point it the prices were higher; I wouldn't be surprised if reduced demand (fewer unconverted older vehicles in service) has caused the prices to decline. As for efficiency it's been a long time since I looked at numbers, so it might not be a big deal. But it's still rather unappealing to do a big overhaul with risks of additional system failures for the sake of something that's not even an improvement, rather mildly worse, just because the refrigerant is easier to obtain. And I've got obtaining it covered anyway. I just want to know what I'm doing so I use it responsibly and don't cause environmental harm or waste money. As you have the certification - you should know what you are doing... @SolarMike: Book knowledge/testing does not translate to hands-on experience with tools. Really not happy with the flak I'm getting for trying to do this right... @R.. When I had my system rebuilt and completely recharged, a machine was used that recorded the exact amount recovered and the exact amount put back in during the recharging process. There was no "air blowing out to judge when it's suitably full" as you mention. 
The machine had two pipes that connected to the valves built into the car's system for this purpose. @SolarMike: When the system was repaired last year it was very low because the seals in the compressor had gone bad, so there was no old amount to match to. Compressor and several other parts were replaced, tested fine for no leaks, and it seemed okay in only moderately warm weather at the time, though not as strong as before it failed. In really hot weather it's unacceptably bad. I'm aware that if you have machines for it you can measure the exact amount you put in, or you can measure the pressure with gauges... ...but as long as the end pressure isn't out of spec, I don't see any good reason not to go by feel, especially since the exact amount you need is going to vary slightly by the particular system and by volume of replacement components, etc. The compressor, accumulator and condensor were replaced on mine - and this meant it had to be completely recharged. The manual (both for the vehicle and the machine) correctly specified the exact amount that the system should be charged with in grammes. While the pressure reading could be within specification the amount of fluid may be low giving you the symptoms of not cooling sufficiently. I ended up using one of the can taps that attaches on the top, with a metal cap containing an O-ring so that the can could be sealed at the threaded connection, beyond the minimal level of seal provided by the tap. I was surprised at the lack of actual locking in the "clamp" that attaches around the neck of the can, though. It feels like you could knock it loose and send the whole thing flying off just by touching it wrong. Overall the process was uneventful and went fine, though.
STACK_EXCHANGE
WebWare Server 4.5 is released! [WebWare Server]

WebWare version 4.5 is released. The main news in this version is:

Product Split
The functions in WebWare Server have been grouped into two modules, the Backup Module and the Report Module. Each module requires a license that can be purchased separately. See WebWare Server Administrator's Guide, chapter 1. Because of this, menus in the improved user interface have been grouped in a more function-oriented manner.

Tuning Parameters, including Filtering of Event Logs, are added to reduce network traffic from the data collector. This improves overall data collection capabilities, which may reduce the need for one or more data collectors in the network. See the help file for the Data Collector. A copy of the last successful backup is stored locally at the Data Collector.

Backup (See WebWare Server Administrator's Guide, chapter 9.)
- New Backup Scheduler for increased scheduling flexibility.
- New Restore function to restore a complete backup folder, using the teach pendant.
- New Restore function to push a complete backup folder or single file from the server to the robot controller, using the web-client.
- Possibility to mark a backup as Master Backup, using the web-client.

Reports (See WebWare Server Administrator's Guide, chapter 7)
- Four sample reports are now included in the Report Viewer page.
- Now includes selected Paint and IRC5 documentation. Full documentation is not possible due to disc size limitations.
- User's Guide instead of Administrator's Guide is now provided as online help.

Admin (See WebWare Server Administrator's Guide, chapter 5.)
- New concept of Device Sets to group devices. Used for backup scheduling.

WebWare.Sys File (See WebWare Server Administrator's Guide, appendix C)
- Possibility to start a backup via RAPID.
- The RAPID Restore Procedure has been improved so that folders and files can be transferred to the robot controller using the Teach Pendant. Also includes password protection.
- Quicker restore procedure: Master and Last Successful backups are listed first and second at the teach pendant.

WebWare 4.5 is being sent out to all customers with a subscription service agreement.
OPCFW_CODE
Serving web content from a user's home directory allows the user to conveniently upload files. By default, the apache configuration in many Linux distributions assumes content is uploaded to a single directory owned by the webserver's user, but it might be useful to allow a user to upload his own content to a special subdirectory of his home directory. To allow this, SELinux and its default rulesets need to be configured accordingly, too. The idea of providing the user with a directory to upload his web content is not new. This means there is an already known pattern of how to do it. Especially with SELinux in place, following those patterns causes fewer modifications to the SELinux ruleset.

Prepare the user
The first step is to prepare the user and the user's home directory. Creating the user on the server can be done with the useradd(8) command. Depending on your needs, add the necessary arguments. To use all the system's defaults, run the command with just the username.

[root]$ useradd webuser

This will create the user "webuser" and a matching user-group "webuser" on the server and a home directory at /home/webuser/. Inside the user's home directory, the subdirectory for the web content can be created. I suggest switching to the "webuser" user and following the pattern of creating the directory with the name "public_html", as this name is known to be used for this purpose.

[root]$ su - webuser
[webuser]$ mkdir ~/public_html/

A quick check of the SELinux context of the created directory already reveals some details about this well-known pattern. The "-Z" option shows the SELinux context of the user's home directory and all its files and directories.

[webuser]$ ls -alZ
drwx------. webuser webuser unconfined_u:object_r:user_home_dir_t:s0 .
drwxr-xr-x. root root system_u:object_r:home_root_t:s0 ..
-rw-r--r--. webuser webuser unconfined_u:object_r:user_home_t:s0 .bash_logout
-rw-r--r--. webuser webuser unconfined_u:object_r:user_home_t:s0 .bash_profile
-rw-r--r--. webuser webuser unconfined_u:object_r:user_home_t:s0 .bashrc
drwxrwxr-x. webuser webuser unconfined_u:object_r:httpd_user_content_t:s0 public_html

The SELinux context of the "public_html" folder was automatically set to "httpd_user_content_t" instead of "user_home_t", which is used for the rest of the user's home directory. This is because the public_html directory is a well-known directory used for web content.

[webuser]$ restorecon -r ~/public_html

The above command can be used if the SELinux context of "public_html" and/or its content is not the expected "httpd_user_content_t". This situation can happen, for example, if content is moved to the directory instead of copied. The SELinux context remains unchanged when moving files or directories, but it is not carried over if a file or directory is copied.

Prepare the webserver
The webserver needs to be configured to use the user's "public_html" directory. This can be done by configuring the document-root of a virtualhost to the "public_html" directory of the user or by using the userdir module. This module allows reaching the user's "public_html" directory via an HTTP request to the "/~webuser/" path. The following assumes the apache document-root is configured to the user's "public_html" directory (a minimal configuration sketch is included at the end of this post). To allow the apache user of the webserver to access this directory, the public_html directory needs the correct permissions. There are two possibilities to allow apache to access the public_html folder.
The directory access can be changed to allow “others” read and execute rights on the user’s home directory and the “public_html” directory. These very open permissions might cause other side effects and are not recommended. The suggested method is to add the user’s group as a secondary group to the apache user and grant the group permission (read and execute) for the public_html and home directory. [root]$ usermod -a -G webuser apache The above command adds (-a) the group “webuser” as secondary group (-G) to the user apache. This will allow the apache daemon to access the user’s home directory via the group permission. To allow access for the group, change the permission for the home and public_html directory. [root]$ chmod g+rx /home/webuser [root]$ chmod g+rx /home/webuser/public_html Allow access via SELinux Before the webserver can access the public_html directory of the user, an SELinux boolean with the related rules needs to be enabled. This boolean enables the rules allowing access by the apache daemon to the public_html directory. The “-P” option in the setsebool(8) command will make the change persistent. [root]$ setsebool -P httpd_enable_homedirs true Testing the setup As the apache configuration was changed as well, the webserver daemon needs to be restarted. In case of CentOS 7, via systemd. [root]$ systemctl restart httpd To verify the access to the public_html, create an index.html file in the users public_html directory with some content. [webuser]$ echo "*** TEST ***" >/home/webuser/public_html/index.html Now this file can be requested via a webbrowser or just from the command line via curl. $ curl http://127.0.0.1/ *** TEST *** Alternatively, the telnet(1) command can be used to communicate with the webserver. $ telnet 127.0.0.1 80 Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. GET / *** TEST *** Connection closed by foreign host. When the telnet connection is established, type “GET / ” and press enter to retrieve the content. The result should show the content of the test file that was created earlier. Read more of my posts on my blog at https://blog.tinned-software.net/.
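For reference, a minimal virtual host of the kind assumed in the "Prepare the webserver" step above could look roughly like the following. This is an illustrative sketch only; the server name is a placeholder and the exact file layout under /etc/httpd depends on your distribution. The alternative userdir approach is shown below it.

# /etc/httpd/conf.d/webuser.conf -- illustrative example only
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /home/webuser/public_html

    <Directory /home/webuser/public_html>
        # Apache 2.4 syntax, as shipped with CentOS 7
        Require all granted
    </Directory>
</VirtualHost>

# Or, with mod_userdir instead of a dedicated virtual host:
<IfModule mod_userdir.c>
    UserDir enabled webuser
    UserDir public_html
</IfModule>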
OPCFW_CODE
13.1 Do not write any code for OPC failure recovery As described in FAILURE RECOVERY, problems in communications with target OPC server are automatically recognized, and the component performs the recovery automatically. Attempts to duplicate this functionality in your code will interfere with this mechanism and won’t work. It is fine (and indeed, suggested) to catch the errors and handle them appropriately. But the error handling should not include attempts to remedy the situation. And the referred chapter: 12.12 Failure Recovery The OPC communication may fail in various ways, or the OPC client may get disconnected from the OPC server. Here are some examples of such situations: • With OPC Classic, the OPC server may not be registered on the target machine – permanently, or even temporarily, when a new version is being installed. • With OPC UA, the OPC server may not be running or registered with the discovery service on the target machine – permanently, or even temporarily, when a new version is being installed. • The (DCOM, TCP, Web service or other) communication to the remote computer breaks due to unplugged network cable. • The remote computer running the OPC server is shut down, or restarted, e.g. for security update. • The configuration of the OPC server is changed, and the OPC information (item in OPC Classic, node and attribute in OPC UA) referred to by the OPC clients no longer exists. Later, the configuration could be changed again and the OPC item may reappear. • The OPC server indicates a serious failure to the OPC client. • The OPC Classic server asks its clients to disconnect, e.g. for internal reconfiguration. QuickOPC handles all these situations, and many others, gracefully. Your application receives an error indication, and the component internally enters a “wait” period, which may be different for different types of problems. The same operation is not reattempted during the wait period; this approach is necessary to prevent system overload under error conditions. After the wait period elapses, QuickOPC will retry the operation, if still needed at that time. All this logic happens completely behind the scenes, without need to write a single line of code in your application. QuickOPC maintains information about the state it has created inside the OPC server, and re-creates this state when the OPC server is disconnected and then reconnected. In OPC Classic, objects like OPC groups and OPC items are restored to their original state after a failure. In OPC UA, objects like OPC subscriptions and OPC monitored items are restored to their original state after a failure. Even if you are using the subscriptions to OPC items (in OPC Classic) or to monitored items (in OPC UA) or events, QuickOPC creates illusion of their perseverance. The subscriptions outlive any failures; you do not have to (and indeed, you should not) unsubscribe and then subscribe again in case of error. After you receive event notification which indicates a problem, simply stay subscribed, and the values will start coming in again at some future point. Second: "Setting a pointer to NULL" may or may not be correct, it depends on the kind of pointer you are using, and whether you have other pointers to the same thing. Simply setting a regular pointer to NULL does nothing to EasyOPC, the component wouldn't even know about it. What is needed is that all references to it are released (using the IUnknown::Release() method), this is a general COM rule. How this is achieved - the component does not care. 
If, for example, you are using smart pointers and using them correctly, then yes, setting them all to NULL would be the right way of disposing of the object.

Third: It is recommended that you explicitly unsubscribe before disposing of the component. You can use EasyDAClient.UnsubscribeAllItems(), or EasyAEClient.UnsubscribeAllEvents().

But we have encountered another problem (we do not know if we should create a new topic)... If the OPC server shuts down for some reason, we try to clean up the connection to the OPC server by unsubscribing both our DA and AE subscriptions and calling unadvise. After that, the OPCDA and OPCAE pointers are set to NULL. When we try to reconnect to the OPC server, we sometimes hang in the SubscribeMultipleItems (OPC DA) or SubscribeEvents methods, more often the SubscribeEvents method. The only alternative is to shut down our OPC client and restart it; then everything works OK. Have you encountered this kind of problem before? What is the correct way to unsubscribe an AE/DA subscription? Could that be a problem when we try to recreate the OPC connection and subscriptions?

I have downloaded version 5.31. As I understand it, I do not have to worry about the .NET Framework version? The production system is running 3.5 and the customer will not change it until the production system is upgraded, but since I am using the COM interface it is not an issue. How do I receive a 30-day trial license? Can you send one to my mail address? Is it possible to run the license on several machines, since it is a redundant system with two OPC client machines?

We have had a similar issue, but not directly related to, or "triggerable" by, connectivity losses. The trial gives valid data for 30 minutes after the process (EASYOPCL.EXE in your case, I believe) starts; after that, it gives an error instead. We can give you, say, a 30-day evaluation license that has no limitation other than the absolute end date - let me know if you need it then.

Have you come across this type of error in earlier versions? I forgot to tell you that it is a Windows 2008 R2 64-bit machine that is running the OPC client and server. I will try to rebuild the application with the new version. If I download the new SDK from the download section, does it work for 30 minutes before I have to restart my application, or is it 30 days?

My suggestion is that you rebuild your application with the latest version and re-test. If it resolves the problem, I will get you an extra good deal on the version upgrade, not the usual 50% of the new license price. If switching to the newest version does not resolve the problem, we will be in a better position to troubleshoot further (although not an easy position either, unless we can reproduce it here).

I have created an application that uses both the COM-DA and COM-AE SDKs from you. The application connects to the OPC server locally, i.e. on the same machine, and creates a DA subscription on approximately 7000 items. When the machine experiences network loss due to a switch breakdown (we simply unplug the cable), sometimes my application crashes and generates the following entry in the event log (it might be that the OPC server behaves badly, but I do not want my application to crash):

Faulting module name: MSVCR90.dll, version: 9.0.30729.6161, time stamp: 0x4dace5b9

Your version of easyopcl.exe is 5.11.355.2. This is happening in a production environment, so debugging is not an option. I have tried to log the execution but cannot find anything to go on. Do you have any suggestions?
Update: Both Microsoft Visual C++ 2005 Redistributable 8.0.61001 and Microsoft Visual C++ 2010 x64 Redistributable 10.0.30319 are installed on the computer.
OPCFW_CODE
This page is not created by, affiliated with, or supported by Slack Technologies, Inc. @benedek: just seen the new wiki for clj-refactor - like it a lot. Is describe refactoring ‘?’ new? I use https://github.com/kai2nenobu/guide-key bound to certain key combo’s like C-c for cider. back when I was writing a clojure refactoring tool for emacs (https://github.com/tcrayford/clojure-refactoring RIP), I just had an ido menu for all the refactorings. Really liked that. How do you make cider reload shit? I'm fucking around with a macro in one file, and doing C-c C-m on a expression in another file. But it hasn't picked up on any of the changes I think that approach works generally for all text editing stuff - if it's not commonly used, it should be in an ido menu. Really dislike emacs setups that have hundreds of keybindings bound under prefixes @xlevus: C-c C-k to load the file in the buffer you’re in. C-c C-x to do cider-refresh everything. what about window-choosing? Is there a way to tell which window cider will put stuff in? I still haven't worked it out, and at the moment its opening up buffers in windows i'm 'using' @xlevus: on the cider window thing…not that I’m aware of. If you find owt let me know. @xlevus: If you don’t cider-load-file (C-c C-k) it probably isn’t evaluating the entire file so you could have compilation errors. @xlevus: I struggled with it but I wasn’t used to vim either (IDE’s like Eclipse/Netbeans/IDEA) I'm just finding it too inconsistent. Sometimes Cider opens the macroexpansion in window 1, other times in 3, other times in 2. the emacs way: horrible defaults, millions of lines of code in everybody's .emacs, and everybody rediscovers the same set of configurations over 20 years of using it (I used to be an emacs user, switched to vim ~1 year into writing clojure regularly) but trying to work out where to beat it into submission probably isn't worht the time when the end-game is clojure. Trying spacemacs at the moment, the main reason to not stay with my other workflow in vim is I wanted to to try out Org-mode, and literate programming with it. Talk by Karsten Schmidt at skills matter a few weeks ago actually demonstrates Literate Programming in amongst some really cool graphical rendering and art installations https://skillsmatter.com/explore?content=&location=&q=all+the+thi.ngs
OPCFW_CODE
Visualizing network on map I have several hundred geo-referenced data points, and the relationships from that point to other points. I'm trying to figure out the best way of visualizing this on an interactive map (possibly using google maps). One idea I had was that when a user clicks on a point, it then displays all the links from that point to the related points. Do you have any suggestions or examples of how to do this? I have experience using ArcGIS, QGIS, Python and a small amount of JavaScript. There are different methods depending on what you want to accomplish, how much data you have, and how pretty you want it. Your idea is a good idea and would probably work well. Of course another obvious answer is to show all of the relationships all the time but that would add a lot of visual clutter. Perhaps a nice compromise is to always show all of the relationships but in a semi-transparent color so they are barely visible. Then when a user clicks or mouses over a data point, the links from that point would become opaque. One thing you can do to make maps a little more visually pleasing and intuitive is to use curved lines instead of straight lines to connect to data points. This works in two dimensions or three dimensions. You can also do interesting things by playing with the colors and transparency level of the lines. One very nice and elegant solution is the Flow Map. This visualization would also be more interesting if you add the interactivity of being able to mouse over or click on a data point and see the connecting datapoints. I'll let others speak to ArcGIS and QGIS, but I would recommend trying protovis. It's a domain specific language for visualization built on top of javascript so it should make some of these visualizations relatively easy. The Flow Map page includes code in Java, which you could translate to other languages/platforms. It probably wouldn't be too difficult to translate the Flow Map code to Protovis though I have not tried. Jay has covered a lot of the suggestions that I immediately thought of from the visualisation angle. However, does it have to be a network? Depending on the needs of the user and the clustering of the data a better solution may be to show relationships with color coding rather than lines. My suggestion: when a user clicks on a point then all the related points intensify in color/glow on and off (like the sleep indicator on a sleeping Mac)/get a colored halo. Click off icon or on another point and the first set of relationships turn off. This would do away with the visual clutter of lines. I suspect this solution would work best if: - there are lots of relationships (could end up looking like a spaghetti fight) - points are clustered strongly, the lines will be less easy to see if points are close together Could you elaborate on clustering and total number of relationships? Here you can find some info about desire lines. In the image you can see many links to related point using FlowMapper plugin. I know it do not complete answer to your question, but I hope it can help in something I was thinking about this myself recently and came across this... http://hint.fm/wind/ In my case I am looking at students moving from secondary schools to universities, so keep in mind I get a lot of clustering going on, and movement occurs only in one direction. But I think the ability to see movement across the network would help users see the overall structure.
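Since the question mentions Python experience: as a quick offline prototype of the curved-line idea (before committing to a Google Maps or Protovis implementation), matplotlib's arc connection style can draw semi-transparent relationship lines between geo-referenced points. This is an illustrative sketch only, with made-up coordinates and links.

import matplotlib.pyplot as plt
from matplotlib.patches import FancyArrowPatch

# Hypothetical geo-referenced points (lon, lat) and relationships between them.
points = {"A": (-77.03, 38.90), "B": (-76.61, 39.29), "C": (-77.46, 38.95)}
links = [("A", "B"), ("A", "C"), ("B", "C")]

fig, ax = plt.subplots()
for name, (x, y) in points.items():
    ax.plot(x, y, "o", color="steelblue")
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(4, 4))

# Semi-transparent curved connectors reduce visual clutter compared with straight lines.
for src, dst in links:
    arc = FancyArrowPatch(points[src], points[dst],
                          connectionstyle="arc3,rad=0.2",
                          arrowstyle="-", color="gray",
                          alpha=0.4, linewidth=1.5)
    ax.add_patch(arc)

ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
plt.show()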
STACK_EXCHANGE
GDC China has revealed the first batch of talks within the show's Global Game Development track, featuring Double Fine's approach to working with Kinect, a quick guide to Agile/Scrum development, and veteran game composer Hitoshi Sakimoto on the business of game audio. Taking place November 12-14 at the Shanghai Exhibition Center in Shanghai, China, the event will once again serve as the premier game industry event in China, bringing together influential developers from around the world to share ideas, network, and inspire each other to further the game industry in this region. Here are the first talks to be announced so far for the Global Game Development Track: - In "Rapid Prototyping Techniques for Kinect Game Development," Double Fine Productions' lead technical artist, Drew Skillman, will provide an in-depth look at the studio's approach to implementing motion control in its Kinect-enabled titles. Along the way, Skillman will discuss the company's software setup for rapid prototyping, augmented reality, and some "shader based compositing tricks." - Elsewhere, Maxwell Peng, senior producer at Taiwanese developer International Games System, will host a session dubbed, "How to Succeed in Game Development With Agile/Scrum," giving developers tips on how to improve and streamline their development pipeline. Peng will outline the challenges and solutions he encountered when using Agile/Scrum, helping developers better understand this approach to game development. - Finally, Hitoshi Sakimoto, a game audio veteran best known for scoring Final Fantasy Tactics and Final Fantasy XII, will host, "The Business (and Importance!) of Game Audio." Here, Sakimoto will delve into what it takes to make effective game music and sound effects, outlining his experience working in the business of game audio to help developers understand why sound plays a key role in nearly all realms of game development. For more information on these or other sessions, please check out the official GDC China website. The above talks join other recently-announced talks from top indie developers, covering thatgamecompany's approach to games as an expressive medium, Capy's experience working on Superbrothers: Sword & Sworcery EP, and the story behind Supergiant Games' critically-acclaimed Bastion. With registration for GDC China now open, interested parties can go to the event's official website to start the registration process and gain access to the numerous talks, tutorials, and events the show will have to offer. Keep an eye out for even more news as the show draws closer, as GDC China organizers have a number of exciting announcements planned for the coming weeks and months. For more information on GDC China as the event takes shape, please visit the official GDC China website, or subscribe to updates from the new GDC Online-specific news page via Twitter, Facebook, or RSS. GDC China is owned and operated by UBM TechWeb.
OPCFW_CODE
DDD - How to handle transactions in the "returning domain events" pattern?

In the DDD literature, the returning domain events pattern is described as a way to manage domain events. Conceptually, the aggregate root keeps a list of domain events, populated when you do some operations on it. When the operation on the aggregate root is done, the DB transaction is completed at the application service layer, and then the application service iterates over the domain events, calling an Event Dispatcher to handle those messages. My question concerns the way we should handle transactions at this moment. Should the Event Dispatcher be responsible for managing a new transaction for each event it processes? Or should the application service manage the transaction inside the domain event iteration where it calls the domain Event Dispatcher? When the dispatcher uses an infrastructure mechanism like RabbitMQ, the question is irrelevant, but when the domain events are handled in-process, it is.

A sub-question related to my question: What is your opinion about using ORM hooks (i.e. IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener of NHibernate) to kick in the domain events iteration on the aggregate root instead of manually doing it in the application service? Does it add too much coupling? Is it better because it does not require the same code being written for each use case (the domain event loop on the aggregate and, potentially, the new transaction creation if it is not inside the dispatcher)?

Are you referring to https://blog.jayway.com/2013/06/20/dont-publish-domain-events-return-them/ ?

"My question concerns the way we should handle transactions at this moment. Should the Event Dispatcher be responsible for managing a new transaction for each event it processes? Or should the application service manage the transaction inside the domain event iteration where it calls the domain Event Dispatcher?"

What you are asking here is really a specialized version of this question: should we ever update more than one aggregate in a single transaction? You can find a lot of assertions that the answer is "no". For instance, Vaughn Vernon (2014): "A properly designed aggregate is one that can be modified in any way required by the business with its invariants completely consistent within a single transaction. And a properly designed bounded context modifies only one aggregate instance per transaction in all cases." Greg Young tends to go further, pointing out that adhering to this rule allows you to partition your data by aggregate id. In other words, the aggregate boundaries are an explicit expression of how your data can be organized. So your best bet is to try to arrange your more complicated orchestrations such that each aggregate is updated in its own transaction.

"My question is related to the way we handle the transaction of the event sent after the initial aggregate is altered and the initial transaction is completed. The domain event must be handled, and its processing could need to alter another aggregate."

Right, so if we're going to alter another aggregate, then there should (per the advice above) be a new transaction for the change to that aggregate. In other words, it's not the routing of the domain event that determines if we need another transaction -- the choice of event handler determines whether or not we need another transaction.

Not sure why we are talking about modifying more than one aggregate per transaction here.
My question is related to the way we handle the transaction of the event sent after the initial aggregate is altered and the initial transaction is completed. The domain event must be handled, and its processing could need to alter another aggregate. When sending a domain event that is handled in another bounded context, using a messaging system forces a new transaction in another process, but when the domain event is handled inside the same bounded context, without a messaging system, it does not.

Just because event handling happens in-process doesn't mean the originating application service has to orchestrate all transactions happening as a consequence of the events. If we take in-process event handling via the Observer pattern, for instance, each Observer will be responsible for creating its own transaction if it needs one.

"What is your opinion about using ORM hooks (i.e. IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener of NHibernate) to kick in the domain events iteration on the aggregate root instead of manually doing it in the application service?"

Wouldn't this have to happen during the original DB transaction, effectively turning everything into immediate consistency (if events are handled in-process)?

I'm not sure I understand your question. With NHibernate, you can be "notified" when the transaction completes by implementing the IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener interfaces. This is what I called hooks. One option is to use those hooks to kick in the event dispatching. The other option is to not rely on the ORM hooks, and to kick in the events directly in the service, sequentially, after you close/dispose the transaction. Also, I agree with handling the new transaction inside the handlers. This way, if one day a full messaging system (out of process) is used for the domain events, the handlers will already be ready to process the events correctly.

Oh, I thought the Post... hooks were triggered before the transaction completed, allowing you to sneak in stuff between the SQL command and the final commit. My bad.
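To make the shape of the discussion concrete, here is a minimal, language-agnostic sketch of the pattern in Python. All class, function, and parameter names (including the uow_factory unit-of-work helper) are hypothetical; the point is only that events are collected on the aggregate, dispatched after the first commit, and that any handler touching another aggregate opens its own transaction.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Type

# --- domain layer ---
@dataclass
class OrderPlaced:                      # a domain event
    order_id: str

@dataclass
class Order:                            # the aggregate root
    order_id: str
    events: List[object] = field(default_factory=list)

    def place(self) -> None:
        # Domain behaviour records the event instead of publishing it.
        self.events.append(OrderPlaced(self.order_id))

# --- application layer ---
class Dispatcher:
    def __init__(self) -> None:
        self.handlers: Dict[Type, List[Callable]] = {}

    def register(self, event_type: Type, handler: Callable) -> None:
        self.handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event: object) -> None:
        for handler in self.handlers.get(type(event), []):
            handler(event)

def place_order_service(order: Order, uow_factory, dispatcher: Dispatcher) -> None:
    # First transaction: only the Order aggregate is modified.
    with uow_factory() as uow:
        order.place()
        uow.commit()

    # After the commit, dispatch the returned events; each handler decides
    # whether it needs its own transaction (e.g. to update another aggregate).
    for event in order.events:
        dispatcher.dispatch(event)
    order.events.clear()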
Tag «target files»: downloads
Search results for «target files»:

Target 1.0 by J. Kern
Target project is an AJAX-enabled application that allows users to create and track one or more target areas. The target area can then be resized and moved to new locations on the image. With a right-click, the movement and resize history can be replayed with DHTML animation and AJAX.

Mtp Target 1.2.2 by Ace and Skeet
Mtp Target is a Monkey Target clone (one of the six mini-games from the video game Super Monkey Ball). After rolling your pingoo ball down a giant ramp, your goal is to hit a target as accurately as possible. You can play on the Internet or a LAN with up to 16 players simultaneously. Mtp Target is enti…

GNU make 3.81 by Paul D. Smith
GNU make is a tool which controls the generation of executables and other non-source files of a program from the program's source files. Make gets its knowledge of how to build your program from a file called the makefile, which lists each of the non-source files and how to compute it from other…

Makefile::Parser 0.11 by Agent Zhang
Makefile::Parser is a simple parser for Makefiles. $parser = Makefile::Parser->new; # Equivalent to ->parse('Makefile'); # Get last value assigned to the specified variable 'CC':

Genetic Algorithm File Fitter 0.5.0 by Douglas Augusto
Genetic Algorithm File Fitter (gaffitter) is a command-line software written in C++ that extracts --via Genetic Algorithm-- subsets of an input list of files/directories that best fit the given volume size (target), such as CD, DVD and others. Genetic Algorithm File Fitter is initially designed to r…

EVP dirdiff 0.1.2 by Edward Pelyavski
EVP dirdiff recursively compares two directory trees using a message digest (hash), e.g. MD5. Requires the Boost C++ library. How to build: make sure the Boost library is added to INCLUDE and LIB of your C++ compiler, then go to the evp-dirdiff installation directory.

Kvblade Alpha 2 by Sam Hopkins
Kvblade project is a kernel module implementing the target side of the AoE protocol. Users can command the module through sysfs to export block devices on specified network interfaces. The loopback device should be used as an intermediary for exporting regular files with kvblade.…

Grand 0.7.2 by Christophe Labouisse
Grand is a tool to create visual representations of ant target dependencies. It differs from tools like Vizant or AntGraph by taking a totally different approach, relying on the Ant API rather than parsing the XML files directly. This enables Grand to provide some nifty features such as the support of the a…

PhotoGallery 20050725 by Souken Group, LLC
PhotoGallery is simple and easy to use image gallery software that automatically generates image thumbnails. It is written in PHP and uses Smarty for display and GD2 for image manipulation. It is small and simple, and can be easily integrated into existing Web sites and software. Language files a…

ulogd 1.24 by Harald Welte
ulogd is a replacement for traditional syslog-based logging (using the LOG target) in iptables-based firewalls. ULOG/ulogd has a different concept. Packets get copied to a special logging daemon, which can do very detailed logging to different targets (plaintext files, MySQL databases, ...). ulog…

shmux 1.0 by Christophe Kalt
shmux project is a program for executing the same command on many hosts in parallel. For each target, a child process is spawned by shmux, and a shell on the target is obtained using one of the supported methods: rsh, ssh, or sh. The output produced by the children is received by shmux and either (optionall…

mod_log_mysql 20031023 by Sonke Tesch
mod_log_mysql can log requests to Apache 2 using a MySQL database. Here are some key features of "mod log mysql": Seamless integration into the standard Apache logging configuration. Only one configuration line needed to start logging. Free SQL use. Multiple databases, database users and/…

DbDiff 0.2.0 by Eric Kolve
DbDiff project performs a diff between two databases. Currently, only MySQL is supported. It generates the necessary SQL to alter the target database and apply the changes in the proper order to satisfy any constraints that exist while preserving the data in the target database. What's New in…

PHP Image Manipulation Class 1.0.4 by Stefan Gabos
PHP Image Manipulation Class can be used to perform several types of image manipulation operations. It can rescale images to fit in a given width or height keeping (or not) the original aspect ratio, flip images horizontally or vertically, and rotate images by a given angle while filling the empt…

remake 0.61 by R. Bernstein
remake is a modern version of the GNU make utility that adds improved error reporting, the ability to trace execution in a comprehensible way, and a debugger. The debugger lets you set breakpoints on targets, show and set variables, inspect target descriptions, and see the target call stack. If you a…

Event 1.06 by Joshua N. Pritikin
Event is an event loop processing module. use Event qw(loop unloop); # initialize application Event->flavor(attribute => value, ...); my $ret = loop(); # and some callback will call The Event module provides a central facility to watch for various ty…

srcpkg 1.1 by Ryan McGuigan
SouRCe PacKaGer (srcpkg) is a program for managing separate software packages under the same directory hierarchy. It is especially useful for packages distributed as source code. srcpkg is similar to GNU Stow, Depot, etc., but is designed to be able to handle large, complex, interdependent packag…

File Beamer 0.1.5 by Martin Holler
File Beamer is an easy to use file transfer tool. The program is platform independent. That means it runs on Windows 98/ME/2000/XP, Linux, Unix and Mac OS X. This is made possible by using Trolltech's Qt library, which provides an easy to use GUI toolkit, networking functions and a lot more.

The Regex Coach 0.8.3 by Dr. Edmund Weitz
The Regex Coach is a graphical application which can be used to experiment with (Perl-compatible) regular expressions interactively. Here are some key features of "The Regex Coach": It shows whether a regular expression matches a particular target string. It can also show which parts of the…

Advanced Assembler 0.9.0 by Alexandre Becoulet
Aasm is an advanced assembler designed to support several target architectures. It has been designed to be easily extended and should be considered as a good alternative to monolithic assembler development for each new target CPU and binary file format. Aasm should make assembly programming ea…
"""Unit tests for puzzle.tester classes and functions.""" import unittest import puzzle.latinsquare as ls import puzzle.sudoku as su import puzzle.tester as pt TEST_PUZZLE_STRINGS = [ "89.4...5614.35..9.......8..9.....2...8.965.4...1.....5..8.......3..21.7842...6.13", "..75.....1....98...6..1.43.8.5..2.1.......2...1.7....9..3..8..4.4.9..3..9....6.2.", "..8......1..6..49.5......7..7..4.....5.2.6...8..79..1..63.....1..5.73......9..75.", ] SOLVED_PUZZLE_STRINGS = [ "893472156146358792275619834954183267782965341361247985518734629639521478427896513", "387524961124639875569817432835492617796185243412763589673258194248971356951346728", "498157632137682495526439178671348529359216847842795316763524981915873264284961753", ] class TestFunctions(unittest.TestCase): """Test the helper functions / utilities""" def test_has_same_clues(self): """We can verify a solution is derived from a puzzle""" for i, puz in enumerate(TEST_PUZZLE_STRINGS): pzzl = ls.LatinSquare(starting_grid=ls.from_string(puz)) soln = ls.LatinSquare(starting_grid=ls.from_string(SOLVED_PUZZLE_STRINGS[i])) self.assertTrue(pt.has_same_clues(pzzl, soln)) self.assertFalse(pt.has_same_clues(soln, pzzl)) # Can handle empty puzzles empty_puzzle = ls.LatinSquare(grid_size=ls.DEFAULT_PUZZLE_SIZE) self.assertTrue(pt.has_same_clues(empty_puzzle, pzzl)) self.assertFalse(pt.has_same_clues(pzzl, empty_puzzle)) # Can handle mismatched sizes small_puzzle = ls.LatinSquare(grid_size=ls.DEFAULT_PUZZLE_SIZE - 1) self.assertFalse(pt.has_same_clues(empty_puzzle, small_puzzle)) def test_from_file(self): """Can load test data from file""" tester = pt.PuzzleTester(ls.LatinSquare) tester.add_test_cases(pt.from_file("data/sudoku_9x9/hardest.txt")) self.assertEqual(13, tester.num_test_cases()) class TestPuzzleTester(unittest.TestCase): """Tests for the class PuzzleTester using LatinSquare puzzles""" def setUp(self): self.pt = pt.PuzzleTester(ls.LatinSquare) self.test_cases = [] for i, puz in enumerate(TEST_PUZZLE_STRINGS): self.test_cases.append({'label': f"test {i}", 'puzzle': puz}) def test_class_init_and_add_cases(self): """Test we can create class and add test cases""" self.assertTrue(isinstance(self.pt, pt.PuzzleTester)) self.pt.add_test_cases(self.test_cases) self.assertEqual(3, self.pt.num_test_cases()) # Add some bad cases self.assertRaises(ValueError, self.pt.add_test_cases, 'banana') self.assertRaises(ValueError, self.pt.add_test_cases, ['banana', 'vodka']) # Add test cases without labels self.pt = pt.PuzzleTester(ls.LatinSquare) for tc in self.test_cases: del tc['label'] self.pt.add_test_cases(self.test_cases) self.assertEqual(3, self.pt.num_test_cases()) def test_class_repr(self): """Class can represent itself""" expected = "PuzzleTester(LatinSquare, test_samples=1, anti_cheat_check=True, num_test_cases=0, solver_labels=set())" self.assertEqual(expected, repr(self.pt)) class TestSudokuTester(unittest.TestCase): """Tests for the class PuzzleTester using Sudoku puzzles""" def setUp(self): self.include_levels = ['Kids', 'Easy', 'Moderate'] self.test_cases = [x for x in su.SAMPLE_PUZZLES if x['level'] in self.include_levels] self.pt = pt.PuzzleTester(puzzle_class=su.SudokuPuzzle) self.pt.add_test_cases(self.test_cases) def test_solver(self): """Use PuzzleTester class to test SudokuSolver""" for method in su.SOLVERS: with self.subTest(f"method: {method}"): solver = su.SudokuSolver(method=method) self.assertEqual(5, self.pt.num_test_cases()) self.assertEqual(5, self.pt.run_tests(solver)) def callback(self, a, b, c, d, e): self._callback_called = True 
self._callback_params = (a, b, c, d, e) def test_callback(self): """Test that callback is called...back""" self._callback_called = False solver = su.SudokuSolver() self.pt.run_tests(solver, callback=self.callback) self.assertTrue(self._callback_called) def test_results(self): """Check test results""" solver = su.SudokuSolver() self.assertEqual(3, len(self.pt.get_test_results())) self.pt.run_tests(solver) self.assertEqual(4, len(self.pt.get_test_results())) self.assertEqual(1, len(self.pt.get_solver_labels())) results = self.pt.get_test_results() newpt = pt.PuzzleTester(puzzle_class=su.SudokuPuzzle) newpt.set_test_results(results) self.assertEqual(self.pt.get_test_results(), newpt.get_test_results()) def test_class_repr(self): """Class can represent itself""" expected = ( f"PuzzleTester(SudokuPuzzle, test_samples=1, anti_cheat_check=True, " f"num_test_cases={len(self.test_cases)}, solver_labels=set())" ) self.assertEqual(expected, repr(self.pt)) if __name__ == "__main__": unittest.main()
Here is an example: Tornado uses the standard logging library by default, and sends logs to STDOUT. Sometimes you may want logs stored in a database or a flat file. This method is fairly simple. You could pass the 'log_file_prefix' parameter via the command line, or more elegantly add the options in your code. If you have multiple instances of the same app running, I would keep the logs separate by using the port number. MongoLog is a really cool open-source centralized logging module for Python and MongoDB. It's available on GitHub at https://github.com/andreisavu/mongodb-log. Follow the installation docs in the README to get MongoLog all set up. After cloning and installing MongoLog, we need to modify our Tornado app. First, import MongoLog. Next we will override the 'Application.log_request' method to implement MongoLog. After everything is up and running, you can view the raw logs in the database or use the web UI that ships with MongoLog. (A rough sketch of these pieces appears at the end of this section.)

Customized Galant from Ikea with lots of shiny Apple goodness.

The past 24 hours have been filled with a lot of media hype, some rain, some wind, and a few Darwin awards. In my home at the Jersey Shore we still have a 'situation' - no electricity. Luckily we are served well by Twitter and can find out storm details from fellow local internet users, thank you Twitter. Internet access has remained available, thanks to the awesome local ISPs and UPS units on our modems. Amazon Web Services seemed to have no outages during this storm, thank you AWS. Thank you also to AT&T and Verizon for keeping our cellular services online. I'm posting this via a tethered iPhone. However, the electricity infrastructure has proved unable to handle single points of failure. We are on day 2 of no power, thanks to a broken transformer miles away. 1 - Why don't we have high-availability redundant infrastructure? 2 - Why don't we have safer, less vulnerable, underground power lines? We need to develop a redundant system like the internet for our electric grid. I won't hold my breath. For now I'll take it into my own hands and outfit my place with solar and diesel backup solutions. Thank you Irene for testing our infrastructure. Update: I just ordered this generator so I can keep writing code during power outages.

An extra-wide touchpad and dual detachable iPad displays on a MacBook Pro. Wow! Add 32GB of RAM and quad SSD RAID too. I'm not sure why they left the optical drive in this mockup, that has got to go. Let's hope Apple is really brewing this, I will camp out on the street for this one!

This is where I spend most of my time… My primary workstation is a MacBook Pro 17" with an i7 CPU, 8GB RAM, 256GB SSD and a 27" Apple Cinema Display. I use an Apple bluetooth keyboard and Magic Trackpad. I also use a MacBook Pro 15" for testing security related stuff that I don't want to pollute my main workstation with. In the background of my view are 2 Samsung displays. On the bottom is a 52" and above is a 24" that I use for displaying network monitoring and analytics info. Below the monitors is a Mac Mini running Lion Server, a Drobo with 8TB of storage, an Apple Airport Extreme, an APC UPS and my dusty Xbox. Let's dissect my network infrastructure… Dual internet connections with 1 DOCSIS v3.0 modem and 1 DOCSIS v2.0 modem. Protecting my network are 2 security appliances, a Cisco ASA 5505 and a custom built pfSense box. The core is an HP ProCurve 1810G-24 switch that sits on top of my retired Cisco 3500 WS-C3548-XL-EN switch.
My primary wireless network is powered by an Apple Airport Extreme, and I run a Buffalo WHR-HP-G300N router with OpenWRT for guests. Most of my servers are EC2 instances running on Amazon Web Services. However, I do use a Dell PowerEdge 1950 server with VMware ESXi for local testing. Also in the rack is a Samsung 17" CRT monitor, 2 Belkin KVM switches, a Foscam FI8918W IP camera and a Belkin UPS.

It's been over a decade since I touched a static website. After Amazon announced website endpoint support for S3, I wanted to give it a try since the benefits are pretty appealing. A static website eliminates the databases and server-side code execution. Hosting a static site on Amazon S3 eliminates supporting the web server, load balancing, caching, etc. By switching from a common CMS such as WordPress, you're most likely to gain a lot of speed and lower the possibility of security vulnerabilities. The tools I used to create a full featured static blog: Cyrax generates the static site (Python / Jinja2 templates); Flickr for photo hosting (could do this with S3, but I like the iPhoto integration with Flickr); Disqus handles the user commenting system; Google Analytics takes care of traffic analysis. In the end, I am sticking with WordPress for T3CH.com. For simpler blogs, I'd definitely consider using Cyrax.

Testing out the official WordPress app for iOS. So far, I think some primary functionality is lacking… There is no way to save a draft post. And the post editor is very simple. I attempted to paste an image into this post with no luck. I do value the simplicity but would like to be able to save drafts and embed images in the posts from my iPad and iPhone. As I am typing this last line, I can't even see it since the post editor does not scroll up. I had to rotate into portrait mode to finish this. I hope there are more WordPress apps to choose from!

I upgraded 2 machines to Mac OS Lion yesterday. The download is not fast and the file is almost 4GB, so it's a good idea to save a copy of the installer app after it finishes downloading. That file will be auto-deleted after the Lion upgrade. Then you'll be able to create USB installer disks and upgrade other machines while saving bandwidth. First up was the MacBook Pro 17 (my primary workstation). The upgrade was seamless and I have no complaints. I disabled Spotlight indexing since it was lagging the system, now it's super snappy. Next I upgraded a Mac Mini (my primary server). The Mac Mini was at a different physical location, so I performed the upgrade via Apple Remote Desktop. Since the box was already running Mac OS 10.6 Server, the App Store forced me to purchase both Mac OS Lion and the new Server app… which I expected. The server upgrade did take a bit more effort. Here are the issues that I had to deal with:
- I was charged for the Mac OS Lion upgrade twice. I emailed the App Store and received a refund less than 1 hour later.
- WebDAV is broken. The settings are there (they have been moved from the Web Server config screen to the File Sharing config), but it just doesn't function… I tested with cadaver and it would not work. Instead of spending the whole night banging my head on this issue, I set up WebDAV with Apache in a Linux VM.
- The FTP server is gone. Yeah, FTP isn't ideal, but I do have some devices which only support offloading files via FTP. This bugs me, but I'll use vsFTPd in a Linux VM instead.
- The config options are lacking. In 10.6 there were way more settings and configurable options available in the Server Manager app.
Here is an example of the Web Server settings GUI:

Mac OS Lion was officially released earlier this week and I have been holding off on the upgrade since I didn't want to disturb any of the projects that I was working on. Well, now the weekend is here and I am ready to give it a spin! FWIW, I have been running developer beta releases of Lion on a spare MacBook Pro, but now I want the final release on my production workstation. Before you upgrade, it's a MUST to check the RoaringApps App Compatibility List and compare all of your apps to see which ones will work and which ones will not. This list is built by the community and is the most critical tool in determining if you're ready to upgrade. Another MUST is to create a clone of your existing system; I use SuperDuper!. If your upgrade fails, SuperDuper makes it simple to roll back the system state quickly. Ok, I am ready to go upgrade my box… and then I will be upgrading my Mac OS Servers!
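Going back to the Tornado logging post at the top of this section: the original code snippets did not survive in this copy, so here is a minimal sketch of the same ideas. It uses pymongo directly instead of MongoLog's own handler, and the log file path, database/collection names and logged fields are illustrative assumptions only.

# Rough sketch: per-port file logging plus request logs stored in MongoDB.
import tornado.ioloop
import tornado.options
import tornado.web
import pymongo

# Equivalent of passing --log_file_prefix on the command line; keeping one
# log file per port keeps multiple instances of the same app separate.
tornado.options.options.log_file_prefix = "/var/log/myapp-8888.log"
tornado.options.parse_command_line()

mongo = pymongo.MongoClient()
log_collection = mongo["logs"]["requests"]

class LoggingApplication(tornado.web.Application):
    def log_request(self, handler):
        # Keep the default stdout/file logging...
        super().log_request(handler)
        # ...and also store a structured record per request in MongoDB.
        log_collection.insert_one({
            "method": handler.request.method,
            "uri": handler.request.uri,
            "status": handler.get_status(),
            "ip": handler.request.remote_ip,
            "request_time_ms": 1000.0 * handler.request.request_time(),
        })

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello")

if __name__ == "__main__":
    app = LoggingApplication([(r"/", MainHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()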
Just wanted to chime in with my observations, too. My Desktop team bought (50) 2013 iMac 21.5" workstations for our 2014 summer refresh. These are the second generation "thin" iMacs. The Macs arrived in July, but remained in Apple retail boxes until August, when they were imaged with my 10.9.4 Mavericks production modular image (DeployStudio/NetBoot). 40 of the 50 iMacs went into production running 10.9 Mavericks last summer/fall. The remaining (10) iMacs got pushed back due to other projects in 2014, so they didn't get deployed until Jan 2015. By this time I had a production Yosemite 10.10.1 image ready, so the Desktop Techs re-imaged the new iMacs again, this time with my 10.10.1 Yosemite image. According to the techs, the iMacs came up from the DeployStudio imaging process just fine once, and then the issue covered in this forum post was discovered - it affected 9 out of the 10 Yosemite iMacs. Why the 10th iMac is immune to the issue is beyond me. Weird.

Before I discovered this post, we tried the following things:
Rebooting - fixed 1 iMac. That was easy!
Zapping PRAM - fixed 1 iMac.
Running fsck and/or the Recovery GUI tools - fixed the remaining Macs. Multiple attempts were required. Like a Vegas slot machine.
One of my Techs claimed that unplugging Ethernet at boot fixed 1 iMac too. One Tech here also swears by running fsck first THEN 3 PRAM zaps. Total voodoo. Am I on an episode of "Mac Admin PUNKED Season 2"?

I did notice a few things: The iMacs were all stuck on a light grey and dark grey boot loader screen. After the Macs were "fixed" they started booting with the new, improved, sexier black screen with white logo. Related? I dunno, but there is a pattern here. When booting into Verbose mode, all the iMacs would get stuck at the following standard output boot message: "promiscuous mode enabled succeeded" and other output related to Apple/Broadcom driver information. This was 100% reproducible. Sounds like boot cache corruption to me... possibly.

All 10 of my Yosemite iMacs are on identical 2013 or 2014 hardware. All Macs are imaged from an identical 10.10.1 or 10.10.2 deployment image. All Macs are bound to the same AD (with Managed Mobile cached accounts). None of the Macs are using FV2 or any disk encryption. Firewalls (ALF) are enabled on most iMacs but not all. None of the Macs have Wi-Fi enabled - they all use copper Ethernet. No Brew or 3rd-party low-level stuff in /usr. No VMs. No BootCamp. No 3rd-party drivers other than some HP printer stuff (from Apple's blessed SUS). I have almost 300 Macs in production - good thing most Macs are running 10.8 or 10.9!

Speak of the devil! It JUST happened to me during a power blink as I was typing this post (freaky!). I now have 10+ more production 10.10.1 and 10.10.2 Yosemite Macs in boot limbo - right this very moment. I was hoping 10.10.2 fixed this issue, but I can confidently confirm that 10.10.2 did NOT fix the issue.

OK, so me and my team have figured out 2 ways to fix this based on suggestions.
Force a reboot (press and hold the power button if needed)
Boot into Single User Mode (Command + S)
Run fsck -yf as needed
Zap PRAM 3+ times
There is some confusion about whether zapping PRAM should be done first or last. We now think last is best.
(from Allister's post @ https://www.afp548.com/2015/01/14/when-yosemite-has-fallen-and-it-cant-get-up/) From the recovery partition (or target-disk or single-user mode), deleting the following directories has sometimes been enough:
rm -rf /Volumes/Macintosh\ HD/private/var/db/BootCache*
rm -rf /Volumes/Macintosh\ HD/Library/Caches/com.apple*
He optionally mentions running this too:
defaults write /Volumes/Macintosh\ HD/Library/Preferences/com.apple.loginwindow.plist DSBindTimeout -int 10
I am not doing this step on my production Macs yet, but I do have it applied to some of my IT Macs. If need be I suppose I can add this command to my DeployStudio final setup script. Call a priest and request an exorcism. Apple is delinquent regarding this matter. This issue goes back several months (I first heard about it from a Desktop Tech here who saw it once back in October 2014). I'd call it a critical issue. OS X 10.10.3, where are you?
Irrational numbers
by grondilu (Friar), on Dec 17, 2012 at 16:44 UTC

As far as I know there is no such thing as an irrational number in computing. And by this I mean: a number that has an infinite number of non-repeating decimals. Or in other words, a number that is real, but that is not rational. I think it's a bit frustrating. Computers should have a more accurate representation of real numbers. Even in a programming language as awesome as Perl 6, pi for instance is still defined with a decimal approximation:

my constant pi = 3.14159_26535_89793_238e0;

Honestly, I would have expected a bit more than that from Perl 6. I'm not sure it would be useful but I don't care much. It would be cool and that's what matters to me :) And what's cool but apparently useless often turns out to be quite useful eventually, so it's worth considering doing it even if we don't see any immediate use for it. Anyway, I'd like to discuss how I imagine it could be done.

In maths, one way of defining a real number is to define it as the limit of a sequence of rational numbers. If this sequence is constant above a certain rank or if the limit is actually a rational number, then the number is rational. Otherwise, it is said to be irrational. Infinite sequences are not hard to define in computing, provided you feel comfortable with the notion of closures or lazy lists. Therefore a way of defining a real number would be to use exactly this: a lazy list or a closure, just as in this Rosetta Code task, where irrationals such as pi or sqrt(2) are defined with continued fractions (which are a particular case of infinite lists of rationals). Here is another example, also from Rosetta Code: here we define the zeta function as a function returning a lazy list, and we display the thousandth term of the list returned by the call of zeta with two as an argument. Any infinite list would do, provided that it converges. You can use a Taylor series for instance. You can even imagine using non-converging sequences. Who knows, maybe it could be useful. They could appear in the middle of a calculation and then disappear at the end. Kind of like with imaginary numbers. But that's another story.

It seems to me that arithmetic should not be too difficult to define. For instance, the addition operator would be a function that takes two lazy lists and returns a lazy list whose terms are the sums of the terms of the lazy lists. In Perl 6, I'd write it like this: Couldn't it be as simple as that?

Equivalence and order relations
That might be the toughest problem. There is no simple way for a computer to tell if two infinite lists are equal. Unless of course it can make a deduction from an analytic definition of the terms. But in the general case, the computer just can't inspect all the terms one at a time, because if the two lists are actually equal, then the computer will need an eternity to reach a conclusion. So just as arithmetic operators on irrationals would return a lazy list of rational numbers, an equivalence relation operator on irrationals would return a lazy list of booleans. Like in real life, when you ask someone something but you're not sure about their answer: - Do you think that's true? - Yes, it's true. - You're really sure? - Hang on, let me check again. Yes, it's true. - Really, really sure? And so on. A naïve implementation would thus be: Unfortunately, there are several reasons why this could not be convenient. But that would be a start.
I told you how I think irrational numbers could be implemented in Perl or Perl6. I won't say more, and I'll just read what you think of it. I really think it would be cool if we could just have a "Real" number type that would fill all use cases. One possible application would be for programs that have to deal with very wild ranges of values, without loss of any precision. The example I have in mind is programs such as the Kerbal Space Program: The unique necessities of the game, which has to correctly handle distances in a range of at least 13 orders of magnitude and velocities in the order of kilometers per second, have required a number of workarounds to avoid numerical stability issues. Fixing all the known bugs of this nature took multiple updates over a period of months. I know this game is not open source and it's not written in Perl, but someone might be willing to write something similar. Update: small example Here is a toy example module where I define addition, multiplication and display up to an arbitrary precision. Then I define one, the exponential and arctangent functions, and then I compute and display two, three, e, e+one, e*e and pi.
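Since the Perl 6 snippets did not survive in this copy of the post, here is a rough Python sketch of the same idea, purely illustrative: a "real" is an infinite generator of rational approximations, arithmetic works term by term on the sequences, and you only ever ask for as many terms (as much precision) as you need.

# A "real number" as an infinite generator of Fraction approximations.
from fractions import Fraction
from itertools import count, islice

def e():
    """Partial sums of sum(1/n!) -- converges to Euler's number."""
    total, fact = Fraction(0), 1
    for n in count():
        fact *= max(n, 1)
        total += Fraction(1, fact)
        yield total

def sqrt2():
    """Continued-fraction convergents of sqrt(2): x -> 1 + 1/(1 + x)."""
    x = Fraction(1)
    while True:
        x = 1 + 1 / (1 + x)
        yield x

def add(a, b):
    """Addition of two 'reals': add the two sequences term by term."""
    for x, y in zip(a, b):
        yield x + y

def approx(real, n=25, digits=15):
    """Take the n-th rational approximation and show it as a decimal."""
    term = next(islice(real, n, n + 1))
    return f"{float(term):.{digits}f}"

print(approx(e()))                 # ~2.718281828459045
print(approx(sqrt2()))             # ~1.414213562373095
print(approx(add(e(), sqrt2())))   # ~4.132495390832140

Equality would indeed be the hard part: comparing two such generators can only ever give "still looks equal after n terms", exactly as the post describes.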
Hyperband and RL search strategies are not running for Tabular prediction.

import autogluon as ag
from autogluon.task import TabularPrediction as task
import sklearn.datasets
import pandas as pd

cancer_data = sklearn.datasets.load_breast_cancer(return_X_y=False)
data = pd.DataFrame(cancer_data['data'], columns=cancer_data['feature_names'])
data['target'] = cancer_data['target']

gbm_options = {
    'num_boost_round': 100,
    'num_leaves': ag.space.Int(lower=26, upper=66, default=36),
}
hyperparameters = {
    'GBM': gbm_options,
}
hp_tune = True
time_limits = 2 * 60
num_trials = 3
output_directory = './agModels--tuningCancerDataset'

data = task.Dataset(data)
label_column = 'target'
search_strategy = 'hyperband'

predictor = task.fit(
    train_data=data,
    holdout_frac=0.3,
    label=label_column,
    output_directory=output_directory,
    time_limits=time_limits,
    num_trials=num_trials,
    hyperparameter_tune=hp_tune,
    hyperparameters=hyperparameters,
    search_strategy=search_strategy,
    auto_stack=False,
    stack_ensemble_levels=0
)

The error:

File "/Users/rmelikbe/Desktop/repos/autogluon/autogluon/utils/tabular/ml/trainer/abstract_trainer.py", line 354, in train_single_full
    Y_train=y_train, Y_test=y_test, scheduler_options=(self.scheduler_func, self.scheduler_options), verbosity=self.verbosity)
File "/Users/rmelikbe/Desktop/repos/autogluon/autogluon/utils/tabular/ml/models/lgb/lgb_model.py", line 285, in hyperparameter_tune
    scheduler = scheduler_func(lgb_trial, **scheduler_options)
File "/Users/rmelikbe/Desktop/repos/autogluon/autogluon/scheduler/hyperband.py", line 136, in __init__
    reward_attr=reward_attr, visualizer=visualizer, dist_ip_addrs=dist_ip_addrs)
File "/Users/rmelikbe/Desktop/repos/autogluon/autogluon/scheduler/fifo.py", line 91, in __init__
    self.searcher = searcher_factory(searcher, **kwargs)
File "/Users/rmelikbe/Desktop/repos/autogluon/autogluon/searcher/searcher_factory.py", line 14, in searcher_factory
    raise AssertionError("name = '{}' not supported".format(name))

What is the output of the assertion message?

'hyperband' is not a search strategy, but a scheduling strategy which can be run with different underlying searchers. Currently supported are 'random' (default), 'skopt', 'grid'. You probably want to keep it at 'random' (as this is what gives you the Hyperband method as published). In the near future, there will be more searchers supported here, but it may take a few months still. I am personally not very familiar with how a call to task.fit maps to the creation of a scheduler. When you create a scheduler (FIFOScheduler, HyperbandScheduler), you can pass searcher and search_options, and searcher -> name in searcher_factory. You seem to be using search_strategy above ...

Hi @mseeger, yep, I was using hyperband as a search_strategy. I wanted to use HyperbandScheduler behind the scenes. If it is not supported at this moment, it would be correct to remove it from the docstring: https://github.com/awslabs/autogluon/blob/a1347ea81ff678a6047cc1723c32fcf0dc81bbf2/autogluon/task/tabular_prediction/tabular_prediction.py#L179

Hello, this is not the problem here. If you look into tabular_prediction.py you see that towards the end, when scheduler_options are set, you have searcher = search_strategy. This is wrong for Hyperband. HB is not a searcher, but a scheduler, and you want searcher = 'random' if search_strategy is 'hyperband'. This would fix your issue. I still think that code should be refactored, because it mixes searcher and scheduler, something we need to keep apart for future use cases. @Innixma ?
One complexity here is that not every searcher can be used with the two different schedulers we have, namely FIFO and Hyperband. Currently, all searchers can be used with FIFO, but only 'random' can be used with Hyperband. But this will change at some point ... In fact, let me try and fix this for now. I'll do a pull request and notify you. Have a look at #316.

"In fact, let me try and fix this for now. I'll do a pull request and notify you."

Thanks for the reply. Yep, it is a bit confusing that you put hyperband in search_strategy. What about the rl scheduler? This also works only with random.

Hello, rl is a searcher; it works with the FIFO scheduler. There are currently only two schedulers, FIFO and Hyperband. And making Hyperband work with searcher != random is not trivial; we are working on this. But maybe we should not bother the user with the distinction between scheduler and searcher. Let us see what others say. If 'rl' does not work for you, this has a different reason; check what you pass for search_options. I have no idea what the RL searcher is doing. There is another bug that affects why 'rl' does not work. It has to be rewritten (but not by me).

Hyperband should now work with TabularPrediction thanks to: https://github.com/awslabs/autogluon/pull/316
We will look into why 'rl' does not work.
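To illustrate the distinction the maintainer is drawing (this is an illustrative sketch, not the actual code from PR #316): 'hyperband' names a scheduler, so when a user passes it as a "search strategy" the code should still hand the scheduler a plain 'random' searcher, while other strategy names are really searcher names running under the default FIFO scheduler.

# Illustrative mapping from a user-facing search_strategy to (scheduler, searcher).
def resolve_scheduler_options(search_strategy):
    if search_strategy == 'hyperband':
        # Hyperband is a *scheduler*; it currently only supports the 'random' searcher.
        return 'hyperband', 'random'
    # Anything else ('random', 'skopt', 'grid', 'rl', ...) is a *searcher* name
    # and runs under the default FIFO scheduler.
    return 'fifo', search_strategy

print(resolve_scheduler_options('hyperband'))  # ('hyperband', 'random')
print(resolve_scheduler_options('skopt'))      # ('fifo', 'skopt')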
Imagemagick no decode delegate for this image format pdf

If you can provide a snippet of the code causing trouble, that would help with understanding the problem. Unfortunately there is no support for Apple's new HEIC (HEIF) format out of the box. Given that ImageMagick is a go-to heavy-lifting image processing library put to work by many, a code execution bug could enable attackers to gain code execution on hosts that themselves have no remote network access.

Fresh portupgrade -fR of ImageMagick-5.5.6_1 on a three day old 4.8-stable:
% convert -size 120x120 720923-000000.jpg -resize 120x120 +profile "*" thumbnail.jpg
convert: No decode delegate for this image format (720923-000000.jpg) [Invalid argument].

This file format is identical to that used by ImageMagick to represent images in memory and is read by mapping the file directly into memory. Commands which specify only encode=out_format specify the rules for an encoder which may accept any input format.

When I ran: convert my_image.jpg -resize 400x400\> small/my_image.jpg I got: convert: no decode delegate for this image format `my_image.jpg'. I would start by choosing an image format that is listed in your "ImageMagick Supported formats" area of the phpinfo() output, if you haven't already.

Aklapper renamed this task from "Thumbnails for a specific PDF file on Commons not generated" to "Thumbnails for specific PDF file on Commons not generated: 'no decode delegate for this image format'". Your local host must be Windows, which doesn't differentiate between upper and lower case in file names, and your web server Unix-based, which does; simple as that.

Note it's even broken if you don't open an image: $ display display: unable to open image `logo:': No such file or directory. Because the new code structure needed to be similar, some of the ImageMagick code was used as the basis for the new work.

Solved: my ImageMagick installation is hacked together, and there was a bug in the script located at /usr/local/bin/convert: it called convert $@ when it needed to call convert "$@". Hence, the free-form comment text wasn't escaped properly. When resizing/converting, if you have multiple incoming work items each representing one file, by default (no Wait for All) the node will perform the operations in parallel. However, it's much faster to convert all files in one process (by putting a Wait for All before the ImageMagick node).

Both IM and Pango had been installed via MacPorts, yet IM complained that there was no decode delegate for this image format. Obviously, the convert command cannot identify the image type ("no decode delegate for this image format"). To add support for the image format, download and install the requisite delegate library and its header files and reconfigure, rebuild, and reinstall ImageMagick. Thumbnails are now created, which is one part of the problems I was facing. If you are accessing the data over HTTP then there won't be a file system path for you to use. Command-line Tools: Convert. Use ImageMagick® to create, edit, compose, and convert bitmap images. Yet, paradoxically, convert -list delegate, convert -list configure, and convert -list format all hinted that PNG and JPEG support were both installed.
BTW: I'm surely no expert on ImageMagick but the file extension doesn't mean anything; even renaming it to, say, "1.junk" won't make any difference to "identify". I don't want to comment it out on the not-working wiki because my understanding is I would have to re-upload every image file if I did so. Just guessing, make sure you have the appropriate JPEG 2000 delegates (encoders) in your ImageMagick installation, as some of them might not be enabled/installed by default. Then I went to Program Files/Gallery Remote and edited the im.properties file to make sure it pointed to the convert.exe file in the IM folder. And there is an interesting thing: ImageMagick will auto-recognize the file's real format from the file header. With ImageMagick you can create images dynamically, making it suitable for Web applications.

I can search for other document formats like images, audios, and videos but not a PDF file. All manipulations can be achieved through shell commands as well as through an X11 graphical interface (display). Question or problem about Python programming: How might one extract all images from a PDF document, at native resolution and format? It says there's no text in this page, but the page, revision, and text tables all seem to have the right data. To run a command like exec("c:\Program Files (x86)\gs\gs9.07\bin\gswin32 -h") from a PHP script, you would need to configure the website to run as administrator. This exception indicates that an external delegate library or its headers were not available when ImageMagick was built. There is also some weird character encoding bug going on which is probably irrelevant. Inserting an Image Into an Existing PDF And/or Converting Multiple Images to PDF: I needed to insert an image (a photo of a signature) into an existing PDF. The other part is that after ingesting a PDF file I cannot search for it. convert: no decode delegate for this image format `/tmp/pidgin-latex--1666483318.dvi'. PHP 5.3, Apache 2.2.21. If I run my PHP script from the command line it works fine. Caught exception: no decode delegate for this image format `' @ error/blob. Questions and postings pertaining to the development of ImageMagick, feature enhancements, and ImageMagick internals. Google already provided the tool to decode WebP images in the libwebp package; your uploaded file works on Arch.

Source: imagemagick Version: 8:22.214.171.124+dfsg-7 Severity: wishlist Tags: patch Dear Maintainer, I noticed that the current ImageMagick packages don't support the HEIF/HEIC image format, which seems like it is becoming more common, especially in the Apple world, and therefore images in this format are popping up in the wild more often now. So you should try to set the input type explicitly (as you have done for the output type), e.g. convert: missing an image filename `MARBLE8.pdf' @ convert.c/ConvertImageCommand /2822. Commands which specify only decode="in_format" specify the rules for converting from in_format to some format that ImageMagick will automatically recognize. Note that if any of the wildcard characters, *, ?, [ or ], appear in an element of the list, it will be treated as a pattern, rather than a suffix. I realize you stated that you tried different formats, I'm just wondering if you chose one that was in the list.
Usually run from the command line, an API is also provided through PHP for running on Linux servers. "No decode delegate for this image format" - questions and postings pertaining to the usage of ImageMagick regardless of the interface. Yes, you're right, the Imagick::unsharpMaskImage() CMYK issue is distinct from the alpha channel issue, as it applies to any JPEG with a CMYK color space, so I'll split it off into a new ticket and refresh the patch here without it. There are tons of tools that cost something, crippled 2-page "free" converters or online web services that will make you tear your hair out. Also, a vector image file format like SVG or WMF, or an image that is pre-processed by some 'delegate', like digital camera image file formats, could not possibly be 'streamed' because there are no actual rows of pixels in the image, only drawn objects (lines, polygons and gradient shades). Use ImageMagick to translate, flip, mirror, rotate, scale, shear and transform images, adjust image colors, apply various special effects, or draw text, lines, polygons, ellipses and Bézier curves. ImageMagick has hundreds, no, thousands, no... infinite opportunities for modifying, creating and transforming images and text. When running: $ convert summer-bokeh.jpg -quality 100 -density 300 -resize 2480x3508!

Other observations: The page header has "Read", "Edit", and "View History" tabs, as opposed to a page that really doesn't exist, which only has "Create". Editing the page displays. It looks like ImageMagick's delegates for the PDF conversion may have changed with the new. Commands which specify only encode="out_format" specify the rules for an "encoder" which may accept any input format. ImageMagick supports translating image data from a variety of sources and formats. I do not guarantee it is the responsible party, but it is the only difference that I can see between the two setting files. When resizing/converting, if you have multiple incoming work items each representing one file, by default (no Wait for All) the node will perform the operations in parallel. However, it's much faster to convert all files in one process by enabling batching. Use the magick program to convert between image formats as well as resize an image, blur, crop, despeckle, dither, draw on, flip, join, re-sample, and much more.

Resolved "No decode delegate for this image format" when converting a full PDF (all pages) to PNG. Exception encountered when running Alpine in a Docker container. The Delegates (built-in): line in your identify --version output suggests that something went wrong during compilation; I'd expect output more in line with that produced by the packaged version of ImageMagick in Debian. Expected result: the expected result would be an image with at least the first page of the document. ImageMagick's image format support is usually provided in the form of loadable modules. I installed ImageMagick a few months ago, I'm finally at a point where I want it. This appears to be an issue with the version of ImageMagick installed on your server. The configure script looks at your environment and decides what it can cobble together to get ImageMagick compiled and installed on your system. The blob object contains correct information for the filename, length and mimetype. The resulting file was half the size of ImageMagick's by default, and it was easy to renumber and rotate the images, and quicker than looking up the ImageMagick switches.
no decode delegate for this image format `'@ error/constitut Questions and postings pertaining to the usage of ImageMagick regardless of the interface. I started looking up solutions online (including stackoverflow and what not), but everything I found didn’t fix my errors.
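Since the thread keeps circling back to the same diagnosis (the delegate for a format was simply not built in), here is a small, hedged Python sketch of how a script could check the installed ImageMagick's format list before attempting a conversion. The file names are placeholders, and the output-parsing is a rough heuristic based on `convert -list format` printing one format per line.

# Rough pre-flight check for a missing decode delegate.
import subprocess

def imagemagick_supports(fmt):
    """Return True if fmt appears in `convert -list format` output."""
    out = subprocess.run(
        ["convert", "-list", "format"],
        capture_output=True, text=True, check=True
    ).stdout
    wanted = fmt.upper()
    # Each data line starts with the format name, possibly suffixed with '*'.
    return any(line.split()[0].rstrip("*") == wanted
               for line in out.splitlines() if line.strip())

if imagemagick_supports("PDF"):
    # Delegate present: rasterize the (hypothetical) input.pdf to PNG pages.
    subprocess.run(["convert", "-density", "300", "input.pdf", "page-%02d.png"],
                   check=True)
else:
    print("No PDF decode delegate - install Ghostscript, then rebuild/reinstall ImageMagick")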
When originally working on a previous project, the team of myself and two friends realized that one of the bottlenecks of math education is the inability to grade short-answer problems in reasonable time frames. And this is a consequence of, well, being human. Things take time. Instructors need to grade anywhere from 30 to 150 students, and at anything more than one question per student, that would be an unreasonable request for short turnarounds.

What it does
∂Credit provides a way to perform partial-credit calculations automatically based on reduction steps. Even if the student makes a minor mistake somewhere midway through, the reduction is performed iteratively based on the previous step(s), so students are not penalized for following through with the error.

How I built it
First, I created a quick and dirty setup using sympy to generate algebra questions systematically from given inputs. This was the basis for the questions themselves. After that, it was simply a matter of applying a reduction algorithm to the relevant questions.

Challenges I ran into
For whatever reason, I could not get changes to the database to persist -- so much so that when talking with the Google Cloud sponsor for over 2 hours, he could not figure out what I was doing wrong. In the end, I decided to forgo the database in favor of a horrendously hacky solution.

Accomplishments that I'm proud of
Implementing the mako template rendering system without the use of C extensions, as writing those would take too much time, and how the design of the proof-of-concept turned out, even if the register/login sections are useless in a state without a database. https://bigredhacks2019fall.appspot.com/ is the main site -- the register link and login link are useless because I could not get a database to persist; logging in with any email/password combination will take you to the set of questions, but, for a direct link, just start from https://bigredhacks2019fall.appspot.com/questions

What I learned
Sometimes, when faced with no other options, bad solutions are viable ones. Specifically, pickling arbitrary Python objects into binary blobs, converting to base 64 and decoding to unicode, and reversing the process, for every question (the intended way was simply a database lookup by known question ID, but the moment the database refused to persist, I came up with this solution).

What's next for ∂Credit
Providing a system for better parsing, detecting the errors that the student made and showing the student where they went wrong, LaTeX integration, and an app/device that would be integrated into a previously worked on project that is ongoing for me and some friends, which would allow this system to be used on hand-written work as well.
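For illustration, here is a minimal sketch of the pickle/base64 round-trip described under "What I learned". The Question class is a made-up stand-in (the real project used sympy-generated questions); the point is only the serialization workaround itself.

# Serialize an arbitrary Python object to a text-safe blob and back,
# instead of looking it up in a database by question ID.
import base64
import pickle

class Question:
    def __init__(self, prompt, answer):
        self.prompt = prompt
        self.answer = answer

def to_blob(obj):
    """Pickle an object and encode the bytes as a unicode string."""
    return base64.b64encode(pickle.dumps(obj)).decode("ascii")

def from_blob(blob):
    """Reverse the process: decode the base64 text and unpickle the object."""
    return pickle.loads(base64.b64decode(blob.encode("ascii")))

q = Question("Solve 2x + 3 = 7", "x = 2")
blob = to_blob(q)          # safe to embed in a form field or URL
restored = from_blob(blob)
print(restored.prompt)     # "Solve 2x + 3 = 7"

(Worth noting: unpickling data that comes back from a client is unsafe in general, which is part of why this counts as a hack rather than a design.)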
CSU COAST partnered with ADVANCEGeo, an initiative formed to address the problem of sexual harassment and other exclusionary behaviors that lead to hostile working and learning climates in the earth, space and environmental sciences. During the Active Bystander Intervention Workshop held on October 22, 2021, CSU faculty, staff, students and administrators learned to recognize sexual harassment, bullying, and other hostile behaviors and how to effectively intervene. Attendees participated in interactive discussions where they were presented with various hostile scenarios, including fieldwork settings, and developed strategies to safely intervene.
Resources: COAST Slides; Tool: Interrupting Microaggressions; Disarming Microaggressions Strategy Sheet; Strategies for Responding to Hostile Climates.
Dear Reader, we are happy to announce the latest release of GeoServer, version 2.21.3, which is now available on the GeoServer website. This is a maintenance release of the GeoServer 2.21.x series, made in conjunction with GeoTools 27.3 and GeoWebCache 1.21.3 by Andrea Aime (GeoSolutions) and Jody Garnett (GeoCat). Special thanks go to them for making this release during the festivities. Being a maintenance release, the focus of the changes has been on stability and bug fixes. Among the changes included in this release, we'd like to point out: the ability to report PostgreSQL column comments in WFS DescribeFeatureType output (needs…

Update: you can find below the recording and the slides we used during the webinar. Please subscribe to our Youtube channel here to get access to more updates.

Dear Reader, our popular Web GIS client MapStore just had a new major release; complete information is available in this blog. As usual, we will host a webinar on January 24th 2023 to learn more about the new features, what is coming in the near future, and for you to interact with the core developers of MapStore. You can register here below for free! This release introduces a wide range of new interesting…

Update: the recording is available here below.

Dear Reader, we've been invited by the Federal Emergency Management Agency (FEMA) and the Opportunity Project (TOP) teams to join the 2022 TOP Sprint to address FEMA's problem statement on building community and individual climate resilience. The project we have developed was based on the latest version of GeoNode 4.0 with a special focus on the use of the Dashboard and GeoStory tools to present the results of spatial data analysis in an eye-catching and immersive way. We studied spatial correlations of the adoption of FEMA Building Codes with environmental risks and equity…

Dear Reader, we are pleased to announce the new release 2022.02.00 of MapStore, our flagship open source WebGIS product. The full list of changes for this release can be found here, while this blog highlights the most interesting ones. In GitHub it is also possible to consult the full list of new features, enhancements and fixes we have provided with this release. One of the main purposes of this new version of MapStore is to add new functionality for 3D support, opening the doors to further interesting enhancements in the next releases. Let's now go through the most important improvements together…

Introducing the public training schedule for 2023 for GeoNode, GeoServer and MapStore from GeoSolutions! Dear Reader, GeoSolutions provides professional training services on GeoServer, MapStore, GeoNode and GeoNetwork worldwide; our trainers are renowned professionals in the open source geospatial community. They have in-depth technical knowledge of the various open source products GeoSolutions supports, since they are part of the core development team behind them. GeoSolutions has been providing training for the last 10 years, privately for clients and also during public events at global, regional, and national conferences (e.g. FOSS4G, GEOINT and INSPIRE conferences). Countries where GeoSolutions has provided training include: Italy, USA, Canada, France, Switzerland, Germany, UK, Belgium, Uganda, Madagascar, Nepal, Suriname, Zambia, Mozambique…
Cannot deserialize 'format' field

Format values defined by the spec (date, date-time...) cannot be deserialized into the com.fasterxml.jackson.databind.jsonFormatVisitors.JsonValueFormat type because this enum doesn't provide any means to create instances of it from those values. This type only accepts values such as DATE, DATE_TIME, etc. However, the enum provides a toString that prints the correct values. A very confusing error message ensues:

Caused by: com.fasterxml.jackson.databind.exc.InvalidFormatException: Can not construct instance of com.fasterxml.jackson.databind.jsonFormatVisitors.JsonValueFormat from String value 'date': value not one of declared Enum instance names: [date-time, date, time, utc-millisec, regex, color, style, phone, uri, email, ip-address, ipv6, host-name]

I don't know if this is an issue with this project, or jackson-databind (which is where the enum is defined).

Which version is this with? (2.6.3 was just released)

Sorry, forgot to mention that. Indeed I'm not up to date as I'm using 2.4.4. I'll try it with the new version and come back to you.

I confirm this is still happening with 2.6.3, but with a slightly less confusing error message. Here's a test case:

String serializedSchema = "{\"type\": \"string\", \"format\": \"date\"}";
JsonSchema schema = new ObjectMapper().readValue(serializedSchema, JsonSchema.class);
assertEquals(JsonValueFormat.DATE, schema.asValueSchemaSchema().getFormat());

Result:

com.fasterxml.jackson.databind.exc.InvalidFormatException: Can not construct instance of com.fasterxml.jackson.databind.jsonFormatVisitors.JsonValueFormat from String value 'date': value not one of declared Enum instance names: [HOST_NAME, COLOR, REGEX, URI, TIME, PHONE, STYLE, EMAIL, DATE_TIME, UTC_MILLISEC, IPV6, DATE, IP_ADDRESS]
at [Source: {"type": "string", "format": "date"}; line: 1, column: 18] (through reference chain: com.fasterxml.jackson.module.jsonSchema.types.StringSchema["format"])
at com.fasterxml.jackson.databind.exc.InvalidFormatException.from(InvalidFormatException.java:55)
at com.fasterxml.jackson.databind.DeserializationContext.weirdStringException(DeserializationContext.java:907)
at com.fasterxml.jackson.databind.deser.std.EnumDeserializer._deserializeAltString(EnumDeserializer.java:130)
at com.fasterxml.jackson.databind.deser.std.EnumDeserializer.deserialize(EnumDeserializer.java:84)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:520)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:95)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:258)
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:161)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:136)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:122)
at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:93)
at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:131)
at com.fasterxml.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:42)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3736)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2726)
...
I think this area has actually been changed on master (for 2.7.0-SNAPSHOT), but I forget which issue or PR it was. The problem with a possible fix for earlier versions is that I think they also serialize schema instances using upper-case constants, which, while wrong, will actually successfully deserialize. Changing the serialization format alone is a behavior change that could cause issues for existing usage, which is why a fix in a patch release would be risky.

Is it not possible to support both uppercase-with-underscore (current behavior) and lowercase-with-hyphens (correct behavior)?

@hgwood sounds doable. PRs welcome.

Would you agree that the best way to go about this is to add a @JsonCreator to the enum? The PR would be against the jackson-databind project.

That might be the best way, yes.

The test case passes on jackson-databind/master (2.7.0-SNAPSHOT, 4950f446876b8de05c200cd98019bdf1f527d465) so it seems the issue is already fixed. Thanks for your help.
"What level of attentional demand is preferred by users when interacting with an intervention user interface in the Internet of Things smart office?"
M1.2 Industrial Design

Most applications of consumer IoT systems either operate on the explicit command of the user or are fully automated. Commanding an IoT system requires a user to spend a substantial amount of attentional demand on the system. On the other side, fully automated systems can have a high level of complexity, lack sufficient reliability, do not fit the current and changing lifestyles of users and might create the sensation of loss of control. Interactive Intentional Programming (IIP) recognizes these problems and proposes a framework to capture the intents and preferences of the end-user to make suitable actuations. The researchers acknowledge two main areas of improvement: first, better methods for capturing scenarios, intentions, and preferences, and second, the creation of a feedback loop to facilitate adoption and learning over time. Such a feedback loop can become complicated. The intervention user interface allows the user to intervene with the automated behavior of the system. This allows the system to operate at high automation levels while only involving the user in the loop when necessary. The implementation of intervention interfaces has not been researched previously. This research applies the Intervention User Interface principle to an IoT system that uses the basic concepts of IIP. The system controls a research office environment that strives to have a high level of automation and autonomy while still giving the user a sense of control. The result is "PII", a Peripheral Intervention Interface for IoT.

The variations of PII were realized in the Processing programming environment. The laptop, in the picture on the right, was used to run the Processing sketch and to control the autonomous behavior of the system (initiated by the researcher). The graphical user interface was cast to a smartphone that was docked in the PII prototype, which was placed in the working environment of the participant, allowing the participant to interact with the actuators in the environment. The dock enabled the user to interact with the interface that was running on the laptop. The dock contained a big space-bar button that triggers an intervention, the intervention button, making it easy and quick to use. Adjusting parameters in the Processing sketch would send data to a Teensy that controlled different actuators in the environment. Light was emitted by multiple LED strips that were positioned next to the participant. A heating mat was used to give the participants more direct feedback about temperature changes. This was necessary due to the fairly short testing period, i.e., the user needs quick feedback to be able to detect change in such a short period of time. The heating mat was placed directly on the table surface, which also functioned as the working area for the participant. Music was played with the use of a Bluetooth speaker which was placed directly behind the small office divider, out of sight of the participant.

Different levels of information are applied to three interfaces to examine the level of attentional demand and involvement preferred by the user (i.e., to what extent they want to be involved in the loop). What level of attentional demand (explicit, peripheral, implicit) is preferred by users when interacting with an intervention user interface in the Internet of Things smart office?
This research presents first implementations of the Intervention User Interface principle. The research focused on the attentional demand preferred by users and how this affects their sense of control. The user study has shown that users prefer highly informative interfaces; this helps them feel reassured and lets them relinquish control of the system. Different levels of information or attentional demand did not have a significant effect on the users' sense of control. In addition, this research gathered a set of design principles that can be used by researchers and practitioners for further research on, or implementation of, Intervention User Interfaces. The design principles: I would love to tell you more about the details and results.
OPCFW_CODE
The author correctly chose to leave the theory out, which I have now had enough time to dive into, and understand better after having the practical know-how under my fingers. I highly recommend this book to anyone wanting to bring the power of LSTMs into their next project. When you are searching for Economics assignment help, you are in the right place. We provide Economics assignment help on all of the assigned topics of the subject. Our team of excellent Economics tutors will resolve your doubts. Whether you are confused about the concept of demand and supply or your understanding of consumer behavior is still blurry, our online tutors will explain it to you in simple terms. Download the file for your platform. If you're not sure which to choose, learn more about installing packages. We study the general price level, investment and savings, economic growth and plenty more. Our Economics writers have highlighted the differences between these two areas of Economics. Over six years of coding experience in numerous domains and programming languages make us your go-to service provider. You could choose to work through the lessons one per day, one per week, or at your own pace. I feel momentum is critically important, and this book was intended to be read and used, not to sit idle. I would recommend picking a schedule and sticking to it. My materials are playbooks intended to be open on the computer, next to a text editor and a command line. The plaintext password is not stored by PyPI or submitted to the Have I Been Pwned API. PyPI will not allow such passwords to be used when setting a password at registration or updating your password. If you receive an error message saying that "This password appears in a breach or has been compromised and cannot be used", you must change it in all other places that you use it immediately. If you have received this error while trying to log in or upload to PyPI, then your password has been reset and you cannot log in to PyPI until you reset your password. Integrating If you buy a book or bundle and later decide you want to upgrade to the super bundle, I can arrange it for you. The list of activities that require a verified email address is likely to grow over time. This policy allows us to enforce a key provision of PEP 541 regarding maintainer reachability. It also reduces the viability of spam attacks that create many accounts in an automated manner. There are two modules for scientific computation which make Python powerful for data analysis: NumPy and SciPy. NumPy is the basic package for scientific computing in Python. SciPy is a growing collection of packages addressing scientific computing. Your Email: The email address you used to make the purchase (note, this may be different from the email address you used to pay with via PayPal). After 20 hours of structured lectures, students are encouraged to work on an exploratory data analysis project based on their own interests. A project presentation demo will be organized afterwards. If you really do want a hard copy, you can purchase the book or bundle and produce a printed version for your own personal use.
There is no digital rights management (DRM) on the PDF files to stop you from printing them.
OPCFW_CODE
April 11, 2002: 1. Finish checking in current mule ws. 2. Start working on bugs reported by others and noticed by me: -- problems cutting and pasting binary data, e.g. from byte-compiler instructions -- test suite failures -- process i/o problems w.r.t. eol: |uniq (e.g.) leaves ^M's at end of line; running "bash" as shell-file-name doesn't work because it doesn't like the extra ^M's. March 20, 2002: -- TTY-mode problem. When you start up in TTY mode, XEmacs goes through the loadup process and appears to be working -- you see the startup screen pulsing through the different screens, and it appears to be listening (hitting a key stops the screen motion), but it's frozen -- the screen won't get off the startup, key commands don't cause anything to happen. STATUS: In progress. -- Memory ballooning in some cases. Not yet understood. -- other test suite failures? -- need to review the handling of sounds. seems that not everything is documented, not everything is consistently used where it's supposed to, some sounds are ugly, etc. add sounds to `completer' as well. -- redo with-trapping-errors so that the backtrace is stored away and only outputted when an error actually occurs (i.e. in the condition-case handler). test. (use ding of various sorts as a helpful way of checking out what's going on.) -- problems with process input: |uniq (for example) leaves ^M's at end of -- carefully review looking up of fonts by charset, esp. wrt the last element of a font spec. -- add package support to ignore certain files -- *-util.el for languages. -- review use of escape-quoted in auto_save_1() vs. the buffer's own coding -- figure out how to get the total amount of data memory (i.e. everything but the code, or even including the code if can't distinguish) used by the process on each different OS, and use it in a new algorithm for triggering GC: trigger only when a certain % of the data size has been consed up; in addition, have a minimum. -- Occasional crash when freeing display structures. The problem seems to be this: A window has a "display line dynarr"; each display line has a "display block dynarr". Sometimes this display block dynarr is getting freed twice. It appears from looking at the code that sometimes a display line from somewhere in the dynarr gets added to the end -- hence two pointers to the same display block dynarr. need to review this August 29, 2001. This is the most current list of priorities in `ben-mule-21-5'. -- support for WM_IME_CHAR. IME input can work under -nuni if we use WM_IME_CHAR. probably we should always be using this, instead of snarfing input using WM_COMPOSITION. i'll check this out. -- Russian C-x problem. see above. -- make sure it compiles and runs under non-mule. remember that some code needs the unicode support, or at least a simple version of it. -- make sure it compiles and runs under pdump. see below. -- make sure it compiles and runs under cygwin. see below. -- clean up mswindows-multibyte, TSTR_TO_C_STRING. expand dfc optimizations to work across chain. -- eliminate last vestiges of codepage<->charset conversion and similar stuff. -- test the "file-coding is binary only on Unix, no-Mule" stuff. -- test that things work correctly in -nuni if the system environment is set to e.g. japanese -- i should get japanese menus, japanese file names, etc. same for russian, hebrew ... -- cut and paste. see below. -- misc issues with handling lang environments. see also August 25, "finally: working on the C-x in ...". -- when switching lang env, needs to set keyboard layout. 
-- user var to control whether, when moving into text of a particular language, we set the appropriate keyboard layout. we would need to have a lisp api for retrieving and setting the keyboard layout, set text properties to indicate the layout of text, and have a way of dealing with text with no property on it. (e.g. saved text has no text properties on it.) basically, we need to get a keyboard layout from a charset; getting a language would do. Perhaps we need a table that maps charsets to language environments. -- test that the lang env is properly set at startup. test that switching the lang env properly sets the C locale (call setlocale(), set LANG, etc.) -- a spawned subprogram should have the new locale in its environment. -- look through everything below and see if anything is missed in this priority list, and if so add it. create a separate file for the priority list, so it can be updated as appropriate. -- clean up the chain coding system. its list should specify decode order, not encode; i now think this way is more logical. it should check the endpoints to make sure they make sense. it should also allow for the specification of "reverse-direction coding systems": use the specified coding system, but invert the sense of decode and -- along with that, places that take an arbitrary coding system and expect the ends to be anything specific need to check this, and add the appropriate conversions from byte->char or char->byte. -- get some support for arabic, thai, vietnamese, japanese jisx 0212: at least get the unicode information in place and make sure we have things tied together so that we can display them. worry about r2l some other time. -- check the handling of C-c. can XEmacs itself be interrupted with C-c? is that impossible now that we are a window, not a console, app? at least we should work something out with `i', so that if it receives a C-c or C-break, it interrupts XEmacs, too. check out how process groups work and if they apply only to console apps. also redo the way that XEmacs sends C-c to other apps. the business of injecting code should be last resort. we should try C-c first, and if that doesn't work, then the next time we try to interrupt the same process, use the injection
OPCFW_CODE
Can passenger plane do an aileron roll and fly upside down? I've researched this topic and as far as I can tell any plane can fly upside down if it's already in this position. It just needs to be angled enough so that its wings are angled upwards. I believe Boeing 777 can do this although I can't find any video proof. But my real question is whether Boeing 777 can do aileron roll. From my understanding plane needs to pitch upwards a bit to gain altitude and then start to roll sideways. While doing this it will loose all lift and starts falling downwards until it is upside down and gains lift again. What are the requirements for this? What is the minimum speed it needs to achieve before attempting the roll? Is Boeing 777 structurally sound to survive the attempt? I imagine it was not designed for this but is it possible? And a bonus question: can you link me a video of any bigger sized plane doing that? As long as it remains a 1-G maneuver, it shouldn't be a problem. A loop has different structural and loading issues than a barrel roll and an aileron roll. This question is a bit more specific than "aerobatics". The question also addresses inverted flight, which is more specific than "aerobatics." In fact this has been done before. A well documented case is of "Tex" Johnston rolling a 707. In general airliners cannot do sustained inverted flight as they lack a fuel system (and lubrication system) for sustained inverted flight. A barrel roll is readily executable. Youtube has a video of Tex Johnston doing a barrel roll. Edit: I gave a quick answer and felt guilty about it. There is more to the story. The OP talks about aileron rolls. An aileron roll is a maneuver where the plane does not substantially change altitude. The G forces will be negative at some point, and it is more "violent" than a barrel roll. A barrel roll is a maneuver where the plane is rolled about the longitudinal and lateral axis, if you will, a loop and a roll in one. Properly executed the barrel roll is a positive G maneuver. Years ago, when I was young and some old geezer was teaching me aerobatics, he would climb into the plane with a full cup of coffee. My job was to get to the practice area, and execute some maneuvers, including barrel rolls in each direction, without spilling his coffee. Technically, that was a 367-80, which is closer to the KC-135 than to the 707 (the KC-135 is basically a straight production run of tanker-equipped 367-80s, while the 707 is fairly extensively modified from the 367-80, with, among other differences, a significantly wider fuselage); they're still close siblings, though. Structurally, only just. Design regulations stipulate that the aircraft must be able to withstand accelerations between -1 and +2.5 g. So structurally it could fly upside down but It would be at borderline structural failure. A barrel roll spends very little time at inverted and does not impose -1 g, and passenger planes can be seen doing this manoeuvre at airshows, a beautiful sight. -1g is a very small margin: it means that you could in theory have sustained inverted flight, but any time you push the stick you put the aircraft outside of that margin. Yes indeed, they need to be very careful when upside down :) I think there is a discrepancy between "very careful" and "no problem". You're absolutely right, I've amended the answer.
STACK_EXCHANGE
import time
import requests
import functools

from .HTMLparsers import schoolarParser
from .Crossref import getPapersInfo
from .NetInfo import NetInfo


def waithIPchange():
    # Ask the user what to do after Google Scholar blocks the requests.
    # Returns False to stop downloading, True to retry after a short pause.
    while True:
        inp = input('You have been blocked, try changing your IP or using a VPN. '
                    'Press Enter to continue downloading, or type "exit" to stop and exit....')
        if inp.strip().lower() == "exit":
            return False
        elif not inp.strip():
            print("Wait 30 seconds...")
            time.sleep(30)
            return True


def scholar_requests(scholar_pages, url, restrict, scholar_results=10):
    javascript_error = "Sorry, we can't verify that you're not a robot when JavaScript is turned off"
    to_download = []
    for i in scholar_pages:
        # Retry the same result page until the anti-robot message disappears
        # or the user decides to stop.
        while True:
            res_url = url % (scholar_results * (i - 1))
            html = requests.get(res_url, headers=NetInfo.HEADERS)
            html = html.text
            if javascript_error in html:
                is_continue = waithIPchange()
                if not is_continue:
                    return to_download
            else:
                break

        papers = schoolarParser(html)
        if len(papers) > scholar_results:
            papers = papers[0:scholar_results]

        print("\nGoogle Scholar page {} : {} papers found".format(i, scholar_results))

        if len(papers) > 0:
            # Enrich the scraped results with Crossref metadata and count how many have a DOI.
            papersInfo = getPapersInfo(papers, url, restrict, scholar_results)
            info_valids = functools.reduce(lambda a, b: a + 1 if b.DOI != None else a, papersInfo, 0)
            print("Papers found on Crossref: {}/{}\n".format(info_valids, len(papers)))
            to_download.append(papersInfo)
        else:
            print("Paper not found...")

    return to_download


def ScholarPapersInfo(query, scholar_pages, restrict, min_date=None, scholar_results=10):
    url = r"https://scholar.google.com/scholar?hl=en&q=" + query + "&as_vis=1&as_sdt=1,5&start=%d"
    if min_date != None:
        url += "&as_ylo=" + str(min_date)

    # If the "query" is already a full URL, use it directly instead of the search template.
    if len(query) > 7 and (query[0:7] == "http://" or query[0:8] == "https://"):
        url = query

    to_download = scholar_requests(scholar_pages, url, restrict, scholar_results)

    # Flatten the per-page lists into a single list of paper objects.
    return [item for sublist in to_download for item in sublist]
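For context, a minimal usage sketch of the module above might look like the following; the import path, the value of restrict, and the shape of the returned paper objects (a DOI attribute) are assumptions based on the code, not part of the original file.

# Hypothetical usage of the module above; adjust the import path to the real package name.
from Scholar import ScholarPapersInfo

# Fetch the first two Google Scholar result pages for a query, restricted to papers from 2018 on.
papers = ScholarPapersInfo("machine learning", scholar_pages=[1, 2], restrict=None, min_date=2018)

for paper in papers:
    # Papers that were matched on Crossref carry a DOI; others have DOI set to None.
    print(getattr(paper, "DOI", None))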
STACK_EDU
[Ask Help]How to set cMapUrl correctly for using useVuePdfEmbed I attempted to implement a PDF viewer in a paginated way, but encountered a new issue: when opening Japanese PDFs, only the skeleton is rendered while the content fails to render and get follow warns: Warning: loadFont - translateFont failed: "UnknownErrorException: The CMap "baseUrl" parameter must be specified, ensure that the "cMapUrl" and "cMapPacked" API parameters are provided.". Therefore, I suspect that setting cMapUrl: 'https://unpkg.com/pdfjs-dist/cmaps/' is necessary. However, I couldn't find any method to set cMapUrl to useVuePdfEmbed in README. I tried the following implementation, but it didn't work at all and even caused the PDF to fail to load. PDF load failed errors: // 1. Error: getDocument - no `url` parameter provided. // 2. Error: Worker was destroyed const source = computed(() => ({ url: props.url, cMapUrl: 'https://unpkg.com/pdfjs-dist/cmaps/' })); const { doc } = useVuePdfEmbed({ source, // Errors happens in here onError: (e: Error) => { showApplicationError(e); isLoading.value = false; }, onProgress: (progress) => { if (progress.loaded === progress.total) { isLoading.value = false; } }, }); Therefore, I would like to ask if there is a way to set cMapUrl while using useVuePdfEmbed? My vue-pdf-embed version is v2.1.0 My logic of whole pdf viewer: <script setup lang="ts"> import { defineProps, ref, watch, computed, onMounted, Transition } from 'vue'; import { debounce } from 'lodash'; import VuePdfEmbed, { useVuePdfEmbed } from 'vue-pdf-embed'; import { GlobalWorkerOptions } from 'vue-pdf-embed/dist/index.essential.mjs'; import PdfWorker from 'pdfjs-dist/build/pdf.worker.mjs?url'; import useShowError from '~/composables/utils/useShowError'; interface Props { url: string; } GlobalWorkerOptions.workerSrc = PdfWorker; const props = defineProps<Props>(); const isLoading = ref(true); const isPageChanging = ref(false); const totalPageCount = ref(0); const { showApplicationError } = useShowError(); const rendered = ref(false); const currentPage = ref(1); // I try to set cMapUrl here, but it's not work. 
const source = computed(() => ({ url: props.url, cMapUrl: 'https://unpkg.com/pdfjs-dist/cmaps/' })); const { doc } = useVuePdfEmbed({ source, onError: (e: Error) => { showApplicationError(e); isLoading.value = false; }, onProgress: (progress) => { if (progress.loaded === progress.total) { isLoading.value = false; } }, }); watch(doc, (newDoc) => { if (newDoc) { totalPageCount.value = newDoc.numPages; rendered.value = true; } }); const changePage = (newPage: number) => { if (newPage >= 1 && newPage <= totalPageCount.value && !isPageChanging.value) { isPageChanging.value = true; currentPage.value = newPage; setTimeout(() => { isPageChanging.value = false; }, 300); } }; const updatePage = (newPage: number) => { changePage(newPage); }; const previousPage = debounce(() => changePage(currentPage.value - 1), 300); const nextPage = debounce(() => changePage(currentPage.value + 1), 300); const width = ref(0); onMounted(() => { const { clientWidth, clientHeight } = document.documentElement; width.value = Math.max(clientWidth, clientHeight); }); </script> <template> <ClientOnly> <div v-if="rendered" class="pdf-container m-auto"> <div v-if="isLoading || isPageChanging" class="loading-overlay"> <span>Loading...</span> </div> <div class="controls"> <button @click="previousPage" :disabled="currentPage === 1 || isPageChanging">前のページ</button> <span>{{ currentPage }} / {{ totalPageCount }}</span> <button @click="nextPage" :disabled="currentPage === totalPageCount || isPageChanging">次のページ</button> </div> <Transition name="fade"> <VuePdfEmbed v-if="doc" :source="doc" :page="currentPage" :width="width" image-resources-path="https://unpkg.com/pdfjs-dist/web/images/" @rendering-failed="showApplicationError" @loading-failed="showApplicationError" @rendered="() => console.log('PDF rendered successfully')" /> </Transition> </div> </ClientOnly> <slot :page="currentPage" :total-page-count="totalPageCount" :is-loading="isLoading" :previous-page="previousPage" :next-page="nextPage" :update-page="updatePage"></slot> </template> I tried another way, but it still not work. // /plugins/pdfjs-config.client.ts import PdfWorker from 'pdfjs-dist/build/pdf.worker.mjs?url'; import { GlobalWorkerOptions } from 'vue-pdf-embed/dist/index.essential.mjs'; import { defineNuxtPlugin } from '#imports'; export default defineNuxtPlugin(() => { GlobalWorkerOptions.workerSrc = PdfWorker; if (process.client) { (window as any).pdfjsLib = { ...(window as any).pdfjsLib, GlobalWorkerOptions: { ...GlobalWorkerOptions, cMapUrl: 'https://unpkg.com/pdfjs-dist/cmaps/', cMapPacked: true, }, }; } }); Oh, I think I found the solution. Adding follow logic and it works. const source = computed(() => { // Need to check the url is existing if (!props.url) { console.warn('No URL provided for PDF'); return null; } return { url: props.url, cMapUrl: 'https://unpkg.com/pdfjs-dist/cmaps/', cMapPacked: true, }; }); Finding solution by myself.
GITHUB_ARCHIVE
Create a list of articles to read later. You will be able to access your list from any article in Discover. You don't have any saved articles. Snakes saw an explosion in diversity after the dinosaurs were wiped out by an asteroid, researchers have found. Following the mass extinction of many dinosaurs, as well as flying and aquatic reptiles, snakes rapidly diversified from as few as six lineages to include many ancestors of the species we see today. This is probably due to the reptiles taking up many of the ecological roles that small dinosaurs had before their extinction. Research led by the University of Bath also found a similar event taking place around 34 million years ago, when another large extinction event is known to have taken place. Their findings, published in Nature Communications, further reinforce the role of mass extinctions in shaping the natural world as we know it today as the planet goes through a potential sixth event. Dr Nick Longrich, from the Milner Centre for Evolution at the University of Bath, says, 'Our research suggests that extinction acted as a form of "creative destruction"- by wiping out old species, it allowed survivors to exploit the gaps in the ecosystem, experimenting with new lifestyles and habitats. 'This seems to be a general feature of evolution - it's the periods immediately after major extinctions where we see evolution at its most wildly experimental and innovative. 'The destruction of biodiversity makes room for new things to emerge and colonize new landmasses. Ultimately life becomes even more diverse than before.' Extinctions are nothing new and are a fundamental part of life on Earth. As conditions change, species adapt as best they can to their changing circumstances. If they can't adapt quickly enough, they die out and other species will eventually move in to take up their role in the environment. Animals and plants are coming into existence and going extinct all the time. But at certain points in Earth's history, the rate of extinction has shot up. The largest of these are mass extinction events, where at least 75% of the planet's species are lost within a few million years. To date there have been five of these, with support among some scientists that the current impact of human activity on the world is creating a sixth. The last commonly agreed mass extinction event was the Cretaceous mass extinction event, when a large meteor around 10 kilometres in diameter struck Earth off the coast of Mexico. This caused shockwaves, tsunamis and a wave of heat that instantly killed many animals. Dust entering the atmosphere would have blocked out the Sun and caused photosynthesis to plummet, leaving many of the survivors to starve. With many dinosaurs wiped out, the ancestors of modern mammals, birds and fish had a chance to flourish. Reptiles were in a similar position, with the generally smaller survivors able to take advantage of shelter and scarce food in the wake of the disaster. Researchers found that at least six snake lineages were able to persevere through the event and radiate into a variety of new species. Among the survivors are burrowing snakes, who were able to find shelter underground, and those that lived in freshwater habitats. The newly emptied environments then allowed snakes to specialise in a range of roles, such as marine snakes, as well as taking over from dinosaurs and other now-extinct predators to target small prey. The snakes expanded into new areas of the world, including Asia. 
Other subsequent large extinction events were also found to have impacted snake diversity. At the start of the Oligocene period, 33.9 million years ago, global temperatures dropped significantly, hitting cold-blooded animals like snakes. The researchers suggest that the caenophidians, the group which contains about 80% of modern snakes, adapted to being active during the day around this time after their ancestors had been mostly nocturnal. This change allowed them to maximise the amount of heat gained from the Sun. As the planet warmed over the following years, snakes could also move further north, allowing them to spread to the Americas over the ancient land bridge that once linked Russia and Alaska. However, while the research appears to correlate well with known events and other studies on snake diversity, the scientists faced a number of issues when assembling their model. For instance, the snake fossil record is particularly patchy in places, especially in the period after the Cretaceous mass extinction event. It is also hard to identify the relationships between different snake fossil species as they often share homoplasies, which are characteristics that weren't present in their ancestors, but have evolved separately. To combat these issues, the researchers excluded some snake lineages whose classifications are controversial. They also used the timings of other groups, such as when birds radiated into a variety of species, to help calibrate the timeline of their snake family tree. The researchers' work points to the importance of mass extinctions in producing the diversity of life as we know it today, particularly in the aftermath of the catastrophic Cretaceous mass extinction event.
OPCFW_CODE
Baby Talk on warning toddlers against touching, say, objects too hot to touch I am curious about the baby talk people use to warn toddlers not to touch dangerous things, such as objects too hot to touch. Think of a coffee cup, too hot for them to touch; what would you say? Would you say, for example: Noooo? Looking for set phrases, sounds, and words or anything used in this situation. I thought it was clear! Check it out now. I'm not sure whether there's a standard orthography for it, but Ah-Ah! is a standard "warning" (primarily reserved for adults addressing children, or contemptuous mimicking of that context), that invariably means one or more of Don't do that! Don't touch! Stop! Or, you could talk to the kid like a normal person rather than treating them like they can't understand real speech.... What's wrong with saying "Don't touch! That's too hot and you'll burn yourself!" Or you could use baby-talk to talk to an adult. To every thing there is a season. :) Seriously, please don't vote to close this question. It's the kind of thing that dictionaries don't cover but native speakers all know through personal experience. IOW, it's perfect for ELL. @Sina They'll never learn if you don't teach them. Toddlerhood and childhood are about learning things. If you dumb stuff down for them, they'll never learn anything. Thanks for adding more detail. I am retracting my close vote. I would not leave hot coffee anywhere that they could potentially touch it, or if they got too close I would pick it up (the coffee, not the toddler). I suppose I could add a "No, no, no, no....(too hot...not for you)" with an emphasis on "NO". I believe most kids learn the word "no" before they learn "yes." @Sina: there is a man on American television who uses a similar sibilant sound to train dogs. @Sina: it is not intended as an insult. Such sounds seem to have a power to grab the attention at a deep animal level. Compare the sound we make to get someone's attention surreptitiously: Pssst! In our family (3 kids) we used the word No from a very early age, as soon as the baby starts trying to do things that are dangerous or otherwise problematic. Then add a suitable modifier: No, Hot No, Sharp No, Hurt As a general policy we don't use baby-talk in the sense of using baby-ish speech (Goo, goo, diddums etc.). We do keep the sentence structure very simple as in the examples above, but then usually follow up with a more complete sentence. No, Hot. Dad's tea is very hot, it will burn you. The kids, as they learned to speak, started to use words such as Hot themselves.
STACK_EXCHANGE
drizzle> DELIMITER | Note that there is no semicolon after the '|' symbol, which we will use as the delimiter for our purposes. You have to choose a delimiter that does not appear in your procedure, and it can be more than one character. drizzle> CREATE PROCEDURE perl_hello (param1 string) -> return "Hello " . $_ . "!" Query OK, 0 rows affected (0.05 sec) drizzle> CALL perl_hello('Brian'); Query OK, 1 row affected (0.00 sec) drizzle> DELIMITER ; drizzle> SELECT @perl\G *************************** 1. row *************************** @perl: Hello Brian! 1 row in set (0.00 sec) In an actual language!?! About a week ago I was talking to a CTO for a company who is looking at adoption of Drizzle. One of things he came back with was "I don't need stored procedures, but I do need server side scripting". Back at the very first MySQL User's Conference we had a debate over the future of stored procedures in MySQL. I and some others really wanted the first stored procedure language to be external, David really wanted it to be PHP. I didn't see the value in implementing a single language. I thought people would be more interested in writing code in whatever language they wanted. Also, I figured that an external system would allow for different groups to develop languages more rapidly. Fast forward to when we began Drizzle. Parsers are where you spend a lot of your time. The smaller the parser the better off you are. So I went to task removing all of the signs of the SP language from Drizzle. We have been free of them now for over a year now (yes, long before we went public). Things are finally shaping up so that when we begin on Bell, our next milestone, stored procedures, or something like them, are now on our list. Though are they stored procedures, or is this server-side scripting? A few premises of the design: I am a little bit torn about using the SP call/creation SQL commands in Drizzle. You won't be doing the typical SP language (well... unless someone wants to write a plugin for them!). I would also like to encourage people to think differently about what writing server side code should look like. Personally I don't feel that stored procedures are the right solution for a lot of the cases, keep your business logic in your application layer(!), but we also know that users expect to be able to be able to run code locally. Triggering/Callback mechanisms can be very useful though, and enabling them is a part of this. Doing Triggers today in C is simple, but that is not something that everyone should/would/could want to do. Putting this in the plugin structure means no overhead to the parser or the rest of the database. Keeping them out of process means no drain or memory expansion of the Database. SMP boxes will benefit because you can confine the language VM to a particular set of processors/amount of memory. We don't want the database to ever blow up because of bugs in the execution language! And if you never want them? You never load the plugin in the first place. Why Perl? I've embedded Perl for years and know how to make it work. I've only done Java once, so I will leave that to other experts. I suspect I can find a Java person somewhere inside of Sun :)
OPCFW_CODE
I’d just like to share progress on a new Twitter project that I’ve been working on for the past few weeks. The subject of discussion is AniTwitter, a loosely-connected community of Twitter users that like to talk about anime-related stuff. The goal of this project is to map it out, meaning explore the social network to figure out which users are more likely to be part of it. This map is just the first step though. I took my first swing at analyzing AniTwitter last spring. I searched through user timelines for the most common hashtags, the most common adjectives, and the most retweeted tweets. As I was planning to redo the experiment last month, I unfortunately lost the user list that I’ve painstakingly built by hand, merging public user lists that I found on various Twitter profiles. I couldn’t be arsed to do it again. But soon it occurred to me I could have the user list built automatically. So I started with @ANNZac. A recursive search algorithm I ran looked through all the users he’s been following, then again through all the users they’ve been following. I was looking for keywords that might qualify users to be part of the AniTwitter network. The condition was to have the words anime or manga or otaku mentioned twice in the last six months or in the last one thousand tweets. Is the condition too harsh? Too permissive? At this moment the network boasts with 4640 discovered users and counting. I used the networkx Python library for graph building and visualization. If you look at the image, the nodes with bushy connections at the edges are like that because my search goes only two levels deep, so a bunch of users remain unconnected to the messy core. I decided on this limitation because searching three levels deep is too wide. Later on I found that the maximum degree of separation between two random users on Twitter was measured to be around 3.44. Considering AniTwitter is only a small sub-network of the whole social network, I believe my intuition to limit the search was correct. Whatever the case, I expect the individual bushes to grow smaller as more pairs of users are discovered to be following each other. The next step will involve user clustering, so we’ll be able to see smaller cliques inside AniTwitter itself, which will depend on user connectivity, @reply frequency, the topics they talk about and so on. I’m also planning to upgrade my earlier script to be able to recognize named entities, such as names and locations, from tweet text. If anyone has any questions or suggestions on how I should visualize the final map, I’m all ears. I’ll release the code with my next post on this topic.
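To make the discovery rule above concrete, here is a rough sketch of how the keyword condition and the two-level crawl could be expressed with networkx. The Twitter client functions (get_following, get_recent_tweets) are placeholders for whatever API wrapper is actually used, and the thresholds only mirror the ones described in the post, so treat this as an illustration rather than the project's actual code.

import networkx as nx

KEYWORDS = ("anime", "manga", "otaku")

def qualifies(tweets, min_mentions=2):
    # A user qualifies if the keywords show up at least `min_mentions` times
    # in their recent tweets (last ~6 months or last 1000 tweets).
    hits = sum(any(k in t.lower() for k in KEYWORDS) for t in tweets)
    return hits >= min_mentions

def crawl(seed, get_following, get_recent_tweets, max_depth=2):
    # Breadth-limited crawl starting from `seed`, at most two levels deep.
    graph = nx.DiGraph()
    frontier = [(seed, 0)]
    seen = {seed}
    while frontier:
        user, depth = frontier.pop(0)
        if depth >= max_depth:
            continue
        for followee in get_following(user):
            if qualifies(get_recent_tweets(followee)):
                graph.add_edge(user, followee)
                if followee not in seen:
                    seen.add(followee)
                    frontier.append((followee, depth + 1))
    return graph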
OPCFW_CODE
Sum of endomorphisms that is not an endomorphism Consider a group $(G,+)$ and the structure $\operatorname{End}(G)$ of all endomorphisms of $G$. We know that if $G$ is abelian, then $\operatorname{End}(G)$ forms a (typically non-commutative) unitary ring together with pointwise addition and function composition. The reason we want $G$ to be commutative is that the sum of endomorphisms may fail to be an endomorphism. (Here sum means the $+$ group operation of $G$.) Here is the proof that the sum of two endomorphisms under pointwise commutative addition is an endomorphism again (maybe it will help someone to find the counterexample). Let $(G,+)$ be an abelian group and $(\operatorname{End}(G),\oplus)$ be the additive subgroup of its endomorphism ring. Let $\varphi,\psi$ be two endomorphisms, then \begin{align} (\varphi\oplus\psi)(x+y) &= \varphi(x+y)+\psi(x+y) =\\&= \varphi(x)+\varphi(y)+\psi(x)+\psi(y) =\\&=\varphi(x)+\psi(x)+\varphi(y)+\psi(y) =\\&= (\varphi \oplus \psi)(x) + (\varphi\oplus\psi)(y) \end{align} so $\varphi\oplus\psi$ is an endomorphism. Is there an example of a (necessarily non-commutative) group $G$, s.t. $\operatorname{End}(G)$ fails to be a ring exactly due to the closure of endomorphisms under non-commutative pointwise addition? Language nitpick: “closeness” is how close two things are; the term you want is “closure”. Take $G=S_3$, and consider the endomorphisms $\psi,\theta\colon S_3\to S_3$, both of which map $(1,2,3)$ to the identity. The endomorphism $\psi$ maps every transposition to $(1,2)$; and the endomorphism $\theta$ sends every transposition to $(1,3)$. The “sum” $\psi\cdot\theta\colon S_3\to S_3$ given by $(\psi\cdot\theta)(x) = \psi(x)\theta(x)$ is not an endomorphism. To see this, simply note that $(1,2)\mapsto (1,2)(1,3)=(1,3,2)$ (I compose permutations right to left), but then $e = (\psi\cdot\theta)((1,2)^2)\neq ((\psi\cdot\theta)(1,2))^2 = (1,2,3)$. (Of course, in addition this “pointwise operation” is not commutative, so it cannot be the addition of a ring.) Now, explicitly, what is $\mathrm{End}(S_3)$? You have the automorphisms, which correspond to conjugation by elements of $S_3$; the remaining endomorphisms are (i) the trivial map; and (ii) the maps that factor through $S_3/A_3$; there are three such maps: the two I mention above, plus the one that maps all odd permutations to $(2,3)$. So there are exactly ten endomorphisms of $S_3$. Can this be given a ring structure in which composition of endomorphisms is the ring multiplication? No. Because $S_3$ is noncommutative and every element gives a different inner automorphism, the monoid structure of $\mathrm{End}(S_3)$ under composition is noncommutative. But it is a theorem of Eldridge that if the order of a finite ring is cube free, then it is commutative. Since $|\mathrm{End}(S_3)|=10$ is cube free, the ring structure would necessarily be commutative, and so we see that it cannot be given such a ring structure.
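The counterexample is small enough to check by brute force. The following Python sketch (not part of the original answer) encodes permutations of {0,1,2} as tuples and verifies that ψ and θ are endomorphisms while their pointwise product is not:

from itertools import permutations

S3 = list(permutations(range(3)))
e = (0, 1, 2)

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]: apply q first, then p (right to left, as in the answer)
    return tuple(p[q[i]] for i in range(3))

def parity(p):
    # number of inversions mod 2: 0 for even, 1 for odd permutations
    inv = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    return inv % 2

swap01 = (1, 0, 2)   # plays the role of the transposition (1,2)
swap02 = (2, 1, 0)   # plays the role of the transposition (1,3)

psi = lambda x: e if parity(x) == 0 else swap01    # kills A3, sends odd permutations to (1,2)
theta = lambda x: e if parity(x) == 0 else swap02  # kills A3, sends odd permutations to (1,3)

# Both psi and theta are endomorphisms (they factor through S3/A3)...
assert all(compose(psi(x), psi(y)) == psi(compose(x, y)) for x in S3 for y in S3)
assert all(compose(theta(x), theta(y)) == theta(compose(x, y)) for x in S3 for y in S3)

# ...but their pointwise product is not:
prod = lambda x: compose(psi(x), theta(x))
assert any(compose(prod(x), prod(y)) != prod(compose(x, y)) for x in S3 for y in S3)
print("the pointwise product psi*theta is not an endomorphism")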
STACK_EXCHANGE
Im out for the weekend and only have my windows laptop on me My guess is a problem with the arguments on iwconfig They seem alright but it appears as if the program doesnt like it. Create an ISO using the archiso set as you as the live images onwards on the main download site will already contain the 3 14 menuentry 39 Arch Linux i686 39. iso codes install jackd libreoffice report builder bin install plymouth theme ubuntu text install. Debian Stretch firmware live DVD rt73 bin ) # firmware prism2 usb firmware installer will at the time of its own installation download the firmware files. download the correct megnéztem benne van a rt73 bin fájl De ha és csak elmélkedek) én ubuntu 12 04 LTS mini iso ból tákolok magamnak egy. I 39 m not exactly sure if its vital that you need an internet connection during an install I 39 ve had successful installs without internet with Kali, although every computer laptop is different. I have been looking for a way to download ed DSL N Damn Small Linux Not - aug06 iso download driver in the CDROM: f and rt73. I just realized I can 39 t unrar rar files on my Ubuntu machine. Search and download Linux packages for ALT Linux, Arch Linux, CentOS, Debian, Fedora, Mageia, Mint, OpenMandriva, openSUSE, RHEL, ROSA, Slackware and Ubuntu distributions. Binary firmware for Ralink wireless cards This package contains the binary firmware for wireless network cards with the Ralink RT2501 Turbo, RT2600, RT5201 Turbo, RT5600, RT5201USB, RT2800P D, RT2700P D, RT2700E D, RT2800E D, RT2800U D] or RT3000E D] chipsets or RT3070 RT3071 RT3072 chips, supported by the rt61 rt61pci, rt73 rt73usb. How do I get debian to see my wifi Audio Video Print This; Like 5 likes) dcrunkilton May 6, Sounds like you might be missing the driver, in particular if. I have to format my entire disk and reinstall Ubuntu I have installed a lot of software on my current system And I will have to reinstall all those updates, drivers and applications too. This package supports the following driver models Ralink 802 11n Wireless LAN Card. Ubuntu veterans will also find much to learn in links light up around the world as the version known as ISO image downloaded from the project. is the Ubuntu iso going to work when burned to a source envdir bin/ the ISO yourself with a web browser and put it in the same directory as. Gentoo: 10 years compiling - Testers for liveDVD enter here if so them you 39 ll need to download these images for now which should just work The file is rt73 bin. Binary firmware for Realtek wired and wireless network adapters. Slitaz aircrack ng iso download. Do not mix standard Debian with other non Debian archives such as Ubuntu For the package management lrwxrwxrwx 1 root root 05 usr bin. Write concrete URL where you downloaded rt73 bin file You can also use installer iso file with non free firmware I 39 ve tried installing ubuntu, fedora.
OPCFW_CODE
HW video decoding on mainline kernel is possible, but in most cases you have to do some kernel patching yourself and use a special library which provides VAAPI, or use modified ffmpeg libraries. MPEG2 decoding is possible with kernel 5.0 or 5.1 (not sure), basic H264 decoding will be possible with kernel 5.3, and HEVC decoding will probably come with kernel 5.5 (patches already exist). Note that the H264 and HEVC codecs are currently feature-incomplete. However, I did some improvements for LibreELEC and there most H264 and HEVC videos work. Patches are available on the LibreELEC github but are incompatible with the VAAPI library, so the only option is to use modified ffmpeg. Regarding memory consumption, please note that with the OrangePi Lite you have only 512 MiB of RAM, which is a bit low. LibreELEC for that reason doesn't support devices with less than 1 GiB of RAM. Consider the following calculation of memory requirements, no matter which kernel you use:
1. Multiple variants of 4K and 1440p resolutions exist, so I'll assume that 4K means 4096x2160 (same as on my LG TV) and 1440p means 2560x1440.
2. The kernel allocates one XRGB (4 bytes per pixel) buffer for the user interface (no matter if you're using a window manager or not), so for that you need 4096*2160*4 ~ 34 MiB of CMA memory.
3. Video is decoded to NV12 or NV21 formats and both take 1.5 bytes per pixel; that means 2560*1440*1.5 = 5.27 MiB of CMA memory per single frame.
4. The worst case for H264 and HEVC is that you need 16 reference frames to properly decode the current frame, which means an additional 5.27 * 16 ~ 84 MiB of CMA memory.
5. The VPU needs additional scratch buffers per frame. The size of those buffers depends on the codec features used, but for H264 it is typically about 1/4 of width times height, so in the worst case (1 + 16)*2560*1440/4 ~ 15 MiB of CMA memory.
6. The VPU needs some other scratch buffers, but they are small, about 1 MiB in total.
7. You also need additional CMA memory for providing encoded data to the VPU, but memory consumption for that heavily depends on the userspace library/player implementation. It is hard to give any estimate, so let's use 20 MiB.
Final estimate of the worst-case display + VPU CMA consumption for a 4K display and 1440p video: 34 + 5.27 + 84 + 15 + 1 + 20 ~ 160 MiB. You also have to consider that other devices may use CMA memory at the same time. In LibreELEC, the CMA memory size is set to 256 MiB because that much is needed for decoding 4K videos. Hopefully that gives you some perspective on how much memory is needed for H264/HEVC video decoding. I won't touch (use) the 3.4 kernel anymore, but I can help you with patching the mainline kernel for better H264 and/or HEVC support and bringing up ffmpeg-based solutions (that includes mpv), if you want.
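For reference, the worst-case arithmetic above can be reproduced in a few lines of Python; the inputs are the same assumptions used in the post (4096x2160 display, 2560x1440 video, 16 reference frames, a rough 20 MiB bitstream allowance):

MiB = 1024 * 1024

def cma_estimate(disp_w=4096, disp_h=2160, vid_w=2560, vid_h=1440,
                 ref_frames=16, bitstream_mib=20):
    ui = disp_w * disp_h * 4                            # XRGB UI framebuffer, 4 bytes/pixel
    frame = vid_w * vid_h * 3 // 2                      # one NV12/NV21 frame, 1.5 bytes/pixel
    refs = frame * ref_frames                           # worst-case reference frames
    scratch = (1 + ref_frames) * vid_w * vid_h // 4     # per-frame VPU scratch buffers
    misc = 1 * MiB                                      # small fixed VPU buffers
    bitstream = bitstream_mib * MiB                     # encoded-data buffers (rough guess)
    return (ui + frame + refs + scratch + misc + bitstream) / MiB

print("worst-case CMA use: ~%.0f MiB" % cma_estimate())  # prints roughly 159, i.e. the ~160 MiB from the post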
OPCFW_CODE
My team recently decided to start using RoundHouse to automate our database deployments, and I tried searching for documentation specific to using RoundHouse on a legacy database, but I didn’t find much. So I’m sharing here what I learned while setting up RoundHouse to be used on an existing database. I’m going to keep this post focused specifically on the steps needed to get up and running with RoundHouse on a legacy database. I won’t go into much detail on all the different features RoundHouse has to offer because there’s plenty of other resources for that. Many RoundHouse tutorials I found recommend creating a separate console project in Visual Studio that executes RoundHouse for the Db deployments. I prefer to just use the command line app provided by RoundHouse instead. Here are the steps I came up with. I’m not an expert in RoundHouse, so there might be a better way, but this worked for me. Create the RoundHouse folder The first step is to create the folder that will contain all the SQL scripts for RoundHouse, and make sure it’s included in the source control for the project. You will also want to create the individual sub-folders that RoundHouse uses. Here are the default folders for RoundHouse: alterDatabase\ runBeforeUp\ up\ runFirstAfterUp\ functions\ views\ sprocs\ indexes\ runAfterOtherAnytimeScripts\ permissions\ All these folders will be empty to begin with. We’ll populate a few scripts in the permissions folder, but other than that there’s no need to create sql scripts at this time. You’ll only need to start creating scripts when you want to change the database in the future. Make a backup of the Database Now you will want to take a backup of the database, but don’t include the data in the backup. It should just contain the schema, sprocs, views, etc. Create another folder called restore and copy the backup file to it. RoundHouse will skip this backup script during regular deployments. But it will be available to use when we want to stand up the database on a new server. Create the Permissions scripts The last step is to create the necessary permission scripts for the database. If the database is located in multiple environments with permissions specific to each environment, then you’ll want to create a separate permission script per environment. Fortunately RoundHouse makes it easy to create environment specific scripts. It provides a simple naming convention to denote which environment the script is supposed to be applied to. The naming convention for environment specific scripts is: LOCAL.sproc_permissions.ENV.sql will be ran in the LOCAL environment, and TEST.sproc_permissions.ENV.sql will be ran in the TEST environment. And now you’re ready to perform your first automated database deployment with RoundHouse. So run the RoundHouse console app in the folder you just made and point it to the database you’re using. You can find the exact command line arguments you need in RoundHouse’s documentation. This first deployment you perform won’t do much because the only scripts it will run are the permissions scripts. But if you look at the database tables, you will notice RoundHouse added 3 new tables that it uses to keep track of the scripts it applies to the database. Now when you want to make a change to the database, don’t update the database directly, but instead create a script in the ‘up’ folder. And then run RoundHouse and see how it finds the script you added and applies to the database for you. And that’s really all there is to get up and running on RoundHouse. 
I was surprised by how little work was actually required to RoundHousify a database, which is another reason why I like RoundHouse!
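If you prefer to script the folder creation rather than make each directory by hand, a small helper like this one (a hypothetical convenience script, not part of RoundHouse itself) will scaffold the default layout plus the restore folder described above:

import os

# Default RoundHouse folder layout, plus the restore folder for the schema-only backup.
FOLDERS = [
    "alterDatabase", "runBeforeUp", "up", "runFirstAfterUp",
    "functions", "views", "sprocs", "indexes",
    "runAfterOtherAnytimeScripts", "permissions", "restore",
]

def scaffold(root="db"):
    # Create the (initially empty) RoundHouse folders under `root`.
    for name in FOLDERS:
        os.makedirs(os.path.join(root, name), exist_ok=True)

if __name__ == "__main__":
    scaffold()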
OPCFW_CODE
current master broken!? Hi, right now the current version from master does not work at all for me. Looking at the log it seems that the query in graphite_tree has an extra . at the end which should not be there. [2017-07-12 12:58:41] I clickhouse.go:71: query {"request_id":"2","query":"SELECT Path FROM graphite_tree WHERE (Level = 5) AND (Path = 'carbon.agents.graphitedev002.pickle.metricsReceived' OR Path = 'carbon.agents.graphitedev002.pickle.metricsReceived.') GROUP BY Path HAVING argMax(Deleted, Version)==0","runtime_ns":6432321,"runtime":"6.432321ms"} [2017-07-12 12:58:41] I graphite-clickhouse.go:74: access {"request_id":"2","runtime":"6.548465ms","runtime_ns":6548465,"method":"GET","url":"/metrics/find/?local=1&format=pickle&query=carbon.agents.graphitedev002.pickle.metricsReceived&from=1499770721&until=1499857121","peer":"<IP_ADDRESS>:48454","status":200} [2017-07-12 12:58:41] I clickhouse.go:71: query {"request_id":"6","query":"SELECT Path FROM graphite_tree WHERE (Level = 5) AND (Path = 'carbon.agents.graphitedev002.pickle.metricsReceived' OR Path = 'carbon.agents.graphitedev002.pickle.metricsReceived.') GROUP BY Path HAVING argMax(Deleted, Version)==0","runtime_ns":4658751,"runtime":"4.658751ms"} [2017-07-12 12:58:41] I clickhouse.go:71: query {"request_id":"6","query":" SELECT Path, Time, Value, Timestamp FROM graphite WHERE (Path IN ('carbon.agents.graphitedev002.pickle.metricsReceived')) AND ((Date >='2017-07-11' AND Date <= '2017-07-12' AND Time >=<PHONE_NUMBER> AND Time <=<PHONE_NUMBER>)) FORMAT RowBinary ","runtime_ns":11681202,"runtime":"11.681202ms"} [2017-07-12 12:58:41] I graphite-clickhouse.go:74: access {"request_id":"6","runtime":"19.191811ms","runtime_ns":19191811,"method":"GET","url":"/render/?format=pickle&local=1&noCache=1&from=1499770721&until=1499857121&target=carbon.agents.graphitedev002.pickle.metricsReceived&now=1499857121","peer":"<IP_ADDRESS>:48454","status":200} I've bisected it down to 117efa3b9d07125d57c3bc6f13f0d2ab751597bd is the first bad commit commit 117efa3b9d07125d57c3bc6f13f0d2ab751597bd Author: Roman Lomonosov<EMAIL_ADDRESS>Date: Mon May 15 21:32:19 2017 +0300 update zap :100644 100644 f9fc933eb6a2a06169461103f3c1b53ceb980699 7bd0e607ddec34a4c97039b72eb8d2b0e069fbda M .gitmodules :040000 040000 54d57556519c0c07f3aa0ca107929341d9e6e375 37e986474ae52d2c816b5640b7a7a4f24200bd47 M config :100644 100644 3e5506c35e38055699a0f7ba69931caebcae90af cd4d14d048048e943ba3e6e3b469999349e07a0c M graphite-clickhouse.go :040000 040000 e55d6aed02ce35347137a5c8fec2321c56cfed98 2196e146f095e058191e10134267c9913d188218 M helper :040000 040000 52be63571bafd8ae7b25bcc29433defcc71c7fa5 1f74a7862fcb8534276b1f93aaaeaafd1c513f89 M render :040000 040000 3b6a0ce99138ea284590349d0aa33f66d8017ef4 83b6cabd5f099cd38d9f514c69bb3cad52605254 M tagger :040000 040000 52ad2c251dd4e077bfb886dd5ecf107af53c75b5 af4377c27303521a02d6d4cc2a4a859bbbee958f M vendor Right now I don't have the time to debug this further, but maybe later. Hi, Master may be broken, I use last release in production :( I started new finder with reverse path and not finished it. Extra dot at the end of metric is ok. graphite-web finds path with dot OR without dot. What is not working? It doesn't even Look for metrics in the the graphite table. Looking at the log above again it seems that I copied the wrong lines :( need to debug it again, sorry. Looking at the master's log it seems that there was no query into the graphite table at all, and no data was returned. I've lost rollup xml parser in "update zap" commit. 
Thanks for the report. Please reopen the issue if the problem still exists.
GITHUB_ARCHIVE
I remember back in the day when I used to see what looked like a black mamba in my local stream. Was that a water snake? Well, it's very wrong to assume that any snake seen in water is a water snake. Some of these reptiles are only found in water during a hunting expedition, as a matter of fact. Water snakes, also known as Nerodia snakes, are snakes which spend most of their time in or around water bodies, and they are nonvenomous in nature. This family of snakes is not to be mistaken for cottonmouths, otherwise known as water moccasin. The latter has poisonous venom, which makes it dangerous. Note: Like any other reptile, a Nerodia snake breathes air, but it can stay under water for up to an hour or so. Physical Characteristics of Nerodia Snakes Most Nerodia snakes are either reddish, olive green, gray, or brown in color, with dark bands or blotches on the back. The color and marks vary from one Nerodia species to another. There are species which appear solid black or white. Most Nerodia snakes have round eye pupils and have a rough body, thanks to their keeled scales. When it comes to size, male snakes tend to be lighter and shorter than females. Depending on the species, a water snake can reach 1.5 meters long, with the northern Nerodia snake coming out on top. Species of Water Snakes There are hundreds of Nerodia snake species in existence today. In Canada and the US, you will find 10 or so of these species. Find below some of the most common Nerodia species. - Brown water snake - Northern water snake - Diamond back water snake - Concho water snake - Green water snake - Southern water snake - Plain-bellied water snake - Salt marsh water snake Some of these species contain multiple subspecies. Northern water snake, for instance, contains up to four subspecies snakes. Note: Nerodia snake species are not synonymous with sea snakes. As the name suggests, sea snakes live in the sea, and they are known to have deadly venom. How Nerodia Snakes Behave In contrast to the general misconception, Nerodia snakes are not aggressive. However, they can bite in defense. But with their bite being harmless, that shouldn't be a big concern. The snakes tend to first secrete musk to deal with any threat. In some cases, these snakes may vomit or defecate. The bite will usually act as the last and final "bullet." Talking of climbing, the snakes are good climbers, so you may find them resting on tree branches along the river banks. Any slight disturbance would cause the snake to quickly seek refuge in water; you will see them dropping in the river, pond, or any other water body that is readily available. Are they social reptiles? Most Nerodia snakes prefer being alone. Nonetheless, the snakes may socialize immediately before and after brumation. So, if you see them basking together, don't conclude that they are not water snakes. Where They Live Nerodia snakes are native to Asia, Europe, and North America, and they are known to live in aquatic habitats, such as marshes, ponds, lakes, streams, and rivers; hence, the name water snakes. As revealed by researchers at the University of Michigan, Nerodia snakes prefer still water rather than moving water. A thing to note is that these snakes don't live in water throughout. They will usually come out of water to enjoy the sun. In so doing, however, they are never very far from a water body. What do Nerodia snakes eat? Turtles, tadpoles, frogs, and fish, just to name but a few. Crayfish and leeches are part of the diet too and so are insects. 
Small snakes, mice, and birds are covered as well. Being non-constrictors, water snakes swallow live preys. Like any other animal, these snakes help in balancing the ecosystem. How They Hunt Nerodia snakes hunt mostly in the daytime, given that they are diurnal animals, although hunting at night is also an option. Slow-moving fish is usually their preference. However, the allegiance shifts slowly to amphibians such as frogs and tadpoles and large preys (e.g. toads and salamanders) as the snake grows. These reptiles hunt for preys under the rocks, on tree branches, on water body bottoms, and many other areas where they can find food. When hunting in the water, the snake will usually stay with its mouth wide open, as it patiently waits for its prey. As the prey passes by, the snake will quickly close its jaws to hold the prey, before swallowing it whole. As research reveals (check the Journal of Herpetology), female and male water snakes become ready for reproduction at 3 years and 21 months respectively. The females can give birth to multiple live youngs, 20 or so, at a time, and this can happen every year. Although rarely, females can breed up to 100 snakes at once. Nerodia snakes usually mate during spring. Nerodia Snakes in Texas In Texas, and especially Fort Worth, Nerodia snakes are a common phenomenon. You will see them around ponds, swamps, lakes, creeks, and other water bodies. Examples of Nerodia snakes found in Texas include diamond back water snake, broad-banded water snake, and blotched water snake. Of course, there could be other snakes of the same family in Texas, but these three are the most common ones. Thus, if you're around Dallas and you see a snake near a river, then it's probably a Nerodia snake. Water Snakes as Pets If you've been wondering if a Nerodia snake can be a good pet, then you're not alone. Having grown up upcountry near a stream where Nerodia snakes were so common, I have always wondered whether taking such a snake home as a pet would be a good idea. Besides, snake keeping in my locality is unheard of. But after meeting a snake expert, I came to realize that many snakes can make good pets, including water snakes. To answer the above question, Nerodia snakes can make good pets. As a matter of fact, many snake pet keepers believe that these species of snakes are the best when it comes to keeping. Let's understand why. Non-venomous: Let's for a moment forget the myths and misconceptions we here about these aquatic reptiles and focus on real facts. While Nerodia snakes do bite, their bites don't carry any venom. For pet lovers like me, this truth takes worry and stress out of the equation. And with my little Daniel and Chris fond of turning virtually everything they find into a toy, there is very little to be concerned about. They can play with the pet while I accomplish my own tasks. Additionally, these snakes are known to be docile when in captivity. That said, keep in mind that Nerodia snakes, like most snakes, don't like too much handling. If handled too much, they may develop stress, and this can trigger bites, which while not venomous, they cause pain. I just can't fathom fangs sinking into my flesh or the flesh of my little Daniel. Thus, if you want to handle your snake, do it sparingly. Just a few minutes would be enough and good for both parties. This way, your snake will slowly adapt to handling and will not become agitated that easily. As a reminder, snakes have small brains, and they can forget their caretaker so easily. 
Feeding them is easy: simply catch a tadpole, a frog, or a rodent such as a mouse, and your snake will have a good meal. As a rule of thumb, avoid giving your pet live prey; the prey may hurt your snake while fighting back, and this is something you don't want to happen to your precious pet. Low bills: Nerodia snakes typically thrive at lower temperatures than other pet snakes. That can only leave you with low heating bills; hence, less spending. Some of the best water snakes to keep as pets include the following: - Red-bellied water snakes - False water cobras - Brown water snakes - Banded water snakes - Northern water snakes - Diamondback water snakes - Green water snakes Nerodia snakes are snakes that live around water bodies, and they are nonvenomous in nature. They can hold their breath under water for a long time, despite the fact that they breathe air. They should not be confused with sea snakes, which are known to be extremely venomous and dangerous. For pet enthusiasts, these snakes can make great pets, thanks to their docile and non-venomous nature.
OPCFW_CODE
import base64

from Crypto import Random
from Crypto.Cipher import AES

from znap.settings import key

BS = 16


def _pad(s):
    return s + (BS - len(s) % BS) * chr(BS - len(s) % BS)


def _unpad(s):
    return s[:-ord(s[len(s) - 1:])]


def encryption(message):
    message = message.encode('utf-8')
    message = _pad(message)
    obj = AES.new(key)
    ciphertext = obj.encrypt(message)
    return base64.b64encode(ciphertext)


def decryption(ciphertext):
    ciphertext = base64.b64decode(ciphertext)
    obj = AES.new(key)
    plaintext = obj.decrypt(ciphertext)
    plaintext = _unpad(plaintext)
    return plaintext


"""
def encryption2(message):
    print message
    message = _pad(message)
    print message
    iv = Random.new().read(AES.block_size)
    cipher = AES.new('This is a key123', AES.MODE_CBC, iv)
    return base64.b64encode(iv+cipher.encrypt(message))
"""
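For reference, the commented-out encryption2 above hints at the safer pattern: CBC mode with a random IV, rather than calling AES.new(key) without an explicit mode (which older PyCrypto silently treats as ECB). Below is a small, self-contained sketch of that pattern. It is not this project's code; it assumes a PyCrypto/pycryptodome-style API and a key that is 16, 24 or 32 bytes long.

import base64
from Crypto import Random
from Crypto.Cipher import AES

BS = AES.block_size  # 16 bytes

def _pad(data):
    pad_len = BS - len(data) % BS
    return data + bytes([pad_len]) * pad_len

def _unpad(data):
    return data[:-data[-1]]

def encrypt_cbc(message, key):
    iv = Random.new().read(BS)                      # fresh random IV per message
    cipher = AES.new(key, AES.MODE_CBC, iv)
    ciphertext = cipher.encrypt(_pad(message.encode("utf-8")))
    return base64.b64encode(iv + ciphertext)        # prepend the IV for decryption

def decrypt_cbc(token, key):
    raw = base64.b64decode(token)
    iv, ciphertext = raw[:BS], raw[BS:]
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return _unpad(cipher.decrypt(ciphertext)).decode("utf-8")

Prepending the IV to the ciphertext is a common convention so that decryption can recover it without a separate channel.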
STACK_EDU
Welcome to the second part of our series on building a microservice architecture from scratch to create a Twitter clone. In our first part, we introduced the project and talked about all the building blocks of our architecture. We've made some key design decisions, and now we're ready to start building. So today we'll start setting up the backend of our Twitter clone. Here you can find Part 1: Introduction to building our Twitter clone. In this part of our series, we start with three core services to build our backend: ArangoDB as the database, our key:value store, Redis, and nginx. We'll use the first two services as is; no changes to the base setup are required. nginx will require a bit more configuration, and we'll cover the key steps later. Our cloud platform will be mogenius, and fortunately there are templates available for most of the services we will be creating throughout the series. For more details on how to deploy templates, see the mogenius docs. Prerequisites: The services will be run using Docker. Please make sure you have Docker installed on your local machine. Our app will run via https in production, so on our local machine we will set up https with NGINX to get a corresponding environment. ...or check out our docs for 'Setting up Redis...' Once the Redis service is saved or you push commits to the main branch, auto provisioning runs. If you are developing features in one service, you should move to another branch until your feature is ready for testing or release; otherwise, your service will start a new deployment every time you push. It is probably best to stick to a common branching model like Gitflow or trunk-based development. By the way, you can also avoid triggering the build process with the commit message prefix [skip ci]. Prerequisites: We use make for our build and run commands to make life easier in our local development environment. For communication between local Docker services, we first set up an internal Docker network. Build and run the container. CAUTION: To avoid flooding our local machine with unnamed Docker images or zombie containers, we try to remove them before the build or run command. Please modify the Makefile if you don't want this behavior. You can check out the source code in our Redis Github repository. Access the ArangoDB admin page via your service URL and the exposed port. Log in to ArangoDB. You can also check out our ArangoDB docs. Create a Makefile and set a random 'password' for your development env. Build and run the container. In our Docker run command, we share the internal port with our local machine; we use the default port of ArangoDB. Now we can check if ArangoDB is running on our local machine.
Open it in your browser via: You can checkout the source code in our ArangoDB Github repository Basically we have three main entities. Post, Tag, User As you can see from the image, we will use ArangoDB's multi-model approach for a graph database. So we will have documents and graphs in one step. The entities listed will be our documents and the connections between them will be our edges (which, by the way, are again a document) So we will have: Collection of documents: Post, Tag, User Collection of edges: Tagged (from: Tag, to: Post), Linked (from: User, to: Post), (from: Post, to: Post) Setup the database and create collections Using the root user and _system database for applications is not a good practice. Therefore, we will create a new user and database through the administration interface. Create new collections of type documents or edges as listed above Final database overview Now we have made the basic setup for storage and are ready for the final step. Before we dive into the NGINX configuration, we have to set up our local machine for https. For my machine I used the tutorial How to use HTTPS for local development . If this doesn't fit for your operating system, please check your favorite search engine. NGINX is our gateway in front of our services. This means that each service gets its own location directive . To understand the idea, let's first take a look at the file structure of the service repository and the Dockerfile. As you can see, each conf.d/* and include.d/* has a variant for each stage. We run a local environment and a production environment. The Dockerfile distinguishes between these environments via the build time arguments. The default is our production build. The cert folder contains the local https certificates. This folder is listed in .gitignore, so it is not pushed to the repository. To get things going, we first add our conf.d/default.*.conf , which replaces the NGINX default configuration and contains the custom include.d/* config files. As a little teaser for the next article, we will also add the location directive for our first service 00-location-auth-service.*.conf . Since the makefile is used for local development we need to set build arguments --build-arg env=local. Please keep in mind in our project each new micro service needs a new location directive. This is extra administrative effort but for us it's worth it since we use a NGINX feature called auth_request which we will introduce in minute. To perform authentication, NGINX makes an HTTP subrequest to an external server where the subrequest is verified. If the subrequest returns a 2xx response code, the access is allowed, if it returns 401 or 403, the access is denied. First we set up our default.local.conf . Most of it is a copy of the original default.conf . Only the https and include configuration is added. Next is our 00-location-auth-service.local.conf . You can find the corresponding production configs (00-location-auth-service.prod.conf, default.prod.conf) in our Github repository. The only difference is the missing https directive (not needed with mogenius), the Kubernetes resolver and the URL of the auth service. Now we are ready for the implementation part. What we learned... That's it for today! Creating local development environment on Kubernetes can be tricky. Discover a simple yet powerful approach with Docker Desktop and mogenius.
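As a small addendum to the auth_request mechanism described above: the auth endpoint only has to answer the nginx subrequest with a 2xx or a 401/403 status. The actual auth service is introduced in the next article; the snippet below is just an illustrative Python/Flask stand-in (the route name, header handling and token store are assumptions, not our service) showing the minimal contract, assuming nginx is configured to pass the Authorization header along with the subrequest.

from flask import Flask, request

app = Flask(__name__)
VALID_TOKENS = {"demo-token"}  # stand-in for a real token store such as Redis

@app.route("/auth")
def auth():
    header = request.headers.get("Authorization", "")
    token = header[7:] if header.startswith("Bearer ") else header
    if token in VALID_TOKENS:
        return "", 200   # 2xx: nginx lets the original request through
    return "", 401       # 401/403: nginx denies the original request

With this contract in place, every new microservice only needs its own location block plus an auth_request directive pointing at the auth endpoint.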
OPCFW_CODE
Local Installation Instructions Use these instructions for setting up a local server environment for testing and development. Installing WordPress locally is usually done for the purpose of development. Those interested in development should follow the instructions below and download WordPress locally. WordPress Support - AMPPS: Free WAMP/MAMP/LAMP stack with the Softaculous installer built in. Offers 1-click install and upgrade of WordPress and other applications. - DesktopServer Limited: Free Windows/Mac server that creates multiple virtual servers with fictitious top-level domains (for example www.example.dev), specifically for working on multiple WordPress projects. - Mac App Store 1-click install for WordPress: installs a free, self-contained, all-in-one stack of WordPress and everything it needs to run: MySQL/MariaDB, Apache and PHP. - Installing WordPress Locally on Your Mac With MAMP. - User:Beltranrubo/BitNami: free all-in-one installers for OS X, Windows and Linux. There are also installers available for WordPress Multisite (User:Beltranrubo/BitNami_Multisite) using multiple domains or subdomains. - Instant WordPress is a free, self-contained, portable WordPress development environment for Windows that will run from a USB key. Software Appliance – Ready-to-Use You may find that using a pre-integrated software appliance is a great way to get up and running with WordPress, particularly in combination with virtual machine software (e.g., VMWare, VirtualBox, Xen HVM, KVM). Another program that can be used is Parallels, which, unlike the virtual machine software above, you have to pay for; it allows you to run both Mac and Windows on your machine. A software appliance lets users skip the manual installation of WordPress and its dependencies entirely, and instead deploy a self-contained system that requires almost no setup, in just a couple of minutes. TurnKey WordPress Appliance: a free Debian-based appliance that just works. It bundles a collection of popular WordPress plugins and features a small footprint, automatic security updates, SSL support and a web administration interface. Available as an ISO, as various virtual machine images, or for launching in the cloud. Unattended/automated installation of WordPress on Ubuntu Server 16.04 LTS Unattended installation of WordPress on Ubuntu Server: https://peteris.rocks/blog/unattended-installation-of-wordpress-on-ubuntu-server/ You can follow this guide by copy-and-pasting commands into a terminal to set up WordPress on a fresh Ubuntu Server 16.04 installation with nginx, PHP7, MySQL and free SSL from Let's Encrypt. You won't be prompted to enter any credentials or details as in other guides; everything is automated. You can even skip the installation wizard. If you don't have IIS on your computer or don't want to use it, you can use a WAMP stack: - WAMP Server or WAMP Server at SourceForge - AMPPS WAMPStack – has the Softaculous WordPress installer - EasyPHP – has a WordPress installer plugin - BitNami WAMPStack – has a WordPress stack - XAMPP WAMPStack These stacks can be downloaded freely and set up all the pieces you need on your PC to run a site. Once you have downloaded and installed WAMP, you can point your browser at localhost and use the link to phpMyAdmin to create a database.
Then, to install WordPress, download the zip file and extract it into the web directory of your WAMP installation (this is typically installed as c:\wamp\www). Finally, visit http://localhost/wordpress to start the WordPress install (assuming you extracted into c:\wamp\www\wordpress). Tip: If you want to use anything other than the default permalink structure on your install, make sure you enable the mod_rewrite module in WAMP. This can be enabled by clicking the WAMP icon in the taskbar, then hovering over Apache in the menu, then Apache modules, and ensuring that the rewrite_module item has a checkmark next to it.
OPCFW_CODE
In this day and age, how we store, process, and deliver our data is always changing. Since it launched in 2001, the Virtual Private Server (VPS) has become increasingly popular. By re-imagining existing technologies, this type of server has revolutionized the way we store and manage data on the web. The workings of a VPS You start with a single machine. One machine is used to create several virtual servers, and each virtual server is created for use by a single customer account. Although they run on the same machine, every container has its own software, and the hosting software is kept separate from all the software running on your virtual server/container. You select a VPS hosting company. The hosting company virtualizes their server. You then only have access to your own section (container), but split the costs of the server with the other clients. You are assigned a target amount of bandwidth, disk space, RAM and CPU power. The instant benefits - Divides a physical server into multiple virtual servers without having to purchase additional hardware. - Provides the benefits of a dedicated hosting service at a lower cost. - Offers higher security levels. - Provides a wide level of control over hosting (i.e. allows installing a wide range of server software). - Can provide guaranteed levels of server resources, which you do not get with shared hosting (not all VPS systems provide this). - Many VPS systems are upgradeable. VPS hosting is available with two primary levels of support: 1. Semi-Managed The hosting company is responsible for hardware and network support and the virtualization environment, but the client is responsible for installing, configuring and upgrading the operating system and all software. 2. Fully Managed The host manages all hardware, network, virtualization software, operating system and control panel software issues, while the client only concerns themselves with installing custom software. This varies from vendor to vendor. The other types of servers Shared hosting service. A web hosting service which houses many web sites simultaneously. There are many websites on the same machine, all sharing the same space and the same resources. The potential to have your website impacted by other users on the system is higher on shared hosting services. The host usually offers tools to easily install server software without root access. Dedicated server. For this server, a single user occupies all the resources of a computer. Most commonly used for websites with higher traffic needs or for heavy data/database processing. Tends to be expensive and can have poor support structures. What you want to consider when choosing a VPS solution: Are you upgrading from shared hosting? Almost all shared hosting is fully managed, and if you are not technical, make sure you go to a managed VPS solution or you will be lost. Will the vendor provide the proper migration tools? If you will be starting with shared hosting but will need to move to a VPS soon, will the vendor provide migration tools that automatically move your hosting data, domains, etc. to the VPS service? Have you had problems with others on a shared platform? If so, you will probably want to consider only a KVM or Xen solution, which can guarantee resources; other solutions behave a lot like shared systems. How fast do you need your VPS solution? Can the hosting vendor provide instant provisioning, or will it take them 12-48 hours to get it up and running? How expandable is the VPS? If you need to add more RAM or storage down the road, how difficult is that?
Some vendors force you to upgrade while others can add resources on demand. What if you want to add more than one hosting service? How easy is it for you to manage all of your hosting services? Does the vendor provide a single login to get to the control panel and all the tools to manage your service? Does the vendor allow you to easily upgrade from a VPS to a dedicated server when you need more power? All our VPS hosting environments come with a VPS-optimized cPanel for free. This means that you can easily upload files, manage domains, secure your server, and perform many other tasks to help keep your server running smoothly. If you are looking for a VPS hosting provider, we provide high-quality unmanaged VPS hosting and unmanaged dedicated server hosting located in France and Canada. Our hosting plans are designed to deliver the best performance at the most competitive price.
OPCFW_CODE
M: The high price of coming to America - wumi http://davidadewumi.com/2008/04/26/the-high-price-of-coming-to-america/ R: raju Interesting article. I share the author's sentiments to a large extent. I was born in India, but moved to Nigeria when I was 4 months old as my dad worked there. After having spent 15 years there, I moved back to India, for a total of 7 years, and have already spent more time in the US than in India. I have no family in the US, and occasionally do think of my family back in India and wonder if I am doing the right thing. I don't think of it as a "price" per se, since I don't regret coming to the US, but having led a such a "nomadic" existence for so long takes its toll. You leave behind social ties, the comfort of your surroundings, friends... And you start a whole new life everytime you move. Further, I think it changes you as a person. I have noticed that as I have gotten older (and consequently the more times I have moved :D) I have chosen to live a slightly more "hermit" existence. My group of friends has grown smaller and smaller, and I don't mind it so much. Maybe because in the back of my head I fear that its pointless. I apologize if I went totally off track, but somehow, this article resonated with me (at some level) R: anupamkapoor what a nice article ! i have had similar experiences (total of 8 years in barbados + us of a), and when my kid came along it seemed unfair (on my part) to bereft him of his roots. after some soul searching, we came back...
HACKER_NEWS
How to transform dates in Y-m format without days I have a data vector that looks like this: dates<-c("2014-11", "2014-12", "2015-01", "2015-02", "2015-03", "2015-04") I am trying to convert it into a recognizable date format, however with no luck: as.Date(dates,"%Y-%m") [1] NA NA NA NA NA NA I suspect that the problem lies in that there is no day specified. Any thoughts on how this can be solved? The zoo package has a nice interface to this, which allows storing of year-month data and an as.Date method to coerce to a Date object. For example: library("zoo") dates <- c("2014-11", "2014-12", "2015-01", "2015-02", "2015-03", "2015-04") The function to convert the character vector of year-months into a yearmon is as.yearmon. The second argument is the format of the date parts in the individual strings. Here I use %Y for the year with century and %m for the month as a decimal number, separated by a literal -. yrmo <- as.yearmon(dates, "%Y-%m") This gives > yrmo [1] "Nov 2014" "Dec 2014" "Jan 2015" "Feb 2015" "Mar 2015" "Apr 2015" This is actually the default, so you can leave off the format part entirely, e.g. yrmo <- as.yearmon(dates) To convert to a Date class object, the as.Date method is used > as.Date(yrmo) [1] "2014-11-01" "2014-12-01" "2015-01-01" "2015-02-01" "2015-03-01" [6] "2015-04-01" This method has a second argument frac which, when specified, allows you to state how far through the month you want each resulting Date element to be (how many days, as a fraction of the length of the month in days) > as.Date(yrmo, frac = 0.5) [1] "2014-11-15" "2014-12-16" "2015-01-16" "2015-02-14" "2015-03-16" [6] "2015-04-15" That's exactly it. Thanks! If we need to convert to Date class, it needs a day. So, we can paste with one of the days of interest, say 1, and use as.Date as.Date(paste0(dates, "-01")) got it, it works. Thanks!
STACK_EXCHANGE
We have already learned how to perform the basic system setup and enable remote access via RDP. Today, we move on to look into the basic steps of system personalisation. Perhaps the most basic step to take in order to modify the system to our liking, as well as a fundamental step in terms of safety, is to set and change the access password for our account. The first password, for the Administrator account, has already been set as part of the system installation. However, it is good to consider changing it from time to time for the sake of improved system safety. It is also good to observe the basic rules of choosing a safe password: - If possible, choose random characters as opposed to words or names. - Combine lower case and upper case letters, numbers and special characters. - The password should be at least 8 characters long. - Never give your password to anyone or write it down. A password that is safe as well as easy to remember with a little bit of effort may look something like this: k0mP12.server, sER.ver.vKulne1. Alternately, you can use one of many online generators of safe and easy-to-remember passwords. Safepasswd is among the popular ones. The password change itself can be performed via the Control Panel tool in Windows. First, click the Windows menu: After clicking, a menu with the Control Panel tool appears. Click it. Select the User Accounts menu and click it. Continue by clicking Change Account Type. Here, select the account that you want to change the password for. In our case, it would be the Administrator account. Finally, click Change the password. At this point, a screen appears that lets us enter the current password and then enter a new one twice. We can also set a cue in the last box. The cue is visible to everybody who attempts to connect to your server, meaning it should not be the password itself or anything that would directly give it away to an unauthorised person. Finally, click Change password and you’re done. Besides the password change, a lot of users also seem to struggle when it comes to setting up their time zone and time/date in general. So let’s have a look at that issue as well. First, open the Server Manager tool, which we have already been working with in the previous part of the series. Now select the Local Server again. In the right part of the main window, we can see the Time zone item. Now it is up to you to choose in which time zone you want the server to operate. It does not necessarily have to be the actual time zone that the server is physically located in, although it is the most common setting. Click the blue link with the time zone specification. The time zone and time settings appear. Click Change time zone. Here, we can freely choose the setting that suits our needs. Furthermore, it is necessary to make sure that the computer will regularly synchronise with a trustworthy NTP server, preventing delay. This can be absolutely vital in situations such as using the server for stock market trading or other time-intensive applications. Select Internet time in the time settings menu. Here, click Change settings. If the synchronisation is set up and active, we can leave the settings as they are and use one of the pre-set Internet time servers. Alternately, you can opt for any NTP server that you prefer by clicking the box with the time internet server name and entering the required name. Servers tik.cesnet.cz or tak.cesnet.cz are some of the many popular, reliable and very accessible time servers. 
When the settings have been changed, don’t forget to click Update now and OK. Basic system setup is now complete. Next time, we shall advance yet further. Author: Jirka Dvořák
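If you prefer generating a password locally rather than with an online generator, a few lines of Python using the standard secrets module will do. The character set and length below are just example choices matching the rules described earlier:

import secrets
import string

def generate_password(length=12):
    # mix lower case, upper case, digits and a few special characters
    alphabet = string.ascii_letters + string.digits + ".,-_!"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'kR0m.P1z_2sV' (illustrative output only)

For stricter policies, you may want to re-draw until the result contains at least one character from each class.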
OPCFW_CODE
Kitsune is the second digital game I worked on, the second game made by Emperium, produced in 2 months and shown in November 2015. This was our first 3D game demo, where we used the Unity Engine for the first time. It’s also an adventure game, with platforming and puzzle challenges. Here you play as Shinno, a Kitsune who has the ability to transition between the human and spiritual worlds. I’ll cover a more technical view of the game; if you want to know more about the story or mechanics, you can go to the game designer’s post. So, unlike Recall, I was the only programmer on this project, and the big challenge on this game was to make this “transition between worlds” and make both of those “worlds” interactive, all of it while learning C# programming. Here’s the link to the video footage of the game we produced. Shinno is the playable character. She is a girl with spiritual powers who can shapeshift into a fox, entering the spiritual dimension. In the normal world dimension, she only has basic controls for movement and interactions like jumping, climbing or interacting with objects. While in the spiritual dimension, she has 3 spiritual balls floating next to her that she can use to attack enemies. The hardest part here was the climbing logic. She can climb two types of surfaces: a rectangular surface (like a wall) and cylindrical surfaces (like the tree at the end of the stage). To make this possible, I used a lot of Colliders as triggers, and had to work with disabling the Rigidbody and using the Transform to move in those cases. The spiritual balls are actually very simple: they have a Collider that triggers damage on collision with enemies and a Trail Renderer to draw the lines of movement behind them. The Ghoul is just a basic enemy we designed to add a little more difficulty to the game and to test state machine programming, as well as an Encounter System I’ll discuss later on. He has a very simple behavior: he walks randomly around the environment if no hostiles are detected, and when one is detected, he starts chasing it until the distance is close enough to attack. He walks by jumping short distances, and has a limited line of sight, so if you sneak behind him, he won’t see you. To make his movement logic work I used the NavMesh and a NavMeshAgent, which is very easy to set up and adjust if needed. So, the monkey is one of the 12 Celestials of the game lore, and Shinno must defeat him to continue her journey. His challenge is that he multiplies himself into 3 and throws fruit at her from the tree branches, forcing her to climb the tree and attack him. Making his fruit-throw logic work required me to research some physics, since the fruit should hit the player wherever they go. The two-dimensional world This was the first challenge that came to me from the game designer, and it took me quite a while to develop. As I had no knowledge of events and delegates, I made it in a quite different way. So, every object that behaves differently in the two worlds extends a base MultiWorld class, which supports different interactions and behaviors. Every time the player successfully toggles dimension, every object that inherits that class (including the player) executes a ToggleWorlds function, whose behavior varies among its heirs. For example, this is where a Ghoul would appear or disappear. Next, we knew the spiritual dimension had to look different from the common world, and have a transition effect as well. For that, I used Image Effects.
To change the color scheme, I used the Color Correction Curves, controlling its values directly through code, and to make the transition effect, I used the Vortex, also controlled by code. The Encounter System As we would spawn groups of Ghouls, I wanted to try developing a system that I could use later for different types of enemies and situations, that would avoid having more active enemies than necessary. For that, I created the Encounter System based on what I had learned working with the Neverwinter Nights Aurora Toolset (I made Neverwinter Nights mods for quite a good time, it was very fun, really). It works in the following way: you set an area that should be the area that triggers the encounter (can be a huge rectangle or any shape, but must be a Collider), and set Spawn Points for the creatures to spawn. You can choose the prefabs to spawn and the amount of creatures that should be spawned. If you set more creatures than spawn points, the exceeding amount gets positioned randomly among the spawn points. The same would happen if there were more spawn points than creatures. You also set an Encounter Limit, that when the player triggers it, all creatures spawned by this encounter and the encounter itself, get destroyed. Finally, the interface was all done using the Canvas, which makes it easy to control all the elements shown on-screen. I’ll end it here for Kitsune. It was a really short gameplay experience, but we could learn a lot in the process of creating it. I hope you liked this one, keep tuned for more!
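A side note on the Monkey's fruit throw mentioned earlier: the underlying physics boils down to solving the constant-acceleration equation for the launch velocity. The sketch below is not the game's C# code, just an illustrative Python version of the calculation; the positions, flight time and gravity vector are made-up example values.

def throw_velocity(origin, target, flight_time, gravity=(0.0, -9.81, 0.0)):
    # target = origin + v*t + 0.5*g*t^2  =>  v = (target - origin - 0.5*g*t^2) / t
    t = flight_time
    return tuple((p1 - p0 - 0.5 * g * t * t) / t
                 for p0, p1, g in zip(origin, target, gravity))

# throw from the tree top at (0, 6, 0) to the player at (4, 0, 3) in 1.2 s
print(throw_velocity((0.0, 6.0, 0.0), (4.0, 0.0, 3.0), 1.2))

Re-evaluating this each time the Monkey throws is what makes the fruit land wherever the player currently is.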
OPCFW_CODE
OpenMV IDE crashes with SEGFAULT on start Fedora 32, python dependencies installed running ./openmvide or ./openmvide.sh either with sudo or rootless results in Segmentation fault (core dumped) journalctl entry is not verbose. This is what it has: Apr 24 20:11:39 attic systemd[1]<EMAIL_ADDRESS>Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- The unit<EMAIL_ADDRESS>has successfully entered the 'dead' state. Apr 24 20:11:39 attic audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@13-224> Apr 24 20:11:39 attic audit: BPF prog-id=272 op=UNLOAD Apr 24 20:11:39 attic audit: BPF prog-id=271 op=UNLOAD Apr 24 20:11:39 attic audit: BPF prog-id=270 op=UNLOAD Apr 24 20:11:39 attic abrt-server[224760]: Executable '/home/atticdweller/openmvide/bin/openmvide' doesn't belong to any package and ProcessUnpackaged is > Apr 24 20:11:39 attic abrt-server[224760]: 'post-create' on '/var/spool/abrt/ccpp-2020-04-24-20:11:39.367314-224688' exited with 1 Apr 24 20:11:39 attic abrt-server[224760]: Deleting problem directory '/var/spool/abrt/ccpp-2020-04-24-20:11:39.367314-224688' ...skipping... Stack trace of thread 224688: #0 0x00007fb717659fb1 n/a (/home/atticdweller/openmvide/lib/Qt/lib/libQt5Network.so.5.7.0 + 0x102fb1) #1 0x0000000000000201 n/a (n/a + 0x0) -- Subject: Process 224688 (openmvide) dumped core -- Defined-By: systemd -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: man:core(5) -- -- Process 224688 (openmvide) crashed and dumped core. -- -- This usually indicates a programming error in the crashing program and -- should be reported to its vendor as a bug. I don't know if the IDE runs on Fedora. Typically you'd get a missing dependency error and it wouldn't run. Getting a segfault is something else. You'll have to debug what's going on if you want it to run on Fedora 32. Probably if you compile form source it will run on that system. Closing this as outdated once the new IDE version comes out.
GITHUB_ARCHIVE
Semi-conditional planners for efficient planning under uncertainty with macro-actions (alternative title: Efficient planning under uncertainty with macro-actions) Massachusetts Institute of Technology. Dept. of Aeronautics and Astronautics. Planning in large, partially observable domains is challenging, especially when good performance requires considering situations far in the future. Existing planners typically construct a policy by performing fully conditional planning, where each future action is conditioned on a set of possible observations that could be obtained at every timestep. Unfortunately, fully-conditional planning can be computationally expensive, and state-of-the-art solvers are either limited in the size of problems that can be solved, or can only plan out to a limited horizon. We propose that for a large class of real-world planning under uncertainty problems, it is necessary to perform far-lookahead decision-making, but unnecessary to construct policies that condition all actions on observations obtained at the previous timestep. Instead, these problems can be solved by performing semi-conditional planning, where the constructed policy only conditions actions on observations at certain key points. Between these key points, the policy assumes that a macro-action - a temporally-extended, fixed length, open-loop action sequence, comprising a series of primitive actions - is executed. These macro-actions are evaluated within a forward-search framework, which only considers beliefs that are reachable from the agent's current belief under different actions and observations; a belief summarizes an agent's past history of actions and observations. Together, semi-conditional planning in a forward search manner restricts the policy space in exchange for conditional planning out to a longer horizon. Two technical challenges have to be overcome in order to perform semi-conditional planning efficiently - how the macro-actions can be automatically generated, as well as how to efficiently incorporate the macro-actions into the forward search framework. We propose an algorithm which automatically constructs the macro-actions that are evaluated within a forward search planning framework, iteratively refining the macro-actions as more computation time is made available for planning. In addition, we show that for a subset of problem domains, it is possible to analytically compute the distribution over posterior beliefs that result from a single macro-action. This ability to directly compute a distribution over posterior beliefs enables us to enjoy computational savings when performing macro-action forward search. Performance and computational analysis for the algorithms proposed in this thesis are presented, as well as simulation experiments that demonstrate superior performance relative to existing state-of-the-art solvers on large planning under uncertainty domains. We also demonstrate our planning under uncertainty algorithms on target-tracking applications for an actual autonomous helicopter, highlighting the practical potential for planning in real-world, long-horizon, partially observable domains. Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 163-168). Department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics.
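To make the contrast with fully-conditional planning concrete, here is a very rough, illustrative Python sketch (not the thesis' algorithm): a forward search in which each candidate macro-action is executed open-loop - with no branching on observations inside it - and the belief is only advanced and re-examined at the macro-action boundaries. The simulate and evaluate callbacks are assumptions standing in for a domain model and a belief-value heuristic.

def plan_semi_conditional(belief, macro_actions, depth, simulate, evaluate):
    """Pick the best macro-action by forward search.

    macro_actions  -- list of fixed primitive-action sequences
    simulate(b, a) -- returns (posterior belief, observation, reward)
    evaluate(b)    -- heuristic value of a belief at the horizon
    """
    if depth == 0:
        return evaluate(belief), None
    best_value, best_macro = float("-inf"), None
    for macro in macro_actions:
        b, reward_sum = belief, 0.0
        for action in macro:            # open-loop: no observation branching here
            b, _obs, r = simulate(b, action)
            reward_sum += r
        # the end of the macro is a "key point": the belief b reflects the
        # sampled observations, and the search recurses from it
        future, _ = plan_semi_conditional(b, macro_actions, depth - 1,
                                          simulate, evaluate)
        if reward_sum + future > best_value:
            best_value, best_macro = reward_sum + future, macro
    return best_value, best_macro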
OPCFW_CODE
require 'logger' require 'singleton' require 'ftools' require 'jinx/helpers/collections' require 'jinx/helpers/options' require 'jinx/helpers/inflector' # @param [String, IO, nil] dev the optional log file or device # @return [Jinx::MultilineLogger] the global logger def logger(dev=nil, opts=nil) Jinx.logger(dev, opts) end module Jinx # @param [String, IO, nil] dev the optional log file or device # @return [Jinx::MultilineLogger] the global logger def self.logger(dev=nil, opts=nil) Log.instance.open(dev, opts) if dev or opts Log.instance.logger end # Extends the standard Logger to format multi-line messages on separate lines. class MultilineLogger < ::Logger # @see Logger#initialize def initialize(*args) super end # Rackify the logger with a write method, in conformance with # the [Rack spec](http://rack.rubyforge.org/doc/SPEC.html). alias :write :<< private # Writes msg to the log device. Each line in msg is formatted separately. # # @param (see Logger#format_message) # @return (see Logger#format_message) def format_message(severity, datetime, progname, msg) if String === msg then msg.inject('') { |s, line| s << super(severity, datetime, progname, line.chomp) } else super end end end # Wraps a standard global Logger. class Log include Singleton # Opens the log. The default log location is determined from the application name. # The application name is the value of the +:app+ option, or +Jinx+ by default. # For an application +MyApp+, the log location is determined as follows: # * +/var/log/my_app.log+ for Linux # * +%LOCALAPPDATA%\MyApp\log\MyApp.log+ for Windows # * +./log/MyApp.log+ otherwise # The default file must be creatable or writable. If the device argument is not # provided and there is no suitable default log file, then logging is disabled. # # @param [String, IO, nil] dev the log file or device # @param [Hash, nil] opts the logger options # @option opts [String] :app the application name # @option opts [Integer] :shift_age the number of log files retained in the rotation # @option opts [Integer] :shift_size the maximum size of each log file # @option opts [Boolean] :debug whether to include debug messages in the log file # @return [MultilineLogger] the global logger def open(dev=nil, opts=nil) if open? then raise RuntimeError.new("The logger has already opened the log#{' file ' + @dev if String === @dev}") end dev, opts = nil, dev if Hash === dev dev ||= default_log_file(Options.get(:app, opts)) FileUtils.mkdir_p(File.dirname(dev)) if String === dev # default is 4-file rotation @ 16MB each shift_age = Options.get(:shift_age, opts, 4) shift_size = Options.get(:shift_size, opts, 16 * 1048576) @logger = MultilineLogger.new(dev, shift_age, shift_size) @logger.level = Options.get(:debug, opts) ? Logger::DEBUG : Logger::INFO @logger.formatter = lambda do |severity, time, progname, msg| FORMAT % [ progname || 'I', DateTime.now.strftime("%d/%b/%Y %H:%M:%S"), severity, msg] end @dev = dev @logger end # @return [Boolean] whether the logger is open def open? !!@logger end # Closes and releases the {#logger}. def close @logger.close @logger = nil end # @return (see #open) def logger @logger ||= open end # @return [String, nil] the log file, or nil if the log was opened on an IO rather # than a String def file @dev if String === @dev end private # Stream-lined log format. FORMAT = %{%s [%s] %5s %s\n} # The default log file. LINUX_LOG_DIR = '/var/log' # Returns the log file, as described in {#open}. # # If the standard Linux log location exists, then try that. 
# Otherwise, try the conventional Windows app data location. # If all else fails, use the working directory. # # The file must be creatable or writable. # # @param [String, nil] app the application name (default +jinx+) # @return [String] the file name def default_log_file(app=nil) app ||= 'Jinx' default_linux_log_file(app) || default_windows_log_file(app) || "log/#{app}.log" end # @param [String] app the application name # @return [String, nil] the default file name def default_linux_log_file(app) return unless File.exists?(LINUX_LOG_DIR) base = app.underscore.gsub(' ', '_') file = File.expand_path("#{base}.log", LINUX_LOG_DIR) log = file if File.exists?(file) ? File.writable?(file) : File.writable?(LINUX_LOG_DIR) log || '/dev/null' end # @param [String] app the application name # @return [String, nil] the default file name def default_windows_log_file(app) # the conventional Windows app data location app_dir = ENV['LOCALAPPDATA'] || return dir = app_dir + "/#{app}/log" file = File.expand_path("#{app}.log", dir) if File.exists?(file) ? File.writable?(file) : (File.directory?(dir) ? File.writable?(dir) : File.writable?(app_dir)) then file else 'NUL' end end end end
STACK_EDU
Copy filename into a range of length of adjacent column (as filldown) macro I want to assign a filename (say "CVC") from the first empty cell of column D to the row of that column that matches the last non-empty row of column E (similar to a filldown procedure). However, I'm having problems with the copy method in the last row of code. this is my try: Dim WB As Workbook Dim lastRow As Long Set WB = Workbooks.Open(fileName:= _ "C:\Users\gustavo\Documents\Minambiente\TUA\2015\Consolidar_base\CVC.xlsx") WBname = Replace(WB.Name, ".xlsx", "") lastRow = ThisWorkbook.Sheets(1).Range("E" & Rows.Count).End(xlUp).Row ThisWorkbook.Sheets(1).Range(Cells(Rows.Count, 4).End(xlUp).Offset(1, 0), "D" & lastRow).Value = WBname Right now, my data looks as follows: "column D" ¦ "Column E" valueD ¦ ValueE valueD ¦ ValueE ¦ ValueE ¦ ValueE ¦ ValueE After running the macro, the data would look as follows "column D" ¦ "Column E" valueD ¦ ValueE valueD ¦ ValueE CVC ¦ ValueE CVC ¦ ValueE CVC ¦ ValueE --> Note that what I am coping is the filename CVC not clear what you are trying to achieve ? can you simulate it manually in an Excel sheet and add it to your post ? it will help us help you I just added an example of the data If I am correctly understanding what you are trying to do (i.e. copy WBName into every cell in column D, starting from the row after the last used cell in column D and finishing in the row of the last used cell in column E), then this should work: Dim WB As Workbook Dim lastRowD As Long Dim lastRowE As Long Dim WBname As String Set WB = Workbooks.Open(fileName:= _ "C:\Users\gustavo\Documents\Minambiente\TUA\2015\Consolidar_base\CVC.xlsx") WBname = Replace(WB.Name, ".xlsx", "") With ThisWorkbook.Sheets(1) lastRowD = .Range("D" & .Rows.Count).End(xlUp).Row lastRowE = .Range("E" & .Rows.Count).End(xlUp).Row .Range(.Cells(lastRowD + 1, "D"), .Cells(lastRowE, "D")).Value = WBname End With The line setting the values could alternatively be written as: .Range("D" & (lastRowD + 1) & ":D" & lastRowE).Value = WBname
STACK_EXCHANGE
package com.wuest.prefab.blocks;

import com.wuest.prefab.ModRegistry;

import net.minecraft.block.AbstractBlock;
import net.minecraft.block.BlockState;
import net.minecraft.block.Blocks;
import net.minecraft.block.StairsBlock;
import net.minecraft.server.world.ServerWorld;
import net.minecraft.util.math.BlockPos;

import java.util.Random;

/**
 * This class is used to define a set of dirt stairs.
 *
 * @author WuestMan
 */
public class BlockDirtStairs extends StairsBlock implements IGrassSpreadable {
    /**
     * Initializes a new instance of the BlockDirtStairs class.
     */
    public BlockDirtStairs() {
        super(Blocks.DIRT.getDefaultState(), AbstractBlock.Settings.copy(Blocks.GRASS_BLOCK));
    }

    /**
     * Returns whether or not this block is of a type that needs random ticking.
     * Called for ref-counting purposes by ExtendedBlockStorage in order to broadly
     * cull a chunk from the random chunk update list for efficiency's sake.
     */
    @Override
    public boolean hasRandomTicks(BlockState state) {
        return true;
    }

    @Override
    public void randomTick(BlockState state, ServerWorld worldIn, BlockPos pos, Random random) {
        this.DetermineGrassSpread(state, worldIn, pos, random);
    }

    @Override
    public BlockState getGrassBlockState(BlockState originalState) {
        return ModRegistry.GrassStairs.getDefaultState()
                .with(StairsBlock.FACING, originalState.get(StairsBlock.FACING))
                .with(StairsBlock.HALF, originalState.get(StairsBlock.HALF))
                .with(StairsBlock.SHAPE, originalState.get(StairsBlock.SHAPE));
    }
}
STACK_EDU
Since when does OPENFILENAME.lpstrDefExt support extensions with more than three characters? The current version of the Windows API documentation of the OPENFILENAME structure states (emphasis mine): lpstrDefExt Type: LPCTSTR The default extension. GetOpenFileName and GetSaveFileName append this extension to the file name if the user fails to type an extension. This string can be any length, but only the first three characters are appended. The string should not contain a period (.). If this member is NULL and the user fails to type an extension, no extension is appended. This is incorrect, as executing the following MVCE on Windows 10 (Build 17134.5) shows: #include <stdio.h> #include <Windows.h> int main() { wchar_t filename[256] = { 0 }; OPENFILENAMEW ofn = { .lStructSize = sizeof(OPENFILENAMEW), .lpstrFilter = L"All Files\0*.*\0\0", .lpstrFile = filename, .nMaxFile = sizeof(filename), .lpstrDefExt = L"xlsx" }; BOOL ret = GetSaveFileNameW(&ofn); if (ret != 0) { wprintf(L"%s\r\n", filename); } } Entering test in the Save File dialog box yields C:\Users\...\Documents\test.xlsx, not C:\Users\...\Documents\test.xls, as the documentation claims. When did this change, i.e., on which target systems can I rely on lpstrDefExt supporting more than three characters? "note this is stale documentation, possibly dating back to Windows 3.1 when names were limited to 8.3 format. At least since Vista, it accepted longer (just confirmed myself on Windows 10 too)". @CodeCaster: Thanks that's very relevant! "at least since Vista..." would be perfect, since I need to support Windows 7/Server 2008 and above. On Vista and later, Get(Open|Save)FileName() are wrappers for the newer IFile(Open|Save)Dialog interfaces (unless you specify the OFN_ENABLEHOOK flag), which don't have many of the same limitations as the old APIs. Also, since Vista the recommendation is to use the Common Item Dialog, rather than GetOpenFilename. This goes back 25 years, starting with the emulation of MS-Dos 8.3 filenames. A file named longfilename.xlsx has an extra directory entry that will resemble longfi~1.xls. Which matches a *.xls wildcard. This support is overdue to be turned off, especially so for x64 versions that can't support 16-bit code anymore, but today still enabled by default. All you can do about it is verify the names you get back. @RemyLebeau: Just for fun, I just tried to add a hook. It still supports the long file name extension, but, wow, the resulting dialog brought back 16-bit memories: https://imgur.com/a/bEPtMTL @theB: That's good advice. In my case, that's not an option, since we call the dialog from VBA, and these dialogs don't work well with VBA. I just created the C program as a MVCE. @Heinzi You can get the Win95-XP dialog with a hook, you just have to include the Explorer flag, otherwise you get the Windows 3 style.
STACK_EXCHANGE
On Thursday, 17 May 2018 at 05:01:54 UTC, Joakim wrote: On Wednesday, 16 May 2018 at 20:11:35 UTC, Andrei Alexandrescu On 5/16/18 1:18 PM, Joakim wrote: On Wednesday, 16 May 2018 at 16:48:28 UTC, Dmitry Olshansky On Wednesday, 16 May 2018 at 15:48:09 UTC, Joakim wrote: On Wednesday, 16 May 2018 at 11:18:54 UTC, Andrei Sigh, this reminds me of the old quote about people spending a bunch of time making more efficient what shouldn't be done at all. Validating UTF-8 is super common, most text protocols and files these days would use it, other would have an option to I’d like our validateUtf to be fast, since right now we do validation every time we decode string. And THAT is slow. Trying to not validate on decode means most things should be validated on input... I think you know what I'm referring to, which is that UTF-8 is a badly designed format, not that input validation shouldn't be done. I find this an interesting minority opinion, at least from the perspective of the circles I frequent, where UTF8 is unanimously heralded as a great design. Only a couple of weeks ago I saw Dylan Beattie give a very entertaining talk on exactly this topic: Thanks for the link, skipped to the part about text encodings, should be fun to read the rest later. If you could share some details on why you think UTF8 is badly designed and how you believe it could be/have been better, I'd be in your debt! Unicode was a standardization of all the existing code pages and then added these new transfer formats, but I have long thought that they'd have been better off going with a header-based format that kept most languages in a single-byte This is not practical, sorry. What happens when your message loses the header? Exactly, the rest of the message is garbled. That's exactly what happened with code page based texts when you don't know in which code page it is encoded. It has the supplemental inconvenience that mixing languages becomes impossible or at least very cumbersome. UTF-8 has several properties that are difficult to have with - It is state-less, means any byte in a stream always means the same thing. Its meaning does not depend on external or a - It can mix any language in the same stream without acrobatics and if one thinks that mixing languages doesn't happen often should get his head extracted from his rear, because it is very common (check wikipedia's front page for example). - The multi byte nature of other alphabets is not as bad as people think because texts in computer do not live on their own, meaning that they are generally embedded inside file formats, which more often than not are extremely bloated (xml, html, xliff, akoma ntoso, rtf etc.). The few bytes more in the text do not weigh that much. I'm in charge at the European Commission of the biggest translation memory in the world. It handles currently 30 languages and without UTF-8 and UTF-16 it would be unmanageable. I still remember when I started there in 2002 when we handled only 11 languages of which only 1 was of another alphabet (Greek). Everything was based on RTF with codepages and it was a braindead mess. My first job in 2003 was to extend the system to handle the 8 newcomer languages and with ASCII based encodings it was completely unmanageable because every document processed mixes languages and alphabets freely (addresses and names are often written in their original form for instance). 2 years ago we implemented also support for Chinese. The nice thing was that we didn't have to change much to do that thanks to Unicode. 
The second surprise was with the file sizes, Chinese documents were generally smaller than their European counterparts. Yes CJK requires 3 bytes for each ideogram, but generally 1 ideogram replaces many letters. The ideogram 亿 replaces "One hundred million" for example, which of them take more bytes? So if CJK indeed requires more bytes to encode, it is firstly because they NEED many more bits in the first place (there are around 30000 CJK codepoints in the BMP alone, add to it the 60000 that are in the SIP and we have a need of 17 bits only to encode them. as they mostly were except for obviously the Asian CJK languages. That way, you optimize for the common string, ie one that contains a single language or at least no CJK, rather than pessimizing every non-ASCII language by doubling its character width, as UTF-8 does. This UTF-8 issue is one of the first topics I raised in this forum, but as you noted at the time nobody agreed and I don't want to dredge that all up again. I have been researching this a bit since then, and the stated goals for UTF-8 at inception were that it _could not overlap with ASCII anywhere for other languages_, to avoid issues with legacy software wrongly processing other languages as ASCII, and to allow seeking from an arbitrary location within a byte I have no dispute with these priorities at the time, as they were optimizing for the institutional and tech realities of 1992 as Dylan also notes, and UTF-8 is actually a nice hack given those constraints. What I question is that those priorities are at all relevant today, when billions of smartphone users are regularly not using ASCII, and these tech companies are the largest private organizations on the planet, ie they have the resources to design a new transfer format. I see basically no relevance for the streaming requirement today, as I noted in this forum years ago, but I can see why it might have been considered important in the early '90s, before packet-based networking protocols had won. I think a header-based scheme would be _much_ better today and the reason I know Dmitry knows that is that I have discussed privately with him over email that I plan to prototype a format like that in D. Even if UTF-8 is already fairly widespread, something like that could be useful as a better intermediate format for string processing, and maybe someday could replace
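As an aside, the byte-count comparison made earlier in this thread is easy to check empirically. A quick, illustrative Python 3 snippet (the sample strings are arbitrary):

samples = {
    "English": "One hundred million",
    "Chinese": "亿",
    "Greek": "εκατό εκατομμύρια",
}
for name, text in samples.items():
    data = text.encode("utf-8")
    print(f"{name:8} {len(text):3d} code points -> {len(data):3d} UTF-8 bytes")

# UTF-8 is also self-synchronising: every continuation byte matches 10xxxxxx,
# so a decoder can resynchronise from any position in the stream.
assert all((b & 0xC0) == 0x80 for b in "亿".encode("utf-8")[1:])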
OPCFW_CODE
#ifndef LZY_CONCATENATION_H
#define LZY_CONCATENATION_H

#include "../common.h"

namespace lzy {

    template<typename FirstSequence, typename SecondSequence>
    class concatenation_sequence : public sequence<concatenation_sequence<FirstSequence, SecondSequence>> {
        using item = SequenceItemType<FirstSequence>;

    public:
        concatenation_sequence(FirstSequence&& first, SecondSequence&& second)
                : firstDone(first.done()), first(std::move(first)), second(std::move(second)) {
        }

        concatenation_sequence(concatenation_sequence&& other)
                : firstDone(other.firstDone), first(std::move(other.first)), second(std::move(other.second)) {
        };

        bool done() {
            return firstDone && second.done();
        }

        void advance() {
            if (firstDone)
                second.advance();
            else {
                first.advance();
                firstDone = first.done();
            }
        }

        const item& current() {
            return firstDone ? second.current() : first.current();
        }

    private:
        bool firstDone;
        FirstSequence first;
        SecondSequence second;
    };

    template<typename FirstSequence, typename SecondSequence>
    typename std::enable_if<
            FirstSequence::isLazySequence::value && SecondSequence::isLazySequence::value,
            concatenation_sequence<FirstSequence, SecondSequence>
    >::type
    operator + (FirstSequence&& first, SecondSequence&& second) {
        return concatenation_sequence<FirstSequence, SecondSequence>(std::move(first), std::move(second));
    };

}

#endif //LZY_CONCATENATION_H
STACK_EDU
True random numbers with C++11 and RDRAND I have seen that Intel seems to have included a new assembly function to get real random numbers obtained from hardware. The name of the instruction is RdRand, but only a small amount of details seem accessible on it on Internet: http://en.wikipedia.org/wiki/RdRand My questions concerning this new instruction and its use in C++11 are the following: Are the random numbers generated with RdRand really random? (each bit generated from uncorrelated white noise or quantum processes? ) Is it a special feature of Ivy Bridge processors and will Intel continue to implement this function in the next generation of cpu? How to use it through C++11? Maybe with std::random_device but do compilers already call RdRand if the instruction is available? How to check whether RdRand is really called when I compile a program? I would check the Intel Manual, this is the authorative documentation of their CPU interface. It's still there in Haswell - so far it looks like it's staying. Maybe you should accept David Johnston's answer instead? That certainly depends on your view of the determinism of the universe, so is more a philosophical question, but many people consider it being random. Only intel will know, but since there was demand to add it, its likely there will be demand to keep it std::random_device is not required to be hardware driven, and even if it is, it is not required to use rdrand. You can ask its double entropy() const noexcept member function whether it is hardware driven or not. Using rdrand for that is a QoI issue, but I would expect every sane implementation that has it available to do so (I have seen e.g. gcc doing it). If unsure, you can always check assembly, but also other means of hardware randomness should be good enough (there is other dedicated hardware available). See above, if you are interested in whether its only hardware, use entropy, if interested in rdrand, scan the generated machine code. libc++'s implementation of random_device uses /dev/urandom by default, or another file specified by the user. VS2012's implementation uses Window's crytography services. I designed the random number generator that supplies the random numbers to the RdRand instruction. So for a change, I really know the answers. 1) The random numbers are generated from an SP800-90 AES-CTR DRBG compliant PRNG. The AES uses a 128 bit key, and so the numbers have multiplicative prediction resistance up to 128 bits and additive beyond 128. However the PRNG is reseeded from a full entropy source frequently. For isolated RdRand instructions it will be freshly reseeded. For 8 threads on 4 cores pulling as fast as possible, it will be reseeded always more frequently than once per 14 RdRands. The seeds come from a true random number generator. This involves a 2.5Gbps entropy source that is fed into a 3:1 compression ratio entropy extractor using AES-CBC-MAC. So it is in effect a TRNG, but one that falls back to the properties of a cryptographically secure PRNG for short sequences when heavily loaded. This is exactly the semantic difference between /dev/random and /dev/urandom on linux, only a lot faster. The entropy is ultimately gathered from a quantum process, since that is the only fundamental random process we know of in nature. In the DRNG it is specifically the thermal noise in the gates of 4 transistors that drive the resolution state of a metastable latch, 2.5 billion times a second. 
The entropy source and conditioner are intended to be SP800-90B and SP800-90C compliant, but those specs are still in draft form.

2) RdRand is a part of the standard Intel instruction set. It will be supported in all CPU products in the future.

3) You either need to use inline assembly or a library (like OpenSSL) that does use RdRand. If you use a library, the library is implementing the inline assembler that you could implement directly. Intel gives code examples on their web site. Someone else mentioned librdrand.a. I wrote that. It's pretty simple.

4) Just look for the RdRand opcodes in the binary.

Since the PRISM and Snowden revelations, I would be very careful about using hardware random generators, or relying on one single library, in an application with security concerns. I prefer using a combination of independent open-source cryptographic random generators. By combination, I mean for example: let ra, rb, rc be three independent cryptographic random generators, and r the random value returned to the application. Let sa, sb, sc be their seeds and ta, tb, tc their reseed periods, e.g. reseed rb every tb draws. By independent: belonging as far as possible to independent libraries, and relying on different ciphers or algorithms.

Pseudo-code:
// init
seed std rand with time (at least millisec, preferably microsec)
sa = std rand xor time // of course, not the same time evaluation
// loop
sb = ra every tb
sc = rb every tc
r = rb xor rc
sa = rc every ta

Of course, every draw shall be used only once. Probably two sources are enough:
// init
seed std rand with time (at least millisec, preferably microsec)
sa = std rand xor time // of course, not the same time evaluation
// loop
sb = ra every tb
sa = rb every ta
r = rb xor ra

Choose different values for ta, tb, tc. Their range depends on the strength of the random sources you use. EDIT: I have started the new library ABaDooRand for this purpose.

Good point about PRISM and using more than one source. However, std::rand is garbage and no one should use it any more (see https://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful). Some other suggestions: combining RdRand with /dev/random (on Linux) and application-level high-res timestamps (or just their lower bits), such as for process start, user input, network traffic etc. The processor's performance counters may also be useful. If you need a lot of data, use the before-mentioned values to incrementally seed a crypto-strength PRNG that has this ability.

1) No, the numbers from RdRand are not truly random, since they come from a cryptographically-secure pseudorandom number generator. However, RdRand, RdSeed, and the Intel Secure Key technology are probably the closest to truly random you will find.

2) Yes, the feature is available in all Intel processors that appear in laptops, desktops, and servers starting with the Ivy Bridge processors you mention. These days, the features are also implemented in AMD chips.

3 and 4) The Intel software development guide is the place to look for these answers. There is an interesting discussion of how Intel Secure Key is applied to an astrophysical problem here (http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120) and a non-paywalled version here (https://arxiv.org/abs/1707.02212). This paper describes how the technology works, how to implement it, and describes its performance (Sections 2.2.1 and 5). Had to read it for a class. The links are poor, though;
better links: Intel® DRNG, Intel® DRNG Software Implementation Guide and Wikipedia RdRand.

I think they are "said to be" random... Since it's for encryption. I wouldn't worry too much about the quality of the random numbers. I think Intel will keep doing it, as they always regard backward compatibility as important, even if this instruction may be useless in the future. I am sorry I cannot answer this question because I don't use C++11. You can try librdrand.a if you don't want to dig into assembly code. Intel has provided the library for free download on their website. I have tested it; it's pretty convenient and has an error-reporting mechanism (since the random number generator has a small probability of failing to generate a random number). So if you use this library, you only need to check the return value of the functions in librdrand. Please let me know if there is anything wrong in my reply. Thanks. Good luck. xiangpisaiMM

"Since it's for encryption. I wouldn't worry too much about the quality of the random numbers." -- Whaaat??
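To make the std::random_device discussion above concrete, here is a minimal C++11 sketch (not taken from any of the answers). Whether these calls end up executing RdRand is a quality-of-implementation matter, so the entropy() query and an inspection of the generated assembly remain the only real checks.

#include <iostream>
#include <random>

int main() {
    std::random_device rd;

    // entropy() returns 0 for purely deterministic implementations; a
    // non-zero value suggests a hardware-backed source, though the exact
    // figure is implementation-defined.
    std::cout << "claimed entropy: " << rd.entropy() << " bits\n";

    // Draw a few values; each call may ultimately map to RDRAND,
    // /dev/urandom, or a crypto API depending on the standard library.
    for (int i = 0; i < 5; ++i)
        std::cout << rd() << '\n';
}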
STACK_EXCHANGE
Get property value aggregates: display name changes with resolution

What happened: The field name of a property changes when the resolution changes from >= MINUTE to RAW. Happens for a single query as well as for multiple queries in the same panel.

What you expected to happen: Expect the field name to remain the same and include the property name to make it easy to identify. Can be worked around for standard panels with Field Override --> Fields returned by query. Becomes an issue for custom-built panels which require a way to identify queries based on names, and all names are "raw" without the name of the property.

How to reproduce it (as minimally and precisely as possible):
1. Query type Get property value aggregates with Aggregate Average, resolution Auto.
2. Set the time frame to be greater than 20 min (should be average values).
3. See the DataFrame JSON and the value of the key "name" under "schema". This includes the property name.
4. Change the time frame to < 15 min (should be raw values).
5. See the DataFrame JSON and the value of the key "name" under "schema". This does not include the property name.

Screenshots: Screenshot of step 3, Screenshot of step 5

Environment:
Grafana version: 10.0.0
Plugin version: 1.9.2
OS Grafana is installed on: Docker Ubuntu base image
User OS & Browser: Windows & Chrome
Others:

Hi @egheie, thanks for submitting this! I notice that in your description you mention "and all names are "raw" without the name of the property". But in the steps to reproduce and screenshots you refer to the frame name (which doesn't include "raw" anywhere). Can you clarify which field is the one that's causing the problem? The frame name not being "AssetName PropertyName", or the field name being "raw" and not "avg" when the auto resolution is changed? Thanks!

Hi @idastambuk, thank you for the response. Sorry, that might not have been clear enough. The screenshots above show how the frame name changes when the timeframe changes, or when the API call changes from GetAssetPropertyAggregate to GetAssetPropertyValueHistory. If adding a legend to a time series panel for the steps to reproduce, with 1 or more queries, the legend says (see screenshots below): "raw" when resolution is "raw"; "AssetName PropertyName avg" (for multiple queries) when resolution is minute, hour or day; "avg" for a single query when resolution is minute, hour or day. The problem is that two different properties from the same asset end up having the same name when the resolution is switched to 'raw'. I expect them to be different since they are different properties, so if that is what you mean by "The frame name not being "AssetName PropertyName"", then yes, this is what is causing the problem. Sorry for the lengthy response; hope it explained the problem in a clearer way so we have the same understanding of the issue. Thanks for the support!

Hi @egheie I think I understand what you mean now. Just to confirm, this legend should say: AssetName PropertyName1 raw; AssetName PropertyName2 raw, to follow the same naming convention as avg when two properties are present?

Hi again @egheie I think it would help to know which field/name you're trying to capture with your custom plugin: Line 17 you refer to in the Data Frame JSON is the frame name. Currently it's just the AssetName ('test') for "raw" queries. It does make sense to me to change it to AssetName PropertyName ('test testName'). If we do this, this won't change when switching from 'avg' to 'raw'.
When it comes to the field name (currently "raw", type number in the 2nd screenshot), since the auto resolution changes to "raw", I don't think it would make sense to have it remain 'avg'. The only thing that we can do is add the asset and property names like: AssetName PropertyName1 raw. Just the final word will change when switching from 'avg' to 'raw'. Thanks for helping me clarify this!

Hi @idastambuk, thank you for the response; I appreciate the support you and the team provide.

"Line 17 you refer to in the Data Frame JSON is the frame name. Currently it's just the AssetName ('test') for "raw" queries. It does make sense to me to change it to AssetName PropertyName ('test testName'). If we do this, this won't change when switching from 'avg' to 'raw'."

I agree with this, and that would solve it for us.

"When it comes to the field name (currently "raw", type number in the 2nd screenshot), since the auto resolution changes to "raw", I don't think it would make sense to have it remain 'avg'. The only thing that we can do is add the asset and property names like: AssetName PropertyName1 raw. Just the final word will change when switching from 'avg' to 'raw'."

Also agree with this. The field name should reflect the actual resolution ("raw" if it is raw, and "avg" if it is average). It is not important for our case whether the final word is there or not; it is the point above which is causing the issue.

Hi @egheie, we just released the new version of the plugin (1.10.0) with the fix :)

Hi @idastambuk, thank you for your quick fix.
GITHUB_ARCHIVE
Materials tab preview generation causes project to hang on save

I have a number of different materials (shaders included) that the material selector tab likes to re-generate previews for every time it saves. This causes the editor to hang and become unresponsive for upwards of 20 or so seconds when a save is triggered. This seems to be a new thing, because I don't remember it being so bad before. There is a chance that the Godot 4.3 update did something here. A good way to test is to have a number of materials (let's say 10) in the materials tab and save. Then filter them out (I typed "none" in the filter field, as an example) and saving no longer causes the editor to hang. There is, of course, the possibility this is an issue with my project, but I'm not doing anything special with my shaders as far as I can tell. NOTE: The materials tab doesn't need to be open either. This hanging issue was eluding me for a while because I thought it was related to a large asset.

Some video evidence of this issue. Not quite so dramatic of a hang here. Made a point to trim down on some unused shaders I was experimenting with. Still demonstrates the issue: https://github.com/user-attachments/assets/7c0e379f-77ac-458e-b6b0-36ebd42fc481

I made a change to how the thumbnails are being generated in 1.0.4. Now it's using a Godot library routine to generate the thumbnails. It might be faster to switch back to the old method where I was generating them in a separate viewport, although I don't know if that will be faster or not. Edit: I've not noticed much slowdown on my development machine - could something other than thumbnail generation possibly be causing this?

I was trying to find what else could have caused it, but I narrowed it down to shaders. Then I saw the materials tab and toyed around with that. Maybe the process works differently depending on hardware??? My main machine isn't a slouch, but maybe there are some inconsistencies there. Also, I feel it's worth asking: does it need to generate previews every time? Can't it cache the previews and only re-generate them when the material or shader in question is "dirty"? That would mean not all previews are generated at once and they are only re-generated when they need to be.

The issue there is that I don't know when the shaders become dirty. They are resources managed outside of Cyclops and I don't think there is a way to be notified of them changing. Do you still get the slowdown when the materials tab is not open? You could just not have the window open as a workaround. I don't know when I'll get enough time to work on this. I'm a bit pressed at the moment.

OK. Little bit of a write-up after being able to spend more time looking into this. Still don't have the full picture, though. Will defer to you if this should be closed.

"The issue there is that I don't know when the shaders become dirty. They are resources managed outside of Cyclops and I don't think there is a way to be notified of them changing."

Understandable. But also a little surprising that plugins wouldn't be able to access that.

"Do you still get the slowdown when the materials tab is not open? You could just not have the window open as a workaround."
The reason this took as long to identify as it did was due to the fact that this happened even with the materials tab closed. So, I was able to experiment a little more with this once I carved out some time. I think this is a compounding issue between multiple plugins, maybe an issue with Godot itself and how it manages plugins that read assets from the file system. I think the main culprit is somewhere in AssetDock, with the hanging stopping (or at least being significantly reduced) once I disabled that addon. But it does seem to be worsened by the shader preview generation that Cyclops does. Hence my suspicion this is an issue with Godot itself. But I don't know how the editor manages plugins to really be able to validate those suspicions.

"I don't know when I'll get enough time to work on this. I'm a bit pressed at the moment."

Yeah, I wouldn't worry so much about this. Maybe see if you can replicate the hanging on your side? I'm curious how reproducible this is.
GITHUB_ARCHIVE
README
------
Welcome to the negative(-11) PHP MVC Framework. You are using version 2 (Chocolate).

Features
--------
Lightweight MVC PHP Framework.
Namespace autoloading.
Straightforward and easy to use. No complex setup.
Free and Open Source.
Includes core MVC architecture.
Includes basic MySQL package.
Includes basic setup for PHPUnit testing.

Learn More
----------
You should check out the wiki for more information: https://github.com/negative11/negative11-chocolate/wiki There you will find detailed installation and setup instructions.

Requirements
------------
PHP Version 5.3+
Apache Web Server with mod_rewrite enabled (if using the included .htaccess file).

Basic Installation
------------------
It's very easy to get started.
1. Set the application directory as your website root. It is recommended that you keep all other folders outside of the public web directory.
2. Open parameters.php and modify any desired configuration settings. Follow the instructions for specifying environment paths and packages. You may need to change the ENVIRONMENT_ROOT directory in index.php if you placed the application directory in a different location than the rest of the framework.
3. Open your browser and point it to your website. If you have everything configured correctly, you should see the framework information page.
4. If you have difficulties, refer to the full installation guide in the wiki.

Common Problems
---------------
If you get a 500 error when trying to load the framework, it is possible that mod_rewrite is not enabled. Check your Apache configuration. If you don't see the framework page, ensure that you specified your paths correctly in index.php, and that the user running Apache has the correct permissions to load the files.

Apache Issues
-------------
If you see a 404 page when loading the website for the first time, ensure that the ENVIRONMENT_ROOT in index.php is defined as the absolute path of the folder that contains the system and packages folders. Ensure that mod_rewrite is enabled and that your .htaccess is set to the values provided. Your Apache virtual host must be set up to point the accessed domain at the appropriate folder, as the .htaccess file uses '/' as the RewriteBase (@see http://httpd.apache.org/docs/current/mod/mod_rewrite.html#rewritebase for more information regarding RewriteBase). You must set the AllowOverride All directive to enable .htaccess for the project.

Sample Apache Virtual Host
--------------------------
<VirtualHost *:80>
    DocumentRoot /var/www/example/application
    ServerName example.com
    <Directory /var/www/example/application>
        AllowOverride All
        allow from all
        Options +Indexes
    </Directory>
</VirtualHost>

Shared Hosting Environments
---------------------------
If you can't seem to load any pages beyond the front page, or if you always see the front page regardless of path, it is possible that $_SERVER['PHP_SELF'] is not set to the expected value. This is very common in shared hosting environments. In parameters.php you may select a different variable as a workaround (either PATH_INFO or REQUEST_URI) by modifying the CORE_SERVER_VAR parameter. The variable you select (PHP_SELF by default) is used during framework routing to determine which controller should be loaded.
Note that changing the default variable may break usage of the framework via CLI, as the chosen variable may not be available to the command line. License ------- This framework is dual-licensed under the GPLv3 and/or MIT licenses per your requirements. You may modify it and redistribute it as you wish. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Additional Help --------------- If you get totally stuck, you can contact the maintainer: email@example.com
OPCFW_CODE
[lustre-discuss] lustre-discuss Digest, Vol 112, Issue 30
strosahl at jlab.org
Wed Jul 29 09:03:05 PDT 2015

This sounds exactly like the issue I encountered over a month ago with my Lustre 2.5.3 system. The quick solution I found was to set qos_threshold_rr to 100% (so flat round robin, not weighted). However, that causes a problem where some OSTs would go over 90% while others were still under 50%. I was able to come up with a hack... I created a pool that included all the OSTs except the ones that were not usable, and then put every directory in that pool (called production). Once that was done I was able to turn the QOS round robin back on. A problem with this is that, in 2.5.3, pools are not properly inherited (https://jira.hpdd.intel.com/browse/LU-5916). That means that new directories wouldn't get the pool information, and would thus only land on the OSTs above the bad ones. This was solved using the changelog, which shows when directories are created. We were then able to write some code that assigned every new directory to the production pool. So far it seems to be working. Another issue I've since discovered is that since files created before the production pool was created don't have a pool, using lfs_migrate (which uses the file's striping, not the directory's striping) caused files to be written to the OSTs above the bad OSTs.

Date: Wed, 29 Jul 2015 17:31:25 +0200
From: Massimo Sgaravatto <massimo.sgaravatto at pd.infn.it>
To: lustre-discuss at lists.lustre.org
Subject: [lustre-discuss] Lustre doesn't use new OST
Message-ID: <55B8F1CD.5080509 at pd.infn.it>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

We had a Lustre filesystem composed of 5 OSTs. Because of a problem with 3 OSTs (the problem is described in the thread "Problems moving an OSS from an old Lustre installation to a new one"), we disabled them. Now we want to reformat (mkfs.lustre --reformat ...) these 3 OSS and make them online. For the time being we performed this operation just for one OSS (using a new index number). The current scenario is the following (OST0005 is the reformatted OST):

lfs df -h /lustre/cmswork/
UUID                   bytes     Used  Available  Use%  Mounted on
cmswork-MDT0000_UUID  374.9G     3.5G     346.4G    1%
cmswork-OST0000_UUID   18.1T    14.5T       2.7T   84%
cmswork-OST0001_UUID   18.1T    14.2T       3.0T   83%
OST0002 : inactive device
OST0003 : inactive device
OST0004 : inactive device
cmswork-OST0005_UUID   13.6T   415.1M      12.9T    0%
filesystem summary:    49.7T    28.7T      18.5T   61%

The problem is that the "Lustre scheduler" is not selecting OST0005 at all for new files. Only if I use "lfs setstripe --index 5" do I see that the relevant files are written to this OST. Otherwise only OST0000 and OST0001 are used. We didn't change the values for qos_threshold_rr and qos_prio_free, which are therefore using the default values (17%, 91%). I can't find anything useful in the log files. Any idea?
OPCFW_CODE
Since Vidar will be at IndieCade East at the end of April, I wanted to take the opportunity to update the demo. For better or worse, until this point the demo build (used for the Kickstarter, used at conventions, etc.) was a completely separate branch from the main developer’s build. It was stable, but there was no easy way to bring code over from the development side and implement it in the demo. At the same time, bug fixes in the demo would have to be re-implemented in the development build. So that changed this week. Now, the demo is simply a switch thrown in the main game to set it to demo mode. Certain things happen when that switch is thrown, but the game takes place in the same space. Changes to the town apply regardless. Changes to the puzzles, changes to the tools and everything. Today, for example, I updated the weather outside in Vidar. That happened in both the demo and the actual game. Fabulous. Something I should’ve done a long time ago. This is great, because it means that as new systems are put in place (the tool system, for example), I can show them off at conventions without having to recreate them in the demo. So what is the demo mode going to look like? Similar to the prior demo, it’s one in-game day. You’ll arrive in Vidar, 4 people will be killed (instead of the previous 3), and you’ll be able to receive some quests, go into the cave, do one room. The cast of characters has been rotated a bit and moved around, so that our tabled NPCs could stretch their legs a little bit. More importantly, the role of the Ice Cave has been significantly minimized. In the previous demo, nearly every quest took place in the Ice Cave because, way back when, it’s all I had finished / had the art for. Now, we have about 2-3 quests per biome (Ice, Dark, Water, Boulder) available in the demo. AND, since Vidar now allows you to choose what room to go to, when playing the demo you can choose to complete any quest you’ve received. Which ones you receive is still random, based of course on who the 4 people to be killed were. As part of the process of updating the demo, the following changes have been made globally. Not ready to call it a new version/build quite yet, but here’s a list of what’s been done over the past few weeks: - Vidar is now in 6 maps instead of 1 - Lots of grave spawning - Revamped interiors - New tool icons - New weather - New dialogue - Tons of new items - Tons of new journal entries - Tons of bug fixes Back to work!
OPCFW_CODE
Snippets using hard tab character no longer work

VSCode Version: 1.12.0
OS Version: 10.12.4

Steps to Reproduce:
- The prop code snippet no longer works for an ASP.NET Core project.
- For an Angular 4 project the prop snippet produces the output below (what does [object Object] mean?) and I cannot select the default. Thank you for your help.

private _value : string;
public get value() : string {
    [object Object] ,
}
public set value(v : string) {
    [object Object] ,
}

@borakasmer can you provide more concrete steps that help to understand your problem? What extensions do you use? etc. Thanks a ton!

I have seen a few people report snippets not working - but the thing seems to be more that the ordering of completions differs. Several people have resolved this by adding the setting: "editor.snippetSuggestions": "top". Worth giving this a try to see if it returns you closer to your expected behavior.

@kieferrm I used these extensions for VS Code: C# for Visual Studio Code (powered by OmniSharp), Angular TypeScript Snippets for VS Code, that's all. I create a WebApi project with "dotnet run webapi" and the code snippet does not work. And I create an Angular project with "ng new ProjectName" and again the code snippet does not work. @seanmcbreen "editor.snippetSuggestions": "top" does not work. Only the suggestion row changes, but the snippet still does not work properly. Thank you for your help.

@borakasmer Can you attach a sample of a snippet that isn't working after the update? Also, does it work when hitting F1 > Insert Snippet? This is how it works for me.

I have a similar problem since I updated to 1.12. My snippet (TypeScript):

"Component class skeleton": {
    "prefix": "rcc",
    "body": [
        "import * as React from \"react\"",
        "",
        "export interface Props {$1}",
        "",
        "export interface State {$2}",
        "",
        "class $3 extends React.Component<Props, State> {",
        " public static defaultProps: Props = {$4}",
        "",
        " render() {",
        " return <div/>",
        " }",
        "}",
        "",
        "export default $3"
    ],
    "description": "Typescript react component skeleton"
},

This is the result:

import * as React from "react"

export interface Props {}

export interface State {}

class extends React.Component<Props, State> {
    [object Object] ,
}

export default

@hudecsamuel So, instead of the real snippet, [object Object] is inserted? Do you use suggest to insert the snippet or the command palette? Ok, I can reproduce. Thanks.

So, when reading the snippet from disk we parse it with [object Object] already. The problem is that the snippet contains tabs in the strings, which isn't valid JSON (thanks @aeschli). The tabs must be replaced with \t. The snippet works with these modifications:

{
    "Component class skeleton": {
        "prefix": "rcc",
        "body": [
            "import * as React from \"react\"",
            "",
            "export interface Props {$1}",
            "",
            "export interface State {$2}",
            "",
            "class $3 extends React.Component<Props, State> {",
            "\tpublic static defaultProps: Props ={$4}",
            "",
            "\trender(){",
            "\t\treturn <div/>",
            "\t}",
            "}",
            "",
            "export default $3"
        ],
        "description": "Typescript react component skeleton"
    }
}

@borakasmer @hudecsamuel Where do these snippets come from? Did you get them with an extension or did you manually edit them?

@jrieken I created it manually. Well, using \t instead of a tab makes sense; the only problem is that it worked until now, so this update can break some existing snippets. Thanks.

Yeah, there was a bug in our JSON parser in that it would accept escape sequences like tabs. Unfortunately fixing that bug has caused this swirl. I verified that the code yeoman generator does not generate tab control characters inside snippet bodies.
pushed to release/1.12

@jrieken Can you verify the fix? verified in insiders

I am seeing this behavior as well, and am running 1.12.1 Stable. When will this fix make it to general release?

When will this fix make it to general release?

Early next week.

A little side note: Before I ran across this issue, I was able to fix my snippets by converting tabs to spaces - which eliminates the need for \t in my case. Not sure if that is necessary information, but I thought I would pass it along.

Yes. JSON doesn't allow the use of a tab (its ASCII code) inside strings. We used to tolerate that, which actually was a bug. Snippets should be using spaces or the \t sequence. The fix here is to make the parser complain but still produce a syntax tree.

@jrieken So does this mean we have to edit every single snippet we have, or will this be fixed in the next update of VS Code?

My snippets are working again after updating... but now I see this warning/error all over my snippets: Invalid characters in string. Control characters must be escaped. What should be done here?

What should be done here?

@raniesantos https://github.com/Microsoft/vscode/issues/25938#issuecomment-301097385

In my snippets I opted to use spaces (four spaces) for indentation purposes inside JSON strings, so the snippet definition looks cleaner instead of being a bunch of \t\t\t in this specific case.
GITHUB_ARCHIVE
What is Buddhist debate and how do I get started? It appears in ancient India there was a style of debate that became associated with Buddhism and is still a part of at least Gulugpa Tibetan Buddhism. What is it and how would I get started? https://www.google.com/search?q=Gelugpa+debate including for example Overview of the Gelug Monastic Education System suggests that the "Gelug" style of debate happens as part of the education, perhaps of adolescents, in monasteries. I don't understand what information/answer/benefit you were hoping for, by posting that question here. JFGI answers aren't really helpful. You can down vote it if you don't like it. I'm not sure I understand the question, because I'm sure you could Google it and more, and you presumably already have. Having just Googled it briefly myself, I find a wealth of information. Did you want someone like me to summarize some in an answer? If I found such information, was that even concerning the style of debate which you were asking about? And what about "how would I get started": if it is the kind of debate that involves people shouting during their study-day, presumably you need to be somewhere in person where that's being conducted, which would be country-specific. Mike Olds' answer does seem like a summary of the style of dialog found in suttas, but maybe that's not what you're asking about since you mentioned "Gulugpa". In the suttas a debate usually takes the form: I hold such and such to be the case. the response: I hold such and such a different case to be the case. There is no confrontation. No dealing with the other persons argument. You accept or do not accept. You are held to be capable of understanding both arguments and hold the wrong one only because you have not heard the correct one before. Argument concerning cases is held to be 'thinking out loud' and is not done. Again, you are held to be capable of reasoning through all the arguments yourself. The original statement can be made up to three times with the same response being made up to three times. Then a deadlock is declared and the issue is submitted to an authority both can trust. Here is one example: http://obo.genaud.net/dhamma-vinaya/pts/an/05_fives/an05.166.hare.pts.htm A later form of debate, not found in the suttas, but found in the controversies that arose around the second Council, involved directly dealing with the substance of the opponant's argument. You say such and such, but here it is understood that this is the case. Do you agree that here it is understood that this is the case? If so you must agree that your argument is defeated. I agree that such and such is understood to be the case, but it does not apply in the case I bring up for such and such reasons. More or less the way a reasonable dialogue would be conducted today, but at an essentially lower level than the earlier form. There is never any reference to the individual involved, only the issues are discussed.
STACK_EXCHANGE
The GPS Coupon feature gives businesses the possibility to reward loyal customers with incentives for their repeat business.

Q: How does it actually work?
A: It works by reading the mobile phone's GPS coordinates; the page then checks whether they match the coordinates of the business location. When a customer accumulates enough check-ins he receives a check-in incentive (i.e. free drinks, food, a discount, etc.).

Note: As an app developer you can manage how many check-ins are needed to unlock an incentive and how many hours must pass before a customer is allowed to check in once again.

In order to add a new GPS Coupon to your app, go to the Edit Pages menu, then click the "+" Add New Page button. From the list of page types find GPS Coupon and click on it to add it to the application. Once the page is added to the application, you can proceed with editing it. On the right side you will find a list of page elements that should be configured. Let's walk through all the available configuration options:

|1||Edit Style||You can change the style of page elements.|
|2||Coupon name||Set up the coupon name|
|3||Coupon image||Set up the coupon image|
|4||Start date||Set up the start date for the current coupon|
|5||End date||Set up the end date for the current coupon|
|6||Check-In target amount||Set up the number of check-ins required to unlock the coupon|
|7||Hours before next check-In||Set up the number of hours before the next check-in operation|
|8||Can be used again||Set up a flag for whether the current coupon can be used multiple times|
|9||Coupon description||Provide a description of your coupon|
|10||Locations||Set up the list of all available locations where this coupon can be redeemed|
|11||Location details||Set up latitude and longitude for each location|

Once we have set up all of the options in the app editor (on the website), let's quickly review how it works on a real device. In the application, the GPS Coupon page will have two buttons, "Check-in" and "How to Unlock?". Under "How to Unlock?" you will find instructions on how to unlock the coupon. If the customer's current location matches at least one of the locations specified in the Location Details field (see field 11), he will get access to the coupon. If the location is different from the specified one, a Google Map page will appear with directions to the desired location.

Note: Please take a look at the above use cases to see what happens when:
a) You are in a different location - it will open a Maps page with the location(s) and, if you want, directions.
b) The coupon has already expired.
c) The coupon is pending.

If you enjoyed this feature, please leave us a comment below.
OPCFW_CODE
const Subnet = require("./../lib/Subnet.js");

function scenarioFiveSubnet() {
    const subnets = [];
    subnets.push(2);
    subnets.push(7);
    subnets.push(15);
    subnets.push(29);
    subnets.push(58);
    return { Item1: subnets, Item2: "192.168.72.0/24" };
}

const scenario = scenarioFiveSubnet();

test('getSubnetCreated Should Return Eight When Five Subnets Needed', () => {
    const subnet = new Subnet(scenario.Item1, scenario.Item2);
    expect(subnet.getSubnetCreated()).toBe(8);
});

test('isValid Should Return False When Prefix Is Not A Valid Number', () => {
    const subnet = new Subnet(scenario.Item1, "192.168.72.0/Z");
    expect(subnet.isValid()).toBe(false);
});

test('isValid Should Return False When Subnets Needed Greater Than Available', () => {
    const subnet = new Subnet(scenario.Item1, "192.168.72.0/30");
    expect(subnet.isValid()).toBe(false);
});

test('isValid Should Return False When Prefix Not Informed', () => {
    const subnet = new Subnet(scenario.Item1, "192.168.72.0");
    expect(subnet.isValid()).toBe(false);
});

test('isValid Should Return False When The IP Is Not Integer', () => {
    const subnet = new Subnet(scenario.Item1, "192.168.A.0/30");
    expect(subnet.isValid()).toBe(false);
});

test('isValid Should Return False When IP Has Less Than Four Octets', () => {
    const subnet = new Subnet(scenario.Item1, "192.168.0/30");
    expect(subnet.isValid()).toBe(false);
});

test('isValid Should Return False When IP Octet Out Of Range', () => {
    const subnet = new Subnet(scenario.Item1, "192.168.1.256/30");
    expect(subnet.isValid()).toBe(false);
});

test('isValid Should Return True When Subnets Needed Lower Than Available', () => {
    const subnet = new Subnet(scenario.Item1, scenario.Item2);
    expect(subnet.isValid()).toBe(true);
});

test('getNetworks Should Return The Expected Network Addresses', () => {
    const subnet = new Subnet(scenario.Item1, scenario.Item2);
    expect(subnet.getNetworks()[0].getNetwork()).toBe("192.168.72.0");
    expect(subnet.getNetworks()[1].getNetwork()).toBe("192.168.72.64");
    expect(subnet.getNetworks()[2].getNetwork()).toBe("192.168.72.96");
    expect(subnet.getNetworks()[3].getNetwork()).toBe("192.168.72.128");
    expect(subnet.getNetworks()[4].getNetwork()).toBe("192.168.72.144");
});
STACK_EDU
I'm running into the following randomly: System.AccessViolationException was unhandled. Message: An unhandled exception of type 'System.AccessViolationException' occurred in WindowsBase.dll. Additional information: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

A little background:
- I'm running Runtime v100.0
- The app is written using MVVM and Prism IoC
- There's a View (TwoDView) which has a MapView
- TwoDView has a ViewModel (TwoDViewModel) which navigates -- uses RequestNavigate(...) -- to a view (ThreeDView) which has a SceneView
- ThreeDViewModel responds to the navigation and applies the camera (calling a controller, created to manage the SceneView using a WeakReference<>) passed via navigation parameters
- The controller calls SetViewpointCameraAsync

There is also a back button in ThreeDView to allow going back to TwoDView for selecting another 3D area for inspection. It seems multiple calls to SetViewpointCameraAsync are causing my issue... if I use SetViewpointCamera... no issue (it seems). Any ideas??

Unfortunately this is a known issue in the v100.0 release. Apologies if the release notes were not quite as specific on this as they should have been (use of `setViewpointCamera` vs SetViewPointAsync). "When calling setViewpointCamera on a timer with high animation speeds a crash may be encountered." The issue is under consideration/investigation for addressing in the next release, and we're also looking at ways we can expose a mechanism to cancel these async SetViewpoint... operations - that might help in this scenario.

Thanks Michael. I failed to check the release notes. In my case, the situation is stable. But it will be nice when the camera option is corrected -- setting a Camera's distance (via the constructor) causes issues when the camera is applied to a SceneView.

Is there a timeline for when SetViewpointAsync will be available for the WPF SDK? I see that this issue is listed as addressed in the 100.1 release notes, but specifically for Android, and I'm still hitting this exception with the WPF SDK. Just curious when this will be rolled out to other languages as well.

It should have been addressed for all our .NET APIs in the 100.1 release (it was a bug in the underlying C++ codebase which is shared by both WPF and Android). Please can you share your code that shows this issue in WPF?

We're working on a sample application right now to isolate the use of the method. We'll verify that we continue to get the access violation before passing it your way. It could take a while since the occurrence of the violation itself is random, and it generally takes several hours of running the application through functional simulation for us to hit it. We are also gathering event viewer logs, and can provide dump files that might help, although they didn't really help us all too much.
OPCFW_CODE
Boxplots are a type of graph that shows how data is dispersed within a dataset. The dataset is split into quartiles as a result of this. This graph depicts the data set's minimum, maximum, median, and first and third quartiles. A boxplot's box begins at the first quartile (25 percent) and terminates at the third (75 percent). As a result, the box reflects the middle half (50 percent) of the data, with a line within it indicating the median. Whiskers extend on either side of the box to the most extreme data points and, if outliers exist, they are represented by circles. This tutorial will educate you on how to use R to make boxplots.

Creating Boxplot in R

A box and whisker plot can be created using R's "boxplot()" function. Various inputs can be used to create this graph, including vectors and data frames. You can also enter a formula as input when producing boxplots for numerous groups in the same graph.

Creating Boxplot Using a Vector in R

If you want to create a box plot in R from a vector, simply pass the vector to the "boxplot()" function. Here we have created a vector "s" and assigned it a list of numerical values. Using the "boxplot()" function, pass this vector "s" as a parameter. The boxplot in R is set to be vertical by default, but if you want to change it to horizontal, you can do so by setting the "horizontal" argument to "TRUE." A horizontal boxplot created from a vector is displayed below. It's essential to keep in mind that boxplots obscure the data's underlying distribution. To fix this problem, the "stripchart()" function in R can be used to insert dots into a boxplot. Here we have used the method "jitter." "pch" means plotting character. The default "pch" in R is 1, which creates an empty circle, whereas "pch=19" means solid circles. So what we used are solid circles with an orange color. Outliers will not be overplotted if the data points are jittered.

Creating Boxplot Using "notch" in R

We can also make a boxplot with a notch in R. It assists us in determining how the medians of various data groups compare with one another. By setting the notch argument to TRUE, you can illustrate the 95 percent confidence intervals for the median in the R boxplot. The box represents the upper and lower quartiles, while the center line shows the median. A "notch," or shrinking of the box, is drawn around the median in notched box plots. Notches can help determine the significance of a discrepancy in medians. If there is no overlap between the notches of 2 boxes, there's a good chance the medians aren't the same. The boxplot drawn with "notch" is represented below.

Creating Boxplot Using a Dataset in R

To create a boxplot in R, you can also use data frames in the "boxplot()" function. In this instance, we will use the built-in dataset "ChickWeight" provided by R base. Here you can see the dataset inside the "ChickWeight" table. It contains 4 columns: weight, Time, Chick, and Diet. All the columns have numerical values stored in them. We will choose 2 columns, i.e., weight and Diet, from the dataset. Using the "boxplot()" function, we will draw boxplots for the selected data. In the above code piece, we have designed a boxplot of "weight" against "Diet." We have specified the variables' names together with the dataset name.
Inside the parentheses of the "boxplot()" function, we have used the data frame name "ChickWeight," the "$" operator to specify the column, and the column name "weight," followed by the second column with its data frame name, "ChickWeight$Diet." The resultant boxplot clearly shows the outliers' dispersion. To make this boxplot visually better and more detailed, you can add dots. You can accomplish this by using the "stripchart()" function. You can see the dots we created to show the essential data division in each boxplot.

Creating Multiple Boxplots in R

Creating multiple boxplots is another technique that can be used in R programming. To implement this method, we are using a built-in dataset in R base. The dataset we used here is "trees," provided by R base. We can also add colors to the boxplot. In the "boxplot()" function, we set the color "col" to "rainbow," which assigns a different color to each boxplot. If you want to plot a distinct boxplot for every column in your R data frame, you may do so with the "lapply()" function. In this example we split the graphics parameters "par" into one row and as many columns as the dataset has, so that the individual graphs can be plotted side by side. The "invisible()" function prevents the "lapply" function's output text from being visible. The image below shows the boxplot created for each data column individually.

R programming provides a variety of operations that can be performed. Creating a boxplot is another useful and simple method to display data visually in plots. In this article, we discussed what boxplots are and how they display data. We explained four different techniques that can be used to draw boxplots in R, using RStudio on Ubuntu 20.04: using simple vectors to create boxplots, utilizing "notch," using data frames, and creating multiple boxplots. We demonstrated each method by elaborating on different examples of code. This will make learning R for creating boxplots much easier for you.
OPCFW_CODE
Chris or Sahirh might have some ideas, but I need to know what your situation is. You said you have 2 class C networks but 4 sites that are not connected. How are you going to connect them? I assume you want them all talking together. It sounds like you already have 2 networks. Is this already working on 2 sites and you want to combine them and add 2 more? If this is the case, are the 2 sites that already have the addresses connected and talking to each other? Are the networks you already have public or private? Do you have 2 Class C addresses that have been subnetted? Do you have a diagram of how it is set up now and one of what the new situation is?

Re: Two Class C Networks? 15 years 4 months ago #1907

I have four imaginary areas that I need to subnet together. I am not sure how I am going to connect them; that is the main reason I'm here. There are no networks, it is all theory. So I have nothing. My boss wants me to draw a diagram, so no, I don't have one; that is the whole point of this exercise. But I get confused with how I could use two class "C"s and make everyone talk to each other. It is all imaginary, so whatever is easier, public or private. I hope this is all the info that you needed. Thanks everyone for caring. I haven't been on this site long but I REALLY like how everything is explained. This is the first time I even found/used the forums. It looks really good. No one belittles people for not knowing. They just help them. Chris, kewl site from what I have seen, man!

Re: Two Class C Networks? 15 years 4 months ago #1908

This is my confusion. Why 2 class C networks? Why not 4? If you only have 2 class C networks, you won't be able to subnet them equally to handle the situation you want. Here is why: to subnet a class C you need to "borrow" bits from the last octet to use for the network address. This will cut down the number of hosts you can use, and you will lose a couple of hosts for each bit you borrow. The total number of hosts on a non-subnetted class C network is 254.

Borrowed bits (network side / host side) -> #Subnets / #Hosts
1 bit  - 1       / 1111111 -> 0  / 0   (illegal)
2 bits - 11      / 111111  -> 2  / 62
3 bits - 111     / 11111   -> 6  / 30
4 bits - 1111    / 1111    -> 14 / 14
5 bits - 11111   / 111     -> 30 / 6
6 bits - 111111  / 11      -> 62 / 2
7 bits - 1111111 / 1       -> 0  / 0   (illegal)

The most hosts you can have in any subnet (for a class C) is 62. What you could do is take one of the addresses and NAT it to a private class B address (172.16.x.x). Now, depending on how you are going to connect (T1, cable, DSL, etc.), you could use a VPN to set up 4 networks and have them talk to each other this way.
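As a quick illustration of the arithmetic behind that table (a sketch added here, not part of the original thread), the usable subnet and host counts for a class C follow directly from the number of borrowed bits, using the old rule that the all-zeros and all-ones subnets are excluded:

// Minimal sketch: reproduce the classic "borrowed bits" arithmetic for a
// class C network, with subnet-zero and the all-ones subnet excluded (2^n - 2),
// exactly as the table above assumes.
#include <cstdio>

int main() {
    for (int borrowed = 1; borrowed <= 7; ++borrowed) {
        int hostBits = 8 - borrowed;
        int subnets  = (1 << borrowed) - 2;   // usable subnets
        int hosts    = (1 << hostBits) - 2;   // usable hosts per subnet
        std::printf("%d bits borrowed: %2d subnets x %2d hosts%s\n",
                    borrowed, subnets, hosts,
                    (subnets <= 0 || hosts <= 0) ? "  (illegal)" : "");
    }
}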
OPCFW_CODE
Windows Server: VPN Access

Currently I am running a Windows Server in a local network which is not accessible from the Internet. But I need to expand my business and need to move the server to a more powerful one which will be hosted on the Internet. Hosting locally is no longer possible due to the expansion. The server will be a dedicated one which will act as a Domain Controller. Hyper-V is running and hosting a guest Windows Server which has its own public IP and serves Remote Desktop Services. Several programs can be accessed through Remote Desktop. But that will not be a secure environment, because RDS and the DC will be accessible through the Internet. Will a VPN be a good solution? Where do I need to install the VPN server? Can I install it on the server which is serving RDS? Will the DC join this VPN so I can use it there? The client should be connected as a site-to-site VPN. Can I improve anything?

My 2 cents? Always use a VPN when you access remote resources. I will always set up a firewall on a remotely hosted machine, where all incoming traffic is denied, with the notable exception being traffic bound for the VPN port. The only other ports I would open are for SMTP and IMAP if it is a mail server, and ports 80 and 443 if it is a web server. The ports allowed in the firewall depend on the VPN software, for example:

500/UDP and 4500/UDP for IPsec.
1194 TCP/UDP for OpenVPN (depending on service).
51820/UDP for WireGuard.

The software you choose for your VPN depends on quite a few things, but mainly on how you want to integrate it into the existing setup. That being said: from a purely bandwidth-speed point of view the preferred order is WireGuard, IPsec and finally OpenVPN. If simplicity is the goal, well... stay clear of IPsec! You can do a lot with it, but it is not user-friendly. As for authentication against the VPN server, there are several options, such as client certificates and/or login with username and password. It all depends on which kind of VPN software you want to use. You may want to look into the "hub and spoke" architecture when you are designing your VPN. It is useful when you want communication between individual clients, as they can communicate with each other by using their VPN-assigned IP addresses. It is even possible to do site-to-site routing between two subnets over the VPN connection. Beware though, as with all hosted traffic: you will have to monitor how much traffic is exchanged over the VPN, as hosted solutions usually come with a limit on how much data you are allowed to upload and download combined, and data sent from one client to another counts twice, as it is simultaneously an upload and a download depending on which direction you are looking from.

Exactly this. Further, there is no "one size fits all" when it comes to design or solution. I'd always suggest using a VPN connection to connect remotely to your environment, and not having your VPN solution rely on the environment you're connecting to (e.g. if you depend on your domain controller for authentication on your VPN, then if your domain controller goes down, you can't authenticate on your VPN). Some services can be consolidated while some can't. Never consolidate remote connection services (be it RDS, Citrix or VPN) with your DC.

True. I would also not put my DC into remote hosting; instead I would go the other way around and set up a dedicated link between the VPN and the local DC that can be used for user authentication of anybody logging into the VPN server.
I will also have a backup way to log into the VPN server, so it is accessible even when the VPN link goes down. It may be through a hosted KVM solution, as you are then not dependent on the Ethernet interface being up or the server actually being powered on.
STACK_EXCHANGE
Here we can see "Export Drivers Using PowerShell in Windows 10". You can easily export drivers from Windows using PowerShell. Using the Export-WindowsDriver cmdlet, you can export all third-party drivers from a Windows image to a destination folder. The advantage of exporting the drivers is that you can restore them whenever you need them. After you perform a Windows 10 clean install, you can quickly install all the required drivers from this backup. Additionally, if you deploy the OS using MDT, you can always import the drivers and use them for deployment with Configuration Manager. The Export-WindowsDriver cmdlet exports all third-party drivers from your computer to a destination folder. You can either export drivers from the running OS or export drivers from an offline image. There are several parameters which you can use while running the Export-WindowsDriver cmdlet. Some of the parameters include:

- -Destination – Specify a folder or directory where you want to export the third-party drivers.
- -LogLevel – Specifies the maximum output level shown in the logs.
- -LogPath – You can log the export process by adding the log file name and path.
- -Path – Specifies the full path to the root directory of the offline Windows image that you will service.
- -WindowsDirectory – Enter the relative path to the Windows directory relative to the image path.
- -SystemDrive – Specifies the path to the location of the BootMgr files.
- -ScratchDirectory – Specifies a temporary directory that will be used when extracting files for use during servicing.

PowerShell – How to export drivers from Windows

To export drivers using PowerShell from Windows 10:
- On your Windows 10 machine, right-click Start and click Windows PowerShell (Admin).
- Enter the command Export-WindowsDriver -Online -Destination D:\Drivers. D:\Drivers is the folder where all of your computer's third-party drivers will be exported.

Now go to the destination folder, and you will see the folders containing the drivers. So next time, when you install Windows 10, you don't need to go to the vendor's website and look for drivers. With this backup, you can quickly install all the required drivers. And finally, let me clarify the use of the two commands below.

- Export-WindowsDriver -Online -Destination D:\Drivers – Use this command to export the computer's third-party drivers to the destination folder.
- Export-WindowsDriver -Path C:\Windows_Image -Destination D:\Drivers – Use this command when you want to export drivers from an offline Windows image mounted at C:\Windows_Image to the destination folder.
OPCFW_CODE
package repositories.impl;

import java.util.List;

import javax.inject.Singleton;

import entities.Student;
import io.ebean.Ebean;
import repositories.StudentRepository;

/**
 * Provide JPA operations running inside of a thread pool sized to the
 * connection pool
 */
@Singleton
public class StudentRepositoryImpl implements StudentRepository {

    public Student findById(Integer id) {
        return Student.find.byId(id);
    }

    public List<Student> findByPageAndKeyword(int page, String keyword) {
        return Student.find.query().where()
                .ilike("name", "%" + keyword + "%")
                .orderBy("id desc")
                .setFirstRow(page)
                .setMaxRows(10)
                .findPagedList()
                .getList();
    }

    public Integer getTotalPage(String keyword) throws Exception {
        String sql = "SELECT COUNT(*) as total "
                + "FROM student s "
                + "WHERE s.name LIKE :keyword";
        int total = Ebean.createSqlQuery(sql)
                .setParameter("keyword", "%" + keyword + "%")
                .findOne()
                .getInteger("total");
        return total;
    }

    public void insertStudent(Student student) throws Exception {
        student.save();
    }

    public void updateStudent(Student student) throws Exception {
        student.update();
    }

    public void deleteStudent(Student student) throws Exception {
        student.delete();
    }
}
STACK_EDU
Since the beginning of the Covid pandemic the healthcare sector has been under enormous pressure. The demographic development, the change in the spectrum of diseases, legal regulations, cost pressure and a shortage of specialists, combined with the increasing demands of patients, present healthcare organisations with a number of challenges. Here, digitalisation and the use of modern technologies such as artificial intelligence or machine learning offer numerous opportunities and potentials for increasing efficiency, reducing errors and thus improving patient treatment.

Use of medical data as the basis for optimised patient care

The basis for the use of these technologies and for future-oriented predictive and preventive care is medical data. This can already be found everywhere today. However, most healthcare professionals and the medical devices in use still store this on-premise, resulting in millions of isolated medical data sets. In order to get a fully comprehensive overview of a patient's medical history and, based on this, to create treatment plans in terms of patient-centred therapy and to be able to derive overarching insights from these data sets, organisations need to integrate and synchronise health data from different sources. To support the development of healthcare ecosystems, the major global public cloud providers (Microsoft Azure, Amazon Web Services and Google Cloud Platform) are increasingly offering special SaaS and PaaS services for the healthcare sector that can provide companies with a basis for their own solutions. Through our experience at ZEISS Digital Innovation as an implementation partner of ZEISS Meditec and of customers outside the ZEISS Group, we recognised early on that Microsoft offers a particularly powerful healthcare portfolio and is continuing to expand it strongly. This became clear again at this year's Ignite.

Medical data platforms based on Azure Health Data Services

One possibility for building such a medical data platform as the basis of an ecosystem is the use of Azure Health Data Services. With the help of these services, the storage, access and processing of medical data can be made interoperable and secure. Thousands of medical devices can be connected to each other and the data generated in this way can be used by numerous applications in a scalable and robust manner. As Azure Health Data Services are PaaS solutions, they can be used out of the box and are fully developed, managed and operated by Microsoft. They are highly available with little effort, designed for security and are in compliance with regulatory requirements. This significantly reduces the implementation effort and thus also the costs. ZEISS Meditec also relies on Azure Health Data Services to develop its digital, data-driven ecosystem. The ZEISS Medical Ecosystem, developed together with ZEISS Digital Innovation, connects devices and clinical systems with applications via a central data platform, creating added value at various levels to optimise clinical patient management. The DICOM service within Azure Health Data Services is used here as the central interface for device connection. As DICOM is an open standard for storing and exchanging information in medical image data management, the majority of medical devices that generate image data communicate using the DICOM protocol. Through an extensible connectivity solution based on Azure IoT Edge, these devices can connect directly to the data platform in Azure using the DICOM standard.
This DICOM-based connectivity allows a wide range of devices that have been in use with customers for years to be integrated into the ecosystem. This increases acceptance and ensures that more data can flow into the cloud and be processed further to enable clinical use cases and develop new procedures.

Azure API for FHIR® serves as the central data hub of the platform. All data of the ecosystem are stored there in a structured way and linked with each other in order to make them centrally findable and available to the applications. HL7® FHIR® (Fast Healthcare Interoperability Resources) offers a standardised and comprehensive data model for healthcare data. Not only can it be used to implement simple and robust interfaces to one's own applications, but it also ensures interoperability with third-party systems such as EMR systems (Electronic Medical Record), hospital information systems or the electronic patient record. The data from the medical devices, historical measurement data from local PACS solutions and information from other clinical systems are automatically processed, structured and aggregated centrally in Azure API for FHIR® after upload. This is a key factor in collecting more valuable data for clinical use cases and providing customers with a seamlessly integrated ecosystem.

Successful collaboration between ZEISS Digital Innovation and Microsoft

As early adopters of Azure Health Data Services, our development teams at ZEISS Digital Innovation work closely with the Azure Health Data Services product group at Microsoft headquarters in Redmond, USA, helping to shape the services for the benefit of our customers. In regular co-creation sessions between the ZEISS Digital Innovation and Microsoft teams, the solution design for features currently being implemented in Azure Health Data Services is discussed. In this way, we can ensure that even the most complex use cases currently known are taken into account.

"We are working very closely with ZEISS Digital Innovation to shape Azure's next generation health services alongside their customer needs. Their strong background in the development of digital medical products for their customers is a core asset in our collaboration and enables the development of innovative solutions for the healthcare sector."
Steven Borg (Director, Medical Imaging at Microsoft)

This post was written by Elisa Kunze, who has been working at ZEISS Digital Innovation since 2013. During her various sales and marketing activities she has supported many different projects, teams and companies across various sectors. Today she looks after her clients in the health sector as a key account manager and supports them in implementing their project vision.
Python, like any other scripting language, allows you to define variables and functions. These are very basic entities when it comes to programming. However, it is sometimes useful to keep variables and functions that are related to one another close together. This is the main idea behind object-oriented programming, and it is present in compiled languages such as C++ and Fortran as well as in scripting languages like Java and Python. In this tutorial, you can find a first brief introduction to this topic, focusing on the concept of a class.

1. The Python class

A class is a complex variable type which contains specific methods (or functions) and attributes (or properties). An instance of such a complex variable is called an object, and different objects can have different values for their attributes (and even methods).

To create a class in Python, the class keyword is used, followed by the name you want to assign your class. In our case this is the WordleAssistant class. This WordleAssistant contains the attributes relevant to our puzzle solver. For example, if we want to make a generic solver, two useful attributes would be the Wordle word length (WordleSize) and a dictionary of possible words (FullWordset). Unlike Fortran or C++, attributes are not defined in the class definition, but can be dynamically created for a class object. This feature (or design flaw) gives rise to some dangerous practices such as the runtime (accidental) addition of attributes to an object. As good practice, one should refrain from this and create all attributes by initializing them during the initialization of the class instance. This is done using the __init__() method of the class:

class WordleAssistant():
    def __init__(self, size: int = 5, dictionary: str = None):
        self.WordleSize = size
        if dictionary is None:
            dictionary = "Mydict.txt"
        self.FullWordset = self.readDictionary(dictionary)

Here the WordleSize attribute is defined by setting it to the size parameter of the __init__ method, while the FullWordset attribute is defined by assigning it the result of the readDictionary method of the WordleAssistant class. As is common (and good) practice in OO languages, we use the self variable to indicate the instance of the class, binding attributes and methods to the instance. You may also have noted that Python uses a dot notation to indicate attributes/methods of a class, similar to C++ (while Fortran uses the % symbol to the same effect).

!! NOTE: There also exist "class attributes", which are defined the way one would define instance attributes in Fortran or C++. However, in Python these attributes are shared by all instances of the class; as such, changing them in one object will change them in all objects, creating a mess.

In the previous section, we already defined a first method, the initialization method. As a method is a function, it is constructed like any other function in Python using the def keyword, with the body indented. The method itself is indented one level with respect to the class level. As for a usual function, one can indicate the expected type and default value for each function parameter, and if a result is returned its type can be indicated as well, as can be seen in the example below for the readDictionary method.

class WordleAssistant():
    def __init__(self, size: int = 5, dictionary: str = None):
        ...

    def readDictionary(self, wordlist: str = None) -> list:
        ...
        return wordlist

Although private attributes and methods don't technically exist in Python, it is a convention that attributes and methods prefixed with a single underscore are to be treated as non-public parts of the API. In addition, using two or more leading underscores gives rise to name mangling, which gives a practical behaviour akin to making attributes and methods private. (Note that "dunder" names such as __init__, which carry double underscores both leading and trailing, are reserved special methods and are not mangled.) We will come back to this when discussing inheritance and child classes.

2. The Python Object

Once the class is implemented, it can be used in a script by creating instances of the class. These instances are called objects.

WA = WordleAssistant()

The above command creates an object WA which is of the class WordleAssistant. The object is initialized through a call to the __init__ method, which is performed by the assignment above. If defaults are provided for all parameters of the __init__ method, then no arguments need to be passed in the WordleAssistant() call. Otherwise the creation of an instance could look like this:

wordleSize = 5
WA = WordleAssistant(size=wordleSize, dictionary='MyWords.txt')

Access to the attributes and methods of the WA object is gained using the dot notation:

wordsize = WA.WordleSize
wordlist = WA.FullWordset
Top10Guess = WA.getTop(top = 10)

Within the context of data encapsulation, one should never access attributes directly but use get and set methods instead.
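As a minimal sketch of this convention (this get/set pair is not part of the original WordleAssistant code, and the validation rule is only illustrative), the attribute can be stored under a single-underscore name and exposed through a Python property:

class WordleAssistant():
    def __init__(self, size: int = 5):
        self._WordleSize = size          # underscore prefix: treat as non-public

    @property
    def WordleSize(self) -> int:
        """Getter: read access to the word length."""
        return self._WordleSize

    @WordleSize.setter
    def WordleSize(self, size: int):
        """Setter: write access with a simple sanity check."""
        if size < 1:
            raise ValueError("WordleSize must be a positive integer.")
        self._WordleSize = size

WA = WordleAssistant()
WA.WordleSize = 6       # goes through the setter
print(WA.WordleSize)    # goes through the getter, prints 6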
Question: How to add support for new macros

I am using this library in the following way:

import { parse } from "@unified-latex/unified-latex-util-parse";
import { convertToHtml } from "@unified-latex/unified-latex-to-hast";

const ast = parse(content ?? "");
const html = convertToHtml(ast);

I want to add custom handling for the following macros:

<<Some text>>
% this should be generated like &laquo;Some text&raquo;
% It looks like this is a common feature and should be added to this library.

\includegraphics{url}
% this should be generated like <img src="PREFIX/url" />
% It looks like it's a custom implementation, at least for html generation.

How can I add this support? It seems I should add a macro to parse and an HTML generator to convertToHtml, but the current interface doesn't allow this.

Answer: Yes, it seems that convertToHtml doesn't allow custom extensions right now. Here's a possible workaround:

Using htmlLike from unified-latex-util-html-like you can create macros that will be converted to HTML tags. E.g., if you insert the node htmlLike({tag: "div", attributes: {foo: "bar"}}), it will create a valid Ast.Node that, when converted to HTML, becomes <div foo="bar"></div>. Parse your code, then use the replaceNode function from unified-latex-util-replace to replace all includegraphics macros with the appropriate tags, and then continue converting as normal.

If you need to add your own custom macro parsing, you can pass a list of macros when you create your own unified parser:

import { unified } from "unified";
import { unifiedLatexFromString } from "./plugin-from-string";

const options = { macros: { includegraphics: { signature: "m", processContent: myCustomFunction } } };
const parser = unified().use([unifiedLatexFromString, options]);

It seems that includegraphics not being processed is a missing feature. A PR to add that conversion to the library would be welcome :-). It would also be nice to add a hook for the HTML processor itself so custom processing could be added just like custom macros can be specified.

Reply: I tried this approach, and it is working as expected:

const parser = unified().use(unifiedLatexFromString, {
    macros: {
        def: { signature: "m m" },
    },
});

type Context = {
    imageBaseUrl?: string;
};

const macros: Record<string, (node: Macro, context: Context) => any> = {
    "def": () => null,
    "includegraphics": (node: Macro, context: Context) => {
        if (!node.args) {
            return null;
        }
        const args = pgfkeysArgToObject(node.args[1]);
        let style = '';
        if (args["width"] && args["width"].length) {
            style += `width:${printRaw(args["width"][0])};`;
        }
        if (args["height"] && args["height"].length) {
            style += `height:${printRaw(args["height"][0])};`;
        }
        const imageName = printRaw(node.args[3].content);
        const imageUrl = (context.imageBaseUrl ?? "") + imageName;
        return htmlLike({ tag: "img", attributes: { src: imageUrl, style: style } });
    },
};

// External values:
// imageBaseUrl
// content
const ast = parser.parse(content ?? "");
const context = {
    imageBaseUrl: imageBaseUrl,
};
replaceNode(ast, (node) => {
    if (node.type !== "macro") {
        return undefined;
    }
    const macro = macros[node.content];
    if (!macro) {
        return undefined;
    }
    return macro(node, context);
});
const html = convertToHtml(ast);
#include "matrix3.h" #include "vector3.h" #include <initializer_list> #include <iomanip> #include <iostream> #include <sstream> #include <stdexcept> #include <string> namespace ekumen { namespace math { namespace { constexpr int kMatrix3ElementSize = 9; constexpr int kMatrix3RowSize = 3; // Returns a stringstream with the format: '[first, second, third]'. template <class T> std::ostringstream formatStr(const T& first, const T& second, const T& third) { std::ostringstream oss; oss << std::setprecision(9) << "[" << first << ", " << second << ", " << third << "]"; return oss; } // Formats a string from a Vector3 object with the format: '[x, y, z]'. std::string formatRow(const Vector3& obj) { return formatStr<double>(obj.x(), obj.y(), obj.z()).str(); } } // namespace const Matrix3 Matrix3::kIdentity = Matrix3({1, 0, 0}, {0, 1, 0}, {0, 0, 1}); const Matrix3 Matrix3::kZero = Matrix3({0, 0, 0}, {0, 0, 0}, {0, 0, 0}); const Matrix3 Matrix3::kOnes = Matrix3({1, 1, 1}, {1, 1, 1}, {1, 1, 1}); Matrix3::Matrix3() : Matrix3(Vector3::kZero, Vector3::kZero, Vector3::kZero) {} Matrix3::Matrix3(const Vector3& row0, const Vector3& row1, const Vector3& row2) : rows_{row0, row1, row2} {} Matrix3::Matrix3(const Matrix3& obj) : Matrix3(obj.row(0), obj.row(1), obj.row(2)) {} Matrix3::Matrix3(Matrix3&& obj) : rows_(std::move(obj.rows_)) {} Matrix3::Matrix3(std::initializer_list<double> matrix) { if (matrix.size() != kMatrix3ElementSize) { throw std::invalid_argument("Invalid matrix size."); } for (auto i = 0; i < kMatrix3ElementSize; i += 3) { Vector3 row(*(matrix.begin() + i), *(matrix.begin() + i + 1), *(matrix.begin() + i + 2)); rows_.push_back(row); } } Matrix3& Matrix3::operator=(const Matrix3& obj) { rows_ = obj.rows_; return *this; } Matrix3& Matrix3::operator=(Matrix3&& obj) { if (this == &obj) { return *this; } rows_ = std::move(obj.rows_); return *this; } Matrix3 Matrix3::operator+(const Matrix3& obj) const { return Matrix3(row(0) + obj.row(0), row(1) + obj.row(1), row(2) + obj.row(2)); } Matrix3 Matrix3::operator-(const Matrix3& obj) const { return Matrix3(row(0) - obj.row(0), row(1) - obj.row(1), row(2) - obj.row(2)); } Matrix3 Matrix3::operator*(const Matrix3& obj) const { return Matrix3(row(0) * obj.row(0), row(1) * obj.row(1), row(2) * obj.row(2)); } Matrix3 Matrix3::operator*(const double& factor) const { return Matrix3(row(0) * factor, row(1) * factor, row(2) * factor); } Matrix3 operator*(const double& factor, const Matrix3& obj) { return obj * factor; } Matrix3 Matrix3::operator/(const Matrix3& obj) const { return Matrix3(row(0) / obj.row(0), row(1) / obj.row(1), row(2) / obj.row(2)); } bool Matrix3::operator==(const Matrix3& rhs) const { return (row(0) == rhs.row(0) && row(1) == rhs.row(1) && row(2) == rhs.row(2)); } const Vector3& Matrix3::operator[](int index) const { assertValidAccessIndex(index); return rows_[index]; } Vector3& Matrix3::operator[](int index) { assertValidAccessIndex(index); return rows_[index]; } std::ostream& operator<<(std::ostream& os, const Matrix3& obj) { os << formatStr<std::string>(formatRow(obj.row(0)), formatRow(obj.row(1)), formatRow(obj.row(2))) .str(); return os; } const Vector3& Matrix3::row(int index) const { assertValidAccessIndex(index); return rows_[index]; } Vector3 Matrix3::col(int index) const { return Vector3(rows_[0][index], rows_[1][index], rows_[2][index]); } double Matrix3::det() const { auto det = 0.; for (auto i = 0; i < kMatrix3RowSize; ++i) { det += rows_[i % kMatrix3RowSize].x() * rows_[(i + 1) % kMatrix3RowSize].y() * rows_[(i + 2) % 
kMatrix3RowSize].z(); det -= rows_[i % kMatrix3RowSize].x() * rows_[(i + 2) % kMatrix3RowSize].y() * rows_[(i + 1) % kMatrix3RowSize].z(); } return det; } Matrix3 Matrix3::product(const Matrix3& obj) const { Matrix3 objTranspose(obj.col(0), obj.col(1), obj.col(2)); Matrix3 res; for (auto i = 0; i < kMatrix3RowSize; ++i) { for (auto j = 0; j < kMatrix3RowSize; ++j) { res[i][j] = row(i).dot(objTranspose.row(j)); } } return res; } Vector3 Matrix3::product(const Vector3& vector) const { Vector3 res; for (auto i = 0; i < kMatrix3RowSize; ++i) { res[i] = row(i).dot(vector); } return res; } Matrix3 Matrix3::inverse() const { double factor = 1 / det(); Vector3 row1(rows_[1][1] * rows_[2][2] - rows_[2][1] * rows_[1][2], rows_[0][2] * rows_[2][1] - rows_[2][2] * rows_[0][1], rows_[0][1] * rows_[1][2] - rows_[1][1] * rows_[0][2]); Vector3 row2(rows_[1][2] * rows_[2][0] - rows_[2][2] * rows_[1][0], rows_[0][0] * rows_[2][2] - rows_[2][0] * rows_[0][2], rows_[0][2] * rows_[1][0] - rows_[1][2] * rows_[0][0]); Vector3 row3(rows_[1][0] * rows_[2][1] - rows_[2][0] * rows_[1][1], rows_[0][1] * rows_[2][0] - rows_[2][1] * rows_[0][0], rows_[0][0] * rows_[1][1] - rows_[1][0] * rows_[0][1]); return Matrix3(row1, row2, row3) * factor; } void Matrix3::assertValidAccessIndex(int index) const { if (index < 0 || index > 2) { throw std::out_of_range("Index to access a row must be in range [0;2]."); } } } // namespace math } // namespace ekumen