Specify a sequence of scenarios to execute in a configuration file.

**Is your feature request related to a problem? Please describe.**
No, the request is not related to a problem.

**Describe the solution you'd like**
We are using Gauge for automated system testing of embedded systems, and we would like to run a specific scenario (to program the Device Under Test) before all of the other scenarios. We currently execute Gauge twice: once to run the programming scenario, then again to run the test suite.

We are currently using tags to group scenarios into test suites, and we use tag expressions to run the suites. However, as the suites get large and we add more tests, we thought a configuration file would make it easier to visualize and control which scenarios run in which test suite.

We realize that Gauge appears to execute spec-by-spec. We aren't looking to run scenario 1 from spec A, then scenario 2 from spec B, then scenario 3 from spec A. Maybe the configuration file is a YAML file where you specify a sequence of specs, and within each spec you specify which scenarios to execute. For example:

    specs:
      - Spec Name A
        scenarios:
          - Scenario Name 1
      - Spec Name B
        scenarios:
          - Scenario Name 2

Then to execute Gauge, you might run:

    gauge run specs --sequence specs.yaml

**Describe alternatives you've considered**
We've considered using the BeforeSuite hook for the programming, but still felt that overall it would be nice to specify the complete set of scenarios to execute in a test suite. We've also considered using a script to feed the list of scenarios to Gauge via the --scenario option, but had concerns about the input buffer size on the command line.

**Additional context**
Nothing at this time.

---

So far, we've been recommending that Gauge specs/scenarios remain independent; this sort of sequencing leads to temporal coupling, and it increases the complexity of executing a single scenario or rerunning all/failed tests.
But this has been requested too many times, so I am going to consider this. (@zabil - thoughts?)

My take is that if this were to be implemented, it should not be just a static list (it could be so for a first cut); rather, it should be something similar to how build tools work (tasks depend on tasks). The file should include the ability to specify a spec or scenario, a folder, or even a tag expression. There needs to be a check for cyclic dependencies.

The parallel run needs to be thought over. Does gauge allow users to specify the streams? Or does gauge generate parallel streams taking dependencies into consideration? Or does gauge just not allow a parallel run when it does not control the orchestration?

Alternatively, could we attempt to make gauge honour piping (like unix tools do), so one could do something like `cat workflow.txt | gauge run`?

Thoughts? /cc @getgauge/core

> if this were to be implemented, it should not be just a static list (it could be so for a first cut), rather it should be something similar to how build tools work (tasks depend on tasks).

Preferably a JSON file. Maybe worth expanding the manifest.json to add something like execution groups, roughly:

    groups: [
      {
        name: "sequence1",
        "specs": [
          { name: "Specification 1", scenarios: [] } ...

    gauge run --group "sequence1"

> the parallel run needs to be thought over, Does gauge allow users to specify the streams? or does gauge generate parallel streams taking dependencies into consideration? or does gauge just not allow parallel run when it does not control the orchestration?

I assume running in parallel should be disabled, as it depends on the order?

> Alternatively, could we attempt to make gauge honour piping (like unix tools do) so one could do something like cat workflow.txt | gauge run

If this can also be done on windows/powershell.

Did a quick check, and as long as gauge honours stdin, users should be able to pipe in/redirect a list of specs/scenarios into gauge.
The challenge, though, is that gauge does not honour the order of scenarios, so this will still be viable only to control the order of execution of specs, not scenarios within a spec. @raweaver - would this approach work for you? I'll spend some more time once I know that this can be useful.

@sriv @zabil Thank you for the responses and suggestions. We discussed the options, and we feel that a sequence file passed to gauge as a command-line argument would be our preferred solution. Also, controlling the order of specs (not scenarios) would be sufficient. It might be beneficial to structure the sequence file such that you could add support for specifying the scenario sequence in the future if desired. Thank you!

@sriv @zabil - Any update on this? Would love to see this new feature implemented; this would be really useful for us.

Any timeline on when we expect to see this feature released? Thank you so much for adding this new feature.

Hi, @zabil @sriv. Do you have any updates about this new feature? It would be very beneficial for our team.
GITHUB_ARCHIVE
# -*- coding: utf-8 -*-
import threading

from vocabulary.voc import Vocabulary


class Fs():
    """File helper: cleans, saves and loads editor content, and runs
    vocabulary checks in a background thread."""

    # Replacements applied when saving editor content to disk.
    content_replaces = ["\u2029", " ", "\xa0"]
    cont_repls = {"\u2029": "\r\n", " ": " ", "\xa0": " "}

    # Replacements applied when loading raw file content back.
    raw_content_replaces = ["\r\n", " "]
    raw_cont_repls = {"\r\n": "\u2029", " ": " "}

    def return_completed(self, kind):
        self.completedProcess.emit(kind)

    def return_contents(self, contents):
        self.openedFile.emit(contents)

    def return_vocab(self, contents):
        self.checkedVocab.emit(contents)

    def check_vocabulary(self, conts):
        v_thread = threading.Thread(target=self._check_vocabulary, args=[conts])
        v_thread.daemon = True
        v_thread.start()

    def _check_vocabulary(self, conts):
        voc = Vocabulary(conts)
        marked = voc.start()
        self.return_vocab(marked)

    def _clean_filename(self, filename):
        clean = filename.replace("file:///", "")
        return clean

    def _clean_content(self, contents):
        for repl in self.content_replaces:
            if repl in contents:
                contents = contents.replace(repl, self.cont_repls[repl])
        return contents

    def _clean_raw_content(self, raw):
        for repl in self.raw_content_replaces:
            if repl in raw:
                raw = raw.replace(repl, self.raw_cont_repls[repl])
        return raw

    def _save_file(self, file_name, contents):
        filename = self._clean_filename(file_name)
        cleaned = self._clean_content(contents)
        self.check_vocabulary(cleaned)
        data = bytes(cleaned, 'utf-8')
        with open(filename, 'wb') as f:
            f.write(data)
        self.return_completed("save")

    def _read_file(self, file):
        filename = self._clean_filename(file)
        with open(filename, 'rb') as f:
            data = f.read()
        cleaned = self._clean_raw_content(str(data, 'utf-8'))
        self.return_contents(cleaned)
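The save/load round-trip performed by _clean_content() and _clean_raw_content() can be sketched standalone; the function names below are simplified stand-ins for the class methods, and the replacement maps are trimmed to the two interesting entries:

```python
# Minimal sketch of the round-trip: the editor's U+2029 paragraph
# separators are written to disk as "\r\n" and restored on load;
# non-breaking spaces are normalised to plain spaces on save.
cont_repls = {"\u2029": "\r\n", "\xa0": " "}

def clean_content(contents):
    # Applied before saving: editor characters -> on-disk characters.
    for old, new in cont_repls.items():
        contents = contents.replace(old, new)
    return contents

def clean_raw_content(raw):
    # Applied after loading: "\r\n" back to the editor's U+2029.
    return raw.replace("\r\n", "\u2029")

edited = "first paragraph\u2029second\xa0paragraph"
on_disk = clean_content(edited)
restored = clean_raw_content(on_disk)
print(on_disk)    # "first paragraph\r\nsecond paragraph" (with a plain space)
print(restored == "first paragraph\u2029second paragraph")  # True
```

Note the round-trip is lossy by design: the \xa0 replaced on save is not restored on load, which is why raw_content_replaces only maps "\r\n" back.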
STACK_EDU
I was pondering alternate mechanisms and terminology for the cheese and puck-type puzzles.

Cheese-type: those where an arbitrary diameter can be flipped without rotating the inner core. We can use the terminology MxN, where M is the number of wedges and N is the number of layers. Known examples:
- Rubik's Cheese (6x1)
- Rubik's UFO (6x2)
- Masterball (6x4)
- ??? (6x6) (I've seen pictures, but don't know the name)

Puck-type: those where the inner core of the puzzle must be rotated to allow a diameter to flip. MxN terminology can also be used. Known examples:
- Saturn (6x1)
- Puck (12x2)
- Brain Ball - flips two opposing and unequal segments rather than half the puzzle
- Square-1 could be seen as a bandaged, Puck-type 12x2

The puck mechanism allows for MxN where:
- M must be even, or there are no diameters along which to flip the puzzle
- N can be any number; the puzzle must be symmetrical between the top and bottom, but this can either be odd (there's an equator layer) or even (there's no equator)

The cheese mechanism seems to demand MxN where:
- M must be 2(2P+1) for any P; that is, twice an odd number. Here's my reasoning: the internal mechanism must be in the same state before and after a flip. The cheese mechanism uses pieces of alternating type. If there were an even number of pieces on each half of the puzzle, then when performing a flip you'd end up with pieces of the same type adjacent. Therefore, each half of the puzzle must have an odd number of pieces.
- N can be any number, for similar reasons to the puck. This applies to the UFO mechanism (Patent US5199711) as well.

It's therefore easy to sketch a mechanism for a 6xN (2*(2*1+1)), 10xN (2*(2*2+1)) or even 14xN (2*(2*3+1)) cheese-type puzzle. However, making 8xN or 12xN is not obvious. (Bandaging a 12xN would give you a true Square-1 II, which is what made me think of all of this.)

Observation: bandaging a 2x2x2 cube gives you a 4x1 cheese, which violates the above rule.
(Or http://www.puzzle-shop.de/color-tonne.html, which is actually 4x2 and lets you assume non-cylindrical shapes.)

Are there other mechanisms that allow for different values of M? The 2x2x2 cube's mechanism is internally a 3x3x3; can we follow this lead and co-opt other mechanisms?
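The twice-an-odd-number condition can be checked mechanically; here's a quick sketch (illustrative only, just restating the rule above in code):

```python
def cheese_wedges_ok(m):
    # A cheese-type mechanism needs M = 2*(2P+1): an even wedge count
    # whose half is odd, so the alternating piece types still alternate
    # after a flip.
    return m % 2 == 0 and (m // 2) % 2 == 1

# Wedge counts that admit a cheese mechanism under the rule:
print([m for m in range(4, 16) if cheese_wedges_ok(m)])               # [6, 10, 14]
# Even counts the rule excludes (including the problematic 8 and 12):
print([m for m in range(4, 16) if m % 2 == 0 and not cheese_wedges_ok(m)])  # [4, 8, 12]
```

Note that 4 is excluded too, consistent with the observation that the bandaged-2x2x2 "4x1 cheese" violates the rule.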
OPCFW_CODE
Server side: ASP.NET MVC 3 on Windows 2008 servers using IIS 7.5.
Map services: Operational layers (dynamic and feature layers) are on our ArcGIS Servers; the basemap is the World Street Map layer from ArcGIS Online.

We have built some map applications to be consumed in pages built by other development teams. When our map applications load the first time for a user (or after clearing the browser cache), one or more layers fails to load. Our in-house map services depend on database connections that are severed by the firewall when they are maintained for too long. We have timed jobs to reset those map services, but when they're cut off it can take a couple of minutes for them to be available to users. That's not the issue I'm talking about here.

The classes we added were modified from code found at http://forums.esri.com/Thread.asp?c=158&f=2396&t=291261, but the other developers want the map to load the first time, every time. I've been assuming this is a network issue - the code doesn't change, the map services don't change and the browser doesn't change - the only thing new is a fresh call. Also, we changed to local installs of the JSAPI because we routinely get Dojo errors from the Esri CDN version not being found. Still, we need to get around this one.

A programmer on one of the other teams suggested the resources are not loading fast enough. Since the onLoad event is the only point I know of to determine when the API has loaded the layer, I've been checking it in debug. I even increased the number of retries and the timeout period. I found that if the layer didn't load the first time, it never will. But when I refresh the page, it loads the layers from the cache and it works. I clear the cache again, retry and it fails, then refresh and it works. Has anyone else had a problem like this with their layers not loading? The only response I got was to ask for code to reproduce the problem.
Since the code is the same as any code to load a FeatureLayer, the code won't make a difference. I'm pretty sure it's our environment, over which I have no control, so it's unlikely to be reproducible. I was looking more for advice on how to get around the policies of a network and security setup that is unfriendly to map services and applications.

I didn't actually resolve the cause. Our application just runs through the list of layers to load and tries creating each one in turn. After all have been created, if any haven't loaded, we go through the list again, up to five times. If any don't load then, we alert the user to refresh the page.

It looks like you might be experiencing the "sleepy services" issue, and it has nothing to do with the JSAPI if you are using ArcGIS Server 9.3.x or 10.0. This is more of an IIS behavior: the default setting of an application pool on IIS is to recycle after the applications running under that pool have been idle for 20 minutes. When a user then calls the services for the first time, it takes some time for the services to "wake up", and sometimes it requires a page refresh or a few before the services load. You can try to fix this by setting the Idle Time-out for the application pool to a higher number than 20 minutes (if there are times when the applications are not used for 4 hours, for example, you can set the timeout to 240+ minutes). Also look at any other application pool recycling settings that might be causing the pool to shut down. To make sure you modify the appropriate application pool, find out under which application pool the ArcGIS\rest and ArcGIS\services applications are running. Please consult your IIS administrator before you make the changes, as there are situations when they are not applicable (see the articles above, and research further on your own).
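The idle-timeout change described above can also be scripted with IIS's appcmd tool; a sketch only - the pool name here is a placeholder, so first find out which pool hosts the ArcGIS\rest and ArcGIS\services applications:

```shell
rem Raise the idle timeout to 4 hours for the pool serving ArcGIS
rem ("YourArcGISPool" is a hypothetical name - substitute your own).
%windir%\system32\inetsrv\appcmd.exe set apppool "YourArcGISPool" /processModel.idleTimeout:04:00:00
```

The same setting is reachable interactively via IIS Manager under the application pool's Advanced Settings (Process Model > Idle Time-out).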
OPCFW_CODE
By Eric Rinker

The goal of this article is to impart a basic understanding of how to make changes to Sendmail on a machine running the Solaris 9 Operating System. This article is written for engineers with a reasonably good working knowledge of the standard principles of the UNIX operating system. To utilize this article, you need to know how to edit files and run programs, and you need root access.

Two categories of application deal with email: Mail User Agents (MUAs) and Mail Transfer Agents (MTAs). Mail User Agents are applications that facilitate the creation, viewing, and disposal of email messages. Examples include mail or elm in a UNIX environment, and Eudora or Outlook in the Windows world. Netscape and Explorer are Internet browsers that can also double as MUAs. Mail Transfer Agents transport email from one machine to another; typically, each machine uses only one MTA. Sendmail fills this role, while other MTAs out there include Exim, Postfix, and Qmail.

Sendmail is one of the oldest and most widely used MTAs in the world. It is the default MTA for most UNIX distributions, including HP's HP-UX, IBM's AIX, and Sun Microsystems' Solaris OS. Sendmail's long life has made it complicated to configure and maintain, but it makes up for its drawbacks with its ability to do just about anything. First appearing over 30 years ago, Sendmail has evolved into a robust, feature-rich method for transporting electronic mail from one location to another. Originally designed at a time when hard drives the size of washing machines supplied 64 kilobytes of usable storage, Sendmail used every trick in the book to conserve space. To keep everything short and to the point, the Sendmail configuration file used such cryptic parameters as "Fw" for "Domains we receive mail for" and "DH" for "Who gets all local email." While there is a method to the madness, it is not readily apparent to the novice user.
For backwards compatibility, these cryptic parameters are still present in the configuration file of today's Sendmail versions. Over the years, as features were added to Sendmail, the configuration process became more and more complicated. To make it more administrator-friendly, Sendmail uses an m4-based compilation and configuration model. This layer between the administrator and the build and configuration process makes Sendmail easier to set up and maintain without requiring upgrades of older programs to handle new interaction methods.

This document couldn't possibly cover everything there is to know about Sendmail without being hundreds of pages long, and a bore to read. Instead, we focus on three commonly seen configurations: Mail Server, Incoming Relay, and Outgoing Only.

When modifying the behavior of Sendmail, the sendmail.cf file is not directly altered. Instead, a .mc file is altered and run through the m4 macro processor. Some example .mc files are provided:

    main.mc        - the default setup
    submit.mc      - configures Sendmail as an initial mail submission program
    subsidiary.mc  - relays all mail on this system through another machine before the mail goes to its destination

For our examples, we will copy the main.mc file to new.mc and make our modifications like so:

    cd /usr/lib/mail/cf
    vi new.mc
    make new.cf
    cp new.cf /etc/mail/sendmail.cf
    /etc/init.d/sendmail restart

To begin with, common elements are shared in all three configurations. A minimal file contains the following:

    OSTYPE(`solaris8')dnl
    DOMAIN(`generic')dnl
    MAILER(`local')dnl
    MAILER(`smtp')dnl

The OSTYPE macro defines what system this file is on. The DOMAIN macro is used to pull another file into the resulting configuration. The MAILER macros define which of the many different delivery methods this configuration file will use. In this example, we are on a Solaris 8 or higher system, we are including the "generic" domain file, and we want to use both the local delivery system and the SMTP system.

The mail server is your typical server for incoming mail.
It receives mail for user@domain, delivers it to the user's local mailbox, and processes mail in its queue for delivery to the outside world. You only need to make one change: add each domain that is to be considered a local account.

    OSTYPE(`solaris8')dnl
    MAILER(`local')dnl
    MAILER(`smtp')dnl

Incoming Relay is the common configuration for company email servers that sit outside of the company firewall. Instead of storing the email, these relays pass it on to a predefined server inside the firewall that is the company's mail server. This setup is perfect for implementing filtering, since this machine doesn't handle the other duties of your typical mail server.

To configure Incoming Relay, we first need to add the relay server information. In this case, we are going to relay everything to relay.mydomain.com. Next, we have to allow mail to be relayed through this machine. It's best to only relay mail for domains served by the internal servers; sendmail uses the /etc/mail/relay-domains file as a list of domains allowed to send or receive mail through this server.

We are done. This server will now relay mail for any domains in the /etc/mail/relay-domains file, except for local accounts, to relay.mydomain.com:

    OSTYPE(`solaris8')dnl
    DOMAIN(`solaris-antispam')dnl
    define(`SMART_HOST', `relay.mydomain.com')dnl
    FEATURE(`relay_entire_domain')dnl
    MAILER(`local')dnl
    MAILER(`smtp')dnl

For security purposes, it's best not to set up an indiscriminate mail relay. Every machine needs to use an MTA to send email, and some programs require the ability to relay emails through an SMTP server. With these requirements, you can both relay mail for local services and secure your system from becoming an open relay by configuring Sendmail to attach only to the loop-back address. To make Sendmail outgoing only, it needs to not accept mail from any remote hosts. To do this, we force it to use only the local loop-back address.
No other options are required; Sendmail transports mail from the local machine to the outside world by default.

    OSTYPE(`solaris8')dnl
    DOMAIN(`solaris-generic')dnl
    DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
    MAILER(`local')dnl
    MAILER(`smtp')dnl

For more information on options for a relay server, see the sendmail.org tip "Allowing controlled SMTP relaying in Sendmail 8.9 and later." Specifically, look at using the access_db option for a more robust anti-spam filtering relay server.

Now that you know how to make changes, you can decide what kinds of changes you want to make. Your first stop should be the documentation shipped with Sendmail; it contains a good deal of information about Sendmail configuration, the m4 macros, and so on. Once you've exhausted that, you can check out some of the other resources available.

The author would like to thank John Beck of Sun Microsystems for his help in reviewing this article. Unless otherwise licensed, code in all technical manuals herein (including articles, FAQs, samples) is provided under this License.
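As a sketch of the access_db approach mentioned above (the FEATURE line follows standard sendmail m4 conventions; the file path and entries are assumptions for the example):

```m4
dnl Enable the access database in the .mc file:
FEATURE(`access_db')dnl

dnl Example /etc/mail/access entries (rebuild the map after editing
dnl with: makemap hash /etc/mail/access < /etc/mail/access):
dnl   spammer.example.com     REJECT
dnl   192.168.1               RELAY
dnl   friend.example.com      OK
```

The access database lets you accept, reject, or relay mail per host, domain, or network, which is what makes it well suited to the anti-spam relay role described in the article.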
OPCFW_CODE
The five video classification methods (e.g. the lrcn network in the code). See the accompanying blog post for full details: https://medium.com/@harvitronix/five-video-classification-methods-implemented-in-keras-and-tensorflow-99cad29cc0b5

This code requires that you have Keras 2 and TensorFlow 1 or greater installed. Please see the requirements.txt file. To ensure you're up to date, run:

    pip install -r requirements.txt

You must also have ffmpeg installed in order to extract the video files. If ffmpeg isn't in your system path (i.e. which ffmpeg doesn't return its path, or you're on an OS other than *nix), you'll need to update the path to ffmpeg.

First, download the dataset from UCF into the data folder:

    cd data && wget http://crcv.ucf.edu/data/UCF101/UCF101.rar

Then extract it with unrar e UCF101.rar. Next, create folders (still in the data folder) with:

    mkdir train && mkdir test && mkdir sequences && mkdir checkpoints

Now you can run the scripts in the data folder to move the videos to the appropriate place, extract their frames, and make the CSV file the rest of the code references. You need to run these in order.

Before you can run the mlp, you need to extract features from the images with the CNN. This is done by running extract_features.py. On my Dell with a GeForce 960m GPU, this takes about 8 hours. If you want to limit to just the first N classes, you can set that option in the file.

The CNN-only method (method #1 in the blog post) is run from its own script. The rest of the models are run from train.py. There are configuration options you can set in that file to choose which model you want to run. The models are all defined in models.py; reference that file to see which models you are able to run.

Training logs are saved to CSV and also to TensorBoard files. To see progress while training, run tensorboard --logdir=data/logs from the project root folder.

I have not yet implemented a demo where you can pass a video file to a model and get a prediction.
Pull requests are welcome if you'd like to help out!

Khurram Soomro, Amir Roshan Zamir and Mubarak Shah, UCF101: A Dataset of 101 Human Action Classes From Videos in The Wild, CRCV-TR-12-01, November 2012.
OPCFW_CODE
This overview is about the design of Mongoose OS, a firmware development framework for connected products. If you are an IoT firmware developer, Mongoose OS is for you. Here we share our vision and the rationale for the design decisions we made.

The vast majority of these decisions were driven by our work for our customers, when we developed device firmware to bring their IoT products to the market. We noticed that generic pieces take up to 90% of firmware development time, so we refactored them into a reusable set of components. We made it platform-independent - for example, code that toggles a GPIO looks the same on all hardware platforms. The result we called Mongoose OS.

Where does the name Mongoose come from? We are targeting IoT products, where networking is crucial. We use the mature and trusted Mongoose Networking Library as the networking core - that is the origin of the name. The networking library uses the mg_ prefix for all API functions, and similarly Mongoose OS uses the mgos_ prefix.

Our goal is to share our experience in the hope that it'll help other developers save a great deal of time and effort, reusing a solid and reliable basis for their products.

Mongoose OS is a framework for building apps (firmwares) for low-power microcontrollers (uC), and consists of several main components, among them the mos tool, which provides device management and firmware building capabilities. The mos build command builds a firmware (we call it an "app") by taking the mos.yml file in the current directory and invoking a build docker image either remotely or locally (mos build --local).

Mongoose OS is based on the vendor's SDK and extends the capabilities of the native SDK. For example, on the ESP32 uC, Mongoose OS uses the ESP-IDF SDK, therefore it provides all capabilities that ESP-IDF provides, plus extras that come with Mongoose OS.
If user code uses the cross-platform API only, it can be built on all supported hardware platforms with no code changes. If we zoom in on the "Mongoose OS" block, it is fragmented into several components as well. Some of them, like configuration, RPC, timers, the networking API, etc., will be covered further down.

The Mongoose OS core lives at cesanta/mongoose-os on GitHub. The bulk of the functionality, however, is split into libraries. Each library is a separate GitHub repository, collected under the mongoose-os-libs organisation, which serves as a central repository of libraries. When documentation is generated, all libraries are traversed and the "API Reference" part is automatically generated. The docs: tag in the mos.yml file specifies the documentation category and title. For example, for the ADC library located at https://github.com/mongoose-os-libs/adc, that creates an API Reference/Core/ADC documentation page. The content is generated from the README.md and header files.

The boot process is driven by a cross-platform mgos_init.c. In short, the subsystems are initialised in the following order: native SDK init, GPIO, configuration, WiFi, platform-specific init, libraries (they can define their initialisation order), the user app init function, and at the end all registered MGOS_HOOK_INIT_DONE hooks are invoked. The initialisation function has the following prototype:

    enum mgos_init_result mgos_XXX_init(void);

It returns MGOS_INIT_OK on success, or another specific numeric code on failure. If any of those init functions returns an error, the firmware reboots immediately. This is done intentionally, in order to revert back to the previous firmware in case of a failed OTA update.

Mongoose OS implements a Virtual File System layer, VFS. That means it can attach (mount) different storage types into a single file system tree. For example, a device can have an SPI flash storage and an SD card storage. For each storage type, a filesystem driver must be implemented.
For example, it is possible to write a driver that implements a Dropbox or Google Drive storage type, so a device (e.g. an ESP8266 module) can mount a Dropbox folder. Mongoose OS provides a Filesystem RPC service that allows remote filesystem management - for example, you can edit files remotely.

The contents of the filesystem depend on the app and the specific libraries that are used. For example, the mjs library adds api_*.js files to the filesystem. Here is a typical layout:

    conf0.json - default app configuration, must NOT be edited manually
    conf9.json - user-specific overrides, changed by "mos config-set" command
    index.html - many apps define this file, which is served by a web server
    ca.pem     - added by the ca-bundle library, contains ca root certs

Mongoose OS contains the Mongoose Networking Library as one of its core components. The networking library provides network protocol support, such as UDP, MQTT, etc. It constitutes the low level of Mongoose OS; it is non-blocking and event based, uses the mg_ API prefix, and expects the following usage pattern:

- create a struct mg_mgr structure, which is an event manager
- create connections with mg_bind() (or variants) for listening, or mg_connect() (or variants) for outgoing connections
- run an event loop that dispatches events to handlers

Mongoose OS does exactly that. It defines a "system" event manager instance, and runs a main event loop in a single task. That event loop dispatches events by calling event handlers. For example, a button-handler registration function sets up a button press event handler. When a hardware interrupt occurs, its handler queues the event, and the Mongoose OS task calls the user-defined button handler in its context.

For network connections, Mongoose OS defines wrappers for the low-level functions. These wrappers use the "system" event manager and provide reconnection functionality for outgoing connections. For example, the low-level mg_ API for the MQTT protocol allows you to create an MQTT client. If it disconnects for any reason, e.g. a temporary WiFi connectivity loss, the connection closes.
The mgos_ wrapper, however, will set up a reconnection timer with exponential backoff and re-establish the connection automatically. This is a valuable addition to the low-level mg_ API, therefore using the mgos_ API is a good idea. Of course, the low-level mg_ API is also available; you can get the main event manager instance by calling a getter function from the core.

The mgos_ API, as well as the mg_ API, is cross-platform. A firmware written with that API only is portable between supported architectures, as demonstrated by many example apps. However, the native SDK API is not in any way hidden and is fully available. For example, one could start extra FreeRTOS tasks on platforms whose SDK uses FreeRTOS. The price to pay is loss of portability.

Mongoose OS is highly modular - it is possible to include or exclude functionality depending on specific needs. That is implemented by the library mechanism, described later. In order to get a feeling for the resulting footprint, measurements were done on the TI CC3220SF platform for the Mongoose OS 1.18 release, built with different options (RAM figures are measured after Mongoose OS is initialised, i.e. those numbers are what is available for the application code):

    Minimal: an example-no-libs-c app; includes RTOS, TCP/UDP networking core, file system, configuration infrastructure, SNTP
    Minimal + AWS IoT support
    Minimal + Google IoT Core support
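To make the documentation-generation step above concrete, here is a sketch of what a library's mos.yml might contain for the ADC library; the exact field names and tag syntax are assumptions inferred from the description, not verified against the repository:

```yaml
# Hypothetical mos.yml fragment for mongoose-os-libs/adc.
name: adc
description: ADC support
tags:
  - docs:core:ADC   # documentation category "Core", title "ADC"
                    # -> rendered as API Reference/Core/ADC
```

When the documentation build traverses the libraries, a tag shaped like this is what would map the repository's README.md and headers onto the generated "API Reference" page.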
OPCFW_CODE
I have written the following script which works if I pass it a directory with file in the root but does not return anything if I pass it the root directory. Wikipedia has an article on TOCTTOU bugs, along with an example of how symlinks can be used by an attacker. If, indeed, this call is necessary (it should always return True). On most platforms, this is equivalent to calling the function normpath() as follows: normpath(join(os.getcwd(), path)). have a peek here Other benefits of registering an account are subscribing to topics and forums, creating a blog, and having no ads shown anywhere on the site. Search Forums Show Threads Show Posts Tag Search Advanced Search Unanswered Threads Find All Thanked Posts Go to Page... unix and linux commands - unix shell scripting os.path.isdir is python share|improve this question asked Aug 22 '15 at 14:19 Rolex 455 because there is no dirname folder? –njzk2 Aug 22 '15 at 14:21 @njzk2 dirname just And, as I suggested in the comments, filterfiles should look more like this: def filterfiles(f): ext = os.path.splitext(f)[1:] return ext in fileFilter (You missed a return). http://stackoverflow.com/questions/8959187/os-path-isfile-does-not-work-as-expected Hopefully you have an idea on that. Do they affect credit score? This follows symbolic links, so both islink() and isfile() can be true for the same path.isdir(path) Return True if path is an existing directory. I encourage you to read it. Sebastian 184k43342502 add a comment| up vote 2 down vote Not directly related to your question, but here are some general modern Python tips since you are new to Python: os.stat(f)[stat.ST_SIZE] up vote 0 down vote favorite On windows 7, when I run this Python 2.7 code, "NOT file" prints, but the file is there, its not read only, the folder is Thanks! ← Return to blog home Isfile Java How to check whether a partition is mounted by UUID? 
I'm technical referent but I lost the lead for technical decisions Can Newton's laws of motion be proved (mathematically or analytically) or are they just axioms? Os.path.isfile Example Sebastian Jan 22 '12 at 6:37 add a comment| up vote 0 down vote I believe the constant os.chdir() calls here are complicating your program (and might even screw up how iscsi Windows Server hyper-v cluster How to kick users from Windows Server 2012 R2 Understanding the defintion of cluster point in real sequence Word for a non-mainstream belief accepted as fact Just one question, could you recommend a good python editor? more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed Os.path.exists Not Working Not on windows or OSX2Python - OpenCV VideoCapture = False (Windows)0why does python script fail when used with vb and the windows task scheduler?1os.path.isfile() returns false for file on network drive Any idea where my fault is? try: with open('my_settings.dat') as file: pass except IOError as e: print "Unable to open file" #Does not exist OR no read permissions I hope this was helpful. Join them; it only takes a minute: Sign up Why do os.path.isfile return False? Why is this an invalid assignment left hand side? Os.path.isfile Python How do I deal with my current employer not respecting my decision to leave? Python Isfile Vs Exists Join them; it only takes a minute: Sign up Why does python os.path.isfile return false on windows, for only a specific directory? Also, the calls to os.chdir() are not needed. share|improve this answer answered Jan 22 '12 at 5:43 sarnold 77.4k12114163 add a comment| Your Answer draft saved draft discarded Sign up or log in Sign up using Google Sign Why does \@ifnextchar not work within tables (tabular)? 
Use forward slashes in your paths, or use raw strings:

if os.path.exists('D:/testfiles/mysub/GraphiteController.js'):

or

if os.path.exists(r'D:\testfiles\mysub\GraphiteController.js'):

Good catch!

if os.path.exists('D:\testfiles\mysub\GraphiteController.js'):
    print "IS file"
else:
    print "NOT file"
    sys.exit(1)

If I move the file to the d:\myother directory, it prints "IS file". Is there any reason why my posted code is just always telling me that there is no directory? Also, your function filterfiles() should probably return ext in fileFilter, since you have a typo there. – Johnsyweb

Yes I am. Remember that something could happen to a file in the time between checking that it exists and actually performing read/write operations against it. I also tested it with os.path.isfile and os.path.islink, and it always tells me that it is something different.

This lack of specificity could easily introduce bugs and data loss if not expected:

# Returns True for directories, not just files
>>> print os.path.exists('/this/is/a/dir')
True

If you want to zero in on regular files only, use os.path.isfile() instead.
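The root cause of the "NOT file" mystery above is worth spelling out: in a normal string literal, \t and \n are escape sequences, so the path the programmer typed is not the path Python sees. A quick sketch (the path is illustrative, not from the original machine):

```python
import os

broken = 'C:\temp\notes.txt'    # \t and \n are parsed as TAB and NEWLINE
fixed = r'C:\temp\notes.txt'    # raw string: the backslashes survive

print(repr(broken))             # shows the embedded control characters
print(repr(fixed))
print(os.path.exists(broken))   # the corrupted path is looked up, not the typed one
```

Forward slashes avoid the problem entirely, since Windows APIs accept them in place of backslashes.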
OPCFW_CODE
12.4 Building the Sample Implementation It is not necessary to compile the LSB-si for testing; the official packages released by the LSB project should be used for this purpose. However, it may be instructive to see how the LSB-si is constructed to get to a clean implementation of the LSB Written Specification. The remainder of this section describes the process. The LSB-si is built from clean upstream package sources. The build instructions are captured in a set of XML files that serve as input to a tool called nALFS; the concept is derived from the Linux From Scratch project. The build is a multistage process (Figure 12.1), so that the final result has been built by the LSB-si itself, and the majority of dependencies on the build environment of the host machine are eliminated. Ideally, all the dependencies would be eliminated, but in practice a few minor things may leak through. In particular, the initial stage of the LSB-si build now does not do the GCC fixincludes step, as this pulled in some details of the host system in the "fixed" header files that were then used throughout the build process. Figure 12.1 LSB-si Build The first phase, or bootstrap, of the LSB-si build is to produce a minimal toolchain of static binaries as shown in Figure 12.1. Packages such as gcc, binutils, kernel-headers, and coreutils are built. The second phase of the build is to use the bootstrap as a chroot environment to build a more complete toolchain as shown in Figure 12.1. As binaries are rebuilt, the new ones are installed on top of the old static copies built in the bootstrap phase, so that by the end of the second phase we have a complete development environment, using all dynamic libraries. This environment has the characteristic that it is entirely isolated from the details of the build host environment, since none of the tools from the build host have been used to compile the final binaries and libraries.
To reduce the rebuild time, the bootstrap phase is copied to another location before starting, and the copy is used as phase 2. During LSB-si development, there tend to be few changes to the bootstrap, but many to the later phases. For a released LSB-si source tree, this really doesn't matter, except that it increases the space requirements of the build area a bit. Thus the bootstrap copy used as the second phase is not essential to the build strategy, but rather a convenience for LSB-si developers. This intermediate phase 2 of the build can be used as an LSB Development Environment; in effect, this is what it does when building the final phase. The final phase does not have a compilation environment, as that is not part of the LSB Written Specification. The intermediate phase 2 is designed to be used as a chroot environment; using the compiler directly (not in a chroot) won't work, as relative paths will point to the wrong places. Although the intermediate phase 2 is for the same architecture as the host machine, it is more like a cross-compilation environment. Note that producing a more usable build environment is a future direction; the current intermediate phase is not officially supported as such, and the bundle is not part of the released materials. The third phase is the construction of the actual LSB-si as it will be delivered, as shown in Figure 12.1. In this phase, the completed second phase is used in a chroot as the development environment, and each package is then compiled and installed to a target location in the LSB-si tree. During the third phase, care is taken not to install unnecessary binaries or libraries, because an upstream source package will often build and install more than is required by the LSB, and these need to be pruned from the final tree.
Since the LSB team has already anticipated several uses for the LSB-si that require more than the core set, there exists a fourth phase that builds add-on bundles which can be installed on top of the base LSB-si bundle to provide additional functionality, as shown in Figure 12.1. There are currently three subphases of the fourth phase: the first builds additional tools required for running the lsb-runtime-test suite on the LSB-si, the second builds additional binaries to make a bootable system, and the third builds additional binaries to make a User-Mode Linux system. The fourth phase is built by the second-phase build environment just like the third phase is, and is completely independent of the third phase. That is, if one had a completed second phase, one could start off a fourth-phase build without ever building the third phase, and it would work fine. It is likely that in the future there will be additional fourth-phase subphases to include in a build environment. 12.4.1 Sample Implementation Build Process The source code for the LSB-si Development Environment can be obtained from the LSB CVS tree. The code can be checked out in several ways: as a snapshot, either by release tag or by date, or as a working CVS directory (even if you're not an LSB developer, having a working directory can let you check developments more quickly by doing a "cvs update"). For an example using a release-tag snapshot, see the build instructions in Section 12.4.2. You can browse the CVS tree Web interface to determine the available release tags. You will also need to check out (or export) the tools/nALFS directory to get the build tool. Again, see Section 12.4.2 for an example. Source code for the patches to the base tarballs is in the CVS tree in si/build/patches. These patches should be copied to the package source directory. The base tarballs must be obtained separately.
Once the build area has been configured, a provided tool can be used to populate the package source directory. The same tool (extras/entitycheck.py) can be used to check if all the necessary files are present before starting a build. With a -c option, it will do a more rigorous test, checking md5sums, not just existence. Every effort has been made to describe reliable locations for the files, but sometimes a project chooses to move an old version aside after releasing a new one (if they have a history of doing so, the location where old versions are placed is probably already captured). The packages are also mirrored on the Free Standards Group Web site. Still, retrieval sometimes fails; entitycheck.py will inform of missing files and the expected locations are listed in extras/package_locations so it's possible to try to fetch the missing packages manually. 12.4.2 Sample Implementation Build Steps Obtain LSB-si sources from CVS: $ export CVSROOT $ CVSROOT=":pserver:firstname.lastname@example.org:/cvsroot/lsb" $ cvs -d $CVSROOT export -r lsbsi-2.0_1 si/build Use -D now instead of the release tag to grab the current development version. Configure the build environment. There's a simple configuration script that localizes the makefile, some of the entities, and other bits. The main question is where you're going to build the LSB-si. The default location is /usr/src/si. Make sure the build directory exists and is in a place that has enough space (see the note at the end of this section). $ cd src/si $ ./Configure Answer the questions. From here on, you'll need to operate as superuser, as the build process does mounts and the chroot command, operations restricted to root in most environments. 
Copy patches to their final destination (substitute your build directory if not using the default): # cp patches/* /usr/src/si/packages Check that the package and patch area is up to date: # python extras/entitycheck.py -f You're now ready to build the LSB-si: # make If there's a problem, make should restart the build where it failed. If the interruption happened during the intermediate LSB-si phase, it is likely that the whole phase will be restarted; this is normal. Building the add-on packages lsbsi-test, lsbsi-boot, and lsbsi-uml requires an additional step. This step is not dependent on the LSB-si (phase 3) step having completed, but it is dependent on the intermediate LSB-si (phase 2) step being complete: # make addons Now you can build the UML installable package (IA-32 build host or target only). This step is dependent on all of the other phases, including the add-ons, having completed: # cd rpm # make The build takes a lot of space (around 1.4GB), and may take a lot of time. A full build on a fast dual-CPU Pentium 4 is about 2.5 hours; depending on architecture, memory, and processor speed it may take as much as 20 hours. If the build stops, voluntarily or through some problem, there should be a fair bit of support for restartability, but this is not perfect. In particular, be cautious about cleaning out any of the build areas, as the package directory may still be bind-mounted. Each of the team members has accidentally removed the packages directory more than once, causing big delays while it's being refetched (it pays to make a copy of this directory somewhere else). Be careful! The makefile has a clear_mounts target that may be helpful.
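The rigorous -c check performed by extras/entitycheck.py (Section 12.4.1) boils down to comparing each package's md5sum against a manifest. The sketch below is not the actual entitycheck.py code; the function names and the manifest shape are assumptions for illustration:

```python
import hashlib
import os

def md5sum(path, chunk_size=65536):
    # Stream the file so large tarballs don't have to fit in memory.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

def check_packages(manifest, package_dir):
    # manifest: {filename: expected_md5}; returns a list of problems found.
    problems = []
    for name, expected in manifest.items():
        path = os.path.join(package_dir, name)
        if not os.path.isfile(path):
            problems.append('%s: missing' % name)
        elif md5sum(path) != expected:
            problems.append('%s: bad md5sum' % name)
    return problems
```

Streaming the file in chunks matters here, since the package tarballs can be tens of megabytes.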
OPCFW_CODE
How to bulk insert relationships I'm playing with Neo4j. I have a database with around 400,000 nodes. I would like to insert relationships from a CSV file. There are about 1.4 million relationships. I'm using the REST API at present. The REST requests look like this example: POST http://localhost:7474/db/data/cypher Accept: application/json; charset=UTF-8 Content-Type: application/json {"query": "MATCH (a { ConceptId: '280844000' }), (b { ConceptId: '71737002' }) CREATE (a)-[:Is_a]->(b) RETURN a"} The problem is that each request is taking a couple of seconds. This is too slow for the amount of relationships I'm hoping to insert. I don't have access to the underlying node IDs, just the properties I gave them when I inserted them. Is there a faster way of doing this? NB: I'm not using indexes at present (I haven't worked out how to add them), but will try again with indexes tomorrow. I just want to know whether there is a way of inserting relationships in bulk somehow. The first improvement is probably to assign labels to your nodes so that you can use indices. Without an index on conceptId, each time your query is executed it will scan the 400,000 nodes twice, once for each of the two nodes you are matching. Speculating based on your query you could give your nodes the label :Concept and index the conceptId property as follows MATCH (n) // WHERE HAS (n.conceptId) //if you have some nodes that don't represent concepts, and conceptId distinguishes the ones that do from others SET n:Concept then for the index CREATE INDEX ON :Concept(conceptId) or if conceptId is a uniquely identifying value you can use a constraint instead CREATE CONSTRAINT ON (c:Concept) ASSERT c.conceptId IS UNIQUE Once you have set labels and created indices you can use them to quickly look up the nodes you are connecting. All you need to do is to include the label and indexed property in your query. 
You already use the indexed property, so after adding the label your query becomes MATCH (a:Concept {ConceptId: '280844000'}), (b:Concept {ConceptId: '71737002'}) CREATE (a)-[:Is_a]->(b) RETURN a You can read more about schema in the Neo4j documentation. The second improvement would probably be to use LOAD CSV as @stephenmuss suggests. If you have queries in the future that are not based on a csv file, there are two more things to consider. The first is to parameterize your queries. Your HTTP call would then look something like this: POST http://localhost:7474/db/data/cypher Accept: application/json; charset=UTF-8 Content-Type: application/json {"query": "MATCH (a { ConceptId: {a} }), (b { ConceptId: {b} }) CREATE (a)-[:Is_a]->(b) RETURN a","params":{"a":"280844000","b":"71737002"}} This allows the execution engine to create the execution plan once, for the first query of that structure. The next time you issue a query with the same structure, the cached execution plan is reused. This will significantly increase performance for repeated queries with the same structure. The last thing, along the lines of @ulkas' comment, is to insert in bulk. One reason LOAD CSV is faster is that it performs several operations in one transaction. You can do something similar using the transactional cypher endpoint. You can then execute a few thousand small statements per transaction, which is significantly more performant, and also reduces overhead over the wire. It is slightly more complicated to design the payload for the transactional endpoint and also to handle exceptions. A simple example is below; you can read more about it in the Neo4j manual pages.
POST http://localhost:7474/db/data/transaction Accept: application/json; charset=UTF-8 Content-Type: application/json {"statements":[ {"statement":"MATCH (a:Concept {ConceptId: {a}}), (b:Concept {ConceptId: {b}}) CREATE (a)-[:Is_a]->(b) RETURN a","parameters":{"a":"280844000","b":"71737002"}}, {"statement":"MATCH (a:Concept {ConceptId: {a}}), (b:Concept {ConceptId: {b}}) CREATE (a)-[:Is_a]->(b) RETURN a","parameters":{"a":"199401294","b":"51233509"}} ]} The server returns the location of the new transaction, say "http://localhost:7474/db/data/transaction/1". You can continue to execute statements within the same transaction: POST http://localhost:7474/db/data/transaction/1 Accept: application/json; charset=UTF-8 Content-Type: application/json {"statements":[...]} When you're done, you commit. The commit call can also contain statements. POST http://localhost:7474/db/data/transaction/1/commit Accept: application/json; charset=UTF-8 Content-Type: application/json {"statements":[...]} This is the kind of post that makes you believe in the StackOverflow mission all over again. Giving each node the label 'Concept' and putting a unique constraint on 'ConceptId' improved performance by several orders of magnitude. Thank you! If you are using Neo4j 2.1+ I think your best option would be to use LOAD CSV. You can then use syntax like the following: USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM "file:/path/to/file.csv" AS csvLine MATCH (a{ConceptId: csvLine.aId}), (b{ConceptId: csvLine.bId}) CREATE (a)-[:Is_a]->(b) I suggest checking out the docs for importing csv files. Ah, missed that entirely! Thank you. Or maybe set up a bulk insert where one call to the db would contain several create queries.
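Putting the batching advice together: the payload for the transactional endpoint is plain JSON, so it can be generated from the (parent, child) pairs in the CSV. A minimal sketch, assuming the :Concept label and ConceptId property from the answers above; the helper name and the suggested batch size are illustrative:

```python
import json

# Old-style {param} placeholders, matching the Neo4j 2.x examples above.
STMT = ("MATCH (a:Concept {ConceptId: {a}}), (b:Concept {ConceptId: {b}}) "
        "CREATE (a)-[:Is_a]->(b)")

def build_payload(pairs):
    # pairs: iterable of (from_id, to_id) tuples to link with :Is_a.
    return json.dumps({
        "statements": [
            {"statement": STMT, "parameters": {"a": a, "b": b}}
            for a, b in pairs
        ]
    })

# POST each batch of a few thousand pairs to
# http://localhost:7474/db/data/transaction/commit (e.g. with urllib.request),
# or to /transaction followed by a /commit call to span several requests.
```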
STACK_EXCHANGE
Those of you who have been following my progress on the rewrite of Smooth Calendar have probably figured out that it's slow, super slow. It's just hard to work up the drive to do something that I have already done before, and that for me works fine. Today I found the drive to play around a bit with the rendering of the widget, trying to solve the one issue that I feel the current implementation has, namely that you cannot position the time/date/event in a fixed position that is maintained for each row. This is due to the widget being a list, and each row is just a line of text in that list. This in turn is due to limits set in Android on RemoteViews, which is the component used to build a widget. I had a couple of ideas on how I could solve this. The first one was to use a GridView and let each part of the row be its own item in the GridView; by setting numColumns to four, each row would consist of the needed number of items. The reason this approach didn't work was that there is no way to control the width of the items, which in turn means that there is no way to control that the items "on top" of each other are the same size. Android has a layout that allows you to have a grid, fittingly called GridLayout, and it is available to RemoteViews, but it's not scrollable, which is a feature that I wasn't prepared to sacrifice to get the positioning support. So this is the solution I have now come up with: the main interface will contain a ListView that is scrollable, in turn backed by an adapter that only ever contains one item, a GridLayout. The ListView is then used as a RemoteViews-able ScrollView, and allows the GridLayout to be scrolled if it is taller than the widget.
The GridLayout in turn has its own limitations. The first is that it is of a fixed size; we cannot change the number of rows and columns at runtime. So what I have done is create a number of layouts, with increasing numbers of rows, that will be loaded when the widget is rendered to match the number of items the user has set the widget to show. The number of columns is easier to handle, since I can just set any elements I don't want to GONE and they will disappear. Since we still can't set the width of the columns in the GridLayout dynamically, another trick was needed to let the user add the desired amount of padding between the items, and this is by using ImageViews with dynamically created transparent images matching the padding the user wants. So, the conclusion to all this is that I now have a layout for the calendar items that allows them to be positioned in straight columns, while still being scrollable if the number of items is larger than the display area of the widget. Hopefully it won't be another year until my next update.
OPCFW_CODE
Improve the performance of Visual Studio. Visual Studio seems to be getting slower. Please focus on improving the performance and limiting the enormous load on the HDD. VS still gets the (Not Responding) freezes for the most simple tasks. Surely that has to be more important than fixing icons and project templates. Please do something about the responsiveness.

Are you doing anything to fix the problem regarding the very slow designer?! If I have a long form, 4000+ lines for example, any change on the page, either in Source mode or Design mode, takes VS ages to apply. Out on the web there are so many posts on this topic for VS2010, and also VS2008, that I was 110% sure you would fix this problem. But I must admit I was very disappointed that the problem still continues in VS11.

I am a college student; I used Visual Studio 2008 and Visual Studio 2010. But Visual Studio 2010's launch speed is so slow, and it's a big difference from Visual Studio 2008. I hope that Visual Studio 2010 SP1 can launch faster.

An update on performance – First, thank you to all who provided the ideas and votes on the performance forum. We used that data to help prioritize the improvements in VS 2012, and as such, I've rolled the performance ideas into this main forum. You'll see a performance tag which shows all the ideas from that forum. This idea remains very highly voted, but has slipped outside the top 20 in terms of hot votes, meaning few people are still voting for it. We're going to leave it open for now, and monitor that count, as you can never truly be "finished" with performance. At some point, we'll close the idea out and return your votes back to you. Of course, you are welcome to resubmit the idea at that time, to see if there is still energy around that as a top idea. Again, thanks for your feedback and votes – it's very helpful to our planning. Doug Turnure – Visual Studio PM

Csaba Toth commented: There are solutions with more than 140 projects.
Today TFS can handle them much better than VS2010 did (not to mention VS2008), but maybe there's still room for improvement.

Przemysław Karlikowski commented: We do not need a super UI, animations, visual experience (requiring hardware graphics acceleration!!!) and all that stuff. It doesn't have to be beautiful - it should be simple, usable, fast and lightweight.

Alexsandro Pereira commented: I think this is a good idea; when you have an SSD it's not exactly a problem, but it's still super resource consumption.

Dave Novak commented: In fairness to VS-2012 and the MS VS team, recent product updates have significantly improved performance, especially with debugging. It's quite a significant improvement over earlier Betas and Release Candidates. Of course performance could always be better, but at this point I'd rather they invest in adding new features, such as the Record/Playback support found in VS-2010 (and most any other IDE for the past 20+ years). See http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2650757-bring-back-macros

David Rathbone commented: 2013 and still no improvement, but you got 4667 users' cash who think it's duff!

Phil Murray commented: It's the second most voted topic on the site, for God's sake. I also expect nothing further will be done, just to match the top voted item.

Phil Murray commented: The performance of VS2012 (over 2010) is improved but is still not what I would call acceptable. This can not be closed. Does M$ not care what their loyal user base thinks are issues? 4600+ votes and counting.

Fabio von Hertell commented: Well, I think to really cope with this, a new feature would be great which visualizes the resource usage of different VS modules and plugins over time, so that one can identify bottlenecks and configure the IDE accordingly.

>This idea remains very highly voted, but has slipped outside the top 20 in terms of hot votes, meaning few people are still voting for it.

That is the worst excuse I ever heard.
A G commented: Completely ridiculous to have a XAML editor freeze. I'm not using a designer, I'm editing text!

Jason R commented: 1. Create a new Silverlight Business Application 3. Open "Data Sources" window WTF!?!?!! I'm NOT renewing MSDN subscriptions at our firm for the first time in 20 years.

Francois Aube commented: Can we disable Intellisense for real this time around? If performance is an issue for some, and the HD is the root factor, then would it be possible to create an x64 build of the IDE and just load more resources into memory? The only reason "few people are still voting for it" is that they think 4000 votes is enough to get the point across. Apparently it isn't.

David Rathbone commented: Have you fixed the speed in 2010, 2012, or will it be in 2014? By then we all will have jumped to Google!

Shaun Tonstad commented: The designer performance is atrocious. I have a 3 GHz machine and I have to routinely wait up to 30 seconds for a simple XAML page to load.

Victor Zakharov commented: 2008 is like 10 times faster than 2010. I haven't had much development experience with 2012, but if it's the same thing as 2010, it needs to be fixed ASAP. Make use of an SSD, 16GB of memory, i7 - whatever is installed on the development machine.

SSD + VS2012 helped a lot here. Nice work MS. VS performance is extremely important because it hurts productivity when it is slow. I haven't tested VS 2012, but heard that it is faster. To get by, in VS 2010, I installed the PerfWatson Monitor so that at least I know I can shift my focus to other tasks. I also create more solutions with a smaller number of projects; that way I have less chance of running low on memory.
OPCFW_CODE
Clients implementing IDragDropObserver receive an object of this type in their implementation of onDrop.

NSTreeController and Core Data

Now the Interactor prepares data when requested and pushes changes upon synchronization events. NSTreeController binds views to hierarchical data. In addition, the Core Data stack sends a notification, the Interactor receives the notification, and changes are then pushed through the Presenter to the View. The Core Data objects provide a way to incorporate managed object contexts and interfaces for your Core Data entities into your nib files. The code for controllers is eliminated by using the built-in controllers: NSUserDefaultsController, NSObjectController, NSArrayController, NSTreeController. Bind to NSTreeController, or a custom subclass of one of these concrete NSController subclasses. They access the 'data' property to retrieve data. Core Data may impose additional constraints over general Cocoa object modeling.

addDataForFlavour("text/html", htmlString); // we're copying the URL from the proxy icon, not moving // we specify all of them though, because d&d sucks on some OS's

Creating an OS X Core Data Helper App. Connecting NSOutlineView to Core Data in 10.10. This works, but I find the decision to be weird.
6 Part 1: Ordered Trees

Core Data, Core Graphics, Core Image, Core Location. NSRegularExpression Tutorial and Cheat Sheet. NSTreeController and Drag and Drop. Adding Drag support to an NSTreeController with Core Data, by Unknown. Showing an NSSavePanel as a sheet, by Geoff Wilson. Data Structures & Algorithms in Swift, Jan · Video Course · 2 hrs, 49 mins · Completed. UIVisualEffectView: Accessibility, Jan · Screencast · 2 mins · Completed.
OPCFW_CODE
09-07-2015 01:38 AM Try disabling power management in the BIOS and uninstalling Power Management and Active Protection. I did that and the system is better. Still some lags, but only when the CPU is highly used; after some time it comes back to normal. But if you have the option to return this notebook, don't hesitate. 09-07-2015 03:18 AM 03-18-2016 01:18 PM I bought a ThinkPad W550s, i7 550U, 12G RAM, 500G, last month and got the same problem as you, which makes me disappointed in the Lenovo ThinkPad line, having just upgraded from an X220. I had tried to get the latest updates from Lenovo, and even Intel, but it did not work. After some extended installations, and especially disabling the "intelligent cooling" technology, it is now getting quite a bit better performance. Let me check a few more days to make sure it was caused by this software-defined "INTELLIGENT" effect. Running on the newest Windows 10 update. Intelligent cooling can be disabled from Lenovo Settings -> Power tab -> Intelligent Cooling -> OFF. 04-07-2017 12:14 PM Last week I got a BIOS update when running the Lenovo System Update app, and the problem disappeared. I had had it since updating to Windows 10. This problem is called "throttling" and is a common problem with recent Intel processors that happens on a lot of systems. Even the acclaimed Surface Pro 4 had it, and it took half a year to be resolved by Microsoft (after a lot of people started a collective demand). 04-15-2017 03:00 PM This is strange, but I noticed better performance with the balanced plan with everything set to the max when plugged in than with the high performance plan and everything to the max. With high performance, when the CPU has nothing to do the clock shows 2.90 GHz, and it drops to 2.70 GHz when it actually does something! With the balanced plan it drops down to 0.70 GHz but rises up to 2.90 GHz under heavy load. Unbelievable ;-\ 07-09-2017 10:14 PM - edited 07-10-2017 08:05 AM Not sure if this helps, but I'd like to share my experiences (both with solution...
so keep reading). You are warned: it's a bit long... :-) - 16 GB RAM - SSD (system) + HDD (storage) - Windows 10 Pro First, a while ago I had an issue which would leave the processor, for some reason, stuck at a throttling level well below 100% (somewhere around 1 GHz) when I resumed from sleep. It wouldn't happen 100% of the time, but very very often (I'd say 80%). It was easy to recognize this situation because the CPU load would saturate at the maximum possible value for the throttled frequency (which was around 30%) every time the PC was doing something moderately intensive (and mind you, for some reason my Windows installation always runs a few seconds of very intensive CPU load when it resumes from sleep... when the issue occurred, those became tens of seconds at maximum, albeit throttled, CPU load, making the PC almost unusable for quite a while). My solution was to install a utility called ThrottleStop, which has a thousand options, of which I only understand 1%. However, fiddling with the different preconfigured modes was usually enough to "unlock" the throttling and bring the PC back to normal operation. Interestingly, two or three months after the issue started, it spontaneously disappeared. I suspect it was due to some system update, although the fact that I was using ThrottleStop, thus mitigating the issue, prevented me from spotting the exact moment at which the issue disappeared. The second experience is more recent. All of a sudden I noticed my PC being very slow. Unfortunately I wasn't able to identify what started it. In part this was due to the fact that the symptoms, although equally devastating in terms of performance, were a bit more subtle than in the previous case. Looking at the CPU usage wouldn't necessarily indicate something strange, as the CPU load was high on average but fluctuating quite rapidly, as always happens when you have a lot of things going on (rather than a single thread sucking up all your computing power).
ThrottleStop, via a sub-program called LimitReason, gave me a hint that something was triggering throttling very frequently (as opposed to constantly, as in the previous case), but the lack of documentation for LimitReason made it difficult to identify what. Long story short, I ended up installing the Intel Extreme Tuning Utility (XTU), which clearly indicated that my system was throttling for excessive power very often. Observing the behavior for a while, I noticed that the system would never allow the package TDP to go above 7 W, while my processor is designed for 15 W. Interestingly, XTU indicated that the currently set TDP limit was 15 W, and fiddling with the Windows power profile did not make any difference. Eventually, I realized that changing the value to whatever in XTU and then putting it back to 15 W would solve the issue. Unfortunately, the setting would not survive a reboot, and sometimes not even a sleep cycle. But modifying the settings in XTU was quick enough to be considered a usable workaround. By fiddling here and there I finally managed to make the setting stick. I suspect Intelligent Cooling in the Lenovo Settings app (which was reported as disabled, but maybe stuck in some strange state, as I would never hear the fan go off at max speed) was part of the issue. Again, changing and resetting the setting eventually made the power limit stick. Currently my system runs fine and is very responsive. Under high CPU load, after a few seconds the fan goes to full speed and the processor is throttled back a bit (just a bit), but this is normal. Before trying XTU, I had tried to disable all CPU throttling in an attempt to solve the issue, and I haven't restored most of the settings. Run time on battery seems to have suffered a bit (maybe -20%?). Further fiddling could (should!) make it possible to define a state in which the CPU is more aggressively throttled while on battery, and basically not throttled while plugged in, but I haven't had time to experiment.
Since 90% of my usage is while plugged in, I don't care too much. 07-11-2017 08:23 PM 08-11-2017 03:14 PM - edited 08-11-2017 03:22 PM I've found another way of improving the performance. You need to reduce the amount of virtual memory, i.e. the paging file. I have 16 GB of RAM and my paging file was 8 GB; I reduced it to 2905 MB as my Windows 10 Pro recommended, and there is no scroll lag in Opera, websites open nearly instantly, and applications start immediately. Checking for updates in Windows now only takes a few seconds (before, it was a few minutes). It feels as if I had a new notebook. > Control Panel > System and Security > Advanced system Settings (on the left) > Advanced tab > Performance group > Settings > Advanced (again) Set the paging file to "Custom size" with whatever is recommended by your system. Press "Set", restart the system and enjoy :-) I found it on Reddit; see this link: https://www.reddit.com/r/Windows10/comments/3fm7m8/windows_10_is_very_laggy_and_slow_after_the_updat... 05-21-2019 01:23 AM Thank you so much @sdu! You gave me the hint to the solution that works for me. I had been messing around with extremely slow CPU performance after every hibernation startup - I almost couldn't move my mouse sometimes, and it took me about 2 minutes to open the Power Manager. I identified this using ThrottleStop, which shows the limits; the task manager showed almost constantly 100% while Power Manager capped the CPU speed at 40-60%. What worked for me (Lenovo W550s, Win7 Home Premium 64-bit SP1) is very simple: - open the "Active Protection System" - click the "intelligent cooling" tab - disable "enable intelligent cooling" Disable it; it is not intelligent, it's stupid - it's a trap. Before I realized this was the problem, I found a workaround.
If the proposed solution doesn't work, this might be a pretty annoying, though life-saving workaround: - every time your CPU throttles, open the power manager - click on the "Advanced" button - in the active mode (maximum performance or video playback in my case), expand the system settings and select "Turbo" for maximum CPU speed (even if maximum Turbo was selected previously) - click apply - then, select "Maximum Turbo" for maximum CPU speed again - click apply - CPU speed will go up to 100% immediately and CPU load will drop to < 30% or so
Users interested in downloading Microsoft Project for free generally download:
- Microsoft Project 2013 is a program that offers you the required elements to keep projects organized and on track.
- Project Professional 2010 delivers new and intuitive experiences to simply plan, and enhances team collaboration to realize results by connecting your teams with Microsoft SharePoint Foundation 2010 task-list and status synchronization.
- The Project Initiation Tool is a Microsoft Office InfoPath 2003 sample solution that simplifies the process of capturing and ranking project ideas.
- GanttProject is a free-to-use project scheduling and management app for Windows OS. Create baselines to be able to compare the current project state with previous plans; a read-only PERT chart can be generated from the Gantt chart.
- An interactive tutorial to find commands in Project 2010: click a command in the guide to learn its new location in Project 2010.
Additional suggestions for downloading Microsoft Project for free:
- Microsoft Office: a collection of programs for the preparation of documents, presentations, etc., and project management.
- A tool to program and debug applications in C, C++, and Fortran, supporting several types of C/C++ projects, such as console projects.
- A download manager that increases download speeds by up to 5 times, resumes and schedules downloads, and integrates seamlessly into Microsoft Internet Explorer.
- OpenProj 1.4 is an open-source desktop project management application similar to Microsoft Project.
- The most popular free media player that can play any video format; an open-source project.
- Microsoft PowerPoint 2010 allows you to create and share dynamic presentations, on the desktop and on the Web; part of the Microsoft Office family.
- A viewer that lets you open, print, and export Microsoft Project MPP, MPT, and XML files.
- MOOS Project Viewer is a low-cost alternative for viewing and printing any Microsoft Project file.
- Microsoft Works 9 is a suite of home tools to facilitate your everyday tasks, including mail merge and managing basic projects.
- Help Workshop, by Microsoft Corporation: with this tool you can create help files for Windows applications and web pages; it includes a project manager.
- Plan: a free alternative compatible with Microsoft Project.
- A reader for the .mpp project file format that does not require an installation of Microsoft Project.
- Project Viewer for Microsoft Project: a single, affordable program that has all the viewing power of Microsoft Project.
- A professional viewer that opens files created with Microsoft Project, useful if your organization uses Microsoft Project.
- IrisSkin: the easiest-to-use .NET skin solution for Microsoft Visual Studio .NET (WinForms).
Use AnimationClip to turn a material pitch black and then back to its original color I have an object that has 2 materials, both using the URP/Lit shader. One of them is emissive and its color changes at runtime. I would like to create an effect where the two materials become pitch black (think Vantablack), and then gradually return to their original state. Is it possible to achieve this effect using an animation clip created in Unity's animation editor? I have very limited knowledge about lighting, shading and render pipelines, so I think using an animation clip for this effect would be the easiest, since it's easy to tweak at edit time and to trigger at runtime, but I am also curious to know about other ways. I have tried animating some of the MeshRenderer properties in the animation editor, but: For the non-emissive material, when I try to change its Material_Base Color to black, I can still see it being tinted by scene light. For the emissive material, even if I could find the right properties to animate, I am not sure how to proceed because I cannot hard-code the initial color, as it can change at runtime. I'd be inclined to make a custom shader graph that exposes a single "blackness" parameter that simultaneously drives albedo/smoothness/emission to zero, just to simplify this setup. @dmgregory Your comment motivated me to finally learn Shader Graph, and it turns out it is quite trivial to achieve this as you suggested. I can even easily animate the exposed parameter in the Animator. If you want to add this as an answer I can mark it as accepted. Note that I also had to drive the normals to 0, because otherwise I could still see light being reflected. Good solution. I think you should get to post the answer, since you're the one who put in the work to execute it and problem-solve around the normals, and you have the graphs handy to make screenshots, whereas I'd have to make fake ones just for the sake of the answer.
Following the suggestion from @dmgregory, I used Shader Graph to solve this: Create a new Lit Shader Graph (I am using URP) Expose parameters for Smoothness, Base color, Emission color, and a float parameter to control the "blackout" effect For each of Smoothness, Base color and Emission color, run them through a multiply node with the blackout parameter before piping the result to the corresponding input in the master node, such that when the blackout parameter is 0, everything is set to zero To achieve pitch blackness you will also need to zero-out the normals in the same way; in my case my material was very simple and didn't require a normals map, so I instead used the "Normals" node set to "Tangent space" You can now turn materials pitch black without messing with the scene's lighting, and you can even animate the blackout parameter in the Animator to create interesting transitions.
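For anyone who wants to see the idea outside Shader Graph: the graph above boils down to multiplying each lighting input by the blackout factor. Here is that arithmetic sketched in plain Python (illustrative only — the real effect runs in the shader, and the function name is made up):

```python
def apply_blackout(base_color, emission_color, smoothness, blackout):
    """Scale all lighting inputs by a 0..1 'blackout' factor.

    blackout = 1.0 leaves the material unchanged;
    blackout = 0.0 drives albedo, emission and smoothness to zero,
    which is what makes the surface read as pitch black.
    """
    scaled_base = tuple(c * blackout for c in base_color)
    scaled_emission = tuple(c * blackout for c in emission_color)
    return scaled_base, scaled_emission, smoothness * blackout

# Fully blacked out: every contribution becomes zero.
base, emission, smooth = apply_blackout((0.8, 0.2, 0.1), (1.0, 0.5, 0.0), 0.6, 0.0)
print(base, emission, smooth)
```

Animating the single `blackout` value between 0 and 1 then drives all three inputs at once, which is exactly why exposing one parameter keeps the animation clip simple.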
For FolderSync to function correctly it must have the required permissions granted. The welcome wizard will be shown on the first run of the app, and here you should see a permissions screen with an overview of the permissions required. The permissions screen can always be accessed directly from the "about" menu page (the about page is the right-most option on the bottom menu). Different permissions are needed depending on the Android OS version. Not all permissions are strictly needed, depending on your use case, but "Write to device storage" and "Manage all files" permissions are required for a sync to run correctly. The "Manage all files" permission is only available on Android 11 and newer. Regarding the all-files permission: for FolderSync to be able to access all files, it is necessary to grant permission to access "all files", not only "media files", which is a more restricted version of this permission. FolderSync does not know whether "all files" or "media files" has been allowed, and if only "media files" is allowed, some files cannot be read, and FolderSync will assume those non-readable files and folders don't exist (as they are not reported by the file system). To grant access to the Android/data folder on the internal SD card, FolderSync will open a permission chooser where the user must manually grant permission to access the folder. When opening the permission chooser, it should already be in the Android/data folder on version 3.1.0 and newer. If you need to access other folders under Android, you can try to navigate up one folder and grant permission to the Android folder (and not Android/data) - this is not allowed on all devices, but if possible it will allow you to access all subfolders of the Android folder. Android 13 limitations Granting general access to the /Android/data and /Android/obb folders is no longer possible on Android 13 (and newer). See below for a possible workaround. You can read more about the issue here.
In FolderSync 3.3.0 running on Android 13 you can initiate a permission request for a user-defined folder on the Permissions screen. This folder should have a path like "Android/data/theFolderName". It should then be possible to grant access to the folder using the Android SAF file dialog. Afterwards the folder is visible as a separate storage location entry in the folder selector and file manager - if your folderPair was already configured with a file path before upgrading to Android 13 and you add a permission this way, you will need to reselect the folder using the folder selector on the folderPair configuration screen. Location background permission If you set allowed or disallowed Wi-Fi SSID names on a folderPair, FolderSync needs this permission to be able to read the name from Android. It is important that the permission granted is of the type "Allow always" in order to access the Wi-Fi SSID name while running sync in the background. Do you need to grant location permission? If you don't use the SSID name filter, you do not need to grant this permission. For FolderSync to be able to run in the background to sync files, you will need to exempt the app from battery optimization. This is also required for FolderSync to be able to schedule precise alarms for when a sync should run. Battery optimization and other vendor-specific "optimizations" are the most common causes of scheduled syncs not running, or freezing without errors. See here for more info on how to keep an app alive in the background if you are facing such issues.
Read the online material before the class meeting in which we will begin discussing it; journal articles may be obtained using the citation databases on the BC Library website. We will be using an open source statistical programming language, Python, to do data analysis in this class. Python is free and you can install it on your own computer (Windows, Mac, Linux) if you wish, or you can run it online using Google's Colaboratory. The course is structured as a series of modules, called units, that have a consistent structure: intake, process, demonstrate. Each unit will have one or more laboratory assignments in which you will do some data analysis with Python and write about what you've learned. The units will incorporate in-class meetings and outside-of-class activities; each week will also include a learning reflection, which will count as participation in the course. In the schedule below, synchronous activities are displayed in all caps and asynchronous activities are displayed in sentence case. This is to make it easier to spot the synchronous work each week. M 1/29. (0.1) Introduction to the course. Part I. The Fundamentals Unit 1. Concepts and Tools M 2/5. (1.2) Introduction to Python. Read . PROCESS: GUIDED TOUR OF GOOGLE'S COLABORATORY. Do: Reflection #2 W 2/7. (1.3) More about programming. Read . PROCESS: AN INTRODUCTION TO PYTHON SYNTAX. Do: Reflection #3, Lab 1. Python notebooks. Unit 2. Probability T 2/20. (2.1) Probability. Read Sampling, Probability. PROCESS: CLASS PRESENTATION AND DISCUSSION OF PROBABILITY AND SAMPLING. Do: Reflection #5. W 2/21. (2.2) More on probability. Read Categorical Data. PROCESS: COMPUTATION OF PROBABILITY AND ODDS. Do: Reflection #6, Lab 2. Probability. Unit 3. Describing a Sample For this section, also read: Massoni (see comment, below). (10) 3/5. (Finish up descriptive statistics.) Review for midterm examination 1. (11) W 3/7. Midterm examination 1. *** College closed due to winter storm. *** Part II.
Inference with Means and Percentages (12) M 3/12. *** Rescheduled Midterm 1 *** Programming part of the exam will be due by the start of class on M 3/19. (13) W 3/14. Inference, part I. Read The Normal Curve, Sampling Distributions, and chapter 3 (slides). Lab 3. Descriptive statistics. Due M 3/26. (18) M 4/9. Factorial analysis of variance. Read Factorial ANOVA, chapter 5. Lab 4. Inference (standard scores, z-test, confidence interval). Due W 4/11. Lab 5. Comparing means (t-test, F-test). Due W 4/18. (19) M 4/16. More on factorial analysis of variance. (20) W 4/18. Even more on factorial analysis of variance. For this section, also read: Howard, et al. (See comment, below). (21) M 4/23. Review for midterm examination 2. (22) W 4/25. Midterm examination 2. Part III. The Linear Model Lab 6. Bivariate correlation and regression. Due W 5/9. (25) M 5/7. The linear model. Read Multiple Linear Regression chapter 7. (26) W 5/9. Recoding and indexing. More on the linear model. (27) M 5/14. Even more on the linear model. Lab 7. The linear model. Due W 5/16 @ (28) W 5/16. Review for the final examination. For this section, also read: Perez (See comment, below). TH 5/17. Reading day Final examination. Distributed: W 5/16. Due: W 5/23.
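The labs above all follow the same pattern: compute something in Python, then write about it. As a small taste of what Lab 3-style descriptive statistics look like, here is a sketch using only the standard library (the scores are made-up data, not course material):

```python
import statistics

# A small sample of made-up exam scores.
scores = [72, 85, 90, 66, 78, 85, 93, 70]

mean = statistics.mean(scores)      # arithmetic average
median = statistics.median(scores)  # middle value of the sorted sample
stdev = statistics.stdev(scores)    # sample standard deviation

print(f"mean={mean:.2f} median={median} stdev={stdev:.2f}")
```

In Colaboratory the same code runs in a notebook cell, which is exactly the workflow introduced in Lab 1.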
Computing on Aleph.im Aleph.im offers a decentralized computing framework that allows users to run programs on the network. This is done by creating a virtual machine (VM) that executes the program. Overview of VMs There are several types of VMs available on the network: An On-demand VM is created on a Compute Resource Node (CRN) and is destroyed once the program has finished executing. This is great for programs that respond to user requests or API calls (using ASGI) and can shut down after processing the event. They are also cheaper to run, as they only require one tenth of the $ALEPH tokens to hold, compared to a Persistent VM. A Persistent VM can be used to run programs that cannot afford to stop or need to handle incoming connections, such as polling data from a websocket or AMQP API. Instances are similar to Persistent VMs, but are specifically designed to run with an SSH key supplied by the user. This allows the user to connect to the VM and interact with it directly. They do not rely on code execution, but rather on the user's ability to connect to the VM and run commands on it. They cost as much as Persistent VMs. On how to deploy a simple Python microVM, see our Python microVM guide. When a program is created with persistent execution enabled, the aleph.im scheduler will find a Compute Resource Node (CRN) with enough resources to run the program and schedule the program to start on that node. Persistent programs are designed to always run exactly once, and the scheduler will reallocate the program on another CRN should the current one go offline. ⚠️ Automatic data migration across hosts in case such events happen is not available yet. The execution model of a program is defined in the field message.content.on of messages of type PROGRAM and is non-exclusive. The same program can therefore be available as both a persistent instance and on-demand at the same time.
Before you begin this tutorial, ensure that you have the following: - A computer with Python and the aleph-client utility installed - An Ethereum account with at least 2000 ALEPH token - Working knowledge of Python Step 1: Create your program Let's consider the following example from the FastAPI tutorial. Any other ASGI compatible Python framework should work as well. Running programs written in any language that works on Linux is possible. This will be documented later. Create a file named Test the application locally: Step 2: Run a program in a persistent manner To run the program in a persistent manner on the aleph.im network, use: You can stop the execution of the program using: Find your program TODO: Locate the CRN where your program is running. TODO: Document Instance VMs
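The FastAPI snippet referenced in Step 1 is not reproduced here, but since the docs only require ASGI compatibility, here is what a bare-bones ASGI application looks like with no framework at all (an illustrative sketch, not aleph.im-specific code — the greeting and helper names are invented):

```python
import asyncio

async def app(scope, receive, send):
    """A minimal ASGI application: replies with plain text to any HTTP request."""
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello from aleph.im"})

# Drive the app once with fake ASGI events to show the plumbing
# (in production, an ASGI server such as uvicorn does this part).
async def main():
    sent = []
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}
    async def send(message):
        sent.append(message)
    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return sent

messages = asyncio.run(main())
print(messages[0]["status"], messages[1]["body"])
```

Any framework that produces such an `app` callable (FastAPI included) should be deployable the same way.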
Hey, guys! Today's exciting – I'm here to tell you all why you should start learning how to code. Now, to get things started, I would like to say that coding isn't actually that hard – it's only stereotyped that way, and many think that only geeks or nerds understand it. That is false. Coding is, in fact, very, very easy. Sure, it can get stressful at times, but it's usually fun. Here are some ways having some know-how in coding can help. Firstly, coding will likely become the language of the future. The technological industry is growing quickly. New technological developments are being made almost every day. By the next decade, Augmented Reality (AR) and Artificial Intelligence (AI) might become the new norm. In all this, of course, we're going to need programmers to continue to make and curate new technology. And since programming can be a high-paying job, you can make a lot of money off of it. Secondly, it isn't hard at all. Sure, you might be thinking that only hackers and computer geeks have the power to write code. But that really is a myth and a major misconception. It may be a bit difficult to grasp at first, but once you understand it, coding can be a breeze. How, you may ask? Well, here's a story from my personal experience. I started on Linux (if you don't know what that is, check out my first article on the three major operating systems from last week!) to learn my first coding language. Sure, it took some time to get used to everything, but with practice, I became quite the pro. I finished with Linux after some 4-6 months and decided to learn Python. I've since finished with Python, and I've now decided to start learning Go, another cool coding language. And if I, a 13-year-old, can accomplish that much, then surely you can too! Thirdly, once you learn one language, you can learn them all. One day, I realized that Python is practically the same thing as the Linux shell – the commands just have different names!
The 'Print' command, for instance, is just called 'Echo' in the Linux shell, but they have the same function. (If you didn't know, the print command in Python will 'print' – or make an output of – whatever you want it to, well, print.) The 'Echo' command in Linux does the same thing, and it's really just called 'Echo.' Case in point? If you make an effort to learn just one coding language, you can easily learn the rest. Fourthly, there are many, many start-up companies that revolve around code and programming. A lot of companies are becoming more technological because code and technology are ubiquitous nowadays. A lot of people become freelancers in coding, and at the end of the day, they get paid doing what they love to do in the first place – win-win! But remember, you can only accomplish these things if you try! That's why making an effort is the first step to success, and it's no different in the world of technology. Lastly, knowing how to code can be a big plus in your college/university interview or on your work resume. As I've mentioned before, coding is a big part of technology, which, in turn, is currently one of the most successful industries in the twenty-first century. That means that top-notch universities like Harvard and MIT are looking for people with amazing programming skills. Look at us, on our phones – some of us aren't able to take a step away from our devices. Most of us are probably eagerly waiting for the new version of the device you're holding in your hands right now, with which you're reading this article. We're all addicted to technology, and that isn't going to stop anytime soon. And if you know enough to be able to ride this wave, it can really help you when you're applying for a competitive school or job position – you'll be seen as a hard-working individual with knowledge in a field that a limited number of people take the time to become experts in.
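To make the earlier print/echo comparison concrete, here is the same one-liner in both worlds (a trivial illustration):

```python
# Python's way of writing a line of output:
message = "Hello, world!"
print(message)

# The equivalent in a Linux shell (bash) would be:
#   echo "Hello, world!"
```

Same idea, different spelling – which is exactly why picking up a second language feels so much easier than the first.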
So there we go – those were all the reasons why I think you should start to learn how to code. Thanks for reading, stay tuned for my next article, and have a great rest of your day!
The first Firefox version is a good candidate for bugs, but there's no guarantee that it'll solve every one of them. There are also bugs in some older versions that can be easily patched. You'll find many of these bugs in the bug tracker, though. Here's a look at a few.
1. When you add a bookmark to a bookmark list, Firefox won't let you add more.
2. When clicking on an item, Firefox's titlebar does not show.
3. Firefox's "Show Info" button will hide if it's not already showing.
4. When Firefox tries to use the browser tab you have open, it will sometimes not load properly.
5. Firefox sometimes tries to open a file that you've closed and not save it.
6. The "Add a bookmark" button on the toolbar is sometimes inaccessible.
7. When adding a new bookmark to the Firefox tab, Firefox sometimes hangs.
8. When opening a new tab in Firefox, it sometimes crashes.
9. When launching an extension in Firefox's extensions menu, the extension sometimes crashes when it tries to load.
10. When dragging and dropping files into Firefox's document mode, it won't load.
11. Firefox crashes when you enter a URL into the address bar.
12. When using Firefox in the background, the browser can sometimes freeze.
13. Firefox occasionally hangs when you try to open the address bar.
14. Firefox often hangs when Firefox is on the screen.
15. Firefox keeps asking you if you want to continue browsing the site.
16. Firefox hangs when the "Open In…" option is set to "on."
17. Firefox can sometimes crash when trying to open an extension.
18. Firefox stops working after you've clicked the "Delete" button.
19. Firefox refuses to close when you drag a page.
20. Firefox freezes when you press the home button.
21. Firefox has some strange behaviour when you select an "extension" with the mouse wheel.
22. Firefox will sometimes crash on startup if you use the same filetype over and over again.
23. Firefox might crash when you open a document with an image.
24. Firefox is sometimes not able to read files that have a large number of tags, such as pictures and documents.
25. Firefox frequently hangs when using the "Save As…" button.
26. Firefox always displays a warning message when you close a tab.
27. Firefox doesn't seem to be able to save or restore files that were opened with a "Save" button in the toolbar.
28. Firefox fails to open files that you open with the "File" menu option.
29. Firefox won't open files if you type "open" in the address field.
30. Firefox tries very hard to use all available memory, even if it has little or no available free space.
31. Firefox makes a lot of noise when you hover your mouse over the keyboard, but it often stops working.
32. Firefox may sometimes crash, or crash and quit unexpectedly.
33. Firefox isn't always able to open URLs with the search bar.
34. Firefox closes the "Back" menu if you close the tab it's open in.
35. Firefox displays an "Open With…" pop-up dialog that isn't what it appears to be.
36. Firefox says you've used up a lot of memory when your browser has used up too much memory.
37. Firefox incorrectly reports that you can't open an open file.
38. Firefox randomly stops working if you open an image in Firefox.
39. Firefox loses its ability to open web pages that are hidden.
40. Firefox uses more CPU power than it should.
41. Firefox behaves incorrectly when you click a bookmark.
42. Firefox seems to make more noise when it's using memory than when it needs it.
43. Firefox could crash when running on a small screen or when the browser crashes.
44. Firefox never crashes when trying the "Close All" option in the "Exit" menu.
45. Firefox starts using more RAM than it needs.
46. Firefox runs slower when you're using memory that has been used up.
47. Firefox constantly tries to connect to your network as much as it can.
48. Firefox sends a cookie to the internet every time you open it, but doesn't send any more.
49. Firefox thinks you've changed the "location" of the site you're visiting, but in fact it's just a bookmark you're viewing.
50. Firefox pauses to let you type something.
51. Firefox gets stuck in a "memory leak" when you add the file you want.
52. Firefox lets you select multiple "extensions" in its preferences, but not all of them are installed.
53. Firefox automatically opens tabs with extensions installed but doesn't show them in the browser's window.
54. Firefox allows you to edit the way
Resolution: Won't Fix Affects Version/s: 1.13.1, 184.108.40.206 Beta, 1.11.1, 1.11.4, 1.13.0, 1.14.30 Hotfix, 1.16.1 Fix Version/s: None In Bedrock Edition, the Overworld-to-Nether distance ratio is supposed to be 8:1. However, there is a way to break this by importing old Console worlds sized Medium or smaller. There is a reproducible way to create nether portals in the Nether that are close to 1000 blocks apart, yet their corresponding portal distances in the Overworld will only be around 3000 blocks apart, not even close to the expected 8000 blocks. In other words, even though a world might have been imported to Bedrock Edition from Console Edition, it is still using the legacy Overworld/Nether distance ratios, instead of forcing an 8:1 Overworld/Nether distance ratio. Legacy Console Edition (from https://minecraft.gamepedia.com/World_size) World size Nether-Overworld ratio STEPS TO REPRODUCE THE PROBLEM Load the latest (and final) version of "Minecraft: Xbox One Edition". Create a New World (any seed will work; Creative mode; go to "More Options" and change world size to "Small"). Load the world, then save and exit. Load "Minecraft" v1.11.2 on Xbox One on the same Xbox account as used in Step 1. Select "Play", go to "Sync Old Worlds", and import the world created in Step 1. Once the world is imported, go to the options for the world and turn on co-ordinates. Load the world. Once in game, fly near the center of the world where the coordinates are close to 0, 64, 0 and create a nether portal. - In my specific case, I created one near 58,67,-18 (Overworld). Step through the portal to the Nether, and note the coordinates you come through at. In my case, 24,77,-14 (Nether). Fly hundreds of blocks in the Nether in one direction to avoid spawning a portal too close to your original one. - In my case I created a portal at 41,83,-271 (Nether). Step through the portal to the Overworld and note your coordinates.
This is where you can consistently see that, no matter how far you go in any direction, the 3:1 ratio is being used instead of 8:1. In my case I come back to the Overworld at 124,83,807. You can see the final coordinate is almost exactly 3 times the nether coordinate, not even close to 8 times like it should be.
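A quick sanity check of the reporter's numbers (a throwaway sketch; the helper name is made up):

```python
def overworld_from_nether(nether_coord, ratio):
    """Convert a nether coordinate to the corresponding overworld coordinate."""
    return nether_coord * ratio

nether_z = -271  # z coordinate of the second nether portal
# Expected distance with the Bedrock 8:1 ratio vs. the legacy small-world 3:1 ratio:
print(abs(overworld_from_nether(nether_z, 8)))  # 2168 blocks out
print(abs(overworld_from_nether(nether_z, 3)))  # 813 blocks out
# The reporter emerged at z = 807 in the Overworld, which matches the
# legacy 3:1 magnitude almost exactly and is nowhere near the 8:1 value.
```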
DFKI-LT - Dissertation Series, Vol. VI Thorsten Brants: Tagging and Parsing with Cascaded Markov Models - Automation of Corpus Annotation price: € 13 The methods presented in this thesis aim at automation of corpus annotation and processing of large corpora. Automation enables efficient generation of linguistically interpreted corpora, which on the one hand are a pre-requisite for theoretical linguistic investigations and the development of grammatical processing models. On the other hand, they are the basis for further development of corpus-based taggers and parsers and thereby take part in a bootstrapping process. The presented methods are based on Markov Models, which model spoken or written utterances as probabilistic sequences. For written language processing, part-of-speech tagging is probably their most prominent application, i.e., the assignment of morpho-syntactic categories to words. We show that the technique used for part-of-speech tagging can be shifted to higher levels of linguistic annotation. Markov Models are suitable for a broader class of labeling tasks and for the generation of hierarchical structures. While part-of-speech tagging assigns a category to each word, the presented method of tagging grammatical functions assigns a function to each word/tag pair. Going up in the hierarchy, Markov Models determine phrase categories for a given structural element. The technique is further extended to implement a shallow parsing model. Instead of a single word or a single symbol, each state of the proposed Markov Models emits context-free partial parse trees. Each layer of the resulting structure is represented by its own Markov Model, hence the name Cascaded Markov Models. The output of each layer of the cascades is a probability distribution over possible bracketings and labelings for that layer. This output forms a lattice and is passed as input to the next layer.
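The part-of-speech tagging setting described above can be illustrated with a toy bigram Markov tagger decoded with the Viterbi algorithm. The transition and emission probabilities below are invented for the example, not taken from the thesis:

```python
import math

# Toy bigram HMM for POS tagging: P(tag | prev_tag) and P(word | tag).
transitions = {
    ("<s>", "DET"): 0.6, ("<s>", "NOUN"): 0.4,
    ("DET", "NOUN"): 0.9, ("DET", "DET"): 0.1,
    ("NOUN", "NOUN"): 0.3, ("NOUN", "DET"): 0.7,
}
emissions = {
    ("DET", "the"): 0.8, ("DET", "dog"): 0.2,
    ("NOUN", "dog"): 0.7, ("NOUN", "the"): 0.3,
}
tags = ["DET", "NOUN"]

def viterbi(words):
    """Return the most probable tag sequence under the toy model."""
    # best[tag] = (log probability, best tag sequence ending in tag)
    best = {t: (math.log(transitions[("<s>", t)]) + math.log(emissions[(t, words[0])]), [t])
            for t in tags}
    for word in words[1:]:
        new_best = {}
        for t in tags:
            prob, seq = max(
                (best[p][0] + math.log(transitions[(p, t)]) + math.log(emissions[(t, word)]),
                 best[p][1] + [t])
                for p in tags)
            new_best[t] = (prob, seq)
        best = new_best
    return max(best.values())[1]

print(viterbi(["the", "dog"]))
```

In the cascaded setting, each layer runs a decoder of this kind, except that states emit partial parse trees and the resulting lattice of hypotheses is handed to the next layer instead of a single best sequence.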
After presenting the methods, we investigate two applications of Cascaded Markov Models: creation of resources in corpus annotation and partial parsing as pre-processing for other applications. During corpus annotation, an instance of the model and a human annotator interact. Cascaded Markov Models create the syntactic structure of a sentence layer by layer, so that the human annotator can follow and correct the automatic output if necessary. The result is very efficient corpus annotation. Additionally, we exploit a feature that is particular to probabilistic models. The existence of alternative assignments and their probabilities are important information about the reliability of automatic annotations. Unreliable assignments can be identified automatically and may trigger additional actions in order to achieve high accuracies. The second application uses Cascaded Markov Models without human supervision. The possibly ambiguous output of a lower layer is directly passed to the next layer. This type of processing is well suited for partial parsing (chunking), e.g., the recognition of noun phrases, prepositional phrases, and their constituents. Partial parsing delivers less information than deep parsing, but with much higher accuracy and speed. Both are important features for processing large corpora and for the use in applications like message extraction and information retrieval. We evaluate the proposed methods using German and English corpora, representing the domains of newspaper texts and transliterated spoken dialogues. In addition to standard measures like accuracy, precision, and recall, we present learning curves by using different amounts of training data, and take into account selected alternative assignments. For the tasks of part-of-speech tagging and chunking German and English corpora, our results (96.3% - 97.7% for tagging, 85% - 91% recall, 88% - 94% precision for chunking) are on a par with state-of-the-art results found in the literature.
For the tasks of assigning grammatical functions and phrase labels and the interactive annotation task, our results are the first published. The presented methods enabled the efficient annotation of the NEGRA corpus as their first practical application. Now, they are being successfully used for the annotation of several other corpora in different languages and domains, using different annotation schemes.
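The part-of-speech tagging described above can be illustrated with a plain first-order hidden Markov model decoded with the Viterbi algorithm. The sketch below is a toy, not the dissertation's Cascaded Markov Models: the tag set and all transition and emission probabilities are invented for the example.

```python
# Toy Markov-model POS tagger: Viterbi decoding over a hand-made HMM.
# Unknown words get a tiny smoothing probability (1e-6).

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Return the most probable tag sequence for `words`."""
    # Each cell holds (probability of best path ending here, that path).
    V = [{t: (start_p[t] * emit_p[t].get(words[0], 1e-6), [t]) for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][t] * emit_p[t].get(w, 1e-6),
                 V[-1][prev][1] + [t])
                for prev in tags)
            row[t] = (prob, path)
        V.append(row)
    return max(V[-1].values())[1]

tags = ["DET", "NOUN", "VERB"]
start_p = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans_p = {"DET":  {"DET": 0.05, "NOUN": 0.9, "VERB": 0.05},
           "NOUN": {"DET": 0.1,  "NOUN": 0.2, "VERB": 0.7},
           "VERB": {"DET": 0.5,  "NOUN": 0.4, "VERB": 0.1}}
emit_p = {"DET":  {"the": 0.9},
          "NOUN": {"dog": 0.4, "walks": 0.1},
          "VERB": {"walks": 0.5}}

print(viterbi(["the", "dog", "walks"], tags, start_p, trans_p, emit_p))
```

In the cascaded setting, a layer like this would emit not single tags but partial parse trees, and its probability lattice would feed the next layer's model.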
Hosts: Josh, Shamus, Campster. Episode edited by Issac. Trivia: My daughter Rachel went out and got herself a full-time job, and doesn’t have time to edit the show. So my son Issac took over. So audio may be a little wobbly for the next few weeks until he learns the ropes. We’ll see how it goes. 00:01:00 Microsoft and Windows 10 Store Shenanigans Here is Tim Sweeny, literally calling on people to fight the Microsoft initiative. If there’s one critique I have for Sweeny, it’s that I think bringing up the Microsoft antitrust lawsuit is a mistake in this context. That case was really complex and doesn’t come off as the scathing condemnation he intends it to. If you’re trying to persuade an undecided audience to join your cause, doing so under a controversial banner only limits your reach. If you’re arguing tax policy, don’t try to bolster your argument with arguments that require the reader agree with you on (say) abortion. You’re just going to thread-jack your own argument, and people will debate the controversial thing instead of examining the new thing you’re trying to get them to care about. For example, now that I’ve mentioned the antitrust suit, people are going to be tempted to bring it up here, so I need to head them off with this: This is not about the [lack of?] merit of the antitrust lawsuit and if you try to argue about it here you are the debate equivalent of a cat chasing a laser pointer. Let’s keep the conversation focused on Microsoft’s shenanigans in 2016. I think you can make a really good case against UWP on simple practical grounds as a consumer, and that doesn’t require agreement on the antitrust case, which was a debate that took thousands of different viewpoints and tried to shove them into simple boxes labeled “for” and “against”. Also, since we mentioned Jill of the Jungle, I have to link to this. I fell in love with this game back in the day, despite the fact that I don’t really enjoy 2D sidescrolling platformers. 
This game won me over with presentation and soundtrack. (Mostly soundtrack.) 00:22:00 Stardew Valley 00:29:30 Xcom 2 00:43:25 No Man’s Sky 00:51:30 Both EA and Activision are skipping E3 this year. “Skipping” in this case means no standalone booth; they'll likely have representation at the console booths, etc.
Sql Native Client Error Codes For errors that occur in the data source (returned by SQL Server), the SQL Server Native Client ODBC driver returns the native error number returned to it by SQL Server. The Microsoft ODBC Driver for SQL Server provides native connectivity from Windows to Microsoft SQL Server and Microsoft Azure SQL Database. The installer has encountered an unexpected error. In C you can test the return value from an ODBC function using the macro SQL_SUCCEEDED. In the error listings, %d and %s represent numbers and strings, respectively, that are substituted into the Message values when they are displayed. Solution: 1. Launch Beutility.exe located under X:\Program Files\Symantec\Backup Exec (where X: is the location of the Backup Exec installation; by default it is C:\). 2. Select Change Database Access from the drop-down menu. 5. Run the Symantec Backup Exec installation again. The native error of 5701 is generated by SQL Server. For more information about this error please refer to http://www.symantec.com/docs/TECH71380 Error 29552: Upgrade from Backup Exec for Windows Servers 10d. Error: … SQLSTATE: … Message: Cannot delete or update a parent row: a foreign key constraint fails. InnoDB reports this error when you try to delete a parent row that has child rows. Avoid using SNAC in new development work, and plan to modify applications that currently use it. Each of these errors includes an SQLSTATE that provides detailed information about the cause of a warning or error and a diagnostic message that includes a native error code (generated by the data source). Then, run the SQL Server Setup again.
In this case, SQLExecute would return SQL_SUCCESS_WITH_INFO and the driver would add a diagnostic indicating the cursor type had been changed. You should note that a few ODBC functions return a status Delete the subhive named MSSQL.# (where # is the number discovered from searching in step 5) 7. For example: if the name of the server on which you are installing Backup Exec is BKUPSRV then the name of the group that you need to create would be Class values other than "01," except for the class "IM," indicate an error and are accompanied by a return value of SQL_ERROR. These kinds of error messages are generated at different levels of the ODBC interface. The Client will sometimes suppress such warnings if they are expected; in all other cases, these warnings are displayed. Error: … SQLSTATE: … Message: Can't create database '%s' (errno: %d) Error: … SQLSTATE: … Message: Can't create database '%s'; database exists An attempt to create a database failed Microsoft (R) SQL Server Execute Package Utility Version 10.50.1600.1 for 64-bit Copyright (C) Microsoft Corporation 2010. Manual uninstall of 11d and 12.x: http://support.veritas.com/docs/287320 2.
Great care should be taken when making changes to a Windows registry. There are two ways of resolving: set the protection level to something different, or create an SQL Agent Proxy with your credentials and set up the job to use that when executing. For example, here are some message texts and error conditions. The following three examples of diagnostic messages can be generated using the Easysoft ODBC-ODBC Bridge to access Microsoft SQL Server. [Easysoft ODBC SQLSTATE values are strings that contain five characters. Can someone please assist to help me solve this problem? Error messages do not change often, but it is possible. End Error Error: 2014-08-14 12:10:22.24 Code: 0xC0047017 Source: Data Flow Task SSIS.Pipeline Description: component "OLE DB Destination" (15) failed validation and returned error code 0xC020801C. RetCode = SQL_ERROR, SQLState = 23000; native_error = 2601, error = [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot insert duplicate key row in object 'customer' with unique index 'customer_set'. Now, one part of your package (probably a connection manager) has properties called EncryptSensitive and ProtectionLevel (https://msdn.microsoft.com/en-us/library/ms141747.aspx) and by default this is set to your user account. For tables without an explicit primary key, InnoDB creates an implicit clustered index using the first columns of the table that are declared UNIQUE and NOT NULL.
Version: '%s' socket: '%s' port: %d Error: … SQLSTATE: … Message: %s: Normal shutdown Error: … SQLSTATE: … Message: %s: Got signal %d. Add the parent row first. This option will be removed in MySQL 5.6. The OOB alone was involved in this process. [Easysoft ODBC (Server)][Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified. Please change column '%s' to be NOT NULL or use another handler Error: … SQLSTATE: … Message: Can't load function '%s' Error: … SQLSTATE: … (17059) Message: Can't initialize function The first two characters indicate the class and the next three indicate the subclass. Solution: Setup might fail and roll back with the following error message: An installation package for the product Microsoft SQL Native Client cannot be found. If the error message refers to error −1, table creation probably failed because the table includes a column name that matched the name of an internal InnoDB table. Uncheck System Files, Executable files and Temporary Files groups. 5. Error codes are stable across GA releases of a given MySQL series. Error: … SQLSTATE: … Message: NO (used in the construction of other messages). This error occurs when there is a cryptographic error.
SQLSTATE (ODBC Error Codes) Warning: SQL Server Native Client (SNAC) is not supported beyond SQL Server 2012. There may be error messages posted before this with more information on why the AcquireConnection method call failed. Thus, when you re-run the transaction that was rolled back, it might have to wait for other transactions to complete, but typically the deadlock does not recur. "You may not be authorized to If a native error number does not have an ODBC error code to map to, the SQL Server Native Client ODBC driver returns SQLSTATE 42000 ("syntax error or access violation"). Also if the database administrator changes the language setting, that affects the language of error messages. Click to clear the Compress contents to save disk space check box. 4. Look through all subhives named MSSQL.# for an entry including BKUPEXEC as the (Default) key in the subhive or expand to HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL.#\MSSQLServer\SuperSocketNetLib\Np within the PipeName string value. (where #
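The SQLSTATE convention described above (five characters; the first two are the class, the next three the subclass; class "01" indicates a warning) can be sketched as a small helper. This is an illustrative snippet; the function name and return layout are my own, not part of any ODBC API.

```python
# Split an ODBC SQLSTATE into its class and subclass, per the convention
# described in the text: 5 characters, class = first two, subclass = last
# three; class "01" is a warning, other classes (besides "IM") are errors.

def parse_sqlstate(sqlstate):
    if len(sqlstate) != 5:
        raise ValueError("SQLSTATE must be exactly five characters")
    cls, subclass = sqlstate[:2], sqlstate[2:]
    kind = "warning" if cls == "01" else "error"
    return {"class": cls, "subclass": subclass, "kind": kind}

print(parse_sqlstate("42000"))  # syntax error or access violation
print(parse_sqlstate("01004"))  # a class-"01" warning
```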
git submodule foreach git checkout main
git submodule foreach git add --all
git submodule foreach git diff-index --quiet HEAD || git commit -m "%CommitMessage%"
git submodule foreach git push
This runs command 1 for all submodules, then command 2 for all submodules, etc. I would like to only have one foreach, and do all of the commands for a submodule at once, then move on to the next submodule. Is there a way to have the git submodule foreach call a method, or in some way call multiple commands at once? I want to do this within a batch script on Windows. To complement larsks' helpful Unix solution (which would also work in Unix-like environments on Windows, such as Git Bash and WSL) with a solution for Windows, from a batch file (see the next section for PowerShell): As in other cases where git supports passing shell commands, these are evaluated by Git Bash, i.e., the Bash implementation that comes bundled with git on Windows. As such, you must use Bash (POSIX-compatible) shell syntax even on Windows. git submodule foreach defines (environment) variables to provide information about the submodule at hand, which you may reference in your shell command, such as $name and $toplevel; as noted, due to having to use Bash syntax in your shell command, you need to reference these variables as shown (e.g. as $name instead of batch-file style %name%); see git help submodule for details. git submodule foreach "git checkout main && git add --all && git diff-index --quiet HEAD || git commit -m \"%CommitMessage%\" && git push" Note the "..." quoting on a single line (cmd.exe / batch files don't support multiline quoted strings, and programs only expect ", not also ', to have syntactic function on their process command lines). The %CommitMessage% batch-style variable reference is expanded up front, by cmd.exe; \"...\" is used to properly escape the embedded "..." around the expanded result.
Caveat: If the value of %CommitMessage% contains cmd.exe metacharacters such as &, the command will break, because cmd.exe sees these as unquoted, due to not understanding that the surrounding \" are escaped double quotes; as you report, set "CommitMessage=%info1% | %info2% | %info3%" caused a problem in your case, and there are two solution options: Either: If feasible, manually ^-escape the metacharacters; e.g.: set "CommitMessage=%info1% ^| %info2% ^| %info3%" Or, as you have done, use delayed variable expansion, which bypasses the problem (but can result in literal ! characters getting eliminated): place setlocal enableDelayedExpansion at the top of your batch file, and use !...! instead of %...% to refer to your variable, i.e. !CommitMessage!. Chain the commands with &&, so that subsequent commands only execute if the previous ones succeeded. The PowerShell perspective (on both Windows and Unix-like platforms): PowerShell has flexible string literals, including support for multiline literals. The here-string variant used below helps readability and obviates the need for escaping embedded quotes. # NOTE: In Windows PowerShell and PowerShell (Core) 7.2-, # you must manually \-escape the embedded " chars. # (... -m \"$CommitMessage\") # $CommitMessage is expanded *by PowerShell*, up front. git submodule foreach @" git checkout main && git add --all && git diff-index --quiet HEAD || git commit -m "$CommitMessage" && git push "@ As noted in the code comments, Windows PowerShell and PowerShell (Core) versions up to v7.2.x unfortunately require embedded " chars. to be explicitly \-escaped when passing arguments to external programs such as git, which is fortunately no longer needed in PowerShell (Core) 7.3+. Because PowerShell too uses the sigil $ for variable references, you must escape $ characters you want to preserve as such, as part of the (POSIX-compatible) shell command to be executed by git; e.g., in order to pass verbatim $name through in order to refer to the submodule name, use `$name. However, this is only necessary if "..."
quoting is used, i.e. an expandable string, which in turn is only necessary if you need PowerShell's string interpolation (expansion), such as to embed the value of the PowerShell variable $CommitMessage as shown above. If string interpolation isn't needed, use '...' quoting, i.e. a verbatim string ('...'), in which case pass-through $ chars. need no escaping. larsks' Unix solution can be used as-is in PowerShell (Core) 7.3+, but only from Unix-like environments (including WSL, if you have PowerShell (Core) installed there (too)), given that the standard Unix shell, /bin/sh, is explicitly called. This spawns a separate sh process per submodule, but is convenient, because the -e option (to abort when a command fails) allows specifying the commands individually, without having to chain them with &&.
convert db to dbm of a wireless device I bought a replacement wireless device. I want to compare its stats with the old one. I'm not very familiar with measuring units. Old device showed 60dbm (I think negative value) as signal strength. New device shows 25db. Can I somehow compare them? Perhaps using the noise metric of the old device? IIRC noise was 92 or 97. My gut feeling tells me there should be a way because db I imagine should be a signal/noise ratio so I should be able to compare the two devices. Anybody more knowledgeable? Google is your friend. The third link down for me seems pretty useful. In summary: dBm is used as an absolute unit (in reference to 1mW) while dB is relational between two power values. Hopefully someone who knows hardware more can enlighten you as to which power values apply. @akostadinov - You can't compare the two values (two entirely different kinds of measurements) unless you have additional information. I've looked at google already. I am thinking that knowing signal dbm and noise dbm I can compare with db. I'm not sure though and I'm not sure what the equation would be. So I hope somebody can tell here for sure. I think I got it. If we consider that the db value is a signal/noise ratio and we have to compare that with signal and noise expressed in dbm then we can calculate: SIGNALdbm / NOISEdbm = Xdb. Since dbm is some kind of logarithm and we have a common base, then we just need to subtract the absolute values. In this case I think this is what needs to be done: SIGNAL - NOISE = Xdb. In my case I compare (92 - 60) with 25, so it seems my old device had 32db and the new one only 25. dB is a relative unit that represents a ratio. dBm is an absolute unit that is referenced to 1mW. 1 watt = 1000 mW, and 10·log10(1000) = 30, so 1 W = 30 dBm (not 30 dB). So you can't convert dB to dBm.
More about converting decibels here: http://www.rapidtables.com/electric/dBm.htm You can compare a relative unit with absolute units if you know which absolute units produce the relation and the exact equation. Do you know which absolute units produce the relative dB stat I see, and what the exact way to calculate it is? That way I can produce the dB value for the device by dividing absolute units and compare it with the other one.
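The relationships above can be sketched in a few lines of Python. The function names are illustrative only, not part of any device API; the key point is that dBm is a log of power relative to 1 mW, so an SNR in dB is the difference of signal and noise levels in dBm.

```python
import math

def mw_to_dbm(mw):
    # Absolute level: power relative to 1 mW, on a log scale.
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

def snr_db(signal_dbm, noise_dbm):
    # Subtracting logs is dividing powers: a ratio, hence plain dB.
    return signal_dbm - noise_dbm

print(mw_to_dbm(1000))   # 1 W -> 30 dBm, as in the answer above
print(snr_db(-60, -92))  # the old device: 32 dB
```

This reproduces the asker's arithmetic: (-60) - (-92) = 32 dB for the old device versus 25 dB for the new one.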
The results are in! Eclipse.org recently published their 2019 IoT Developer Survey. Ubuntu is again the top choice for embedded & IoT, with our cousins Raspbian and Debian taking 2nd and 3rd respectively. The numbers fall off pretty steeply after that. 😉 For those who create embedded products or solutions, the message couldn’t be more clear. If you’re looking for an embedded Linux OS with long-term maintenance, hardware certification, and commercial support, Ubuntu is your choice. Even if you haven’t deployed Ubuntu yet, there’s a good chance your developers are using it already, for prototyping or side-projects. When you give them what they want, you not only gain the aforementioned benefits, you’ll make your developers happy, which has a whole list of possible upsides. Shorter timelines, cooler products, and increased developer referrals to name a few. This latest validation of Ubuntu’s excellence is certainly something to celebrate, but some of you may be wondering: why is Ubuntu so popular? Is it the clever release names? The beautiful UX? The witty, charming, life-of-the-party product managers? Apart from that last one (wishful thinking), those things do contribute to our success. But in embedded and IoT there are other factors that play a larger role. Turns out, many of them can be found right in the very same survey we were just talking about! I’ll highlight a few of my favorites here.* First, let’s look at the list of top developer concerns on page 7. The number one concern is security, as it should be. In honor of this, I want to give a shout-out to our security team, because they’re amazing! You don’t have to take my word for it, their track record speaks for itself. This is a good time to point out, the Eclipse survey is for embedded and IoT device operating systems only. Separately, Ubuntu is also the #1 choice for developers on their laptops and desktops, and the #1 choice in the cloud as well. 
This is not the first time we’ve come out on top, just the most recent. One of the main things driving our success is a borderline-obsessive security focus. It permeates our company culture, but is particularly embodied by the security team. These folks work very hard to deliver fixes for critical issues in the shortest time possible, and it shows in the stability and reliability of our operating systems. If you’d like to know more about our security team and the work they do, you should follow them on twitter, and check out a podcast or two. Before we move on, let’s spend a moment on the other 2 top concerns, connectivity and analytics. Another major reason people choose Ubuntu is our package ecosystem. We all know the latest and greatest apps, toolkits, and frameworks run in Ubuntu first, and sometimes only. This was a common refrain from the developers I spoke to at Hannover Messe a few weeks ago. Now let’s jump to the slide on page 16: Non Linux OSes. The main observation here is, barring a few small outliers, non-Linux usage continues to decline. This is unsurprising as Linux continues to mature and be, well, free. 🙂 One particularly interesting stat is the steep drop in “No OS / Bare metal” year over year. It may be tempting to attribute this to Moore’s Law in a generic fashion. But I think there’s a more specific reason, namely our friends over at Raspberry Pi, and their continued success in delivering a phenomenal family of products at unheard-of prices! Why mess around with assembly when you can have a full stack on your rPi (running Ubuntu, natch) for a couple bucks? Another thing that struck me was slide 19, the split between Intel/x86 and ARM for industrial gateways. The astute reader will note that the numbers add up to over 100%! This is no error, it simply means that many people who took the survey are using both architectures.
This is another area where Ubuntu shines, especially when using snaps, which take the pain out of multi-arch quite effectively. Snap-enabled developers often prefer to get started prototyping directly on their laptop, which is also running snapd, knowing they can easily switch to an embedded hardware platform any time. Last slide for today is 25, the list of top programming languages for IoT. I count 6 different languages there, enough to give any engineering leader headaches! Unless of course, your shop is running Ubuntu, and your developers can easily manage their own tools via freely available snaps or debs of the very latest of everything, for those 6 languages and dozens more. I want to close back at the beginning, up on page 6, where they highlight IoT adoption. Fully two-thirds of respondents have either deployed IoT solutions or are actively planning to do so. That’s a healthy majority, and speaks to how important embedded, IoT and edge computing have become. IoT has only begun the process of transforming the computing landscape. As adoption continues to grow, it will effect dramatic changes across IT, OT, business strategy and planning. Change can be stressful. We’re here to help you make sense of everything, by providing the same solid foundation you’ve come to depend on elsewhere. That’s all for now! If you want to keep learning about Ubuntu in the embedded space, check out my previous blog post, the path to Ubuntu Core. * page numbers reference the actual pages of the PDF, shown lower-right, not the numbers some slides have top-center Bring an IoT device to market fast. Focus on your apps, we handle the rest. Canonical offers hardware bring up, app integration, knowledge transfer and engineering support to get your first device to market. App store and security updates guaranteed.
I'm trying to use the Zonal Statistics as Table tool (Spatial Analyst) to summarize the stats of a number of rasters within the feature 'zones' of watershed polygons. I keep getting this error: ERROR 010160: Unable to open raster t_t_t2\t_t_t2. Zonal statistics program failed I looked up the documentation on the error and here's what the Help has to say about it: "Description The grid could not be successfully accessed. This may be due to incorrectly specifying the paths or not having permission to access the data folders. If the path and permissions are okay, the next possible source of the problem may be the result of missing required component files in the grid's folder. A valid grid must have, at minimum, dblbnd.adf, hdr.adf, and sta.adf files, as well as at least one pair of files in the format w00n00n.adf and w00n00nx.adf (where n is typically 1 but can be more for multitiled grids). If any of these files is missing from the grid folder, the grid is considered invalid. Note that for integer grids, a vat.adf file is also usually present but not necessary. Consult the documentation for more information about the ESRI Grid format. If all the required components are present, it is also possible that the files that contain the binary raster data (the w00n and w00nx files) are internally corrupted. Solution Check that you have correctly identified the dataset and that you do have read permission. If this has been done and the problem continues, you should then determine whether the grid is a valid one. First try displaying it. If you cannot display it, check to see that all the required component files are present in the grid's folder. If there is no sta.adf file in the grid directory, try creating one with the Calculate Statistics tool. If its data files seem to have been corrupted, you may need to re-create the grid. Hopefully you have a backup copy that can be used or can run the process that created it again."
The rasters that are supposedly not accessible are FGDBR, continuous type, 16 bit signed pixel. Working through the list of troubleshooting diagnostics, I eliminated the following: They all have attribute tables, and I exported them from the .gdb to check to see that they all have the required component files in the grid folders, which they do. They all have read permission and they all display fine. They all have .sta files. Arc Help's next helpful tip is that 'maybe they're corrupt.' But how do I tell if they're corrupt if all of the above attributes are working fine? One thing I thought of was this: They are all mosaics made from other rasters (note they are NOT mosaic datasets, I used the tool Mosaic to New Raster so they are all actual rasters as far as I know) and I am wondering if something went wrong in the mosaicking process and now they are buggy. I just need a diagnostic to figure out if the files are corrupt and why the Zonal Stats tool can't access them. One other weird thing is that two of the mosaicked raster files DID work OK with the Zonal Stats tool (while the other 16 didn't) for some totally unknown reason. I went through and systematically examined the properties of these two outlier files as compared with the ones that are misbehaving, and they all have the same properties so I'm not sure why Arc *can* access them while it *can't* access their brethren.
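One low-tech way to rule out the "missing component files" case from the Help text is to script the file check for each exported grid folder. This is only a file-presence check based on the list quoted above; it cannot detect internal corruption of the w00n/w00nx data files, and the function name is mine.

```python
import os
import re

# Required ESRI Grid components per the error documentation quoted above:
# dblbnd.adf, hdr.adf, sta.adf, plus at least one w00n00n.adf /
# w00n00nx.adf tile pair (e.g. w001001.adf and w001001x.adf).

REQUIRED = {"dblbnd.adf", "hdr.adf", "sta.adf"}
TILE = re.compile(r"^w\d{6}\.adf$")

def missing_grid_components(grid_dir):
    """Return a list of missing components; an empty list means the
    folder at least looks structurally complete."""
    names = {n.lower() for n in os.listdir(grid_dir)}
    problems = sorted(REQUIRED - names)
    tiles = {n for n in names if TILE.match(n)}
    # Each tile w00n00n.adf should have a matching w00n00nx.adf.
    if not any(t[:-4] + "x.adf" in names for t in tiles):
        problems.append("w00n00n.adf/w00n00nx.adf tile pair")
    return problems
```

Running this over the grids exported from the .gdb would confirm whether the two "outlier" rasters that do work differ in any component file from the sixteen that fail.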
The new ROBOTILL Branch Module allows you to manage multiple branches over the Internet, anywhere in the world, from a central location. You can update branches with products and prices and also transfer stock between branches. ROBOTILL always supported any number of tills (POS Computers) and back office computers on a local network using a powerful and robust database (Microsoft SQL Server). You also had the option of using a cloud server for your Point of Sale System if a local network was not an option for you. It was however limited to one 'shop'. Now, you can manage multiple shops (branches) over the Internet. Each branch will still have its own database with a local network (or cloud server) to connect the various tills and back office computers at the branch. The Branch Module license, like our Point of Sale licenses, costs less than what most other POS companies charge for support! (ROBOTILL support is free). You can view prices here. Updating Branches Remotely The ROBOTILL Branch Module can remotely update branches so that all your branches will have the same products, till slip design and more. The Branch Module has the flexibility to allow each branch to still have products, categories, departments, etc. that are unique to that branch. At your head office, you simply do the till slip design and setup your products using ROBOTILL Manager as you would usually do. You can then 'roll out' these changes to all or selected branches using the Branch Module. An internet connection is required, but the system is designed to work even if the internet connection goes down. The update will simply resume when the connection is back up. Transfer Stock Between Branches You can also remotely transfer stock between branches. In the Branch Module you simply select the 'Dispatch' branch and the 'Receiving' branch. You select the products and quantities you want to transfer. From the Branch Module you will be able to see the details of the transfer as well as the progress.
Each branch will receive a 'Dispatch' or 'Receive' instruction that they can process once the stock is actually dispatched or received. This is done with the ROBOTILL Manager application at each branch. Once they process the instruction they will be able to enter the actual amount of stock dispatched or received (to cater for the stock that 'fell off the truck'). A confirmation will automatically be sent to the head office. At the head office (in the Branch Module) the dates and times as well as actual quantities dispatched or received will be updated. The Branch Module will remotely and automatically receive branch reports. At this stage it is only the 'Stock on Hand' and 'Sales Per Day' reports that are received remotely. More reports will be added during the next couple of weeks. The daily sales per branch are also displayed in a chart in the main screen of the Branch Module. Unfortunately the Branch Module is not available in the Free Point of Sale System of ROBOTILL. For more information you can have a look at the Branch Module help section in our online help: You can also contact us if you have any questions.
This blog post is part 3 in a three-part series. It focuses on some interesting low-level challenges we faced along the way, as well as some surprises we found during the migration. - To read about the design and planning phase, check out part 1. - To read about how we executed the actual migration and our results, check out part 2. The challenge with taking GitLab.com offline One key part of our migration process was to take all systems offline that could potentially talk to the database. This may seem as simple as "shutting down the servers" but given the scale and complexity of GitLab.com's infrastructure this proved to be really quite complex. Here is just a subset of the different things we had to shut down: - Kubernetes pods corresponding to web, API, and Sidekiq services - Cron jobs across various VMs Surprises along the way Even though we had rehearsed the migration many times in staging, there were still some things that caught us off-guard in production. Luckily, we had allocated sufficient buffer time during the migration to resolve all of these during the call: - Autovacuum on our largest CI tables takes a long time and can run at any time. This delayed our migration as we needed to gain table locks on these tables to add triggers. Adding these triggers requires a ShareRowExclusiveLock, which cannot be acquired while autovacuum is running for that table. We disabled some manual vacuum processes we were aware of ahead of the call but autovacuum can happen at any time and our ci_builds table just happened to have an autovacuum running at the time we were trying to block writes to this table. To work around this we needed to temporarily disable autovacuum for the relevant tables and then find the pid for the autovacuum process and terminate it, which allowed our triggers to be successfully added. - Sometimes a long-running SSH session by an SRE or developer can leave open a surprising database connection that needs to be tracked down and closed.
- Cron jobs can run on various hosts and start Rails processes or database connections at any time. We had many that were created over the years for different database maintenance purposes, and we missed at least one in our practice runs. They weren't easy to detect on staging, as they may not all be configured there, or they run a lot faster on staging. Also, our staging runs all happened on weekdays, but our production migration happened on a weekend, when we were deliberately running some database maintenance workloads during low-utilization hours.
- Our Sentry client-side error tracking caused us to overload our Sentry server due to the many users leaving open GitLab browser tabs. As the browser tabs periodically make asynchronous requests to GitLab and get errors (since GitLab.com was down), they then send all these errors to Sentry, and this overloaded our Sentry server to the point where we couldn't load it to check for errors. This was quickly diagnosed based on the URL all the requests were sent to, but it did delay our migration, as checking for new errors was key to determining success or failure of the migration.
Cascading replication doubles latency (triples in our case)
A key initial step in our phased rollout was to move all read-only CI traffic to dedicated CI replicas. These were cascading replicas of the main Patroni cluster. Furthermore, we made the decision to create the standby cluster leader as a replica of another replica in the Main Patroni cluster. Ultimately this meant the replication path for our CI replicas was Main Primary -> Main Replica -> CI Standby Leader -> CI Replica. This change meant that our CI replicas had roughly three times as much replication latency as our Main replicas, which previously served CI read-only traffic.
Since our read-only load-balancing logic is based on users sticking to the primary until a replica catches up with the last write they performed, users might end up sticking to the primary longer than they previously would have. This may have increased the load on the primary database after rolling out Phase 3. We never measured this impact, but in hindsight it is something we should have factored in and benchmarked during our gradual rollout of Phase 3. Additionally, we should have considered mitigating this issue by having the Standby Leader replicate straight from the Main Primary, or by adding the Standby Leader to the pool of replicas that could serve CI read-only traffic.
Re-balancing PGBouncer connections incrementally without saturating anything
Phase 4 of our rollout turned out to be one of the trickiest parts of the migration. Since we wanted all phases (where possible) to be rolled out incrementally, we needed some way to incrementally re-balance the connection pool limits along the path GitLab -> PGBouncer -> Postgres without exceeding the total connection limit of Postgres, or opening so many connections to Postgres that we saturated its CPU. This was difficult because all the connection limits were very well tuned, and we were close to saturation across all of them. The gradual rollout of traffic for Phase 4 amounted to routing a percentage X of CI traffic through the new CI PGBouncer pool, and we wanted to gradually increase X from 0 to 100. But this presented a problem, because the number of connections to the Postgres Main database changes with this number. Assume it has some initial limit of K connections, and assume this limit is deliberately just high enough to handle the current Main PGBouncer pool without overloading the CPU. We need to carefully tune the pool_size values across the separate PGBouncer processes to avoid exceeding the limit K, and we also need to avoid saturating the Postgres server CPU with too much traffic. At the same time, we need to ensure there are enough connections to handle the traffic to both PGBouncer pools.
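As a back-of-the-envelope illustration of this constraint (the numbers and function name are made up, not GitLab's real limits), the problem reduces to splitting a fixed Postgres connection budget K between the two PGBouncer pools as the CI traffic fraction grows:

```python
def split_pool_sizes(total_limit: int, ci_fraction: float, headroom: int = 10) -> tuple[int, int]:
    """Split a fixed Postgres connection budget between the Main and CI
    PGBouncer pools, keeping some headroom below the hard limit.

    ci_fraction is the share of traffic (0.0 to 1.0) routed to the CI pool.
    """
    budget = total_limit - headroom   # never configure right at the limit
    ci_pool = round(budget * ci_fraction)
    main_pool = budget - ci_pool      # the two pools together never exceed the budget
    return main_pool, ci_pool

# Walking the rollout in small steps, the combined pool size stays constant:
for pct in (0, 10, 25, 50, 100):
    main, ci = split_pool_sizes(total_limit=500, ci_fraction=pct / 100)
    assert main + ci == 490
```

The invariant is that the sum of the two pool_size values stays under K at every step; the hard part in practice, as described next, was knowing how much of the budget each pool actually needed.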
We addressed this issue by taking very small steps during low-utilization hours (when CPU and connection pools weren't near saturation) and doing very detailed analysis after each step. We would then wait a day or so and, based on the number of connections used by the smaller step, figure out how many connections to move over in the following steps. We also used what data we had early on from table-based metrics to get an insight into how many connections we thought we'd need to move to the CI PGBouncer pool. In the end, we did need to make small adjustments to our estimates along the way as we saw saturation occur, but there were never any major user-facing saturation incidents, as the steps were small enough. We're very happy with the results of this project overall. A key unknown in this project, which was hard to predict, was how the complexity of an additional database might impact developer productivity. Developers can't do certain types of joins, and there is more information to be aware of. However, many months have now passed, and it seems clear that the complexity is mostly abstracted away by the Rails models. With a continued large number of developers contributing, we have seen little to no impact on productivity. Combining this success with the huge scalability headroom we've gained, we believe this was a great decision for GitLab. This blog series contains many links to our early design, planning, and implementation of various parts of this project. GitLab's transparency value means you can read all the details and get a sense of what it's like to work on projects like this at GitLab. If you'd like to know more, or something was unclear, please leave a comment so we can make sure we share all our learnings. "The final part in our series on decomposing the GitLab backend database examines the challenges and surprises our team encountered." – Dylan Griffith
Linux provides a set of command-line tools for managing users, groups, and passwords. Note: for these commands to work you must have root privileges.

Accounts are created with useradd, which covers creating users, assigning them to groups, and setting expiry dates. If the USERGROUPS_ENAB variable is set to yes (or -U/--user-group is specified on the command line), a group with the same name as the user is created as well. When useradd fails, it reports errors such as "couldn't update the password file", "the syntax of the command was invalid", or "couldn't update SELinux user mapping".

Passwords themselves are managed with passwd: both Linux and UNIX use the passwd command to set or change a user account's password. To force a password change on next login, use passwd -e; this is common practice when setting up a new user's account. For batches of accounts, chpasswd reads user:password pairs and resets many passwords at once. You can also work with the encrypted form directly: generate a password hash (for example with md5pass) and place the encrypted string in the shadow file.

For managing personal passwords, as opposed to accounts, there are GUI password managers such as KeePass(X) and command-line ones such as pass, a simple utility for password management. pass is in fact a shell-script front end that uses several other tools (gpg, pwgen, git, xsel) to manage password information using OpenPGP. As an aside, the stock OpenSSH client is deliberately not compiled with a command-line option for password input; key-based authentication achieves the same non-interactive effect.
Creating a new user is a one-line command; the tricky part is setting the password without a prompt (the same question comes up on AIX and other UNIX systems). Be careful when typing a password directly on the command line: if you have set PROMPT_COMMAND='history -a', the command containing your password is immediately written to your shell history. Where supported, passwd --stdin accepts the password from standard input instead of prompting, and chpasswd does the same for whole batches of user:password pairs. Related account-management commands include groupadd for creating groups, userdel for deleting local accounts (with -r to also remove the home directory), and chage for password expiration and aging, e.g. chage -E 2005-12-31 user1 to set a deadline for the account. The same command-line approach extends to services: a system administrator can change a MySQL user's password with the mysqladmin command directly from a shell, and a WordPress user's password can be reset from the MySQL command prompt with an UPDATE against the wp_users table.
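The WordPress reset mentioned here boils down to a single UPDATE against the users table. The wp_ table prefix and user ID 1 (the first admin account) are assumptions; MD5 works because WordPress accepts its legacy hash format and rehashes the password on the next login:

```sql
UPDATE wp_users
SET user_pass = MD5('NewStr0ngPass')
WHERE ID = 1
LIMIT 1;
```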
To set a Linux user's password with a single command line, pipe the new password to passwd twice (once for the confirmation prompt); the user name nishith here is just an example:

echo -e "Hello\nHello" | passwd nishith

This makes it easy to create a user and set its initial password remotely in one step. To change your own password, simply run passwd; to change another user's password you need the appropriate privilege, e.g. sudo passwd USERNAME.
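The non-interactive approaches above can be collected into one short sketch. The user name alice is hypothetical, and the lines that modify accounts need root, so they are commented out; only the hash generation runs unprivileged:

```shell
# Batch-friendly: chpasswd reads "user:password" pairs from stdin (root only)
# echo 'alice:NewPass123' | chpasswd

# Generate a SHA-512 crypt hash suitable for /etc/shadow
hash=$(openssl passwd -6 'NewPass123')
echo "$hash"

# Apply the pre-computed hash directly, avoiding any prompt (root only)
# usermod -p "$hash" alice
```

Passing a pre-computed hash with usermod -p avoids ever putting the cleartext password into a prompt, though it still lands in the shell history unless you take the precautions described above.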
To change a password on behalf of another user, first sign on or su to the root account, then run passwd for that user; when the password is accepted, you are returned to the command prompt. The chage command complements passwd by managing password expiration and aging: it can list a user's password-aging information, set the date of the last password change, and set account expiry dates, all of which correspond to fields in /etc/shadow. passwd itself can also lock (passwd -l) and unlock (passwd -u) an account, which is handy because it is not unusual for users to forget their passwords, leaving it to the administrator to reset them. The same command-line approach works for service accounts as well, for instance setting the password for the MySQL root user on Linux. Similar questions come up on other platforms too, such as setting "password never expires" for local Windows accounts across a list of servers.
iOS background color problem in version 1.8.6
The iOS background color setting added in version 1.8.6 seems to prevent a full-screen webview from covering the background behind the status bar. (The screenshots compare 1.8.4 and 1.8.6.) Test page: https://riveronly.github.io/pure_divination/
@riveronly Thanks for your feedback! Could you please provide more details about the problem? Which picture represents the expected behavior? Additionally, it would be helpful if you could provide the test code.
This picture shows the expected behavior: the status bar is immersive, with no extra white background color added. For test code, you can base it on your sample code and just change the URL to https://riveronly.github.io/pure_divination/. You also need to enable enableEdgeToEdge.
In short: no white backgroundColor is the expected behavior. You need to use enableEdgeToEdge() and this code:
@Composable
internal fun App() {
    val initialUrl = "https://riveronly.github.io/pure_divination/"
    val navigator = rememberWebViewNavigator()
    val state = rememberWebViewState(url = initialUrl)
    LaunchedEffect(Unit) {
        state.webSettings.apply {
            logSeverity = KLogSeverity.Debug
            customUserAgentString = "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1) AppleWebKit/625.20 (KHTML, like Gecko) Version/14.3.43 Safari/625.20"
        }
    }
    Column {
        val loadingState = state.loadingState
        if (loadingState is LoadingState.Loading) {
            LinearProgressIndicator(
                progress = { loadingState.progress },
                modifier = Modifier.fillMaxWidth(),
            )
        }
        WebView(
            state = state,
            modifier = Modifier.fillMaxSize().border(5.dp, Color.Red, RectangleShape),
            navigator = navigator,
        )
    }
}
Also, the scrolling effect is different. In 1.8.4, the view slides over the background.
In 1.8.6 and later, the entire background slides along with the content.
@msasikanth It seems these problems were introduced by this PR. Could you have a look at it?
This is in the latest version of the library. It seems to be working as intended?
If the webview is full screen, it should extend below the status bar. Instead, it avoids the status bar, which is not expected. The picture you posted seems to mask the issue because of the dark background and dark theme, but the second picture shows it.
@KevinnZou the issue seems to be the WebView not going edge to edge? The background color does go below the status bar. So maybe a safe-area issue for the webview on iOS?
In fact, there is also a scrolling problem: versions 1.8.4 and 1.8.6 show completely different scrolling behavior. In the latest version, you can scroll the entire background color, which is strange. You can run my simple project and change the version dependency to observe the difference: https://github.com/riveronly/river.git
@msasikanth Yes, it seems to be a safe-area issue. However, according to @riveronly's description, this issue only appeared after version 1.8.6, so it should be caused by the changes in that version. The main changes are below:
setOpaque(false)
val composeBackgroundColor = state.webSettings.backgroundColor
val backgroundColor = UIColor(
    red = composeBackgroundColor.red.toDouble(),
    green = composeBackgroundColor.green.toDouble(),
    blue = composeBackgroundColor.blue.toDouble(),
    alpha = composeBackgroundColor.alpha.toDouble(),
)
setBackgroundColor(backgroundColor)
scrollView.setBackgroundColor(backgroundColor)
Is it because we also set the background color on the scrollView?
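If the scrollView background is indeed the culprit, one hypothetical mitigation (a sketch only, not a confirmed fix for the library) would be to keep the webview and its scrollView transparent unless the user explicitly sets a color, so edge-to-edge content can show through the status bar area:

```kotlin
// Sketch: only propagate an explicit background color; otherwise stay transparent.
// The WKWebView receiver and state.webSettings follow the snippet quoted above;
// Color.Unspecified is Compose's "no color set" sentinel, used here as an assumption
// about how "unset" could be represented.
setOpaque(false)
val composeColor = state.webSettings.backgroundColor
if (composeColor == Color.Unspecified) {
    setBackgroundColor(UIColor.clearColor)
    scrollView.setBackgroundColor(UIColor.clearColor)
} else {
    val uiColor = UIColor(
        red = composeColor.red.toDouble(),
        green = composeColor.green.toDouble(),
        blue = composeColor.blue.toDouble(),
        alpha = composeColor.alpha.toDouble(),
    )
    setBackgroundColor(uiColor)
    scrollView.setBackgroundColor(uiColor)
}
```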
How can I use Power BI with Power Apps and Power Automate? The concept of Power Automate, as it relates to managing information in a digital asset drawn from multiple data sources, can prove serious. For example, you may find a list of files that make your documents useful to an organization, or you may find a document that is being rented out. The ability of Power tools to manage a collection of objects in digital assets is becoming increasingly popular. Data sources have a value, which means that upon creation of a new data source, Power Apps can assist in managing its data. If one considers the most commonly used sources (see Table 3.2), this could mean that the list of objects becomes empty when they are created, which has the effect that destroying the list of objects can greatly increase the risk of data destruction. However, that solution provides many different ways for users to use Power tools to extract data from data sources into data feeds. Where information is inserted into a specific context, a data source might represent many different categories. In the case of data in digital assets (items on a web page or in a database) this is not always true, especially for devices with very small data volumes, such as web pages in computer-science labs. Power applications often store the items in a rich database, making the data available for user access. For example, the Microsoft Office 365 data from NASA is on NASA's list of documents, which contain the data in terms of paper pages. A user may want to automate the conversion of these pages into HTML. As with all tools for managing a resource, there are many different types, and how the tools are used matters before you even start. The most commonly used tools are provided by IFS, for example, Joomla, and the WebSafari software.
In this work, we have seen some examples of tools which provide facilities for managing these assets. How can I use Power BI with Power Apps and Power Automate? You will wonder how you can safely invoke this and why. Essentially, if you don't want to perform the task in place at all, write a different program that calls the Power BI process using the Power BI command line. For each new connection you have to create an executable file for each of the connected devices. If you are new to Power BI, follow these steps for creating your new connected devices (or use the Create a New Connected Device tool). Once that is done, you can click Create a new device on the left of the Power BI screen. To create a new device image, open a new window as shown in the picture below. In the new window, create a new Image as shown in the picture. Open a Power BI script file with the PBE (Power BI Object API) startup. To create a new PBE image, use the PBE command line (the PBE CLI is the CLI for Power BI script files). Note that all these tools come as PowerShell tools, even the Power BI toolbox, which is only compatible with Power BI when using PowerShell and the PowerShell CLI. The Power BI script is easy to use and runs in the Power BI tools, and the PBE process should not be ignored. It is the most reliable way to perform the Power BI execution if the processor is on your home network (laptop, workgroup, server, or web application). You can also create an empty folder with a PBE executable or a Power BI application which needs to be executed using Power BI commands. How can I use Power BI with Power Apps and Power Automate? (IMHO, with an explanation.)
I found that Power BI is a new method to convert input data into digital forms with Visual Studio and Power Automate (after it was put onto a public SharePoint Online server, which I think should have been public). There is some work showing that the methods are being called from the public version, but I get no indication what their signature is yet. Any help would be very much appreciated; let me know if you have a good idea (and I'd like to post it anyway). Thanks 🙂 A: There are web services available for the users. What you need is to convert those tables back to Power Apps: http://blog.sqlalchemy.org/2010/01/27/create-power-apps-in-power-dev-and-generate-a-web-service/ What are the benefits of converting the user's data to an image form? Convert the user data to an Image form. Suppose it turns out that you are just converting the user data to a Power Automate image (image conversion is not the question); you would then convert the user data back to XML files in Visual Studio (or at least that's pretty much it). Convert the user data to an XML form.
Type alias for model_dump include/exclude is broken Initial Checks [X] I confirm that I'm using Pydantic V2 Description Version 2.9.0 changed the IncEx type alias to the following: IncEx: TypeAlias = Union[Set[int], Set[str], Dict[int, 'IncEx'], Dict[str, 'IncEx'], None] This is not correct for the include and exclude parameters of the model_dump method. For example, it's now not possible to use a Dict[str, bool] as the value for include/exclude (see the code example below). Example Code from pydantic import BaseModel class ExampleModel(BaseModel): example_field: str field_to_be_excluded: str e = ExampleModel(example_field="Hello Pydantic", field_to_be_excluded="To be excluded") e.model_dump(exclude={"field_to_be_excluded": True}) # mypy complains about this Python, Pydantic & OS Version pydantic version: 2.9.0 pydantic-core version: 2.23.2 pydantic-core build: profile=release pgo=false install path: /Users/*****/.venv/lib/python3.12/site-packages/pydantic python version: 3.12.2 (main, Mar 7 2024, 08:27:42) [Clang 15.0.0 (clang-15<IP_ADDRESS>)] platform: macOS-14.6.1-arm64-arm-64bit related packages: mypy-1.11.2 fastapi-0.113.0 pydantic-extra-types-2.9.0 typing_extensions-4.12.2 pydantic-settings-2.4.0 commit: unknown @tommasolevato, Thanks for reporting. Indeed, looks like an issue in v2.9.0, we'll work to roll out a fix shortly. I've found another problem with IncEx, that v2.9.1 did not address: class ExampleElement(BaseModel): number: int name: str class ExampleModel(BaseModel): elements: list[ExampleElement] e = ExampleModel(elements=[ExampleElement(number=1, name="Hello Pydantic")]) e.model_dump(exclude={"elements": {"_all": "number"}}) # mypy complains about this I wasn't sure if it was appropriate to open another issue, so I decided to add a comment here. I can create a new issue if needed. Did you mean to write: e.model_dump(exclude={"elements": {"__all__": {"number"}}}) ? Afaik dict[str, str] isn't supported. Indeed, sorry for the confusion. 
Hmm, did this work in previous pydantic versions? I don't recall us supporting something like this... Yes, ref: https://docs.pydantic.dev/latest/concepts/serialization/#advanced-include-and-exclude @tommasolevato, gotcha - please open a new issue! @sydney-runkle I think @tommasolevato was talking about the bool type not being supported (which we fixed), so no need to open any new issue. @Viicos the e.model_dump(exclude={"elements": {"__all__": {"number"}}}) example code is still broken in v2.9.1. It works on my end: e.model_dump(exclude={"elements": {"__all__": {"number"}}}) #> {'elements': [{'name': 'Hello Pydantic'}]} @Viicos operationally it works, but mypy complains about this, and I believe it's right in doing so. mypy complains about this, and I believe it's right in doing so. Ah yes, seems like there's an issue with variance. Oddly, pyright does not flag the error. I'll come back to you once I get an answer from pyright if it happens to be a false negative from pyright (and fix the type alias accordingly). Just to clarify, this issue isn't resolved, right? At least, a minimal example still doesn't type check with mypy in 2.9.1 / after #10339. from pydantic import BaseModel class MyModel(BaseModel): foo: int bar: int m = MyModel(foo=1, bar=2) m.model_dump(include={"foo": True}) Error: error: Argument "include" to "model_dump" of "BaseModel" has incompatible type "dict[str, bool]"; expected "IncEx | None" [arg-type] Ah yes, seems like there's an issue with variance. I also think so. I assume this line https://github.com/pydantic/pydantic/blob/a6dc87285f93f90c2d5c298ee7c52f5d7e878194/pydantic/main.py#L73 should probably become IncEx: TypeAlias = Union[Set[int], Set[str], Mapping[int, Union['IncEx', bool]], Mapping[str, Union['IncEx', bool]]] to make it covariant in the value type. Yes, we still have this issue; it's closed because this is going to be tackled as part of https://github.com/pydantic/pydantic/issues/10335.
The variance issue seems to be a mypy limitation/false positive, as per https://github.com/microsoft/pyright/discussions/8972#discussioncomment-10628016. The variance issue seems to be a mypy limitation/false positive, as per https://github.com/microsoft/pyright/discussions/8972#discussioncomment-10628016. Note that Eric was referring to the local bidirectional inference. The type alias should generally type check against both dict[str, bool] and dict[str, IncEx] individually (and their immutable forms) if passed in externally, so using Mapping goes beyond that inference problem. For instance, this should type check, and has nothing to do with mypy's local inference limitation: # This function properly communicates that it will not mutate `include`. def some_funct(x: MyModel, include: Mapping[str, bool]): print(x.model_dump_json(include=include))
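To make the dict[str, bool] semantics under discussion concrete, here is a toy re-implementation of the exclude traversal. This is purely illustrative and not pydantic's actual code (pydantic-core implements this in Rust); the function name apply_exclude is made up:

```python
from collections.abc import Mapping, Set


def apply_exclude(data: dict, exclude) -> dict:
    """Return a copy of `data` with excluded fields removed.

    `exclude` mirrors the IncEx shape: a set of field names drops those
    fields; in a mapping, a value of True drops the field entirely, and a
    nested set/mapping recurses into the field's value.
    """
    if isinstance(exclude, Set):
        return {k: v for k, v in data.items() if k not in exclude}
    result = {}
    for key, value in data.items():
        spec = exclude.get(key) if isinstance(exclude, Mapping) else None
        if spec is True:  # dict[str, bool]: drop the whole field
            continue
        if isinstance(spec, (Set, Mapping)):
            value = apply_exclude(value, spec)
        result[key] = value
    return result


data = {"example_field": "Hello Pydantic", "field_to_be_excluded": "To be excluded"}
assert apply_exclude(data, {"field_to_be_excluded": True}) == {"example_field": "Hello Pydantic"}
assert apply_exclude(data, {"field_to_be_excluded"}) == {"example_field": "Hello Pydantic"}
```

The last two assertions show why {"field": True} and {"field"} should be interchangeable for callers, which is exactly what the Mapping-based, bool-accepting alias would allow a type checker to accept.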
What type of relay/switch device am I looking for? I want to control a 24V primary device based on a digital signal from a secondary device. The secondary device emits a periodic pulse when I want the primary device to be ON, and no periodic pulse is present when I want the primary device to be OFF. I assume I need some type of relay, but I don't want the relay to turn ON and OFF with every pulse... I just want the relay to activate when the pulse is detected and remain active until the pulsing stops. (I'm hoping to buy the pre-built switching device online, but I don't know what this type of switching system is called?) What is the time between the periodic pulses, and how soon after the last pulse should the primary device be turned off? I haven't measured the pulse rate; let's assume 15 Hz. (If I need to calibrate the pulse rate for the timer via a pot, etc., that is fine.) I would like the primary device to turn off within a few seconds of the signal ceasing. Apply the pulse signal through a capacitor and a diode half bridge to the gate of a MOSFET. This will turn the MOSFET on as long as the pulses are coming. The MOSFET will be off when the pulse train stops (high or low doesn't matter). The pulse voltage has to be large enough compared to the gate threshold of the MOSFET. The 2N7002 has about a 2.5V threshold voltage, and the top Schottky diode eats another ~0.2..0.3 V. So the minimum peak-to-peak pulse height has to be at least ~2.8V for this to work. If your pulse swing is smaller, then another MOSFET with a lower gate threshold voltage must be used. R29, R30, and C26 are optional. R30 can be placed to discharge the gate faster when the pulsing stops. Without R30, the gate discharges slowly due to leakage, which can take several seconds. The smaller R30, the faster the turn-off. C26 has the opposite effect: it increases the charge available at the gate node, making turn-off due to leakage slower. It also makes the turn-on slower, though.
If you want instant turn-on within 1-2 pulses, then C26 should be much smaller than C25. R29 is there to limit the peak current from the pulsing IO pin. If C26 is small or missing, R29 may not be needed either and can be replaced with a short. The capacitor voltage ratings in the schematic can be ignored; they just need to withstand the pulse voltage, which is usually low (e.g. 3.3 V), but a higher voltage spec doesn't hurt.

Like this, well done.

Impressive, I wish I had the ability to design the requested circuit off the top of my head! Do you know, is a prebuilt version of this available online?

I don't know, Charles. I built it years ago and have used it as a failsafe relay since. It is very robust (no failure yet) and you can tune the on/off times with the resistors. The BAT54 and 2N7002 are commodity products that cost much less and are smaller than the relay itself, so any integrated component would be much more expensive/specialized/proprietary.

Thank you to everyone for all your help! @tobalt thank you for the schematic -- I'll likely give this circuit a try once I get back home to my equipment. Your willingness to share is much appreciated!

@CharlesT. good luck. I will add a little more info to my answer about what the role of the optional parts is and when they are needed.

Thank you for the additional explanations, @tobalt!

I think you're looking for a timer relay or timer switch; I don't think there's standard terminology. These devices come in several different varieties that do slightly different things. What you'll need is a relay that you can set to switch on for a little more than the time between pulses, and one that can re-trigger so that it stays switched on continuously. Such devices certainly exist, I'll see if I can track down an example...

Thanks, that sounds like exactly what I'm looking for. I'm finding a lot of timer circuits on Amazon, but they seem to be missing the option to trigger via a pulse signal.
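The turn-off delay described above can be roughed out with a simple RC discharge model. The sketch below estimates how long the gate takes to decay through the bleed resistor (R30) from the pulse peak down to the MOSFET threshold; all component values here are my own assumptions, not taken from the schematic:

```python
import math

def turnoff_delay(v_peak: float, v_threshold: float,
                  r_discharge: float, c_gate: float) -> float:
    """Time for the gate voltage to decay exponentially from v_peak down to
    v_threshold through r_discharge, with total gate-node capacitance c_gate.
    v(t) = v_peak * exp(-t / (R*C))  ->  t = R*C * ln(v_peak / v_threshold)
    """
    return r_discharge * c_gate * math.log(v_peak / v_threshold)

# Assumed values: 3.3 V pulses, ~2.5 V threshold (2N7002), 10 Mohm bleed
# resistor (R30), 1 uF total at the gate node (C26 plus parasitics).
t = turnoff_delay(3.3, 2.5, 10e6, 1e-6)
print(f"turn-off delay ~ {t:.2f} s")  # a few seconds, as requested
```

With these assumed values the relay would drop out roughly 2.8 s after the last pulse, matching the "within a few seconds" requirement; shrinking R30 or C26 shortens that, exactly as the answer says.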
I'm sure I must be searching with the wrong terms.

I was just looking at this http://www.farnell.com/datasheets/1682139.pdf which can do a bunch of things but not quite what you're looking for. Watch this space.

Thanks for the link; I think I saw a similar switch from the same manufacturer, but like you said it doesn't exactly fit the bill. I do appreciate the effort you made to help locate a ready-to-use solution!

You need a timer to "listen" to the signal and open the relay when the pulses stop. So, the timer has to have an "on" duration longer than the "off" time between the pulses.

That makes sense. Is there a pre-built switching device available that would allow me to specify that parameter?

Recently went down a similar rabbit hole. If a pre-built switching system is what you are looking for, something like this may work well: a programmable/multi-function time delay relay with a watchdog retriggerable single-shot control paradigm. Function graph below:
A Beginner’s Guide to Web Development provides information about the different aspects of web development, from front-end to back-end. In addition, it discusses security, CMS development, and wireframes. There are many reasons why beginners should learn back-end web development. It may be to boost your career or to start a business. Regardless of what you choose, a basic knowledge of the field will be useful. Back-end web development involves developing and managing complex systems to improve the way applications work. It requires data mining, sorting, and analyzing. Developers are responsible for making sure users get to the right page. They also need to ensure the company’s servers run efficiently. In addition, they must understand the basics of several technical programming languages. Another reason to start a career as a back-end web developer is to have an advantage over your competition. This is particularly true in the tech industry, which always needs talented developers. With the growth of online e-commerce and mobile devices, demand for such developers keeps rising. One of the biggest advantages of learning Python is that it’s relatively easy to learn. Python’s syntax is simple to understand, and it handles even the simplest web projects with ease. However, if you’re looking for more advanced features, you’ll need to learn another language. Aside from the basics, back-end developers should also know about databases and have a firm grasp of algorithms and data structures. These are essential if you plan on developing a comprehensive system. There are several online tutorials and a large community to help you master these skills. Once you’ve mastered the basics, you can start applying to jobs. Wireframes are an important part of the design process. They help you define how you want the layout of a page to work. You can also use them to get feedback on your design from others. Creating a wireframe can be a daunting task at first.
But there are a few things you should keep in mind. For starters, you should not make the design too complicated. It should be a simple design with a few essential elements. Another important factor to consider is the target demographic. This is especially important when creating a mobile app. The target market may not be the same as your typical customer. Similarly, the design of a desktop website will be different from that of a mobile site. When you are designing a website, it is a good idea to get feedback from people who will be using the final product. While you can’t show them everything, you can still give them a preview of your wireframes. Whether you are designing a website or an app, wireframes can be a great way to get feedback from your clients. A wireframe allows you to show them different directions, and it lets them see the potential pitfalls of your ideas. Once you have your client’s approval, you can begin to build a more detailed design. Creating a wireframe doesn’t have to be a daunting task. In fact, you can get a lot done with a few key steps. Follow these tips and you’ll be well on your way to building a strong web design foundation. A content management system (CMS) is software that helps you to create a website without writing code. In addition, it can handle all of the digital content you need to maintain your site, such as images, text, and video. CMS can also help you to customize your website’s layout. It’s free to use, but you may need to purchase an extension to add the features you need. There are several types of CMS, including traditional, decoupled, and headless. Each one has its strengths and weaknesses. Most CMS platforms come with a free template and a selection of extensions, making them easy to work with. However, it’s important to know which ones are right for you. Traditional CMS – For those who want a simple, easy to manage solution, a traditional CMS is the way to go. 
You can easily add new pages and modify the layout to suit your business’s needs. If you want to take your site to the next level, you can customize the look and feel with CSS. Headless CMS – A headless CMS is a type of CMS that doesn’t contain any presentation layer, meaning it can communicate with just about any software on the market. This type of CMS can be useful for apps and is one of the hottest trends in development circles. The CMS may also handle the more technical IT stuff, such as detecting when a piece of content is expiring or determining which pieces of content need to be visible on a web page. These capabilities can be a major time saver. Whether you’re a developer, a web designer, or just a curious amateur working in a website development company in California, you can benefit from a CMS. It’s easy to learn, and it can save you time. Having a great CMS can help you to build a website that will engage your customers. Web application security is a very important aspect of web development services in California. This is because many of these applications hold sensitive information, such as credit card numbers or passwords. Whether it’s a personal or business site, these applications need to be protected. However, this is not easy to do at first. Regardless of whether you are just starting out or have been in the industry for a while, it is essential that you understand the basics of computer security. The security of any organization starts with three fundamental principles: confidentiality, integrity, and availability. Thanks for visiting dailybusinesspost
This post describes how to compile and run ATLC under Cygwin on Windows. ATLC is an open source 2D transmission line field solver. This means that it can calculate the characteristic impedance and some other parameters of arbitrary single-ended and differential transmission lines. It was written by radio amateur Dr. David Kirkby (G8WRB) and it is very useful when designing RF and high-speed PCBs, since it allows accurate calculation of the impedance of pretty much any geometry of transmission line. It is not limited to the simplified geometries that the various closed-form approximation formulas deal with, so you can easily (well) include e.g. the effect of the solder mask on top of microstrips or the different permittivity of the resin that is pressed in between the traces of a tightly coupled differential stripline. ATLC is however not a program with a graphical user interface. Instead it is a command line program written for Unix-like environments that you need to compile yourself from the source code and run from the command line. This creates a perhaps daunting threshold for non-Linux/Unix users. The intention of this post is to describe how to nevertheless make ATLC work on Windows computers by compiling it under the Linux-like environment Cygwin. The first step is to download and install Cygwin. This is a two-step process. First you download the setup executable, either setup-x86_64.exe (64-bit installation) or setup-x86.exe (32-bit installation) from the Cygwin home page. Then you run the executable. There are a series of questions to answer during the installation. Typically it is best to select “Install from Internet”, to set the root directory to C:\cygwin64 (assuming a 64-bit installation), select some reasonable folder to store the installation files (perhaps the same place where you saved the setup executable) and select whether or not you are using a proxy to connect to the Internet.
Then you come to the first less obvious choice: which download server to use. A good idea might be to try a server that you think is close to home and has decent bandwidth. Then the fun begins, namely the process of selecting which packages of Cygwin to install. There are many. Very many. To compile ATLC you need at least the following packages: Devel/gcc-core and Devel/make. I would also recommend Graphics/netpbm, since it is a package that can be useful for manipulating the images that form the input and output data of ATLC. If you later find out that you want to add more packages, you just rerun the install file and it will remember what you have previously installed and allow you to install more packages.

Downloading and Compiling ATLC

Now you need to download the source files for ATLC. They are available on Sourceforge, http://sourceforge.net/projects/atlc/. To get the source code you need to click on Files, then on atlc (do not click on the link to the Windows binaries as they currently are for an outdated version of ATLC which does not produce correct results in many cases), then click on the most recent version (currently atlc-4.6.1), finally click on atlc-4.6.1.tar.bz2 and wait for the mandatory Sourceforge download delay to expire. There is also a .tar.gz package of source files, but that archive seemed to be broken when I tried to unpack it, while the bz2 archive was OK. Save the source package in some directory (I chose D:\download\ATLC) and then unpack it using e.g. 7-zip, first the bz2 level and then the tar level, to get a directory tree of all the source files. You may want to move the top level folder of the source tree up to the same level as the bz2 file. Now we have the folder D:\download\ATLC\atlc-4.6.1 with the top level of the source tree. Start Cygwin and change directory to the source files.
Cygwin maps the Windows drive letters to folders called /cygdrive/<driveletter>, so to change to the desired folder, you need to type in the following command:

cd /cygdrive/d/download/ATLC/atlc-4.6.1

To prepare for building the exe file, type:

./configure

This performs some checks on the target system (your Windows PC) and creates a suitable makefile. Hopefully every check goes well (it did for me), so then it is time to build the whole thing by typing:

make

This takes a little while, but if everything goes well, the result is some .exe files in the src directory, the most important of which is atlc.exe. It is then a good idea to also run an automatic test to see if everything went well:

make check

The last lines of output when I run the check are:

Run times: T_sequential is 26 s. Not configured for parallel operation.
PASS: benchmark.test
======================
All 82 tests passed
(2 tests were not run)
======================

This looks good to me. Then we can “install” ATLC, which mostly means copying files to suitable locations. This is done by:

make install

Trying It Out

Now, cd to a directory where you want to work with ATLC. You can type:

atlc

to get some information about how the program works. You can also read about it on the ATLC page, primarily under the headings Tutorial, Bitmaps, Man pages and Examples. Use Paint, Gimp, Photoshop, some other drawing program, a script or one of the other .exe files that were compiled along with atlc.exe to create suitable input data for ATLC. Basically, the input data consists of a 24-bit BMP image which represents the cross section of the transmission line. Specific colors represent different materials as described in the man page. Red (0xFF0000) represents the conductor of a line, green (0x00FF00) is the ground conductor, white is vacuum (a good enough approximation of air), some other colors represent predefined dielectrics, while most colors are free to use as custom dielectrics with permittivity that can be defined on the command line. Below is a (scaled down) example picture I created.
It represents a 0.52 mm wide microstrip line without solder mask on a 0.3 mm thick FR-4 substrate. I have put a ground plane not only under the FR-4, but also all around the edges of the image. This helps with ATLC convergence. Such a boundary ground plane should not be placed too close to the actual transmission line in order not to affect the impedance significantly. I chose to use a scale of 200 pixels per mm in this case, but you do not need to tell ATLC that since the impedance is scale invariant. Using more pixels per mm gives a more accurate result. To feed this picture to ATLC, the following command can be issued:

atlc -d d2ff00=4.2 052_030_microstrip.bmp

The option “-d d2ff00=4.2” tells ATLC that the color 0xd2ff00 represents a dielectric with a relative permittivity of 4.2. The other colors that were used (white, green and red) have predefined properties. The output data from ATLC (after a few seconds) is:

052_030_microstrip.bmp 2 Er= 3.03 Zo= 51.457 Ohms C= 112.9 pF/m L= 298.8 nH/m v= 1.722e+08 m/s v_f= 0.574 VERSION= 4.6.1

This means that ATLC has calculated that the effective permittivity of the line is 3.03, the characteristic impedance is 51.457 ohms, the capacitance is 112.9 pF/m, the inductance is 298.8 nH/m, and the propagation velocity is 172.2×10^6 m/s, which is 0.574 times the speed of light in free space. A number of output files are generated as well, both BMP images that represent the electric and magnetic fields and binary files that represent the same thing, but with higher accuracy. See the Files section on the ATLC home page for descriptions of these files. As mentioned above, a higher resolution (more pixels per mm) results in more accurate results, but the downside is that the runtime increases steeply.
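As a rough sanity check on numbers like these, one can compare against a closed-form approximation. The sketch below uses the classic Hammerstad microstrip formulas (valid for w/h ≥ 1, ignoring trace thickness and solder mask, so it will not match a field solver exactly):

```python
import math

def microstrip_z0(w_mm: float, h_mm: float, er: float) -> tuple[float, float]:
    """Hammerstad closed-form approximation for a microstrip of width w on a
    substrate of height h with relative permittivity er (assumes w/h >= 1).
    Returns (effective permittivity, characteristic impedance in ohms)."""
    u = w_mm / h_mm
    e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / u)
    z0 = 120 * math.pi / (math.sqrt(e_eff) * (u + 1.393 + 0.667 * math.log(u + 1.444)))
    return e_eff, z0

# The 0.52 mm wide line on 0.3 mm FR-4 (er = 4.2) from the ATLC example above.
e_eff, z0 = microstrip_z0(0.52, 0.3, 4.2)
print(f"e_eff ~ {e_eff:.2f}, Z0 ~ {z0:.1f} ohms")  # compare: ATLC gives 3.03 and 51.457
```

The closed form lands within a few ohms of ATLC's field-solved 51.457 Ω, which is about what one expects; the field solver's ability to include solder mask and arbitrary geometry is exactly why the post recommends it over such formulas.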
I seem to remember (cannot find it right now) that the run time of ATLC is roughly proportional to the square of the number of pixels in the input image, so doubling the resolution (in pixels per mm) creates four times as many pixels and hence 16 times as long execution time.
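Taking that half-remembered scaling at face value (run time proportional to the square of the pixel count, and pixel count proportional to the square of the linear resolution), the cost of a resolution bump can be sketched as:

```python
def runtime_multiplier(resolution_factor: float) -> float:
    """If run time grows with the square of the pixel count, and the pixel
    count grows with the square of the linear resolution (pixels per mm),
    then run time grows with the fourth power of the resolution factor."""
    pixel_count_factor = resolution_factor ** 2
    return pixel_count_factor ** 2

print(runtime_multiplier(2))    # doubling the resolution: 16x the run time
print(runtime_multiplier(1.5))  # a 50% bump: ~5x the run time
```

So even modest resolution increases get expensive quickly, which is why it pays to keep the image only as large as the accuracy requirement demands.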
OCILIB version 3.8.1 will finally be released in 2 days, on December the 16th!

**It only contains bug fixes.**

The SVN is up to date with this version. Here is the current changelog:

2010-12-13 Version 3.8.1 Vincent Rogier email@example.com

* Miscellaneous fixes
- Fixed internal computation of OCI_Object attributes null indicator offsets
- Fixed OCI_Elem handle initialization by OCI_CollGetAt() and OCI_CollGetAt2()
- Fixed OCI_Ping(): OCI symbol OCIPing() was not dynamically loaded if OCI_IMPORT_RUNTIME was used (default for precompiled MS Windows Dlls)
- Fixed OCI_ConnectionCreate(): in case of an unsuccessful attempt to create a connection, an OCI internal handle was not freed since v3.7.0 (-> memory leak)
- Fixed OCI_LongWrite() + OCI_CHARSET_WIDE charset mode: internal length passed to internal OCI calls was expressed in chars instead of bytes
- Fixed OCI_TypeInfoGet() + OCI_TYF_TYPE: an Oracle error was raised when passing as type name a builtin system type like "SYS.RAW"
- Fixed OCI_GetLastError() that could return NULL when errors occurred in OCI_FetchXXX() calls (although the global error handler was correctly fired)
- Fixed OCI_DequeueGet(): a segfault happened if the queue payload was of type RAW
- Fixed OCI_DequeueFree(): internal structure member that holds the value set by OCI_DequeueSetConsumer() was not freed (memory leak)
- Fixed OCI_MsgFree(): internal message ID allocated at enqueue time by OCI_EnqueuePut() was not freed (memory leak)
- Fixed OCI_IterFree(): internal OCI_Elem handle was not freed for local collections, resulting in a small memory leak
- Fixed OCI_EnqueuePut() and OCI_DequeueGet(): possible memory leak on Unix platforms + OCI_CHARSET_WIDE/OCI_CHARSET_MIXED charset mode
- Fixed OCI_DequeueFree(): internal OCI_Msg handle deallocation forgot to deallocate the internal message ID, resulting in a small memory leak
- Fixed OCI_SetPassword() and OCI_SetUserPassword(): in OCI_CHARSET_WIDE and OCI_CHARSET_MIXED builds, these functions failed to change the password
- Fixed OCI_LobRead2() and OCI_LobWrite2() that could block when using UTF8 through the NLS_LANG environment variable
- Fixed OCI_GetStruct(): structure padding was not handled properly
- Fixed internal allocation of ROWID and UROWID internal buffers when using UTF8 through the NLS_LANG environment variable

* Miscellaneous changes
- Added Exception type OCI_ERR_REBIND_BAD_DATATYPE if a rebinding call attempts to bind a datatype different from the initial one
- Updated documentation about datatypes for rebinding
- Added support for numeric subtypes in OCI_BindGetSubtype() + documentation update
- Manual update of source code formatted with Uncrustify (wrong indentation of switch case and some variable initialization)
- Pre-built MS Windows 32/64-bit Dlls are now built using MS Visual Studio 2010 Express (instead of MS Visual Studio 2008 Professional)
- A MS Visual Studio 2010 Express solution and project has been added to the Windows package to rebuild the Dlls
//
//  IAW_LocalStorageTool.swift
//  IAWExtensionTool
//
//  The object type T must conform to NSObject and NSCoding; see `IAW_Sample_T` for details.
//
//  Created by IAskWind on 2017/9/11.
//  Copyright © 2017 winston. All rights reserved.
//

import Foundation

open class IAW_LocalStorageTool<T> {

    // Archive an array of objects
    open class func setLSValues(key: String, values: [T]) {
        NSKeyedArchiver.archiveRootObject(values, toFile: key.docDir())
    }

    // Unarchive an array of objects
    open class func getLSValues(key: String) -> [T]? {
        return NSKeyedUnarchiver.unarchiveObject(withFile: key.docDir()) as? [T]
    }

    // Archive a single object
    open class func setLSValue(key: String, value: T) {
        NSKeyedArchiver.archiveRootObject(value, toFile: key.docDir())
    }

    // Unarchive a single object
    open class func getLSValue(key: String) -> T? {
        return NSKeyedUnarchiver.unarchiveObject(withFile: key.docDir()) as? T
    }

    // Remove the archive file (try? avoids a crash when the file does not exist)
    open class func cleanLSByKey(key: String) {
        let fileManager = FileManager.default
        try? fileManager.removeItem(atPath: key.docDir())
    }
}
Slack Technologies Inc. has introduced a new series of application programming interfaces or APIs. The work is designed for IT companies aiming to handle chat channels within businesses. The APIs will help with enterprise-level functions. The APIs can include work for chat functions and for producing further workstation efforts. The design should assist in producing more efficient setups that will work for a while. The first of the new APIs from Slack will focus on external activities. With hundreds of thousands of apps available, there exists a need to look at how well a platform can function. People might want to open different programs without having to spend as much time or worry about extensive or complicated technical points. The new API offered by Slack helps to automate processes where apps are installed right while keeping business data from being at risk of being lost. The API will define rules for opening apps. Outside apps can be accepted or rejected based on what an IT team decides. The ability to confirm which apps work will make it easier for businesses to handle more of their work functions. The ability to handle functional and exceptional workspaces will be critical to ensuring there are no struggles with the connections being handled. Some of the new APIs being produced by Slack will focus on identifying great ways to move a business forward. Part of this includes producing new workspaces within an environment. Part of this includes establishing distinct virtual workspaces that are devoted to very specific fields of work. The ability to create diverse workspaces focused on specific actions is similar to the work that Slack put in for producing announcement channels. These are advanced chat rooms that focus on individual subjects. The design of such a channel restricts usage to only those who have been previously permitted to work on these channels. The design ensures that data can move out automatically and in a matter of moments. 
The General Point What makes the work provided by Slack so critical is that it will be easier for people to open programs and set up rooms without complications. There are often times when it might be difficult for people to establish certain communications or to keep their projects organized. The work that Slack is putting in will facilitate a simplified approach to managing content in many forms. Slack is expected to continue to help support many functions for managing its content. People will need to look closely at how Slack can help them going forward and make the most of their content. Competition Will Evolve One thing for certain about Slack’s growth is that the company is aiming to compete with others in the same industry. Slack has been competing with Microsoft in recent times to produce quality API chat systems. Slack has about ten million daily users, but Microsoft has thirteen million. Slack is looking to close the gap, and its ability to produce more advanced and distinct APIs may make an impact on how well the competition can change and shift over time.
UG_Main Page_Users Guide Golden Cheetah: User's Guide The main focus of this new GoldenCheetah User's Guide is to describe the features and functions provided by Golden Cheetah: - the steps a new user has to do when starting - general UI structure and concepts - what different views are provided - which chart type(s) are available per view - how to configure / set preferences - special topics - which cover either functions used in multiple places or important features worth their own Wiki page (e.g. Search/Filter) - country-specific information (e.g. related to how things are translated) - other sources of information And here is a link to the 'Table of Contents' of this user guide: Still can't find it? Try the Site Map Even now you can't find it? Then it's most probably not documented yet. Since the functionality of GoldenCheetah is growing fast - and many features have been added in version 3.2 - the Wiki documentation is partly behind. So anyone finding a missing section is invited to help out and enhance the Wiki accordingly. Thanks. There are some assumptions taken regarding what this guide is NOT. It's - not a developer's or build guide for GoldenCheetah / it assumes you are using a stable version of the software with minimum release level 3.2 - not an introduction into power based training / it assumes that you have a basic understanding of power based training together with knowledge of the common terminology - not a full introduction into all GoldenCheetah metrics / it will deliver information on the metrics which are important to understand the software, but not provide detailed background on all the metrics provided Since new versions of Golden Cheetah are coming out frequently this Wiki is planned to be updated as soon as new functionality is available.
To track the releases, each Wiki page has a common header with the information from which release on a feature is available / or if a feature is deprecated in a certain release. For first-time users of GoldenCheetah we recommend at least 'flipping through' the whole Wiki - ideally with GoldenCheetah running in a second window - to make yourself familiar with the concepts and features. Once you have done this, use the Wiki to re-visit specific details and concepts which you are using in GoldenCheetah. Most of the GoldenCheetah functionality is very straightforward and self-explanatory. This Wiki will not elaborate on those features and functions in broad detail (everyone of you knows how to select a file for upload, so no need to explain this here) - but the Wiki aims to cover special handling advice related to the different functions - such as ("where do I have context menus on (right) mouse-click", "where do I have mouse-over features", "what is the impact of moving my athlete directory",...) - so all the things you might face when you use GoldenCheetah, or need to know to make full use of what GoldenCheetah is offering. Note: All screenshots are taken on a Windows 7 installation of GoldenCheetah. Installations on other operating systems supported by GoldenCheetah will have a slightly different look & feel. Disclaimer: The Wiki is written to help users to explore the capabilities of GoldenCheetah and to learn its functionality. As any documentation, it will have errors and is always incomplete. Also there will be typos and you will easily notice that this new User's Guide is not written by an English native speaker. As the Wiki is open to any GitHub user - improvements and enhancements are welcome. Please try to follow the structure provided initially (best seen in the Table of Contents). In case you do not want to edit, but would like to provide direct feedback, ideas for improvement, etc., please send a mail to "joern.rm at gmail.com". - Thanks
I don't know how to debug this kind of Laravel script error regarding SMTP. On [url removed, login to view] I can't see a trace. I attach some involved files. I can't give SSH credentials; work must be done with AnyDesk. Hope you understand. Here is config/[url removed, login to view], Laravel's config file: 'driver' => env('MAIL_DRIVER', 'smtp'), 'host'...

Hi, dear developers. I made a Laravel chat project using Laravel 5.5, but some problems have happened. I used this url: [url removed, login to view] If you are a developer for Laravel, you can find this issue. Thank you.

Solve the problem I indicated in the image. I do not know why, but in Internet Explorer the social buttons do appear, while in Google Chrome they do not appear. Shop in PrestaShop: [url removed, login to view]

I am a PMO Leader at a software solutions company. Support needed: 1. Enhance the current site - look, ease of use, etc. 2. Fix problems with the initial site where links are not working properly. 3. Train on how to support the site on-going - user access, adding new links, creating lists.

Forbidden. You don't have permission to access / on this server. Server unable to read htaccess file, denying access to be safe. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request. Solve the above issue and explain to me how you have solved it.

I need you to develop some software for me. I would like this software to be developed for Windows using C++.

I have issues with a menu that is not being correctly displayed on smartphones and tablets. I require a CSS expert to solve this issue in 12 hours at most.

We have WHMCS, WordPress with the Alaska theme and the CoinGate payment gateway. Purchasing directly from WHMCS and CoinGate works, but not from inside WordPress (we use WHMpress, client area and WHMCS cart). What is failing is the WHMCS cart plugin. It works this way: [url removed, login to view] It does not work this way: [url removed, login to view] <--- must be fixed. This proj...

Looking for someone who can solve a calculus project. I have a sample that I would like to show and would like to have solved.

Hi all, i de...developer who is now not in my contacts. The developer has given me the front-end files of the software (C sharp and VL), but now when I tried, the software is not working; it's stuck at login. Can someone start the software? I tried with dnSpy and it worked, but it gives the error "input string is not in correct format". The rest will be discussed over chat. Thanks.

I have a problem with my internet. The internet and WiFi are connected but I can't surf in the browsers, except for Skype - Skype is working perfectly. Just help me and guide me to solve it. Thanks. Please check attachments: txt, png. If you are able to handle the project, please reply, thanks.

We have a typical setup where a public web site server has HAProxy running so that users can reach a LAN web server. The problem is that users can access the proxy links directly, while we need them to not be allowed to reach them unless they are logged into the public server. We need two things. 1: A way to prevent direct access to the proxied
Let us discover the right Let us discover the right way to do Microtransactions. LIKE SUBSCRIBE and SHARE 0:00 - intro 1:03 - Bad Microtransactions In Games 8:23 - Why not Engaging Microtransactions isn't a Solution 10:35 - Establishing the Ideal Microtransaction Model 16:16 - The Ideal Microtransaction Model THIS COMPANY, EA, IS THE THIS COMPANY, EA, IS THE WORST COMPANY EVER...at first, it looks like they are just a game developer that makes good games like the sims 4 and madden 19. but when you really dive deep into EA you will see they are money hungry. but why do so many people including me hate them? what did they do to us? and what can they do to fix it? FIND OUT IN THIS VIDEO... Finding QUALITY VIDEOS that bring controversy and questions to the viewers' mind are rare nowadays. these videos are obviously taken care of, and take a while to make, which is why I normally only release 2 or 3 of these videos every 2 weeks. Being that they do take time, I might be late to certain events or topics I might be talking about. Every one of my videos contains STRONG opinions and should not be taken overboard. To better my points that I give to you within these videos, I show pictures and videos. If you have any questions or concerns, you can email me below. hope you guys enjoy the quality content I bring and look forward to new videos. ➡ INSTAGRAM: https://bit.ly/2qjKEtL ➡ EMAIL ME : [email protected] MY CHANNEL IS NOT A HATE CHANNEL. I AM SIMPLY STATING MY OPINIONS. EVERYTIME I FIND SOMETHING BAD, FUNNY, OR JUST WEIRD, I WILL MAKE A VIDEO, EXPRESSING MY OPINION ON IT WITH AN ENTERTAINING VIDEO. ONCE AGAIN, I AM *NOT* A HATE CHANNEL. Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. 
Non-profit, educational or personal use tips the balance in favor of fair use. Fair use is a legal doctrine that says that you can reuse copyright-protected material under certain circumstances without getting permission from the copyright owner. Sims is the best game ever! Who is Mr. Krabs? EA more like dead EA in my house Do you want to get a taco Yes 2 dollars Vigor is a brand new survival game from the makers of DayZ that looks a bit like a battle royale game... but isn't! Find out everything you need to know about Vigor gameplay in this in-depth look at everything we learnt from the Vigor Xbox One closed beta gameplay. Subscribe to Eurogamer - http://www.youtube.com/subscription_center?add_user=eurogamer For the latest video game reviews, news and analysis, check out http://www.eurogamer.net and don't forget to follow us on Twitter: http://twitter.com/eurogamer 4:44 wtf is this shit! Never knew that was even a thing! This video is way out of date and needs some updating Is this like Escape from Tarkov? Need Xbox Live to work or no? Is Vigor cross-platform?
OPCFW_CODE
VIE - Embedded Security Engineer - US Pushing the Edge VANTIVA, headquartered in Paris, France and formerly known as Technicolor, is a global technology leader in designing, developing and supplying innovative products and solutions that connect consumers around the world to the content and services they love – whether at home, at work or in other smart spaces. VANTIVA has also earned a solid reputation for optimizing supply chain performance by leveraging its decades-long expertise in high-precision manufacturing, logistics, fulfillment and distribution. With operations throughout the Americas, Asia Pacific and EMEA, VANTIVA has been recognized as a strategic partner by leading firms across various vertical industries, including network service providers, software companies and video game creators, for over 25 years. Our relationship with the film and entertainment industry goes back over 100 years of providing end-to-end solutions for our clients. VANTIVA is committed to the highest standards of corporate social responsibility and sustainability across all aspects of its operations. For more information, please visit www.vantiva.com and follow us on LinkedIn and Twitter. VIE CONTRACT ONLY - POSITION BASED IN THE US (Atlanta, GA). Check your eligibility on the Business France website; for a VIE contract you must be: - Under 29 years old - A European Union citizen Do you want to become a technical referent contributing to the development of the best security features of our decoders and gateways? As an embedded security engineer, you will implement state-of-the-art security mechanisms on our mass-produced set-top boxes and gateways, including code signing, secure boot, factory lock tools, attack surface reduction, containers and least-privilege assignments.
- You will be one of our technical experts who guarantee the proper integration of security technologies into our products, through:
  - Development and maintenance of code signing tools;
  - Design, coding and review of secure boot software according to industry best practices;
  - Development of product-specific tools for processing cryptographic material (private keys) in the manufacturing process.
- You will also be the main point of contact:
  - For SoC suppliers in the implementation of their solutions;
  - For internal development teams, which you will train in the security issues in our products;
  - For our project managers / production units, to guarantee an execution plan and to monitor security actions.
Profile:
- Bac + 4 or Master in Engineering, University Degree
- Applied cryptography (AES, RSA, EC, OpenSSL, etc.)
- Security assessment (code and architecture)
- Shell script, Python
- Embedded C, cross-compilation toolchains and environments (OpenWrt, Buildroot, etc.)
- Linux architecture and security mechanisms (cgroups, namespaces, etc.)
- Android architecture and development
- English is mandatory
All your information will be kept confidential according to EEO guidelines.
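The secure boot responsibility listed above reduces to a simple control-flow idea: measure the image, compare it against a trusted reference, and refuse to run on a mismatch. Here is a minimal Python sketch of that idea, using a bare SHA-256 digest comparison as a simplified stand-in for the RSA/ECDSA signature verification (against a key anchored in the SoC) that a real boot chain performs; all names are illustrative:

```python
import hashlib

def verify_image(image: bytes, expected_digest: str) -> bool:
    """Simplified stand-in for a secure-boot check: compare the image's
    SHA-256 digest against a trusted reference value. A real boot ROM
    verifies a cryptographic signature instead, but the control flow is
    the same: refuse to run code that fails verification."""
    return hashlib.sha256(image).hexdigest() == expected_digest

# Illustrative usage: a known-good digest accepts the image,
# and any tampering with the payload is rejected.
image = b"firmware-payload"
good = hashlib.sha256(image).hexdigest()
assert verify_image(image, good)
assert not verify_image(image + b"tampered", good)
```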
OPCFW_CODE
Priority not included in message Hi, I'm trying to send a CAN message with the header '0x08FF3DC8', where the priority is 0x08, the PGN is 0xFF3D, and the source address is 0xC8. To do this, I'm using the following line of code: `ca.send_pgn(data_page=1, pdu_format=0xFF, pdu_specific=0x3D, priority=0x08, data=FF3D_data)` where ca is the controller application object. But when I probe the CAN bus to view the message being sent, the message's header shows up as '0x01FF3DC8'. Is it not possible to specify a message's priority, or am I doing something wrong? Valid priority values are 0 to 7. A quick Google leads me to this location, but there will be others. @bwelte34375 are you still having trouble with this, or OK now? @bwelte34375 would you be OK with this issue being closed? Yes.
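The observed behavior is consistent with the priority simply not fitting: in the 29-bit J1939 identifier the priority field is only 3 bits wide, so 0x08 is out of range and the frame appears on the bus as if the priority were 0. A sketch of the standard J1939-21 identifier layout (this shows the bit packing, not the library's internals):

```python
def j1939_id(priority: int, data_page: int, pdu_format: int,
             pdu_specific: int, source_address: int, edp: int = 0) -> int:
    """Pack a 29-bit J1939 CAN identifier.

    Layout (MSB to LSB): 3-bit priority, 1-bit EDP, 1-bit data page,
    8-bit PDU format, 8-bit PDU specific, 8-bit source address.
    The priority field is only 3 bits wide, so valid values are 0-7.
    """
    if not 0 <= priority <= 7:
        raise ValueError("priority must be 0-7 (3-bit field)")
    return (priority << 26) | (edp << 25) | (data_page << 24) | \
           (pdu_format << 16) | (pdu_specific << 8) | source_address

# The frame observed on the bus corresponds to priority 0:
assert j1939_id(0, 1, 0xFF, 0x3D, 0xC8) == 0x01FF3DC8
# With a legal priority such as 6 (the usual J1939 default) you would see:
assert j1939_id(6, 1, 0xFF, 0x3D, 0xC8) == 0x19FF3DC8
```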
GITHUB_ARCHIVE
GreenSync are growing and have permanent opportunities for Developers of all levels as well as Technical Team Leads. We are looking for people to work on real-life problems and find solutions that effect change to the future of energy in Australia and around the world. GreenSync are developing tools and markets to manage peak energy demand and manage instability on the grid, to allow wider use of intermittent renewable energy sources such as solar, wind and batteries. Established in 2010, we’ve just received our 2nd round of funding and are poised for growth. Here are a few reasons that you might find a role at GreenSync interesting: ● You’ll be making the world a better place by helping the grid utilise energy and infrastructure more effectively so we can rely less on old technologies such as coal-fired generation ● You’ll be working on interesting technical challenges and solving interesting algorithmic problems ● You’ll be working for a well-established and growing startup that can effect change in the real world ● You’ll get to work with a passionate team who love what they do. What you’ll be doing: You’ll be working on our cloud-based suite of products, building tools for automation, data management, analytics, forecasting and optimisation. It’s a complex business domain in a rapidly changing area, so you’ll be having plenty of design discussions, building things from scratch and applying existing technology in new ways. You’ll be part of a tight-knit team that moves fast, but also values peer review and quality highly. We do weekly releases, continuous integration and have a pull-request-oriented peer-reviewed workflow. Continuous improvement is a key priority at GreenSync, and you’ll have input into both the product and the process.
Working here you can expect to work with some or all of the following: ● Ruby with and without Rails ● Sass/CSS, HTML5 ● Git and GitHub-based workflows ● RSpec, Cucumber, Watir-Webdriver, SimpleCov ● Rackspace Cloud and AWS ● Chef, Capistrano and devops ideas in general ● Management of medium to large datasets ● Stream processing, analytics, optimisation problems ● Algorithm optimisation, and immutable data structures ● Embedded Java technologies And we’re open to hearing your recommendations too. We’re a well-established and growing startup with some interesting challenges to solve, where you can make a difference and help improve the energy industry through better demand management, reducing the need for coal in the future. Currently we are based in the Melbourne CBD (Hardware Lane) and Singapore, and we are poised for future global expansion. If you’re interested, please contact me on diane.c...@greensync.com.au (see the full email on the original post) or you can call me on 0415 524401 if you have any questions. Rob Postill (our GM of Technology) will also be at the Ruby meetup on Wednesday (29th March) and can give you a heads-up if you’re interested.
OPCFW_CODE
If you need to import all of your data from KeePass into Passwordstate, this is the preferred process, due to the below PowerShell script keeping the correct format of your KeePass database. We'd like to thank one of our customers, Fabian Näf from Switzerland, for writing this script for us. He did a great job and it's helped out many of our customers. This import process will create a Folder with the same name as the XML file you export from KeePass, and it will then replicate the KeePass group structure beneath this. For customers not familiar with Passwordstate, the equivalent of a "Group" in KeePass is a "Password List" in Passwordstate. We also have the concept of "Folders", which allow you to logically group Password Lists together. Process Start: In Passwordstate, identify and note down your System Wide API key from Administration -> System Settings -> API; you will find it under “Anonymous API Settings & Key”. Ensure you save this page after you generate the new key. Create a Password List Template under the Passwords Menu -> Password List Templates. On this template please set the following options and then save the template: Disable the option to prevent the saving of password records if they are found to be a “Bad Password” (screenshot 1 below) Uncheck the option so the Password field is not required, and enable the URL field (screenshot 2 below) Identify and note down the TemplateID by toggling the column visibility (screenshot 3 below) In KeePass, open your database and export the contents to an XML file.
This can be executed from File -> Export -> KeePass XML (2.x). Download the script from: https://www.clickstudios.com.au/downloads/import-keepass-xml.zip Extract this zip file and open it with PowerShell ISE, or the plain PowerShell shell if you prefer. You will be prompted to enter 5 pieces of information: The username of an existing Passwordstate user you wish to give Admin rights to all Passwords imported during this process. Generally you would just enter your own Passwordstate UserID here, as you can modify permissions later; an example format for this is halox\lsand Your Passwordstate URL Your System Wide API key The FolderID you wish to create your KeePass structure under. Enter '0' to create this in the root of Passwords Home; otherwise find the Folder ID of any Folder you like and use this when running the script Your PasswordList Template ID It will then ask you to browse to your exported XML file. That’s it - the script will now run through and automatically read all of the information out of the XML file and import it into Passwordstate. From here, there are a few other things you might want to consider doing after the script has run successfully: You may want to rearrange your folder structure, i.e. possibly you might want to create some new folders for each of your teams, and then drag and drop existing Password Lists/Folders inside of them. Once you are happy with your Folder structure, you should start applying permissions to either Password Lists or Folders, using the following video as a guide: https://www.youtube.com/watch?v=QBJE_xD185U Best practice is to use Security Groups to apply permissions instead of individual users, if possible. Screenshot 1: Screenshot 2: Screenshot 3: Regards, Support As one of my programmatic secondary backup plans, I wanted to use the API to dump the passwords periodically for safekeeping.
I'm reading up on the API call to do the export-all, and it's pretty easy to get the data using a simple PowerShell command as described in the documentation. Then you can just export that into a CSV by piping it into "Export-CSV". Easy enough, but I really like the in-UI option where I can export as a KeePass encrypted zip. Before I try and write this myself, is this already available as a sample or API parameter?
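For the scripted export idea above, a rough Python equivalent of the `Invoke-RestMethod ... | Export-CSV` pipeline could look like the sketch below. The endpoint path and parameter names are assumptions for illustration only; check the Passwordstate API documentation for the real export call:

```python
import csv
import io

def export_url(base_url, api_key):
    # Hypothetical endpoint and parameter names -- verify against the
    # Passwordstate API documentation before relying on them.
    return f"{base_url}/api/passwords/?QueryAll=true&apikey={api_key}"

def records_to_csv(records):
    """Flatten the JSON records returned by the API into CSV text,
    mirroring what piping into Export-CSV does in PowerShell."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

The fetch itself would be a single `urllib.request.urlopen(export_url(...))` call, with the JSON response parsed by `json.load` before being handed to `records_to_csv`.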
OPCFW_CODE
Order of a principal term In Yurii Nesterov's Introductory Lectures on Convex Optimization, there is a bound for the total number of iterations for some process. See page 109: $$\left[\frac{1}{\ln(2(1-\kappa))} \ln\frac{t_0-t^*}{(1-\kappa) \epsilon}+2\right]\cdot \left[1+\sqrt\frac{L}{\mu}\ln\frac{2(L-\mu)}{\kappa \mu}\right]\\+\sqrt\frac{L}{\mu}\cdot\ln\left(\frac{1}{\epsilon}\max_{1\leq i \leq m}\{f_0(x_0)-t_0; f_i(x_0)\}\right)\label{eq1}\tag{1}$$ Then, the principal term in the above estimate is of the order $$\ln\frac{t_0-t^*}{\epsilon} \sqrt\frac{L}{\mu} \ln\frac{L}{\mu} \label{eq2}\tag{2}$$ How did we arrive at statement $\eqref{eq2}$? Is it true that the second term $\sqrt\frac{L}{\mu}\cdot\ln\big(\frac{1}{\epsilon}\max_{1\leq i \leq m}\{f_0(x_0)-t_0; f_i(x_0)\}\big)$ in $\eqref{eq1}$ is eliminated? I would appreciate any advice here. Can you say what variable goes to infinity in this asymptotic analysis? If it's $L/\mu$, then he's right to drop the last term because $\log(L/\mu)$ increases while $\log(\epsilon^{-1}\max\cdots)$ is independent of $L/\mu$ and therefore treated as a constant. The coefficient of $\sqrt{L/\mu}\log(L/\mu)$ doesn't look right to me, it should be the large expression $2+\cdots$ in the first pair of brackets, unless something else goes to infinity also. It would really help to have the full original expression with context. [It looks like only $\epsilon$ is changing. This is page 109](https://books.google.com/books?id=2-ElBQAAQBAJ&printsec=frontcover&dq=page+109+nesterov+introductory+lectures+optimization&hl=en&sa=X&ved=0ahUKEwjx5f_xoIHeAhUFSN8KHaFsDUoQ6AEIKTAA#v=onepage&q=principal%20term%20in%20&f=false) If only $\epsilon$ is changing, then it would be $\log\frac{t_0-t^*}{\epsilon} \sim -\log\epsilon$, so I'm not sure that's it.
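A sketch of how the principal term can be read off from the full bound, treating $\kappa$ as a fixed constant in $(0,1)$: the first bracket is $O\big(\ln\frac{t_0-t^*}{\epsilon}\big)$ and the second is $O\big(\sqrt{L/\mu}\,\ln\frac{L}{\mu}\big)$, so their product gives

```latex
\left[\frac{1}{\ln(2(1-\kappa))} \ln\frac{t_0-t^*}{(1-\kappa)\epsilon}+2\right]
\cdot\left[1+\sqrt{\frac{L}{\mu}}\,\ln\frac{2(L-\mu)}{\kappa\mu}\right]
= O\!\left(\ln\frac{t_0-t^*}{\epsilon}\cdot\sqrt{\frac{L}{\mu}}\,\ln\frac{L}{\mu}\right).
```

The remaining additive term $\sqrt{L/\mu}\,\ln\big(\tfrac{1}{\epsilon}\max_i\{\cdots\}\big)$ also grows like $\sqrt{L/\mu}\,\ln\frac{1}{\epsilon}$, but it carries no $\ln\frac{L}{\mu}$ factor, so it is dominated by the product term whenever $L/\mu$ is large; it is not eliminated, only absorbed into the lower-order terms. (This is an informal reading; as the comments note, the conclusion depends on which quantities are taken to be large.)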
STACK_EXCHANGE
FEATURE: copybreak with optional subtask The following is copied from https://gitlab.com/castedo/copyaid/-/issues/6 MOTIVATION Mass testing with prompts indicates that the quality of GPT output degrades as the inputs get longer and longer. It is also more expensive to send all the text in a file. It is a bit of a pain to have to break up documents into smaller files merely to have less text sent to OpenAI. It is also quite annoying to have OpenAI suggesting lots of changes to sections of text that have already been worked on, when only other sections are in need of copyediting/proofreading. This feature is relatively simple to implement and gives users lots of flexibility to control behavior and mitigate these problems. FEATURE Allow specially marked lines to act as "copybreaks" within source text. These lines are not included in the OpenAI request text; instead they force a break-up of the source file into separate chunks that become parts of separate prompt texts for the OpenAI API. For Markdown (.md) an example copybreak line is: <!-- copybreak --> and for LaTeX (.tex) an example copybreak line is: %% copybreak The config file for Copyaid allows control of the exact line prefix per file type (based on file extension) and the keyword. For the above example, the config in TOML would be something like: copybreak = { 'md' = ['<!--', 'copybreak'], 'tex' = ['%%', 'copybreak'] } Optionally, a subtask name can follow the marking prefix, after 'copybreak' and whitespace. Which prompts and requests are triggered, if any, given the subtask name, is controlled from the config file. Some subtask names can be configured to skip being sent to OpenAI, so that the chunk of text is left as is. For example: <!-- copybreak skip --> and %% copybreak skip will cause all further text to be skipped from being sent to OpenAI until a different subtask name is encountered. When no subtask name is specified, whatever was the last subtask name specified is used again.
The configuration for a copyaid task can specify the initial subtask name to take effect. Some users might want it to be "on" and the skip subtask name to be "off". During an initial experimental stage I plan to use "light" and "heavy" as subtask names, corresponding to light/heavy copy-editing, and will probably configure "skip" as the initial subtask. RELATED https://github.com/manubot/manubot-ai-editor/ automatically splits up files into "paragraphs" and sends them as separate chunks to OpenAI. I find the logic for parsing apart "paragraphs" too fragile, hard-coded, and error-prone to be acceptable as a default for entire files. As a future feature, I imagine some CopyAId subtask names could enable similar automatic break-up, but not by default. The automatic additional breaking would only happen because a particular subtask of a copybreak has enabled it. I am currently thinking of using "start" and "stop" as the pre-installed example subtask names. Similar feature/format in vale.sh: https://vale.sh/docs/topics/config/ <!-- vale off --> <!-- vale on --> I am currently thinking of using "start" and "stop" as the pre-installed example subtask names. I worry "stop" implies the rest of the doc will not be processed. Better possibilities: "off" "ignore" "pass" "skip" I'm thinking "instruction" is a better choice than "subtask". This feature does not need to be coupled with the "task" feature of CopyAId. The code implementing this could be used in a utility that has all of the CLI convenience features of CopyAId ripped out. This feature has been implemented and released in v0.6 and v0.7 of Copyaid. Documentation at https://copyaid.it/copybreaks/ A related feature idea in the inspiration for copyaid is https://github.com/manubot/manubot-ai-editor/issues/32
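The splitting behavior described in the feature (marker lines excluded from request text, subtask names persisting until a new one is named) can be sketched in a few lines of Python. This is an illustration of the spec as written, not CopyAId's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    subtask: str
    text: str

def split_copybreaks(source, prefix, keyword="copybreak", initial="start"):
    """Split source text into chunks at copybreak marker lines.

    A marker line starts with `prefix` and contains `keyword`, optionally
    followed by a subtask name. Marker lines are excluded from chunk text;
    a marker with no subtask name reuses the previous subtask name.
    """
    chunks, current, subtask = [], [], initial
    for line in source.splitlines(keepends=True):
        stripped = line.strip()
        if stripped.startswith(prefix) and keyword in stripped:
            if current:
                chunks.append(Chunk(subtask, "".join(current)))
                current = []
            # Text after the keyword, ignoring any closing comment marker.
            after = stripped.split(keyword, 1)[1].strip(" ->")
            if after:
                subtask = after.split()[0]
        else:
            current.append(line)
    if current:
        chunks.append(Chunk(subtask, "".join(current)))
    return chunks
```

For a Markdown file, `split_copybreaks(text, "<!--")` would yield one chunk per section, each tagged with the subtask name in force, so the caller can decide which chunks to send to the API and which to leave untouched.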
GITHUB_ARCHIVE
fix home assistant complaining GBP/kWH is an invalid currency Description fix home assistant complaining GBP/kWH is an invalid currency Motivation and Context How Has This Been Tested? Types of changes [x] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) Thanks - I thought I had fixed that. @ColinRobbins this is a different line 😆 you fixed the elec tariff standing, this is for the elec tariff rate Agreed, but I thought I had deleted the entire function, as it will inherit the correct function from the parent class. Anyway, this will do the job! Cheers. Ah hold on, looks like I fixed the wrong thing. The standing charge should be GBP, but the unit charge should be GBP/kWh. Could you amend the PR? @ColinRobbins if you don't change GBP/kWh to GBP, the web UI in Home Assistant complains in the console saying RangeError: invalid currency code in NumberFormat(): GBP/kWh so this needs to be set as GBP I don’t think this is right. The Energy integration needs the rate to be GBP/kWh. Which page / card are you looking at when you see the console error? I’m not seeing anything in my console! BEFORE: AFTER: CONSOLE ERRORS BEFORE PATCH: Ah, OK. I think that probably needs a different fix then. I think GBP/kWh is correct. The issue is the state class being STATE_CLASS_MONETARY. Perhaps it should be STATE_CLASS_MEASUREMENT. Can you try that? Sorry, device class, not state class - corrected post above. Not sure what the “correct” value should be - looking… Not sure I can see anything better than “DEVICE_CLASS_ENERGY” @ColinRobbins I'm keeping my pi running for now with this patch set as GBP and I'll see if any issues happen over the next few days; none so far, but it's a case of wait and see 🤞 With it set to GBP, you will not be able to add it as an entity with current price in the energy dashboard.
It will probably work if set up, but would not work if you remove it and try to add it back. The energy dashboard needs GBP/kWh. With it set to GBP, you will not be able to add it as an entity with current price in the energy dashboard. It will probably work if set up, but would not work if you remove it and try to add it back. The energy dashboard needs GBP/kWh. Surely the entity with current price option would be sensor.electric_cost_today and not sensor.electric_tariff_rate? I currently have mine set to use an entity tracking the total costs, using sensor.electric_cost_today, and no issues here. If you look at the code in https://github.com/home-assistant/core/blob/dev/homeassistant/components/energy/sensor.py at about line 300, the integration is expecting the price to end in /wh, /mwh or by default /kWh. My proposed alternative fix resolves the issue you reported and meets the expectation of the energy dashboard. The energy dashboard documentation gives an example in ‘USD/kWh’, not ‘USD’. @ColinRobbins if you look at the developer docs too for the sensor entities, it says the values need to be either kWh or GBP (currency) for either energy or monetary, so I think this fix is good enough :) https://developers.home-assistant.io/docs/core/entity/sensor#available-device-classes We’ll have to agree to differ, and let HandyHat decide which approach to take. Revisiting this, I think if you use “rate” in the energy dashboard with GBP as the unit, this PR will lead to the error… Unexpected unit of measurement Translation Error: The intl string context variable "currency" was not provided to the string "The following entities do not have the expected units of measurement ''{currency}/kWh'' or ''{currency}/Wh'':" Hence why I believe the current units are correct at GBP/kWh.
Revisiting this, I think if you use “rate” in the energy dashboard with GBP as the unit, this PR will lead to the error… Unexpected unit of measurement Translation Error: The intl string context variable "currency" was not provided to the string "The following entities do not have the expected units of measurement ''{currency}/kWh'' or ''{currency}/Wh'':" Hence why I believe the current units are correct at GBP/kWh. Where are you seeing this error? I've had no issues/no errors with this PR on my pi for a while now? This error is in the HA logs. It occurs if you use the “rate” sensor, with this PR, in the Energy integration to supply the current cost of energy. How does your elec look? No issues here at all. The error occurs if you use the current tariff rate sensor for “use an entity with the current price”. (I know using the cost sensor is better, but using the tariff sensor should not result in an error.) Don't mean to intervene, but I am with @ColinRobbins on this one. Here is my reasoning: yesterday I played with this, tried "use an entity with the current price" with Electric Cost (Today), which is basically GBP (unit of measurement), and that indeed throws an error. Revisiting this, I think if you use “rate” in the energy dashboard with GBP as the unit, this PR will lead to the error… Unexpected unit of measurement Translation Error: The intl string context variable "currency" was not provided to the string "The following entities do not have the expected units of measurement ''{currency}/kWh'' or ''{currency}/Wh'':" Hence why I believe the current units are correct at GBP/kWh. So I can confirm changing the tariff rate (the entity referred to in this PR) to GBP and using it for Energy will error later. I agree that the tariff rate unit of measurement should be GBP/kWh, and not GBP. This is NOT a sum of money (currency) but rather a unit of measurement, like fuel consumption.
Fuel consumption of 5 L makes no sense if you do not match it against a distance, e.g. 100 km (the same logic applies to miles per gallon, obviously). The same thing applies when you go shopping for anything: the price for meat is £/kg, otherwise how do you know how much you are getting for your £? That being said, the approach should be to change the device class rather than change the unit of measurement, as already suggested by Colin. Rather than changing line 336, I suggest trying adding the following at line 332:

    @property
    def device_class(self) -> str:
        """Return None as the device class."""
        return None

I've tried it on my test systems, and it seems to solve the issue. I think anything other than None will cause issues somewhere. PS: I landed here by basically discovering the same error in the console, see here https://github.com/HandyHat/ha-hildebrandglow-dcc/issues/88#issuecomment-976615782 Thank you for the insightful discussion here everyone! I'm going to close this in favour of @ColinRobbins' solution of unsetting the device class (#138), as I think that makes more sense
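For readers skimming the thread, the combination the discussion converges on (keep GBP/kWh as the unit, report no device class) can be sketched with a plain-Python stand-in. This is not the real Home Assistant entity API, just the shape of the fix:

```python
# Minimal stand-in for the sensor entity discussed (illustrative only,
# not the homeassistant library): the tariff rate keeps GBP/kWh as its
# unit, and the fix is to report no device class at all.
class TariffRateSensor:
    def __init__(self, rate: float):
        self._rate = rate

    @property
    def native_unit_of_measurement(self) -> str:
        # A rate is money per unit of energy, not a sum of money.
        return "GBP/kWh"

    @property
    def device_class(self):
        # Returning None avoids the frontend treating the unit as a
        # currency code ("RangeError: invalid currency code").
        return None

    @property
    def state(self) -> float:
        return self._rate
```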
GITHUB_ARCHIVE
Installing Total Open Station There are a few different ways to install Total Open Station, depending on your operating system. OpenSUSE Total Open Station is packaged for OpenSUSE. Installing is as easy as: $ sudo zypper ar http://download.opensuse.org/repositories/Application:/Geo/openSUSE_12.1/ GEO $ sudo zypper refresh $ sudo zypper install TotalOpenStation Debian and Ubuntu Total Open Station is included in Debian and Ubuntu, just: sudo apt-get install totalopenstation as usual. Please note that the version provided by your distribution may not be the latest release. Mac OS X Download Python 2 from the official website, and follow this document on the Python.org website, which will help you choose the correct version of Python to use. Do not use the pre-installed Python that comes with the OS X operating system. Windows Two packages need to be installed before the actual installation of Total Open Station, because the program is written in the Python programming language, which is not installed by default on Windows. You might need administrator privileges to be able to install all the programs. Check whether your Windows is 32 bit (x86, common for older versions like Windows XP) or 64 bit (x86-64). Then download the latest Python installer for Python 2 (not Python 3). When you’ve got the installer downloaded on your computer, install it. You don’t need to use Python directly, but it is needed for the program to work. Download pySerial and install it. As with Python, you don’t need to use it directly, but it is needed for the program to work. Please make sure you are installing pySerial version 2.7 or later. Install Total Open Station Download the most recent version of Total Open Station from PyPI and install it. You will find the totalopenstation-gui script in the standard installation directory, unless you have changed the default installation options (not recommended). You can create a shortcut to the program on your desktop if you like.
To upgrade to a newer version, just go to PyPI again, download the latest version and install it as with the first one. The old version will get overwritten. No data will be lost! Using pip (for the latest version) Until your operating system’s packaging tools (e.g. apt or yum) allow you to install Total Open Station along with other programs, the recommended way to install it is using pip (a package manager for Python) and virtualenv (which creates isolated software environments: basically, you don’t mix packages installed system-wide with your package manager and user-installed software). Here follows a detailed step-by-step guide. First of all, make sure you have pip and virtualenv installed. All major GNU/Linux distributions have them packaged: - Debian and derivatives (including Ubuntu): apt-get install python-pip python-virtualenv - Fedora and derivatives: yum install python-pip python-virtualenv Create a virtual environment Creating a virtual environment is as easy as typing in a terminal: virtualenv tops-environment A new directory named tops-environment has been created. It contains a minimal set of files needed to manage a Python installation that is isolated from the one installed on your system, helping to keep things clean. Now activate the environment with: source tops-environment/bin/activate From now on, all Python-related actions will be executed within the newly created environment, and not on the system-wide installation. Your terminal should look a bit different when the virtual environment is active. You can change directory freely, the environment will remain active. You deactivate the environment (that is, you exit from it) with the deactivate command. Installing Total Open Station Once the virtual environment is active, you’re ready to install Total Open Station, with: pip install totalopenstation This will automatically download the latest released version from the Python Package Index (PyPI), and install all the other required Python packages as well.
Installing development versions Sometimes it is useful to install development versions before they are released, to help with testing of new features and making sure that there are no new bugs. Using the procedure described above, it is fairly easy to create another, separate environment. Once the new environment is active, the command for installing a development version is: pip install -e git+https://github.com/steko/totalopenstation#egg=totalopenstation Developers may ask you to install from another repository, but the concept stays the same. This mechanism is very flexible and allows you to install and test different versions safely. Running the program When the program is installed, you can use it from the command line or with a graphical interface (recommended for new users). From your terminal, type: totalopenstation-gui.py and the program should start. Please report any errors to the issue tracker. The next time you want to run the program, follow these steps: - open a terminal - cd to the directory where the virtual environment was created - source tops-environment/bin/activate to enter the virtualenv - totalopenstation-gui.py will start the program
OPCFW_CODE
Minio - RELEASE.2022-07-15T03-44-22Z - DefaultTimeouts by @shichanglin5 in https://github.com/minio/minio/pull/15288 - listing: Expire object versions past expiry by @krisis in https://github.com/minio/minio/pull/15287 - Updating minio-go by @cniackz in https://github.com/minio/minio/pull/15297 - Fix site replication healing of missing buckets by @poornas in https://github.com/minio/minio/pull/15298 - enable using different ports for minioAPIPort/service.port and minioConsolePort/consoleService.port by @chel-ou in https://github.com/minio/minio/pull/15259 - Add missing TLS config to service monitor by @OvervCW in https://github.com/minio/minio/pull/15228 - Default DeleteReplication rule status if unspecified. by @poornas in https://github.com/minio/minio/pull/15301 - fix: skip objects expired via lifecycle rules during decommission by @harshavardhana in https://github.com/minio/minio/pull/15300 - allow force delete on decom pool by @harshavardhana in https://github.com/minio/minio/pull/15302 - @shichanglin5 made their first contribution in https://github.com/minio/minio/pull/15288 - @cniackz made their first contribution in https://github.com/minio/minio/pull/15297 - @chel-ou made their first contribution in https://github.com/minio/minio/pull/15259 - @OvervCW made their first contribution in https://github.com/minio/minio/pull/15228 Full Changelog: https://github.com/minio/minio/compare/RELEASE.2022-07-13T23-29-44Z...RELEASE.2022-07-15T03-44-22Z July 15, 2022, 6:32 a.m.
Why can't a current-carrying loop (curl of the electric field exists) produce a time-varying magnetic field?

If a time-varying magnetic field can give value to the curl of an electric field, why not the other way round? That is, why can't an enclosed loop with some emf produced (basically a current-carrying loop) produce a changing magnetic field? It does produce a constant magnetic field, yes. But according to Faraday's law, curl E = -dB/dt. If the curl of the electric field has a value, shouldn't the time derivative also have a value? Meaning: a changing magnetic field should be produced.

Could you describe the physical situation you're considering in more detail please? A closed loop of current has E summing up to zero around it: there's no average curl of E.

It's just a closed loop, like you're mentioning.

Oh yes, you're right. I thought that there would be a field 'in the wire'. Won't there be a field?

Are you sure the curl of the electric field is nonzero in this situation?

I'm not sure. Maybe that's where I'm wrong. But how to think of it being equal to 0? Yes, it is directed along the length of the wire, so the curl for an infinitesimally small area would be 0. But again, using Stokes' theorem: if we evaluate the integral of E.dl, both the electric field and 'dl' are directed along the same direction, so won't their dot product have a value?

Let us continue this discussion in chat.

In a wire carrying a steady current, there is no electric field if the resistance of the wire is zero (i.e. a superconductor). If you make a loop of such wire and induce a current in it, then there is no electric field and the closed line integral of the electric field is zero. In a wire with finite resistance, there is an electric field and, on the face of it, there would be a finite line integral going around a closed loop. However, somewhere in that circuit, there must be an EMF source that has an exactly equal and opposite line integral (if the current is steady).
The net result is a closed line integral of zero (the circuital law) and no changing magnetic field due to the steady current.

Kudos for writing "In a wire carrying a steady current, there is no electric field if the resistance of the wire is zero (i.e. a superconductor)." and emphasizing steady.

I may be missing your point, but current-carrying loops produce varying magnetic fields in all kinds of situations: AC motors, transformers, inductors, demagnetizers, and many others. In an electromagnetic wave, the interaction of varying electric and magnetic fields determines the rate of propagation of the wave.
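The bookkeeping in that answer can be written out explicitly. As a sketch, for a steady current $I$ driven around a loop of total resistance $R$ by a source of EMF $\mathcal{E}$:

```latex
\oint_C \mathbf{E}\cdot d\boldsymbol{\ell}
  = \underbrace{\int_{\text{wire}} \mathbf{E}\cdot d\boldsymbol{\ell}}_{IR}
  \;+\; \underbrace{\int_{\text{source}} \mathbf{E}\cdot d\boldsymbol{\ell}}_{-\mathcal{E}}
  = IR - \mathcal{E} = 0 ,
```

since $\mathcal{E} = IR$ for a steady current. This is consistent with Faraday's law, $\oint_C \mathbf{E}\cdot d\boldsymbol{\ell} = -\,d\Phi_B/dt$: a steady current produces a constant flux $\Phi_B$, so both sides vanish.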
composer.json showing different version to database

Have a site which is on version 3.6.16. I was running the upgrade to the latest versions and it crashed. I have restored the database and the composer.json and composer.lock files, and ran composer install after removing the vendor folder. (Database shows version 3.6.16.) When I try to access the control panel I get the message: "To complete the update, some changes must be made to your database." When I click on Finish Up, some database changes get applied, and when logged in it says I'm on version 3.7.34. When I look at composer.json it still says "craftcms/cms": "3.6.16". I'm fairly sure something is out of sync here - I don't think I should be getting the "To complete the update" message and migrations applied? The control panel seems to work ok at the moment, but it says there are no updates available.

Something is definitely out of sync, yes - to make sure everything is in order before proceeding to retry the upgrade, I'd try the following:

1. Restore the database to a backup from before the upgrade attempt
2. Restore the composer.json and composer.lock files to their state from before the upgrade attempt
3. Delete the vendor folder and run composer install
4. Clear all caches via php craft clear-caches/all (or via the Clear Caches utility in the Control Panel)
5. Rebuild the project config via php craft project-config/rebuild (or via the Project Config utility in the Control Panel)

Finally, if by "crashed" you mean that the previous upgrade attempt failed due to a PHP timeout, consider upgrading using Craft's CLI, i.e. by running php craft update in your terminal instead of upgrading via the Control Panel.

Ok - I have completed 1-5 above. When I look in the db, the craft_info version says 3.6.16. When I try to access the control panel I get the message: "To complete the update, some changes must be made to your database." with the Finish Up button.
When I click that, and when it completes, the craft_info table version now shows 3.7.34. This doesn't seem right. What makes that "Complete the update" element run? If I manually change my composer.json file to the latest version then run composer update, is that a safe method to bring me back into sync?

If you get the "To complete the update..." message after completing those steps, and those migrations actually run, that definitely means that you've still got Craft 3.7.34 in your vendor folder, not 3.6.16. Most likely, that would mean your (restored) composer.lock file doesn't specify the version you think it does. After completing steps 1 and 2, I'd double-check that file to see if it actually specifies 3.6.16 (you can do a text search for "name": "craftcms/cms").

Just checked in the composer.lock file I was using, and I am getting this: "name": "craftcms/cms", "version": "3.6.16", so that seems to be ok. Looking at my composer.lock file after "To complete the update..." has run, it still says "name": "craftcms/cms", "version": "3.6.16", and the composer.json file still says: "require": {"craftcms/cms": "3.6.16",. But when I log in to the control panel it has all the features of 3.7.34 (I checked the changelog). When I go to vendor/craftcms/cms/composer.json it says "version": "3.6.16".

I've now checked the staging location - I had been doing the above locally. It seems to have upgraded ok when I tried, and the composer.json and composer.lock are both showing 3.7.34. I will maybe pull that version down and proceed from there. As for the above, I have no idea what is going on, with the files showing 3.6.16 and the Control Panel and db showing 3.7.34.

Yeah, then I have no idea what's going on, sorry.

Thanks for the assistance
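The "text search for the pinned version" suggestion above can be scripted. A minimal sketch, using a fabricated composer.lock stand-in (not the real project file) to show the grep:

```shell
# Create a minimal stand-in composer.lock for illustration only.
cat > composer.lock <<'EOF'
{
    "packages": [
        {
            "name": "craftcms/cms",
            "version": "3.6.16"
        }
    ]
}
EOF

# Print the version line immediately following the craftcms/cms entry.
grep -A1 '"name": "craftcms/cms"' composer.lock | grep '"version"'
```

Run against the real lock file, this quickly confirms which version `composer install` will actually put into the vendor folder.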
Although all overload relays contain a set of normally closed contacts, some manufacturers also add a set of normally open contacts as well. These two sets of contacts are either in the form of a single-pole double-throw switch or of two separate contacts. The single-pole double-throw switch arrangement will contain a common terminal (C), a normally closed terminal (NC), and a normally open terminal (NO) (Figure 4-29).

There are several reasons for adding the normally open set of contacts. The starter shown in Figure 4-30 uses the normally closed section to disconnect the motor starter in the event of an overload, and uses the normally open section to turn on an indicator light to inform an operator that the overload has tripped. The overload relay shown in Figure 4-31 contains two separate sets of contacts, one normally open and the other normally closed.

Another common use for the normally open set of contacts on an overload relay is to provide an input signal to a programmable logic controller (PLC). If the overload trips, the normally closed set of contacts will open and disconnect the starter coil from the line. The normally open set of contacts will close and provide a signal to the input of the PLC (Figure 4-32). Notice that two interposing relays, CR1 and CR2, are used to separate the PLC and the motor starter. This is often done for safety reasons. The control relays prevent more than one source of power from entering the starter or PLC. Note that the starter and PLC each have a separate power source. If the power were disconnected from the starter during service or repair, it could cause an injury if the power from the PLC were connected to any part of the starter.
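The NC/NO behaviour described above can be summarised as a small state model. This is a toy illustration in Python, not PLC code; the class and property names are invented:

```python
# Toy model of an overload relay's two contact sets.
class OverloadRelay:
    def __init__(self):
        self.tripped = False

    def trip(self):
        self.tripped = True

    def reset(self):
        self.tripped = False

    @property
    def nc_closed(self):
        # Normally closed: closed until the overload trips
        # (holds the starter coil circuit).
        return not self.tripped

    @property
    def no_closed(self):
        # Normally open: closes only when the overload trips
        # (drives the indicator light or PLC input).
        return self.tripped


relay = OverloadRelay()
relay.trip()
# NC opens -> starter coil drops out; NO closes -> PLC input goes high.
print(relay.nc_closed, relay.no_closed)  # False True
```

The two properties always disagree, mirroring the single-pole double-throw arrangement: the common terminal connects to exactly one of NC or NO at a time.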
Account's state inconsistency

Problem

Account 0x1d2d5108e4979fa31d9F73ab955eCF6B00E436D3 has some inconsistency in its reported state. At the first transaction ever reported where this address was involved - 0xf1496f187bda4e20951b55fbcdc7d6df3b3b71ca2f04f33c72a85f9c7899dac9 - we see the account received 400 FLOW. However, when we check the second recorded transaction where the account was involved - 0x9290de18578d987b26a0dcf804c71df0ef5f9dcf61a6a0b818c8fdc9952bac80 - this account is reported to have an initial 0 FLOW in its state change, and no FLOW is debited from its balance. As this address is the transaction caller, shouldn't the tx value of 125 FLOW be debited from the initial 400 FLOW?

Context

Running validation on accounts' state changes to their FLOW balances found this potential issue. Checking the history of transactions of the account 0x1d2d5108e4979fa31d9F73ab955eCF6B00E436D3, eventually the account's FLOW balance will be negative if we consider the debits/credits shown plus the transaction fees. (Let me know if I'm missing anything here.)

I think the source of the issue might be the way blockscout populates state; for example, one issue I see is that the mint address is set to 0x0000000000000000000000000000000000000000, which is wrong.

@ramtinms I'm attaching a quick manual ledger for the account, considering its transaction-level history, where I kept track of the account's FLOW balance over time, applying the balance changes. I've noticed that in the last transaction reported we have a negative FLOW balance for that account; let me know if I'm missing anything that should be considered. The logic to keep track of the account's balance is based on the FLOW values received or paid in the transactions plus the fees reported. Account 0x1d2d5108e4979fa31d9F73ab955eCF6B00E436D3 manual ledger.pdf

The blockscout coin_balance_catchup process is trying to get the balance for the minter address and is failing to do so. So that might be why it has not updated other balances?
{"time":"2024-09-20T09:08:31.840Z","severity":"error","message":"failed to fetch: 0x0000000000000000000000030000000000000000@1385352: (-32000) failed to get balance of address: 0x0000000000000000000000030000000000000000 at height: 1385352, with: invalid height not in available range: 87390037\n","metadata":{"count":1,"fetcher":"coin_balance_catchup","error_count":1}}

Oh interesting find :100: This is a known issue in the EVM Gateway: we send Cadence scripts to Execution Nodes, which do not have that much history, or the heights may be from previous sporks, so we can't use them. We'll soon deploy an update to fix this, so bear with us :pray:

The root cause here is a query for historical blocks that the AN does not have anymore (from a past upgrade) - this should be resolved once we have dry-run (local state index) implemented as part of https://github.com/onflow/flow-go/issues/6539.
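The manual-ledger validation described in the issue can be sketched as a replay of credits, debits, and fees per transaction, flagging any point where the running balance goes negative. The transaction values below are illustrative, not the account's real history:

```python
# Hedged sketch of a balance-replay validator (illustrative values only).
from decimal import Decimal

txs = [
    {"credit": Decimal("400"), "debit": Decimal("0"), "fee": Decimal("0")},
    {"credit": Decimal("0"), "debit": Decimal("125"), "fee": Decimal("0.001")},
]

balance = Decimal("0")
history = []
for tx in txs:
    # Apply each transaction's net effect: credits in, debits and fees out.
    balance += tx["credit"] - tx["debit"] - tx["fee"]
    history.append(balance)

# A negative balance at any point indicates the reported state is inconsistent.
assert all(b >= 0 for b in history), "state inconsistency: negative FLOW balance"
print(history[-1])
```

`Decimal` avoids the floating-point rounding that would otherwise make small fee sums unreliable in this kind of reconciliation.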
After briefly reviewing the recent effort of physicists and mathematicians alike to break Newton's third law to make systems active [1], we introduce particular continuum models featuring such nonreciprocal interactions that destroy the gradient dynamics structure of well-known models. First, a thin-film model for partially wetting drops on solid substrates is made active by incorporating a nonreciprocal coupling to a polarisation field in the form of self-propulsion and active stress [2]. We show that the employed polarisation-surface coupling results in (hysteretic) transitions between resting and moving drops, the splitting of drops, and chiral motion. Second, we introduce a nonreciprocal Cahn-Hilliard model [3,4], show that all its linear stability thresholds may be mapped onto the ones of a Turing reaction-diffusion system, and indicate how the nonreciprocal interactions arrest and stop coarsening, and give rise to localised and/or oscillatory states. Finally, we argue that the nonreciprocal Cahn-Hilliard model is indeed of universal importance, as it corresponds to the last missing amplitude equation out of eight that should exist if one considers a classification of linear instabilities of uniform constant states based on three features: small- vs. large-scale, stationary vs. oscillatory, and with vs. without conservation law. The talk ends with a brief outlook.

[1] Y. X. Chen and T. Kolokolnikov, J. R. Soc. Interface 11, 20131208 (2014); A. V. Ivlev, J. Bartnick, M. Heinen, C. R. Du, V. Nosenko, and H. Löwen, Phys. Rev. X 5, 011035 (2015); M. Fruchart, R. Hanai, P. B. Littlewood, and V. Vitelli, Nature 592, 363 (2021); M. J. Bowick, N. Fakhri, M. C. Marchetti, and S. Ramaswamy, Phys. Rev. X 12, 010501 (2022).

[2] S. Trinschek, F. Stegemerten, K. John, and U. Thiele, Phys. Rev. E 101, 062802 (2020); F. Stegemerten, K. John, and U. Thiele, Soft Matter 18, 5823 (2022).

[3] Z. H. You, A. Baskaran, and M. C. Marchetti, Proc. Natl. Acad. Sci. U. S. A. 117, 19767 (2020); S. Saha, J. Agudo-Canalejo, and R. Golestanian, Phys. Rev. X 10, 041009 (2020); T. Frohoff-Hülsmann, J. Wrembel, and U. Thiele, Phys. Rev. E 103, 042602 (2021); T. Frohoff-Hülsmann and U. Thiele, IMA J. Appl. Math. 86, 924 (2021); T. Frohoff-Hülsmann, U. Thiele, and L. M. Pismen, Philos. Trans. R. Soc. A, to appear, http://arxiv.org/abs/2211.08320 (2023).

[4] T. Frohoff-Hülsmann and U. Thiele, http://arxiv.org/abs/2301.05568 (2022).

Please contact firstname.lastname@example.org should you have questions about the talk.
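As a schematic sketch (the precise couplings vary between the cited papers; this form is an assumption for illustration), the nonreciprocal Cahn-Hilliard model couples two conserved order-parameter fields $\phi_1$, $\phi_2$ as

```latex
\partial_t \phi_i \;=\; \nabla^2\!\left(\frac{\delta\mathcal{F}}{\delta\phi_i} \;+\; \sum_{j\neq i}\alpha_{ij}\,\phi_j\right), \qquad i = 1,2 ,
```

where $\mathcal{F}$ is a Cahn-Hilliard free-energy functional. The interaction is reciprocal, and the dynamics remains of gradient type, only if $\alpha_{12} = \alpha_{21}$; an antisymmetric part $\alpha_{12} = -\alpha_{21}$ destroys the gradient structure and underlies the arrested coarsening and oscillatory states mentioned above.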
Significant changes have been made in the background processing. These changes are largely about providing documentation from the code and also easing the coding burden for custom developments.

Lately, I was caught out without a recent enough backup of my local machine data when an important database was deleted. This raises the "How often should I back up?" question. My answer has always been and remains "Before losing the work that you have done since the last backup would become a problem". Obviously that is not a fixed time, and so automated backup only resolves that by being very frequent. My own plan uses a regular fortnightly backup with intermediate manual backups whenever significant changes have been made. An important, but often overlooked, requirement is to exercise control over your backups:

The ability to add templated designs to a site has been added. Basically a template consists of: Adding a set of template files to the templates folder in a site makes that template available. This site has changed dramatically from its original blue layout. This has been achieved simply by adding and then selecting the new template, which is based on a photograph of the Twelve Apostles on the Great Ocean Road in Victoria, Australia.

It takes some courage to make major changes to a product that is working. "If it isn't broke, don't fix it." But that is just what has been done over the past couple of weeks. The existing functionality has been broken up into modules. It is now practical to create new modules and make them available to existing or new sites. This should also help to reduce the testing required when changes are made to the processing. It is assumed that the base functionality will always be present, but the base product has no reliance on the add-on modules. The next steps involve:

It has been over a year since I updated this page. This reflects the amount of work being done rather than an absence of work. Frankly, I have been very busy.
Project 1 has been to create GoWide Instant. GoWide Instant provides instant access to a web site development tool. This is still a work in progress, but there are some fascinating aspects to it, such as the ability to modify the CSS dynamically. A recent innovation is the ability to upload an image to be used as a page header and have the site colours chosen to match those in the image.

The database/site building process has been enhanced in many ways: to provide paging on long forms, better validation, date formatting, additional reporting capabilities and a control panel, to name a few. The process has been used to produce a web application to manage machine costs and maintenance for a plant hire business. Many lessons were learned in that build and have been incorporated into the database/site building process. All in all, it has been a very rewarding twelve months and the product, now available, has significantly matured.

The next few months will see the launch of GoWide and the incorporation of the design concepts of Danilo Molina. In preparation for the launch of the process, we have prepared some short exercises that will assist designers in knowing how the site fits together. The first exercise is a small CMS that provides a fixed number of pages which can be updated in place by an authorised user. This has caused us to revisit the database access class and change the way we handle magic quotes.

We have been implementing sites with a large number of products recently. This poses new issues with maintaining the data and displaying it. We have introduced a sub-category to improve the display of products. Also, new methods have been introduced to narrow down the list of products for the maintenance pages. A new process is being introduced to renumber the product sequence numbers so that new products can be added.

New update to project (v15). Search processing updated to enable any table to be included in the search.
The search table and the columns to be included in the search are defined in the site definition file. For searches other than Page and Product, a default SearchView method is generated, which will need to be overridden in the user class for the table to provide meaningful results for the search. A new table containing site constants, such as the webmaster email, has been introduced so that clients can control these parameters.

Further changes to the process have been made. This time, the effort has been expended to remove complexity and resolve some maintenance issues. Pages that require code to produce their content have been greatly simplified with the addition of a new column in the Page table. In the same vein, class methods that return HTML code can be called directly from the page content. Finally, form processing has been streamlined.

A form is now available that elicits the required information for the site build process. This information is emailed to Guybon.com for processing. Using PHP's ZipArchive feature, the site is now generated as a zip if required. Turnaround of a standard site with the selected template can now be accomplished in under 10 minutes. A later project will be to provide an input form for the definition of additional database tables to be included in the site.

It has been a while since this page has been updated. This is not due to lack of progress. Rather, the process has now been used for three clients with good results. There have been some significant modifications made to provide a more intuitive interface. This is the sort of work that will be ongoing as new questions are asked and more opportunities are found. The impediment to providing a rapid development and implementation process is obtaining credible content. While there is little shortage of clients wishing an internet presence, there seems to be a significant need for hand-holding to get the design and content nailed down.
I think that I will need to develop some templates which will be able to grow as the client realises and takes advantage of the potential of the site. I will work on these as soon as the current sites under development are finalised.

The form processing has been improved. Yes/No fields now operate from check boxes. Field labels are based on the column name, but camel case is translated to spaces to make the labels more readable. The contact form has been rewritten to reflect the fields in the contact table. This provides the possibility of adding any number of columns to that table with minimal intervention. (The Images, Category and Product forms could benefit from the same treatment.)

On the importance of feedback. Now, may I have your feedback?

Another trip to Melbourne has once again demonstrated the differences between Victoria and New South Wales. The Hume Highway is good practically all the way down. The Princes Highway appears to disintegrate once one crosses the border from Victoria to NSW. Interestingly, my GPS system, which is now some five years old, works quite well in NSW. However, there have been so many new roads built in Melbourne that it is practically useless in that city. I would be pretty certain that GPS sales are significantly higher in Melbourne than in Sydney. Not that I am a fan of new toll roads or living in Melbourne. However, I believe that the absence of new roads is matched by the absence of any other infrastructure in NSW. I leave it to the reader to draw conclusions.

First BAS return due and finally submitted. Found it hard to get my head around the numbers required. Seemingly illogical changes between the levels of total required. Must make sense to the tax office though. I have canned the spreadsheet in favour of a database. Much easier to obtain the required totals.
The JDMenu has been upgraded to the latest version, which resolves the issue with the highlight not reliably working and also resolves some issues with Internet Explorer which had gone unnoticed. One of these issues was the pop-out menu getting an incorrect Y-axis coordinate.

The look of the site has changed. I have inserted a static right column in most pages. Much of the content in this column is common to more than one page. A Standard Content table has been added to enable content to be reused throughout the site. This enables text such as a contact phone number to be represented as _Contactphone_ in the content and replaced with a static value (0410 468 795) when served. We cannot enforce its use but, if used, it will greatly simplify maintenance.

I have spent some time reading about landing pages. The concepts common to many of the articles on this topic will prove useful. These concepts can be applied in some form to every page on a site, and it would be useful for anyone about to embark on creating content to search on 'Landing Page Tutorial' and read up a bit on what makes a good landing page.

The bullet has been bitten and an option to use the TinyMCE WYSIWYG editor has been added. I prefer to avoid layouts using tables and inline styles, but that is an easy stance to take for one comfortable with HTML coding. The form processing has also been updated to provide a better interface. The database definition now includes codes to indicate how columns are to be presented for edit. This allows the form to present only significant table information for edit in list edit form, with an edit link to enable full detail to be updated.

Another colour scheme generator has been developed to provide a complementary colour scheme for a selected colour. The scheme is then made available so that the site can be viewed using the scheme colours.

I find that it is quite another thing to develop software for someone else to use. This is not new but rediscovered information.
All of those little things that are not quite right but livable are somehow enormous problems to everyone else.

In the last month and a half, there has been much activity. I have been working on a business plan for the new venture. The exercise has proved to be rather valuable. There are plenty of templates available on the web, so it is not difficult to work out a structure. Filling it in is rather more difficult. However, the rigour is good and it has cemented many things that were not fully thought through previously. The next step will be a trip to an accountant, and money starts to get spent.

The menu process has been updated to allow for multi-level menus. Also, a link has been added to each page to permit the content to be edited. Obviously, this is only available when authorised.

Design work has started on the e-commerce processing. The main challenge in the build is to produce code that is flexible and can be readily modified when required. A pretty big ask when I have not really been exposed to such systems from the back end. My accounting experience may yet prove useful.

The site has been updated to use the new additions to the development. The original content has been retained and so the look and feel should not have altered. Specifically:

A trip to Melbourne last week took me away from my computer and back into the real world. VicRoads have a sign saying "Slowing down won't kill you." Now, pedantic perhaps, but isn't it the rate of deceleration that causes the damage? See the Humour category for a photograph. Some more photographs have been added to the wildlife category and I have attempted to make the images as small as possible to speed up loading.
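The _Contactphone_ standard-content replacement described in an earlier entry can be sketched as a simple token substitution. This is a toy illustration in Python rather than the site's actual PHP, and the function and token names are invented:

```python
# Replace _Token_ placeholders in page content with values from a
# standard-content table (represented here by a plain dict).
import re

standard_content = {
    "Contactphone": "0410 468 795",
}

def render(content: str) -> str:
    # _Name_ -> looked-up value; unknown tokens are left untouched,
    # so missing entries are visible in the served page rather than blank.
    def sub(match):
        key = match.group(1)
        return standard_content.get(key, match.group(0))
    return re.sub(r"_([A-Za-z]+)_", sub, content)

print(render("Call us on _Contactphone_ today."))
# Call us on 0410 468 795 today.
```

Storing each value once and substituting at serve time is what makes a later change (a new phone number, say) a single-row update instead of a site-wide edit.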
Editor's Note: Share your SQL Server discoveries, comments, problems, solutions, and experiences with products and reach out to other SQL Server Magazine readers. Email your contributions (400 words or less) to [email protected]. Please include your phone number. We edit submissions for style, grammar, and length. If we print your submission, you'll get $50.

Find and Insert Missing Records

Need to find records that are in an update table but not in a production table, then insert those records into the production table? Check out the script in Listing 1. I use this T-SQL script, which doesn't contain cursors or nested select statements, to process updates that come to me as dumps of everything the company has in production, including new records. The script compares the production table and the update table by examining their common unique identifying key, which should be indexed. The script produces a table with the keys of the rows that are in the update table but not in the production table, then uses those keys to populate the production table with only the missing records.

The script first builds the production and update tables to hold data for comparison, then creates a temporary table (#worktable) to hold missing record keys. After populating the production and update tables with the sample data, the script populates #worktable with the missing record keys, retrieves the missing records, populates the production table with the missing rows, and displays the now-complete production table. The script ends by deleting the comparison production and update tables and #worktable; but make sure you don't drop your real production and update tables. You could also extend the script to remove from the production table records that no longer exist in the complete update table.

Using Cursors to Perform Bulk Operations

Ken Spencer's SQL Server Secrets column "Refreshing Views" in SQL Server Magazine UPDATE (November 18, 1999) suggested a useful script.
The script repeatedly calls system stored procedure sp_refreshview to automatically refresh multiple views in the database instead of refreshing them one by one. I've found that I frequently want to perform operations such as refreshing views, checking table data, creating SQL statements, rebuilding indexes, or changing the table structure for many objects at once. To perform such bulk operations, I keep a SQL cursor on hand as a template and change it as necessary.

The script in Listing 2 opens a cursor on the table specified in the SELECT statement (in this case, the sysobjects table), then loops through each record in the cursor and assigns the contents of the current name field to the @ObjectName variable. The script uses the @ObjectName variable to build an EXEC statement to perform the desired operation. This example refreshes all views that db_owner owns. By changing the SELECT statement in the cursor, you can perform operations against any database objects you want, use a WHERE clause to restrict the result set, and even add an ORDER BY clause. You can also edit the cursor's EXEC portion to perform a wide range of operations against the objects the SELECT statement returns. For example, Listing 3 shows how you can change the cursor to add a Timestamp field to each table in the database. If you wanted to add the field to only 30 tables, you would simply change the SELECT statement to return only the tables you want to work with. And by replacing EXEC with PRINT, you can easily generate a SQL script, which you can save and execute later. With a little imagination, you can use this cursor technique to perform numerous time-saving operations.

Expanding the Databases Node

Have you ever expanded the Databases node in Enterprise Manager, then waited, waited, and waited some more before SQL Server displayed the database list? One DBA reported twiddling his fingers for more than 2 minutes every time he expanded the Databases node to see a list of his 20 databases.
What's the problem? The likely culprit is the Auto Close option. If you select this option, every time you expand the Databases node, SQL Server first has to open every database and verify whether you have access. Simply clear the Auto Close option, and enjoy the performance improvement.

Deleting Identical Rows

Here's an easy way to delete a duplicate row. Just specify Set Rowcount 1 before the delete statement. I'm very careful with production data, so I even surround this statement with a begin transaction statement and make sure I have the expected results before issuing the commit transaction statement. After you've deleted the identical row, don't forget to specify Set Rowcount 0, or every subsequent result set you request will contain only one row.

Changing T-SQL's Case in Query Analyzer

Changing T-SQL code in the Query Analyzer from upper to lower case, or vice versa, is simple. To convert code to uppercase, highlight the code, then press Ctrl+Shift+U. To change to lowercase, highlight the code, then press Ctrl+Shift+L.

Backing Up Specific Tables

I often need to back up a couple of tables several times a day from a database of 2000 tables. But backing up an entire 2GB database is impractical when you need to save only 20MB from three or four tables. My solution is to use Data Transformation Services (DTS), either manually or with a scheduled package, to copy the tables I need to back up from the original database to an empty dummy database. I then save the dummy database and drop all of its tables. If I need to restore the tables, I just restore the dummy database and run the inverse DTS operation to get them online instantaneously. You can also use the dummy database to test application changes.
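The "find and insert missing records" pattern from the first tip can be sketched outside T-SQL as well. Here is an illustrative Python/sqlite3 version; the table and column names are invented, and the Listing 1 script's #worktable step is folded into a single NOT EXISTS insert:

```python
# Insert rows whose key exists in the update table but not in production.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE production (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE updates (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO production VALUES (?, ?)", [(1, "a"), (2, "b")])
cur.executemany("INSERT INTO updates VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

# The comparison key (id) is the primary key, so it is indexed,
# matching the tip's advice about the common unique identifying key.
cur.execute("""
    INSERT INTO production (id, name)
    SELECT u.id, u.name
    FROM updates u
    WHERE NOT EXISTS (SELECT 1 FROM production p WHERE p.id = u.id)
""")
conn.commit()

rows = cur.execute("SELECT id, name FROM production ORDER BY id").fetchall()
print(rows)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```

The same SELECT, with the NOT EXISTS direction reversed, would implement the tip's suggested extension of removing production rows that no longer exist in the update table.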
How to output the offset of a member in a struct at compile time (C/C++)

I'm trying to output the offset of a struct member at compile time. I need to know the offset, and later I'd like to add an #error to make sure the member stays at the same offset. I've seen a couple of working methods for VS, but I'm using GCC and they didn't work properly. Thanks!

You should not need this. What are you trying to achieve?

It's a rather large project, some of it written in assembly. I need to make it harder for people to shoot themselves in the foot if they change the location of the member, since the offset is hard-coded in the assembly code. If you know a way to find the offset of the struct from the assembly code, that could be another solution - I mean, use the offset automatically in the assembly code.

I don't know a solution for doing this at compile time. However, you could throw in a few assert(offsetof(struct foo, some_member) == 12).

@H2CO3: I see valid use cases: for example, to make sure that the external API remains backwards compatible. If you check projects like ffmpeg, unless it's a major version number change, you're not allowed to add members to the middle of the struct (but you're allowed to add to the end, as that doesn't break anything).

Can you use C11 static assertions?

I can use C11. I think I have a small lead here: using offsetof to get the offset as a constant and using it when instantiating a template class, so I can get the offset this way if I get a variable to overflow, but I am still not sure how to raise the error at compile time...

Note that in C++, offsetof can only be applied to standard-layout classes (POD classes, up through C++03).

You can use the offsetof macro, along with the C++11 static_assert feature, as follows:

struct A
{
    int i;
    double db;
    ...
    unsigned test;
};

void TestOffset()
{
    static_assert( offsetof( A, test ) == KNOWN_VALUE,
                   "The offset of the \"test\" variable must be KNOWN_VALUE" );
}

Put this in the same file as your main():

template <bool> struct __static_assert_test;
template <> struct __static_assert_test<true> {};
template <unsigned> struct __static_assert_check {};
#define ASSERT_OFFSETOF(class, member, offset) \
    typedef __static_assert_check<sizeof(__static_assert_test<(offsetof(class, member) == offset)>)> \
        PROBLEM_WITH_ASSERT_OFFSETOF ## __LINE__

and this inside your main():

ASSERT_OFFSETOF(foo, member, 12);

That should work even if you don't have C++11. If you do, you can just define ASSERT_OFFSETOF as:

#define ASSERT_OFFSETOF(class, member, offset) \
    static_assert(offsetof(class, member) == offset, "The offset of " #member " is not " #offset "...")
STACK_EXCHANGE
All Memory Modules Fail Memtest. CPU or Motherboard Issue? A couple of days ago, apps on my computer started shutting down by themselves and the OS started reporting random missing files. I thought that my SSD was failing, but it passed integrity, performance and error checks. Soon after, I started getting BSODs. Next I tested the RAM modules, one by one, in multiple motherboard slots. All 6 RAM modules fail at the memtest86 test #4 on the third or the fourth pass (image attached at the bottom). I tested the memory on another machine and the tests couldn't find any problems with the memory modules. I took a Crucial Ballistix Sport 1 x 8GB from another machine and it fails at the same test. I don't know what to make of all this. I'm down to the CPU and motherboard. Could it be the CPU or a corrupted BIOS? What tests can I perform to make sure the CPU is not failing? Memtest86 Test 3 [Moving inversions, ones&zeros, Sequential] - This test uses the moving inversions algorithm with patterns of all ones and zeros. Cache is enabled even though it interferes to some degree with the test algorithm. With cache enabled this test does not take long and should quickly find all "hard" errors and some more subtle errors. This test is only a quick check. This test is done sequentially with each available CPU. Test 4 [Moving inversions, ones&zeros, Parallel] - Same as test 3 but the testing is done in parallel using all CPUs. I'm running: P6T7 WS SuperComputer Motherboard Intel Xeon W3690 CPU (6 cores, 12 threads) 6 x 2GB Corsair Dominator DDR3 1600 (PC3 12800), Timing 9-9-9-24, Voltage 1.3V-1.5V (X.M.P.) If you have tested the memory in another system, then it is safe to say the memory isn't the problem. That is my assumption as well. I just wanted to hear more opinions on the subject before I start spending on the wrong item. Memtest test number 4 is done in parallel using all CPUs. I wonder if the issue is in the CPU. A Q&A site like Superuser really isn't the place to gather opinions though.
If you want opinions, there is our chatroom for that purpose. I'd start with the motherboard. It is the more likely of the two to experience failures over time. @FrankThomas thanks Frank, I will test the motherboard once I get home later today. I think I can find a working CPU to test with. The easiest way would be to test the CPU in another known-good system. If it passes, then the issue is likely your motherboard. Be sure to check other candidates if you haven't yet. Swap power supplies if you can, and also look for leaking or damaged capacitors on the motherboard. Poor power makes your computer unhappy. I changed the power supply within the last 6 months. Running an EVGA P2 1600W. I don't have another motherboard, but I think I can find a working CPU. @GTodorov still a good idea to check it. I had a new 600W PSU that did really weird things (like shutting down my PC at a certain point in Deus Ex: Human Revolution, but being able to get past that point if I rebooted). But I agree that it's more likely to be something else (i.e., the motherboard). I have another power supply, I will not ignore your suggestion and will test the PSU as well. I'm leaning towards the motherboard though. Hopefully it's not the PSU. I did some cleaning of the RAM lanes with CRC electric cleaner, and cleaned the modules and the CPU. I switched the Xeon W3690 CPU with a working i7-920, and DDR3 Crucial Ballistix Sport 1 x 8 GB. But the memory test returned errors again, so it's not the CPU and it's not the RAM. So I downgraded the BIOS to the official ASUS v1001 from FOONUS MOD BIOS v1102. Memtest86 has been running for almost 15 hours now with no errors. I will let it run for another 9 hours so it passes the 24-hour period, then I will install the Xeon W3690 CPU and do another Memtest. @GTodorov wow, wouldn't have suspected the BIOS! Might want to make a note of that for future readers. I checked the capacitors, front and back of the motherboard, and it looks like it just got out of the factory.
I do regularly clean the motherboard. I will keep you posted on the progress. But we're headed in the right direction. Thanks for your help! It was the CPU. For some reason the RAM fails test 4 of the memtest with this CPU. So weird! I didn't have this issue at all until now. I removed the Xeon W3690 and installed the i7-920. The memory is non-ECC. I'll get some ECC memory to test the Xeon with soon. But I'm glad that I was able to identify the problem. Thanks for all your help!
STACK_EXCHANGE
I have written about artifact provisioning before. Even though I have yet to publish in recent years, I have been keeping my artifact provisioning game up. This is because some conferences (ECOOP '23) now require artefacts at the time of (well, a week after) submission. This is not a bad approach per se, as it contributes towards reproducible results, in some form. Moreover, when reviewing artefacts in the past, I have had to download GBs worth of VM for what is essentially a very small program, or download GBs of data on top of that (this was related to external proprietary software). While one can think about what an artifact should demonstrate from a paper and how, in general I see two approaches that artefact evaluation committees (of which I recently co-chaired the APLAS'22 AEC) tend to recommend for packaging up results: - containers; and - virtual machines. My intuition tells me that containers are more of a devops/production-oriented approach to reproducing the working environment locally for deployment today, and that virtual machines are the librarian/conservancy approach to reproducing the working environment of the time. There are other aspects of artefact presentation (documentation and bundling) that I am keen on that I won't bore people with here. More to the point, this post highlights my current approach to artefact provisioning based on virtual machines. Of note, my approach doesn't apply to all circumstances. If one searches, one can find discussions/concerns over reproducing results that have been run on bare metal, or with very specific setups (cf. Google Borg, Amazon Dynamo, et cetera). This is a larger discussion various communities should have.

Use Virtual Machines and automate their provisioning.

I use HashiCorp's packer to provision VirtualBox machines. I have written about this before. The main thing to mention is that I have adjusted the scripts to be a bit more flexible.
A mistake recently was not giving each build a unique directory (the output dir was deleted when running packer with -force).

Use a Minimal OS for a small footprint.

Since my ECOOP artefacts from ECOOP'20, I use Alpine Linux for my base virtual machine. This leaves you with a starting box that is approx. 200MB in size, which is better than using stock Ubuntu LTS, which is minimally 2GB in size. An order of magnitude of difference when downloading/uploading on a home network! Further, the provisioning time is much quicker. I can go from an empty directory to a minimal box in the time it takes to brew a moka pot (approx. 2-3 minutes), or, realistically with today's gas prices, a french press…. When I used Ubuntu LTS it was, IIRC, 15-20 minutes. An order of magnitude of difference when addressing problems in scripts.

The New Thing: Use a Universal Package Manager when necessary.

Recently, when provisioning an artefact for submission, I was having to install Idris2 by hand. This takes time; for Idris1 this was, IIRC, around 20 minutes to install the required dependencies and compile/build. For Idris2, it is much quicker. Recently, I was pleased to discover that Alpine Linux has Idris2 in its own repository. This makes provisioning even quicker! Sadly, Alpine didn't have another tool I required: Verilator. I tried compiling it by hand, and this took around 40 minutes and a few tries, after realising that I had run out of virtual disk space and was missing dependencies. To resolve these issues, I ended up introducing Nix to my setup. Still wanting to be minimal, I declined to use NixOS (concerns about final box size), and I now recommend an Alpine+Nixpkgs solution. Using Nix, I was able to provision my artefacts (from an empty directory) in around 10-15 minutes, with the resulting images from recent artefacts being around 500MB and 600MB. Which is still better than 5GB.
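The pieces above fit together roughly as in the following Packer (HCL2) fragment. This is a sketch only, not my actual packer-idris configuration: the ISO URL, checksum source, sizes, and script names are placeholders.

```hcl
# Sketch: Alpine base box via VirtualBox, with a unique output
# directory per build and a Nix fallback for missing packages.
source "virtualbox-iso" "alpine" {
  iso_url          = "https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-virt-3.18.2-x86_64.iso"
  iso_checksum     = "file:https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-virt-3.18.2-x86_64.iso.sha256"
  guest_os_type    = "Linux26_64"
  disk_size        = 8192   # MB; leave headroom so builds don't run out of disk
  output_directory = "output-${formatdate("YYYYMMDDhhmm", timestamp())}"  # unique per build
  shutdown_command = "poweroff"
}

build {
  sources = ["sources.virtualbox-iso.alpine"]

  # install what Alpine's own repositories provide natively...
  provisioner "shell" {
    inline = ["apk add --no-cache idris2"]
  }

  # ...and fall back to Nix for tools Alpine lacks (e.g. Verilator)
  provisioner "shell" {
    script = "scripts/install-nix-and-verilator.sh"
  }
}
```

The unique `output_directory` avoids the mistake mentioned above, where re-running packer clobbered a previous build's output.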
You can find my packer scripts online: https://github.com/jfdm/packer-idris/ Thanks to gallais for addressing some niggles, and introducing a CI.
OPCFW_CODE
So we built a pretty sweet MVP and now we need someone to take it to a whole new level. An eclectic group of often unpaid but incredibly passionate folks helped us build this platform and bring it to the App/Play Store. We've had help from professional developers, students, civic tech experts and everyone in between. We won the '17 U.S. Conference of Mayors and a Knight Foundation grant, which has helped us get the platform up and running, and now we're ready to turn our successful pilot (only released in LA) into a robust platform. Who we are The Burg is the front page of your city. We are building a truly dedicated space for civic engagement, i.e. the community's platform. American democracy works best from the bottom-up, not top-down, and we built a space that reflects that. Politics isn't an Olympic sport; you can't just show up once every four years to vote. It's about staying abreast of what's happening and speaking up, and it's never been easier with The Burg. As a founding member of our team, you'll have a major impact not just on the functionality of the platform, but on the brand and vision of The Burg long-term. As CTO, you will be the lead technical person, tasked with managing our engineers and interns and working collaboratively to build out our product. We currently have a great backend engineer, and you will be leading at least 2-3 other developers. Our vision for what's next (technically speaking) at The Burg includes: moving to React Native so we can iterate more quickly on user feedback, finishing our web platform, building OCR-based register-to-vote software, and much, much more. Impact you will have: - We've all been oversaturated with endless critiques of politics.
Instead of merely listening to what's wrong, play an active role and join us to do something about it - This space is desperate for innovation and you'll be at the helm of a company doing just that - Create software that makes learning about local issues and voting much easier for folks - Build a tool that actually helps improve the day-to-day quality of life for people Skills and experience you possess: - Writing high-quality code that is modular, functional and testable - Writing code based on direct interaction and feedback from users, centered on designing spaces that bring value instead of just fueling addictive behavior - The ability to collaborate, keep an open dialogue with team members, and empower others to be creative - A willing listener who iterates on suggestions, consumer feedback, and internal ideas - You take the initiative to problem-solve - You often go out of your way to look for opportunities to improve team efficiency and work with your team to implement improvements Stack & Languages - React Native, Java, Objective-C, Swift - Sketch, Zeplin, Invision, Framer - Node.js, cakePHP - AWS, MySQL, Mongo Why work at The Burg? We have an amazing opportunity to improve a hugely important space (local news, local politics) that essentially operates in the same way it did 100 years ago. This is a space that is so damn ripe for innovation, but nobody has cracked it yet. If we can be the first, this will change our relationship to our cities and fundamentally alter the way people participate in politics. We're based in LA. All in our 20s. Diverse, scrappy, tenacious, and love the Lakers (actually, only the CEO). You'll be getting in on the ground floor and would receive a substantial chunk of equity along with a competitive salary. We're not a massive company with a water slide-- we're a small group of driven, excited, funny (self-proclaimed) people who believe we're on to something big and want to move fast.
If any of the above sounds like you, drop us a resume or note at email@example.com Let’s get it!
OPCFW_CODE
#include <iostream>
#include <string>
#include <vector>
#include <map>

#include "../../core/jsonParser.hpp"
#include "../../core/unit_test_abstract.hpp"

using namespace std;

namespace evias {
namespace core {
namespace test {
namespace jsonObjects {

class simpleParse : public unitTest
{
public :

    simpleParse () : unitTest() {};
    ~simpleParse () {};

    inline void prepare ()
    {
        setReturnCode((int) RETURN_SUCCESS);
    }

    inline int execute ()
    {
        jsonSingleEntry* single1 = new jsonSingleEntry("my_single1", "data");
        jsonSingleEntry* single2 = new jsonSingleEntry("my_single2", "dontknow");
        jsonSingleEntry* single3 = new jsonSingleEntry("my_single3", "one more..");

        jsonArrayEntry* array1 = new jsonArrayEntry ("greg");
        jsonArrayEntry* array2 = new jsonArrayEntry ("yan");

        vector<string> arrayOneData;
        arrayOneData.push_back ("Grégory");
        arrayOneData.push_back ("Saive");
        arrayOneData.push_back ("1988-08-29");

        vector<string> arrayTwoData;
        arrayTwoData.push_back ("Yannick");
        arrayTwoData.push_back ("Saive");
        arrayTwoData.push_back ("1983-12-10");

        array1->setData(arrayOneData);
        array2->setData(arrayTwoData);

        vector<jsonEntry*> entries;
        entries.push_back (single1);
        entries.push_back (single2);
        entries.push_back (array1);
        entries.push_back (array2);
        entries.push_back (single3);

        jsonObjectEntry* object = new jsonObjectEntry;
        object->setEntries (entries);

        string arrayJson = "\"my_array\":[\"key1\",\"array content\",\"should work\"]";
        jsonArrayEntry* staticArray1 = jsonArrayEntry::fromJSON (arrayJson);

        assertable<int>::assertEqual(staticArray1->getData().size(), 3);
        assertable<int>::assertEqual(object->getEntries().size(), 5);

        // free memory
        entries.clear ();
        delete single1;
        delete single2;
        delete single3;
        delete array1;
        delete array2;
        delete staticArray1;

        return setReturnCode((int) RETURN_SUCCESS);
    }

    inline int shutdown ()
    {
        return _returnCode;
    }
};

}; // end namespace jsonObjects
}; // end namespace test
}; // end namespace core
}; // end namespace evias
STACK_EDU
/* INCLUDES FOR THIS PROJECT */ #include <iostream> #include <fstream> #include <sstream> #include <iomanip> #include <vector> #include <cmath> #include <memory> #include <queue> #include <limits> #include <opencv2/core.hpp> #include <opencv2/highgui/highgui.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/features2d.hpp> #include <opencv2/xfeatures2d.hpp> #include <opencv2/xfeatures2d/nonfree.hpp> #include "dataStructures.h" #include "matching2D.hpp" using namespace std; /* MAIN PROGRAM */ int main(int argc, const char *argv[]) { /* INIT VARIABLES AND DATA STRUCTURES */ // data location string dataPath = "../"; // camera string imgBasePath = dataPath + "images/"; string imgPrefix = "KITTI/2011_09_26/image_00/data/000000"; // left camera, color string imgFileType = ".png"; int imgStartIndex = 0; // first file index to load (assumes Lidar and camera names have identical naming convention) int imgEndIndex = 9; // last file index to load int imgFillWidth = 4; // no. of digits which make up the file index (e.g. img-0001.png) // misc int dataBufferSize = 2; // no. 
of images which are held in memory (ring buffer) at the same time std::queue<DataFrame> dataBufferQ; bool bVis = false; // visualize results bool bFocusOnVehicle = true; cv::Rect vehicleRect(535, 180, 180, 150); // x, y, w, h bool bLimitKpts = false; // limit number of keypoints (helpful for debugging and learning) // counting variables std::vector<int> vehicleKeypointList; // keypoints and descriptors // string detectorType = "SHITOMASI"; // Classic Detectors -> SHITOMASI, HARRIS std::string detectorType = "BRISK"; // Modern Detectors -> FAST, BRISK, ORB, AKAZE, SIFT std::string descriptorType = "FREAK"; // BRISK, BRIEF, ORB, FREAK, AKAZE, SIFT /* MAIN LOOP OVER ALL IMAGES */ for (size_t imgIndex = 0; imgIndex <= imgEndIndex - imgStartIndex; imgIndex++) { /* LOAD IMAGE INTO BUFFER */ // assemble filenames for current index ostringstream imgNumber; imgNumber << setfill('0') << setw(imgFillWidth) << imgStartIndex + imgIndex; string imgFullFilename = imgBasePath + imgPrefix + imgNumber.str() + imgFileType; // load image from file and convert to grayscale cv::Mat img, imgGray; img = cv::imread(imgFullFilename); cv::cvtColor(img, imgGray, cv::COLOR_BGR2GRAY); // push image into data frame buffer std::unique_ptr<DataFrame> frame(new DataFrame); frame->cameraImg = imgGray; if(dataBufferQ.size() >= dataBufferSize) { dataBufferQ.pop(); } dataBufferQ.emplace(*frame); // ring buffer of size dataBufferSize: the oldest frame is popped above before the new one is pushed std::cout << "---- DataBuffer Size = " << dataBufferQ.size() << std::endl; //// EOF STUDENT ASSIGNMENT cout << "#1 : LOAD IMAGE INTO BUFFER done" << endl; /* DETECT IMAGE KEYPOINTS */ // extract 2D keypoints from current image vector<cv::KeyPoint> keypoints; // create empty feature list for current image if (detectorType.compare("SHITOMASI") == 0) { detKeypointsShiTomasi(keypoints, imgGray, bVis); } else if (detectorType.compare("HARRIS") == 0) { detKeypointsHarris(keypoints, imgGray, bVis); } else { detKeypointsModern(keypoints, imgGray,
detectorType, bVis); } // only keep keypoints on the preceding vehicle if (bFocusOnVehicle) { std::vector<cv::KeyPoint> vehicleKeyPoints; for(auto it = keypoints.begin(); it < keypoints.end(); ++it) { if((it->pt.x >= vehicleRect.x && it->pt.x <= (vehicleRect.x + vehicleRect.width)) && (it->pt.y >= vehicleRect.y && it->pt.y <= (vehicleRect.y + vehicleRect.height))) { vehicleKeyPoints.push_back(*it); } } keypoints = vehicleKeyPoints; std::cout << " Vehicle keypoint count = " << vehicleKeyPoints.size() << std::endl; vehicleKeypointList.push_back(vehicleKeyPoints.size()); } if (bLimitKpts) { int maxKeypoints = 50; if (detectorType.compare("SHITOMASI") == 0) { // there is no response info, so keep the first 50 as they are sorted in descending quality order keypoints.erase(keypoints.begin() + maxKeypoints, keypoints.end()); } cv::KeyPointsFilter::retainBest(keypoints, maxKeypoints); std::cout << " NOTE: Keypoints have been limited!" << std::endl; } // push keypoints and descriptor for current frame to end of data buffer dataBufferQ.back().keypoints = keypoints; std::cout << "#2 : DETECT KEYPOINTS Done." << std::endl; /* EXTRACT KEYPOINT DESCRIPTORS */ cv::Mat descriptors; // TODO: Add check for AKAZE keypoints when using AKAZE descriptor type descKeypoints(dataBufferQ.back().keypoints, dataBufferQ.back().cameraImg, descriptors, descriptorType); // push descriptors for current frame to end of data buffer dataBufferQ.back().descriptors = descriptors; cout << "#3 : EXTRACT DESCRIPTORS Done." 
<< endl; if (dataBufferQ.size() > 1) // wait until at least two images have been processed { /* MATCH KEYPOINT DESCRIPTORS */ bVis = true; vector<cv::DMatch> matches; string matcherType = "MAT_BF"; // MAT_BF, MAT_FLANN string descriptorType = "DES_BINARY"; // DES_BINARY, DES_HOG string selectorType = "SEL_KNN"; // SEL_NN, SEL_KNN matchDescriptors((dataBufferQ.front()).keypoints, (dataBufferQ.back()).keypoints, (dataBufferQ.front()).descriptors, (dataBufferQ.back()).descriptors, matches, descriptorType, matcherType, selectorType); // store matches in current data frame (dataBufferQ.back()).kptMatches = matches; cout << "#4 : MATCH KEYPOINT DESCRIPTORS done" << endl; // visualize matches between current and previous image bVis = true; if (bVis) { cv::Mat matchImg = ((dataBufferQ.back()).cameraImg).clone(); cv::drawMatches((dataBufferQ.front()).cameraImg, (dataBufferQ.front()).keypoints, (dataBufferQ.back()).cameraImg, (dataBufferQ.back()).keypoints, matches, matchImg, cv::Scalar::all(-1), cv::Scalar::all(-1), vector<char>(), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS); string windowName = "Matching keypoints between two camera images"; cv::namedWindow(windowName, 7); cv::imshow(windowName, matchImg); cout << "Press key to continue to next image" << endl; cv::waitKey(0); // wait for key to be pressed } bVis = false; } } // end of loop over all images // Print detector and descriptor comparison results std::cout << "Vehicle keypoint count : "; for(int i=0; i < vehicleKeypointList.size(); i++) { std::cout << vehicleKeypointList.at(i) << " "; } return 0; }
STACK_EDU
It is fully protected, secure, reliable recovery software. One of the main features of the recovery is the following. Recover My Files Crack Download Now Full Version. Characteristics of Recover My Files Pro 6: You can recover your lost data by using the Quick Recovery Method or the Complete Recovery Method. Recovery of lost data is based on the interpretation of file content, usually through the process of reverse engineering of a file type. It is fully safe, secure, reliable and swift data recovery software. When writing a keygen, the author will identify the algorithm used in creating a valid CD key. This recovery is done by figuring that out. Data loss can also be deliberate, especially when you think you would not need such files anymore but you eventually did. It has some unique features, like solid search and many other tools to recover. Extract the Recover My Files Key Crack archive. It also helps you recover all those files that were affected by some malware or virus, or deleted by an unexpected failure of your system or any other issue. This is a good application that will help you recover your important data in case it was accidentally deleted. And if you have a good tool for this job, you will be capable of recovering all your lost data; therefore, if you want a very high success rate for data recovery, you can use this application. Our downloads database is updated daily to provide the latest download releases on offer. Recover My Files Keygen has a simple, basic interface with all the necessary recovery tools to search for and recover data easily with one click. The recovery is based on interpretation of the file content, usually through the process of reverse engineering the data type. You can resubmit the selected files and folders, drives, or the entire string. Recover My Files Crack is the latest version of its kind. You may need some guidance to find your way around it. And that is the beauty of this software.
It also supports recovering many data formats, like documents, photos, video, music, and email. Yes, accessory-wise it may not be on a par with other data-recovery packages, but it performs as well as its alternatives. It is a suitable application that can recover files lost from the Windows Recycle Bin due to shutdowns, errors such as hard drive errors, an infection, system crashes, or deleted records. The user interface is very simple and easy. It will surely help you to know which files are possible to get back, without the recovery process being complex or difficult to operate. These options include Recover Files for deleted files, and Recover Drive to recover files from corrupt or formatted hard disks. It offers a very simple and efficient user interface. In addition, you will not face any worry using this tool. With a user-friendly interface, you can recover these files at any time. With this, you can scan your complete drive and choose to recover any of the files you have erased or lost by mistake. Now, download Recover My Files Crack with Torrent. It is for home and business users. Paste the files in the install folder. The second step gives you a preview of all deleted files, and at the end you recover all files, or only those files that you want to recover. Restoring files restores a single file specified by a user. You can select any folder or directory from your device to recover deleted data. This can take up to 24 hours, depending on the size of your hard drive. In this way, no further extensive scan is required; this recovery product gives you all the information needed to reproduce your data. These options consist of Recover Files for removed folders, and Recover Drive to recover documents from corrupt or formatted hard disks. It can be the result of an accidental deletion or formatting.
OPCFW_CODE
On Thu, 19 Nov 2009 13:52:53 +0100 Johnny Billquist <bqt%softjar.se@localhost> wrote: What I don't really understand is why you are creating that kind of scenario. Seems like you have designed things in a backward way. There is no way with pthreads to do what you want. You can wait on condition variables, but as you noted, this means an associated mutex, which must be the same for all threads. But what are you really doing which requires that all 256 (or whatever number) of threads need to be woken up and started at exactly the same time? Seems to me that this is where you maybe should rethink. Do you really have data for all 256 jobs ready at the same time, to be started at the same time? Have your worker threads each have their own condvar and mutex instead, and fire each one off as soon as enough data for that thread to work on is available. Or don't even create the thread until the data is there for it to work on, and don't do any waits at all. Obviously I don't know enough about your program to say how it can/should be done, but I can think of several alternative ways of doing most stuff, compared to trying to start 256 threads at exactly the same time. OK, let me elaborate a bit more. You have a powerful multicore/SMP system that supports a total of 256 parallel threads (which will be common in a few years' time). Say you have a large music file 'song1.wav' that you want to convert to 'song1.mp3'. This can also have other uses, e.g. data compression. You call the 'mmap()' function to map a big chunk (or the whole file) into your process' address space. Also let's assume that the pages for this file were cached in memory and are still valid. So, when mmap() returns, you have a large memory buffer that contains all the data you need, hence no need for slow disk I/O. Provided you have enough memory, what you would like to do now is allocate a large output buffer, and utilise 256 parallel threads to read the input buffer, convert the data into mp3 format and write it to the output buffer.
When data conversion is finished, you hand off writing the output buffer to disk (song1.mp3) to a slow I/O thread in the background, so that other threads can carry on with compute-bound tasks. The problem now is how to find the most efficient way to partition the input buffer into 256 segments and how to allocate jobs to 256 parallel threads. There are different methods you can use, i.e. have a job queue, assign one job at a time to the queue and then call one thread to process each job, etc. I was looking for a way to minimise the number of times you need to call pthread_cond_signal() or pthread_mutex_lock(). It doesn't make much difference on a 4-core CPU, but as the number of cores increases, the overheads of each function call will quickly add up. I thought a good way to start would be to assign a thread_id number to each thread. So each thread would have a thread_id from 0 to 255. Then all you do is provide a pointer to the input and output buffers and call pthread_cond_broadcast() to wake up all threads. Each thread would look at its thread_id and, based on that, read/write the input/output buffer only in the segment number that corresponds to its thread_id. This way threads don't collide and don't modify shared data. There are a lot of details to work out, but the biggest issue is the way in which pthread_cond_wait() is implemented. I think that calling pthread_create() each time for a parallel region will incur too much overhead, but then if I pre-spawn all threads that call pthread_cond_wait(), when woken up each of those threads will lock the mutex, preventing other threads from running until this mutex is released. The best workaround I came up with was to divide all threads into groups, so 256 parallel threads would be arranged into something like 32 groups of 8 threads per group. Then all 8 threads in a group would sleep on the same condition variable, minimising the 'thundering herd' problem. You would have to step through the 32 groups and broadcast to all 8 threads in each group.
Does this make sense, or am I doing something completely wrong?
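The thread_id-based buffer partitioning described above can be sketched in C. segment_bounds is a hypothetical helper name (not from the quoted discussion); it splits a buffer of total bytes into nthreads contiguous segments whose sizes differ by at most one, so each woken thread can compute its own slice from nothing but its thread_id, without touching shared state.

```c
#include <assert.h>
#include <stddef.h>

/* Compute the [start, end) byte range of the segment that the thread
 * with the given id should process. The first (total % nthreads)
 * segments get one extra byte, so the whole buffer is covered exactly
 * once and segment sizes differ by at most one byte. */
static void segment_bounds(size_t total, size_t nthreads, size_t tid,
                           size_t *start, size_t *end)
{
    size_t base  = total / nthreads;
    size_t extra = total % nthreads;

    /* Threads with tid < extra each absorb one extra byte, so the
     * offset of segment tid is tid*base plus min(tid, extra). */
    *start = tid * base + (tid < extra ? tid : extra);
    *end   = *start + base + (tid < extra ? 1 : 0);
}
```

After the broadcast, each worker would call segment_bounds(total, 256, my_tid, &s, &e) and read/write only bytes [s, e) of the input/output buffers, which is exactly the non-colliding access pattern described above.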
Over the past 30 years, we’ve grown used to data protection solutions that require backup agents to be deployed within an OS. Historically, no other options were available, and the available solutions were designed to protect mostly physical machines, at least in the x86 world. Most people define a backup agent (sometimes known as a backup client) as a piece of software provided by the data protection solution that performs the actual backup job from within the workload. This job usually comprises several tasks, including: - Preparing the workload for online backup, which involves quiescing the filesystem and running applications to put them in a consistent state. - Identifying new and changed data, such as blocks or files, since the previous backup operation. - Processing new data, which might include actions such as deduplication, compression, and encryption. - Transporting the backed-up data to a backup storage target. Backup agents are deployed, controlled, and maintained by a master server that tells each agent what to do. The master server also collects and stores information about the backed-up data, called metadata, and logs everything that happens. This is basically a client-server relationship that requires network communication between the backup infrastructure and each workload being protected. Here’s where things start to get complicated. Backup agents, even though proven to provide good capabilities, bring complexity at different levels: - Deployment: Agents need to be deployed to each protected workload. - Multiple agents: Depending on the type of data being backed up and the desired recovery capabilities, some workloads may require deploying and configuring several specialized agents. For instance, one agent might perform a system state backup to allow for OS recovery while another performs file and folder backup.
- Backup plan: Agents can require specific types of backup jobs that need to be configured, maintained, and scheduled separately, adding to the overall complexity. - Platform support: Backup admins must deploy the right version of the agents according to the version of the OS, applications, and filesystems. - Maintenance: Agents can have deep interaction with the OS, including components down to the kernel level. This requires the OS to be rebooted when an agent is installed or upgraded. Long Live Virtualization! Luckily, the 2000s saw an IT revolution with the rise of x86 virtualization. The IT infrastructure landscape started to change dramatically and we discovered new ways to protect data. This was particularly true when VMware released the vSphere Storage APIs – Data Protection (formerly known as VADP) in 2009, allowing third-party vendors to provide agentless data protection solutions to protect VMs. No need to deal with agents anymore, and all of the above pain points suddenly disappeared! Sounds like the ideal solution, doesn’t it? Despite being a game-changer, agentless backup is not the solution to everything. There are still use cases where agents can help solve specific challenges. An agentless backup, or an image-level backup, backs up the entire VM object, including the content of the virtual disks (OS, files and folders, applications) and the container with everything that describes it (name, unique ID, virtual hardware configuration). Therefore, a single-pass backup job is able to back up everything. However, the recovery granularity greatly depends on the data protection solution and its capabilities. In addition, everything in a given VM gets the same Recovery Point Objective (RPO) because all data within the VM will be backed up at the same time by the same backup job, inheriting the RPO from the job’s scheduling.
But for many companies, RPO requirements may not be the same for the OS system state, the files and folders within the VM, and the databases hosted in this VM. To solve these challenges, organizations must either select the same RPO for all the data in a given VM or still use some agents. Or they can choose a modern approach to backup and recovery. Meet Rubrik Cloud Data Management Let’s face it: the real problem is not with backup agents, it’s the complexity that the legacy ones bring. We also don’t live in a world that’s 100% virtualized, so agents are still a big requirement for many IT organizations. But what if we could use smart backup agents that remove all complexity and allow us to set RPOs for each protected data source, even if multiple types of data are hosted within the same machine, whether virtual, physical, Windows, Linux, AIX, or Solaris? At Rubrik, we leverage smart agents called Connectors, also known as Rubrik Backup Service (RBS). A connector is a lightweight service that can be deployed and updated automatically without rebooting the target OS. It can also interact with operating systems, file systems, and applications to provide consistency and granular backup and recovery, whether the workload lives on-premises or in the cloud. A good use case is a SQL Server hosting multiple databases with different levels of criticality. Some databases may require a 15-minute RPO with transaction log backups, whereas others may need to be backed up only once a day. In such a situation, users simply apply the corresponding SLA domains to individual databases, providing the desired RPO to each of them. This is just one example of how RBS can take the pain out of data protection for the modern enterprise. Agentless or not, backup should remain simple, yet flexible. What about you? Is your data protection strategy 100% agentless? To learn more, read our blog on Rubrik’s adaptive data consistency.
require 'date'
require 'yaml'
require 'open-uri'
require 'active_support/core_ext/hash/conversions' # provides Hash.from_xml

module Fxer
  class Fetcher
    class Ecb
      class << self
        #
        # download fetches the most recent data from the ECB URL if
        # today's data isn't already present in the user's chosen
        # directory (otherwise it aborts).
        #
        # After downloading the ECB data, it checks if the data contains
        # data not yet accounted for in the directory (otherwise it aborts).
        #
        # Then it saves that data to a new XML file in the user's
        # chosen directory.
        #
        def download
          set_data_parameters
          return true if abort_if_current(Date.today.to_s)
          fetch_data
          return true if abort_if_current(@date)
          save_data
        end

        private

        #
        # set_data_parameters fetches and assigns the ECB URL from config.
        # And it assigns the user's chosen rate directory, falling back to
        # the working directory.
        #
        def set_data_parameters
          config_path = File.join(Fxer::FXER_CONFIGURATION_PATH, "ecb.yml")
          @url = YAML.load_file(config_path)[:ecb_fx_rate_url]
          @dir = ENV['FXER_RATE_DATA_DIRECTORY'] || Dir.pwd
        end

        #
        # save_data to an XML file named after the @date
        #
        def save_data
          @path = File.join(@dir, "#{@date}.xml")
          puts "\tData found. Saving data for '#{@date}' to '#{@path}' ..."
          open(@path, "wb") { |f| f.write(@data) }
          puts "\tSuccess!\n\n"
        end

        #
        # fetch_data from the URL, and fetch the file's
        # most recent date from that data
        #
        def fetch_data
          puts "\n\n\tGoing to fetch data from '#{@url}'"
          @data = open(@url) { |io| io.read }
          @date = Hash.from_xml(@data)["Envelope"]["Cube"]["Cube"].first["time"]
        end

        #
        # abort_if_current takes one argument:
        # 1. date - a string representing the date a file is named for,
        # and returns a boolean of that file's existence.
        #
        def abort_if_current(date)
          return false unless File.exist?(File.join(@dir, "#{date}.xml"))
          puts "\n\n\tThe most recent data already exists in #{@dir}. Exiting ...\n\n"
          true
        end
      end
    end
  end
end
Add "·" MIDDLE DOT (U+00B7) support #4 *Note: this issue is copied from the old "twitter-text-conformance" repo. MIDDLE DOT (U+00B7) is widely used as inner-word punctuation in Catalan, a mandatory diacritical char in Catalan orthography rules. Currently Twitter doesn't allow using "·" in several places, so I request to improve its support in Twitter. I requested it in the Twitter support forum, without feedback. So, I request it here. If that's not the place, please report it to the L10N Twitter team. About 1 and 3 So, please, improve U+00B7 support in Twitter. Thanks in advance. I found a new bug related to U+00B7 and Twitter. Please, see this Tweet https://twitter.com/unjoanqualsevol/status/469148413486194688 There are 2 valid and registered URLs. Current Unicode UAX 31 cites 00B7 and its use in hashtags. Is there any improvement or roadmap for this issue? Apr 7, 2015 Twitter supports hashtags with middle dot (U+00B7), really good news :) There are some issues around middle dot support in URLs: Expected behaviour in all 3 cases is the same currently achieved with accented letters (à,ç,ñ...), i.e. autolinking working fine with L·L. Please note CMSs, like Wordpress, don't escape the middle dot, and there are many words in Catalan Wiktionary with L·L. See: http://ca.wiktionary.org/wiki/Categoria:Mots_en_catal%C3%A0_amb_eles_geminades Just to point one more example about autolinking URLs, see the following Tweet: But Twitter autolink breaks on the "·" U+00B7 char and splits the URL. Or, properly escaped if you copy it from the address bar of a modern browser. I think it's funny how people and messaging products are gradually giving But, in the case of the middle dot, I don't mind adding it. It is just a Is there a new RFC for what chars are allowed in URLs in the age of modern browsers? On Wed, Oct 7, 2015 at 12:08 PM, Joan Montané firstname.lastname@example.org Yeah! I know beyond-old-ASCII chars should be escaped but, as you point out, several web services (Wordpress, Twitter...)
generate URLs with such chars, so links become unusable, :( MIDDLE DOT (U+00B7) is used as an inner-word char for the Catalan language. According to Unicode UAX TR29 it's a MidLetter character in word boundary segmentation. So, it's unlikely that it's used as a URL terminator.
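The MidLetter behaviour discussed above can be illustrated with a small sketch. The pattern below is a minimal, hypothetical hashtag regex written for illustration only, NOT the actual twitter-text pattern (which is far more elaborate); the point is simply that admitting U+00B7 as an inner-word character lets Catalan words with the ela geminada (l·l) match whole.

```python
import re

# Illustrative hashtag pattern: a Unicode letter, then any run of
# letters or U+00B7 (MIDDLE DOT). [^\W\d_] is "word char minus digits
# and underscore", i.e. Unicode letters; U+00B7 is not \w, so it must
# be admitted explicitly, mirroring its MidLetter role in UAX #29.
HASHTAG = re.compile(r"#([^\W\d_](?:[^\W\d_]|\u00B7)*)")

def hashtags(text):
    """Return the hashtag bodies found in text."""
    return HASHTAG.findall(text)
```

With this pattern, hashtags("Parlem de #col·legi") returns ["col·legi"] rather than stopping at the middle dot, which is the behaviour the issue asks Twitter to adopt.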
Error Deleting Data Row in C# I have a DataGridView in C# showing results from a database query. I want to delete the data and this method is using a DataSet. This method executes when a button is pressed at some index. The code: DialogResult _result = MessageBox.Show("Do you want to remove the data?", "Warning", MessageBoxButtons.OKCancel, MessageBoxIcon.Warning); if (_result == DialogResult.OK) { int idx = dataGridInfo.CurrentRow.Index; this.getInfoFilmDS.sp_getInfo.Rows.RemoveAt(idx); MessageBox.Show("Data Removed", "Processing", MessageBoxButtons.OK, MessageBoxIcon.Information); } The call this.getInfoFilmDS.sp_getInfo.Rows.RemoveAt(idx); works only on the DataGridView, but the data itself is not deleted from the database. How do I resolve this? Thanks in advance. UPDATE I've changed the syntax to this: this.getInfoFilmDS.sp_getInfo.Rows[idx].Delete(); sp_getInfoTableAdapter.Adapter.Update(getInfoFilmDS); but it throws a runtime error: Invalid Operation Exception. Update requires a valid DeleteCommand when passed DataRow collection with deleted rows. Did you implement some method to delete on the database using ADO.NET? Where are you processing your MySQL command to delete the row from the database? You are not doing anything with the back end (MySQL) to delete the row. @felipe this is an automatically generated xsd file based on a stored procedure in the database. Are you binding some key from the database, like a primary key, to reference a particular row? @sajanyamaha no, the stored procedure itself JOINs tables, and the primary key is autoincrement. DataRowCollection.RemoveAt only removes the row from the DataTable; it does not delete the row (even if you would use a DataAdapter). Therefore you need to use DataRow.Delete: getInfoFilmDS.sp_getInfo.Rows[idx].Delete(); Note that you now need to use a DataAdapter to commit the changes in the DataSet to the database.
Update I've changed the syntax to this: this.getInfoFilmDS.sp_getInfo.Rows[idx].Delete(); sp_getInfoTableAdapter.Adapter.Update(getInfoFilmDS); but it throws a runtime error: Invalid Operation Exception. Update requires a valid DeleteCommand when passed DataRow collection with deleted rows. So you need to define the DeleteCommand, which is normally auto-generated from the DataSet when you select only one table; maybe you just have to reconfigure the TableAdapter and it will auto-create the delete command. Otherwise you have to create it manually: TableAdapter Overview Then, because it is from an automatic xsd, I couldn't find my DataAdapter. How do I solve that? There is an autogenerated namespace "xzyTableAdapters". There you have to look for the right TableAdapter for this table. Create an instance and use it to update the DataSet or DataTable or DataRow. It will commit all changes to the database (all rows with RowState != Unchanged). Okay, I've changed the syntax to this: this.getInfoFilmDS.sp_getInfo.Rows[idx].Delete(); sp_getInfoTableAdapter.Adapter.Update(getInfoFilmDS); @randytan: "It throws an error" - you have forgotten to tell us the error. Have you created an instance of the TableAdapter as I've told you? TableAdapter.Update is not static. Invalid Operation Exception. Update requires a valid DeleteCommand when passed DataRow collection with deleted rows. @Tim the problem is, the select statement is taken from a stored procedure; maybe that's why the DataSet does not autogenerate the DELETE command. Is there any possibility to create the DELETE statement, or do I need to create a query? Thanks. @randytan: It's easy to create a delete statement in the TableAdapter manually, just follow the tutorial. Okay, after research and asking @Tim for support, it turns out that the DataSet does not generate queries based on a stored procedure (please inform me if I'm wrong :D), so I need to generate another stored procedure based on the JOINed tables.
There are three tables in my MySQL db, and I created another stored procedure to delete the data. Then I assigned the DELETE stored procedure in the DataSet configuration and applied this code: try { int idx = dataGridInfo.CurrentRow.Index; this.getInfoFilmDS.sp_getInfo.Rows[idx].Delete(); sp_getInfoTableAdapter.Adapter.Update(getInfoFilmDS); MessageBox.Show("Data removed", "Process", MessageBoxButtons.OK, MessageBoxIcon.Information); } catch (Exception ex) { MessageBox.Show("Error Delete Data" + ex.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } It works like a charm.
You only need to do it once. Click "Finish" when you are satisfied with your selections. Moreover, if you double-click that jar file on a system that has a JRE installed, the Java launcher will be invoked automatically. You can see that only nine arguments are passed to the program at a time (in one loop). Update 07-Jul-2014: RoboVM is an interesting new open-source project that enables you to compile Java code down to native iOS executables. You need to: Set the project's main class. Consult the documentation of the shell that you are using for more information. https://netbeans.org/kb/articles/javase-deploy.html The xGrep window should open. Starting from version 8, the Oracle JDK includes the Java Packager tool, which can in particular prepare Java applications for deployment via Java Web Start. See Troubleshooting JAR File Associations below. It is also possible to compile Web applications running on Apache Tomcat. Note: If you find that the Swing Layout Extensions library has already been added to your project, this might be a result of you having opened the xGrep.java file in the IDE. Once an empty file argument is detected by the if statement (there are no further files to process), the loop is ended. A .jar file should be sufficient. Java Executable Jar You need C:\mywork> jar cvfm MyJarName.jar manifest.txt *.class cvfm means "create a jar; show verbose output; specify the output jar file name; specify the manifest file name." This is followed by the list of files to include (here *.class). How To Make Exe File In Java Using Eclipse Click on Configure/Options.
How do I create executable Java program? [duplicate] http://www.wikihow.com/Create-an-Executable-File-from-Eclipse and expand the Java node. How To Make Exe File In Java Using Netbeans In the early days of Java, the only way to execute a Java program on common PC hardware was to interpret the bytecode. How To Convert Java Program To Exe File Our file JavaHungry.java will look like this in Eclipse. The unofficial compatibility testing results for GNU Classpath claim that it includes most, but not all, of the JDK 1.4 API features as of Sep 2007 (the last time the tests were run). It may involve download and installation of the required version of the JRE and Optional Packages. In the Add Library dialog box, select Swing Layout Extensions and click Add Library. Use this command: jar -cvf [name of jar file] [name of directory with Java files] This will create a directory called META-INF in the jar archive. How To Create .exe File Of Java Program The nine arguments are represented inside the batch file by %1 through %9. Click on the newly created entry CreateJarFile in the left column under Tools. Exe4j There are also Java-based setup authoring tools enabling you to create cross-platform installations; those installations are essentially executable jars with platform-specific logic selected at run time. The locations in this list are separated by semicolons (;). Using the command prompt with specific commands: suppose we have an AWT/Swing based Java file, i.e. JavaHungry.java.
Jsmooth It runs on any OS, supports built-in code signing, iconifying and auto-updating, and it can optionally bundle the JRE in a very small (heavily compressed) package. –AntonyM Feb 20 '13 at The following are the steps for setting the PATH variable on a Windows XP system: Choose Start > Control Panel and double-click System. Packaging and Deploying Desktop Java Applications Contributed by Max Sauer and maintained by Patrick Keegan The project folder does not have to be in the same location as the source files that you are importing into the project. Price: GCJ and libgcj are open source (GPL) and therefore can be freely downloaded, modified and distributed. If there were more than nine arguments, you would need to execute the JAR file multiple times. Our aim is to convert this file from a .java extension to a .exe file. So in the computer directory it will look like C:\java\JavaHungry.class C:\java\manifest.txt So the following command needs to be executed to create the executable jar of the JavaHungry class file. If your application needs a console, write a batch file which would start it using the java launcher. There are two ways to create an executable jar in Java. If you want to convert this to .exe, you can try http://sourceforge.net/projects/launch4j/files/launch4j-3/ STEP7: Give the .xml file an appropriate name and click "Save". Calling the application from the command line. Now set up a project for your program, create a manifest file manifest.txt or copy and edit an existing one. For example: Main-Class: Craps This line must end with a newline.
In addition, this document provides information that you might need to configure your system (or which you might need to pass on to the users of your application). And yes, it does support JRE lookup and bundling, native launchers, and so on. If you know of any other interesting attempts to get rid of the JRE, please send them to me. We need to create a manifest file, i.e. "manifest.txt": Main-Class: JavaHungry Mandatory: the manifest file should end with a newline. An AOT compiler runs on the developer's system with no resource or compilation time constraints. Can't say I've used it myself, but it sounds like what you're after.
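The manifest requirement repeated throughout this page (the Main-Class line must end with a newline) can be demonstrated with a small shell sketch. JavaHungry is the example class name used in the text; the /tmp/jar-demo directory is my own choice, and the jar invocation is left as a comment because it needs a JDK on the PATH.

```shell
# Create a working directory and a minimal manifest for an executable jar.
mkdir -p /tmp/jar-demo
cd /tmp/jar-demo

# printf with a trailing \n guarantees the mandatory final newline;
# a manifest whose Main-Class line lacks it may be silently ignored.
printf 'Main-Class: JavaHungry\n' > manifest.txt

# With a JDK installed, you would then run (not executed here):
#   jar cvfm JavaHungry.jar manifest.txt *.class
```

After this, double-clicking JavaHungry.jar on a system with a JRE installed would invoke the Java launcher automatically, as described above.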
Is there a nonlinear optical material that absorbs visible light more strongly with increasing intensity, independent of its wavelength? One of the main shortcomings of digital image sensors is the quite "unnatural" behaviour for rendering highlights compared to film (and the human eye, I guess). Typically, with increasing intensity the image quickly clips, while film material shows a much smoother transition to the brightest point. My question is whether there is an optical material that would allow reducing the intensity of the brightest areas, thus performing an optical compression of dynamic range. So the absorption of light should ideally depend only on the intensity, not the wavelength, and grow with increasing intensity. I guess this is not so easy, because otherwise camera manufacturers would have included it in their products. But maybe someone has an idea. Perhaps a combination of different fluorescent materials absorbing visible light and emitting in the infrared spectrum (which could then be filtered)? I don't know of such a material, but in the case of a camera it wouldn't need to have the same properties over the whole visible range. There could be 3 different materials for the red, green and blue parts of a pixel, each only having the same properties over a smaller range of colors. Yes, maybe as part of a Bayer filter directly on the image sensor. I was thinking more about a filter solution to apply to an existing camera. May I add, being a photography fan and an optics physicist, that I do not think this to be a problem. With my photography I clearly make the distinction: if digital, meter for highlights; if film, meter for shadows. Nowadays technology is already at such a great place either way, that consumer cameras have >10EV of dynamic range for a single exposure, and that value has been increasing year after year as sensor technology evolves.
Of course at some point we will have 14+ EV sensors, but which technology will lead to that, I do not know. And another thing: for photography, nonlinear effects will be problematic as the light intensity is incredibly low. The quantum efficiency and low noise of current sensors are already amazing, and manufacturers take advantage of that, pushing users to expose for highlights since shadows will be low-noise, with enough information to be stretched. I strongly doubt it's possible, because (other than mechanical "light traps" like baffles) photon absorption is a quantum process. As you may know, there are a number of compounds which demonstrate significant nonlinear changes in absorptivity for a given wavelength. A 'saturable absorber' will have low transmissivity until the photon density reaches a certain threshold, at which point the material moves to an excited state and the transmissivity goes near to 1.0. Rarer are 'saturable transmitters', which start out transmitting but switch to blocking (low transmission) when excited. Because those processes depend on specific electron level transition energies, this approach cannot work for white light. I'm skipping the extreme case where thermal absorption leads to oxidizing (i.e. burning/charring), which I suppose you could look at as absorption increasing. How about a material where two-photon absorption can excite charge carriers into the (broad) conduction band? Yes, that's of course true; absorption will by definition only happen at certain wavelengths. And the effect might typically even be decreasing with increasing light intensity? @A.P., just from quickly reading a bit about two-photon absorption: the effect seems to be nonlinear in the intensity, but still decreasing with increasing intensity? @JosefR No, it will increase with intensity. The higher the intensity, the higher the chances of 2 photons hitting the nonlinear material at the same time.
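The intensity dependence under discussion can be made explicit. Using the standard textbook notation for two-photon absorption, with linear absorption coefficient \(\alpha\) and two-photon coefficient \(\beta\) (these symbols are my addition, not from the thread), the intensity \(I\) along the propagation direction \(z\) obeys

```latex
\frac{dI}{dz} = -\alpha I - \beta I^{2},
\qquad\text{so}\qquad
-\frac{1}{I}\frac{dI}{dz} = \alpha + \beta I .
```

The effective attenuation per unit length thus grows linearly with intensity, which is the behaviour the question asks for and matches the final comment: the probability of two photons arriving together rises with intensity.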
package visualiser.Controllers; import javafx.application.Platform; import javafx.fxml.FXML; import javafx.scene.control.Button; import javafx.scene.control.ListView; import javafx.scene.input.KeyCode; import javafx.scene.input.KeyEvent; import javafx.scene.layout.AnchorPane; import javafx.stage.Modality; import javafx.stage.Stage; import javafx.stage.WindowEvent; import visualiser.gameController.Keys.ControlKey; import visualiser.gameController.Keys.KeyFactory; import java.io.IOException; import java.util.HashMap; import java.util.Map; /** * Controller for the scene used to display and update current key bindings. */ public class KeyBindingsController extends Controller { private @FXML Button btnSave; private @FXML Button btnCancel; private @FXML Button btnReset; private @FXML ListView lstControl; private @FXML ListView lstKey; private @FXML ListView lstDescription; private @FXML AnchorPane anchor; private KeyFactory existingKeyFactory; private KeyFactory newKeyFactory; private Boolean changed = false; // keyBindings have been modified private Button currentButton = null; // last button clicked public void initialize(){ // create new key factory to modify, keeping the existing one safe existingKeyFactory = new KeyFactory(); existingKeyFactory.load(); newKeyFactory = copyExistingFactory(); initializeTable(); populateTable(); setKeyListener(); setClosedListener(); } /** * Sets up table before populating it. * Set up includes headings, CSS styling and modifying default properties. 
*/ private void initializeTable(){ // set the headings for each column lstKey.getItems().add("Key"); lstControl.getItems().add("Command"); lstDescription.getItems().add("Description"); lstKey.getSelectionModel().select(0); lstControl.getSelectionModel().select(0); lstDescription.getSelectionModel().select(0); // add CSS stylesheet once the scene has been created lstKey.sceneProperty().addListener((obs, oldScene, newScene) -> { if (newScene != null) { newScene.getStylesheets().add("/css/keyBindings.css"); } }); // stop the columns from being selectable, so only the buttons are lstKey.getSelectionModel().selectedItemProperty() .addListener((observable, oldValue, newValue) -> Platform.runLater(() -> lstKey.getSelectionModel().select(0))); lstDescription.getSelectionModel().selectedItemProperty() .addListener((observable, oldValue, newValue) -> Platform.runLater(() -> lstDescription.getSelectionModel().select(0))); lstControl.getSelectionModel().selectedItemProperty() .addListener((observable, oldValue, newValue) -> Platform.runLater(() -> lstControl.getSelectionModel().select(0))); } /** * Populates the table with commands and their key binding details. */ private void populateTable(){ // add each command to the table for (Map.Entry<String, ControlKey> entry : newKeyFactory.getKeyState().entrySet()) { // create button for command Button button = new Button(entry.getKey()); button.setMinWidth(120); button.setId(entry.getValue().toString()); button.setOnAction(e -> currentButton = button); // display details for command in table lstControl.getItems().add(entry.getValue()); lstKey.getItems().add(button); lstDescription.getItems().add(entry.getValue().getProtocolCode()); } } /** * Makes a copy of the {@link KeyFactory} that does not modify the original. 
* @return new keyFactory to be modified */ private KeyFactory copyExistingFactory(){ newKeyFactory = new KeyFactory(); Map<String, ControlKey> oldKeyState = existingKeyFactory.getKeyState(); Map<String, ControlKey> newKeyState = new HashMap<>(); // copy over commands and their keys for (Map.Entry<String, ControlKey> entry : oldKeyState.entrySet()){ newKeyState.put(entry.getKey(), entry.getValue()); } newKeyFactory.setKeyState(newKeyState); return newKeyFactory; } /** * Creates a listener for when a user tries to close the current window. */ private void setClosedListener(){ anchor.sceneProperty().addListener((obsS, oldS, newS) -> { if (newS != null) { newS.windowProperty().addListener((obsW, oldW, newW) -> { if (newW != null) { Stage stage = (Stage)newW; // WE is processed by onExit method stage.setOnCloseRequest(we -> { if (we.getEventType() == WindowEvent.WINDOW_CLOSE_REQUEST) { onExit(we); } }); } }); } }); } /** * Creates a listener for the base anchorPane for key presses. * It updates the current key bindings of the {@link KeyFactory} if * required. 
*/ private void setKeyListener(){ anchor.addEventFilter(KeyEvent.KEY_PRESSED, event -> { // if esc, cancel current button click if (event.getCode() == KeyCode.ESCAPE){ btnCancel.requestFocus(); currentButton = null; } // if a button was clicked else if (currentButton != null) { // check if a button is already mapped to this key for (int i = 1; i < lstKey.getItems().size(); i++) { Button button = (Button)lstKey.getItems().get(i); // update buttons text and remove key binding from command if (button.getText().equals(event.getCode().toString())) { button.setText(""); newKeyFactory.updateKey(button.getId(), button.getId()); } } // update text on the button currentButton.setText(event.getCode().toString()); // update the control key newKeyFactory.updateKey(event.getCode().toString(), currentButton.getId()); // remove current button selection currentButton = null; changed = true; btnCancel.requestFocus(); } event.consume(); }); } /** * Cancel and exits the key bindings menu. Changes are not forced to be * saved or fixed if invalid, and instead are defaulted back to the last * successful saved state. */ public void cancel(){ ((Stage)btnCancel.getScene().getWindow()).close(); } /** * Resets all key bindings to the built-in defaults. */ public void reset(){ lstKey.getItems().clear(); lstControl.getItems().clear(); lstDescription.getItems().clear(); newKeyFactory = new KeyFactory(); initializeTable(); populateTable(); changed = true; } /** * Replace existing {@link KeyFactory} with the modified key bindings. */ public void save(){ if (isFactoryValid()) { existingKeyFactory = newKeyFactory; newKeyFactory = new KeyFactory(); changed = false; existingKeyFactory.save(); // save persistently loadNotification("Key bindings were successfully saved.", false); } else { loadNotification("One or more key bindings are missing. 
" + "Failed to save.", true); } ((Stage)btnCancel.getScene().getWindow()).close(); } /** * Checks the {@link KeyFactory} being modified is valid and that no * commands are missing a key binding. * @return True if valid, false if invalid */ private Boolean isFactoryValid(){ for (Map.Entry<String, ControlKey> entry : newKeyFactory.getKeyState().entrySet ()) { if (entry.getKey().equals(entry.getValue().toString())){ return false; } } return true; } /** * Method used to stop a user from exiting key bindings without saving * their changes to the {@link KeyFactory}. * @param we {@link WindowEvent} close request to be consumed if settings * have not been successfully saved. */ private void onExit(WindowEvent we){ // if modified KeyFactory hasn't been saved if (changed){ loadNotification("Please cancel or save your changes before exiting" + ".", true); we.consume(); } } /** * Loads a popup window giving confirmation/warning of user activity. * @param message the message to be displayed to the user * @param warning true if the message to be displayed is due to user error */ private void loadNotification(String message, Boolean warning){ try { NotificationController nc = (NotificationController) loadPopupScene("notification.fxml", "", Modality.APPLICATION_MODAL); nc.setMessage(message, warning); } catch (IOException e) { e.printStackTrace(); } } }
M: Boutique medical services offer wealthy Americans the chance to cut the line - paulsutter https://mobile.nytimes.com/2017/06/03/business/economy/high-end-medical-care.html

R: pyrophane My concern is that this sort of approach to medicine is going to result in the "top specialist" tier of doctors spending more time treating complaints that don't really require their particular level of expertise. You probably don't need to see the world's best cardiologist if you just have high blood pressure, but if you are paying 40k a year for a concierge medical service they might feel the need to get you in front of him or her in order to justify their fee. Otherwise what are you paying for? Consequently this amazing heart doctor spends an hour telling you to eat better and exercise rather than evaluating someone's complicated and life-threatening condition.

R: Deregibus In theory though, the market should address that, right? e.g. a version of this that's $10k a year but can handle 10x the volume because you triage patients with competent nurses and PAs and save the expert's time for the cases where they can actually make a difference.

R: rev_bird Starting to sound an awful lot like the U.S. health care system...

R: Deregibus Yeah, I had the same thoughts when I was writing that. The key difference is that the current system is so convoluted that there is almost no meaningful market for consumers on the actual care side of things.

R: sbierwagen I like how the article says Private Medical "does not advertise", then immediately follows that with some glossy, staged press shots. They don't advertise, except for the advertisement they just ran in the New York Times.

R: atemerev The NYT, with advertisement slogans like "Truth. It has no alternatives."
(When I first saw it, I had to double-check whether I was reading some English translation of one of the "Pravda" issues in the USSR -- no, it was still the NYT.) Well, despite such bold claims, they often act as the principal venue for top-level PR firms. Increasingly so in the last few years -- I understand that the newspaper business is in deep trouble, and I understand their envy of Facebook and Twitter, but this sounds just a teeny tiny bit unethical to me.

R: dsr_ This is the sort of thing that ends up with rich people swinging from lamp posts. Not the fact that you can hire a private doctor. Taking one doctor out of circulation, more or less, doesn't enrage people. No, the offensive thing is that John Battelle's kid's broken leg is too important to be seen by a mere ER doctor, and instead the very senior orthopedics doctor is summoned away from dealing with serious problems to deal with a broken leg. How does the head of orthopedics live with himself? I'm sure he comforts himself with money. Battelle made his money by doing things, and hardly anyone begrudges him that. But his son now knows that his broken leg is more important than some grandmother's shattered hip, and is likely to keep that attitude as he rises to adulthood.

R: ice109 >This is the sort of thing that ends up with rich people swinging from lamp posts. i don't understand why smart-ish people keep repeating this. there will never be another revolution in a first-world (western) nation. it's just not happening.

R: djohnston why do you think so? i don't have strong convictions either way, but i don't think i'd count it out as an impossibility.

R: icelancer A lot of reasons, but one solid one is that there is a pretty strong anti-pattern in that the ones that desire wealth redistribution the most are the most likely to support gun control and/or the banning of private ownership of weapons.

R: atemerev Wealth inequality is not the only reason for revolutions.
There are also dictatorships, human rights abuses, corruption, disregard of laws for the elite, and other fine things that might warrant direct action.

R: icelancer While true, those values are highly correlated - at least in Americans, and most of the developed world - with people desiring lower access to guns for private citizens. Low access to force multipliers makes it much harder to spark and win a revolution against the state.

R: dr_ I don't personally have a problem with people paying extra to receive what they perceive to be better care. But I think if you look at outcomes, I doubt you will find that they have received better care by paying the extra 40k or whatever it is a year. It's great that the kid in the story was able to be seen by an established orthopedist in a city hospital - but most general orthopedists know how to set a kid's leg, and I doubt the outcome would have been much different if they had gone to the originally intended ER. It's certainly a good gig for the family practitioner, but I think the following quote at the end of the article: ""The traditional model of having a good internist is dying," said Mr. Traina, a scion of a prominent family here that arrived with the California Gold Rush." will apply to the concierge doc in the longer run as well. If you are reasonably healthy, checking your blood work once a year and your blood pressure every so often, you don't have much need for a family doc. If catastrophe strikes, then you will probably need a specialist - and if you're willing to spend the money, it's best to save it for these times - yes, your cash will get you in as easily as a concierge doctor can - rather than dishing out 40k annually. I recently chose a medical clinic that's part of a university system for my own healthcare, as opposed to a family doc who charges $1400 for an initial exam and $400 for follow-ups. Do they spend as much time with me? No, I'm in quick and out quick.
I get my shots and blood work and I'm out the door. Since I'm pretty healthy, that's the way I want it.

R: atemerev It is the most rational model. Unfortunately, as mentioned in other threads here, fully private, cash-in-hand medical practice is prohibited in many countries. I have to travel to Spain for my medical tourism needs. However, it is mighty inconvenient.

R: bmmayer1 It's unclear what the point of this article is supposed to be other than to stoke outrage at the 1% who get premium treatment while the rest of us proles suffer 'normal' healthcare. It's almost as if all of a sudden having money buys you nicer things, as it always has for all of history, including in the healthcare systems of every other nation. In developing countries like China, Russia, India, Saudi Arabia, there are entire parallel healthcare systems for the wealthy and connected to get high-quality care, and when such care isn't available, they travel to richer countries or hospitals that cater to foreigners in Bangkok, Seoul, Cleveland, Medellin, and any number of medical tourism destinations that have popped up. I've been to several of these hospitals and they are _nice_. In Seoul, at the Samsung Medical Center, hospital rooms for foreigners have one bed each, as opposed to four beds each for Korean nationals. Why? Koreans are paying less via subsidies and rich foreigners with weaker currencies are paying full fare. In universal healthcare countries with market economies like the UK and Canada, private clinics are the only way patients with money who want to avoid the limitations imposed by rationed care can get access. In Canada, instead of waiting an average 9.5 months(!) to replace a painful knee, you can go to Duval Orthopedic Clinic in Montreal to get a knee replacement for under $14,000 USD (compared to $49,000 in the US). In Hungary--this is a fun one--you can go to a terrible state doctor for free or pony up $50 US for a private doctor.
Most Hungarians who can afford it pay the cash, because they know they will get the quality for it. Even in the so-called universal medical system of Cuba, the political elite and their relatives have their own private hospital, whereas the rest of the population suffers substandard equipment, years-long wait lists, lack of medication and outdated facilities, and worst of all, doctors who spend half their time working side jobs as taxi drivers to pay the bills. The point is, markets will always step in to fill needs that aren't being met by state-run care--it's as predictable as the sun coming up. Often, the gap between rich and poor is worse in systems that try to do the most to eliminate it--Cuba being a prime example. What I found most interesting about this article--most likely unintentionally--was the hypocrisy of the Bay Area, the liberal mecca of the country, that would vote for universal healthcare in a landslide, also being the main driver of demand for premium private service, widening the gap between the rich and the poor that such a healthcare policy is bound to exacerbate.

R: blendo _In Canada, instead of waiting an average 9.5 months(!) to replace a painful knee, you can go to Duval Orthopedic Clinic in Montreal to get a knee replacement for under $14,000 USD (compared to $49,000 in the US)._ In the US, your _charge_ is likely to be $49,000, but I think many insurers have agreements to only pay 20-40% of that. Woe to you if uninsured in the US, though: they'll expect the full $49K. See David Belk's [http://truecostofhealthcare.net/](http://truecostofhealthcare.net/)

R: bmmayer1 True, and it also varies widely by region.

R: hn_throwaway_99 I think a fair question is whether all that additional expense results in better outcomes. I certainly understand the experience is better for patients, and there is little or no wait time, but is there evidence that health outcomes are actually better? I wonder if it's like the difference between taking coach vs.
a private jet. The private jet is more convenient and comfortable, but safety ratings are considerably better on major airlines.

R: sologoub I think another way to look at this is minimizing risk/resulting guilt. The broken leg example seems to support that - the father is paying for the best possible care for his son; even if the outcome is the same (leg heals properly) as with the ER doctor, the father has absolved himself of the risk that he didn't do enough and allowed the leg not to heal properly. In other words, the (I'm guessing) relatively slim chance there is a complication with the broken leg is not worth it to the wealthy father to take.

R: hn_throwaway_99 > In other words, the (I'm guessing) relatively slim chance there is a > complication with the broken leg is not worth it to the wealthy father to > take. In medicine, though, there have been a lot of recent studies showing that too much care can be harmful (especially in the orthopedic realm). Sometimes the best thing to do really is nothing, but when you're paying 40k a year, my guess is you, and more importantly your doctors, would rarely take that point of view.

R: skolos Doesn't this exist in almost every country? There is always a market for the rich to get better health care.

R: ThomPete No. It's really as simple as that. Only the US allows for this. Edit: To all the down-voters. The article isn't about private healthcare, which in the US is more the rule than the exception. It's about a layer on top of private healthcare. Of course private healthcare exists in other countries. But this concierge layer for "normal" rich people you will be hard-pressed to find in many other places, I am pretty sure (but could be wrong of course). Unless you are a politician or royal or something.

R: kcorbitt While there may be some countries that disallow private healthcare entirely, the international situation is far more nuanced than your comment implies.
For example, I lived in Spain, where most people use the highly functional public health system but there is a parallel private healthcare industry that is also very effective (and still reasonably priced, compared to private healthcare in the US).

R: ThomPete Yes, private healthcare systems of course exist in other countries. But the article is, as far as I understand it, about something else. It's about a layer on top of the already existing private healthcare system in the US.

R: jbandela1 What is interesting is that, anecdotally, in my experience and that of those I talked to, the VIP patients in VIP rooms tend to have worse outcomes. It may be because there is a tendency for VIPs to demand that something be done and that it not be uncomfortable. So they may go through more tests and procedures for every little complaint and maybe get more pain meds, which ends up contributing to worse outcomes. In addition, the doctor may be reluctant to say "no". For the ultimate example of this, see Michael Jackson. Finally, teaching hospitals provide some of the best care in the country. So because of that, a VIP may go there. However, many of them don't want to be seen by residents and medical students. They want to be seen by the famous department chair. Well, in many cases the department chair may be good at research and politics and fundraising, but may not be the best actual doctor. In addition, they may be years removed from actual day-to-day patient management, relying on the residents to manage the details. You can imagine how things can go wrong.

R: paulddraper And? You can always get better, faster service if you pay more. Disney World, health care, cars, political favors, nutrition, hospice, climbing Everest, prosthetics, cleaning staff, coaches, going to outer space, etc. I wouldn't expect health care for the proletariat to be somehow different from the rest of our brief mortal existence. Somehow, it has seemed recently to gain a special status all to itself.
--- "Wish not so much to live long as to live well." - Benjamin Franklin

R: backtoyoujim You are equating health care with going to Disney World?

R: fsckin Pay more and cut the line all day long, access to exclusive restaurants, private meet-and-greets with characters, etc.

R: brightball MDVIP. Pay an annual fee in exchange for the doctor keeping a reduced patient load. It's reasonably affordable without being velvet-rope wealthy and gives you a much more streamlined care experience.

R: watertorock As usual the elite can afford the best and will get it.
Microsoft Silverlight is a web application framework that provides functionality similar to Adobe Flash, integrating multimedia, graphics, animations and interactivity into a single runtime environment. It is compatible with web browsers running on Microsoft Windows, Linux (via Novell Moonlight), and Mac OS X.

Printing Support in Silverlight 4 - Silverlight 4 ships with a number of new features, one of which is support for printing. In this article, we will see how to use the printing API in Silverlight 4.

Create Silverlight Application using SharePoint 2010 - In this article, we will see how to create a Silverlight application using a SharePoint project in Visual Studio 2010.
Organising and Designing Quantitative Data 3. Choosing the right tools 3.3 Spreadsheet or Database? If you are only creating a small amount of quantitative data, or its structure is very simple (especially if it is primarily numeric) and will fit comfortably into a single table, then a spreadsheet will probably serve your needs, and it will not be as time-consuming to learn or set up as a database. When to Use a Spreadsheet Characteristics of your sources or research that might make a spreadsheet a more appropriate choice: - your sources already resemble spreadsheets - ie, they are in a regular tabular or list format - your sources consist of mainly numerical information - your sources are already aggregated information (even if not yet digital), suitable for statistical analysis without significant intermediate processing - you do not need to link together different sources - you are not creating large amounts of data European State Finance Database - "an international collaborative research project for the collection, archiving and dissemination of data on European fiscal history across the medieval, early modern and modern periods." The database contains a range of aggregated tabular data deposited by researchers, which can be downloaded in CSV text format as well as viewed in graphical forms on the website. 1831 Census Data - downloadable datasets with accompanying documentation, made available by the Staffordshire University Victorian Censuses project. Again, these are aggregate data ideal for a spreadsheet. 
When You Need a Database On the other hand, if several of the following apply, you probably need a database: - you are compiling data from varied sources that you will have to aggregate for statistical analysis yourself - you are collecting data from related sources that you will want to be able to cross-reference and link together - your sources are mainly text rather than numbers - your sources are too complex to fit into a simple flat table - you will be creating a lot of data In any case, if you think your research is outgrowing the spreadsheet format, spreadsheet software should normally have convenient facilities to export your data at any time; conversely, once you have data in a database, you will have options to compile it into subsets of aggregate data for analysis in a spreadsheet. The Old Bailey Online - a database of reports of nearly 200,000 trials held at the Old Bailey in London between 1674 and 1913. Not only is this a very large dataset, but trials are complex sources for quantification. They may contain multiple defendants, charges, verdicts and punishments - "many to many" relationships. Subsets of the data, however, can be generated in tabular format for spreadsheet analysis using the site's Statistical Search. Family Reconstitution Data, from Cheapside parish registers, c.1540-1710 - a relational database created by the People in Place project. Family reconstitution is a technique used by demographic historians using parish register data between the 16th and 19th centuries, which involves "linking series of births, marriages and burials in the same family and comparing the results across thousands of families" to generate data on long-term demographic trends. Already using a spreadsheet? If you answer yes to several of these questions, you probably should consider switching to a database. - Are you duplicating a lot of data in spreadsheets? - Are you having to make changes across multiple spreadsheets when you change one of them? 
- Are your spreadsheets becoming unwieldy from trying to manage too much information? - Are you finding it difficult to locate specific data because of the size of your spreadsheets?
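To make the migration path concrete, here is a minimal Python sketch (the parish names and figures are made-up sample data, not from the projects above): a flat CSV of the kind a spreadsheet exports is loaded into a SQLite table, and a subset of aggregate data is then compiled back out for spreadsheet-style analysis.

```python
import csv
import io
import sqlite3

# Stand-in for a CSV exported from a spreadsheet (made-up sample data).
csv_text = """parish,year,population
Cheapside,1831,1200
Walbrook,1831,860
"""

# Load the flat CSV into a relational table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE census (parish TEXT, year INTEGER, population INTEGER)")
for row in csv.DictReader(io.StringIO(csv_text)):
    conn.execute(
        "INSERT INTO census VALUES (?, ?, ?)",
        (row["parish"], int(row["year"]), int(row["population"])),
    )

# Compile a subset of aggregate data for analysis back in a spreadsheet.
totals = conn.execute(
    "SELECT year, SUM(population) FROM census GROUP BY year"
).fetchall()
print(totals)  # [(1831, 2060)]
```

Once the data lives in tables like this, cross-referencing related sources becomes a join rather than a copy-paste exercise across spreadsheets.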
Can't find boost with VS2017. CMake's FindBoost looks for libraries tagged with -vc150- when configuring for VS2017, while our Boost always uses the vc140 tag (so I guess it compiles itself with VS2015). This is still the case with the 1.63 update. I'm not familiar with Boost's buildsystem, but maybe it needs to be explicitly told which toolset should be used?

Sigh. It looks like we have two issues here; first, we need to fix the boost portfile to build with VS2017. Second, we need to make the user's find_package(Boost) work regardless of whether Boost was built with v140 or v141 (the official term for VS2017's default toolchain). As much as we've tried to avoid it, I suspect the only way to properly handle Boost is to special-case it in our CMake toolchain file and override the configuration variables for the consuming project. We've been avoiding special treatment on a per-library basis here, but I don't see much of an alternative. In this particular case, it looks like the best option is to set Boost_COMPILER to "-vc140" and to always rename the output .lib files to look like they used the vc140 toolset. The DLL names will remain unchanged, however, so they can still sit side-by-side at deployment time for users who are relying on that behavior. This would correctly enable the cross scenarios of v141 boost -> v140 project and v140 boost -> v141 project.

Related boost issue https://github.com/boostorg/build/issues/157

> This would correctly enable the cross scenarios of v141 boost -> v140 project and v140 boost -> v141 project.

Should those scenarios be supported at all? AFAIK major versions of MSVC are not guaranteed to be compatible. How is this done for other libs? IMO the libs should be separated per compiler, as is done with the triplets for platforms.

> AFAIK major versions of MSVC are not guaranteed to be compatible.
v141 is guaranteed to be backward-compatible with v140. Yes, but 2013 and 2015 aren't compatible, so this structural problem will have to be solved some time. For totally incompatible toolchains (for example, v120, v140, and clang-android) we can handle those with entirely separate triplets. For example, x86-win-v120 and x64-win-v120-static. This ensures that you can, within a single vcpkg enlistment, make a functional zlib available for every version of VS you use and for every compilation target. The rationale behind merging v141 and v140 is that they are completely compatible with each other, so there's little benefit to segregating them by default. We do have a highly-experimental-undocumented option [1] that can be added to your triplet file to try to force the v140 toolset even if you have v141, but it doesn't seem worth investing in making it "production quality" so far. [1] If you want to play with it (note: will change at any time), it's set(VCPKG_TOOLSET_VERSION v140) in a triplet file.

Ok, I've checked in the first half of the above change with b2b2c91. Boost_COMPILER will now be overridden to be -vc140 for all packages and all projects built using our toolchain file. I confirmed this fixed cpprestsdk, and I'm currently confirming that it fixes #626 [1]. I have the second half of this prepped locally (renaming the -vc150 libs to -vc140); however, we currently aren't building boost using VS2017, so I have not been able to test that yet. I'm seeking to simultaneously add 2017 builds for boost as well as adding the renaming, which should prevent regression here. That said, the fix in b2b2c91 does resolve the original issue posted here, so I'll close this issue. Please let me know if you find a package that is still not correctly locating the boost libraries.
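For reference, the experimental override mentioned above would live in a custom triplet file. A minimal sketch (the file name is hypothetical, and VCPKG_TOOLSET_VERSION is explicitly undocumented and may change at any time):

```cmake
# x86-windows-v140.cmake -- hypothetical custom triplet file
set(VCPKG_TARGET_ARCHITECTURE x86)
set(VCPKG_CRT_LINKAGE dynamic)
set(VCPKG_LIBRARY_LINKAGE dynamic)

# Experimental, undocumented knob discussed above: force the v140
# toolset even when VS2017 (v141) is installed. Subject to change.
set(VCPKG_TOOLSET_VERSION v140)
```

Installation would then presumably be invoked with that triplet, e.g. `vcpkg install boost --triplet x86-windows-v140`.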
Boost libraries will be named -vc140 for VS2017 (at least for now) https://github.com/boostorg/config/blob/d3c1db5436a2fb4682e15590f2d344d749579b31/include/boost/config/auto_link.hpp#L164-L167 PR about that https://github.com/boostorg/config/pull/110 Thanks for communicating with the boost project, @KindDragon!
Re-Imagining the Bottom Navigation Bar

Inspired by Suchit Poojari's article, Redesigning the bottom navigation for 2020, I have decided to re-evaluate the way we treat bottom navigation.

Bottom navigation iterations of Facebook

The bottom navigation bar breaks an interface into its core components and allows users to quickly and easily toggle between high-level functions. Its easily accessible and comfortable location makes it incredibly pervasive in mobile applications.

Photo from UX Matters

While the bottom navigation bar is incredibly useful, the sheer diversity in device size, shape, and edges makes it very difficult to design a uniform bottom navigation bar.

Diversity of Phone Sizes and Edges

As Poojari puts it, the problem "that designers, as well as developers, would/are facing is the different corner radius and bottom chin of the devices." With all of this in mind, how might we redesign the bottom navigation bar to accommodate the increase in device diversity? I created 6 different iterations of what the future of the bottom navigation bar could be. I used the Facebook app when creating my mockups as it is commonly used and has 4 key functions (generally navigation bars have between 3 and 5).

The Vertical Pop Up Side Navigation

The Vertical Pop Up Side Nav relies on a single icon to show more options. It can be fixed to either the right or left side of the device depending on which hand the user prefers.

Vertical Pop Up Side Navigation

The floating icon would change depending on the page the user is currently on. For example, if the user is on Notifications, the floating icon would change to the Bell icon. Its fixed location slightly above the bottom edge accommodates any device type.

The Horizontal Pop Up Side Navigation

The Horizontal Pop Up Side Nav also relies on an icon to open up more options. The open nav does not span the full screen, to accommodate all device types.
Horizontal Pop Up Side Navigation

This iteration provides slightly more space to view content and is similar to existing interaction patterns. This should make it easier for new users to adjust to this navbar.

Reduced Horizontal Navigation

This design is based on Pinterest's beta version navigation. The interaction patterns are the same, but the goal of this iteration is to show that their concept could be applied to other applications.

Reduced Horizontal Navigation

This design condenses the navigation more toward the center, which accommodates any device size. When a user scrolls down, the navigation disappears, allowing them to see more content. When they scroll up, the navigation bar re-appears and users can dive into any function.

Center Navigation Drop Down

This design relies on a centered icon to display more options. Upon pressing the icon, the entire navigation window opens and is fixed on the scroll.

Center Navigation Drop Down

Having one icon slightly above the rest is a salient design choice to show the user where they are on the interface. From a development/design standpoint, it fits slightly more content on your navigation horizontally. The icon in the center is relatively small, so it shouldn't distract too much from the page content.

Diamond Navigation

The diamond navigation bar has a central icon that folds out, similar to the way people navigate with a game controller. While this interaction pattern is slightly less space-efficient, it does easily divide up the content.

Diamond Navigation

This iteration is more playful, which could make it useful for entertainment applications. For apps like Facebook or Instagram, I am not sure how productive it would be.

Condensed Fixed Navigation

This navigation option is most similar to the traditional bottom navigation…
The world of cybersecurity is rapidly evolving and growing in importance. As we dive deeper into a world where technology controls everything through the IoT, and massive amounts of data are collected and stored to be processed when needed, cybersecurity becomes even more of a necessity than it already is. It was reported that during the last year, 54% of companies experienced at least one successful attack that managed to compromise data and/or IT infrastructure. The technology attackers use is also rapidly evolving: they are migrating from good-old-fashioned file-based attacks to fileless attacks. Moreover, experts are concerned that attackers might start using AI and ML to automate some of their tasks, like data collection, and create even more threatening and powerful attacks. So if you want to break into the world of cyber warfare and become a cybersecurity specialist, what skill set do you need? Today we'll cover the skills needed to dive into the cybersecurity world. Two types of skills make up any career, so we'll break cybersecurity down into its soft-skill and technical components.

- Analytical skills: It goes without saying that analytical skills are crucial to the job. You need to be able to make sense of any information and see beyond what the naked eye can see. As experts say, you have to think like the bad guy: get into his shoes and search for where there might be a vulnerability. You'll most likely be getting a stream of information and data that you'll have to utilize and make sense of, which is why sharpening your analytical skills is a must in this field.
- Teamwork and collaboration: This is an important skill set anywhere in the tech world, but it is extremely important in cybersecurity.
No matter how good you are at a career like cybersecurity, certain attacks will elude you or leave you confused. This is why you need more than one pair of eyes on a team.

- Time management: Time management is important in cybersecurity because you have to meet your clients' deadlines and also respond quickly to changes in the field. If something like the Meltdown and Spectre vulnerabilities arises in a technology your company is using, how rapidly are you going to respond to that change?
- Project management: Project management is the last of the soft skills we'll cover, and in cybersecurity it is different: as a project takes on months or years of development, you'll need to integrate your solutions and manage maintenance and upgrades. Cybersecurity managers are in demand as security standards rise. Some people predict that it will get really hard for companies to comply with the General Data Protection Regulation (GDPR), so managers who can successfully plan and manage their projects are much needed in the field.
- Understanding the tools: There are many tools in the field; however, most companies have a set-it-and-forget-it approach toward security, which is by no means a sane approach for today's evolving threats. The tools provide you with a 50-thousand-foot view of everything on your network, allowing you to be the master of your entire communications and foresee any coming threat. Not all vulnerabilities come from the end-user side, so the company's side has to be well secured using the valuable tools at hand.
- Understanding the technology: A great cybersecurity specialist will have a firm understanding of the technology he's using, its components, and even how it came to be.
This deep understanding of technology allows you to see how these components can formulate a threat, and which parts of your solutions or services are exposed to the current technology and might need some covering. This deep understanding is what allowed air-gap vulnerabilities to be discovered. Can you imagine that a computer that is not connected to any network could still be hacked? It's quite counter-intuitive and defies our logic. But understanding how computers work, how the binary ones and zeros emit different electromagnetic levels, and how those readings can be taken from 6 meters away revealed a vulnerability that defies logic.
- Data science and data engineering: This is the technical side of the analytical skills we talked about earlier. Many experts today use machine learning algorithms and AI, leveraging the power of the tools at hand to collect as much data as possible and utilize it. What's the use of having that mass of data without using it? Automation is certainly a feature of the industry, but it won't help much if you can't keep up with it. This is why data science, AI, and cybersecurity are closely intertwined fields.
- Scripting: This is probably the first thing that jumped into your mind when you heard the word skills, and yes, it is a requirement of the field. Scripting allows you to interface with many of the tools you're using. Most cybersecurity specialists prefer Python, but there are many alternatives out there; young developers also tend toward Node.js tutorials these days to outpace the competition. Scripting languages are usually easier to write code with, as they provide a simpler syntax and faster development time, although that separating line is dimming and the word scripting often refers to what the language does rather than how it's written. You don't want to spend time trying to understand why the code isn't running, or writing several lines of code to do a simple thing.
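As a purely illustrative example of the kind of glue script described above, here is a short Python sketch that tallies failed SSH logins from auth-log lines. The log format, the function name, and the sample data are all assumptions invented for this illustration, not taken from any particular tool:

```python
# Hypothetical example: count failed SSH login attempts per source IP.
# The log format below is an assumption for illustration only.
from collections import Counter

def count_failed_logins(lines):
    """Return a Counter mapping source IPs to their failed-login counts."""
    hits = Counter()
    for line in lines:
        if "Failed password" in line:
            parts = line.split()
            # Assumes the token right after 'from' is the source IP.
            if "from" in parts:
                hits[parts[parts.index("from") + 1]] += 1
    return hits

sample = [
    "Jan 10 12:00:01 host sshd[101]: Failed password for root from 203.0.113.5 port 22 ssh2",
    "Jan 10 12:00:03 host sshd[102]: Accepted password for alice from 198.51.100.7 port 22 ssh2",
    "Jan 10 12:00:05 host sshd[103]: Failed password for root from 203.0.113.5 port 22 ssh2",
]
print(count_failed_logins(sample))  # Counter({'203.0.113.5': 2})
```

A dozen lines like these can replace hours of manual log reading, which is exactly why scripting is listed as a core skill.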
Cybersecurity careers are going to be needed everywhere, not only in the tech industry. Banks, hospitals, schools, governmental entities: almost every industry is going to need a cybersecurity specialist a few years from now. The field is currently short of talent, and there will be even more openings in the future. If you're looking for a career shift, this might be where you want to head, especially if you're passionate about the idea of cyber warfare and security.
What is a SAMI file?
A file with the extension ".sami" typically refers to a Subtitles And Metadata Interchange (SAMI) file. SAMI is a captioning format used for displaying subtitles or closed captions in videos. It was originally developed by Microsoft for their Windows Media Player. SAMI files contain timing information and text content for subtitles or closed captions, allowing them to be synchronized with video playback. The format supports basic formatting options such as font style, color, and positioning of the subtitles on the screen. SAMI files are usually plain text files and can be opened and edited using a text editor. The contents of a SAMI file typically include a header section that provides information about the file, followed by individual subtitle entries with their respective timing and text. Here's an example of what a SAMI file might look like:

<SAMI>
  <HEAD>
    <TITLE>Example Subtitles</TITLE>
  </HEAD>
  <BODY>
    <SYNC Start=100><P Class=ENCC>Subtitle 1</P></SYNC>
    <SYNC Start=500><P Class=ENCC>Subtitle 2</P></SYNC>
    <SYNC Start=1000><P Class=ENCC>Subtitle 3</P></SYNC>
  </BODY>
</SAMI>

SAMI files are commonly used in conjunction with video players or media players that support subtitle display, such as Windows Media Player or VLC Media Player. The player reads the SAMI file and synchronizes the subtitles with the video content, allowing viewers to read the captions while watching the video.
Supported HTML tags
SAMI (Subtitles And Metadata Interchange) files support a limited set of HTML-like tags for formatting and styling the subtitles. Here are some of the commonly used tags supported by SAMI:
<SAMI>: The root element that encapsulates the entire SAMI file.
<HEAD>: Contains header information for the SAMI file.
<TITLE>: Specifies the title of the SAMI file.
<BODY>: Encloses the subtitle entries and their timing information.
<SYNC>: Represents a synchronization point for a subtitle entry. It specifies the timing at which the subtitle should be displayed.
<P>: Encloses the actual text content of a subtitle. It is typically used within a <SYNC> block.
<FONT>: Defines font properties for the enclosed text. Attributes like Color, Face, Size, and Style can be used to modify the font appearance.
<BR>: Inserts a line break within a subtitle.
<B>: Renders the enclosed text in bold.
<I>: Renders the enclosed text in italics.
<U>: Renders the enclosed text underlined.
<C>: Specifies the position or alignment of the subtitle text on the screen. It supports attributes like Center, Middle, Left, Right, Top, Bottom, and their combinations.
<LANG>: Specifies the language code for the subtitle text. It helps in identifying the language of the subtitles.
These are some of the basic tags supported by SAMI files. It's important to note that SAMI does not support the full range of HTML tags and attributes. The supported tags are primarily focused on styling and positioning the subtitles rather than providing extensive document structuring or interactivity.
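Because each subtitle entry follows the same <SYNC Start=...><P ...>text</P> pattern, the timing data can be extracted with a small script. The following Python sketch is an illustration only; it is not an official SAMI parser and handles just the simple shape shown in the example above:

```python
import re

# A minimal SAMI document in the shape shown earlier in this article.
SAMI_DOC = """<SAMI><BODY>
<SYNC Start=100><P Class=ENCC>Subtitle 1</P></SYNC>
<SYNC Start=500><P Class=ENCC>Subtitle 2</P></SYNC>
</BODY></SAMI>"""

def parse_sami(text):
    """Return a list of (start_milliseconds, subtitle_text) pairs."""
    pattern = re.compile(r"<SYNC\s+Start=(\d+)>\s*<P[^>]*>(.*?)</P>",
                         re.IGNORECASE | re.DOTALL)
    return [(int(ms), body.strip()) for ms, body in pattern.findall(text)]

print(parse_sami(SAMI_DOC))  # [(100, 'Subtitle 1'), (500, 'Subtitle 2')]
```

A real player would also honor the styling tags listed above; this sketch only recovers the timing and text.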
[10:24] <airurando> the ubuntu-ie website is filled with spam postings [10:25] <airurando> can it be taken down as in it's current state it reflects badly on us [10:27] <czajkowski> airurando: there is spam on the site? [10:27] <czajkowski> wow [10:27] <czajkowski> how the hell has this happened ? [10:27] <airurando> yeah just look at the front page [10:27] <airurando> I'm blocking the users now [10:28] <czajkowski> ok let me go to talk to IS [10:28] <airurando> is there anyway we can approve users before they get access? [10:29] <czajkowski> nobody bar a few folksshould have access to the website.... [10:29] <czajkowski> wiki yes as admins [10:29] <czajkowski> but not the website [10:30] <ebel> bah [10:30] <airurando> very annoying [10:31] <czajkowski> filed a RT asking for a rediect from website to http://loco.ubuntu.com/teams/ubuntu-ie [10:32] <czajkowski> airurando: ebel you cna log into rt.ubuntu.com and see #19274 [10:32] <airurando> I'm deleting the posts now [10:35] <airurando> there were only 4 spam posts on the main page. all gone now [10:35] <airurando> I spent hours going through the comments a few months ago. [10:35] <airurando> real annoying [10:37] <czajkowski> :( [10:37] <czajkowski> no idea why it's happening eithe [10:37] <ebel> there was a catpcha system up... [10:37] <czajkowski> thought that was on the wiki ? [10:38] <airurando> would the security have been better if we had moved to blacknight [10:38] <czajkowski> YES! [10:39] <airurando> ebel,czajkowski. 
catpcha system is up and running on the website [10:39] <czajkowski> hmm [10:39] <czajkowski> 3rd time now in as many weeks [10:41] <airurando> I think I remember slashtommy suggesting shelving the website in preference to the loco team portal and wiki [10:42] <czajkowski> nods [10:42] <czajkowski> for the time being [10:42] <czajkowski> I've asked for a redirect [10:42] <airurando> If no one is available to get things up and running on blacknight I'm for that [10:42] <czajkowski> if a website happens we can get it un directed [10:42] <airurando> aye [10:43] <airurando> the website 'as is' is WAY more trouble that it is worth. [10:43] <czajkowski> nods [10:46] * slashtommy remembers mentioning there are too many websites [10:46] <slashtommy> pick one, and we can all use that one :) [10:47] <czajkowski> rihght IS will do redirect to LTP [11:05] <airurando> the redirect is in place [11:05] <czajkowski> yup [11:06] <czajkowski> doing the whole site not just main page as well [11:08] <airurando> can we add mailing list details , wiki details etc to the LTP landing page [11:08] <czajkowski> yup [11:08] <airurando> who can do it? [11:09] <czajkowski> ebel: he's admin on the page [11:09] <czajkowski> as is mean-machine [11:09] <airurando> ebel: can you jazz up the LTP landing page with other team details? [11:10] <czajkowski> or maybe add airurando instead of mean-machine as he doesnt seem that active here any more [11:13] <airurando> yeah I'll do it, as best I can, if I get access. 
[11:20] <ebel> ok i'll have a look [11:21] <ebel> ok, given airurando and czajkowski team admin magic [11:22] <ebel> heheh "State/Province/Region" [11:23] <slashtommy> Ireland/Leinster/EU [11:23] <slashtommy> strange order to have things [11:23] <ebel> It has a box for the mailing list, which was correct, but that doesn't show up on front page [11:23] <ebel> slashtommy: [11:24] <ebel> slashtommy: #ubuntu-ie does all provinces, not lenister [11:26] <slashtommy> i thought we didn't care about munster these days ;) [11:27] <ebel> Although there are NUTS Regions in Ireland, and an Irish Regions Office ( http://www.iro.ie/index.shtml ), and Regions within the EU ( http://www.cor.europa.eu/ ) [11:27] <ebel> They aren't used by average people. Regions would be used in the UK, so doesn't really apply to use at all [11:28] <slashtommy> Regions and countries would be used within the UK, and regions within countries within the UK [11:29] <ebel> oh [11:29] <ebel> mailing list does come up [11:29] <ebel> that's the envelope icon [11:30] <ebel> and forum etc [11:32] <airurando> ebel is clever [12:03] <airurando> ebel our LTP team admin magic has disappeared! [12:03] <ebel> yes thought that happened before... [12:03] <ebel> tis odd [12:04] <airurando> maybe two is the max? [12:04] <ebel> i presume a regular task that syncs with launchpad? [12:06] * ebel goes to the horses mouth, i.e. loco team portal developers [12:08] <czajkowski> states for 2 of them and one in india [12:12] <ebel> actually probably isn't that, since ( https://launchpad.net/~ubuntu-ie ) only lists me (rorymcc) as an admin, and mean-machine is still staying admin on LTP [12:12] <ebel> oddness indeed [12:13] <czajkowski> not sure mean-machine should be on there given he doesnt seem active in the loco tbh [12:14] <czajkowski> and while I'm ok with you and airurando and slashtommy making changes as you're active, people being non active and making changes is a bit of a no no [12:17] <ebel> yeah good point. 
[12:17] <ebel> the admin magic is still there.... [12:20] <ebel> I have removed mean machine from the admin list [12:20] <ebel> They weren't even down as a team member, had to scroll all the waaaaaay down the full list of members :P [12:21] <ebel> s/members/launchpad usernames/ [12:21] <ebel> maybe it'll stick this time.... [12:31] <airurando> nope it has reverted [12:43] <ebel> lame [12:44] <ebel> looks like someoen else has noticed it https://bugs.launchpad.net/loco-team-portal/+bug/792475 [12:46] <ebel> It appears to affect other things, not just admins [12:46] <ebel> i added "All 4 Provinces" to the "State/Province/Region" for the lark [12:46] <ebel> and then it went away [12:49] * ebel comments on bug and waits [14:21] <ebel> upon the advise of some LTP devs, I've made airurando and czajkowski admins on the launchpad team. that should filter through to the LTP shortly [14:22] <czajkowski> ebel you've been busy [16:23] <ebel> ok I've given airurando and czajkowski admin on LTP [16:23] <ebel> mean-machine is still down as admin [17:27] <czajkowski> Fantastic new! I'm joining Canonical and shall be working with the amazing folks on the Launchpad team! Cannot wait! very happy!! [17:36] <ebel> what?! Really!! [17:36] <ebel> Congratulations! [17:36] <ebel> fair fucks to you [17:36] <czajkowski> start monday [17:37] <moylan> wow that is good news! they could really use your skills. [17:37] <ebel> wow, quick! [17:37] <ebel> you deserve it! [17:37] <moylan> ^this [17:38] <czajkowski> thank you [17:39] <moylan> you must be be pretty chuffed [17:40] <czajkowski> over the moon [17:42] <moylan> probably be the smartest thing canonical have done in the past few months
use std::fs::File;
use std::io::{LineWriter, Read, Write};
use std::path::{Path, PathBuf};

pub trait HasFirst<T> {
    fn first(&self) -> T;
}

impl HasFirst<char> for &str {
    fn first(&self) -> char {
        self.chars().next().unwrap()
    }
}

pub trait NewLineWriter {
    fn write_line(&mut self, string: String);
}

impl<W: Write> NewLineWriter for LineWriter<W> {
    fn write_line(&mut self, string: String) {
        // `format!` replaces the original `f!` macro (presumably a local alias for it).
        self.write_all(format!("{}\n", string).as_ref()).unwrap();
    }
}

#[allow(dead_code)]
pub fn get_test_resource(name: &str) -> PathBuf {
    let dir = Path::new(env!("CARGO_MANIFEST_DIR"));
    dir.join("resources").join("test").join(name)
}

pub fn get_resource(name: &str) -> PathBuf {
    let dir = Path::new(env!("CARGO_MANIFEST_DIR"));
    dir.join("resources").join("main").join(name)
}

pub trait ReadContentsExt {
    fn read_contents(&mut self) -> String;
}

impl ReadContentsExt for File {
    fn read_contents(&mut self) -> String {
        let mut buffer = String::new();
        self.read_to_string(&mut buffer).expect("Could not read file.");
        buffer
    }
}

pub trait IterableExt<T> {
    fn count_occurs(self, predicate: fn(&T) -> bool) -> usize;
}

impl<I: Iterator> IterableExt<I::Item> for I {
    fn count_occurs(self, predicate: fn(&I::Item) -> bool) -> usize {
        self.filter(predicate).count()
    }
}

// TODO: expand to an iterable that returns the type of self
pub trait VecExt<E> {
    // Note: these bounds must match the impl below (`FnMut` for the
    // consuming methods); a `Fn`/`FnMut` mismatch fails to compile.
    fn map_into<R, F>(self, mapping: F) -> Vec<R> where F: FnMut(E) -> R;
    fn map<R, F>(&self, mapping: F) -> Vec<R> where F: Fn(&E) -> R;
    fn filter<F>(self, predicate: F) -> Vec<E> where F: Fn(&E) -> bool;
    fn find<F>(self, predicate: F) -> Option<E> where F: FnMut(&E) -> bool;
    // fn map_mut<'a, R, F>(&'a mut self, mapping: F) -> Vec<&'a mut R> where F: Fn(&mut E) -> R;
    fn filter_mut<F>(&mut self, predicate: F) -> Vec<&mut E> where F: Fn(&&mut E) -> bool;
}

impl<E> VecExt<E> for Vec<E> {
    fn map_into<R, F>(self, mapping: F) -> Vec<R> where F: FnMut(E) -> R {
        self.into_iter().map(mapping).collect()
    }

    fn map<R, F>(&self, mapping: F) -> Vec<R> where F: Fn(&E) -> R {
        self.iter().map(mapping).collect()
    }

    fn filter<F>(self, predicate: F) -> Vec<E> where F: Fn(&E) -> bool {
        self.into_iter().filter(predicate).collect()
    }

    fn find<F>(self, predicate: F) -> Option<E> where F: FnMut(&E) -> bool {
        self.into_iter().find(predicate)
    }

    // fn map_mut<'a, R, F>(&'a mut self, mapping: F) -> Vec<&'a mut R> where F: Fn(&mut E) -> R {
    //     let map: Map<R, F> = self.iter_mut().map(mapping);
    //     // .collect::<Vec<&mut R>>()
    // }

    fn filter_mut<F>(&mut self, predicate: F) -> Vec<&mut E> where F: Fn(&&mut E) -> bool {
        self.iter_mut().filter(predicate).collect()
    }
}

/*
The difference between Path and PathBuf is roughly the same as the one between
&str and String or &[] and Vec, i.e. Path only holds a reference to the path
string data but doesn't own this data, while PathBuf owns the string data
itself. This means that a Path is immutable and can't be used longer than the
actual data (held somewhere else) is available.

The reason why both types exist is to avoid allocations where possible.
However, since most functions take both Path and PathBuf as arguments (by
using AsRef<Path> for example), this usually doesn't have a big impact on
your code.

A very rough guide for when to use Path vs. PathBuf:

For return types: if your function gets passed a Path[Buf] and returns a
subpath of it, you can just return a Path (like Path[Buf].parent()); if you
create a new path, or combine paths or anything like that, you need to return
a PathBuf.

For arguments: take a PathBuf if you need to store it somewhere, and a Path
otherwise.

For arguments (advanced): in public interfaces, you usually don't want to use
Path or PathBuf directly, but rather a generic P: AsRef<Path> or
P: Into<PathBuf>. That way the caller can pass in Path, PathBuf, &str or
String.

As for your strip_prefix example: calling to_str() on a Path[Buf] is very
often a bad idea. In fact, the reason why it returns an Option is that some
paths simply aren't valid utf8 strings. Then, once you implement proper error
handling (in the most simple case just use the Error type from the failure
crate), this example might just shrink to:

    path.strip_prefix(env::current_dir()?)?

which looks more reasonable.
*/
These sections provide an overview of how the automounter works. When a system is booted, the automounter daemon is started from the /etc/init.d/nfs.client script. With Solaris 2.3 system software and later, the boot procedure is split into two programs: an automount command and an automountd daemon. The startup script for Solaris 2.3 system software is /etc/init.d/autofs. The automounter checks for the local auto_master map. When the first entry in the local auto_master map is +auto_master, the automounter consults the NIS+ auto_master table, builds a list of the specified mount points, and consults the auto_variable maps it finds listed there. When the first entry in the local auto_master map is not +auto_master, the automounter consults the local auto_variable maps. The startup procedure for Solaris 2.0, 2.1, and 2.2 system software is shown in Figure 7-1. If no NIS+ auto_master map is found, NIS+ searches for an NIS auto.master map.

Figure 7-1 Starting the automounter.

When a user changes to a directory that has a mount point controlled by the automounter, the automounter intercepts the request and mounts the remote file system in the /tmp_mnt directory if it is not already mounted. Conversely, when a user changes out of a directory controlled by the automounter, the automounter waits a predetermined amount of time (the default is 5 minutes) and unmounts the file system if it has not been accessed during that time. Figure 7-2 shows how the automounter works.

Figure 7-2 How the automounter works.

In Figure 7-2, when the user types cd, the automounter looks in the table that was created at boot time from the NIS+ auto_master map and NIS+ auto_home map and mounts the user's home directory from the server named oak. When the user types man lp, the automounter looks in the table that was created at boot time, mounts the manual pages on /usr/man, and displays the manual page for the lp command. After 5 minutes of inactivity, the manual pages are unmounted. When the user types maker&, the automounter looks in the table that was created at boot time and mounts the executable for FrameMaker on /bin/frame3.1.

In these discussions about the automounter, it is assumed that you are administering a network of systems running SunOS 4.x and SunOS 5.x system software and that you are using NIS on the 4.x systems and NIS+ on the 5.x systems. This configuration provides you with a global namespace so that you can mount file systems that are exported from any server on the network. It also creates host-independent resources so that you can specify a list of servers from which file systems can be mounted, and it allows you to relocate resources from one server to another without disrupting the user environment.

NOTE: Although you can set up the automounter using local maps (auto_master files on a local system instead of on an NIS+ root master server), SunSoft strongly recommends that you do not do so. Decentralized and local maps are more complicated and expensive to maintain, and they are difficult to update consistently. SunSoft is implementing many new automount features in future versions of Solaris system software. Some of these new features will work only with maps stored in NIS+.

Before you begin planning your automounting, review the list of recommended policies in the following sections. They may affect how you set up your automount maps.

CAUTION! You should never change the SunOS 4.x auto.master map name; this name is required by the SunOS 4.x automounter.

These sections describe the prerequisites for using the automounter. Before creating automount maps, the network should be up and running NIS+ on SunOS 5.x systems and NIS on SunOS 4.x systems. Each system on the network should have the default auto_master and auto_home maps in its local /etc directory. These maps are automatically installed with the system software.
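For reference, the maps discussed above are plain text tables. The entries below are a minimal sketch of what an indirect map setup might look like; the server name oak is taken from the example earlier in this section, while the user name, paths, and mount options are illustrative assumptions:

```
# /etc/auto_master (or the NIS+ auto_master table)
# mount-point   map-name    mount-options
/home           auto_home   -nosuid

# auto_home (an indirect map; the key is the user name)
# key           options     server:path
alice           -rw         oak:/export/home/alice
```

With entries like these, changing into /home/alice would trigger the automounter to mount oak:/export/home/alice on demand.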
Vaults in 1Password accounts have twelve permissions which can be set for each team member and group. All permissions are securely enforced, but not all are enforced in the same way. To help you make informed choices about the security of your team, 1Password labels permissions according to their method of enforcement. These are the strongest permissions available in 1Password accounts. Only those who hold the cryptographic keys to a vault can perform these actions: - View Items Cryptographic permissions can’t be overcome with a backdoor or a software exploit; nothing short of breaking the encryption would work. Because no one but you has the encryption keys for your team, no one can bypass those restrictions. Revoking read access to a vault is a server-enforced permission. Once read access has been granted to a vault, it can't be taken away cryptographically. These are the strongest policy-enforced permissions in 1Password accounts. They are enforced by the 1Password accounts server rather than the 1Password apps: - Edit Items - Create Items - Archive Items - Delete Items - Import Items - Manage Vault A team member who can read a vault has its cryptographic keys, but the server can still limit their actions in the vault. If they try to make changes without write access, the server will reject those changes. Server-enforced permissions are safe. They’re used by almost all online services – sometimes as the only method of permission enforcement. But they aren’t mathematically guaranteed like cryptographically enforced permissions. They could in principle be bypassed by us or someone who has access to our server. These are the weakest policy-enforced permissions in 1Password accounts. 
They are enforced by the 1Password apps, rather than the laws of mathematics or the 1Password server: - View and Copy Passwords - View Item History - Export Items - Copy and Share Items - Print Items A team member who can read a vault has its cryptographic keys, but the client can still limit what they can easily see and do in that vault. For example, passwords will be concealed from them in the 1Password apps if they don’t have the View and Copy Passwords permission. However, the unencrypted data is still on their devices and could be extracted with some effort, like filling a password into a page and then revealing it on that page. A team member who is determined can easily overcome client-enforced permissions on their own device, so they’re most valuable as simple safeguards for people you already trust. A team member has to act deliberately and intentionally to violate these restrictions. These permissions shouldn’t be relied on to prevent hostile behaviour or enforce trust. Multiple levels of enforcement Permissions are labeled according to their strongest level of enforcement. However, most permissions are enforced in multiple ways at the same time. For example, read access is enforced by all three levels: cryptography, server, and client policy. Someone who doesn’t have read access to a vault doesn’t have its cryptographic keys. At the same time, the server won’t send them the encrypted data, and the client won’t ask for it. This adds extra layers of security. But even if the client were to ask for the data, or the server were to send it, the permission is still enforced by cryptography. Thus the Read permission is cryptographically enforced.
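As a thought experiment, the three enforcement layers can be pictured as a chain of independent checks, where access succeeds only if every layer agrees. The Python sketch below is hypothetical; the type names and permission strings are invented for this illustration and are not 1Password's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Vault:
    key_id: str  # the vault's cryptographic key identifier

@dataclass
class Member:
    keys: set = field(default_factory=set)                # cryptographic layer
    server_permissions: set = field(default_factory=set)  # server policy layer
    client_permissions: set = field(default_factory=set)  # client policy layer

def can_view_items(member, vault):
    """All three layers must allow the action for it to succeed."""
    return (vault.key_id in member.keys              # cryptography: no key, no data
            and "read" in member.server_permissions  # server refuses to send data
            and "read" in member.client_permissions) # client declines to show it

vault = Vault(key_id="k1")
alice = Member(keys={"k1"}, server_permissions={"read"}, client_permissions={"read"})
mallory = Member(server_permissions={"read"}, client_permissions={"read"})  # no key

print(can_view_items(alice, vault))    # True
print(can_view_items(mallory, vault))  # False
```

Note the asymmetry the article describes: even if the server and client checks were bypassed, the missing key would still keep the data unreadable.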
2:47 pm in Uncategorized by nsienaert
At MMS we heard that the MDT team will try to release MDT 2012 RTM 30 days after the RTM release of SCCM 2012. Last week Beta 1 was released, so let's have a sneak peek. This version adds support for:
- The SCCM App Model & UDA
- Integration with the SCCM 2012 Console
- New UDI components (a new wizard & Designer), with more customization possible
- UEFI
- New OSes: Windows POSReady 7 & Windows ThinPC
- Cross-platform deployment (install an x64 OS from x86 Windows PE). The other combination, booting from an x64 boot image and deploying an x86 OS, isn't supported by Windows Setup.
- Deploy to VHD: creating a VHD file during the task sequence that can then be used for booting the OS ("boot from VHD").
Maybe you are wondering if there are changes to USMT, Windows PE, and WAIK. Well no, only bug fixes. WAIK (USMT is part of WAIK) depends on Windows releases, not MDT releases, which is why. The MDT team wants to remove support for Windows XP & Server 2003, but SCCM 2012 still supports XP SP3, so MDT 2012 will probably still support XP SP3 as a source OS, though not as a target OS.
Features that will be added in later releases:
- Integration of Windows RE (set a recovery partition during the OS process)
- Integration with MDOP tools like App-V and DaRT
- PowerShell support in the task sequence
- LTI facelift
At first glance, visually not much has changed in the SCCM console. One button has been added to the tab bar to start the MDT Task Sequence wizard. Once we have created an MDT task sequence via the wizard, we don't notice many changes in the task sequence itself. Probably the two biggest changes are:
1. USMT has more GUI options
2. The new App Model
Note that the MDT team added an extra script before the Install Application step to "work around" something: application variables end up with 2-digit suffixes (ex. Applications01), while MDT expects 3-digit suffixes (ex. Packages001).
The script takes the 3-digit list and converts it to a 2-digit list so SCCM can install the applications. In the MDT LTI Workbench, likewise, not many changes can be observed:
1. Integration of the new App Model (note: ConfigMgr packages can also still be used)
2. The Deploy to VHD templates (only supported in LTI scenarios)
So, now that we have looked at the changes, let's deploy a bare-metal image via the MDT wizard in the SCCM console. Once I finished the MDT wizard, it failed immediately with the following error:
Creating boot image. Generating boot image. Error while importing Microsoft Deployment Toolkit Task Sequence. System.Exception: Unable to mount image C:\Users\ADM-NS~1\AppData\Local\Temp\2\winpe.wim to C:\Users\ADM-NS~1\AppData\Local\Temp\2\PE20_mount.x86 —> System.ComponentModel.Win32Exception: A required privilege is not held by the client — End of inner exception stack trace — at Microsoft.BDD.Wizards.SCCM_ImportTaskSequenceTask.DoWork(SmsPageData smspageData, Dictionary`2 data)
From experience I remember that the WAIK deployment tools like to be executed with "Run as Administrator". So I restarted my SCCM console with "Run as Administrator", after which the image could be mounted correctly and the wizard finished successfully. Stay tuned for more blogs that go deeper into the MDT topic. Till next time!
Nico Sienaert (twitter: nsienaert)
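For the curious, the suffix conversion that the extra script performs can be imagined roughly like the Python sketch below. This is a guess at the mechanics for illustration only; the real workaround is an MDT task-sequence script, and the function and variable names here are invented:

```python
import re

def to_two_digit(variables):
    """Rewrite 3-digit suffixed names (Applications001) to the
    2-digit form (Applications01) that the install step reads."""
    out = {}
    for name, value in variables.items():
        m = re.fullmatch(r"([A-Za-z]+)0(\d{2})", name)
        out[(m.group(1) + m.group(2)) if m else name] = value
    return out

mdt_vars = {"Applications001": "App A", "Applications002": "App B"}
print(to_two_digit(mdt_vars))  # {'Applications01': 'App A', 'Applications02': 'App B'}
```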
Siwes project in computer science 11 overview of siwes siwes refers to the employment of students nearing under-graduation in firms or organizations, which operate on activities related to the respective student's major subjects. Report on student industrial work experience scheme (siwes) training programme at the nigerian airspace management agency faculty of science routers, hubs and computer peripherals the installation, repair, preventive maintenance and also auditing of these devices were. Thesis projects a guide for students in computer science and information systems authors: berndtsson, m, hansson, j, olsson, b, lundell, b. The researcher finds it very necessary to embark on the study because of the relevance and contribution of siwes to the training of computer science students in and school of information technology in which computer science department which is request for project. Evaluation of student industrial work experience scheme (siwes) (13.9%) were attached to libraries, 56 (48.7%) to cyber cafes/computer business centres, 9 challenges of students' industrial work experience scheme (siwes) in library and information science in the ict environment library. My main research interest is in the design, analysis, and implementation of computer algorithms, especially problems in combinatorial optimization i am also happy to advise princeton undergraduate projects in a number of popular computer science areas including the development of educational tools. Automated staff mailing system for computer science department in federal polytechnic nekede best project topics for computer science student computer science project topics and material in nigeria computer science seminar topics and materials list of computer science project for.
- Research projects by area: artificial intelligence (AI), biosystems & computational biology (bio), communications & networking (comnet), computer architecture & engineering (arc), control and intelligent systems; Berkeley Laboratory for Automation Science and Engineering.
- Free essays on a SIWES report for computer networking; a college essay for computer science: "It was the year 1994 when I was first introduced to a computer in school, and 10 years later..." Computer project report, 27,052 words.
- Undergraduate programs: the admissions process for each undergraduate major varies from program to program, but admissions for the main Bachelor of Science in Computer Science are handled through Carnegie Mellon's central Office of Undergraduate Admission.
- Computer science research: algorithms & theory, data science, machine learning, networks, security, software design & implementation, systems.
- Six potentially paradigm-shifting research projects - including two involving EECS faculty - will make strides with funding from Professor Amar G. Bose research grants. MIT Electrical Engineering & Computer Science, Room 38-401.
- ENEMS Free Project provides samples of free project topics and material; the researcher finds it very necessary to embark on the study because of the relevance and contribution of SIWES to the training of computer science students in tertiary education.
- Computer science technical report collection, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213-3891; 412-268-8525, 412-268-5576 (fax).
- Students Industrial Work Experience Scheme (SIWES) in Nigeria: computer science, Ebonyi State.

Siwes project in computer science

- Aims and objectives of SIWES: specifically, the objectives of the Students Industrial Work Experience Scheme are to... Projects for computer science students; projects in business management, computer science and mass communication.
- An already-written SIWES report for a computer science department; a full seminar report on a 3D face recognition system.
- This is reflective of the current state of computer science in general, as it finds itself at the core of both scientific discovery and technological progress. To learn about our research and latest findings, please browse through our laboratories and specific projects.
- Looking for unique science fair project ideas? Check out Education.com's collection of free computer science fair projects and computer projects for kids.
- This paper examines the success of SIWES in the provision of practical skills and competence to secretarial students from Nigerian universities.
- Report outline: title page, dedication, acknowledgement, overview, table of contents; Chapter One: 1.0 Introduction, 1.1 Background of SIWES, 1.2 Aim and objectives of SIWES; Chapter Two: 2.0 Brief...
- Computer Science National Diploma (ND) curriculum and course specifications: the National Diploma in Computer Science obtained from an accredited programme; (iv) Supervised Industrial Work Experience Scheme (SIWES); 3.2 the general education component shall include courses in...
- Research projects are supported by the National Science Foundation and the MARCO FCRP Gigascale Systems Research Center. They support the educational mission of the department through instruction in core and advanced principles of computer science, and their research advances the basic knowledge of computing.
- Report of the Students Industrial Work Experience Scheme (SIWES): the farm is over-spending; hence, cut down expenditure to have enough finance to embark upon equally important projects.
- A technical report on the Students Industrial Work Experience Scheme (SIWES) by Enefola J. Onuche, registration no. 09/589, Department of Computer Science, School of Technology, Federal Polytechnic Idah, Kogi State, February 2011.
- The results were collected and analyzed in the chapters that make up this study; report and project works under the SIWES scheme... a hospital information system, presented in partial fulfilment of the requirements for the award of a Bachelor of Science (BSc) in computer science.
OPCFW_CODE
Although the last Symbian smartphone, the incredible Nokia 808 PureView, was unveiled two years ago, there are still hundreds of thousands of smartphone users with a Symbian device out there just waiting to use the app you have developed! There are different approaches to developing a Symbian app, but you get the best result with the Qt framework, which provides native UI components so your app fits into the system. Additionally, you can port your Qt apps created for Symbian with little effort to other major mobile platforms, including iOS, Android, Windows Phone, MeeGo, Sailfish OS, Ubuntu Touch, Tizen and BlackBerry 10*. If you need to go beyond the possibilities of the Qt framework on Symbian, you can also use native Symbian C++ functions, although that is not as easy. Some limits you might reach while creating your Symbian app include notifications and the lack of an extensive homescreen widget API. *Qt 4.8 is used on Symbian, MeeGo and BlackBerry 10. A port to Qt 5.x is required for iOS, Android and Ubuntu Touch. Qt 5.3 brings first support for Windows Phone.

Self-signed or unsigned

An additional limit is the set of capabilities available to self-signed applications. With the closing of the Nokia Store on 1 January 2014, Nokia also shut down the Symbian Signed service. So if you want to go deeper into the system, a self-signed certificate will not do, as it is limited to the basic capabilities (which should be enough for 90% of all apps); you need to release your app as an unsigned version instead. Installing an unsigned app requires a hacked smartphone in order to pass the certificate check. If possible, release your app as a self-signed version with everything that is possible within those capabilities, plus an extra unsigned version with the full feature set. You can manage both with AppList. Downloads (Do a backup for yourself!)
About Qt [developer.nokia.com]
About Symbian [developer.nokia.com]
Download SDKs, development tools, etc. [n8delight.blogspot.co.uk]
Symbian Belle Design Guidelines (copy) [allaboutsymbian.com]
Nokia Developer Wiki [developer.nokia.com]

If you want to publish your app in AppList to add it to the AppList database, you are welcome to do so. Before publishing you need to prepare a few things:
- First make sure that your app is fully working. Keep in mind which Symbian OS versions you want to target and which resolutions you want to cover.
- If your app requires special libraries beyond what is available by default on a Symbian smartphone, add the Smart Installer to your app. If your app is only based on the basic Symbian C++, Qt, QtQuick (QML) and QtMobility components, do not wrap your app in the Smart Installer.
- Set a UID for your app. Each Symbian app has a unique id called a UID. You can find additional information about finding a valid UID in the Nokia Developer Wiki [developer.nokia.com]. In the future there will be a service to check whether a UID is already in use, based on the AppList database entries. Note: If you plan to release various versions of your app (self-signed, unsigned, Nokia Store link), each one must have a different UID!
- Set a version for your app. A version string contains three values: <major>.<minor>.<build>. An example would be 1.0.23. Note: If you plan to release a self-signed and an unsigned version of your app, the latest versions in AppList must have the same version number!
- Create a launcher icon for your app and prepare up to five screenshots. See below for more information about creating your launcher icon. Screenshots must be in JPG format and should represent the UI and functions of your app. Avoid modifying your screenshots or adding special effects.

To provide a uniform UI there are special Icon Guidelines and templates available for the Symbian platform.
If you want to publish your app in AppList it must adhere to these guidelines for a uniform look and feel. Symbian launcher icons are provided in the SVG format, so you need either Adobe Illustrator or Inkscape to create your launcher icon. Sadly I do not have a complete copy of the icon guidelines in text form with images, and I could not find a good cached version. If you happen to have made or found a copy of those pages, or of the original Symbian Belle Design Guidelines, it would be great if you could share them with us!
- Illustrator: Create Symbian launcher icon with own glyph [youtube.com]
- Illustrator: Create Symbian launcher icon with own icon and background [youtube.com]
- Inkscape: Create Symbian launcher icon with own glyph [youtube.com]
- Inkscape: Create Symbian launcher icon with own icon and background [youtube.com]
- Download Nokia Icon Toolkit with Illustrator and Inkscape templates

Prepare icons for the AppList database

When you have created your icon, add it to your app via the common ways (to be linked here). For the AppList database you will additionally need an 80x80 and a 256x256 pixel version of your icon, exported as a PNG with transparent borders and with the space next to the icon. To publish an app in AppList you need a publisher account to access the area where you manage your apps and add new ones. There is no QA here, so to ensure the quality of apps in the database I "take the liberty to check" a developer when he/she wants to publish apps in AppList. This sounds more spectacular than it actually is: if you want to publish or manage your apps, write me a mail at applist at schumi1331.de. Give a reference to an app you have already published or, if it is your first one, attach it to the mail. Then everything else will find its way... :) You have successfully published your app in AppList. What now? If you have your own website, you may want to link to your app from there.
To make this possible you can find here an "AppList badge" that allows you to link to an app in AppList. You are free to use it on your website, but do not alter the badge in any way except resizing (keeping the same proportions). If you would like to suggest other layouts, you are free to contact me. If you are using the badge, link to your/an app on the web pages of AppList, e.g. http://applist.schumi1331.de/content/1. Do not use it to link to the AppList client download page.
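The UID and version rules above lend themselves to a quick sanity check before uploading. Below is a minimal sketch of such a check — a hypothetical helper, not an official AppList tool — encoding exactly the rules stated above: version strings take the form <major>.<minor>.<build>, every released variant needs its own UID, and the variants must carry the same version number:

```python
import re

# Hypothetical pre-upload check reflecting the AppList packaging rules
# described above. Variant dicts carry 'name', 'uid' and 'version'.
VERSION_RE = re.compile(r"^\d+\.\d+\.\d+$")  # <major>.<minor>.<build>

def check_release(variants):
    """Return a list of problems; an empty list means the release is OK."""
    problems = []
    for v in variants:
        if not VERSION_RE.match(v["version"]):
            problems.append(f"{v['name']}: bad version string {v['version']!r}")
    # Each variant (self-signed, unsigned, Nokia Store link) needs its own UID.
    uids = [v["uid"] for v in variants]
    if len(set(uids)) != len(uids):
        problems.append("duplicate UIDs across variants")
    # All variants must carry the same version number.
    if len({v["version"] for v in variants}) > 1:
        problems.append("variants must carry the same version number")
    return problems
```

For example, a self-signed and an unsigned variant sharing version 1.0.23 but having distinct UIDs passes cleanly, while a two-part version string such as "1.0" is flagged.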
OPCFW_CODE
BG Technology plc, March 1996 - November 1999

The research centre was a very impressive building with a large atrium entrance (containing, for a while, LPG-converted cars if I remember correctly) and what seemed to be an endless maze of offices and labs. When there first thing in the morning, or last thing at night when it was partially lit, it always reminded me of playing Doom, with eerie corridors and dark offices. My interview was with Tracie Withers and Howard Hughes. I was presented at one point with a sample sheet of C code and asked to find faults. I found more faults than had been initially identified, which I think must have impressed. I had a week or so's handover from Howard, whom I was basically replacing. If I remember correctly he was moving to Scotland. Day-to-day development to start with was between myself and Tracie, and possibly also John Lloyd (my memory is a little sketchy here). Simon Taylor was our manager. The team I worked with was primarily concerned with ongoing development of a product called GBNA. This was a C and Motif based graphical tool running under the X Window System for the analysis of low-pressure gas pipe networks. The code relied on an embedded FORTRAN engine called Pegasus, developed in the early 1980s, which used Hardy Cross analysis and smooth-pipe flow laws to determine pressures and flows in the network. The code supported plotting via HPGL output and also supported large A0 digitising tablets for scanning in new developments. I spent a good few days of my early time on the project both reformatting the source code and removing compiler warnings. The XDesigner tool was used to lay out the GUI graphically - the result was automatically generated files containing all the Motif code to present the application GUI. Development of GBNA was done on Solaris SPARC workstations. Transco and British Gas, however, were primarily a DEC site, so for delivery the software was re-compiled on Alpha workstations.
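The Hardy Cross analysis mentioned above is an iterative relaxation scheme: assume flows that satisfy continuity at every node, then repeatedly apply a correction to each loop until the head losses around it balance. A minimal sketch follows, using the simplest possible network — a single loop formed by two parallel pipes — and an illustrative quadratic head-loss law h = r·Q·|Q|; the numbers are made up and this is not Pegasus code:

```python
def hardy_cross_two_pipe(q_total, r1, r2, tol=1e-9, max_iter=100):
    """Split q_total between two parallel pipes with head loss h = r*Q*|Q|,
    balancing the single loop by Hardy Cross corrections.
    Returns (q1, q2)."""
    q1 = q2 = q_total / 2.0  # initial guess that satisfies continuity
    for _ in range(max_iter):
        # head-loss imbalance around the loop (one pipe traversed "with"
        # the loop direction, the other against it)
        h = r1 * q1 * abs(q1) - r2 * q2 * abs(q2)
        # derivative of the imbalance with respect to a loop correction
        dh = 2.0 * (r1 * abs(q1) + r2 * abs(q2))
        dq = -h / dh
        q1 += dq          # adding dq to one pipe and subtracting it from
        q2 -= dq          # the other preserves continuity exactly
        if abs(dq) < tol:
            break
    return q1, q2
```

With r1 = 2 and r2 = 8 the balanced split satisfies r1·q1² = r2·q2², i.e. q1 = 2·q2, so a total of 10 units divides into roughly 6.67 and 3.33. Pegasus generalised this idea to whole networks of loops, which is why the choice of data structures for loops and trees mattered so much in the later Java re-engineering.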
I don't remember seeing a VAX, although the conditionals in the code were always 'VAX' rather than 'VMS' or 'OpenVMS'. The office system relied on the venerable DEC product ALL-IN-1 for most of the time I was there, although towards the end it was replaced with Lotus Notes (and for some systems SAP), which didn't meet with great approval from colleagues who could see the benefits of such a mature product as ALL-IN-1. We used Fujitsu/ICL computers which had keyboards supplied with the special WPS/PLUS key legends required for ALL-IN-1. The C codebase for GBNA (at one point in time when I did a search for authors) contained the following authors:
- A. Corner
- Alan Backhouse
- Graham Kirsopp (contract staff)
- Howard Hughes (contract staff)
- John Lloyd
- M. A. Hood (contractor)
- Mark Wickens (contract staff)
- Mike Smith
- P. Nicholson
- Pete Ranson (contract staff)
- Sarah Morley
- Steve Limb (contract staff)
- Tracie Withers

There were several software development groups. Phil Hindley worked in another group but sat with us at lunch and provided a constant source of amusement with his dour outlook on life and his view of contractors. Once, over lunch, I'd had enough of the 'your life being a contractor is better than mine as a permie' attitude, so we broke down how much a contractor earned in reality compared with a 'permie' (taking into account the lack of sick pay, holidays, healthcare, pension contributions etc.) and found the gap not to be amazingly huge. I think with Phil the sticking point of the conversation was always his unwillingness to travel and live away from home, which is pretty much a requirement for a contractor. After this conversation we got less hassle from him! It also taught me early on about the view that permies can have of contractors, and to develop a coping mechanism. Whenever I started getting hassle I'd always ask the question 'so why aren't you a contractor if it's so great?' - typically this would bring the conversation to an abrupt halt.
I also visited the Hinkley Transco office a couple of times. We got visits from John Scrivener, who was one of our main contacts with regard to new requirements. He seemed to spend a significant amount of time on the motorway between Transco regional offices - he was based out of the Slough office. Mike Smith joined the team as a permanent member of staff and team lead before I left, and was a thoroughly nice bloke. I remember him having trouble with his Vauxhall Carlton - the rear axle was shot, which I think was how a great many of these cars finally met their end. There were several rounds of redundancies whilst I was there, which always made me feel sorry for those on their way out - for a while it was customary to hear rounds of applause on a Friday afternoon - an altogether sad sound. Over time the occupation of offices clearly diminished. Around the two-year mark I'd decided that my future lay along a different path, with the new up-and-coming language Java. I'd treated myself to a one-week residential coding course. When I left British Gas I spent three months at Thames Water in Swindon debugging a call centre application. It was written in Java 1.1 and ran incredibly slowly compared to the mainframe-based application it was designed to replace. The architecture was amazingly complex, especially when it came to the inheritance hierarchy. Development used IBM's VisualAge for Java, which I developed a real love-hate relationship with. When it worked it was brilliant, but it was buggy and tended to crash, leaving you to spend hours creating a new workspace with the codebase you were working on. In the meantime Steve Limb and I formed a company called MAST in anticipation of negotiations between British Gas and ourselves coming to fruition. The task was to re-engineer Pegasus as a Java application and also to improve performance. Analysis times using the Large Area version of GBNA were unacceptable. The goal was set for an eight-fold increase in speed.
Steve undertook the majority of work on the engine itself, finding that the richer availability of appropriate data structures in Java (the engine breaks a gas network down into a tree structure and a series of loops), compared with the FORTRAN implementation where everything was effectively stored in arrays and linked lists, enabled him to improve the efficiency of analysis markedly. I worked on the graphical front end and ancillary components to provide a prototype. At the end of six weeks we had fulfilled the rigorous requirements set down and presented the demo to Transco. I was very (perhaps a little too) enthusiastic about the potential for visualization using the new facilities available in Java for graphical presentation. Unfortunately, around this time BG plc purchased Stoner Software, who were based in the USA and as part of their portfolio provided a similar product to GBNA. Both Steve and I moved on to work for other companies. I had noted at some point in the past that British Gas had closed down the Ashby Road site, and it would appear that it is now the Loughborough University Science Park - having recently contacted Graham Kirsopp, I learned that he has been back to the site to work for an energy-related company. I've also found that National Grid now appears to be responsible for the low-pressure gas network in the UK. There are indications, contrary to my assumptions, that GBNA and LINAS are still being used operationally. This is great to know - too often, software that you work on as a contractor eventually ends up not being used. Footnote: the GBNA icon looks like this: Do you recognise the feline? It's Jonesy the cat from the movie Alien, originally captured using an Amiga Genlock by my friend Aliennerd.
OPCFW_CODE
Problem 2: OPMN is reporting issues. Check the database connection information and make sure that the database server is running.
BIDS cube calculation designer: Unexpected error occurred: 'Error in the application'.
8. Check if you can log in now.
Problem 5: Processes start up or shut down frequently. Solution: Ensure that images are automatically loaded.
K.1.4 Problems Creating Category or Perspective Pages: When you create category or perspective pages, you may encounter the following errors: WWS-32022: The category has been created but it was not possible to ...
Problem 1: Oracle HTTP Server is down. This is similar to the ORA-20005 error, with the difference that the cookie itself contains a mismatch between the client and the server. There could be multiple reasons why your portal is slow. Navigate to ORACLE_HOME/webcache/logs.
Solution: To resolve this problem, you need to perform additional configuration in OC4J_Portal to prevent remote Web providers from timing out. At a high load, when no threads are available, the incoming requests are queued.
This problem may occur because of the following errors in OracleAS Web Cache configuration. Problem 1: The port value is not specified properly in the cache.xml file. See Section K.2.4, "Using Application Server Control Console Log Viewer" for more information. It is also likely that this is a problem with the OracleAS Portal and Oracle Internet Directory connection configuration. If you do not have the required privileges, then request the administrator to grant you the required privileges.
Synchronization exception occurred: Perform the following steps to check whether provisioning is enabled: Log in to OracleAS Portal. When a thread is busy, the available number is reduced. Log in to SQL*Plus as the PORTAL schema user.
If you are not using Log Viewer, then check the relevant error log files in the following directories: ORACLE_HOME/opmn/logs and ORACLE_HOME/Apache/Apache/logs/error_log. Check the status and configuration of the OracleAS Single ...
Edit the opmn.xml file as follows: ...
Solution: If either of these errors is displayed, you must first delete the current category or perspective template, and then run scripts to replace the current category or ...
This is because Greenwich Mean Time (GMT) is appended to the numeric value, which generates the Last-Modified header without correcting the time zone.
K.1 Problems and Solutions: This section describes common problems and solutions.
After you have turned tracing on, you can find the generated trace files in the directory specified in the database parameter user_dump_dest. Log in to the computer containing the database, change to the ORACLE_HOME/bin directory if it is not currently in your $PATH, and use the following command to determine the ...
OracleAS Portal uses a provisioning profile to receive notifications when user or group privilege information changes.
Solution 2: Navigate to the Oracle Enterprise Manager 10g Application Server Control Console of the Oracle home that is running the OracleAS Web Cache process.
K.1.19 Unhandled Exception Errors: When accessing or using OracleAS Portal, you may encounter an unhandled exception error. For example, the user makes a request to change the language from the Language portlet, but sends another request before the first request is complete.
To access OC4J_Security monitoring and administration pages in the Application Server Control Console, click OC4J_Security in the System Components table on the home page for the Infrastructure home directory instance.
Event ID 102, source: SharePoint Portal Server, description: An error occurred while refreshing the portal application cache. System error, event category 102, event ID 1003.
Refer to "Option 1: Create a New OC4J Instance to Create Another Set of PPE Threads" under Section 9.3, "Setting the Number of PPE Fetchers", for information about increasing the ... Some of these problems are described here.
K.1.8 Error When Creating Web Folders: When you try to create Web folders in OracleAS Portal, you get an ORA-20504 error in the Web server error log file.
If the status is 'Down', then start Oracle HTTP Server using the Application Server Control Console. Look under the OracleAS Metadata Repository Used by Portal section. These are usually specified in megabits.
Solution 4: Display the OracleAS Portal home page in the Application Server Control Console. If changes are not propagated properly, then it is likely that there is a problem in either Oracle Directory Integration Platform or in the configuration of OracleAS Portal with Oracle Directory ...
The SQL server never went down and neither did the databases. Check the trace and audit log files. "Please notify your administrator." "WWC-41454: The decryption of the authentication information was unsuccessful." Check if your portal is accessible now.
Problem 4: Low or no reuse of the connection pool. See Section C.6, "Managing the Session Cleanup Job" for details.
Solution 6: Check that the SQL*Net TNS listener is up and running on the host where the metadata repository is installed. Therefore, it is best to modify the ...
OPCFW_CODE
This part of the manual describes those parts of the Fortran 2003 language which are not in Fortran 95, and indicates which features are currently supported by the NAG Fortran Compiler. Features marked in the section heading as ‘[5.3.1]’ are newly available in release 5.3.1, those marked ‘[5.3]’ were available in release 5.3, those marked ‘[5.2]’ were available in release 5.2, those marked ‘[5.1]’ were available in release 5.1 (and in some cases earlier), and those marked ‘[n/a]’ are not yet available. Fortran 2003 is a major advance over Fortran 95: the new language features can be grouped as follows: The basic object-oriented features are type extension, polymorphic variables, and type selection; these provide inheritance and the ability to program ad-hoc polymorphism in a type-safe manner. The advanced features are typed allocation, cloning, type-bound procedures, type-bound generics, and object-bound procedures. Type-bound procedures provide the mechanism for dynamic dispatch (methods). The ALLOCATABLE attribute is extended to allow it to be used for dummy arguments, function results, structure components, and scalars (not just arrays). An intrinsic procedure has been added to transfer an allocation from one variable to another. Finally, in intrinsic assignment, allocatable variables or components are automatically reallocated with the correct size if they have a different shape or type parameter value from that of the expression. This last feature, together with deferred character length, provides the user with true varying-length character variables. There are two other major data enhancements: the addition of type parameters to derived types, and finalisation (by final subroutines). Other significant data enhancements are the PROTECTED attribute, pointer bounds specification and rank remapping, procedure pointers, and individual accessibility control for structure components. 
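The object-oriented vocabulary above maps directly onto what class-based languages call inheritance and virtual methods: type extension is subclassing, a polymorphic (CLASS) variable can hold any extension of its declared type, and type-bound procedures are dynamically dispatched. As a rough analogy only — Python standing in for the Fortran 2003 constructs named in the comments:

```python
# Rough analogy only: each comment names the Fortran 2003 construct
# that the Python feature stands in for.

class Shape:                       # TYPE :: shape  (extensible derived type)
    def area(self):                # type-bound procedure (dynamic dispatch)
        raise NotImplementedError

class Circle(Shape):               # TYPE, EXTENDS(shape) :: circle
    def __init__(self, r):
        self.r = r
    def area(self):                # overriding type-bound procedure
        return 3.141592653589793 * self.r ** 2

class Square(Shape):               # TYPE, EXTENDS(shape) :: square
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):            # CLASS(shape) dummy argument: polymorphic
    return sum(s.area() for s in shapes)
```

In Fortran the dispatch is resolved at run time through the dynamic type of the CLASS variable, and SELECT TYPE provides the type-safe downcast that has no direct equivalent in the sketch above.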
Interoperability with the C programming language consists of allowing C procedures to be called from Fortran, Fortran procedures to be called from C, and for the sharing of global variables between C and Fortran. This can only happen where C and Fortran facilities are equivalent: an intrinsic module provides derived types and named constants for mapping Fortran and C types, and the BIND(C) syntax is added for declaring Fortran entities that are to be shared with C. Additionally, C style enumerations have been added. Support for IEEE arithmetic is provided by three intrinsic modules. Use of the IEEE_FEATURES module requests IEEE compliance for specific Fortran features, the IEEE_EXCEPTIONS module provides access to IEEE modes and exception handling, and the IEEE_ARITHMETIC module provides enquiry functions and utility functions for determining the extent of IEEE conformance and access to IEEE-conformant facilities. The input/output facilities have had three major new features: asynchronous input/output, stream input/output, and user-defined procedures for derived-type input/output (referred to as “defined input/output”). Additionally, the input/output specifiers have been regularised so that where they make sense: all specifiers that can be used on an OPEN statement can also be used on a READ or WRITE statement, and vice versa. Access to input/output error messages is provided by the new IOMSG= specifier, and processor-dependent constants for input/output (e.g. the unit number for the standard input file) are provided in a new intrinsic module. Finally, there are a large number of miscellaneous improvements in almost every aspect of the language. Some of the more significant of these are the IMPORT statement (provides host association into interface blocks), the VALUE and VOLATILE attributes, the ability to use all intrinsic functions in constant expressions, and extensions to the syntax of array and structure constructors.
OPCFW_CODE
Table of Contents
- Glossary of Terms
- Get Started
- Edit your first Event / Conference
- Provide Event / Conference Tickets / Registrations
- Paid Events / Registrations
- Creating a Schedule
- Creating Sessions
- Managing Sponsors

Glossary of Terms
- Event / Conference - An event is synonymous with conference and represents the entire occasion as a whole. For example, the entire week of DrupalCon would be a single event. A camp that spans a weekend is a single event.
- Schedule Item - Items on the schedule that are not part of any formal session (examples: lunch breaks, social gatherings, etc.).
- BoF Session - Birds of a Feather sessions (or BoFs) are informal gatherings of like-minded individuals who wish to discuss a certain topic without a pre-planned agenda.
- Session - Scheduled talks, trainings, workshops or presentations during the event.
- Session Track - Sessions grouped by topic area (e.g. Design, Development, DevOps).
- Room - Meeting rooms at the event.
- Time - A block of time used to schedule items.
- Timeslot - Container that connects a time and a room for the scheduling of sessions and schedule items.
- Schedule - Overview of the date, time and location of scheduled items, sessions or BoFs during an event.
- Sponsors - Individual event sponsors that provide financial or other forms of support for the event.
- Sponsorship Levels - Sponsorship levels indicate the available levels of support and the benefits of each (examples: Gold Level Sponsor, Bronze Level Sponsor).

First and foremost, this is documentation for COD in Drupal 7. If you're looking for help with a Drupal 6 instance of COD, refer to the Drupal 6 COD documentation. COD is a powerful distribution that can help you organize and manage many different types of events and/or conferences. COD might be used by a Drupal user group to organize not only all their Drupal camps but also their meetups. COD could be used to manage a single conference/event.
COD could even be used for non-conference events such as concerts and meetups. How you plan to use COD will drive how you install and configure it. We will try to give examples of configuration for each type of event. An event is synonymous with conference and represents the entire occasion as a whole. For example, the entire week of DrupalCon would be a single event. A camp that spans a weekend is a single event. Tickets represent a person's registration to individual/unique opportunities at the event/conference. For example, a small Drupal camp might have four tickets/opportunities:
- General Conference Registration for Saturday
- Individual Sponsorship Registration for Saturday
- Beginners Paid Training on Friday
- Commerce Paid Training on Friday

- In the traditional manner, you can download the tar.gz from the project page and install COD as you would any other Drupal site. Alternatively, you may use the command line, running drush dl cod, but it is a large download and may take a long time, depending on your internet connection speed.
- After Drupal has installed the modules and you are at the Configure site step, pick a site name that reflects your use of COD. For example, a single-conference site might use the conference name here, and a music festival might use the festival's name. In our example, we are creating a site for a Drupal user group and will name the site after the group.
- The next step, Create your first event, lets you start configuring the information for your first event. For single-event sites, put here the name of the conference/concert/event that you used in your site name. For multi-event/conference sites, you can put the name of your first event here. You are also required to put in a description of the event. Both of these values can be changed later and simply create the first Event node on the site.
- After the install is completed, you will be redirected to the homepage, which will show your first event.
This is the node view of the event that was created, which also has tabs to manage related content and the event itself.

Edit your first event/conference

Now that you have COD installed and have supplied the basic data, we want to edit the first event. While viewing that event, click edit to configure more details of your event/conference. This section of the documentation also applies to creating new events: instead of editing an existing event, use the admin bar and navigate to Content -> Add content -> Event to create a new event/conference. The Title field is the name of your event/conference. This is already set, but you can change it here if you need to. The Summary field is a trimmed-down, brief description of your event that is usually shown in a teaser view. The Body field is your full event description/details. This is already set, but you can change it here if you need to. The Dates field lets you set the start and end date for your entire event/conference. This is not necessarily required right away, but once you want to start building out the schedule, you will need dates here. The Image field lets you set a primary image for the event/conference. The Program field allows you to give users an arbitrary view of your event/conference schedule. This is simply an open-ended text area that you can put any content in. This field will appear on the Program tab for your event. The Default Session View field under the Sessions Details fieldset lets you decide whether users see unprocessed or accepted sessions first under an event's Program -> Sessions view. The rest of the vertical tabs are normal node settings. Here you can create a menu link, edit the path to the event page, etc.
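The glossary's scheduling vocabulary is essentially a small data model: a Timeslot is the container that ties a Time to a Room, and sessions or schedule items are then placed into timeslots. A hypothetical sketch of that model — not COD's actual Drupal schema — makes the relationships concrete:

```python
from dataclasses import dataclass, field

# Hypothetical model of the glossary terms above; not COD's real schema.
# Frozen dataclasses compare and hash by value, so two Timeslots with the
# same time and room are the same slot.

@dataclass(frozen=True)
class Room:
    name: str

@dataclass(frozen=True)
class Time:
    start: str  # e.g. "09:00"
    end: str    # e.g. "10:00"

@dataclass(frozen=True)
class Timeslot:
    time: Time
    room: Room

@dataclass
class Event:
    title: str
    schedule: dict = field(default_factory=dict)  # Timeslot -> session title

    def assign(self, slot, session_title):
        """Place a session into a timeslot; a slot can hold only one item."""
        if slot in self.schedule:
            raise ValueError("timeslot already taken")
        self.schedule[slot] = session_title
```

The value-equality of frozen dataclasses is what enforces the double-booking check: any attempt to assign a second session to the same time-and-room combination raises an error.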
OPCFW_CODE
Odesa · $3200 · 4 years of experience · Upper Intermediate
Main stack: Angular, TypeScript, RxJS, SCSS. Familiar with: REST, OOP, SQL. Testing: Karma, Mocha, Jest, Cypress. Used on everything from small commercial projects to pet projects: React, Redux, Node.js, Docker, MongoDB, Vue.js, Vuex. Used to work with: Chrome extensions, PHP, PostgreSQL, different template engines (Twig, EJS, Jade), social API integrations, d3.js, complex SVG animations. Tools: WebStorm on macOS. I have worked on different kinds of projects - from SPAs, Chrome extensions and back ends (PHP, Node.js) to IoT. I have supported sites and care about optimization and rendering. I appreciate good UX and user needs. I want to strengthen my knowledge of Vue/React and/or Node.js. I would not mind switching from SPA to IoT, Web VR or Three.js. I don't want to support legacy code.

Odesa · $900 · 8 years of experience · Intermediate
Independent creation of websites for internal use in the company (test management of ad units, content management system). I have worked on one of the largest e-commerce sites in Ukraine, a project based on Vue + Nuxt.js. My duties were to implement new features, update the design, support old features and optimize performance. I have experience in an outsourcing company with a React/Redux and Next.js stack. I have worked with such technologies as: 1. Vue (Nuxt); 2. CSS preprocessors (SCSS, Stylus); 3. JS libraries (date-fns, Ramda); 4. React state managers etc. (Redux-saga, Redux-thunk, MobX, MobX State Tree). I expect a team of professionals with whom to exchange experience and gain new knowledge and challenges, tasks that let me grow in the frontend, and I am also interested in Node.js development. I am interested in projects with one or more of the following technologies: TypeScript, TDD.

Odesa · $3000 · 3 years of experience · Intermediate
HighLoad. I worked in the "programmatic" sphere and used the OpenRTB protocol. DMP - a system for collecting and processing information about users and using this information in targeted advertising.
My responsibilities: development and implementation of new features. Node.js, Docker, Aerospike, MySQL, MongoDB, Git, Jira, Java, Android Studio. Node.js, Aerospike, Linux, Git, Docker, MongoDB, MySQL, TypeScript

Kyiv, Odesa, Dnipro · $2500 · 3 years of experience · Intermediate
I'm a frontend developer with more than 3 years of professional experience working with the following technologies: React.js, TypeScript, Redux, MobX, Webpack, CSS-in-JS (styled-components), various CSS preprocessors (Stylus, Sass, Less), and server-side rendering using the Next.js framework. I'm looking for a React.js (with TypeScript in the technology stack) position with an opportunity to work remotely. Open to relocation to one of the cities listed in the profile. Currently based in Kyiv. Not interested in the gambling/online gaming industry.

Remote or Odesa · $4800 · More than 10 years of experience · Upper Intermediate
I've participated in quite a large number of projects, and I'm proud to say that I have been involved in building a strong development culture on most of them. As a team leader I always seek the golden mean between technical excellence and business goals - and the success of my teams and my clients is the proof. My latest achievement as a developer was a web tool for designers who use the Figma editor. My initial goal was to help with the UI, which was built using React.js, but I eventually ended up improving the architecture throughout the app, including API endpoints based on express.js and Java workers. All connections between them are handled by Hazelcast, and Datastore is used as the main store. I successfully finished all my tasks and the product was released on time. I am looking for a good level of communication within the company, challenges and professional growth.

Odesa · $350 · Less than a year of experience · Intermediate
I have commercial experience (markup from scratch, working with someone else's markup, making edits, working with the OpenCart platform).
Start a career in a company as an HTML coder and also develop in parallel as a front-end developer. Get the necessary experience and achieve goals that will help me spend my time with maximum benefit for myself and the company. Ready to work more in depth in the field of IT and develop in this direction.

Odesa · $650 · 1 year of experience · Intermediate

Remote work, Ukraine · $2500 · 5 years of experience · Pre-Intermediate I have more than five years of commercial development experience using the stack described above. I constantly try to grow and refine my knowledge. I love complex projects and teamwork. The last project I worked on was a very flexible, scalable system for managing and automating scripts, so-called bots. It was used to launch and manage trading bots. Ideally, I want to find a startup on a long-term basis so I can realize my full potential. I want to create a product that will be in demand among people. I don't like working on projects related to advertising and accounting.

Remote work, Ukraine · $4000 · 9 years of experience · Intermediate

Odesa · $3000 · 2 years of experience · Upper Intermediate Looking for high quality standards in the company, the ability to work as a full-stack developer, and the possibility to grow into a tech/team lead in the future
About this work package Visualisation interface for intraoperative guidance, showing a direct video feed from the fetoscope alongside the extended field-of-view mosaic and pre-operative MRI images. This work package focuses on providing enhanced real-time feedback during fetal therapy by extending the capabilities of fetoscopic imaging and combining direct vision with preoperatively acquired information on maternal and other relevant structures. To achieve this, good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicking. Subsequently, a number of computer vision techniques can be used to expand the available field of view by stitching together adjacent image frames into a larger mosaic. This work package explores different solutions for reliable fetoscope calibration and real-time mosaicking. Work package tasks Imaging systems and probe calibration Fetoscopy is a minimally invasive procedure that allows observation and intervention within the amniotic sac during pregnancy. The fetoscope is inserted through the uterus and is immersed in amniotic fluid. Fluid has a strong influence on the image formation process due to refraction at the interface of the fetoscopic lens, which is determined by the optical properties of the amniotic medium. Accurate calibration is critical to vision-based methods for providing image-guided surgery and real-time information from the surgical site. It consists of recording images of a calibration target of known geometric pattern in order to estimate the optical properties of a camera. We have explored two ways to achieve effective pre-calibration of fetoscopes.
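The radial distortions mentioned above are commonly modelled as a polynomial in the squared image radius and removed by inverting that model per pixel. The sketch below is illustrative only - it shows the standard Brown polynomial model in Python, not GIFT-Surg's actual calibration code; the coefficients k1 and k2 are placeholders that a real calibration would estimate from images of a known target:

```python
def distort(x, y, k1, k2):
    """Apply polynomial radial distortion to normalized image
    coordinates: (x, y) -> (x, y) * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort(xd, yd, k1, k2, iterations=20):
    """Invert the model by fixed-point iteration: repeatedly divide
    the distorted point by the radial factor evaluated at the
    current estimate.  Converges quickly for moderate distortion."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y
```

In fluid, refraction effectively changes these parameters, which is why a dry calibration alone is insufficient and the immersed-calibration approaches described below are needed.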
The first uses a computer vision method to calculate fluid-immersed camera parameters that can compensate for the optical properties of amniotic fluid as well as radial distortion effects, based on a dry calibration. The second is a calibration target for use with fluid-immersed endoscopes that allows for sterility-preserving optical distortion calibration of endoscopes within a few minutes. The target can be used in combination with endocal, a cross-platform, lightweight, compact GUI application for optical distortion calibration and display of the live distortion-corrected endoscopic video stream in real time. Fetoscopic surgical vision This task focuses on extracting higher-level information from the surgical site using the fetoscopic video. By applying methods for detection, tracking and structure reconstruction, in this task we build mosaics from fetoscopic videos in order to expand the field of view while coping with deformations, low image quality, sudden jerky movements of the scope and devices, and physiological motion of the anatomy. Automatic detection of placental wall vessels or other visible structures and instruments, together with predictive tracking algorithms, is used to enhance the visualisation of important structures during the procedure. Methods from this task are also used to provide control signals for automated control strategies for the robotic surgical tools and scope. - The paper titled ‘Deep Learning-based Fetoscopic Mosaicking for Field-of-View Expansion’, published in the MICCAI2019 IJCARS Special Issue, received the best paper award at the MICCAI2020 Conference on 7 October 2020. This paper reports a novel approach to fetoscopic video mosaicking developed by the GIFT-Surg team. Watch a video presentation of the research and read the news story on the UCL website. On 4 August 2020, Dr Sophia Bano was an invited speaker at the Artificial Intelligence in Surgery – Wellcome / EPSRC Centre for Interventional and Surgical Sciences UCL mini-symposium.
Her talk was entitled Mosaicking using Deep Learning in Fetoscopic Surgery. Read about the event. The fetoscopy video dataset was released along with the paper titled ‘Deep Placental Vessel Segmentation for Fetoscopic Mosaicking’ published in MICCAI2020. This is the first publicly available dataset of in vivo fetoscopic videos with placental vessel annotations and was acquired by leveraging the collaboration between GIFT-Surg clinical investigators at partner hospitals. Watch a video presentation of the research and download the dataset from the UCL website. Deep learning-based fetoscopic mosaicking for field-of-view expansion. Bano, S., Vasconcelos, F., Tella-Amo, M., Dwyer, G., Gruijthuijsen, C., Vander Poorten, E., Vercauteren, T., Ourselin, S., Deprest, J. and Stoyanov, D. (2020). International Journal of Computer Assisted Radiology and Surgery, pp. 1-10. Video presentation. FetNet: a recurrent convolutional network for occlusion identification in fetoscopic videos. Bano, S., Vasconcelos, F., Vander Poorten, E., Vercauteren, T., Ourselin, S., Deprest, J. and Stoyanov, D. (2020). International Journal of Computer Assisted Radiology and Surgery. Video presentation. Refractive Two-View Reconstruction for Underwater 3D Vision. Chadebecq, F., Vasconcelos, F., Lacher, R., Maneas, E., Desjardins, A., Ourselin, S., Vercauteren, T. and Stoyanov, D. (2019). International Journal of Computer Vision, pp. 1-17. doi: 10.1007/s11263-019-01218-9 Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy. Tella-Amo, M., Peter, L., Shakir, D. I., Deprest, J., Stoyanov, D., Iglesias, J. E., Vercauteren, T., Ourselin, S. (2018). Journal of Medical Imaging, 5(2) 021217. doi: 10.1117/1.JMI.5.2.021217 Retrieval and registration of long-range overlapping frames for scalable mosaicking of in vivo fetoscopy. Peter, L., Tella-Amo, M., Shakir, D.I. et al. (2018).
International Journal of Computer Assisted Radiology and Surgery, 13, 713–720. doi: 10.1007/s11548-018-1728-4
using System;

namespace WarriorsSnuggery.Graphics
{
    public static class ColorManager
    {
        public const float DefaultLineWidth = 2.0f;

        public static float LineWidth
        {
            get => lineWidth;
            set
            {
                lineWidth = value;
                MasterRenderer.SetLineWidth(lineWidth);
            }
        }
        static float lineWidth;

        // Cached primitive meshes, created once in Initialize() and reused for all draw calls.
        static BatchObject line;
        static BatchObject circle;
        static BatchObject fullscreen_rect;
        static BatchObject filled_rect;

        public static void ResetLineWidth()
        {
            LineWidth = DefaultLineWidth;
        }

        public static void Initialize()
        {
            line = new BatchObject(Mesh.Line(1f));
            circle = new BatchObject(Mesh.Circle(1f, 32));
            fullscreen_rect = new BatchObject(Mesh.Plane(WindowInfo.UnitHeight));
            filled_rect = new BatchObject(Mesh.Plane(1f));
        }

        public static void DrawLineQuad(CPos pos, CPos radius, Color color)
        {
            DrawLine(pos - radius, pos + new CPos(-radius.X, radius.Y, 0), color);
            DrawLine(pos - radius, pos + new CPos(radius.X, -radius.Y, 0), color);
            DrawLine(pos - new CPos(-radius.X, radius.Y, 0), pos + radius, color);
            DrawLine(pos - new CPos(radius.X, -radius.Y, 0), pos + radius, color);
        }

        public static void DrawLine(CPos start, CPos end, Color color)
        {
            // Scale the unit line to the distance between the points, then rotate it into place.
            var s = (start - end).FlatDist / 1024f;
            line.SetScale(new Vector(s, s, s));
            line.SetRotation(new VAngle(0, 0, -(start - end).FlatAngle) - new VAngle(0, 0, 90));
            line.SetPosition(start);
            line.SetColor(color);
            line.Render();
        }

        public static void DrawCircle(CPos center, float radius, Color color)
        {
            circle.SetScale(new Vector(radius * 2, radius * 2, radius * 2));
            circle.SetPosition(center);
            circle.SetColor(color);
            circle.Render();
        }

        public static void DrawFullscreenRect(Color color)
        {
            fullscreen_rect.SetScale(WindowInfo.Ratio);
            fullscreen_rect.SetColor(color);
            fullscreen_rect.Render();
        }

        public static void DrawFilledLine(CPos start, CPos end, int width, Color color)
        {
            var diff = start - end;
            filled_rect.SetScale(new CPos(width, (int)diff.FlatDist, 0).ToVector());
            filled_rect.SetPosition((start + end) / new CPos(2, 2, 2));
            filled_rect.SetRotation(new VAngle(0, 0, -diff.FlatAngle) - new VAngle(0, 0, 90));
            filled_rect.SetColor(color);
            filled_rect.Render();
        }

        public static void DrawGlowingFilledLineRect(CPos pointA, CPos pointB, int width, Color color, int radius, int count)
        {
            // Layer progressively thinner rectangles with reduced alpha to fake a glow.
            var alpha = color.A / count;
            for (int i = 0; i < count; i++)
            {
                var currentRadius = radius / (i * i + 1);
                DrawFilledLineRect(pointA, pointB, width + currentRadius, new Color(color.R, color.G, color.B, alpha));
            }
        }

        public static void DrawFilledLineRect(CPos pointA, CPos pointB, int width, Color color)
        {
            var bottomLeft = new CPos(Math.Min(pointA.X, pointB.X), Math.Min(pointA.Y, pointB.Y), 0);
            var topRight = new CPos(Math.Max(pointA.X, pointB.X), Math.Max(pointA.Y, pointB.Y), 0);
            var bottomRight = new CPos(topRight.X, bottomLeft.Y, 0);
            var topLeft = new CPos(bottomLeft.X, topRight.Y, 0);

            // Draw the four border strips of the rectangle outline.
            DrawRect(topLeft + new CPos(width, -width, 0), topRight + new CPos(-width, width, 0), color);
            DrawRect(topLeft + new CPos(-width, width, 0), bottomLeft + new CPos(width, -width, 0), color);
            DrawRect(bottomLeft + new CPos(width, -width, 0), bottomRight + new CPos(-width, width, 0), color);
            DrawRect(bottomRight + new CPos(width, -width, 0), topRight + new CPos(-width, width, 0), color);
        }

        public static void DrawFilledLineQuad(CPos center, int radius, int width, Color color)
        {
            var topLeft = center - new CPos(radius, radius, 0);
            var bottomRight = center + new CPos(radius, radius, 0);
            DrawFilledLineRect(topLeft, bottomRight, width, color);
        }

        public static void DrawRect(CPos pointA, CPos pointB, Color color)
        {
            filled_rect.SetScale(new CPos(Math.Abs(pointA.X - pointB.X), Math.Abs(pointA.Y - pointB.Y), 0).ToVector());
            filled_rect.SetPosition(new CPos((pointA.X + pointB.X) / 2, (pointA.Y + pointB.Y) / 2, 0));
            filled_rect.SetRotation(VAngle.Zero);
            filled_rect.SetColor(color);
            filled_rect.Render();
        }

        public static void DrawDot(CPos position, Color color)
        {
            DrawQuad(position, 128, color);
        }

        public static void DrawQuad(CPos center, int radius, Color color)
        {
            filled_rect.SetScale(radius / 1024f);
            filled_rect.SetPosition(center);
            filled_rect.SetRotation(VAngle.Zero);
            filled_rect.SetColor(color);
            filled_rect.Render();
        }
    }
}
On Wednesday, October 5, 2016, we began releasing a new version of Apigee Edge for Public Cloud. When your organization is updated, you'll see the new version number in the lower right of the Edge UI.

New features and updates

Following are the new features and updates in this release.

Developer app management goodness in the UI

Developer app management in the Edge UI has gotten more powerful with a number of enhancements:
- You can revoke and approve apps (in edit mode) in a new "App Status" field. In view mode, the field also displays the current app status. If an app is revoked, none of its API keys are valid for API calls. The keys themselves aren't revoked and are available again for use if the app is re-approved. The "Approved" label for API keys is displayed in strikethrough text while an app is in a revoked state.
- API key expiry dates are now shown on the Developer App Details page, and keys are organized by expiry dates in a "Credentials" section. For example, a key with no expiration is shown in one group with its associated API products, and a key that expires in 90 days is shown in another group with its associated products. You can't change the expiration of an existing credential.
- With a new Add Credential button in Developer App edit mode, you can generate API keys with specific expiration times or dates (or no expiration). As (or after) you create a credential, you can add API products to it. This functionality replaces the "Regenerate Key" button on the Developer App Details page. That button has been removed.

These enhancements add features in the UI that were already available in the management API. (EDGEUI-104)

Activate/Deactivate app developer in the UI

You can change the status of an app developer between active and inactive in the Edge UI (Developer Details page, edit mode, Activate/Deactivate button).
When a developer is inactive, none of her developer app API keys or OAuth tokens generated with those keys are valid in calls to API proxies. (EDGEUI-304)

OpenAPI Spec generation for SOAP proxies

When you create a "REST to SOAP to REST" proxy based on a WSDL, Edge automatically generates a hosted OpenAPI Spec based on the proxy resources. You can access the spec at http(s)://[edge_domain]/[proxy_base_path]/openapi.json. However, the conversion is not always accurate, since not all the rules of an XML schema can be represented in an OpenAPI Spec.

Edge-hosted WSDL for passthrough SOAP proxies

When you create a "Pass-Through SOAP" proxy based on a WSDL, Edge hosts the WSDL and creates a flow in the proxy to let you access it. You can access the hosted WSDL at http(s)://[edge_domain]/[proxy_base_path]?wsdl, which is the new service endpoint URL for clients calling the SOAP service through the proxy. (EDGEUI-718)

Analytics "No Data" message includes delay interval

When the "No Data for Time Range" message appears in analytics reports, the message notes the delay interval between when API calls are made and when the data appears in analytics reports. (EDGEUI-682)

The following bugs are fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users.

Reports page export button: The Export button has been removed from the Custom Reports home page. Report export is available on each custom report's page.
Tuning the DRY/DAMP trade-off in tests

Unlike the semantics of the words dry and damp, the principles DRY and DAMP are not antonyms and can be complementary. 🇧🇷 This article is also available in Portuguese.

When talking to other developers — no matter if you're in a bar or in an interview — it's common to hear b̶u̶z̶z̶words like good engineering practices, clean and reusable code, software craftsmanship, etc. We all want to be good developers and write code that makes us proud. Those good practices can include whether we're avoiding unnecessary code duplication and thus respecting the Don't Repeat Yourself (DRY) principle. Or whether we're writing idiomatic, meaningful, and readable code and thus respecting the Descriptive and Meaningful Phrases (DAMP) principle.

There's a popular opinion that the application (implementation) code must prioritize DRY over DAMP while the test code must prioritize DAMP over DRY. This encourages us to treat implementation and test code differently, and we shouldn't: from a maintenance point of view, they're no different. Both principles are equally important and can coexist in both cases. Knowing that we're usually less strict with the test code, and willing to change this, I will focus on how to achieve the DRY and DAMP balance in our test suite.

There are some reasons why I write tests, and one of them is to provide living documentation. This emphasizes the importance of our tests being DAMP: we fancy docs that are well structured, easy to read and understand. Speaking of test structure: respecting its phases by making them visually separated helps a lot. Of course there are exceptions but, in most cases, you should be able to glance at a test and easily identify its phases.

Regarding DRY: we must be careful not to obsess over it and think that every piece of duplicated code is a design problem. It's not. "(…) copying and pasting may well be the right thing to do if the two chunks of code evolve in different directions.
If they don't — that is, if we keep making the same changes to different parts of the program — that's when we get a problem." — Software Design X-Rays

Unlike the semantics of the words dry and damp, which are a̶l̶m̶o̶s̶t̶ antonyms, the uppercased DRY and DAMP can be complementary: by avoiding unnecessary duplication we can make the test more descriptive as well. Let's see how to do it.

Show me the code

This is a really d̶u̶m̶b̶ simple example but the concept can be expanded and applied to real cases. The class Person is our System Under Test (SUT) and we will iteratively test it.

❌ DRY ❌ DAMP

In this example, we can clearly see the test phases, good! 👏 On the other hand, we're repeatedly instantiating Person with all its required attributes in each test. In real cases, our SUT will probably have many more attributes, and that's when the problem arises. Besides the duplication per se, the relevant data is not explicit: to test fullName(), only firstName and lastName are relevant; to test isUnderaged(), only age is relevant; they should be highlighted.

❌ DRY ✅ DAMP

To minimize code duplication and make it more DRY, we decided to extract Person creation to the buildPerson() factory method. But the relevant data is still not explicit, and we introduced another problem: fragility. Anyone can change the buildPerson() implementation and break our tests. This is an unnecessary risk, especially considering that factories are normally reused (that's why we created one!). We realized that buildPerson() can't be reused to test isUnderaged(), so we thought about using another two methods: one to build kids and another to build adults. In more complex cases, I highly recommend the use of factory methods with meaningful names that refer to the ubiquitous language. But be careful not to have an explosion of factory methods: one for each variation of the SUT. In this example, the three methods are almost identical. It's unnecessary to have them and it's not DRY enough.
I consider it a little bit "DAMPer" than the previous example, but there's still room for improvement.

✅ DRY ❌ DAMP

Another approach is to extract the Person creation to the test class body or to use before helpers. Now, most of the problems from the previous example persist, and the tests are less DAMP because the setup phase is not explicit anymore. It may not seem like a big deal in this simple case but, in real life, we might force the reader into an expedition — jumping between methods — to find out what is relevant to the test.

✅ DRY ✅ DAMP

Finally, the test that found the balance! ⚖️ Its phases are clearly separated, the data that matters to each test is explicit, and still the implementation details of buildPerson() are hidden. It's concise, readable code and the reader doesn't have to drill down into it to understand it: "what you see is what you get".

⚠️ Special case

There's one special case with this approach, and it happens when the SUT receives injected dependencies. In those cases, achieving the balance is not so trivial. If we extract the Service creation to a factory method, we would hide the fact that the class has too many dependencies. And this might be a sign that we have a design problem: it could be a missing abstraction, for example. Leaving the SUT creation with all its dependencies explicit in every test, which may not seem DRY, is more appropriate in cases like this. That way we leave points of attention in the code, and they will be screaming for a redesign or refactor.

Balancing the DRY/DAMP principles in the examples above may seem a little too much, but the impact of achieving it in real cases is remarkable: it gives us more robust tests, more flexibility to support refactorings, and more confidence in our test suite. Win-win! As a bonus, I highly encourage the use of fixtures that generate random values to build more complex objects: that way we're forced to pass only the relevant data to the test. But this might be a subject for another article…
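To make the balanced ✅ DRY ✅ DAMP idea concrete, here is a sketch in Python. Since the article's original listings aren't reproduced here, Person's attributes, the default values, and the snake_case names are my assumptions - a single factory with defaults, where each test passes only the data relevant to it:

```python
class Person:
    def __init__(self, first_name, last_name, age):
        self.first_name = first_name
        self.last_name = last_name
        self.age = age

    def full_name(self):
        return f"{self.first_name} {self.last_name}"

    def is_underaged(self):
        return self.age < 18

def build_person(**overrides):
    """Factory with sensible defaults: tests override only the
    attributes that matter to them, keeping the setup phase both
    short (DRY) and explicit (DAMP)."""
    attributes = {"first_name": "Ada", "last_name": "Lovelace", "age": 30}
    attributes.update(overrides)
    return Person(**attributes)

def test_full_name():
    # Arrange: only the name fields are relevant, so only they appear.
    person = build_person(first_name="Grace", last_name="Hopper")
    # Act / Assert
    assert person.full_name() == "Grace Hopper"

def test_is_underaged():
    # Arrange: only age is relevant.
    person = build_person(age=17)
    # Act / Assert
    assert person.is_underaged()
```

The keyword-overrides pattern is one way to get the "relevant data is explicit, implementation details are hidden" property; named factory methods or fixture libraries can achieve the same balance.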
Detecting IP Conflict

At my prior employer, whenever we had two systems with the same IP address - particularly if they were Windows systems - we'd get a popup window on our systems stating: "Windows has detected an IP conflict". At my new employer, we have had IP conflicts and that message doesn't come up. What is different that is causing Windows to behave differently? Is there some feature on our network switches which needs to be enabled, or maybe something at the firewall/gateway level? I should state that in both networks IP addresses are statically assigned - no DHCP is used.

Switches don't know or care about IP addresses. How do you know there are undetected IP address conflicts? What's leading you to believe that? What are you seeing or experiencing?

Because I unintentionally set up two VMs with the same IP, which caused me about a day of grief trying to track it down :) The IP conflict still existed... but there were no warnings on either system. Here's the dialog I'm referring to: https://i.sstatic.net/KPHGK.png Why am I not getting that?

@joeqwerty Your comment is correct if by "switch" you mean a device that operates only as a layer 2 switching device. But he may well have a box that acts as a switch that he calls "the switch" that cares very much about IP addresses.

@DavidSchwartz: Yes. I was referring to a Layer 2 switch in the strict sense.

If you're managing the addresses via static assignment, conflicts may occur. From the switch's point of view it's hard to tell when there's a conflict. If you have clusters implemented that have a shared address which can be failed over between nodes, at some point in time an address conflict is expected (until announcements are received on the whole network). DHCP is the solution for such problems (even if under specific circumstances conflicts may also arise here).
Depending on the network size, I strongly recommend thinking about using DHCP (which also gives you a bunch of additional options that you can handle easily then).

Edit: It seems there's confusion about the benefits of using DHCP. Using DHCP in your network (or even multiple networks, when set up on firewalls via relays and so on), you have a central point for key network configuration that provides addresses, gateways, and additional runtime configuration like DNS, NTP and many other services. So, DHCP can help you organize the usage and assignment of addresses in your networks quite well. It by no means replaces documentation - which, in the current state [as there are conflicts], seems to be missing.

This is in a datacenter environment where addresses cannot under any circumstances ever change. Otherwise things like load balancing break. I could set up reservations for each of my servers, but if I have to do that for every server - I might as well just use static addresses anyway. As I've said, I've worked in environments where Windows did pop up an alert when there was an IP conflict - I just don't understand what was different about that environment vs this one. If the switches don't help with that - maybe it's something being done by the gateway router? I'm not sure. I found a Linux tool called arpwatch which watches ARP traffic for multiple MACs sharing the same IP - it will also report new workstations it finds on the network. So that would be an alternative way of dealing with this - BUT it only listens on one network. We have five different network segments, so I'd need 5 different Linux boxes set up with arpwatch, which seems like a poor solution to the problem. Perhaps vendor-specific drivers were doing that? Although that seems unlikely. This Microsoft support article seems to suggest that all Microsoft OSes should be able to do this: https://support.microsoft.com/en-us/kb/120599 but clearly it's not? I'm not sure why.
@Brad, you can still use DHCP in such an environment. Use reservations so that each server gets the same IP address every time. You then have a central location from which to manage your IP addresses, and it can help to eliminate IP address conflicts.

Seems like if you have to use reservations for the majority of the systems, managing DHCP is just a lot of overhead. Regardless, this doesn't answer the original question, which is: if there's an IP conflict, why isn't Windows displaying a warning :-) Also, given we have four different network segments, aren't we going to need four different DHCP servers? I'm assuming that DHCP won't operate for multiple networks?

Answering my own question above: it seems it is possible by enabling DHCP-Relay, DHCP-Helper, Bootp-Relay, UDP-helper, etc. on our switches. Regardless, I guess I'm confused as to why the behavior is different. Here's the dialog I'm referring to: https://i.sstatic.net/KPHGK.png Why am I not getting that?

You have to manage your addresses anyway; you need documentation and so on. Using DHCP can still provide DNS, NTP, gateways, and lots of other settings to clients - depending on your network size this is an important factor.

Check RFC5227 and the KB below. They explain the process in more detail. As you can see, it uses ARP packets, which can be blocked at the switch level / local firewall level.

Subsequent passive detection that another host on the network is inadvertently using the same address. Even if all hosts observe precautions to avoid using an address that is already in use, conflicts can still occur if two hosts are out of communication at the time of initial interface configuration. This could occur with wireless network interfaces if the hosts are temporarily out of range, or with Ethernet interfaces if the link between two Ethernet hubs is not functioning at the time of address configuration.
A well-designed host will handle not only conflicts detected during interface configuration, but also conflicts detected later, for the entire duration of the time that the host is using the address (Section 2.4).

And most of all, check this KB: https://support.microsoft.com/en-us/kb/120599

At system startup, when the IP protocol initializes, it sends an ARP request containing its own MAC and IP address so that other computers can update their ARP caches. If there is already a computer using the IP address, the "older" computer will respond with an ARP reply containing its MAC and IP address, indicating a conflict. Unfortunately, many other computers may have already updated their ARP caches with the new mapping. At that point, the "younger" computer that is initializing needs to do two things: repair the ARP cache on all affected computers, and cease using the duplicate address. Computers running Microsoft TCP/IP will send out a new ARP broadcast to re-map the ARP cache on all affected computers. This new ARP will contain the MAC address and IP address of the older owner of the IP address. After sending this ARP, the IP protocol on the younger machine will report the problem to the user and the stack will shut down. The stack should not be re-started until a unique address is obtained. Note that the computer may still function at this point if another protocol such as NetBEUI is loaded.
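The ARP-based mechanism described above can also be exploited passively, arpwatch-style: collect IP-to-MAC mappings and flag any IP claimed by more than one MAC. A minimal sketch in Python, assuming `arp -a`-style text as input (the exact line format varies by platform, so the regex here is an assumption you may need to adjust):

```python
import re
from collections import defaultdict

def find_ip_conflicts(arp_output):
    """Group MAC addresses by IP from `arp -a`-style text and report
    any IP claimed by more than one MAC -- the same passive signal
    arpwatch watches for on the wire.  The assumed line format is
    '<ip> <mac>' with MACs using ':' or '-' separators."""
    pattern = re.compile(
        r"(\d{1,3}(?:\.\d{1,3}){3})\s+"
        r"([0-9a-fA-F]{2}(?:[:-][0-9a-fA-F]{2}){5})")
    macs_by_ip = defaultdict(set)
    for line in arp_output.splitlines():
        match = pattern.search(line)
        if match:
            ip, mac = match.groups()
            # Normalize MAC formatting so 'AA-BB-...' and 'aa:bb:...' compare equal.
            macs_by_ip[ip].add(mac.lower().replace("-", ":"))
    return {ip: sorted(macs) for ip, macs in macs_by_ip.items() if len(macs) > 1}
```

Run periodically against each segment's ARP table (or a span-port capture), this gives a crude multi-segment conflict detector without dedicating a box per network, though it only sees conflicts after both hosts have generated traffic.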